There is growing awareness that errors in the model equations cannot be ignored in data assimilation methods such as four-dimensional variational assimilation (4D-Var). If allowed for, more information can be extracted from observations, longer time windows are possible, and the minimisation process is easier, at least in principle. Weak constraint 4D-Var estimates the model error and minimises a series of linear least-squares cost functions, which can be achieved using the conjugate gradient (CG) method; minimising each cost function is called an inner loop. CG needs preconditioning to improve its performance. In previous work, limited memory preconditioners (LMPs) have been constructed using approximations of the eigenvalues and eigenvectors of the Hessian in the previous inner loop. If the Hessian changes significantly in consecutive inner loops, the LMP may be of limited usefulness. To circumvent this, we propose using randomised methods to compute low-rank eigenvalue decompositions and using these approximations to cheaply construct LMPs with information from the current inner loop. Three randomised methods are compared. Numerical experiments in idealised systems show that the resulting LMPs perform better than the existing LMPs. Using these methods may allow more efficient and robust implementations of incremental weak constraint 4D-Var.
Keywords: data assimilation, weak constraint 4D-Var, limited memory preconditioners, randomised methods, sparse symmetric positive definite systems
*Correspondence: Ieva Daužickaitė, Department of Mathematics and Statistics, Pepper Lane, Whiteknights, Reading RG6 6AX, UK. Email<EMAIL_ADDRESS>
§ INTRODUCTION
In numerical weather prediction, data assimilation provides the initial conditions for the weather model and hence influences the accuracy of the forecast <cit.>. Data assimilation uses observations of a dynamical system to correct a previous estimate (background) of the system's state. The statistical knowledge of the errors in the observations and the background is incorporated in the process. A variational data assimilation method called weak constraint 4D-Var provides a way to also take into account the model error <cit.>, which can lead to a better analysis (e.g. <cit.>).
We explore the weak constraint 4D-Var cost function. In its incremental version, the state is updated by a minimiser of the linearised version of the cost function. The minimiser can be found by solving a large sparse linear system.
The process of solving each system is called an inner loop. Because the second derivative of the cost function, the Hessian, is symmetric positive definite, the systems may be solved with the conjugate gradient (CG) method <cit.>, whose convergence rate depends on the eigenvalue distribution of the Hessian. Limited memory preconditioners (LMPs) have been shown to improve the convergence of CG when minimising the strong constraint 4D-Var cost function <cit.>. Strong constraint 4D-Var differs from the weak constraint 4D-Var by making the assumption that the dynamical model has no error.
LMPs can be constructed using approximations to the eigenvalues and eigenvectors (eigenpairs) of the Hessian. The Lanczos and CG connection (Section 6.7 of <cit.>) can be exploited to obtain approximations to the eigenpairs of the Hessian in one inner loop, and these approximations may then be employed to construct the preconditioner for the next inner loop <cit.>. This approach does not describe how to precondition the first inner loop, and the number of CG iterations used on the $i$th inner loop limits the number of vectors available to construct the preconditioner on the $(i+1)$st inner loop. Furthermore, the success of preconditioning relies on the assumption that the Hessians do not change significantly from one inner loop to the next.
In this paper, we propose addressing these drawbacks by using the easy-to-implement subspace iteration method (see Chapter 5 of <cit.>) to obtain approximations of the largest eigenvalues and corresponding eigenvectors of the Hessian in the current inner loop. The subspace iteration method first approximates the range of the Hessian by multiplying it with a start matrix (for approaches to choosing it see, e.g., <cit.>), and the speed of convergence depends on the choice of this matrix (e.g. <cit.>). A variant of subspace iteration that uses a Gaussian random start matrix is called the Randomised Eigenvalue Decomposition (REVD). REVD has been popularised by probabilistic analysis <cit.>.
It has been shown that REVD, which is equivalent to one iteration of the subspace iteration method, can often generate a satisfactory approximation of the largest eigenpairs of a matrix that has rapidly decreasing eigenvalues. Because the Hessian is symmetric positive definite, a randomised Nyström method for computing a low rank eigenvalue decomposition can also be used. It is expected to give a higher quality approximation than REVD (e.g. <cit.>). We explore these two methods and another implementation of REVD, which is based on the ritzit implementation of the subspace method <cit.>. The methods differ in the number of matrix-matrix products with the Hessian. Even though more computations are required to generate the preconditioner in the current inner loop compared to using information from the previous inner loop, the randomised methods are block methods and hence easily parallelisable.
In Section 2, we discuss the weak constraint 4D-Var method and, in Section 3, we consider LMPs and ways to obtain spectral approximations. The three randomised methods are examined in Section 4. Numerical experiments with linear advection and Lorenz 96 models are presented in Section 5, followed by a concluding discussion in Section 6.
§ WEAK CONSTRAINT 4D-VAR
We are interested in estimating the state evolution of a dynamical system $\bx_0, \bx_1,\dots,\bx_N$, with $\bx_i \in \mathbb{R}^n$, at times $t_0, t_1,\dots,t_N$. Prior information about the state at $t_0$ is called the background and is denoted by $\bx^b \in \mathbb{R}^n$. It is assumed that $\bx^b$ has Gaussian errors with zero mean and covariance matrix $\bB \in \mathbb{R}^{n \times n}$. Observations of the system at time $t_i$ are denoted by $\by_i \in \mathbb{R}^{q_i}$ and their errors are assumed to be Gaussian with zero mean and covariance matrix $\bR_i \in \mathbb{R}^{q_i \times q_i}$ ($q_i \ll n$). An observation operator $\mathcal{H}_i$ maps the model variables into the observed quantities at the correct location, i.e. $\by_i = \mathcal{H}_i (\bx_i) + \boldsymbol{\zeta}_i$, where $\boldsymbol{\zeta}_i$ is the observation error. We assume that the observation errors are uncorrelated in time.
The dynamics of the system are described using a nonlinear model $\mathcal{M}_i$ such that
\begin{equation}\label{eq:model}
\bx_{i+1} = \mathcal{M}_i (\bx_i) + \bleta_{i+1},
\end{equation}
where $\bleta_{i+1}$ is the model error at time $t_{i+1}$. The model errors are assumed to be uncorrelated in time and to be Gaussian with zero mean and covariance matrix $\bQ_{i+1} \in \mathbb{R}^{n \times n}$.
The forcing formulation of the nonlinear weak constraint 4D-Var cost function, in which we solve for the initial state and the model error realisations, is
\begin{align}
J(\bx_0, \bleta_1, \dots, \bleta_N) & = \frac{1}{2} (\bx_0 - \bx^b)^T \bB^{-1} (\bx_0 - \bx^b) + \frac{1}{2} \sum_{i=0}^{N} (\by_i - \mathcal{H}_i (\bx_i))^T \bR_i^{-1} (\by_i - \mathcal{H}_i (\bx_i)) \label{eq:4D-var_error} \\ \nonumber
& + \frac{1}{2} \sum_{i=1}^{N} \bleta_i^T \bQ_i^{-1} \bleta_i,
\end{align}
where $\bx_i$ satisfies the model equation (<ref>) <cit.>. The analysis (approximation of the state evolution over the time window) $\bx^a_0, \bx^a_1,\dots,\bx^a_N$ can be obtained from the minimiser of (<ref>) using constraints (<ref>).
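As a concrete illustration, the cost function above can be evaluated for a toy scalar system; the linear model $\mathcal{M}(x) = 0.5x$, the identity observation operator and the variances below are illustrative choices, not values from this paper.

```python
# Toy evaluation of the weak constraint 4D-Var cost function for a scalar
# system (n = 1) with an illustrative linear model M(x) = 0.5 x and direct
# observations H(x) = x. All numbers are for demonstration only.

def cost(x0, etas, xb, B, ys, R, Q, model=lambda x: 0.5 * x):
    """J(x0, eta_1, ..., eta_N) with scalar variances B, R, Q."""
    N = len(etas)
    # Propagate the state with the model equation x_{i+1} = M(x_i) + eta_{i+1}
    xs = [x0]
    for i in range(N):
        xs.append(model(xs[i]) + etas[i])
    Jb = 0.5 * (x0 - xb) ** 2 / B                                    # background term
    Jo = 0.5 * sum((ys[i] - xs[i]) ** 2 / R for i in range(N + 1))   # observation term
    Jq = 0.5 * sum(e ** 2 / Q for e in etas)                         # model error term
    return Jb + Jo + Jq

# Zero model error and a trajectory that fits the background and the
# observations exactly gives zero cost.
print(cost(1.0, [0.0, 0.0], xb=1.0, B=1.0, ys=[1.0, 0.5, 0.25], R=0.5, Q=0.2))  # -> 0.0
```

Any departure of the initial state from the background, of the trajectory from the observations, or any nonzero model error increases $J$ through the corresponding term.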
§.§ Incremental 4D-Var
One way to compute the analysis is to approximate the minimum of (<ref>) with an inexact Gauss-Newton algorithm <cit.>, where a sequence of quadratic cost functions is minimised. In this approach, we update $\bx_0$ and the model error
\begin{equation}\label{eq:error_update}
\bold{p}^{(j+1)} = \bold{p}^{(j)} +\delta \bold{p}^{(j)},
\end{equation}
where $\bold{p}^{(j)} = (\bx_0^{(j)T},\bleta_1^{(j)T},\dots, \bleta_N^{(j)T})^T$ is the $j$th approximation and $\delta \bold{p}^{(j)} = (\delta \bx_0^{(j)T}, \delta \bleta_1^{(j)T},\dots, \delta \bleta_N^{(j)T})^T$. The $j$th approximation of the state $\bx^{(j)}= (\bx_0^{(j)\ T},\dots, \bx_N^{(j)\ T})^T$ is calculated with (<ref>) using $\bold{p}^{(j)}$. The update $\delta \bold{p}^{(j)}$ is obtained by minimising the following cost function
\begin{equation}\label{eq:incr_wc4d-var_forcing}
J^{\delta} (\delta \bold{p}^{(j)}) = \frac{1}{2} || \delta \bold{p}^{(j)}- \bold{b}^{(j)} ||^2_{\bold{D}^{-1}} + \frac{1}{2} || \bold{H}^{(j)} (\bold{L}^{-1})^{(j)} \delta \bold{p}^{(j)} - \bold{d}^{(j)} ||^2_{\bold{R}^{-1}},
\end{equation}
where $||\bold{a}||^2_{\bold{A}^{-1}}=\bold{a}^T\bold{A}^{-1}\bold{a}$ and the covariance matrices are block diagonal, i.e. $\bold{D} = diag(\bB,\bQ_1,\dots,\bQ_N) \in \mathbb{R}^{n(N+1) \times n(N+1)}$ and $\bold{R} = diag(\bR_0, \dots, \bR_N) \in \mathbb{R}^{q \times q} $, where $q = \sum_{i=0}^{N} q_i$. We use the notation (following <cit.>) $\bold{H}^{(j)} = diag(\bH_0^{(j)}, \dots, \bH_N^{(j)}) \in \mathbb{R}^{q \times n(N+1) }$, where $\bH_i^{(j)}$ is the linearised observation operator, and
\begin{align}
(\bL^{-1})^{(j)} = & \left( \begin{array}{ccccc}
\bI & & & & \\
\bM_{0,0}^{(j)} & \bI & & & \\
\bM_{0,1}^{(j)} & \bM_{1,1}^{(j)} & \bI & & \\
\vdots & \vdots & \ddots & \ddots & \\
\bM_{0,N-1}^{(j)} & \bM_{1,N-1}^{(j)} & \cdots & \bM_{N-1,N-1}^{(j)} & \bI \\
\end{array} \right), \
\\
\bold{b}^{(j)} = & \left( \begin{array}{c}
\bx^b - \bx_0^{(j)}\\
- \bleta_1^{(j)} \\
\vdots \\
- \bleta_N^{(j)}
\end{array} \right), \
\\
\bold{d}^{(j)} = & \left( \begin{array}{c}
\by_0 - \mathcal{H}_0 (\bx_0^{(j)})\\
\by_1 - \mathcal{H}_1 (\bx_1^{(j)}) \\
\vdots \\
\by_N - \mathcal{H}_N (\bx_N^{(j)})
\end{array} \right),
\end{align}
where $\bM_{i,l}^{(j)} =\bM_l^{(j)} \dots \bM_i^{(j)}$ and $\bM_i^{(j)} $ is the linearised model, i.e. $\bM_{i,l}^{(j)}$ denotes the linearised model integration from time $t_i$ to $t_{l+1}$, $(\bL^{-1})^{(j)} \in \mathbb{R}^{n(N+1) \times n(N+1)}$, $\bx^{(j)}, \delta \bx^{(j)}, \bold{b}^{(j)} \in \mathbb{R}^{n(N+1)}$ and $\bold{d}^{(j)} \in \mathbb{R}^q$. The outer loop consists of updating (<ref>), calculating $\bx^{(j)}, \bold{b}^{(j)}, \bold{d}^{(j)}$, and linearising $\mathcal{H}_i$ and $\mathcal{M}_i$ for the next inner loop.
The minimum of the quadratic cost function (<ref>) can be found by solving a linear system
\begin{align}
\boldsymbol{\mathcal{A}}^{(j)} \delta \bold{p}^{(j)} & = \bold{D}^{-1} \bold{b}^{(j)} + (\bL^{-T})^{(j)} (\bold{H}^T)^{(j)} \bold{R}^{-1} \bold{d^{(j)}}, \label{eq:forcing_form} \\
\boldsymbol{\mathcal{A}}^{(j)} & =(\bold{D}^{-1} + (\bL^{-T})^{(j)} (\bold{H}^T)^{(j)} \bold{R}^{-1} (\bold{H})^{(j)} (\bL^{-1})^{(j)} ) \in \mathbb{R}^{n(N+1) \times n(N+1)}, \label{eq:forcing_form_matrix}
\end{align}
where $\boldsymbol{\mathcal{A}}^{(j)}$ is the Hessian of (<ref>), which is symmetric positive definite. These large sparse systems are usually solved with the conjugate gradient (CG) method, whose convergence properties depend on the spectrum of $\boldsymbol{\mathcal{A}}^{(j)}$ (see Section <ref> for a discussion). In general, clustered eigenvalues result in fast convergence. We consider a technique to cluster eigenvalues of $\boldsymbol{\mathcal{A}}^{(j)}$ in the following section. From now on we omit the superscript $(j)$.
§.§ Control Variable Transform
A control variable transform, which is also called first level preconditioning, maps the variables $\delta \bold{p}$ to $\delta \tilde{\bold{p} }$, whose errors are uncorrelated (see, e.g. Section 3.2 of <cit.>). This can be denoted as the transformation $\bold{D}^{1/2} \delta \tilde{\bold{p} } =\delta \bold{p}$, where $\bold{D} = \bold{D}^{1/2} \bold{D}^{1/2}$ and $\bold{D}^{1/2}$ is the symmetric square root. The update $\delta \tilde{\bold{p} }$ is then the solution of
\begin{equation}\label{eq:forcing_1stlvl_prec}
\boldsymbol{\mathcal{A}}^{pr} \delta \tilde{\bold{p} }= \bold{D}^{-1/2} \bold{b} + \bold{D}^{1/2} \bL^{-T} \bold{H}^T \bold{R}^{-1} \bold{d}, \quad \boldsymbol{\mathcal{A}}^{pr}=\bold{I} + \bold{D}^{1/2} \bL^{-T} \bold{H}^T \bold{R}^{-1} \bold{H} \bL^{-1} \bold{D}^{1/2}.
\end{equation}
$\boldsymbol{\mathcal{A}}^{pr}$ is the sum of the identity matrix and a rank $q$ positive semi-definite matrix. Hence, it has a cluster of $n(N+1) - q$ eigenvalues at one and $q$ eigenvalues that are greater than one. Thus, the spectral condition number $\kappa = \lambda_{max} / \lambda_{min}$ (where $\lambda_{max}$ and $\lambda_{min}$ are the largest and smallest eigenvalues of $\boldsymbol{\mathcal{A}}^{pr}$, respectively) is equal to $\lambda_{max}$. We discuss employing second level preconditioning to reduce the condition number while also preserving the cluster of eigenvalues at one. In the subsequent sections, we use notation that is common in numerical linear algebra: $\bA$ for the Hessian with first level preconditioning, $\bx$ for the unknown and $\bb$ for the right-hand side of the system of linear equations. Thus, we denote (<ref>) by
\begin{equation}\label{eq:Ax_eq_b}
\bA \bx=\bb,
\end{equation}
where the right-hand side $\bb$ is known and $\bx$ is the required solution. We assume throughout that $\bA$ is symmetric positive definite.
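The eigenvalue structure described above can be checked numerically on a toy matrix with the same form, the identity plus a rank-$q$ positive semi-definite term (the dimensions below are arbitrary illustrative values):

```python
import numpy as np

# Toy check of the spectral structure of A^pr: the identity plus a rank-q
# positive semi-definite matrix has n - q eigenvalues equal to one and
# q eigenvalues greater than one. Dimensions are illustrative.
rng = np.random.default_rng(0)
n, q = 50, 8
V = rng.standard_normal((n, q))
A_pr = np.eye(n) + V @ V.T   # same structure as I + D^{1/2} L^{-T} H^T R^{-1} H L^{-1} D^{1/2}

eigvals = np.linalg.eigvalsh(A_pr)
n_at_one = int(np.sum(np.isclose(eigvals, 1.0)))
n_above_one = int(np.sum(eigvals > 1.0 + 1e-8))
print(n_at_one, n_above_one)   # n - q = 42 eigenvalues at one, q = 8 above one
```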
§ PRECONDITIONING WEAK CONSTRAINT 4D-VAR
§.§ Preconditioned conjugate gradients
The CG method (see, e.g. <cit.>) is a popular Krylov subspace method for solving systems of the form (<ref>). A well known bound for the error at the $i$th CG iteration $\boldsymbol{\epsilon}_i = \bx - \bx_i$ is
\begin{equation}
\frac{|| \boldsymbol{\epsilon}_i ||_\bA }{|| \boldsymbol{\epsilon}_0 ||_\bA} \leq 2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^i,
\end{equation}
where $\kappa$ is the spectral condition number and $|| \boldsymbol{\epsilon}_i ||^2_\bA = \boldsymbol{\epsilon}_i^T \bA \boldsymbol{\epsilon}_i$ (see, e.g., Section 5.1 of <cit.>). Note that this bound describes the worst-case convergence and takes into account only the largest and smallest eigenvalues. The convergence of CG also depends on the distribution of the eigenvalues of $\bA$ (as well as on the right-hand side $\bb$ and the initial guess $\bx_0$); eigenvalues clustered away from zero suggest rapid convergence (Lecture 38 of <cit.>). Otherwise, CG can display slow convergence, and preconditioning is used to try to tackle this problem (Chapter 9 of <cit.>). Preconditioning aims to map the system (<ref>) to another system that has the same solution, but different properties that imply faster convergence. Ideally, the preconditioner $\bP$ should be cheap both to construct and to apply, and the preconditioned system should be easy to solve.
If $\bP$ is a symmetric positive definite matrix that approximates $\bA^{-1}$ and is available in factored form $\bP = \bC \bC^T$, the following system is solved
\begin{equation}\label{eq:split_prec_system}
\bC^T \bA \bC \hat{\bx} = \bC^T \bb,
\end{equation}
where $\hat{\bx} = \bC^{-1} \bx$. Split preconditioned CG (PCG) for solving (<ref>) is described in Algorithm <ref> (see, for example, Algorithm 9.2 of <cit.>). Note that every CG iteration involves one matrix-vector product with $\bA$ (the product $\bA \bp_{j-1}$ is stored in step <ref> and reused in step <ref>), and this is expensive in weak constraint 4D-Var because applying the factor $\bL^{-1}$ involves running the linearised model over the whole assimilation window.
Split preconditioned CG for solving $\bA \bx=\bb$
Input: $\bA \in \mathbb{R}^{n_A \times n_A}$, $\bb \in \mathbb{R}^{n_A}$, preconditioner $\bP = \bC \bC^T \in \mathbb{R}^{n_A \times n_A}$, initial solution $\bx_0 \in \mathbb{R}^{n_A}$
Output: solution $\bx_j \in \mathbb{R}^{n_A}$
[1]
Compute $\br_0 =\bC^T (\bb - \bA \bx_0)$, and $\bp_0 =\bC \br_0$
$j=1,2, \dots,$ until convergence
$\alpha_j = (\br_{j-1}^T \br_{j-1}) / (\bp_{j-1}^T \bA \bp_{j-1})$
$\bx_j = \bx_{j-1} + \alpha_j \bp_{j-1} $
$\br_j = \br_{j-1} - \alpha_j \bC^T \bA \bp_{j-1}$
$\beta_j = (\br_j^T \br_j) / (\br_{j-1}^T \br_{j-1})$
$\bp_j = \bC \br_j + \beta_j \bp_{j-1}$
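The algorithm above translates directly into code. The following is a minimal numpy sketch of split preconditioned CG; the test matrix and the Jacobi-type factor $\bC$ are illustrative choices (any symmetric positive definite $\bP = \bC\bC^T$ would do).

```python
import numpy as np

# Minimal numpy transcription of the split preconditioned CG algorithm above;
# A is SPD and the preconditioner P = C C^T approximates A^{-1}.
def split_pcg(A, b, C, x0=None, tol=1e-10, maxit=500):
    x = np.zeros(A.shape[0]) if x0 is None else x0.copy()
    r = C.T @ (b - A @ x)               # preconditioned residual
    p = C @ r
    rho = r @ r
    for _ in range(maxit):
        Ap = A @ p                      # the single matrix-vector product per iteration
        alpha = rho / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * (C.T @ Ap)
        rho_new = r @ r
        if np.sqrt(rho_new) < tol:
            break
        p = C @ r + (rho_new / rho) * p
        rho = rho_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30.0 * np.eye(30)         # illustrative SPD test matrix
b = rng.standard_normal(30)
C = np.diag(1.0 / np.sqrt(np.diag(A)))  # illustrative Jacobi-type factor, P = C C^T
x = split_pcg(A, b, C)
print(np.linalg.norm(A @ x - b))        # small residual
```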
§.§ Limited memory preconditioners
In weak constraint 4D-Var the preconditioner $\bP$ approximates the inverse Hessian. Hence, $\bP$ can be obtained using Quasi-Newton methods for unconstrained optimization that construct an approximation of the Hessian matrix, which is updated regularly (see, for example, Chapter 6 of <cit.>). A popular method to approximate the Hessian is BFGS (named after Broyden, Fletcher, Goldfarb, and Shanno, who proposed it), but it is too expensive in terms of storage and updating the approximation. Instead, the so-called block BFGS method (derived by <cit.>) uses only a limited number of vectors to build the Hessian, and when new vectors are added older ones are dropped. This is an example of a limited memory preconditioner (LMP), and the one considered by
Tshimanga et al. (see <cit.> and <cit.>) in the context of strong constraint 4D-Var. An LMP for an $n_A \times n_A$ symmetric positive definite matrix $\bA$ is defined as follows
\begin{equation}\label{eq:LMP_general}
\bP_k = ( \bI_{n_A} - \bS (\bS^T \bA \bS)^{-1} \bS^T \bA) ( \bI_{n_A} - \bA \bS(\bS^T \bA \bS)^{-1} \bS^T) + \bS (\bS^T \bA \bS)^{-1} \bS^T,
\end{equation}
where $\bS$ is an $n_A \times k$ ($k \leq n_A$) matrix with linearly independent columns $\bs_1, \dots, \bs_k$, and $\bI_{n_A}$ is the $n_A \times n_A$ identity matrix <cit.>. $\bP_k$ is symmetric positive definite and if $k = n_A$ then $(\bS^T \bA \bS)^{-1} = \bS^{-1} \bA^{-1} \bS^{-T}$ and $\bP_k = \bA^{-1}$. In data assimilation, we have $k \ll n_A$, hence the name LMPs. $\bP_k$ is called a balancing preconditioner in <cit.>.
A potential problem for practical applications of (<ref>) is the need for expensive matrix-matrix products with $\bA$. Simpler formulations of (<ref>) are obtained by imposing more conditions on the vectors $\bs_1, \dots, \bs_k$. Two formulations that <cit.> calls spectral-LMP and Ritz-LMP have been used, for example, in ocean data assimilation in the Regional Ocean Modeling System (ROMS) <cit.> and the variational data assimilation software with the Nucleus for European Modelling of the Ocean (NEMO) ocean model (NEMOVAR) <cit.>, and coupled climate reanalysis in Coupled ECMWF ReAnalysis (CERA) <cit.>.
To obtain the spectral-LMP, let $\bv_1, \dots, \bv_k$ be orthonormal eigenvectors of $\bA$ with corresponding eigenvalues $\lambda_1, \dots, \lambda_k$. Set $\bV = (\bv_1, \dots, \bv_k)$ and $\boldsymbol{\Lambda} = diag(\lambda_1, \dots, \lambda_k)$ so that $\bA \bV = \bV \boldsymbol{\Lambda}$ and $\bV^T \bV = \bI_k$. If $\bs_i = \bv_i$, $i=1,\dots,k$, then the LMP in (<ref>) is the spectral-LMP $\bP_k^{sp}$ (it is called a deflation preconditioner in <cit.>), which can be simplified as
\begin{equation}\label{eq:spectral_LMP}
\bP_k^{sp} = \bI_{n_A} - \sum_{i=1}^{k} (1 - \lambda_i^{-1}) \bv_i \bv_i^T.
\end{equation}
Then $\bP_k^{sp} = \bC_k^{sp}(\bC_k^{sp})^T$ with (presented in Section 2.3.1 of <cit.>)
\begin{equation}\label{eq:spectral_LMP_factor}
\bC_k^{sp} = \prod_{i=1}^{k} \left(\bI_{n_A} - \left( 1 - (\sqrt{\lambda_i})^{-1}\right) \bv_i \bv_i^T \right).
\end{equation}
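The factored form can be verified numerically. The sketch below builds $\bP_k^{sp}$ and $\bC_k^{sp}$ from the $k$ largest exact eigenpairs of a small illustrative SPD matrix and checks that $\bP_k^{sp} = \bC_k^{sp}(\bC_k^{sp})^T$ and that the preconditioned matrix has at least $k$ unit eigenvalues.

```python
import numpy as np

# Sketch of the spectral-LMP built from the k largest exact eigenpairs of a
# small illustrative SPD matrix: check the factorisation P_k = C_k C_k^T and
# that the preconditioned matrix C_k^T A C_k has at least k unit eigenvalues.
rng = np.random.default_rng(2)
n, k = 40, 5
M = rng.standard_normal((n, n))
A = np.eye(n) + M @ M.T / n

lam, V = np.linalg.eigh(A)
lam, V = lam[::-1][:k], V[:, ::-1][:, :k]    # k largest eigenpairs

# P_k^{sp} = I - sum_i (1 - 1/lambda_i) v_i v_i^T
P = np.eye(n) - (V * (1.0 - 1.0 / lam)) @ V.T
# C_k^{sp} = prod_i (I - (1 - 1/sqrt(lambda_i)) v_i v_i^T)
C = np.eye(n)
for i in range(k):
    C = C @ (np.eye(n) - (1.0 - lam[i] ** -0.5) * np.outer(V[:, i], V[:, i]))

factored_ok = np.allclose(P, C @ C.T)
prec_eigs = np.linalg.eigvalsh(C.T @ A @ C)
print(factored_ok, int(np.sum(np.isclose(prec_eigs, 1.0))) >= k)   # True True
```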
In many applications, including data assimilation, exact eigenpairs are not known, and their approximations, called Ritz values and vectors, are used (we discuss these in the following section).
If $\bu_1, \dots, \bu_k$ are orthonormal Ritz vectors, then the relation $\bU^T \bA \bU = \boldsymbol{\Theta}$ holds, where $\bU = (\bu_1, \dots, \bu_k)$, $\boldsymbol{\Theta} = diag(\theta_1, \dots, \theta_k)$ and $\theta_i$ is a Ritz value. Setting $\bs_i = \bu_i$, $i=1,\dots,k$, the Ritz-LMP $\bP_k^{Rt}$ is
\begin{equation}\label{eq:Ritz_LMP}
\bP_k^{Rt} = ( \bI_{n_A} - \bU \boldsymbol{\Theta}^{-1} \bU^T \bA) ( \bI_{n_A} - \bA \bU \boldsymbol{\Theta}^{-1} \bU^T) + \bU \boldsymbol{\Theta}^{-1} \bU^T.
\end{equation}
Each application of $\bP_k^{Rt}$ requires a matrix-matrix product with $\bA$. If the Ritz vectors are obtained by the Lanczos process (described in Section <ref> below), then (<ref>) can be further simplified, so that no matrix-matrix products with $\bA$ are needed (see Section 4.2.2. of <cit.> for details).
An important property is that if an LMP is constructed using $k$ vectors then at least $k$ eigenvalues of the preconditioned matrix $\bC^T \bA \bC$ will be equal to 1, and the remaining eigenvalues will lie between the smallest and largest eigenvalues of $\bA$ (see Theorem 3.4 of <cit.>). Moreover, if $\bA$ has a cluster of eigenvalues at 1, then LMPs preserve this cluster. This is crucial when preconditioning (<ref>): because the LMPs preserve the $n(N+1) - q$ smallest eigenvalues of $\boldsymbol{\mathcal{A}}^{pr}$ that are equal to $1$, the CG convergence can be improved by decreasing the largest eigenvalues. Hence, it is preferable to use the largest eigenpairs or their approximations.
In practice, both the spectral-LMP and Ritz-LMP use Ritz vectors and values to construct the LMPs.
It has been shown that the Ritz-LMP can perform better than the spectral-LMP in a strong constraint 4D-Var setting by correcting for the inaccuracies in the estimates of eigenpairs <cit.>. However, <cit.> (Theorem 4.5) have shown that if the preconditioners are constructed with Ritz vectors and values that have converged, then the spectral-LMP acts like the Ritz-LMP.
§.§ Ritz information
Calculating or approximating all the eigenpairs of a large sparse matrix is impractical. Hence, only a subset is approximated to construct the LMPs. This is often done by extracting approximations from a subspace, and the Rayleigh-Ritz (RR) procedure is a popular method for doing this.
Assume that $\mathcal{Z} \subset \mathbb{R}^{n_A}$ is an invariant subspace of $\bA$, i.e. $\bA \bz \in \mathcal{Z}$ for every $\bz \in \mathcal{Z}$, and the columns of $\bZ \in \mathbb{R}^{n_A \times m}$, $m < n_A$, form an orthonormal basis for $\mathcal{Z}$. If $(\lambda, \hat{\by})$ is an eigenpair of $\bK = \bZ^T \bA \bZ \in \mathbb{R}^{m \times m}$, then $(\lambda, \bv)$, where $\bv = \bZ \hat{\by}$, is an eigenpair of $\bA$ (see, e.g. Theorem 1.2 in Chapter 4 of <cit.>). Hence, eigenpairs of $\bA$ whose eigenvectors lie in the subspace $\mathcal{Z}$ can be extracted by solving a small eigenvalue problem.
However, generally the computed subspace $\tilde{\mathcal{Z}}$ with orthonormal basis as columns of $\tilde{\bZ}$ is not invariant. Hence, only approximations $\tilde{\bv}$ to the eigenvectors $\bv$ belong to $\tilde{\mathcal{Z}}$. The RR procedure computes approximations $\bu$ to $\tilde{\bv}$. We give the RR procedure in Algorithm <ref>, where the eigenvalue decomposition is abbreviated as EVD. Approximations to eigenvalues $\lambda$ are called Ritz values $\theta$, and $\bu$ are the Ritz vectors. Eigenvectors of $\tilde{\bK}=\tilde{\bZ}^T \bA \tilde{\bZ}$, which is the projection of $\bA$ onto $\tilde{\mathcal{Z}}$, are denoted by $\bw$ and are called primitive Ritz vectors.
Rayleigh-Ritz procedure for computing approximations of eigenpairs of symmetric $\bA$
Input: symmetric matrix $\bA \in \mathbb{R}^{n_A \times n_A}$, orthogonal matrix $\tilde{\bZ} \in \mathbb{R}^{n_A \times m}$, $m<n_A$
Output: orthogonal $\bU \in \mathbb{R}^{n_A \times m}$ with approximations to eigenvectors of $\bA$ as its columns, and diagonal $\boldsymbol{\Theta} \in \mathbb{R}^{m \times m}$ with approximations to eigenvalues of $\bA$ on the diagonal
[1]
Form $\tilde{\bK}=\tilde{\bZ}^T \bA \tilde{\bZ} \in \mathbb{R}^{m \times m}$
Form EVD of $\tilde{\bK}:\ \tilde{\bK} = \bW \boldsymbol{\Theta} \bW^T$, where $\bW, \ \boldsymbol{\Theta} \in \mathbb{R}^{ m \times m}$
Form Ritz vectors $\bU = \tilde{\bZ} \bW \in \mathbb{R}^{n_A \times m}$
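The RR procedure is short enough to state as a direct numpy sketch; the test matrix and subspace below are illustrative.

```python
import numpy as np

# Direct numpy version of the Rayleigh-Ritz procedure above: project A onto
# the subspace spanned by the columns of Z, solve the small EVD, and lift the
# eigenvectors back to obtain Ritz pairs.
def rayleigh_ritz(A, Z):
    K = Z.T @ A @ Z                    # m x m projection of A onto range(Z)
    theta, W = np.linalg.eigh(K)       # EVD of the small matrix
    return theta, Z @ W                # Ritz values and Ritz vectors U = Z W

rng = np.random.default_rng(3)
n, m = 60, 10
M = rng.standard_normal((n, n))
A = M @ M.T                                        # illustrative SPD matrix
Z, _ = np.linalg.qr(rng.standard_normal((n, m)))   # orthonormal basis of a subspace
theta, U = rayleigh_ritz(A, Z)

# The Ritz pairs satisfy U^T A U = diag(theta) with orthonormal U
print(np.allclose(U.T @ A @ U, np.diag(theta)))    # True
```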
§.§ Spectral information from CG
<cit.> use Ritz pairs of the Hessian in one inner loop to construct LMPs for the following inner loop, i.e. information on $\bA^{(0)}$ is used to precondition $\bA^{(1)}$, and so on. Success relies on the Hessians not changing significantly from one inner loop to the next. Ritz information can be obtained from the Lanczos process that is connected to CG, hence information for the preconditioner can be gathered at a negligible cost.
The Lanczos process is used to obtain estimates of a few extremal eigenvalues and corresponding eigenvectors of a symmetric matrix $\bA$ (Section 10.1 of <cit.>). It produces a sequence of tridiagonal matrices $\bT_j \in \mathbb{R}^{j \times j}$, whose largest and smallest eigenvalues converge to the largest and smallest eigenvalues of $\bA$. Given a starting vector $\blf_0$, it also computes an orthonormal basis $\blf_0,\dots, \blf_{j-1}$ for the Krylov subspace $\mathcal{K}_j = span\{\blf_0, \bA \blf_0, \dots, \bA^{j-1} \blf_0\}$.
Ritz values $\theta_i$ are obtained as eigenvalues of a tridiagonal matrix, which has the following structure:
\begin{equation}
\bT_j = \left( \begin{array}{cccc}
\gamma_1 & \tau_1 & &\\
\tau_1 & \gamma_2 & \tau_2 & \\
& \ddots & \ddots & \ddots \\
& & \tau_{j-1} & \gamma_j \\
\end{array} \right).
\end{equation}
The Ritz vectors of $\bA$ are $\bu_i=\bF_j \bw_i$, where $\bF_j=(\blf_0, \dots, \blf_{j-1})$ and an eigenvector $\bw_i$ of $\bT_j$ is a primitive Ritz vector. Eigenpairs of $\bT_j$ can be obtained using a symmetric tridiagonal QR algorithm or Jacobi procedures (e.g. Section 8.5 of <cit.>).
Saad (Section 6.7.3 of <cit.>) discusses obtaining entries of $\bT_j$ when solving $\bA \bx=\bb$ with CG. At the $j$-th iteration of CG, new entries of $\bT_j$ are calculated as follows
\begin{gather}
\gamma_j = \begin{cases}
\frac{1}{\alpha_j} & \text{for } j =1 \\
\frac{1}{\alpha_j} + \frac{\beta_{j-1}}{\alpha_{j-1}} & \text{for } j>1 \\
\end{cases}\\
\tau_j = \frac{\sqrt{\beta_j}}{\alpha_j},
\end{gather}
and the vector $\blf_j = \br_j/ ||\br_j||$, where $||\br_j||^2 = \br_j^T \br_j$ and $\alpha_j, \beta_j$ and $\br_j$ are as in Algorithm <ref>. Hence, obtaining eigenvalue information requires normalising the residual vectors and finding eigenpairs of the tridiagonal matrix $\bT_j$. In data assimilation, the dimension of $\bT_j$ is small, because the cost of matrix-vector products restricts the number of CG iterations in the previous inner loop. Hence its eigenpairs can be calculated cheaply. However, care has to be taken to avoid `ghost' values, i.e. repeated Ritz values, due to the loss of orthogonality in CG (Section 10.3.5 of <cit.>). This can be addressed using complete reorthogonalisation in every CG iteration, which is done in the CONGRAD routine used at the European Centre for Medium-Range Weather Forecasts <cit.>. This makes every CG iteration more expensive, but CG may converge in fewer iterations <cit.>.
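The recurrence for the entries of $\bT_j$ can be sketched as follows: run unpreconditioned CG on an illustrative SPD matrix, assemble $\bT_j$ from the stored coefficients $\alpha_j$ and $\beta_j$, and compare the largest Ritz value with $\lambda_{max}$ of $\bA$.

```python
import numpy as np

# Sketch of recovering the Lanczos tridiagonal T_j from the CG coefficients,
# following the formulas above. The test matrix is illustrative.
rng = np.random.default_rng(4)
n, its = 100, 20
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)
b = rng.standard_normal(n)

x, r = np.zeros(n), b.copy()
p = r.copy()
alphas, betas = [], []
for _ in range(its):               # plain CG, storing alpha_j and beta_j
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    alphas.append(alpha)
    betas.append(beta)

# gamma_1 = 1/alpha_1, gamma_j = 1/alpha_j + beta_{j-1}/alpha_{j-1},
# tau_j = sqrt(beta_j)/alpha_j
T = np.zeros((its, its))
for j in range(its):
    T[j, j] = 1.0 / alphas[j] + (betas[j - 1] / alphas[j - 1] if j > 0 else 0.0)
    if j < its - 1:
        T[j, j + 1] = T[j + 1, j] = np.sqrt(betas[j]) / alphas[j]

ritz = np.linalg.eigvalsh(T)
lam_max = np.linalg.eigvalsh(A)[-1]
print(ritz[-1], lam_max)           # the largest Ritz value approximates lambda_max
```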
§ RANDOMISED EIGENVALUE DECOMPOSITION
If the Hessian in one inner loop differs significantly from the Hessian in the previous inner loop, then it may not be useful to precondition the former with an LMP that is constructed with information from the latter. Employing the Lanczos process to obtain eigenpair estimates and using them to construct an LMP in the same inner loop is too computationally expensive, because each iteration of the Lanczos process requires a matrix-vector product with the Hessian, so the cost is similar to that of CG. Hence, we explore a different approach.
Subspace iteration is a simple procedure for obtaining approximations to the largest eigenpairs (see, e.g., Chapter 5 of <cit.>). It is easy to understand and straightforward to implement, although its convergence can be very slow if the largest eigenvalues are not well separated from the rest of the spectrum. The accuracy of subspace iteration may be enhanced by using an RR projection.
Such an approach is used in the Randomised Eigenvalue Decomposition (REVD) (see, e.g., <cit.>). This takes a Gaussian random matrix, i.e. a matrix whose entries are independent standard normal random variables, and applies one iteration of the subspace iteration method with RR projection, hence obtaining a rank $m$ approximation $\bA\approx \bZ_1 (\bZ_1^T \bA \bZ_1) \bZ_1^T$, where $\bZ_1 \in \mathbb{R}^{n_A \times m}$ is orthogonal. We present REVD in Algorithm <ref>. An important feature of REVD is that the accuracy of the approximation is enhanced by oversampling (also called using guard vectors in <cit.>), i.e. working with a larger space than the required number of Ritz vectors. <cit.> claim that setting the oversampling parameter to 5 or 10 is often sufficient.
Randomised eigenvalue decomposition, REVD
Input: symmetric matrix $\bA \in \mathbb{R}^{n_A \times n_A}$, target rank $k$, an oversampling parameter $l$
Output: orthogonal $\bU_1 \in \mathbb{R}^{n_A \times k}$ with approximations to eigenvectors of $\bA$ as its columns, and diagonal $\bsTheta_1 \in \mathbb{R}^{k \times k}$ with approximations to the largest eigenvalues of $\bA$ on the diagonal
[1]
Form a Gaussian random matrix $\bG \in \mathbb{R}^{n_A \times (k+l)}$
Form a sample matrix $\bY=\bA \bG \in \mathbb{R}^{n_A \times (k+l)}$
Orthonormalize the columns of $\bY$ to obtain orthonormal $\bZ_1 \in \mathbb{R}^{n_A \times (k+l)}$
Form $\bK_1 = \bZ_1^T \bA \bZ_1 \in \mathbb{R}^{ (k+l) \times (k+l) }$
Form EVD of $\bK_1:\ \bK_1 = \bW_1 \bsTheta_1 \bW_1^T$, where $\bW_1, \ \bsTheta_1 \in \mathbb{R}^{ (k+l) \times (k+l)}$, elements of $\bsTheta_1$ are sorted in decreasing order
Remove last $l$ columns and rows of $\bsTheta_1$, so that $\bsTheta_1 \in \mathbb{R}^{ k \times k}$
Remove last $l$ columns of $ \bW_1$, so that $\bW_1 \in \mathbb{R}^{ (k+l) \times k}$
Form $\bU_1 = \bZ_1 \bW_1 \in \mathbb{R}^{n_A \times k}$.
Randomised algorithms are designed to minimise the communication instead of the flop count. The expensive parts of Algorithm <ref> are the two matrix-matrix products $\bA \bG$ and $\bA \bZ_1$ in steps 2 and 4, i.e. matrix $\bA$ has to be multiplied with $2(k+l)$ vectors, which in serial computations would be essentially the cost of $2(k+l)$ iterations of unpreconditioned CG. However, note that these matrix-matrix products can be parallelised.
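The REVD algorithm above admits a compact numpy sketch; the SPD test matrix with rapidly decreasing eigenvalues ($2^0, 2^{-1}, 2^{-2}, \dots$) is an illustrative choice for which the randomised approximation is accurate.

```python
import numpy as np

# Numpy sketch of REVD as in the algorithm above: Gaussian sampling, one
# projection step, Rayleigh-Ritz on the sampled subspace, truncation to rank k.
def revd(A, k, l, rng):
    n = A.shape[0]
    G = rng.standard_normal((n, k + l))      # Gaussian start matrix
    Z, _ = np.linalg.qr(A @ G)               # orthonormal basis of the sample matrix
    K = Z.T @ A @ Z                          # (k+l) x (k+l) projection
    theta, W = np.linalg.eigh(K)
    theta, W = theta[::-1], W[:, ::-1]       # sort in decreasing order
    return Z @ W[:, :k], theta[:k]           # keep the leading k Ritz pairs

rng = np.random.default_rng(5)
n, k, l = 200, 10, 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(2.0 ** -np.arange(n)) @ Q.T  # SPD, rapidly decaying spectrum
U, theta = revd(A, k, l, rng)
err = np.max(np.abs(theta - 2.0 ** -np.arange(k)))
print(err)   # small: the fast decay makes the randomised approximation accurate
```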
In weak constraint 4D-Var, $\bA$ is the Hessian, hence it is symmetric positive definite and its eigenpairs can also be approximated using a randomised Nyström method (Algorithm 5.5 of <cit.>), which is expected to give much more accurate results than REVD <cit.>. We present the Nyström method in Algorithm <ref>, where singular value decomposition is abbreviated as SVD. It considers a more elaborate rank $m$ approximation than in REVD: $\bA\approx (\bA \bZ_1) (\bZ_1^T \bA \bZ_1)^{-1} (\bA \bZ_1)^T = \bF \bF^T$, where $\bZ_1 \in \mathbb{R}^{n_A \times m}$ is orthogonal (obtained in the same way as in REVD, e.g. using a tall skinny QR (TSQR) decomposition <cit.>) and $\bF =(\bA \bZ_1) (\bZ_1^T \bA \bZ_1)^{-1/2} \in \mathbb{R}^{n_A \times m}$ is an approximate Cholesky factor of $\bA$, which is found in step 6. The eigenvalues of $\bF \bF^T$ are the squares of the singular values of $\bF$ (see Section 2.4.2 of <cit.>). In numerical computations, we store the matrices $\bE^{(1)} = \bA \bZ_1$ and $\bE^{(2)} = \bZ_1^T \bE^{(1)} = \bZ_1^T \bA \bZ_1$ (step 4), perform the Cholesky factorization $\bE^{(2)} = \bC^T \bC$ (step 5) and obtain $\bF$ by solving the triangular system $\bF \bC= \bE^{(1)}$.
Randomised eigenvalue decomposition for symmetric positive semidefinite $\bA$, Nyström
Input: symmetric positive semidefinite matrix $\bA \in \mathbb{R}^{n_A \times n_A}$, target rank $k$, an oversampling parameter $l$
Output: orthogonal $ \bU_2 \in \mathbb{R}^{n_A \times k}$ with approximations to eigenvectors of $\bA$ as its columns, and diagonal $ \bsTheta_2 \in \mathbb{R}^{k \times k}$ with approximations to eigenvalues of $\bA$ on the diagonal
[1]
Form a Gaussian random matrix $\bG \in \mathbb{R}^{n_A \times (k+l)}$
Form a sample matrix $\bY=\bA \bG \in \mathbb{R}^{n_A \times (k+l)}$
Orthonormalize the columns of $\bY$ to obtain orthonormal $ \bZ_1 \in \mathbb{R}^{n_A \times (k+l)}$
Form matrices $\bE^{(1)} = \bA \bZ_1 \in \mathbb{R}^{n_A \times (k+l)}$ and $\bE^{(2)} = \bZ_1^T \bE^{(1)} \in \mathbb{R}^{ (k+l) \times (k+l) }$
Form a Cholesky factorization $\bE^{(2)} = \bC^T \bC$
Solve $\bF \bC= \bE^{(1)}$ for $\bF \in \mathbb{R}^{n_A \times (k+l)}$
Form SVD of $\bF:\ \bF = \bU_2 \bsSigma \bV^T $, where $\bU_2, \ \bV \in \mathbb{R}^{n_A\times (k+l)}$, $\bsSigma \in \mathbb{R}^{ (k+l) \times (k+l)}$, elements of $\bsSigma$ are sorted in decreasing order
Remove last $l$ columns of $\bU_2$, so that $\bU_2 \in \mathbb{R}^{n_A \times k}$
Remove last $l$ columns and rows of $\bsSigma$, so that $\bsSigma \in \mathbb{R}^{ k \times k}$, and set $\bsTheta_2 = \bsSigma^2$
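The steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the Matlab implementation used in the experiments; the function name and the tiny stabilising shift added before the Cholesky factorisation are our own, the rest follows the algorithm step by step.

```python
import numpy as np
from scipy.linalg import solve_triangular

def nystrom_evd(A, k, l, rng=None):
    """Randomised Nystrom EVD of a symmetric positive semidefinite A.

    Returns U (n x k) with approximate eigenvectors and theta (k,) with
    approximate eigenvalues, sorted in decreasing order."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    G = rng.standard_normal((n, k + l))        # step 1: Gaussian test matrix
    Y = A @ G                                  # step 2: sample matrix
    Z1, _ = np.linalg.qr(Y)                    # step 3: orthonormal range basis
    E1 = A @ Z1                                # step 4: E1 = A Z1
    E2 = Z1.T @ E1                             #         E2 = Z1^T A Z1
    # tiny shift guards the Cholesky factor against loss of definiteness
    shift = 1e-12 * np.trace(E2)
    L = np.linalg.cholesky(E2 + shift * np.eye(k + l))  # E2 = L L^T, C = L^T
    F = solve_triangular(L, E1.T, lower=True).T         # step 6: solve F C = E1
    U, s, _ = np.linalg.svd(F, full_matrices=False)     # step 7: thin SVD of F
    return U[:, :k], s[:k] ** 2                # steps 8-9: truncate, square
```

Because the eigenvalues of $\bF\bF^T$ are the squared singular values of $\bF$, the returned `theta` contains the Ritz values of $\bA$.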
The matrix-matrix product with $\bA$ at step <ref> of Algorithms <ref> and <ref> is avoided in Rutishauser's implementation of subspace iteration with RR projection, called ritzit <cit.>. It can be derived as follows (see Chapter 14 of <cit.>). Assume that $\bG_3 \in \mathbb{R}^{n_A \times m}$ is an orthogonal matrix and the sample matrix is $\bY_3 = \bA \bG_3 = \bZ_3 \bR_3$, where $\bZ_3 \in \mathbb{R}^{n_A \times m}$ is orthogonal and $\bR_3 \in \mathbb{R}^{m \times m}$ is upper triangular. A projection of $\bA^2$ onto the column space of $\bG_3$ is then $\hat{\bK} = \bY_3^T \bY_3 = \bR_3^T \bZ_3^T \bZ_3 \bR_3 = \bR_3^T \bR_3$. Setting $\bK_3 = \bR_3 \bR_3^T = \bR_3 \bR_3^T \bR_3 \bR_3^{-1} = \bR_3 \hat{\bK} \bR_3^{-1}$ shows that $\bK_3$ is similar to $\hat{\bK}$ and hence has the same eigenvalues. This leads to another implementation of REVD, presented in Algorithm <ref>. It is a single-pass algorithm, meaning that $\bA$ has to be accessed just once; to the best of our knowledge, this method has not been considered in the context of randomised eigenvalue approximations.
Randomised eigenvalue decomposition based on ritzit, REVD_ritzit
Input: symmetric matrix $\bA \in \mathbb{R}^{n_A \times n_A}$, target rank $k$, an oversampling parameter $l$
Output: orthogonal $\bU_3 \in \mathbb{R}^{n_A \times k}$ with approximations to eigenvectors of $\bA$ as its columns, and diagonal $\bsTheta_3 \in \mathbb{R}^{k \times k}$ with approximations to eigenvalues of $\bA$ on the diagonal
[1]
Form a Gaussian random matrix $\bG \in \mathbb{R}^{n_A \times (k+l)}$
Orthonormalize the columns of $\bG$ to obtain orthonormal $\bG_3$
Form a sample matrix $\bY_3=\bA \bG_3 \in \mathbb{R}^{n_A \times (k+l)}$
Compute QR decomposition $\bY_3=\bZ_3 \bR_3$ to obtain orthogonal $\bZ_3 \in \mathbb{R}^{n_A \times (k+l)}$ and upper triangular $\bR_3 \in \mathbb{R}^{(k+l) \times (k+l)}$
Form $\bK_3 = \bR_3 \bR_3^T \in \mathbb{R}^{ (k+l) \times (k+l) }$
Form EVD of $\bK_3:\ \bK_3 = \bW_3 \bsTheta_3^2 \bW_3^T$, where $\bW_3, \ \bsTheta_3^2 \in \mathbb{R}^{ (k+l) \times (k+l)}$, elements of $\bsTheta_3$ are sorted in decreasing order
Remove last $l$ columns and rows of $\bsTheta_3^2$, so that $\bsTheta_3^2 \in \mathbb{R}^{ k \times k}$
Remove last $l$ columns of $ \bW_3$, so that $ \bW_3 \in \mathbb{R}^{ (k+l) \times k}$
Form $\bU_3 = \bZ_3 \bW_3 \in \mathbb{R}^{n_A \times k}$.
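A NumPy sketch of REVD_ritzit makes the single-pass property explicit: $\bA$ appears in exactly one product. As above, this is an illustration in our own notation, not the implementation used in the experiments.

```python
import numpy as np

def revd_ritzit(A, k, l, rng=None):
    """Single-pass randomised EVD based on ritzit: the matrix A is
    accessed only once, through the product A @ G3."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    G = rng.standard_normal((n, k + l))      # step 1: Gaussian start matrix
    G3, _ = np.linalg.qr(G)                  # step 2: orthonormalise G
    Y3 = A @ G3                              # step 3: the single product with A
    Z3, R3 = np.linalg.qr(Y3)                # step 4: QR of the sample matrix
    K3 = R3 @ R3.T                           # step 5: similar to R3^T R3 = Y3^T Y3
    w, W = np.linalg.eigh(K3)                # EVD (eigh returns ascending order)
    w, W = w[::-1], W[:, ::-1]               # sort in decreasing order
    theta = np.sqrt(np.maximum(w[:k], 0.0))  # K3 approximates a projection of A^2
    return Z3 @ W[:, :k], theta              # Ritz vectors and values
```

Since $\bK_3$ is a projected version of $\bA^2$, the square roots of its eigenvalues are the Ritz values of $\bA$, and they cannot exceed $\|\bA\|_2$.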
Note that the Ritz vectors given by Algorithms <ref>, <ref> and <ref> are different. Although Algorithm <ref> accesses the matrix $\bA$ only once, it requires an additional orthogonalisation of a matrix of size $n_A \times (k+l)$.
In Table <ref>, we summarise some properties of the Lanczos, REVD, Nyström and REVD_ritzit methods when they are used to compute Ritz values and vectors to generate a preconditioner for $\bA$ in incremental data assimilation. Note that the cost of applying the spectral-LMP depends on the number of vectors $k$ used in its construction and is independent of which method is used to obtain them. The additional cost of using randomised algorithms arises only once per inner loop when the preconditioner is generated. We recall that in these algorithms the required EVD or SVD of the small matrix can be obtained cheaply and the most expensive parts of the algorithms are the matrix-matrix products of $\bA$ and $n_A \times (k+l)$ matrices. If enough computational resources are available, these can be parallelised. In the best case scenario, all $k+l$ matrix-vector products can be performed at the same time, making the cost of the matrix-matrix product equivalent to the cost of one iteration of CG plus communication between the processors.
When a randomised method is used to generate the preconditioner, an inner loop is performed as follows. Estimates of the Ritz values of the Hessian and the corresponding Ritz vectors are obtained with a randomised method (Algorithm <ref>, <ref> or <ref>) and used to construct an LMP. Then the system (<ref>) with the exact Hessian $\bold{A}$ is solved with PCG (Algorithm <ref>) using the LMP. The state is updated in the outer loop using the PCG solution.
Property | Lanczos | REVD | Nyström | REVD_ritzit
Information source | Previous inner loop | Current inner loop | Current inner loop | Current inner loop
Preconditioner for the first inner loop | No | Yes | Yes | Yes
$k$ dependency on the previous inner loop | Bounded by the number of CG iterations | Independent | Independent | Independent
Matrix-matrix products with $\bA$ | None | 2 products with $n_A \times (k+l)$ matrices | 2 products with $n_A \times (k+l)$ matrices | 1 product with an $n_A \times (k+l)$ matrix
QR decomposition | None | None | None | $\breve{\bY} \in \mathbb{R}^{n_A \times (k+l)}$
Orthogonalisation | None | $\bY \in \mathbb{R}^{n_A \times (k+l)}$ | $\bY \in \mathbb{R}^{n_A \times (k+l)}$ | $\bG \in \mathbb{R}^{n_A \times (k+l)}$
Cholesky factorization | None | None | $\bE^{(2)} \in \mathbb{R}^{ (k+l) \times (k+l) }$ | None
Triangular solve | None | None | $\bF \bC = \bE^{(1)}$ for $\bF \in \mathbb{R}^{n_A \times (k+l)}$ | None
Deterministic EVD | $\bT_k \in \mathbb{R}^{k \times k}$ * | $\bar \bK \in \mathbb{R}^{ (k+l) \times (k+l) }$ | None | $\breve{\bK} \in \mathbb{R}^{ (k+l) \times (k+l) }$
Deterministic SVD | None | None | $\bF \in \mathbb{R}^{n_A \times (k+l)}$ | None
A summary of the properties of the different methods of obtaining $k$ Ritz vectors and values to generate the preconditioner for an $n_A \times n_A$ matrix $\bA$ in the $i$th inner loop. Here $l$ is the oversampling parameter. * applies to CG with reorthogonalisation.
§ NUMERICAL EXPERIMENTS
We demonstrate our proposed preconditioning strategies using two models: a simple linear advection model to explore the spectra of the preconditioned Hessian, and the nonlinear Lorenz 96 model <cit.> to explore the convergence of split preconditioned CG (PCG). We perform identical twin experiments, where $\bold{x}^t = ((\bx^t_0)^T, \dots, (\bx^t_N)^T)^T$ denotes the reference trajectory. The observations and background state are generated by adding noise with covariance matrices $\bold{R}$ and $\bB$, respectively, to $\mathcal{H}_i (\bx^t_i)$ and $\bx_0$. We use direct observations, so the observation operator $\mathcal{H}_i$ is linear.
We use covariance matrices $\bR_i=\sigma_o^2 \bI_{q_i}$, where $q_i$ is the number of observations at time $t_i$, $\bQ_i = \sigma_q^2 \bC_q$, where $\bC_q$ is a Laplacian correlation matrix <cit.>, and $\bB=\sigma_b^2 \bC_b$, where $\bC_b$ is a second-order auto-regressive correlation matrix <cit.>.
We assume that first level preconditioning has already been applied (recall (<ref>)). In data assimilation, using the Ritz-LMP as formulated in (<ref>) is impractical because of the matrix products with $\bA$, and there is no simple formulation of the Ritz-LMP when the Ritz values and vectors are obtained with the randomised methods. Hence, we use the spectral-LMP. However, as mentioned in Section <ref>, a spectral-LMP constructed with well converged Ritz values and vectors acts like the Ritz-LMP. In the second inner loop, we compare the spectral-LMPs that use information from the randomised methods with the spectral-LMP constructed with information obtained with the Matlab function eigs in the previous inner loop. eigs returns highly accurate estimates of a few of the largest or smallest eigenvalues and the corresponding eigenvectors. We use the term randomised LMP for the spectral-LMPs constructed with information from the randomised methods, and deterministic LMP for the spectral-LMP constructed with information from eigs.
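To make the construction concrete, a minimal sketch of a split spectral-LMP factor built from $k$ orthonormal Ritz pairs is given below. We use the commonly quoted split form $\bC = \bI + \bU\,\mathrm{diag}(\theta_i^{-1/2}-1)\,\bU^T$ (our notation; the paper's formulation is in (<ref>)): with exact eigenpairs, the captured directions of $\bC^T \bA \bC$ map to unit eigenvalues, while the rest of the spectrum is untouched.

```python
import numpy as np

def spectral_lmp_factor(U, theta):
    """Split factor of a spectral-LMP from k orthonormal Ritz vectors
    (columns of U, n x k) and Ritz values theta (k,):

        C = I + U diag(theta^{-1/2} - 1) U^T.

    C is symmetric; returned as a matrix-free operator acting on an
    n x m block X, to avoid forming an n x n matrix."""
    d = theta ** -0.5 - 1.0
    return lambda X: X + U @ (d[:, None] * (U.T @ X))
```

Applying the factor costs two products with the $n \times k$ matrix of Ritz vectors, which is the sense in which the cost of the spectral-LMP depends only on $k$ and not on how the Ritz pairs were obtained.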
The computations are performed with Matlab R2017b. Linear systems are solved using the Matlab implementation of PCG (function pcg), modified to allow split preconditioning so that the coefficient matrix remains symmetric in every loop.
§.§ Advection model
First, we consider the linear advection model:
\begin{equation}\label{eq:advection}
\frac{\partial u(z,t)}{\partial t} + \frac{\partial u(z,t)}{\partial z} = 0,
\end{equation}
where $z \in [0,1]$ and $t \in (0, T)$. An upwind numerical scheme is used to discretise (<ref>) (see, e.g. Chapter 4 of <cit.>). To allow us to compute all the eigenvalues (described in the following section), we consider a small system with the linear advection model. The domain is divided into $n=40$ equally spaced grid points, with grid spacing $\Delta z = 1 / n$. We run the model for 51 time steps ($N=50$) with the time step size $\Delta t = 1 / N$, hence $\bA$ is a $2040 \times 2040$ matrix. The Courant number is $C = 0.8$ (the upwind scheme is stable with $C \in [0,1]$). The initial conditions are Gaussian $u(z,0) = 6 \exp \left( -\frac{(z-0.5)^2}{2\times 0.1^2} \right)$, and the boundary conditions are periodic $u(1,t) = u(0,t)$.
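The setup above is easy to reproduce; a short Python sketch of the upwind discretisation with the experiment's parameters (the paper's code is in Matlab, so the function and variable names here are our own):

```python
import numpy as np

def advect_upwind(u, n_steps, C=0.8):
    """First-order upwind scheme for du/dt + du/dz = 0 on a periodic grid:
    u_j^{m+1} = u_j^m - C (u_j^m - u_{j-1}^m), stable for C in [0, 1]."""
    for _ in range(n_steps):
        u = u - C * (u - np.roll(u, 1))   # np.roll supplies the periodic BC
    return u

# experiment setup: n = 40 grid points, N = 50 steps, Courant number 0.8
n, N = 40, 50
z = np.arange(n) / n
u0 = 6.0 * np.exp(-(z - 0.5) ** 2 / (2 * 0.1 ** 2))   # Gaussian initial condition
uN = advect_upwind(u0, N)
```

Note that with $\Delta z = 1/40$ and $\Delta t = 1/50$ the Courant number is $C = \Delta t / \Delta z = 0.8$, matching the stated value.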
We set $\sigma_o = 0.05$, $\sigma_q = 0.05$ and $\sigma_b = 0.1$. $\bC_q$ and $\bC_b$ have length scales equal to $10 \Delta z$. Every 4th model variable is observed at every 5th time step, ensuring that there is an observation at the final time step (100 observations in total). Because the model and the observation operator $\mathcal{H}_i$ are linear, the cost function (<ref>) is quadratic and its minimiser is found in the first loop of the incremental method.
§.§.§ Eigenvalues of the preconditioned matrix
We apply the randomised LMPs in the first inner loop. Note that if the deterministic LMP is used, it is unclear how to precondition the first inner loop. We explore what effect the randomised LMPs have on the eigenvalues of $\bA$. The oversampling parameter $l$ is set to 5 and the randomised LMPs are constructed with $k=25$ vectors.
The Ritz values of $\bA$ given by the randomised methods are compared to those computed using eigs (Figure <ref>). The Nyström method produces a good approximation of the largest eigenvalues, and REVD gives a slightly worse approximation, except for the five largest eigenvalues. The REVD_ritzit method underestimates the largest eigenvalues significantly.
The largest eigenvalues of the preconditioned matrices are smaller than the largest eigenvalue of $\bA$ (Figure <ref>). However, the smallest eigenvalues of the preconditioned matrices are less than one, so applying the preconditioner expands the spectrum of $\bA$ at the lower boundary (Figure <ref>); thus Theorem 3.4 of <cit.>, which establishes the non-expansiveness of the spectrum of the Hessian after preconditioning with an LMP, does not apply. This happens because the formulation of the spectral-LMP is derived assuming that the eigenvalues and eigenvectors are exact, whereas the randomised methods provide only approximations. Note that even though REVD_ritzit gives the worst approximations of the largest eigenvalues of the Hessian, the randomised LMP with information from REVD_ritzit reduces the largest eigenvalues of the preconditioned matrix the most, and the smallest eigenvalues are close to one. Using the randomised LMP with estimates from Nyström gives similar results. Hence, the condition number of the preconditioned matrix is lower when the preconditioners are constructed with REVD_ritzit or Nyström than with REVD.
The values of the quadratic cost function at the first ten iterations of PCG are shown in Figure <ref>. Using the randomised LMP constructed with information from REVD is detrimental to the PCG convergence compared to using no preconditioning. Using information from the Nyström and REVD_ritzit methods results in similar PCG convergence, and low values of the quadratic cost function are reached in fewer iterations than without preconditioning. The PCG convergence may be explained by the favourable distribution of the eigenvalues after preconditioning with Nyström and REVD_ritzit, and by the eigenvalues smaller than one when using REVD. These results, however, do not necessarily generalise to an operational setting, since this system is well conditioned while operational systems are not. This will be investigated further in the next section.
[Figure panels: Ritz values; Preconditioned spectra, largest; Preconditioned spectra, smallest; Quadratic cost function.]
Advection problem. (fig:ritz-val-advection) The 25 largest eigenvalues of $\bA$ (eigs) and their estimates given by the randomised methods; the largest eigenvalues and their estimates given by REVD and Nyström coincide. (fig:preconditioned-eigs-advection-largest) The largest eigenvalues of $\bA$ (no LMP, the same as eigs in (fig:ritz-val-advection)) and of $(\bC_{25}^{sp})^T \bA \bC_{25}^{sp}$, where $\bC_{25}^{sp}$ is constructed with the Ritz values in (fig:ritz-val-advection) and the corresponding Ritz vectors. (fig:preconditioned-eigs-smallest-advection) The smallest eigenvalues of $\bA$ and of $(\bC_{25}^{sp})^T \bA \bC_{25}^{sp}$. (fig:qcf-vs-PCG_it-advection) The quadratic cost function value versus PCG iteration when solving systems with $\bA$ and $(\bC_{25}^{sp})^T \bA \bC_{25}^{sp}$.
§.§ Lorenz 96 model
We next use the Lorenz 96 model to examine what effect the randomised LMPs have on PCG performance. In the Lorenz 96 model the evolution of the $n$ components $X^j, \ j \in \{1,2,\dots,n\}$ of $\bx_i$ is governed by a set of $n$ coupled ODEs:
\begin{equation}\label{eq:lorenz96}
\frac{dX^j}{dt} = -X^{j-2} X^{j-1} + X^{j-1} X^{j+1} - X^j + F,
\end{equation}
where $X^{-1} = X^{n-1}, X^0 = X^n$ and $X^{n+1} = X^1$ and $F=8$. The equations are integrated using a fourth order Runge-Kutta scheme <cit.>. We set $n=80$ and $N=150$ (the size of $\bA$ is $12080 \times 12080$) and observe every 10th model variable at every 10th time step (120 observations in total), ensuring that there are observations at the final time step. The grid point distance is $\Delta X = 1/n$ and the time step is set to $\Delta t = 2.5 \times 10^{-2}$.
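The model and its integration are compactly expressed in Python (again a sketch with our own names, mirroring the fourth order Runge-Kutta integration described above):

```python
import numpy as np

def lorenz96_tendency(x, F=8.0):
    """dX^j/dt = -X^{j-2} X^{j-1} + X^{j-1} X^{j+1} - X^j + F, cyclic in j."""
    return np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + F

def rk4_step(x, dt=2.5e-2, F=8.0):
    """One fourth-order Runge-Kutta step of the Lorenz 96 model."""
    k1 = lorenz96_tendency(x, F)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, F)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, F)
    k4 = lorenz96_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

The cyclic boundary conditions $X^{-1} = X^{n-1}$, $X^0 = X^n$ and $X^{n+1} = X^1$ are handled by `np.roll`. The uniform state $X^j = F$ is an equilibrium of (<ref>), while small perturbations around it grow chaotically for $F=8$.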
For the covariance matrices we use $\sigma_o =0.15 $ and $\sigma_b = 0.2$. $\bC_b$ has length scale equal to $2 \Delta X$. Two setups are used for the model error covariance matrix:
* $\sigma_q = 0.1$ and $\bC_q$ has length scale $L_q = 2 \Delta X$ (the same as $\bC_b$);
* $\sigma_q = 0.05$ and $\bC_q$ has length scale $L_q = 0.25 \Delta X$.
In our numerical experiments, the preconditioners have a very similar effect in both setups. Hence, we present plots for the case $\sigma_q = 0.1$ and $L_q = 2 \Delta X$ in the following sections, except in Figure <ref>.
The first outer loop is performed and no second level preconditioning is used in the first inner loop, where PCG is run for 100 iterations or until the relative residual norm reaches $10^{-6}$. In the following sections, we use randomised and deterministic LMPs in the second inner loop. PCG has the same stopping criteria as in the first inner loop.
§.§.§ Minimising the inner loop cost function
In Figure <ref>, we compare the performance of the randomised LMPs
with the deterministic LMP.
We also consider the effect of varying $k$, the number of vectors used to construct the preconditioner. We set the oversampling parameter to $l=5$.
Because results from the randomised methods depend on the random matrix used, we perform 50 experiments with different realisations of the random matrix. We find that the different realisations lead to very similar results (see Figure <ref>).
Independently of the $k$ value, there is an advantage in using the second level preconditioning. The reduction in the value of the quadratic cost function is faster using the randomised LMPs compared to the deterministic LMPs, with REVD_ritzit performing the best after the first few iterations. The more information we use in the preconditioner (i.e. the higher the $k$ value), the sooner REVD_ritzit outperforms the other methods. The performance of the REVD and Nyström methods is similar. Note that as $k$ increases, the storage (see Table <ref>) and the work per PCG iteration increase. Examination of the Ritz values given by the randomised methods shows that REVD_ritzit gives the worst estimate of the largest eigenvalues, as was the case for the advection model. We calculated the smallest eigenvalue of the preconditioned matrix $(\bC_{5}^{sp})^T \bA \bC_{5}^{sp}$ using eigs. When $\bC_{5}^{sp}$ is constructed using REVD_ritzit or Nyström, the smallest eigenvalue of $(\bC_{5}^{sp})^T \bA \bC_{5}^{sp}$ is equal to one, whereas using REVD it is approximately $0.94$. This may explain why the preconditioner constructed using REVD does not perform as well as the other randomised preconditioners, but it is not entirely clear why the preconditioner that uses REVD_ritzit performs best.
$k=5$, all runs
A comparison of the value of the quadratic cost function at every PCG iteration when the spectral-LMP is constructed with $k \in \{5, 10, 15\}$ Ritz values and vectors obtained with the randomised methods in the current inner loop, and function eigs in the previous inner loop. We also show no second level preconditioning (no LMP), which is the same in all four panels. For the randomised methods, (fig:qcf_all_runns_ft10_fx10_ps37_k5) shows 50 experiments for $k=5$ and the rest display means over 50 experiments. Here $\sigma_q = 0.1$ and $L_q = 2 \Delta X$.
The PCG convergence when using the deterministic LMP and the randomised LMP with information from REVD_ritzit with different $k$ values is compared in Figure <ref> for both setups of the model error covariance matrix.
For the deterministic LMP, varying $k$ has little effect, especially in the first iterations of PCG. However, for REVD_ritzit, increasing $k$ results in a greater decrease of the cost function in the first iterations of PCG. Also, at any iteration of PCG we obtain a lower value of the quadratic cost function using the randomised LMP with $k=5$ compared to the deterministic LMP with $k=15$, which uses exact eigenpair information from the Hessian of the previous loop.
$\sigma_q = 0.1$ and $L_q = 2 \Delta X$
$\sigma_q = 0.05$ and $L_q = 0.25 \Delta X$
A comparison of the values of the quadratic cost function at every PCG iteration when using deterministic LMP with information from the previous loop (eigs) and the randomised LMP with information from REVD_ritzit for different $k$ values (5, 10 and 15). No second level preconditioning is also shown (case (fig:qcf_previous_ritzit_means_ft10_fx10_ps37_k-5-10-15) is the same as in Figure <ref>). In cases (fig:qcf_previous_ritzit_means_ft10_fx10_ps37_k-5-10-15) and (fig:qcf_previous_ritzit_means_ft10_fx10_ps64_k-5-10-15) the model error covariance matrices are constructed using parameters $\sigma_q $ and $L_q$.
§.§.§ Effect of the observation network
To understand the sensitivities of the results from the different LMPs to the observation network,
we consider a system with the same parameters as in the previous section, where we had 120 observations, but we now observe
* every 5th model variable at every 5th time step (480 observations in total);
* every 2nd variable at every 2nd time step (3000 observations in total).
The oversampling parameter is again set to $l=5$ and we set $k=5$ and $k=15$ for both observation networks. The number of eigenvalues that are larger than one is equal to the number of observations; hence, with more observations than in the previous section, more eigenvalues exceed one after the first level preconditioning.
Because all 50 experiments with different Gaussian matrices in the previous section were close to the mean, we perform 10 experiments for each randomised method, solve the systems and report the means of the quadratic cost function.
The results are presented in Figure <ref>.
Again, the randomised LMPs perform better than the deterministic LMP. However, if the preconditioner is constructed with a small amount of information about the system ($k=5$ for both systems and $k=15$ for the system with 3000 observations), then there is little difference in the performance of different randomised LMPs.
Also, when the number of observations is increased, more PCG iterations are needed before the deterministic LMP gives any improvement in the minimisation of the quadratic cost function over using no second level preconditioning.
When comparing the randomised and deterministic LMPs with different values of $k$ for these systems, we obtain similar results to those in Figure <ref>, i.e. it is more advantageous to use the randomised LMP constructed with $k=5$ than using the deterministic LMP constructed with $k=15$.
$k=5$, $q=480$
$k=5$, $q=3000$
$k=15$, $q=480$
$k=15$, $q=3000$
As in Figure <ref>, but for two systems with $q$ observations and 10 experiments are done for each randomised method and the mean values plotted.
§.§.§ Effect of oversampling
We next consider the effect of increasing the value of the oversampling parameter $l$. The observation network is as in Section <ref> (120 observations in total). We set $k=15$ and perform the second inner loop 50 times for every value of $l \in \{5,10,15\}$ with all three randomised methods. The standard deviation of the value of the quadratic cost function at every iteration is presented in Figure <ref>.
For all the methods, the standard deviation is greatest in the first iterations of PCG. It is reduced when the value of $l$ is increased, and the largest reduction happens in the first iterations. However, REVD_ritzit is the least sensitive to increased oversampling. For all values of $l$, REVD_ritzit has the largest standard deviation in the first few iterations, but it still gives the largest reduction of the quadratic cost function. Hence, large oversampling is not necessary if REVD_ritzit is used.
Standard deviation of the quadratic cost function at every iteration of PCG when the spectral-LMP is constructed with different randomised methods. For every randomised method we do 50 experiments. Here $\sigma_q = 0.1$ and $L_q = 2 \Delta X$.
§ CONCLUSIONS AND FUTURE WORK
We have proposed a new randomised approach to second level preconditioning of the incremental weak constraint 4D-Var forcing formulation. It can be preconditioned with an LMP that is constructed using approximations of eigenpairs of the Hessian. Previously, the connection between Lanczos and CG was exploited to obtain these approximations at very low cost in one inner loop, and they were then used to construct the LMP in the following inner loop. We have considered three methods (REVD, Nyström and REVD_ritzit) that employ randomisation to compute the approximations. These methods can be used to construct the preconditioner cheaply in the current inner loop, with no dependence on the previous inner loop, and are parallelisable.
Numerical experiments with the linear advection and Lorenz 96 models have shown that the randomised LMPs constructed with approximate eigenpairs improve the convergence of PCG more than deterministic LMPs with information from the previous loop. The quadratic cost function reduces more rapidly when using a randomised LMP rather than a deterministic LMP, even if the randomised LMP is constructed with fewer vectors than the deterministic LMP. Also, for the randomised LMPs, the more information about the system we use (i.e. the more approximate eigenpairs are used to construct the preconditioner), the greater the reduction in the quadratic cost function. Using more information to construct a deterministic LMP may not result in a larger reduction of the quadratic cost function, especially in the first iterations of PCG, which is in line with the results in <cit.>. However, if not enough information is included in the randomised LMP, then preconditioning may have no effect on the first few iterations of PCG.
Of the randomised methods considered, the best overall performance was for REVD_ritzit. However, if we run a small number of PCG iterations, the preconditioners obtained with different randomised methods give similar results. The performance was independent of the choice of the random Gaussian start matrix and it may be improved with oversampling.
In this work we apply randomised methods to generate a preconditioner, which is then used to accelerate the solution of the exact inner loop problem (<ref>) with the PCG method (as discussed in Section <ref>). A different approach has been explored by <cit.> and <cit.>, who presented and tested a randomised solution algorithm called the Randomized Incremental Optimal Technique (RIOT) in data assimilation. RIOT is designed to be used instead of PCG and employs a randomised eigenvalue decomposition of the Hessian (using a different method from the ones presented in this paper) to directly construct the solution $\bold{x}$ in (<ref>), which approximates the solution given by PCG.
The randomised preconditioning approach can also be employed to minimise other quadratic cost functions, including the strong constraint 4D-Var formulation. Further exploration of other single-pass versions of the randomised methods for the eigenvalue decomposition, which are discussed in <cit.>, may be useful. In particular, the single-pass version of the Nyström method is potentially attractive. If a large number of Ritz vectors is used to construct the preconditioner, more attention can be paid to choosing the value of the oversampling parameter $l$ in the randomised methods. In some cases a better approximation may be obtained if $l$ depends linearly on the target rank of the approximation <cit.>.
§ ACKNOWLEDGEMENTS
We are grateful to Dr. Adam El-Said for his code for the weak constraint 4D-Var assimilation system. We would like to thank two anonymous reviewers, whose comments helped us to improve the manuscript.
§ CONFLICT OF INTEREST
The authors declare no conflict of interest.
§ FUNDING INFORMATION
UK Engineering and Physical Sciences Research Council, Grant/Award Number: EP/L016613/1; European Research Council CUNDA project, Grant/Award Number: 694509; NERC National Centre for Earth Observation.
# A roadmap for bootstrapping critical gauge theories: decoupling operators of conformal field theories in $d>2$ dimensions

Yin-Chen He (<EMAIL_ADDRESS>), Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada
Junchen Rong (<EMAIL_ADDRESS>), DESY Hamburg, Theory Group, Notkestraße 85, D-22607 Hamburg, Germany
Ning Su (<EMAIL_ADDRESS>), Department of Physics, University of Pisa, I-56127 Pisa, Italy
###### Abstract
We propose a roadmap for bootstrapping conformal field theories (CFTs)
described by gauge theories in dimensions $d>2$. In particular, we provide a
simple and workable answer to the question of how to detect the gauge group in
the bootstrap calculation. Our recipe is based on the notion of _decoupling
operator_ , which has a simple (gauge) group theoretical origin, and is
reminiscent of the null operator of $2d$ Wess-Zumino-Witten CFTs in higher
dimensions. Using the decoupling operator we can efficiently detect the rank
(i.e. color number) of gauge groups, e.g., by imposing gap conditions in the
CFT spectrum. We also discuss the physics of the equation of motion, which has
interesting consequences in the CFT spectrum as well. As an application of our
recipes, we study a prototypical critical gauge theory, namely scalar QED, in
which a $U(1)$ gauge field interacts with critical bosons. We show that scalar
QED can be solved by conformal bootstrap: we obtain its kinks and islands in
both $d=3$ and $d=2+\epsilon$ dimensions.
###### Contents
1. I Introduction
2. II Decoupling operators in gauge theories
1. II.1 Null operator as a decoupling operator: the $SU(N)_{k}$ WZW CFT
2. II.2 Decoupling operator of bosonic gauge theories
3. III Consequence of the equation of motion
4. IV Numerical results
1. IV.1 Kinks of the $A\bar{A}$ bound
2. IV.2 Scalar QED islands in $3$ dimensions
3. IV.3 Scalar QED kinks and islands in $2+\epsilon$ dimensions
5. V Conclusion and outlook
6. A 3d WZW models and gauge theories
7. B More numerical data
## I Introduction
Coupling gapless particles with gauge fields is one of the few known ways to
obtain an interacting conformal field theory in dimensions $d>2$. These
gauge-theory CFTs have interesting applications in both high energy Seiberg
(1995); Maldacena (1999); Luty and Okui (2006) and condensed matter physics.
In condensed matter systems, such CFTs describe phase transitions or
gapless phases beyond conventional Landau’s symmetry breaking paradigm Senthil
_et al._ (2004a, b); Hermele _et al._ (2005, 2008); Song _et al._ (2019);
Jain _et al._ (1990); Kivelson _et al._ (1992); Chen _et al._ (1993); Lee
_et al._ (2018), and they have interesting properties such as
fractionalization and long-range entanglement. Understanding such CFTs may
pave the way towards several long-standing problems in condensed matter,
including critical quantum spin liquids Hermele _et al._ (2005, 2008); Song
_et al._ (2019) and plateau transitions of fractional quantum Hall states Jain
_et al._ (1990); Kivelson _et al._ (1992); Chen _et al._ (1993); Lee _et
al._ (2018).
Compared to the Wilson-Fisher (WF) CFTs, these gauge theory CFTs are poorly
understood. Recently, conformal bootstrap Rattazzi _et al._ (2008) became a
powerful technique to study CFTs in dimensions higher than $2d$ Kos _et al._
(2014); El-Showk _et al._ (2014); Kos _et al._ (2015, 2016); Simmons-Duffin
(2017); Rong and Su (2018); Atanasov _et al._ (2018); Iliesiu _et al._
(2016, 2018); Chester _et al._ (2019, 2020) (see a review Poland _et al._
(2019)). The numerical bootstrap obtained critical exponents of $3d$ Ising Kos
_et al._ (2014) and $O(2)$ WF Chester _et al._ (2019) with world-record
precision and, importantly, has resolved the long-standing inconsistency between
experiments and Monte-Carlo simulations of $O(2)$ WF Chester _et al._ (2019)
as well as the cubic instability of $O(3)$ WF Chester _et al._ (2020).
However, the gauge theory CFTs have so far resisted bootstrap
Nakayama (2018); Chester and Pufu (2016a); Li (2018); Li and Poland (2021).
The main challenge is built into the fundamental philosophy of bootstrap, namely
characterizing a theory without relying on a specific Lagrangian. More
concretely, in a bootstrap study one typically inputs the global symmetry of
the theory, and utilizes the consistency of crossing equations to constrain or
to compute the scaling dimensions of operators in certain representations of
the global symmetry. For a Wilson-Fisher type of CFT, it is believed that one
can uniquely define it by specifying its global symmetry as well as the
representation of the order parameter, i.e., the lowest lying operators. In
contrast, gauge theories with distinct gauge groups could have similar or even
identical global symmetries. Their lowest lying operators would sit in the
same representation and have similar scaling dimensions. Therefore, it is
unclear how to detect the gauge group in a typical bootstrap calculation.
As a concrete example, one can consider a family of theories described by
$N_{f}$ flavors of two-component Dirac fermions coupled to a $U(N_{c})$ gauge
field in $d=3$ dimensions. For a given color number $N_{c}$, most theories in
the infrared (IR) will flow into CFTs when $N_{f}$ is larger than a critical
value $N_{f}^{*}$. In other words, for a large enough $N_{f}$ there will be a
number of distinct CFTs that correspond to different $N_{c}$’s. These CFTs have identical global symmetries, i.e. $(SU(N_{f})\times U(1)_{top})/\mathbf{Z}_{N_{f}}$ (for the precise global symmetry one may need to further quotient out certain discrete symmetries; this global part of the global symmetry is not important for our discussion). The $SU(N_{f})$
corresponds to the flavor symmetries of Dirac fermions, while the $U(1)_{top}$
symmetry corresponds to the $U(1)$ gauge flux conservation of $U(N_{c})$ gauge
group. The most important low lying (scalar) operators are 1) the fermion bilinears, which are $SU(N_{f})$ adjoint but neutral under $U(1)_{top}$, with scaling dimension $\Delta=2+O(1/N_{f})$; 2) the $2\pi$ monopole operators, which are charged under $U(1)_{top}$ and also carry a non-trivial representation of $SU(N_{f})$ (independent of $N_{c}$), with scaling dimension $\Delta=0.265N_{f}-0.0383-0.516(N_{c}-1)+O(1/N_{f})$ Dyer _et al._ (2013).
There were efforts to bootstrap the 4pt of either fermion bilinears Nakayama
(2018); Li (2018) or monopole operators Chester and Pufu (2016a), but no
unambiguous signature of gauge theories is found.
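The near-degeneracy described above is easy to quantify. The following sketch (our illustration, not part of the original analyses; the function name and the sample value $N_{f}=10$ are ours) evaluates the large-$N_{f}$ monopole dimension formula of Dyer _et al._ (2013) quoted above, showing that theories with neighboring $N_{c}$ differ by a constant gap of only $0.516$ in the monopole dimension, independent of $N_{f}$:

```python
def monopole_dim(Nf, Nc):
    """Leading large-N_f scaling dimension of the 2*pi monopole operator,
    Delta = 0.265 N_f - 0.0383 - 0.516 (N_c - 1), dropping O(1/N_f) terms."""
    return 0.265 * Nf - 0.0383 - 0.516 * (Nc - 1)

# For N_f = 10 flavors, the U(1), U(2), U(3) theories give monopole
# dimensions separated by the fixed gap 0.516:
dims = [monopole_dim(10, Nc) for Nc in (1, 2, 3)]
```

Since the gap is small compared to $\Delta$ itself at large $N_{f}$, a bound that is not sharp enough cannot resolve the gauge group.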
In $d=2$ dimensions it is pretty common that distinct CFTs have the same
global symmetry. However, numerical bootstrap successfully detects some of
these CFTs, including the $2d$ Ising CFT out of minimal models Rychkov and
Vichi (2009) and the $SU(2)_{1}$ Wess-Zumino-Witten (WZW) CFT out of
$SU(2)_{k}$ WZW theories Ohtsuki (2016); He _et al._ (2021). It is found that
the Ising CFT ($SU(2)_{1}$ WZW) sits at the kink of numerical bounds, while
its cousins in the minimal model ($SU(2)_{k}$ WZW) saturate the numerical
bounds on the right (left) hand side of the kink. More interestingly, the
phenomenon that these CFTs appear at kinks of bootstrap bound is closely
related to the existence of a family of CFTs sharing the same global symmetry
and similar operator spectrum. Compared to their cousins, the Ising CFT and
$SU(2)_{1}$ WZW are special because they have null operators at low levels.
These null operators will lead to some non-analyticity in the numerical bound,
resulting in a kink El-Showk _et al._ (2014); Behan (2018); He _et al._
(2021).
The examples in $2d$ suggest that the existence of a family of cousins with the same global symmetries is not an obstacle to bootstrapping a CFT; it could instead guide us to find the right condition (i.e. a null operator condition) to bootstrap the CFT of interest. Theoretically, the existence of a null operator at a certain level can serve as a defining feature of a 2d minimal model Ginsparg (1988). One cannot help wondering whether similar physics also exists for higher dimensional CFTs, and whether it can be further utilized in bootstrap studies. We provide a positive answer to this by exploring gauge
theories and in particular, their relations with the $2d$ WZW CFTs. We will
show that in gauge theories there exists a family of operators, which we dub decoupling operators, that are reminiscent of the null operators of $2d$ WZW CFTs in higher dimensions. Similar to the Kac-Moody algebra, the structure of the decoupling operators of gauge theories is sensitive to the representations of
global symmetry. Moreover, the color number $N_{c}$ of the gauge group plays
the role of the WZW level (i.e. $k$) in WZW CFTs.
We also explore another related observation in higher dimensional CFTs, namely
the equation of motion (EOM) can lead to the phenomenon of operator missing in
the CFT spectrum Kos _et al._ (2015); El-Showk _et al._ (2014); Rychkov and
Tan (2015); Giombi and Kirilin (2016). Theoretically, it was understood that
as the consequence of the EOM of $\phi^{4}$ theory, i.e.
$\square\phi=g\phi^{3}$, $\phi^{3}$ becomes a descendant of $\phi$. In other
words, the operator $\phi^{3}$ becomes missing in the primary operator
spectrum of WF CFTs. This structure can further serve as an algebraic
definition of WF CFTs in $4-\epsilon$ dimensions Rychkov and Tan (2015).
Numerically, one can also impose the condition of $\phi^{3}$ being missing by adding a large gap above $\phi$ in the $O(N)$ vector channel; this is indeed how one obtains the famous bootstrap islands of WF CFTs Kos _et al._ (2015, 2014). We will push this idea further by exploring the consequences of EOMs on
high level missing operators. Such higher level missing operators are actually
rather straightforward to visualize. For example, it is natural to expect that
$\phi(\square\phi-g\phi^{3})$ is missing as well. We will elaborate more on
this and its consequence in the main text.
To be concrete, we will discuss the idea of decoupling operators and their
bootstrap application in the context of a prototypical gauge theory, namely
the scalar QED. It is described by $N_{f}$ flavors of critical bosons coupled to a
$U(1)$ gauge field,
$\mathcal{L}=\sum_{i=1}^{N_{f}}|(\partial_{\mu}-iA_{\mu})\phi_{i}|^{2}+m^{2}|\phi|^{2}+\frac{g}{4}|\phi|^{4}+\frac{1}{4e^{2}}F_{\mu\nu}^{2}.$
(1)
The global symmetry of the scalar QED is
$PSU(N_{f})=\frac{SU(N_{f})}{Z_{N_{f}}}$. The fundamental (gauge invariant) operators of this theory are the boson bilinears $\bar{\phi}^{i}\phi_{j}-\delta^{i}_{j}/N_{f}\bar{\phi}^{k}\phi_{k}$ and $\bar{\phi}^{i}\phi_{i}$, which are in the $SU(N_{f})$ adjoint and singlet
representation, respectively. This theory is dual to the CP${}^{N_{f}-1}$
model, i.e., a non-linear sigma model (NL$\sigma$M) on the target space
$\textrm{CP}^{N_{f}-1}\cong\frac{U(N_{f})}{U(N_{f}-1)\times U(1)}$. Within the
NL$\sigma$M formulation, one can access the scalar QED fixed point using the
$2+\epsilon$ expansion Lawrie and Athrone (1983); Hikami (1981, 1979). It is
worth noting that in $d=3$ dimensions, there is one extra emergent symmetry
called $U(1)_{top}$, which corresponds to the flux conservation of the gauge
field. There is a new type of primary operators, called monopoles Murthy and Sachdev (1990), which are charged under $U(1)_{top}$. In this paper, we
will not study monopole operators.
For a large enough $N_{f}$, the scalar QED in $2<d<4$ dimensions will flow
into an interacting CFT as one tunes the mass of bosons to a critical value.
In a given dimension $d$, there exists a critical $N_{f}^{*}(d)$ below which
the scalar QED fixed point will disappear by colliding with the tri-critical
QED fixed point (see definition below) Halperin _et al._ (1974); Nahum _et
al._ (2015a); Benvenuti and Khachatryan (2019); Gorbenko _et al._ (2018). In
other words, only if $N_{f}>N_{f}^{*}(d)$ will the scalar QED be a genuine CFT (an exception is the $N_{f}=1$ scalar QED in $3d$, which is dual to the $O(2)$ WF Dasgupta and Halperin (1981); Peskin (1978)). It is believed that $N_{f}^{*}(d)$ monotonically increases with $d$, but its precise form is unknown. From the $2+\epsilon$ and $4-\epsilon$ expansions, it is found that $N_{f}^{*}(d\rightarrow 2)\rightarrow 0$ Lawrie and Athrone (1983); Hikami (1981, 1979) and $N_{f}^{*}(d\rightarrow 4)\approx 183$ Halperin _et al._ (1974). The value of $N_{f}^{*}$ in $d=3$ dimensions remains an open problem Sandvik (2010); Kaul and Sandvik (2012); Bonati _et al._
(2020). The $N_{f}=2$ scalar QED in $d=3$ dimensions is one of the dual
descriptions of the widely studied deconfined phase transition in condensed
matter literature Senthil _et al._ (2004a, b). There are extensive studies discussing whether it is truly a CFT in the deep infrared Sandvik (2007); Melko
and Kaul (2008); Kuklov _et al._ (2008); Nahum _et al._ (2015b, a).
The paper is organized as follows. In Sec. II we will introduce the notion of decoupling operators. In particular, in Sec. II.1 we will show that the
null operators of $2d$ WZW CFTs can be interpreted as decoupling operators of
gauge theories. We then discuss decoupling operators of bosonic gauge theories
in Sec. II.2. In Sec. III we discuss the consequence of EOMs on the CFT
spectrum. In Sec. IV we will present our numerical results of the scalar QED.
In particular, by imposing the information of decoupling operators we show the
scalar QED in $3$ dimensions (Sec. IV.2) and $2+\epsilon$ dimensions (Sec. IV.3) can be solved using the conformal bootstrap: we obtain kinks and islands of the scalar QED. We will conclude in Sec. V, and will provide more
numerical data in the appendix.
_Note added:_ Upon the completion of this work we became aware of an
independent work Manenti and Vichi (2021) that overlaps with ours.
## II Decoupling operators in gauge theories
In this section, we will define what we mean by the decoupling operator and
discuss several concrete examples in 2d CFTs and higher dimensional gauge
theories.
The decoupling operator of a CFT of interest $\mathcal{A}$ can be defined by
embedding $\mathcal{A}$ into a family of CFTs that share the same global
symmetry and similar operator spectrum. Then one can construct a possible
continuous interpolation between these different theories, and define
decoupling operators as operators that decouple from the theories’ spectrum as
one continuously tunes to the CFT $\mathcal{A}$. A textbook example is the
$2d$ minimal model $\mathcal{M}_{q,q-1}$ for which one can promote the integer
valued $q$ to be real valued, which then interpolates all the minimal models
$\mathcal{M}_{q,q-1}$. This is more than a conceptual interpolation: indeed, we
can explicitly write down a number of crossing symmetric correlation functions
that continuously depend on $q$ (in the end, a fully consistent solution of all crossing symmetric correlation functions would only admit discrete, integer valued, $q$). As one continuously tunes $q$, operators decouple from the spectrum at integer valued $q$. These decoupling operators
are indeed null operators for a specific theory $\mathcal{M}_{q,q-1}$ Behan
(2018). Similarly, for the $2d$ WZW CFTs, one can promote the integer valued
WZW level $k$ to be real valued, and ask how operators decouple as one continuously varies $k$ (see Sec. II.1 for more details). Different from the example of the $2d$ minimal model, the decoupling (null) operators lie in representations that strongly depend on $k$: for the $SU(N)_{k=l}$ WZW CFT, all the Kac-Moody primaries in the rank-$m$ symmetric tensor representation with $m>l$ become null operators Di Francesco _et al._ (2012).
Although null operators of $2d$ CFTs can be defined as decoupling operators,
null operators certainly have deeper implications in the algebra of CFTs, e.g.
they can act as differential operators that annihilate correlation functions
of primary operators. The decoupling operators, on the other hand, may or may
not have such fundamental applications in the operator algebra of higher
dimensional CFTs. It will be interesting to understand the similarity and
difference between 2d null operators and higher dimensional decoupling
operators in the future.
### II.1 Null operator as a decoupling operator: the $SU(N)_{k}$ WZW CFT
In this section, we will elaborate more on how to view the null operator of 2d
CFTs as a decoupling operator in the context of the $SU(N)_{k}$ WZW CFT. Let
us start with a simple case, i.e. $SU(2)_{k}$ WZW theory. It has a global
symmetry $SO(4)\cong SU(2)_{L}\times SU(2)_{R}/Z_{2}$, and its Kac-Moody
primary operators $|j,j\rangle$ are in the $SO(4)$ representations
$(SU(2)_{L},SU(2)_{R})=(j,j)$ with $j=0,1/2,1,\cdots,k/2$ (in this notation, $(1/2,1/2)$ corresponds to the $SO(4)$ vector, $(1,1)$ to the rank-2 symmetric traceless tensor, and $(1,0)\oplus(0,1)$ to the rank-2 anti-symmetric tensor). So $|1,1\rangle$ is a Kac-Moody primary of
$SU(2)_{k\geq 2}$ WZW CFT, while it becomes null in the $SU(2)_{1}$ WZW CFT.
Now we create an interpolation between all the $SU(2)_{k}$ WZW CFTs by
promoting the integer valued $k$ to be real valued. More precisely, the four-point correlation function (4pt) of any primary operator of the $SU(2)_{k}$ WZW CFTs is an analytic function of $k$, so there is no obstruction to promoting $k$ to be real valued. For our purpose it is enough to consider the 4pt of the Kac-Moody primary $|1/2,1/2\rangle$, which is an $SO(4)$ vector and which we will
call it $\phi_{i}$:
$\displaystyle\langle\phi_{i}(x_{1})\phi_{j}(x_{2})\phi_{k}(x_{3})\phi_{l}(x_{4})\rangle=\frac{1}{x_{12}^{2\Delta_{\phi}}x_{34}^{2\Delta_{\phi}}}\left[\frac{\delta_{ij}\delta_{kl}}{N}G^{S}[z,\bar{z}]\right.$
$\displaystyle+(\frac{1}{2}\delta_{il}\delta_{jk}+\frac{1}{2}\delta_{ik}\delta_{jl}-\frac{1}{N}\delta_{ij}\delta_{kl})G^{T}[z,\bar{z}]$
$\displaystyle\left.+(\frac{1}{2}\delta_{il}\delta_{jk}-\frac{1}{2}\delta_{ik}\delta_{jl})G^{A}[z,\bar{z}]\right].$
(2)
Here $N=4$ and $G^{S}[z,\bar{z}]$, $G^{T}[z,\bar{z}]$, $G^{A}[z,\bar{z}]$ correspond to the 4pt’s in the channels of the $SO(4)$ singlet, rank-2 symmetric traceless tensor, and rank-2 anti-symmetric tensor. The precise forms of these 4pt’s can be found in textbooks such as Di Francesco _et al._ (2012). We are primarily concerned with the Kac-Moody primary $|1,1\rangle$, which is in the rank-2 symmetric traceless tensor channel. The 4pt in this channel is,
$\displaystyle\frac{G^{T}[z,\bar{z}]}{((1-z)(1-\bar{z}))^{\frac{1}{2k+4}}}=\frac{2z\bar{z}}{k^{2}}A[z,\bar{z}]+2(z\bar{z})^{\frac{2}{k+2}}\left(\frac{\Gamma^{2}(\frac{1}{k+2})\Gamma^{2}(\frac{3}{k+2})}{\Gamma^{4}(\frac{2}{k+2})}-\frac{4\Gamma^{2}(\frac{-2}{k+2})\Gamma^{2}(\frac{3}{k+2})}{\Gamma^{2}(\frac{2}{k+2})\Gamma^{2}(\frac{-1}{k+2})}\right)B[z,\bar{z}],$
(3)
with
$\displaystyle A[z,\bar{z}]={}_{2}F_{1}(\frac{k+1}{k+2},\frac{k+3}{k+2},\frac{2k+2}{k+2},z)\,{}_{2}F_{1}(\frac{k+1}{k+2},\frac{k+3}{k+2},\frac{2k+2}{k+2},\bar{z}),$
$\displaystyle B[z,\bar{z}]={}_{2}F_{1}(\frac{1}{k+2},\frac{3}{k+2},\frac{2}{k+2},z)\,{}_{2}F_{1}(\frac{1}{k+2},\frac{3}{k+2},\frac{2}{k+2},\bar{z}).$
(4)
Decomposing this 4pt into the global conformal blocks, one obtains the low
lying spectrum to be $\Delta=\frac{4}{k+2},2,\cdots$. The first operator
(denoted as $t$) is nothing but the Kac-Moody primary $|1,1\rangle$, while the
second operator is a global primary obtained by applying Kac-Moody current
operator to the vacuum, i.e. $J_{L}J_{R}|0,0\rangle$. We can also work out the
OPE square $\lambda_{\phi\phi t}^{2}$ of $|1,1\rangle$,
$\lambda^{2}_{\phi\phi
t}=\frac{\Gamma^{2}(\frac{1}{k+2})\Gamma^{2}(\frac{3}{k+2})}{2\Gamma^{4}(\frac{2}{k+2})}-\frac{2\Gamma^{2}(\frac{-2}{k+2})\Gamma^{2}(\frac{3}{k+2})}{\Gamma^{2}(\frac{2}{k+2})\Gamma^{2}(\frac{-1}{k+2})}.$
(5)
The above formula is positive for $k>1$, and it vanishes precisely at $k=1$. In other words, the Kac-Moody primary $|1,1\rangle$ decouples from the operator spectrum at $k=1$. Therefore, in this natural interpolation of $SU(2)_{k}$ WZW CFTs we can view the null operator $|1,1\rangle$ of the $SU(2)_{1}$ WZW CFT as a decoupling operator.
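Eq. (5) is easy to check numerically with standard gamma functions. The sketch below (our illustration; the function name `lam2` is ours) confirms that $\lambda^{2}_{\phi\phi t}$ vanishes at $k=1$ and is positive for $k>1$ (at $k=2$ it equals $1/2$ exactly, using $\Gamma(1/4)\Gamma(3/4)=\pi\sqrt{2}$):

```python
from math import gamma

def lam2(k):
    """OPE coefficient squared of the Kac-Moody primary |1,1>, Eq. (5),
    as a function of the (continued) WZW level k."""
    a = 1.0 / (k + 2)
    first = gamma(a)**2 * gamma(3*a)**2 / (2 * gamma(2*a)**4)
    second = 2 * gamma(-2*a)**2 * gamma(3*a)**2 / (gamma(2*a)**2 * gamma(-a)**2)
    return first - second
```

Note that `math.gamma` handles the negative non-integer arguments $\Gamma(-2/(k+2))$ and $\Gamma(-1/(k+2))$ without issue for $k>0$.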
The above discussion can be easily generalized to the $SU(N)_{k}$ WZW CFTs.
Interestingly, in the large-$N$ limit we can directly relate the Kac-Moody
null operator to the decoupling operator of gauge theories, without relying on
any precise knowledge of the correlation function or operator spectrum of the
WZW CFTs. The key is to recognize a dual description for the $SU(N)_{k}$ WZW
CFTs, namely a gauge theory with $N$ flavors of 2-component Dirac fermions
interacting with a $U(k)$ gauge field. For the case of $k=1$, this duality can
be proved exactly as the $U(1)$ gauge theory is integrable Schwinger (1962).
For a general level-$k$ WZW CFT, there is reasonable evidence suggesting that it is dual to a QCD2 theory (e.g. see Abdalla (1997) and references therein), although QCD2 is no longer integrable.
The global symmetry of both the $SU(N)_{k}$ WZW and the $U(k)$ gauge theory is
$SU(N)_{L}\times SU(N)_{R}$, so we can consider the (global) primary operator
spectrum of these theories in various representations of $SU(N)_{L}\times
SU(N)_{R}$. Let us warm up with the lowest weight Kac-Moody primary (except
for the vacuum), i.e., a bi-fundamental of $SU(N)_{L}$ and $SU(N)_{R}$. This
operator exists for arbitrary $k$, and its scaling dimension is
$\Delta=\frac{N^{2}-1}{N(N+k)}$, which is $\Delta\approx 1+O(1/N)$ in the
limit of $N\gg k$. In the gauge theory, this operator is nothing but a 2-fermion operator, schematically written as $\bar{\psi}^{c}_{l}\psi_{r,c}$. We use the convention that the right (left) moving fermion $\psi$ ($\bar{\psi}$) is a fundamental (anti-fundamental) of the $U(k)$ gauge group, and $c$ is the index of its $SU(k)$ subgroup. The indices $l$ and $r$ refer to $SU(N)_{L}$
and $SU(N)_{R}$. So this 2-fermion operator $\bar{\psi}^{c}_{l}\psi_{r,c}$ is
the $SU(N)$ bi-fundamental and its scaling dimension is $\Delta=1+O(1/N)$ in
the $N\gg k$ limit. We have matched the lowest primary operators of
$SU(N)_{k}$ WZW with the 2-fermion operators of $U(k)$ gauge theories.
Let us now move to the 4-fermion operators (that are Lorentz scalar) of gauge
theories. Such operators can be schematically written as
$\bar{\psi}^{c_{1}}_{l_{1}}\bar{\psi}^{c_{2}}_{l_{2}}\psi_{r_{3},c_{3}}\psi_{r_{4},c_{4}}$.
The two left (right) moving fermions shall be totally antisymmetric, so we
shall have either the flavor indices or the color indices anti-symmetric, and
meanwhile keep the other indices symmetric. We need to further contract the
color indices of left and right moving fermions to get a gauge invariant
operator. For $k=1$, however, anti-symmetrizing color indices is not an
option, leaving the only possibility to be anti-symmetrizing the flavor
indices. Therefore, for $k>1$ there are two different 4-fermion operators
(that are Lorentz scalar) which are in the $SU(N)_{L}\times SU(N)_{R}$
representations $A_{L}A_{R}$ and $T_{L}T_{R}$ (here $T_{L}T_{R}$ ($A_{L}A_{R}$) refers to the rank-2 symmetric (anti-symmetric) tensor of $SU(N)_{L}$ and $SU(N)_{R}$). What are these operators in the $SU(N)_{k}$ WZW CFTs? They are nothing but the Kac-Moody primaries in the $A_{L}A_{R}$ and $T_{L}T_{R}$ channels, whose scaling dimensions are
$\Delta=\frac{2(N-2)(N+1)}{N(N+k)}$ and $\Delta=\frac{2(N-1)(N+2)}{N(N+k)}$.
In the limit of $N\gg k$, these two scaling dimensions are $\Delta=2+O(1/N)$
matching what we expect for 4-fermion operators. On the other hand, when $k=1$
there is only one 4-fermion operator in the channel $A_{L}A_{R}$, as the
other operator in the channel $T_{L}T_{R}$ becomes a decoupling operator due
to the low rank of the gauge group. This again nicely matches the physics of
$SU(N)_{k}$ WZW CFTs, namely at $k=1$ the Kac-Moody primary in the
$T_{L}T_{R}$ channel becomes null (the Kac-Moody primary in the $A_{L}A_{R}$
channel is still intact). It is straightforward to generalize to other Kac-Moody null operators for higher $k$, as well as to other WZW CFTs.
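As a quick check of the dimension formulas quoted above (our sketch; the function names are ours), one can verify that in the $N\gg k$ limit the bi-fundamental approaches $\Delta=1$ while both rank-2 Kac-Moody primaries approach $\Delta=2$, consistent with their identification as 2-fermion and 4-fermion operators:

```python
def dim_bifund(N, k):
    """Bi-fundamental Kac-Moody primary of SU(N)_k: Delta = (N^2-1)/(N(N+k))."""
    return (N**2 - 1) / (N * (N + k))

def dim_AA(N, k):
    """Kac-Moody primary in the A_L A_R channel: Delta = 2(N-2)(N+1)/(N(N+k))."""
    return 2 * (N - 2) * (N + 1) / (N * (N + k))

def dim_TT(N, k):
    """Kac-Moody primary in the T_L T_R channel: Delta = 2(N-1)(N+2)/(N(N+k))."""
    return 2 * (N - 1) * (N + 2) / (N * (N + k))
```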
Therefore, on the phenomenological level null operators of 2d WZW CFT can be
understood as decoupling operators in the context of 2d gauge theories. From
the gauge theory side, we can also generalize the analysis to higher
dimensions. A complication is that fermions are in the spinor representation of the $SO(d)$ Lorentz rotation, which has a strong dependence on the spacetime
dimension $d$. It turns out that it is easiest to discuss the idea of
decoupling operators in the context of bosonic gauge theories, namely critical
bosons coupled to gauge fields. We will discuss it in the following
subsection. It is also worth mentioning that, in higher dimensions (e.g. $3d$)
one can also make a straightforward connection between fermionic gauge
theories and WZW CFTs Zou _et al._ (2021); Komargodski and Seiberg (2018).
The details are a bit off the theme of the current paper; we will elaborate more in the Appendix.
### II.2 Decoupling operator of bosonic gauge theories
In this subsection, we will discuss the decoupling operators of bosonic gauge
theories, namely critical bosons coupled to gauge fields. We will explain the
idea in the context of $U(N_{c})$ gauge theories, and the generalization to
other gauge groups $SU(N_{c})$, $SO(N_{c})$, and $USp(2N_{c})$ is rather
straightforward.
We can simply start by classifying gauge invariant operators (constructed from bosonic fields) in these gauge theories. We denote the boson operators as
$\phi_{f,c}$ and $\bar{\phi}^{f,c}$, which are $SU(N_{f})$ ($U(N_{c})$)
fundamental and anti-fundamental, respectively. $f=1,\cdots,N_{f}$ and
$c=1,\cdots,N_{c}$ correspond to the flavor and color index. To keep the
$U(1)\subset U(N_{c})$ gauge invariance, we shall only consider operators like
$\bar{\phi}^{f_{1},c_{1}}\cdots\bar{\phi}^{f_{m},c_{m}}\phi_{f_{m+1},c_{m+1}}\cdots\phi_{f_{2m},c_{2m}}$.
Among these operators, one should further choose $SU(N_{c})$ gauge invariant
ones. Let us start with $m=1$, i.e. boson bilinears
$\bar{\phi}^{f_{1},c_{1}}{\phi}_{f_{2},c_{2}}$. Apparently, to keep
$SU(N_{c})$ invariance there are only two operators,
$\bar{\phi}^{f_{1},c_{1}}{\phi}_{f_{1},c_{1}}$ and
$\bar{\phi}^{f_{1},c_{1}}{\phi}_{f_{2},c_{1}}-\delta^{f_{1}}_{f_{2}}/N_{f}\bar{\phi}^{f,c_{1}}{\phi}_{f,c_{1}}$,
which are the $SU(N_{f})$ singlet and adjoint, respectively. Their
large-$N_{f}$ scaling dimensions are $\Delta=2+O(1/N_{f})$ and
$\Delta=d-2+O(1/N_{f})$ for $N_{c}\ll N_{f}$.
Things become interesting as one moves to $m=2$. Let us ask what is the lowest
operator in the representation $A^{[f_{1},f_{2}]}_{[f_{3},f_{4}]}$, where both
the upper and lower indices are antisymmetric. To construct an operator in
this representation, one needs at least 4 bosons,
$\bar{\phi}^{f_{1},c_{1}}\bar{\phi}^{f_{2},c_{2}}{\phi}_{f_{3},c_{3}}{\phi}_{f_{4},c_{4}}$.
If $N_{c}\geq 2$, one can simultaneously antisymmetrize the flavor indices
(i.e. $[f_{1},f_{2}]$, $[f_{3},f_{4}]$) and the color indices (i.e.
$[c_{1},c_{2}]$, $[c_{3},c_{4}]$) of
$\bar{\phi}^{f_{1},c_{1}}\bar{\phi}^{f_{2},c_{2}}$ and
${\phi}_{f_{3},c_{3}}{\phi}_{f_{4},c_{4}}$, and then contract their color
indices to get a $SU(N_{c})$ gauge invariant operator. This will then give an
operator in the required representation, with a scaling dimension
$\Delta=2(d-2)+O(1/N_{f})$. When $N_{c}=1$, in contrast, antisymmetrizing the
color indices of two identical bosons yields zero. So the lowest operator in
the required representation shall involve two covariant derivatives,
schematically written as
$(\bar{\phi}^{f_{1}}D_{\mu}\bar{\phi}^{f_{2}})({\phi}_{f_{3}}D_{\mu}{\phi}_{f_{4}})$.
Its scaling dimension is $\Delta=2(d-2)+2+O(1/N_{f})$. Therefore, in the
$A^{[f_{1},f_{2}]}_{[f_{3},f_{4}]}$ channel the QCD gauge theories ($N_{c}>1$)
have the spectrum $\Delta=2(d-2)+O(1/N_{f}),2(d-2)+2+O(1/N_{f}),\cdots$, while
for $N_{c}=1$ (e.g. scalar QED) the spectrum is
$\Delta=2(d-2)+2+O(1/N_{f}),\cdots$. In other words,
* _In the $SU(N_{f})$ $A^{[f_{1},f_{2}]}_{[f_{3},f_{4}]}$ channel, the lowest operator of $U(N_{c}>1)$ gauge theories is decoupling at $N_{c}=1$._
One can easily generalize the above discussion to arbitrary $N_{c}$:
* _In the interpolation between $U(N_{c})$ gauge theories, the lowest lying operator in the $SU(N_{f})$ anti-symmetric representation $A^{[i_{1},\cdots,i_{m}]}_{[j_{1},\cdots,j_{m}]}$ of $N_{c}>m-1$ is decoupling at $N_{c}\leq m-1$._
This structure of decoupling operators is almost identical to the null
operator structure of $2d$ WZW CFTs, and the color number $N_{c}$ plays the
role of the WZW level $k$. Similar structures can also be found in theories with other gauge groups (an independent work Reehorst _et al._ (2020) has a similar analysis for the $SO(2)$ gauge theory in the context of $SO(N)$ invariant CFTs).
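The $N_{c}$ dependence of the antisymmetrization argument above can be illustrated in a few lines (our sketch; the boson field is modeled by a generic array of commuting numbers, so this only tests the index combinatorics, not the dynamics):

```python
import random

def max_antisym(Nc, Nf=3, seed=0):
    """Largest |A| over all index choices of the color-antisymmetrized product
    A = phi_{f1,c1} phi_{f2,c2} - phi_{f1,c2} phi_{f2,c1}.
    Since the commuting product is symmetric under a simultaneous swap of
    both pairs, A is automatically antisymmetric in the flavor pair as well."""
    random.seed(seed)
    phi = [[random.random() for _ in range(Nc)] for _ in range(Nf)]
    best = 0.0
    for f1 in range(Nf):
        for f2 in range(Nf):
            for c1 in range(Nc):
                for c2 in range(Nc):
                    A = phi[f1][c1] * phi[f2][c2] - phi[f1][c2] * phi[f2][c1]
                    best = max(best, abs(A))
    return best
```

For $N_{c}=1$ the combination vanishes identically, so the lowest operator in this channel must involve derivatives, as stated above; for $N_{c}\geq 2$ it is generically non-zero.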
## III Consequence of the equation of motion
The notion of decoupling operator was formulated by identifying a family of
CFTs with identical global symmetry. In the numerical bootstrap, one can
impose gap conditions based on the structure of the decoupling operator to
isolate the theory of interest from their cousins. On the practical side,
depending on the bootstrap scheme, one may also need to consider other
theories that are consistent with crossing equations being bootstrapped. For
example, we will be bootstrapping the 4pt of $SU(N_{f})$ adjoint boson
bilinears, so besides the $U(N_{c})$ scalar gauge theory we also need to
consider other theories that contain such an operator:
1. Tri-critical QED: It corresponds to the UV fixed point of the scalar QED. It
can also be described by Eq. (6), but different from the scalar QED, hitting
the tri-critical QED fixed point requires the fine tuning of two singlet
operators, i.e., $|\phi|^{2}$ and $(|\phi|^{2})^{2}$. The relation between the
tri-critical QED and scalar QED is similar to the relation between the
Gaussian and WF CFT.
2. $SU(N_{c})$ QCD: $N_{f}$ flavors of critical bosons coupled to an $SU(N_{c}>1)$
gauge field.
3. $O(2N_{f})^{*}$ (we adopt the terminology of the condensed matter literature Sachdev (2007)): This theory is obtained by replacing the $U(1)$ gauge field of the scalar QED in Eq. (6) with a discrete gauge field (e.g. $Z_{N}$). It is
almost identical to the $O(2N_{f})$ WF except only gauge invariant operators
are physically allowed in $O(2N_{f})^{*}$. Equivalently, one can also consider
branching $O(2N_{f})$ into $SU(N_{f})\times U(1)$, and only consider the
$U(1)$ neutral sector. In this branching, the $O(2N_{f})$ symmetric rank-2
traceless tensor becomes the $SU(N_{f})$ adjoint.
4. Chern-Simons (CS) gauge theories: In $3d$ one can add a quantized CS term to
the $U(1)$ gauge field at any integer level $N$,
$N/4\pi\epsilon_{\mu\nu\rho}a_{\mu}\partial_{\nu}a_{\rho}$, leading to a
family of parity breaking CFTs Wen and Wu (1993). Similarly, one can also
consider QCD theories with finite CS terms.
5. Generalized free field (GFF) theory: it is worth noting that there could be
different GFFs. One type of GFF (dubbed GFF-A) is made of the $SU(N_{f})$
fundamental $\phi_{i}$, meaning that the $SU(N_{f})$ adjoint is constructed by
$\phi^{i}\phi_{j}$. The other GFF (dubbed GFF-B) is directly made of
$SU(N_{f})$ adjoint $A^{i}_{j}$. One difference between these two GFFs is, the
OPE $(\phi^{i}\phi_{j})\times(\phi^{k}\phi_{l})$ in GFF-A contains
$\phi^{i}\phi_{j}$, while $A^{i}_{j}\times A^{k}_{l}$ in GFF-B does not
contain $A^{i}_{j}$.
The last four theories do not have a global symmetry identical to that of the scalar QED, but bootstrapping the $SU(N_{f})$ adjoint will not be able to tell the difference. (One can also consider more complicated gauge theories, e.g., critical bosons coupled to a product gauge field $G_{1}(N_{c}^{1})\times G_{2}(N_{c}^{2})\cdots$, with $G_{i}$ Lie groups such as $U$, $SU$. The decoupling operator can be used to exclude theories whose gauge group contains non-Abelian subgroups, i.e. $\exists N_{c}^{i}>1$. The remaining theory one needs to consider has gauge group $U(1)^{m}$, which happens to be equivalent to the scalar QED.)
The decoupling operator we identified in Sec. II.2 can be used to exclude
$SU(N_{c}>1)$ gauge theories and GFF-B, while for other theories we need to
rely on EOMs. Some consequences of EOMs have already been discussed Rychkov
and Tan (2015); Giombi and Kirilin (2016) and been used in the bootstrap
analysis Kos _et al._ (2015); Rong and Su (2019). Here we push the idea further; specifically, we will discuss 1) the consequence of the EOM of the gauge field; 2) the high level spectrum due to the EOMs. These results will help us to distinguish the scalar QED from its other cousins, particularly the tri-critical QED, $O(2N_{f})^{*}$, and GFF-A.
One can easily analyze the consequence of EOM on the operator spectrum in the
perturbative regime, including the large $N_{f}$ limit, $2+\epsilon$ limit and
$4-\epsilon$ limit. Here we will consider the large $N_{f}$ limit. It is known
that in the large $N_{f}$ limit, the Lagrangian of the theory can be written
as Benvenuti and Khachatryan (2019),
$\mathcal{L}=\sum_{i=1}^{N_{f}}|(\partial_{\mu}-iA_{\mu})\phi_{i}|^{2}+\sigma|\phi|^{2},$
(6)
Here $\sigma$ is a Hubbard-Stratonovich auxiliary field, and the terms
$\sigma^{2}$ and $F_{\mu\nu}^{2}$ are dropped as they are irrelevant. There are three EOMs (of $\phi$, $\sigma$ and $A_{\mu}$, respectively),
$\displaystyle D_{\mu}D_{\mu}\phi_{i}=\sigma\phi_{i},$ (7)
$\displaystyle\bar{\phi}^{i}\phi_{i}=|\phi|^{2}=0,$ (8)
$\displaystyle\bar{\phi}^{i}\overleftrightarrow{D}_{\nu}\phi_{i}=0.$ (9)
The first two are similar to the EOMs of the WF CFTs, with the difference that
the conventional derivative $\partial_{\mu}$ is replaced by the covariant
derivative $D_{\mu}=\partial_{\mu}-iA_{\mu}$. For the brevity of notation we
will also write $D_{\mu}D_{\mu}=\square$. The last one is unique for gauge
theories (here $\bar{\phi}^{i}\overleftrightarrow{D}_{\nu}\phi_{i}$ stands for $\bar{\phi}^{i}[(\partial_{\nu}-iA_{\nu})\phi_{i}]-[(\partial_{\nu}+iA_{\nu})\bar{\phi}^{i}]\phi_{i}$).
| Channel | Level | GFF-A | Scalar QED | tri-critical QED | $O(2N_{f})^{*}$
---|---|---|---|---|---
Singlet $l=0$ | $\Delta=1+O(\frac{1}{N_{f}})$ | $|\phi|^{2}$ | None | $|\phi|^{2}$ | None
 | $\Delta=2+O(\frac{1}{N_{f}})$ | $(|\phi|^{2})^{2}$ | $\sigma$ | $(|\phi|^{2})^{2}$ | $\sigma$
Adjoint $l=0$ | $\Delta=1+O(\frac{1}{N_{f}})$ | $\bar{\phi}^{i}\phi_{j}$ | $\bar{\phi}^{i}\phi_{j}$ | $\bar{\phi}^{i}\phi_{j}$ | $\bar{\phi}^{i}\phi_{j}$
 | $\Delta=2+O(\frac{1}{N_{f}})$ | $\bar{\phi}^{i}\phi_{j}|\phi|^{2}$ | None | $\bar{\phi}^{i}\phi_{j}|\phi|^{2}$ | None
 | $\Delta=3+O(\frac{1}{N_{f}})$ | $\bar{\phi}^{i}\phi_{j}(|\phi|^{2})^{2}$, $\bar{\phi}^{i}\square\phi_{j}$ | $\bar{\phi}^{i}\phi_{j}\sigma$ | $\bar{\phi}^{i}\phi_{j}(|\phi|^{2})^{2}$ | $\bar{\phi}^{i}\phi_{j}\sigma$
Singlet $l=1$ | $\Delta=2$ | $\bar{\phi}^{i}\overleftrightarrow{D}_{\mu}\phi_{i}$ | None | None | $\bar{\phi}^{i}\overleftrightarrow{D}_{\mu}\phi_{i}$
Adjoint $l=1$ | $\Delta=2$ | $\bar{\phi}^{i}\overleftrightarrow{D}_{\mu}\phi_{j}$ | $\bar{\phi}^{i}\overleftrightarrow{D}_{\mu}\phi_{j}$ | $\bar{\phi}^{i}\overleftrightarrow{D}_{\mu}\phi_{j}$ | $\bar{\phi}^{i}\overleftrightarrow{D}_{\mu}\phi_{j}$
 | $\Delta=3+O(\frac{1}{N_{f}})$ | $O_{1}$, $O_{2}$, $O_{3}$ | None | $O_{1}$, $O_{2}$ | $O_{3}$
 | $\Delta=4+O(\frac{1}{N_{f}})$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
Table 1: List of low lying (parity even) primary operators of various theories
in $3d$ and the large $N_{f}$ limit. For notational brevity we omit terms like
$-1/N_{f}\delta^{i}_{j}|\phi|^{2}$ for the operators in the adjoint
representation. In the table we have
$O_{1}=(\bar{\phi}^{i}D_{\mu}\phi_{j})|\phi|^{2}$,
$O_{2}=(\bar{\phi}^{i}\phi_{j})\partial_{\mu}|\phi|^{2}$,
$O_{3}=(\bar{\phi}^{i}\phi_{j})(\bar{\phi}^{k}D_{\mu}\phi_{k})$; we skip the concrete forms of the operators in the last row as it is not illuminating to write them down explicitly.
All the operators of the scalar QED can be built using $\phi_{i}$, $\sigma$
and $A_{\mu}$. Except for monopole operators in 3d, all other operators’ scaling dimensions are simply the sum of their constituents’ scaling
dimensions $(\Delta_{\phi_{i}},\Delta_{\sigma},\Delta_{A_{\mu}})=(d/2-1,2,1)$,
up to $1/N_{f}$ corrections. It is important to note that any operators
proportional to the EOM (e.g. $\bar{\phi}^{i}\phi_{j}|\phi|^{2}$ and
$(\bar{\phi}^{k}\overleftrightarrow{D}_{\nu}\phi_{k})\bar{\phi}^{i}\phi_{j}$)
shall be removed from the operator spectrum (this was known in the context of large-$N_{f}$ WF CFTs, and was also discussed in the large-$N_{f}$ QED theory Chester and Pufu (2016b)). This would then distinguish the scalar
QED from its cousins. Table 1 lists the low lying parity even primary operators (for a parity preserving theory such as the scalar QED, only parity even operators can appear in the OPE of two parity even scalars, e.g. the $SU(N_{f})$ adjoint boson bilinears) of the scalar QED and its cousins, including the GFF-A, tri-critical QED, and $O(2N_{f})^{*}$, in the large-$N_{f}$ limit in 3d. Comparing the scalar QED with its cousins, one finds that the
former has several operators missing in specific channels. These missing
operators are the consequences of EOMs. For example, in the channel of
$SU(N_{f})$ singlet $l=0$, there is no operator in the scalar QED at the level
$\Delta=1+O(1/N_{f})$, as the operator $\bar{\phi}^{i}\phi_{i}$ should be
deleted due to the EOM of $\sigma$, $\bar{\phi}^{i}\phi_{i}=0$. Similarly,
from the EOM of $A_{\mu}$, we know that in the scalar QED any operator
proportional to the gauge current
$J_{\mu}^{g}=\bar{\phi}^{i}\overleftrightarrow{D}_{\mu}\phi_{i}$ should be
absent. This would then distinguish the scalar QED from the $O(2N_{f})^{*}$.
Let us also comment on Chern-Simons theories. The operator spectrum of
Chern-Simons theories is similar to that of the scalar QED; however, they do
not have parity symmetry. This can be used to distinguish the scalar QED from
Chern-Simons theories, as we will elaborate later.
It is worth emphasizing that even though we analyze the EOM-induced missing
operators in the perturbative regime, these results are qualitatively correct
in the non-perturbative regime. Therefore, it is not only necessary but also
safe to input this information in a bootstrap study.
## IV Numerical results
In this section we switch gears to the numerical results. We will study the
scalar QED in $2<d\leq 4$ dimensions, starting with the single correlator of
the $SU(N_{f})$ adjoint operator,
$a=\bar{\phi}^{i}\phi_{j}-\delta^{i}_{j}/N_{f}|\phi|^{2}$. The OPE $a\times a$
is,
$a\times
a=S^{+}+Adj^{+}+A\bar{A}^{+}+S\bar{S}^{+}+Adj^{-}+S\bar{A}^{-}+A\bar{S}^{-}.$
(10)
Here $S$ and $Adj$ refer to the $SU(N_{f})$ singlet and adjoint. $A\bar{A}$,
$S\bar{S}$, $S\bar{A}$, and $A\bar{S}$ are rank-4 tensors with two upper and
two lower indices. The naming convention for these representations is rather
simple; for example, $A\bar{A}$ means that both the upper and lower indices
are antisymmetric, while $S\bar{A}$ means that the lower indices are symmetric
and the upper indices are antisymmetric. The superscript $\pm$ means the
intermediate channel has even or odd spins. For the bootstrap equations, see
Ref. Nakayama (2018).
We will denote the low lying scalar operators in the singlet channel as
$s,s^{\prime},\cdots$; scalar operators in the adjoint channel as
$a,a^{\prime}\cdots$; $l=1$ operators in the adjoint channel as
$J_{\mu},J^{\prime}_{\mu},\cdots$. Besides the single correlator of $a$, we
will also present some results from mixed correlators of $a$ and $s$. We note
that $a$ appears in the OPE $a\times a$, so we impose this condition in all
the numerics; for example, we require the scaling dimensions of all scalars in
the adjoint channel to be no smaller than $\Delta_{a}$. Physically this gap
condition does not introduce any assumption about the CFT spectrum, but it
does modify the numerical bounds significantly. Most results are calculated
with $\Lambda=27$ (the number of derivatives included in the numerics) unless
stated otherwise.
Before going into details, we summarize some known results about the low
lying spectrum of the scalar QED. In $3d$, the large-$N_{f}$ calculation Kaul
and Sachdev (2008); Benvenuti and Khachatryan (2019) gives
$\Delta_{a}=1-\frac{48}{3\pi^{2}N_{f}}+O(1/N_{f}^{2}),$ (11)
$\Delta_{s}=2-\frac{144}{3\pi^{2}N_{f}}+O(1/N_{f}^{2}).$ (12)
From the $2+\epsilon$ expansion Lawrie and Athrone (1983); Hikami (1981,
1979), one has
$\Delta_{a}=\epsilon-\frac{2}{N_{f}}\epsilon^{2}+O(\epsilon^{3}),$ (13)
$\Delta_{s}=2-\frac{2}{N_{f}}\epsilon^{2}+O(\epsilon^{3}).$ (14)
It is also worth noting that the tri-critical QED in $3d$ has Benvenuti and
Khachatryan (2019),
$\Delta_{a}=1-\frac{64}{3\pi^{2}N_{f}}+O(1/N_{f}^{2}),$ (15)
$\Delta_{s}=1+\frac{128}{3\pi^{2}N_{f}}+O(1/N_{f}^{2}).$ (16)
Other spectral results will be discussed below when needed.
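The leading-order estimates in Eqs. (11)-(16) are easy to evaluate
numerically. The following short, self-contained script (an illustrative
check, not part of the bootstrap pipeline; the function names are ours)
tabulates them at a few values of $N_{f}$ and $\epsilon$:

```python
import math

def scalar_qed_3d(Nf):
    # Large-Nf scaling dimensions of the scalar QED in 3d, Eqs. (11)-(12),
    # truncated at order 1/Nf.
    delta_a = 1 - 48 / (3 * math.pi**2 * Nf)
    delta_s = 2 - 144 / (3 * math.pi**2 * Nf)
    return delta_a, delta_s

def tricritical_qed_3d(Nf):
    # Large-Nf scaling dimensions of the tri-critical QED in 3d, Eqs. (15)-(16).
    delta_a = 1 - 64 / (3 * math.pi**2 * Nf)
    delta_s = 1 + 128 / (3 * math.pi**2 * Nf)
    return delta_a, delta_s

def scalar_qed_2plus_eps(Nf, eps):
    # 2+epsilon expansion for the scalar QED, Eqs. (13)-(14),
    # truncated at order eps^2.
    delta_a = eps - (2 / Nf) * eps**2
    delta_s = 2 - (2 / Nf) * eps**2
    return delta_a, delta_s

for Nf in (100, 1000):
    print(Nf, scalar_qed_3d(Nf), tricritical_qed_3d(Nf))
print(scalar_qed_2plus_eps(4, 0.1))  # N_f = 4 in d = 2.1
```

For $N_{f}=4$, $\epsilon=0.1$ this reproduces the values
$(\Delta_{a},\Delta_{s})\approx(0.095,1.995)$ quoted later for the scalar QED
in $d=2.1$ dimensions.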
### IV.1 Kinks of the $A\bar{A}$ bound
As we discussed in Sec. II.2, the lowest operator in the $A\bar{A}$ channel of
non-Abelian gauge theories becomes decoupled in Abelian gauge theories (e.g.
scalar QED, tri-critical QED, $O(2N_{f})^{*}$), so it is natural to bound the
$A\bar{A}$ channel gap $\Delta_{A\bar{A}}$ to see if this operator decoupling
can be detected. Concretely, for Abelian gauge theories (and GFF-A) we have
$\Delta_{A\bar{A}}=2(d-2)+2+O(1/N_{f}),$ (17)
while for non-Abelian gauge theories (and GFF-B) we have
$\Delta_{A\bar{A}}=2(d-2)+O(1/N_{f}).$ (18)
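To make the separation concrete, the leading-order $A\bar{A}$ gap of Eqs.
(17)-(18) can be tabulated across dimensions; the sketch below drops all
$1/N_{f}$ corrections (the function name is ours):

```python
def aabar_gap(d, abelian=True):
    # Leading-order Delta_{AAbar}: Eq. (17) for Abelian gauge theories
    # (and GFF-A), Eq. (18) for non-Abelian ones (and GFF-B).
    base = 2 * (d - 2)
    return base + 2 if abelian else base

for d in (2.1, 3, 4):
    print(d, aabar_gap(d, True), aabar_gap(d, False))
```

At $d=3$ this gives $4$ versus $2$, matching the vertical jump from
$(\Delta_{a},\Delta_{A\bar{A}})=(1,2)$ to $(1,4)$ discussed below.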
Figure 1: The numerical bounds of the lowest operator in the $A\bar{A}$
channel for $SU(10)$, $SU(100)$, and $SU(1000)$ CFTs in $d=3$. The dashed
lines correspond to the large-$N_{f}$ results of $\Delta_{a}$ for the
$N_{f}=100$ and $N_{f}=1000$ scalar QED. The orange circle corresponds to
$(\Delta_{a},\Delta_{A\bar{A}})=(d-2,2(d-2)+2)$.
Fig. 1 shows the numerical bounds of $\Delta_{A\bar{A}}$ in $3d$. The
numerical bounds show clear kinks for different $N_{f}$'s, and the kink
evolves into a vertical jump from $(\Delta_{a},\Delta_{A\bar{A}})=(1,2)$ to
$(\Delta_{a},\Delta_{A\bar{A}})=(1,4)$ as $N_{f}\rightarrow\infty$. The
appearance of the $A\bar{A}$ kink can be ascribed to the operator decoupling
in Abelian gauge theories discussed above. In particular, in the
large-$N_{f}$ limit the Abelian gauge theories live in the region after the
jump, while the non-Abelian gauge theories may live in the region before the
jump.
We note that this family of kinks is very similar to the non-WF kinks of
$O(N)$ theories He _et al._ (2021). In particular, in 2d the $O(4)$ non-WF
kink exactly corresponds to the $SU(2)_{1}$ WZW CFT. Given that the WZW CFTs'
null operators can be viewed as gauge theories' decoupling operators, it is
very tempting to conjecture that the $A\bar{A}$ kinks here correspond to the
scalar QED. A careful analysis from both the numerical and theoretical
perspectives suggests, unfortunately, that the $A\bar{A}$ kink is not the
scalar QED. Although the $1/N_{f}$ correction of $\Delta_{A\bar{A}}$ is
unknown, we can compare $\Delta_{a}$ of the kinks with the large-$N_{f}$
result, Eq. (11). In Fig. 1 we also plot the large-$N_{f}$ $\Delta_{a}$ of the
$SU(100)$ and $SU(1000)$ scalar QED, which shows considerable discrepancies
from the kinks. Taking a closer look at the data, the $SU(100)$ kink sits
around $\Delta_{a}\approx 0.953$, while the large-$N_{f}$ result gives
$\Delta_{a}\approx 0.984$; the discrepancy between these two numbers is around
$3/N_{f}$. Similarly, this is also the case for $SU(1000)$, which has
$\Delta_{a}\approx 0.995$ and $\Delta_{a}\approx 0.998$ for the kink and large
$N_{f}$, respectively. This large discrepancy does not seem to be caused by a
numerical convergence issue, as the differences in $\Delta_{a}$ between
$\Lambda=19,27,35$ are small. Theoretically, it is indeed easy to convince
oneself that the $A\bar{A}$ kink cannot be the scalar QED: the tri-critical
QED also has $\Delta_{A\bar{A}}=2(d-2)+2+O(1/N_{f})$, and its $\Delta_{a}$
(Eq. (15)) is smaller than that of the scalar QED (Eq. (11)). As a side note,
in theory the $A\bar{A}$ kink could be the tri-critical QED, but numerically
it does not seem to be so, as its large-$N_{f}$ $\Delta_{a}$ also has more
than a $2/N_{f}$ discrepancy from the numerical kink.
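As a sanity check on the numbers quoted above, one can compare the kink
locations of $\Delta_{a}$ read off from Fig. 1 with the truncated
large-$N_{f}$ formula Eq. (11). The script below (illustrative only; the kink
values are those quoted in the text) reproduces the $\approx 3/N_{f}$
discrepancy:

```python
import math

# Kink locations of Delta_a read off from Fig. 1 (quoted in the text).
kink = {100: 0.953, 1000: 0.995}

for Nf, kink_val in kink.items():
    # Large-Nf prediction for the scalar QED, Eq. (11), truncated at 1/Nf.
    large_nf = 1 - 48 / (3 * math.pi**2 * Nf)
    gap = large_nf - kink_val
    print(f"Nf={Nf}: kink={kink_val}, large-Nf={large_nf:.3f}, "
          f"discrepancy*Nf={gap * Nf:.1f}")
```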
Figure 2: The numerical bounds of the lowest operator in the $A\bar{A}$
channel for $SU(N_{f})$ CFTs in $d=2.1$ (left) and $d=4$ (right) dimensions.
The dashed lines in the left panel correspond to the $2+\epsilon$ results of
$\Delta_{a}$ for the $N_{f}=4$ and $N_{f}=100$ scalar QED. The orange circle
corresponds to $(\Delta_{a},\Delta_{A\bar{A}})=(d-2,2(d-2)+2)$.
We have also studied the $A\bar{A}$ bound in other dimensions (see Fig. 2). We
find that the $A\bar{A}$ kinks still exist in $2<d\leq 4$ dimensions, and the
kink approaches $(\Delta_{a},\Delta_{A\bar{A}})=(d-2,2(d-2)+2)$ as
$N_{f}\rightarrow\infty$. It is worth noting that in 4d, for
$N_{f}\neq\infty$, the $A\bar{A}$ kink does not sit at
$(\Delta_{a},\Delta_{A\bar{A}})=(2,6)$. This again suggests that this kink
should not be identified as the scalar QED or tri-critical QED, as both of
them flow to the Gaussian fixed point in $4d$. In $d=2.1$ dimensions, the
$A\bar{A}$ kinks again have large deviations from the $2+\epsilon$ results of
the scalar QED.
Therefore, the single correlator can capture the essential physics of the
$A\bar{A}$ decoupling from non-Abelian gauge theories to Abelian gauge
theories. However, the $A\bar{A}$ kink does not correspond to any known CFT.
This result suggests that, instead of bounding $\Delta_{A\bar{A}}$, we can
impose a gap in the $A\bar{A}$ channel to exclude all the non-Abelian gauge
theories. We will pursue this in the remaining part of this paper.
### IV.2 Scalar QED islands in $3$ dimensions
Channel | Gap imposed | Scalar QED | Tri-critical QED | GFF-A | $O(2N_{f})^{*}$ | QCD | Chern-Simons
---|---|---|---|---|---|---|---
$\Delta_{A\bar{A}}$ | $3$ | $4+O(\frac{1}{N_{f}})$ | $4+O(\frac{1}{N_{f}})$ | $4+O(\frac{1}{N_{f}})$ | $4+O(\frac{1}{N_{f}})$ | $2+O(\frac{1}{N_{f}})$ | $4+O(\frac{1}{N_{f}})$
$\Delta_{J^{\prime}_{\mu}}$ | $3.1$ | $4+O(\frac{1}{N_{f}})$ | $3+O(\frac{1}{N_{f}})$ | $3+O(\frac{1}{N_{f}})$ | $3+O(\frac{1}{N_{f}})$ | $4+O(\frac{1}{N_{f}})$ | $3+O(\frac{1}{N_{f}})$
$\Delta_{S\bar{S}^{\prime}}$ | $3.1$ | $4+O(\frac{1}{N_{f}})$ | $3+O(\frac{1}{N_{f}})$ | $3+O(\frac{1}{N_{f}})$ | $4+O(\frac{1}{N_{f}})$ | $4+O(\frac{1}{N_{f}})$ | $4+O(\frac{1}{N_{f}})$
Table 2: The imposed spectrum gaps for the scalar QED island in Fig. 3 and the
physical gaps of different theories. Most physical gaps have already been
analyzed above in Table 1. For Chern-Simons theories, we have
$J_{\mu}^{\prime}=a\cdot\varepsilon_{\mu\nu\rho}F_{\nu\rho}$, whose scaling
dimension is $\Delta=3+O(1/N_{f})$. In the scalar QED such an operator also
exists, but it is parity odd, hence it will not appear in the $a\times a$ OPE.
Figure 3: Scalar QED islands in $3d$: (a) $N_{f}=1000$, (b) $N_{f}=100$. The
allowed regions (shaded) are obtained by imposing the gaps in the operator
spectrum summarized in Table 2. The green circles are the large-$N_{f}$
results of the scalar QED, with $\Delta_{S\bar{S}}=2-\frac{48}{3\pi^{2}N_{f}}$
Benvenuti and Khachatryan (2019) and $\Delta_{a}$ in Eq. (11).
Interestingly, by imposing the gaps $\Delta_{A\bar{A}}\geq 3$,
$\Delta_{J^{\prime}_{\mu}}\geq 3.1$, and $\Delta_{S\bar{S}^{\prime}}\geq 3.1$
in the operator spectrum, we are able to obtain bootstrap islands of the
scalar QED in $d=3$ dimensions by scanning the $\Delta_{a}$-$\Delta_{S\bar{S}}$
space, as shown in Fig. 3. These three gaps are quite mild compared to the
real gaps of the scalar QED, and they have very clear physical meanings: 1)
$\Delta_{A\bar{A}}\geq 3$ serves to exclude non-Abelian gauge theories (i.e.
QCD); 2) $\Delta_{J^{\prime}_{\mu}}\geq 3.1$ serves to exclude $O(2N_{f})^{*}$
and Chern-Simons theories; 3) $\Delta_{S\bar{S}^{\prime}}\geq 3.1$ serves to
exclude tri-critical QED and GFF-A. Table 2 gives a summary of the imposed
gaps and the physical gaps of various theories.
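The exclusion logic of Table 2 can be made explicit in a few lines. The sketch
below encodes the leading-order entries of the table (dropping all $1/N_{f}$
corrections; the dictionary layout is ours) and checks that only the scalar
QED survives the three imposed gaps:

```python
# Leading-order gaps from Table 2, ordered as
# (Delta_AAbar, Delta_J'_mu, Delta_SSbar').
theories = {
    "scalar QED":       (4, 4, 4),
    "tri-critical QED": (4, 3, 3),
    "GFF-A":            (4, 3, 3),
    "O(2Nf)*":          (4, 3, 4),
    "QCD":              (2, 4, 4),
    "Chern-Simons":     (4, 3, 4),
}
imposed = (3, 3.1, 3.1)  # the gaps imposed for the islands in Fig. 3

for name, gaps in theories.items():
    survives = all(g >= cut for g, cut in zip(gaps, imposed))
    print(f"{name}: {'allowed' if survives else 'excluded'}")
```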
The numerics seem to converge better for large $N_{f}$. For instance, for
$N_{f}=1000$ we can get an island with $\Lambda=19$, while for $N_{f}=100$,
$\Lambda=19$ does not yield an island. Moreover, for $N_{f}=50$ we are not
able to obtain an island up to $\Lambda=35$. It could be that for small
$N_{f}$ the scalar QED island still exists at larger $\Lambda$, which,
however, is beyond our computational power. Up to $\Lambda=35$, the size of
the island is proportional to $1/N_{f}$. It is unknown if the islands will
shrink further as $\Lambda$ increases. It is worth remarking that in the
$O(N)$ WF bootstrap, the island of the $O(3)$ WF CFT from a two-operator
mixed correlator also shrinks rather slowly with $\Lambda$ (at a rate similar
to Fig. 3) Kos _et al._ (2015), but a three-operator mix can drastically
shrink the island Chester _et al._ (2020). It will be very interesting to mix
$a$ with $S\bar{S}$ to see how small the scalar QED island shrinks, and to see
if a scalar QED island also exists for small $N_{f}$ at an accessible
$\Lambda$.
The appearance of scalar QED islands strongly supports our proposed recipes
for bootstrapping gauge theories. We also remark that a basic requirement for
isolating a CFT of interest into an island is to impose a set of gaps that
excludes other crossing symmetric theories. From this perspective, the single
correlator bootstrap can also do the job of isolating a CFT as long as it can
access a set of such gaps, as has also been demonstrated for the $O(N)$ WF CFT
Li and Su (2017). The mixed correlator bootstrap, on the other hand, is
certainly more powerful than the single correlator bootstrap for several
reasons. Firstly, the mixed correlator can access new decoupling/missing
operators that are absent in the single correlator bootstrap. For instance,
the mixed correlator of the $O(N)$ vector and singlet can detect the missing
operator in the $O(N)$ vector channel, i.e., $\phi|\phi|^{2}$ Kos _et al._
(2015). Secondly, the mixed correlator has stronger constraining power and
better numerical convergence.
### IV.3 Scalar QED kinks and islands in $2+\epsilon$ dimensions
Figure 4: The numerical bounds of the $SU(4)$ singlet $\Delta_{s}$ of $SU(4)$
CFTs in $d=2.1$ dimensions. The allowed regions (shaded) are obtained with the
gap $\Delta_{A\bar{A}}\geq 2(d-2)+1$ and the $\Delta_{s^{\prime}}$ gaps
itemized in the figure. The green circle corresponds to the scaling dimensions
of the scalar QED from the epsilon expansion,
$(\Delta_{a},\Delta_{s})\approx(0.095,1.995)$. The right panel is a zoomed-in
plot of the left panel. In the left panel, we also plot the numerical bound
(black curve) obtained without any gap imposed.
The scalar QED islands discussed above in the $\Delta_{a}$-$\Delta_{S\bar{S}}$
space also exist in $2+\epsilon$ dimensions. Moreover, in the $2+\epsilon$
limit the numerical convergence becomes much faster, and we are able to obtain
a bootstrap island for small $N_{f}$ (e.g. $N_{f}=4$) with a small $\Lambda$.
We will not repeat such discussions here. It turns out to be illuminating to
study the $\Delta_{s}$ bound in $2+\epsilon$ dimensions, as will be detailed
in this section.
We first add a mild gap $\Delta_{A\bar{A}}\geq 2(d-2)+1$ in the $A\bar{A}$
channel that excludes all the non-Abelian gauge theories (as well as the
GFF-B). Having excluded all the non-Abelian gauge theories, the remaining
cousins of the scalar QED that are consistent with the crossing symmetry are
the tri-critical QED, GFF-A and $O(2N_{f})^{*}$. As we discussed in Sec. III,
the difference between the scalar QED and the tri-critical QED/GFF is that the
former contains $\phi^{4}$ interactions, while the latter does not. This
difference is similar to the difference between the WF CFT and the
GFF/Gaussian theories. For the $O(N)$ WF CFT, it is well known that one can
detect it as a kink above the GFF by bounding the $O(N)$ singlet Kos _et al._
(2015). So one may expect that the scalar QED would appear as a kink if one
bounds the $SU(N_{f})$ singlet $\Delta_{s}$.
Fig. 4 shows the numerical bounds of $\Delta_{s}$ for $N_{f}=4$ (the results
for different $N_{f}$'s are rather similar, so we choose $N_{f}=4$ as a
representative), which has a kink close to the $2+\epsilon$ expansion results
of the scalar QED (the discrepancy is of order $O(\epsilon^{3})$ and
$O(\epsilon^{2})$ for $\Delta_{a}$ and $\Delta_{s}$, respectively). We further
impose a gap on the second low lying singlet, $\Delta_{s^{\prime}}\geq 3,3.5$,
and scan the feasible region of $\Delta_{s}$. The $\Delta_{s^{\prime}}$ gap
carves out a large region, leaving a sharp tip in which the scalar QED sits.
This phenomenon is similar to that of the Ising CFT, for which imposing
further constraints carves the feasible region into a small island Kos _et
al._ (2014). Below we will show that the feasible region of the scalar QED
also shrinks to an island in the $\Delta_{a}$-$\Delta_{s}$ space with proper
conditions imposed.
It is worth pausing here to elaborate a bit more on the philosophy of imposing
gap conditions in bootstrap calculations. As we have explained, in many cases,
in particular for gauge theories, it is necessary to impose gaps in order to
exclude other theories that are also consistent with the crossing equations.
On the other hand, in bootstrap calculations it is common that imposing gaps
carves out feasible regions, possibly leaving a kink on the numerical bounds.
Sometimes the kink is floating, namely it moves as the gap changes (see the
appendix for more details). Such a floating kink does not unambiguously
correspond to an isolated CFT. On the practical side, it is hard to extract
useful information about the physical theory from a floating kink unless one
already knows the precise values of the gaps. In contrast, the kink in Fig. 4
is stable, namely it does not move as long as the gap ($\Delta_{A\bar{A}}$)
lies in a finite window. We have explicitly checked that the kink and the
numerical bounds are almost identical for different values of the gap, i.e.
$\Delta_{A\bar{A}}\geq 2(d-2)+1$ and $\Delta_{A\bar{A}}\geq 2(d-2)+1.5$. On
the other hand, if one removes the $\Delta_{A\bar{A}}$ gap, the $\Delta_{s}$
bound is modified significantly (the black curve in Fig. 4(a)): the scalar QED
kink disappears, but there is one kink close to the unitarity bound (of
$\Delta_{a}$) which is likely to be a WF type theory. These results justify
our decoupling operator based recipes for bootstrapping gauge theories;
specifically, the $\Delta_{A\bar{A}}$ gap serves to exclude all the
non-Abelian gauge theories.
We also remark that there is a vertical kink at the leftmost edge of the
feasible region. It corresponds to the $\Delta_{A\bar{A}}$ jump shown in
Figs. 1-2. It is noticeable that $\Delta_{s}$ is quite small in this region,
again supporting the conclusion that the $\Delta_{A\bar{A}}$ kink (jump)
cannot be the scalar QED. It would be interesting to know if the tri-critical
QED lives in any special region (e.g. the leftmost kink) of the numerical
bounds.
Figure 5: The islands of the scalar QED with $N_{f}=4$. The green circles mark
$2+\epsilon$ results of the scalar QED. (a) $d=2.01$ dimensions: the feasible
regions are obtained from the single correlator. (b) $d=2.1$ dimensions: The
feasible region is obtained from the $a$, $s$ mixed correlator.
To get an island of the scalar QED, we need to find conditions that exclude
all its cousins. Similar to Table 2, we impose the following mild gaps in the
operator spectrum,
$\Delta_{A\bar{A}}\geq 2d-3,\quad \Delta_{s^{\prime}}\geq 3,\quad \Delta_{J_{\mu}^{\prime}}\geq 2d-2.8,\quad \Delta_{S\bar{S}}\geq\Delta_{a},$ (19)
and we successfully isolate the scalar QED into a small island (in the
$\Delta_{a}$-$\Delta_{s}$ space) with the single correlator in $d=2.01$
dimensions, as shown in Fig. 5(a). The first three gaps have very clear
physical meanings: they serve to exclude non-Abelian gauge theories,
tri-critical QED/GFF, and $O(2N_{f})^{*}$, respectively. The last gap,
$\Delta_{S\bar{S}}$, is rather mysterious; we do not have a clear idea which
theory it excludes. Removing any of these four gaps, the scalar QED is no
longer isolated into an island. Somewhat surprisingly, upon increasing the
dimension slightly, say to $d=2.1$, the single correlator can no longer
isolate an island. The mixed correlator can still yield an island with a high
$\Lambda=35$ ($\Lambda=27$ does not produce an island) and more aggressive
(but still physical) gap conditions (Fig. 5(b)), i.e., $\Delta_{A\bar{A}}\geq
2(d-2)+1$, $\Delta_{s^{\prime}}\geq 3.5$, $\Delta_{J_{\mu}^{\prime}}\geq
d+0.5$, $\Delta_{S\bar{S}}\geq\Delta_{a}$,
$\Delta_{a^{\prime}}\geq\Delta_{a}+1.5$.
The appearance of scalar QED kinks and islands in $d=2+\epsilon$ dimensions
again supports our proposed recipes for bootstrapping critical gauge theories.
These nice results, however, do not persist in $d=3$ dimensions. More detailed
numerical observations and discussions can be found in Appendix B.
## V Conclusion and outlook
We have introduced the notion of decoupling operators of critical gauge
theories in dimensions $d>2$. The decoupling operator is the
higher-dimensional analogue of the null operators of 2d WZW CFTs, and it can
efficiently detect the rank of the gauge group. Based on the information about
decoupling operators, one can then impose gap conditions in bootstrap
calculations to isolate gauge theories of interest from other theories. As an
illustrative example, we studied a prototypical critical gauge theory, the
scalar QED. We first identified the concrete decoupling operators of the
scalar QED, and then showed how to use them in a bootstrap study.
In both the $3d$ large-$N_{f}$ limit and the $d=2+\epsilon$ limit, we have
successfully obtained kinks as well as islands of the scalar QED by imposing
mild gap conditions inspired by the physics of decoupling operators and EOMs.
We shall remark that, even though these two limits can be accessed using
perturbative expansions, our bootstrap calculations do not rely on any of
these perturbative results. The gap conditions we imposed are very mild and
are likely to hold for any $N_{f}$ in $3d$. The success of the bootstrap
calculations, however, does not extend to the most interesting case, i.e.,
small $N_{f}$ in $3d$. The failure for small $N_{f}$ in $3d$ might be due to
poor numerical convergence. It is possible that a mixed correlator bootstrap
between $a$ and $S\bar{S}$ will improve the numerical convergence
significantly and solve the long-standing problem regarding the properties of
the small $N_{f}$ scalar QED in $3d$. We leave this for future study.
One interesting question is what the $A\bar{A}$ kink in Fig. 1 and Fig. 2
corresponds to. This family of kinks shares many similarities with the
vertical jump in the bound of the rank-2 symmetric traceless tensor of the
$O(N)$ theories (this kink was dubbed the non-WF kink) He _et al._ (2021). A
similar kink was also recently observed in bootstrapping the $O(N)$ rank-2
symmetric traceless tensor Reehorst _et al._ (2020). We believe these kinks
may have similar physical mechanisms. They could either be unknown CFTs or
artifacts of the numerical bootstrap. Even if they are numerical artifacts,
the crossing symmetric solution at the kink may have certain relations to
gauge theories, given that they are close to gauge theories in the parameter
space. Understanding them may help to eventually solve the gauge theories in
$3d$.
We have shown how to use the decoupling operator in the $A\bar{A}$ channel to
bootstrap $U(1)$ gauge theories. In a similar fashion, one can bootstrap a
non-Abelian gauge theory with a specific gauge group $U(N_{c}=m)$ by using the
decoupling operators in the antisymmetric representations
$A^{[f_{1},\cdots,f_{n+1}]}_{[f_{n+2},\cdots,f_{2n+2}]}$ of $SU(N_{f})$ with
$n\leq m$. For example, in the channel
$A^{[f_{1},\cdots,f_{m+1}]}_{[f_{m+2},\cdots,f_{2m+2}]}$ the lowest operator
of different gauge theories will have distinct scaling dimensions: 1) the
$U(N_{c}>m)$ gauge theories have $\Delta=(m+1)(d-2)+O(1/N_{f})$; 2) the
$U(N_{c}=m)$ gauge theories have $\Delta=(m+1)(d-2)+2+O(1/N_{f})$; 3) the
$U(N_{c}<m)$ gauge theories have $\Delta\geq(m+1)(d-2)+4+O(1/N_{f})$. We also
remark that, as a concrete example, we analyzed the decoupling operators of
theories with a $U(N_{c})$ gauge field coupled to bosons. It is
straightforward to generalize to other gauge groups (e.g. $SU(N_{c})$,
$SO(N_{c})$, $USp(2N_{c})$) as well as to fermions coupled to gauge fields. It
will be interesting to apply our decoupling operator based recipes to other
gauge theories. In particular, exciting progress might be made by using
advanced bootstrap techniques such as mixing spinning operators Erramilli _et
al._ (2020).
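The gauge-rank diagnostic just described can be summarized compactly. The
sketch below (leading order only, $1/N_{f}$ corrections dropped; the function
name is ours) evaluates the three cases in the channel
$A^{[f_{1},\cdots,f_{m+1}]}_{[f_{m+2},\cdots,f_{2m+2}]}$:

```python
def lowest_dim(m, d, Nc):
    # Leading-order scaling dimension of the lowest operator in the
    # antisymmetric channel A^{[f1..f_{m+1}]}_{[f_{m+2}..f_{2m+2}]}
    # for a U(Nc) gauge theory.
    base = (m + 1) * (d - 2)
    if Nc > m:
        return base          # Delta = (m+1)(d-2)
    if Nc == m:
        return base + 2      # Delta = (m+1)(d-2) + 2
    return base + 4          # lower bound: Delta >= (m+1)(d-2) + 4

d, m = 3, 2
for Nc in (3, 2, 1):
    print(f"U({Nc}) in the m={m} channel: Delta = {lowest_dim(m, d, Nc)}")
```

Reading off the lowest dimension in this channel thus distinguishes
$U(N_{c}>m)$, $U(N_{c}=m)$, and $U(N_{c}<m)$ gauge theories at leading order.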
On the phenomenological level, the decoupling operators of gauge theories
share several similarities with the null operators of $2d$ WZW CFTs. As
detailed in Sec. II.1, the null operators of $SU(N)_{k}$ WZW CFTs can even be
considered as decoupling operators of $2d$ $U(k)$ gauge theories. In the
context of $2d$ CFTs the null operator has important applications, e.g. null
operators can act as differential operators that annihilate correlation
functions. It is an open question whether a similar application also exists
for the decoupling operators of gauge theories in general dimensions. Progress
might be made by looking for an exact interpolation between gauge theories
with different gauge groups, similar to the interpolation between WZW CFTs
with different WZW levels.
## Acknowledgements
YCH would like to thank Chong Wang and Liujun Zou for stimulating discussions
and collaborations on $3d$ WZW models, and Zheng Zhou for discussions on the
large-$N_{f}$ equation of motion, which benefited the current work. We thank
Slava Rychkov for his critical reading of our manuscript and
for his various suggestions. Research at Perimeter Institute is supported in
part by the Government of Canada through the Department of Innovation, Science
and Industry Canada and by the Province of Ontario through the Ministry of
Colleges and Universities. This project has received funding from the European
Research Council (ERC) under the European Union’s Horizon 2020 research and
innovation programme (grant agreement no. 758903). The work of J.R. is
supported by the DFG through the Emmy Noether research group The Conformal
Bootstrap Program project number 400570283. The numerics is solved using SDPB
program Simmons-Duffin (2015) and simpleboot
(https://gitlab.com/bootstrapcollaboration/simpleboot). The computations in
this paper were run on the Symmetry cluster of Perimeter institute, and on the
EPFL SCITAS cluster funded by the Swiss National Science Foundation under
grant no. PP00P2-163670. NS would like to thank his parents for support during
the COVID-19 pandemic. NS would like to thank the hospitality of Institute of
Physics Chinese Academy of Sciences and the Center for Advanced Study,
Tsinghua University while part of the work was finished.
## Appendix A 3d WZW models and gauge theories
In this appendix, we discuss some examples that show direct connections
between WZW CFTs and 3d gauge theories. The physics discussed here is not new;
it is a recollection of the results in Refs. Komargodski and Seiberg (2018);
Zou _et al._ (2021).
Despite their purely algebraic definition, 2d WZW CFTs also have a Lagrangian
formulation, namely a non-linear sigma model (NL$\sigma$M) on a (Lie) group
manifold $G$ ($SU(N)$, $USp(2N)$, etc.) supplemented with a level-$k$ WZW term
Di Francesco _et al._ (2012),
$\mathcal{L}=\frac{1}{4a^{2}}\int
d^{2}x\,\textrm{Tr}(\partial^{\mu}g^{-1})(\partial_{\mu}g)+k\cdot\frac{i}{24\pi}\epsilon_{\mu\nu\rho}\int_{B}d^{3}x\textrm{Tr}((\hat{g}^{-1}\partial^{\mu}\hat{g})(\hat{g}^{-1}\partial^{\nu}\hat{g})(\hat{g}^{-1}\partial^{\rho}\hat{g})).$
(20)
$g$ is a matrix field valued in a unitary representation of the Lie group. The
first term is the ordinary kinetic term of the NL$\sigma$M; the second term is
the WZW term, defined in the 3-dimensional extended space $B$. $k$ is
quantized and corresponds to the homotopy class $\pi_{3}(G)=\mathbf{Z}$. One
shall also have $\pi_{2}(G)=\mathbf{0}$ in order for the WZW term to be well
defined. The Lagrangian has a conformal fixed point (i.e. the WZW CFT) at a
finite coupling strength.
It is straightforward to generalize the WZW Lagrangian to higher dimensions.
In 3d a non-trivial WZW term requires the target space $G$ to satisfy
$\pi_{4}(G)=\mathbf{Z}$ and $\pi_{3}(G)=\mathbf{0}$. Several target spaces,
including Grassmannian and Stiefel manifolds (e.g. $SO(N)/SO(4)$), satisfy
this requirement. One important difference in $3d$ is that the NL$\sigma$M is
non-renormalizable, making it hard to analyze. (A theory being
non-renormalizable does not necessarily mean it is non-sensible; in the
context of the NL$\sigma$M, we know that it can describe the WF CFTs although
it is non-renormalizable in $d>2$ dimensions.) Nevertheless, it was argued in
Ref. Zou _et al._ (2021) (which studied the Stiefel manifold, though the
argument should readily generalize to other manifolds) that there are three
fixed points as the coupling strength $a^{2}$ increases from $0$:
1. An attractive fixed point of the spontaneous symmetry breaking (SSB) phase
at $a^{2}=0$. The ground state manifold is the target space of the
NL$\sigma$M.
2. A repulsive fixed point of order-disorder phase transitions.
3. An attractive conformal fixed point preserving all the symmetries.
The last attractive conformal fixed point is the $3d$ version of the $2d$ WZW
CFT, while the first two fixed points merge into the Gaussian fixed point in
$2d$.
Ref. Zou _et al._ (2021) studied such $3d$ WZW models on the Stiefel manifold;
here we discuss a simpler situation: the $3d$ Grassmannian
$U(2N)/(U(N)\times U(N))$ WZW models Komargodski and Seiberg (2018). In
particular, we will argue that the Grassmannian WZW models have simple UV
completions, i.e., Dirac fermions coupled to a gauge field.
The UV completion of the $3d$ level-$k$ $U(2N)/(U(N)\times U(N))$ WZW model is
the QCD3-Gross-Neveu model,
$\mathcal{L}=\sum_{i=1}^{2N}\bar{\psi}^{i}\gamma_{\mu}(\partial_{\mu}-i\alpha_{\mu})\psi_{i}+\lambda\phi^{i}_{j}\left(\bar{\psi}_{j}\psi_{i}-\frac{1}{2N}\delta^{j}_{i}\bar{\psi}\psi\right)+\textrm{Tr}((\partial\phi)^{2})+m\textrm{Tr}(\phi^{2})+u_{1}\textrm{Tr}(\phi^{4})+u_{2}(\textrm{Tr}(\phi^{2}))^{2}.$
(21)
Here $\alpha_{\mu}$ is an $SU(k)$ gauge field, and the Dirac fermions
$\psi_{i}$ are in the $SU(k)$ fundamental representation. $\phi^{i}_{j}$ is a
bosonic field in the $SU(2N)$ adjoint representation, and it is coupled to the
adjoint mass term of the Dirac fermions.
The QCD3-Gross-Neveu model has three fixed points, corresponding to an SSB
phase with ground state manifold $U(2N)/(U(N)\times U(N))$, the
QCD3-Gross-Neveu CFT, and the QCD3 CFT. The QCD3-Gross-Neveu CFT fixed point
is unstable, and will flow to either the SSB phase or the QCD3 CFT depending
on the sign of $m\textrm{Tr}(\phi^{2})$. This phase diagram coincides with
that of the $U(2N)/(U(N)\times U(N))$ WZW models. In the SSB phase of the
QCD3-Gross-Neveu model, one can define a NL$\sigma$M on the target space
$U(2N)/(U(N)\times U(N))$. In the SSB phase the Dirac fermions are gapped, and
integrating them out generates a level-$k$ WZW term Abanov and Wiegmann
(2000). The level $k$ (instead of $1$) comes from the color multiplicity of
the Dirac fermions due to the $SU(k)$ gauge field. Therefore, we have shown
that the SSB fixed point of the QCD3-Gross-Neveu model and the level-$k$
$U(2N)/(U(N)\times U(N))$ WZW model are dual to each other.
Given that phase diagrams of two models match and the SSB phase of two models
are dual, it is natural to conjecture that the QCD3-Gross-Neveu model is the
UV completion of 3d WZW model on $U(2N)/(U(N)\times U(N))$ manifold. In
particular,
• _The IR conformal fixed point of the $3d$ level-$k$ $U(2N)/(U(N)\times
U(N))$ WZW model is dual to the QCD3 CFT with $N_{f}=2N$ Dirac fermions
coupled to an $SU(k)$ gauge field._
There is an interesting sanity check for this duality. The Grassmannian
$U(2N)/(U(N)\times U(N))$ has a nontrivial $\pi_{2}=\mathbf{Z}$, leading to
Skyrmion operators. The Skyrmion is either a boson or a fermion depending on
the parity of $k$ Komargodski and Seiberg (2018). The Skyrmion can be
identified as the baryon operator of the $SU(k)$ gauge theory, whose
statistics also depends on $k$.
Similarly, one can derive:
• _The IR conformal fixed point of the $3d$ level-$k$ $SO(2N)/(SO(N)\times
SO(N))$ WZW model is dual to the QCD3 CFT with $N_{f}=2N$ Dirac fermions
coupled to an $SO(k)$ gauge field._
• _The IR conformal fixed point of the $3d$ level-$k$ $USp(4N)/(USp(2N)\times
USp(2N))$ WZW model is dual to the QCD3 CFT with $N_{f}=2N$ Dirac fermions
coupled to a $USp(2k)$ gauge field._
## Appendix B More numerical data
Figure 6: Floating kinks versus stable kinks in $d=2.1$ dimensions with a
global symmetry $SU(4)$. (a) Example of floating kinks. (b) Example of stable
kinks. The feasible regions calculated with the gap conditions
$\Delta_{A\bar{A}}\geq 1.2$ and $\Delta_{A\bar{A}}\geq 1.7$ are almost
identical to each other.
In this appendix we will provide more detailed numerical data, and most of the
data will focus on $2+\epsilon$ dimensions.
Firstly, let us briefly comment on floating kinks versus stable kinks. As we
explained in the main text, a floating kink moves as the imposed gap changes,
while a stable kink does not move as long as the imposed gap lies in a finite
window. Fig. 6 shows a concrete comparison between the two. The floating kinks
in Fig. 6(a) clearly depend on the value of the $\Delta_{S\bar{S}}$ gap. In
contrast, the stable kinks in Fig. 6(b) show little dependence on the value of
the gap.
Figure 7: The numerical bounds of $\Delta_{J_{\mu}^{\prime}}$ of $SU(4)$ CFTs
with gap $\Delta_{A\bar{A}}\geq 2(d-2)+1$, $\Delta_{s}\geq 1$,
$\Delta_{S\bar{S}}\geq\Delta_{a}$ in (a) $d=2.01$ dimensions, (b) $d=2.07$
dimensions. The shaded regions are allowed regions. The green circles mark the
scalar QED and the green stars mark $O(2N_{f})^{*}$, up to $O(\epsilon^{3})$
and $O(\epsilon)$ corrections for $\Delta_{a}$ and
$\Delta_{J_{\mu}^{\prime}}$, respectively. [Footnote 18: In $O(2N_{f})^{*}$,
$a$ (i.e. the $SU(N_{f})$ adjoint) corresponds to the rank-2 symmetric
traceless tensor of the $O(2N_{f})$ WF CFT. Its scaling dimension from the
$2+\epsilon$ expansion is
$\Delta_{a}=\frac{2N_{f}\epsilon}{2N_{f}-2}-\frac{2N_{f}\epsilon^{2}}{(2N_{f}-2)^{2}}+O(\epsilon^{3})$
Brézin and Zinn-Justin (1976); Brézin _et al._ (1976).]
Figure 8: The
numerical bounds of $SU(N_{f})$ singlet $\Delta_{s}$ of $SU(100)$ CFTs in
$d=2.1$ (a), $d=2.4$ (b), $d=2.7$ (c), and $d=3$ (d) dimensions. The data is
obtained with a gap condition $\Delta_{A\bar{A}}\geq 2(d-2)+1$ imposed, and
the allowed regions do not change under a tighter $\Delta_{A\bar{A}}$ gap,
e.g. $\Delta_{A\bar{A}}\geq 2(d-2)+1.5$. The green circles mark the scalar
QED: (a) it corresponds to the $2+\epsilon$ expansion results
$(\Delta_{a},\Delta_{s})\approx(0.0998,1.9998)$; (d) it corresponds to the
large-$N_{f}$ results $(\Delta_{a},\Delta_{s})\approx(0.984,1.951)$.
To have a more intuitive idea about the magic of EOMs, we have investigated
how the bound on $\Delta_{J_{\mu}^{\prime}}$ evolves with $\Delta_{a}$. As
shown in Fig. 7, the scalar QED sits at a sharp spike, which is well
separated from $O(2N_{f})^{*}$. This is a consequence of the EOM of the gauge
field, as discussed in Table 1. The sharp spike also explains why the gap on
$\Delta_{J^{\prime}_{\mu}}$ helps to isolate the scalar QED into an island.
Another noteworthy observation is that convergence quickly becomes difficult
as the dimension $d$ increases even slightly. In $d=2.01$ dimensions (Fig.
7(a)) $\Delta_{J_{\mu}^{\prime}}$ already has a sharp spike for a small
$\Lambda=11$, and a larger $\Lambda=19$ does not improve the bound
significantly. In contrast, in $d=2.07$ dimensions $\Lambda=11$ does not
produce a spike at all, while the spike shows up weakly for $\Lambda=19$ and
becomes sharper for $\Lambda=27$. Moving to a higher dimension (e.g. $d=2.1$),
the spike does not show up even for $\Lambda=27$ (the feasible region looks
similar to that of $d=2.07$ with $\Lambda=11$ in Fig. 7(b)). This also
explains why the single correlator does not produce an island in $d=2.1$
dimensions for $N_{f}=4$. We also remark that convergence becomes easier for
larger $N_{f}$: e.g. $\Delta_{J_{\mu}^{\prime}}$ still has a spike in $d=2.3$
dimensions for $SU(100)$ with $\Lambda=19$. This is consistent with the fact
that in $3d$ we are able to obtain islands in the
$\Delta_{S}-\Delta_{S\bar{S}}$ space for large $N_{f}$ (i.e. Fig. 3).
Finally, let us investigate how the scalar QED kinks evolve as we approach
$d=3$ dimensions. In a given dimension there exists a critical $N_{f}^{*}(d)$
below which the scalar QED loses its conformality. The precise value of
$N_{f}^{*}$ in $d=3$ dimensions remains an open question. To avoid
unnecessary complexity, we choose a large $N_{f}=100$ and monitor how the
scalar QED kink evolves as the dimension increases.
Fig. 8 shows the numerical bounds of $\Delta_{s}$ in $d=2.1,2.4,2.7,3$
dimensions. In every plot there is a sharp vertical kink on the leftmost side
of the feasible region. This kink is the $A\bar{A}$ kink discussed in Sec.
IV.1, and does not correspond to the scalar QED. In $d=2.1$ dimensions (Fig.
8(a)), similar to the $N_{f}=4$ case in Fig. 4, the numerical bound has a
sharp kink close to the $2+\epsilon$ result
$(\Delta_{a},\Delta_{s})=(0.0998,1.9998)$ of the scalar QED. As $d$ increases,
the scalar QED kink becomes weak in $d=2.4$ (Fig. 8(b)), and finally becomes
invisible in $d=2.7$ (Fig. 8(c)) and $d=3$ dimensions (Fig. 8(d)).
It is unclear why the scalar QED kink disappears for $d$ close to $3$.
[Footnote 19: The disappearance of the scalar QED kink should not be ascribed
to the physics of fixed point annihilation, as $N_{f}=100$ should be large
enough for the scalar QED to be conformal in $d=3$ dimensions.] One possible
explanation
is that the numerical convergence becomes harder as $d$ increases, which can
be clearly seen by comparing the numerical bounds of $\Lambda=19$ and
$\Lambda=27$ in Fig. 8(b)-(d). It is also worth noting that, in $d=3$
dimensions, the numerical bound of $\Delta_{s}$ is much larger than the value
($\Delta\approx 2$) of the scalar QED. However, based on our numerical data
there is no indication that the scalar QED kink will show up in $d=3$
dimensions as $\Lambda\rightarrow\infty$.
Figure 9: The numerical bounds of $SU(N_{f})$ singlet $\Delta_{s}$ of
$SU(100)$ CFTs in $d=2.1$ (a), and $d=3$ (b) dimensions. The green circles
mark the scalar QED: (a) it corresponds to the $2+\epsilon$ expansion results
$(\Delta_{a},\Delta_{s})\approx(0.0998,1.9998)$; (b) it corresponds to the
large-$N_{f}$ results $(\Delta_{a},\Delta_{s})\approx(0.984,1.951)$. The light
orange and orange feasible regions are obtained with the gap condition i)
$\Delta_{A\bar{A}}\geq 2(d-2)+1$, ii) $\Delta_{A\bar{A}}\geq 2(d-2)+1$ and
$\Delta_{S\bar{S}}\geq\Delta_{a}$. The feasible regions do not change under
tighter (but still physical) conditions, e.g. $\Delta_{A\bar{A}}\geq
2(d-2)+1.5$ and $\Delta_{S\bar{S}}\geq 1.5\Delta_{a}$.
A curious observation is that, in $d=3$ dimensions the numerical bounds are
improved significantly by imposing a mild gap
$\Delta_{S\bar{S}}\geq\Delta_{a}$ [Footnote 20: We note that this gap can be
further relaxed, but we have not examined it carefully to find the optimal
gap condition.], as shown in Fig. 9(b). In contrast, in $d=2.1$ dimensions (Fig.
9(a)) by imposing $\Delta_{S\bar{S}}\geq\Delta_{a}$ the numerical bounds are
only improved a little, and the position of the kink does not move. On the
other hand, the numerical bounds (for both $d=2.1$ and $d=3$) are not further
improved under a tighter gap condition, e.g. $\Delta_{S\bar{S}}\geq
1.5\Delta_{a}$. From the Extremal Functional Method (EFM) El-Showk and Paulos
(2013) we find that on the boundary of the feasible region one roughly has
$\Delta_{S\bar{S}}\approx 2\Delta_{a}$, i.e., the relation expected for the
scalar QED. Also recall that in Fig. 5, to get the scalar QED island (in the
$\Delta_{a}-\Delta_{s}$ space) in $2+\epsilon$ dimensions it is necessary to
impose this mysterious gap $\Delta_{S\bar{S}}\geq\Delta_{a}$. These
observations suggest that this gap excludes some crossing symmetric solutions
for the bootstrap equations, but we are not able to identify any candidate
theory. Nevertheless, in $d=3$ dimensions with this extra gap imposed the
scalar QED kink still does not show up [Footnote 21: The leftmost kink
corresponds to the $A\bar{A}$ kink, which should not be the scalar QED, as we
explained earlier.], and the numerical bounds of $\Delta_{s}$ are still higher than that
of the scalar QED. It is possible that one needs to exclude other theories by
imposing extra gap conditions in order to spot the scalar QED kink in $d=3$
dimensions. We leave this for future exploration.
## References
* Seiberg (1995) N. Seiberg, “Electric - magnetic duality in supersymmetric nonAbelian gauge theories,” Nucl. Phys. B 435, 129–146 (1995), arXiv:hep-th/9411149 .
* Maldacena (1999) Juan Martin Maldacena, “The Large N limit of superconformal field theories and supergravity,” Int. J. Theor. Phys. 38, 1113–1133 (1999), arXiv:hep-th/9711200 .
* Luty and Okui (2006) Markus A. Luty and Takemichi Okui, “Conformal technicolor,” JHEP 09, 070 (2006), arXiv:hep-ph/0409274 .
* Senthil _et al._ (2004a) T. Senthil, Ashvin Vishwanath, Leon Balents, Subir Sachdev, and Matthew P. A. Fisher, “Deconfined quantum critical points,” Science 303, 1490 (2004a).
* Senthil _et al._ (2004b) T. Senthil, Leon Balents, Subir Sachdev, Ashvin Vishwanath, and Matthew P. A. Fisher, “Quantum criticality beyond the landau-ginzburg-wilson paradigm,” Phys. Rev. B 70, 144407 (2004b).
* Hermele _et al._ (2005) Michael Hermele, T. Senthil, and Matthew P. A. Fisher, “Algebraic spin liquid as the mother of many competing orders,” Phys. Rev. B 72, 104404 (2005), arXiv:cond-mat/0502215 [cond-mat.str-el] .
* Hermele _et al._ (2008) Michael Hermele, Ying Ran, Patrick A. Lee, and Xiao-Gang Wen, “Properties of an algebraic spin liquid on the kagome lattice,” Phys. Rev. B 77, 224413 (2008), arXiv:0803.1150 [cond-mat.str-el] .
* Song _et al._ (2019) Xue-Yang Song, Chong Wang, Ashvin Vishwanath, and Yin-Chen He, “Unifying description of competing orders in two-dimensional quantum magnets,” Nature Communications 10, 4254 (2019), arXiv:1811.11186 [cond-mat.str-el] .
* Jain _et al._ (1990) J. K. Jain, S. A. Kivelson, and Nandini Trivedi, “Scaling theory of the fractional quantum hall effect,” Phys. Rev. Lett. 64, 1993 (1990).
* Kivelson _et al._ (1992) Steven Kivelson, Dung-Hai Lee, and Shou-Cheng Zhang, “Global phase diagram in the quantum hall effect,” Phys. Rev. B 46, 2223 (1992).
* Chen _et al._ (1993) Wei Chen, Matthew P. A. Fisher, and Yong-Shi Wu, “Mott transition in an anyon gas,” Phys. Rev. B 48, 13749 (1993).
* Lee _et al._ (2018) Jong Yeon Lee, Chong Wang, Michael P. Zaletel, Ashvin Vishwanath, and Yin-Chen He, “Emergent Multi-Flavor QED3 at the Plateau Transition between Fractional Chern Insulators: Applications to Graphene Heterostructures,” Physical Review X 8, 031015 (2018), arXiv:1802.09538 [cond-mat.str-el] .
* Rattazzi _et al._ (2008) Riccardo Rattazzi, Vyacheslav S. Rychkov, Erik Tonni, and Alessandro Vichi, “Bounding scalar operator dimensions in 4D CFT,” Journal of High Energy Physics 2008, 031 (2008), arXiv:0807.0004 [hep-th] .
* Kos _et al._ (2014) Filip Kos, David Poland, and David Simmons-Duffin, “Bootstrapping Mixed Correlators in the 3D Ising Model,” JHEP 11, 109 (2014), arXiv:1406.4858 [hep-th] .
* El-Showk _et al._ (2014) Sheer El-Showk, Miguel F. Paulos, David Poland, Slava Rychkov, David Simmons-Duffin, and Alessandro Vichi, “Solving the 3d Ising Model with the Conformal Bootstrap II. c-Minimization and Precise Critical Exponents,” J. Stat. Phys. 157, 869 (2014), arXiv:1403.4545 [hep-th] .
* Kos _et al._ (2015) Filip Kos, David Poland, David Simmons-Duffin, and Alessandro Vichi, “Bootstrapping the O(N) Archipelago,” JHEP 11, 106 (2015), arXiv:1504.07997 [hep-th] .
* Kos _et al._ (2016) Filip Kos, David Poland, David Simmons-Duffin, and Alessandro Vichi, “Precision Islands in the Ising and $O(N)$ Models,” JHEP 08, 036 (2016), arXiv:1603.04436 [hep-th] .
* Simmons-Duffin (2017) David Simmons-Duffin, “The Lightcone Bootstrap and the Spectrum of the 3d Ising CFT,” JHEP 03, 086 (2017), arXiv:1612.08471 [hep-th] .
* Rong and Su (2018) Junchen Rong and Ning Su, “Bootstrapping minimal $\mathcal{N}=1$ superconformal field theory in three dimensions,” (2018), arXiv:1807.04434 [hep-th] .
* Atanasov _et al._ (2018) Alexander Atanasov, Aaron Hillman, and David Poland, “Bootstrapping the Minimal 3D SCFT,” JHEP 11, 140 (2018), arXiv:1807.05702 [hep-th] .
* Iliesiu _et al._ (2016) Luca Iliesiu, Filip Kos, David Poland, Silviu S. Pufu, David Simmons-Duffin, and Ran Yacoby, “Bootstrapping 3D fermions,” Journal of High Energy Physics 2016, 120 (2016), arXiv:1508.00012 [hep-th] .
* Iliesiu _et al._ (2018) Luca Iliesiu, Filip Kos, David Poland, Silviu S. Pufu, and David Simmons-Duffin, “Bootstrapping 3D fermions with global symmetries,” Journal of High Energy Physics 2018, 36 (2018), arXiv:1705.03484 [hep-th] .
* Chester _et al._ (2019) Shai M. Chester, Walter Landry, Junyu Liu, David Poland, David Simmons-Duffin, Ning Su, and Alessandro Vichi, “Carving out OPE space and precise $O(2)$ model critical exponents,” (2019), arXiv:1912.03324 [hep-th] .
* Chester _et al._ (2020) Shai M. Chester, Walter Landry, Junyu Liu, David Poland, David Simmons-Duffin, Ning Su, and Alessandro Vichi, “Bootstrapping Heisenberg Magnets and their Cubic Instability,” arXiv e-prints , arXiv:2011.14647 (2020), arXiv:2011.14647 [hep-th] .
* Poland _et al._ (2019) David Poland, Slava Rychkov, and Alessandro Vichi, “The conformal bootstrap: Theory, numerical techniques, and applications,” Reviews of Modern Physics 91, 015002 (2019), arXiv:1805.04405 [hep-th] .
* Nakayama (2018) Yu Nakayama, “Bootstrap experiments on higher dimensional cfts,” International Journal of Modern Physics A 33, 1850036 (2018).
* Chester and Pufu (2016a) Shai M. Chester and Silviu S. Pufu, “Towards bootstrapping qed3,” Journal of High Energy Physics 2016 (2016a), 10.1007/jhep08(2016)019.
* Li (2018) Zhijin Li, “Solving qed3 with conformal bootstrap,” (2018), arXiv:1812.09281 [hep-th] .
* Li and Poland (2021) Zhijin Li and David Poland, “Searching for gauge theories with the conformal bootstrap,” Journal of High Energy Physics 2021, 172 (2021), arXiv:2005.01721 [hep-th] .
* Dyer _et al._ (2013) Ethan Dyer, Márk Mezei, and Silviu S. Pufu, “Monopole Taxonomy in Three-Dimensional Conformal Field Theories,” arXiv e-prints , arXiv:1309.1160 (2013), arXiv:1309.1160 [hep-th] .
* Rychkov and Vichi (2009) Vyacheslav S. Rychkov and Alessandro Vichi, “Universal Constraints on Conformal Operator Dimensions,” Phys. Rev. D 80, 045006 (2009), arXiv:0905.2211 [hep-th] .
* Ohtsuki (2016) Tomoki Ohtsuki, _Applied Conformal Bootstrap_ , Ph.D. thesis, University of Tokyo (2016).
* He _et al._ (2021) Yin-Chen He, Junchen Rong, and Ning Su, “Non-Wilson-Fisher kinks of O(N) numerical bootstrap: from the deconfined phase transition to a putative new family of CFTs,” SciPost Physics 10, 115 (2021), arXiv:2005.04250 [hep-th] .
* Behan (2018) Connor Behan, “Unitary subsector of generalized minimal models,” Phys. Rev. D 97, 094020 (2018).
* Ginsparg (1988) Paul Ginsparg, “Applied Conformal Field Theory,” arXiv e-prints , hep-th/9108028 (1988), arXiv:hep-th/9108028 [hep-th] .
* Rychkov and Tan (2015) Slava Rychkov and Zhong Ming Tan, “The $\epsilon$ -expansion from conformal field theory,” Journal of Physics A Mathematical General 48, 29FT01 (2015), arXiv:1505.00963 [hep-th] .
* Giombi and Kirilin (2016) Simone Giombi and Vladimir Kirilin, “Anomalous dimensions in CFT with weakly broken higher spin symmetry,” Journal of High Energy Physics 2016, 68 (2016), arXiv:1601.01310 [hep-th] .
* Lawrie and Athrone (1983) ID Lawrie and C Athrone, “Phase transitions in nonlinear abelian higgs models,” Journal of Physics A: Mathematical and General 16, L587 (1983).
* Hikami (1981) S. Hikami, “Three-loop $\beta$-functions of non-linear $\sigma$ models on symmetric spaces,” Physics Letters B 98, 208 (1981).
* Hikami (1979) Shinobu Hikami, “Renormalization group functions of $CP^{N-1}$ non-linear $\sigma$-model and $N$-component scalar QED model,” Progress of Theoretical Physics 62, 226 (1979).
* Murthy and Sachdev (1990) Ganpathy Murthy and Subir Sachdev, “Action of hedgehog instantons in the disordered phase of the (2+1)-dimensional $CP^{N-1}$ model,” Nucl. Phys. B 344, 557 (1990).
* Halperin _et al._ (1974) B. I. Halperin, T. C. Lubensky, and Shang-keng Ma, “First-order phase transitions in superconductors and smectic-$a$ liquid crystals,” Phys. Rev. Lett. 32, 292 (1974).
* Nahum _et al._ (2015a) Adam Nahum, J.T. Chalker, P. Serna, M. Ortuno, and A. M. Somoza, “Deconfined quantum criticality, scaling violations, and classical loop models,” Physical Review X 5 (2015a), 10.1103/physrevx.5.041048.
* Benvenuti and Khachatryan (2019) Sergio Benvenuti and Hrachya Khachatryan, “Easy-plane QED3’s in the large N f limit,” Journal of High Energy Physics 2019, 214 (2019), arXiv:1902.05767 [hep-th] .
* Gorbenko _et al._ (2018) Victor Gorbenko, Slava Rychkov, and Bernardo Zan, “Walking, weak first-order transitions, and complex CFTs,” Journal of High Energy Physics 2018, 108 (2018), arXiv:1807.11512 [hep-th] .
* Dasgupta and Halperin (1981) C. Dasgupta and B. I. Halperin, “Phase transition in a lattice model of superconductivity,” Phys. Rev. Lett. 47, 1556 (1981).
* Peskin (1978) Michael E Peskin, “Mandelstam-’t hooft duality in abelian lattice models,” Annals of Physics 113, 122 (1978).
* Sandvik (2010) Anders W. Sandvik, “Continuous quantum phase transition between an antiferromagnet and a valence-bond solid in two dimensions: Evidence for logarithmic corrections to scaling,” Phys. Rev. Lett. 104, 177201 (2010).
* Kaul and Sandvik (2012) Ribhu K. Kaul and Anders W. Sandvik, “Lattice model for the $\mathrm{SU}(n)$ néel to valence-bond solid quantum phase transition at large $n$,” Phys. Rev. Lett. 108, 137201 (2012).
* Bonati _et al._ (2020) Claudio Bonati, Andrea Pelissetto, and Ettore Vicari, “Lattice Abelian-Higgs model with noncompact gauge fields,” arXiv e-prints , arXiv:2010.06311 (2020), arXiv:2010.06311 [cond-mat.stat-mech] .
* Sandvik (2007) Anders W. Sandvik, “Evidence for deconfined quantum criticality in a two-dimensional heisenberg model with four-spin interactions,” Physical Review Letters 98 (2007), 10.1103/physrevlett.98.227202.
* Melko and Kaul (2008) Roger G. Melko and Ribhu K. Kaul, “Scaling in the fan of an unconventional quantum critical point,” Physical Review Letters 100 (2008), 10.1103/physrevlett.100.017203.
* Kuklov _et al._ (2008) A. B. Kuklov, M. Matsumoto, N. V. Prokof’ev, B. V. Svistunov, and M. Troyer, “Deconfined criticality: Generic first-order transition in the su(2) symmetry case,” Phys. Rev. Lett. 101, 050405 (2008).
* Nahum _et al._ (2015b) Adam Nahum, P. Serna, J.T. Chalker, M. Ortuno, and A.M. Somoza, “Emergent so(5) symmetry at the Néel to valence-bond-solid transition,” Physical Review Letters 115 (2015b), 10.1103/physrevlett.115.267203.
* Manenti and Vichi (2021) Andrea Manenti and Alessandro Vichi, “Exploring $SU(N)$ adjoint correlators in $3d$,” arXiv e-prints , arXiv:2101.07318 (2021), arXiv:2101.07318 [hep-th] .
* Di Francesco _et al._ (2012) Philippe Di Francesco, Pierre Mathieu, and David Sénéchal, _Conformal field theory_ (Springer Science & Business Media, 2012).
* Schwinger (1962) Julian Schwinger, “Gauge invariance and mass. ii,” Phys. Rev. 128, 2425 (1962).
* Abdalla (1997) E. Abdalla, “Two-dimensional Quantum Field Theory, examples and applications,” arXiv e-prints , hep-th/9704192 (1997), arXiv:hep-th/9704192 [hep-th] .
* Zou _et al._ (2021) Liujun Zou, Yin-Chen He, and Chong Wang, “Stiefel liquids: possible non-Lagrangian quantum criticality from intertwined orders,” arXiv e-prints , arXiv:2101.07805 (2021), arXiv:2101.07805 [cond-mat.str-el] .
* Komargodski and Seiberg (2018) Zohar Komargodski and Nathan Seiberg, “A symmetry breaking scenario for QCD3,” Journal of High Energy Physics 2018, 109 (2018), arXiv:1706.08755 [hep-th] .
* Reehorst _et al._ (2020) Marten Reehorst, Maria Refinetti, and Alessandro Vichi, “Bootstrapping traceless symmetric $O(N)$ scalars,” arXiv e-prints , arXiv:2012.08533 (2020), arXiv:2012.08533 [hep-th] .
* Sachdev (2007) Subir Sachdev, “Quantum phase transitions,” Handbook of Magnetism and Advanced Magnetic Materials (2007).
* Wen and Wu (1993) Xiao-Gang Wen and Yong-Shi Wu, “Transitions between the quantum hall states and insulators induced by periodic potentials,” Phys. Rev. Lett. 70, 1501 (1993).
* Rong and Su (2019) Junchen Rong and Ning Su, “Bootstrapping the $\mathcal{N}=1$ wess-zumino models in three dimensions,” (2019), arXiv:1910.08578 [hep-th] .
* Chester and Pufu (2016b) Shai M. Chester and Silviu S. Pufu, “Anomalous dimensions of scalar operators in qed3,” Journal of High Energy Physics 2016 (2016b), 10.1007/jhep08(2016)069.
* Kaul and Sachdev (2008) Ribhu K. Kaul and Subir Sachdev, “Quantum criticality of u(1) gauge theories with fermionic and bosonic matter in two spatial dimensions,” Phys. Rev. B 77, 155105 (2008).
* Li and Su (2017) Zhijin Li and Ning Su, “3D CFT Archipelago from Single Correlator Bootstrap,” arXiv e-prints , arXiv:1706.06960 (2017), arXiv:1706.06960 [hep-th] .
* Erramilli _et al._ (2020) Rajeev S. Erramilli, Luca V. Iliesiu, Petr Kravchuk, Walter Landry, David Poland, and David Simmons-Duffin, “blocks_3d: Software for general 3d conformal blocks,” arXiv e-prints , arXiv:2011.01959 (2020), arXiv:2011.01959 [hep-th] .
* Simmons-Duffin (2015) David Simmons-Duffin, “A Semidefinite Program Solver for the Conformal Bootstrap,” JHEP 06, 174 (2015), arXiv:1502.02033 [hep-th] .
* Abanov and Wiegmann (2000) A. G. Abanov and P. B. Wiegmann, “Theta-terms in nonlinear sigma-models,” Nuclear Physics B 570, 685–698 (2000), arXiv:hep-th/9911025 [hep-th] .
* Brézin and Zinn-Justin (1976) E. Brézin and J. Zinn-Justin, “Renormalization of the nonlinear $\sigma$ model in $2+\epsilon$ dimensions—application to the heisenberg ferromagnets,” Phys. Rev. Lett. 36, 691 (1976).
* Brézin _et al._ (1976) E. Brézin, J. Zinn-Justin, and J. C. Le Guillou, “Anomalous dimensions of composite operators near two dimensions for ferromagnets with $o(n)$ symmetry,” Phys. Rev. B 14, 4976 (1976).
* El-Showk and Paulos (2013) Sheer El-Showk and Miguel F. Paulos, “Bootstrapping conformal field theories with the extremal functional method,” Physical Review Letters 111 (2013), 10.1103/physrevlett.111.241601.
# E Pluribus Unum Ex Machina:
Learning from Many Collider Events at Once
Benjamin Nachman<EMAIL_ADDRESS>
Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
Berkeley Institute for Data Science, University of California, Berkeley, CA 94720, USA
Jesse Thaler<EMAIL_ADDRESS>
Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
The NSF AI Institute for Artificial Intelligence and Fundamental Interactions
###### Abstract
There have been a number of recent proposals to enhance the performance of
machine learning strategies for collider physics by combining many distinct
events into a single ensemble feature. To evaluate the efficacy of these
proposals, we study the connection between single-event classifiers and multi-
event classifiers under the assumption that collider events are independent
and identically distributed (IID). We show how one can build optimal multi-
event classifiers from single-event classifiers, and we also show how to
construct multi-event classifiers such that they produce optimal single-event
classifiers. This is illustrated for a Gaussian example as well as for
classification tasks relevant for searches and measurements at the Large
Hadron Collider. We extend our discussion to regression tasks by showing how
they can be phrased in terms of parametrized classifiers. Empirically, we find
that training a single-event (per-instance) classifier is more effective than
training a multi-event (per-ensemble) classifier, at least for the cases we
studied, and we relate this fact to properties of the loss function gradient
in the two cases. While we did not identify a clear benefit from using multi-
event classifiers in the collider context, we speculate on the potential value
of these methods in cases involving only approximate independence, as relevant
for jet substructure studies.
Preprint: MIT-CTP 5271
###### Contents
1. I Introduction
2. II The Statistics of Per-Ensemble Learning
1. II.1 Review of Per-Instance Learning
2. II.2 Per-Ensemble Binary Classification
3. II.3 Comparing the Loss Gradients
4. II.4 Per-Ensemble Regression
1. II.4.1 Maximum Likelihood
2. II.4.2 Classifier Loss
3. II.4.3 Direct Regression
5. II.5 Beyond Regression
3. III Empirical Studies
1. III.1 Classifiers: Multi-Event from Single-Event
1. III.1.1 Two Gaussian Example
2. III.1.2 Dijet Resonance Search
3. III.1.3 Top Quark Mass Measurement
2. III.2 Classifiers: Single-Event from Multi-Event
3. III.3 Comparison of Regression Strategies
1. III.3.1 Gaussian Mean Example
2. III.3.2 Top Quark Mass Measurement
4. III.4 Beyond Regression Example
4. IV Conclusions
5. A Deriving Maximum Likelihood Classifier Loss
## I Introduction
Modern machine learning techniques are being widely applied to enhance or
replace existing analysis techniques across collider physics [1, 2, 3, 4, 5,
6]. These approaches hold great promise for new particle searches, for
Standard Model measurements, and for high-energy nuclear physics
investigations. A subset of these proposals have advocated for a multi-event
strategy whereby a machine-learned function acts on multiple collision events
at the same time [7, 8, 9, 10, 11, 12, 13, 14]. This multi-event (per-
ensemble) strategy contrasts with more typical single-event (per-instance)
machine learning methods that process one event at a time, although both
strategies make use of many events during the training process.
Intuitively, an ensemble approach might seem like a more promising learning
strategy because there is more information contained in $N>1$ collision events
than in one single event. There is, however, an important distinction between
the amount of information contained in a data set and the amount of
information needed to encode a machine-learned function. For this reason,
there need not be a gain from using multi-event strategies over single-event
strategies in the context of machine learning.
In this paper, we show that when directly compared on the same task, there is
indeed no informational benefit from training a function that processes
multiple events simultaneously compared to training a function that processes
only a single event at a time. This fact can be easily understood from the
statistical structure of collision data. To test for a practical benefit, we
perform empirical comparisons of per-ensemble and per-instance methods on
benchmark tasks relevant for the Large Hadron Collider (LHC), finding that
single-event (per-instance) methods are more effective for the cases we
studied.
To an excellent approximation, collider events are statistically independent
and identically distributed (IID). In simulation, this is exactly true up to
deficiencies in random number generators. In data, there are some small time-
dependent effects from changing conditions and there are also some
correlations between events introduced by detector effects with timescales
longer than a typical bunch crossing. These event-to-event correlations,
however, are truly negligible when considering the set of events typically
used for physics analysis that are selected by triggers. The probability for
two events next to each other in time to be saved by the triggers is
effectively zero, since triggers save only a tiny fraction of events. The IID
nature of collision events therefore ensures that the information content is
the same for ensembles of events and for single events drawn from an ensemble.
In equations, the probability to observe $N$ events $x_{i}$ is
$p(\{x_{1},\ldots,x_{N}\}|\theta)=\prod_{i=1}^{N}p(x_{i}|\theta),$ (1)
where $\theta$ represents possible parameters of the generative model, such as
the physics process being studied or the values of coupling constants. The
optimal classifier to distinguish whether events have been generated via
$\theta_{A}$ or via $\theta_{B}$ depends only on the per-ensemble likelihood
ratio [15]:
$\frac{p(\{x_{1},\ldots,x_{N}\}|\theta_{A})}{p(\{x_{1},\ldots,x_{N}\}|\theta_{B})}=\prod_{i=1}^{N}\frac{p(x_{i}|\theta_{A})}{p(x_{i}|\theta_{B})},$
(2)
which by the IID assumption only depends on knowing the per-instance
likelihood ratio $p(x_{i}|\theta_{A})/p(x_{i}|\theta_{B})$. This equality
explains the informational equivalence of per-ensemble and per-event learning.
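The factorization in Eq. (2) is easy to verify numerically. The following sketch uses an assumed pair of Gaussian hypotheses, $\theta_{A}\to\mathcal{N}(+0.1,1)$ and $\theta_{B}\to\mathcal{N}(-0.1,1)$ (illustrative values, not taken from the studies below), and checks that the per-ensemble likelihood ratio is assembled entirely from per-instance quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) hypotheses: theta_A -> N(+0.1, 1), theta_B -> N(-0.1, 1)
mu_A, mu_B, sigma = 0.1, -0.1, 1.0

def log_pdf(x, mu):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

x = rng.normal(mu_A, sigma, size=50)  # one "ensemble" of N = 50 IID events

# Per-ensemble log likelihood ratio, i.e. the log of Eq. (2) ...
log_lr_ensemble = np.sum(log_pdf(x, mu_A)) - np.sum(log_pdf(x, mu_B))

# ... equals the sum of per-instance log likelihood ratios
log_lr_per_event = np.sum(log_pdf(x, mu_A) - log_pdf(x, mu_B))

# The single-event classifier c(x) = p_A / (p_A + p_B) carries the same
# information: the product of per-event odds c/(1-c) recovers Eq. (2)
c = 1.0 / (1.0 + np.exp(log_pdf(x, mu_B) - log_pdf(x, mu_A)))
odds_product = np.prod(c / (1.0 - c))
C_N = odds_product / (1.0 + odds_product)  # N-event classifier output in [0, 1]

assert np.isclose(log_lr_ensemble, log_lr_per_event)
assert np.isclose(np.log(odds_product), log_lr_ensemble)
```

The same product-of-odds construction is what allows an optimal multi-event classifier to be assembled from an optimal single-event one.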
Given the simplicity of Eq. (2), why are we writing a whole paper on this
topic (apart from the opportunity to invoke a gratuitously Latinate paper
title that incorporates an aspiration for national unity)? The studies in
Refs. [7, 8, 9, 10, 11, 12, 13, 14] find that per-ensemble learning is
effective for their respective tasks, in some cases arguing why per-instance
learning is deficient. It is certainly true that a set of events
$\{x_{1},\ldots,x_{N}\}$ contains more information than a single event
$x_{i}$ drawn from this set. What we will show in this paper is that if one
carefully combines the per-instance information, one can recover the per-
ensemble benefit, with the potential for a substantially reduced training
cost. We emphasize that our analysis does not contradict the studies in Refs.
[7, 8, 9, 10, 11, 12, 13, 14]; rather this work suggests the possibility of
achieving the same or better results by replacing per-ensemble learning with
per-instance learning. There may be specialized contexts where per-ensemble
learning is superior, particularly if the training procedure itself can be
made simpler, such as in the linear regression approach of Ref. [12]. This
paper also gives us a chance to mention some facts about loss functions that
are well known in the statistics literature but might not be as well
appreciated in collider physics. Moving away from the IID case, we speculate
on the relevance of our analysis for jet substructure tasks where there is a
notion of approximate independence of emissions.
The remainder of this paper is organized as follows. In Sec. II, we provide
the formal statistical basis for building multi-event classifiers from single-
event classifiers, and vice versa, under the IID assumption. We also explain
how regression tasks can be translated into the language of per-instance
parametrized classification. In Sec. III, we present empirical studies that
corroborate these analytic results. Our conclusions are given in Sec. IV.
## II The Statistics of Per-Ensemble Learning
### II.1 Review of Per-Instance Learning
Suppose that a collider event is represented by features in
$\mathbb{E}=\mathbb{R}^{M}$ and we are trying to train a binary classifier to
learn a target in $[0,1]$. Let $c:\mathbb{E}\rightarrow[0,1]$ be a function
that processes a single event, with the goal of distinguishing events being
generated by $\theta_{A}$ ($c\to 1$) versus those generated by $\theta_{B}$
($c\to 0$). Such a function can be obtained by minimizing an appropriate loss
functional, such as the binary cross entropy:
$L_{\rm BCE}[c]=-\int dx\,\Big(p(x|\theta_{A})\log c(x)+p(x|\theta_{B})\log(1-c(x))\Big),$ (3)
where $p(x|\theta)$ is the probability density of $x\in\mathbb{E}$ given class
$\theta$. Here and throughout this discussion, we consider the infinite
statistics limit such that we can replace sums over events by integrals. We
have also dropped the prior factors $p(\theta_{i})$, assuming that one has
equal numbers of examples from the two hypotheses during training. While this
is often true in practice, it is not strictly necessary for our main
conclusions, though it does simplify the notation. It is well-known [16, 17]
(also in high-energy physics [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
30]) that an optimally trained $c$ will have the following property:
$\frac{c(x)}{1-c(x)}=\frac{p(x|\theta_{A})}{p(x|\theta_{B})},$ (4)
such that one learns the per-instance likelihood ratio. By the Neyman–Pearson
lemma [15], this defines the optimal single-event classifier.
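As a minimal illustration of Eq. (4), consider again assumed 1D Gaussian hypotheses $\mathcal{N}(\pm 0.1,1)$ (illustrative values), for which the exact log likelihood ratio is linear, $\log p_{A}/p_{B}=0.2x$. A logistic model $c(x)=\mathrm{sigmoid}(wx+b)$ can therefore represent the optimal classifier exactly, and minimizing the binary cross entropy by plain gradient descent recovers the likelihood ratio through the learned log odds:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed (illustrative) hypotheses: theta_A -> N(+0.1, 1), theta_B -> N(-0.1, 1)
n = 100_000
x = np.concatenate([rng.normal(+0.1, 1.0, n), rng.normal(-0.1, 1.0, n)])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = theta_A, 0 = theta_B

w, b = 0.0, 0.0
for _ in range(500):  # gradient descent on the binary cross entropy, Eq. (3)
    c = 1.0 / (1.0 + np.exp(-(w * x + b)))  # c(x) = sigmoid(w x + b)
    w -= 1.0 * np.mean((c - y) * x)
    b -= 1.0 * np.mean(c - y)

# Eq. (4): the learned log odds log c/(1-c) = w x + b approximates
# log p_A/p_B = 0.2 x, so w should land near 0.2 and b near 0
# (up to sampling noise)
print(w, b)
```

This is the per-instance route to the likelihood ratio; the product-of-odds construction then promotes it to any ensemble size $N$ without retraining.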
Loss Name | $A(f)$ | $B(f)$ | $\operatorname*{argmin}_{f}L[f]$ | Integrand of $-\min_{f}L[f]$ | Related Divergence/Distance
---|---|---|---|---|---
Binary Cross Entropy | $\log f$ | $\log(1-f)$ | $\frac{p_{A}}{p_{A}+p_{B}}$ | $p_{A}\log\frac{p_{A}}{p_{A}+p_{B}}+(A\leftrightarrow B)$ | $2\big(\text{Jensen-Shannon}-\log 2\big)$
Mean Squared Error | $-(1-f)^{2}$ | $-f^{2}$ | $\frac{p_{A}}{p_{A}+p_{B}}$ | $-\frac{p_{A}p_{B}}{p_{A}+p_{B}}$ | $\frac{1}{2}\big(\text{Triangular}-1\big)$
Square Root | $\frac{-1}{\sqrt{f}}$ | $-\sqrt{f}$ | $\frac{p_{A}}{p_{B}}$ | $-2\sqrt{p_{A}p_{B}}$ | $2\big(\text{Hellinger}^{2}-1\big)$
Maximum Likelihood Cl. | $\log f$ | $1-f$ | $\frac{p_{A}}{p_{B}}$ | $p_{A}\log\frac{p_{A}}{p_{B}}$ | Kullback–Leibler
Table 1: Examples of loss functionals in the form of Eq. (5), with the
associated location and value of the loss minimum, using the shorthand
$p_{i}\equiv p(x|\theta_{i})$. We have used the symbol $f$ in all cases to
denote the classifier, but some choices require explicit constraints on $f$ to
be either non-negative or in the range $[0,1]$. In the last column, we
indicate the relation of the loss minimum to statistical divergences and
distances, up to an overall scaling and offset. See Ref. [31] for additional
relations.
There are many loss functionals that satisfy this property. Consider a more
general loss functional that depends on a learnable function
$f:\mathbb{E}\rightarrow\mathbb{R}$ (which unlike $c$ may or may not map to
$[0,1]$) as well as fixed rescaling functions $A:\mathbb{R}\to\mathbb{R}$ and
$B:\mathbb{R}\to\mathbb{R}$:
$\displaystyle L[f]=-\int dx\,\Big(p(x|\theta_{A})\,A(f(x))+p(x|\theta_{B})\,B(f(x))\Big).$ (5)
Taking the functional derivative with respect to $f(x)$, the extremum of
$L[f]$ satisfies the property:
$\displaystyle-\frac{B^{\prime}(f(x))}{A^{\prime}(f(x))}=\frac{p(x|\theta_{A})}{p(x|\theta_{B})}.$
(6)
As long as $-B^{\prime}(f)/A^{\prime}(f)$ is a monotonic rescaling of $f$ and
the overall loss functional is convex, then the function $f(x)$ learned by
minimizing Eq. (5) defines an optimal classifier. In many cases, the minimum
value of $L[f]$ itself is interesting in the context of statistical
divergences and distances [31], and a few examples are shown in Table 1.
To simplify the following discussion, we will focus on the “maximum likelihood
classifier” (MLC) loss:
$\displaystyle L_{\text{MLC}}[f]=-\int dx\,\Big(p(x|\theta_{A})\log f(x)+p(x|\theta_{B})\,(1-f(x))\Big).$ (7)
This is of the general form in Eq. (5) with $A(f)=\log f$ and $B(f)=1-f$. To
our knowledge, the MLC was first introduced in the collider physics context in
Refs. [32, 33], although with an exponential parametrization of $f(x)$. We
call Eq. (7) the MLC loss to distinguish it from the related maximum
likelihood loss that is often used to fit generative models [34, 35, 36].
Using Eq. (6), the minimum of this loss functional yields directly the
likelihood ratio:
$\operatorname*{argmin}_{f}L_{\text{MLC}}[f]=\frac{p(x|\theta_{A})}{p(x|\theta_{B})},$
(8)
which will be useful to simplify later analyses.[^1]

[^1]: A variation of Eq. (8) holds for $A(f)=\log C(f)$ and $B(f)=1-C(f)$, where $C(f)$ is any monotonically increasing function with range that covers $(0,\infty)$. In this case, $C(\operatorname*{argmin}_{f}L[f])=p(x|\theta_{A})/p(x|\theta_{B})$. This can be useful in practice if $C(f)$ is everywhere positive, since $f$ can take on negative values and still yield a valid likelihood ratio. See Fig. 10 for an empirical study of $C(f)=\exp f$.

The MLC loss functional value at the minimum is
$-\min_{f}L_{\text{MLC}}[f]=\int
dx\,p(x|\theta_{A})\log\frac{p(x|\theta_{A})}{p(x|\theta_{B})},$ (9)
which is the Kullback–Leibler (KL) divergence, also known as the relative
entropy from $p(x|\theta_{B})$ to $p(x|\theta_{A})$. See App. A for an
intuitive derivation of Eq. (7).
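As a quick numerical check of Eq. (9), one can evaluate $-L_{\text{MLC}}$ at the known optimum $f=p(x|\theta_{A})/p(x|\theta_{B})$ by Monte Carlo for two unit-variance Gaussians, for which the KL divergence has the closed form $\tfrac{1}{2}(\mu_{A}-\mu_{B})^{2}$. This is a sketch with an analytic likelihood ratio standing in for a trained classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
muA, muB = 0.1, -0.1
xA = rng.normal(muA, 1.0, 500_000)   # samples from p(x|theta_A)
xB = rng.normal(muB, 1.0, 500_000)   # samples from p(x|theta_B)

# analytic log-likelihood ratio for unit-variance Gaussians
def log_f(x):
    return (muA - muB)*x + 0.5*(muB**2 - muA**2)

# -min L_MLC = E_A[log f] + E_B[1 - f], cf. Eqs. (7) and (9)
kl_est = log_f(xA).mean() + (1.0 - np.exp(log_f(xB))).mean()
kl_exact = 0.5*(muA - muB)**2        # closed form for these Gaussians
print(kl_est, kl_exact)              # both ≈ 0.02 nats
```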
### II.2 Per-Ensemble Binary Classification
To move from single-event classification to multi-event classification, we
want to learn a classification function $f_{N}$ that can process $N$ events
simultaneously. Here, we are using $f_{N}:\mathbb{E}^{N}\rightarrow\mathbb{R}$
instead of $c_{N}:\mathbb{E}^{N}\rightarrow[0,1]$ to avoid algebraic
manipulations like Eq. (4). We will use the vector notation
$\vec{x}=\{x_{1},\dots,x_{N}\}$ (10)
to represent an element of $\mathbb{E}^{N}$. Our goal is to distinguish
whether $\vec{x}$ is drawn from $p(\vec{x}|\theta_{A})$ ($f_{N}\to\infty$) or
from $p(\vec{x}|\theta_{B})$ ($f_{N}\to 0$). Note that we are trying to
classify a pure event ensemble as coming from either $\theta_{A}$ or
$\theta_{B}$, which is a different question than trying to determine the
proportion of events drawn from each class in a mixed event ensemble. For
$N=1$, $f_{1}$ is the same as $f$ discussed in Eq. (5).
If $f_{N}$ is trained optimally, then the classification performance of
$f_{N}$ evaluated on $N>1$ events will be better than the performance of
$f_{1}$ evaluated on a single event, as relevant to the discussions in Refs.
[7, 8, 9, 10, 11, 12, 13, 14]. The key point of this paper is that one can
construct a classifier $f_{1\to N}$ that is built only from $f_{1}$, acts on
$N$ events, and has the same asymptotic performance as $f_{N}$.
Using the MLC loss in Eq. (7), but now applied to $N$ events, we have
$\displaystyle L_{\text{MLC}}[f_{N}]=-\int d^{N}x\,\Big(p(\vec{x}|\theta_{A})\log f_{N}(\vec{x})+p(\vec{x}|\theta_{B})\,(1-f_{N}(\vec{x}))\Big),$ (11)
whose minimum is the per-ensemble likelihood ratio:
$\operatorname*{argmin}_{f_{N}}L_{\text{MLC}}[f_{N}]=\frac{p(\vec{x}|\theta_{A})}{p(\vec{x}|\theta_{B})}.$
(12)
By the Neyman–Pearson lemma, this yields the optimal per-ensemble classifier.
On the other hand, once we have trained a single-event classifier $f_{1}$
using Eq. (7), we can build a multi-event classifier $f_{1\to N}$ without any
additional training:
$f_{1\to N}(\vec{x})\equiv\prod_{i=1}^{N}f_{1}(x_{i})\quad\rightarrow\quad\frac{p(\vec{x}|\theta_{A})}{p(\vec{x}|\theta_{B})},$ (13)
where in the last step we have combined the solution found in Eq. (8) with the
IID condition in Eq. (2). Whereas minimizing Eq. (11) requires sampling over
$\mathbb{E}^{N}$, constructing $f_{1\to N}$ only requires sampling over
$\mathbb{E}$, which is a considerable reduction in computational burden for
large $N$. The technical details of carrying out this procedure are explained
in Sec. III.1.
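The construction in Eq. (13) can be sketched numerically. Below, the analytic per-instance likelihood ratio stands in for a trained $f_{1}$ (the Gaussian means anticipate the example of Sec. III.1), and a rank-based AUC shows that the product classifier acting on $N=10$ events outperforms the single-event classifier:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 10, 20_000                      # ensemble size, number of ensembles
XA = rng.normal(+0.1, 1.0, (M, N))     # ensembles drawn from theta_A
XB = rng.normal(-0.1, 1.0, (M, N))     # ensembles drawn from theta_B

def f1(x):                             # per-instance likelihood ratio (Eq. 8)
    return np.exp(0.2*x)

def f_1_to_N(X):                       # Eq. (13): product over the ensemble
    return f1(X).prod(axis=1)

def auc(sA, sB):                       # rank-based AUC (Mann-Whitney statistic)
    s = np.concatenate([sA, sB])
    ranks = s.argsort().argsort()[:len(sA)] + 1
    return (ranks.sum() - len(sA)*(len(sA) + 1)/2)/(len(sA)*len(sB))

auc1 = auc(f1(XA[:, 0]), f1(XB[:, 0]))   # single-event classifier
aucN = auc(f_1_to_N(XA), f_1_to_N(XB))   # ensemble classifier from f_1
print(auc1, aucN)                        # aucN is substantially larger
```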
Going in the converse direction, we can learn a single-event classifier
$f_{N\to 1}$ starting from a constrained multi-event classifier
$\tilde{f}_{N}$. Using weight sharing, we can minimize Eq. (11) subject to the
constraint that $\tilde{f}_{N}$ takes the functional form:
$\tilde{f}_{N}(\{x_{1},\dots,x_{N}\})=\prod_{i=1}^{N}f_{N\to 1}(x_{i}),$ (14)
where $f_{N\to 1}(x)$ is a learnable function. Under the IID assumption,
$\tilde{f}_{N}$ can still learn the per-ensemble likelihood ratio, but the
learned $f_{N\to 1}(x)$ will now be the per-instance likelihood ratio, at
least asymptotically.[^2] An examination of this converse construction is presented in Sec. III.2.

[^2]: In the case that the two samples are composed of mixtures of two categories, the learned $f_{N\to 1}(x)$ will be the ratio of the mixed-sample likelihoods, which is monotonically related to the optimal pure-sample classifier, as discussed in Ref. [37].
### II.3 Comparing the Loss Gradients
We have shown that the per-ensemble classifier $f_{N}$ and the composite per-
event classifier $f_{1\to N}$ have the same asymptotic information content,
but one might wonder if there is nevertheless a practical performance gain to
be had using per-ensemble learning.
Under the IID assumption, the optimal $f_{N}$ takes the form of
$\tilde{f}_{N}$ in Eq. (14), and in our empirical studies, we found no benefit
to letting $f_{N}$ have more functional freedom. Therefore, to get a sense of
the efficacy of per-ensemble versus per-instance training, we can compare the
effective loss functions for $f_{N\to 1}$ and $f_{1}$. Since the inputs and
outputs of these functions are the same (i.e. $\mathbb{E}\to\mathbb{R}$), we
can do an apples-to-apples comparison of their behavior under gradient
descent. The following analysis assumes that the neural network training
occurs in the vicinity of the global minimum of the loss function.
For the per-ensemble case, plugging Eq. (14) into Eq. (11) and using the IID
relation in Eq. (1), we find the effective loss functional:
$\displaystyle L_{\rm MLC}[f_{N\to 1}]+1=-N\int dx\,p(x|\theta_{A})\,\log f_{N\to 1}(x)+\left(\int dx\,p(x|\theta_{B})\,f_{N\to 1}(x)\right)^{N}.$ (15)
This is to be contrasted with the per-instance loss functional from Eq. (7),
repeated for convenience with the $f_{1}$ notation and typeset to be parallel
to the above:
$\displaystyle L_{\text{MLC}}[f_{1}]+1=-\int dx\,p(x|\theta_{A})\,\log f_{1}(x)+\int dx\,p(x|\theta_{B})\,f_{1}(x).$ (16)
To understand the loss gradients, we can Taylor expand the learned functions
about the optimal solution:
$\displaystyle f_{N\to 1}(x)=\frac{p(x|\theta_{A})}{p(x|\theta_{B})}+\epsilon(x),$ (17)

$\displaystyle f_{1}(x)=\frac{p(x|\theta_{A})}{p(x|\theta_{B})}+\epsilon(x).$ (18)
Plugging these into their respective loss functionals and looking at the
leading-order variations, we have:
$\displaystyle\frac{\delta L_{\rm MLC}[f_{N\to 1}]}{N}=\int dx\,\frac{\big(p(x|\theta_{B})\,\epsilon(x)\big)^{2}}{2\,p(x|\theta_{A})}+\frac{N-1}{2}\left(\int dx\,p(x|\theta_{B})\,\epsilon(x)\right)^{2},$ (19)

$\displaystyle\delta L_{\rm MLC}[f_{1}]=\int dx\,\frac{\big(p(x|\theta_{B})\,\epsilon(x)\big)^{2}}{2\,p(x|\theta_{A})}.$ (20)
These expressions are quadratic in $\epsilon(x)$, which means that we are
expanding around the correct minimum.
The expression for $\delta L_{\rm MLC}[f_{1}]$ involves a single integral over
$x$, so under gradient descent, the value of $\epsilon(x)$ can be
independently adjusted at each point in phase space to find the minimum. By
contrast, $\delta L_{\rm MLC}[f_{N\to 1}]$ has an additional piece involving
an integral squared, so even if at a given point in phase space $x_{0}$ we
have achieved $\epsilon(x_{0})=0$, gradient descent will tend to push
$\epsilon(x_{0})$ away from the correct value until $\epsilon(x)=0$
everywhere. This correlated structure explains the slower convergence of
$L_{\rm MLC}[f_{N\to 1}]$ compared to $L_{\rm MLC}[f_{1}]$ in our empirical
studies. While we focused on the MLC loss to simplify the algebra, the
appearance of these (typically counterproductive) correlations in the loss
gradient appears to be a generic feature of per-ensemble learning.
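The pathology identified in Eq. (19) can be seen in a two-bin toy problem (the bin probabilities below are illustrative). Starting from a function that is already optimal in one bin, the per-instance gradient leaves that bin untouched, while the per-ensemble gradient pushes it away from its optimum through the squared-integral term:

```python
import numpy as np

# Two-bin toy illustrating the correlated gradient term in Eq. (19).
pA = np.array([0.6, 0.4])   # p(x|theta_A) on two bins
pB = np.array([0.4, 0.6])   # p(x|theta_B) on two bins
N = 10

f = np.array([pA[0]/pB[0], 1.0])   # bin 0 already optimal, bin 1 not

# Per-instance loss gradient (from Eq. 16): dL/df = -pA/f + pB
grad_1 = -pA/f + pB

# Per-ensemble effective loss gradient (from Eq. 15, divided by N):
# dL/df = -pA/f + (sum_x pB f)^(N-1) * pB
S = np.dot(pB, f)
grad_N = -pA/f + S**(N - 1)*pB

print(grad_1[0])   # ≈ 0: per-instance descent leaves the optimal bin alone
print(grad_N[0])   # nonzero: ensemble descent pushes bin 0 off its optimum
```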
### II.4 Per-Ensemble Regression
While the discussion above focused on binary classification, the same basic
idea applies to regression problems as well. The goal of regression is to
infer parameters $\theta$ from the data $\vec{x}$. There are a variety of
approaches that can be used for this task, and each can be connected to
parametrized per-instance classification.
#### II.4.1 Maximum Likelihood
Maximum likelihood is the most common strategy for inference in collider
physics. Symbolically, we are trying to find
$\theta_{\rm ML}=\operatorname*{argmax}_{\theta}p(\vec{x}|\theta).$ (21)
One way to determine $\theta_{\rm ML}$ is with a two-step approach. First, one
can train a parametrized classifier $f(x,\theta)$ [26, 38] using, e.g., the
per-instance MLC loss:
$\displaystyle L_{\rm MLC}[f]=-\int dx\,d\theta\,\Big(p(x|\theta)\,p(\theta)\log f(x,\theta)+p(x|\theta_{0})\,p(\theta)\,(1-f(x,\theta))\Big).$ (22)
The top line corresponds to a synthetic dataset where every event is generated
from $p(x|\theta)$ with different $\theta$ values drawn from the probability
density $p(\theta)$. The bottom line corresponds to a synthetic dataset where
every event is generated using the same $p(x|\theta_{0})$ for fixed
$\theta_{0}$ and then augmented with a value $\theta$ that follows from
$p(\theta)$ independently of $x$. Minimizing Eq. (22) with respect to
$f(x,\theta)$, the asymptotic solution is the likelihood ratio:
$f(x,\theta)=\frac{p(x|\theta)}{p(x|\theta_{0})},$ (23)
where the factors of $p(\theta)$ have canceled out. Second, one can estimate
$\theta_{\rm ML}$ by using the IID properties of the event ensemble to relate
likelihoods to the classifier output $f(x,\theta)$:
$\displaystyle\theta_{\rm ML}=\operatorname*{argmin}_{\theta}\left\{-\sum_{i=1}^{N}\log p(x_{i}|\theta)\right\}=\operatorname*{argmin}_{\theta}\left\{-\sum_{i=1}^{N}\log\frac{p(x_{i}|\theta)}{p(x_{i}|\theta_{0})}\right\}\approx\operatorname*{argmin}_{\theta}\left\{-\sum_{i=1}^{N}\log f(x_{i},\theta)\right\}.$ (24)
Thus, even though maximum likelihood regression uses information from the full
event ensemble, only a parametrized per-instance classifier is required for
this procedure.
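The scan in Eq. (24) can be sketched with an analytic stand-in for the parametrized classifier. For a unit-variance Gaussian with unknown mean $\theta$ and baseline $\theta_{0}=0$, the log-ratio is $\log f(x,\theta)=\theta x-\theta^{2}/2$, and the classifier-based estimate lands on the sample mean, i.e. the exact $\theta_{\rm ML}$:

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true, theta0 = 0.3, 0.0
x = rng.normal(theta_true, 1.0, 100_000)

# stand-in for a parametrized classifier f(x, theta) = p(x|theta)/p(x|theta_0):
# sum_i log f(x_i, theta) = theta * sum(x) - N * theta^2 / 2
thetas = np.linspace(-1.0, 1.0, 2001)
nll = -(thetas*x.sum() - 0.5*len(x)*thetas**2)   # Eq. (24)
theta_ml = thetas[np.argmin(nll)]
print(theta_ml)   # ≈ sample mean of x ≈ 0.3
```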
#### II.4.2 Classifier Loss
Two recent proposals for parameter estimation are explicitly built on
classifiers for regression [18, 19]. For any classifier, one can perform the
following optimization:[^3]

$\displaystyle\theta_{\rm CL}=\operatorname*{argmax}_{\theta^{\prime}}\Big\{\text{loss of a classifier trained to distinguish $\theta^{\prime}$ from $\theta_{\rm data}$}\Big\}.$ (25)

[^3]: Note that Ref. [18] used the (non-differentiable) area under the curve instead of the classifier loss, as it is not sensitive to differences in the prior $p(\theta)$ between the two data sets.
Here, we are imagining that the $\theta^{\prime}$ samples come from synthetic
data sets. The appearance of a maximum instead of minimum in Eq. (25) is
because, as highlighted in Table 1, it is negative loss functions that
correspond to statistical divergences and distances.
In general, the $\theta_{\rm CL}$ that minimizes the classifier loss will be
different from the $\theta_{\rm ML}$ that maximizes the likelihood. For the
special case of the MLC loss, though, they are the same in the asymptotic
limit if we set $\theta_{A}=\theta_{\rm data}$ and
$\theta_{B}=\theta^{\prime}$. To see this, recall from Eq. (9) that after
training, the value of the MLC loss is related to the KL divergence:
$\displaystyle\operatorname*{argmax}_{\theta^{\prime}}\big\{\min_{f}L_{\text{MLC}}[f]\big\}=\operatorname*{argmax}_{\theta^{\prime}}\left\{-\int dx\,p(x|\theta_{\rm data})\log\frac{p(x|\theta_{\rm data})}{p(x|\theta^{\prime})}\right\}\approx\operatorname*{argmax}_{\theta^{\prime}}\left\{\sum_{i=1}^{N}\log\frac{p(x_{i}|\theta^{\prime})}{p(x_{i}|\theta_{\rm data})}\right\}=\operatorname*{argmin}_{\theta^{\prime}}\left\{-\sum_{i=1}^{N}\log p(x_{i}|\theta^{\prime})\right\}=\theta_{\rm ML},$ (26)
where the sum is over data events.
#### II.4.3 Direct Regression
In terms of information content, a regression model trained in the usual way
can be built from a parametrized classification model. Suppose that
$\theta\in\mathbb{R}^{Q}$ and $g_{N}:\mathbb{E}^{N}\rightarrow\mathbb{R}^{Q}$
is a regression model trained with the mean squared error loss:
$L_{\rm MSE}[g_{N}]=\int d^{N}x\,d\theta\,p(\vec{x},\theta)\,\big(g_{N}(\vec{x})-\theta\big)^{2}.$ (27)
It is well known that the optimally trained $g_{N}$ will be related to the
expectation value of $\theta$:
$g_{N}(\vec{x})=\mathbb{E}[\theta|\vec{x}]=\int
d\theta\,\theta\,p(\theta|\vec{x}).$ (28)
Other loss functions approximate other statistics, as discussed in Ref. [39].
For example, the mean absolute error loss approximates the median of $\theta$.
Ultimately, all direct regression methods are functionals of
$p(\theta|\vec{x})$.
We can relate $p(\theta|\vec{x})$ to a parametrized classifier
$f_{N}(\vec{x},\theta)$ trained to distinguish $\theta$ from a baseline
$\theta_{0}$:
$\displaystyle p(\theta|\vec{x})=\frac{p(\vec{x}|\theta)\,p(\theta)}{p(\vec{x})}=\frac{p(\vec{x}|\theta)\,p(\theta)}{\int d\theta^{\prime}\,p(\vec{x}|\theta^{\prime})\,p(\theta^{\prime})}=\frac{\frac{p(\vec{x}|\theta)}{p(\vec{x}|\theta_{0})}\,p(\theta)}{\int d\theta^{\prime}\,\frac{p(\vec{x}|\theta^{\prime})}{p(\vec{x}|\theta_{0})}\,p(\theta^{\prime})}=\frac{f_{N}(\vec{x},\theta)\,p(\theta)}{\int d\theta^{\prime}\,f_{N}(\vec{x},\theta^{\prime})\,p(\theta^{\prime})},$ (29)
where $p(\theta)$ is the probability density of $\theta$ used during the
training of $g_{N}$. Following the same logic as Sec. II.2, the per-ensemble
classifier $f_{N}(\vec{x},\theta)$ can be related to a per-instance classifier
$f_{1}(x,\theta)$. Therefore, even though $g_{N}$ acts on $N$ events, it has
the same information content as a parametrized classifier that acts on single
events.
Performing regression via Eqs. (28) and (29) is straightforward but tedious.
In practice, one would train a parametrized per-instance classifier
$f_{1}(x,\theta)$ as in Eq. (23), multiply it to construct
$f_{N}(\vec{x},\theta)=\prod_{i=1}^{N}f_{1}(x_{i},\theta)$, and then sample
over values of $\theta$ to approximate the integrals. We show examples of the
above regression strategies in Sec. III.3.
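A minimal sketch of this procedure follows, using the analytic per-instance log-ratio in place of a trained classifier and a conjugate Gaussian prior so that the exact posterior mean is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = 0.5
x = rng.normal(theta_true, 1.0, 20)    # N = 20 observed events

# per-instance classifier stand-in with theta_0 = 0:
# sum_i log f_1(x_i, theta) = theta * sum(x) - N * theta^2 / 2
thetas = rng.normal(0.0, 1.0, 200_000)          # samples from the prior p(theta)
log_fN = thetas*x.sum() - 0.5*len(x)*thetas**2  # Eq. (13), parametrized in theta
w = np.exp(log_fN - log_fN.max())               # stabilized weights, cf. Eq. (29)
posterior_mean = (w*thetas).sum()/w.sum()       # Eq. (28) via importance sampling

exact = len(x)*x.mean()/(len(x) + 1)            # conjugate-prior result
print(posterior_mean, exact)
```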
### II.5 Beyond Regression
In addition to classification and regression, a standard machine learning task
is density estimation. While some classical machine learning methods like
$k$-nearest neighbors [40, 41] do require multi-instance information at
prediction time, many of the standard deep learning solutions to implicit or
explicit generative modeling are built on per-instance functions. Such methods
include generative adversarial networks [42],[^4] variational autoencoders [44], and normalizing flows [45].

[^4]: In the context of adversarial training, it may be beneficial to use per-ensemble information in the discriminator to mitigate mode collapse, as utilized in Ref. [14]. This is also the philosophy behind mini-batch discrimination [43].
One reason for computing explicit densities is to estimate the distance to a
reference density. A common set of tools for this task are the $f$-divergences
mentioned earlier. As discussed in Ref. [31] and highlighted in Table 1, there
is a direct mapping between the loss value of a per-instance classification
task and a corresponding $f$-divergence between the underlying probability
densities.
A related quantity is the mutual information between two random variables $X$
and $Y$:
$\displaystyle I(X,Y)=\int dx\,dy\,p(x,y)\log\frac{p(x,y)}{p(x)\,p(y)}.$ (30)
For example, $Y$ could be binary (a class label) and then $I(X,Y)$ would
encode how much information (in units of nats) is available in $X$ for doing
classification. This can be helpful in the context of ranking input features,
and was studied in the context of quark/gluon jet classification in Ref. [46].
Naively, Eq. (30) might seem like it requires estimating the densities $p(x)$,
$p(y)$, and $p(x,y)$, which in turn may require ensemble information (see e.g.
Ref. [47] for a study in the context of HEP). On the other hand, Eq. (30)
takes the same form as the KL divergence in Eq. (9). Therefore, this quantity
can be estimated using a similar strategy as in earlier sections, by training
a classifier to distinguish data following $p(x,y)$ from data following
$p(x)\,p(y)$ using the MLC loss. The value of the loss at the minimum will be
an estimate of the mutual information. A simple example of this will be
studied in Sec. III.4.
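As a sketch of this estimator (with the analytic density ratio standing in for an MLC-trained classifier), consider a bivariate Gaussian with correlation $\rho$, for which $I(X,Y)=-\tfrac{1}{2}\log(1-\rho^{2})$:

```python
import numpy as np

rng = np.random.default_rng(4)
rho, n = 0.6, 500_000

# correlated pairs ~ p(x, y); shuffling y gives pairs ~ p(x) p(y)
z1, z2 = rng.normal(size=(2, n))
x, y = z1, rho*z1 + np.sqrt(1 - rho**2)*z2
y_shuf = rng.permutation(y)

# analytic log p(x,y)/(p(x)p(y)) for a standard bivariate Gaussian,
# standing in for an MLC-trained classifier f
def log_f(x, y):
    return (-0.5*np.log(1 - rho**2)
            - (rho**2*(x**2 + y**2) - 2*rho*x*y)/(2*(1 - rho**2)))

# -min L_MLC = E_joint[log f] + E_indep[1 - f], cf. Eq. (9)
mi_est = log_f(x, y).mean() + (1.0 - np.exp(log_f(x, y_shuf))).mean()
mi_exact = -0.5*np.log(1 - rho**2)
print(mi_est, mi_exact)   # both ≈ 0.223 nats
```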
## III Empirical Studies
We now present empirical studies comparing per-instance and per-ensemble data
analysis strategies to highlight the points made in Sec. II. Our analyses are
based on three case studies: a simple two Gaussian example, searching for
dijet resonances, and measuring the top quark mass.
### III.1 Classifiers: Multi-Event from Single-Event
As argued in Sec. II.2, under the IID assumption we can build multi-event
classifiers from single-event classifiers. We now demonstrate how to construct
$f_{1\rightarrow N}$ defined in Eq. (13), comparing its performance to
$f_{N}$.
#### III.1.1 Two Gaussian Example
Figure 1: Classification in the two Gaussian example. (a) A histogram of the
Gaussian random variable $X$, for the “signal” ($x_{0}=0.1$) and background
($x_{0}=-0.1$). (b) ROC curves for various binary classifiers. From the
single-event classifier $f_{1}$, we can construct a multi-event classifier
$f_{1\to 10}$ that matches the performance of a classifier trained on 10
events simultaneously ($f_{10}$).
Our first case study involves one-dimensional Gaussian random variables. As
shown in Fig. 1a, we consider two Gaussian distributions
$X\sim\mathcal{N}(\pm\epsilon,1)$, with slightly different means
($x_{0}=\pm\epsilon$) but the same variance ($\sigma=1$). Here, the “signal”
has positive mean while the “background” has negative mean, and we take
$\epsilon=0.1$ for concreteness.
Both the per-instance ($f_{1}$) and per-ensemble ($f_{N}$) classifiers are
parametrized by neural networks and implemented using Keras [48] with the
Tensorflow backend [49] and optimized with Adam [50]. We use the binary cross
entropy loss function so Eq. (4) is needed to convert the classifier output to
a likelihood ratio. Each classifier consists of two hidden layers with 128
nodes per layer. Rectified Linear Unit (ReLU) activation functions are used
for the intermediate layers while sigmoid activation is used for the last
layer. The only difference between the per-instance and per-ensemble networks
is that the input layer has one input for $f_{1}$ but $N$ inputs for $f_{N}$.
We train each network with 50,000 events to minimize the binary cross entropy
loss function, and we test the performance with an additional 50,000 events.
For each network, we train for up to 1000 epochs with a batch size of 10% of the training set, which ensures that the per-instance and per-ensemble networks process the same number of batches per epoch and the same number of underlying events per batch. The training is stopped if the validation loss
does not decrease for 20 consecutive epochs (early stopping). For the ensemble
network, we take $N=10$. We did not do any detailed hyperparameter
optimization for these studies.
In Fig. 1b, we show the performance of the resulting classifiers $f_{1}$ and
$f_{10}$. We checked that the $f_{1}$ classifier parametrized by a neural
network has essentially the same performance as an analytic function derived
by taking the ratio of Gaussian probability densities, which means that the
neural network $f_{1}$ is nearly optimal. As expected, the per-instance
classifier $f_{1}$ has a worse receiver operating characteristic (ROC) curve
than the per-ensemble classifier $f_{10}$. This is not a relevant comparison,
however, because the two are solving different classification tasks (i.e.
classifying individual events as coming from signal or background versus
classifying an ensemble of $N=10$ events as all coming from signal or
background). With Eq. (13), we can use $f_{1}$ to build a $10$-instance
classifier $f_{1\rightarrow 10}$, whose ROC curve is nearly identical to
$f_{10}$, if not even slightly better. Thus, as expected from Eq. (2), all of
the information in the 10-instance classifier is contained in the per-instance
classifier.
#### III.1.2 Dijet Resonance Search
Figure 2: Classification in the dijet resonance search example. (a,b)
Histograms of the four jet features for the signal ($W^{\prime}\to XY$) and
background (QCD dijet) processes. (c) ROC curves for various binary
classifiers. The multi-event classifier $f_{1\to 3}$ (built from $f_{1}$)
outperforms three classifiers trained on triplets of events:
$f_{3}^{\text{list}}$ with randomly ordered inputs, $f_{3}^{\text{sort}}$ with
sorted inputs, and $f_{3}^{\text{set}}$ based on the deep sets/PFN strategy in
Eq. (31) with built-in permutation invariance.
We now consider an example from collider physics, motivated by a search for
new beyond-the-Standard-Model (BSM) particles in a dijet final state. The
simulations used for this study were produced for the LHC Olympics 2020
community challenge [51]. The background process involves generic quantum
chromodynamics (QCD) dijet events with a requirement of at least one such jet
with transverse momentum $p_{T}>1.3$ TeV. The signal process involves the
production of a hypothetical new resonance $W^{\prime}$ with mass
$m_{W^{\prime}}=3.5$ TeV, which decays via $W^{\prime}\rightarrow XY$ to two
hypothetical particles $X$ and $Y$ of masses 500 GeV and 100 GeV,
respectively. Each of the $X$ and $Y$ particles decays promptly into pairs of
quarks. Due to the mass hierarchy between the $W^{\prime}$ boson and its decay
products, the final state is characterized by two large-radius jets with two-
prong substructure. The background and signal are generated using Pythia 8.219
[52, 53]. A detector simulation is performed with Delphes 3.4.1 [54, 55, 56]
using the default CMS detector card. Particle flow objects are used as inputs
to jet clustering, implemented with FastJet 3.2.1 [57, 58] and the
anti-$k_{t}$ algorithm [59] using $R=1.0$ for the radius parameter. Events are
required to have a reconstructed dijet mass within the range
$m_{JJ}\in[3.3,3.7]\,\text{TeV}$.
Four features are used to train our classifiers: the invariant mass of the
lighter jet, the mass difference of the leading two jets, and the
$N$-subjettiness ratios $\tau_{21}$ [60, 61] of the leading two jets. The
observable $\tau_{21}$ quantifies the degree to which a jet is characterized
by two subjets or one subjet, with smaller values indicating two-prong
substructure. The mass features are recorded in units of TeV so that they are
numerically $\mathcal{O}(1)$. Histograms of the four features for signal and
background are shown in Figs. 2a and 2b. The signal jet masses are localized
at the $X$ and $Y$ masses and the $\tau_{21}$ observables are shifted towards
lower values, indicating that the jets have two-prong substructure.
We train a per-instance classifier ($f_{1}$) and a per-ensemble classifier
($f_{3}$) using the same tools as for the Gaussian example above, again using
binary cross entropy for the loss function. Because signal and background are
so well separated in this example, we restrict our attention to $N=3$ to avoid
saturating the performance. Note that this is an artificially constructed
classification problem, since in a more realistic context one would be trying
to estimate the signal fraction in an event ensemble, not classify triplets of
events as all coming from signal or background.
For $f_{1}$, the neural network architecture is the same as Ref. [18] with
four hidden layers, each with 64 nodes and ReLU activation, and an output
layer with sigmoid activation. For $f_{3}$, the neural network involves
$4\times 3=12$ inputs, and the penultimate hidden layer is adjusted to have
128 nodes, yielding a marginal performance gain. In both cases, about 100,000
events are used for testing and training, with roughly balanced classes. All
of the networks are trained for up to 1000 epochs with the same early stopping
condition as in the Gaussian case and with a batch size of 10%. Following Eq.
(13), we construct a tri-event classifier $f_{1\rightarrow 3}$ from $f_{1}$.
The ROC curves for $f_{3}$ and $f_{1\to 3}$ are shown in Fig. 2c, with $f_{1}$
also shown for completeness. Interestingly, the $f_{1\rightarrow 3}$
classifier trained on single events significantly outperforms $f_{3}$ trained
on multiple events. There are a variety of reasons for this, but one important
deficiency of the $f_{3}$ classifier is that it does not respect the
permutation symmetry of its inputs. Because events are IID distributed, there
is no natural ordering of the events, but the fully connected architecture we
are using imposes an artificial ordering. Inspired by Ref. [12], we can break
the permutation symmetry of the inputs by imposing a particular order on the
events. Specifically, we train a network $f_{3}^{\text{sort}}$ where the
triplet of events is sorted by their leading jet mass. Using
$f_{3}^{\text{sort}}$ yields a small gain in performance seen in Fig. 2, but
not enough to close the gap with $f_{1\rightarrow 3}$.
Figure 3: Classification in the top quark mass example. (a) A histogram of
$m_{b_{1}\mu\nu}$ for top quark masses of 172.5 GeV and 175 GeV. The “wgt.”
curve is explained later in Sec. III.3.2, where we test the performance of a
likelihood reweighting. (b) The difference in efficiency for the 172.5 GeV top
quark mass sample (true positive) and the 175 GeV top quark mass sample (false
positive) as a function of the true positive rate for various binary
classifiers. Once again, a multi-event classifier ($f_{1\to 20}$) built from
the single-event classifier ($f_{1}$) has the best performance. For the
classifiers trained to process 20 events simultaneously, the deep sets/PFN
approach ($f_{20}^{\text{set}}$) does better than sorting the inputs
($f_{20}^{\text{sort}}$).
A more powerful way to account for the permutation symmetry among events is to
explicitly build a permutation-invariant neural network architecture. For this
purpose, we use the deep sets approach [62]. In the particle physics context,
deep sets were first used to construct particle flow networks (PFNs) [63],
where the inputs involve sets of particles. Here, we are interested in sets of
events, though we will still use the PFN code from the
EnergyFlow package (https://energyflow.network/). Following Refs. [62, 63], we decompose
our set-based classifier as:
$\displaystyle f^{\text{set}}_{N}(\vec{x})=F\left(\sum_{i=1}^{N}\Phi(x_{i})\right),$ (31)
where $F:\mathbb{R}^{L}\rightarrow[0,1]$ and
$\Phi:\mathbb{E}\rightarrow\mathbb{R}^{L}$ are neural networks that are
simultaneously optimized. The network $\Phi$ embeds single events $x_{i}$ into
an $L$-dimensional latent space. The sum operator in Eq. (31) guarantees that
$f^{\text{set}}_{N}$ is invariant under permutations $x_{\sigma(i)}$ for
$\sigma\in S_{N}$, the permutation group acting on $N$ elements. We use the
default parameters from the PFN code, with $L=128$, $\Phi$ having two hidden
layers with 100 nodes each, and $F$ having three hidden layers with 100 nodes
each. The same learning strategy (up to 1000 epochs, early stopping, 10% batch
size) as the other networks is used for the PFN.
The performance of $f^{\text{set}}_{3}$ is shown in Fig. 2, which gets much
closer to matching the performance of $f_{1\rightarrow 3}$. Part of this
improvement is due to enforcing the permutation symmetry, though there is also
a potential gain from the fact that the PFN we used for $f^{\text{set}}_{3}$ has
more trainable weights than the fully connected network for
$f^{\text{sort}}_{3}$. All of the $f_{3}$ variants were considerably more
difficult to train than $f_{1\to 3}$, likely for the reason discussed in Sec.
II.3. Thus, we have empirical evidence for the superiority of single-event
training for multi-event classification.
#### III.1.3 Top Quark Mass Measurement
Our third and final example is motivated by the top quark mass measurement, as
recently studied in Refs. [18, 12]. Extracting the top quark mass is really a
regression problem, which we investigate in Sec. III.3. Here, we consider a
related classification task to distinguish two event samples generated with
different top quark masses (172.5 GeV and 175 GeV). This is a realistic
hypothesis testing task that requires full event ensemble information but, as we will see, only per-instance training.
We use the same dataset as Ref. [18]. Top quark pair production is generated
using Pythia 8.230 [52, 53] and detector effects are modeled with Delphes
3.4.1 [54, 55, 56] using the default CMS run card. After the production and
decay steps $t\bar{t}\to bW^{+}\bar{b}W^{-}$, one of the $W$ bosons is forced
to decay to $\mu^{+}\nu$ while the other $W$ boson decays hadronically. Each
event is recorded as a variable-length set of objects, consisting of jets,
muons, and neutrinos. At simulation-level, the neutrino is replaced with the
missing transverse momentum. Generator-level and simulation-level jets are
clustered with the anti-$k_{t}$ algorithm using $R=0.4$ and the simulation-
level jet is labeled as $b$-tagged if the highest energy parton inside the
nearest generator-level jet ($\Delta R<0.5$) is a $b$ quark. Jets are required
to have $p_{T}>20$ GeV and they can only be $b$-tagged if $|\eta|<2.5$.
Furthermore, jets overlapping with the muon are removed.
Events are only saved if they have at least two $b$-tagged jets and at least
two additional non $b$-tagged jets. The $b$-jet closest to the muon in
rapidity-azimuth is labeled $b_{1}$. Of the remaining $b$-tagged jets, the
highest $p_{T}$ one is labeled $b_{2}$. The two highest $p_{T}$ non-$b$-tagged
jets are labeled $j_{1}$ and $j_{2}$, and typically come from the $W$ boson.
(Imposing the $W$ mass constraint on $j_{1}$ and $j_{2}$ would yield lower
efficiency, though without significantly impacting the results.) The four-
momentum of the detector-level neutrino ($\nu$) is determined by solving the
quadratic equation for the $W$ boson mass; if there is no solution, the mass
is set to zero, while if there are two real solutions, the one with the
smaller $|p_{z}|$ is selected. Four observables are formed for performing the
top quark mass extraction, given by the following invariant masses:
$m_{b_{1}\mu\nu}$, $m_{b_{2}\mu\nu}$, $m_{b_{1}j_{1}j_{2}}$, and
$m_{b_{2}j_{1}j_{2}}$. A histogram of $m_{b_{1}\mu\nu}$ is shown for
illustration in Fig. 3a.
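The neutrino reconstruction step can be made concrete. The sketch below is our own minimal implementation (not code from Ref. [18]): it solves the quadratic $W$-mass constraint for the neutrino $p_{z}$ under a massless-muon approximation, selects the real solution with smaller $|p_{z}|$, and falls back to a zero constraint mass when the discriminant is negative, as described above.

```python
import math

def neutrino_pz(mu_p4, met_px, met_py, m_w=80.4):
    """Neutrino longitudinal momentum from the W mass constraint.
    mu_p4 = (px, py, pz, E) of the muon (treated as massless)."""
    px, py, pz, e = mu_p4
    pt2_mu = px * px + py * py
    a = 0.5 * m_w**2 + px * met_px + py * met_py
    disc = a * a - pt2_mu * (met_px**2 + met_py**2)
    if disc < 0:
        # no real solution: set the constraint mass to zero, as in the text
        a = px * met_px + py * met_py
        disc = max(a * a - pt2_mu * (met_px**2 + met_py**2), 0.0)
    root = e * math.sqrt(disc)
    sols = ((a * pz + root) / pt2_mu, (a * pz - root) / pt2_mu)
    return min(sols, key=abs)  # smaller-|pz| solution
```

When a real solution exists, the reconstructed muon-plus-neutrino system has an invariant mass of exactly $m_W$ in the massless approximation, which provides a simple closure test.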
We use the same neural network architectures and training procedure as in the
BSM example above, with 1.5 million events per fixed-mass sample. The only
difference is that the batch size is set to 0.1% in order to keep the number
of examples per batch at $\mathcal{O}(1000)$. For the per-ensemble classifier, we
take $N=20$, though of course for a realistic hypothesis testing situation,
$N$ would be as large as the number of top quark events recorded in data. To
capture the permutation invariance of the inputs, we construct
$f_{20}^{\text{set}}$ using the deep sets approach in Eq. (31). We also build
a classifier $f_{1\to 20}$ from the per-instance classifier $f_{1}$ using Eq.
(13).
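The construction of $f_{1\to 20}$ from the per-instance classifier is purely arithmetic. The sketch below is our own illustration of the idea, assuming that Eq. (13) combines the per-event likelihood ratios $f/(1-f)$ from Eq. (4) multiplicatively under the IID assumption (the exact form in Eq. (13) may differ); it works in log space for numerical stability.

```python
import numpy as np

def ensemble_classifier(f1_outputs):
    """Combine per-event classifier outputs f1(x_i) into an N-event score:
    multiply the per-event likelihood ratios f/(1-f) under the IID
    assumption, working in log space to avoid over/underflow."""
    f = np.asarray(f1_outputs, dtype=float)
    log_r = np.log(f) - np.log1p(-f)       # per-event log likelihood ratio
    total = log_r.sum()                    # IID product -> sum of logs
    return 1.0 / (1.0 + np.exp(-total))    # monotone map back to [0, 1]
```

For a single event this reduces to the per-event score itself, while many weakly discriminating events combine into a confident ensemble-level decision.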
In Fig. 3b, we see that $f_{1\to 20}$ and $f_{20}^{\text{set}}$ have broadly
similar performance, with $f_{1\to 20}$ noticeably better. Some of
this improvement may be due to differences in the network architecture, but we
suspect that most of the gain is due to the more efficient training in the
per-instance case. We checked that very poor performance is obtained for a
classifier $f_{20}$ lacking permutation invariance, with a ROC curve that was
not that much better than $f_{1}$ alone. Explicitly breaking the invariance by
sorting the inputs based on $m_{b_{1}\mu\nu}$ does help a little, as indicated
by the $f_{20}^{\text{sort}}$ curve in Fig. 3b, but does not reach the set-
based approach.
Figure 4: Computational performance of single-event versus multi-event
training. Shown is the efficiency for the 175 GeV sample (false positive) for
a fixed 50% efficiency for the 172.5 GeV sample (true positive), plotted as a
function of training epoch. Single-event training ($f_{1\to 20}$) outperforms
multi-event training ($f_{20}^{\text{set}}$), where both methods go through
the full data set per epoch.
Given the similar performance of $f_{1\rightarrow 20}$ and
$f_{20}^{\text{set}}$, it is interesting to examine which learning strategy is
more computationally efficient. In Fig. 4, we compare the performance as a
function of the training epoch, using the difference of the true and false
positive rates at a fixed 50% signal efficiency. In each epoch, both
$f_{1\rightarrow 20}$ and $f_{20}^{\text{set}}$ see the full ensemble of
events, so this is an apples-to-apples comparison as far as data usage is
concerned. In particular, we plot this information per epoch instead of per
compute time to avoid differences due to the structure of the neural networks.
(There is not an easy way to control for possible differences in the training
time due to the differences in the network structures, since the underlying
tasks are different.) The $f_{1\rightarrow 20}$ classifier trains much faster,
in agreement with the analysis in Sec. II.3, even though the ultimate
asymptotic performance is similar for both classifiers. Once again, we see
better empirical behavior from $f_{1\to 20}$, trained on one event at a time,
than from $f_{20}^{\text{set}}$, trained on multiple events simultaneously.
(Away from the asymptotic limit, one could try to improve the empirical
per-ensemble performance through data augmentation, a generic strategy for
helping neural networks learn symmetries; the IID structure can be reinforced
by showing the network new ensembles built by sampling instances from the
existing ensembles.)
### III.2 Classifiers: Single-Event from Multi-Event
In general, one cannot take a multi-event classifier $f_{N}$ and extract a
single-event classifier $f_{1}$. It is, however, possible to construct a
special $\tilde{f}_{N}$ network such that one can interpret a subnetwork as a
per-event classifier, as discussed in Sec. II.2. When using the MLC loss
function, we can use the functional form in Eq. (14), where $\tilde{f}_{N}$ is
a product of $f_{N\to 1}$ terms. By training $\tilde{f}_{N}$, where the only
trainable weights are contained in $f_{N\to 1}$, we can learn a single-event
classifier $f_{N\to 1}$ from multi-event samples.
For the binary cross entropy loss used in our case studies, where Eq. (4) is
needed to convert the classifier to a likelihood ratio, we have to introduce a
slightly different structure than Eq. (14). Let $f_{N}^{\text{set}}$ be a
permutation-invariant classifier, as defined in Eq. (31) using the deep
sets/PFN strategy. Taking the latent space dimension to be $L=1$, the $\Phi$
network can be interpreted as a single-event classifier. Because the $\Phi$
network outputs are pooled via summation, we can build an optimal multi-event
classifier if $\Phi$ learns the _logarithm_ of the likelihood ratio; cf. Eq.
(2). With this insight, we can fix the $F$ function to achieve the same
asymptotic performance as a trainable $F$ by setting:
$F(\vec{x})=\frac{\exp\left(\sum_{i=1}^{N}\Phi(x_{i})\right)}{1+\exp\left(\sum_{i=1}^{N}\Phi(x_{i})\right)}\,.$ (32)
Using Eq. (4), one can check that this $F$ is monotonically related to the
ensemble likelihood ratio. Similarly, $\Phi$ will be monotonically related to
the optimal $f_{1}$, which we call $f_{N\to 1}$ for the remainder of this
discussion.
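For the Gaussian toy example, this construction can be verified exactly. In the sketch below we plug the known per-event log likelihood ratio into the role of $\Phi$ (in practice $\Phi$ is learned, so this only illustrates the fixed output map of Eq. (32)): with a single event, the sigmoid of $\Phi$ reduces to the optimal per-event classifier.

```python
import numpy as np

eps = 0.5

def phi(x):
    """Exact per-event log likelihood ratio for N(+eps,1) vs N(-eps,1),
    standing in for the learned Phi network."""
    return 2.0 * eps * np.asarray(x)

def f_tilde(events):
    """Eq. (32): fixed output map, a sigmoid of the summed Phi outputs."""
    s = phi(events).sum()
    return 1.0 / (1.0 + np.exp(-s))

# with a single event, f_tilde reduces to the optimal per-event classifier
x = 1.0
p_sig = np.exp(-0.5 * (x - eps) ** 2)
p_bkg = np.exp(-0.5 * (x + eps) ** 2)
assert np.isclose(f_tilde([x]), p_sig / (p_sig + p_bkg))
```

Adding a second copy of the same signal-like event pushes the score closer to one, as expected from the IID product structure.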
Figure 5: Revisiting the ROC curves for the two Gaussian example from Fig. 1b.
The multi-event classifier $\tilde{f}_{10}$ with the restricted functional
form in Eq. (32) has the same performance as $f_{10}$ with no restrictions.
Using $\tilde{f}_{10}$, we can construct a single-event classifier
$\tilde{f}_{10\to 1}$ with the same performance as $f_{1}$ trained directly.
This construction is demonstrated in Fig. 5 for the Gaussian example. We see
that the deep sets architecture with the fixed form of Eq. (32)
($\tilde{f}^{\rm set}_{10}$) performs as well as or better than the
10-instance fully-connected classifier with more network capacity ($f_{10}$).
Similarly, the $\Phi$ function used as a single-event classifier
($f_{10\rightarrow 1}$) has nearly the same performance as an independently
trained single-event classifier ($f_{1}$).
Figure 6: Revisiting the ROC curves for the dijet resonance search example in
Fig. 2c. The set-based multi-event classifiers $\tilde{f}_{3}^{\rm set}$ and
$f_{3}^{\rm set}$ have similar performance, but we can use the former to
construct a single-event classifier $f_{3\to 1}$. This construction is not as
effective as performing single-event training directly ($f_{1}$).
The same conclusion holds for the BSM classification task, shown in Fig. 6.
The only difference between the set-based architectures
$\tilde{f}_{3}^{\text{set}}$ and $f_{3}^{\text{set}}$ is that the former uses
the fixed functional form in Eq. (32). The fact that they achieve nearly the
same performance is ensured by the IID relation in Eq. (2). The per-instance
$f_{3\rightarrow 1}$ network extracted from $\tilde{f}_{3}^{\text{set}}$ is
not quite as powerful as the $f_{1}$ network trained independently on single
events, as expected from the gradient issue discussed in Sec. II.3. While we
found no benefit to extracting a single-event classifier from a multi-event
classifier, it is satisfying to see these IID-derived theoretical predictions
borne out in these empirical examples.
### III.3 Comparison of Regression Strategies
We now consider the regression methods introduced in Sec. II.4. For
classification, the mapping between per-instance and per-ensemble information
is relatively straightforward. For regression, though, per-ensemble regression
is structurally dissimilar from per-instance regression because of the need to
integrate over priors on the regression parameters. Nevertheless, we can
perform per-ensemble regression by first mapping the problem to per-instance
parametrized classification.
We compare three different regression strategies for our empirical studies.
The first method is a maximum-likelihood analysis, using the form in Eq. (24)
based on the single-event parametrized classifier in Eq. (23). The second
method is per-instance direct regression, using the construction in Eqs. (28)
and (29) based on the same classifier as above. The third method is per-
ensemble direct regression, based on minimizing the mean squared error loss in
Eq. (27).
#### III.3.1 Gaussian Mean Example
Our first regression study is based on the same one-dimensional Gaussian
distributions as Sec. III.1.1. The prior distribution for the Gaussian means
is taken to be uniform with $\mu\in[-0.5,0.5]$, while the variance is fixed at
$\sigma=1$. A training dataset is created from 100 examples each from 10,000
values of the Gaussian mean, for a total of one million training data points.
For the reference sample $p(x|\theta_{0})$ needed to build the single-event
parametrized classifier $f(x,\mu)$ in Eq. (23), we create a second dataset
with one million examples drawn from a standard normal distribution (i.e.
$\mu=0$). To implement the $p(\theta)$ term in the second line of Eq. (22),
each example $x_{i}$ from the reference dataset is assigned a random mean
value picked from the variable-mean dataset.
We train a parametrized neural network to distinguish the variable-mean
datasets from the reference dataset. This network takes as input two features:
one component of $\vec{x}$ and the random mean value $\mu$. The architecture
consists of three hidden layers with $(64,128,64)$ nodes per layer and ReLU
activation. The output layer has a single node and sigmoid activation. Binary
cross entropy is used to train the classifier and Eq. (4) is used to convert
it to the likelihood ratio form $f(x,\mu)$. The model is trained for 1000
epochs with early stopping and a batch size of 10% of the training statistics.
The same learned function $f(x,\mu)$ is used for both the maximum likelihood
analysis and per-instance direct regression. For the maximum-likelihood
analysis, the optimization in Eq. (24) is performed over a fixed grid with 20
evenly spaced values in $\mu\in[-0.5,0.5]$. For per-instance direct
regression, the function $f_{N}(\vec{x},\mu)$ in Eq. (29) is constructed by
taking a product of $f(x,\mu)$ outputs over all 100 examples in a given
ensemble data point $\vec{x}$. The integrals in Eqs. (28) and (29) are
approximated by evaluating $f_{N}(\vec{x},\mu)$ at 20 evenly spaced $\mu$
values between $-0.5$ and $0.5$ and summing the results; this simple sum
suffices because the prior is uniform.
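The grid approximation to Eqs. (28) and (29) is easy to sketch. Below, the learned parametrized classifier is replaced by the exact per-event likelihood ratio $\exp(\mu x - \mu^{2}/2)$ against the $\mu=0$ reference, so this checks the estimator logic rather than any network.

```python
import numpy as np

rng = np.random.default_rng(1)

def posterior_mean(x, grid):
    """Approximate E[mu | x] on a uniform grid (Eqs. 28-29, uniform prior),
    using the exact per-event likelihood ratio exp(mu*x - mu^2/2) against
    the mu = 0 reference in place of a learned classifier."""
    x = np.asarray(x)
    # log of the ensemble likelihood ratio at each grid value of mu
    log_w = grid * x.sum() - 0.5 * len(x) * grid**2
    w = np.exp(log_w - log_w.max())          # stabilized weights
    return float((grid * w).sum() / w.sum())

mu_true = 0.3
x = rng.normal(mu_true, 1.0, size=100)       # 100 instances per ensemble
grid = np.linspace(-0.5, 0.5, 20)            # 20 evenly spaced mu values
est = posterior_mean(x, grid)
```

With a flat prior, the grid-weighted mean closely tracks the sample mean of the ensemble, up to discretization and the finite prior range.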
The per-ensemble direct regression approach uses a neural network $g_{N}$ that
takes as input 100 values (i.e. all of $\vec{x}$) and predicts a single mean
value. This network has the same architecture as $f(x,\mu)$, except it
directly takes as input $\vec{x}$ and has a linear (instead of sigmoid)
activation for the output layer, since the predicted mean can be either
positive or negative. It is trained to minimize the mean squared error loss in Eq.
(27).
Figure 7: Comparison of regression methods with the Gaussian example, with
the predicted value of the mean plotted against the true value of the mean.
The regression involves analyzing 100 instances drawn from the same Gaussian
distribution. Bands are the standard deviation of the predictions over 10,000
generated samples. The per-instance direct regression uses single-event
training, yet achieves comparable performance to per-ensemble direct
regression that processes 100 events simultaneously.
In Fig. 7, we see that all three approaches give nearly the same results in
terms of bias and variance. Strictly speaking, maximum likelihood and direct
regression are different tasks so their behavior could be different. For per-
instance and per-ensemble direct regression, they are constructed to yield the
same asymptotic behavior, but there will be differences due to, e.g., the
finite approximations to the integrals. Note that maximum likelihood and per-
instance direct regression only use neural networks that process per-instance
inputs; information about the rest of the events is used only through the
training procedure. Thus, we have empirical evidence that per-ensemble
regression can be accomplished via per-instance training.
#### III.3.2 Top Quark Mass Measurement
As a physics example of regression, we consider extracting the top quark mass.
Here, the top quark mass is the regression target and the setup is similar to
the Gaussian example above. We use the same event generation as Sec. III.1.3,
but now with top quark mass parameters sampled uniformly at random in
$m_{t}\in[170,180]~{}\text{GeV}$. As with the Gaussian example, a variable-
mass dataset is created. In this case, we have 100 events for each of 100,000
sampled top quark mass values. The reference sample uses a top quark mass of
172.5 GeV. Due to event selection effects, the actual number of events for
each top quark mass value varies from set to set, with a mean of about 40
events. Because this event selection has a slight top quark mass dependence,
this yields an effective non-uniform prior on $m_{t}$, which we account for
when assigning dummy mass values to the reference sample.
The parametrized classifier now takes five inputs: the four mass features from
Sec. III.1.3 ($m_{b_{1}\mu\nu}$, $m_{b_{2}\mu\nu}$, $m_{b_{1}j_{1}j_{2}}$, and
$m_{b_{2}j_{1}j_{2}}$) plus the top quark mass used for event generation. The
neural network has three hidden layers with 50 nodes per layer and ReLU
activation, and a single node output layer with sigmoid activation. We train
100 models and take the median as the classifier output, using Eq. (4) to
convert it to the likelihood ratio $f(x,m_{t})$. Each model is trained for
1000 epochs with early stopping with a patience of 20 epochs and a batch size
of 0.1%. To test the fidelity of the training, we extract the estimated
likelihood ratio of $m_{t}=175~{}\text{GeV}$ over $m_{t}=172.5~{}\text{GeV}$
and use it to reweight the $172.5~{}\text{GeV}$ sample. From Fig. 3a, we see
that we achieve good reweighting performance despite the relatively limited
training data.
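The reweighting closure test has a simple analog. In the sketch below (a Gaussian stand-in for the top-mass samples, not the actual analysis), a reference sample is reweighted to a target distribution using the exact likelihood ratio in place of the learned $f(x,m_{t})$ ratio, and the weighted mean shifts to the target value.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1_000_000)   # "reference" sample at mu = 0

# exact likelihood ratio N(0.5, 1) / N(0, 1), standing in for the learned
# likelihood-ratio weights; reweighting should reproduce the target sample
w = np.exp(0.5 * x - 0.5 * 0.5**2)

weighted_mean = np.average(x, weights=w)   # should move from ~0 to ~0.5
```

Any residual mismatch between the weighted reference and the target histogram diagnoses imperfections in the learned ratio, which is exactly the check performed in Fig. 3a.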
Figure 8: Regression in the top quark mass example. (a) An estimate of the log
likelihood for samples generated with 172.5 and 175 GeV top quark masses. The
vertical axis has been shifted such that the minimum value is at zero. Note
that the axis shows the average log likelihood, which differs from the total
log likelihood by a factor of $N_{\text{events}}$. (b) Correlation
between the per-instance predicted mass and the per-ensemble predicted mass in
the context of direct regression. The per-ensemble mass values are put in bins
of 0.1 GeV width, and the bands represent the standard deviation of the per-
instance mass values in each bin.
The maximum likelihood analysis is performed by scanning the learned log
likelihood estimate over a fixed grid with 100 uniformly spaced steps in
$m_{t}\in[170,180]~{}\text{GeV}$. In Fig. 8a, we show this scan where the
target data comes from the high statistics 172.5 GeV and 175 GeV samples from
Sec. III.1.3. As desired, the minima of the parabolic shapes are near the
input top quark masses.
For the per-instance direct regression, we follow the same strategy as in the
Gaussian case to convert $f(x,m_{t})$ into an estimate of
$\mathbb{E}[m_{t}|\vec{x}]$. The integrals in Eqs. (28) and (29) are
approximated by sampling 50 random top quark masses per set of 100 following
the probability density from the training dataset. Because 40 events are
insufficient to make a precision measurement of the top quark mass, we find a
noticeable bias between the estimated and true top mass values, which is
exacerbated by edge effects at the ends of the training range. For this
reason, we do not show a direct analog to Fig. 7, though this bias could be
overcome with much larger training datasets with many more than 100 examples
per mass value.
For the per-ensemble direct regression, we use the deep sets approach in Eq.
(31) to handle the permutation-invariance of the inputs. This approach is also
well suited to handle the large variation in the number of events in each set
due to the event selection effect. We again use PFNs for our practical
implementation. We use the default PFN hyperparameters from the
https://energyflow.network/ package, except we use linear activation in the
output layer and the mean squared error loss function. We found that it was
important for the model accuracy to standardize both the inputs and outputs of
the network. Note that this is a different per-ensemble direct regression
setup than used in Ref. [12], which found excellent performance using linear
regression on sorted inputs.
In Fig. 8b, we compare the output of per-ensemble direct regression to the
output of per-instance direct regression. We find a very strong correlation
between these two very different approaches to computing the same quantity
$\mathbb{E}[m_{t}|\vec{x}]$. The band in Fig. 8b is the standard deviation
over data sets whose true mass falls in the same bin, with 100 bins evenly
spaced between 170 and 180 GeV. A key advantage of the per-instance
approach is that it does not need to be retrained if more events are acquired.
By contrast, the per-ensemble approach is only valid for event samples that
have the same sizes as were used during training.
### III.4 Beyond Regression Example
As remarked in Sec. II.5, the ideas discussed above apply to learning tasks
beyond just standard classification and regression. As one simple example to
illustrate this, we consider the Gaussian classification task from Sec.
III.1.1 and compute the mutual information between the Gaussian feature and
the label. This quantifies how much information is available in the feature
for classification and can be directly compared with other features and other
classification tasks.
Figure 9: Mutual information between a Gaussian feature and a label, where
the “signal” ($x_{0}=\epsilon$) and “background” ($x_{0}=-\epsilon$) have
opposite means. The estimate using the MLC loss approach shows good agreement
with the exact analytic expression.
For this illustration, $10^{5}$ events are generated each from two Gaussian
distributions with means $\pm|\epsilon|$ for fixed $\epsilon$. The mutual
information is estimated using a per-instance classifier as described in Sec.
II.5 and also computed analytically via Eq. (30). For the per-instance
classifier, we use a neural network that processes two inputs (label and
feature), has two hidden layers with ReLU activation, and has a single node
sigmoid output. The classification task is to distinguish the nominal dataset
from one where the labels are assigned uniformly at random to the features.
The value of the MLC loss yields an estimate of the mutual information.
The mutual information results are presented in Fig. 9, as a function of
$\epsilon$. As expected, the neural network strategy yields an excellent
approximation to the analytic calculation. Note that this strategy does not
require any binning and naturally extends to high-dimensional data, since the
core component is a neural network classifier. We leave an investigation of
this approach in the particle physics context to future work.
## IV Conclusions
We have demonstrated a connection between classifiers trained on single events
and those that process multiple events at the same time. One can take a
generic single-event classifier and build an $N$-event classifier using simple
arithmetic operations. Such classifiers tend to outperform generic $N$-event
classifiers, since we can enforce the IID assumptions into the learning task.
This performance gap can be mostly recovered by deploying a classifier that
respects the permutation invariance of the set of $N$ events. We used the deep
sets/PFN architecture [62, 63] for this purpose, but other set-based
architectures such as graph neural networks [64, 65] would also be
appropriate.
An amusing feature of the deep sets approach is that we can use it to reverse-
engineer a single-event classifier from a multi-event classifier by
restricting the latent space to be one-dimensional and fixing a static output
function. Even after enforcing these additional structures, though, we found
both theoretically and empirically that the loss function gradients are better
behaved for single-event classifiers than multi-event classifiers. Going
beyond classification, we explained how various regression tasks can be
phrased in terms of per-instance parametrized classification, yielding similar
performance to per-ensemble direct regression. We also mentioned how to
compute distances and divergences between probability densities without
requiring explicit density estimation. These results hold for any data sample
satisfying the IID property.
Ultimately, we did not find any formal or practical advantage for training a
multi-event classifier instead of a single-event classifier, at least for the
cases we studied. With a carefully selected multi-event architecture, one can
achieve similar performance to a scaled-up per-event classifier, but the
latter will typically train faster. For direct regression, the per-ensemble
strategy might be conceptually simpler than the per-instance method, though
the per-instance method allows for a simpler treatment of variably-sized data
sets. Note that there may be situations where a simplifying assumption (e.g.
the linear regression model in Ref. [12]) could yield better per-ensemble
behavior than indicated by our case studies. At minimum, we hope this paper
has demystified aspects of per-ensemble learning and highlighted some
interesting features of the MLC loss function.
Going beyond the IID assumption, the duality between per-instance classifiers
and per-ensemble classifiers could have applications to problems with
approximate independence. For example, flavor tagging algorithms have
traditionally exploited the approximate independence of individual track
features within a jet [66, 67]. Similarly, emissions in the Lund jet plane
[68, 69] are approximately independent, with exact independence in the
strongly ordered limit of QCD. In both contexts, the instances are particles
(or particle-like features) and the ensemble is the jet. A potentially
powerful training procedure for these situations might be to first train a
per-particle classifier, then build a per-jet classifier using the
constructions described in this paper, and finally let the network train
further to learn interdependencies between the particles.
## Code and Data
The code for this paper can be found at
https://github.com/bnachman/EnsembleLearning. The physics datasets are hosted
on Zenodo at Ref. [70] for the top quark dataset and Ref. [71] for the BSM
dataset.
###### Acknowledgements.
We thank Anders Andreassen, Patrick Komiske, and Eric Metodiev for discussions
about the MLC loss. We thank Rikab Gambhir and Ian Convy for discussions about
mutual information. We thank Adi Suresh for discussions about the regression
task with the classifier loss. We thank Katherine Fraser, Yue Lai, Duff
Neill, Bryan Ostdiek, Mateusz Ploskon, Felix Ringer, and Matthew Schwartz for
useful comments on our manuscript. BN is supported by the U.S. Department of
Energy (DOE), Office of Science under contract DE-AC02-05CH11231. JT is
supported by the National Science Foundation under Cooperative Agreement
PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental
Interactions, http://iaifi.org/), and by the U.S. DOE Office of High Energy
Physics under grant number DE-SC0012567. BN would also like to thank NVIDIA
for providing Volta GPUs for neural network training.
## Appendix A Deriving Maximum Likelihood Classifier Loss
Beyond just the practical value of learning the likelihood ratio, the MLC loss
in Eq. (7) has a nice interpretation in terms of learning probability
distributions.
Consider trying to learn a function $f(x)$ that is a normalized probability
distribution, up to a Jacobian factor $j(x)$:
$\int dx\,j(x)f(x)=1.$ (33)
We are given samples from a probability distribution $q(x)$, and we want to
learn $f(x)$ such that
$f(x)\to\frac{q(x)}{j(x)}.$ (34)
In other words, we want to learn a function $f(x)$ that reproduces the sampled
distribution $q(x)$ after including the Jacobian factor. This problem was
studied in Ref. [34], albeit in a context where $f(x)$ had a restricted
functional form such that Eq. (33) was automatically enforced.
Figure 10: A demonstration of the MLC loss for learning the likelihood ratio
directly, using the Gaussian example from Fig. 1a. The linear (lin) and
exponential (exp) parametrizations perform similarly. Shown for comparison is
the likelihood ratio computed using the binary cross entropy (BCE) loss, which
requires the manipulation in Eq. (4).
One strategy to accomplish this is to minimize the cross entropy of $f(x)$
with respect to $q(x)$, since the smallest cross entropy is obtained when
$f(x)$ has the same information content as $q(x)$. The associated loss
functional is:
$L[f]=-\int dx\,q(x)\log f(x)-\lambda\left(1-\int dx\,j(x)f(x)\right),$ (35)
where the first term is the cross entropy and $\lambda$ is a Lagrange
multiplier to enforce the normalization condition in Eq. (33). Taking the
functional derivative of Eq. (35) with respect to $f(x)$ and setting it equal
to zero, we find the extremum condition:
$-\frac{q(x)}{f(x)}+\lambda\,j(x)=0.$ (36)
Multiplying both sides of this equation by $f(x)$ and integrating over $x$ to
set the Lagrange multiplier, we find that Eq. (36) is solved for
$\lambda=1,\qquad f(x)=\frac{q(x)}{j(x)},$ (37)
so $f(x)$ learns the $q(x)/j(x)$ ratio as desired.
In the special case that $j(x)$ is itself a normalized probability
distribution, we can substitute for the Lagrange multiplier and rewrite Eq.
(35) in the following form:
$L[f]=-\int dx\,\Big{(}q(x)\log f(x)+j(x)(1-f(x))\Big{)}.$ (38)
Identifying $q(x)=p(x|\theta_{A})$ and $j(x)=p(x|\theta_{B})$, this is
precisely the MLC loss in Eq. (7). Therefore, we have an intuitive
understanding of the MLC loss as trying to maximize the (log) likelihood of
$f(x)$ with respect to $p(x|\theta_{A})$, subject to the constraint that
$f(x)\,p(x|\theta_{B})$ is a proper probability distribution.
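This variational argument can be checked numerically. The sketch below discretizes Eq. (38) for two Gaussian densities $q$ and $j$ on a grid and verifies that random perturbations around $f=q/j$ only increase the loss, since each pointwise term $-q\log f + jf$ is convex in $f$ with its minimum at the likelihood ratio.

```python
import numpy as np

# Discretize Eq. (38) on a grid for two normalized densities q and j,
# and check that f = q/j (the likelihood ratio) minimizes the MLC loss.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
q = np.exp(-0.5 * (x - 0.5) ** 2); q /= q.sum() * dx   # p(x|theta_A)
j = np.exp(-0.5 * (x + 0.5) ** 2); j /= j.sum() * dx   # p(x|theta_B)

def mlc_loss(f):
    """L[f] = -int dx [ q(x) log f(x) + j(x)(1 - f(x)) ], Eq. (38)."""
    return float(-((q * np.log(f) + j * (1.0 - f)) * dx).sum())

f_star = q / j                 # claimed minimizer: the likelihood ratio
rng = np.random.default_rng(0)
for _ in range(20):
    bump = np.exp(0.1 * rng.normal(size=x.size))   # random perturbation
    assert mlc_loss(f_star) <= mlc_loss(f_star * bump)
```

Because the minimization is pointwise, the check holds for arbitrary perturbations, not just the random ones sampled here.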
In Fig. 10, we plot the learned likelihood ratio between the two Gaussian
samples from Fig. 1a, comparing the performance of MLC against binary cross
entropy and the exact analytic expression. In all cases, a network is trained
for 100 epochs with early stopping with a patience of 10 epochs. We also
compare the MLC loss against the $C(f)=\exp f$ variant discussed in footnote
1. We see that both the linear (i.e. $C(f)=f$) and exponential
parametrizations perform similarly in the region with ample data. That said,
the exponential parametrization has a more robust extrapolation towards the
edges, yielding similar behavior to binary cross entropy. Note that the
exponential parametrization of the MLC loss was used in Ref. [32].
## References
* Larkoski _et al._ [2020] A. J. Larkoski, I. Moult, and B. Nachman, Jet Substructure at the Large Hadron Collider: A Review of Recent Advances in Theory and Machine Learning, Phys. Rept. 841, 1 (2020), arXiv:1709.04464 [hep-ph] .
* Guest _et al._ [2018] D. Guest, K. Cranmer, and D. Whiteson, Deep Learning and its Application to LHC Physics, Ann. Rev. Nucl. Part. Sci. 68, 161 (2018), arXiv:1806.11484 [hep-ex] .
* Albertsson _et al._ [2018] K. Albertsson _et al._ , Machine Learning in High Energy Physics Community White Paper, (2018), arXiv:1807.02876 [physics.comp-ph] .
* Radovic _et al._ [2018] A. Radovic _et al._ , Machine learning at the energy and intensity frontiers of particle physics, Nature 560, 41 (2018).
* Bourilkov [2020] D. Bourilkov, Machine and Deep Learning Applications in Particle Physics, Int. J. Mod. Phys. A 34, 1930019 (2020), arXiv:1912.08245 [physics.data-an] .
* [6] HEP ML Community, A Living Review of Machine Learning for Particle Physics.
* Lai [2018] Y. S. Lai, Automated Discovery of Jet Substructure Analyses, (2018), arXiv:1810.00835 [nucl-th] .
* Khosa _et al._ [2019] C. K. Khosa, V. Sanz, and M. Soughton, Using Machine Learning to disentangle LHC signatures of Dark Matter candidates, (2019), arXiv:1910.06058 [hep-ph] .
* Du _et al._ [2020] Y.-L. Du, K. Zhou, J. Steinheimer, L.-G. Pang, A. Motornenko, H.-S. Zong, X.-N. Wang, and H. Stöcker, Identifying the nature of the QCD transition in relativistic collision of heavy nuclei with deep learning, Eur. Phys. J. C 80, 516 (2020), arXiv:1910.11530 [hep-ph] .
* Mullin _et al._ [2019] A. Mullin, H. Pacey, M. Parker, M. White, and S. Williams, Does SUSY have friends? A new approach for LHC event analysis, (2019), arXiv:1912.10625 [hep-ph] .
* Chang _et al._ [2020] S. Chang, T.-K. Chen, and C.-W. Chiang, Distinguishing $W^{\prime}$ Signals at Hadron Colliders Using Neural Networks, (2020), arXiv:2007.14586 [hep-ph] .
Integrability, intertwiners and non-linear algebras
in Calogero models
Francisca Carrillo–Moralesa, Francisco Correaa and Olaf Lechtenfeldb
aInstituto de Ciencias Físicas y Matemáticas
Universidad Austral de Chile, Casilla 567, Valdivia, Chile
bInstitut für Theoretische Physik and Riemann Center for Geometry and Physics
Leibniz Universität Hannover, Appelstrasse 2, 30167 Hannover, Germany
## Abstract
For the rational quantum Calogero systems of type $A_{1}{\oplus}A_{2}$,
$AD_{3}$ and $BC_{3}$, we explicitly present complete sets of independent
conserved charges and their nonlinear algebras. Using intertwining (or shift)
operators, we include the extra ‘odd’ charges appearing for integral
couplings. Formulæ for the energy eigenstates are used to tabulate the
low-lying wave functions.
## 1 Introduction and summary
There is a vast literature on quantum Calogero (or Calogero–Moser–Sutherland)
models as a paradigm for a superintegrable system with a finite number of
degrees of freedom [1, 2]. Since these systems admit an analytic computation
of basically every detail, a variety of rich mathematical structures has been
uncovered (for reviews, see e.g. [3, 4]; for a recent generalization, see
[5]). Nevertheless, some of the latter have not been displayed very explicitly
or are hidden in mathematical literature difficult to penetrate for most
physicists.
Among these aspects is the nonlinear algebra formed by the $2N{-}1$ (or, for
integral coupling, $2N$) independent conserved charges in the case of the
rational rank-$N$ Calogero model [6, 7]. A related feature is the form of the
intertwiners (or shift operators). These relate the simultaneous eigenstates
of the $N$ Liouville charges at integer-spaced coupling values, thereby
providing an alternative access to those states for integral couplings and
allowing for the $2N$th extra charge $Q$.
For this reason, we provide completely explicit formulæ for the rank-3
rational quantum Calogero models, based on the Coxeter reflection groups
$A_{1}{\oplus}A_{2}$, $AD_{3}$ and $BC_{3}$ (but not $H_{3}$), for the
nonlinear algebras, the intertwiners and the energy eigenstates. In contrast
to the customary $A_{1}{\oplus}A_{2}$ model (describing three nonrelativistic
unit-mass particles on the infinite line and interacting pairwise via an
inverse-square two-body potential), the $AD_{3}$ and $BC_{3}$ models have been
investigated much less. Yet, even the $A_{1}{\oplus}A_{2}$ case has not been
fully analyzed: The center-of-mass sector (associated to the $A_{1}$ part) is
usually free, as one imposes translational invariance as a physical prejudice.
However, nothing prevents one from giving it its own (external) inverse-square
potential. In fact, such a potential is completely natural in the broader scope, and here
we present all details for this generalized three-particle system as well.
The organization of the paper is as follows. In the remainder of this
introduction, we introduce the Dunkl operators [8, 9] as our major tool for
the construction of conserved charges, then review how the conformal algebra
can be used to extend the set of Liouville charges to an infinite (but
functionally dependent) set of higher integrals of motion, and finally present
the concept of intertwining operators and how they give rise to additional
conserved charges in the special situation of integral coupling(s). Sections
2, 3 and 4 then treat the cases of $A_{1}{\oplus}A_{2}$, $AD_{3}$ and $BC_{3}$
in turn, showing in each instance a full set of independent conserved charges,
a formula for the energy eigenstates, the concrete form of the basic
intertwiners, and finally the nonlinear algebra of all charges including the
extra ones at integral coupling. Some lengthy expressions and lists of low-
lying energy eigenstates are delegated to an appendix.
### 1.1 Liouville charges
Let us consider a set $R^{+}$ of positive roots $\alpha$ for a Coxeter group
$W$ of reflections $s_{\alpha}$ in $\mathbb{R}^{N}\ni x$. Then, the Dunkl
operators associated to the standard Cartesian basis $\\{e_{i}\\}$ with
$i=1,\ldots,N$ are given by
$\mathcal{D}_{i}\ =\ \partial_{i}-\sum_{\alpha\in
R^{+}}\dfrac{g_{\alpha}\alpha_{i}}{(\alpha,x)}s_{\alpha}\qquad\text{with}\quad(\alpha,x)=\alpha(x){\quad{\rm
and}\quad}\alpha_{i}=(\alpha,e_{i})=\alpha(e_{i})\ ,$ (1)
where $\partial_{i}=\partial/\partial x^{i}$ for $x=e_{i}x^{i}$, and we
canonically identify $\mathbb{R}^{N}$ with its dual. The real coupling
constants $g_{\alpha}$ depend only on the Weyl orbit of $\alpha$, so we shall
encounter at most two values, either $g_{s}$ for the short roots and
$g_{\ell}$ for the long roots, or a single coupling $g$ in the simply-laced
case. A key role is played by the Weyl-invariant polynomials $\sigma_{k}(x)$
of degree $k$, because the restriction “${\mathrm{res}}$” of
$\sigma_{k}(\mathcal{D})$ for $\mathcal{D}\equiv\\{\mathcal{D}_{i}\\}$ to
Weyl-invariant functions yields constants of motion $I_{k}$ known as Liouville
integrals, for any (generalized) Calogero model. Since the Dunkl operators
mutually commute [8, 9], the Liouville integrals also commute with one
another. The $N$ lowest-order such polynomials will provide $N$ functionally
independent Liouville charges.
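The mutual commutativity of the Dunkl operators, on which the construction of the Liouville charges rests, can be spot-checked by computer algebra. The following sketch treats the rank-one $A_{1}$ system with root $\alpha=e_{1}{-}e_{2}$, applying $[\mathcal{D}_{1},\mathcal{D}_{2}]$ to a sample polynomial (a spot check on one test function, not a proof of the operator identity):

```python
# Spot-check [D_1, D_2] = 0 for the A_1 Dunkl operators
#   D_1 = d/dx1 - g/(x1-x2) s ,   D_2 = d/dx2 + g/(x1-x2) s ,
# where s swaps x1 <-> x2 (reflection in the root alpha = e1 - e2).
from sympy import symbols, diff, simplify

x1, x2, g = symbols('x1 x2 g')

def refl(e):
    """Reflection s_alpha for alpha = e1 - e2: swap x1 and x2."""
    return e.subs({x1: x2, x2: x1}, simultaneous=True)

def D(e, i):
    """Dunkl operator D_i = d_i - g*alpha_i/(alpha, x) * s, alpha = (1, -1)."""
    var, alpha_i = (x1, 1) if i == 1 else (x2, -1)
    return diff(e, var) - g*alpha_i/(x1 - x2)*refl(e)

f = x1**3 + x1*x2**2          # arbitrary test polynomial
assert simplify(D(D(f, 2), 1) - D(D(f, 1), 2)) == 0
```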
A universal Weyl-invariant polynomial is $\sigma_{2}(x)=(x,x)=:r^{2}$. The
corresponding integral $I_{2}$ is (minus twice) the Hamiltonian of the system,
$H\ =\ -\tfrac{1}{2}{\mathrm{res}}(\mathcal{D},\mathcal{D})\ =\
\tfrac{1}{2}\sum_{i=1}^{N}p_{i}^{2}+\sum_{\alpha\in
R^{+}}\frac{g_{\alpha}(g_{\alpha}{-}1)(\alpha,\alpha)}{2\,(\alpha,x)^{2}}\qquad\text{with}\quad
p_{i}=-{\mathrm{i}}\partial_{i}\ ,$ (2)
and is invariant under $g_{\alpha}\to 1{-}g_{\alpha}$ for any $\alpha$. For
$g_{\alpha}=0$ or $g_{\alpha}=1$, the (free) Liouville charges are simply
given by
$I_{k}\ =\ \sigma_{k}(p)\ .$ (3)
### 1.2 Energy spectrum
The energy spectrum is continuous with $E\geq 0$ but highly degenerate. The
$H$ eigenstates may be labelled by the energy $E$ and by $N{-}1$ additional
quantum numbers $\ell_{k}$ corresponding to the Weyl-invariant polynomials
$\sigma_{k}$ (other than $\sigma_{2}$) and combining in the generalized
angular momentum
$q\ =\ \sum_{\alpha}g_{\alpha}+{\sum_{\\{k\\}}}^{\prime}k\,\ell_{k}$ (4)
where the prime indicates leaving out $k{=}2$. They are conveniently found by
separating the Schrödinger equation in spherical coordinates
$(r,\vec{\theta})$, yielding a basis of eigenfunctions
$\Psi^{(\\{g_{\alpha}\\})}_{{\scriptscriptstyle E},\\{\ell_{k}\\}}(x)\ =\
r^{-(N-2)/2}\,J_{q+(N-2)/2}({\scriptstyle\sqrt{2E}}\,r)\
v^{(\\{g_{\alpha}\\})}_{\\{\ell_{k}\\}}(\vec{\theta})$ (5)
with a Bessel-type radial dependence. The angular part of the wave function is
best expressed as
$v^{(\\{g_{\alpha}\\})}_{\\{\ell_{k}\\}}(\vec{\theta})\ =\
r^{-q}{\Delta\vphantom{\big{|}}}^{g}\,h^{(\\{g_{\alpha}\\})}_{\\{\ell_{k}\\}}(x)\qquad\text{with}\quad{\Delta\vphantom{\big{|}}}^{g}\
=\ \prod_{\alpha}(\alpha,x)^{g_{\alpha}}$ (6)
and a Dunkl-deformed harmonic polynomial (footnote: it is annihilated by
$\tilde{H}={\Delta\vphantom{\big{|}}}^{-g}H\,{\Delta\vphantom{\big{|}}}^{g}=-\tfrac{1}{2}\sum_{i}\partial_{i}^{2}+O(g)$)
$h^{(\\{g_{\alpha}\\})}_{\\{\ell_{k}\\}}(x)\ =\
r^{N-2+2q}\,\Bigl{\\{}{\prod_{\\{k\\}}}^{\prime}\sigma_{k}(\mathcal{D})^{\ell_{k}}\Bigr{\\}}\,r^{-N+2-2\sum_{\alpha}g_{\alpha}}$
(7)
of degree $\sum^{\prime}_{\\{k\\}}k\,\ell_{k}$. It may be noted that these
states generally are not eigenstates of the other Liouville charges but can be
linearly combined to jointly diagonalize all of them.
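As a small consistency check of the Bessel-type radial factor in (5), note that for $N=3$ and integer $q$ the combination $r^{-1/2}J_{q+1/2}(\sqrt{2E}\,r)$ is proportional to the spherical Bessel function $j_{q}(\sqrt{2E}\,r)$. The sketch below verifies for $q=1$ (our choice of sample value) that it solves the radial equation left after separating off an angular eigenfunction of generalized angular momentum $q$:

```python
# Check (N = 3, q = 1) that R(r) = j_q(k r) solves the radial equation
#   -1/2 ( R'' + 2 R'/r ) + q(q+1)/(2 r^2) R = E R ,   E = k^2/2 .
from sympy import symbols, sin, cos, diff, simplify

r, k = symbols('r k', positive=True)
q = 1
R = sin(k*r)/(k*r)**2 - cos(k*r)/(k*r)     # spherical Bessel j_1(k r)

residual = (-diff(R, r, 2)/2 - diff(R, r)/r
            + q*(q + 1)/(2*r**2)*R - k**2/2*R)
assert simplify(residual) == 0
```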
### 1.3 Conformal algebra
The Hamiltonian (2) and the two operators
$D=\tfrac{1}{2}\sum_{i=1}^{N}(x^{i}p_{i}+p_{i}x^{i}){\qquad{\rm
and}\qquad}K=\tfrac{1}{2}\sum_{i=1}^{N}(x^{i})^{2}={{\textstyle\frac{1}{2}}}(x,x)$
(8)
form the basis of a conformal algebra $sl(2,\mathbb{R})$,
$[D,H]=2{\mathrm{i}}H\ ,\qquad[D,K]=-2{\mathrm{i}}K\
,\qquad[K,H]={\mathrm{i}}D\ ,$ (9)
where $D$ generates the scale transformations (or dilatations) and $K$ the
special conformal transformations.
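The commutators (9) do not depend on the couplings or on $N$, since the potential is homogeneous of degree $-2$. A minimal computer-algebra check, using the free one-degree-of-freedom representation $H=p^{2}/2$, $D=(xp{+}px)/2$, $K=x^{2}/2$ with $p=-{\mathrm{i}}\,\partial_x$:

```python
# Verify the sl(2,R) relations (9) in the free one-particle case:
#   H = p^2/2 ,  D = (xp + px)/2 = -i(x d/dx + 1/2) ,  K = x^2/2 .
from sympy import symbols, Function, I, diff, simplify

x = symbols('x')
f = Function('f')(x)

H = lambda u: -diff(u, x, 2)/2
D = lambda u: -I*(x*diff(u, x) + u/2)
K = lambda u: x**2/2*u

comm = lambda A, B, u: A(B(u)) - B(A(u))

assert simplify(comm(D, H, f) - 2*I*H(f)) == 0   # [D,H] =  2iH
assert simplify(comm(D, K, f) + 2*I*K(f)) == 0   # [D,K] = -2iK
assert simplify(comm(K, H, f) - I*D(f)) == 0     # [K,H] =  iD
```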
This dynamical symmetry is enhanced by extending $H$ to the entire set
$\\{I_{k}\\}$ of Liouville charges, which produces an infinite quadratic
algebra [6, 7]. The dilatation operator yields a $\mathbb{Z}$ grading, but
each commutation with $K$ provides a new layer of operators, starting with
$\tfrac{1}{{\mathrm{i}}}[K,I_{k}]=kJ_{k}$ (10)
satisfying the commutation relations
$\tfrac{1}{{\mathrm{i}}}[D,J_{k}]=(k{-}2)J_{k}{\qquad{\rm
and}\qquad}\tfrac{1}{{\mathrm{i}}}[H,J_{k}]=-I_{k}\ .$ (11)
Note that $J_{1}=\sum_{i}x^{i}$ and $J_{2}=D$. There are two obvious ways to
build further integrals of motion, which will not be in involution however.
The first one admits explicit time dependence,
$\bar{J}_{k}\ :=\
J_{k}-t\,I_{k}\qquad\Rightarrow\qquad{\textstyle\frac{{\mathrm{d}}}{{\mathrm{d}}t}}\bar{J}_{k}\
=\ \partial_{t}\bar{J}_{k}+{\mathrm{i}}[H,\bar{J}_{k}]\ =\ 0\ .$ (12)
The second one forms the antisymmetric combinations
$L_{k,\ell}\ :=\
\tfrac{1}{2}(I_{k}J_{\ell}+J_{\ell}I_{k})-\tfrac{1}{2}(I_{\ell}J_{k}+J_{k}I_{\ell})\qquad\Rightarrow\qquad[H,L_{k,\ell}]\
=\
-\tfrac{{\mathrm{i}}}{2}(I_{k}I_{\ell}+I_{\ell}I_{k}-I_{\ell}I_{k}-I_{k}I_{\ell})\
=\ 0\ .$ (13)
The $\bar{J}_{k}$ or the $L_{k,\ell}$ form overcomplete sets of constants of
motion, and there exist many options for a functionally independent complete
subset. Here, we choose
$\displaystyle F_{k}\ :=\ L_{2,k}\ =\
\\{H,J_{k}\\}-{{\textstyle\frac{1}{2}}}\\{I_{k},D\\}$ (14)
with the same $N$ lowest values for $k$ determined by the first Weyl-invariant
polynomials $\sigma_{k}$. Since $F_{2}\equiv 0$ by definition, these provide
$N{-}1$ additional integrals of motion, revealing the complete
superintegrability of the rational quantum Calogero model. Our choice of extra
integrals bears a close relation to the Casimir element of the conformal
algebra,
$C\ =\ KH+HK-\tfrac{1}{2}D^{2}\ ,$ (15)
which generates the $F_{k}$ directly from the $I_{k}$,
$\tfrac{1}{{\mathrm{i}}}[C,I_{k}]\ =\ k\,F_{k}\ .$ (16)
The quadratic algebra spanned by the $I_{k}$ and $F_{k}$ has been presented in
[7].
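The Casimir relation (16) can likewise be tested symbolically. The sketch below takes $k=3$ in the free one-particle case, where $I_{3}=p^{3}$, and (10) gives $J_{3}=\tfrac12(xp^{2}{+}p^{2}x)$; it checks that $[C,I_{3}]=3{\mathrm{i}}F_{3}$ with $F_{3}$ built as in (14):

```python
# Check (1/i)[C, I_k] = k F_k for k = 3, free one-particle case:
#   I_3 = p^3 ,  J_3 = (x p^2 + p^2 x)/2 ,  F_3 = {H,J_3} - (1/2){I_3,D} .
from sympy import symbols, Function, I, diff, simplify

x = symbols('x')
f = Function('f')(x)

H  = lambda u: -diff(u, x, 2)/2                  # p^2/2 ,  p = -i d/dx
D  = lambda u: -I*(x*diff(u, x) + u/2)           # (xp + px)/2
K  = lambda u: x**2/2*u
I3 = lambda u: I*diff(u, x, 3)                   # p^3 = (-i)^3 d^3/dx^3
J3 = lambda u: -x*diff(u, x, 2) - diff(u, x)     # (x p^2 + p^2 x)/2
C  = lambda u: K(H(u)) + H(K(u)) - D(D(u))/2     # Casimir (15)
F3 = lambda u: H(J3(u)) + J3(H(u)) - (I3(D(u)) + D(I3(u)))/2

# [C, I_3] = 3i F_3
assert simplify(C(I3(f)) - I3(C(f)) - 3*I*F3(f)) == 0
```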
### 1.4 Intertwining operators
The previous results were obtained for generic real couplings $g_{\alpha}$.
For integer coupling values, there appears one additional independent constant
of motion, due to the invariance of $H$ under $g\to 1{-}g$ and the existence
of intertwining (or shift) operators $M(g)$. The latter are constructed from
Weyl anti-invariant polynomials $\tau_{m}(x)$, again by replacing the
arguments $x^{i}$ with the Dunkl operators $\mathcal{D}_{i}$ and restricting
the result to Weyl-symmetric functions, as we did for the construction of the
Liouville integrals. The degree $m$ of those polynomials and the number of
independent ones depend on the root system under consideration. Intertwining
operators of this kind have been studied with this approach for angular and
trigonometric Calogero models [10, 11, 12].
Let us be more concrete for the simply-laced situation, $g_{\alpha}=g$. Any
intertwiner $M(g)$ establishes a relation between the Liouville integrals at
integrally shifted couplings,
$M(g)\,I_{k}(g)\ =\ I_{k}(g{+}1)\,M(g){\qquad{\rm
and}\qquad}M(1{-}g)\,I_{k}(g)\ =\ I_{k}(g{-}1)\,M(1{-}g)\ ,$ (17)
and hence transports eigenstates of $I_{k}(g)$ to eigenstates of
$I_{k}(g{+}1)$ (or zero). In particular,
$M(g)\,\Psi^{(g)}_{{\scriptscriptstyle E},\\{\ell_{k}\\}}(x)\ =\
\sum_{\\{k\\}}c_{\\{\ell_{k}\\}}^{\\{\ell^{\prime}_{k}\\}}(g)\,\Psi^{(g+1)}_{{\scriptscriptstyle
E},\\{\ell^{\prime}_{k}\\}}(x)\ ,$ (18)
with some coefficients $c_{\\{\ell_{k}\\}}^{\\{\ell^{\prime}_{k}\\}}(g)$. In
this way, simultaneous $I_{k}$ eigenstates at integer coupling can be obtained
from free eigenstates by a successive application of shift operators. As
another consequence, by shifting the coupling up and then down again, the
operator $M(-g)M(g)$ commutes with all Liouville charges,
$[M(-g)M(g),I_{k}(g)]\ =\ 0\ ,$ (19)
but it is not a new integral of motion since it is expressed in terms of them,
$M(-g)M(g)\ =\ \mathcal{R}(I(g))\qquad\textrm{for}\quad I=\\{I_{k}\\}\ .$ (20)
This polynomial of the $I_{k}$ must not depend on $g$ explicitly (take all
$(\alpha,x)\to\infty$). Therefore, one may easily compute it from the free
case $g{=}0$ ,
$\mathcal{R}(I)\ =\ M(0)^{2}\ .$ (21)
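The mechanism behind (17) and (20) is conveniently illustrated in a one-degree-of-freedom caricature (our toy normalization, not one of the models treated here): $H(g)=-\tfrac12\partial^{2}+g(g{-}1)/(2x^{2})$ with shift operator $M(g)=\partial-g/x$. A sympy sketch:

```python
# One-degree-of-freedom caricature of the intertwining relations (17):
#   H(g) = -1/2 d^2/dx^2 + g(g-1)/(2 x^2) ,   M(g) = d/dx - g/x ,
# with  M(g) H(g) = H(g+1) M(g)  and  M(1-g) H(g) = H(g-1) M(1-g),
# and the wrap-around product M(-g) M(g) expressed via H(g), cf. (20)-(21).
from sympy import symbols, Function, diff, simplify

x, g = symbols('x g')
f = Function('f')(x)

H = lambda u, c: -diff(u, x, 2)/2 + c*(c - 1)/(2*x**2)*u
M = lambda u, c: diff(u, x) - c/x*u

assert simplify(M(H(f, g), g)     - H(M(f, g), g + 1))     == 0
assert simplify(M(H(f, g), 1 - g) - H(M(f, 1 - g), g - 1)) == 0
# M(-g) M(g) = d^2/dx^2 - g(g-1)/x^2  ( = -2 H(g), polynomial in the charges )
assert simplify(M(M(f, g), -g) - (diff(f, x, 2) - g*(g - 1)/x**2*f)) == 0
```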
A novel feature appears for integral values of the coupling, say
$g=2,3,4,\ldots$. Shifting it all the way from $1{-}g$ to $g$, the combined
intertwiner
$Q(g)\ =\ M(g{-}1)M(g{-}2)\cdots M(1)M(0)M(-1)\cdots M(2{-}g)M(1{-}g)$ (22)
also commutes with all Liouville charges but, as a product of an odd number of
intertwiners, it is functionally independent. Only its square belongs to the
ring of Liouville charges,
$Q(g)^{2}\ =\ \mathcal{R}(I(g))^{2g-1}\ .$ (23)
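In the same one-degree-of-freedom caricature ($H(g)=-\tfrac12\partial^{2}+g(g{-}1)/(2x^{2})$, $M(g)=\partial-g/x$, a toy model of ours rather than one of the systems treated below), the structure of (22)-(23) can be verified explicitly at $g=2$, where $2g{-}1=3$:

```python
# Toy check of (22)-(23) at g = 2:  Q(2) = M(1) M(0) M(-1) commutes with
# H(2) and its square equals (-2 H(2))^3, the cube of a Liouville charge.
from sympy import symbols, Function, diff, simplify

x = symbols('x')
f = Function('f')(x)

M  = lambda u, c: diff(u, x) - c/x*u
H2 = lambda u: -diff(u, x, 2)/2 + 1/x**2*u     # H(g=2), g(g-1) = 2
B  = lambda u: diff(u, x, 2) - 2/x**2*u        # -2 H(2)
Q  = lambda u: M(M(M(u, -1), 0), 1)            # Q(2) = M(1) M(0) M(-1)

assert simplify(Q(H2(f)) - H2(Q(f))) == 0      # [Q(2), H(2)] = 0
assert simplify(Q(Q(f)) - B(B(B(f)))) == 0     # Q(2)^2 = (-2 H(2))^3
```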
Adjoining $Q(g)$ to the $2N{-}1$ conserved quantities $\\{I_{k},F_{k}\\}$
provides a $\mathbb{Z}_{2}$ grading and makes our model analytically
integrable, with $2N$ independent integrals of motion. We suspect other $Q$
intertwiners based on different shift operators $M$ to be functionally
dependent. The full set of commutators (also with the $F_{\ell}$) is given in
[7] for the $A_{n-1}\oplus A_{1}$ root system.
In the non-simply-laced case, one expects to find polynomials
$\tau^{\prime}_{m^{\prime}}$ antisymmetric under short-root reflections but
symmetric under long-root ones, as well as polynomials $\tau_{m}$ with the
opposite behavior. Inserting Dunkl operators as arguments and performing the
symmetric restriction, we produce $I_{k}$ intertwiners $M_{s}(g_{s},g_{\ell})$
and $M_{\ell}(g_{s},g_{\ell})$, which shift by unity only one coupling but not
the other. Also here, those shift operators allow one to build the joint
$I_{k}$ eigenstates for $(g_{s},g_{\ell})\in\mathbb{Z}\times\mathbb{Z}$ by
repeated application on the free eigenstates. Since we can independently
“wrap” from $1{-}g$ to $g$ for the short roots or for the long roots, there
exist two grading operators, $Q_{s}$ and $Q_{\ell}$, which we expect to be
functionally independent of one another.
## 2 The $\bf{A_{1}\oplus A_{2}}$ model
### 2.1 Integrals of motion
This is the traditional rational three-particle Calogero model, enhanced by an
external inverse-square potential for the center of mass. It is reducible
because the center-of-mass coordinate and momentum,
$X\ =\ {\textstyle\frac{1}{3}}(x^{1}+x^{2}+x^{3}){\qquad{\rm and}\qquad}P\ =\
p_{1}+p_{2}+p_{3}\ ,$ (24)
can be separated from the other degrees of freedom. Therefore, we may
introduce two coupling constants, say $g$ and $g^{\prime}$. The Hamiltonian of
the system (2) is given by ($i,j=1,2,3$)
$\displaystyle H$ $\displaystyle\ =\ \tfrac{1}{2}\sum_{i}p_{i}^{2}\ +\
\sum_{i<j}\frac{g(g{-}1)}{(x^{i}{-}x^{j})^{2}}\ +\
\frac{3\,g^{\prime}(g^{\prime}{-}1)}{2(x^{1}{+}x^{2}{+}x^{3})^{2}}$ (25)
$\displaystyle\ =\ \tfrac{1}{6}P^{2}\ +\
\frac{g^{\prime}(g^{\prime}{-}1)}{6\,X^{2}}\ +\
\tfrac{1}{6}\sum_{i<j}(p_{i}{-}p_{j})^{2}\ +\
\sum_{i<j}\frac{g(g{-}1)}{(x^{i}{-}x^{j})^{2}}\ =\ H_{1}\ +\ H_{2}\ .$
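The equality of the two lines of (25) rests on two elementary identities, which the sketch below verifies (here `cp` stands for $g^{\prime}(g^{\prime}{-}1)$):

```python
# Verify the algebraic split in (25): with P = p1+p2+p3, X = (x1+x2+x3)/3,
#   (1/2) sum_i p_i^2 = P^2/6 + (1/6) sum_{i<j} (p_i - p_j)^2 ,
#   3 g'(g'-1) / (2 (x1+x2+x3)^2) = g'(g'-1) / (6 X^2) .
from sympy import symbols, Rational, expand, simplify

p1, p2, p3, x1, x2, x3, cp = symbols('p1 p2 p3 x1 x2 x3 cp')
P = p1 + p2 + p3
X = (x1 + x2 + x3)/3

kin   = Rational(1, 2)*(p1**2 + p2**2 + p3**2)
split = P**2/6 + Rational(1, 6)*((p1-p2)**2 + (p2-p3)**2 + (p3-p1)**2)
assert expand(kin - split) == 0

# cp stands for g'(g'-1)
assert simplify(3*cp/(2*(x1+x2+x3)**2) - cp/(6*X**2)) == 0
```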
One can choose the positive roots as
$\mathcal{R}_{+}\ =\ \\{e_{1}{-}e_{2},\hskip 5.69046pte_{1}{-}e_{3},\hskip
5.69046pte_{2}{-}e_{3},\hskip 5.69046pte_{1}{+}e_{2}{+}e_{3}\\},$ (26)
such that the Dunkl operators (1) read
$\mathcal{D}_{i}\ =\ \partial_{i}\ -\ \sum_{j(\neq
i)}\frac{g}{x^{i}{-}x^{j}}s_{i-j}\ -\
\frac{g^{\prime}}{x^{1}{+}x^{2}{+}x^{3}}s_{0}\ ,$ (27)
where the $s_{i-j}$ are the two-particle permutation operators,
$\displaystyle s_{1-2}$ $\displaystyle:\ (x^{1},x^{2},x^{3})\ \mapsto\
(x^{2},x^{1},x^{3})\ ,$ (28) $\displaystyle s_{1-3}$ $\displaystyle:\
(x^{1},x^{2},x^{3})\ \mapsto\ (x^{3},x^{2},x^{1})\ ,$ $\displaystyle s_{2-3}$
$\displaystyle:\ (x^{1},x^{2},x^{3})\ \mapsto\ (x^{1},x^{3},x^{2})\ ,$
$\displaystyle\textrm{and}\quad s_{0}$ $\displaystyle:\ (x^{1},x^{2},x^{3})\
\mapsto\ (x^{1},x^{2},x^{3})-2X(1,1,1)\ .$
The lowest three Weyl-invariant polynomials are
$\displaystyle\sigma_{2}(x)$ $\displaystyle\ =\
(x^{1})^{2}+(x^{2})^{2}+(x^{3})^{2}\ ,$ (29)
$\displaystyle\tilde{\sigma}_{2}(x)$ $\displaystyle\ =\
(x^{1}{-}x^{2})^{2}+(x^{2}{-}x^{3})^{2}+(x^{3}{-}x^{1})^{2}\ ,$
$\displaystyle\tilde{\sigma}_{3}(x)$ $\displaystyle\ =\
(x^{1}{+}x^{2}{-}2x^{3})(x^{2}{+}x^{3}{-}2x^{1})(x^{3}{+}x^{1}{-}2x^{2})\ ,$
where the center-of-mass coordinate is only contained in $\sigma_{2}$.
In this basis, the first three Liouville integrals read
$\displaystyle I_{2}$ $\displaystyle\ =\ -{\rm
res}\bigl{(}\mathcal{D}_{1}^{2}{+}\mathcal{D}_{2}^{2}{+}\mathcal{D}_{3}^{2}\bigr{)}\
=\ 2\,H\ ,$ (30) $\displaystyle\tilde{I}_{2}$ $\displaystyle\ =\ -{\rm
res}\bigl{(}(\mathcal{D}_{1}{-}\mathcal{D}_{2})^{2}+(\mathcal{D}_{2}{-}\mathcal{D}_{3})^{2}+(\mathcal{D}_{3}{-}\mathcal{D}_{1})^{2}\bigr{)}\
=\ 6\,H_{2}\ ,$ $\displaystyle\tilde{I}_{3}$ $\displaystyle\ =\ \
{\mathrm{i}}\,{\rm
res}\bigl{(}(\mathcal{D}_{1}{+}\mathcal{D}_{2}{-}2\mathcal{D}_{3})(\mathcal{D}_{2}{+}\mathcal{D}_{3}{-}2\mathcal{D}_{1})(\mathcal{D}_{3}{+}\mathcal{D}_{1}{-}2\mathcal{D}_{2})\bigr{)}$
$\displaystyle\ =\
\prod_{i=1}^{3}(P{-}3p_{i})-9g(g{-}1)\left(\frac{(P{-}3p_{3})}{(x^{1}{-}x^{2})^{2}}+\frac{(P{-}3p_{2})}{(x^{3}{-}x^{1})^{2}}+\frac{(P{-}3p_{1})}{(x^{2}{-}x^{3})^{2}}\right)\
,$
and they are functionally independent and in involution,
$[I_{2},\tilde{I}_{2}]=[I_{2},\tilde{I}_{3}]=[\tilde{I}_{2},\tilde{I}_{3}]=0\
.$ (31)
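A quick classical cross-check of (31) replaces the operators by their commuting counterparts and computes Poisson brackets (the quantum commutators vanish exactly, so in particular the classical brackets must vanish). With `c` standing for $g(g{-}1)$ and `cp` for $g^{\prime}(g^{\prime}{-}1)$:

```python
# Classical check of the involution (31): the Poisson bracket of H and
# the classical counterpart of I~_3 from (30) vanishes identically.
from sympy import symbols, Rational, diff, simplify

x1, x2, x3, p1, p2, p3, c, cp = symbols('x1 x2 x3 p1 p2 p3 c cp')
xs, ps = [x1, x2, x3], [p1, p2, p3]
P = p1 + p2 + p3

H = (Rational(1, 2)*(p1**2 + p2**2 + p3**2)
     + c*(1/(x1-x2)**2 + 1/(x2-x3)**2 + 1/(x3-x1)**2)
     + 3*cp/(2*(x1+x2+x3)**2))
I3t = ((P - 3*p1)*(P - 3*p2)*(P - 3*p3)
       - 9*c*((P - 3*p3)/(x1-x2)**2 + (P - 3*p2)/(x3-x1)**2
              + (P - 3*p1)/(x2-x3)**2))

poisson = sum(diff(H, xs[i])*diff(I3t, ps[i]) - diff(H, ps[i])*diff(I3t, xs[i])
              for i in range(3))
assert simplify(poisson) == 0
```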
The two lowest additional integrals of motion, which are not in involution,
are
$\tilde{F}_{2}\ =\
\\{H,\tilde{J}_{2}\\}-{{\textstyle\frac{1}{2}}}\\{\tilde{I}_{2},D\\}{\qquad{\rm
and}\qquad}\tilde{F}_{3}\ =\ \\{H,\tilde{J}_{3}\\}-\
{{\textstyle\frac{1}{2}}}\\{\tilde{I}_{3},D\\}\ ,\\\ $ (32)
where
$\displaystyle\tilde{J}_{2}$ $\displaystyle\ =\
-\tfrac{3}{2}\sum_{i=1}^{3}\\{X{-}x^{i},p_{i}\\}\ ,$ (33)
$\displaystyle\tilde{J}_{3}$ $\displaystyle\ =\
\sum_{i=1}^{3}(P{-}3p_{i})(X{-}x^{i})(P{-}3p_{i})-9g(g{-}1)\left(\frac{X{-}x^{1}}{(x^{2}{-}x^{3})^{2}}+\frac{X{-}x^{2}}{(x^{3}{-}x^{1})^{2}}+\frac{X{-}x^{3}}{(x^{1}{-}x^{2})^{2}}\right)\
.$
### 2.2 Energy eigenstates
The Hamiltonian eigenfunctions for this case are
$\Psi^{(g,g^{\prime})}_{{\scriptscriptstyle E},\ell_{2},\ell_{3}}(x)\ \equiv\
{\langle}x\mid\ell_{2},\ell_{3}{\rangle}_{g,g^{\prime}}\ =\
j_{q}({\scriptstyle\sqrt{2E}}\,r)\,r^{-q}{\Delta\vphantom{\big{|}}}^{g}\,{X\vphantom{\big{|}}}^{g^{\prime}}\,h^{(g,g^{\prime})}_{\ell_{2},\ell_{3}}(x)\qquad\textrm{with}\qquad
q=3g+g^{\prime}+2\ell_{2}+3\ell_{3}\ ,$ (34)
where $\Delta=(x^{1}{-}x^{2})(x^{2}{-}x^{3})(x^{3}{-}x^{1})$ is the basic
anti-invariant, $j_{q}$ denotes the spherical Bessel function, and
$\displaystyle h_{\ell_{2},\ell_{3}}^{(g,g^{\prime})}(x)$ $\displaystyle\sim\
r^{6g+2g^{\prime}+1+4\ell_{2}+6\ell_{3}}\,{\Delta\vphantom{\big{|}}}^{-g}\,{X\vphantom{\big{|}}}^{-g^{\prime}}\,\tilde{\sigma}_{2}({\cal
D})^{\ell_{2}}\,\tilde{\sigma}_{3}({\cal
D})^{\ell_{3}}\,{X\vphantom{\big{|}}}^{g^{\prime}}\,{\Delta\vphantom{\big{|}}}^{g}\,r^{-1-6g-2g^{\prime}}$
(35) $\displaystyle\sim\
r^{6g+2g^{\prime}+1+4\ell_{2}+6\ell_{3}}\,\tilde{\sigma}_{2}(\widetilde{{\cal
D}})^{\ell_{2}}\,\tilde{\sigma}_{3}(\widetilde{{\cal
D}})^{\ell_{3}}\,r^{-1-6g-2g^{\prime}}$
is a deformed harmonic polynomial of degree $2\ell_{2}{+}3\ell_{3}$.
Conjugation with
${X\vphantom{\big{|}}}^{g^{\prime}}{\Delta\vphantom{\big{|}}}^{g}$ defines the
“potential-free” Dunkl operators
$\widetilde{{\cal D}}_{i}\ =\ \partial_{i}\ +\sum_{\alpha\in
R^{+}}\dfrac{g_{\alpha}\alpha_{i}}{(\alpha,x)}(1{-}s_{\alpha})\ =\
\partial_{i}\ +\sum_{j(\neq i)}\frac{g}{x^{i}{-}x^{j}}(1{-}s_{i-j})\ +\
\frac{g^{\prime}}{x^{1}{+}x^{2}{+}x^{3}}(1{-}s_{0})\ .$ (36)
The first polynomials read (up to a normalization constant)
$\displaystyle h^{(g,g^{\prime})}_{0,0}(x)$ $\displaystyle\ =\ 1\ ,$ (37)
$\displaystyle h^{(g,g^{\prime})}_{1,0}(x)$ $\displaystyle\ =\
6(3g{+}1)\sigma_{2}-(6g{+}2g^{\prime}{+}3)\tilde{\sigma}_{2}\ ,$
$\displaystyle h^{(g,g^{\prime})}_{0,1}(x)$ $\displaystyle\ =\
\tilde{\sigma}_{3}\ ,$ $\displaystyle h^{(g,g^{\prime})}_{2,0}(x)$
$\displaystyle\ =\
36(3g{+}1)(3g{+}2)\sigma_{2}^{2}+(6g{+}2g^{\prime}{+}5)(6g+2g^{\prime}{+}7)\tilde{\sigma}_{2}^{2}-12(3g{+}2)(6g{+}2g^{\prime}{+}5)\sigma_{2}\tilde{\sigma}_{2}\
,$ $\displaystyle h^{(g,g^{\prime})}_{1,1}(x)$ $\displaystyle\ =\
(6g{+}2g^{\prime}{+}9)\tilde{\sigma}_{2}\tilde{\sigma}_{3}-6(3g{+}4)\sigma_{2}\tilde{\sigma}_{3}\
,$ $\displaystyle h^{(g,g^{\prime})}_{0,2}(x)$ $\displaystyle\ =\
\kappa_{1}\tilde{\sigma}_{2}^{2}\sigma_{2}+\kappa_{2}\tilde{\sigma}_{2}\sigma_{2}^{2}+\kappa_{3}\sigma_{2}^{3}+\kappa_{4}\tilde{\sigma}_{3}^{2}\
,$ $\displaystyle h^{(g,g^{\prime})}_{3,0}(x)$ $\displaystyle\ =\
\kappa_{5}\sigma_{2}^{3}+\kappa_{6}\tilde{\sigma}_{2}\sigma_{2}^{2}+\kappa_{7}\tilde{\sigma}_{2}^{2}\sigma_{2}+\kappa_{8}\tilde{\sigma}_{2}^{3}\
,$ $\displaystyle h^{(g,g^{\prime})}_{2,1}(x)$ $\displaystyle\ =\
\kappa_{9}\tilde{\sigma}_{3}\sigma_{2}^{2}+\kappa_{10}\tilde{\sigma}_{2}\tilde{\sigma}_{3}\sigma_{2}+\kappa_{11}\tilde{\sigma}_{2}^{2}\tilde{\sigma}_{3}\
,$
where the coefficients $\kappa_{i}$ are given in Appendix A.
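The tabulated polynomials can be checked directly: conjugating $H$ by ${\Delta\vphantom{\big{|}}}^{g}{X\vphantom{\big{|}}}^{g^{\prime}}$ turns it (our explicit form of the footnote's $\tilde{H}$, written out as a drift-diffusion operator) into $\tilde{H}=-\tfrac{1}{2}\sum_{i}\partial_{i}^{2}-\sum_{i}\bigl[g\sum_{j\neq i}(x^{i}{-}x^{j})^{-1}+g^{\prime}(x^{1}{+}x^{2}{+}x^{3})^{-1}\bigr]\partial_{i}$, which must annihilate each $h^{(g,g^{\prime})}_{\ell_{2},\ell_{3}}$. A sympy sketch for the first nontrivial entries of (37):

```python
# Check that h_{1,0}, h_{0,1}, h_{1,1} from (37) are annihilated by the
# conjugated operator  H~ = -1/2 sum_i d_i^2
#   - sum_i [ g sum_{j!=i} 1/(x_i - x_j) + g'/(x1+x2+x3) ] d_i .
from sympy import symbols, diff, simplify

x1, x2, x3, g, gp = symbols('x1 x2 x3 g gp')
xs = [x1, x2, x3]

def Ht(h):
    lap = sum(diff(h, xi, 2) for xi in xs)
    drift = sum((g*sum(1/(xs[i] - xs[j]) for j in range(3) if j != i)
                 + gp/(x1 + x2 + x3))*diff(h, xs[i]) for i in range(3))
    return -lap/2 - drift

s2  = x1**2 + x2**2 + x3**2
st2 = (x1-x2)**2 + (x2-x3)**2 + (x3-x1)**2
st3 = (x1 + x2 - 2*x3)*(x2 + x3 - 2*x1)*(x3 + x1 - 2*x2)

h10 = 6*(3*g + 1)*s2 - (6*g + 2*gp + 3)*st2
h01 = st3
h11 = (6*g + 2*gp + 9)*st2*st3 - 6*(3*g + 4)*s2*st3

for h in (h10, h01, h11):
    assert simplify(Ht(h)) == 0
```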
### 2.3 Intertwining operators
The $A_{1}{\oplus}A_{2}$ model features two independent anti-invariant
polynomials,
$\tau^{\prime}_{1}(x)\ =\ x^{1}{+}x^{2}{+}x^{3}\ =\ 3\,X{\qquad{\rm
and}\qquad}\tau_{3}(x)\ =\ (x^{1}{-}x^{2})(x^{2}{-}x^{3})(x^{3}{-}x^{1})\ =\
{\Delta\vphantom{\big{|}}}\ ,$ (38)
where the first one is invariant under $s_{i-j}$ and anti-invariant under
$s_{0}$, and the second one behaves oppositely. They lead to the two
intertwining operators
$M^{\prime}(g,g^{\prime})\ =\
\text{res}\bigl{(}\mathcal{D}_{1}{+}\mathcal{D}_{2}{+}\mathcal{D}_{3}\bigr{)}{\qquad{\rm
and}\qquad}M(g,g^{\prime})\ =\
\text{res}\bigl{(}(\mathcal{D}_{1}{-}\mathcal{D}_{2})(\mathcal{D}_{2}{-}\mathcal{D}_{3})(\mathcal{D}_{3}{-}\mathcal{D}_{1})\bigr{)}$
(39)
satisfying (17) for one of the couplings but not shifting the other one. For
the first intertwiner one easily finds
$M^{\prime}(g,g^{\prime})=\sum_{i}\partial_{i}-g^{\prime}/X$ independent of
$g$. The second intertwiner is a more complicated expression; the explicit
form of $M(g,g^{\prime}{=}0)$ is given in [7]. The operators $\mathcal{R}(I)$
(20) associated to $M$ are obtained from the free case ($g{=}g^{\prime}{=}0$),
$\displaystyle\mathcal{R}(I)$ $\displaystyle\ =\ M(0,0)^{2}\ =\
(\partial_{1}{-}\partial_{2})^{2}(\partial_{2}{-}\partial_{3})^{2}(\partial_{3}{-}\partial_{1})^{2}$
(40) $\displaystyle\ =\
-\tfrac{1}{27}\bigl{(}(\partial_{1}{+}\partial_{2}{-}2\partial_{3})(\partial_{2}{+}\partial_{3}{-}2\partial_{1})(\partial_{3}{+}\partial_{1}{-}2\partial_{2})\bigr{)}^{2}+\tfrac{1}{54}\bigl{(}(\partial_{1}{-}\partial_{2})^{2}+(\partial_{2}{-}\partial_{3})^{2}+(\partial_{3}{-}\partial_{1})^{2}\bigr{)}^{3}$
$\displaystyle\ =\
\tfrac{1}{27}\tilde{I}_{3}^{2}-\tfrac{1}{54}\tilde{I}_{2}^{3}\ .$
For the $g^{\prime}$ intertwiner $M^{\prime}$, we have
$\mathcal{R^{\prime}}(I)\ =\ M^{\prime}(0,0)^{2}\ =\
(\partial_{1}{+}\partial_{2}{+}\partial_{3})^{2}\ =\
3(\partial_{1}^{2}{+}\partial_{2}^{2}{+}\partial_{3}^{2})-(\partial_{1}{-}\partial_{2})^{2}-(\partial_{2}{-}\partial_{3})^{2}-(\partial_{3}{-}\partial_{1})^{2}\
=\ \tilde{I}_{2}{-}3\,I_{2}\ .$ (41)
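Since both sides of (40) and (41) involve only commuting symbols in the free case, the two identities can be spot-checked with exact arithmetic by replacing each $\partial_{i}$ with a rational number. The following self-contained Python sketch (ours, purely illustrative; not part of the original text) does so at random rational points:

```python
from fractions import Fraction
import random

def check_identities(trials=20):
    """Verify the free-case identities (40) and (41) at random rational points,
    with each commuting symbol \partial_i replaced by a number d_i."""
    rng = random.Random(1)
    for _ in range(trials):
        d1, d2, d3 = (Fraction(rng.randint(-9, 9)) for _ in range(3))
        # Eq. (40): square of the third-order intertwiner at g = g' = 0
        lhs40 = ((d1 - d2) * (d2 - d3) * (d3 - d1)) ** 2
        rhs40 = (-Fraction(1, 27) * ((d1 + d2 - 2*d3) * (d2 + d3 - 2*d1)
                                     * (d3 + d1 - 2*d2)) ** 2
                 + Fraction(1, 54) * ((d1 - d2)**2 + (d2 - d3)**2
                                      + (d3 - d1)**2) ** 3)
        assert lhs40 == rhs40
        # Eq. (41): square of the first-order intertwiner
        lhs41 = (d1 + d2 + d3) ** 2
        rhs41 = (3*(d1**2 + d2**2 + d3**2)
                 - (d1 - d2)**2 - (d2 - d3)**2 - (d3 - d1)**2)
        assert lhs41 == rhs41
    return True

check_identities()
```

Because both sides are polynomials, agreement at generic rational points is a strong (and here exact, since the identities hold pointwise) confirmation.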
Applying a ($g{-}1$)-fold Darboux dressing with $M(h,0)$ for $h=1,2,\ldots
g{-}1$ and $M(h,0)^{*}=M(-h,0)$ to
$Q(1,0)\ =\ M(0,0)\ =\
(\partial_{1}{-}\partial_{2})(\partial_{2}{-}\partial_{3})(\partial_{3}{-}\partial_{1})\
,$ (42)
we obtain an exceptional independent conserved charge $Q(g,0)$ in the case of
integer coupling $g$ and $g^{\prime}{=}0$. Likewise, Darboux dressing with
$M^{\prime}(0,h^{\prime})$ and
$M^{\prime}(0,h^{\prime})^{*}=M^{\prime}(0,-h^{\prime})$ of
$Q^{\prime}(0,1)\ =\ M^{\prime}(0,0)\ =\
\partial_{1}{+}\partial_{2}{+}\partial_{3}$ (43)
produces such a charge $Q^{\prime}(0,g^{\prime})$ for $g{=}0$ and integer
$g^{\prime}$. Combining both, by following a sequence of $M$ and $M^{\prime}$
intertwiners starting either from $(1{-}g,g^{\prime})$ to $(g,g^{\prime})$ or
from $(g,1{-}g^{\prime})$ to $(g,g^{\prime})$, we can extend these special
charges to $Q(g,g^{\prime})$ and $Q^{\prime}(g,g^{\prime})$, respectively, for
all integral values of both couplings. In such cases, the two extra charges
$\displaystyle Q(g,g^{\prime})$ $\displaystyle\ =\
M(g{-}1,g^{\prime})\,M(g{-}2,g^{\prime})\cdots
M(1,g^{\prime})\,M(0,g^{\prime})\,M(-1,g^{\prime})\cdots
M(2{-}g,g^{\prime})\,M(1{-}g,g^{\prime})\ ,$ (44) $\displaystyle
Q^{\prime}(g,g^{\prime})$ $\displaystyle\ =\
M^{\prime}(g,g^{\prime}{-}1)\,M^{\prime}(g,g^{\prime}{-}2)\cdots
M^{\prime}(g,1)\,M^{\prime}(g,0)\,M^{\prime}(g,-1)\cdots
M^{\prime}(g,2{-}g^{\prime})\,M^{\prime}(g,1{-}g^{\prime})$
enhance the nonlinear algebra of integrals of motion to a
$\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}$ graded one,
$\displaystyle{\mathrm{i}}[\tilde{I}_{2},\tilde{F}_{2}]\ =\
2(3\tilde{I}_{2}I_{2}-\tilde{I}_{2}^{2})\
,\quad{\mathrm{i}}[\tilde{I}_{3},\tilde{F}_{2}]\ =\
3(3\tilde{I}_{3}I_{2}-\tilde{I}_{2}\tilde{I}_{3})\ ,$ (45)
$\displaystyle{\mathrm{i}}[\tilde{I}_{2},\tilde{F}_{3}]\ =\
2(3\tilde{I}_{3}I_{2}-\tilde{I}_{3}\tilde{I}_{2})\
,\quad{\mathrm{i}}[\tilde{I}_{3},\tilde{F}_{3}]\ =\
3(-\tilde{I}_{3}^{2}+{\textstyle\frac{3}{2}}\tilde{I}_{2}^{2}I_{2})\ ,$
$\displaystyle{\mathrm{i}}[\tilde{F}_{2},\tilde{F}_{3}]\ =\
\tfrac{1}{2}(\tilde{F}_{3}\tilde{I}_{2}+\tilde{I}_{2}\tilde{F}_{3})-\tfrac{3}{2}(\tilde{F}_{3}I_{2}+I_{2}\tilde{F}_{3})\
,$ $\displaystyle{\mathrm{i}}[Q,\tilde{F}_{2}]\ =\
-3(2g{-}1)Q(\tilde{I}_{2}-3I_{2})\ ,\quad{\mathrm{i}}[Q,\tilde{F}_{3}]\ =\
-3(2g{-}1)Q\tilde{I}_{3}\ ,$
$\displaystyle{\mathrm{i}}[Q^{\prime},\tilde{F}_{2}]\ =\
-(2g^{\prime}{-}1)Q^{\prime}\tilde{I}_{2}\
,\quad{\mathrm{i}}[Q^{\prime},\tilde{F}_{3}]\ =\
-(2g^{\prime}{-}1)Q^{\prime}\tilde{I}_{3}\ ,$ $\displaystyle Q^{2}\ =\
(\mathcal{R}(I))^{2g-1}\ ,\quad Q^{\prime 2}\ =\
(\mathcal{R^{\prime}}(I))^{2g^{\prime}-1}\ ,\quad[Q,Q^{\prime}]\ =\ 0\ .$

### 2.4 Translation-invariant limit
It is instructive to consider the special case of $g^{\prime}{=}0$, where
translation invariance is recovered and the center of mass feels no potential.
In this limit, the reflection $s_{0}$ disappears from consideration, and
the total momentum appears as a first-order conserved charge. It is then more
convenient to replace (29) with the Newton sums
$\displaystyle\sigma_{1}(x)$ $\displaystyle\ =\ x^{1}+x^{2}+x^{3}\ =\ 3\,X\ ,$
(46) $\displaystyle\sigma_{2}(x)$ $\displaystyle\ =\
(x^{1})^{2}+(x^{2})^{2}+(x^{3})^{2}\ ,$ $\displaystyle\sigma_{3}(x)$
$\displaystyle\ =\ (x^{1})^{3}+(x^{2})^{3}+(x^{3})^{3}\ ,$
resulting in the basic Liouville integrals
$\displaystyle I_{1}$ $\displaystyle\ =\ -{\mathrm{i}}\,{\rm
res}\bigl{(}\mathcal{D}_{1}{+}\mathcal{D}_{2}{+}\mathcal{D}_{3}\bigr{)}\ =\ P\
,$ (47) $\displaystyle I_{2}$ $\displaystyle\ =\ \ -{\rm
res}\bigl{(}\mathcal{D}_{1}^{2}{+}\mathcal{D}_{2}^{2}{+}\mathcal{D}_{3}^{2}\bigr{)}\
=\ 2\,H\ ,$ $\displaystyle I_{3}$ $\displaystyle\ =\ \ \ {\mathrm{i}}\,{\rm
res}\bigl{(}\mathcal{D}_{1}^{3}{+}\mathcal{D}_{2}^{3}{+}\mathcal{D}_{3}^{3}\bigr{)}\
=\
\sum_{i}p_{i}^{3}+3\sum_{i<j}^{3}\frac{g(g{-}1)}{(x^{i}{-}x^{j})^{2}}(p_{i}{+}p_{j})\
.$
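As a classical consistency check (our own, not part of the paper), one can verify that the classical counterparts of $I_{2}=2H$ and $I_{3}$ from (47) Poisson-commute exactly; the coupling combination $g(g{-}1)$ is lumped into a single constant `C`, and the cancellation is independent of its value:

```python
from fractions import Fraction
import random

C = Fraction(3, 4)  # stands in for g(g-1); the bracket vanishes for any value

def poisson_H_I3(x, p):
    """Poisson bracket {H, I3} for the translation-invariant model, with
    H = 1/2 sum p_i^2 + sum_{i<j} C/(x_i - x_j)^2 and I3 as in (47)."""
    n = len(x)
    total = Fraction(0)
    for i in range(n):
        dH_dxi = sum(-2*C / (x[i] - x[j])**3 for j in range(n) if j != i)
        dH_dpi = p[i]
        dI3_dpi = 3*p[i]**2 + 3*C*sum(1 / (x[i] - x[j])**2
                                      for j in range(n) if j != i)
        dI3_dxi = sum(-6*C*(p[i] + p[j]) / (x[i] - x[j])**3
                      for j in range(n) if j != i)
        total += dH_dxi*dI3_dpi - dH_dpi*dI3_dxi
    return total

rng = random.Random(11)
for _ in range(10):
    x = [Fraction(v) for v in rng.sample(range(-9, 10), 3)]  # distinct positions
    p = [Fraction(rng.randint(-9, 9)) for _ in range(3)]
    assert poisson_H_I3(x, p) == 0
```

Exact `Fraction` arithmetic makes the vanishing of the bracket an identity at each sampled phase-space point, not merely a numerical approximation.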
In this case, the two further integrals establishing superintegrability may be
chosen as
$F_{1}\ =\ \\{H,3X\\}-{{\textstyle\frac{1}{2}}}\\{P,D\\}{\qquad{\rm
and}\qquad}F_{3}\ =\ \\{H,J_{3}\\}-{{\textstyle\frac{1}{2}}}\\{I_{3},D\\}\ ,$
(48)
with
$J_{3}\ =\ p_{1}x^{1}p_{1}{+}p_{2}x^{2}p_{2}{+}p_{3}x^{3}p_{3}\ +\
g(g{-}1)\bigl{(}\tfrac{x^{1}+x^{2}}{(x^{1}-x^{2})^{2}}+\tfrac{x^{2}+x^{3}}{(x^{2}-x^{3})^{2}}+\tfrac{x^{3}+x^{1}}{(x^{3}-x^{1})^{2}}\bigr{)}\
.$ (49)
Due to the new first-order charge $P$, the energy eigenfunctions carry a label
$\ell_{1}$ rather than $\ell_{2}$,
$\Psi^{(g)}_{{\scriptscriptstyle E},\ell_{1},\ell_{3}}(x)\ =\
j_{q}({\scriptstyle\sqrt{2E}}\,r)\,r^{-q}{\Delta\vphantom{\big{|}}}^{g}\,h^{(g)}_{\ell_{1},\ell_{3}}(x)\qquad\textrm{with}\qquad
q=3g+\ell_{1}+3\ell_{3}\ ,$ (50)
where now
$\displaystyle h_{\ell_{1},\ell_{3}}^{(g)}(x)$ $\displaystyle\sim\
r^{6g+1+2\ell_{1}+6\ell_{3}}\,{\Delta\vphantom{\big{|}}}^{-g}\,\bigl{(}{\cal
D}_{1}{+}{\cal D}_{2}{+}{\cal D}_{3}\bigr{)}^{\ell_{1}}\,\bigl{(}{\cal
D}_{1}^{3}{+}{\cal D}_{2}^{3}{+}{\cal
D}_{3}^{3}\bigr{)}^{\ell_{3}}\,{\Delta\vphantom{\big{|}}}^{g}\,r^{-1-6g}$ (51)
$\displaystyle\sim\ r^{6g+1+2\ell_{1}+6\ell_{3}}\,\bigl{(}\widetilde{{\cal
D}}_{1}{+}\widetilde{{\cal D}}_{2}{+}\widetilde{{\cal
D}}_{3}\bigr{)}^{\ell_{1}}\,\bigl{(}\widetilde{{\cal
D}}_{1}^{3}{+}\widetilde{{\cal D}}_{2}^{3}{+}\widetilde{{\cal
D}}_{3}^{3}\bigr{)}^{\ell_{3}}\,r^{-1-6g}$
is a deformed harmonic polynomial of degree $\ell_{1}{+}3\ell_{3}$. The first
polynomials can be written as
$\displaystyle h_{0,0}^{(g)}$ $\displaystyle=1\ ,$ (52) $\displaystyle
h_{1,0}^{(g)}$ $\displaystyle=\sigma_{1}\ ,$ $\displaystyle h_{2,0}^{(g)}$
$\displaystyle=(2g{+}1)\sigma_{1}^{2}{-}\sigma_{2}\ ,$ $\displaystyle
h_{0,1}^{(g)}$
$\displaystyle=3(2g{+}1)\sigma_{1}\sigma_{2}{-}(6g{+}5)\sigma_{3}\ ,$
$\displaystyle h_{3,0}^{(g)}$
$\displaystyle=-(6g{+}5)\sigma_{1}^{3}{+}9\sigma_{2}\sigma_{1}\ ,$
$\displaystyle h_{1,1}^{(g)}$
$\displaystyle=-3(2g{+}1)(6g{+}5)\sigma_{2}\sigma_{1}^{2}{+}(6g{+}5)(6g{+}7)\sigma_{3}\sigma_{1}{-}6\sigma_{2}^{2}\
,$ $\displaystyle h_{4,0}^{(g)}$
$\displaystyle=(6g{+}5)(6g{+}7)\sigma_{1}^{4}{-}18(6g{+}5)\sigma_{2}\sigma_{1}^{2}{+}27\sigma_{2}^{2}\
,$ $\displaystyle h_{2,1}^{(g)}$
$\displaystyle=(2g{+}1)(6g{+}7)\sigma_{2}\sigma_{1}^{3}{-}(6g{+}7)(2g{+}3)\sigma_{3}\sigma_{1}^{2}{-}3(2g{-}1)\sigma_{2}^{2}\sigma_{1}{+}(6g{+}7)\sigma_{2}\sigma_{3}\
,$ $\displaystyle h_{5,0}^{(g)}$
$\displaystyle=-(6g{+}7)(2g{+}3)\sigma_{1}^{5}+10(6g{+}7)\sigma_{2}\sigma_{1}^{3}-45\sigma_{2}^{2}\sigma_{1}\
,$ $\displaystyle h_{0,2}^{(g)}$
$\displaystyle=\gamma_{1}\sigma_{2}^{2}\sigma_{1}^{2}{+}\gamma_{2}\sigma_{2}^{3}+\gamma_{3}\sigma_{2}\sigma_{1}^{4}+\gamma_{4}\sigma_{2}\sigma_{3}\sigma_{1}+\gamma_{5}\sigma_{3}^{2}\
,$ $\displaystyle h_{3,1}^{(g)}$
$\displaystyle=\gamma_{6}\sigma_{2}\sigma_{1}^{4}+\gamma_{7}\sigma_{3}\sigma_{1}^{3}+\gamma_{8}\sigma_{2}^{2}\sigma_{1}^{2}+\gamma_{9}\sigma_{2}\sigma_{3}\sigma_{1}+30\sigma_{2}^{3}\
,$ $\displaystyle h_{6,0}^{(g)}$
$\displaystyle=\gamma_{10}\sigma_{1}^{6}+\gamma_{11}\sigma_{2}\sigma_{1}^{4}+\
\gamma_{12}\sigma_{2}^{2}\sigma_{1}^{2}-135\sigma_{2}^{3}\ ,$
where the coefficients $\gamma_{i}$ are given in Appendix A. Roughly half of
these polynomials (as linear combinations) can be obtained as the translation-invariant limit $h_{\ell_{2},\ell_{3}}^{(g,g^{\prime}=0)}$ of the $A_{1}\oplus A_{2}$ polynomials, but new states arise in the limit, due to the new first-order invariant $\sigma_{1}$.
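At $g{=}0$ the deformed harmonic polynomials (52) reduce to ordinary harmonic polynomials. The sketch below (our own check, not from the paper) verifies this with exact arithmetic, using a 7-point central finite-difference stencil that reproduces the second derivative exactly on polynomials of degree at most 7:

```python
from fractions import Fraction

# 7-point central stencil for f''; exact for polynomials of degree <= 7,
# so with Fraction arithmetic the Laplacian below is exact, not approximate
W = list(zip(range(-3, 4), (2, -27, 270, -490, 270, -27, 2)))

def laplacian(f, x):
    """Exact Laplacian of a low-degree polynomial f at the integer point x."""
    total = Fraction(0)
    for i in range(3):
        for k, c in W:
            y = list(x)
            y[i] += k
            total += Fraction(c, 180) * f(y)
    return total

def s(x, m):  # Newton sums sigma_m of (46)
    return sum(Fraction(t)**m for t in x)

# the explicit polynomials of (52), evaluated at g = 0
polys = [
    lambda x: s(x, 1),                                                 # h_{1,0}
    lambda x: s(x, 1)**2 - s(x, 2),                                    # h_{2,0}
    lambda x: 3*s(x, 1)*s(x, 2) - 5*s(x, 3),                           # h_{0,1}
    lambda x: -5*s(x, 1)**3 + 9*s(x, 2)*s(x, 1),                       # h_{3,0}
    lambda x: -15*s(x, 2)*s(x, 1)**2 + 35*s(x, 3)*s(x, 1)
              - 6*s(x, 2)**2,                                          # h_{1,1}
    lambda x: 35*s(x, 1)**4 - 90*s(x, 2)*s(x, 1)**2 + 27*s(x, 2)**2,   # h_{4,0}
    lambda x: -21*s(x, 1)**5 + 70*s(x, 2)*s(x, 1)**3
              - 45*s(x, 2)**2*s(x, 1),                                 # h_{5,0}
]

for f in polys:
    assert laplacian(f, [2, -1, 5]) == 0   # harmonic at g = 0
```

The polynomials containing the coefficients $\gamma_{i}$ from Appendix A are omitted here only because their closed forms are not listed in this excerpt.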
Only the third-order intertwiner $M(g){\equiv}M(g,0)$ based on
${\Delta\vphantom{\big{|}}}$ remains, and via Darboux dressing it generates
the extra charge $Q(g){\equiv}Q(g,0)$ for integral values of $g$, which
enlarges the nonlinear algebra spanned by
$\\{I_{1},I_{2},I_{3},F_{1},F_{3}\\}$ to
$\displaystyle{\mathrm{i}}[I_{1},F_{1}]$ $\displaystyle\ =\ 3I_{2}-I_{1}^{2}\
,\qquad\qquad{\mathrm{i}}[I_{3},F_{1}]\ =\ -3I_{3}I_{1}+3I_{2}^{2}\ ,$ (53)
$\displaystyle{\mathrm{i}}[I_{1},F_{3}]$ $\displaystyle\ =\
-I_{3}I_{1}+I_{2}^{2}\ ,\qquad\quad\\!{\mathrm{i}}[I_{3},F_{3}]\ =\
-3I_{3}^{2}+4I_{3}I_{2}I_{1}+\tfrac{3}{2}I_{2}^{3}-3I_{2}^{2}I_{1}^{2}+\tfrac{1}{2}I_{2}I_{1}^{4}\
,$ $\displaystyle{\mathrm{i}}[F_{1},F_{3}]$ $\displaystyle\ =\
{\textstyle\frac{1}{2}}(F_{1}I_{3}+I_{3}F_{1}+F_{3}I_{1}+I_{1}F_{3})\ ,$
$\displaystyle{\mathrm{i}}[Q,F_{1}]$ $\displaystyle\ =\ -3(2g{-}1)\,Q\,I_{1}\
,\quad\ {\mathrm{i}}[Q,F_{3}]\ =\
-3(2g{-}1)\,Q\bigl{(}I_{3}-\tfrac{2}{3}I_{2}I_{1}\bigr{)}\ ,\qquad Q^{2}\ =\
(\mathcal{R}(I))^{2g-1}\ .$
## 3 The $\bf{AD_{3}}$ model
### 3.1 Integrals of motion
This rank-3 system is irreducible and simply-laced, so it contains just a single
coupling $g$. Depending on the choice of variables, it takes the $A_{3}$ or
the $D_{3}$ form. Here, we choose the latter. The Hamiltonian reads
$H\ =\ \tfrac{1}{2}\sum_{i}p_{i}^{2}\
+\sum_{i<j}\Big{(}\frac{g(g{-}1)}{(x^{i}{-}x^{j})^{2}}+\frac{g(g{-}1)}{(x^{i}{+}x^{j})^{2}}\Big{)}\
.$ (54)
One can choose
$\mathcal{R}_{+}=\\{e_{1}{+}e_{2},\ e_{1}{+}e_{3},\ e_{2}{+}e_{3},\ e_{1}{-}e_{2},\ e_{1}{-}e_{3},\ e_{2}{-}e_{3}\\}\ ,$ (55)
leading to the Dunkl operators
$\mathcal{D}_{i}\ =\ \partial_{i}\ -\sum_{j(\neq
i)}\Big{(}\frac{g}{x^{i}{-}x^{j}}s_{i-j}+\frac{g}{x^{i}{+}x^{j}}s_{i+j}\Big{)}$
(56)
with reflections $s_{i-j}$ given in (28) and
$\displaystyle s_{1+2}$ $\displaystyle:\ (x^{1},x^{2},x^{3})\ \mapsto\
(-x^{2},-x^{1},x^{3})\ ,$ (57) $\displaystyle s_{1+3}$ $\displaystyle:\
(x^{1},x^{2},x^{3})\ \mapsto\ (-x^{3},x^{2},-x^{1})\ ,$ $\displaystyle
s_{2+3}$ $\displaystyle:\ (x^{1},x^{2},x^{3})\ \mapsto\ (x^{1},-x^{3},-x^{2})\
.$
The lowest three Weyl-invariant polynomials are
$\sigma_{2}(x)\ =\ (x^{1})^{2}+(x^{2})^{2}+(x^{3})^{2}\ ,\qquad\sigma_{3}(x)\
=\ x^{1}\,x^{2}\,x^{3}\ ,\qquad\sigma_{4}(x)\ =\
(x^{1})^{4}+(x^{2})^{4}+(x^{3})^{4}\ .$ (58)
Hence, the corresponding Liouville integrals read
$\displaystyle I_{2}$ $\displaystyle\ =\ -{\rm
res}\bigl{(}\mathcal{D}_{1}^{2}{+}\mathcal{D}_{2}^{2}{+}\mathcal{D}_{3}^{2}\bigr{)}\
=\ 2\,H\ ,$ (59) $\displaystyle I_{3}$ $\displaystyle\ =\ \ {\mathrm{i}}\,{\rm
res}\bigl{(}\mathcal{D}_{1}\,\mathcal{D}_{2}\,\mathcal{D}_{3}\bigr{)}\ =\
p_{1}\,p_{2}\,p_{3}-4g(g{-}1)\Bigl{(}\tfrac{x^{1}x^{2}\,p_{3}}{((x^{1})^{2}-(x^{2})^{2})^{2}}+\tfrac{x^{2}x^{3}\,p_{1}}{((x^{2})^{2}-(x^{3})^{2})^{2}}+\tfrac{x^{3}x^{1}\,p_{2}}{((x^{3})^{2}-(x^{1})^{2})^{2}}\Bigr{)}\
,$ $\displaystyle I_{4}$ $\displaystyle\ =\ \ \ {\rm
res}\bigl{(}\mathcal{D}_{1}^{4}{+}\mathcal{D}_{2}^{4}{+}\mathcal{D}_{3}^{4}\bigl{)}\
=\ p_{1}^{4}+2g(g{-}1)\textstyle\sum_{\ell\neq
1}\left\\{p_{1}^{2},{\textstyle\frac{1}{(x^{1}-x^{\ell})^{2}}}{+}{\textstyle\frac{1}{(x^{1}+x^{\ell})^{2}}}\right\\}+16g(g{-}1){\textstyle\frac{x^{1}x^{2}}{((x^{1})^{2}-(x^{2})^{2})^{2}}}p_{1}p_{2}$
$\displaystyle-2{\mathrm{i}}g(g{-}1)\textstyle\sum_{\ell\neq
1}\left\\{p_{1},{\textstyle\frac{1}{(x^{1}-x^{\ell})^{3}}}{+}{\textstyle\frac{1}{(x^{1}+x^{\ell})^{3}}}\right\\}+16g^{2}(g{-}1)^{2}\left({\textstyle\frac{(x^{1})^{4}+(x^{2})^{4}}{((x^{1})^{2}-(x^{2})^{2})^{4}}}+{\textstyle\frac{(x^{1})^{2}+(x^{2})^{2}}{((x^{1})^{2}-(x^{2})^{2})^{2}}}{\textstyle\frac{(x^{1})^{2}+(x^{3})^{2}}{((x^{1})^{2}-(x^{3})^{2})^{2}}}\right)$
$\displaystyle+\text{cyclic}\ ,$
where the term “cyclic” refers to adding the cyclic permutations of the labels
(1,2,3). The two lowest additional conserved charges are
$F_{3}\ =\ \\{H,J_{3}\\}-{{\textstyle\frac{1}{2}}}\\{I_{3},D\\}{\qquad{\rm
and}\qquad}F_{4}\ =\ \\{H,J_{4}\\}-{{\textstyle\frac{1}{2}}}\\{I_{4},D\\}\ ,$
(60)
with
$\displaystyle J_{3}$ $\displaystyle\ =\
\tfrac{1}{3}(x^{1}p_{2}p_{3}{+}x^{2}p_{3}p_{1}{+}x^{3}p_{1}p_{2})-{\textstyle\frac{4}{3}}g(g{-}1)\Bigl{(}\tfrac{1}{(x^{1}-x^{2})^{2}(x^{1}+x^{2})^{2}}{+}\tfrac{1}{(x^{2}-x^{3})^{2}(x^{2}+x^{3})^{2}}{+}\tfrac{1}{(x^{3}-x^{1})^{2}(x^{3}+x^{1})^{2}}\Bigr{)}x^{1}x^{2}x^{3}\
,$ (61) $\displaystyle J_{4}$ $\displaystyle\ =\
{\textstyle\frac{1}{2}}\\{x^{1},p_{1}^{3}\\}+4g(g{-}1)\left({\textstyle\frac{(x^{1})^{2}+2(x^{2})^{2}}{((x^{1})^{2}-(x^{2})^{2})^{2}}}+{\textstyle\frac{(x^{1})^{2}+2(x^{3})^{2}}{((x^{1})^{2}-(x^{3})^{2})^{2}}}\right)x^{1}p_{1}-2{\mathrm{i}}g(g{-}1){\textstyle\frac{(x^{1})^{2}+(x^{3})^{2}}{((x^{1})^{2}-(x^{3})^{2})^{2}}}+\text{cyclic}\
.$
### 3.2 Energy eigenstates
For the eigenvalue problem
$\displaystyle H\,\Psi^{(g)}_{{\scriptscriptstyle E},\ell_{3},\ell_{4}}(x)\ =\
E\,\Psi^{(g)}_{{\scriptscriptstyle E},\ell_{3},\ell_{4}}(x)$ (62)
one obtains the energy eigenfunctions
$\Psi^{(g)}_{{\scriptscriptstyle E},\ell_{3},\ell_{4}}(x)\ \equiv\
{\langle}x\mid\ell_{3},\ell_{4}{\rangle}_{g}\ =\
j_{q}({\scriptstyle\sqrt{2E}}\,r)\,r^{-q}{\Delta\vphantom{\big{|}}}^{g}\,h^{(g)}_{\ell_{3},\ell_{4}}(x)\qquad\textrm{with}\qquad
q=6g+3\ell_{3}+4\ell_{4}\ ,$ (63)
where
${\Delta\vphantom{\big{|}}}=(x^{1}{-}x^{2})(x^{1}{+}x^{2})(x^{2}{-}x^{3})(x^{2}{+}x^{3})(x^{3}{-}x^{1})(x^{3}{+}x^{1})$
is the basic anti-invariant, and
$\displaystyle h^{(g)}_{\ell_{3},\ell_{4}}(x)$ $\displaystyle\sim\
r^{12g+1+6\ell_{3}+8\ell_{4}}\,\Delta^{-g}\,\bigl{(}{\cal D}_{1}{\cal
D}_{2}{\cal D}_{3}\bigr{)}^{\ell_{3}}\,\bigl{(}{\cal D}_{1}^{4}{+}{\cal
D}_{2}^{4}{+}{\cal
D}_{3}^{4}\bigr{)}^{\ell_{4}}\,{\Delta\vphantom{\big{|}}}^{g}\,r^{-1-12g}$
(64) $\displaystyle\sim\
r^{12g+1+6\ell_{3}+8\ell_{4}}\,\bigl{(}\widetilde{{\cal
D}}_{1}\widetilde{{\cal D}}_{2}\widetilde{{\cal
D}}_{3}\bigr{)}^{\ell_{3}}\,\bigl{(}\widetilde{{\cal
D}}_{1}^{4}{+}\widetilde{{\cal D}}_{2}^{4}{+}\widetilde{{\cal
D}}_{3}^{4}\bigr{)}^{\ell_{4}}\,r^{-1-12g}$
is a deformed harmonic polynomial of degree $3\ell_{3}{+}4\ell_{4}$.
Conjugation with ${\Delta\vphantom{\big{|}}}^{g}$ defines the “potential-free”
Dunkl operators
$\widetilde{{\cal D}}_{i}\ =\ \ \partial_{i}\ +\sum_{j(\neq
i)}\Big{(}\frac{g}{x^{i}{-}x^{j}}(1{-}s_{i-j})+\frac{g}{x^{i}{+}x^{j}}(1{-}s_{i+j})\Big{)}\
.$ (65)
The first polynomials read
$\displaystyle h^{(g)}_{0,0}(x)$ $\displaystyle\ =\ 1\ ,$ (66) $\displaystyle
h^{(g)}_{1,0}(x)$ $\displaystyle\ =\ \sigma_{3}\ ,$ $\displaystyle
h^{(g)}_{0,1}(x)$ $\displaystyle\ =\
(12g{+}5)\sigma_{4}-(8g{+}3)\sigma_{2}^{2}\ ,$ $\displaystyle
h^{(g)}_{2,0}(x)$ $\displaystyle\ =\
\alpha_{1}\sigma_{2}^{3}+\alpha_{2}\sigma_{3}^{2}+\alpha_{3}\sigma_{2}\sigma_{4}\
,$ $\displaystyle h^{(g)}_{1,1}(x)$ $\displaystyle\ =\
\alpha_{4}\sigma_{2}^{2}\sigma_{3}+\alpha_{5}\sigma_{4}\sigma_{3}\ ,$
$\displaystyle h^{(g)}_{0,2}(x)$ $\displaystyle\ =\
\alpha_{6}\sigma_{2}^{4}+\alpha_{7}\sigma_{2}\sigma_{3}^{2}+\alpha_{8}\sigma_{2}^{2}\sigma_{4}+\alpha_{9}\sigma_{4}^{2}\
,$ $\displaystyle h^{(g)}_{3,0}(x)$ $\displaystyle\ =\
\alpha_{10}\sigma_{3}^{3}+\alpha_{11}\sigma_{2}\sigma_{3}\sigma_{4}+\alpha_{12}\sigma_{2}^{3}\sigma_{3}\
,$ $\displaystyle h^{(g)}_{2,1}(x)$ $\displaystyle\ =\
\alpha_{13}\sigma_{2}^{5}+\alpha_{14}\sigma_{2}^{2}\sigma_{3}^{2}+\alpha_{15}\sigma_{2}^{3}\sigma_{4}+\alpha_{16}\sigma_{3}^{2}\sigma_{4}+\alpha_{17}\sigma_{2}\sigma_{4}^{2}\
,$ $\displaystyle h^{(g)}_{1,2}(x)$ $\displaystyle\ =\
\alpha_{18}\sigma_{2}^{4}\sigma_{3}+\alpha_{19}\sigma_{2}\sigma_{3}^{3}+\alpha_{20}\sigma_{2}^{2}\sigma_{3}\sigma_{4}+\alpha_{21}\sigma_{3}\sigma_{4}^{2}\
,$ $\displaystyle h^{(g)}_{0,3}(x)$ $\displaystyle\ =\
\alpha_{22}\sigma_{2}^{6}+\alpha_{23}\sigma_{2}^{3}\sigma_{3}^{2}+\alpha_{24}\sigma_{2}^{4}\sigma_{4}+\alpha_{25}\sigma_{2}\sigma_{3}^{2}\sigma_{4}+\alpha_{26}\sigma_{2}^{2}\sigma_{4}^{2}+\alpha_{27}\sigma_{4}^{3}\
,$ $\displaystyle h^{(g)}_{4,0}(x)$ $\displaystyle\ =\
\alpha_{28}\sigma_{2}^{6}+\alpha_{29}\sigma_{2}^{3}\sigma_{3}^{2}+\alpha_{30}\sigma_{3}^{4}+\alpha_{31}\sigma_{2}^{4}\sigma_{4}+\alpha_{32}\sigma_{2}\sigma_{3}^{2}\sigma_{4}+\alpha_{33}\sigma_{2}^{2}\sigma_{4}^{2}\
,$
where the coefficients $\alpha_{i}$ are given in Appendix A.
### 3.3 Intertwining Operators
The basic anti-invariant of the $AD_{3}$ model reads
$\tau_{6}(x)\ =\
(x^{1}{-}x^{2})(x^{1}{+}x^{2})(x^{2}{-}x^{3})(x^{2}{+}x^{3})(x^{3}{-}x^{1})(x^{3}{+}x^{1})\
=\ {\Delta\vphantom{\big{|}}}\ ,$ (67)
which produces the intertwiner
$M(g)\ =\
\text{res}\bigl{(}(\mathcal{D}_{1}^{2}{-}\mathcal{D}_{2}^{2})(\mathcal{D}_{2}^{2}{-}\mathcal{D}_{3}^{2})(\mathcal{D}_{3}^{2}{-}\mathcal{D}_{1}^{2})\bigr{)}\
,$ (68)
satisfying (17). The polynomial $\mathcal{R}(I)$ can be computed from the free
case ($g{=}0$),
$\displaystyle\mathcal{R}(I)$ $\displaystyle\ =\ M(0)^{2}\ =\
(\partial_{1}^{2}{-}\partial_{2}^{2})^{2}(\partial_{2}^{2}{-}\partial_{3}^{2})^{2}(\partial_{3}^{2}{-}\partial_{1}^{2})^{2}$
(69) $\displaystyle\ =\
\tfrac{1}{2}I_{4}^{3}-\tfrac{5}{4}I_{4}^{2}I_{2}^{2}-9I_{4}I_{3}^{2}I_{2}+I_{4}I_{2}^{4}-27I_{3}^{4}+5I_{3}^{2}I_{2}^{3}-\tfrac{1}{4}I_{2}^{6}\
.$
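Identity (69) is a polynomial identity in commuting free momenta: substituting $\partial_{i}\to{\mathrm{i}}p_{i}$ gives $\mathcal{R}(I)=\prod_{i<j}(p_{i}^{2}{-}p_{j}^{2})^{2}$ with $I_{2}=\sum_{i}p_{i}^{2}$, $I_{3}=p_{1}p_{2}p_{3}$, $I_{4}=\sum_{i}p_{i}^{4}$. The following short Python check (ours, illustrative only) compares the factorized and expanded forms at random rational points:

```python
from fractions import Fraction
import random

def check_eq69(p):
    """Compare both sides of (69) with free-case invariants
    I2 = sum p_i^2, I3 = p1 p2 p3, I4 = sum p_i^4."""
    i2 = sum(q**2 for q in p)
    i3 = p[0]*p[1]*p[2]
    i4 = sum(q**4 for q in p)
    lhs = ((p[0]**2 - p[1]**2) * (p[1]**2 - p[2]**2)
           * (p[2]**2 - p[0]**2)) ** 2
    rhs = (Fraction(1, 2)*i4**3 - Fraction(5, 4)*i4**2*i2**2
           - 9*i4*i3**2*i2 + i4*i2**4 - 27*i3**4 + 5*i3**2*i2**3
           - Fraction(1, 4)*i2**6)
    return lhs == rhs

rng = random.Random(3)
assert all(check_eq69([Fraction(rng.randint(-9, 9)) for _ in range(3)])
           for _ in range(20))
```

The left-hand side is the discriminant of the cubic with roots $p_{i}^{2}$, which explains why it is expressible through the symmetric combinations $I_{2}$, $I_{3}^{2}$, $I_{4}$ alone.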
When the coupling $g$ takes an integral value, there exists a sixth
independent conserved charge
$Q(g)\ =\ M(g{-}1)\,M(g{-}2)\cdots M(1)\,M(0)\,M(-1)\cdots M(2{-}g)\,M(1{-}g)\
,$ (70)
which extends the nonlinear algebra spanned by
$\\{I_{2},I_{3},I_{4},F_{3},F_{4}\\}$ to
$\displaystyle{\mathrm{i}}[I_{3},F_{3}]$ $\displaystyle\ =\
\tfrac{1}{6}(I_{2}^{3}{-}I_{4}I_{2}{-}18I_{3}^{2})\
,\qquad{\mathrm{i}}[I_{4},F_{3}]\ =\
\tfrac{4}{3}(I_{3}I_{2}^{2}{-}3I_{4}I_{3})\ ,$ (71)
$\displaystyle{\mathrm{i}}[I_{3},F_{4}]$ $\displaystyle\ =\
I_{3}I_{2}^{2}{-}3I_{4}I_{3}\ ,\qquad\qquad\ \ {\mathrm{i}}[I_{4},F_{4}]\ =\
2({-}I_{2}^{4}{+}3I_{4}I_{2}^{2}{+}6I_{3}^{2}I_{2}{-}2I_{4}^{2})\ ,$
$\displaystyle{\mathrm{i}}[F_{3},F_{4}]$ $\displaystyle\ =\
\\{F_{4},I_{3}\\}-\tfrac{1}{2}\\{F_{3},I_{4}{-}I_{2}^{2}\\}\ ,$
$\displaystyle{\mathrm{i}}[Q,F_{3}]$ $\displaystyle\ =\ -6(2g{-}1)\,Q\,I_{3}\
,\quad{\mathrm{i}}[Q,F_{4}]\ =\ 6(2g{-}1)\,Q\,(\tfrac{2}{3}I_{2}^{2}-I_{4})\
,\quad Q^{2}\ =\ (\mathcal{R}(I))^{2g-1}\ .$
## 4 The $\bf{BC_{3}}$ model
### 4.1 Integrals of motion
This is the only irreducible non-simply-laced rank-3 model, so we have to deal
with two coupling constants, $g_{\ell}$ and $g_{s}$. It is described by the
Hamiltonian
$H\ =\ \tfrac{1}{2}\sum_{i}^{3}p_{i}^{2}\
+\sum_{i<j}\Big{(}\frac{g_{\ell}(g_{\ell}{-}1)}{(x^{i}{-}x^{j})^{2}}+\frac{g_{\ell}(g_{\ell}{-}1)}{(x^{i}{+}x^{j})^{2}}\Big{)}\
+\sum_{i}\frac{g_{s}(g_{s}{-}1)}{2\,(x^{i})^{2}}\ .$ (72)
We take the set of positive roots as
$\mathcal{R}_{+}\ =\ \\{e_{1},\ e_{2},\ e_{3},\ e_{1}{+}e_{2},\ e_{1}{+}e_{3},\ e_{2}{+}e_{3},\ e_{1}{-}e_{2},\ e_{1}{-}e_{3},\ e_{2}{-}e_{3}\\}\ ,$ (73)
so the Dunkl operators read
$\mathcal{D}_{i}\ =\ \partial_{i}\ -\sum_{j(\neq
i)}\Big{(}\frac{g_{\ell}}{x^{i}{-}x^{j}}s_{i-j}+\frac{g_{\ell}}{x^{i}{+}x^{j}}s_{i+j}\Big{)}\
-\ \frac{g_{s}}{x^{i}}s_{i}\ ,$ (74)
where the reflections are given in (28) and (57) and by
$\displaystyle s_{1}$ $\displaystyle:\ (x^{1},x^{2},x^{3})\ \mapsto\
(-x^{1},x^{2},x^{3})\ ,$ (75) $\displaystyle s_{2}$ $\displaystyle:\
(x^{1},x^{2},x^{3})\ \mapsto\ (x^{1},-x^{2},x^{3})\ ,$ $\displaystyle s_{3}$
$\displaystyle:\ (x^{1},x^{2},x^{3})\ \mapsto\ (x^{1},x^{2},-x^{3})\ .$
The three lowest Weyl-invariant polynomials may be chosen as
$\sigma_{2}(x)=(x^{1})^{2}+(x^{2})^{2}+(x^{3})^{2}\
,\qquad\sigma_{4}(x)=(x^{1})^{4}+(x^{2})^{4}+(x^{3})^{4}\
,\qquad\sigma_{6}(x)=(x^{1}\,x^{2}\,x^{3})^{2}=\sigma_{3}^{2}\ ,$ (76)
but one may instead keep the even Newton sums, replacing $\sigma_{6}$ with $\sigma^{\prime}_{6}(x)=(x^{1})^{6}+(x^{2})^{6}+(x^{3})^{6}$.
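That the two choices are equivalent follows from Newton's identities in the squared variables $(x^{i})^{2}$; explicitly (our own elementary computation, not stated in the paper), $\sigma^{\prime}_{6}=\tfrac{3}{2}\sigma_{2}\sigma_{4}-\tfrac{1}{2}\sigma_{2}^{3}+3\,\sigma_{6}$. A quick exact check:

```python
from fractions import Fraction
import random

def even_newton_relation(x):
    """Check sigma'_6 = 3/2 sigma_2 sigma_4 - 1/2 sigma_2^3 + 3 sigma_6,
    with the invariants of (76)."""
    s2 = sum(Fraction(t)**2 for t in x)
    s4 = sum(Fraction(t)**4 for t in x)
    s6p = sum(Fraction(t)**6 for t in x)            # sigma'_6
    s6 = (Fraction(x[0]) * x[1] * x[2])**2          # sigma_6 = sigma_3^2
    return s6p == Fraction(3, 2)*s2*s4 - Fraction(1, 2)*s2**3 + 3*s6

rng = random.Random(5)
assert all(even_newton_relation([rng.randint(-9, 9) for _ in range(3)])
           for _ in range(20))
```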
For the basis (76), the first three Liouville integrals take the form
$\displaystyle I_{2}$ $\displaystyle\ =\ -{\rm
res}\bigl{(}\mathcal{D}_{1}^{2}{+}\mathcal{D}_{2}^{2}{+}\mathcal{D}_{3}^{2}\bigr{)}\
=\ 2\,H\ ,$ (77) $\displaystyle I_{4}$ $\displaystyle\ =\ \ \ {\rm
res}\bigl{(}\mathcal{D}_{1}^{4}{+}\mathcal{D}_{2}^{4}{+}\mathcal{D}_{3}^{4}\bigl{)}\
,$ $\displaystyle I_{6}$ $\displaystyle\ =\ -{\rm
res}\bigl{(}\mathcal{D}_{1}^{2}\,\mathcal{D}_{2}^{2}\,\mathcal{D}_{3}^{2}\bigr{)}\
,$
with the explicit form of $I_{4}$ and $I_{6}$ displayed in Appendix B. Two
additional integrals of motion (not in involution) are
$F_{4}\ =\ \\{H,J_{4}\\}-{{\textstyle\frac{1}{2}}}\\{I_{4},D\\}{\qquad{\rm
and}\qquad}F_{6}\ =\ \\{H,J_{6}\\}-{{\textstyle\frac{1}{2}}}\\{I_{6},D\\}\ ,$
(78)
where $J_{4}$ and $J_{6}$ are also given in Appendix B.
### 4.2 Energy eigenstates
The eigenvalue problem
$\displaystyle H\,\Psi^{(g_{\ell},g_{s})}_{{\scriptscriptstyle
E},\ell_{4},\ell_{6}}(x)\ =\ E\,\Psi^{(g_{\ell},g_{s})}_{{\scriptscriptstyle
E},\ell_{4},\ell_{6}}(x)$ (79)
is solved by
$\Psi^{(g_{\ell},g_{s})}_{{\scriptscriptstyle E},\ell_{4},\ell_{6}}(x)\
\equiv\ {\langle}x\mid\ell_{4},\ell_{6}{\rangle}_{g_{\ell},g_{s}}\ =\
j_{q}({\scriptstyle\sqrt{2E}}\,r)\,r^{-q}\Delta_{\ell}^{g_{\ell}}\Delta_{s}^{g_{s}}\,h^{(g_{\ell},g_{s})}_{\ell_{4},\ell_{6}}(x)\qquad\textrm{with}\qquad
q=6g_{\ell}+3g_{s}+4\ell_{4}+6\ell_{6}\ ,$ (80)
where
$\Delta_{\ell}=(x^{1}{-}x^{2})(x^{1}{+}x^{2})(x^{2}{-}x^{3})(x^{2}{+}x^{3})(x^{3}{-}x^{1})(x^{3}{+}x^{1})$
and $\Delta_{s}=x^{1}\,x^{2}\,x^{3}$, with
$\displaystyle h^{(g_{\ell},g_{s})}_{\ell_{4},\ell_{6}}(x)$
$\displaystyle\sim\
r^{12g_{\ell}+6g_{s}+1+8\ell_{4}+12\ell_{6}}\,\Delta_{\ell}^{-g_{\ell}}\Delta_{s}^{-g_{s}}\,\bigl{(}{\cal
D}_{1}^{4}{+}{\cal D}_{2}^{4}{+}{\cal
D}_{3}^{4}\bigr{)}^{\ell_{4}}\,\bigl{(}{\cal D}_{1}{\cal D}_{2}{\cal
D}_{3}\bigr{)}^{2\ell_{6}}\,\Delta_{\ell}^{g_{\ell}}\Delta_{s}^{g_{s}}\,r^{-1-12g_{\ell}-6g_{s}}$
(81) $\displaystyle\sim\
r^{12g_{\ell}+6g_{s}+1+8\ell_{4}+12\ell_{6}}\,\bigl{(}\widetilde{{\cal
D}}_{1}^{4}{+}\widetilde{{\cal D}}_{2}^{4}{+}\widetilde{{\cal
D}}_{3}^{4}\bigr{)}^{\ell_{4}}\,\bigl{(}\widetilde{{\cal
D}}_{1}\widetilde{{\cal D}}_{2}\widetilde{{\cal
D}}_{3}\bigr{)}^{2\ell_{6}}\,r^{-1-12g_{\ell}-6g_{s}}$
being a deformed harmonic polynomial of degree $4\ell_{4}{+}6\ell_{6}$. The
“potential-free” Dunkl operators read
$\widetilde{{\cal D}}_{i}\ =\ \ \partial_{i}\ +\sum_{j(\neq
i)}\Big{(}\frac{g_{\ell}}{x^{i}{-}x^{j}}(1{-}s_{i-j})+\frac{g_{\ell}}{x^{i}{+}x^{j}}(1{-}s_{i+j})\Big{)}\
+\ \frac{g_{s}}{x^{i}}(1{-}s_{i})\ .$ (82)
The first polynomials read
$\displaystyle h^{(g_{\ell},g_{s})}_{0,0}(x)$ $\displaystyle\ =\ 1\ ,$ (83)
$\displaystyle h^{(g_{\ell},g_{s})}_{1,0}(x)$ $\displaystyle\ =\
-(8g_{\ell}{+}2g_{s}{+}3)\sigma_{2}^{2}+(12g_{\ell}{+}6g_{s}{+}5)\sigma_{4}\
,$ $\displaystyle h^{(g_{\ell},g_{s})}_{0,1}(x)$ $\displaystyle\ =\
\mu_{1}\sigma_{2}^{3}+\mu_{2}\sigma_{2}\sigma_{4}+\mu_{3}\sigma_{6}\ ,$
$\displaystyle h^{(g_{\ell},g_{s})}_{2,0}(x)$ $\displaystyle\ =\
\mu_{4}\sigma_{2}^{4}+\mu_{5}\sigma_{2}^{2}\sigma_{4}+\mu_{6}\sigma_{4}^{2}+\mu_{7}\sigma_{2}\sigma_{6}\
,$ $\displaystyle h^{(g_{\ell},g_{s})}_{1,1}(x)$ $\displaystyle\ =\
\mu_{8}\sigma_{2}^{5}+\mu_{9}\sigma_{2}^{3}\sigma_{4}+\mu_{10}\sigma_{2}\sigma_{4}^{2}+\mu_{11}\sigma_{2}^{2}\sigma_{6}+\mu_{12}\sigma_{4}\sigma_{6}\
,$ $\displaystyle h^{(g_{\ell},g_{s})}_{0,2}(x)$ $\displaystyle\ =\
\mu_{13}\sigma_{2}^{6}+\mu_{14}\sigma_{2}^{4}\sigma_{4}+\mu_{15}\sigma_{2}^{2}\sigma_{4}^{2}+\mu_{16}\sigma_{2}^{3}\sigma_{6}+\mu_{17}\sigma_{2}\sigma_{4}\sigma_{6}+\mu_{18}\sigma_{6}^{2}\
,$ $\displaystyle h^{(g_{\ell},g_{s})}_{3,0}(x)$ $\displaystyle\ =\
\mu_{19}\sigma_{2}^{6}+\mu_{20}\sigma_{2}^{4}\sigma_{4}+\mu_{21}\sigma_{2}^{2}\sigma_{4}^{2}+\mu_{22}\sigma_{2}^{3}\sigma_{6}+\mu_{23}\sigma_{2}\sigma_{4}\sigma_{6}+\mu_{24}\sigma_{4}^{3}\
,$
where the coefficients $\mu_{i}$ are given in Appendix A. Setting $g_{s}=0$ and $g_{\ell}=g$ evidently reproduces the corresponding states (66).
### 4.3 Intertwining Operators
The short-root and long-root anti-invariant polynomials are
$\tau^{\prime}_{3}(x)=x^{1}\,x^{2}\,x^{3}={\Delta\vphantom{\big{|}}}_{s}{\qquad{\rm
and}\qquad}\tau_{6}(x)=(x^{1}{-}x^{2})(x^{1}{+}x^{2})(x^{2}{-}x^{3})(x^{2}{+}x^{3})(x^{3}{-}x^{1})(x^{3}{+}x^{1})={\Delta\vphantom{\big{|}}}_{\ell}\
,$ (84)
yielding the short-root and long-root intertwiners
$M_{s}(g_{\ell},g_{s})\ =\ \text{res}\prod_{i}\mathcal{D}_{i}{\qquad{\rm
and}\qquad}M_{\ell}(g_{\ell},g_{s})\ =\
\text{res}\prod_{i<j}(\mathcal{D}^{2}_{i}{-}\mathcal{D}^{2}_{j})\ ,$ (85)
respectively. They satisfy the relations (17), such that
$\displaystyle M_{s}(g_{\ell},g_{s})\,I_{k}(g_{\ell},g_{s})\ =\
I_{k}(g_{\ell},g_{s}{+}1)\,M_{s}(g_{\ell},g_{s})\ ,$ (86) $\displaystyle
M_{\ell}(g_{\ell},g_{s})\,I_{k}(g_{\ell},g_{s})\ =\
I_{k}(g_{\ell}{+}1,g_{s})\,M_{\ell}(g_{\ell},g_{s})\ .$
From the free case ($g_{\ell}{=}g_{s}{=}0$), one can compute the operators
$\mathcal{R}_{s}(I)$ and $\mathcal{R}_{\ell}(I)$ (20),
$\displaystyle\mathcal{R}_{s}(I)$ $\displaystyle\ =\
\prod_{i}\partial_{i}^{2}\qquad\quad\\!\ =\ -I_{6}\ ,$ (87)
$\displaystyle\mathcal{R}_{\ell}(I)$ $\displaystyle\ =\
\prod_{i<j}(\partial^{2}_{i}{-}\partial^{2}_{j})^{2}\ =\
-27I_{6}^{2}-9I_{6}I_{4}I_{2}+5I_{6}I_{2}^{3}+\tfrac{1}{2}I_{4}^{3}-\tfrac{5}{4}I_{4}^{2}I_{2}^{2}+I_{4}I_{2}^{4}-\tfrac{1}{4}I_{2}^{6}\
.$
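The expanded form of $\mathcal{R}_{\ell}(I)$ in (87) can be spot-checked in the free case just like (69): with $\partial_{i}\to{\mathrm{i}}p_{i}$ one has $I_{2}=\sum_{i}p_{i}^{2}$, $I_{4}=\sum_{i}p_{i}^{4}$ and $I_{6}=(p_{1}p_{2}p_{3})^{2}$, while $\mathcal{R}_{s}(I)=\prod_{i}({\mathrm{i}}p_{i})^{2}=-I_{6}$ is immediate. The following check (ours, not from the paper) verifies the long-root expression at random rational points:

```python
from fractions import Fraction
import random

def check_R_long(p):
    """Compare the factorized and expanded forms of R_ell(I) in (87),
    in the free case I2 = sum p_i^2, I4 = sum p_i^4, I6 = (p1 p2 p3)^2."""
    i2 = sum(q**2 for q in p)
    i4 = sum(q**4 for q in p)
    i6 = (p[0]*p[1]*p[2])**2
    lhs = ((p[0]**2 - p[1]**2) * (p[1]**2 - p[2]**2)
           * (p[2]**2 - p[0]**2)) ** 2
    rhs = (-27*i6**2 - 9*i6*i4*i2 + 5*i6*i2**3 + Fraction(1, 2)*i4**3
           - Fraction(5, 4)*i4**2*i2**2 + i4*i2**4 - Fraction(1, 4)*i2**6)
    return lhs == rhs

rng = random.Random(2)
assert all(check_R_long([Fraction(rng.randint(-9, 9)) for _ in range(3)])
           for _ in range(20))
```

As expected, the result agrees with (69) after the replacement $I_{3}^{2}\to I_{6}$.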
Finally, for integral values of both couplings we can construct two more
conserved charges,
$\displaystyle Q_{\ell}(g_{\ell},g_{s})$ $\displaystyle\ =\
M_{\ell}(g_{\ell}{-}1,g_{s})\,\cdots\,M_{\ell}(1,g_{s})\,M_{\ell}(0,g_{s})\,M_{\ell}(-1,g_{s})\,\cdots\,M_{\ell}(1{-}g_{\ell},g_{s})\
,$ (88) $\displaystyle Q_{s}(g_{s},g_{\ell})$ $\displaystyle\ =\
M_{s}(g_{\ell},g_{s}{-}1)\,\cdots\,M_{s}(g_{\ell},1)\,M_{s}(g_{\ell},0)\,M_{s}(g_{\ell},-1)\,\cdots\,M_{s}(g_{\ell},1{-}g_{s})\
.$
Together with $F_{k}$ and $I_{k}$, they satisfy the following nonzero
relations,
$\displaystyle{\mathrm{i}}[I_{4},F_{4}]$ $\displaystyle\ =\
2({-}I_{2}^{4}{+}3I_{4}I_{2}^{2}{+}6I_{6}I_{2}{-}2I_{4}^{2})\ ,\qquad{\mathrm{i}}[I_{6},F_{4}]\ =\ 2(I_{6}I_{2}^{2}{-}3I_{6}I_{4})\ ,$ (89)
$\displaystyle{\mathrm{i}}[I_{4},F_{6}]$ $\displaystyle\ =\
\tfrac{4}{3}I_{6}I_{2}^{2}{-}4I_{6}I_{4}\ ,\qquad{\mathrm{i}}[I_{6},F_{6}]\ =\
\tfrac{1}{3}(I_{6}I_{2}^{3}{-}I_{6}I_{4}I_{2}{-}18I_{6}^{2})\ ,$
$\displaystyle{\mathrm{i}}[F_{4},F_{6}]$ $\displaystyle\ =\
\\{F_{6},2I_{4}{-}I_{2}^{2}\\}-\\{F_{4},I_{6}\\}\ ,$
$\displaystyle{\mathrm{i}}[Q_{s},F_{4}]$ $\displaystyle\ =\
-3(2g_{s}{-}1)Q_{s}(I_{4}{-}{\textstyle\frac{1}{3}}I_{2}^{2})\
,\qquad{\mathrm{i}}[Q_{s},F_{6}]\ =\
-3(2g_{s}{-}1)Q_{s}(I_{6}{+}\tfrac{1}{18}I_{2}I_{4}{-}\tfrac{1}{18}I_{2}^{3})\
,$ $\displaystyle{\mathrm{i}}[Q_{\ell},F_{4}]$ $\displaystyle\ =\
-6(2g_{\ell}{-}1)Q_{\ell}(I_{4}{-}{\textstyle\frac{2}{3}}I_{2}^{2})\
,\qquad{\mathrm{i}}[Q_{\ell},F_{6}]\ =\ -6(2g_{\ell}{-}1)Q_{\ell}I_{6}\ ,$
$\displaystyle Q_{s}^{2}$ $\displaystyle\ =\ (\mathcal{R}_{s}(I))^{2g_{s}-1}\
,\qquad Q_{\ell}^{2}=(\mathcal{R}_{\ell}(I))^{2g_{\ell}-1}\
,\qquad[Q_{s},Q_{\ell}]\ =\ 0\ ,$
which define a $\mathbb{Z}_{2}{\oplus}\mathbb{Z}_{2}$ graded polynomial
algebra of conserved charges.
Acknowledgments: FC-M was partially supported by Fondecyt grant 1171475. FC
was partially supported by Fondecyt grant 1171475 and Becas Santander
Iberoamérica. FC would like to thank the Departamento de Física Teórica,
Átomica y Óptica at the Universidad de Valladolid for all the support and kind
hospitality. OL is grateful to the Instituto de Ciencias Físicas y
Matemáticas at the Universidad Austral de Chile for warm hospitality.
## Appendix A Polynomial coefficients
#### $A_{1}\oplus A_{2}$ model
$\displaystyle\kappa_{1}$
$\displaystyle=-27(6g{+}2g^{\prime}{+}7)(6g{+}2g^{\prime}{+}9)\ ,$
$\displaystyle\kappa_{2}$ $\displaystyle=162(3g{+}2)(6g{+}2g^{\prime}{+}7)\ ,$
$\displaystyle\kappa_{3}$ $\displaystyle=-324(3g{+}1)(3g{+}2)\ ,$
$\displaystyle\kappa_{4}$
$\displaystyle=2(6g{+}2g^{\prime}{+}7)(6g{+}2g^{\prime}{+}9)(6g{+}2g^{\prime}{+}11)\
,$ $\displaystyle\kappa_{5}$ $\displaystyle=-648(g{+}1)(3g{+}1)(3g{+}2)\ ,$
$\displaystyle\kappa_{6}$
$\displaystyle=324(g{+}1)(3g{+}2)(6g{+}2g^{\prime}{+}7)\ ,$ (90)
$\displaystyle\kappa_{7}$
$\displaystyle=-54(g{+}1)(6g{+}2g^{\prime}{+}7)(6g{+}2g^{\prime}{+}9)\ ,$
$\displaystyle\kappa_{8}$
$\displaystyle=(6g{+}2g^{\prime}{+}7)(6g{+}2g^{\prime}{+}9)(6g{+}2g^{\prime}{+}11)\
,$ $\displaystyle\kappa_{9}$ $\displaystyle=-36(3g{+}4)(3g{+}5)\ ,$
$\displaystyle\kappa_{10}$ $\displaystyle=12(3g{+}5)(6g{+}2g^{\prime}{+}11)\
,$ $\displaystyle\kappa_{11}$
$\displaystyle=-(6g{+}2g^{\prime}{+}11)(6g{+}2g^{\prime}{+}13)\ .$
#### $A_{2}$ model
$\displaystyle\gamma_{1}$
$\displaystyle=6(6g{+}7)\left(4g^{2}{+}11g{+}10\right)\ ,$
$\displaystyle\gamma_{2}$ $\displaystyle=-3\left(4g^{2}{+}14g{+}17\right)\ ,$
$\displaystyle\gamma_{3}$ $\displaystyle={-}3(2g{+}3)(6g{+}7)\ ,$
$\displaystyle\gamma_{4}$ $\displaystyle={-}12(2g{+}3)^{2}(6g{+}7)\ ,$
$\displaystyle\gamma_{5}$ $\displaystyle=2(2g{+}3)(6g{+}7)(6g{+}11)\ ,$
$\displaystyle\gamma_{6}$ $\displaystyle=-3(2g{+}1)(2g{+}3)(6g{+}7)\ ,$ (91)
$\displaystyle\gamma_{7}$ $\displaystyle=(2g{+}3)(6g{+}7)(6g{+}11)\ ,$
$\displaystyle\gamma_{8}$ $\displaystyle=3(6g{-}1)(6g{+}7)\ ,$
$\displaystyle\gamma_{9}$ $\displaystyle=-9(2g{+}3)(6g{+}7)\ ,$
$\displaystyle\gamma_{10}$ $\displaystyle=(2g{+}3)(6g{+}7)(6g{+}11)\ ,$
$\displaystyle\gamma_{11}$ $\displaystyle=-45(2g{+}3)(6g{+}7)\ ,$
$\displaystyle\gamma_{12}$ $\displaystyle=135(6g{+}7)\ .$
#### $AD_{3}$ model
$\displaystyle\alpha_{1}$ $\displaystyle={-}(28g{+}17)\ ,$
$\displaystyle\alpha_{2}$ $\displaystyle=6(12g{+}7)(12g{+}11)\ ,$
$\displaystyle\alpha_{3}$ $\displaystyle=3(12g{+}7)\ ,$
$\displaystyle\alpha_{4}$ $\displaystyle=(8g{+}5)\ ,$
$\displaystyle\alpha_{5}$ $\displaystyle=-(12g{+}11)\ ,$
$\displaystyle\alpha_{6}$ $\displaystyle=(64g^{2}{+}152g{+}99)\ ,$
$\displaystyle\alpha_{7}$ $\displaystyle=-48(12g{+}13)\ ,$
$\displaystyle\alpha_{8}$ $\displaystyle=-6(32g^{2}{+}76g{+}47)\ ,$
$\displaystyle\alpha_{9}$ $\displaystyle=3(4g{+}5)(12g{+}13)\ ,$
$\displaystyle\alpha_{10}$ $\displaystyle=-2(12g{+}13)(12g{+}17)\ ,$
$\displaystyle\alpha_{11}$ $\displaystyle=-3(12g{+}13)\ ,$
$\displaystyle\alpha_{12}$ $\displaystyle=(28g{+}27)\ ,$
$\displaystyle\alpha_{13}$ $\displaystyle=(224g^{2}{+}492g{+}255)\ ,$
$\displaystyle\alpha_{14}$ $\displaystyle=-6(4g{+}5)(12g{+}11)(24g{+}29)\ ,$
$\displaystyle\alpha_{15}$ $\displaystyle=-4(12g{+}11)(13g{+}18)\ ,$
$\displaystyle\alpha_{16}$ $\displaystyle=6(12g{+}11)(12g{+}17)(12g{+}19)\ ,$
$\displaystyle\alpha_{17}$ $\displaystyle=3(12g{+}11)(12g{+}17)\ ,$
$\displaystyle\alpha_{18}$ $\displaystyle=-(64g^{2}{+}184g{+}159)\ ,$ (92)
$\displaystyle\alpha_{19}$ $\displaystyle=48(12g{+}19)\ ,$
$\displaystyle\alpha_{20}$ $\displaystyle=6(32g^{2}{+}100g{+}83)\ ,$
$\displaystyle\alpha_{21}$ $\displaystyle=-3(4g{+}7)(12g{+}19)\ ,$
$\displaystyle\alpha_{22}$
$\displaystyle=-(73728g^{5}{+}589824g^{4}{+}1898752g^{3}{+}3045328g^{2}{+}2416240g$
$\displaystyle{+}754803)\ ,$ $\displaystyle\alpha_{23}$
$\displaystyle=48(12g{+}13)(12g{+}17)(288g^{2}{+}868g{+}651)\ ,$
$\displaystyle\alpha_{24}$
$\displaystyle=3(12g{+}13)(9216g^{4}{+}63360g^{3}{+}164912g^{2}{+}191336g{+}83097)\
,$ $\displaystyle\alpha_{25}$
$\displaystyle=-432(4g{+}7)(12g{+}13)(12g{+}17)(12g{+}19)\ ,$
$\displaystyle\alpha_{26}$
$\displaystyle=-3(12g{+}13)(12g{+}17)(12g{+}19)(96g^{2}{+}364g{+}357)\ ,$
$\displaystyle\alpha_{27}$
$\displaystyle=3(4g{+}7)(12g{+}13)(12g{+}17)(12g{+}19)(12g{+}23)\ ,$
$\displaystyle\alpha_{28}$
$\displaystyle=(3136g^{3}{+}12656g^{2}{+}16764g{+}7269)\ ,$
$\displaystyle\alpha_{29}$
$\displaystyle=-12(12g{+}13)(12g{+}17)(112g^{2}{+}344g{+}263)\ ,$
$\displaystyle\alpha_{30}$
$\displaystyle=12(4g{+}7)(12g{+}13)(12g{+}17)(12g{+}19)(12g{+}23)\ ,$
$\displaystyle\alpha_{31}$ $\displaystyle=-6(12g{+}13)(112g^{2}{+}336g{+}251)\
,$ $\displaystyle\alpha_{32}$
$\displaystyle=36(4g{+}7)(12g{+}13)(12g{+}17)(12g{+}19)\ ,$
$\displaystyle\alpha_{33}$ $\displaystyle=3(12g{+}13)(12g{+}17)(12g{+}19)\ .$
#### $BC_{3}$ model
$\displaystyle\mu_{1}$ $\displaystyle=-(2g_{s}{+}1)(28g_{\ell}+10g_{s}{+}17)\ ,$
$\displaystyle\mu_{2}$ $\displaystyle=3(2g_{s}{+}1)(12g_{\ell}+6g_{s}{+}7)\ ,$
$\displaystyle\mu_{3}$ $\displaystyle=6(12g_{\ell}{+}6g_{s}{+}7)(12g_{\ell}{+}6g_{s}{+}11)\ ,$
$\displaystyle\mu_{4}$ $\displaystyle=(64g_{\ell}^{2}{+}4g_{s}^{2}{+}56g_{s}{+}8g_{\ell}(4g_{s}{+}19){+}99)\ ,$
$\displaystyle\mu_{5}$ $\displaystyle=-6(32g_{\ell}^{2}{+}4g_{s}^{2}{+}32g_{s}{+}4g_{\ell}(6g_{s}{+}19){+}47)\ ,$
$\displaystyle\mu_{6}$ $\displaystyle=3(4g_{\ell}{+}2g_{s}{+}5)(12g_{\ell}{+}6g_{s}{+}13)\ ,$
$\displaystyle\mu_{7}$ $\displaystyle=-48(12g_{\ell}{+}6g_{s}{+}13)\ ,$
$\displaystyle\mu_{8}$ $\displaystyle=(2g_{s}{+}1)(224g_{\ell}^{2}{+}20g_{s}^{2}{+}168g_{s}{+}4g_{\ell}(34g_{s}{+}123){+}255)\ ,$
$\displaystyle\mu_{9}$ $\displaystyle=-4(2g_{s}{+}1)(13g_{\ell}{+}4g_{s}{+}18)(12g_{\ell}{+}6g_{s}{+}11)\ ,$
$\displaystyle\mu_{10}$ $\displaystyle=3(2g_{s}{+}1)(12g_{\ell}{+}6g_{s}{+}11)(12g_{\ell}{+}6g_{s}{+}17)\ ,$
$\displaystyle\mu_{11}$ $\displaystyle=-6(12g_{\ell}{+}6g_{s}{+}11)(96g_{\ell}^{2}{+}12g_{s}^{2}{+}104g_{s}{+}4g_{\ell}(18g_{s}{+}59){+}145)\ ,$
$\displaystyle\mu_{12}$ $\displaystyle=6(12g_{\ell}{+}6g_{s}{+}11)(12g_{\ell}{+}6g_{s}{+}17)(12g_{\ell}{+}6g_{s}{+}19)\ ,$
$\displaystyle\mu_{13}$ $\displaystyle=(2g_{s}{+}1)(2g_{s}{+}3)(4g_{\ell}(4191{+}3164g_{\ell}{+}784g_{\ell}^{2}){+}16g_{\ell}g_{s}(645{+}238g_{\ell}{+}95g_{s}){+}2g_{s}(3447{+}1046g_{s}{+}100g_{s}^{2}){+}7269)\ ,$
$\displaystyle\mu_{14}$ $\displaystyle=-6(2g_{s}{+}1)(2g_{s}{+}3)(12g_{\ell}{+}6g_{s}{+}13)(112g_{\ell}(g_{\ell}{+}3){+}144g_{s}{+}96g_{\ell}g_{s}{+}20g_{s}^{2}{+}251)\ ,$
$\displaystyle\mu_{15}$ $\displaystyle=3(2g_{s}{+}1)(2g_{s}{+}3)(12g_{\ell}{+}6g_{s}{+}13)(12g_{\ell}{+}6g_{s}{+}17)(12g_{\ell}{+}6g_{s}{+}19)\ ,$
$\displaystyle\mu_{16}$ $\displaystyle=-12(2g_{s}{+}3)(12g_{\ell}{+}6g_{s}{+}13)(12g_{\ell}{+}6g_{s}{+}17)(112g_{\ell}^{2}{+}4g_{s}(5g_{s}{+}38){+}8g_{\ell}(12g_{s}{+}43){+}263)\ ,$
$\displaystyle\mu_{17}$ $\displaystyle=36(2g_{s}{+}3)(4g_{\ell}{+}2g_{s}{+}7)(12g_{\ell}{+}6g_{s}{+}13)(12g_{\ell}{+}6g_{s}{+}17)(12g_{\ell}{+}6g_{s}{+}19)\ ,$
$\displaystyle\mu_{18}$ $\displaystyle=36(4g_{\ell}{+}2g_{s}{+}7)(12g_{\ell}{+}6g_{s}{+}13)(12g_{\ell}{+}6g_{s}{+}17)(12g_{\ell}{+}6g_{s}{+}19)(12g_{\ell}{+}6g_{s}{+}23)\ ,$
$\displaystyle\mu_{19}$ $\displaystyle=-(73728g_{\ell}^{5}{+}288g_{s}^{5}{+}11952g_{s}^{4}{+}127152g_{s}^{3}{+}552584g_{s}^{2}{+}1064386g_{s}{+}18432g_{\ell}^{4}(7g_{s}{+}32){+}256g_{\ell}^{3}(342g_{s}^{2}{+}3555g_{s}{+}7417){+}16g_{\ell}^{2}(1800g_{s}^{3}{+}32508g_{s}^{2}{+}146534g_{s}{+}190333){+}16g_{\ell}(288g_{s}^{4}{+}8136g_{s}^{3}{+}59596g_{s}^{2}{+}163350g_{s}{+}151015){+}754803)\ ,$
$\displaystyle\mu_{20}$ $\displaystyle=3(12g_{\ell}{+}6g_{s}{+}13)(9216g_{\ell}^{4}{+}144g_{s}^{4}{+}4032g_{s}^{3}{+}30072g_{s}^{2}{+}85520g_{s}{+}1152g_{\ell}^{3}(12g_{s}{+}55){+}16g_{\ell}^{2}(468g_{s}^{2}{+}4824g_{s}{+}10307){+}8g_{\ell}(216g_{s}^{3}{+}3852g_{s}^{2}{+}17722g_{s}{+}23917){+}83097)\ ,$ (93)
$\displaystyle\mu_{21}$ $\displaystyle=-3(12g_{\ell}{+}6g_{s}{+}13)(12g_{\ell}{+}6g_{s}{+}17)(12g_{\ell}{+}6g_{s}{+}19)(96g_{\ell}^{2}{+}12g_{s}^{2}{+}160g_{s}{+}4g_{\ell}(18g_{s}{+}91){+}357)\ ,$
$\displaystyle\mu_{22}$ $\displaystyle=48(12g_{\ell}{+}6g_{s}{+}13)(12g_{\ell}{+}6g_{s}{+}17)(288g_{\ell}^{2}{+}36g_{s}^{2}{+}328g_{s}{+}4g_{\ell}(217{+}54g_{s}){+}651)\ ,$
$\displaystyle\mu_{23}$ $\displaystyle=-432(4g_{\ell}{+}2g_{s}{+}7)(12g_{\ell}{+}6g_{s}{+}13)(12g_{\ell}{+}6g_{s}{+}17)(12g_{\ell}{+}6g_{s}{+}19)\ ,$
$\displaystyle\mu_{24}$ $\displaystyle=3(4g_{\ell}{+}2g_{s}{+}7)(12g_{\ell}{+}6g_{s}{+}13)(12g_{\ell}{+}6g_{s}{+}17)(12g_{\ell}{+}6g_{s}{+}19)(12g_{\ell}{+}6g_{s}{+}23)\ .$
## Appendix B Formulae for $\bf{BC_{3}}$
Explicitly, the integrals of motion read
$\displaystyle{I_{4}}$ $\displaystyle{\ =\
p_{1}^{4}+g_{s}(g_{s}{-}1)\\{p_{1}^{2},{\textstyle\frac{1}{(x^{1})^{2}}}\\}+g_{s}^{2}(g_{s}{-}1)^{2}{\textstyle\frac{1}{(x^{1})^{4}}}+2g_{\ell}(g_{\ell}{-}1)\textstyle\sum_{\ell\neq
1}^{3}\bigl{\\{}p_{1}^{2},{\textstyle\frac{1}{(x^{1}-x^{\ell})^{2}}}{+}{\textstyle\frac{1}{(x^{1}+x^{\ell})^{2}}}\bigr{\\}}}$
(94)
$\displaystyle{-2{\mathrm{i}}g_{\ell}(g_{\ell}{-}1)\textstyle\sum_{\ell\neq
1}^{3}\\{p_{1},{\textstyle\frac{1}{(x^{1}-x^{\ell})^{3}}}{+}{\textstyle\frac{1}{(x^{1}+x^{\ell})^{3}}}\\}+g_{\ell}^{2}(g_{\ell}{-}1)^{2}\textstyle\sum_{\ell\neq
1}^{3}\left({\textstyle\frac{1}{(x^{1}-x^{\ell})^{4}}}{+}{\textstyle\frac{1}{(x^{1}+x^{\ell})^{4}}}\right)}$
$\displaystyle{+4g_{\ell}(g_{\ell}{-}1)\left({\textstyle\frac{1}{(x^{1}-x^{2})^{2}}}{-}{\textstyle\frac{1}{(x^{1}+x^{2})^{2}}}\right)p_{1}p_{2}+4g_{\ell}^{2}(g_{\ell}{-}1)^{2}\left({\textstyle\frac{3}{((x^{1})^{2}-(x^{2})^{2})^{2}}}{+}4{\textstyle\frac{(x^{1})^{2}+(x^{2})^{2}}{((x^{1})^{2}-(x^{2})^{2})^{2}}}{\textstyle\frac{(x^{1})^{2}+(x^{3})^{2}}{((x^{1})^{2}-(x^{3})^{2})^{2}}}\right)}$
$\displaystyle{+8g_{\ell}(g_{\ell}{-}1)g_{s}(g_{s}{-}1)\left({\textstyle\frac{1}{(x^{1}x^{2})^{2}}}{+}{\textstyle\frac{2}{((x^{1})^{2}-(x^{2})^{2})^{2}}}\right)+\text{cyclic}\
.}$ $\displaystyle{I_{6}}$
$\displaystyle=\tfrac{1}{3}p_{1}^{2}p_{2}^{2}p_{3}^{2}+g_{s}(g_{s}{-}1)\tfrac{1}{(x^{1})^{2}}p_{2}^{2}p_{3}^{2}-g_{\ell}(g_{\ell}{-}1)\bigl{\\{}p_{1}p_{2}p_{3}^{2},{\textstyle\frac{1}{(x^{1}-x^{2})^{2}}}{-}{\textstyle\frac{1}{(x^{1}+x^{2})^{2}}}\bigr{\\}}+g_{s}^{2}(g_{s}{-}1)^{2}\tfrac{1}{(x^{1})^{2}(x^{2})^{2}}p_{3}^{2}$
(95)
$\displaystyle+16g_{\ell}^{2}(g_{\ell}{-}1)^{2}{\textstyle\frac{(x^{1}x^{2})^{2}}{((x^{1})^{2}-(x^{2})^{2})^{4}}}p_{3}^{2}+8g_{s}(g_{s}{-}1)g_{\ell}(g_{\ell}{-}1){\textstyle\frac{1}{((x^{1})^{2}-(x^{2})^{2})^{2}}}p_{3}^{2}+16g_{\ell}^{2}(g_{\ell}{-}1)^{2}\bigl{\\{}p_{1}p_{3},{\textstyle\frac{x^{1}x^{3}(x^{2})^{2}}{((x^{1})^{2}-(x^{2})^{2})^{2}((x^{2})^{2}-(x^{3})^{2})^{2}}}\bigr{\\}}$
$\displaystyle-4g_{s}(g_{s}{-}1)g_{\ell}(g_{\ell}{-}1)\bigl{\\{}p_{1}p_{3},{\textstyle\frac{x^{1}x^{3}}{(x^{2})^{2}((x^{3})^{2}-(x^{1})^{2})^{2}}}\bigr{\\}}-g_{\ell}^{2}(g_{\ell}{-}1)^{2}{\textstyle\frac{48(x^{1}x^{2})^{2}((x^{1})^{2}+(x^{2})^{2})+160(x^{1}x^{2}x^{3})^{2}}{((x^{1})^{2}-(x^{2})^{2})^{2}((x^{2})^{2}-(x^{3})^{2})^{2}((x^{3})^{2}-(x^{1})^{2})^{2}}}$
$\displaystyle+g_{s}(g_{s}{-}1)g_{\ell}^{2}(g_{\ell}{-}1)^{2}\left({\textstyle\frac{1}{(x^{1})^{2}(x^{2}-x^{3})^{4}}}{+}{\textstyle\frac{1}{(x^{1})^{2}(x^{2}+x^{3})^{4}}}{+}{\textstyle\frac{32(x^{1})^{2}}{((x^{1})^{2}-(x^{2})^{2})^{2}((x^{3})^{2}-(x^{1})^{2})^{2}}}\right)$
$\displaystyle-2g_{s}(g_{s}{-}1)g_{\ell}(g_{\ell}{-}1)\bigl{(}g_{\ell}(g_{\ell}{-}1){-}4g_{s}(g_{s}{-}1)\bigr{)}{\textstyle\frac{1}{((x^{1})^{2}((x^{2})^{2}-(x^{3})^{2})^{2}}}+\tfrac{1}{3}\tfrac{g_{s}^{3}(g_{s}{-}1)^{3}}{(x^{1})^{2}(x^{2})^{2}(x^{3})^{2}}+\text{cyclic}\
.$ $\displaystyle 2{J_{4}}$
$\displaystyle=\\{p_{1}^{3},x^{1}\\}+g_{s}(g_{s}{-}1)\\{p_{1},{\textstyle\frac{1}{x^{1}}}\\}+g_{\ell}(g_{\ell}{-}1)\Bigl{(}6\textstyle\sum_{\ell\neq
1}\bigl{(}{\textstyle\frac{1}{(x^{1}-x^{\ell})^{2}}}+{\textstyle\frac{1}{(x^{1}+x^{\ell})^{2}}}\bigr{)}x^{1}p_{1}-\textstyle\sum_{\ell\neq
1}\bigl{\\{}p_{1},{\textstyle\frac{1}{x^{1}-x^{\ell}}}+{\textstyle\frac{1}{x^{1}+x^{\ell}}}\bigr{\\}}\Bigr{)}$
(96) $\displaystyle+\text{cyclic}\ ,$ $\displaystyle 6{J_{6}}$
$\displaystyle=p_{2}^{2}p_{3}^{2}\\{p_{1},x^{1}\\}{+}2{\mathrm{i}}g_{\ell}(g_{\ell}{-}1)\Bigl{(}{\textstyle\frac{(p_{1}-p_{2})}{(x^{1}-x^{2})^{3}}}+{\textstyle\frac{(p_{1}+p_{2})}{(x^{1}+x^{2})^{3}}}\Bigr{)}\\{p_{3},x^{3}\\}{+}g_{s}(g_{s}{-}1)p_{3}^{2}\Bigl{(}\\{p_{1},{\textstyle\frac{x^{1}}{(x^{2})^{2}}}\\}+\\{p_{2},{\textstyle\frac{x^{2}}{(x^{1})^{2}}}\\}\Bigr{)}$
$\displaystyle-8g_{\ell}(g_{\ell}{-}1){\textstyle\frac{(x^{1}x^{2})^{2}}{((x^{1})^{2}-(x^{2})^{2})^{2}}}\bigr{(}{\textstyle\frac{p_{1}}{x^{1}}}+{\textstyle\frac{p_{2}}{x^{2}}}\bigr{)}p_{3}^{2}{-}16g_{\ell}(g_{\ell}{-}1)\tfrac{x^{1}x^{2}x^{3}}{((x^{1})^{2}-(x^{2})^{2})^{2}}p_{1}p_{2}p_{3}{+}4{\mathrm{i}}g_{\ell}(g_{\ell}{-}1){\textstyle\frac{(x^{1})^{2}+(x^{2})^{2}}{((x^{1})^{2}-(x^{2})^{2})^{2}}}p_{3}^{2}$
$\displaystyle+8{\mathrm{i}}g_{\ell}(g_{\ell}{-}1){\textstyle\frac{x^{1}x^{2}}{((x^{1})^{2}-(x^{2})^{2})^{2}}}p_{1}p_{2}{+}g_{s}^{2}(g_{s}{-}1)^{2}{\textstyle\frac{1}{(x^{1}x^{2})^{2}}}\\{p_{3},x^{3}\\}+8g_{\ell}(g_{\ell}{-}1)(2g_{\ell}^{2}{-}2g_{\ell}{-}9){\textstyle\frac{(x^{1}x^{2})^{2}}{((x^{1})^{2}-(x^{2})^{2})^{4}}}\\{p_{3},x^{3}\\}$
$\displaystyle-4g_{\ell}(g_{\ell}{-}1)g_{s}(g_{s}{-}1)\Bigl{\\{}p_{3},\bigr{(}{\textstyle\frac{(x^{1})^{2}}{(x^{2})^{2}((x^{3})^{2}-(x^{1})^{2})^{2}}}+{\textstyle\frac{(x^{2})^{2}}{(x^{1})^{2}((x^{3})^{2}-(x^{2})^{2})^{2}}}\bigr{)}x^{3}\Bigr{\\}}+8g_{\ell}(g_{\ell}{-}1)g_{s}(g_{s}{-}1){\textstyle\frac{1}{((x^{1})^{2}-(x^{2})^{2})^{2}}}\\{p_{3},x^{3}\\}$
$\displaystyle-12g_{\ell}(g_{\ell}{-}1){\textstyle\frac{(x^{1})^{4}+(x^{2})^{4}}{((x^{1})^{2}-(x^{2})^{2})^{4}}}\\{p_{3},x^{3}\\}+16g_{\ell}^{2}(g_{\ell}{-}1)^{2}{\textstyle\frac{(x^{1}x^{2})^{2}}{((x^{1})^{2}-(x^{2})^{2})^{2}}}\Bigl{\\{}p_{3},\bigr{(}{\textstyle\frac{1}{((x^{3})^{2}-(x^{1})^{2})^{2}}}+{\textstyle\frac{1}{((x^{2})^{2}-(x^{3})^{2})^{2}}}\bigr{)}x^{3}\Bigr{\\}}+\text{cyclic}\
.$
# Linear Strands of Initial Ideals of Determinantal Facet Ideals
Ayah Almousa, Cornell University, <EMAIL_ADDRESS>, http://math.cornell.edu/~aalmousa, and Keller VandeBogert, University of South Carolina, <EMAIL_ADDRESS>, http://people.math.sc.edu/kellerlv
###### Abstract.
A determinantal facet ideal (DFI) is an ideal $J_{\Delta}$ generated by
maximal minors of a generic matrix parametrized by an associated simplicial
complex $\Delta$. In this paper, we construct an explicit linear strand for
the initial ideal with respect to any diagonal term order $<$ of an arbitrary
DFI. In particular, we show that if $\Delta$ has no _1-nonfaces_ , then the
Betti numbers of the linear strand of $J_{\Delta}$ and its initial ideal
coincide. We apply this result to prove a conjecture of Ene, Herzog, and Hibi
on Betti numbers of closed binomial edge ideals in the case that the
associated graph has at most $2$ maximal cliques. More generally, we show that
the linear strand of the initial ideal (with respect to $<$) of _any_ DFI is
supported on a polyhedral cell complex obtained as an induced subcomplex of
the _complex of boxes_ , introduced by Nagel and Reiner.
###### Key words and phrases:
determinantal facet ideal, binomial edge ideal, initial ideals, linear strand,
free resolutions
AA was partially supported by the NSF GRFP under Grant No. DGE-1650441.
## 1\. Introduction
Let $R$ denote the coordinate ring of a generic $n\times m$ matrix $M$, over
some field $k$ (with $n\leq m$). The ideal of maximal minors $I_{n}(M)$
possesses many surprising and desirable properties; for example, a result of
Sturmfels and Zelevinsky [16] shows that the natural generating set consisting
of all maximal minors of $M$ forms a reduced Gröbner basis of $I_{n}(M)$, for
_any_ term order $<$. Boocher goes one step further in [3] and shows that the
graded Betti numbers of $I_{n}(M)$ and any of its initial ideals must also
agree. As a consequence, the minimal free resolution of any initial ideal of
the ideal of maximal minors can be obtained by simply setting some of the
entries in the matrix representation of the standard Eagon-Northcott
differentials equal to $0$.
One direction for generalizing ideals of maximal minors is to imagine that the
column-sets of minors appearing in the generating set are parametrized by some
simplicial complex $\Delta$. Such ideals are called _determinantal facet
ideals_ (DFIs), and have been studied in multiple contexts by a wide variety
of authors (see [6], [8], [17]). In the case that $\Delta$ has dimension $1$
(that is, $\Delta$ is a graph), DFIs are better known as _binomial edge
ideals_, and are even better behaved than arbitrary DFIs (see [14], [7],
and [9]). DFIs themselves have also been generalized in a few different
directions - Mohammadi and Rauh [12] have allowed for the minors to be
parametrized by hypergraphs, and in [2], DFIs for nonmaximal minors are
studied.
In [5], Ene, Herzog, and Hibi conjecture that Betti numbers for a _closed_
binomial edge ideal agree with that of its initial ideal with respect to any
diagonal term order. This conjecture is known to be true in the case of Cohen-
Macaulay binomial edge ideals, but has remained elusive in general. In this
paper, we give further evidence for this conjecture and prove it in the
$2$-clique case, using techniques related to the computation of linear strands.
Given a homogeneous minimal complex of initial degree $d$, the linear strand
can be obtained by restricting the $i$th differential to basis elements of
degree $i+d$, for all $i\geq 1$. The first results relating to linear strands
of DFIs were given by Herzog, Kiani, and Madani in [8]; the main result was
the fact that the linear strand of any DFI can be obtained as a so-called
_generalized_ Eagon-Northcott complex. In this paper, we prove a similar
result showing that the linear strand of the initial ideal of certain classes of
DFIs is obtained as a _generalized sparse_ Eagon-Northcott complex (see
Definition 4.3).
More precisely, in this paper we study linear strands of initial ideals of
DFIs with respect to a diagonal term order. We first construct an explicit
resolution of $\operatorname{in}_{<}I_{n}(M)$ as a _sparse_ Eagon-Northcott
complex (see 3.6). Using this, one can restrict to an appropriate subcomplex
parametrized by the clique complex of the simplicial complex associated to the
DFI $J_{\Delta}$. This subcomplex is not acyclic in general; however, if one
imposes sufficient conditions on $\Delta$, the homology of this subcomplex
will vanish in “small” degrees. This implies that the Betti numbers of certain
DFIs and their initial ideals with respect to a diagonal term order coincide
on the linear strand (see Theorem 4.13).
As an application, we prove the previously mentioned conjecture of Ene,
Herzog, and Hibi in the case that the associated graph $G$ has at most $2$
maximal cliques. Moreover, we pose more generally the conjecture that for any
lcm-closed DFI $J_{\Delta}$, the Betti numbers of $J_{\Delta}$ and its initial
ideal with respect to any diagonal term order coincide (see Conjecture 4.16).
Finally, we consider the linear strands of initial ideals (with respect to any
diagonal term order) of arbitrary DFIs. In this case, we find that the linear
strand is _always_ supported on a polyhedral cell complex. This
implies that the Betti numbers of a general linear strand can be obtained by
looking at certain induced subcomplexes of the so-called _complex of boxes_ of
Nagel and Reiner (see [13]). We conclude with examples of this construction
and remarks on further applications.
The paper is organized as follows. In Section 2, we introduce the notation and
terminology that will be in play for the rest of the paper. This includes the
definition of a DFI (see Definition 2.3), linear strands, and the Eagon-Northcott
complex. We include the aforementioned result of Boocher on the Betti numbers
of initial ideals of ideals of maximal minors, and use this to deduce that the
multigraded Betti numbers of any initial ideal of the ideal of maximal minors
are either $0$ or $1$. In Section 3 we introduce a _sparse_ Eagon-Northcott
complex to be used for resolving the initial ideal of the ideal of maximal
minors with respect to any diagonal term order. As corollaries, we obtain
minimal free resolutions of the ideal of squarefree monomials of a given
degree and the box polarization for powers of the graded maximal ideal (and
hence by specialization, any power of the graded maximal ideal).
In Section 4, we consider subcomplexes of the sparse Eagon-Northcott complex
with basis elements parametrized by a simplicial complex $\Delta$. This
subcomplex in general is _not_ acyclic, but it turns out that combinatorial
properties of $\Delta$ will allow us to deduce exactly when homology is
nontrivial in linear degrees. This combinatorial condition is encoded in the
existence of so-called $i$-_nonfaces_ , a generalization of minimal nonfaces
(see Definition 4.5). We use this to show that the Betti numbers along the
linear strand of a DFI $J_{\Delta}$ and its initial ideal with respect to a
diagonal term order coincide if and only if the clique complex associated to
$\Delta$ has no $1$-nonfaces of cardinality $n+1$. We apply this result to
give a proof of the conjecture of Ene, Herzog, and Hibi for graphs having at
most $2$ maximal cliques (see Corollary 4.21).
In Section 5, we consider linear strands supported on cellular complexes. In
particular, we show that the linear strand of the initial ideal (with respect
to a diagonal term order) of _any_ DFI is supported on a polyhedral cellular
complex which can be obtained as the induced subcomplex of the _complex of
boxes_ introduced by Nagel and Reiner in [13] (see Theorem 5.11). This implies
that the multigraded Betti numbers of the linear strand of any such ideal are
$0$ or $1$ and moreover allows us to count the Betti numbers in the linear
strand as the $f$-vector of the associated polyhedral cell complex.
## 2\. Background
In this section we introduce some necessary background to be used for the rest
of the paper. To start, we discuss determinantal facet ideals and establish
some notation related to these ideals and the simplicial complexes that
parametrize them. We then give the definition of the _linear strand_ of a
minimal homogeneous complex, which will be used extensively in later sections.
Finally, we conclude with some results on ideals generated by maximal minors,
including the definition of the classical _Eagon-Northcott_ complex, together
with an observation that the multigraded Betti numbers of the minimal free
resolution of the initial ideal of the ideal of maximal minors with respect to
any term order are either $0$ or $1$.
###### Notation 2.1.
Let $S=k[x_{ij}\mid 1\leq i\leq n,\ 1\leq j\leq m]$ be a polynomial ring over
an arbitrary field $k$. Let $M$ be an $n\times m$ matrix of variables in $S$
where $n\leq m$. For indices $\mathbf{a}=\\{a_{1},\ldots,a_{r}\\}$ and
$\mathbf{b}=\\{b_{1},\ldots,b_{r}\\}$ such that $1\leq a_{1}<\ldots<a_{r}\leq
n$ and $1\leq b_{1}<\cdots<b_{r}\leq m$, set
$[\mathbf{a}|\mathbf{b}]=[a_{1},\ldots,a_{r}|b_{1},\ldots,b_{r}]=\det\left(\begin{array}[]{ccc}x_{a_{1},b_{1}}&\cdots&x_{a_{1},b_{r}}\\\
\vdots&\ddots&\vdots\\\ x_{a_{r},b_{1}}&\cdots&x_{a_{r},b_{r}}\\\
\end{array}\right)$
where $[\mathbf{a}|\mathbf{b}]=0$ if $r>n$. When $r=n$, use the simplified
notation $[\mathbf{a}]$ = $[1,\ldots,n|\mathbf{a}]$. The ideal generated by
the $r$-minors of $M$ is denoted $I_{r}(M)$.
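For readers who want to experiment, the minor notation above is easy to realize in code. The following Python sketch (the function name and encoding are ours, purely illustrative) expands a minor $[\mathbf{a}|\mathbf{b}]$ by the Leibniz formula, recording each monomial as its set of matrix positions; under a diagonal term order, the initial term is the main-diagonal product.

```python
from itertools import permutations

def minor_terms(rows, cols):
    """Expand det(x[a_i, b_j]) via the Leibniz formula.

    Returns a dict mapping each monomial -- a sorted tuple of (row, col)
    position pairs -- to its +/-1 coefficient.  The encoding is purely
    illustrative; any generic matrix of indeterminates works.
    """
    r = len(rows)
    terms = {}
    for perm in permutations(range(r)):
        sign = 1
        for i in range(r):              # count inversions to get the sign
            for j in range(i + 1, r):
                if perm[i] > perm[j]:
                    sign = -sign
        mono = tuple(sorted((rows[i], cols[perm[i]]) for i in range(r)))
        terms[mono] = terms.get(mono, 0) + sign
    return terms

# The maximal minor [1,3] = [1,2|1,3] of a generic 2 x 3 matrix:
t = minor_terms((1, 2), (1, 3))
assert t == {((1, 1), (2, 3)): 1, ((1, 3), (2, 1)): -1}
# Under a diagonal term order, the initial term is the main-diagonal
# product x_{1,1} x_{2,3}, the monomial with coefficient +1.
```

A $3\times 3$ minor expands into $3!=6$ terms the same way, with the diagonal term always carrying coefficient $+1$.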
###### Definition 2.2.
For a simplicial complex $\Delta$ and an integer $i$, the $i$-th skeleton
$\Delta^{(i)}$ of $\Delta$ is the subcomplex of $\Delta$ whose faces are those
faces of $\Delta$ with dimension at most $i$. Let $\mathcal{S}$ denote the set
of simplices $\Gamma$ with vertices in $[m]$ with $\dim(\Gamma)\geq n-1$ and
$\Gamma^{(n-1)}\subset\Delta$.
Let $\Gamma_{1},\dotsc,\Gamma_{c}$ be maximal elements in $\mathcal{S}$ with
respect to inclusion, and let $\Delta_{i}:=\Gamma^{(n-1)}_{i}$. Each
$\Gamma_{i}$ is called a _maximal clique_ , and any induced subcomplex of
$\Gamma_{i}$ is a _clique_. The simplicial complex $\Delta^{\textrm{clique}}$
whose facets are the maximal cliques of $\Delta$ is called the _clique
complex_ associated to $\Delta$. The decomposition
$\Delta=\Delta_{1}\cup\cdots\cup\Delta_{c}$ is called the _maximal clique
decomposition_ of $\Delta$.
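For the small examples in this paper, the maximal clique decomposition of Definition 2.2 can be computed by brute force. The sketch below (our own helper, not part of the paper; serious computations would use a proper algorithm such as Bron-Kerbosch) treats a pure $1$-dimensional $\Delta$ as a graph and returns its maximal cliques.

```python
from itertools import combinations

def maximal_cliques(vertices, edges):
    """Brute-force the maximal cliques of a small graph, viewed as a
    pure 1-dimensional simplicial complex Delta."""
    edges = {frozenset(e) for e in edges}
    # every subset of vertices all of whose pairs are edges is a clique
    cliques = [set(c)
               for r in range(1, len(vertices) + 1)
               for c in combinations(vertices, r)
               if all(frozenset(p) in edges for p in combinations(c, 2))]
    # keep only the inclusion-maximal ones
    return [c for c in cliques if not any(c < d for d in cliques)]

# Delta = path 1-2-3 glued to triangle 3-4-5: three maximal cliques,
# giving the decomposition Delta = Delta_1 u Delta_2 u Delta_3.
mc = maximal_cliques([1, 2, 3, 4, 5],
                     [(1, 2), (2, 3), (3, 4), (3, 5), (4, 5)])
assert sorted(map(sorted, mc)) == [[1, 2], [2, 3], [3, 4, 5]]
```

The clique complex $\Delta^{\textrm{clique}}$ is then the simplicial complex whose facets are these maximal cliques.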
###### Definition 2.3.
Let $\Delta$ be a pure $(n-1)$-dimensional simplicial complex on the vertex
set $[m]$. Let $S=k[x_{ij}\mid 1\leq i\leq n,\ 1\leq j\leq m]$ be a polynomial
ring over an arbitrary field $k$. Let $M$ be an $n\times m$ matrix of
variables in $S$. The _determinantal facet ideal_ (or _DFI_)
$J_{\Delta}\subseteq S$ associated to $\Delta$ is the ideal generated by
determinants of the form $[\mathbf{a}]$ where $\mathbf{a}$ supports an
$(n-1)$-facet of $\Delta$; that is, the columns of $[\mathbf{a}]$ correspond
to the vertices of some facet of $\Delta$.
###### Notation 2.4.
Let $\Delta$ be a pure $(n-1)$-dimensional simplicial complex on the vertex
set $[m]$ with maximal clique decomposition
$\Delta=\Delta_{1}\cup\cdots\cup\Delta_{c}$. The notation $J_{\Delta_{i}}$
will be used to denote the DFI associated to the simplicial complex whose
facets come from all $\mathbf{a}\in\Delta_{i}$ with $|\mathbf{a}|=r$.
###### Definition 2.5.
Let $\Delta$ be a pure $(n-1)$-dimensional simplicial complex on $m$ vertices
with maximal clique decomposition $\Delta=\bigcup_{i=1}^{c}\Delta_{i}$. The
DFI $J_{\Delta}$ is _lcm-closed_ if the following condition holds:
1. (*)
For all $[\mathbf{a}]\in J_{\Delta_{i}}$, $[\mathbf{a}^{\prime}]\in
J_{\Delta_{j}}$ with $(a_{k},b_{k})=(a_{k}^{\prime},b_{k}^{\prime})$ for some
$1\leq k\leq r$ and $[\mathbf{a}],\ [\mathbf{a}^{\prime}]\notin
J_{\Delta_{i}\cap\Delta_{j}}$, there exists $[\mathbf{c}]\in
J_{\Delta_{i}\cap\Delta_{j}}$ such that $\operatorname{in}([\mathbf{c}])$
divides
$\operatorname{lcm}\big{(}\operatorname{in}([\mathbf{a}]),\operatorname{in}([\mathbf{a}^{\prime}])\big{)}$.
In [2], it is shown that the standard minimal generating set of an lcm-closed
DFI forms a reduced Gröbner basis; conjecturally, we believe that Definition
2.5 is equivalent to being a Gröbner basis for DFIs. The following definition
introduces the main theme of the current paper.
###### Definition 2.6.
Let $F_{\bullet}$ be a minimal graded $R$-free complex with $F_{0}$ having
initial degree $d$. Then the _linear strand_ of $F_{\bullet}$, denoted
$F_{\bullet}^{\operatorname{lin}}$, is the complex obtained by restricting
$d_{i}^{F}$ to $(F_{i})_{d+i}$ for each $i\geq 1$.
###### Remark 2.7.
Observe that the minimality assumption in Definition 2.6 ensures that the
linear strand is well defined. Choosing bases, the linear strand can be
obtained by restricting to the columns where only linear entries occur in the
matrix representation of each differential.
The following result, due to Boocher, shows that with respect to any term
order $<$, the ideal $\operatorname{in}_{<}I_{n}(M)$ specializes to the ideal
of all squarefree monomials of degree $n$ in $m$ variables.
###### Theorem 2.8 ([3, Proof of Theorem 3.1]).
For any term order $<$, the sequence of variable differences
$\\{x_{11}-x_{21},\dots,x_{11}-x_{n1}\\}\cup\cdots\cup\\{x_{1m}-x_{2m},\dotsc,x_{1m}-x_{nm}\\}$
forms a regular sequence on $R/\operatorname{in}_{<}I_{n}(M)$. In particular,
$\beta_{ij}(R/I_{n}(M))=\beta_{ij}(R/\operatorname{in}_{<}I_{n}(M))\
\textrm{for all}\ i,j.$
###### Definition 2.9 (Eagon-Northcott complex).
Let $\phi:F\to G$ be a homomorphism of free modules of ranks $m$ and $n$,
respectively, with $m\geq n$. Let $c_{\phi}$ be the image of $\phi$ under the
isomorphism $\operatorname{Hom}_{R}(F,G)\xrightarrow{\cong}F^{*}\otimes G$.
The _Eagon-Northcott complex_ is the complex
$0\to D_{m-n}(G^{*})\otimes\bigwedge^{m}F\to
D_{m-n-1}(G^{*})\otimes\bigwedge^{m-1}F\to\cdots\to
G^{*}\otimes\bigwedge^{n+1}F\to\bigwedge^{n}F\to\bigwedge^{n}G$
with differentials in homological degree $\geq 2$ induced by multiplication by
the element $c_{\phi}\in F^{*}\otimes G$, and the map
$\bigwedge^{n}F\to\bigwedge^{n}G$ is $\bigwedge^{n}\phi$.
###### Notation 2.10.
Let $E_{\bullet}$ denote the Eagon-Northcott complex of Definition 2.9. If $F$
has basis $f_{1},\dots,f_{m}$ and $G$ has basis $g_{1},\dots,g_{n}$, then
define
$g^{*(\alpha)}\otimes f_{I}:=g_{1}^{*(\alpha_{1})}\cdots
g_{n}^{*(\alpha_{n})}\otimes f_{i_{1}}\wedge\cdots\wedge f_{i_{n+\ell}},$
where $\alpha=(\alpha_{1},\dots,\alpha_{n})$ and
$I=(i_{1}<\cdots<i_{n+\ell})$. Observe that $E_{\bullet}$ inherits a
$\mathbb{Z}^{n}\times\mathbb{Z}^{m}$-grading by setting
$\operatorname{mdeg}(g^{*(\alpha)}\otimes
f_{I})=(1+\alpha_{1}\epsilon_{1}+\cdots+\alpha_{n}\epsilon_{n},\epsilon_{i_{1}}+\cdots\epsilon_{i_{n+\ell}}),$
where $\epsilon_{k}$ denotes the appropriately sized vector with $1$ in the
$k$th spot and $0$ elsewhere, and $1$ denotes a length $n$ vector of $1$s.
###### Corollary 2.11.
Let $F_{\bullet}$ denote a multigraded resolution of
$\operatorname{in}_{<}I_{n}(M)$. Then for every multidegree $\alpha$,
$\beta_{\alpha}(R/\operatorname{in}_{<}I_{n}(M))\leq 1.$
###### Proof.
By Theorem 2.8, a minimal free resolution of $R/\operatorname{in}_{<}I_{n}(M)$
may be obtained by setting some of the entries in the matrix representation of
the differentials of Definition 2.9 equal to $0$. With respect to the
$\mathbb{Z}^{n}\times\mathbb{Z}^{m}$-grading of the Eagon-Northcott complex
$E_{\bullet}$, one has
$\beta_{\alpha}(R/\operatorname{in}_{<}I_{n}(M))\leq 1.$
Since any $\mathbb{Z}^{nm}$-graded minimal free resolution is also
$\mathbb{Z}^{n}\times\mathbb{Z}^{m}$-graded, the result follows. ∎
## 3\. Sparse Eagon-Northcott Complexes
In this section, we construct an explicit example of a _sparse_ Eagon-
Northcott complex. The most complicated part of the construction ends up being
the definition of the differentials; as it turns out, acyclicity will follow
immediately from Theorem 2.8. As a consequence, we deduce that certain
specializations of this complex also yield minimal free resolutions of the
ideal generated by all squarefree monomials of a given degree and powers of
the graded maximal ideal in a polynomial ring. We begin this section with the
following setup:
###### Setup 3.1.
Let $R=k[x_{ij}\mid 1\leq i\leq n,1\leq j\leq m]$ and $M=(x_{ij})_{1\leq i\leq
n,1\leq j\leq m}$ denote a generic $n\times m$ matrix, where $n\leq m$. View
$M$ as a homomorphism $M:F\to G$ of free modules $F$ and $G$ of rank $m$ and
$n$, respectively.
Let $f_{i}$, $i=1,\dots,m$, $g_{j}$, $j=1,\dots,n$ denote the standard bases
with respect to which $M$ has the above matrix representation. Let $<$ denote
any diagonal term order on $R$ and $\operatorname{in}_{<}I_{n}(M)$ the initial
ideal with respect to $<$ of the ideal of maximal minors of $M$.
###### Notation 3.2.
Let $\alpha=(\alpha_{1},\dotsc,\alpha_{n})$. Define
$\alpha_{\leq i}:=(\alpha_{1},\dotsc,\alpha_{i}),$
where $\alpha_{\leq i}=\varnothing$ if $i\leq 0$ and $\alpha_{\leq i}=\alpha$
if $i\geq n$.
###### Definition 3.3.
Let $\alpha=(\alpha_{1},\dotsc,\alpha_{n})$ with $|\alpha|=\ell$ and
$I=(i_{1}<\cdots<i_{n+\ell})$. Define the indexing set
$\mathcal{I}_{<}(\alpha,I):=\\{(i,I_{i+j})\mid i\in\\{k\mid\alpha_{k}>0\\},\
|\alpha_{\leq i-1}|\leq j\leq|\alpha_{\leq i}|\\}$
###### Example 3.4.
One easily computes:
$\mathcal{I}_{<}((1,1,1),(1,2,3,4,5,6))=\\{(1,1),(1,2),(2,3),(2,4),(3,5),(3,6)\\}$
$\mathcal{I}_{<}((1,0,2),(1,2,3,4,5,6))=\\{(1,1),(1,2),(3,4),(3,5),(3,6)\\}$
$\mathcal{I}_{<}((2,1),(1,2,4,5,6))=\\{(1,1),(1,2),(1,4),(2,5),(2,6)\\}$
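The indexing set of Definition 3.3 is straightforward to compute. The following helper (an illustrative transcription of the definition, using $1$-based tuples as in the paper) reproduces Example 3.4.

```python
def index_set(alpha, I):
    """The indexing set I_<(alpha, I) of Definition 3.3; I is 1-based,
    so the Python entry I[j-1] is the paper's I_j."""
    pairs = []
    for i, a in enumerate(alpha, start=1):
        if a == 0:
            continue
        lo = sum(alpha[: i - 1])            # |alpha_{<= i-1}|
        hi = sum(alpha[:i])                 # |alpha_{<= i}|
        for j in range(lo, hi + 1):
            pairs.append((i, I[i + j - 1]))  # the pair (i, I_{i+j})
    return pairs

assert index_set((1, 1, 1), (1, 2, 3, 4, 5, 6)) == \
    [(1, 1), (1, 2), (2, 3), (2, 4), (3, 5), (3, 6)]
assert index_set((1, 0, 2), (1, 2, 3, 4, 5, 6)) == \
    [(1, 1), (1, 2), (3, 4), (3, 5), (3, 6)]
assert index_set((2, 1), (1, 2, 4, 5, 6)) == \
    [(1, 1), (1, 2), (1, 4), (2, 5), (2, 6)]
```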
###### Remark 3.5.
Each basis element of the Eagon-Northcott complex also has a
$\mathbb{Z}^{nm}$-grading and can be viewed as a monomial in the ideal
$\operatorname{in}_{<}I_{n}(M)$:
$g^{*(\alpha)}\otimes f_{\sigma}\leftrightarrow(x_{1\sigma_{1}}\dots
x_{1\sigma_{\alpha_{1}+1}})\cdot(x_{2\sigma_{\alpha_{1}+2}}\dots
x_{2\sigma_{\alpha_{1}+\alpha_{2}+2}})\cdots(x_{n\sigma_{n+\ell-\alpha_{n}}}\dots
x_{n\sigma_{n+\ell}})=:m_{\alpha,\sigma},\qquad\text{where }\ell=|\alpha|.$
Observe then that $\mathcal{I}_{<}(\alpha,\sigma)$ chooses precisely those indices
for which $m_{\alpha,\sigma}/x_{rs}\in\operatorname{in}_{<}I_{n}(M)$ for all
$(r,s)\in\mathcal{I}_{<}(\alpha,\sigma)$.
Using the indexing set of Definition 3.3, we can define what will end up being
a complex as follows:
###### Definition 3.6.
Adopt notation and hypotheses as in Setup 3.1. Let $E_{\bullet}^{\prime}$
denote the sequence of module homomorphisms with
$E_{\ell}^{\prime}=\begin{cases}\bigwedge^{n}G\qquad\textrm{if}\ \ell=0\\\
D_{\ell-1}(G^{*})\otimes\bigwedge^{n+\ell-1}F\qquad\textrm{otherwise},\\\
\end{cases}$
and first differential $d^{\prime}_{1}:\bigwedge^{n}F\to\bigwedge^{n}G$
sending $f_{I}\mapsto\operatorname{in}_{<}(M(f_{I}))$. For $\ell\geq 2$,
$d^{\prime}_{\ell}:D_{\ell-1}(G^{*})\otimes\bigwedge^{n+\ell-1}F\to
D_{\ell-2}(G^{*})\otimes\bigwedge^{n+\ell-2}F$ is the map
$d^{\prime}_{\ell}(g^{*(\alpha)}\otimes
f_{I})=\sum_{(i,I_{j})\in\mathcal{I}_{<}(\alpha,I)}(-1)^{j+1}x_{iI_{j}}g^{*(\alpha-\epsilon_{i})}\otimes
f_{I\backslash I_{j}}.$
###### Proposition 3.7.
The sequence of homomorphisms $E^{\prime}_{\bullet}$ of Definition 3.6 forms a
complex.
###### Proof.
Observe first that the map $d^{\prime}_{1}:\bigwedge^{n}F\to\bigwedge^{n}G$
sends
$f_{I}\mapsto x_{1I_{1}}\cdots x_{nI_{n}}g_{[n]}.$
We first verify that $d^{\prime}_{1}\circ d^{\prime}_{2}=0$. Let
$g_{k}^{*}\otimes f_{I}\in G^{*}\otimes\bigwedge^{n+1}F$; then:
$\displaystyle d^{\prime}_{1}\circ d^{\prime}_{2}(g_{k}^{*}\otimes f_{I})$
$\displaystyle=d^{\prime}_{1}((-1)^{k+1}x_{kI_{k}}f_{I\backslash
I_{k}}+(-1)^{k+2}x_{kI_{k+1}}f_{I\backslash I_{k+1}})$
$\displaystyle=(-1)^{k+1}x_{kI_{k}}(x_{1I_{1}}\cdots
x_{k-1,I_{k-1}}x_{k,I_{k+1}}\cdots x_{n,I_{n+1}})g_{[n]}$
$\displaystyle\quad+(-1)^{k+2}x_{kI_{k+1}}(x_{1I_{1}}\cdots
x_{k,I_{k}}x_{k+1,I_{k+2}}\cdots x_{n,I_{n+1}})g_{[n]}$ $\displaystyle=0.$
Assume now that $\ell\geq 1$; the fact that $d^{\prime}_{\ell+1}\circ
d^{\prime}_{\ell+2}=0$ is a nearly identical computation to that of the
standard Eagon-Northcott differential, where one uses the fact that
$\mathcal{I}_{<}(\alpha-\epsilon_{i},I\backslash
I_{j})=\begin{cases}\mathcal{I}_{<}(\alpha,I)\backslash\\{(i,I_{j})\\}\qquad\textrm{if}\
\alpha_{i}>1\\\
\mathcal{I}_{<}(\alpha,I)\backslash\\{(i^{\prime},I_{j^{\prime}})\mid
i^{\prime}=i\\}\qquad\textrm{if}\ \alpha_{i}=1.\\\ \end{cases}$
∎
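Proposition 3.7 can also be checked mechanically in small cases. The standalone sketch below (the encoding of basis elements and polynomials is ours, purely for illustration) implements the differentials of Definition 3.6 over exact integer coefficients and verifies $d^{\prime}\circ d^{\prime}=0$ on every basis element in low homological degrees for a generic $2\times 4$ matrix.

```python
from itertools import combinations

n, m = 2, 4   # a small generic case; the check below works for any n <= m

def positions(alpha):
    # pairs (i, j): a row i with alpha_i > 0 and a position j within I,
    # following Definition 3.3 (the column used is then I_j)
    out = []
    for i in range(1, len(alpha) + 1):
        if alpha[i - 1] > 0:
            lo, hi = sum(alpha[:i - 1]), sum(alpha[:i])
            out += [(i, i + j) for j in range(lo, hi + 1)]
    return out

def mul(poly, var, sign):
    # multiply an integer polynomial {monomial: coeff} by +/- one variable
    return {tuple(sorted(mono + (var,))): sign * c for mono, c in poly.items()}

def d(elem):
    # one sparse Eagon-Northcott differential; elem maps basis labels
    # (alpha, I) -- or 'G' at the bottom -- to polynomials, with the
    # variable x_{ij} encoded as the pair (i, j)
    out = {}
    for (alpha, I), poly in elem.items():
        if sum(alpha) == 0:        # d'_1 : f_I |-> x_{1 I_1} ... x_{n I_n} g_[n]
            for i in range(1, n + 1):
                poly = mul(poly, (i, I[i - 1]), 1)
            terms = [('G', poly)]
        else:                      # d'_l for l >= 2, as in Definition 3.6
            terms = [((alpha[:i - 1] + (alpha[i - 1] - 1,) + alpha[i:],
                       I[:j - 1] + I[j:]),
                      mul(poly, (i, I[j - 1]), (-1) ** (j + 1)))
                     for i, j in positions(alpha)]
        for key, p in terms:
            tgt = out.setdefault(key, {})
            for mono, c in p.items():
                tgt[mono] = tgt.get(mono, 0) + c
    return out

def is_zero(elem):
    return all(c == 0 for poly in elem.values() for c in poly.values())

# d' o d' vanishes on every basis element in homological degrees 2 and 3:
alphas = {1: [(1, 0), (0, 1)], 2: [(2, 0), (1, 1), (0, 2)]}
for ell in (1, 2):
    for alpha in alphas[ell]:
        for I in combinations(range(1, m + 1), n + ell):
            assert is_zero(d(d({(alpha, I): {(): 1}})))
```

The cancellations observed here are exactly those in the proof above: each product of two variables appears twice, with opposite signs.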
###### Definition 3.8.
Adopt notation and hypotheses as in Setup 3.1. Then the _sparse Eagon-
Northcott_ complex (with respect to $<$) is the complex of Definition 3.6.
###### Remark 3.9.
The complex of Definition 3.6 is called _sparse_ because of the relationship
of the differentials to that of the classical Eagon-Northcott complex. Namely,
the differentials are obtained by simply setting some of the entries equal to
$0$ in the matrix representation.
###### Corollary 3.10.
Adopt notation and hypotheses as in Setup 3.1. Then the sparse Eagon-Northcott
complex $E^{\prime}_{\bullet}$ is a minimal free resolution of the ideal
$\operatorname{in}_{<}(I_{n}(M))$.
###### Proof.
It is clear that the image of each basis element of $E_{i}^{\prime}$ forms a
linearly independent subset of $\ker(d^{\prime}_{i-1})$ for each $i\geq 2$. Using
Theorem 2.8, this image must also be a generating set. ∎
The following results are quick corollaries of Corollary 3.10. For the
definition of the polarization used in Corollary 3.12, see [1].
###### Corollary 3.11.
Adopt notation and hypotheses as in Setup 3.1. Let $E_{\bullet}^{\prime}$
denote the sparse Eagon-Northcott complex with respect to $<$. Then
$E_{\bullet}^{\prime}\otimes R/\sigma$ is a minimal free resolution of the
ideal of all squarefree monomials of degree $n$ in $m$ variables, where
$\displaystyle\sigma=\\{x_{11}-x_{21},x_{11}-x_{31},\ldots,x_{11}-x_{n1}\\}\cup\\{x_{12}-x_{22},\ldots,x_{12}-x_{n2}\\}\cup\ldots$
$\displaystyle\cup\\{x_{1m}-x_{2m},\ldots,x_{1m}-x_{nm}\\}$
###### Proof.
This is immediate by Theorem 2.8. ∎
###### Corollary 3.12.
Adopt notation and hypotheses as in Setup 3.1. Let $E_{\bullet}^{\prime}$
denote the sparse Eagon-Northcott complex with respect to $<$. Then, under the
relabelling
$x_{ij}\mapsto x_{j-i+1,i},$
$E^{\prime}_{\bullet}$ is a minimal free resolution of the box polarization of
$(x_{1},\dotsc,x_{m-n+1})^{n}$.
In particular, with the above relabelling, $E_{\bullet}^{\prime}\otimes
R/\sigma$ is a minimal free resolution of $(x_{1},\dotsc,x_{m-n+1})^{n}$,
where
$\displaystyle\sigma=\\{x_{11}-x_{12},x_{11}-x_{13},\ldots,x_{11}-x_{1n}\\}\cup\\{x_{21}-x_{22},\ldots,x_{21}-x_{2n}\\}\cup\ldots$
$\displaystyle\cup\\{x_{m-n+1,1}-x_{m-n+1,2},\ldots,x_{m-n+1,1}-x_{m-n+1,n}\\}$
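The relabelling of Corollary 3.12 sends the diagonal term $x_{1a_{1}}\cdots x_{na_{n}}$ (with $a_{1}<\dots<a_{n}$) to $\prod_{i}x_{a_{i}-i+1,\,i}$, which is the box polarization of $x_{c_{1}}\cdots x_{c_{n}}$ for $c_{i}=a_{i}-i+1$. This bijection can be checked mechanically for small $n$ and $m$; the following Python sketch (function names and the encoding of monomials as sets of index pairs are ours, purely for illustration) does exactly that:

```python
from itertools import combinations, combinations_with_replacement

def check_relabelling(n, m):
    # Diagonal terms of the maximal minors: x_{1,a_1}...x_{n,a_n} with
    # a_1 < ... < a_n, encoded as frozensets of index pairs (i, a_i).
    diagonal_terms = {
        frozenset((i + 1, a) for i, a in enumerate(A))
        for A in combinations(range(1, m + 1), n)
    }
    # Apply the relabelling x_{i,j} -> x_{j-i+1, i}.
    relabelled = {
        frozenset((j - i + 1, i) for (i, j) in term) for term in diagonal_terms
    }
    # Box polarization of (x_1,...,x_{m-n+1})^n: the monomial
    # x_{c_1}...x_{c_n} with c_1 <= ... <= c_n polarizes to x_{c_1,1}...x_{c_n,n}.
    box_polarization = {
        frozenset((c, t + 1) for t, c in enumerate(C))
        for C in combinations_with_replacement(range(1, m - n + 2), n)
    }
    return relabelled == box_polarization

print(check_relabelling(2, 4))  # True
print(check_relabelling(3, 6))  # True
```

The check succeeds because $a_{i}\mapsto c_{i}=a_{i}-i+1$ is a bijection between strictly increasing sequences in $[m]$ and weakly increasing sequences in $[m-n+1]$.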
## 4\. Linear Strand of Certain DFIs
In this section, we construct an explicit linear strand for the initial ideal
of certain classes of DFIs. This linear strand is built as a _generalized_
sparse Eagon-Northcott complex, where the generators in each homological
degree are parametrized by faces of the _clique complex_ associated to the
simplicial complex $\Delta$ parametrizing the DFI. We then define the notion
of an $i$-nonface of a simplicial complex (see Definition 4.5); it turns out
that the nonexistence of $1$-nonfaces will guarantee that homology of the
associated generalized sparse Eagon-Northcott complex is trivial in
appropriate degrees. In particular, we deduce that so-called lcm-closed DFIs
have the property that
$\beta_{i,i+n}(R/J_{\Delta})=\beta_{i,i+n}(R/\operatorname{in}_{<}J_{\Delta})\
\textrm{for all}\ i\geq 0,$
for any diagonal term order $<$. To conclude this section, we verify a
conjecture of Ene, Herzog, and Hibi on the Betti numbers of binomial edge
ideals when the associated graph has at most $2$ maximal cliques.
To start off, we recall some results on linear strands of general $R$-modules
due to Herzog, Kiani, and Saeedi Madani.
###### Theorem 4.1 ([8], Theorem 1.1).
Let $R$ be a standard graded polynomial ring over a field $k$. Let
$G_{\bullet}$ be a finite linear complex of free $R$-modules with initial
degree $n$. Then the following are equivalent:
1. (1)
The complex $G_{\bullet}$ is the linear strand of a finitely generated
$R$-module with initial degree $n$.
2. (2)
The homology $H_{i}(G_{\bullet})_{i+n+j}=0$ for all $i>0$ and $j=0,\ 1$.
###### Proposition 4.2 ([8], Corollary 1.2).
Let $R$ be a standard graded polynomial ring over a field $k$. Let
$G_{\bullet}$ be a finite linear complex of free $R$-modules with initial
degree $n$ such that $H_{i}(G_{\bullet})_{i+n+j}=0$ for all $i>0$, $j=0,\ 1$.
Let $N$ be a finitely generated $R$-module with minimal graded free resolution
$F_{\bullet}$. Assume that there exist isomorphisms making the following
diagram commute:
$\begin{array}{ccc}G_{1}&\longrightarrow&G_{0}\\ \downarrow\sim&&\downarrow\sim\\ F_{1}^{\textrm{lin}}&\longrightarrow&F_{0}^{\textrm{lin}}.\end{array}$
Then $G_{\bullet}\cong F_{\bullet}^{\textrm{lin}}$.
Notice that the following definition is simply a subcomplex of the sparse
Eagon-Northcott complex as in Definition 3.6.
###### Definition 4.3.
Adopt notation and hypotheses as in Setup 3.1. Let $\Delta$ denote a
simplicial complex on the vertex set $[m]$. Define
$\mathcal{C}^{<}_{0}(\Delta,M):=\bigwedge^{n}G$. For $i\geq 1$, let
$\mathcal{C}^{<}_{i}(\Delta,M)\subseteq
D_{i-1}(G^{*})\otimes\bigwedge^{n+i-1}F$ denote the free submodule generated
by all elements of the form
$g^{*(\alpha)}\otimes f_{\sigma},$
where $\sigma\in\Delta$ with $|\sigma|=n+i-1$ and
$\alpha=(\alpha_{1},\dotsc,\alpha_{n})$ with $|\alpha|=i-1$.
Let $\mathcal{C}^{<}_{\bullet}(\Delta,M)$ denote the complex induced by the
differential
$d^{\prime}_{1}:\bigwedge^{n}F\to\bigwedge^{n}G$
sending $f_{I}\mapsto\operatorname{in}_{<}(M(f_{I}))$ and for $\ell\geq 2$,
$d^{\prime}_{\ell}:D_{\ell-1}(G^{*})\otimes\bigwedge^{n+\ell-1}F\to
D_{\ell-2}(G^{*})\otimes\bigwedge^{n+\ell-2}F$
is the sparse Eagon-Northcott differential
$d_{\ell}(g^{*(\alpha)}\otimes
f_{I})=\sum_{\\{i\mid\alpha_{i}>0\\}}\sum_{(i,I_{j})\in\mathcal{I}_{<}(\alpha,I)}(-1)^{j+1}x_{iI_{j}}g^{*(\alpha-\epsilon_{i})}\otimes
f_{I\backslash I_{j}}.$
###### Definition 4.4.
Adopt notation and hypotheses as in Setup 3.1. Let $\Delta$ denote a
simplicial complex on the vertex set $[m]$. The complex of Definition 4.3 will
be called the _generalized sparse Eagon-Northcott_ complex.
Notice that by the definition of a simplicial complex, the above complex is
indeed well-defined. The following definition introduces a slight
generalization of the notion of a _minimal nonface_.
###### Definition 4.5.
Let $\Delta$ be a simplicial complex. An _$i$-nonface_ of $\Delta$ is a set
$\sigma=\\{\sigma_{1}<\dots<\sigma_{\ell}\\}$ of vertices with
$\sigma\notin\Delta$ such that, for some $j\geq 1$,
$\sigma\backslash\sigma_{j+k}\in\Delta$ for all $k=0,\dotsc,i$.
###### Example 4.6.
Consider the following graph $G$ on vertices $\\{1,2,3,4\\}$:
(the graph with edges $\\{1,2\\}$, $\\{1,3\\}$, $\\{1,4\\}$, $\\{2,4\\}$, and $\\{3,4\\}$).
Observe that the associated clique complex has facets $\\{1,2,4\\}$ and
$\\{1,3,4\\}$, and no minimal nonfaces of cardinality greater than $2$.
However, $\\{1,2,3,4\\}$ is a $1$-nonface of the clique complex, since
$\\{1,2,4\\}$ and $\\{1,3,4\\}$ are both facets.
If we instead consider the graph
(the graph with edges $\\{1,2\\}$, $\\{1,3\\}$, $\\{2,3\\}$, $\\{2,4\\}$, and $\\{3,4\\}$),
then the clique complex has facets $\\{1,2,3\\}$ and $\\{2,3,4\\}$. The set
$\\{1,2,3,4\\}$ is not a $1$-nonface. Likewise, there are no $1$-nonfaces of
cardinality $3$. In the graph
(the graph with edges $\\{1,2\\}$, $\\{1,4\\}$, $\\{2,3\\}$, $\\{2,4\\}$, and $\\{3,4\\}$),
the associated clique complex has facets $\\{1,2,4\\}$ and $\\{2,3,4\\}$, and
has no $1$-nonfaces of cardinality $4$. However, $\\{1,3,4\\}$ is a
$1$-nonface of cardinality $3$ since $\\{1,4\\}$ and $\\{3,4\\}$ are edges
of $G$.
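The condition of Definition 4.5 is finite and purely combinatorial, so the claims in Example 4.6 can be checked by machine. Below is an illustrative Python sketch (the function names are ours, not from the text) that tests the $i$-nonface condition against the clique complex generated by a list of facets:

```python
from itertools import combinations

def faces(facets):
    """All faces of the simplicial complex generated by the given facets,
    encoded as sorted tuples."""
    out = set()
    for f in facets:
        for r in range(len(f) + 1):
            out.update(combinations(sorted(f), r))
    return out

def is_i_nonface(sigma, facets, i):
    """Definition 4.5: sigma is not a face, but for some starting position j,
    deleting sigma_{j+k} (one element at a time) yields a face for every
    k = 0, ..., i."""
    delta = faces(facets)
    s = sorted(sigma)
    if tuple(s) in delta:
        return False  # a face is never a nonface
    for j in range(len(s) - i):  # valid 0-based starting positions
        if all(tuple(s[:j + k] + s[j + k + 1:]) in delta for k in range(i + 1)):
            return True
    return False

# First clique complex of Example 4.6: facets {1,2,4} and {1,3,4}.
print(is_i_nonface({1, 2, 3, 4}, [{1, 2, 4}, {1, 3, 4}], 1))  # True
# Second clique complex: facets {1,2,3} and {2,3,4}: no 1-nonface of size 4.
print(is_i_nonface({1, 2, 3, 4}, [{1, 2, 3}, {2, 3, 4}], 1))  # False
# Third clique complex: facets {1,2,4} and {2,3,4}: {1,3,4} is a 1-nonface.
print(is_i_nonface({1, 3, 4}, [{1, 2, 4}, {2, 3, 4}], 1))     # True
```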
###### Remark 4.7.
Notice that a minimal nonface $\sigma$ is a $\dim\sigma$-nonface in the above
definition. Moreover, any $i$-nonface is a $k$-nonface for all $k\leq i$.
In the proofs of the results in the remainder of this section, notice that we
have chosen to augment our complexes with the ring $R$. This means that we are
resolving the quotient ring $R/I$ as opposed to the module $I$; this has the
effect of shifting the indexing in the statements of Theorem 4.1 and
Proposition 4.2.
###### Lemma 4.8.
Adopt notation and hypotheses as in Setup 3.1. If the simplicial complex
$\Delta$ has no $1$-nonfaces of cardinality $\geq n+1$, then the complex
$\mathcal{C}^{<}_{\bullet}(\Delta,M)$ is the linear strand of a finitely
generated graded $R$-module with initial degree $n$.
###### Proof.
Employ Theorem 4.1. To avoid trivialities, assume $n\leq\dim(\Delta)+1$.
Observe first that $H_{i}(\mathcal{C}_{\bullet}^{<}(\Delta,M))_{i+n-1}=0$ for
all $i\geq 1$ trivially.
To finish the proof, it will be shown that
$H_{i}(\mathcal{C}_{\bullet}^{<}(\Delta,M))_{i+n}\neq 0\implies\textrm{there exists a }1\textrm{-nonface of cardinality }n+i,\qquad\textrm{for all}\ i\geq 1.$
For convenience, use the notation
$\mathcal{C}_{\bullet}^{<}(M)=:E_{\bullet}^{\prime}$, where
$\mathcal{C}_{\bullet}^{<}(M)$ is as in Definition 4.3.
Assume $H_{i}(\mathcal{C}_{\bullet}^{<}(\Delta,M))_{i+n}\neq 0$. Let
$z\in\mathcal{C}_{i}^{<}(\Delta,M)$ be a cycle that is not a boundary; without
loss of generality, assume $z$ is multihomogeneous. The complex
$\mathcal{C}_{\bullet}^{<}(M)$ is exact, whence $z=d(y)$ for some
$y\in\mathcal{C}_{i+1}^{<}(M)$. By multihomogeneity, $y=\lambda
g^{*(\alpha)}\otimes f_{\sigma}$ for some $\lambda\in k^{\times}$ with
$|\sigma|=n+i$, $|\alpha|=i$. The assumption that $z$ is not a boundary
implies that $\sigma\notin\Delta$, since otherwise
$y\in\mathcal{C}_{i+1}^{<}(\Delta,M)$. By definition of the differential of
$\mathcal{C}_{\bullet}^{<}(M)$,
$z=\lambda\cdot\sum_{\\{\ell\mid\alpha_{\ell}>0\\}}\sum_{(\ell,\sigma_{j})\in\mathcal{I}_{<}(\alpha,\sigma)}(-1)^{j+1}x_{\ell\sigma_{j}}g^{*(\alpha-\epsilon_{\ell})}\otimes
f_{\sigma\backslash\sigma_{j}}.$
Since $z\neq 0$, $(\ell,\sigma_{j})\in\mathcal{I}_{<}(\alpha,\sigma)$ for some
$\ell$, $j$. By definition of $\mathcal{I}_{<}(\alpha,\sigma)$, this means
$(\ell,\sigma_{k})\in\mathcal{I}_{<}(\alpha,\sigma)$ for all
$|\alpha_{\leq\ell-1}|\leq j\leq|\alpha_{\leq\ell}|$. This translates to the
fact that $\sigma$ is an $\alpha_{\ell}$-nonface of cardinality $n+i$. Since
$\alpha_{\ell}\geq 1$, the result follows. ∎
###### Remark 4.9.
The proof of Lemma 4.8 allows one to construct explicit examples of nonzero
homology on the complex $\mathcal{C}_{\bullet}^{<}(\Delta,M)$. Let
$\Delta^{\textrm{clique}}$ be the simplicial complex associated to the first
graph of Example 4.6. Then the element
$z=x_{22}f_{1,3,4}-x_{23}f_{1,2,4}$
is a cycle which is not a boundary in
$\mathcal{C}_{\bullet}^{<}(\Delta^{\textrm{clique}},M)$.
###### Lemma 4.10.
Adopt notation and hypotheses as in Setup 3.1. Then the following are
equivalent:
1. (1)
$H_{1}(\mathcal{C}_{\bullet}^{<}(\Delta,M))_{n+1}=0$,
2. (2)
$\Delta$ has no $1$-nonfaces of cardinality $n+1$.
###### Proof.
The implication $(2)\implies(1)$ is Lemma 4.8. Conversely, assume that
$\sigma\notin\Delta$ is a $1$-nonface of cardinality $n+1$. By definition,
there exists some $j$ such that $\sigma\backslash\sigma_{j}$ and
$\sigma\backslash\sigma_{j+1}\in\Delta$. This means that
$z=(-1)^{j+1}(x_{j\sigma_{j}}f_{\sigma\backslash\sigma_{j}}-x_{j\sigma_{j+1}}f_{\sigma\backslash\sigma_{j+1}})$
is a cycle in $\mathcal{C}_{1}^{<}(\Delta,M)$ that is not a boundary, since
$z=d_{2}(g_{j}^{*}\otimes f_{\sigma})$, and $g_{j}^{*}\otimes
f_{\sigma}\notin\mathcal{C}_{2}^{<}(\Delta,M)$ by construction. ∎
Recall that the standard Eagon-Northcott complex inherits a
$\mathbb{Z}^{n}\times\mathbb{Z}^{m}$-grading, as described in Notation 2.10.
Since the sparse Eagon-Northcott complexes of Definition 4.3 are obtained by
simply setting certain entries in the differentials equal to $0$, these maps
will remain multigraded in an identical manner. We tacitly use this
multigrading for the remainder of this section.
###### Theorem 4.11.
Adopt notation and hypotheses as in Setup 3.1. Assume that $\Delta$ is an
$(n-1)$-pure simplicial complex such that $\Delta^{\textrm{clique}}$ has no
$1$-nonfaces of cardinality $n+1$. Let $F_{\bullet}$ denote the minimal graded
free resolution of $\operatorname{in}_{<}(J_{\Delta})$; then
$F_{\bullet}^{\textrm{lin}}\cong\mathcal{C}_{\bullet}^{<}(\Delta^{\textrm{clique}},M).$
###### Proof.
Let $Z^{\textrm{lin}}:=(\ker d_{1})_{n+1}$, where $d_{1}$ is the first
differential of the complex
$\mathcal{C}_{\bullet}^{<}(\Delta^{\textrm{clique}},M)$. By construction,
$\mathcal{C}_{1}^{<}(\Delta^{\textrm{clique}},M)$ is generated in degree $n+1$
and hence induces a homogeneous map
$\partial:\mathcal{C}_{1}^{<}(\Delta^{\textrm{clique}},M)\to
Z^{\textrm{lin}}.$
Let $0\neq z\in Z^{\textrm{lin}}$ be an element of multidegree
$(\epsilon_{s}+1,\epsilon_{i_{1}}+\cdots+\epsilon_{i_{n+1}})$ (where $1$
denotes the appropriately sized vector of all $1$’s). Set
$\tau:=\\{i_{1}<\cdots<i_{n+1}\\}$; by multihomogeneity, there are constants
$\lambda_{k}\in k$ such that
$z=\sum_{k=1}^{n+1}\lambda_{k}x_{si_{k}}f_{\tau\backslash i_{k}}.$
Since $z$ is a cycle of $\mathcal{C}_{1}^{<}(M)$ (where
$\mathcal{C}_{\bullet}^{<}(M):=E_{\bullet}^{\prime}$ is as in Definition 4.3),
there exists $y\in\mathcal{C}_{2}^{<}(M)$ such that $d_{2}(y)=z$. By
multihomogeneity, $y=\lambda g_{s}^{*}\otimes f_{\tau}$ for some constant
$\lambda$, whence
$z=\lambda(-1)^{s+1}(x_{s\tau_{s}}f_{\tau\backslash\tau_{s}}-x_{s\tau_{s+1}}f_{\tau\backslash\tau_{s+1}})$.
This implies that $\tau\in\Delta^{\textrm{clique}}$, since otherwise
$\Delta^{\textrm{clique}}$ would have a $1$-nonface of cardinality $n+1$,
contradicting our assumptions. Thus $Z^{\textrm{lin}}$ is generated by the set
$\\{r_{s}(\sigma):=(-1)^{s+1}(x_{s\sigma_{s}}f_{\sigma\backslash\sigma_{s}}-x_{s\sigma_{s+1}}f_{\sigma\backslash\sigma_{s+1}})\mid
1\leq s\leq n,\ \sigma\in\Delta^{\textrm{clique}},\ |\sigma|=n+1\\}.$
Moreover, since
$\textrm{mdeg}(r_{s}(\sigma))\neq\textrm{mdeg}(r_{s^{\prime}}(\sigma^{\prime}))$
for $s\neq s^{\prime}$ or $\sigma\neq\sigma^{\prime}$, the above is a basis.
Finally, $d_{2}(g_{s}^{*}\otimes f_{\sigma})=r_{s}(\sigma)$, whence the
induced map $\partial$ is an isomorphism of vector spaces. ∎
###### Remark 4.12.
Let $\Delta$ be an $(n-1)$-pure closed simplicial complex. Then
$\Delta^{\textrm{clique}}$ has no minimal nonfaces of cardinality $\geq n+1$,
since any minimal nonface is in particular a $1$-nonface. This means that
$\Delta^{\textrm{clique}}$ satisfies the hypotheses of Theorem 3.1 of [8].
The following theorem says that the Betti numbers of a DFI and its initial
ideal coincide in the case that the associated clique complex has no
$1$-nonfaces. Recall that in general, the Betti numbers of the initial ideal
are merely an upper bound for those of the ideal itself, and it is quite rare
to have equality throughout.
###### Theorem 4.13.
Adopt notation and hypotheses as in Setup 3.1. Assume that $\Delta$ is an
$(n-1)$-pure simplicial complex with no $1$-nonfaces of cardinality $n+1$.
Then for all $i\geq 1$,
$\beta_{i,n+i}(J_{\Delta})=\beta_{i,n+i}(\operatorname{in}_{<}(J_{\Delta})).$
###### Proof.
Notice that the linear strand of $J_{\Delta}$ is
$\mathcal{C}_{\bullet}(\Delta^{\textrm{clique}},M)$, where
$\mathcal{C}_{\bullet}$ is the generalized Eagon-Northcott complex of [8],
while by Theorem 4.11 the linear strand of $\operatorname{in}_{<}(J_{\Delta})$
is $\mathcal{C}_{\bullet}^{<}(\Delta^{\textrm{clique}},M)$. The complexes
$\mathcal{C}_{\bullet}$ and $\mathcal{C}_{\bullet}^{<}$ have the same
underlying free modules, so the result follows. ∎
###### Proposition 4.14.
Let $\Delta$ be a pure $(n-1)$-dimensional simplicial complex on $m$ vertices,
and let $J_{\Delta}$ be its associated $n$-determinantal facet ideal. If
$J_{\Delta}$ is lcm-closed, then there are no $1$-nonfaces in
$\Delta^{\textrm{clique}}$.
###### Proof.
It suffices to show that there are no $1$-nonfaces of cardinality $n+1$ in
$\Delta^{\textrm{clique}}$. Suppose, seeking contradiction, that
$\mathbf{f}=\\{f_{1}<\dots<f_{n+1}\\}$ is a $1$-nonface of cardinality $n+1$
in $\Delta^{\textrm{clique}}$. By definition, there exists some $f_{i}$ such
that $\mathbf{a}=\\{f_{1},\dots,\widehat{f_{i}},f_{i+1},\dots,f_{n+1}\\}$ and
$\mathbf{b}=\\{f_{1},\dots,f_{i},\widehat{f_{i+1}},\dots,f_{n+1}\\}$ are
facets of $\Delta$. Let $\Delta_{\mathbf{a}}$ and $\Delta_{\mathbf{b}}$ be
maximal cliques of $\Delta$ containing $\mathbf{a}$ and $\mathbf{b}$,
respectively, with nontrivial intersection. By assumption,
$\Delta_{\mathbf{a}}\neq\Delta_{\mathbf{b}}$. Because $J_{\Delta}$ is lcm-
closed, there exists some facet
$\mathbf{c}\in\Delta_{\mathbf{a}}\cap\Delta_{\mathbf{b}}$ such that
$\operatorname{in}[\mathbf{c}]$ divides
$\operatorname{lcm}(\operatorname{in}[\mathbf{a}],\operatorname{in}[\mathbf{b}])=x_{1f_{1}}x_{2f_{2}}\cdots x_{if_{i}}x_{if_{i+1}}x_{i+1,f_{i+2}}\cdots x_{n,f_{n+1}}.$
The only possible facets $\mathbf{c}$ of $\Delta$ satisfying this property are
$\mathbf{a}$ and $\mathbf{b}$ themselves, so they must be in the same maximal
clique of $\Delta$ and $\mathbf{f}$ is indeed a face in
$\Delta^{\textrm{clique}}$, giving the desired contradiction. ∎
###### Corollary 4.15.
Adopt notation and hypotheses as in Setup 3.1. Assume that $J_{\Delta}$ is an
lcm-closed DFI. Then for all $i$,
$\beta_{i,n+i}(J_{\Delta})=\beta_{i,n+i}(\operatorname{in}_{<}(J_{\Delta})).$
Using Corollary 4.15 as initial evidence, we pose the following:
###### Conjecture 4.16.
Adopt notation and hypotheses as in Setup 3.1. Assume that $J_{\Delta}$ is an
lcm-closed DFI. Then
$\beta_{ij}(R/J_{\Delta})=\beta_{ij}(R/\operatorname{in}_{<}J_{\Delta})\quad\textrm{for
all}\ i,j.$
###### Remark 4.17.
It is important to notice that the condition on $1$-nonfaces is not sufficient
for the minimal generators to form a Gröbner basis, and is hence more general
than the property of being lcm-closed. For example, let $\Delta$ be the
simplicial complex with facets $\\{1,2,3\\}$, $\\{1,4,5\\}$, and $\\{1,6,7\\}$. One can
verify directly that there are no $1$-nonfaces of cardinality $4$, but
calculations in Macaulay2 show that the minimal generating set of the
associated determinantal facet ideal does _not_ form a Gröbner basis.
To conclude this section, we gather some necessary facts to prove (in a
special case) a conjecture of Ene, Herzog, and Hibi. The following result, due
to Conca and Varbaro, shows that ideals with squarefree initial ideals (with
respect to some term order) exhibit homological behavior similar to that of
the associated generic initial ideal with respect to revlex.
###### Theorem 4.18 ([4, Follows from Theorem 1.3]).
Let $I$ be a homogeneous ideal in a standard graded polynomial ring $R$ with
term order $<$. If $\operatorname{in}_{<}(I)$ is squarefree, then the extremal
Betti numbers of $R/I$ and $R/\operatorname{in}_{<}(I)$ coincide. In
particular,
$\operatorname{reg}(R/I)=\operatorname{reg}(R/\operatorname{in}_{<}(I))\quad\textrm{and}\quad\operatorname{pd}_{R}(R/I)=\operatorname{pd}_{R}(R/\operatorname{in}_{<}(I)).$
###### Theorem 4.19 ([15, Theorem 1.1]).
Let $I$ be a graded ideal and let $<$ be any term order. Then the graded Betti
numbers $\beta_{i,j}(R/I)$ can be obtained from the graded Betti numbers
$\beta_{i,j}(R/\operatorname{in}_{<}(I))$ by a sequence of consecutive
cancellations.
###### Theorem 4.20 ([10, Corollary of Theorem 2.3]).
Let $G$ be a closed graph with at most $2$ maximal cliques. Then
$\operatorname{reg}(R/J_{G})\leq 2.$
Finally, we arrive at the last result of this section.
###### Corollary 4.21.
Let $G$ be a closed graph with at most $2$ maximal cliques. Let $J_{G}$ denote
the associated binomial edge ideal $J_{G}$ and let $<$ denote any diagonal
term order. Then for all $i,j$,
$\beta_{i,j}(R/J_{G})=\beta_{i,j}(R/\operatorname{in}_{<}(J_{G})).$
###### Proof.
Since $G$ is a closed graph, $J_{G}$ is lcm-closed and hence the associated
clique complex of $G$ has no $1$-nonfaces (by Proposition 4.14). It is well
known that every binomial edge ideal has a squarefree Gröbner basis with
respect to $<$ (see [7]); in particular, Theorem 4.18 combined with Theorem
4.20 shows that $\operatorname{reg}(R/\operatorname{in}_{<}J_{G})\leq 2$.
Theorem 4.19 asserts that the Betti numbers of $R/J_{G}$ are obtained from
those of $R/\operatorname{in}_{<}J_{G}$ by a sequence of consecutive
cancellations. Since the regularity bound confines all nonzero Betti numbers
to rows $j-i\in\\{0,1,2\\}$, any consecutive cancellation would alter a Betti
number in the linear strand; Theorem 4.13 implies the linear strands already
coincide, so no cancellations are possible. ∎
## 5\. Linear Strands Supported on Cellular Complexes
In this section, we introduce the notion of a linear strand supported on a
polyhedral cell complex (Definition 5.6), generalizing the well-studied
phenomenon of cellular resolutions. We show that the first linear strand of
the initial ideal of any determinantal facet ideal with respect to a diagonal
term order is supported on a polyhedral cell complex. In particular, this cell
complex is an induced subcomplex of the _complex of boxes_ introduced by Nagel
and Reiner (see Construction 5.8), which supports a minimal linear free
resolution of squarefree strongly stable and strongly stable ideals [13].
We begin by recalling some basic notions from the theory of cellular
resolutions. For a more detailed exposition, see [11, Chapter 4].
###### Definition 5.1.
A _polyhedral cell complex_ $\mathcal{P}$ is a finite collection of convex
polytopes (called _cells_ or _faces_ of $\mathcal{P}$) in some Euclidean
space, satisfying the following two properties:
* •
if $H$ is a polytope in $\mathcal{P}$, then every face of $H$ also lies in
$\mathcal{P}$, and
* •
if $H_{i},H_{j}$ are both in $\mathcal{P}$, then $H_{i}\cap H_{j}$ is a face
of both $H_{i}$ and $H_{j}$.
Denote by $V(\mathcal{P})$ the set of vertices (or $0$-dimensional cells) of
$\mathcal{P}$. If $\mathcal{X}\subseteq V(\mathcal{P})$, the _induced
subcomplex_ of $\mathcal{P}$ on $\mathcal{X}$ is the subcomplex
$\\{F\in\mathcal{P}\mid V(F)\subseteq\mathcal{X}\\}$. The $f$-vector of a
$d$-dimensional polyhedral cell complex $\mathcal{P}$ is the vector
$(f_{0},f_{1},\dots,f_{d})$, where $f_{i}$ is the number of $i$-dimensional
cells of $\mathcal{P}$.
###### Construction 5.2.
Set $S=k[x_{1},\dots,x_{n}]$ to be a polynomial ring over a field $k$. Let
$\mathcal{P}$ be an oriented polyhedral complex and let
$(\alpha_{H})_{H\in\mathcal{P}}$, $\alpha_{H}\in\mathbb{Z}^{n}$, be a labeling
of the cells of $\mathcal{P}$ such that
$\mathbf{x}^{\alpha_{H}}=\operatorname{lcm}\\{\mathbf{x}^{\alpha_{G}}\mid G\subset H\\}.$
The labeled complex $(\mathcal{P},\alpha)$ gives rise to an algebraic complex
of free $\mathbb{Z}^{n}$-graded $S$-modules in the following way. Let
$(C_{\bullet},\partial_{\bullet})$ be the cellular chain complex for
$\mathcal{P}$. For two cells $G,H\in\mathcal{P}$ with $\dim H=\dim G+1$ denote
by $\epsilon(H,G)\in\\{0,\pm 1\\}$ the coefficient of $G$ in the cellular
boundary of $H$. Define the free modules
$F_{i}\coloneqq\bigoplus_{\begin{subarray}{c}H\in\mathcal{P}\\ \dim H=i-1\end{subarray}}S(-\alpha_{H}).$
The differentials $d_{i}:F_{i}\rightarrow F_{i-1}$ are given by
$d(e_{H})\coloneqq\sum_{\dim G=\dim
H-1}\epsilon(H,G)\mathbf{x}^{\alpha_{H}-\alpha_{G}}e_{G}.$
One can verify this defines an algebraic complex $\mathcal{F}_{\mathcal{P}}$.
For $\beta\in\mathbb{Z}^{n}$, denote by $\mathcal{P}_{\leq\beta}$ the
subcomplex of $\mathcal{P}$ given by all cells $H\in\mathcal{P}$ with
$\alpha_{H}\leq\beta$ coordinatewise. Let
$I=\langle\mathbf{x}^{\alpha_{v}}\mid v\in\mathcal{P}\text{ a vertex}\rangle$.
###### Lemma 5.3 ([11, Proposition 4.5]).
Adopt notation and hypotheses of Construction 5.2. Let
$\mathcal{F}_{\mathcal{P}}$ be the algebraic complex obtained from the labeled
polyhedral complex $(\mathcal{P},\alpha)$. If for every
$\beta\in\mathbb{Z}^{n}$ the subcomplex $\mathcal{P}_{\leq\beta}$ is acyclic
over $k$, then $\mathcal{F}_{\mathcal{P}}$ resolves the quotient $S/I$.
Furthermore, the resolution is minimal if $\alpha_{H}\neq\alpha_{G}$ for any
two faces $G\subset H$ with $\dim H=\dim G+1$.
###### Definition 5.4.
Adopt notation and hypotheses of Construction 5.2. The complex
$\mathcal{F}_{\mathcal{P}}$ is a _cellular resolution_ if it meets the
criteria of Lemma 5.3, and the polyhedral complex $\mathcal{P}$ _supports_ the
resolution.
We extend the notion of cellular resolution to study multigraded linear
strands supported on a polyhedral cell complex. The following lemma follows
naturally from Theorem 4.1.
###### Lemma 5.5.
Adopt notation and hypotheses of Construction 5.2 and assume the complex
$\mathcal{F}_{\mathcal{P}}$ is $d$-linear, i.e., all the generators of $I$
have degree $d$ and all higher syzygy maps are given by linear forms. Then
$\mathcal{F}_{\mathcal{P}}$ is the first linear strand of $S/I$ if, for any
$\alpha\in\mathbb{Z}^{n}$ with $\lvert\alpha\rvert=d+k$ and $k>0$,
$\tilde{H}_{i}(\mathcal{P}_{\leq\alpha})=0$ for $i=k$ and $k-1$.
###### Definition 5.6.
Adopt notation and hypotheses of Construction 5.2. The complex
$\mathcal{F}_{\mathcal{P}}$ is a _cellular linear strand_ if it satisfies the
criteria of Lemma 5.5, and the polyhedral complex $\mathcal{P}$ _supports the
linear strand_ of $S/I$.
###### Setup 5.7.
Let $R=k[x_{ij}\mid 1\leq i\leq n,1\leq j\leq m]$ and $M=(x_{ij})_{1\leq i\leq
n,1\leq j\leq m}$ denote a generic $n\times m$ matrix, where $n\leq m$. Denote
by $I_{n}(M)$ the ideal of maximal minors of $M$. Let $<$ denote any diagonal
term order on $R$, and for any ideal $I\subset R$, denote by
$\operatorname{in}(I)=\operatorname{in}_{<}(I)$ the initial ideal of $I$ with
respect to $<$.
Partition the variables of $R$ into $n$ subsets
$\check{X}_{i}=\\{x_{i1},\dots,x_{im}\\}$. Set
$K=\\{\mathbf{a}\mid\mathbf{a}\text{ is an $n$-subset of }[m]\\}$, so the
elements of $K$ are in one-to-one correspondence with the generators of
$\operatorname{in}_{<}(I_{n}(M))$ via
$\mathbf{a}=\\{a_{1}<\dots<a_{n}\\}\longleftrightarrow\mathbf{x}_{\mathbf{a}}=x_{1a_{1}}\cdots
x_{na_{n}}$.
The following construction by Nagel and Reiner can be defined more generally
for squarefree strongly stable and strongly stable ideals, but for our
purposes we only consider the case when the ideal in question is
$\operatorname{in}(I_{n}(M))$.
###### Construction 5.8.
(see [13]) Adopt notation and hypotheses of Setup 5.7. Call a subset of $K$
which is a Cartesian product $X_{1}\times\dots\times X_{n}$ for subsets
$X_{j}\subseteq\check{X}_{j}$ a _box_ inside $K$, and define the _complex of
boxes_ of $K$ to be the polyhedral subcomplex of the product of simplices
$2^{\check{X}_{1}}\times\dots\times 2^{\check{X}_{n}}$ having faces indexed by
the boxes inside $K$.
###### Theorem 5.9 ([13, Theorem 3.12]).
Adopt notation and hypotheses of Construction 5.8. Labelling a vertex
$\mathbf{a}$ in the complex of boxes by the monomial $\mathbf{x}_{\mathbf{a}}$
gives a minimal linear cellular resolution of $R/\operatorname{in}(I_{n}(M))$.
Hence the multigraded Betti number
$\beta_{i,j}(R/\operatorname{in}(I_{n}(M)))=1$, where $i=\sum_{k}\lvert
X_{k}\rvert-n$ and $j$ is the multidegree $X_{1}\sqcup\dots\sqcup X_{n}$, for
every box $X_{1}\times\dots\times X_{n}$ inside $K$, and all other multigraded
Betti numbers vanish.
###### Notation 5.10.
Let $\Delta$ be a pure $(n-1)$-dimensional simplicial complex on $m$ vertices,
and let $\mathcal{P}$ denote the complex of boxes supporting a minimal linear
cellular resolution of $R/\operatorname{in}(I_{n}(M))$. Denote by
$\mathcal{P}(\Delta)$ the induced polyhedral subcomplex of $\mathcal{P}$ on
the vertex set labeled by $\\{\mathbf{x}_{\mathbf{a}}\mid\mathbf{a}\text{ a
facet of }\Delta\\}.$
###### Theorem 5.11.
Adopt notation and hypotheses of Setup 5.7. Let $\Delta$ be a pure
$(n-1)$-dimensional simplicial complex on $m$ vertices. Then the linear strand
of $R/\operatorname{in}(J_{\Delta})$ is supported on $\mathcal{P}(\Delta)$.
###### Proof.
First, observe that although $\operatorname{in}(J_{\Delta})$ may have
generators in higher degree, its linear strand is completely determined by
syzygies among the generators of degree $n$.
Let $\ell=f_{n-1}(\Delta)$, the number of facets of $\Delta$, and proceed by
downward induction on $\ell$. The base case $\ell=\binom{m}{n}$ corresponds to
the case where $J_{\Delta}=I_{n}(M)$ and is clear. Fix $\ell\geq 0$ and assume
that for any $\Delta$ with $\ell$ generators, $\mathcal{P}(\Delta)$ supports
the first linear strand of $\operatorname{in}(J_{\Delta})$. Let
$\Delta^{\prime}\subset\Delta$ be the subcomplex with a single facet
$\mathbf{a}$ removed. Then
$\mathcal{P}(\Delta^{\prime})\subset\mathcal{P}(\Delta)$ is the induced
subcomplex $\mathcal{P}(\Delta)\setminus v_{\mathbf{a}}$, where
$v_{\mathbf{a}}$ is the vertex labeled by the generator
$\mathbf{x}_{\mathbf{a}}$ of $\operatorname{in}(J_{\Delta})$.
By Lemma 5.5, it suffices to check that for any multidegree $\alpha$ with
$\lvert\alpha\rvert=n+k$,
$\tilde{H}_{k}(\mathcal{P}(\Delta^{\prime})_{\leq\alpha})=\tilde{H}_{k-1}(\mathcal{P}(\Delta^{\prime})_{\leq\alpha})=0$.
Observe that any face of dimension $k$ in
$\mathcal{P}(\Delta^{\prime})_{\leq\alpha}$ has multidegree $\beta$ such that
$\lvert\beta\rvert=\lvert\alpha\rvert=n+k$. Therefore, if
$\dim\mathcal{P}(\Delta^{\prime})_{\leq\alpha}=k$, then by Theorem 5.9 it is
the unique box in $\mathcal{P}$ with multidegree $\alpha$, which is
contractible as a polytope. If
$\dim\mathcal{P}(\Delta^{\prime})_{\leq\alpha}<k$, then
$\tilde{H}_{k}(\mathcal{P}(\Delta^{\prime})_{\leq\alpha})=0$ trivially.
By the inductive hypothesis,
$\tilde{H}_{k-1}(\mathcal{P}(\Delta^{\prime})_{\leq\alpha})=0$ as long as
$\mathbf{x}_{\mathbf{a}}$ does not divide $\mathbf{x}^{\alpha}$, so suppose it
does. Let $C$ be a cycle of dimension $k-1$ in
$\mathcal{P}(\Delta^{\prime})_{\leq\alpha}$. Since
$\tilde{H}_{k-1}(\mathcal{P}(\Delta)_{\leq\alpha})=0$ by the inductive
hypothesis, there is some boundary $B$ of dimension $k$ in
$\mathcal{P}(\Delta)_{\leq\alpha}$ of degree $\alpha$ containing the vertex
$v_{\mathbf{a}}$. By Theorem 5.9, there is a unique box in $\mathcal{P}$ with
multidegree $\alpha$, so this box must be $B$. But then $\partial(B)$ is a
linear combination of its codimension-$1$ faces, including those containing
$v_{\mathbf{a}}$, so it cannot equal $C$, a contradiction. ∎
As a consequence of Theorem 5.11, we obtain the following means of computing
the Betti numbers in the first linear strand of
$\operatorname{in}(J_{\Delta})$.
###### Corollary 5.12.
Adopt notation and hypotheses of Setup 5.7. Let $\Delta$ be a pure
$(n-1)$-dimensional simplicial complex on $m$ vertices and let
$\mathcal{P}(\Delta)$ be as in Notation 5.10. Then
$\beta_{i,i+1}(R/\operatorname{in}(J_{\Delta}))=f_{i}(\mathcal{P}(\Delta))$.
Figure 1. Complex of boxes $\mathcal{P}$ for $\operatorname{in}(I_{2}(M))$
where $M$ is a $2\times 4$ matrix.

Figure 2. $\mathcal{P}(G)$ where $G$ has clique decomposition
$\\{1,2,4\\}\cup\\{1,3,4\\}$ as in Example 4.6.
###### Example 5.13.
Let $G$ be the graph in Example 4.6 with clique decomposition
$\\{1,2,4\\}\cup\\{1,3,4\\}$. The complex of boxes $\mathcal{P}$ for
$\operatorname{in}(I_{2}(M))$ where $M$ is a $2\times 4$ matrix is shown in
Figure 1. The induced subcomplex $\mathcal{P}(G)$ of $\mathcal{P}$ on the
vertices corresponding to edges in $G$ is depicted in Figure 2. The $f$-vector
of this subcomplex is $(5,6,2)$, which indeed corresponds to the Betti
numbers in the linear strand of $\operatorname{in}(J_{G})$. In particular,
$\mathcal{P}(G)$ has $1$-cells corresponding to the $1$-nonfaces
$\\{1,2,3\\}$, $\\{2,3,4\\}$ and $2$-cells corresponding to the $1$-nonface
$\\{1,2,3,4\\}$ in Example 4.6.
Note, however, that this cell complex cannot support the full free resolution
of the ideal generated by the labels on its vertices, since
$\mathcal{P}(\Delta)_{\leq\alpha}$ is not in general acyclic for every
multidegree $\alpha$. Consider, for example, the multidegree $\alpha$ with
$\mathbf{x}^{\alpha}=x_{11}x_{12}x_{23}x_{24}$. Then $\mathcal{P}(G)_{\leq\alpha}$ consists
of the disjoint vertices labeled by $x_{11}x_{23}$ and $x_{12}x_{24}$, so
$\tilde{H}_{0}(\mathcal{P}(G)_{\leq\alpha})\neq 0$ and this complex does not
satisfy the hypotheses of Lemma 5.3.
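For $n=2$ the complex of boxes is small enough to enumerate directly: a box $X_{1}\times X_{2}$ lies in $\mathcal{P}(G)$ exactly when every pair $(a,b)$ with $a\in X_{1}$, $b\in X_{2}$ satisfies $a<b$ and $\\{a,b\\}$ is an edge of $G$, and such a box has dimension $|X_{1}|+|X_{2}|-2$. The following Python sketch (function names are ours) recomputes the $f$-vector $(5,6,2)$ of Example 5.13:

```python
from itertools import combinations

def f_vector_of_box_complex(edges, m):
    """f-vector of the induced complex of boxes P(G) for n = 2: a box
    X1 x X2 (nonempty subsets of [m]) lies in P(G) iff every pair (a, b)
    with a in X1, b in X2 satisfies a < b and {a, b} is an edge of G.
    The box X1 x X2 contributes a cell of dimension |X1| + |X2| - 2."""
    edge_set = {frozenset(e) for e in edges}
    subsets = [set(s) for r in range(1, m + 1)
               for s in combinations(range(1, m + 1), r)]
    counts = {}
    for X1 in subsets:
        for X2 in subsets:
            if all(a < b and frozenset({a, b}) in edge_set
                   for a in X1 for b in X2):
                d = len(X1) + len(X2) - 2
                counts[d] = counts.get(d, 0) + 1
    return tuple(counts.get(d, 0) for d in range(max(counts) + 1))

# G from Example 4.6 with maximal cliques {1,2,4} and {1,3,4}.
edges_G = [(1, 2), (1, 3), (1, 4), (2, 4), (3, 4)]
print(f_vector_of_box_complex(edges_G, 4))  # (5, 6, 2)

# The full complex of boxes for in(I_2(M)), M a 2 x 4 matrix (Figure 1).
all_edges = list(combinations(range(1, 5), 2))
print(f_vector_of_box_complex(all_edges, 4))  # (6, 8, 3)
```

The second output matches the (multigraded) Betti numbers of the Eagon-Northcott resolution of $\operatorname{in}(I_{2}(M))$ for a $2\times 4$ matrix.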
###### Remark 5.14.
Nagel and Reiner show that other strongly stable and squarefree strongly
stable ideals have a minimal, linear, cellular resolution given by the complex
of boxes $\mathcal{P}$. However, ideals generated by a subset of generators in
these other cases do not, in general, have a linear strand supported on the
induced subcomplex of $\mathcal{P}$ in the same manner. The key issue is that
in other cases, the multigraded Betti numbers may not be $0$ or $1$, so the
proof of Theorem 5.11 does not apply.
## References
* [1] Ayah Almousa, Gunnar Fløystad, and Henning Lohne, _Polarizations of powers of graded maximal ideals_ , arXiv preprint arXiv:1912.03898 (2019).
* [2] Ayah Almousa and Keller VandeBogert, _Determinantal facet ideals for smaller minors_ , arXiv preprint arXiv:2006.14434 (2020).
* [3] Adam Boocher, _Free resolutions and sparse determinantal ideals_ , Math. Res. Lett 19 (2012), no. 04, 805–821.
* [4] Aldo Conca and Matteo Varbaro, _Square-free Gröbner degenerations_ , Inventiones mathematicae (2020), 1–18.
* [5] Viviana Ene, Jürgen Herzog, and Takayuki Hibi, _Cohen-Macaulay binomial edge ideals_ , Nagoya Mathematical Journal 204 (2011), 57–68.
* [6] Viviana Ene, Jürgen Herzog, Takayuki Hibi, and Fatemeh Mohammadi, _Determinantal facet ideals_ , arXiv preprint arXiv:1108.3667 (2011).
* [7] Jürgen Herzog, Takayuki Hibi, Freyja Hreinsdóttir, Thomas Kahle, and Johannes Rauh, _Binomial edge ideals and conditional independence statements_ , Advances in Applied Mathematics 45 (2010), no. 3, 317–333.
* [8] Jürgen Herzog, Dariush Kiani, and Sara Saeedi Madani, _The linear strand of determinantal facet ideals_ , The Michigan Mathematical Journal 66 (2017), no. 1, 107–123.
* [9] Sara Saeedi Madani, _Binomial edge ideals: A survey_ , The 24th National School on Algebra, Springer, 2016, pp. 83–94.
* [10] M Rouzbahani Malayeri, S Saeedi Madani, and D Kiani, _A proof for a conjecture on the regularity of binomial edge ideals_ , arXiv preprint arXiv:2007.09959 (2020).
* [11] Ezra Miller and Bernd Sturmfels, _Combinatorial commutative algebra_ , vol. 227, Springer Science & Business Media, 2004.
* [12] Fatemeh Mohammadi and Johannes Rauh, _Prime splittings of determinantal ideals_ , Communications in Algebra 46 (2018), no. 5, 2278–2296.
* [13] Uwe Nagel and Victor Reiner, _Betti numbers of monomial ideals and shifted skew shapes_ , the electronic journal of combinatorics (2009), R3–R3.
* [14] Masahiro Ohtani, _Graphs and ideals generated by some 2-minors_ , Communications in Algebra 39 (2011), no. 3, 905–917.
* [15] Irena Peeva, _Consecutive cancellations in Betti numbers_ , Proceedings of the American Mathematical Society 132 (2004), no. 12, 3503–3507.
* [16] Bernd Sturmfels and Andrei Zelevinsky, _Maximal minors and their leading terms_ , Advances in Mathematics 98 (1993), no. 1, 65–112.
* [17] Keller VandeBogert, _Trimming complexes and applications to resolutions of determinantal facet ideals_ , Communications in Algebra (To Appear).
# On the connection between microscopic description and memory effects in open
quantum system dynamics
Andrea Smirne, Nina Megier, and Bassano Vacchini
Dipartimento di Fisica “Aldo Pontremoli”, Università degli Studi di Milano, via Celoria 16, 20133 Milan, Italy
Istituto Nazionale di Fisica Nucleare, Sezione di Milano, via Celoria 16, 20133 Milan, Italy
###### Abstract
The exchange of information between an open quantum system and its environment
allows us to discriminate among different kinds of dynamics, in particular
detecting memory effects to characterize non-Markovianity. Here, we
investigate the role played by the system-environment correlations and the
environmental evolution in the flow of information. First, we derive general
conditions ensuring that two generalized dephasing microscopic models of the
global system-environment evolution result exactly in the same open-system
dynamics, for any initial state of the system. Then, we use the trace distance
to quantify the distinct contributions to the information inside and outside
the open system in the two models. Our analysis clarifies how the interplay
between system-environment correlations and environmental-state
distinguishability can lead to the same information flow from and toward the
open system, despite significant qualitative and quantitative differences at
the level of the global evolution.
## 1 Introduction
Whenever we want to describe the time evolution of a quantum system taking
into account the effects of the surrounding environment, we can rely on the
tools provided by the theory of open quantum systems [1, 2]. The latter, in
fact, allows us to model physical phenomena, such as dissipation and
decoherence, that are inherently associated with the open-system nature of the
quantum system at hand. Generally speaking, quantities that would be conserved
under a closed-system unitary evolution will rather change in time as a
consequence of the action of the environment. In somewhat more abstract terms,
the interaction between a quantum system and its environment induces a mutual
exchange of information, which would be prevented if the system were closed.
Besides discriminating closed-system and open-system evolutions, the exchange
of information between an open quantum system and its environment also
provides us with a powerful way to distinguish different open quantum system
dynamics, associated with qualitatively and quantitatively distinct
behaviours. In certain dynamics, the information flows unidirectionally from
the open system to the environment, so that, once it has leaked out of the open
system, it is irremediably lost. In other dynamics, instead, there is a
bidirectional flow of information, implying that some information that previously
flowed from the reduced system to the environment can later follow the opposite
path; in other terms, the environment, as well as the system-environment
correlations, can act as a memory storage, giving back to the open system some
information that was previously contained in it. Relying on this intuition,
the backflow of information to a reduced system can be regarded as the
distinctive sign of the presence of memory in its evolution. This, in turn,
leads to the identification of open-system dynamics having a two-fold exchange
of information between the open system and the environment with quantum
non-Markovian dynamics, that is, dynamics where memory effects cannot be neglected
(analogously to the corresponding notion for classical stochastic processes
[3, 4, 5, 6, 7]). This is precisely the route that has been established in [8,
9], where the picture above has been defined in rigorous terms by means of the
trace distance, used as a quantifier of quantum-state distinguishability [10]:
The variations in time of the trace distance detect the direction of the
information flow between the open system and the environment and then the
Markovian or non-Markovian nature of the corresponding dynamics.
Despite the relevant theoretical [6, 7, 11, 12, 13] and experimental [14, 15,
16, 17, 18, 19] progress in understanding the differences between Markovian
and non-Markovian quantum dynamics, several key questions remain to be
addressed. In particular, it would be desirable to connect the possible
occurrence of memory effects in open system dynamics with general features of
the underlying microscopic description of the open system, its environment and
their interaction. Within the trace distance approach, it is possible to
ascribe any backflow of information towards the open system to either the
generation of system-environment correlations, or changes in the environmental
state due to the interaction with the open system, or both [20, 7]. Even more,
quantitative links between the trace distance variations and the influence of
both system-environmental correlations and environmental changes, as measured
via the trace distance, have been derived [21, 22, 23], and a similar result
has recently been obtained also for entropic distinguishability quantifiers
[24]. In addition to their quantitative content, these links suggest further
evidence that the possible quantum nature of the system-environment
correlations, in terms of the presence of entanglement [25] or non-zero
discord [26, 27, 28], does not play any special role in producing memory
effects, compared to mere classical correlations [29, 30]. Indeed, the key
point is that the state of an open quantum system, and then any information
content associated with it, is the result of an average over the environmental
degrees of freedom, mathematically described by the partial trace [1, 2]. As a
consequence, different system-environment correlations and environmental
states might well result in exactly the same reduced system evolution.
The role and relevance of correlations with ancillary degrees of freedom in the
characterization of non-Markovian dynamics have also been the object of many
recent investigations [31, 32, 33, 34]. In the present contribution, on the
contrary, we concentrate on the role of the correlations between system and
environment arising due to the microscopic interaction. The latter, together
with the impact of the interaction on the environment, should be the ultimate
cause of memory effects. More specifically, we investigate by means of the
trace distance to what extent different evolutions of the information lying
outside the open system – being in the system-environment correlations or in
the environmental state – can lead to the same information flow from and
toward the open quantum system. This analysis will help clarify the
non-trivial interplay between the features of the global evolution that can
provoke non-Markovian open system dynamics.
We first consider the generalized pure-dephasing dynamics [1] of a
$d$-dimensional open quantum system interacting with a generic environment
and, relying on the exact analytical solution, we derive general conditions
ensuring the equivalence between two open system dynamics. These dynamics
result from two distinct microscopic models, for which the type of the
environment, the initial environmental state and the interaction between the
system and the environment may differ. After moving to the simplest scenario
involving a two-level system and two-level environments, we show that the
reduced system dynamics can coincide even though in one case the global state
is classically correlated, while in the other it is (almost) always entangled
(see Fig.1), and even maximally entangled at isolated instants of time. By
means of the bound derived in [20], we compare the strength of the
system-environment correlations and environmental changes in the two global
evolutions, showing that relevant qualitative and quantitative differences
among them can still result in the same exchange of information between the
open system and the environment, and thus in the same non-Markovian behavior.
The rest of the paper is organized as follows. In Sec.2, we recall the
features of the trace distance characterization of quantum non-Markovianity
that are relevant for our analysis. In Sec.3, we derive explicit conditions on
the environmental initial states and interaction operators guaranteeing that
different generalized pure dephasing microscopic models lead to the same open
system dynamics. Sec.4 contains the main part of our paper, where the
qualitative and quantitative differences between the system-environment
correlations and environmental states in the two models are studied in
relation with their influence on the occurrence of memory effects, as signaled
by an increase of the trace distance. Finally, the conclusions and possible
outlooks are given in Sec.5.
## 2 System-environment information exchange and quantum Markovianity
In order to fix the notation and introduce the notions that will be relevant
for the rest of the paper, we start by briefly recalling the mathematical
characterization of quantum Markovianity in terms of the trace distance, along
with its physical interpretation in connection with the exchange of
information between an open quantum system and its environment [8, 9, 7].
Given the Hilbert space $\mathcal{H}_{S}$ associated with an open quantum
system and the set of statistical operators $\mathcal{S}(\mathcal{H}_{S})$,
i.e., the self-adjoint, positive, trace-one operators acting on
$\mathcal{H}_{S}$, we denote as $\rho_{S}(t)\in\mathcal{S}(\mathcal{H}_{S})$
the state of the open system at time $t$. Under the assumptions that the open
system and the environment can be treated overall as a closed system and that
they are uncorrelated at the initial time $t_{0}=0$, i.e., the initial global
state is $\rho_{SE}(0)=\rho_{S}(0)\otimes\rho_{E}(0)$ with $\rho_{E}(0)$ a
fixed environmental state (within the set $\mathcal{S}(\mathcal{H}_{E})$ of
statistical operators on $\mathcal{H}_{E}$), the state $\rho_{S}(t)$ is given
by the completely positive trace preserving (CPTP) map $\Lambda(t)$ defined as
[1, 2]
$\rho_{S}(t)=\Lambda(t)[\rho_{S}(0)]:=\operatorname{tr}_{E}\left\{U_{SE}(t)\,(\rho_{S}(0)\otimes\rho_{E}(0))\,U^{\dagger}_{SE}(t)\right\}.$ (1)
Here and in the following, $\mbox{tr}_{E}$ ($\mbox{tr}_{S}$) denotes the
partial trace over the environmental (open system) degrees of freedom and
$U_{SE}(t)$ is the unitary operator describing the global closed-system
evolution from the initial time to the time $t$.
The family of CPTP maps at the different times,
$\left\{\Lambda(t)\right\}_{t\geqslant 0}$, fixes the open system dynamics
and encloses all the predictions related with measurements performed on the
open system, at any single time $t$ and for any initial condition
$\rho_{S}(0)$ (in general, instead, the family of CPTP maps is not enough to
fully characterize the statistics associated with multi-time measurements, for
which different mathematical objects are needed; suitable notions of quantum
Markovianity can be defined also for such objects and are indeed not
equivalent to the notions referred to the open-system dynamics [7, 11, 35, 36,
37, 38, 39]). As a consequence, the different features of open system dynamics
are preferably formulated in terms of properties of the corresponding families
of maps. Indeed, this is the case also for quantum Markovianity, which aims at
introducing the notion of memoryless evolutions in the quantum realm,
analogously to what happens for classical stochastic processes [3, 1]. Among
the different, and not necessarily equivalent, definitions of Markovian
quantum dynamics [40, 41, 42, 6, 43, 44, 45, 11, 31, 32, 46], the one based on
the trace distance [8, 9, 7] stems from a quantitative definition of memory
effects, linked to the information exchange between the system of interest and
its environment.
The trace distance between two quantum states $\rho^{1}$ and $\rho^{2}$, which
is defined as
$D(\rho^{1},\rho^{2})=\frac{1}{2}\left\|\rho^{1}-\rho^{2}\right\|_{1}=\frac{1}{2}\sum_{i}|x_{i}|$
(2)
with $\|\cdot\|_{1}$ the trace norm and hence $x_{i}$ the eigenvalues of
$\rho^{1}-\rho^{2}$, quantifies their distinguishability [10], that is, the
ability to discriminate between $\rho^{1}$ and $\rho^{2}$ if it is known that
one of the two states has been prepared with probability $1/2$; note that a
more general quantifier of distinguishability can be introduced, including a
possibly biased probability of preparation [47, 48, 7]. Now, if we consider
the evolution of the trace distance $D(\rho_{S}^{1}(t),\rho^{2}_{S}(t))$
between two open system states $\rho_{S}^{1}(t)$ and $\rho^{2}_{S}(t)$,
evolved from two different initial conditions $\rho_{S}^{1}(0)$ and
$\rho^{2}_{S}(0)$ via Eq.(1), the decrease in time of
$D(\rho_{S}^{1}(t),\rho^{2}_{S}(t))$ can be traced back to a loss of
information from the open system, due to the interaction with the environment.
On the same footing, an increase of the trace distance means that some
information is flowing back to the open system, leading to an increased
capability to discriminate between the two possible states by performing
measurements on the reduced system only. Such a backflow of information is
precisely what is identified as a memory effect in the definition of quantum
Markovianity introduced in [8, 9]. Following that definition, we say that
non-Markovian dynamics are those where there is at least one pair of initial
states and a time interval $[s,t]$, with $t\geq s$, such that
$\Delta_{S}(t,s):=D(\rho_{S}^{1}(t),\rho^{2}_{S}(t))-D(\rho_{S}^{1}(s),\rho^{2}_{S}(s))$
(3)
is larger than zero. Importantly, by integrating the time derivative of the
trace distance over the time intervals where $\Delta_{S}(t,s)>0$ and
optimizing the integral over the pairs of initial conditions, one can
introduce a measure of non-Markovianity that is univocally associated with the
family of CPTP dynamical maps. At the same time, the detection of an increase
of the trace distance for a single pair of initial states and interval of time
is enough to witness the non-Markovianity of the dynamics, which then does not
call for the full reconstruction of the dynamical map, nor for an explicit
microscopic model of the underlying system-environment interaction [8, 9].
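As a simple numerical illustration of this witness (our own sketch, not part of the paper; the decoherence function $f(t)=e^{-t}\cos 2t$ is a hypothetical toy choice), the Python snippet below evolves the pair $|\pm\rangle$ under a qubit pure-dephasing channel and checks whether the trace distance of Eq. (2) ever increases, i.e., whether $\Delta_{S}(t,s)>0$ for some interval:

```python
import numpy as np

def trace_distance(rho1, rho2):
    """Eq. (2): D = (1/2) * sum |x_i|, with x_i the eigenvalues of the
    (Hermitian) difference rho1 - rho2."""
    x = np.linalg.eigvalsh(rho1 - rho2)
    return 0.5 * np.sum(np.abs(x))

def dephase(rho, f):
    """Toy pure-dephasing channel: coherences multiplied by f(t)."""
    out = rho.astype(complex).copy()
    out[0, 1] *= f
    out[1, 0] *= np.conj(f)
    return out

plus = 0.5 * np.array([[1, 1], [1, 1]])     # |+><+|
minus = 0.5 * np.array([[1, -1], [-1, 1]])  # |-><-|

f = lambda t: np.exp(-t) * np.cos(2 * t)    # non-monotonic |f|: backflow
ts = np.linspace(0, 2, 200)
D = [trace_distance(dephase(plus, f(t)), dephase(minus, f(t))) for t in ts]

# Any increase of D over time witnesses non-Markovianity (Eq. (3)).
print(max(np.diff(D)) > 0)  # True
```

For this pair of initial states the trace distance equals $|f(t)|$, so the revivals of $|f|$ after its zero at $t=\pi/4$ show up directly as trace distance increases.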
Besides the rigorous definition of quantum (non)-Markovian dynamics rooted in
the information exchange between the open system and the environment, the
trace distance also provides us with a clear physical picture motivating the
possible occurrence of memory effects. The contractivity of the trace distance
under CPTP maps, along with the triangular inequality, allows us to upper
bound the trace distance variation in Eq.(3) via [20, 7]
$\Delta_{S}(t,s)\leqslant I_{SE}(s)$ (4)
with
$I_{SE}(s):=D(\rho^{1}_{E}(s),\rho^{2}_{E}(s))+D(\rho^{1}_{SE}(s),\rho^{1}_{S}(s)\otimes\rho^{1}_{E}(s))+D(\rho^{2}_{SE}(s),\rho^{2}_{S}(s)\otimes\rho^{2}_{E}(s));$ (5)
here $\rho_{S}(s)=\mbox{tr}_{E}\left\{\rho_{SE}(s)\right\}$
($\rho_{E}(s)=\mbox{tr}_{S}\left\{\rho_{SE}(s)\right\}$) denotes the reduced
system (environmental) state at time $s$ obtained from the global state
$\rho_{SE}(s)\in\mathcal{S}(\mathcal{H}_{SE})$. The terms at the right hand
side (r.h.s.) of the previous inequality describe, respectively, the total
correlations in the two global states $\rho^{1}_{SE}(s)$ and
$\rho^{2}_{SE}(s)$ and the difference between the two environmental states
$\rho^{1}_{E}(s)$ and $\rho^{2}_{E}(s)$ at time $s$; the labels $1$ and $2$
refer to the two different initial reduced system states $\rho^{1}_{S}(0)$ and
$\rho^{2}_{S}(0)$. Crucially, Eq.(4) relates the trace distance between open
system states with quantities that are referred to the system-environment
correlations and the environmental state, and that are thus associated with
some information lying outside the open system itself. On the one hand, this
provides us with an explanation of the physical origin of memory effects in
quantum dynamics, as an increase of trace distance at time $t$,
$\Delta_{S}(t,s)>0$, is necessarily linked to the presence, at the previous
time $s$, of system-environment correlations and/or to differences in the
environmental states due to the different initial conditions. On the other
hand, in a complementary way, we can use the bound in Eq.(4) as a starting
point to gain some quantitative information about the system-environment
correlations and the changes in the environment established by the
interaction, via measurements performed on the reduced system only [21, 22,
23].
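The three contributions on the r.h.s. of Eq. (5) can be evaluated directly for any finite-dimensional global state. The Python sketch below (a toy illustration of ours; the helper names `ptrace` and `I_SE` are not from the paper) computes the bound for a maximally entangled versus an uncorrelated two-qubit global state:

```python
import numpy as np

def trace_distance(a, b):
    # Eq. (2): half the sum of |eigenvalues| of the Hermitian difference
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

def ptrace(rho, keep, dims):
    """Partial trace of a bipartite density matrix; dims = (dS, dE),
    keep = 0 keeps the system, keep = 1 keeps the environment."""
    dS, dE = dims
    r = rho.reshape(dS, dE, dS, dE)
    return np.einsum('ijik->jk', r) if keep == 1 else np.einsum('ijkj->ik', r)

def I_SE(rho1_SE, rho2_SE, dims):
    """Right-hand side of Eq. (5): environmental distinguishability plus
    the total correlations in the two global states."""
    terms = [trace_distance(ptrace(rho1_SE, 1, dims), ptrace(rho2_SE, 1, dims))]
    for rho in (rho1_SE, rho2_SE):
        rS, rE = ptrace(rho, 0, dims), ptrace(rho, 1, dims)
        terms.append(trace_distance(rho, np.kron(rS, rE)))
    return sum(terms)

bell = np.zeros(4, complex); bell[0] = bell[3] = 1 / np.sqrt(2)
rho1 = np.outer(bell, bell.conj())   # maximally entangled global state
e00 = np.zeros(4, complex); e00[0] = 1
rho2 = np.outer(e00, e00.conj())     # uncorrelated global state
val = I_SE(rho1, rho2, (2, 2))
print(val)  # ≈ 1.25
```

Here the product state contributes no correlation term, the Bell state contributes $3/4$, and the environmental marginals ($\mathbbm{1}/2$ versus $|0\rangle\langle 0|$) contribute $1/2$.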
## 3 Locally indistinguishable microscopic models
The correlations between an open system and its environment and the dependence
of the environmental state on the reduced system initial condition necessarily
feed any backflow of information to the open system. However, one should keep
in mind that the reduced system state is related to the global state via the
partial trace in Eq.(1), which unavoidably washes out the details about the
global dynamics that do not have an impact on the open-system evolution. To
fully appreciate in what respect system-environment correlations and
environmental states affect the flow of information from and toward the
reduced system, it is thus important to understand to what extent different
global evolutions can lead to similar, or even to the same open-system
dynamics.
Figure 1: Sketch of two different global system-environment evolutions sharing
the same reduced system dynamics obtained by taking the partial trace as in
Eq.(1). Note that while the lower evolution involves system-environment
entangled states, the upper evolution takes place within the set of separable
states. In particular, we depict it as a straight line to suggest that these
states are actually zero-discord states, a subset of states that has measure
zero within the set of separable states [49] (compare with the example in
Sec.4).
More specifically, as illustrated in Fig.1, we are going to define two
different unitary evolutions associated with the global states, respectively,
$\rho_{SE}(t)\in\mathcal{S}(\mathcal{H}_{SE})$, and
$\overline{\rho}_{SE}(t)\in\mathcal{S}(\overline{\mathcal{H}}_{SE})$, which
share the same reduced system state at any time $t$,
$\rho_{S}(t)=\overline{\rho}_{S}(t)$. Hence, the reduced system dynamics, and
therefore the global exchange of information between the open system and the
environment, is exactly the same, despite the difference between the two global
evolutions. Even more, we will show in the next section that the two global
states can be characterized by radically different kinds of correlations and
environmental evolutions; in particular, as sketched in Fig.1, it is possible
that one global state is a separable state at any time, while the other global
state is entangled at (almost) any time.
### 3.1 Generalized dephasing models
We consider the generalized pure dephasing microscopic model [1], in which a
$d$-dimensional open system interacts with an environment, according to the
global Hamiltonian $H=H_{S}+H_{E}+H_{I}$, where $H_{S}$ and $H_{E}$ are the
free Hamiltonians of, respectively, the open system and the environment,
$H_{I}=\sum_{n=1}^{d}\lvert n\rangle\langle n\rvert\otimes B_{n}$ is the
interaction Hamiltonian, with $\left\\{\lvert n\rangle\right\\}_{n=1,\ldots
d}$ an orthonormal basis of $\mathcal{H}_{S}$, and $B_{n}=B^{{\dagger}}_{n}$
are arbitrary self-adjoint operators on $\mathcal{H}_{E}$; crucially,
$\left[H_{S},\lvert n\rangle\langle n\rvert\right]=0$ so that the free system
Hamiltonian commutes with the interaction term and the model can be solved
exactly. To do that, one can introduce the environmental interaction-picture
operators $B_{n}(t)=e^{iH_{E}t}B_{n}e^{-iH_{E}t},$ along with the
corresponding unitaries
$V_{n}(t)=T_{\leftarrow}\exp\left(-i\int_{0}^{t}dsB_{n}(s)\right),$ where
$T_{\leftarrow}$ is the chronological time-ordering operator. Given the
generic initial product state $\rho_{SE}(0)=\rho_{S}(0)\otimes\rho_{E}(0),$ we
express the initial system state with respect to the basis $\left\\{\lvert
n\rangle\right\\}_{n=1,\ldots d}$, $\rho_{S}(0)=\sum_{n,m=1}^{d}c_{nm}\lvert
n\rangle\langle m\rvert,$ while the initial environmental state with respect
to its spectral decomposition,
$\rho_{E}(0)=\sum_{\alpha}\lambda_{\alpha}\lvert\phi_{\alpha}\rangle\langle\phi_{\alpha}\rvert,$
where the index $\alpha$ runs from 1 to the (possibly infinite) rank of
$\rho_{E}(0)$. Then, exploiting the linearity of the global unitary evolution
and partial trace, the global state at time $t$ in the interaction picture
with respect to $H_{S}+H_{E}$ can be written as [1]
$\rho_{SE}(t)=\sum_{n,m=1}^{d}\sum_{\alpha}c_{nm}\lambda_{\alpha}V_{n}(t)\lvert
n\phi_{\alpha}\rangle\langle m\phi_{\alpha}\rvert V^{\dagger}_{m}(t)$ (6)
and, by taking the partial trace over the environment (see Eq.(1)), the open
system state at time $t$ is
$\rho_{S}(t)=\sum_{n,m=1}^{d}\sum_{\alpha}c_{nm}\lambda_{\alpha}\mathcal{F}_{\alpha,n,m}(t)\lvert n\rangle\langle m\rvert,$ (7)
$\mathcal{F}_{\alpha,n,m}(t):=\langle\phi_{\alpha}\rvert V^{\dagger}_{m}(t)V_{n}(t)\lvert\phi_{\alpha}\rangle.$ (8)
It is now clear how to define two global unitary evolutions along with two
initial environmental states such that the corresponding open system states
coincide at any time $t$. In fact, let
$\left\\{\lambda_{\alpha},\lvert\phi_{\alpha}\rangle,B_{n}\right\\}$ and
$\left\\{\overline{\lambda}_{\beta},\lvert\overline{\phi}_{\beta}\rangle,\overline{B}_{n}\right\\}$
be two sets with the eigenvalues and eigenvectors of the initial environmental
states $\rho_{E}(0)$ and $\overline{\rho}_{E}(0)$ (possibly on two different
Hilbert spaces $\mathcal{H}_{E}$ and $\overline{\mathcal{H}}_{E}$), and the
environmental interaction operators appearing in two generalized dephasing
unitaries $U_{SE}(t)$ and $\overline{U}_{SE}(t)$. A necessary and sufficient
condition to have $\rho_{S}(t)=\overline{\rho}_{S}(t)$ for any initial
condition $\rho_{S}(0)=\overline{\rho}_{S}(0)$ is thus (we assume that the
free Hamiltonian $H_{S}$ is the same in the two cases, so that the equality
among the open system dynamics is preserved by moving back to the Schrödinger
picture)
$\sum_{\alpha}\lambda_{\alpha}\mathcal{F}_{\alpha,n,m}(t)=\sum_{\beta}\overline{\lambda}_{\beta}\overline{\mathcal{F}}_{\beta,n,m}(t),$
(9)
for any $n>m$ and $t\geq 0$, where $\mathcal{F}_{\alpha,n,m}(t)$ and
$\overline{\mathcal{F}}_{\beta,n,m}(t)$ are both defined as in Eq.(8), but
with quantities referred, respectively, to the first and to the second
environment; note that Eq.(9) is satisfied automatically for $n=m$, due to the
unitarity of each $V_{n}(t)$ and to the identity
$\sum_{\alpha}\lambda_{\alpha}=\sum_{\beta}\overline{\lambda}_{\beta}=1$;
moreover, it holds for $n>m$ if and only if it holds for $n<m$, since
$\mathcal{F}^{*}_{\alpha,n,m}(t)=\mathcal{F}_{\alpha,m,n}(t)$ and
$\lambda^{*}_{\alpha}=\lambda_{\alpha}$.
We stress that we did not assume any specific form of the initial states of
the environments, so that the equivalence between the open system evolutions
guaranteed by Eq.(9) does not follow from the recent equivalence theorems
among different dynamics with initial Gaussian bosonic or fermionic
environmental states [50, 51, 52, 53, 54, 55].
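As a numerical consistency check of Eqs. (7)–(8) (our own sketch, with hypothetical toy parameters; we set $H_{S}=H_{E}=0$ so that the interaction picture is trivial and $V_{n}(t)=e^{-iB_{n}t}$), one can compare the closed-form reduced state with a direct unitary evolution followed by the partial trace of Eq. (1):

```python
import numpy as np

rng = np.random.default_rng(0)
dS, dE = 2, 3

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def U_of(H, t):
    """exp(-i H t) for Hermitian H, via its eigendecomposition."""
    w, Q = np.linalg.eigh(H)
    return (Q * np.exp(-1j * w * t)) @ Q.conj().T

# H_I = sum_n |n><n| (x) B_n with arbitrary self-adjoint B_n
B = [rand_herm(dE) for _ in range(dS)]
P = [np.outer(np.eye(dS)[n], np.eye(dS)[n]) for n in range(dS)]  # |n><n|
H_I = sum(np.kron(P[n], B[n]) for n in range(dS))

# toy initial product state rho_S(0) (x) rho_E(0)
psi = np.array([0.6, 0.8j])
rhoS0 = np.outer(psi, psi.conj())                 # c_nm in the |n> basis
lam = np.array([0.5, 0.3, 0.2])                   # eigenvalues of rho_E(0)
phi = np.linalg.qr(rng.normal(size=(dE, dE)))[0]  # eigenvectors as columns
rhoE0 = phi @ np.diag(lam) @ phi.T

t = 0.7
# Eq. (1): global unitary evolution, then partial trace over E
U = U_of(H_I, t)
rhoSE = U @ np.kron(rhoS0, rhoE0) @ U.conj().T
rhoS_direct = rhoSE.reshape(dS, dE, dS, dE).trace(axis1=1, axis2=3)

# closed form, Eqs. (7)-(8)
V = [U_of(B[n], t) for n in range(dS)]
rhoS_cf = np.zeros((dS, dS), complex)
for n in range(dS):
    for m in range(dS):
        F = sum(lam[a] * (phi[:, a].conj() @ V[m].conj().T @ V[n] @ phi[:, a])
                for a in range(dE))
        rhoS_cf[n, m] = rhoS0[n, m] * F
print(np.allclose(rhoS_direct, rhoS_cf))  # True
```

Since $H_{I}$ is block diagonal in the basis $\{\lvert n\rangle\}$, the global unitary factorizes as $\sum_{n}\lvert n\rangle\langle n\rvert\otimes e^{-iB_{n}t}$, and the two routes agree to machine precision.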
### 3.2 Two-level system and environment
The condition in Eq.(9) guarantees the equivalence between two open system
dynamics in two generalized pure-dephasing models. To further work out
analytically this equality, as well as the related quantities referred to the
global system-environment evolution, we now restrict the dimensionality of
both the open system and the environment. This will also allow us to better
grasp the physical meaning associated with Eq.(9), relating it to the
different action of the unitaries $V_{n}(t)$ in Eq.(6) on populations and
coherences in the eigenbasis fixed by the initial environmental state.
First, we assume that the open system is a two-level system,
$\mathcal{H}_{S}=\mathbbm{C}^{2}$, (with a slight change of notation we make
the corresponding index $n$ run over $\left\\{0,1\right\\}$) and we set
$B_{0}=-B_{1}=:-B$, so that the interaction Hamiltonian is simply the standard
pure-dephasing term, $H_{I}=\sigma_{z}\otimes B$, with $\sigma_{z}=\lvert
1\rangle\langle 1\rvert-\lvert 0\rangle\langle 0\rvert$ and $B=B^{\dagger}$ a
generic self-adjoint operator on $\mathcal{H}_{E}$; furthermore, we also
assume that $\left[H_{E},B\right]=0$, so that $V(t)=e^{-iBt}$. As done before,
we can now compare two different pure-dephasing global evolutions,
characterized by the environmental interaction operators $B$ and
$\overline{B}$ and initial environmental states
$\rho_{E}(0)=\sum_{\alpha}\lambda_{\alpha}\lvert\phi_{\alpha}\rangle\langle\phi_{\alpha}\rvert$
and
$\overline{\rho}_{E}(0)=\sum_{\beta}\overline{\lambda}_{\beta}\lvert\overline{\phi}_{\beta}\rangle\langle\overline{\phi}_{\beta}\rvert$;
the condition in Eq.(9) ensuring the coincidence between the two corresponding
open system dynamics reduces to
$\sum_{\alpha}\lambda_{\alpha}\langle\phi_{\alpha}\rvert e^{-iBt}\lvert\phi_{\alpha}\rangle=\sum_{\beta}\overline{\lambda}_{\beta}\langle\overline{\phi}_{\beta}\rvert e^{-i\overline{B}t}\lvert\overline{\phi}_{\beta}\rangle$ (10)
for any $t\geq 0$. The validity of Eq.(10) only depends on how each pure state
in the spectral decomposition of the initial environmental states,
$\left\{\lvert\phi_{\alpha}\rangle\right\}$ and
$\left\{\lvert\overline{\phi}_{\beta}\rangle\right\}$, is mapped into itself
by the unitary operators fixed by the environmental interaction operators,
$e^{-iBt}$ and $e^{-i\overline{B}t}$. On the other hand, the global state in
Eq.(6) does depend on how each pure state in the spectral decomposition of the
initial environmental states is mapped into the other pure states in the
decomposition; such a dependence is precisely what is washed out by the
partial trace. This is the key mechanism guaranteeing that we can have two
different global evolutions with the same open system pure-dephasing
dynamics.
Furthermore, Eq.(10) can also be expressed as
$\sum_{\alpha}\lambda_{\alpha}\langle\phi_{\alpha}\rvert
B^{k}\lvert\phi_{\alpha}\rangle=\sum_{\beta}\overline{\lambda}_{\beta}\langle\overline{\phi}_{\beta}\rvert\overline{B}^{k}\lvert\overline{\phi}_{\beta}\rangle\quad\forall
k=1,\ldots;$ (11)
i.e., the equivalence of the two reduced system dynamics is fixed by the
moments of all powers of the interaction operators $B$ and $\overline{B}$ on
the initial environmental states $\rho_{E}(0)$ and $\overline{\rho}_{E}(0)$.
This condition might be more convenient to check [56], especially if the
environment has finite dimension $d_{E}$, so that it is enough to verify its
validity for $k=1,\ldots d_{E}^{2}-1$, due to the Cayley-Hamilton theorem.
Moving further, we now also restrict to the case where both pure-dephasing
dynamics are referred to two-level-system environments,
$\mathcal{H}_{E}=\mathcal{H}_{\overline{E}}=\mathbbm{C}^{2}$, so that $B$ and
$\overline{B}$ can be seen as two spin-$1/2$ operators associated with two
different directions $\bm{\eta}$ and $\overline{\bm{\eta}}$, i.e.,
$B=g\bm{\sigma}\cdot\bm{\eta}\qquad\overline{B}=g\bm{\sigma}\cdot\overline{\bm{\eta}},$
(12)
where $\bm{\sigma}$ is the vector of Pauli matrices, $\bm{\eta}$ and
$\overline{\bm{\eta}}$ are two real unit vectors,
$|\bm{\eta}|=|\overline{\bm{\eta}}|=1$, and $g=g^{*}$ is the coupling constant
(that is the same for the two interaction Hamiltonians). Moreover, the two
initial environmental states $\rho_{E}(0)$ and $\overline{\rho}_{E}(0)$ are
fixed by two vectors $\bm{\alpha}$ and $\overline{\bm{\alpha}}$, with
$|\bm{\alpha}|,|\overline{\bm{\alpha}}|\leqslant 1$, according to the
Bloch-ball representation [57]
$\rho_{E}(0)=\frac{1}{2}\left(\mathbbm{1}+\bm{\alpha}\cdot\bm{\sigma}\right),\quad\overline{\rho}_{E}(0)=\frac{1}{2}\left(\mathbbm{1}+\overline{\bm{\alpha}}\cdot\bm{\sigma}\right).$
(13)
Thus, Eq.(11) implies that the two open system states are the same,
$\rho_{S}(t)=\overline{\rho}_{S}(t)$, at any time $t$ and for any initial
condition $\rho_{S}(0)=\overline{\rho}_{S}(0)$ if and only if
$(\bm{\alpha},\bm{\eta})=(\overline{\bm{\alpha}},\overline{\bm{\eta}}),$ (14)
as $B^{2}=\overline{B}^{2}=g^{2}\mathbbm{1}$. In particular, we will focus on
the case where $\bm{\alpha}$ and $\bm{\eta}$ are two vectors with the same
direction, but different length; explicitly, $\bm{\alpha}=(0,0,c)$ and
$\bm{\eta}=(0,0,1)$, with $c=(\bm{\alpha},\bm{\eta})<1$. Note that we are
excluding the value $c=1$, which would imply $\bm{\alpha}=\bm{\eta}$, as
Eq.(14) would then be satisfied only if also
$\overline{\bm{\alpha}}=\bm{\overline{\eta}}$; in other words, the second pair
of (equal) vectors would simply be a rotation of the first pair of (equal)
vectors, corresponding to a trivial rotation on the Hilbert space of the
environment.
The geometrical meaning of Eq.(14) for the chosen $\bm{\alpha}$ and
$\bm{\eta}$ is illustrated in Fig.2, under the further constraint that
$|\overline{\bm{\alpha}}|=1$, i.e., $\overline{\rho}_{E}(0)$ is a pure state:
in this case, the projection of the vector $\overline{\bm{\eta}}$ onto the
direction fixed by $\overline{\bm{\alpha}}$ has to be equal to the length of
the vector $\bm{\alpha}$. In the next section, we are going to investigate the
qualitatively and quantitatively different features of the global system-
environment evolutions fixed by the conditions above, leading to the same open
system dynamics.
Figure 2: Geometrical meaning of the condition in Eq.(14), ensuring two equal
open system pure dephasing dynamics, in the presence of two two-dimensional
environments. The environmental interaction operators are fixed by the vectors
$\bm{\eta}$ and $\overline{\bm{\eta}}$ – see Eq.(12) – with
$|\bm{\eta}|=|\overline{\bm{\eta}}|=1$, while the initial environmental states
are fixed by the vectors $\bm{\alpha}$ and $\overline{\bm{\alpha}}$ – see
Eq.(13) – $|\bm{\alpha}|,|\overline{\bm{\alpha}}|\leq 1$; in particular, we
consider here $\bm{\alpha}=(0,0,c),\bm{\eta}=(0,0,1)$ and
$|\overline{\bm{\alpha}}|=1$.
## 4 System-environment correlations, environmental states and information
flow
As recalled in Sect.2, the exchange of information between an open system and
its environment possibly inducing a non-Markovian evolution is determined by
the system-environment correlations and the changes in the environmental state
due to the interaction between the two subsystems. The bound in Eq.(4) makes
this statement quantitative, via the sum of three contributions representing
different kinds of information lying outside the open system: the
distinguishability between the global state and the product of its marginals,
for each of the two different initial conditions, and the distinguishability
between the corresponding environmental states. Starting from the two
equivalent pure-dephasing dynamics introduced in the previous section, we now
investigate how qualitatively and quantitatively different contributions to
the information associated with the global evolution can result in the same
system-environment exchange of information.
### 4.1 Zero-discord vs entangled global states
First, we compare the system-environment correlations and the environmental
states in the two pure-dephasing dynamics; in the next subsection, we will
finally discuss the connection of these quantities with the system-environment
information flow.
Recall that we are looking at two global evolutions where the environments are
two two-level systems, both interacting with the open system of interest via a
pure dephasing term, but fixed by two different directions, ${\bm{\eta}}$ and
$\overline{\bm{\eta}}$, see Eq.(12), and initially in two different states,
fixed by $\bm{\alpha}$ and $\overline{\bm{\alpha}}$, see Eq.(13). For the sake
of concreteness, we are setting $\bm{\alpha}=(0,0,c),\bm{\eta}=(0,0,1)$ and
$\overline{\bm{\alpha}}=(0,0,1)$, with $c<1$. This means that the initial
environmental state is the mixed state $\rho_{E}(0)=\frac{1+c}{2}\lvert
1\rangle\langle 1\rvert+\frac{1-c}{2}\lvert 0\rangle\langle 0\rvert$ in the
first model and the pure state $\overline{\rho}_{E}(0)=\lvert 1\rangle\langle
1\rvert$ in the second model. Finally, the validity of Eq.(14), guaranteeing
the equivalence between the two open system dynamics,
$\rho_{S}(t)=\overline{\rho}_{S}(t)$ for every
$\rho_{S}(0)=\overline{\rho}_{S}(0)$ and $t\geq 0$, implies that
$\overline{\bm{\eta}}=(\sqrt{1-c^{2}-d^{2}},d,c)$, for any $-1\leqslant
d\leqslant 1$. Note that, owing to the invariance of the trace distance under
unitary operations, we can set $d=0$ without loss of generality.
Evaluating the two global states via Eq.(6), the difference in their system-
environment correlations immediately becomes clear, showing that we are indeed
in the situation illustrated in Fig.1 (note that Eq.(6) refers to the global
state in the interaction picture with respect to $H_{S}+H_{E}$; the latter is
related to the state in the Schrödinger picture via the factorized unitary
operator $e^{-iH_{S}t}\otimes e^{-iH_{E}t}$, which does not affect the
system-environment correlations. For future convenience, we also note that the
three terms on the r.h.s. of Eq.(4) do not change when moving between the
interaction and Schrödinger pictures, due to the invariance of the trace
distance under unitary operations). In the first model, the global state at
time $t$ is
$\rho_{SE}(t)=\frac{1+c}{2}\begin{pmatrix}c_{11}&c_{10}e^{-2igt}\\ c_{01}e^{2igt}&c_{00}\end{pmatrix}\otimes\lvert 1\rangle\langle 1\rvert+\frac{1-c}{2}\begin{pmatrix}c_{11}&c_{10}e^{2igt}\\ c_{01}e^{-2igt}&c_{00}\end{pmatrix}\otimes\lvert 0\rangle\langle 0\rvert,$
(20)
which is a zero-discord state, indicating the classical nature of the
correlations between the open system and the environment [26, 27, 28]; zero-
discord states are a proper subset of the set of separable states. On the
other hand, in the second model the global state at time $t$ can be written as
$\overline{\rho}_{SE}(t)=c_{11}\lvert 1\rangle\langle 1\rvert\otimes\begin{pmatrix}|\ell_{t}|^{2}&\ell_{t}^{*}\kappa_{t}\\ \ell_{t}\kappa_{t}^{*}&|\kappa_{t}|^{2}\end{pmatrix}+c_{00}\lvert 0\rangle\langle 0\rvert\otimes\begin{pmatrix}|\ell_{t}|^{2}&-\ell_{t}\kappa_{t}\\ -\ell_{t}^{*}\kappa_{t}^{*}&|\kappa_{t}|^{2}\end{pmatrix}+c_{10}\lvert 1\rangle\langle 0\rvert\otimes\begin{pmatrix}\ell_{t}^{*2}&-\ell_{t}^{*}\kappa_{t}\\ \ell_{t}^{*}\kappa_{t}^{*}&-|\kappa_{t}|^{2}\end{pmatrix}+\mathrm{h.c.},$
(29)
where $\mathrm{h.c.}$ denotes the Hermitian conjugate of the term to its left, and
$\ell_{t}=\cos(gt)+ic\sin(gt);\quad\kappa_{t}=i\sqrt{1-c^{2}}\sin(gt).$ (30)
This state is easily shown to be an entangled state at almost every time $t$,
e.g., by means of the partial transposition criterion [58, 59]. More
generally, any pure-dephasing evolution will generate entanglement between the
two-level system and its (generic) environment if and only if the initial
state of the environment does not commute with the environmental unitary
interaction operator $V(t)$ (see the definition at the beginning of Sec.3.1)
[60]. In addition, we stress that two unitary dilations for the same pure-
dephasing CPTP map, one associated with a global entangled state and one with
a zero-discord state, have been derived in [61].
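The entanglement of $\overline{\rho}_{SE}(t)$ can be verified numerically via the partial transposition criterion. The sketch below (an illustration assuming the interaction $H_{I}=\sigma_{z}\otimes\overline{B}$ with $g=1$ and $d=0$) evolves the product initial state and checks that the partial transpose acquires a negative eigenvalue.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

c = 0.6
B_bar = np.sqrt(1 - c**2) * sx + c * sz        # g = 1, eta_bar = (sqrt(1-c^2), 0, c)
H = np.kron(sz, B_bar)                         # pure-dephasing interaction
w, V = np.linalg.eigh(H)

def global_state(t, rho_S0, rho_E0):
    U = (V * np.exp(-1j * t * w)) @ V.conj().T
    return U @ np.kron(rho_S0, rho_E0) @ U.conj().T

def min_eig_pt(rho):
    """Smallest eigenvalue of the partial transpose over the environment."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt)[0]

plus = 0.5 * np.ones((2, 2), dtype=complex)    # |psi_+><psi_+|
rho_E0 = np.diag([1.0, 0.0]).astype(complex)   # |1><1| = (1 + sigma_z)/2

rho = global_state(np.pi / 4, plus, rho_E0)
print(round(min_eig_pt(rho), 6))               # -0.4 < 0: NPT, hence entangled
```

At $t=0$ the state is a product state and the partial transpose is positive, as expected.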
Figure 3: Time evolution of the concurrence of the global states
$\mathcal{C}[{\rho}_{SE}(t)]$ and $\mathcal{C}[\overline{\rho}_{SE}(t)]$ for
the pure-dephasing dynamics fixed by the vectors $({\bm{\alpha}},{\bm{\eta}})$
and $(\overline{\bm{\alpha}},\overline{\bm{\eta}})$ respectively – see Fig.2.
In the first case (black line) the state always remains classically
correlated, so that its concurrence is always equal to zero, while in the
second case (red line) it periodically reaches the maximum possible value. We
take $\bm{\alpha}=(0,0,0)$ and $\bm{\eta}=(0,0,1)$ together with
$\overline{\bm{\alpha}}=(0,0,1)$ and $\overline{\bm{\eta}}=(1,0,0)$, while the
system starts in the pure state
$\lvert\psi_{+}\rangle=\left(\lvert 0\rangle+\lvert
1\rangle\right)/\sqrt{2}$.
Actually, we can also explicitly quantify the amount of entanglement of
$\overline{\rho}_{SE}(t)$ and we do so by using the concurrence, according to
[62]
$\mathcal{C}[\overline{\rho}_{SE}(t)]=\max\left\{0,\lambda_{1}(t)-\lambda_{2}(t)-\lambda_{3}(t)-\lambda_{4}(t)\right\},$
(31)
where $\lambda_{1}(t)\geq\lambda_{2}(t)\geq\lambda_{3}(t)\geq\lambda_{4}(t)$
are the square roots of the eigenvalues of
$\overline{\rho}_{SE}(t)(\sigma_{y}\otimes\sigma_{y})\overline{\rho}^{*}_{SE}(t)(\sigma_{y}\otimes\sigma_{y})$,
with $\sigma_{y}=-i\lvert 0\rangle\langle 1\rvert+i\lvert 1\rangle\langle
0\rvert$ and $\rho^{*}$ the complex conjugate of $\rho$. An example of the
time evolution of the concurrence of $\overline{\rho}_{SE}(t)$ is given in
Fig.3, where one can observe that the interaction between the open system and
the environment leads to the presence of entanglement for any time $t>0$,
apart from isolated instants of time; noticeably, maximally entangled states,
for which the value of concurrence is equal to 1, can be generated by the
global evolution.
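The Wootters formula in Eq.(31) is a few lines of code. The following sketch (an illustration, not taken from the paper) implements it and reproduces the textbook benchmark values.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)

def concurrence(rho):
    """Wootters concurrence of a two-qubit state, Eq. (31):
    C = max(0, l1 - l2 - l3 - l4), with l_i the sorted square roots of
    the eigenvalues of rho (sy x sy) rho^* (sy x sy)."""
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # Bell state |phi+>
print(round(concurrence(np.outer(psi, psi.conj())), 6))    # 1.0
print(round(concurrence(np.eye(4, dtype=complex) / 4), 6)) # 0.0
```

The `abs(...real)` guard merely absorbs the tiny negative or imaginary round-off that a non-symmetric eigensolver can introduce.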
The difference between the two global evolutions is further illustrated in
Fig.4, where we report the evolution of the trace distance between the two
corresponding global states, $D(\rho_{SE}(t),\overline{\rho}_{SE}(t))$, and
the two environmental states $D(\rho_{E}(t),\overline{\rho}_{E}(t))$. At the
initial time the two quantities coincide, since both initial system-
environment states are product states. Then, while
$D(\rho_{SE}(t),\overline{\rho}_{SE}(t))$ takes values greater or equal to its
initial value, $D(\rho_{E}(t),\overline{\rho}_{E}(t))$ oscillates between its
initial value and zero; the contractivity of the trace distance under CPTP
maps implies that $D(\rho_{E}(t),\overline{\rho}_{E}(t))\leqslant
D(\rho_{SE}(t),\overline{\rho}_{SE}(t))$. Interestingly, we also note that
when the global-state distinguishability increases, the environmental-state
distinguishability decreases, and vice versa; thus, when the two environmental
states coincide, $D(\rho_{E}(t),\overline{\rho}_{E}(t))=0$, the two global
states have reached their maximum distinguishability, which is then entirely
due to the different correlations in the two global states.
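The trace distance and its contractivity under partial trace are easy to probe directly. The sketch below (an illustrative example with a classically correlated state, not the states of Fig.4) shows a case where the global distinguishability is strictly positive while the environmental marginals coincide, i.e., the distinguishability sits entirely in the correlations.

```python
import numpy as np

def trace_distance(rho, sigma):
    """D(rho, sigma) = (1/2)||rho - sigma||_1 for Hermitian matrices."""
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def env_marginal(rho):
    """Partial trace over the first (system) qubit of a two-qubit state."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

# A classically correlated state vs the uncorrelated maximally mixed state:
rho_SE = np.diag([0.5, 0.0, 0.0, 0.5]).astype(complex)
sig_SE = np.eye(4, dtype=complex) / 4
print(trace_distance(rho_SE, sig_SE))                              # 0.5
print(trace_distance(env_marginal(rho_SE), env_marginal(sig_SE)))  # 0.0
```

The second number can never exceed the first, in line with the contractivity of the trace distance under CPTP maps (here, the partial trace).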
Figure 4: Time evolution of the trace distance between the global states,
$D(\rho_{SE}(t),\overline{\rho}_{SE}(t))$ (solid line), and the environmental
states, $D(\rho_{E}(t),\overline{\rho}_{E}(t))$ (dashed line), for the two
pure dephasing models fixed by the same parameters as in Fig.3. In both cases
the system starts in the pure state $\lvert\psi_{+}\rangle$, while the initial
states of the environments are given in Eq.(13).
### 4.2 Different contributions to the system-environment exchange of
information
We have thus seen that different evolutions of the global states and the
system-environment correlations can still lead to the same open system
dynamics, meaning in particular that the quantum or classical nature of the
system-environment correlations is not crucial for the presence of memory
effects in the dynamics at hand [29, 30, 21, 22, 23].
Figure 5: Trace distance variation $\Delta_{S}(t,s)$ (green surface) for
$t=\pi/2$ as a function of the time $s$ and the parameter $r$ determining the
initial condition. The pair of initial states for the system is given by the
pure states $\rho^{1}_{S}(0)=\lvert\psi_{+}\rangle\langle\psi_{+}\rvert$ and
$\rho^{2}_{S}(0)=\lvert\psi_{-}^{r}\rangle\langle\psi_{-}^{r}\rvert$, with
$\lvert\psi_{-}^{r}\rangle=\left(r\lvert 0\rangle-\sqrt{1-r^{2}}\lvert
1\rangle\right)$. The black and red meshed transparent surfaces correspond to
the bounds $I_{SE}(s)$ and $\overline{I}_{SE}(s)$ respectively, according to
Eqs.(4) and (5). The other parameters are as in Fig.3.
Figure 6: Section of
Fig.5 for the value $r=0.4$. The plot shows the bounds $I_{SE}(s)$ and
$\overline{I}_{SE}(s)$ (dashed red and black lines respectively) together with
the trace distance variation $\Delta_{S}(t,s)$ for $t=\pi/2$ (solid green
line). The upper bound is clearly saturated for the considered pair of initial
system states.
We now move one step forward and use the three contributions at the r.h.s. of
the bound in Eq.(4) to quantify the different kinds of information lying
outside the open system and their relation with the system-environment
information flow. In Fig.5 we depict with a green surface the open system
trace distance variation $\Delta_{S}(\pi/2,s)$ defined in Eq.(3), which by
construction is the same for the two pure-dephasing models we are dealing
with; indeed $\Delta_{S}(\pi/2,s)$ is always larger than zero for the chosen
time interval, in accordance with the strong non-Markovian character of the
open system pure dephasing dynamics due to the interaction with a two-level
system environment. The overall amount of information contained in the system-
environment correlations and environmental-state distinguishability, as
quantified via the sum of the three contributions at the r.h.s. of Eq.(5), is
represented by the meshed transparent red and black surfaces. In Fig.6 we
consider a section of Fig.5 corresponding to a fixed pair of initial system
states. It clearly appears that the sum of system-environment correlations and
environmental-state distinguishability in the model characterized by the
presence of entanglement exceeds the corresponding sum for the classically-
correlated model, so that the bound on the open system trace distance given by
Eq.(4) is tighter in the latter case and one can consider a choice of initial
pure states such that the bound is actually saturated at some intermediate
point of time. This is exactly the choice we have made in Fig.6.
Despite the different amounts of information associated with system-environment
correlations and environmental-state distinguishability in the two models,
the open system dynamics that results after averaging out the environmental
degrees of freedom is exactly the same. The mentioned differences do not
affect in any way the information exchange between the open system and the
environment and therefore the relevance of memory effects in the open system
dynamics, as quantified by the magnitude of the trace distance revivals.
Figure 7: Trace distance variation $\Delta_{S}(t,s)$, see Eq.(3), as a
function of $s$ and $t-s$ for the two pure-dephasing models fixed by
$(\bm{\alpha},\bm{\eta})$ and $(\overline{\bm{\alpha}},\overline{\bm{\eta}})$
– see Fig.2. The green solid line in the transparent plane corresponds to
fixing $t=\pi/2$. The black and red lines in the background correspond to the
three distinct contributions of $I_{SE}(s)$ (left,black) and
$\overline{I}_{SE}(s)$ (right,red) respectively. For the model fixed by
$(\bm{\alpha},\bm{\eta})$ (left) the dashed lines correspond to the total
amount of system-environment correlations as a function of the time $s$, while
the solid line depicts the distinguishability between the two environmental
marginals, which always remain the same. The same quantities are plotted for
the model fixed by $(\overline{\bm{\alpha}},\overline{\bm{\eta}})$ (right). It
clearly appears that in this case all three contributions are relevant for the
information backflow. For both dynamics the initial reduced system states are
$\rho^{1}_{S}(0)=\lvert\psi_{+}\rangle\langle\psi_{+}\rvert$,
$\rho^{2}_{S}(0)=\lvert\psi^{0.4}_{-}\rangle\langle\psi^{0.4}_{-}\rvert$,
while the other parameters are as in Fig.3.
Interestingly, relevant differences can be observed also if we look at each of
the three contributions at the r.h.s. of Eq.(4) individually. The latter are
represented by the lines on the two planes in the background of Fig.7, where
each plane refers to one of the two pure-dephasing models, while the 3D plot
depicts the identical trace distance variation $\Delta_{S}(t,s)$ for the two
models, as a function of $s$ and $t-s$. We can see that for the first dynamics
the environmental states remain the same for both chosen initial states. On
the other hand, in the second model the environmental states do depend on the
initial open system states, showing that in this case the environmental
degrees of freedom have an important role in storing information that was
previously in the reduced quantum system. In addition, also the amount of
information contained in the correlations for both initial conditions differs
significantly in the two microscopic models. Hence, Fig.7 yields a direct
illustration of how different contributions to the information content outside
the open system – be it in the system-environment correlations or in the
environmental-state distinguishability – can result in the very same flow of
information towards the reduced system.
## 5 Conclusions and outlook
In this paper, we have investigated the microscopic origin of the exchange of
information between an open quantum system and its environment. To do so, we
have considered two generalized pure dephasing microscopic models, with
different environmental states and system-environment interaction terms,
leading to the same reduced system dynamics. In this way, we have shown how
quantitatively and even qualitatively different features of the information
contained in system-environment correlations and environmental states might
well result in the same flow of information towards the open system, implying
the same increase of the trace distance and thus the same amount of non-
Markovianity in the dynamics. In particular, the first model is characterized
by classical system-environment correlations (that is, the global state has
always zero discord), while the second generates entangled global states at
almost any time; in addition, for a specific choice of the initial conditions,
in the first model the environmental states do not depend on the open system
initial state, while in the second model significant information is contained
in the environmental-state distinguishability.
Our results will hopefully provide a useful reference point to understand the
general physical mechanisms ruling the origin of non-Markovianity, identifying
those global features that would unavoidably lead to different behaviors of
proper quantifiers of the information accessible via the open quantum system
evolution.
## Acknowledgements
NM would like to thank Walter T. Strunz for getting her interested in the
topic. NM acknowledges funding by the Alexander von Humboldt Foundation in the
form of a Feodor-Lynen Fellowship. All authors acknowledge support from the
UniMi Transition Grant H2020.
## References
* [1] H.-P. Breuer and F. Petruccione. The Theory of Open Quantum Systems. Oxford University Press, Oxford, 2002.
* [2] Á. Rivas and S.F. Huelga. Open Quantum Systems: An Introduction. Springer, 2012.
* [3] W. Feller. An Introduction to Probability Theory and Its Applications. Wiley, New York, 1971.
* [4] B. Vacchini, A. Smirne, E.-M. Laine, J. Piilo, and H.-P. Breuer. Markovianity and non-Markovianity in quantum and classical systems. New J. Phys., 13:093004, 2011.
* [5] B. Vacchini. A classical appraisal of quantum definitions of non-Markovian dynamics. J. Phys. B, 45:154007, 2012.
* [6] Á. Rivas, S.F. Huelga, and M.B. Plenio. Quantum non-Markovianity: characterization, quantification and detection. Rep. Progr. Phys., 77:094001, 2014.
* [7] H.-P. Breuer, E.-M. Laine, J. Piilo, and B. Vacchini. Colloquium : Non-Markovian dynamics in open quantum systems. Rev. Mod. Phys., 88:021002, 2016.
* [8] H.-P. Breuer, E.-M. Laine, and J. Piilo. Measure for the degree of non-Markovian behavior of quantum processes in open systems. Phys. Rev. Lett., 103:210401, 2009.
* [9] E.-M. Laine, J. Piilo, and H.-P. Breuer. Measure for the non-Markovianity of quantum processes. Phys. Rev. A, 81:062115, 2010.
* [10] C. A. Fuchs and J. van de Graaf. Cryptographic distinguishability measures for quantum-mechanical states. IEEE Transactions on Information Theory, 45:1216, 1999.
* [11] L. Li, M. Hall, and H. Wiseman. Concepts of quantum non-Markovianity: A hierarchy. Phys. Rep., 759:1, 2018.
* [12] I. de Vega and D. Alonso. Dynamics of non-Markovian open quantum systems. Rev. Mod. Phys., 89:015001, 2017.
* [13] C.-F. Li, G.-C. Guo, and J. Piilo. Non-Markovian quantum dynamics: What does it mean? EPL (Europhysics Letters), 127:50001, 2019.
* [14] B.-H. Liu, L. Li, Y.-F. Huang, C.-F. Li, G.-C. Guo, E.-M. Laine, H.-P. Breuer, and J. Piilo. Experimental control of the transition from Markovian to non-Markovian dynamics of open quantum systems. Nat. Phys., 7:931, 2011.
* [15] N.K. Bernardes, J.P.S. Peterson, R.S. Sarthour, A.M. Souza, C. H. Monken, I. Roditi, I.S. Oliveira, and M.F. Santos. High resolution non-Markovianity in NMR. Sci. Rep., 6:33945, 2016.
* [16] S. Cialdi, M.A.C. Rossi, C. Benedetti, B. Vacchini, D. Tamascelli, S. Olivares, and M.G.A. Paris. All-optical quantum simulator of qubit noisy channels. Appl. Phys. Lett., 110:081107, 2017.
* [17] J. F. Haase, P. J. Vetter, T. Unden, A. Smirne, J. Rosskopf, B. Naydenov, A. Stacey, F. Jelezko, M. B. Plenio, and S. F. Huelga. Controllable non-Markovianity for a spin qubit in diamond. Phys. Rev. Lett., 121:060401, 2018.
* [18] M. Wittemer, G. Clos, H.-P. Breuer, U. Warring, and T. Schaetz. Measurement of quantum memory effects and its fundamental limitations. Phys. Rev. A, 97:020102, 2018.
* [19] C.-F. Li, G.-C. Guo, and J. Piilo. Non-Markovian quantum dynamics: What is it good for? EPL (Europhysics Letters), 128:30001, 2020.
* [20] E.-M. Laine, J. Piilo, and H.-P. Breuer. Witness for initial system-environment correlations in open-system dynamics. EPL (Europhysics Letters), 92:60010, 2010.
* [21] L. Mazzola, C. A. Rodríguez-Rosario, K. Modi, and M. Paternostro. Dynamical role of system-environment correlations in non-Markovian dynamics. Phys. Rev. A, 86:010102, 2012.
* [22] A. Smirne, L. Mazzola, M. Paternostro, and B. Vacchini. Interaction-induced correlations and non-Markovianity of quantum dynamics. Phys. Rev. A, 87:052129, 2013.
* [23] S. Campbell, M. Popovic, D. Tamascelli, and B. Vacchini. Precursors of non-Markovianity. New J. Phys., 21(5):053036, 2019.
* [24] N. Megier, A. Smirne, and B. Vacchini. Entropic bounds on information backflow. e-print arXiv:2101.02720, 2021.
* [25] I. Bengtsson and K. Zyczkowski. Geometry of quantum states: an introduction to quantum entanglement. Cambridge University Press, Cambridge, 2006.
* [26] H. Ollivier and W. H. Zurek. Quantum discord: A measure of the quantumness of correlations. Phys. Rev. Lett., 88:017901, 2001.
* [27] L. Henderson and V. Vedral. Classical, quantum and total correlations. Journal of Physics A: Mathematical and General, 34:6899, 2001.
* [28] K. Modi, A. Brodutch, H. Cable, T. Paterek, and V. Vedral. The classical-quantum boundary for correlations: Discord and related measures. Rev. Mod. Phys., 84:1655, 2012.
* [29] A. Pernice and W. T. Strunz. Decoherence and the nature of system-environment correlations. Phys. Rev. A, 84:062121, 2011.
* [30] A. Pernice, J. Helm, and W. T. Strunz. System–environment correlations and non-Markovian dynamics. J. Phys. B: At. Mol. Opt. Phys., 45:154005, 2012.
* [31] D. De Santis, M. Johansson, B. Bylicka, N.K. Bernardes, and A. Acín. Correlation measure detecting almost all non-Markovian evolutions. Phys. Rev. A, 99:012303, 2019.
* [32] J. Kołodyński, S. Rana, and A. Streltsov. Entanglement negativity as a universal non-Markovianity witness. Phys. Rev. A, 101:020303, 2020.
* [33] D. De Santis and M. Johansson. Equivalence between non-Markovian dynamics and correlation backflows. New Journal of Physics, 22:093034, 2020.
* [34] D. De Santis, M. Johansson, B. Bylicka, N. K. Bernardes, and A. Acín. Witnessing non-Markovian dynamics through correlations. Phys. Rev. A, 102:012214, 2020.
* [35] F. A. Pollock, C. Rodríguez-Rosario, T. Frauenheim, M. Paternostro, and K. Modi. Operational Markov condition for quantum processes. Phys. Rev. Lett., 120:040405, 2018.
* [36] S. Milz, M. S. Kim, F. A. Pollock, and K. Modi. Completely positive divisibility does not mean Markovianity. Phys. Rev. Lett., 123:040401, 2019.
* [37] A. Smirne, D. Egloff, M. G. Díaz, M. B. Plenio, and S. F. Huelga. Coherence and non-classicality of quantum Markov processes. Quantum Sci. Technol., 4:01LT01, 2019.
* [38] S. Milz, F. Sakuldee, F. A. Pollock, and K. Modi. Kolmogorov extension theorem for (quantum) causal modelling and general probabilistic theories. Quantum, 4:255, 2020.
* [39] S. Milz, D. Egloff, P. Taranto, T. Theurer, M. B. Plenio, A. Smirne, and S. F. Huelga. When is a non-Markovian quantum process classical? Phys. Rev. X, 10:041049, 2020.
* [40] M. M. Wolf, J. Eisert, T. S. Cubitt, and J. I. Cirac. Assessing non-Markovian quantum dynamics. Phys. Rev. Lett., 101:150402, 2008.
* [41] Á. Rivas, S. F. Huelga, and M. B. Plenio. Entanglement and non-Markovianity of quantum evolutions. Phys. Rev. Lett., 105:050403, 2010.
* [42] X.-M. Lu, X. Wang, and C. P. Sun. Quantum Fisher information flow and non-Markovian processes of open systems. Phys. Rev. A, 82:042103, 2010.
* [43] D. Chruściński and S. Maniscalco. Degree of non-Markovianity of quantum evolution. Phys. Rev. Lett., 112:120404, 2014.
* [44] M. J. W. Hall, J. D. Cresser, L. Li, and E. Andersson. Canonical form of master equations and characterization of non-Markovianity. Phys. Rev. A, 89:042120, 2014.
* [45] F. Buscemi and N. Datta. Equivalence between divisibility and monotonic decrease of information in classical and quantum stochastic processes. Phys. Rev. A, 93:012101, 2016.
* [46] H. R. Jahromi, K. Mahdavipour, M. Khazaei Shadfar, and R. Lo Franco. Witnessing non-Markovian effects of quantum processes through Hilbert-Schmidt speed. Phys. Rev. A, 102:022221, 2020.
* [47] D. Chruściński, A. Kossakowski, and Á. Rivas. Measures of non-Markovianity: Divisibility versus backflow of information. Phys. Rev. A, 83:052128, 2011.
* [48] S. Wißmann, H.-P. Breuer, and B. Vacchini. Generalized trace-distance measure connecting quantum and classical non-Markovianity. Phys. Rev. A, 92:042108, 2015.
* [49] A. Ferraro, L. Aolita, D. Cavalcanti, F. M. Cucchietti, and A. Acín. Almost all quantum states have nonclassical correlations. Phys. Rev. A, 81:052318, May 2010.
* [50] D. Tamascelli, A. Smirne, S. F. Huelga, and M. B. Plenio. Nonperturbative treatment of non-Markovian dynamics of open quantum systems. Phys. Rev. Lett., 120:030402, 2018.
* [51] D. Tamascelli, A. Smirne, J. Lim, S. F. Huelga, and M. B. Plenio. Efficient simulation of finite-temperature open quantum systems. Phys. Rev. Lett., 123:090402, 2019.
* [52] F. Chen, E. Arrigoni, and M. Galperin. Markovian treatment of non-Markovian dynamics of open Fermionic systems. New J. Phys., 21:123035, 2019.
* [53] N. Lambert, S. Ahmed, M. Cirio, and F. Nori. Modelling the ultra-strongly coupled spin-boson model with unphysical modes. Nat. Commun., 10:3721, 2019.
* [54] A. Nüßeler, I. Dhand, S. F. Huelga, and M. B. Plenio. Efficient simulation of open quantum systems coupled to a fermionic bath. Phys. Rev. B, 101:155134, 2020.
* [55] G. Pleasance, B. M. Garraway, and Fr. Petruccione. Generalized theory of pseudomodes for exact descriptions of non-Markovian quantum processes. Phys. Rev. Research, 2:043058, 2020.
* [56] M. G. Díaz, B. Desef, M. Rosati, D. Egloff, J. Calsamiglia, A. Smirne, M. Skotiniotis, and S. F. Huelga. Accessible coherence in open quantum system dynamics. Quantum, 4:249, 2020.
* [57] M.A. Nielsen and I.L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, Cambridge, 2000.
* [58] A. Peres. Separability criterion for density matrices. Phys. Rev. Lett., 77:1413–1415, 1996.
* [59] M. Horodecki, P. Horodecki, and R. Horodecki. Separability of mixed states: necessary and sufficient conditions. Phys. Lett. A, 223:1, 1996.
* [60] K. Roszak and Ł. Cywiński. Characterization and measurement of qubit-environment-entanglement generation during pure dephasing. Phys. Rev. A, 92:032310, 2015.
* [61] A. C. S. Costa, M. W. Beims, and W. T. Strunz. System-environment correlations for dephasing two-qubit states coupled to thermal baths. Phys. Rev. A, 93:052316, 2016.
* [62] W. K. Wootters. Entanglement of formation of an arbitrary state of two qubits. Phys. Rev. Lett., 80:2245, 1998.
# Robust measurement of wave function topology on NISQ quantum computers
Xiao Xiao<EMAIL_ADDRESS>(Department of Physics, North Carolina State
University, Raleigh, North Carolina 27695, USA); J. K. Freericks
<EMAIL_ADDRESS>(Department of Physics, Georgetown University, 37th and O Sts.
NW, Washington, DC 20057, USA); A. F. Kemper<EMAIL_ADDRESS>(Department of
Physics, North Carolina State University, Raleigh, North Carolina 27695, USA)
###### Abstract
Topological quantum phases of quantum materials are defined through their
topological invariants. These topological invariants are quantities that
characterize the global geometrical properties of the quantum wave functions
and thus are immune to local noise. Here, we present a general strategy to
measure topological invariants on NISQ quantum computers and demonstrate our
proposal by applying it to a topological superconducting model. Remarkably, we
find that the Chern numbers of the model can be measured exactly; that is, it
is immune to the noise of NISQ machines. We also evaluate the Zak phase and
ensemble geometric phase, which are not as robust as the Chern number, but
still can be used to qualitatively distinguish different topological phases.
Our study demonstrates that the robust nature of wave function topology allows
noisy NISQ machines to determine topological invariants accurately.
Introduction — Topological phases are characterized by nonlocal topological
invariants, which are by nature robust against local perturbations
Thouless_prl_1982 ; Niu_prb_1985 ; Sheng_PRB_2006 ; Obuse_PRB_2007 ;
Li_PRL_2009 ; Prodan_JPA_2011 ; Chalker_PRB_2011 ; Liu_PRL_2012 ;
Lobos_PRL_2012 ; Konig_PRB_2013 ; Altland_PRL_2014 ; Shem_PRL_2014 ;
Song_PRB_2014 ; Foster_PRB_2014 ; Wang_PRB_2014 ; Liu_PRL_2017 ;
Meier_Science_2018 ; Stutzer_Nature_2018 ; Xiao_arXiv_2018 ; Shtanko_PRL_2018
; Okugawa_PRB_2020 . This unique property makes topological phases an ideal
application of quantum computing, in particular in the NISQ era where the
noise levels are high. A significant amount of work has been performed on
realizing topological phases and identifying different topological phases
_qualitatively_ on quantum hardware Roushan_Nature_2014 ; Choo_PRL_2018 ;
Smith_arXiv_2019 ; Azses_PRL_2020 ; Mei_PRL_2020 ; Xiao_arXiv_2020 . However,
although the strategies for calculating topological invariants are well-
established in the condensed matter community, there have only been a few
studies employing real quantum circuits to determine them Flurin_PRX_2017 ;
Zhan_PRL_2017 ; Xu_PRL_2018 ; Elben_SA_2020 .
The difficulty in using NISQ hardware to measure topological invariants stems
from the inherent errors due to the non-fault-tolerant quantum hardware; the
issues of noise are omnipresent within NISQ hardware calculations
Preskill_Quantum_2018 , and advanced error mitigation strategies often have to
be deployed before _semiquantitative_ results are obtained Kandala_Nature_2019
; Hamilton_QMI_2020 . These strategies may not suffice; the quantitative
results may be significantly off from the exact results, regardless of the
error mitigation. Even to obtain qualitatively correct results, a limitation
to few qubits and low-depth circuits Lierta_Quantum_2018 ; Aydeniz_npj_2020 ;
Grimsley_nc_2019 is necessary to reduce the influence of the operation errors
in NISQ quantum computers.
Figure 1: Illustration of different topology of wave functions: the pseudo-
spin representation of the wave function of a chiral $p$-wave superconductor
in (a) the case of a topological phase and (b) the case of a trivial phase.
The pseudo-spin vector field for a particular $k_{y}$ (along the gray lines)
determines the Zak phase $\varphi(k_{y})$, which has non-trivial (trivial)
winding for the topological (trivial) phase. The central quantity here is the
overlap of the wave function after a small transport in ${\bm{k}}$-space
denoted by $U_{\delta{\bm{k}}}({\bm{k}})$.
Here we develop quantum circuits—based on holonomy—that can measure
topological invariants of models, and do so in a noise-resistant (or even
noise-free) manner. Our strategy is to construct a general quantum circuit to
measure the parallel transport of the wavefunction in the base space. These
overlaps determine the connection of the wavefunction bundle, which permits
the calculation of topological invariants in a gauge-invariant fashion. We
demonstrate this idea by showing how to measure the Chern number, the winding
of the Zak phase, and the ensemble geometric phase (a density matrix
generalization of the Zak phase) for a model of chiral $p$-wave
superconductors. Strikingly, the Chern number can be measured _exactly_ on
NISQ hardware without any error, even when the system is tuned close to a
topological phase transition. We are not aware of any other error-free
measurements obtained from NISQ hardware. The winding of the Zak phase and
ensemble geometric phase are less robust, but can still qualitatively
determine different topological phases.
Figure 2: Quantum circuit for measuring parallel transport of the
wavefunction: (a) Schematic quantum circuit structure (independent of the
details of the microscopic model); (b) Detailed circuit for wave-function
preparation of the chiral $p$-wave superconductor appendix ; and (c) Detailed
circuit for wave-function evolution of the chiral $p$-wave superconductor
appendix . In the panels, $U_{3}({\bm{k}})$ (and $U_{3}^{\dagger}({\bm{k}})$)
denotes the general single-qubit operator specified by three angles determined
at the momentum point ${\bm{k}}$, ‘X’ denotes a Pauli-X gate, ‘H’ is a
Hadamard gate, $\bullet$ denotes a controlled gate, and $\oplus$ denotes a NOT
gate.
General scheme and quantum circuits — To measure the topology of the
wavefunction in some parameter space, the central quantity we use is the
holonomy in the wavefunction bundle, which can be obtained by the parallel
transportation of the wavefunction along a closed loop in parameter space.
Topological states have non-trivial holonomy, while trivial states have
trivial holonomy (see Fig. 1 for examples). Along a closed path, each parallel
transport operation defines a local connection, which is determined from the
overlap of the wavefunctions
$\langle\Psi_{\Theta}|\Psi_{\Theta+\delta\Theta}\rangle$ at two neighboring
points $\Theta$ and $\Theta+\delta\Theta$ in the parameter space. Therefore,
the key step to measure the holonomy with quantum circuits is to characterize
the local wavefunction overlap, which requires the evolution of the
wavefunction from $|\Psi_{\Theta}\rangle$ to
$|\Psi_{\Theta+\delta\Theta}\rangle$. Once this is known, the overlap
$\langle\Psi_{\Theta}|\Psi_{\Theta+\delta\Theta}\rangle$ can be evaluated by a
Hadamard test algorithm (see Fig. 2 (a)).
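The Hadamard-test step can be checked with a small statevector simulation. The sketch below is a classical emulation in pure Python (no quantum SDK); the single-qubit state and unitary are illustrative choices of ours, not the specific circuit of Fig. 2. It verifies that the ancilla expectation values $\langle\sigma_{x}\otimes I\rangle$ and $\langle\sigma_{y}\otimes I\rangle$ reproduce the real and imaginary parts of $\langle\psi|\mathcal{U}|\psi\rangle$.

```python
import cmath
import math

def hadamard_test(psi, U):
    """Emulate the Hadamard test: return (<sx x I>, <sy x I>),
    which should equal Re<psi|U|psi> and Im<psi|U|psi>."""
    # After H on the ancilla and the controlled-U, the state is
    # (|0> x |psi> + |1> x U|psi>)/sqrt(2); keep the two ancilla blocks.
    b0 = [a / math.sqrt(2) for a in psi]
    Upsi = [sum(U[i][j] * psi[j] for j in range(2)) for i in range(2)]
    b1 = [a / math.sqrt(2) for a in Upsi]
    z = sum(b0[i].conjugate() * b1[i] for i in range(2))  # <b0|b1>
    return 2 * z.real, 2 * z.imag  # <sigma_x x I>, <sigma_y x I>

# Illustrative single-qubit state and unitary (a diagonal phase rotation).
theta, phi = 0.7, 1.3
psi = [math.cos(theta / 2), math.sin(theta / 2) * cmath.exp(-1j * phi)]
U = [[cmath.exp(-1j * 0.4), 0], [0, cmath.exp(1j * 0.4)]]

overlap = sum(psi[i].conjugate() * sum(U[i][j] * psi[j] for j in range(2))
              for i in range(2))
re, im = hadamard_test(psi, U)
```

On hardware the two expectation values are estimated from repeated shots; here they are computed exactly from the statevector.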
So far, the procedure stated above is general and largely independent of
model details. To demonstrate an explicit realization, we use the two-
dimensional chiral $p$-wave superconductor model Volovik_JEPT_1999 ;
Read_PRB_2000 , which can be tuned through several trivial and topological
phases. Our aim is to identify phases by the topology of their wave functions.
This model can be described by the following Hamiltonian density:
$\displaystyle\mathcal{H}({\bm{k}})=\Delta(\sin k_{y}\sigma_{x}+\sin k_{x}\sigma_{y})-\left[t(\cos k_{x}+\cos k_{y})+\mu\right]\sigma_{z},$ (1)
where $\Delta$ is the superconducting gap, $t$ is the nearest neighbor hopping
amplitude, and $\mu+2t$ is the chemical potential in the normal state. We set
$t=\Delta=1$ so that the different phases are tuned by the parameter $\mu$
only. The topological quantum critical points occur when the energy levels of
the two bands touch at some point in k-space. As shown in Fig. 3, this model
exhibits $4$ different phases separated by $3$ topological quantum critical
points at $\mu=\\{-2,0,2\\}$. The trivial phases with $\mathcal{C}=0$ occur
for $|\mu|>2$; when $|\mu|<2$, $\mathcal{C}=\mathrm{sign}(\mu)$.
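This phase diagram can be cross-checked classically (a numerical sketch of ours, not a quantum measurement): scanning the spectrum of Eq. (1) over a k-mesh confirms that the band gap closes only at $\mu=-2,0,2$.

```python
import math

def band_gap(mu, n=40, t=1.0, delta=1.0):
    """Minimum gap 2*E_+ of the Hamiltonian density Eq. (1) on an n x n k-mesh."""
    gmin = float("inf")
    for i in range(n):
        for j in range(n):
            kx, ky = 2 * math.pi * i / n, 2 * math.pi * j / n
            dx = delta * math.sin(ky)
            dy = delta * math.sin(kx)
            dz = -(t * (math.cos(kx) + math.cos(ky)) + mu)
            gmin = min(gmin, 2 * math.sqrt(dx * dx + dy * dy + dz * dz))
    return gmin

# Gap closes at the three critical points and stays open in between.
gaps_closed = [band_gap(mu) for mu in (-2.0, 0.0, 2.0)]
gap_open = band_gap(1.0)
```

The mesh size n=40 is chosen so that the gap-closing momenta (0,0), (pi,pi), (0,pi) and (pi,0) lie exactly on the grid.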
Figure 3: Robustness of the Chern number on NISQ hardware: (a) Topological
phase diagram for the chiral $p$-wave superconductor. The Chern number,
measured on IBMQ-Toronto, is shown with open symbols falling accurately on the
exact results. (b) Mistake ratio of Chern number measurements from noisy
simulations (we assume that the two-qubit gate error $\varepsilon_{2}$ is $10$
times the one-qubit gate error $\varepsilon_{1}$). (c)-(e) Integer gauge field
$n({\bm{k}})$ extracted from the data points denoted by blue stars in (a)
($\bullet$ denotes $n=1$, $\circ$ denotes $n=-1$ and the empty box is for
$n=0$). The overlap $U_{\delta{\bm{k}}}({\bm{k}})$ measured by both IBMQ-
Toronto and noisy simulations was obtained with $N=5120$ shots.
The Chern number — The Chern number for a particular $\mu$ can be obtained by
discretizing the Brillouin zone (BZ) and measuring the normalized overlap
$U_{\delta{\bm{k}}}({\bm{k}})\equiv\langle\Psi({\bm{k}})|\Psi({\bm{k}}+\delta{\bm{k}})\rangle/|\langle\Psi({\bm{k}})|\Psi({\bm{k}}+\delta{\bm{k}})\rangle|$
between neighboring mesh points following the general strategy described in
the last section. Explicitly, the Chern number can be expressed by the
measured overlap in a gauge-invariant way Fukui_JPSJ_2005 :
$\displaystyle\mathcal{C}=\frac{1}{2\pi
i}\sum_{{\bm{k}}}\mathcal{F}({\bm{k}}),$ (2)
where:
$\displaystyle\mathcal{F}({\bm{k}})=\ln\left[U_{\delta k_{x}}({\bm{k}})\,U_{\delta k_{y}}({\bm{k}}+\delta k_{x}\hat{x})\,U_{\delta k_{x}}^{-1}({\bm{k}}+\delta k_{y}\hat{y})\,U_{\delta k_{y}}^{-1}({\bm{k}})\right],$ (3)
where $\hat{x},\hat{y}$ are unit vectors in the $x$ and $y$ directions in the
BZ. From this definition, we have $-\pi<\mathcal{F}({\bm{k}})/i\leq\pi$. The
success of the calculations largely relies on the admissibility condition
$|\mathcal{F}({\bm{k}})|\leq\pi$ Fukui_JPSJ_2005 . The quantity
$\mathcal{F}({\bm{k}})$ approaches the continuum value of field strength
$F({\bm{k}})\delta k_{x}\delta
k_{y}=\nabla\times{\bm{A}}({\bm{k}})\cdot\hat{z}\delta k_{x}\delta k_{y}$,
where ${\bm{A}}({\bm{k}})$ is the Berry connection, as $\delta{\bm{k}}\to 0$.
As long as there is a finite gap between the two bands,
$\mathcal{F}({\bm{k}})$ should be small for a sufficiently dense mesh so that
the Chern number calculated from the lattice is identical to the continuum
value. A finite critical mesh density, above which the Chern number calculated
from the lattice is identical to the continuum value, can be determined by a
breaking of the admissibility condition Fukui_JPSJ_2005 . Typically, for a Chern
number $\mathcal{C}\sim\mathcal{O}(1)$, the critical mesh number is not very
large. In our simulations, we will demonstrate our proposal by using a uniform
discretization of the BZ into $8\times 8$ mesh points.
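The lattice construction of Eqs. (2)-(3) can be reproduced classically. The sketch below (helper names are ours) uses exact lower-band eigenvectors in place of circuit-measured overlaps; since only normalized link variables enter, the per-point gauge choice is irrelevant.

```python
import cmath
import math

def lower_state(kx, ky, mu, t=1.0, delta=1.0):
    """Normalized lower-band eigenvector of Eq. (1) at (kx, ky)."""
    dx = delta * math.sin(ky)
    dy = delta * math.sin(kx)
    dz = -(t * (math.cos(kx) + math.cos(ky)) + mu)
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    v = (-(dx - 1j * dy), dz + r)            # solves (H - E_-) v = 0, E_- = -r
    if abs(v[0]) ** 2 + abs(v[1]) ** 2 < 1e-12:   # degenerate branch: switch gauge
        v = (dz - r, dx + 1j * dy)
    n = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    return (v[0] / n, v[1] / n)

def chern_number(mu, n=8):
    """Chern number from normalized link variables on an n x n mesh,
    using the lattice field strength of Eqs. (2)-(3)."""
    psi = [[lower_state(2 * math.pi * i / n, 2 * math.pi * j / n, mu)
            for j in range(n)] for i in range(n)]
    def link(i1, j1, i2, j2):
        a, b = psi[i1 % n][j1 % n], psi[i2 % n][j2 % n]
        z = a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
        return z / abs(z)
    total = 0.0
    for i in range(n):
        for j in range(n):
            plaq = (link(i, j, i + 1, j) * link(i + 1, j, i + 1, j + 1)
                    / link(i, j + 1, i + 1, j + 1) / link(i, j, i, j + 1))
            total += cmath.log(plaq).imag    # principal branch, Eq. (3)
    return round(total / (2 * math.pi))
```

With the same 8 x 8 mesh as in the text, the integer result distinguishes the trivial and topological phases.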
We first demonstrate the measurement of the Chern number on the IBMQ-Toronto
machine IBMQ_Toronto , focusing on the regions near the topological critical
points, which are typically most sensitive to noise. Fig. 3(a) shows the exact
results for the Chern number $\mathcal{C}$ as the chemical potential is
varied. The plot shows only the regions near the phase transitions, where we
have performed calculations on the quantum computer. Remarkably, we observe
that the Chern number is error-free. This vividly demonstrates the power of
global properties of the wavefunction, which are by definition independent of
any local deformation.
To investigate the robustness of the Chern number measurement, we performed a
noise analysis by varying the gate errors. For typical noise in the IBMQ
machines, the two-qubit gate error $\varepsilon_{2}$ is about one order of
magnitude larger than that of the one-qubit gate error $\varepsilon_{1}$. In
our simulation, we simply set $\varepsilon_{2}=10\varepsilon_{1}$ and vary
only the one-qubit gate error $\varepsilon_{1}$. The one-qubit gate error
$\varepsilon_{1}$ is successively tuned from $\varepsilon_{1}=0.005$ to
$\varepsilon_{1}=0.015$ with step size $0.001$, and for each gate error $10$
trials were performed. The mistake ratio of the measurement is then defined as
the fraction of incorrect results among these $10$ trials. The results of the noisy
simulations are shown in Fig. 3(b). As expected, larger gate error leads to a
larger mistake ratio. The measured Chern number is error-free when
$\varepsilon_{1}\leq 0.008$, and this noise level can be achieved in NISQ
machines. We note that our measurement scheme itself plays an important role
in producing the error-free result. In our scheme,
$U_{\delta{\bm{k}}}({\bm{k}})$ at each discretized momentum point is measured
by the quantum circuit of the same structure, so the error at each momentum
point is contributed by two parts: one from parameter-relevant operations, and
the other from parameter-independent operations. By performing phase
measurements, the parameter-independent part is gauged out, so our measurement
scheme improves the error tolerance.
A different perspective on the results may be obtained by rewriting Eq. (3)
as:
$\displaystyle\mathcal{F}({\bm{k}})=\left[\ln U_{\delta k_{x}}({\bm{k}})-\ln U_{\delta k_{x}}({\bm{k}}+\delta k_{y}\hat{y})\right]+\left[\ln U_{\delta k_{y}}({\bm{k}}+\delta k_{x}\hat{x})-\ln U_{\delta k_{y}}({\bm{k}})\right]+2\pi in({\bm{k}}).$ (4)
Here, $|n({\bm{k}})|\leq 2$ is an integer-valued field that acquires a non-
zero value whenever $\mathcal{F}({\bm{k}})$ moves out of the principal branch
of the logarithm. Precisely where and how often this occurs depends
sensitively on the measurement and its associated noise. Inserting Eq. (4) into
the expression for the Chern number, Eq. (2), one finds:
$\mathcal{C}=\sum_{{\bm{k}}}n({\bm{k}}).$ (5)
From the measured overlaps $U_{\delta{\bm{k}}}({\bm{k}})$, we can extract the
distribution of $n({\bm{k}})$ for three typical $\mu$ values (denoted by the
blue stars in Fig. 3(a)), and we show them in Fig. 3(c)-(e). The sum of
the integer-valued field yields results which are consistent with Eq.(5). Note
that each calculation will yield different integer fields, but these always
conspire to achieve the correct net value when summed Fukui_JPSJ_2005 .
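Equations (4)-(5) can be checked numerically. The sketch below (a classical emulation of ours, with exact eigenvectors standing in for measured overlaps) extracts the integer field $n({\bm{k}})$ from the principal-branch logarithms and confirms that its sum reproduces the Chern number.

```python
import cmath
import math

def lower_state(kx, ky, mu):
    """Normalized lower-band eigenvector of Eq. (1) with t = Delta = 1."""
    dx, dy = math.sin(ky), math.sin(kx)
    dz = -(math.cos(kx) + math.cos(ky) + mu)
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    v = (-(dx - 1j * dy), dz + r)
    if abs(v[0]) ** 2 + abs(v[1]) ** 2 < 1e-12:
        v = (dz - r, dx + 1j * dy)
    nrm = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    return (v[0] / nrm, v[1] / nrm)

def integer_field(mu, n=8):
    """n(k) of Eq. (4) and the Chern number of Eq. (2) on an n x n mesh."""
    psi = [[lower_state(2 * math.pi * i / n, 2 * math.pi * j / n, mu)
            for j in range(n)] for i in range(n)]
    def link(i1, j1, i2, j2):
        a, b = psi[i1 % n][j1 % n], psi[i2 % n][j2 % n]
        z = a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
        return z / abs(z)
    total, nfield = 0.0, []
    for i in range(n):
        for j in range(n):
            ux, uy_x = link(i, j, i + 1, j), link(i + 1, j, i + 1, j + 1)
            ux_y, uy = link(i, j + 1, i + 1, j + 1), link(i, j, i, j + 1)
            F = cmath.log(ux * uy_x / ux_y / uy).imag          # Eq. (3)
            bracket = (cmath.phase(ux) - cmath.phase(ux_y)
                       + cmath.phase(uy_x) - cmath.phase(uy))  # logs in Eq. (4)
            nk = round((F - bracket) / (2 * math.pi))
            nfield.append(nk)
            total += F
    return nfield, round(total / (2 * math.pi))

nfield, chern = integer_field(1.0)
```

The bracketed terms in Eq. (4) telescope to zero when summed over the torus, so $\mathcal{C}=\sum_{\bm{k}}n({\bm{k}})$ holds identically in this emulation.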
Winding of the Zak phase —
Figure 4: Qualitative robustness of Zak phase on NISQ machines: the winding of
Zak phase $\varphi(k_{y})$ along $k_{y}$ measured by IBMQ-Toronto: (a)
topological phase with $\mu=1.9$; (b) trivial phase with $\mu=2.1$. For the
comparison, the winding of Zak phase was measured by noisy simulations and
shown in (c) for $\mu=1.9$ and (d) for $\mu=2.1$. In the noisy simulations,
the gate noises were chosen to be $\epsilon_{1}=0.008$ and
$\epsilon_{2}=0.08$. Each overlap $U_{\delta{\bm{k}}}({\bm{k}})$ measured by
IBMQ-Toronto and noisy simulations were obtained with $N=5120$ shots, and
$N_{L}=8$ grids point were used for each loop $\mathcal{L}(k_{y})$ to obtain
the Zak phase.
Another way to determine the topology of the wavefunction is to measure the
winding of the Zak phase, which is defined by parallel transport of the
wavefunction in only one direction of the Brillouin zone (as opposed to
measuring the transport summed over the entire Brillouin zone for the Chern
number). The Zak phase measures the holonomy of the wavefunction in one
direction and can be written in terms of normalized overlap $U_{\delta
k_{x}}({\bm{k}})$ as:
$\varphi(k_{y})=\Im m\ln\prod_{{\bm{k}}\in\mathcal{L}(k_{y})}U_{\delta
k_{x}}({\bm{k}}),$ (6)
where $\mathcal{L}(k_{y})$ is the loop for a fixed $k_{y}$ along the
$k_{x}$-direction. Then, as illustrated in Fig. 1, the winding of the Zak
phase along the $k_{y}$-direction relates to the Chern number as follows:
$\mathcal{C}=\frac{1}{2\pi}\oint dk_{y}\frac{d\varphi(k_{y})}{dk_{y}}.$ (7)
To obtain this from the quantum computer, we measure the normalized overlap
$U_{\delta k_{x}}({\bm{k}})$ by using the quantum circuits shown in Fig. 2,
from which we can obtain the Zak phase for a particular $k_{y}$. In Fig. 4(a)
and (b), the Zak phases $\varphi(k_{y})$ obtained from $5$ independent
runs on IBMQ-Toronto are shown as functions of $k_{y}$ for a typical
topological state (panel (a) with $\mu=1.9$) and a typical trivial state
(panel (b) with $\mu=2.1$), respectively. Here, the results from the NISQ machines
do not fall exactly on the exact results (the blue curves), but the results
from quantum computers do capture the main features of the two topologically
distinct phases. That is, for the topological state, a sharp change at the
high-symmetry point $k_{y}=\pi$ (see Fig. 4(a)) can be identified and
signifies the non-trivial winding of the Zak phase along $k_{y}$, while this
sharp change is absent for the trivial phase as illustrated in Fig. 4(b). For
comparison, we also performed noisy simulations with $\epsilon_{1}=0.008$ and
$\epsilon_{2}=10\epsilon_{1}$ for the same parameters. The results of $5$
independent trials for the topological state with $\mu=1.9$ (non-trivial) and
$\mu=2.1$ (trivial) are shown in Fig. 4(c) and (d), respectively. It can
be clearly observed that the noisy simulation results are quite similar to
those from the IBMQ-Toronto quantum computer. These results from the quantum
computer and the noisy simulations suggest that the topology of wavefunctions
can also be successfully identified by measuring the winding of the Zak phase.
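The Zak-phase winding can likewise be emulated classically. In the sketch below (our helper functions; exact eigenvectors replace measured overlaps), we use $\mu=1.0$ and $\mu=3.0$, away from the transitions, and a denser mesh than the hardware runs, purely for numerical robustness of this classical check.

```python
import cmath
import math

def lower_state(kx, ky, mu):
    """Normalized lower-band eigenvector of Eq. (1) with t = Delta = 1."""
    dx, dy = math.sin(ky), math.sin(kx)
    dz = -(math.cos(kx) + math.cos(ky) + mu)
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    v = (-(dx - 1j * dy), dz + r)
    if abs(v[0]) ** 2 + abs(v[1]) ** 2 < 1e-12:
        v = (dz - r, dx + 1j * dy)
    nrm = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    return (v[0] / nrm, v[1] / nrm)

def zak_phases(mu, n=32):
    """phi(k_y) of Eq. (6): phase of the product of normalized overlaps
    around each closed k_x loop (gauge invariant for a closed loop)."""
    psi = [[lower_state(2 * math.pi * i / n, 2 * math.pi * j / n, mu)
            for j in range(n)] for i in range(n)]
    phases = []
    for j in range(n):
        prod = 1.0 + 0j
        for i in range(n):
            a, b = psi[i][j], psi[(i + 1) % n][j]
            z = a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
            prod *= z / abs(z)
        phases.append(cmath.phase(prod))
    return phases

def zak_winding(mu, n=32):
    """Chern number from the winding of phi(k_y), Eq. (7), using
    principal-branch phase differences between neighboring k_y points."""
    phi = zak_phases(mu, n)
    w = sum(cmath.phase(cmath.exp(1j * (phi[(j + 1) % n] - phi[j])))
            for j in range(n))
    return round(w / (2 * math.pi))
```

A non-zero winding signals the topological phase; the trivial phase gives zero.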
Ensemble geometric phase — Thus far we have demonstrated that the topology of
the lowest lying energy band can be measured on NISQ machines. Here we further
demonstrate that the quantum circuits proposed in Fig. 2 can be used to
measure the topology of mixed states; these may arise from a finite
temperature or by being driven out of equilibrium. The many-body
generalization of the Zak phase in this case is the ensemble geometric phase
$\varphi_{E}$ introduced by Bardyn et al. Bardyn_PRX_2018 . Unlike other
proposals, such as the Uhlmann phase Huang_PRL_2014 ; Viyuela_PRL_2014a ;
Viyuela_PRL_2014b , the ensemble geometric phase reduces to the Zak phase in
the thermodynamic limit. The ensemble geometric phase is obtained from a
“fictitious Hamiltonian $G$,” which for Bloch states coincides with the real
Hamiltonian scaled by $\beta=1/T$:
$G=\sum_{\bm{k}}G_{\bm{k}}c^{\dagger}_{\bm{k}}c_{\bm{k}}=\sum_{\bm{k}}\beta
H_{\bm{k}}c^{\dagger}_{\bm{k}}c_{\bm{k}}.$ (8)
The ensemble geometric phase is obtained from the rotation matrices
$\mathcal{U}_{{\bm{k}}}$ that diagonalize $G_{\bm{k}}$ Bardyn_PRX_2018 :
$\varphi_{E}(k_{y})=\Im m\left[\ln\mathrm{det}\left(1+M_{T}\right)\right],$
(9)
where the matrix $M_{T}$ is constructed as follows: first create
$B_{\bm{k}}=\mathrm{diag}_{s}(\lambda_{{\bm{k}},s}):=\mathcal{U}_{\bm{k}}^{\dagger}G_{\bm{k}}\mathcal{U}_{{\bm{k}}},$
(10)
which is the diagonal matrix of eigenvalues $\lambda_{{\bm{k}},s}$ of
$G_{\bm{k}}$, and then form
$M_{T}=(-1)^{N_{L}+1}\prod_{{\bm{k}}\in\mathcal{L}(k_{y})}e^{-B_{\bm{k}}}\mathcal{U}_{{\bm{k}}+\delta
k_{x}\hat{x}}^{\dagger}\mathcal{U}_{{\bm{k}}}.$ (11)
Here $N_{L}$ is the number of grid points along the loop $\mathcal{L}(k_{y})$.
In the evaluation of $M_{T}$, the important component is the product
$\mathcal{U}_{{\bm{k}}+\delta k_{x}\hat{x}}^{\dagger}\mathcal{U}_{{\bm{k}}}$,
which includes the parallel transport of both intraband and interband
wavefunctions. For the model of a chiral $p$-wave superconductor, these
parallel transports can be measured by the quantum circuits shown in Fig. 2(b)
and (c).
The factor $e^{-B_{\bm{k}}}=\mathrm{diag}_{s}(e^{-\lambda_{{\bm{k}},s}})$
along the loop $\mathcal{L}(k_{y})$ has crucial effects on mixed states. The
lowest purity band (separated from others by a purity gap) has the highest
weight according to the factor $e^{-B_{\bm{k}}}$. Therefore, if the number of
grid points is large enough, the ensemble geometric phase approaches the Zak
phase. The deviation of the ensemble geometric phase $\varphi_{E}$ from the
Zak phase $\varphi$ is proportional to $(T/N_{L})^{2}$ Bardyn_PRX_2018 .
Therefore, for small $N_{L}$ the topology of the density matrix is correctly
measured as long as $T$ is small compared to the energy gap.
We demonstrate the measurement of the ensemble geometric phase by noisy
simulations, which (as was shown in Figs. 3 and 4) have results compatible
with those from real quantum hardware. The winding of the ensemble geometric
phase for the topological phase with $\mu=1.9$ and for the trivial phase with
$\mu=2.1$ is shown in Fig. 5(a) and (b), respectively. Indeed, the results are
similar to that of the Zak phase as expected, and likewise the two phases with
different topology can be qualitatively distinguished. Note that the challenge
with performing this calculation on a NISQ machine is actually the initial
thermal state preparation at low temperature.
Figure 5: Qualitative robustness of the ensemble geometric phase illustrated
by noise simulations: (a) the winding of ensemble geometric phase
$\varphi_{E}(k_{y})$ along $k_{y}$ for (a) topological phase with $\mu=1.9$
and (b) trivial phase with $\mu=2.1$. The number of grid for the loop
$\mathcal{L}(k_{y})$ is $N_{L}=8$ and the inverse temperature $\beta=2.1$. In
this simulation, the gate noises were chosen to be $\epsilon_{1}=0.008$ and
$\epsilon_{2}=0.08$, and $N=5120$ shots were used to obtain each element of
$\mathcal{U}_{{\bm{k}}+\delta k_{x}\hat{x}}^{\dagger}\mathcal{U}_{{\bm{k}}}$
in Eq.(11).
Discussion and conclusion — Although we illustrate the application of our
method to a non-interacting model, whose wavefunction can be easily prepared,
the principles stated here can be generally applied to interacting topological
models. Following the discussion in Ref. Niu_prb_1985 , the topology of a wave
function can be measured in a real-space calculation by examining the evolution
of the wave function under the variation of a twisted boundary condition
angle, even when the system is interacting. Taking $2D$ systems as an example,
as we considered in this work, the two twist angles at the boundaries along
the $x$ and $y$ directions $(\theta_{x},\theta_{y})$ form a 2-torus exactly
like the $2$-torus formed by $(k_{x},k_{y})$ of the chiral $p$-wave
superconductor. As such, the parallel transport for the more complex models is
simple to achieve, and all the machinery developed in this work can be applied
straightforwardly to measure the wavefunction topology on quantum hardware.
A comparison between our method and the recently proposed scheme
Cian_arXiv_2020 to measure the many-body Chern number by randomized
measurements is in order. First, the randomized measurement scheme is based on
decomposing the wavefunction manifold and requires the preparation of two
copies of the wavefunction for a given set of parameters in the base space. In
contrast, in our algorithm, we only use one copy. Second, the many-body Chern
number in Ref. Cian_arXiv_2020 is inferred from the winding of the measured
expectation of the SWAP applied to the two copies of the wavefunction after
‘surgery’, while our method can either measure the Chern number directly or
infer it from the winding of the Zak phase. More importantly, as we have shown
here, although it is not difficult to qualitatively identify different winding
properties on NISQ hardware, it is generally difficult to do so
quantitatively. The randomized measurement scheme has yet to be carried out on
NISQ machines.
In conclusion, by measuring the Chern number, the winding of the Zak phase and
the winding of the ensemble geometric phase for chiral $p$-wave
superconductors, we demonstrated how the topology of a wavefunction can be
measured, and that it is robust against operation errors of quantum computers
in this NISQ era. Our work provides a general scheme to investigate various
topologically ordered systems on current quantum hardware.
## Acknowledgments
We acknowledge helpful discussions with Michael Geller. This work was
supported by the Department of Energy, Office of Basic Energy Sciences,
Division of Materials Sciences and Engineering under Grant No. DE-SC0019469.
J.K.F. was also supported by the McDevitt bequest at Georgetown. We
acknowledge use of the IBM Q for this work. The views expressed are those of
the authors and do not reflect the official policy or position of IBM or the
IBM Q team. Access to the IBM Q Network was obtained through the IBM Q Hub at
NC State. We acknowledge the use of the QISKIT software package qiskit2019
for performing the quantum simulations.
## References
* (1) D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Phys. Rev. Lett. 49, 105 (1982).
* (2) Q. Niu, D. J. Thouless, and Y.-S. Wu, Phys. Rev. B 31, 3372 (1985).
* (3) D. N. Sheng, L. Sheng, and Z. Y. Weng, Phys. Rev. B 73, 233406 (2006).
* (4) H. Obuse, A. Furusaki, S. Ryu, and C. Mudry, Phys. Rev. B 76, 075301 (2007).
* (5) J. Li, R.-L. Chu, J. K. Jain, and S.-Q. Shen, Phys. Rev. Lett. 102, 136806 (2009).
* (6) E. Prodan, J. Phys. A: Math. Theor. 44, 113001 (2011).
* (7) J. T. Chalker, M. Ortuno, and A. M. Somoza, Phys. Rev. B 83, 115317 (2011).
* (8) J. Liu, A. C. Potter, K. T. Law, and P. A. Lee, Phys. Rev. Lett. 109, 267002 (2012).
* (9) A. M. Lobos, R. M. Lutchyn, and S. Das Sarma, Phys. Rev. Lett. 109, 146403 (2012).
* (10) E. J. Konig, P. M. Ostrovsky, I. V. Protopopov, I. V. Gornyi, I. S. Burmistrov, and A. D. Mirlin Phys. Rev. B 88, 035106 (2013).
* (11) A. Altland, D. Bagrets, L. Fritz, A. Kamenev, and H. Schmiedt, Phys. Rev. Lett. 112, 206602 (2014).
* (12) I. M. Shem, T. L. Hughes, J. Song, and E. Prodan, Phys. Rev. Lett. 113, 046802 (2014).
* (13) J. Song, and E. Prodan, Phys. Rev. B 89, 224203 (2014).
* (14) M. Foster, H.-Y. Xie, and Y.-Z. Chou, Phys. Rev. B 89, 155140 (2014).
* (15) J. Wang, B. Liao, and S.-C. Zhang, Phys. Rev. B 89, 085106 (2014).
* (16) C. Liu, W. Gao, B. Yang, and S. Zhang, Phys. Rev. Lett. 119, 183901 (2017).
* (17) E. J. Meier, F. A. An, A. Dauphin, M. Maffei, P. Massignan, T. L. Hughes, B. Gadway, Science 362, 929 (2018).
* (18) S. Stutzer, Y. Plotnik, Y. Lumer, P. Titum, N. Linder, M. Segev, M. C. Rechtsman, and A. Szameit, Nature 560, 461 (2018).
* (19) X. Xiao, arXiv:1802.02687 (2018).
* (20) O. Shtanko, and R. Movassagh, Phys. Rev. Lett. 121, 126803 (2018).
* (21) T. Okugawa, P. Tang, A. Rubio, and D. Kennes, Phys. Rev. B 102, 201405(R) (2020).
* (22) P. Roushan, C. Neill, Yu Chen, M. Kolodrubetz, C. Quintana, N. Leung, M. Fang, R. Barends, B. Campbell, Z. Chen, B. Chiaro, A. Dunsworth, E. Jeffrey, J. Kelly, A. Megrant, J. Mutus, P. J. J. O’Malley, D. Sank, A. Vainsencher, J. Wenner, T. White, A. Polkovnikov, A. N. Cleland and J. M. Martinis Nature 515, 241 (2014).
* (23) K. Choo, C. W. von Keyserlingk, N. Regnault, and T. Neupert, Phys. Rev. Lett. 121, 086808 (2018).
* (24) A. Smith, B. Jobst, A. G. Green, and F. Pollmann, arXiv:1910.05351 (2019).
* (25) D. Azses, R. Haenel, Y. Naveh, R. Raussendorf, E. Sela, and E. G. D Torre, Phys. Rev. Lett. 125, 120502 (2020).
* (26) F. Mei, Q. Guo, Y.-F. Yu, L. Xiao, S.-L. Zhu, and S. Jia, Phys. Rev. Lett. 125, 160503 (2020).
* (27) X. Xiao, J. K. Freericks, and A. F. Kemper, arXiv:2006.05524 (2020).
* (28) E. Flurin, V. V. Ramasesh, S. Hacohen-Gourgy, L. S. Martin, N. Y. Yao, and I. Siddiqi, Phys. Rev. X 7, 031023 (2017).
* (29) X. Zhan, L. Xiao, Z. Bian, K. Wang, X. Qiu, B. Sanders, W. Yi, and P. Xue, Phys. Rev. Lett. 119, 130501 (2017).
* (30) X.-Y. Xu, Q.-Q. Wang, W.-W. Pan, K. Sun, J.-S. Xu, G. Chen, J.-S. Tang, M. Gong, Y.-J. Han, C.-F. Li, and G.-C. Guo, Phys. Rev. Lett. 120, 260501 (2018).
* (31) A. Elben, J. Yu, G. Zhu, M. Hafezi, F. Pollmann, P. Zoller, and B. Vermersch, Sci. Adv. 6, eaaz3666 (2020).
* (32) J. Preskill, Quantum 2, 79 (2018).
* (33) A. Kandala, K. Temme, A. D. Corcoles, A. Mezzacapo, J. M. Chow, and J. M. Gambetta, Nature 567, 491 (2019).
* (34) K. E. Hamilton, and R. C. Pooser, Quantum Machine Intelligence 2, 10 (2020).
* (35) A. Cervera-Lierta, Quantum 2, 114 (2018).
* (36) K. Yeter-Aydeniz, R. C. Pooser, and G. Siopsis, npj Quantum Inf. 6, 63 (2020).
* (37) H. R. Grimsley, S. E. Economou, E. Barnes, and N. J. Mayhall, Nat. Commun. 10, 3007 (2019).
* (38) G. E. Volovik, JETP Lett. 70, 609 (1999).
* (39) N. Read, and D. Green, Phys. Rev. B 61, 10267 (2000).
* (40) See the appendix for the details of quantum circuit constructions for the chiral $p$-wave superconducting model.
* (41) T. Fukui, Y. Hutsugai, and H. Suzuki, J. Phys. Soc. Jpn. 74, 1674 (2005).
* (42) IBM Q team, “IBM Q Toronto backend specification v1.0.7,” retrieved from https://quantum-computing.ibm.com (2020).
* (43) C.-E. Bardyn, L. Wawer, A. Altland, M. Fleischhauer, and S. Diehl, Phys. Rev. X 8, 011035 (2018).
* (44) Z. Huang, and D. P. Arovas, Phys. Rev. Lett. 113, 076407 (2014).
* (45) O. Viyuela, A. Rivasand, and M. A. Martin-Delgado, Phys. Rev. Lett. 112, 130401 (2014).
* (46) O. Viyuela, A. Rivasand, and M. A. Martin-Delgado, Phys. Rev. Lett. 113, 076408 (2014).
* (47) Z.-P. Cian, H. Dehghani, A. Elben, B. Vermersch, G. Zhu, M. Barkeshli, P. Zoller, and M. Hafezi, arXiv:2005.13543 (2020).
* (48) G. Aleksandrowicz, et al. Qiskit: An open-source framework for quantum computing. https://doi.org/10.5281/ZENODO.2562111 (2019).
## Appendix A Details of quantum circuits construction for chiral $p$-wave
superconductors
We consider how to measure the overlap of the wavefunctions at neighboring mesh
points by the standard Hadamard test. To do so we denote the prepared state as
$|\Psi\rangle=|0\rangle\otimes|\psi\rangle$, where $|0\rangle$ is the initial
state of the ancilla and $|\psi\rangle$ is the wavefunction at one of the mesh
points in the BZ. We first apply the Hadamard gate to the ancilla, resulting
in the following product state:
$|\Psi\rangle=\frac{1}{\sqrt{2}}|0\rangle\otimes|\psi\rangle+\frac{1}{\sqrt{2}}|1\rangle\otimes|\psi\rangle.$
(12)
Then, we impose the controlled $\mathcal{U}$ operation, where $\mathcal{U}$
relates the wavefunctions at the neighboring mesh points. After applying the
operation, the state is in an entangled superposition given by
$|\Psi\rangle=\frac{1}{\sqrt{2}}|0\rangle\otimes|\psi\rangle+\frac{1}{\sqrt{2}}|1\rangle\otimes\mathcal{U}|\psi\rangle.$
(13)
Then we projectively measure the expectations of $\sigma_{x}\otimes I$ and
$\sigma_{y}\otimes I$, which give the real and imaginary parts of
the overlap:
$\langle\sigma_{x}\otimes I\rangle=\frac{1}{2}\left[\langle
0|\otimes\langle\psi|+\langle
1|\otimes\langle\psi|\mathcal{U}^{\dagger}\right]\sigma_{x}\otimes
I\left[|0\rangle\otimes|\psi\rangle+|1\rangle\otimes\mathcal{U}|\psi\rangle\right]=\frac{1}{2}\left[\langle\psi|\mathcal{U}|\psi\rangle+\langle\psi|\mathcal{U}^{\dagger}|\psi\rangle\right]=\Re
e\langle\psi|\mathcal{U}|\psi\rangle,$ (14)
and
$\langle\sigma_{y}\otimes I\rangle=\frac{1}{2}\left[\langle
0|\otimes\langle\psi|+\langle
1|\otimes\langle\psi|\mathcal{U}^{\dagger}\right]\sigma_{y}\otimes
I\left[|0\rangle\otimes|\psi\rangle+|1\rangle\otimes\mathcal{U}|\psi\rangle\right]=-\frac{i}{2}\left[\langle\psi|\mathcal{U}|\psi\rangle-\langle\psi|\mathcal{U}^{\dagger}|\psi\rangle\right]=\Im
m\langle\psi|\mathcal{U}|\psi\rangle.$ (15)
To complete the algorithm, two more steps remain: first, we need to determine
the initial state $|\psi\rangle$ and how to prepare it with a quantum circuit;
second, we need to determine the unitary operator $\mathcal{U}$ that evolves
the wavefunction between neighboring mesh points and how to realize it with
quantum circuits. We next answer these two questions in turn.
### A.1 Preparation of wave function
We begin from the Hamiltonian density given by Eq. (1) of the main text. The
full Hamiltonian can be written as:
$H=\sum_{{\bm{k}}}\left(\begin{array}[]{cc}c_{{\bm{k}}}^{\dagger}&d_{{\bm{k}}}^{\dagger}\end{array}\right)\mathcal{H}({\bm{k}})\left(\begin{array}[]{c}c_{{\bm{k}}}\\\
d_{{\bm{k}}}\end{array}\right),$ (16)
where $\mathcal{H}({\bm{k}})$ is given by Eq. (1). $\mathcal{H}({\bm{k}})$ has
the eigenvalues:
$E_{\pm}=\pm\sqrt{\Delta^{2}\left(\sin^{2}k_{y}+\sin^{2}k_{x}\right)+\left[t(\cos
k_{x}+\cos k_{y})+\mu\right]^{2}}.$ (17)
By setting $\Delta=t=1$, as is assumed in the main text, we find that the
gap between the two energy bands closes at $(k_{x}=0,k_{y}=0)$ for
$\mu=-2$, at $(k_{x}=\pm\pi,k_{y}=\pm\pi)$ for $\mu=2$, and at
$(k_{x}=0,k_{y}=\pm\pi)$ and $(k_{x}=\pm\pi,k_{y}=0)$ for $\mu=0$. These gap
closing points separate different topological phases.
We define angles $\theta({\bm{k}})$ and $\varphi({\bm{k}})$ determined at each
momentum point:
$\cos\theta=\frac{t(\cos k_{x}+\cos k_{y})+\mu}{E_{+}},$ (18)
and
$\cos\varphi=\frac{\Delta\sin
k_{y}}{\sqrt{\Delta^{2}\left(\sin^{2}k_{y}+\sin^{2}k_{x}\right)}},$ (19)
so that the corresponding eigenstates of $E_{\pm}$ can be written as:
$\Psi_{+}({\bm{k}})=\left(\begin{array}[]{c}\cos\theta/2\\\
\sin\theta/2e^{-i\varphi}\end{array}\right),~{}\Psi_{-}({\bm{k}})=\left(\begin{array}[]{c}-\sin\theta/2e^{i\varphi}\\\
\cos\theta/2\end{array}\right).$ (20)
This eigensolution indicates that:
$\mathrm{diag}\left(E_{+}({\bm{k}}),E_{-}({\bm{k}})\right)=V^{\dagger}({\bm{k}})\mathcal{H}({\bm{k}})V({\bm{k}}),$
(21)
where $V=\left[\Psi_{+},\Psi_{-}\right]$. We denote this diagonalized
representation formed by the eigenstates of $\mathcal{H}({\bm{k}})$ as the
band representation, and the corresponding annihilation operators for the
$E_{+}$ and $E_{-}$ bands are $f_{{\bm{k}}}$ and $g_{{\bm{k}}}$. They relate
to $c_{{\bm{k}}}$ and $d_{{\bm{k}}}$ as:
$\left(\begin{array}[]{c}f_{{\bm{k}}}\\\
g_{{\bm{k}}}\end{array}\right)=V^{\dagger}\left(\begin{array}[]{c}c_{{\bm{k}}}\\\
d_{{\bm{k}}}\end{array}\right).$ (22)
The topological invariant is calculated from the wavefunction in the Brillouin
zone. Therefore, we can begin from the diagonalized band representation. The
initial states are either $|0_{f}1_{g}\rangle$ for the valence band or
$|1_{f}0_{g}\rangle$ for the conducting band. The wavefunction in the
Brillouin zone can be constructed by the creation operators
$c_{{\bm{k}}}^{\dagger}$ and $d_{{\bm{k}}}^{\dagger}$ from vacuum. The
relation between $(f_{{\bm{k}}},g_{{\bm{k}}})$ and
$(c_{{\bm{k}}},d_{{\bm{k}}})$ is clear from Eq. (22), or more explicitly
$\begin{cases}c_{{\bm{k}}}^{\dagger}=\cos\frac{\theta}{2}f_{{\bm{k}}}^{\dagger}-\sin\frac{\theta}{2}e^{-i\varphi}g_{{\bm{k}}}^{\dagger},\\\
d_{{\bm{k}}}^{\dagger}=\sin\frac{\theta}{2}e^{i\varphi}f_{{\bm{k}}}^{\dagger}+\cos\frac{\theta}{2}g_{{\bm{k}}}^{\dagger}.\end{cases}$
(23)
Therefore, the following relation can be found that relates the two
representations by a unitary operator:
$\left(\begin{array}[]{c}|0_{c}0_{d}\rangle\\\ |1_{c}0_{d}\rangle\\\
|0_{c}1_{d}\rangle\\\
|1_{c}1_{d}\rangle\end{array}\right)=\left(\begin{array}[]{cccc}1&0&0&0\\\
0&\cos\frac{\theta}{2}&-\sin\frac{\theta}{2}e^{-i\varphi}&0\\\
0&\sin\frac{\theta}{2}e^{i\varphi}&\cos\frac{\theta}{2}&0\\\
0&0&0&1\end{array}\right)\left(\begin{array}[]{c}|0_{f}0_{g}\rangle\\\
|1_{f}0_{g}\rangle\\\ |0_{f}1_{g}\rangle\\\
|1_{f}1_{g}\rangle\end{array}\right)=U(\theta,\varphi)\left(\begin{array}[]{c}|0_{f}0_{g}\rangle\\\
|1_{f}0_{g}\rangle\\\ |0_{f}1_{g}\rangle\\\
|1_{f}1_{g}\rangle\end{array}\right).$ (24)
From this relation, we know that the state $|\psi\rangle$ is obtained by
applying $U$ on either $|1_{f}0_{g}\rangle$ for the conduction band or
$|0_{f}1_{g}\rangle$ for the valence band. This operation can be realized by
two $\mathrm{CNOT}$ gates and a controlled-$U_{3}$ gate:
$U(\theta,\varphi)=\mathrm{CNOT}[q_{1},q_{0}]\mathrm{CU}_{3}[q_{0},q_{1}](\vartheta=\theta,\lambda=-\varphi,\phi=\varphi)\mathrm{CNOT}[q_{1},q_{0}],$
(25)
where the first qubit in the bracket is the control qubit and the second one
is the target qubit. The matrix form of
$\mathrm{CU}_{3}[q_{0},q_{1}](\vartheta,\lambda,\phi)$ is:
$\mathrm{CU}_{3}[q_{0},q_{1}](\vartheta,\lambda,\phi)=\left(\begin{array}[]{cccc}1&0&0&0\\\
0&\cos\frac{\vartheta}{2}&0&-\sin\frac{\vartheta}{2}e^{i\lambda}\\\ 0&0&1&0\\\
0&e^{i\phi}\sin\frac{\vartheta}{2}&0&e^{i(\lambda+\phi)}\cos\frac{\vartheta}{2}\end{array}\right).$
(26)
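The decomposition in Eq. (25) can be verified by direct matrix multiplication. The sketch below (helper names are ours) works in the basis ordering of Eq. (24), with basis index $q_{0}+2q_{1}$, so that $\mathrm{CU}_{3}[q_{0},q_{1}]$ acts on the $q_{0}=1$ states and $\mathrm{CNOT}[q_{1},q_{0}]$ swaps the two $q_{1}=1$ states.

```python
import cmath
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def u_target(theta, phi):
    """The two-qubit rotation U(theta, phi) of Eq. (24)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[1, 0, 0, 0],
            [0, c, -s * cmath.exp(-1j * phi), 0],
            [0, s * cmath.exp(1j * phi), c, 0],
            [0, 0, 0, 1]]

def cu3(vt, lam, ph):
    """CU3[q0, q1](vartheta, lambda, phi) of Eq. (26): control q0, target q1."""
    c, s = math.cos(vt / 2), math.sin(vt / 2)
    return [[1, 0, 0, 0],
            [0, c, 0, -s * cmath.exp(1j * lam)],
            [0, 0, 1, 0],
            [0, cmath.exp(1j * ph) * s, 0, cmath.exp(1j * (lam + ph)) * c]]

# CNOT[q1, q0]: control q1, target q0 (swaps |q1=1,q0=0> and |q1=1,q0=1>).
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

theta, phi = 0.8, 0.5
lhs = matmul(CNOT, matmul(cu3(theta, -phi, phi), CNOT))  # Eq. (25)
err = max(abs(lhs[i][j] - u_target(theta, phi)[i][j])
          for i in range(4) for j in range(4))
```

Conjugating by the CNOT moves the nontrivial 2 x 2 block of the CU3 gate from the $\{|01\rangle,|11\rangle\}$ subspace into the single-occupation subspace $\{|01\rangle,|10\rangle\}$, reproducing Eq. (24).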
Figure 6: The realization of the controlled-controlled-unitary gate by two-
qubit gates, where $W$ fulfills the condition: $W^{2}=U$.
### A.2 Relating wavefunctions at neighboring mesh points
From the discussion in the last subsection, we understand that the state at a
particular point $(k,t)$ in the Brillouin zone can be prepared by
$U_{b}(\theta_{k,t},\varphi_{k,t})$ for the valence ($b=v$) or conduction
($b=c$) band. Therefore, the operation transforming the wavefunction at
$(k,t)$ to that at $(k^{\prime},t^{\prime})$ is
$\mathcal{U}_{b,b^{\prime}}(k^{\prime},t^{\prime};k,t)=U_{b}(\theta_{k^{\prime},t^{\prime}},\varphi_{k^{\prime},t^{\prime}})U_{b^{\prime}}^{\dagger}(\theta_{k,t},\varphi_{k,t}).$
(27)
To measure the overlap of the wavefunction, we use an ancilla qubit to control
the application of $\mathcal{U}(k^{\prime},t^{\prime};k,t)$ on the other two
qubits. This means that the equation above is modified to
$\mathrm{C}\mathcal{U}(k^{\prime},t^{\prime};k,t)=\mathrm{CU}(\theta_{k^{\prime},t^{\prime}},\varphi_{k^{\prime},t^{\prime}})\mathrm{CU}^{\dagger}(\theta_{k,t},\varphi_{k,t}).$
(28)
The extra letter $\mathrm{C}$ indicates that the two-qubit unitary operation
is controlled by an ancilla qubit. For each $\mathrm{CU}$ operation, we can
realize it by extending the two-qubit gate in Eq. (A7) as follows:
$\mathrm{CU}(\theta,\varphi)=\mathrm{CCX}[q_{0},q_{2},q_{1}]\mathrm{CCU}_{3}[q_{0},q_{1},q_{2}](\vartheta=\theta,\lambda=-\varphi,\phi=\varphi)\mathrm{CCX}[q_{0},q_{2},q_{1}],$
(29)
where the first two qubits in the bracket are the controlled qubits and the
last one is the target qubit. The $\mathrm{CCX}$ is the well-known Toffoli
gate, and we just need to construct the $\mathrm{CCU}_{3}$ gate. It turns out
that the $\mathrm{CCU}_{3}$ gate can be realized by the circuit shown in Fig.
6. Using these components, the quantum circuits for the chiral $p$-wave
superconductors shown in Fig. 2(b) and (c) can be constructed, and the
circuit to measure each overlap for the chiral $p$-wave superconducting model
requires $\sim 60$ CNOT gates on IBMQ machines.
# Generalized Alternating Projections on Manifolds and Convex Sets
Mattias Fält, Pontus Giselsson
###### Abstract
In this paper we extend the previous convergence results for the generalized
alternating projection method from subspaces [19] to smooth manifolds. We
show that locally the method behaves in the same way, with the same rate, as
predicted in [19]. The goal is to get closer to a rate for general convex
sets, for which convergence, but not a rate, is known. If a finite
identification property can be shown for two convex sets, reducing the
problem to locally smooth manifolds, then the rates from this paper also
apply to those sets. We present a few examples where this is the case, as
well as a counterexample where it is not.
## 1 Introduction
The problem of finding a point in the intersection of sets has a long history
with many proposed algorithms. They generally rely on successive projections
onto the respective sets. The method of alternating projections (MAP, or AP)
was famously studied by von Neumann [34] for the case of two subspaces, and
has a wide range of applications [14]. Many variants have been suggested and
shown to converge in the case of convex sets, for example using relaxed
projections [1, 32, 12, 21], Dykstra’s algorithm [11], Douglas–Rachford
splitting [16, 30], and its dual algorithm ADMM [20, 10].
Many results on the linear convergence rates of these algorithms have been
shown and are generally stated either as a function of a regularity constant,
or as a function of the smallest angle between the sets, which in the case of
affine sets is known as the Friedrichs angle $\theta_{F}$. In the case of two
subspaces, the method of alternating projections was shown to converge with
the linear rate $\cos^{2}(\theta_{F})$ [15], and the Douglas–Rachford method
with rate $\cos(\theta_{F})$ [5]. In [6], the authors studied a few methods
with relaxed projections and the optimal rates with respect to the relaxation
parameters were found. The generalized alternating projection (GAP), which
generalizes most of the algorithms above by allowing several relaxation
parameters, was studied in [19], and it was shown that the faster rate
$\frac{1-\sin\theta_{F}}{1+\sin\theta_{F}}$ is achievable with the right
parameters. It was also shown that, under general assumptions, this is the
best possible rate for this generalization.
When it comes to general convex sets, local linear convergence of these
algorithms is not guaranteed. Several different assumptions on the
intersection between the sets have been proposed and shown to be sufficient.
Some of these assumptions include linear regularity or bounded linear
regularity, see for example [26, 3]. An overview on set regularities can be
found in [24].
Under sub-transversality assumptions on two convex sets, the R-linear rate
presented in [31] yields a $\cos(\theta_{F}/2)$ contraction rate for the
Douglas–Rachford algorithm when translated to the subspace setting.
For general non-convex sets, convergence to a feasible point can not be
guaranteed, and instead local convergence is studied. For the alternating
projections method, different types of regularity have been shown to be
sufficient for local linear convergence [26, 8, 7, 33].
For the alternating projections algorithm, the results in [26] for possibly
non-convex super-regular sets with linearly regular intersection translate to
the known optimal rate of $\cos^{2}(\theta_{F})$ when applied to subspaces.
In [17], the authors showed that a transversality property can be used to
guarantee local linear convergence. However, both the assumptions and rates
presented in this paper are quite conservative. For example, in the case of
two subspaces, the rate presented in [17] translates to
$\cos^{2}(\theta_{F}/2)$ which is considerably worse than the known
contraction rate $\cos(\theta_{F})$ and the local linear rate
$\cos^{2}(\theta_{F})$. Among the few known results for the relaxed versions
of alternating projections, local linear convergence was shown for the MARP
algorithm in [9] under different regularity assumptions. However, this paper
assumes that the projections are under-relaxed, which was shown in [19] to
result in sub-optimal local rates.
One approach to show local convergence rates for general convex sets is by
showing that the algorithms eventually project onto subsets that have nicer
properties, i.e. that the algorithm identifies these subsets in finite time.
This can be done by partitioning the boundary of sets into a collection of
smooth and open manifolds, and then studying the algorithm on these manifolds.
There has been a lot of research into these identification properties for
various algorithms, see for example [23, 28, 29]. However, as far as the
authors know, none of these results apply to projection methods on feasibility
problems. The fundamental problem seems to be that gradients are vanishing at
any feasible point when a feasibility problem is reformulated as an
optimization problem, so the regularity assumptions are therefore not
satisfied.
However, for specific problems it can sometimes be known that the algorithm
will identify such surfaces, for example when the entire boundary is a smooth
manifold, or when the algorithm is known to converge to the relative interior
of one of the manifolds.
In [27], the authors study alternating projections in the setting of two
smooth manifolds and show that the problem can locally be approximated by
affine sets. They prove that the convergence rates known from affine sets
translate to local linear rates in this setting under a transversality
condition. A similar result is found in [2] under slightly relaxed
assumptions.
In this paper, we study the same setting for the generalized alternating
projections algorithm. We show that the weaker assumption in [2] is sufficient
to show local linear convergence of the generalized alternating projections
method on smooth manifolds. Moreover, we show that the optimal rates and
parameters from [19] translate to this setting. Furthermore, the local linear
rate is strict since affine sets are a special case of smooth manifolds.
Lastly, we provide some classes of convex sets where this result can be used
to prove the convergence rate, as well as one counter-example where we
illustrate that even in the setting of polyhedral sets and the presence of
regularity, the problem can not always be locally reduced to that of affine
sets, as is the case for alternating projections.
## 2 Notation
We denote the identity operator by $I$ and the operator norm by $\|\cdot\|$.
For a matrix $A$ we let $\Lambda(A)$ be the set of eigenvalues and
$\rho(A)\coloneqq\max_{\lambda\in\Lambda(A)}|\lambda|$ the spectral radius. If
the limit $\lim_{k\rightarrow\infty}A^{k}$ exists, we denote it by
$A^{\infty}$ and define $\sigma(A)\coloneqq\|A-A^{\infty}\|$. For a vector
$v\in\mathbb{R}^{n}$ we also denote the vector norm by
$\|v\|\coloneqq\sqrt{\langle v,v\rangle}$. The Jacobian of a function $F$ at a
point $x$ is denoted by $\mathrm{J}_{F}(x)$. We denote the closed ball around
a point $x\in\mathbb{R}^{n}$ with radius $\delta$, i.e.
$\{y\in\mathbb{R}^{n}\mid\|x-y\|\leq\delta\}$, by
${\mathcal{B}}_{\delta}(x)$ and the open ball
$\{y\in\mathbb{R}^{n}\mid\|x-y\|<\delta\}$ by $\mathcal{B}^{o}_{\delta}(x)$.
## 3 Preliminaries
###### Definition 1 (Projection)
The projection of an element $x\in\mathbb{R}^{n}$ onto a closed, non-empty
subset $C\subset\mathbb{R}^{n}$ is defined by
$\displaystyle\Pi_{C}(x)\coloneqq\mathop{\rm argmin}_{y\in C}\|x-y\|$
when the argmin is unique.
###### Definition 2 (Relaxed Projection)
Let the relaxed projection onto a closed, non-empty subset
$C\subset\mathbb{R}^{n}$, with relaxation parameter $\alpha$, be defined as
$\displaystyle\Pi_{C}^{\alpha}\coloneqq(1-\alpha)I+\alpha\Pi_{C}.$
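As an illustration (ours, not from the text), the relaxed projection can be sketched for the closed Euclidean ball, where the projection has a simple closed form; $\alpha=1$ recovers the plain projection and $\alpha=2$ the reflection:

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Projection onto the closed Euclidean ball of given radius (Definition 1)."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def relaxed_proj(proj, x, alpha):
    """Relaxed projection of Definition 2: (1 - alpha) x + alpha Pi_C(x)."""
    return (1 - alpha) * x + alpha * proj(x)

x = np.array([3.0, 4.0])                 # ||x|| = 5, outside the unit ball
print(relaxed_proj(proj_ball, x, 1.0))   # plain projection: [0.6, 0.8]
print(relaxed_proj(proj_ball, x, 2.0))   # reflection: [-1.8, -2.4]
```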
### 3.1 Subspaces
In this section we introduce some basic properties of subspaces that will be
useful in the study of the local properties of manifolds.
###### Definition 3
The _principal angles_ $\theta_{k}\in[0,\pi/2],\,k=1,\dots,p$ between two
subspaces ${\mathcal{U}},{\mathcal{V}}\subset\mathbb{R}^{n}$, where
$p=\min(\dim{\mathcal{U}},\dim{\mathcal{V}})$, are recursively defined by
$\displaystyle\cos\theta_{k}\,$ $\displaystyle\coloneqq$
$\displaystyle\max_{u_{k}\in{\mathcal{U}},\,v_{k}\in{\mathcal{V}}}\left\langle
u_{k},v_{k}\right\rangle$ s.t. $\displaystyle\left\lVert
u_{k}\right\rVert=\left\lVert v_{k}\right\rVert=1,$ $\displaystyle\left\langle
u_{k},v_{i}\right\rangle=\left\langle
u_{i},v_{k}\right\rangle=0,\forall\,i=1,\ldots,k-1.$
###### Fact 1
[6, Def 3.1, Prop 3.3] The principal angles are unique and satisfy
$0\leq\theta_{1}\leq\theta_{2}\leq\dots\leq\theta_{p}\leq\pi/2$. The angle
$\theta_{F}\coloneqq\theta_{s+1}$, where
$s=\text{dim}({\mathcal{U}}\cap{\mathcal{V}})$, is the _Friedrichs angle_ and
it is the smallest non-zero principal angle.
The cosine of the Friedrichs angle occurs naturally in many convergence rate
results and is denoted as follows.
###### Definition 4
Given two subspaces $\,{\mathcal{U}},{\mathcal{V}}\subset\mathbb{R}^{n}$, with
Friedrichs angle $\theta_{F}$, we denote its cosine as
$\displaystyle c({\mathcal{U}},{\mathcal{V}}):=\cos(\theta_{F}).$
We see that $\theta_{i}=0$ if and only if $i\leq s$, where
$s=\text{dim}({\mathcal{U}}\cap{\mathcal{V}})$, so $\theta_{F}$ is well
defined whenever
$\min(\dim{\mathcal{U}},\dim{\mathcal{V}})=p>s=\dim({\mathcal{U}}\cap{\mathcal{V}})$,
i.e. when no subspace is contained in the other.
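Numerically, the cosines of the principal angles are the singular values of $Q_{\mathcal{U}}^{*}Q_{\mathcal{V}}$, where $Q_{\mathcal{U}},Q_{\mathcal{V}}$ are orthonormal bases; a sketch in Python (the example subspaces are ours, chosen so that $s=1$ and $\theta_{F}=\pi/4$):

```python
import numpy as np

def principal_angles(U, V):
    """Principal angles (Definition 3) between the column spaces of U and V.
    Their cosines are the singular values of Qu^T Qv for orthonormal bases."""
    Qu, _ = np.linalg.qr(U)
    Qv, _ = np.linalg.qr(V)
    sv = np.linalg.svd(Qu.T @ Qv, compute_uv=False)
    return np.arccos(np.clip(sv, 0.0, 1.0))  # increasing: theta_1 <= ... <= theta_p

# Two planes in R^3 sharing the x-axis, tilted by pi/4: s = 1, theta_F = theta_2
b = np.pi / 4
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])              # xy-plane
V = np.array([[1.0, 0.0], [0.0, np.cos(b)], [0.0, np.sin(b)]])  # tilted plane
theta = principal_angles(U, V)
print(theta)  # [0, pi/4]: theta_1 = 0 (shared direction), Friedrichs angle pi/4
```

The clipping guards against singular values slightly above one due to rounding; since $s=\dim({\mathcal{U}}\cap{\mathcal{V}})=1$, the Friedrichs angle is $\theta_{2}$.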
###### Definition 5
$A\in\mathbb{R}^{n\times n}$ is linearly convergent to $A^{\infty}$ with
linear convergence rate $\mu\in[0,1)$ if there exist $M,N>0$ such that
$\left\lVert A^{k}-A^{\infty}\right\rVert\leq M\mu^{k}\quad\forall
k>N,\,k\in\mathbb{N}.$
###### Definition 6
[6, Fact 2.3] For $A\in\mathbb{R}^{n\times n}$ we say that
$\lambda\in\Lambda(A)$ is _semisimple_ if $\text{ker}(A-\lambda
I)=\text{ker}(A-\lambda I)^{2}.$
###### Fact 2
[6, Fact 2.4] For $A\in\mathbb{R}^{n\times n}$, the limit
$A^{\infty}\coloneqq\lim_{k\rightarrow\infty}A^{k}$ exists if and only if
* •
$\rho(A)<1$ or
* •
$\rho(A)=1$ and $\lambda=1$ is semisimple and the only eigenvalue on the unit
circle.
###### Definition 7
[6, Def. 2.10] Let $A\in\mathbb{R}^{n\times n}$ be a matrix with $\rho(A)\leq
1$ and define
$\gamma(A)\coloneqq\max\left\{|\lambda|\,\mid\,\lambda\in\{0\}\cup\Lambda(A)\setminus\{1\}\right\}.$
Then $\lambda\in\Lambda(A)$ is a _subdominant eigenvalue_ if
$|\lambda|=\gamma(A)$.
###### Fact 3
[6, Thm. 2.12] If $A\in\mathbb{R}^{n\times n}$ is convergent to $A^{\infty}$
then
* •
$A$ is linearly convergent with any rate $\mu\in(\gamma(A),1)$
* •
If $A$ is linearly convergent with rate $\mu\in[0,1)$, then
$\mu\in[\gamma(A),1)$.
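Fact 3 can be illustrated numerically (an example of ours): for a matrix with semisimple eigenvalue $1$ and $\gamma(A)=0.5$, the error $\|A^{k}-A^{\infty}\|$ decays like $\gamma(A)^{k}$; with a symmetric $A$ the ratio is exactly one:

```python
import numpy as np

# A symmetric matrix with semisimple eigenvalue 1 and gamma(A) = 0.5 (Definition 7)
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal basis
A = Q @ np.diag([1.0, 0.5, -0.3]) @ Q.T
A_inf = Q @ np.diag([1.0, 0.0, 0.0]) @ Q.T         # limit A^k: spectral projector of 1

gamma = 0.5
for k in (5, 10, 20):
    err = np.linalg.norm(np.linalg.matrix_power(A, k) - A_inf, 2)
    print(err / gamma**k)  # stays bounded (here 1.0): linear rate gamma(A), Fact 3
```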
### 3.2 Manifolds
The following definitions and results follow those in [27].
###### Definition 8 (Smooth Manifold)
A set ${\mathcal{M}}\subset\mathbb{R}^{n}$ is a ${\mathcal{C}}^{k}$-manifold
around a point $x\in{\mathcal{M}}$ if there is an open set
$U\subset\mathbb{R}^{n}$ containing $x$ such that
$\displaystyle{\mathcal{M}}\cap U=\{x:F(x)=0\}$
where $F:U\rightarrow\mathbb{R}^{d}$ is a ${\mathcal{C}}^{k}$ function with
surjective derivative throughout $U$.
###### Definition 9 (Tangent space)
The tangent space to a manifold ${\mathcal{M}}$ is given by
$\displaystyle\mathrm{T}_{\mathcal{M}}(x)=\ker\mathrm{J}_{F}(x)$
and is independent of the choice of $F$ that defines the manifold.
###### Definition 10 (Normal vector)
$v\in\mathbb{R}^{n}$ is a normal vector to the manifold
${\mathcal{M}}\subset\mathbb{R}^{n}$ at $x\in\mathbb{R}^{n}$ if $\langle
v,t\rangle=0$ for all $t\in\mathrm{T}_{\mathcal{M}}(x)$.
###### Definition 11 (Smooth boundary)
We say that a closed set $C\subset\mathbb{R}^{n}$ has a ${\mathcal{C}}^{k}$
smooth boundary around $\bar{x}\in\mathbb{R}^{n}$ if $\text{bd}\,(C)$ is a
${\mathcal{C}}^{k}$ smooth manifold around $\bar{x}$.
###### Remark 1
We note that if a set $C\subset\mathbb{R}^{n}$ is solid, i.e.
$\text{int}(C)\neq\emptyset$, with a ${\mathcal{C}}^{k}$ smooth boundary
around some point $\bar{x}$, then the boundary is defined in some neighborhood
$U$ of $\bar{x}$ by some $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ as
$\text{bd}\,(C)\cap U=\{x:f(x)=0\}$. The tangent space given by
$\ker{\mathrm{J}_{f}(x)}$ is therefore an $(n-1)$-dimensional hyperplane,
with normal vector $\nabla f(x)$. Since $f$ is a ${\mathcal{C}}^{k}$
smooth function, the normal vector is a ${\mathcal{C}}^{k-1}$ smooth function
of $x$.
We now define the regularity condition that will be sufficient to show linear
convergence of the GAP method.
###### Assumption 1 (Regularity)
Two manifolds ${\mathcal{M}},{\mathcal{N}}$ satisfy the regularity assumption
at a point $x$ if they are ${\mathcal{C}}^{k}$-smooth ($k\geq 2$) around
$x\in{\mathcal{M}}\cap{\mathcal{N}}$ and
1. A1.
${\mathcal{M}}\cap{\mathcal{N}}$ is a ${\mathcal{C}}^{k}$ smooth manifold
around $x$
2. A2.
$\mathrm{T}_{{\mathcal{M}}\cap{\mathcal{N}}}(x)=\mathrm{T}_{\mathcal{M}}(x)\cap\mathrm{T}_{\mathcal{N}}(x)$.
In previous literature such as [27], the standard regularity assumption is
transversality.
###### Definition 12 (Transversality)
Two $\mathcal{C}^{k}$-smooth manifolds ${\mathcal{M}},{\mathcal{N}}$ are
transversal at $\bar{x}$ if
$\mathrm{T}_{{\mathcal{M}}}(\bar{x})+\mathrm{T}_{{\mathcal{N}}}(\bar{x})=\mathbb{R}^{n}$.
We note that both A1 and A2 in Assumption 1 are implied by the transversality
assumption [25]. Moreover, transversality is not a consequence of Assumption 1
as we see in the following example.
###### Example 1
Let ${\mathcal{M}}=\{(x,0,x^{2})\mid x\in\mathbb{R}\}$ and
${\mathcal{N}}=\{(0,y,0)\mid y\in\mathbb{R}\}$ where
${\mathcal{M}}\cap{\mathcal{N}}=\{0\}$. We have
$\mathrm{T}_{\mathcal{M}}(0)=\{(x,0,0)\mid x\in\mathbb{R}\}$ and
$\mathrm{T}_{\mathcal{N}}(0)={\mathcal{N}}$. So the manifolds clearly satisfy
Assumption 1 at $0$, but not the transversality condition, since
$\mathrm{T}_{\mathcal{M}}(0)+\mathrm{T}_{\mathcal{N}}(0)=\{(x,y,0)\mid
x,y\in\mathbb{R}\}\neq\mathbb{R}^{3}$.
With some abuse of notation, we define the angle between two manifolds at a
point in their intersection, using their tangent spaces.
###### Definition 13
For $x\in{\mathcal{M}}\cap{\mathcal{N}}$ let
$\displaystyle c({\mathcal{M}},{\mathcal{N}},x)\coloneqq
c(\mathrm{T}_{\mathcal{M}}(x),\mathrm{T}_{\mathcal{N}}(x)).$
The regularity condition implies that both the manifolds and their
intersection locally behave similarly to their tangent planes. In particular,
the angle between the two tangent planes is zero in some direction if and only
if this direction is also parallel to the intersection of the manifolds, as
seen by A2. This is crucial to show linear convergence later. We also note
that, under the regularity assumptions, the Friedrichs angle $\theta_{F}$ is
positive unless one manifold is locally a subset of the other. To see this, we
know that $\theta_{F}$ is well defined and positive unless one tangent plane
is a subset of the other, for example
$\mathrm{T}_{\mathcal{M}}(x)\subset\mathrm{T}_{\mathcal{N}}(x)$. But since
$\dim(\mathrm{T}_{\mathcal{M}}(x))=\dim({\mathcal{M}})$ around $x$, A2 implies
that also $\dim({\mathcal{M}})=\dim({\mathcal{M}}\cap{\mathcal{N}})$ around
$x$, i.e. that ${\mathcal{M}}$ locally is a subset of ${\mathcal{N}}$. Under
the regularity assumption, we therefore either have a positive Friedrichs
angle or a locally trivial problem.
We now show that relaxed projections are locally well defined on smooth
manifolds, and that their Jacobian is given by relaxed projections onto their
tangent planes. By well defined we mean that the projection point exists and
is unique.
The following Lemma is from [27, Lem 4].
###### Lemma 1 (Projection onto Manifold)
If ${\mathcal{M}}$ is a ${\mathcal{C}}^{k}$ manifold (with $k\geq 2$) around
$\bar{x}\in{\mathcal{M}}$, then $\Pi_{{\mathcal{M}}}$ is well defined and
${\mathcal{C}}^{k-1}$ around $\bar{x}$. Moreover
$\mathrm{J}_{\Pi_{\mathcal{M}}}(\bar{x})=\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})}$.
###### Lemma 2 (Relaxed Projection onto Manifold)
If ${\mathcal{M}}$ is a ${\mathcal{C}}^{k}$ manifold (with $k\geq 2$) around
$\bar{x}\in{\mathcal{M}}$, then
$\mathrm{J}_{\Pi^{\alpha}_{\mathcal{M}}}(\bar{x})=\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})}^{\alpha}$,
and $\Pi_{{\mathcal{M}}}^{\alpha}$ is well defined and ${\mathcal{C}}^{k-1}$
around $\bar{x}$.
Proof.
$\mathrm{J}_{\Pi_{\mathcal{M}}^{\alpha}}(\bar{x})=\mathrm{J}_{(1-\alpha)I+\alpha\Pi_{\mathcal{M}}}(\bar{x})=(1-\alpha)I+\alpha\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})}=\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})}^{\alpha}$.
The result now follows from Lemma 1. $\Box$
## 4 Generalized Alternating Projections
In this section, we define the generalized alternating projections (GAP)
operator, and state some known results. We denote the feasibility problem of
finding $x\in{\mathcal{U}}\cap{\mathcal{V}}$ by
$({\mathcal{U}},{\mathcal{V}})$ to signify that the algorithm depends on the
ordering of the two sets.
###### Definition 14 (Generalized alternating projections)
The generalized alternating projections algorithm (GAP) [18] for two nonempty
sets (${\mathcal{U}},{\mathcal{V}}$), with
${\mathcal{U}}\cap{\mathcal{V}}\neq\emptyset$, is defined by the iteration
$x_{k+1}\coloneqq Sx_{k},$ (1)
where
$S=(1-\alpha)I+\alpha\Pi_{{\mathcal{U}}}^{\alpha_{2}}\Pi_{{\mathcal{V}}}^{\alpha_{1}}=:\,(1-\alpha)I+\alpha
T.$ (2)
For closed convex sets, the operator $S$ is averaged and the iterates converge
to the fixed-point set ${\rm{fix}}S$ under the following assumption, see e.g.
[18] where these results are collected.
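A minimal sketch of the GAP iteration (1)-(2) for two lines in the plane, using explicit projection matrices (the specific sets, parameters, and starting point are illustrative only; $\alpha=\alpha_{1}=\alpha_{2}=1$ recovers plain alternating projections):

```python
import numpy as np

def relaxed(P, a):
    """Matrix of the relaxed projection Pi^a = (1 - a) I + a Pi (Definition 2)."""
    return (1 - a) * np.eye(len(P)) + a * P

# two lines in R^2 meeting only at the origin, with angle theta_F between them
theta_F = 0.3
u = np.array([1.0, 0.0])
v = np.array([np.cos(theta_F), np.sin(theta_F)])
PU, PV = np.outer(u, u), np.outer(v, v)   # orthogonal projectors onto the lines

alpha, a1, a2 = 1.0, 1.0, 1.0             # plain alternating projections
T = relaxed(PU, a2) @ relaxed(PV, a1)     # Eq. (2)
S = (1 - alpha) * np.eye(2) + alpha * T

x = np.array([2.0, 1.0])
for _ in range(200):
    x = S @ x                             # iteration (1)
print(np.linalg.norm(x))                  # tiny: iterates converge to U ∩ V = {0}
```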
###### Assumption 2
Assume that $\alpha\in(0,1]$, $\alpha_{1},\alpha_{2}\in(0,2]$ and that either
of the following holds
1. B1.
$\alpha_{1},\alpha_{2}\in(0,2)$
2. B2.
$\alpha\in(0,1)$ with either $\alpha_{1}\neq 2$ or $\alpha_{2}\neq 2$
3. B3.
$\alpha\in(0,1)$ and $\alpha_{1}=\alpha_{2}=2$
The following result was shown in [18].
###### Lemma 3
Let $({\mathcal{U}},{\mathcal{V}})$ be two subspaces with
${\mathcal{U}}\cap{\mathcal{V}}\neq\emptyset$. The fixed point set
${\rm{fix}}S\coloneqq\{x\mid Sx=x\}$ of the GAP operator $S$ in (1) is:
${\mathcal{U}}\cap{\mathcal{V}}$ under Assumption 2 case B1 and B2, and
$\,{\mathcal{U}}\cap{\mathcal{V}}+({\mathcal{U}}^{\perp}\cap{\mathcal{V}}^{\perp})$
under Assumption 2 case B3.
To study the local behavior of the GAP method, it is crucial to understand its
behavior on linear subspaces. Throughout this section, we assume that the
subspaces $({\mathcal{U}},{\mathcal{V}})$ are non-empty and that the problem
is consistent, i.e. ${\mathcal{U}}\cap{\mathcal{V}}\neq\emptyset$. In
particular we note that $0\in{\mathcal{U}}\cap{\mathcal{V}}$.
The following proposition and remark are found in [6, Prop. 3.4], and [19]
respectively.
###### Proposition 1
Let ${\mathcal{U}}$ and ${\mathcal{V}}$ be subspaces in $\mathbb{R}^{n}$
satisfying $p\coloneqq\dim({\mathcal{U}})$, $q\coloneqq\dim({\mathcal{V}})$,
where $p\leq q$, $p+q<n$ and $p,q\geq 1$. Then, the projection matrices
$\Pi_{{\mathcal{U}}}$ and $\Pi_{{\mathcal{V}}}$ become
$\displaystyle\Pi_{{\mathcal{U}}}$
$\displaystyle=D\begin{pmatrix}I_{p}&0&0&0\\ 0&0_{p}&0&0\\ 0&0&0_{q-p}&0\\
0&0&0&0_{n-p-q}\end{pmatrix}D^{*},$ (3) $\displaystyle\Pi_{{\mathcal{V}}}$
$\displaystyle=D\begin{pmatrix}\mathcal{C}^{2}&\mathcal{C}\mathcal{S}&0&0\\
\mathcal{C}\mathcal{S}&\mathcal{S}^{2}&0&0\\ 0&0&I_{q-p}&0\\
0&0&0&0_{n-p-q}\end{pmatrix}D^{*}$ (4)
and
$\Pi_{{\mathcal{U}}}\Pi_{{\mathcal{V}}}=D\begin{pmatrix}\mathcal{C}^{2}&\mathcal{C}\mathcal{S}&0&0\\
0&0_{p}&0&0\\ 0&0&0_{q-p}&0\\ 0&0&0&0_{n-p-q}\end{pmatrix}D^{*},$ (5)
where $\mathcal{C}$ and $\mathcal{S}$ are diagonal matrices containing the
cosine and sine of the principal angles $\theta_{i}$, i.e.
$\displaystyle\mathcal{S}$
$\displaystyle=\text{diag}(\sin\theta_{1},\dots,\sin\theta_{p}),$
$\displaystyle\mathcal{C}$
$\displaystyle=\text{diag}(\cos\theta_{1},\dots,\cos\theta_{p}),$
and $D\in\mathbb{R}^{n\times n}$ is an orthogonal matrix.
Under the assumptions in Proposition 1, the linear operator $T$, implicitly
defined in (2), becomes
$\displaystyle T$ $\displaystyle=$
$\displaystyle\Pi_{{\mathcal{U}}}^{\alpha_{2}}\Pi_{{\mathcal{V}}}^{\alpha_{1}}=((1-\alpha_{2})I+\alpha_{2}\Pi_{{\mathcal{U}}})((1-\alpha_{1})I+\alpha_{1}\Pi_{{\mathcal{V}}})$
$\displaystyle=$
$\displaystyle(1-\alpha_{2})(1-\alpha_{1})I+\alpha_{2}(1-\alpha_{1})\Pi_{{\mathcal{U}}}$
$\displaystyle+\alpha_{1}(1-\alpha_{2})\Pi_{{\mathcal{V}}}+\alpha_{1}\alpha_{2}\Pi_{{\mathcal{U}}}\Pi_{{\mathcal{V}}}$
$\displaystyle=$ $\displaystyle D\,\text{blkdiag}(T_{1},T_{2},T_{3})\,D^{*}$
where
$\displaystyle T_{1}$
$\displaystyle=\begin{pmatrix}I_{p}-\alpha_{1}\mathcal{S}^{2}&\alpha_{1}\mathcal{C}\mathcal{S}\\
\alpha_{1}(1-\alpha_{2})\mathcal{C}\mathcal{S}&(1-\alpha_{2})(I_{p}-\alpha_{1}\mathcal{C}^{2})\end{pmatrix},$
(6) $\displaystyle T_{2}$ $\displaystyle=(1-\alpha_{2})I_{q-p},\quad
T_{3}=(1-\alpha_{2})(1-\alpha_{1})I_{n-p-q}.$
The rows and columns of $T_{1}$ can be reordered so that it is a block-
diagonal matrix with blocks
$T_{1_{i}}=\begin{pmatrix}1-\alpha_{1}s_{i}^{2}&\alpha_{1}c_{i}s_{i}\\
\alpha_{1}(1-\alpha_{2})c_{i}s_{i}&(1-\alpha_{2})(1-\alpha_{1}c_{i}^{2})\end{pmatrix},\quad
i=1,\dots,p$ (7)
where $s_{i}\coloneqq\sin\theta_{i},\,c_{i}\coloneqq\cos\theta_{i}$. The
eigenvalues of $T$ are therefore $\lambda^{3}\coloneqq(1-\alpha_{2})$,
$\lambda^{4}\coloneqq(1-\alpha_{2})(1-\alpha_{1})$, and for every $T_{1_{i}}$
$\displaystyle\lambda_{i}^{1,2}$
$\displaystyle=\frac{1}{2}\left(2-\alpha_{1}-\alpha_{2}+\alpha_{1}\alpha_{2}c_{i}^{2}\right)$
(8)
$\displaystyle\,\,\,\pm\sqrt{\frac{1}{4}\left(2-\alpha_{1}-\alpha_{2}+\alpha_{1}\alpha_{2}c_{i}^{2}\right)^{2}-(1-\alpha_{1})(1-\alpha_{2})}.$
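The closed form (8) can be checked against a numerical eigendecomposition of the $2\times 2$ block (7); a sketch of ours with arbitrary test values for $\theta_{i},\alpha_{1},\alpha_{2}$ (the complex square root handles the case of complex conjugate eigenvalues):

```python
import numpy as np

def block_eigs(theta, a1, a2):
    """Eigenvalues of the 2x2 block T_{1_i} of Eq. (7) vs the closed form (8)."""
    s, c = np.sin(theta), np.cos(theta)
    T1 = np.array([[1 - a1 * s**2,          a1 * c * s],
                   [a1 * (1 - a2) * c * s,  (1 - a2) * (1 - a1 * c**2)]])
    m = 2 - a1 - a2 + a1 * a2 * c**2                     # trace of T1
    disc = np.lib.scimath.sqrt(m**2 / 4 - (1 - a1) * (1 - a2))
    formula = np.array([m / 2 + disc, m / 2 - disc])     # Eq. (8)
    return (np.sort_complex(np.linalg.eigvals(T1).astype(complex)),
            np.sort_complex(formula))

num, ana = block_eigs(theta=0.4, a1=1.5, a2=0.8)
assert np.allclose(num, ana)
```

The identity holds because $T_{1_{i}}$ has trace $2-\alpha_{1}-\alpha_{2}+\alpha_{1}\alpha_{2}c_{i}^{2}$ and determinant $(1-\alpha_{1})(1-\alpha_{2})$.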
###### Remark 2
The property $p\leq q$ was used to arrive at these results. If instead $p>q$,
we reverse the definitions of $\Pi_{\mathcal{U}}$ and $\Pi_{\mathcal{V}}$ in
Proposition 1. Noting that $\Lambda(T)=\Lambda(T^{\top})$, we get a new block-
diagonal matrix $\bar{T}$ with blocks $\bar{T}_{1}=T_{1}^{\top}$,
$\bar{T}_{3}=T_{3}^{\top}$ and $\bar{T}_{2}=(1-\alpha_{1})I_{p-q}$. Therefore,
the matrix can have eigenvalues $1-\alpha_{1}$ or $1-\alpha_{2}$ depending on
the dimensions of ${\mathcal{U}}$ and ${\mathcal{V}}$.
If either $p=0$ or $q=0$, then the problem is trivial. We note that if
$p+q\geq n$, we can simply embed the sets in a bigger space. Since
${\mathcal{U}}$ and ${\mathcal{V}}$ are contained in the original space, the
iterates will also stay in this subspace if the initial point is. The
algorithm therefore behaves identically and the extra dimensions can be
ignored. Although we do not have an explicit expression for the GAP operator
$T$ in this case, we can calculate the eigenvalues, as stated in the following
theorem.
###### Theorem 1
Let ${\mathcal{U}}$ and ${\mathcal{V}}$ be subspaces in $\mathbb{R}^{n}$
satisfying $p\coloneqq\dim({\mathcal{U}})$, $q\coloneqq\dim({\mathcal{V}})$,
and let $s=\dim({\mathcal{U}}\cap{\mathcal{V}})$. The eigenvalues of
$T=\Pi_{{\mathcal{U}}}^{\alpha_{2}}\Pi_{{\mathcal{V}}}^{\alpha_{1}}$ are
$\displaystyle\{1\}^{s},\{(1-\alpha_{1})(1-\alpha_{2})\}^{s+n-p-q},$
$\displaystyle\{1-\alpha_{2}\}^{\max(0,q-p)},\{1-\alpha_{1}\}^{\max(0,p-q)},$
$\displaystyle\{\lambda_{i}^{1,2}\}\,\text{for every
$i\in\{s+1,\ldots,\min(p,q)\}$ }$
where $\lambda_{i}^{1,2}$ is defined by (8) and $\{\lambda\}^{i}$ denotes
(possibly zero) multiplicity $i$ of eigenvalue $\lambda$.
Proof. When either $p=0$ or $q=0$, we get $s=0$ and the result is trivial
from the definition of the projections and $T$. The case when $p\leq q$ and
$p+q<n$ follows directly from Proposition 1 by observing that $s$ of the
eigenvalues in $1$ and $(1-\alpha_{1})(1-\alpha_{2})$ arise from
$\lambda_{i}^{1,2}$ for $i\in\{1,\dots,s\}$, i.e. when $\theta_{i}=0$.
For the case when $q<p$ and $p+q<n$ it follows from Remark 2 that the
eigenvalues in $1-\alpha_{2}$ will be in $1-\alpha_{1}$ instead, and that the
rest of the eigenvalues are the same.
For the case when $p+q\geq n$ we provide a proof similar to that in [5, p.
54]. We can extend the space $\mathbb{R}^{n}$ to
$\mathbb{R}^{n+k}\coloneqq\mathbb{R}^{n}\times\mathbb{R}^{k}$ so that
$p+q<n+k\eqqcolon\bar{n}$, where we define the scalar product in this new
space as $\langle(u_{1},u_{2}),(v_{1},v_{2})\rangle\coloneqq\langle
u_{1},v_{1}\rangle+\langle u_{2},v_{2}\rangle$ for
$u_{1},v_{1}\in\mathbb{R}^{n},u_{2},v_{2}\in\mathbb{R}^{k}$.
Let $\bar{{\mathcal{U}}}\coloneqq{\mathcal{U}}\times\{0_{k}\}$,
$\bar{{\mathcal{V}}}\coloneqq{\mathcal{V}}\times\{0_{k}\}$ so that
$\displaystyle\Pi_{\bar{\mathcal{U}}}=\begin{pmatrix}\Pi_{{\mathcal{U}}}&0\\
0&0_{k}\end{pmatrix},\quad\Pi_{\bar{\mathcal{V}}}=\begin{pmatrix}\Pi_{{\mathcal{V}}}&0\\
0&0_{k}\end{pmatrix}.$
It follows that
$\displaystyle\bar{T}\coloneqq\Pi_{\bar{\mathcal{U}}}^{\alpha_{2}}\Pi_{\bar{\mathcal{V}}}^{\alpha_{1}}=\begin{pmatrix}T&0\\
0&(1-\alpha_{1})(1-\alpha_{2})I_{k}\end{pmatrix},$ (9)
where $T=\Pi_{{\mathcal{U}}}^{\alpha_{2}}\Pi_{{\mathcal{V}}}^{\alpha_{1}}$.
$\bar{T}$ has the same eigenvalues as $T$, as well as $k$ new eigenvalues in
$(1-\alpha_{1})(1-\alpha_{2})$. As seen in the definition of
$\bar{{\mathcal{U}}},\bar{{\mathcal{V}}}$ and $\bar{T}$, these _artificial_
eigenvalues correspond to directions that are orthogonal to the original space
$\mathbb{R}^{n}$. If we now apply the result for $p+q<\bar{n}$ to $\bar{T}$,
and observe that the principal angles are the same for
$\bar{{\mathcal{U}}},\bar{{\mathcal{V}}}$ as for
${\mathcal{U}},{\mathcal{V}}$, we see that the eigenvalues are as those stated
in the theorem, but with $s+\bar{n}-p-q$ eigenvalues in
$(1-\alpha_{1})(1-\alpha_{2})$. Subtracting the $k$ _artificial_ eigenvalues,
we conclude that the operator $T$ must have $s+n-p-q$ eigenvalues in
$(1-\alpha_{1})(1-\alpha_{2})$. $\Box$
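Theorem 1 can be sanity-checked numerically. The sketch below (our construction) builds a pair of subspaces in $\mathbb{R}^{6}$ with $s=1$, $p=2$, $q=3$ and a single nonzero principal angle $\theta$, and compares the spectrum of $T$ with the stated multiplicities:

```python
import numpy as np

n, a1, a2, theta = 6, 1.4, 0.9, 0.5
e = np.eye(n)

# U = span{e1, e2} (p = 2), V = span{e1, cos(theta) e2 + sin(theta) e3, e4} (q = 3),
# so s = dim(U ∩ V) = 1 and the only nonzero principal angle is theta
BU = e[:, :2]
BV = np.column_stack([e[:, 0],
                      np.cos(theta) * e[:, 1] + np.sin(theta) * e[:, 2],
                      e[:, 3]])

P = lambda B: B @ B.T                                  # projector (orthonormal columns)
relax = lambda Pm, a: (1 - a) * np.eye(n) + a * Pm
T = relax(P(BU), a2) @ relax(P(BV), a1)                # Pi_U^{a2} Pi_V^{a1}

eigs = np.sort(np.linalg.eigvals(T).real)
m = 2 - a1 - a2 + a1 * a2 * np.cos(theta) ** 2
lam = m / 2 + np.array([1.0, -1.0]) * np.sqrt(m**2 / 4 - (1 - a1) * (1 - a2))
expected = np.sort([1.0,                               # {1}^s
                    (1 - a1) * (1 - a2),               # multiplicity s + n - p - q = 2
                    (1 - a1) * (1 - a2),
                    1 - a2,                            # multiplicity q - p = 1
                    lam[0], lam[1]])                   # pair from the angle theta
assert np.allclose(eigs, expected)
```

For these parameter values all eigenvalues are real; for parameters giving a complex pair the comparison would need complex sorting as in the earlier block check.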
###### Proposition 2
Let ${\mathcal{U}}$ and ${\mathcal{V}}$ be subspaces in $\mathbb{R}^{n}$
satisfying $p\coloneqq\dim({\mathcal{U}})$, $q\coloneqq\dim({\mathcal{V}})$,
and let $s=\dim({\mathcal{U}}\cap{\mathcal{V}})$. Then the GAP operator $S$
satisfies
$\displaystyle\sigma(S)$ $\displaystyle=\|S-S^{\infty}\|$
$\displaystyle\leq\max(\|S_{1}-S_{1}^{\infty}\|,|1-\alpha_{2}(1-\alpha)|,$
$\displaystyle\quad\quad\quad\,\,\,|\alpha+(1-\alpha)(1-\alpha_{1})(1-\alpha_{2})|,|1-\alpha|)$
where $S_{1}=(1-\alpha)I+\alpha T_{1}$ with $T_{1}$ defined in Proposition 1.
Proof. If either $p=0$ or $q=0$ we trivially have $S=(1-\alpha)I$, so
$\|S-S^{\infty}\|=|1-\alpha|$ and the result holds. If $p\leq q$ and $p+q<n$,
$p,q\geq 1$, then it follows directly from Proposition 1 with
$S_{i}=(1-\alpha)I+\alpha T_{i}$ that
$\displaystyle\|S-S^{\infty}\|=\|D\left((1-\alpha)I+\alpha
T\right)D^{*}-\left(D((1-\alpha)I+\alpha T)D^{*}\right)^{\infty}\|=$
$\displaystyle=\|((1-\alpha)I+\alpha T)-((1-\alpha)I+\alpha T)^{\infty})\|$
$\displaystyle=\|\textrm{blkdiag}(S_{1}-S_{1}^{\infty},S_{2}-S_{2}^{\infty},S_{3}-S_{3}^{\infty}))\|$
$\displaystyle\leq\max(\|S_{1}-S_{1}^{\infty}\|,|1-\alpha_{2}(1-\alpha)|,|\alpha+(1-\alpha)(1-\alpha_{1})(1-\alpha_{2})|)$
and the result holds. If $p\leq q$ and $p+q\geq n$ we extend the space as in
Theorem 1. Since $\bar{T}$ in (9) is a block diagonal matrix containing $T$ we
get with $\bar{S}=(1-\alpha)I+\alpha\bar{T}$ that
$\|S-S^{\infty}\|\leq\|\bar{S}-\bar{S}^{\infty}\|$ and the result follows by
applying the case $p+q<n$ to the operator $\bar{S}$. For the remaining
cases, where $p>q$, we note as in Remark 2 that we can study
$S^{\top}=(1-\alpha)I+\alpha\Pi_{{\mathcal{V}}}^{\alpha_{1}}\Pi_{{\mathcal{U}}}^{\alpha_{2}}$
where the relative dimensions of the subspaces now satisfy the assumptions.
Applying the previous results to this case yields
$\|S^{\top}-{S^{\top}}^{\infty}\|=\|(S-S^{\infty})^{\top}\|=\|S-S^{\infty}\|$
and the proof is complete. $\Box$
It was shown in [19] that the parameters
$\displaystyle\alpha=1,\quad\alpha_{1}=\alpha_{2}=\alpha^{*}\coloneqq\frac{2}{1+\sin{\theta_{F}}},$
(10)
imply that the subdominant eigenvalues of $S$ have magnitude
$\gamma(S)=\gamma^{*}$, where
$\displaystyle\gamma^{*}\coloneqq\alpha^{*}-1=\frac{1-\sin\theta_{F}}{1+\sin\theta_{F}}.$
(11)
When the Friedrichs angle does not exist, i.e., when one subspace is contained
in the other, we define $\alpha^{*}=1$ and $\gamma^{*}=0$. The next two
theorems show that this rate is optimal under mild assumptions. The theorems
were published without proofs by the authors in [19]. We restate them with
minor modifications and prove them here.
###### Theorem 2
[19, Thm. 1] The GAP operator $S$ in (2), for linear subspaces
$({\mathcal{U}},{\mathcal{V}})$ in $\mathbb{R}^{n}$, with
$\alpha,\alpha_{1},\alpha_{2}$ as defined in (10) satisfies
$\gamma(S)=\gamma^{*}$, where $\gamma(S)$ and $\gamma^{*}$ are defined in
Definition 7 and (11) respectively. Moreover, $S$ is linearly convergent with
any rate $\mu\in\left(\gamma^{*},1\right)$.
Proof. See appendix. $\Box$
###### Remark 3
Although the rate in Theorem 2 is dependent on knowing the true Friedrichs
angle $\theta_{F}$, it is sufficient to have some conservative estimate
$\hat{\theta}_{F}<\theta_{F}$. As seen in the proof of Theorem 2, choosing the
parameters as $\alpha_{1}=\alpha_{2}=2/(1+\sin\hat{\theta}_{F})$, results in
the rate $\gamma=(1-\sin\hat{\theta}_{F})/(1+\sin\hat{\theta}_{F})$.
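The effect of the optimal parameters (10) can be illustrated on two lines in $\mathbb{R}^{2}$ (an example of ours): the subdominant eigenvalue magnitude of the GAP operator drops from the alternating-projections value $\cos^{2}\theta_{F}$ to $\gamma^{*}$:

```python
import numpy as np

theta_F = 0.2
u = np.array([1.0, 0.0])
v = np.array([np.cos(theta_F), np.sin(theta_F)])
relaxed = lambda P, a: (1 - a) * np.eye(2) + a * P

a_star = 2 / (1 + np.sin(theta_F))                           # Eq. (10)
gamma_star = (1 - np.sin(theta_F)) / (1 + np.sin(theta_F))   # Eq. (11)

# GAP with alpha = 1 and alpha_1 = alpha_2 = alpha^*, vs plain alternating projections
S_gap = relaxed(np.outer(u, u), a_star) @ relaxed(np.outer(v, v), a_star)
S_ap = np.outer(u, u) @ np.outer(v, v)

print(max(abs(np.linalg.eigvals(S_gap))))  # ~ gamma_star ~ 0.669
print(max(abs(np.linalg.eigvals(S_ap))))   # = cos^2(theta_F) ~ 0.961
```

At $\alpha_{1}=\alpha_{2}=\alpha^{*}$ the discriminant in (8) vanishes for the block at $\theta_{F}$, so both eigenvalues of that block have magnitude $\alpha^{*}-1=\gamma^{*}$.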
Under the assumption that the relative dimensions of the subspaces are
unknown, it was stated in [19] that the rate $\gamma^{*}$ is optimal. We
restate the result with slight modifications for clarity, and prove it here.
###### Theorem 3
[19, Thm. 2] Let $({\mathcal{U}}_{1},{\mathcal{V}}_{1})$ and
$({\mathcal{U}}_{2},{\mathcal{V}}_{2})$ be two feasibility problems, where the
sets are linear subspaces in $\mathbb{R}^{n}$. Assume that
$\dim({\mathcal{U}}_{1})<\dim({\mathcal{V}}_{1})$,
$\dim({\mathcal{U}}_{2})>\dim({\mathcal{V}}_{2})$ and that
$c({\mathcal{U}}_{1},{\mathcal{V}}_{1})=c({\mathcal{U}}_{2},{\mathcal{V}}_{2})$
$=\cos(\theta_{F})$, $\theta_{F}<\pi/2$. Let $S_{1},S_{2}$ be the
corresponding GAP operators as defined in (2), both defined with the same
parameters $\alpha_{1},\alpha_{2},\alpha>0$. Then, both $S_{1}$ and $S_{2}$
are linearly convergent with all rates $\mu\in(\gamma^{*},1)$ if and only if
$\alpha=1,\quad\alpha_{1}=\alpha_{2}=\alpha^{*}\coloneqq\frac{2}{1+\sin{\theta_{F}}}.$
Proof. See appendix. $\Box$
This theorem shows that there is no choice of parameters that can perform
better than that in (10) independently of the dimensions of the sets. Any
choice of parameters that performs better than those in (10) for a specific
problem, where the dimensions of the sets are not the same, will necessarily
perform worse on all problems where the relative dimensions are reversed, if
the Friedrichs angle is kept constant.
###### Remark 4
There are a few cases excluded in the theorem that should be explained.
When $\theta_{F}=\pi/2$, we have $\gamma^{*}=0$, which is obviously optimal,
however, there are choices of $\alpha,\alpha_{1},\alpha_{2}$ other than (10)
that achieve this rate. The same is true if the Friedrichs angle is not well
defined, i.e., when one set is contained in the other. In that case, by
defining $\theta_{F}=\pi/2$, we get $\gamma(S)=0$ with the parameters in (10),
but the solution is not unique.
As noted in [19], there are specific choices of
$({\mathcal{U}},{\mathcal{V}})$ where it is possible to get
$\gamma(S)<\gamma^{*}$. However, if one of the principal angles is large
enough, for example $\theta_{i}=\pi/2$, then it is not possible to get a rate
better than $\gamma^{*}$. In the cases where $\gamma(S)<\gamma^{*}$, the
difference in rate is negligible if $\theta_{F}$ is small, as long as the
parameters are chosen so that the algorithm is convergent for every
$({\mathcal{U}},{\mathcal{V}})$. For example, if
$\dim{\mathcal{U}}\leq\dim{\mathcal{V}}$ and _all_ principal angles
$\theta_{i}$ are small enough, then the parameter choice _GAP $2\alpha$_ in
[19]
$\alpha=1,\quad\alpha_{1}=2,\quad\alpha_{2}=\frac{2}{1+\sin(2\theta_{F})}$
achieves a rate of
$\frac{\cos\theta_{F}-\sin\theta_{F}}{\cos\theta_{F}+\sin\theta_{F}}=1-2\theta_{F}+2\theta_{F}^{2}-8\theta_{F}^{3}/3+O(\theta_{F}^{4})\quad(\text{as
}\theta_{F}\rightarrow 0)$
compared to
$\gamma^{*}=\frac{1-\sin\theta_{F}}{1+\sin\theta_{F}}=1-2\theta_{F}+2\theta_{F}^{2}-5\theta_{F}^{3}/3+O(\theta_{F}^{4})\quad(\text{as
}\theta_{F}\rightarrow 0).$
This should be contrasted to the rates of alternating projections and
Douglas–Rachford, which are $1-\theta_{F}^{2}+O(\theta_{F}^{4})$ and
$1-\theta_{F}^{2}/2+O(\theta_{F}^{4})$ as $\theta_{F}\rightarrow 0$
respectively. So for small angles $\theta_{F}$, the improvement over AP and DR
is significant ($O(\theta_{F})$), and the difference to _GAP $2\alpha$_ is
very small ($O(\theta_{F}^{3})$). As mentioned above, the rate for _GAP
$2\alpha$_ is only valid under an assumption on the relative dimensions of the
manifolds, and that all principal angles are small enough.
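These expansions are easy to sanity-check numerically; the sketch below (ours) evaluates each exact rate and its truncated series at one small angle, where the mismatch should be of the order of the first neglected term.

```python
import numpy as np

t = 0.01   # a small Friedrichs angle theta_F, in radians

# Exact local rates discussed above
rate_gap2a = (np.cos(t) - np.sin(t)) / (np.cos(t) + np.sin(t))  # GAP 2alpha
rate_opt = (1 - np.sin(t)) / (1 + np.sin(t))                    # gamma*
rate_ap = np.cos(t) ** 2                                        # alternating projections
rate_dr = np.cos(t)                                             # Douglas-Rachford

# Truncated Taylor series from the text; the mismatch should be O(t^4) ~ 1e-8
ser_gap2a = 1 - 2 * t + 2 * t ** 2 - 8 * t ** 3 / 3
ser_opt = 1 - 2 * t + 2 * t ** 2 - 5 * t ** 3 / 3
```

In particular, both GAP variants improve on the $1-O(\theta_{F}^{2})$ rates of AP and DR already at this angle.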
## 5 Manifolds
In this section we study the local properties of the GAP operator on two
manifolds ${\mathcal{M}},{\mathcal{N}}$ instead of linear subspaces. These
results generalize the results in Section 4 of [27], from alternating
projections to the GAP algorithm, with similar proofs but under the relaxed
Assumption 1 instead of transversality.
We begin by showing that the GAP operator is locally well defined and well
behaved around all points that satisfy the regularity assumptions.
###### Lemma 4
Let (${\mathcal{M}},{\mathcal{N}}$) satisfy Assumption 1 at
$\bar{x}\in{\mathcal{M}}\cap{\mathcal{N}}$, and let
$\alpha_{1},\alpha_{2}\in[0,2]$. Then $\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}$,
$\Pi^{\alpha_{2}}_{{\mathcal{M}}}\Pi^{\alpha_{1}}_{{\mathcal{N}}}$ and
$S=(1-\alpha)I+\alpha\Pi^{\alpha_{2}}_{{\mathcal{M}}}\Pi^{\alpha_{1}}_{N}$ are
well defined and of class ${\mathcal{C}}^{k-1}$ around $\bar{x}$.
Proof. From Assumption 1 A1 it follows that ${\mathcal{M}}\cap{\mathcal{N}}$
is a ${\mathcal{C}}^{k}$ manifold (with $k\geq 2$) so from Lemma 2 we know
that there exists $\delta>0$ so that
$\Pi_{{\mathcal{M}}},\Pi_{{\mathcal{N}}},\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}$
are well defined and of class ${\mathcal{C}}^{k-1}$ on $B_{\delta}(\bar{x}).$
Restrict further to $x\in B_{\delta/3}(\bar{x})$; then
$\displaystyle\left\|\bar{x}-\Pi_{{\mathcal{N}}}^{\alpha_{1}}(x)\right\|$
$\displaystyle\leq\left\|\bar{x}-x\right\|+\left\|x-\Pi_{{\mathcal{N}}}^{\alpha_{1}}(x)\right\|=\left\|\bar{x}-x\right\|+\alpha_{1}\left\|x-\Pi_{{\mathcal{N}}}(x)\right\|$
$\displaystyle\leq\left\|\bar{x}-x\right\|+\alpha_{1}\left\|x-\bar{x}\right\|\leq
3\left\|x-\bar{x}\right\|\leq\delta$
so $\Pi_{{\mathcal{N}}}^{\alpha_{1}}(x)\in B_{\delta}(\bar{x})$ and we
therefore have $\Pi^{\alpha_{2}}_{{\mathcal{M}}}\Pi^{\alpha_{1}}_{N}$ and $S$
well defined and ${\mathcal{C}}^{k-1}$ on $B_{\delta/3}(\bar{x})$. $\Box$
To simplify notation, we denote the GAP operator applied to the tangent spaces
$\mathrm{T}_{{\mathcal{M}}}(\bar{x}),T_{{\mathcal{N}}}(\bar{x})$ by
$\displaystyle
S_{\mathrm{T}(\bar{x})}\coloneqq(1-\alpha)I+\alpha\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})}^{\alpha_{2}}\Pi_{\mathrm{T}_{{\mathcal{N}}}(\bar{x})}^{\alpha_{1}}.$
(12)
We next show that the local behavior of $S$ around a point
$\bar{x}\in{\mathcal{M}}\cap{\mathcal{N}}$ can be described by
$S_{\mathrm{T}(\bar{x})}$.
###### Lemma 5
Let $({\mathcal{M}},{\mathcal{N}})$ satisfy Assumption 1 at
$\bar{x}\in{\mathcal{M}}\cap{\mathcal{N}}$. Then the Jacobian at $\bar{x}$ of
the GAP operator $S$ in (2) is given by
$\mathrm{J}_{S}(\bar{x})=(1-\alpha)I+\alpha\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})}^{\alpha_{2}}\Pi_{\mathrm{T}_{{\mathcal{N}}}(\bar{x})}^{\alpha_{1}}=S_{\mathrm{T}(\bar{x})}.$
Proof. By Lemma 2, the chain rule, and
$\bar{x}\in{\mathcal{M}}\cap{\mathcal{N}}$ we have
$\displaystyle\mathrm{J}_{\Pi^{\alpha_{2}}_{\mathcal{M}}\Pi^{\alpha_{1}}_{\mathcal{N}}}(\bar{x})$
$\displaystyle=\mathrm{J}_{\Pi^{\alpha_{2}}_{\mathcal{M}}}(\Pi^{\alpha_{1}}_{\mathcal{N}}(\bar{x}))\mathrm{J}_{\Pi^{\alpha_{1}}_{\mathcal{N}}}(\bar{x})=\mathrm{J}_{\Pi^{\alpha_{2}}_{\mathcal{M}}}(\bar{x})\mathrm{J}_{\Pi^{\alpha_{1}}_{\mathcal{N}}}(\bar{x})$
$\displaystyle=\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})}^{\alpha_{2}}\Pi_{\mathrm{T}_{{\mathcal{N}}}(\bar{x})}^{\alpha_{1}}.$
Moreover
$\displaystyle\mathrm{J}_{S}(\bar{x})$
$\displaystyle=\mathrm{J}_{(1-\alpha)I}(\bar{x})+\alpha\mathrm{J}_{\Pi^{\alpha_{2}}_{\mathcal{M}}\Pi^{\alpha_{1}}_{\mathcal{N}}}(\bar{x})$
$\displaystyle=(1-\alpha)I+\alpha\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})}^{\alpha_{2}}\Pi_{\mathrm{T}_{{\mathcal{N}}}(\bar{x})}^{\alpha_{1}}=S_{\mathrm{T}(\bar{x})}$
by definition of $S_{\mathrm{T}(\bar{x})}$ in (12). $\Box$
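Lemma 5 can be illustrated with a finite-difference check. In the sketch below (our construction, not from the paper), ${\mathcal{M}}$ is the unit circle and ${\mathcal{N}}$ the $x$-axis in $\mathbb{R}^{2}$, meeting at $\bar{x}=(1,0)$; the parameter values are arbitrary choices in the admissible ranges.

```python
import numpy as np

a, a1, a2 = 0.9, 1.3, 1.3   # GAP parameters (arbitrary admissible values)

def S(x):
    """GAP step for M = unit circle, N = x-axis in R^2 (smooth near (1, 0))."""
    pN = np.array([x[0], 0.0])               # projection onto N
    y = (1 - a1) * x + a1 * pN               # relaxed projection Pi_N^{a1}
    pM = y / np.linalg.norm(y)               # projection onto M
    z = (1 - a2) * y + a2 * pM               # relaxed projection Pi_M^{a2}
    return (1 - a) * x + a * z

xbar = np.array([1.0, 0.0])                  # point in M ∩ N

# Central finite-difference Jacobian of S at xbar
h = 1e-5
J = np.zeros((2, 2))
for i in range(2):
    e = np.zeros(2)
    e[i] = h
    J[:, i] = (S(xbar + e) - S(xbar - e)) / (2 * h)

# Tangent-space prediction from Lemma 5: T_M = span{e2}, T_N = span{e1}
PT_M = np.diag([1 - a2, 1.0])                # relaxed projector onto T_M
PT_N = np.diag([1.0, 1 - a1])                # relaxed projector onto T_N
S_T = (1 - a) * np.eye(2) + a * PT_M @ PT_N
```

The numerical Jacobian of $S$ at $\bar{x}$ agrees with $S_{\mathrm{T}(\bar{x})}$ up to the finite-difference error.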
###### Proposition 3
Let ${\mathcal{M}},{\mathcal{N}}$ satisfy Assumption 1 at
$\bar{x}\in{\mathcal{M}}\cap{\mathcal{N}}$ and the parameters of the GAP
operator $S$ satisfy Assumption 2 case B1 or B2. Then
$\displaystyle\mathrm{T}_{{\mathcal{M}}\cap{\mathcal{N}}}(\bar{x})=\mathrm{T}_{{\mathcal{M}}}(\bar{x})\cap\mathrm{T}_{{\mathcal{N}}}(\bar{x})={\rm{fix}}S_{\mathrm{T}(\bar{x})}$ (13)
and
$\displaystyle\Pi_{{\rm{fix}}S_{\mathrm{T}(\bar{x})}}=S_{\mathrm{T}(\bar{x})}^{\infty}.$
(14)
Proof. The first equality follows from Assumption 1. From Lemma 3, under
Assumption 2 case B1 and B2, we know that
${\rm{fix}}S_{\mathrm{T}(\bar{x})}=\mathrm{T}_{{\mathcal{M}}}(\bar{x})\cap\mathrm{T}_{{\mathcal{N}}}(\bar{x})$,
and from non-expansiveness of $S_{\mathrm{T}(\bar{x})}$ and [6, Corollary
2.7], we have that
$\Pi_{{\rm{fix}}S_{\mathrm{T}(\bar{x})}}=S_{\mathrm{T}(\bar{x})}^{\infty}$.
$\Box$
We next prove that the convergence rate of $S^{k}(x)$ to the intersection
tends to the rate $\gamma(S_{\mathrm{T}(\bar{x})})$ as the initial point gets
closer to the intersection and the number of iterations $k$ increases.
Figure 1: Illustration of manifolds ${\mathcal{M}},{\mathcal{N}}$ and the
approximation by tangent planes at a point
$\bar{x}\in{\mathcal{M}}\cap{\mathcal{N}}$.
###### Theorem 4
Let (${\mathcal{M}},{\mathcal{N}}$) satisfy Assumption 1 at
$\bar{x}\in{\mathcal{M}}\cap{\mathcal{N}}$ and the parameters of the GAP
operator $S$ satisfy Assumption 2 case B1 or B2. Then
1.
for all
$c>\left\|S_{\mathrm{T}(\bar{x})}-\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})\cap
T_{{\mathcal{N}}}(\bar{x})}\right\|$, where
$S_{\mathrm{T}(\bar{x})}\coloneqq(1-\alpha)I+\alpha\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})}^{\alpha_{2}}\Pi_{\mathrm{T}_{{\mathcal{N}}}(\bar{x})}^{\alpha_{1}}$,
there exists some $\eta>0$ so that for all $x\in{\mathcal{B}}_{\eta}(\bar{x})$
$\left\|S(x)-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x)\right\|\leq
c\left\|x-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x)\right\|.$ (15)
2.
for all $\mu_{\bar{x}}\in(\gamma(S_{\mathrm{T}(\bar{x})}),1)$ there exists
$N\in\mathbb{N}$, such that for any $k\geq N$
$\limsup_{x\rightarrow\bar{x},x\not\in{\mathcal{M}}\cap{\mathcal{N}}}\frac{\left\|S^{k}(x)-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x)\right\|}{\left\|x-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x)\right\|}\leq\mu_{\bar{x}}^{k}.$
(16)
Proof. Let $x_{r}$ be any point $x_{r}\not\in{\mathcal{M}}\cap{\mathcal{N}}$,
close enough to $\bar{x}$, such that Lemma 4 is satisfied. Denote
$\bar{x}_{r}=\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{r})$. Since
$\bar{x}_{r}\in{\mathcal{M}}\cap{\mathcal{N}}$ we trivially have
$S\bar{x}_{r}=\bar{x}_{r}$.
Moreover, $S$ and $\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}$ are $\mathcal{C}^{1}$
around $\bar{x}$ by Lemma 4. By [13, Eq (3.8.1), Thm 3.8.1], a
$\mathcal{C}^{1}$ function $f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ at a
point $a\in\mathbb{R}^{n}$ can be approximated as
$\displaystyle f(x)-f(y)=\mathrm{J}_{f}(a)(x-y)+\|x-y\|\psi(x,y),\text{ where
}\lim_{x,y\rightarrow a}\psi(x,y)=0,$
at $x,y\in\mathbb{R}^{n}$. Using this, with
$f(x)=S(x)-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x)$, at
$x=x_{r},y=\bar{x}_{r},a=\bar{x}$ we get
$\displaystyle S(x_{r})-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{r})$
$\displaystyle=(\mathrm{J}_{S}(\bar{x})-\mathrm{J}_{\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}}(\bar{x}))(x_{r}-\bar{x}_{r})+\|x_{r}-\bar{x}_{r}\|\psi(x_{r},\bar{x}_{r}),$
(17) $\displaystyle\text{ where
}\lim_{x_{r},\bar{x}_{r}\rightarrow\bar{x}}\psi(x_{r},\bar{x}_{r})=0.$
We can replace the Jacobians by noting that Lemma 5, Lemma 1 and Assumption 1
A2 at $\bar{x}$ implies
$\displaystyle\mathrm{J}_{S}(\bar{x})-\mathrm{J}_{\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}}(\bar{x})=S_{\mathrm{T}(\bar{x})}-\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})\cap
T_{{\mathcal{N}}}(\bar{x})}$
where
$S_{\mathrm{T}(\bar{x})}=(1-\alpha)I+\alpha\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})}^{\alpha_{2}}\Pi_{\mathrm{T}_{{\mathcal{N}}}(\bar{x})}^{\alpha_{1}}$.
Using this equality in (17), taking the norm of both sides, applying the
triangle inequality and Cauchy-Schwarz, and dividing by
$\|x_{r}-\bar{x}_{r}\|$ results in
$\frac{\left\|S(x_{r})-\bar{x}_{r}\right\|}{\left\|x_{r}-\bar{x}_{r}\right\|}\leq\left\|S_{\mathrm{T}(\bar{x})}-\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})\cap
T_{{\mathcal{N}}}(\bar{x})}\right\|+\|\psi(x_{r},\bar{x}_{r})\|,\text{ if
}x_{r}\neq\bar{x}_{r}.$ (18)
Continuity of $\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}$ around $\bar{x}$ means
that
$\psi(x_{r},\bar{x}_{r})=\psi(x_{r},\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{r}))\rightarrow
0$ as $x_{r}\rightarrow\bar{x}$, so for any
$c>\left\|S_{\mathrm{T}(\bar{x})}-\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})\cap
T_{{\mathcal{N}}}(\bar{x})}\right\|$, there exists some $\eta>0$ so that
$\forall
x_{r}\in{\mathcal{B}}_{\eta}(\bar{x}):\quad\left\|S(x_{r})-\bar{x}_{r}\right\|\leq
c\left\|x_{r}-\bar{x}_{r}\right\|.$ (19)
This proves part 1 of the theorem.
Similarly for $S^{k}$, since
$S(\bar{x})=S_{\mathrm{T}(\bar{x})}(\bar{x})=\bar{x}$, the chain rule gives
$\mathrm{J}_{S^{k}}(\bar{x})=\left(\mathrm{J}_{S}(\bar{x})\right)^{k}=S_{\mathrm{T}(\bar{x})}^{k},$
so by the same argument as above we conclude
$\frac{\left\|S^{k}(x_{r})-\bar{x}_{r}\right\|}{\left\|x_{r}-\bar{x}_{r}\right\|}\leq\left\|S^{k}_{\mathrm{T}(\bar{x})}-\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})\cap
T_{{\mathcal{N}}}(\bar{x})}\right\|+\|\psi(x_{r},\bar{x}_{r})\|,\text{ if
}x_{r}\neq\bar{x}_{r}.$ (20)
From Proposition 3 we have that $\Pi_{\mathrm{T}_{{\mathcal{M}}}(\bar{x})\cap
T_{{\mathcal{N}}}(\bar{x})}=S_{\mathrm{T}(\bar{x})}^{\infty}$ and thus
$\displaystyle\frac{\left\|S^{k}(x_{r})-\bar{x}_{r}\right\|}{\left\|x_{r}-\bar{x}_{r}\right\|}\leq\left\|S_{\mathrm{T}(\bar{x})}^{k}-S^{\infty}_{\mathrm{T}(\bar{x})}\right\|+\|\psi(x_{r},\bar{x}_{r})\|,\text{
if }x_{r}\neq\bar{x}_{r}.$
Continuity of $\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}$ around
$\bar{x}=\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(\bar{x})$, with
$\bar{x}_{r}=\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{r})$, implies
$\displaystyle\limsup_{x_{r}\rightarrow\bar{x},x_{r}\not\in{\mathcal{M}}\cap{\mathcal{N}}}\frac{\left\|S^{k}(x_{r})-\bar{x}_{r}\right\|}{\left\|x_{r}-\bar{x}_{r}\right\|}$
$\displaystyle\leq\left\|S_{\mathrm{T}(\bar{x})}^{k}-S^{\infty}_{\mathrm{T}(\bar{x})}\right\|.$
Using the results in [19] with Definitions 5, 6, 7, and Facts 2, 3 implies
that for any $\mu_{\bar{x}}$ with
$\gamma(S_{\mathrm{T}(\bar{x})})<\mu_{\bar{x}}$ there exists $N\in\mathbb{N}$
so that for all $k\geq N$
$\left\|S_{\mathrm{T}(\bar{x})}^{k}-S_{\mathrm{T}(\bar{x})}^{\infty}\right\|\leq\mu_{\bar{x}}^{k}.$
We conclude that for any
$\mu_{\bar{x}}\in(\gamma(S_{\mathrm{T}(\bar{x})}),1)$, there exists $N$ such
that for all $k\geq N$
$\limsup_{x\rightarrow\bar{x},x\not\in{\mathcal{M}}\cap{\mathcal{N}}}\frac{\left\|S^{k}(x)-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x)\right\|}{\left\|x-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x)\right\|}\leq\mu_{\bar{x}}^{k},$
(21)
which proves part 2 of the theorem. $\Box$
It remains to show that the sequence of iterates actually converges. To do
this, we first show that
$\|S_{\mathrm{T}(\bar{x})}-\Pi_{\mathrm{T}_{\mathcal{M}}(\bar{x})\cap\mathrm{T}_{\mathcal{N}}(\bar{x})}\|<1$.
###### Lemma 6
Let $\alpha,\alpha_{1},\alpha_{2}$ satisfy Assumption 2 case B1 or B2, and let
${\mathcal{M}},{\mathcal{N}}$ satisfy Assumption 1 at
$\bar{x}\in{\mathcal{M}}\cap{\mathcal{N}}$. Then
$\sigma(S_{\mathrm{T}(\bar{x})})\coloneqq\|S_{\mathrm{T}(\bar{x})}-\Pi_{\mathrm{T}_{\mathcal{M}}(\bar{x})\cap\mathrm{T}_{\mathcal{N}}(\bar{x})}\|<1$
(22)
where
$S_{\mathrm{T}(\bar{x})}=\alpha\Pi^{\alpha_{2}}_{\mathrm{T}_{\mathcal{M}}(\bar{x})}\Pi^{\alpha_{1}}_{\mathrm{T}_{\mathcal{N}}(\bar{x})}+(1-\alpha)I$
Proof. First note that
$\Pi_{\mathrm{T}_{\mathcal{M}}(\bar{x})\cap\mathrm{T}_{\mathcal{N}}(\bar{x})}=\Pi_{\text{Fix}S_{\mathrm{T}(\bar{x})}}=S_{\mathrm{T}(\bar{x})}^{\infty}$
by Proposition 3. Proposition 2 therefore gives that
$\displaystyle\|S_{\mathrm{T}(\bar{x})}-S_{\mathrm{T}(\bar{x})}^{\infty}\|\leq\max($
$\displaystyle\|S_{1}-S_{1}^{\infty}\|,|1-\alpha_{2}(1-\alpha)|,$
$\displaystyle|\alpha+(1-\alpha)(1-\alpha_{1})(1-\alpha_{2})|,|1-\alpha|),$
where $S_{1}$ is a block diagonal matrix with blocks
$S_{1_{i}}=(1-\alpha)I+\alpha T_{1_{i}}$, where $T_{1_{i}}$ are defined in (7)
as
$T_{1_{i}}=\begin{pmatrix}1-\alpha_{1}s_{i}^{2}&\alpha_{1}c_{i}s_{i}\\\
\alpha_{1}(1-\alpha_{2})c_{i}s_{i}&(1-\alpha_{2})(1-\alpha_{1}c_{i}^{2})\end{pmatrix},$
where $c_{i}=\cos(\theta_{i}),s_{i}=\sin(\theta_{i})$ for each principal angle
$\theta_{i}$. Under Assumption 2 case B1 or B2 we have
$|1-\alpha_{2}(1-\alpha)|<1$,
$|\alpha+(1-\alpha)(1-\alpha_{1})(1-\alpha_{2})|<1$ and $|1-\alpha|<1$. It
remains to show that
$\|S_{1}-S_{1}^{\infty}\|=\max_{i}{\|S_{1_{i}}-S_{1_{i}}^{\infty}\|}<1$. We
now look at each block $S_{1_{i}}$ corresponding to each of the principal
angles $\theta_{i}$. Each block with $\theta_{i}=0$ becomes
$\displaystyle S_{1_{i}}$ $\displaystyle=\alpha
T_{1_{i}}+(1-\alpha)I=\begin{pmatrix}1&0\\\
0&\alpha(1-\alpha_{1})(1-\alpha_{2})+(1-\alpha)\end{pmatrix}$ $\displaystyle
S_{1_{i}}^{\infty}$ $\displaystyle=\begin{pmatrix}1&0\\\ 0&0\end{pmatrix},$
so the corresponding singular values are $0$ and
$|\alpha(1-\alpha_{1})(1-\alpha_{2})+(1-\alpha)|<1$. The remaining cases are
$\theta_{i}\in(0,\pi/2]$ for which
$(S_{1_{i}})^{\infty}=\Pi_{{\rm{fix}}S_{1_{i}}}=0$, so the quantity to bound
is the largest singular value
$\|S_{1_{i}}-S_{1_{i}}^{\infty}\|=\|S_{1_{i}}\|=\|\alpha
T_{1_{i}}+(1-\alpha)I\|$. From the triangle inequality and non-expansiveness
of $T_{1_{i}}$ we get $\|\alpha
T_{1_{i}}+(1-\alpha)I\|\leq\alpha\|T_{1_{i}}\|+(1-\alpha)\leq 1$, with
equality only if $\|T_{1_{i}}\|=1$, so it suffices to show that
$\|T_{1_{i}}\|<1$ when $\theta_{i}\in(0,\pi/2]$. To this end, we consider
$\|T_{1_{i}}\|^{2}=\max(\textrm{eig}(T_{1_{i}}T_{1_{i}}^{\top}))$ and study
the eigenvalues of $T_{1_{i}}T_{1_{i}}^{\top}$. After
simplifying with the identity $s_{i}^{2}+c_{i}^{2}=1$ we get
$\displaystyle T_{1_{i}}T_{1_{i}}^{\top}$
$\displaystyle=\begin{pmatrix}1-2\alpha_{1}s_{i}^{2}+\alpha_{1}^{2}s_{i}^{2}&(2-\alpha_{1})\alpha_{1}(1-\alpha_{2})c_{i}s_{i}\\\
(2-\alpha_{1})\alpha_{1}(1-\alpha_{2})c_{i}s_{i}&(1-\alpha_{2})^{2}(1-2\alpha_{1}c_{i}^{2}+\alpha_{1}^{2}c_{i}^{2})\end{pmatrix}$
$\displaystyle=:\begin{pmatrix}a&b\\\ c&d\end{pmatrix}.$
For any of these eigenvalues to be $1$ it must be that
$\displaystyle\det\begin{pmatrix}a-1&b\\\ c&d-1\end{pmatrix}=0,$
i.e.,
$\displaystyle 0$ $\displaystyle=1-a-d+ad-bc.$ (23)
Simplifying the expressions yields the following identities
$\displaystyle 1-a-d$
$\displaystyle=\alpha_{1}s_{i}^{2}(2-\alpha_{1})-(1-\alpha_{2})^{2}(1-2\alpha_{1}c_{i}^{2}+\alpha_{1}^{2}c_{i}^{2})$
$\displaystyle ad$
$\displaystyle=(1-\alpha_{2})^{2}(\alpha_{1}^{2}c_{i}^{2}s_{i}^{2}(4-4\alpha_{1}+\alpha_{1}^{2})+(1-\alpha_{1})^{2})$
$\displaystyle bc$
$\displaystyle=(1-\alpha_{2})^{2}\alpha_{1}^{2}c_{i}^{2}s_{i}^{2}(4-4\alpha_{1}+\alpha_{1}^{2})$
$\displaystyle ad-bc$ $\displaystyle=(1-\alpha_{1})^{2}(1-\alpha_{2})^{2}$
and thus
$\displaystyle 1-a-d+ad-bc$
$\displaystyle=\alpha_{1}s_{i}^{2}(2-\alpha_{1})-(1-\alpha_{2})^{2}(1-2\alpha_{1}c_{i}^{2}+\alpha_{1}^{2}c_{i}^{2})$
$\displaystyle\quad+(1-\alpha_{1})^{2}(1-\alpha_{2})^{2}$
$\displaystyle=s_{i}^{2}\alpha_{1}(2-\alpha_{1})-(1-\alpha_{2})^{2}(2\alpha_{1}(1-c_{i}^{2})+\alpha_{1}^{2}(c_{i}^{2}-1))$
$\displaystyle=s_{i}^{2}\alpha_{1}(2-\alpha_{1})-(1-\alpha_{2})^{2}\alpha_{1}s_{i}^{2}(2-\alpha_{1})$
$\displaystyle=s_{i}^{2}\alpha_{1}\alpha_{2}(2-\alpha_{1})(2-\alpha_{2}).$
So from (23), for the largest eigenvalue to be $1$ it must be that
$\displaystyle 0$
$\displaystyle=\sin(\theta_{i})^{2}\alpha_{1}\alpha_{2}(2-\alpha_{1})(2-\alpha_{2}).$
Within the ranges $\alpha_{1},\alpha_{2}\in(0,2)$ and $\theta_{i}\in(0,\pi/2]$
we have
$\displaystyle\sin(\theta_{i})^{2}\alpha_{1}\alpha_{2}(2-\alpha_{1})(2-\alpha_{2})>0,$
which leads to
$\max(\text{eig}(T_{1_{i}}T_{1_{i}}^{\top}))=\|T_{1_{i}}\|^{2}<1$, and thus
$\|S_{1_{i}}\|<1$. This completes the proof for case B1 from Assumption 2.
Now consider case B2 from Assumption 2, where either $\alpha_{1}=2$ or
$\alpha_{2}=2$, i.e. $\|T_{1_{i}}\|=1$, but $\alpha\in(0,1)$, and assume that
also $\|S_{1_{i}}\|=1$. From compactness of the unit sphere in
$\mathbb{R}^{n}$ and continuity of the norm, the definition of the operator
norm gives some $v$ with $\|v\|=1$ such that $\|S_{1_{i}}v\|=1$. But then
$1=\|S_{1_{i}}v\|^{2}=\|\alpha T_{1_{i}}v+(1-\alpha)v\|^{2}$. Since the
squared norm is strongly convex, for any $\alpha\in(0,1)$ with
$T_{1_{i}}v\neq v$ we get the contradiction $\|\alpha
T_{1_{i}}v+(1-\alpha)v\|^{2}<\alpha\|T_{1_{i}}v\|^{2}+(1-\alpha)\|v\|^{2}\leq
1$. This leaves the case $T_{1_{i}}v=v$, which means that $v$ is a fixed
point of $T_{1_{i}}$; but the only fixed point is $v=0$, which does not
satisfy $\|v\|=1$. Thus, there is no $v$ with $\|v\|=1$ such that
$\|S_{1_{i}}v\|=1$, and therefore $\|S_{1_{i}}\|<1$. This concludes the
proof. $\Box$
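For linear subspaces the bound $\sigma(S_{\mathrm{T}})<1$ of Lemma 6 can be probed directly. The sketch below (ours) samples parameters from ranges that we take to lie in case B1 (an assumption; the precise ranges are those of Assumption 2) for a pair of subspaces with trivial intersection, in which case $S^{\infty}=0$ and $\sigma(S_{\mathrm{T}})=\|S\|$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Subspaces of R^4 with nonzero principal angles, so U ∩ V = {0}.
theta1, theta2 = 0.4, 1.1
U = np.zeros((4, 2))
U[0, 0] = U[1, 1] = 1.0
V = np.array([[np.cos(theta1), 0.0],
              [0.0, np.cos(theta2)],
              [np.sin(theta1), 0.0],
              [0.0, np.sin(theta2)]])
P_U, P_V = U @ U.T, V @ V.T
I = np.eye(4)

sigmas = []
for _ in range(100):
    a = rng.uniform(0.05, 1.0)                # alpha in (0, 1]
    a1, a2 = rng.uniform(0.05, 1.95, size=2)  # alpha1, alpha2 in (0, 2)
    S = (1 - a) * I + a * ((1 - a2) * I + a2 * P_U) @ ((1 - a1) * I + a1 * P_V)
    sigmas.append(np.linalg.norm(S, 2))       # sigma = ||S - S^inf|| = ||S||
```

Every sampled operator norm stays strictly below one, as the lemma predicts.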
We are now ready to show that the algorithm will locally converge to some
point in the intersection with the contraction factor in Lemma 6. The proof is
similar to that in [27], where the authors show the result for the special
case of alternating projections.
###### Theorem 5
Let (${\mathcal{M}},{\mathcal{N}}$) satisfy Assumption 1 at
$\bar{x}\in{\mathcal{M}}\cap{\mathcal{N}}$, and $S$ in Definition 14 satisfy
Assumption 2 case B1 or B2. If the initial point $x_{0}$ is close enough to
$\bar{x}$ then the GAP method
$x_{k+1}=Sx_{k}$
is well defined. Moreover, the sequence $(x_{k})_{k\in\mathbb{N}}$ converges
to some point $x^{*}\in{\mathcal{M}}\cap{\mathcal{N}}$, and for every
$\mu_{\bar{x}}\in(\sigma(S_{\mathrm{T}(\bar{x})}),1)$, there exists a
$\beta>0$ such that
$\|x_{k}-x^{*}\|\leq\beta\mu_{\bar{x}}^{k}.$ (24)
Proof. By Lemma 6 we have
$\sigma(S_{\mathrm{T}(\bar{x})})=\|S_{\mathrm{T}(\bar{x})}-\Pi_{\mathrm{T}_{\mathcal{M}}(\bar{x})\cap\mathrm{T}_{\mathcal{N}}(\bar{x})}\|<1$.
Let $c\in(0,1)$ be such that
$\|S_{\mathrm{T}(\bar{x})}-\Pi_{\mathrm{T}_{\mathcal{M}}(\bar{x})\cap\mathrm{T}_{\mathcal{N}}(\bar{x})}\|<c<1$
and choose $\eta$ such that $Sx$ and $\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x)$
are well defined by Theorem 4 for $x\in B_{\eta}(\bar{x})$ and so that Theorem
4.1 is satisfied, i.e.
$\forall x\in
B_{\eta}(\bar{x}),\quad\|Sx-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x)\|\leq
c\|x-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x)\|.$ (25)
Let the initial point $x_{0}\in{\mathcal{B}}_{\delta}(\bar{x})$ where
$\delta\coloneqq\eta/(2\sum_{k=0}^{\infty}c^{k})=\eta(1-c)/2<\eta$ and define
$\bar{x}_{k}:=\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{k})$. By the choice of
$\eta$, if $x_{k}\in{\mathcal{B}}_{\eta}(\bar{x})$ then $\bar{x}_{k}$ and
$x_{k+1}$ are well defined. We now show the following results by induction:
$\displaystyle\|x_{k}-\bar{x}\|$ $\displaystyle\leq
2\delta\sum_{i=0}^{k}c^{i}$ (H0) $\displaystyle\|x_{k}-\bar{x}_{k}\|$
$\displaystyle\leq\delta c^{k}$ (H1)
$\displaystyle\|\bar{x}_{k}-\bar{x}_{k-1}\|$ $\displaystyle\leq 2\delta c^{k}$
(H2) $\displaystyle\|\bar{x}_{k}-\bar{x}\|$ $\displaystyle\leq
2\delta\sum_{i=0}^{k}c^{i}$ (H3)
where we note that $2\delta\sum_{i=0}^{k}c^{i}\leq\frac{2\delta}{1-c}=\eta$.
Case $k=0$: Let $\bar{x}_{-1}\coloneqq\bar{x}_{0}$. We have trivially
$\displaystyle\|x_{0}-\bar{x}\|$ $\displaystyle\leq\delta\leq 2\delta$
(H$0^{0}$) $\displaystyle\|x_{0}-\bar{x}_{0}\|$
$\displaystyle\leq\|x_{0}-\bar{x}\|\leq\delta$ (H$1^{0}$)
$\displaystyle\|\bar{x}_{0}-\bar{x}_{-1}\|$ $\displaystyle=0\leq 2\delta$
(H$2^{0}$) $\displaystyle\|\bar{x}_{0}-\bar{x}\|$ $\displaystyle\leq 2\delta.$
(H$3^{0}$)
Now assume that (H0)-(H3) hold up to some $k$. Then by the triangle
inequality, (25), (H1), and (H3) we get
$\displaystyle\|x_{k+1}-\bar{x}\|$
$\displaystyle\leq\|x_{k+1}-\bar{x}_{k}\|+\|\bar{x}_{k}-\bar{x}\|$
$\displaystyle\leq c\|x_{k}-\bar{x}_{k}\|+\|\bar{x}_{k}-\bar{x}\|\leq\delta
c^{k+1}+2\delta\sum_{i=0}^{k}c^{i}\leq 2\delta\sum_{i=0}^{k+1}c^{i}.$
(H$0^{+}$)
By the definition of the projection, (25), and (H$1)$ we get
$\displaystyle\|x_{k+1}-\bar{x}_{k+1}\|\leq\|x_{k+1}-\bar{x}_{k}\|\leq
c\|x_{k}-\bar{x}_{k}\|\leq\delta c^{k+1}.$ (H$1^{+}$)
Again, by the triangle inequality, the definition of projection and (H$1^{+}$)
$\displaystyle\|\bar{x}_{k+1}-\bar{x}_{k}\|\leq\|\bar{x}_{k+1}-x_{k+1}\|+\|x_{k+1}-\bar{x}_{k}\|\leq
2\|x_{k+1}-\bar{x}_{k}\|\leq 2\delta c^{k+1}$ (H$2^{+}$)
and by (H$2^{+}$) and (H3):
$\displaystyle\|\bar{x}_{k+1}-\bar{x}\|\leq\|\bar{x}_{k+1}-\bar{x}_{k}\|+\|\bar{x}_{k}-\bar{x}\|\leq
2\delta c^{k+1}+2\delta\sum_{i=0}^{k}c^{i}=2\delta\sum_{i=0}^{k+1}c^{i}.$
(H$3^{+}$)
By induction we have now shown that (H0)–(H3) must hold for all $k\geq 0.$
We now show that $\left(\bar{x}_{k}\right)_{k\in\mathbb{N}}$ is Cauchy. By the
triangle inequality, (25), and (H1):
$\displaystyle\|\bar{x}_{k+1}-\bar{x}_{k}\|$
$\displaystyle\leq\|\bar{x}_{k+1}-x_{k+1}\|+\|x_{k+1}-\bar{x}_{k}\|$
$\displaystyle\leq\|\bar{x}_{k+1}-x_{k+1}\|+c\|x_{k}-\bar{x}_{k}\|\leq\delta
c^{k+1}+\delta c^{k+1}\leq 2\delta c^{k+1}.$
Thus for any $p,k\in\mathbb{N}$ with $p>k$
$\|\bar{x}_{p}-\bar{x}_{k}\|\leq\sum_{i=k}^{p-1}\|\bar{x}_{i+1}-\bar{x}_{i}\|\leq
2\delta\sum_{i=k}^{p-1}c^{i+1}\leq 2\delta
c^{k+1}\sum_{i=0}^{\infty}c^{i}=\frac{2\delta}{1-c}c^{k+1},$
so the sequence is Cauchy. Therefore
$x^{*}=\lim_{p\rightarrow\infty}\bar{x}_{p}\in{\mathcal{M}}\cap{\mathcal{N}}$
exists and
$\|x^{*}-\bar{x}_{k}\|\leq\frac{2\delta}{1-c}c^{k+1}.$
Lastly, by the triangle inequality and (H1)
$\|x_{k}-x^{*}\|\leq\|x_{k}-\bar{x}_{k}\|+\|\bar{x}_{k}-x^{*}\|\leq\delta
c^{k}+\frac{2\delta}{1-c}c^{k+1}=\delta\frac{1+c}{1-c}c^{k},$
hence (24) holds with $\beta=\delta\frac{1+c}{1-c}$ and $\mu_{\bar{x}}=c$.
$\Box$
Theorem 5 implies that the sequence generated by the generalized alternating
projection algorithm converges to a point in the intersection when started
close enough. However, as is the case for the method of alternating
projections, the rate predicted by $\sigma(S_{\mathrm{T}(x^{*})})$ is very
conservative. We now show that the iterates converge to the intersection with
the faster rate $\gamma(S_{\mathrm{T}(x^{*})})$ from Definition 7. The theorem
and proof are similar to that in [27, Rem. 4], where the authors show it for
alternating projections.
###### Theorem 6
Let (${\mathcal{M}},{\mathcal{N}}$) satisfy Assumption 1 at
$\bar{x}\in{\mathcal{M}}\cap{\mathcal{N}}$, let the initial point $x_{0}$ be
close enough to $\bar{x}$, and the GAP operator $S$ from Definition 14 satisfy
Assumption 2 case B1 or B2. Further assume that
(${\mathcal{M}},{\mathcal{N}}$) satisfies Assumption 1 at the limit point
$x^{*}$ of the sequence $(x_{k})_{k\in\mathbb{N}}$ generated by the GAP method
$x_{k+1}=Sx_{k}.$
Then the convergence is R-linear to ${\mathcal{M}}\cap{\mathcal{N}}$ with any
rate $\mu_{x^{*}}\in(\gamma(S_{\mathrm{T}(x^{*})}),1)$. That is, for any
$\mu_{x^{*}}\in(\gamma(S_{\mathrm{T}(x^{*})}),1)$, there exists
$N\in\mathbb{N}$ such that
$d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{k})\leq\mu_{x^{*}}^{k},\quad\forall
k>N.$ (26)
Proof. We note that Theorem 5 establishes the existence of a limit point
$x^{*}$. Take any $\mu_{x^{*}}\in(\gamma(S_{\mathrm{T}(x^{*})}),1)$ and let
$\bar{\mu}_{x^{*}}=(\mu_{x^{*}}+\gamma(S_{\mathrm{T}(x^{*})}))/2$. Theorem 5
implies that eventually $x_{r}\in B_{\eta}(x^{*})$, and thus by Theorem 4.2,
with $\bar{\mu}_{x^{*}}\in(\gamma(S_{\mathrm{T}(x^{*})}),1)$, there exists
$N\in\mathbb{N}$ so that $\forall t>N$,
$\displaystyle d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{t+n})$
$\displaystyle\leq\left\|S^{t}x_{n}-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{n})\right\|$
$\displaystyle<\bar{\mu}_{x^{*}}^{t}\left\|x_{n}-\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{n})\right\|=\bar{\mu}_{x^{*}}^{t}d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{n}),$
as long as $x_{n}\not\in{\mathcal{M}}\cap{\mathcal{N}}$. By induction this
leads to
$d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{kt+n})<\bar{\mu}_{x^{*}}^{kt}d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{n}),\quad\forall
k=1,2,3,\dots.$ (27)
Now fix $t>N$ and assume that (26) does not hold, then there exists an
infinite sequence $r_{1}<r_{2}<\cdots$, all satisfying
$d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{r_{j}})>\mu_{x^{*}}^{r_{j}}.$ (28)
We now show that this is impossible and that the theorem therefore must hold.
By Lemma 9 (see Appendix A.1) we can select a sub-sequence
$\left(r_{k_{j}}\right)_{j\in\mathbb{N}}$ of
$\left(r_{j}\right)_{j\in\mathbb{N}}$ where we can write $r_{k_{j}}=a+b_{j}t$
for some $a\in\mathbb{N}$ and increasing sequence of integers
$\left(b_{j}\right)_{j\in\mathbb{N}}$, i.e. a sub-sequence in which all
indices differ by a multiple of $t$ iterations. Thus, picking
any $b$ so that $a+bt>N$, we have with $r_{k_{j}}=a+b_{j}t=a+bt+(b_{j}-b)t$
from (27) that
$d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{r_{k_{j}}})<\bar{\mu}_{x^{*}}^{(b_{j}-b)t}d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{a+bt}).$
Since $\bar{\mu}_{x^{*}}<\mu_{x^{*}}$ we can find a large enough $j$ so that
$\left(\frac{\bar{\mu}_{x^{*}}}{\mu_{x^{*}}}\right)^{(b_{j}-b)t}\leq\frac{\mu_{x^{*}}^{a+bt}}{d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{a+bt})}$
and thus
$d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{r_{k_{j}}})<\bar{\mu}_{x^{*}}^{(b_{j}-b)t}d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{a+bt})\leq\mu_{x^{*}}^{(b_{j}-b)t}\mu_{x^{*}}^{a+bt}=\mu_{x^{*}}^{r_{k_{j}}}.$
This contradicts (28), so the theorem must hold. $\Box$
###### Remark 5
For the case of the method of alternating projections
($\alpha=\alpha_{1}=\alpha_{2}=1$), we see that these results coincide with
those of [27]. In particular, the contraction rate is then given by
$\sigma(S_{\mathrm{T}(\bar{x})})=c(\mathrm{T}_{{\mathcal{M}}(\bar{x})},T_{{\mathcal{N}}(\bar{x})})$
and the limiting rate is
$\gamma(S_{\mathrm{T}(\bar{x})})=c^{2}(\mathrm{T}_{{\mathcal{M}}(\bar{x})},T_{{\mathcal{N}}(\bar{x})})$.
This corresponds to the rates $\cos(\theta_{F})$ and $\cos^{2}(\theta_{F})$
where $\theta_{F}$ is the Friedrichs angle of the corresponding tangent
planes.
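The alternating-projections rate $\cos^{2}(\theta_{F})$ in Remark 5 can be observed on a simple pair of manifolds. In the sketch below (ours, not from [27]), ${\mathcal{M}}$ is the unit circle and ${\mathcal{N}}$ a line through $(1,0)$ with slope angle $\varphi=\pi/6$; the tangents at $(1,0)$ then meet at the Friedrichs angle $\theta_{F}=\pi/2-\varphi$, so the predicted limiting rate is $\cos^{2}\theta_{F}=\sin^{2}\varphi=1/4$.

```python
import numpy as np

phi = np.pi / 6                                  # slope angle of the line N
p = np.array([1.0, 0.0])                         # the intersection point
d = np.array([np.cos(phi), np.sin(phi)])         # unit direction of N

proj_N = lambda x: p + np.dot(x - p, d) * d      # projection onto the line N
proj_M = lambda x: x / np.linalg.norm(x)         # projection onto the circle M

x = np.array([np.cos(0.1), np.sin(0.1)])         # start on M, close to (1, 0)
dists = []
for _ in range(12):
    x = proj_M(proj_N(x))                        # one alternating-projections step
    dists.append(np.linalg.norm(x - p))

ratio = dists[-1] / dists[-2]                    # empirical per-step contraction
predicted = np.sin(phi) ** 2                     # cos^2(theta_F) = 1/4
```

The observed per-step contraction settles on the predicted $\cos^{2}(\theta_{F})$ within a few iterations.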
We now show that the faster rate in Theorem 6 holds not only in distance to
the intersection, but also to a point
$x^{*}\in{\mathcal{M}}\cap{\mathcal{N}}$. A similar result can be found in [2]
for the alternating projections method.
###### Theorem 7
Let $({\mathcal{M}},{\mathcal{N}})$ satisfy Assumption 1 at
$\bar{x}\in{\mathcal{M}}\cap{\mathcal{N}}$, let the initial point $x_{0}$ be
close enough to $\bar{x}$, and the GAP operator $S$ from Definition 14 satisfy
Assumption 2 case B1 or B2. Further assume that
(${\mathcal{M}},{\mathcal{N}}$) satisfies Assumption 1 at the limit point
$x^{*}$ of the sequence $(x_{k})_{k\in\mathbb{N}}$ generated by the GAP method
$x_{k+1}=Sx_{k}.$
Then for every $\mu_{x^{*}}\in(\gamma(S_{\mathrm{T}(x^{*})}),1)$, there exists
$N\in\mathbb{N}$ such that for all $k\geq N$
$\|x_{k}-x^{*}\|\leq\mu_{x^{*}}^{k},$
or equivalently
$\limsup_{k\rightarrow\infty}\|x_{k}-x^{*}\|^{1/k}\leq\gamma(S_{\mathrm{T}(x^{*})}).$
Proof. Take any $\mu_{x^{*}}\in(\gamma(S_{\mathrm{T}(x^{*})}),1)$ and let
$\bar{\mu}=(\mu_{x^{*}}+\gamma(S_{\mathrm{T}(x^{*})}))/2\leq\mu_{x^{*}}$.
Clearly $\bar{\mu}\in(\gamma(S_{\mathrm{T}(x^{*})}),1)$, so we know from
Theorem 6 that there exists $N$ such that
$d_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{k})=\|x_{k}-\bar{x}_{k}\|\leq\bar{\mu}^{k},\quad\forall
k\geq N,$ (29)
where $\bar{x}_{k}\coloneqq\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}(x_{k})$. Pick
$c<1$ and $\eta$ so that Theorem 4.1 holds for $\bar{x}=x^{*}$. Since
$(x_{k})\rightarrow x^{*}$ there is some $M\geq N$ so that $x_{k}\in
B_{\eta}(x^{*})$ for all $k\geq M$ and thus by Theorem 4.1
$\left\|x_{k+1}-\bar{x}_{k}\right\|\leq
c\left\|x_{k}-\bar{x}_{k}\right\|,\quad\forall k\geq M.$ (30)
Using (29), (30) and the triangle inequality, for $k\geq M$ we get
$\displaystyle\ \|\bar{x}_{k+1}-\bar{x}_{k}\|$
$\displaystyle\leq\|\bar{x}_{k+1}-x_{k+1}\|+\|{x}_{k+1}-\bar{x}_{k}\|$
$\displaystyle\leq\|\bar{x}_{k+1}-x_{k+1}\|+c\|{x}_{k}-\bar{x}_{k}\|\leq\bar{\mu}^{k+1}+c\bar{\mu}^{k}$
$\displaystyle=\bar{\mu}^{k+1}(1+\frac{c}{\bar{\mu}}).$ (31)
By continuity of $\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}$ around $x^{*}$, the
point $\bar{x}^{*}=\lim_{k\rightarrow\infty}\bar{x}_{k}$ exists. Using the
triangle inequality and (31) for $k\geq M$ we get
$\displaystyle\|\bar{x}_{k}-\bar{x}^{*}\|$
$\displaystyle\leq\sum_{i=k}^{\infty}\|\bar{x}_{i+1}-\bar{x}_{i}\|\leq\sum_{i=k}^{\infty}\bar{\mu}^{i+1}(1+\frac{c}{\bar{\mu}})$
(32)
$\displaystyle=(1+\frac{c}{\bar{\mu}})\bar{\mu}^{k+1}\sum_{i=0}^{\infty}\bar{\mu}^{i}$
(33)
$\displaystyle\leq(1+\frac{c}{\bar{\mu}})\frac{1}{1-\bar{\mu}}\bar{\mu}^{k+1}=\frac{\bar{\mu}+c}{1-\bar{\mu}}\bar{\mu}^{k}.$
(34)
By continuity of $\Pi_{{\mathcal{M}}\cap{\mathcal{N}}}$ we also have
$x^{*}=\bar{x}^{*}$ since $x^{*}\in{\mathcal{M}}\cap{\mathcal{N}}$. Again,
using the triangle inequality, (29) and (34) for $k\geq M$
$\displaystyle\|x_{k}-x^{*}\|$
$\displaystyle\leq\|x_{k}-\bar{x}_{k}\|+\|\bar{x}_{k}-x^{*}\|$ (35)
$\displaystyle\leq\bar{\mu}^{k}+\frac{\bar{\mu}+c}{1-\bar{\mu}}\bar{\mu}^{k}=\frac{1+c}{1-\bar{\mu}}\bar{\mu}^{k}.$
(36)
Lastly, since $\bar{\mu}<\mu_{x^{*}}$, there is some $L\geq M$ so that for all
$k\geq L$
$\|x_{k}-x^{*}\|\leq\frac{1+c}{1-\bar{\mu}}\bar{\mu}^{k}\leq\mu_{x^{*}}^{k}.$
$\Box$
We note that the bound $\gamma(S_{\mathrm{T}(x^{*})})$ on the local linear rate is tight, in the sense that it cannot be improved without adding more assumptions or changing the algorithm. This follows from the fact that the worst-case rate is attained in the setting of affine sets, which is covered by this theorem.
As shown in Theorem 3, to optimize the bound on the convergence rate
$\gamma(S_{\mathrm{T}(x^{*})})$ from Theorem 7, in the case where the relative
dimensions of the tangent planes are unknown, the parameters should be chosen
as
$\displaystyle\alpha=1,\quad\alpha_{1}=\alpha_{2}=\alpha^{*}\coloneqq\frac{2}{1+\sin{\theta_{F}}},$
(37)
where $\theta_{F}$ is the Friedrichs angle between the tangent planes $\mathrm{T}_{{\mathcal{M}}}(x^{*})$ and $\mathrm{T}_{{\mathcal{N}}}(x^{*})$.
## 6 Convex sets
We now show how the convergence results of GAP on manifolds can be extended to
GAP on convex sets in some cases. We first note that the GAP method is known to converge to some point in the intersection when the sets are convex, see e.g. [18], so the question that remains is the convergence rate. One way to extend the results in this paper to convex sets is to show that the iterates eventually behave exactly as if the projections were made onto smooth manifolds. One approach to do this is to partition a convex set into locally
smooth manifolds. This can be done for many convex sets, as illustrated in
Example 2.
###### Example 2
Consider the convex set $C=\\{(x,y,z)\mid x^{2}+y^{2}\leq z^{2},0\leq z\leq 1\\}$. The set can be partitioned into the following five locally smooth manifolds: $C_{1}=\text{int}C$, $C_{2}=\\{(x,y,z)\mid x^{2}+y^{2}=z^{2},0<z<1\\}$, $C_{3}=\\{(x,y,1)\mid x^{2}+y^{2}<1\\}$, $C_{4}=\\{(x,y,1)\mid x^{2}+y^{2}=1\\}$, $C_{5}=\\{(0,0,0)\\}$.
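The partition in Example 2 can be made concrete with a small membership test. The following Python sketch (the tolerance parameter is an illustrative choice of ours, not part of the example) assigns a point of $C$ to one of the five manifolds:

```python
def classify(p, tol=1e-12):
    """Assign a point of C = {x^2 + y^2 <= z^2, 0 <= z <= 1} to one of C1..C5.

    tol is an illustrative numerical tolerance, not part of Example 2.
    """
    x, y, z = p
    r2 = x * x + y * y
    assert r2 <= z * z + tol and -tol <= z <= 1 + tol, "point is not in C"
    if abs(z) <= tol:
        return "C5"                                  # the apex {(0, 0, 0)}
    if abs(z - 1) <= tol:
        return "C4" if abs(r2 - 1) <= tol else "C3"  # top rim / open top disc
    return "C2" if abs(r2 - z * z) <= tol else "C1"  # lateral surface / interior

print(classify((0.3, 0.4, 0.5)))  # C2: the point lies on the lateral surface
```
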
There is plenty of literature on this type of identification of surfaces. For
example, in [29] the authors study the Douglas–Rachford algorithm for
partially smooth functions. However, the assumptions do not generally apply to
convex feasibility problems since all reformulations into the framework will
either be non-smooth or have vanishing gradients at the boundaries.
For the case of alternating projections on convex sets, the projections will
always lie on the boundary of the sets until the problem is solved. The local
convergence rate therefore follows trivially if the boundaries of these sets
satisfy the regularity assumptions at the intersection.
However, this is not the case for GAP in general because of the (over-)relaxed projections. Even in the case of polyhedral sets, identification of affine sets is not guaranteed, as we show with an example in Section 6.2.
We therefore show the results under smoothness assumptions, for a slightly
restricted set of parameters. This set of parameters does however include the
parameters found by optimizing the rate in Theorem 7.
###### Lemma 7
Let $A$ be a closed solid convex set in $\mathbb{R}^{n}$ with
${\mathcal{C}}^{2}$ smooth boundary around $\bar{x}\in\text{bd}\,A$. Then
there exists a $\delta>0$ such that for all
$x\in{\mathcal{B}}_{\delta}(\bar{x})\setminus A$
$\displaystyle\Pi_{A}^{\alpha}x\in\text{int}A,\,\,\forall\alpha\in(1,2].$
Proof. As noted in Remark 1, smoothness of $\text{bd}\,A$ implies that there
exists a neighborhood of $\bar{x}$ for which the outwards facing normal vector
$n(x)$ with $\|n(x)\|=1$ is unique for all $x\in\text{bd}\,A$ and that the
normal $n(x)$ is continuous around $\bar{x}$. Since $A$ is solid and smooth at $\bar{x}$, there is some $\zeta>0$ so that $\bar{x}-\beta n(\bar{x})\in\text{int}A$ for all $\beta\in(0,\zeta]$. We assume without loss of generality that $\zeta<1$ and fix some $\beta\in(0,\zeta)$. Since $\text{int}A$ is open, we can now choose a radius $\delta>0$ such that
$\mathcal{B}^{o}_{\delta}(\bar{x}-\beta n(\bar{x}))\subset\text{int}A.$ (38)
From continuity of $n(x)$ we have that there exists $\epsilon^{\prime}>0$ such
that for all $x\in\text{bd}\,A$
$\|x-\bar{x}\|\leq\epsilon^{\prime}\Rightarrow\|n(x)-n(\bar{x})\|\leq\delta.$
(39)
Now pick $0<\epsilon<\min(\delta(1-\beta),\beta,\epsilon^{\prime})$. By the
triangle inequality, for all
$x\in{\mathcal{B}}_{\epsilon}(\bar{x})\cap\text{bd}\,A$,
$\displaystyle\|(x-\beta n(x))-(\bar{x}-\beta n(\bar{x}))\|$
$\displaystyle\leq\|x-\bar{x}\|+\beta\|n(x)-n(\bar{x}))\|\leq\epsilon+\beta\delta$
$\displaystyle<\delta(1-\beta)+\beta\delta=\delta.$
Using this and (38),
$x-\beta n(x)\in\text{int}A\,,\forall
x\in{\mathcal{B}}_{\epsilon}(\bar{x})\cap\text{bd}\,A.$ (40)
Moreover, by convexity of $A$ and non-expansiveness [4, Prp. 4.16] of the
projection
$\Pi_{A}(x)\in{\mathcal{B}}_{\epsilon}(\bar{x}),\forall
x\in{\mathcal{B}}_{\epsilon}(\bar{x}).$ (41)
Hence, by (40), (41) and since $\Pi_{A}(x)\in\text{bd}\,(A)$ for $x\not\in A$
we have
$\Pi_{A}(x)-\beta n(\Pi_{A}(x))\in\text{int}A,\,\forall
x\in{\mathcal{B}}_{\epsilon}(\bar{x})\setminus A.$ (42)
Moreover, the projection operator satisfies
$\displaystyle n(\Pi_{A}(x))=\frac{x-\Pi_{A}(x)}{\|x-\Pi_{A}(x)\|},$
for $x\not\in A$ [4, Prp. 6.47]. By the definition of relaxed projection we
therefore have for $x\in{\mathcal{B}}_{\epsilon}(\bar{x})\setminus A$ that
$\Pi_{A}^{\alpha}(x)=\Pi_{A}(x)-(\alpha-1)\|\Pi_{A}(x)-x\|n(\Pi_{A}(x))$.
Noting that since $\alpha\in(1,2]$
$0<(\alpha-1)\|\Pi_{A}(x)-x\|\leq\epsilon<\beta<1,$
we conclude that $\Pi_{A}^{\alpha}(x)$ is a strict convex combination between
$\Pi_{A}(x)\in A$ and $\Pi_{A}(x)-\beta n(\Pi_{A}(x))\in\text{int}A$, i.e.
$\Pi_{A}^{\alpha}(x)=\gamma\Pi_{A}(x)+(1-\gamma)(\Pi_{A}(x)-\beta
n(\Pi_{A}(x)))$
where $\gamma\coloneqq 1-(\alpha-1)\|\Pi_{A}(x)-x\|/\beta\in(0,1)$, and
therefore $\Pi_{A}^{\alpha}(x)\in\text{int}A$. $\Box$
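Lemma 7 can be checked numerically on a simple solid set with smooth boundary, the closed unit disk. This Python sketch (set and test point chosen by us) verifies that the relaxed projection of a nearby exterior point lands strictly inside:

```python
import math

def relaxed_proj_disk(x, alpha):
    """Relaxed projection Pi_A^alpha onto the closed unit disk A in R^2."""
    nrm = math.hypot(*x)
    proj = x if nrm <= 1 else (x[0] / nrm, x[1] / nrm)       # Pi_A(x)
    # Pi_A^alpha(x) = (1 - alpha) x + alpha Pi_A(x)
    return tuple((1 - a) * xi + a * pi
                 for xi, pi, a in zip(x, proj, (alpha, alpha)))

x = (1.2, 0.5)                       # a point outside the disk
for alpha in (1.1, 1.5, 2.0):
    y = relaxed_proj_disk(x, alpha)
    assert math.hypot(*y) < 1        # strictly inside, as Lemma 7 predicts
```
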
### 6.1 Examples of convex sets
In this section we present some results on when the rate in Theorem 7 can be
applied to convex sets. We say that, for a convex set $A$, the algorithm has
_identified_ a manifold ${\mathcal{M}}\subset A$ at some iteration $k$, if
subsequent iterations would be identical when the set $A$ is replaced with
${\mathcal{M}}$. We partition a smooth convex set $A$ into two parts
$\text{bd}\,A$ and $\text{int}A$, and show that either $\text{bd}\,A$ or
$\text{int}A$ is identified.
###### Assumption 3 (Regularity of Convex Sets at Solution)
Let $A,B$ be two closed convex sets with $x^{*}\in A\cap B$. Assume that at
least one of the following holds
C1. $x^{*}\in\text{bd}\,A\cap\text{bd}\,B$ and $(\text{bd}\,A,\text{bd}\,B)$ satisfies Assumption 1 at the point $x^{*}$,
C2. $x^{*}\in\text{int}A\cap\text{bd}\,B$ where $\text{bd}\,B$ is $\mathcal{C}^{2}$-smooth around $x^{*}$,
C3. $x^{*}\in\text{bd}\,A\cap\text{int}B$ where $\text{bd}\,A$ is $\mathcal{C}^{2}$-smooth around $x^{*}$,
C4. $x^{*}\in\text{int}A\cap\text{int}B$.
We now introduce a definition of $S_{\mathrm{T}(x^{*})}$ in the setting of
convex sets to simplify the following statements on convergence rates.
###### Definition 15
For two convex sets $(A,B)$ that satisfy Assumption 3 at a point $x^{*}\in
A\cap B$, we define
$S_{\mathrm{T}(x^{*})}\coloneqq(1-\alpha)I+\alpha\Pi_{\mathrm{T}_{{\mathcal{M}}}(x^{*})}^{\alpha_{2}}\Pi_{\mathrm{T}_{{\mathcal{N}}}(x^{*})}^{\alpha_{1}}$
where we let
${\mathcal{M}}\coloneqq\begin{cases}\text{bd}\,A&\textrm{ if
}x^{*}\in\text{bd}\,A\\\ \text{int}A&\textrm{ if
}x^{*}\in\text{int}A\end{cases},\quad{\mathcal{N}}\coloneqq\begin{cases}\text{bd}\,B&\textrm{
if }x^{*}\in\text{bd}\,B\\\ \text{int}B&\textrm{ if
}x^{*}\in\text{int}B.\end{cases}$
We note that with the definition above, if $x^{*}\in\text{int}A$, then we get the corresponding set $\mathrm{T}_{{\mathcal{M}}}(x^{*})=\mathbb{R}^{n}$ and the projection operator $\Pi_{\mathrm{T}_{{\mathcal{M}}}(x^{*})}^{\alpha_{2}}=I$, and equivalently for $x^{*}\in\text{int}B$. The corresponding rate $\gamma(S_{\mathrm{T}(x^{*})})$ then reduces to one of $|1-\alpha_{2}|$, $|1-\alpha_{1}|$ or $|1-\alpha_{1}||1-\alpha_{2}|$ according to Theorem 1.
###### Theorem 8
Let $(A,B)$ be solid convex sets with $A\cap B\neq\emptyset$, and let $\alpha=1$, $1<\alpha_{1},\alpha_{2}<2$ in the GAP algorithm (1). Then the
iterations converge to some point $x_{k}\rightarrow x^{*}\in A\cap B$. If the
sets ($A,B$) satisfy Assumption 3 at the point $x^{*}$, then either the
problem is solved in finite time, or eventually the algorithm will identify
the sets ($\text{bd}\,A$, $\text{bd}\,B$) and converge R-linearly with any
rate $\mu\in(\gamma(S_{\mathrm{T}(x^{*})}),1)$ to
$x^{*}\in\text{bd}\,A\cap\text{bd}\,B$.
Proof. We know that $x_{k}\rightarrow x^{*}$ for some point $x^{*}$ from
convexity of $A$ and $B$ [18, Prp. 3]. We first show that the problem is
solved in a finite number of iterations unless
$x^{*}\in\text{bd}\,A\cap\text{bd}\,B$.
Assume $x^{*}\in\text{int}A\cap\text{int}B$. Then there is some open ball
around $x^{*}$ that is contained in $A\cap B$. By convergence of
$(x_{k})_{k\in\mathbb{N}}$, there is some $k$ such that $x_{k}$ is in this
ball, and we have convergence in finite time.
Assume $x^{*}\in\text{bd}\,A\cap\text{int}B$. Let $\delta$ be such that Lemma
7 is satisfied for $(A,x^{*}$) and so that
${\mathcal{B}}_{\delta}(x^{*})\subset B$. Then there is a $k$ such that
$x_{k}\in{\mathcal{B}}_{\delta}(x^{*})$. If $x_{k}\in A\cap B$ the problem is
solved in finite time. If not, then $x_{k}\in B\setminus A$, so trivially
$\Pi_{B}^{\alpha_{1}}x_{k}=x_{k}$, and by Lemma 7 we get
$x_{k+1}=\Pi_{A}^{\alpha_{2}}x_{k}\in\text{int}A$. By non-expansiveness of
$\Pi_{A}^{\alpha_{2}}\Pi_{B}^{\alpha_{1}}$, we have
$x_{k+1}\in{\mathcal{B}}_{\delta}(x^{*})\subset B$, so $x_{k+1}\in A\cap B$,
and the problem is solved in finite time.
Assume $x^{*}\in\text{int}A\cap\text{bd}\,B$ and let $\delta$ be such that
Lemma 7 is satisfied for $(B,x^{*}$), and so that
${\mathcal{B}}_{\delta}(x^{*})\subset A$. Eventually
$x_{k}\in{\mathcal{B}}_{\delta}(x^{*})$ for some $k$. If $x_{k}\in B$ the
problem is solved. If not, then $x_{k}\in A\setminus B$, but then
$\Pi^{\alpha_{1}}_{B}x_{k}\in B$ by Lemma 7. Again, by non-expansiveness of
$\Pi_{B}^{\alpha_{1}}$ we have
$\Pi^{\alpha_{1}}_{B}x_{k}\in{\mathcal{B}}_{\delta}(x^{*})\subset A$ so
$x_{k+1}=\Pi^{\alpha_{1}}_{B}x_{k}\in A\cap B$ and the problem is solved in
finite time.
Now consider the case where $x^{*}\in\text{bd}\,A\cap\text{bd}\,B$. Choose
$\delta_{A}$ and $\delta_{B}$ so that Lemma 7 is satisfied for $(A,x^{*})$ and
$(B,x^{*})$ respectively and let $\delta=\min(\delta_{A},\delta_{B})$. Since
$x_{k}\rightarrow x^{*}$ there exists $N\in\mathbb{N}$ such that
$x_{k}\in{\mathcal{B}}_{\delta}(x^{*})$ for all $k>N$. By Lemma 7, we then
have $x_{k+1}\in A$. If $x_{k+1}\in A\cap B$ the problem is solved in finite
time, else $x_{k+1}\in A\setminus B$. Now consider any $j>N$ such that
$x_{j}\in A\setminus B$ with $x_{j}\in{\mathcal{B}}_{\delta}(x^{*})$. The
first projection $\Pi_{B}^{\alpha_{1}}(x_{j})$ is equivalent to projecting
onto the manifold $\text{bd}\,B$, and by Lemma 7, we have
$\Pi_{B}^{\alpha_{1}}(x_{j})\in B$. Either this point is also in $A$ in which
case the problem is solved in finite time, or the second projection
$\Pi_{A}^{\alpha_{2}}\Pi_{B}^{\alpha_{1}}(x_{j})$ is equivalent to projecting
onto the manifold $\text{bd}\,A$. By Lemma 7, we get $x_{j+1}\in A$. Thus either $x_{j+1}\in A\cap B$, in which case we have a solution in finite time, or $x_{j+1}\in A\setminus B$. By recursion over $j>N$, we
see that either the problem is solved in finite time, or $x_{j+1}\in
A\setminus B$ for all $j>N$, in which case each projection onto the sets is
equivalent to projecting onto their boundaries, i.e. the algorithm has
identified the manifolds. The rate then follows directly from Theorem 7.
$\Box$
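As a small numerical companion to Theorem 8 (a sketch with sets and parameters of our own choosing, not taken from the paper), consider GAP with $\alpha=1$ on the unit disk $A$ and the half-plane $B=\\{(u,v)\mid u\geq 0.6\\}$. Here $A\cap B$ has nonempty interior, and consistent with the theorem the iterates reach the intersection after finitely many steps:

```python
import math

def proj_disk(x):                      # Pi_A: projection onto the closed unit disk
    n = math.hypot(*x)
    return x if n <= 1 else (x[0] / n, x[1] / n)

def proj_halfplane(x):                 # Pi_B: projection onto {(u, v) : u >= 0.6}
    return (max(x[0], 0.6), x[1])

def relax(P, x, a):                    # relaxed projection Pi^a = (1 - a) I + a Pi
    p = P(x)
    return tuple((1 - a) * xi + a * pi for xi, pi in zip(x, p))

a1 = a2 = 1.4                          # relaxation parameters in (1, 2)
x = (-2.0, 1.5)
for _ in range(50):                    # GAP with alpha = 1: x <- Pi_A^a2 Pi_B^a1 x
    x = relax(proj_disk, relax(proj_halfplane, x, a1), a2)

# The limit lies in the interior of A cap B, so convergence is finite here.
assert math.hypot(*x) < 1 and x[0] > 0.6
```
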
###### Theorem 9
Let $A$ be a solid convex set, $B$ an affine set such that $A\cap
B\neq\emptyset$. Then $x_{k}\rightarrow x^{*}$ for some point $x^{*}\in A\cap
B$ for the GAP algorithm (1). If the sets ($A,B$) satisfy Assumption 3 at
$x^{*}$, then the iterates $x_{k+1}=Sx_{k}$ converge R-linearly with any rate
$\mu\in(\gamma(S_{\mathrm{T}(x^{*})}),1)$ to $x^{*}$.
Proof. This proof is similar to that of Theorem 8. The sequence
$(x_{k})_{k\in\mathbb{N}}$ converges to some $x^{*}\in A\cap B$ by convexity
of the sets. First assume that $x^{*}\in\text{int}A$. Then, since
$x_{k}\rightarrow x^{*}$ there exists $N$ such that $x_{j}\in A$ for all
$j>N$. The problem is then locally equivalent to that of $(\mathbb{R}^{n},B)$,
i.e. two subspaces.
If $x^{*}\in\text{bd}\,A$, then let $\delta$ be such that Lemma 7 is satisfied
for $(A,x^{*})$. Then by convergence to $x^{*}$, eventually
$x_{j}\in{\mathcal{B}}_{\delta}(x^{*})$ for all $j>N$. If
$\Pi_{B}^{\alpha_{1}}x_{j}\not\in A$ then $x_{j+1}\in\text{int}A$ by Lemma 7.
And if $\Pi_{B}^{\alpha_{1}}x_{j}\in A$, then $x_{j+1}\in A$ by the definition
of projection. So $x_{j+1}\in A$ for all $j>N$.
If also $\Pi_{B}^{\alpha_{1}}x_{l}\in A$ for some $l>j>N$, then since both
$x_{l}$ and $x_{l-1}$ are in $A$, we have
$x_{l}-x_{l-1}\in{\textrm{N}}_{B}(\Pi_{B}x_{l-1})$. From convexity of $A$ we
have that the segment between $x_{l}$ and $x_{l-1}$ must be contained in $A$,
so all subsequent iterations must be on this line segment. But then
$\Pi_{B}x_{l}=x^{*}$ and by assumption $x^{*}\in\text{bd}\,A$, so convexity of
$A$ implies that the whole segment must be in $\text{bd}\,A$. The algorithm
has thus identified $(\text{bd}\,A,B)$.
Otherwise, $\Pi_{B}^{\alpha_{1}}x_{j}\not\in A$ for all $j>N$, and the projection $\Pi_{A}^{\alpha_{2}}\Pi_{B}^{\alpha_{1}}x_{j}$ is equivalent to projecting onto the boundary $\text{bd}\,A$, i.e., the algorithm has
$(\text{bd}\,A,B)$. The rate then follows from Theorem 7 since $B$ is a smooth
manifold. $\Box$
We now introduce some regularity properties of convex sets and show how they
relate to the regularity of the manifolds corresponding to their boundaries.
###### Definition 16 (Subtransversality of sets)
[25, Thm. 1 (ii)]
Two sets $C,D$ are _subtransversal_ at $x^{*}$ if there exist $\alpha>0$ and
$\delta>0$ such that
$\displaystyle\alpha\textrm{d}_{C\cap
D}(x)\leq\max\\{\textrm{d}_{C}(x),\textrm{d}_{D}(x)\\}\quad\forall
x\in{\mathcal{B}}_{\delta}(x^{*}).$ (43)
$\mathrm{sr}[C,D](x^{*})$ is defined as the exact upper bound of all $\alpha$
such that (43) holds.
###### Definition 17 (Transversality of sets)
[25, Thm. 1 (ii)]
Two sets $C,D$ are _transversal_ at $x^{*}$ if there exist $\alpha>0$ and
$\delta>0$ such that
$\displaystyle\alpha\textrm{d}_{(C-x_{1})\cap(D-x_{2})}(x)\leq$
$\displaystyle\max\\{\textrm{d}_{C-x_{1}}(x),\textrm{d}_{D-x_{2}}(x)\\}$
$\displaystyle\quad\quad\forall
x\in{\mathcal{B}}_{\delta}(x^{*}),x_{1},x_{2}\in{\mathcal{B}}_{\delta}(0).$
(44)
$\mathrm{r}[C,D](x^{*})$ is defined as the exact upper bound of all $\alpha$
such that (44) holds. Equivalently, $(C,D)$ are transversal at $x^{*}$ if
${\textrm{N}}_{C}(x^{*})\cap(-{\textrm{N}}_{D}(x^{*}))=\\{0\\}$ [25, Thm. 2
(v)].
We note that the transversality condition
${\textrm{N}}_{C}(x^{*})\cap(-{\textrm{N}}_{D}(x^{*}))=\\{0\\}$ for two sets
$(C,D)$ coincides with Definition 12 of transversality when the sets are
smooth manifolds, since the normal cones are linear subspaces in this case
[22].
###### Definition 18 (Acute and obtuse intersection)
For two solid, closed, convex sets $(A,B)$ with smooth boundaries, we say that
the intersection is _acute_ at a point $x^{*}\in\text{bd}\,A\cap\text{bd}\,B$
if $\langle v_{1},v_{2}\rangle\leq 0$, where $v_{1},v_{2}$ are the unique
vectors such that
$v_{1}\in{\textrm{N}}_{A}(x^{*}),v_{2}\in{\textrm{N}}_{B}(x^{*}),\|v_{1}\|=\|v_{2}\|=1$.
Conversely, we say that the intersection is _obtuse_ if $\langle
v_{1},v_{2}\rangle>0$.
Note that _acute_ and _obtuse_ refer to the shape of the intersection, and not
the angle between the normals, for which the property is reversed.
###### Lemma 8
Let $A,B$ be solid, closed and convex sets in $\mathbb{R}^{n}$ with boundaries
$\text{bd}\,A,\text{bd}\,B$ that satisfy Assumption 1 at some point
$x^{*}\in\text{bd}\,A\cap\text{bd}\,B$ and assume that
$\mathrm{T}_{\text{bd}\,A}(x^{*})\neq\mathrm{T}_{\text{bd}\,B}(x^{*})$. Let
$\theta_{F}\in(0,\pi/2]$ be defined via
$\cos(\theta_{F})=c(\text{bd}\,A,\text{bd}\,B,x^{*})$. Then
1. the manifolds $(\text{bd}\,A,\text{bd}\,B)$ are transversal at $x^{*}$,
2. the sets $(A,B)$ are transversal at $x^{*}$, i.e. ${\textrm{N}}_{A}(x^{*})\cap(-{\textrm{N}}_{B}(x^{*}))=\\{0\\}$,
3. the sets $(A,B)$ are subtransversal at $x^{*}$ and the following inequalities hold
$\displaystyle\mathrm{r}[A,B](x^{*})\leq\mathrm{sr}[A,B](x^{*})\leq\begin{cases}\sin(\theta_{F}/2)\quad\textrm{ if }(A,B)\textrm{ acute at }x^{*}\\\ \cos(\theta_{F}/2)\quad\textrm{ if }(A,B)\textrm{ obtuse at }x^{*},\end{cases}$
4. $\sin(\theta_{F}/2)=\mathrm{r}[\text{bd}\,A,\text{bd}\,B](x^{*})$.
Furthermore, if the intersection of $(A,B)$ is acute at $x^{*}$ then
$\displaystyle\sin(\theta_{F}/2)=\mathrm{r}[\text{bd}\,A,\text{bd}\,B](x^{*})=\mathrm{r}[A,B](x^{*})=\mathrm{sr}[A,B](x^{*})$
otherwise
$\displaystyle\cos(\theta_{F}/2)=\mathrm{r}[A,B](x^{*})=\mathrm{sr}[A,B](x^{*}).$
Proof. The proofs follow the definitions and results on (sub-)transversality
of general sets from [24].
1: From smoothness of the manifolds $\text{bd}\,A,\text{bd}\,B$, the corresponding normal spaces are lines (one-dimensional subspaces), so trivially ${\textrm{N}}_{\text{bd}\,B}(x^{*})=-{\textrm{N}}_{\text{bd}\,B}(x^{*})$.
Moreover, since
$\mathrm{T}_{\text{bd}\,A}(x^{*})\neq\mathrm{T}_{\text{bd}\,B}(x^{*})$ we have
${\textrm{N}}_{\text{bd}\,A}(x^{*})\neq{\textrm{N}}_{\text{bd}\,B}(x^{*})$,
and therefore
${\textrm{N}}_{\text{bd}\,A}(x^{*})\cap(-{\textrm{N}}_{\text{bd}\,B}(x^{*}))=\\{0\\}.$
2: The normals to the sets $A,B$ at a point in their boundaries $x^{*}$
satisfy
${\textrm{N}}_{\text{bd}\,A}(x^{*})={\textrm{N}}_{A}(x^{*})\cup(-{\textrm{N}}_{A}(x^{*}))$
and correspondingly for $B$. Hence,
${\textrm{N}}_{A}(x^{*})\subset{\textrm{N}}_{\text{bd}\,A}(x^{*})$ and
$-{\textrm{N}}_{B}(x^{*})\subset{\textrm{N}}_{\text{bd}\,B}(x^{*})$, so from case 1 it follows that
${\textrm{N}}_{A}(x^{*})\cap(-{\textrm{N}}_{B}(x^{*}))=\\{0\\}$.
3: The first inequality follows directly from [25, Thm. 4 (i)]. For the second
inequality, let
$v_{1}\in{\textrm{N}}_{A}(x^{*})$,$v_{2}\in{\textrm{N}}_{B}(x^{*})$ be the
unique vectors with $\|v_{1}\|=\|v_{2}\|=1$, and define
$w=(v_{1}+v_{2})/\|v_{1}+v_{2}\|$. From case 2, we see that $v_{1}\neq-v_{2}$
and thus $\langle v_{1},v_{2}\rangle>-1$. Thus $\langle
w,v_{1}\rangle=(\langle v_{1},v_{2}\rangle+1)/\|v_{1}+v_{2}\|>0$ and similarly
$\langle w,v_{2}\rangle>0$. Since $A,B$ are convex sets,
$\mathrm{T}_{A}(x^{*})+x^{*}$ and $\mathrm{T}_{B}(x^{*})+x^{*}$ are supporting hyperplanes of the corresponding sets, and it follows from $\langle
w,v_{1}\rangle>0,\langle w,v_{2}\rangle>0$ that $x^{*}+\beta w$ is separated
from the sets $A$ and $B$ when $\beta>0$, i.e. $x^{*}+\beta w\not\in A\cup B$
for $\beta>0$. Moreover, by definition of $w$, we have $w\in
N_{A}(x^{*})+N_{B}(x^{*})\subset N_{A\cap B}(x^{*})$ where the second
inclusion holds trivially for convex sets. We can therefore conclude that
$\Pi_{A\cap B}(x^{*}+\beta w)=x^{*}$, and therefore
$\displaystyle\textrm{d}_{A\cap B}(x^{*}+\beta w)=\beta\|w\|=\beta.$ (45)
We now calculate an expression for $\textrm{d}_{A}(x^{*}+\beta w)$. Since
$x^{*}+\beta w\not\in A$, the projection onto $A$ is locally equivalent to
projecting onto the smooth manifold $\text{bd}\,A$. From Lemma 1 we get with
series expansion around $x^{*}$ that
$\displaystyle\Pi_{\text{bd}\,A}(x^{*}+\beta
w)=\Pi_{\text{bd}\,A}(x^{*})+\Pi_{\mathrm{T}_{\text{bd}\,A}(x^{*})}(\beta
w)+O(\beta^{2}),$
where $\Pi_{\text{bd}\,A}(x^{*})=x^{*}$. The projection of
$w=(v_{1}+v_{2})/\|v_{1}+v_{2}\|$ onto $\mathrm{T}_{\text{bd}\,A}(x^{*})$ is
given by
$\displaystyle\Pi_{\mathrm{T}_{\text{bd}\,A}(x^{*})}(w)$
$\displaystyle=w-\frac{\langle v_{1},w\rangle}{\|v_{1}\|^{2}}v_{1}=w-\langle
v_{1},w\rangle v_{1}$
and the distance $d_{A}(x^{*}+\beta w)$ is therefore
$\displaystyle d_{A}(x^{*}+\beta w)$
$\displaystyle=\|\Pi_{\text{bd}\,A}(x^{*}+\beta w)-(x^{*}+\beta w)\|$
$\displaystyle=\|\beta\Pi_{\mathrm{T}_{\text{bd}\,A}(x^{*})}(w)-\beta
w+O(\beta^{2})\|=\|\beta\langle v_{1},w\rangle v_{1}-O(\beta^{2})\|$
$\displaystyle=\beta\|\frac{1+\langle
v_{1},v_{2}\rangle}{\|v_{1}+v_{2}\|}v_{1}-O(\beta)\|,$ (46)
and in the same way for $B$: $d_{B}(x^{*}+\beta w)=\beta\|\frac{1+\langle
v_{1},v_{2}\rangle}{\|v_{1}+v_{2}\|}v_{2}-O(\beta)\|$.
By Definition 4 of the Friedrichs angle and Definition 13, we have
$\displaystyle\cos\theta_{F}$
$\displaystyle=c(\text{bd}\,A,\text{bd}\,B,x^{*})=c(\mathrm{T}_{\text{bd}\,A}(x^{*}),\mathrm{T}_{\text{bd}\,B}(x^{*}))$
$\displaystyle=c((\mathrm{T}_{\text{bd}\,A}(x^{*}))^{\perp},(\mathrm{T}_{\text{bd}\,B}(x^{*}))^{\perp}),$
where the last equality is well known, see e.g. [25, Def. 3]. Since
$(\mathrm{T}_{\text{bd}\,A}(x^{*}))^{\perp}={\textrm{N}}_{A}(x^{*})\cup(-{\textrm{N}}_{A}(x^{*}))=\\{\beta
v_{1}\mid\beta\in\mathbb{R}\\}$, and similarly for $B$, Definition 4 of the
Friedrichs angle results in that $\cos\theta_{F}=\max\\{\langle
v_{1},v_{2}\rangle,-\langle v_{1},v_{2}\rangle\\}$, i.e.
$\displaystyle\langle v_{1},v_{2}\rangle=\begin{cases}-\cos\theta_{F}&\textrm{
if }\langle v_{1},v_{2}\rangle\leq 0\\\ \cos\theta_{F}&\textrm{ if }\langle
v_{1},v_{2}\rangle\geq 0.\end{cases}$
Thus, by the definition of $\mathrm{sr}[A,B](x^{*})$ together with (45) and (46),
$\displaystyle\mathrm{sr}[A,B](x^{*})$
$\displaystyle\leq\lim_{\beta\rightarrow
0^{+}}\frac{\max(\textrm{d}_{A}(x^{*}+\beta w),\textrm{d}_{B}(x^{*}+\beta
w))}{\textrm{d}_{A\cap B}(x^{*}+\beta w)}$
$\displaystyle=\lim_{\beta\rightarrow
0^{+}}\max_{i\in\\{1,2\\}}\|\frac{1+\langle
v_{1},v_{2}\rangle}{\|v_{1}+v_{2}\|}v_{i}-O(\beta)\|$
$\displaystyle=\frac{1+\langle
v_{1},v_{2}\rangle}{\sqrt{\|v_{1}\|^{2}+2\langle
v_{1},v_{2}\rangle+\|v_{2}\|^{2}}}$
$\displaystyle=\begin{cases}\frac{1-\cos\theta_{F}}{\sqrt{2-2\cos\theta_{F}}}=\sqrt{1-\cos\theta_{F}}/\sqrt{2}=\sin(\theta_{F}/2)\,\,\,\textrm{
if }\langle v_{1},v_{2}\rangle\leq 0\\\
\frac{1+\cos\theta_{F}}{\sqrt{2+2\cos\theta_{F}}}=\sqrt{1+\cos\theta_{F}}/\sqrt{2}=\cos(\theta_{F}/2)\,\,\,\textrm{
if }\langle v_{1},v_{2}\rangle\geq 0.\end{cases}$
4: By [25, Prp. 8]
$\displaystyle\mathrm{r}_{\textrm{a}}[C,D](x)=\sup_{\begin{subarray}{c}n_{1}\in{\textrm{N}}_{C}(x),\,n_{2}\in{\textrm{N}}_{D}(x)\\\
\|n_{1}\|=\|n_{2}\|=1\end{subarray}}-\langle n_{1},n_{2}\rangle,$
where $\mathrm{r}_{\textrm{a}}[C,D](x)$ satisfies
$\mathrm{r}_{\textrm{a}}[C,D](x^{*})+2(\mathrm{r}[C,D](x^{*}))^{2}=1$.
Since $\text{bd}\,A,\text{bd}\,B$ are smooth manifolds, this results in
$\mathrm{r}_{\textrm{a}}[\text{bd}\,A,\text{bd}\,B](x^{*})=\cos(\theta_{F})$
by Definition 4 of the Friedrichs angle, since
${\textrm{N}}_{\text{bd}\,A}(x^{*})=-{\textrm{N}}_{\text{bd}\,A}(x^{*})$ and
equivalently for $\text{bd}\,B$. Thus, since $\theta_{F}\in[0,\pi/2]$ and
$\mathrm{r}[\text{bd}\,A,\text{bd}\,B](x^{*})\geq 0$ holds by definition, we
have
$\mathrm{r}[\text{bd}\,A,\text{bd}\,B](x^{*})=\sqrt{(1-\cos\theta_{F})/2}=\sin(\theta_{F}/2)$
for all $\theta_{F}\in[0,\pi/2]$.
For $\mathrm{r}[A,B](x^{*})$ we use the same result, but the unit normal
vectors are unique in this case. When $\langle v_{1},v_{2}\rangle\leq 0$ we
have $\langle v_{1},v_{2}\rangle=-\cos\theta_{F}$ by definition of
$\theta_{F}$. We therefore get $\mathrm{r}_{\textrm{a}}[A,B]=\cos\theta_{F}$
and thus
$\mathrm{r}[A,B](x^{*})=\sqrt{(1-\cos\theta_{F})/2}=\sin(\theta_{F}/2)$.
In the same way, when $\langle v_{1},v_{2}\rangle\geq 0$ we have $\langle
v_{1},v_{2}\rangle=\cos\theta_{F}$, so
$\mathrm{r}_{\textrm{a}}[A,B]=-\cos\theta_{F}$ and
$\mathrm{r}[A,B](x^{*})=\sqrt{(1+\cos\theta_{F})/2}=\cos(\theta_{F}/2)$.
But we always have $\mathrm{r}[A,B]\leq\mathrm{sr}[A,B]$ [25, Thm. 4 (i)], so
together with case 3 we see that $\mathrm{sr}[A,B](x^{*})$ is bounded both
above and below by
$\displaystyle\sin(\theta_{F}/2)$ $\displaystyle\quad\textrm{ if }\langle
v_{1},v_{2}\rangle\leq 0$ $\displaystyle\cos(\theta_{F}/2)$
$\displaystyle\quad\textrm{ if }\langle v_{1},v_{2}\rangle\geq 0,$
which concludes the proof. $\Box$
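The trigonometric simplifications used in cases 3 and 4, together with the relation $\mathrm{r}_{\textrm{a}}+2\mathrm{r}^{2}=1$ from [25, Prp. 8], can be sanity-checked numerically with a short Python sketch:

```python
import math

for theta in (0.2, 0.7, 1.2, math.pi / 2):
    c = math.cos(theta)
    # Half-angle identities from case 3 of the proof of Lemma 8.
    assert math.isclose((1 - c) / math.sqrt(2 - 2 * c), math.sin(theta / 2))
    assert math.isclose((1 + c) / math.sqrt(2 + 2 * c), math.cos(theta / 2))
    # r_a + 2 r^2 = 1 with r_a = cos(theta) and r = sin(theta / 2).
    assert math.isclose(c + 2 * math.sin(theta / 2) ** 2, 1.0)
```
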
###### Remark 6
The regularity constants above are continuous with respect to the normals as
they approach the limit between acute and obtuse since $\langle
v_{1},v_{2}\rangle\rightarrow 0\Rightarrow\theta_{F}\rightarrow\pi/2$ and
$\sin(\pi/4)=\cos(\pi/4)=1/\sqrt{2}$.
The rates presented so far are stated either as a property of the operator
$S_{\mathrm{T}(x^{*})}$ or as a function of the Friedrichs angle $\theta_{F}$
between tangent planes at the intersection. In previous work on alternating
projections and similar algorithms for convex and non-convex sets, the rates
are often stated as a function of a linear regularity constant [26, 9]. We now
state the rate found by choosing the optimal relaxation parameters (10) in
terms of linear regularity.
###### Theorem 10
Let $A,B$ be two solid, closed, and convex sets in $\mathbb{R}^{n}$. Let
$x^{*}\in A\cap B$ be the limit point of the sequence
$(x_{k})_{k\in\mathbb{N}}\subset\mathbb{R}^{n}$ generated by the GAP algorithm (14),
and assume that
1. $x^{*}\in\text{bd}\,A\cap\text{bd}\,B$,
2. $(\text{bd}\,A,\text{bd}\,B)$ satisfies Assumption 1 at the point $x^{*}$.
Then the sets are $\hat{\kappa}$-linearly regular, i.e., there exist $\delta>0$ and $\hat{\kappa}>0$ such that
$\textrm{d}_{A\cap
B}(x)\leq\hat{\kappa}\max(\textrm{d}_{A}(x),\textrm{d}_{B}(x)),\quad\forall
x\in{\mathcal{B}}_{\delta}(x^{*}).$ (47)
Let $\kappa$ be the infimum of all such $\hat{\kappa}$ and assume that
$\kappa\geq\sqrt{2}$, then the GAP algorithm with parameters
$\alpha=1,\quad\alpha_{1}=\alpha_{2}=2\left(\frac{\kappa}{\sqrt{\kappa^{2}-1}+1}\right)^{2}$
(48)
will converge to $x^{*}$ with R-linear rate $\mu$ for any $\mu\in(\gamma,1)$,
where
$\gamma=\left(\frac{\sqrt{\kappa^{2}-1}-1}{\sqrt{\kappa^{2}-1}+1}\right)^{2}=1-4\frac{\sqrt{\kappa^{2}-1}}{\kappa^{2}+2\sqrt{\kappa^{2}-1}}.$
(49)
Proof. Existence of a limit point $x^{*}$ for convex sets follows from the previous results or [18]. First assume that
$T_{\text{bd}\,A}(x^{*})=T_{\text{bd}\,B}(x^{*})$. Then by simple
dimensionality and Assumption A2 it follows that $\text{bd}\,A=\text{bd}\,B$
in some neighborhood of $x^{*}$. It must therefore be that either $A\cap
B=A=B$ or $A\cap B=\text{bd}\,A\cap\text{bd}\,B$ in some neighborhood of
$x^{*}$. The problem is then trivial, but $\textrm{d}_{A\cap
B}(x)=\textrm{d}_{A}(x)=\textrm{d}_{B}(x)$ for all
$x\in{\mathcal{B}}_{\delta}(x^{*})$, so $\kappa=1$. This falls outside the
scope of the rest of the result.
Now assume instead that $T_{\text{bd}\,A}(x^{*})\neq T_{\text{bd}\,B}(x^{*})$.
The sets ($A,B$) are therefore transversal by Lemma 8 case 2, and since
${\textrm{N}}_{A}(x^{*})\neq{\textrm{N}}_{B}(x^{*})$, we have $\theta_{F}>0$.
Since $1/\hat{\kappa}=\mathrm{sr}[A,B]\leq 1/\sqrt{2}$ we have by Lemma 8 case
4 that
$\displaystyle
1/\hat{\kappa}=\mathrm{r}[\text{bd}\,A,\text{bd}\,B]=\mathrm{sr}[A,B]=\sin(\theta_{F}/2).$
The optimal parameters (10) are therefore, with
$\theta_{F}=2\arcsin(1/\kappa)$
$\displaystyle\alpha_{1}=\alpha_{2}=\frac{2}{1+\sin\theta_{F}}=\frac{2}{1+\sin(2\arcsin(1/\kappa))}=2\left(\frac{\kappa}{\sqrt{\kappa^{2}-1}+1}\right)^{2}\in[1,2).$
By Theorem 9 and Theorem 3, the convergence to $x^{*}$ is R-linear with rate
$\mu$ for any $\mu\in(\gamma(S_{\mathrm{T}(x^{*})}),1)$ where
$\displaystyle\gamma(S_{\mathrm{T}(x^{*})})$
$\displaystyle=\frac{1-\sin\theta_{F}}{1+\sin\theta_{F}}=\frac{1-\sin(2\arcsin(1/\kappa))}{1+\sin(2\arcsin(1/\kappa))}=\left(\frac{\sqrt{\kappa^{2}-1}-1}{\sqrt{\kappa^{2}-1}+1}\right)^{2}$
$\displaystyle=1-4\frac{\sqrt{\kappa^{2}-1}}{\kappa^{2}+2\sqrt{\kappa^{2}-1}}.$
$\Box$
###### Remark 7
The regularity parameter $\kappa$ is always in the range $\kappa\in[1,\infty)$. In particular, for ill-conditioned problems, i.e. large $\kappa$, the rate above behaves as $\gamma\approx 1-\frac{4}{\kappa}$.
can be compared to the worse rate of alternating projections of
$\gamma=1-\frac{4}{\kappa^{2}}$ as found in [26] under linear regularity
assumptions for non-convex sets. We note that the difference in rates is
because the algorithm is better, not because of better analysis, in
particular, we assume convexity. The contraction rate for the Douglas–Rachford
algorithm, presented in [31] for general convex sets is
$\sqrt{1-\kappa^{-2}}$, which can be approximated for large $\kappa$ by
$1-\frac{1}{2\kappa^{2}}$.
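For concreteness, the three rates discussed in Remark 7 can be compared numerically. This Python sketch (function names are ours) evaluates the GAP rate (49), the alternating projections rate from [26], and the Douglas–Rachford rate from [31]:

```python
import math

def gap_rate(k):
    """Rate (49) for the optimally tuned GAP."""
    s = math.sqrt(k * k - 1)
    return ((s - 1) / (s + 1)) ** 2

def ap_rate(k):
    """Alternating projections rate 1 - 4/k^2 from [26]."""
    return 1 - 4 / k ** 2

def dr_rate(k):
    """Douglas-Rachford contraction rate sqrt(1 - 1/k^2) from [31]."""
    return math.sqrt(1 - 1 / k ** 2)

for k in (3, 10, 100):
    # The two expressions in (49) agree, and GAP gives the smallest rate.
    s = math.sqrt(k * k - 1)
    assert math.isclose(gap_rate(k), 1 - 4 * s / (k * k + 2 * s))
    assert gap_rate(k) < ap_rate(k) < dr_rate(k)
```

For example, at $\kappa=10$ the GAP rate is roughly $0.67$ per iteration versus $0.96$ for alternating projections, illustrating the gap between $1-4/\kappa$ and $1-4/\kappa^{2}$.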
###### Theorem 11
Let $A,B$ be two solid, closed, and convex sets in $\mathbb{R}^{n}$ that
satisfy Assumption 3 at every point $x^{*}\in A\cap B$. Assume that there is a
$\hat{\kappa}>0$ such that the sets $A,B$ are $\hat{\kappa}$-linearly regular
at every point $x^{*}\in A\cap B$, i.e., for every $x^{*}$ there exists
$\delta_{x^{*}}>0$ such that
$\textrm{d}_{A\cap
B}(x)\leq\hat{\kappa}\max(\textrm{d}_{A}(x),\textrm{d}_{B}(x)),\quad\forall
x\in{\mathcal{B}}_{\delta_{x^{*}}}(x^{*}).$ (50)
Let $\kappa=\max(\hat{\kappa},\sqrt{2})$, then the GAP algorithm with
parameters
$\alpha=1,\quad\alpha_{1}=\alpha_{2}=2\left(\frac{\kappa}{\sqrt{\kappa^{2}-1}+1}\right)^{2}$
(51)
will converge to $x^{*}$ with R-linear rate $\mu$ for any $\mu\in(\gamma,1)$,
where
$\gamma=\left(\frac{\sqrt{\kappa^{2}-1}-1}{\sqrt{\kappa^{2}-1}+1}\right)^{2}=1-4\frac{\sqrt{\kappa^{2}-1}}{\kappa^{2}+2\sqrt{\kappa^{2}-1}}.$
(52)
Proof. We note that $\kappa=\sqrt{2}$ implies that $\alpha_{1}=\alpha_{2}=1$,
otherwise $\alpha_{1}=\alpha_{2}\in(1,2)$. Convergence to some $x^{*}\in A\cap
B$ follows from convexity, and if $x^{*}\not\in\text{bd}\,A\cap\text{bd}\,B$,
then Theorem 8 states that the convergence is in finite time, for which the
rate holds trivially. The remaining case is
$x^{*}\in\text{bd}\,A\cap\text{bd}\,B$. If
$\mathrm{T}_{\text{bd}\,A}(x^{*})=\mathrm{T}_{\text{bd}\,B}(x^{*})$, then
$\text{bd}\,A=\text{bd}\,B$ in some neighborhood of $x^{*}$ and the problem is
trivial with convergence in finite time.
Otherwise,
$\mathrm{T}_{\text{bd}\,A}(x^{*})\neq\mathrm{T}_{\text{bd}\,B}(x^{*})$ and
consequently the Friedrichs angle satisfies $\cos(\theta_{F})>0$. First
consider the case where the angle between the sets $(A,B)$ is obtuse at
$x^{*}$. Let $\delta_{1}$ be such that Lemma 7 holds, i.e.
$\Pi_{A}^{\alpha_{1}}x\in A$ and $\Pi_{B}^{\alpha_{2}}x\in B$, for any
$x\in{\mathcal{B}}_{\delta_{1}}(x^{*})$. Let $c=\langle
n_{A}(x^{*}),n_{B}(x^{*})\rangle$, where $n_{A}(x^{*}),n_{B}(x^{*})$ are the
outward facing unit normals for the sets $A,B$ at the point $x^{*}$, which by
definition of obtuse satisfies $c>0$. By smoothness of the boundaries of $A$
and $B$, and continuity of their normals, there is some $\delta_{2}>0$ such
that
$\displaystyle\langle n_{A}(x),n_{B}(y)\rangle>0,\forall
x\in{\mathcal{B}}_{\delta_{2}}(x^{*})\cap\text{bd}\,A,y\in{\mathcal{B}}_{\delta_{2}}(x^{*})\cap\text{bd}\,B,$
(53)
where $n_{A}(x)$, $n_{B}(y)$ are the outward facing unit normals to $A$ and
$B$ at $x$ and $y$ respectively. Now, by convergence of $x_{k}$ to $x^{*}$,
there is some $k$ such that $x_{k}\in{\mathcal{B}}_{\delta}(x^{*})$ where
$\delta=\min(\delta_{1},\delta_{2})$. Thus by Lemma 7 and non-expansiveness of
the projectors, we have $\Pi_{A}^{\alpha_{1}}x_{k}\in A$ and
$x_{k+1}=\Pi_{B}^{\alpha_{2}}\Pi_{A}^{\alpha_{1}}x_{k}\in B$. If $x_{k+1}\in
A$, then the problem is solved in finite time, and the result is trivial,
otherwise $x_{k+1}\in B\setminus A$. There must therefore exist a point
$\bar{x}$ on the line between $x_{k+1}\in B\setminus A$ and
$\Pi_{A}^{\alpha_{1}}x_{k}\in A$ such that $\bar{x}\in\text{bd}\,A$, moreover
it must satisfy $\langle
n_{A}(\bar{x}),x_{k+1}-\Pi_{A}^{\alpha_{1}}x_{k}\rangle>0$ since the line is
pointing out of the set $A$. But by the definition of the projection and
$x_{k+1}$, we have
$\displaystyle\frac{x_{k+1}-\Pi_{A}^{\alpha_{1}}x_{k}}{\|x_{k+1}-\Pi_{A}^{\alpha_{1}}x_{k}\|}=-n_{B}(\tilde{x}),$
where $\tilde{x}=\Pi_{B}\Pi_{A}^{\alpha_{1}}x_{k}\in\text{bd}\,B$. This leads to
$\langle n_{A}(\bar{x}),n_{B}(\tilde{x})\rangle<0$. Since both $\bar{x}$
and $\tilde{x}$ are in ${\mathcal{B}}_{\delta}(x^{*})$ by non-expansiveness,
this contradicts (53), i.e. $x_{k+1}\in B\setminus A$ cannot hold,
so $x_{k+1}\in A\cap B$, the convergence is finite, and the result holds
trivially.
The remaining case is when $(A,B)$ is acute at $x^{*}$. By Lemma 8 case 4, we
have $\mathrm{sr}[A,B](x^{*})=\sin(\theta_{F}/2)\leq 1/\sqrt{2}$, so by
definition of $\mathrm{sr}$ (Definition 16), it must hold that $\kappa\geq
1/\mathrm{sr}[A,B](x^{*})=1/\sin(\theta_{F}/2)\geq\sqrt{2}$. By Theorem 10, we
see that the optimal rate would have been achieved if
$\kappa=1/\sin(\theta_{F}/2)$, i.e. $\alpha_{1}=\alpha_{2}>\alpha^{*}$, or
equivalently that the parameters have been chosen as if $\theta_{F}$ was
smaller. But as seen in Remark 3, this still results in the sub-optimal rate
(52) based on this conservative $\kappa$. $\Box$
###### Remark 8
We note that the adaptive method proposed in [19] for estimating $\theta_{F}$
by the angle between the vectors $v_{1}=\Pi_{B}^{\alpha_{1}}x_{k}-x_{k}$ and
$v_{2}=\Pi_{A}^{\alpha_{1}}x_{k}-\Pi_{B}^{\alpha_{2}}\Pi_{A}^{\alpha_{1}}x_{k}$,
works very well in the setting of two convex sets $(A,B)$ with smooth
boundaries. This can be seen by observing that if $v_{1}/\|v_{1}\|=-n_{1}$ and
$v_{2}/\|v_{2}\|=n_{2}$, where $n_{1},n_{2}$ are normal vectors with unit
length to $A$ and $B$ at the point $x^{*}$, then the angle between them is
exactly $\theta_{F}$ in the acute case. And indeed, as long as the algorithm
has not already converged, we have $v_{1}/\|v_{1}\|\rightarrow-n_{1}$,
$v_{2}/\|v_{2}\|\rightarrow n_{2}$ as $x_{k}\rightarrow x^{*}$, by the
definition of the projections and continuity of the normals around $x^{*}$.
The estimate will therefore converge to $\theta_{F}$ as $x_{k}\rightarrow
x^{*}$.
### 6.2 Counter example
We now introduce a simple convex example, which illustrates that it is not
always possible to rely on finite identification of smooth manifolds for the
GAP algorithm 1, even in the case of convex polytopes.
Figure 2: Illustration of the problem with a cone $C$ and line $D$ from
Example 3. The iterates $p_{0},p_{1},p_{2},\dots$ are illustrated in red, the
normal cone to $C$ with dashed lines, and the rays through $(1,-\gamma)$ and
$(-1,-\gamma)$ are shown with blue dotted lines. As shown in the example, the
iterates stay on the dotted lines and alternate between projecting on the two
faces of $C$.
###### Example 3
Consider the convex feasibility problem ($C,D$) with $C=\left\\{(x,y)\mid
y\geq\left|x\right|\right\\}$, $D=\left\\{(x,y)\mid y=0\right\\}$ as
illustrated in Figure 2, with parameters $\alpha=1,\alpha_{1}=\alpha_{2}=1.5$
for the GAP algorithm 1. Let
$p_{0}=(1,-\gamma)$
where $\gamma=\frac{1}{12}\left(1+\sqrt{73}\right)\approx 0.795$. The GAP
algorithm will then alternate between projecting onto the surfaces
$\\{y=x,x>0\\}$ and $\\{y=-x,x<0\\}$.
Proof. The first projection point hits the boundary of the cone $C$ at
$\Pi_{C}p_{0}=\frac{1}{2}\left(1-\gamma,1-\gamma\right)$, which is easily seen
from $\Pi_{C}p_{0}-p_{0}=\frac{1}{2}(-1-\gamma,1+\gamma)\perp\Pi_{C}p_{0}$.
The relaxed projection point and the next iterate can then be calculated as
$\displaystyle\Pi_{C}^{\alpha_{1}}p_{0}=\frac{1}{4}\left(1-3\gamma,3-\gamma\right),\qquad
p_{1}=\Pi_{D}^{\alpha_{2}}\Pi_{C}^{\alpha_{1}}p_{0}=\frac{1}{8}\left(2-6\gamma,-3+\gamma\right).$
We note that $\gamma^{2}=\frac{1}{6}(\gamma+3)$, and simple arithmetic gives
$(p_{1})_{x}\gamma=\frac{1}{8}(2-6\gamma)\gamma=\frac{1}{8}(\gamma-3)=(p_{1})_{y}$.
So $p_{1}$ is simply $p_{0}$ scaled and flipped around the $y$ axis, i.e., it
is of the form $p_{1}=\beta\left(-1,-\gamma\right)$ with $\beta=\frac{6\gamma-2}{8}\approx 0.35$. The next projection point
is therefore on the boundary of the cone $C$ with $x<0$, and because of the
symmetry around the $y$ axis, the next iterate is
$p_{2}=\beta^{2}\left(1,-\gamma\right).$
By linearity and induction, it is clear that the algorithm will not identify
any of the smooth surfaces $\\{y=x,x>0\\}$ or $\\{y=-x,x<0\\}$ but instead
alternate between them. $\Box$
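The alternation claimed in Example 3 is easy to check numerically. The following Python sketch (the helper names `proj_C`, `proj_D` and `relax` are ours, not from the paper) implements the exact projections onto $C$ and $D$ and runs two GAP steps from $p_{0}$:

```python
from math import sqrt

def proj_C(p):
    """Exact projection onto the cone C = {(x, y) : y >= |x|}."""
    x, y = p
    if y >= abs(x):          # already inside C
        return (x, y)
    if y <= -abs(x):         # in the polar cone: project to the apex
        return (0.0, 0.0)
    if x > 0:                # project onto the ray {y = x, x >= 0}
        t = (x + y) / 2
        return (t, t)
    t = (y - x) / 2          # project onto the ray {y = -x, x <= 0}
    return (-t, t)

def proj_D(p):
    """Projection onto the line D = {(x, y) : y = 0}."""
    return (p[0], 0.0)

def relax(P, a, p):
    """Relaxed projection P^a(p) = p + a * (P(p) - p)."""
    q = P(p)
    return (p[0] + a * (q[0] - p[0]), p[1] + a * (q[1] - p[1]))

# GAP iteration with alpha = 1, alpha_1 = alpha_2 = 1.5 as in Example 3.
a1 = a2 = 1.5
gamma = (1 + sqrt(73)) / 12
beta = (6 * gamma - 2) / 8            # contraction factor, approx 0.346

p0 = (1.0, -gamma)
p1 = relax(proj_D, a2, relax(proj_C, a1, p0))
p2 = relax(proj_D, a2, relax(proj_C, a1, p1))
```

Within floating-point precision the iterates satisfy $p_{1}=\beta(-1,-\gamma)$ and $p_{2}=\beta^{2}(1,-\gamma)$, confirming that the iterates stay on the two dotted rays of Figure 2 and alternate between the two faces of $C$.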
###### Remark 9
The example above shows that finite identification of either of the manifolds
$\\{(x,y)\mid y=x,x>0\\}$ and $\\{(x,y)\mid y=-x,x<0\\}$ does not occur for
every initial point. However, with some reasonable definition of smallest
angle, for example through the subregularity constant $\mathrm{sr}$, we would
have $\theta_{F}=\pi/4$, and the theory for subspaces would predict a worst
case rate $\gamma(S)=0.5$. It is notable that the convergence rate
$\beta\approx 0.35$ in the example is significantly better. It is therefore
still an open question whether the smallest angle sets an upper bound on the
rate, through the eigenvalues in Theorem 1, even for these problems.
## 7 Conclusions
We have shown that the known convergence rates for the GAP algorithm on affine
sets extend to local rates on smooth manifolds, and that the optimal
parameters and rates hold also in this setting. These rates are significantly
better than previous known rates for similar projection methods. We have also
shown how these results can be applied to generate linear convergence rates
for two smooth and solid convex sets, and how they can be connected to linear
regularity.
Since finite identification of smooth manifolds can not generally be assumed,
it remains to be shown how these results can be applied to general convex
sets.
## References
* [1] S. Agmon. The relaxation method for linear inequalities. Canadian Journal of Mathematics, 6(3):382–392, 1954.
* [2] F. Andersson and M. Carlsson. Alternating projections on nontangential manifolds. Constructive approximation, 38(3):489–525, 2013.
* [3] H. H. Bauschke and J. M. Borwein. On the convergence of von Neumann’s alternating projection algorithm for two sets. Set-Valued Analysis, 1(2):185–212, 1993.
* [4] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
* [5] H. H. Bauschke, J. Y. B. Cruz, T. T. A. Nghia, H. M. Phan, and X. Wang. The rate of linear convergence of the Douglas-Rachford algorithm for subspaces is the cosine of the Friedrichs angle. Journal of Approximation Theory, 185(0):63–79, 2014.
* [6] H. H. Bauschke, J. Y. B. Cruz, T. T. A. Nghia, H. M. Phan, and X. Wang. Optimal rates of linear convergence of relaxed alternating projections and generalized Douglas-Rachford methods for two subspaces. Numerical Algorithms, 73(1):33–76, 2016.
* [7] H. H. Bauschke, D. R. Luke, H. M. Phan, and X. Wang. Restricted Normal Cones and the Method of Alternating Projections: Applications. Set-Valued and Variational Analysis, 21(3):475–501, 2013.
* [8] H. H. Bauschke, D. R. Luke, H. M. Phan, and X. Wang. Restricted Normal Cones and the Method of Alternating Projections: Theory. Set-Valued and Variational Analysis, 21(3):431–473, 2013.
* [9] H. H. Bauschke, H. M. Phan, and X. Wang. The method of alternating relaxed projections for two nonconvex sets. Vietnam Journal of Mathematics, 42(4):421–450, 2014.
* [10] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
* [11] J. P. Boyle and R. L. Dykstra. A Method for Finding Projections onto the Intersection of Convex Sets in Hilbert Spaces, pages 28–47. Springer New York, New York, NY, 1986.
* [12] L. M. Bregman. Finding the common point of convex sets by the method of successive projection. Dokl Akad. Nauk SSSR, 162(3):487–490, 1965.
* [13] H. Cartan. Differential Calculus. Kershaw, 1971.
* [14] F. Deutsch. The Method of Alternating Orthogonal Projections, pages 105–121. Springer Netherlands, Dordrecht, 1992.
* [15] F. Deutsch. The Angle Between Subspaces of a Hilbert Space, pages 107–130. Springer Netherlands, Dordrecht, 1995.
* [16] J. Douglas and H. H. Rachford. On the numerical solution of heat conduction problems in two and three space variables. Trans. Amer. Math. Soc., 82:421–439, 1956.
* [17] D. Drusvyatskiy, A. D. Ioffe, and A. S. Lewis. Transversality and alternating projections for nonconvex sets. Found. Comput. Math., 15(6):1637–1651, Dec. 2015.
* [18] M. Fält and P. Giselsson. Line search for generalized alternating projections. In 2017 American Control Conference (ACC), pages 4637–4642, 2017.
* [19] M. Fält and P. Giselsson. Optimal convergence rates for generalized alternating projections. In 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pages 2268–2274, Dec 2017.
* [20] R. Glowinski and A. Marroco. Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité d'une classe de problèmes de Dirichlet non linéaires. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, 9:41–76, 1975.
* [21] L. G. Gubin, B. T. Polyak, and E. V. Raik. The method of projections for finding the common point of convex sets. USSR Computational Mathematics and Mathematical Physics, 7(6):1–24, 1967.
* [22] P. R. Halmos. Finite dimensional vector spaces. Number 7. Princeton University Press, 1947.
* [23] W. L. Hare and A. S. Lewis. Identifying active constraints via partial smoothness and prox-regularity. Journal of Convex Analysis, 11(2):251–266, 2004.
* [24] A. Y. Kruger. About regularity of collections of sets. Set-Valued Analysis, 14(2):187–206, 2006.
* [25] A. Y. Kruger, D. R. Luke, and N. H. Thao. Set regularities and feasibility problems. Mathematical Programming, 168(1-2):279–311, 2018.
* [26] A. S. Lewis, D. R. Luke, and J. Malick. Local linear convergence for alternating and averaged nonconvex projections. Foundations of Computational Mathematics, 9(4):485–513, 2009.
* [27] A. S. Lewis and J. Malick. Alternating projections on manifolds. Mathematics of Operations Research, 33(1):216–234, 2008.
* [28] A. S. Lewis and S. J. Wright. Identifying activity. SIAM Journal on Optimization, 21:597–614, 2011.
* [29] J. Liang, J. Fadili, G. Peyré, and R. Luke. Activity identification and local linear convergence of Douglas-Rachford/ADMM under partial smoothness. In Scale Space and Variational Methods in Computer Vision, pages 642–653, Cham, 2015. Springer International Publishing.
* [30] P. L. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. SIAM Journal on Numerical Analysis, 16(6):964–979, 1979.
* [31] D. R. Luke and A.-L. Martins. Convergence analysis of the relaxed Douglas–Rachford algorithm. SIAM Journal on Optimization, 30(1):542–584, 2020.
* [32] T. S. Motzkin and I. J. Schoenberg. The relaxation method for linear inequalities. Canadian Journal of Mathematics, 6(3):393–404, 1954.
* [33] D. Noll and A. Rondepierre. On local convergence of the method of alternating projections. Foundations of Computational Mathematics, 16, 12 2013.
* [34] J. von Neumann. Functional Operators. Volume II. The Geometry of Orthogonal Spaces. Princeton University Press: Annals of Mathematics Studies, 1950. Reprint of 1933 lecture notes.
## Appendix A Appendix
### A.1 Proof of Lemma 9
###### Lemma 9 (Infinite Sub-sequence)
Given any infinite sequence of increasing positive integers
$(r_{j})_{{j\in\mathbb{N}}}\in\mathbb{N}$, for any integer $n>0$ there exists
an infinite sub-sequence $(r_{j_{k}})_{k\in\mathbb{N}}$ where
$r_{j_{k}}=a+nb_{k},$
for some $a\in\mathbb{N}$ and some increasing sequence
$(b_{k})_{k\in\mathbb{N}}$ with $b_{k}\in\mathbb{N}$.
Proof. Fix $n$ and consider the finite collection of sets
$S_{i}=\\{v\in\mathbb{N}\mid v=i+nb,b\in\mathbb{N}\\}$, $i=0,\ldots,n-1$. We
have $\cup_{i=0,\ldots,n-1}S_{i}=\mathbb{N}$, so
$\cup_{i=0,\ldots,n-1}(S_{i}\cap\\{r_{j}\\}_{j})=\\{r_{j}\\}_{j\in\mathbb{N}}$
and thus one of the sets $(S_{i}\cap\\{r_{j}\\}_{j\in\mathbb{N}})$ must be
infinite. Let $a$ be the index so that
$(S_{a}\cap\\{r_{j}\\}_{j\in\mathbb{N}})$ is infinite. This is clearly a
subset of $\\{r_{j}\\}_{j\in\mathbb{N}}$ and by the definition of $S_{a}$ each
element is of the form $a+nb_{k}$ with $b_{k}\in\mathbb{N}$, and the proof is
complete. $\Box$
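The pigeonhole argument above can be illustrated on a finite prefix of a sequence. The Python sketch below (the function name `residue_subsequence` is ours) picks the most populated residue class modulo $n$ and extracts the corresponding indices $b_{k}$:

```python
from collections import Counter

def residue_subsequence(r, n):
    """Given a (finite prefix of an) increasing integer sequence r and a
    modulus n, return (a, bs) with every selected term equal to a + n*b,
    b in bs. This mirrors the proof: some residue class mod n must capture
    infinitely many terms of an infinite sequence."""
    counts = Counter(v % n for v in r)
    a = max(counts, key=counts.get)          # most populated residue class
    bs = [(v - a) // n for v in r if v % n == a]
    return a, bs

# Example: squares 1, 4, 9, ... are congruent to 0 or 1 mod 4, so with
# n = 4 the residue a = 1 (odd squares) captures a constant fraction.
r = [j * j for j in range(1, 50)]
a, bs = residue_subsequence(r, 4)
```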
### A.2 Proof of Theorem 2
Since $S=T$ with $\alpha=1$, we begin by showing that all eigenvalues of $T$
in Theorem 1 satisfy $|\lambda|\leq\gamma^{*}$. For convenience of notation we
introduce
$\displaystyle f(\theta)$
$\displaystyle\coloneqq\frac{1}{2}\left(2-\alpha_{1}-\alpha_{2}+\alpha_{1}\alpha_{2}\cos^{2}\theta\right)$
(54) $\displaystyle g(\theta)$
$\displaystyle\coloneqq\sqrt{f(\theta)^{2}-(1-\alpha_{1})(1-\alpha_{2})}$ (55)
so that $\lambda_{i}^{1,2}$ in (8) can be written
$\lambda_{i}^{1,2}=f(\theta_{i})\pm g(\theta_{i})$. For
$\alpha_{1}=\alpha_{2}=\alpha^{*}=\frac{2}{1+\sin\theta_{F}}$ we get
$f(\theta_{F})=1-\alpha^{*}+{\alpha^{*}}^{2}c_{F}^{2}/2=\frac{1-\sin\theta_{F}}{1+\sin\theta_{F}}=\alpha^{*}-1$
and $g(\theta_{F})=0$. The eigenvalues corresponding to $\theta_{F}$ are
therefore
$\lambda_{F}^{1,2}=\alpha^{*}-1=\frac{1-\sin\theta_{F}}{1+\sin\theta_{F}}$. We
also see that $f(\pi/2)=1-\alpha^{*},\,g(\pi/2)=0$. Since $f(\theta)$ is
linear in $\cos^{2}\theta$, which is decreasing in
$\left[\theta_{F},\pi/2\right]$, and
$\left|f(\theta_{F})\right|=\left|f(\pi/2)\right|=\alpha^{*}-1$, it follows
that $\left|f(\theta_{i})\right|\leq\alpha^{*}-1$ for all
$\theta_{i}\in\left[\theta_{F},\pi/2\right]$. This means that
$f(\theta_{i})^{2}-(\alpha^{*}-1)^{2}\leq 0$ and the corresponding
$\lambda_{i}^{1,2}$ are complex with magnitudes
$\displaystyle\left|\lambda_{i}^{1,2}\right|=\sqrt{f(\theta_{i})^{2}+\left|f(\theta_{i})^{2}-(1-\alpha^{*})^{2}\right|}=\sqrt{(1-\alpha^{*})^{2}}=\alpha^{*}-1\quad\forall i:\,\theta_{F}\leq\theta_{i}\leq\pi/2.$
For the remaining eigenvalues we have
$|1-\alpha_{1}|=\alpha^{*}-1=\gamma^{*}$,
$|1-\alpha_{2}|=\alpha^{*}-1=\gamma^{*}$,
$|(1-\alpha_{1})(1-\alpha_{2})|=(\alpha^{*}-1)^{2}\leq\gamma^{*}$. Lastly, the
eigenvalues in $\lambda=1$, correspond to the angles $\theta_{i}=0$, and are
semisimple since the matrix in (7) is diagonal for $\theta_{i}=0$. We
therefore conclude, from Facts 2 and 3, that $\alpha_{1}=\alpha_{2}=\alpha^{*}$
implies that the GAP operator $S=T$ in (2) is linearly convergent with any
rate $\mu\in\left(\gamma^{*},1\right)$ where
$\gamma^{*}=\alpha^{*}-1=\frac{1-\sin\theta_{F}}{1+\sin\theta_{F}}$ is a
subdominant eigenvalue.
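The eigenvalue magnitudes computed above are easy to verify numerically. The Python sketch below (the value $\theta_{F}=0.3$ is an arbitrary illustrative angle) evaluates $\lambda_{i}^{1,2}=f(\theta_{i})\pm g(\theta_{i})$ from (54)-(55) at the optimal parameters and checks that every magnitude equals $\alpha^{*}-1$:

```python
import cmath
import math

def gap_eigs(theta, a1, a2):
    """Eigenvalues lambda^{1,2} = f(theta) +/- g(theta) of the 2x2 block
    associated with a principal angle theta, per (54)-(55)."""
    f = 0.5 * (2 - a1 - a2 + a1 * a2 * math.cos(theta) ** 2)
    g = cmath.sqrt(f * f - (1 - a1) * (1 - a2))  # complex when f^2 < (1-a1)(1-a2)
    return f + g, f - g

theta_F = 0.3                                  # illustrative Friedrichs angle
a_star = 2 / (1 + math.sin(theta_F))           # optimal alpha_1 = alpha_2
gamma_star = a_star - 1

# With the optimal parameters, every block eigenvalue for theta in
# [theta_F, pi/2] has magnitude exactly alpha* - 1.
mags = [abs(l)
        for theta in [theta_F, 0.5, 1.0, math.pi / 2]
        for l in gap_eigs(theta, a_star, a_star)]
```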
### A.3 Lemmas
###### Lemma 10
The matrix
$\displaystyle
M\coloneqq(2-\alpha^{*})I+\frac{\alpha^{*}}{\alpha_{1}}(T_{1}^{F}-I),$ (56)
where $T_{1}^{F}$ is the matrix defined in (7) corresponding to the angle
$\theta_{F}$ has trace and determinant:
$\displaystyle\text{tr}M$ $\displaystyle=$
$\displaystyle\frac{2}{(1+s)\alpha_{1}}\left(-\alpha_{1}-\alpha_{2}+\alpha_{2}\alpha_{1}c^{2}+2\alpha_{1}s\right)$
$\displaystyle\det M$ $\displaystyle=$
$\displaystyle\frac{4s(1-s)}{\alpha_{1}(1+s)^{2}}\left(-\alpha_{1}-\alpha_{2}+\alpha_{1}\alpha_{2}(1+s)\right),$
where $s\coloneqq\sin\theta_{F},\,c\coloneqq\cos\theta_{F}$.
Proof. Let $s\coloneqq\sin\theta_{F},\,c\coloneqq\cos\theta_{F}$. The matrix
can be written
$\displaystyle M$
$\displaystyle=(2-\alpha^{*})I+\frac{\alpha^{*}}{\alpha_{1}}\left(\begin{pmatrix}1-\alpha_{1}s^{2}&\alpha_{1}cs\\\
\alpha_{1}(1-\alpha_{2})cs&(1-\alpha_{2})(1-\alpha_{1}c^{2})\end{pmatrix}-I\right)$
$\displaystyle=\begin{pmatrix}2-\alpha^{*}-\alpha^{*}s^{2}&\alpha^{*}cs\\\
\alpha^{*}(1-\alpha_{2})cs&2-\alpha^{*}+\frac{\alpha^{*}}{\alpha_{1}}\left((1-\alpha_{2})(1-\alpha_{1}c^{2})-1\right)\end{pmatrix}$
$\displaystyle=\begin{pmatrix}2-\alpha^{*}(1+s^{2})&\alpha^{*}cs\\\
\alpha^{*}(1-\alpha_{2})cs&2-\alpha^{*}+\frac{\alpha^{*}}{\alpha_{1}}\left(\alpha_{1}\alpha_{2}c^{2}-\alpha_{2}-\alpha_{1}c^{2}\right)\end{pmatrix}.$
Using that $\alpha^{*}=\frac{2}{1+s}$, we can rewrite the diagonal elements
$2-\alpha^{*}(1+s^{2})=\alpha^{*}\left(1+s-(1+s^{2})\right)=\alpha^{*}s(1-s)$
and
$\displaystyle 2-$
$\displaystyle\alpha^{*}+\frac{\alpha^{*}}{\alpha_{1}}\left(\alpha_{1}\alpha_{2}c^{2}-\alpha_{2}-\alpha_{1}c^{2}\right)=\alpha^{*}(1+s)-\alpha^{*}+\alpha^{*}\left(c^{2}(\alpha_{2}-1)-\frac{\alpha_{2}}{\alpha_{1}}\right)$
$\displaystyle=\alpha^{*}\left(s+c^{2}(\alpha_{2}-1)-\frac{\alpha_{2}}{\alpha_{1}}\right).$
We can extract the factor $\alpha^{*}cs$ from the matrix and get
$M=\alpha^{*}cs\begin{pmatrix}\frac{1-s}{c}&1\\\
1-\alpha_{2}&\frac{s+c^{2}(\alpha_{2}-1)-\frac{\alpha_{2}}{\alpha_{1}}}{cs}\end{pmatrix}.$
The trace is therefore given by
$\displaystyle\text{tr}M$
$\displaystyle=\alpha^{*}cs\left(\frac{1-s}{c}+\frac{s+c^{2}(\alpha_{2}-1)-\frac{\alpha_{2}}{\alpha_{1}}}{cs}\right)$
$\displaystyle=\alpha^{*}\left(2s-s^{2}+c^{2}\alpha_{2}-c^{2}-\frac{\alpha_{2}}{\alpha_{1}}\right)$
$\displaystyle=\frac{\alpha^{*}}{\alpha_{1}}\left(-\alpha_{1}-\alpha_{2}+\alpha_{2}\alpha_{1}c^{2}+2\alpha_{1}s\right)$
$\displaystyle=\frac{2}{(1+s)\alpha_{1}}\left(-\alpha_{1}-\alpha_{2}+\alpha_{2}\alpha_{1}c^{2}+2\alpha_{1}s\right)$
and the determinant
$\displaystyle\text{det}M$
$\displaystyle=\left(\alpha^{*}cs\right)^{2}\left(\frac{\left(1-s\right)\left(s+c^{2}(\alpha_{2}-1)-\frac{\alpha_{2}}{\alpha_{1}}\right)}{c^{2}s}-\frac{\left(1-\alpha_{2}\right)c^{2}s}{c^{2}s}\right)$
$\displaystyle=\alpha^{*2}s\left(s+c^{2}(\alpha_{2}-1)-\frac{\alpha_{2}}{\alpha_{1}}-s^{2}-c^{2}s(\alpha_{2}-1)+s\frac{\alpha_{2}}{\alpha_{1}}-\left(1-\alpha_{2}\right)c^{2}s\right)$
$\displaystyle=\alpha^{*2}s\left(s+c^{2}(\alpha_{2}-1)-\frac{\alpha_{2}}{\alpha_{1}}-s^{2}+s\frac{\alpha_{2}}{\alpha_{1}}\right)$
$\displaystyle=\alpha^{*2}s\left(s-1+\alpha_{2}c^{2}+\frac{\alpha_{2}}{\alpha_{1}}(s-1)\right)$
$\displaystyle=\alpha^{*2}s(1-s)\left(-1+\alpha_{2}(1+s)-\frac{\alpha_{2}}{\alpha_{1}}\right)$
$\displaystyle=\frac{\alpha^{*2}s(1-s)}{\alpha_{1}}\left(-\alpha_{1}-\alpha_{2}+\alpha_{1}\alpha_{2}(1+s)\right)$
$\displaystyle=\frac{4s(1-s)}{\alpha_{1}(1+s)^{2}}\left(-\alpha_{1}-\alpha_{2}+\alpha_{1}\alpha_{2}(1+s)\right).$
$\Box$
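The closed forms in Lemma 10 can be sanity-checked numerically by assembling $M$ entrywise from the block in (7) and comparing trace and determinant with the stated formulas (the parameter values in the test are arbitrary):

```python
import math

def lemma10_check(theta_F, a1, a2):
    """Build M = (2 - a*) I + (a*/a1)(T_1^F - I) explicitly and return its
    trace and determinant next to the closed forms of Lemma 10."""
    s, c = math.sin(theta_F), math.cos(theta_F)
    a_star = 2 / (1 + s)
    # 2x2 block of (7) for the angle theta_F.
    T = [[1 - a1 * s * s,          a1 * c * s],
         [a1 * (1 - a2) * c * s,   (1 - a2) * (1 - a1 * c * c)]]
    k = a_star / a1
    M = [[2 - a_star + k * (T[0][0] - 1), k * T[0][1]],
         [k * T[1][0],                    2 - a_star + k * (T[1][1] - 1)]]
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    tr_formula = 2 / ((1 + s) * a1) * (-a1 - a2 + a2 * a1 * c * c + 2 * a1 * s)
    det_formula = (4 * s * (1 - s) / (a1 * (1 + s) ** 2)
                   * (-a1 - a2 + a1 * a2 * (1 + s)))
    return tr, tr_formula, det, det_formula

tr, tr_f, det, det_f = lemma10_check(0.4, 1.3, 1.1)
```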
###### Lemma 11
Under the assumptions $\alpha=\frac{\alpha^{*}}{\alpha_{1}}$,
$\alpha_{1}\geq\alpha_{2}>0$ and $\theta_{F}\in(0,\pi/2)$, the matrix $M$ (56)
in Lemma 10 satisfies
$\left(\alpha_{1}\neq\alpha^{*}\text{ or
}\alpha_{2}\neq\alpha^{*}\right)\Rightarrow\max\text{Re}\,\Lambda(M)>0,$
where $\Lambda(M)$ is the set of eigenvalues of $M$.
Proof. We prove the equivalent claim
$\max\text{Re}\,\Lambda(M)\leq 0\Rightarrow\alpha_{1}=\alpha_{2}=\alpha^{*}.$
We have $\max\text{Re}\,\Lambda(M)\leq 0$ if and only if both eigenvalues of
$M$ have negative or zero real part, which is equivalent to
$\lambda_{1}+\lambda_{2}\leq 0\quad\text{and}\quad\lambda_{1}\lambda_{2}\geq
0.$
This is equivalent to
$\text{tr}M\leq 0\quad\text{and}\quad\text{det}M\geq 0.$
Using Lemma 10, this can be written
$\displaystyle\begin{cases}\frac{2}{(1+s)\alpha_{1}}\left(-\alpha_{1}-\alpha_{2}+\alpha_{2}\alpha_{1}c^{2}+2\alpha_{1}s\right)&\leq
0\\\
\frac{4s(1-s)}{\alpha_{1}(1+s)^{2}}\left(-\alpha_{1}-\alpha_{2}+\alpha_{1}\alpha_{2}(1+s)\right)&\geq
0\end{cases},$
where $s\coloneqq\sin(\theta_{F})$ and $c\coloneqq\cos(\theta_{F})$. Since
$\alpha_{1}>0$, $s\in(0,1)$, this is equivalent to
$\displaystyle\alpha_{1}+\alpha_{2}-\alpha_{2}\alpha_{1}c^{2}-2\alpha_{1}s\geq 0,$ (57a)
$\displaystyle-\alpha_{1}-\alpha_{2}+\alpha_{1}\alpha_{2}(1+s)\geq 0.$ (57b)
This implies that the sum is nonnegative, i.e.
$\displaystyle\big{(}\alpha_{1}+\alpha_{2}-\alpha_{2}\alpha_{1}c^{2}$
$\displaystyle-2\alpha_{1}s\big{)}+\left(-\alpha_{1}-\alpha_{2}+\alpha_{1}\alpha_{2}(1+s)\right)$
$\displaystyle=(\alpha_{2}\alpha_{1}s^{2}-2\alpha_{1}s+\alpha_{1}\alpha_{2}s)$
$\displaystyle=\alpha_{1}s\left(\alpha_{2}s-2+\alpha_{2}\right)\geq 0$
which, since $\alpha_{2},s>0$, is equivalent to $\alpha_{2}(1+s)\geq 2$, and
thus
$\displaystyle\alpha_{2}\geq\frac{2}{1+s}=\alpha^{*}.$
But then since $\alpha_{2}\geq\alpha^{*}$, (57a) implies
$\alpha_{1}+\alpha_{2}-\alpha^{*}\alpha_{1}c^{2}-2\alpha_{1}s\geq 0$
which is equivalent to
$\displaystyle\alpha_{1}+\alpha_{2}-\alpha^{*}\alpha_{1}c^{2}-2\alpha_{1}s$
$\displaystyle=\alpha_{1}+\alpha_{2}-2\alpha_{1}(1-s)-2\alpha_{1}s$
$\displaystyle=\alpha_{1}+\alpha_{2}-2\alpha_{1}=\alpha_{2}-\alpha_{1}\geq 0$
i.e. $\alpha_{2}\geq\alpha_{1}.$
But by assumption $\alpha_{1}\geq\alpha_{2}$, so we conclude that
$\alpha_{1}=\alpha_{2}\geq\alpha^{*}$. Equation (57a) yields
$\displaystyle\alpha_{1}+\alpha_{2}-\alpha_{2}\alpha_{1}c^{2}-2\alpha_{1}s\geq 0$
$\displaystyle\Rightarrow\quad 2\alpha_{1}-\alpha_{1}^{2}c^{2}-2\alpha_{1}s\geq 0$
$\displaystyle\Leftrightarrow\quad 2-\alpha_{1}c^{2}-2s\geq 0$
$\displaystyle\Leftrightarrow\quad 2\frac{(1-s)}{c^{2}}\geq\alpha_{1}$
$\displaystyle\Leftrightarrow\quad\alpha^{*}=\frac{2}{1+s}\geq\alpha_{1},$
where the implication is from $\alpha_{1}=\alpha_{2}$. We have therefore shown
that $\alpha^{*}\geq\alpha_{1}=\alpha_{2}\geq\alpha^{*}$, i.e.
$\alpha_{1}=\alpha_{2}=\alpha^{*}$. This completes the proof.
$\Box$
### A.4 Proof of Theorem 3
The first direction, that both $S_{1}$ and $S_{2}$ are convergent with any
rate $\mu\in(\gamma^{*},1)$ for the parameters in (10) holds by Theorem 2. We
now prove that if $S_{1}$ and $S_{2}$ converge with rate $\mu$ for all
$\mu\in(\gamma^{*},1)$ then the parameters must be those in (10). By Fact 2,
if both operators converge with any rate $\mu\in(\gamma^{*},1)$ then it must
be that $\gamma(S_{1})\leq\gamma^{*}$ and $\gamma(S_{2})\leq\gamma^{*}$. By
Definition 7, this means that all eigenvalues $\lambda$ to both $S_{1}$ and
$S_{2}$ have $|\lambda|\leq\gamma^{*}$, unless $\lambda=1$. With
$S_{i}=(1-\alpha)I+\alpha T_{i}$, we see from Theorem 1, that $T_{1}$ has an
eigenvalue in $1-\alpha_{2}$, $T_{2}$ in $1-\alpha_{1}$, and both $T_{1}$ and
$T_{2}$ have eigenvalues in $\lambda_{i}^{1,2}$ corresponding to the angle
$\theta_{F}$. We therefore need that
$|1+\alpha\left(\lambda-1\right)|\leq\gamma^{*}$ for each of the eigenvalues
$\lambda$. We start by defining $\hat{\alpha}=\alpha^{*}/\alpha_{1}$, where
$\alpha^{*}=2/(1+\sin\theta_{F})$, and observe that $\alpha^{*}-1=\gamma^{*}$.
Assume that $\alpha_{1}\geq\alpha_{2}$ and $\alpha=\hat{\alpha}$. For the
eigenvalue $\lambda=1-\alpha_{1}$, we get
$\displaystyle 1+\hat{\alpha}(\lambda-1)=1+\frac{\alpha^{*}}{\alpha_{1}}(1-\alpha_{1}-1)=1-\alpha^{*}.$ (58)
Consider the eigenvalues to $I+\hat{\alpha}(T_{F}-I)$ where $T_{F}$ is the
matrix (7) corresponding to the angle $\theta_{F}$, i.e., the eigenvalues
$\lambda_{i}^{1,2}$. We have
$\max\text{Re}\,\Lambda(I+\hat{\alpha}(T_{F}-I))>\alpha^{*}-1$ (59)
if and only if
$\max\text{Re}\,\Lambda((2-\alpha^{*})I+\hat{\alpha}(T_{F}-I))>0.$ (60)
By Lemma 11 we know that (60) is true when $\alpha=\hat{\alpha}$, unless
$\alpha_{1}=\alpha_{2}=\alpha^{*}$. We therefore know that for
$\alpha=\hat{\alpha}$, unless the optimal parameters are selected, there will
always be one eigenvalue of $S_{2}$ in $1-\alpha^{*}$ and one, corresponding
to $\theta_{F}$, with real part greater than $\alpha^{*}-1$. We now consider
the two cases $\alpha>\hat{\alpha}$ and $\alpha<\hat{\alpha}$. First note that
$\alpha$ acts as a scaling of the eigenvalues relative to the point $1$, i.e.,
$(1-\alpha)+\alpha\lambda=1+\alpha(\lambda-1)$. It is therefore clear that
$\alpha>\hat{\alpha}$ will result in one eigenvalue with real part less than
$1-\alpha^{*}=-\gamma^{*}$, and thus $\gamma(S_{1})>\gamma^{*}$ and
$\gamma(S_{2})>\gamma^{*}$.
Similarly, any $\alpha<\hat{\alpha}$ will result in one eigenvalue
($\lambda_{F}^{1}$) with real part greater than $\alpha^{*}-1=\gamma^{*}$. If
this eigenvalue is not in $1$, i.e., unless $1+\alpha(\lambda_{F}^{1}-1)=1$,
we know that $\gamma(S)>\gamma^{*}$ also in this case. Since $\alpha\neq 0$ we
have $1+\alpha(\lambda_{F}^{1}-1)=1$ if and only if $\lambda_{F}^{1}=1$. But
$\lambda_{F}^{1}=1$ only if $\det(T_{F}-I)=0$, where $T_{F}$ is the block
corresponding to $\theta_{F}$ in (7). Since $\alpha_{1},\alpha_{2}\neq 0$ and
$\theta_{F}>0$ we get
$\displaystyle\det(T_{F}-I)=-\alpha_{1}s_{F}^{2}(-\alpha_{1}c_{F}^{2}-\alpha_{2}+\alpha_{1}\alpha_{2}c_{F}^{2})-\alpha_{1}^{2}(1-\alpha_{2})c_{F}^{2}s_{F}^{2}=\alpha_{1}\alpha_{2}s_{F}^{2}\neq
0$
and thus $\lambda_{F}^{1}\neq 1$.
We conclude that when $\alpha_{1}\geq\alpha_{2}$, then
$\gamma(S_{2})>\alpha^{*}-1=\gamma^{*}$ for all parameters that are not
$\alpha=1,\alpha_{1}=\alpha_{2}=\alpha^{*}$.
The proof is only dependent on the eigenvalue $1-\alpha_{1}$, corresponding to
$S_{2}$, and the eigenvalue $\lambda_{F}^{1,2}$ corresponding to $\theta_{F}$.
From symmetry of $\alpha_{1},\alpha_{2}$ in $\lambda_{F}^{1,2}$ we see that
the same argument holds if we instead assume $\alpha_{2}\geq\alpha_{1}$, let
$\hat{\alpha}=\alpha^{*}/\alpha_{2}$, and consider the eigenvalues
$1-\alpha_{2}$ from $S_{1}$ and $\lambda_{F}^{1,2}$. It follows that when
$\alpha_{2}\geq\alpha_{1}$, we have $\gamma(S_{1})>\alpha^{*}-1=\gamma^{*}$ for
all parameters other than $\alpha=1,\alpha_{1}=\alpha_{2}=\alpha^{*}$. To
conclude, unless $\alpha=1,\alpha_{1}=\alpha_{2}=\alpha^{*}$, we have either
$\gamma(S_{1})>\gamma^{*}$ or $\gamma(S_{2})>\gamma^{*}$, which contradicts
that they both converge linearly with any rate $\mu\in(\gamma^{*},1)$.
# How Long to Estimate Sparse MIMO Channels
Yahia Shabara, C. Emre Koksal and Eylem Ekici Dept. of ECE, The Ohio State
University, Columbus, OH 43210
Email: {shabara.1, koksal.2<EMAIL_ADDRESS>
###### Abstract
Large MIMO transceivers are integral components of next-generation wireless
networks. However, for such systems to be practical, their channel estimation
process needs to be fast and reliable. Although several solutions for fast
estimation of sparse channels do exist, there is still a gap in understanding
the fundamental limits governing this problem. Specifically, we need to better
understand the minimum number of measurements required to obtain accurate
channel estimates. This work bridges that knowledge gap by
deriving a tight asymptotic lower bound on the number of measurements. This
not only helps develop a better understanding for the sparse MIMO channel
estimation problem, but it also provides a benchmark for evaluating current
and future solutions.
## I Introduction
Through the use of a large number of antennas, wireless transceivers can focus
their signal transmission and/or reception through very narrow angular
directions [1]. This helps increase the channel capacity in two main ways.
First, it improves the spatial multiplexing capability of transceivers, which
allows simultaneously serving multiple users while keeping cross interference
low. Second, it allows more signal power to be propagated from a transmitter
(TX) to a receiver (RX). For the latter reason, large MIMO transceivers have
emerged as the prominent solution to the severe path loss problem in
millimeter-wave (mmWave) systems [2, 3].
The main challenge of large MIMO, however, is that the channel estimation
process can be complex [4]. This is a byproduct of having channel matrices
with large dimensions. Moreover, both initial and running costs (i.e., cost of
hardware and power consumption, respectively) of such devices are high. To
minimize these costs, the architectural design of large MIMO transceivers has
deviated from the traditional fully-digital design towards analog or hybrid
transceivers. While these alternative architectures solve the cost problem,
they exacerbate the channel estimation overhead. This is because such
alternative transceiver designs are less flexible than fully-digital ones.
For example, an analog transceiver can obtain only one independent measurement
at a time, unlike a digital transceiver that obtains as many independent
measurements as the number of antennas at RX.
Reducing the number of channel measurements is thus one of the main challenges
facing large MIMO implementations. This problem has largely been tackled as an
application of Compressed Sensing (CS) [5, 6], which relies on channel
sparsity as a key enabler for reducing the number of measurements (sparsity
here means that the number of signal propagation paths is small compared to
the number of TX and RX antennas, e.g., in mmWave channels). The closest effort
to understanding how changing the number of measurements affects the quality
of channel estimates, to the best of our knowledge, is [7], where computer
simulations were conducted to measure the quality of channel estimates as the
number of measurements increases. Nonetheless, there is still a gap in the
current literature in understanding the lower bound on the number of necessary
measurements needed for accurate channel recovery. To the best of our
knowledge, the tightest known bound scales as
$\Omega\left(k\log\frac{n_{t}n_{r}}{k}\right)$ [8, 9], where $k$ is the
channel sparsity level and $n_{t}$ and $n_{r}$ are the numbers of antennas at
TX and RX, respectively. This bound, however, is a naive application of the CS
bound for recovery of sparse vectors of length $n=n_{t}n_{r}$ and $k$ non-zero
values. In fact, the nature of the channel estimation problem poses
limitations on how measurements are obtained, as opposed to the standard CS
problem. Thus, more care is needed when deriving measurement lower
bounds. In this paper, we show that the aforementioned bound is too loose, and
we provide a tighter lower bound of order
$\Omega\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$.
We argue the tightness of this bound by showing that, under a mild constraint
on the channel sparsity level, there exists a solution with a number of
measurements upper bounded as
$O(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right))$.
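The gap between the two bounds is already visible numerically. The sketch below (the parameter values are illustrative, and the $\Omega$/$O$ notation of course hides constants) compares the two growth rates for a typical large-MIMO configuration:

```python
import math

def naive_bound(k, nt, nr):
    """Order of the generic CS bound: k * log(n_t * n_r / k)."""
    return k * math.log(nt * nr / k)

def new_bound(k, nt, nr):
    """Order of the bound derived here: k^2 * log(n_t/k) * log(n_r/k)."""
    return k * k * math.log(nt / k) * math.log(nr / k)

# A typical large-MIMO regime: the new lower bound dominates by more than an
# order of magnitude, i.e., the generic CS bound is far from tight here.
k, nt, nr = 8, 1024, 1024
naive = naive_bound(k, nt, nr)
tight = new_bound(k, nt, nr)
```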
Notations: Let $x$ be a scalar quantity, $\boldsymbol{x}$ be a vector and
$\boldsymbol{X}$ be a matrix. The conjugate of $\boldsymbol{X}$ is
$\boldsymbol{X}^{\ast}$, its transpose is $\boldsymbol{X}^{T}$ and its
hermition (i.e., conjugate transpose) is $\boldsymbol{X}^{H}$. Let
$\left\lVert\boldsymbol{x}\right\rVert_{p}$ denote the $p^{\text{th}}$ norm of
$\boldsymbol{x}$. If the subscript $p$ is dropped, then
$\left\lVert\boldsymbol{x}\right\rVert$ denotes the Euclidean norm,
$\left\lVert\boldsymbol{x}\right\rVert_{2}$. Define the operator
$\operatorname{vec}\left(\boldsymbol{X}\right)$ to be the stacking of all the
columns of $\boldsymbol{X}$ to form one vector as follows: If $\boldsymbol{X}$
has columns $\boldsymbol{x_{i}}$ for $i=1,\dots,n$, then
$\operatorname{vec}\left(\boldsymbol{X}\right)=\begin{pmatrix}\boldsymbol{x_{1}}^{T}&\boldsymbol{x_{2}}^{T}&\dots&\boldsymbol{x_{n}}^{T}\end{pmatrix}^{T}$.
We denote by $\otimes$ the Kronecker product. Finally, we use: (i)
$\Omega\left(\cdot\right)$ to denote the Big Omega notation, i.e., the
asymptotic lower bound: we say that $f(n)\in\Omega\left(g(n)\right)$ (or
loosely, $f(n){=}\Omega\left(g(n)\right)$) if there exists a constant $c>0$
and $n_{0}\in\mathbb{N}$ such that $f(n)\geq cg(n)$, for all $n{\geq}n_{0}$;
(ii) $O\left(\cdot\right)$ to denote the Big O notation, i.e., the asymptotic
upper bound: we say that $f(n)\in O\left(g(n)\right)$ (or loosely
$f(n){=}O\left(g(n)\right)$) if there exists a constant $c>0$ and
$n_{0}\in\mathbb{N}$ such that $f(n)\leq cg(n)$, for all $n{\geq}n_{0}$; and
(iii) we say that $f(n)\in\Theta\left(g(n)\right)$ if both
$f(n)\in\Omega(g(n))$ and $f(n)\in O(g(n))$.
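The $\operatorname{vec}$ operator and Kronecker product defined above interact through the standard identity $\operatorname{vec}(\boldsymbol{A}\boldsymbol{X}\boldsymbol{B})=(\boldsymbol{B}^{T}\otimes\boldsymbol{A})\operatorname{vec}(\boldsymbol{X})$; this identity is not stated in the paper, but it is a well-known fact that commonly underlies the vectorisation of channel matrices in CS formulations. A quick NumPy check:

```python
import numpy as np

def vec(X):
    """Stack the columns of X into one vector, per the notation above."""
    return X.reshape(-1, order="F")   # column-major (Fortran) order

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
X = rng.standard_normal((2, 4))
B = rng.standard_normal((4, 5))

# The standard identity tying vec to the Kronecker product:
# vec(A X B) = (B^T kron A) vec(X).
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
```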
## II System Model
Consider a single-tap, block-fading, sparse MIMO channel between a TX and RX
equipped with $n_{t}$ and $n_{r}$ antennas, respectively. Antennas at TX and
RX form Uniform Linear Arrays (ULA), with normalized antenna spacing of
$\Delta_{t}$ and $\Delta_{r}$, respectively. The normalization is with respect
to the carrier wavelength, denoted by $\lambda_{c}$. We consider analog
transceiver architectures at both TX and RX. That is, only one RF chain exists
per transceiver, and all antennas are connected to this RF chain through
phase-shifters and variable-gain amplifiers.
Let the maximum number of resolvable signal propagation paths in the channel
be denoted by $k$. Recall that we consider sparse channels. By the sparsity
assumption [4, 10, 11, 12, 5, 13], only a few signal propagation paths exist,
where $k\ll n_{t},n_{r}$. Note that a wireless transceiver may not be able to
resolve multiple channel paths if they are spatially close. However, as the
number of antennas increases, the transceiver’s ability to resolve more paths
also increases, owing to its ability to form narrower antenna beams. This
means that $k$ grows with the number of antennas $n$ (i.e., $n_{t}$ or
$n_{r}$), although the ratio $\frac{k}{n}$ decreases as $n$ increases. We
assume that $n_{t},n_{r}\geq k^{1+\epsilon}$, for some
$\epsilon{>}0$, which reflects the ability of transceivers to resolve more
channel paths as their number of antennas increases. For each propagation path
$p$, let $\alpha_{p}$ be its path-gain, $\theta_{p}$ be its Angle of Departure
(AoD) at TX, $\phi_{p}$ be its Angle of Arrival (AoA) at RX, and $\rho_{p}$ be
its path length. The baseband path gain, $\alpha_{p}^{b}$, is given by
$\alpha_{p}^{b}=\alpha_{p}\sqrt{n_{t}n_{r}}\,e^{-j\frac{2\pi\rho_{p}}{\lambda_{c}}}.$
(1)
Let $\boldsymbol{Q}\in\mathbb{C}^{n_{r}\times n_{t}}$ denote the channel
matrix, where $q_{i,j}$, the element at row $i$ and column $j$ in
$\boldsymbol{Q}$, is the channel gain between the $j^{\text{th}}$ TX antenna
and the $i^{\text{th}}$ RX antenna. Let us denote the path-loss by $\mu$.
Then, we can write $\boldsymbol{Q}$ as
$\boldsymbol{Q}=\sum_{p=1}^{k}\frac{\alpha_{p}^{b}}{\mu}\boldsymbol{e_{r}}(\Omega_{r,p})\boldsymbol{e}^{H}_{\boldsymbol{t}}(\Omega_{t,p}),$
(2)
where $\boldsymbol{e_{t}}(\Omega)$ and $\boldsymbol{e_{r}}(\Omega)$ are the
transmit and receive signal spatial signatures, at angular cosine $\Omega$ [1,
Chapter 7]. We define $\boldsymbol{e_{i}}(\Omega)$ as:
$\boldsymbol{e_{i}}(\Omega)=\frac{1}{\sqrt{n_{i}}}\begin{pmatrix}1\\ e^{-j2\pi\Delta_{i}\Omega}\\ e^{-j2\pi 2\Delta_{i}\Omega}\\ \vdots\\ e^{-j2\pi(n_{i}-1)\Delta_{i}\Omega}\end{pmatrix},\quad\quad i\in\\{t,r\\}.$
(3)
The channel $\boldsymbol{Q}$, in this form, is not sparse. However, it can be
represented in a sparse form using a simple change of basis:
$\boldsymbol{Q^{a}}=\boldsymbol{U}_{\boldsymbol{r}}^{H}\boldsymbol{Q}\boldsymbol{U_{t}},$
(4)
where $\boldsymbol{Q^{a}}$ is known as the “angular channel” and is sparse.
The matrices $\boldsymbol{U_{t}}$ and $\boldsymbol{U_{r}}$ are Discrete
Fourier Transform matrices whose columns represent an orthonormal basis for
the transmit and receive signal spaces, and are defined as:
$\boldsymbol{U_{i}}=\begin{pmatrix}\boldsymbol{e_{i}}\left(0\right)&\boldsymbol{e_{i}}\left(\frac{1}{L_{i}}\right)&\boldsymbol{e_{i}}\left(\frac{2}{L_{i}}\right)&\dots&\boldsymbol{e_{i}}\left(\frac{n_{i}-1}{L_{i}}\right)\end{pmatrix},\quad
i\in\\{t,r\\},$
where $L_{i}=n_{i}\Delta_{i}$ is the normalized length of the corresponding antenna array [1, Chapter 7].
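As an illustrative sketch (assuming $L_{i}=n_{i}\Delta_{i}$, the standard normalized array length; grid indices and path gains are arbitrary), the following verifies that the DFT basis is unitary and that on-grid paths yield a sparse angular channel:

```python
import numpy as np

def steer(n, delta, omega):
    """Spatial signature e(Omega) of an n-element ULA with normalized spacing delta."""
    return np.exp(-2j * np.pi * delta * omega * np.arange(n)) / np.sqrt(n)

def dft_basis(n, delta):
    """Columns e(p / L) for p = 0..n-1, with L = n * delta assumed (orthonormal basis)."""
    L = n * delta
    return np.column_stack([steer(n, delta, p / L) for p in range(n)])

n_t = n_r = 8
delta = 0.5
U_t = dft_basis(n_t, delta)
U_r = dft_basis(n_r, delta)
assert np.allclose(U_t.conj().T @ U_t, np.eye(n_t))   # U_t is unitary

# Two on-grid paths (grid indices chosen arbitrarily) -> Q^a has exactly 2 nonzeros.
paths = [(1, 3, 1.0), (5, 6, -0.7)]                    # (rx bin, tx bin, gain)
Q = sum(g * np.outer(steer(n_r, delta, pr / (n_r * delta)),
                     steer(n_t, delta, pt / (n_t * delta)).conj())
        for pr, pt, g in paths)
Q_a = U_r.conj().T @ Q @ U_t                           # angular channel, Eq. (4)
print(int(np.sum(np.abs(Q_a) > 1e-10)))               # prints 2
```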
When transmitting a symbol $\zeta$, the TX uses a precoder vector
$\boldsymbol{f}\in\mathbb{C}^{n_{t}}$ while RX uses a combiner vector
$\boldsymbol{w}\in\mathbb{C}^{n_{r}}$. The received symbol at RX is thus given
by:
$y_{i,j}=\boldsymbol{w}_{i}^{H}\boldsymbol{Q}\boldsymbol{f}_{j}\zeta+\boldsymbol{w}_{i}^{H}\boldsymbol{n_{i,j}},$
(5)
where $y_{i,j}$ denotes the received symbol (i.e., measurement result),
$\boldsymbol{w}_{i}$ denotes the $i^{\text{th}}$ receive combiner and
$\boldsymbol{f}_{j}$, the $j^{\text{th}}$ transmit precoder. Assume, for
simplicity, that $\zeta=1$. Let the number of rx-combiners be $m_{r}$ and the
number of tx-precoders be $m_{t}$. Then, the total number of measurements we
can obtain using all combinations of $\boldsymbol{f}_{j}$ and
$\boldsymbol{w}_{i}$ is $m=m_{t}{\times}m_{r}$. We can also write the
measurement equations for all precoders and combiners more compactly as:
$\boldsymbol{Y}=\boldsymbol{W}^{H}\boldsymbol{Q}\boldsymbol{F}+\boldsymbol{N},$
(6)
where $y_{i,j}$ is the element at row $i$ and column $j$ of $\boldsymbol{Y}$.
$\boldsymbol{W}$ and $\boldsymbol{F}$ are defined as:
$\displaystyle\boldsymbol{W}$
$\displaystyle\triangleq\begin{pmatrix}\boldsymbol{w}_{1}&\boldsymbol{w}_{2}&\dots&\boldsymbol{w}_{m_{r}}\end{pmatrix},$
(7) $\displaystyle\boldsymbol{F}$
$\displaystyle\triangleq\begin{pmatrix}\boldsymbol{f}_{1}&\boldsymbol{f}_{2}&\dots&\boldsymbol{f}_{m_{t}}\end{pmatrix}$
(8)
The channel estimation problem, i.e., figuring out what the matrix
$\boldsymbol{Q}$ is, can be broken down into determining the best set of
precoders $\boldsymbol{f}_{j}$ and combiners $\boldsymbol{w}_{i}$ using which
we can accurately recover $\boldsymbol{Q}$. To speed up the estimation
process, the smallest sets of those $\boldsymbol{f}_{j}$’s and
$\boldsymbol{w}_{i}$’s should be used. In this paper, we do not provide a
specific design for such precoders and combiners, but we seek to find a
“tight” lower bound on the number of measurements using which $\boldsymbol{Q}$
can be recovered.
Special Cases: Suppose the number of TX antennas is $n_{t}{=}1$. In that case,
the channel is Single-Input-Multiple-Output (SIMO), and the channel matrix
$\boldsymbol{Q}$ reduces to a vector $\boldsymbol{q}$. The precoder at TX
likewise reduces to a scalar, $f=1$. Thus, we can rewrite the
measurement equation (Eq. (6)) as:
$\boldsymbol{y}=\boldsymbol{W}^{H}\boldsymbol{q}+\boldsymbol{n}$ (9)
Similarly, if we have a MISO channel, i.e., $n_{r}{=}1$, we have the following
measurement equation:
$\boldsymbol{y}=\boldsymbol{F}^{H}\boldsymbol{q}+\boldsymbol{n}$ (10)
## III Problem Formulation
In this section, we will provide a brief overview of compressed sensing (CS).
Then, we will formulate the problem of channel estimation as a CS problem. To
that end, we will reshape the measurement equation given in Eq. (6) to be in
the form
$\boldsymbol{y_{v}}{=}\boldsymbol{G_{v}}\boldsymbol{q^{a}_{v}}{+}\boldsymbol{n_{v}}$,
which conforms with the traditional compressed sensing problem, as will be
shown in Eq. (13) below. Here, $\boldsymbol{q^{a}_{v}}$ is sparse and has
dimensions $n_{r}n_{t}{\times}1$.
### III-A Compressed Sensing Background
Compressed sensing is a signal processing technique [6] that allows the
reconstruction of a signal $\boldsymbol{x}=\left(x_{i}\right)_{i=1}^{n}$ from
a small number of samples given that $\boldsymbol{x}$ is either: (i) sparse,
or (ii) can be represented in a sparse form, using a linear transformation
$\boldsymbol{U}$ such that $\boldsymbol{x}=\boldsymbol{U}\boldsymbol{s}$ where
$\boldsymbol{s}$ is sparse. Let the number of measurements be denoted by $m$
where $m<n$ and $m,n\in\mathbb{N}$. Each measurement of $\boldsymbol{x}$ is a
linear combination of its components $x_{i}$. Such measurements are dictated
by the sensing matrix $\boldsymbol{G}$ and are given by
$\boldsymbol{y}=\boldsymbol{G}\boldsymbol{x},$ (11)
where $\boldsymbol{y}$ denotes the $m{\times}1$ measurement vector. Eq. (11)
represents an under-determined system of linear equations (since $m<n$). In
other words, we have fewer equations than the number of unknowns we want to
solve for. While, in general, an infinite number of solutions exist, the
sparsity of $\boldsymbol{x}$ allows for perfect signal reconstruction from
$\boldsymbol{y}$ given that certain conditions are satisfied, among which, is
a lower bound on the “spark” of the sensing matrix.
###### Definition III.1.
The spark of a matrix $\boldsymbol{G}$ is the smallest number of columns of
$\boldsymbol{G}$ that are linearly dependent.
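For intuition, the spark can be computed by brute force on tiny matrices; a minimal sketch (the example matrix is illustrative):

```python
import numpy as np
from itertools import combinations

def spark(G, tol=1e-10):
    """Smallest number of linearly dependent columns of G.
    Brute force over column subsets (exponential in n; tiny matrices only)."""
    m, n = G.shape
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            if np.linalg.matrix_rank(G[:, cols], tol=tol) < size:
                return size
    return n + 1   # full column rank: spark conventionally taken as n + 1 (or infinity)

# Any 2 of these 3 columns are independent, but all 3 are dependent -> spark = 3.
G = np.array([[1., 0., 1.],
              [0., 1., 1.]])
print(spark(G))   # 3; by Theorem 1, unique recovery of k-sparse vectors holds for k = 1
```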
###### Theorem 1 (Corollary 1 of [14]).
For any vector $\boldsymbol{y}\in\mathbb{R}^{m}$, there exists at most one
vector $\boldsymbol{q^{a}}\in\mathbb{R}^{n}$ with
$\left\lVert\boldsymbol{q^{a}}\right\rVert_{0}=k$ such that
$\boldsymbol{y}=\boldsymbol{G}\boldsymbol{q^{a}}$ if and only if
$\operatorname{spark}(\boldsymbol{G})>2k$.
Theorem 1 provides a mathematical guarantee on the exact recovery of
$k-$sparse vectors using $m$ linear measurements. An immediate bound on the
number of measurements, $m$, we get from Theorem 1 is
$m\geq 2k.$ (12)
The spark lower bound on the matrix $\boldsymbol{G}$ works well under noise-
free measurements. In practice, however, measurements are corrupted with an
error vector $\boldsymbol{n}$, i.e.,
$\boldsymbol{y}=\boldsymbol{G}\boldsymbol{x}+\boldsymbol{n}.$ (13)
The measurement process must not be significantly degraded by such errors,
which calls for stricter requirements on sensing matrices to guarantee "good"
sparse recovery.
Mathematically, we need to design the sensing matrix such that the energy in
the measured signal is preserved. This is quantified using the Restricted
Isometry Property (RIP), which guarantees that the distance between any pair
of $k-$sparse vectors is not significantly changed by the measurement process.
The RIP is defined as follows:
###### Definition III.2.
A matrix $\boldsymbol{G}$ satisfies the restricted isometry property (RIP) of
order $k$ if there exists a constant $\delta_{k}\in(0,1)$ such that for all
vectors $\boldsymbol{q^{a}}$, with
$\left\lVert\boldsymbol{q^{a}}\right\rVert_{0}\leq k$, we have
$(1-\delta_{k})\left\lVert\boldsymbol{q^{a}}\right\rVert_{2}^{2}\leq\left\lVert\boldsymbol{G}\boldsymbol{q^{a}}\right\rVert_{2}^{2}\leq(1+\delta_{k})\left\lVert\boldsymbol{q^{a}}\right\rVert_{2}^{2}.$
(14)
The smallest $\delta_{k}$ which satisfies Eq. (14) is called the
"$k-$restricted isometry constant". Note that, in general, a matrix
$\tilde{\boldsymbol{G}}$ does not necessarily yield bounds on
$\lVert\tilde{\boldsymbol{G}}\boldsymbol{q^{a}}\rVert^{2}$ that are symmetric
about $1$. However, a simple scaling of $\tilde{\boldsymbol{G}}$ produces a
matrix $\boldsymbol{G}$ whose tightest bounds on
$\left\lVert\boldsymbol{G}\boldsymbol{q^{a}}\right\rVert^{2}$ in Eq.
(14) are symmetric [15]. From now on, we only consider matrices whose
bounds are symmetric, as in Eq. (14).
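For small matrices, the RIP constant of Definition III.2 can be computed exactly by enumerating all size-$k$ supports; the sketch below (illustrative parameters, not from the paper) does so and checks two easy cases:

```python
import numpy as np
from itertools import combinations

def rip_constant(G, k):
    """Exact k-RIP constant of G: largest deviation from 1 of the eigenvalues of
    G_S^H G_S over all size-k column supports S (exponential cost; tiny cases only)."""
    delta = 0.0
    for S in combinations(range(G.shape[1]), k):
        eigs = np.linalg.eigvalsh(G[:, S].conj().T @ G[:, S])
        delta = max(delta, abs(eigs.max() - 1.0), abs(1.0 - eigs.min()))
    return delta

# Orthonormal columns are a perfect isometry: delta_k ~ 0 for every k.
assert rip_constant(np.eye(4), 2) < 1e-12

# A scaled Gaussian matrix; delta_1 <= delta_2 by eigenvalue interlacing.
rng = np.random.default_rng(1)
G = rng.standard_normal((20, 30)) / np.sqrt(20)
d1, d2 = rip_constant(G, 1), rip_constant(G, 2)
assert d1 <= d2
print(round(d1, 3), round(d2, 3))
```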
The following theorem provides a necessary condition for $m{\times}n$ matrices
that satisfy the RIP property with $\delta_{k}{\in}\left(0,1\right)$.
###### Theorem 2 (Theorem 3.5 of [16]).
Let $\boldsymbol{G}$ be an $m{\times}n$ matrix that satisfies RIP of order $k$
with constant $\delta_{k}{\in}\left(0,1\right)$. Then,
$m\geq c_{\delta}k\log\left(\frac{n}{k}\right)$ (15)
where
$c_{\delta}=\frac{0.18}{\log\left(\sqrt{\frac{1+\delta_{k}}{1-\delta_{k}}}+1\right)}$
is a function of $\delta_{k}$ only (the subscript $k$ is dropped for brevity).
Theorem 2 demonstrates the popular asymptotic measurement bound:
$m=\Omega\left(k\log\frac{n}{k}\right).$ (16)
Next, we will formulate the MIMO channel estimation as a compressed sensing
problem.
### III-B The Problem
Recall from Eq. (6) that channel measurements take the form
$\boldsymbol{Y}=\boldsymbol{W}^{H}\boldsymbol{Q}\boldsymbol{F}+\boldsymbol{N}.$
This is not the standard form of a noisy CS problem (see Eq. (13)). Thus, it
cannot readily be solved using compressed sensing. To put this equation in a
CS problem form, let us “vectorize” its left and right hand sides as follows:
* •
Let $\boldsymbol{y_{v}}=\operatorname{vec}\left(\boldsymbol{Y}\right)$
* •
Let $\boldsymbol{n_{v}}=\operatorname{vec}\left(\boldsymbol{N}\right)$
* •
And by the properties of vectorization [17], we have
$\displaystyle\operatorname{vec}\left(\boldsymbol{W}^{H}\boldsymbol{Q}\boldsymbol{F}\right)$
$\displaystyle=\left(\boldsymbol{F}^{T}\otimes\boldsymbol{W}^{H}\right)\operatorname{vec}\left(\boldsymbol{Q}\right)$
(17)
$\displaystyle=\left(\boldsymbol{F}^{T}\otimes\boldsymbol{W}^{H}\right)\operatorname{vec}\left(\boldsymbol{U}_{\boldsymbol{r}}\boldsymbol{Q^{a}}\boldsymbol{U}^{H}_{\boldsymbol{t}}\right)$
(18)
$\displaystyle=\left(\boldsymbol{F}^{T}\otimes\boldsymbol{W}^{H}\right)\left(\boldsymbol{U}_{\boldsymbol{t}}^{\ast}\otimes\boldsymbol{U}_{\boldsymbol{r}}\right)\operatorname{vec}\left(\boldsymbol{Q^{a}}\right)$
(19)
$\displaystyle=\left(\boldsymbol{F}^{T}\otimes\boldsymbol{W}^{H}\right)\left(\boldsymbol{U}_{\boldsymbol{t}}^{\ast}\otimes\boldsymbol{U}_{\boldsymbol{r}}\right)\boldsymbol{q^{a}_{v}}$
(20)
$\displaystyle=\left(\left(\boldsymbol{F}^{T}\boldsymbol{U}^{\ast}_{\boldsymbol{t}}\right)\otimes\left(\boldsymbol{W}^{H}\boldsymbol{U_{r}}\right)\right)\boldsymbol{q^{a}_{v}}$
(21)
$\displaystyle=\left(\left(\boldsymbol{F}^{H}\boldsymbol{U_{t}}\right)^{\ast}\otimes\left(\boldsymbol{W}^{H}\boldsymbol{U_{r}}\right)\right)\boldsymbol{q^{a}_{v}}$
(22)
Thus, we can rewrite the measurement equation in (6) as
$\displaystyle\boldsymbol{y_{v}}$
$\displaystyle=\boldsymbol{G_{v}}\boldsymbol{q^{a}_{v}}+\boldsymbol{n_{v}},$
(23) $\displaystyle\textit{where}\quad\boldsymbol{G_{v}}$
$\displaystyle=\left(\boldsymbol{F}^{H}\boldsymbol{U_{t}}\right)^{\ast}\otimes\left(\boldsymbol{W}^{H}\boldsymbol{U_{r}}\right)$
(24)
is the sensing matrix, with dimensions $m_{t}m_{r}{\times}n_{t}n_{r}$, while
$\boldsymbol{y_{v}}$ has dimensions $m_{t}m_{r}{\times}1$ and
$\boldsymbol{q^{a}_{v}}$ has dimensions $n_{t}n_{r}{\times}1$. This form of
the problem allows us to employ CS sparse recovery techniques to estimate
$\boldsymbol{q^{a}_{v}}$ from $\boldsymbol{y_{v}}$.
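The vectorization steps in Eqs. (17)-(24) can be verified numerically; a minimal sketch (random matrices, with an arbitrary unitary basis standing in for the DFT matrices):

```python
import numpy as np

rng = np.random.default_rng(2)

def crandn(*shape):
    """Complex standard Gaussian array."""
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

def vec(X):
    """Column stacking, i.e. vec(X)."""
    return X.flatten(order="F")

n_t, n_r, m_t, m_r = 4, 3, 2, 2
Q = crandn(n_r, n_t)
F = crandn(n_t, m_t)
W = crandn(n_r, m_r)
U_t, _ = np.linalg.qr(crandn(n_t, n_t))   # any unitary basis works for this check
U_r, _ = np.linalg.qr(crandn(n_r, n_r))
Q_a = U_r.conj().T @ Q @ U_t              # angular channel, Eq. (4)

# Eq. (24): G_v = (F^H U_t)^* kron (W^H U_r)
G_v = np.kron((F.conj().T @ U_t).conj(), W.conj().T @ U_r)
assert np.allclose(vec(W.conj().T @ Q @ F), G_v @ vec(Q_a))
assert G_v.shape == (m_t * m_r, n_t * n_r)
```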
## IV Lower Measurement Bound
We are interested in sensing matrices that preserve the distance between two
different channels $\boldsymbol{q^{a}_{v1}}$ and $\boldsymbol{q^{a}_{v2}}$.
This distance is the norm of
$\boldsymbol{q^{a}_{v1}}-\boldsymbol{q^{a}_{v2}}$, which has a sparsity level
of $2k$ (recall that the maximum number of channel paths is $k$). Thus, to be
able to accurately estimate $\boldsymbol{q^{a}_{v}}$, we need the sensing
matrix $\boldsymbol{G_{v}}$ to satisfy the RIP property of order $2k$ with
some RIP constant $\delta_{2k}\in(0,1)$. At sparsity level of $2k$, Theorem 2
shows that the recovery of a sparse vector with dimensions $n{=}n_{t}n_{r}$
requires a number of measurements, $m$, lower bounded as
$\displaystyle m$ $\displaystyle\geq
c_{\delta}(2k)\log\left(\frac{n_{t}n_{r}}{2k}\right)$ (25)
$\displaystyle=2c_{\delta}k\left(\log\left(\frac{n_{t}}{\sqrt{2k}}\right)+\log\left(\frac{n_{r}}{\sqrt{2k}}\right)\right).$
(26)
This demonstrates the popular $m=\Omega\left(k\log\left(\frac{n_{r}\times
n_{t}}{k}\right)\right)$ lower bound for sparse channel estimation. Although
this bound is valid, it is in fact too loose since it assumes that arbitrary
constructions of $\boldsymbol{G}_{v}$ are possible. This, however, is not the
case for sparse MIMO channel estimation since $\boldsymbol{G}_{v}$ takes a
special, Kronecker product form, as derived in Eq. (24).
Figure 1: Unscaled asymptotic measurement lower bounds. (a) At fixed sparsity level $k=5$. (b) At fixed number of antennas $n=n_{t}=n_{r}=100$.
Next, we will derive a tighter bound on the number of measurements, one that
accounts for the special structure of the sensing matrix. This will result in
$m=\Omega\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$.
To appreciate how much tighter our derived bound is, we plot the functions
$k\log\left(\frac{n_{t}\times n_{r}}{k}\right)$ and
$k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)$
without constant scaling in Fig. 1.
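For a concrete feel of the gap in Fig. 1, the sketch below (illustrative parameter values, constants omitted) evaluates both unscaled expressions:

```python
import math

def loose_bound(k, n_t, n_r):
    """Unscaled k * log(n_t * n_r / k) term of the classical bound."""
    return k * math.log(n_t * n_r / k)

def tight_bound(k, n_t, n_r):
    """Unscaled k^2 * log(n_t / k) * log(n_r / k) term of the bound derived here."""
    return k ** 2 * math.log(n_t / k) * math.log(n_r / k)

k = 5
for n in (100, 1_000, 10_000):      # n = n_t = n_r
    print(n, round(loose_bound(k, n, n)), round(tight_bound(k, n, n)))
```

The ratio between the two grows with $n$, reflecting the extra $k\log$ factor.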
### IV-A Main Results: A “Tight” Measurement Bound
In this section, we will derive the relationship between $k-$RIP constants of
Kronecker product matrices and those of the blocks that form it. Then, using
Theorem 2, we will derive an asymptotic lower bound on the number of rows of
$\boldsymbol{G_{v}}$ and deduce its asymptotic behavior. We will finally show
the tightness of our derived asymptotic bound using the solution framework in
[19].
Optimum Measurement Length: Among all possible matrices which satisfy the RIP
property, we are interested in the ones that have the least number of rows
(since the number of rows equals the number of measurements). This leads to
the notion of “Optimum Measurement Length (OML)”. We define OML as the
smallest number of measurements such that the RIP property is satisfied. OML
is dependent on the length of unknown vectors $n$, the maximum sparsity level
$k$ and the $k-$RIP constant $\delta$. Hence, we can define a function $\mu$,
$\mu:\mathcal{N}\times\mathcal{K}\times(0,1)\rightarrow\mathbb{N}_{0}^{+}$
(27)
which maps the space of all possible values for $n$, $k$, and $\delta$ (given
by $\mathcal{N}\subseteq\mathbb{N}_{0}^{+}$,
$\mathcal{K}\subseteq\mathbb{N}_{0}^{+}$ and $(0,1)$, respectively, where
$\mathbb{N}_{0}^{+}$ denotes the set of non-negative integers) to the
corresponding OML quantity.
Now, let us focus on the special case of matrices which can be arbitrarily
constructed. In this case, let $\mu$ be denoted by $\mu_{a}$ ('a' stands for
arbitrary matrix construction). We define $\mu_{a}$ to be the optimal value of
the following optimization problem:
$\text{P1:}\quad\mu_{a}=\min_{\boldsymbol{M_{a}}\in\mathbb{C}^{m_{a}\times n}}\ m_{a}\quad\text{subject to}\quad\boldsymbol{M_{a}}\in\mathcal{F}_{\delta},$
where $\mathcal{F}_{\delta}$ is the feasible set, defined as
$\mathcal{F}_{\delta}\triangleq\\{\boldsymbol{M_{a}}\in\mathbb{C}^{m_{a}\times n}:(1{-}\delta)\left\lVert\boldsymbol{x}\right\rVert_{2}^{2}\leq\left\lVert\boldsymbol{M_{a}}\boldsymbol{x}\right\rVert_{2}^{2}\leq(1{+}\delta)\left\lVert\boldsymbol{x}\right\rVert^{2}_{2},\ \forall\boldsymbol{x}\in\mathbb{C}^{n}:\left\lVert\boldsymbol{x}\right\rVert_{0}\leq k\\}$
###### Lemma 3.
Let $n$ and $k$ be fixed. Then, $\delta_{1}\geq\delta_{2}$ implies
$\mu_{a}(n,k,\delta_{1})\leq\mu_{a}(n,k,\delta_{2})$.
###### Proof.
The proof directly follows by observing that $\delta_{1}\geq\delta_{2}$
implies that $\mathcal{F}_{\delta_{2}}\subseteq\mathcal{F}_{\delta_{1}}$.
Since the problem is a minimization problem, then
$\mu_{a}(n,k,\delta_{1})\leq\mu_{a}(n,k,\delta_{2})$. ∎
Kronecker Product Matrices: The standard compressed sensing problem assumes
that all elements of the sensing matrix are independently chosen. On the
contrary, in sparse channel estimation, we are restricted to a specific
sensing matrix structure, as shown in Eq. (24). The only free parameters in
this sensing matrix are the tx-precoders $\boldsymbol{f_{j}}$ and the rx-
combiners $\boldsymbol{w_{i}}$. This limitation suggests that more
measurements may be needed to achieve the same RIP constant, compared to
matrices whose elements are independently selected.
At the heart of our results lies the relationship between the $k-$RIP constant
of Kronecker product matrices and the $k-$RIP constants of the matrices that
form them. We formally state this relationship in the following lemma.
###### Lemma 4 (RIP of Kronecker Products).
Let $\delta_{a}$ and $\delta_{b}$ be the $k-$RIP constants of the matrices
$\boldsymbol{A}$ and $\boldsymbol{B}$, respectively. Then, the $k-$RIP
constant of $\boldsymbol{A}\otimes\boldsymbol{B}$, denoted by $\delta$, is
bounded as
$\delta\geq\max\\{\delta_{a},\delta_{b}\\}$ (28)
A similar result to Lemma 4 was derived in [18], but under the stronger
assumption of matrices with normalized columns. Our more general result
implies that even if the normalized columns assumption is loosened, we still
cannot obtain a matrix, through a Kronecker Product, which satisfies the RIP
property with a constant smaller than the maximum of the $k-$RIP constants of
the matrices that form it. The proof of Lemma 4 is provided in Appendix A.
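Lemma 4 can also be checked numerically on small matrices. The sketch below (an illustration with unit-norm columns, the setting of [18]; sizes and seed are arbitrary) computes exact $k-$RIP constants by support enumeration and verifies the inequality:

```python
import numpy as np
from itertools import combinations

def rip_constant(G, k):
    """Exact k-RIP constant via enumeration of all size-k supports (tiny cases only)."""
    delta = 0.0
    for S in combinations(range(G.shape[1]), k):
        eigs = np.linalg.eigvalsh(G[:, S].conj().T @ G[:, S])
        delta = max(delta, abs(eigs.max() - 1.0), abs(1.0 - eigs.min()))
    return delta

rng = np.random.default_rng(3)
k = 2
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
B /= np.linalg.norm(B, axis=0)

dA, dB = rip_constant(A, k), rip_constant(B, k)
dC = rip_constant(np.kron(A, B), k)     # 9 x 16 Kronecker product
assert dC >= max(dA, dB) - 1e-12        # Lemma 4: delta >= max(delta_a, delta_b)
print(round(dA, 3), round(dB, 3), round(dC, 3))
```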
A Generalized Bound: Recall Eq. (24). We will rewrite $\boldsymbol{G_{v}}$,
for brevity, in terms of $\boldsymbol{M_{t}}$ and $\boldsymbol{M_{r}}$, where
$\displaystyle\small\boldsymbol{M_{t}}$
$\displaystyle\triangleq\left(\boldsymbol{F}^{H}\boldsymbol{U_{t}}\right)^{\ast}\in\mathbb{C}^{m_{t}\times
n_{t}}$ (29) $\displaystyle\boldsymbol{M_{r}}$
$\displaystyle\triangleq\boldsymbol{W}^{H}\boldsymbol{U_{r}}\in\mathbb{C}^{m_{r}\times
n_{r}}$ (30)
Thus, we have
$\boldsymbol{G_{v}}{=}\boldsymbol{M_{t}}{\otimes}\boldsymbol{M_{r}}$, and
$m{=}m_{t}m_{r}$ is the number of rows of $\boldsymbol{G_{v}}$. Now, suppose
that $\boldsymbol{G_{v}}$ satisfies $k-$RIP with constant $\delta{\in}(0,1)$.
Then, both $\boldsymbol{M_{t}}$ and $\boldsymbol{M_{r}}$ must satisfy the
$k-$RIP with constants $\delta_{t}{\in}(0,1)$ and $\delta_{r}{\in}(0,1)$,
respectively. To show that this is true, assume, without loss of generality
(w.l.o.g.), that there does not exist $\delta_{t}\in(0,1)$ such that
$\boldsymbol{M_{t}}$ satisfies $k-$RIP. Then, there exists a vector
$\boldsymbol{v}$ with $\left\lVert\boldsymbol{v}\right\rVert_{0}{\leq}k$ such
that $\boldsymbol{M_{t}}\boldsymbol{v}=\boldsymbol{0}$, which implies the
existence of a set of at most $k$ linearly dependent columns of
$\boldsymbol{M_{t}}$; call them
$\boldsymbol{a_{t1}},\boldsymbol{a_{t2}},\dots,\boldsymbol{a_{tk}}$. In turn,
$\boldsymbol{G_{v}}$ also contains a set of at most $k$ linearly dependent
columns (let $\boldsymbol{a_{r1}}$ be any column of $\boldsymbol{M_{r}}$; then
the columns
$\boldsymbol{a_{t1}}{\otimes}\boldsymbol{a_{r1}},\boldsymbol{a_{t2}}{\otimes}\boldsymbol{a_{r1}},\dots,\boldsymbol{a_{tk}}{\otimes}\boldsymbol{a_{r1}}$
are linearly dependent). Hence, there is no $\delta\in(0,1)$ such that
$\boldsymbol{G_{v}}$ satisfies $k-$RIP with a constant $\delta$, and we arrive
at a contradiction. Further, by Lemma 4, we have that
$\delta\geq\max\\{\delta_{t},\delta_{r}\\}$.
Since $\boldsymbol{M_{t}}$ and $\boldsymbol{M_{r}}$ can be arbitrarily
constructed, then we can lower bound $m_{t}$ and $m_{r}$ by their OML values
as follows
$\displaystyle m_{t}$
$\displaystyle\geq\mu_{a}(n_{t},k,\delta_{t})\stackrel{{\scriptstyle(i)}}{{\geq}}\mu_{a}(n_{t},k,\delta)$
(31) $\displaystyle m_{r}$
$\displaystyle\geq\mu_{a}(n_{r},k,\delta_{r})\stackrel{{\scriptstyle(ii)}}{{\geq}}\mu_{a}(n_{r},k,\delta)$
(32)
where inequalities $(i)$ and $(ii)$ follow from Lemma 3. Thus, it follows that
the number of rows of $\boldsymbol{G_{v}}$, $m$, is bounded as
$m\geq\mu_{a}(n_{t},k,\delta)\times\mu_{a}(n_{r},k,\delta).$ (33)
Recall that $\mu_{a}(\cdot)$ is the value that solves problem P1.
###### Remark.
The implication of Inequality (33) is that the number of measurements needed
for estimating a sparse MIMO channel, $\boldsymbol{Q}$, is at least equal to
(and possibly higher than) the product of the number of measurements needed to
solve the following two sub-problems:
* •
The first is a Single-Input Multiple-Output (SIMO), $1\times n_{r}$ channel,
with $\boldsymbol{M_{r}}$ as sensing matrix.
* •
The second is a Multiple-Input Single-Output (MISO), $n_{t}\times 1$ channel,
with $\boldsymbol{M_{t}}^{\ast}$ as sensing matrix,
where the sparsity level of both channels is $\leq k$. These two sub-problems
are special cases of the original problem, whose measurement equations are
shown in Eq. (9) and Eq. (10), respectively. The only difference is the
conjugation of $\boldsymbol{M_{t}}$.
The bound we derive in Eq. (33) highlights the dependence on the channel
dimensions $n_{t}$ and $n_{r}$, the maximum sparsity level $k$ and a measure,
$\delta$, of how much information the measurements preserve about the channel.
This bound, however, is not explicit, but we can use Theorem 2 to derive a
more concrete lower bound for $\mu_{a}(\cdot)$. This leads to our main result:
###### Theorem 5 (Main Theorem).
Fix $\delta\in(0,1)$. If $\boldsymbol{G}_{v}$ in Eq. (24) satisfies RIP with
order $2k$ and constant $\delta$, then the number of measurements $m$ is
asymptotically bounded as:
$m=\Omega\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$
(34)
###### Proof.
Since $\mu_{a}(n_{t},2k,\delta)$ and $\mu_{a}(n_{r},2k,\delta)$ are obtained
by solving problem P1 (with their respective $n_{t}$, $n_{r}$ and
$\delta$ values), there exist matrices $\boldsymbol{X_{t}}$ and
$\boldsymbol{X_{r}}$, with dimensions $\mu_{a}(n_{t},2k,\delta)\times n_{t}$
and $\mu_{a}(n_{r},2k,\delta)\times n_{r}$, which satisfy $2k-$RIP with
constant $\delta$. Thus, it follows by Theorem 2 that:
$\displaystyle\mu_{a}(n_{t},2k,\delta)$ $\displaystyle\geq
c_{\delta}2k\log\left(\frac{n_{t}}{2k}\right)$ (35)
$\displaystyle\mu_{a}(n_{r},2k,\delta)$ $\displaystyle\geq
c_{\delta}2k\log\left(\frac{n_{r}}{2k}\right)$ (36)
Therefore, by Eq. (33), we have
$\displaystyle m=m_{t}m_{r}$ $\displaystyle\geq
4c_{\delta}^{2}k^{2}\log\left(\frac{n_{t}}{2k}\right)\log\left(\frac{n_{r}}{2k}\right)$
(37)
Finally, let $c=0.5$ and recall that the ratio $\frac{n_{t}}{k}$ grows without
bound (by the assumption $n_{t}\geq k^{1+\epsilon}$). Then, there exists $n_{t0}\in\mathbb{N}$ such that
$\log(\frac{n_{t}}{2k})\geq c\log(\frac{n_{t}}{k})$ for all $n_{t}\geq
n_{t0}$. Similarly, there exists $n_{r0}\in\mathbb{N}$ such that
$\log(\frac{n_{r}}{2k})\geq c\log(\frac{n_{r}}{k})$ for all $n_{r}\geq
n_{r0}$. Then, it follows that $m\geq
4c^{2}c_{\delta}^{2}k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)$
where $4c^{2}c_{\delta}^{2}=c_{\delta}^{2}$ is a constant, from which Eq. (34)
follows. ∎
### IV-B Tightness of the Measurement Bound
To argue that the measurement lower bound in Theorem 5 is tight, we will show
that there exists a solution, based on [19], which yields sensing matrices
that satisfy $2k-$RIP with constants $\in(0,1)$ and with
$m\in\Theta\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$.
We briefly discuss the measurement framework of [19] next.
In [19], a source-coding-based framework for the sparse MIMO channel
estimation problem is developed. This solution proposes a method for obtaining
a small number of measurements that are sufficient to estimate the channel.
Such measurements are designed based on two carefully chosen binary linear
source codes, $C_{t}$ and $C_{r}$. These codes dictate the design of tx-
precoders (using $C_{t}$) and rx-combiners (using $C_{r}$) and produce real-
valued measurement (sensing) matrices, namely, $\boldsymbol{H_{t}}$ (of size
$m_{t}{\times}n_{t}$) and $\boldsymbol{H_{r}}$ (of size $m_{r}{\times}n_{r}$),
respectively. The matrix $\boldsymbol{H_{t}}$ can estimate $k-$sparse MISO
channel vectors (i.e., produces unique measurements), while
$\boldsymbol{H_{r}}$ can estimate $k-$sparse SIMO channels. Hence, the spark
of both matrices is greater than $2k$ (by Theorem 1). Measurements are then
obtained using all combinations of $m_{t}$ tx-precoders and $m_{r}$ rx-
combiners, and can be arranged as
$\boldsymbol{y_{v}}=\boldsymbol{H_{v}}\boldsymbol{q^{a}_{v}}+\boldsymbol{n_{v}}$
where $\boldsymbol{H_{v}}=\boldsymbol{H_{t}}\otimes\boldsymbol{H_{r}}$. By
Lemma 9 (in Appendix D), we have that
$\operatorname{spark}(\boldsymbol{H_{v}})>2k$. Hence, either
$\boldsymbol{H_{v}}$ or a scaled version of it satisfies $2k-$RIP with a
constant $\delta_{h}\in(0,1)$. This measurement framework is shown to produce
a number of measurements, $m$, that is lower bounded as:
$m\geq\underline{m}\triangleq\underbrace{\left\lceil\log_{2}\left(\sum_{i=0}^{k}{n_{t}\choose
i}\right)\right\rceil}_{\leq
m_{t}}\underbrace{\left\lceil\log_{2}\left(\sum_{i=0}^{k}{n_{r}\choose
i}\right)\right\rceil}_{\leq m_{r}}.$ (38)
This lower bound is achievable with equality for specific examples as shown in
[19]. However, it is not immediately clear how this bound compares to our
bound in Eq. (34). The following lemma sheds more light on this issue:
###### Lemma 6.
The asymptotic behavior of $\underline{m}$, defined in Eq. (38) follows:
$\underline{m}=\Theta\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$.
This is the same asymptotic behavior as the lower bound in Theorem 5. The
proof is provided in Appendix B. Next, we will examine a specific solution
based on the family of BCH codes, which results in a number of measurements
upper bounded as
$m=O\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$.
###### Example 1 (BCH codes).
Although BCH codes are natively error-correcting codes, they can be used as
syndrome source codes as well. (A linear block error-correcting code (LBC)
can be utilized as a syndrome source code, which can uniquely compress
sequences that contain a number of 1's less than or equal to the number of
correctable errors of the code [21]. The parity check matrix of the LBC code
is used as the generator matrix for the source code; hence, the number of
parity bits of the LBC code is the length of the compressed sequences of the
corresponding source code.) By the properties of BCH codes, we have that for
any positive integers $t\geq 3$ and $k<2^{t-1}$, there exists a binary BCH
code with: i) block length $n=2^{t}-1$, ii) minimum distance
$d_{\text{min}}\geq 2k+1$ (hence, it can correct up to $k$ errors), and iii) a
number of parity check bits $m\leq tk=k\log_{2}\left(n+1\right)$. Using BCH
codes to design $C_{t}$ and $C_{r}$, we obtain a solution whose number of
measurements is upper bounded according to the following lemma:
###### Lemma 7.
The number of measurements achievable using BCH codes in the framework of [19]
is asymptotically bounded as
$m=O\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$.
The proof depends on constructing syndrome source codes with arbitrary block
lengths, and is provided in Appendix C.
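As a rough numerical illustration (sizes chosen so that $n{=}2^{t}{-}1$ matches a BCH block length; the values are illustrative, not from [19]), one can evaluate $\underline{m}$ of Eq. (38) next to the BCH parity-bit product $\left(k\log_{2}(n_{t}+1)\right)\left(k\log_{2}(n_{r}+1)\right)$:

```python
import math

def m_lower_bar(n_t, n_r, k):
    """Evaluate the lower bound m_bar of Eq. (38)."""
    term = lambda n: math.ceil(math.log2(sum(math.comb(n, i) for i in range(k + 1))))
    return term(n_t) * term(n_r)

def m_bch(n_t, n_r, k):
    """BCH-based measurement count: at most k * log2(n + 1) parity bits per side,
    assuming n + 1 is a power of two (BCH block length n = 2^t - 1)."""
    return (k * math.log2(n_t + 1)) * (k * math.log2(n_r + 1))

for n in (63, 255, 1023):           # BCH-friendly sizes
    k = 3
    print(n, m_lower_bar(n, n, k), m_bch(n, n, k))
```

For these sizes the two quantities stay within a constant factor of each other, consistent with both scaling as $\Theta\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$.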
Among all solutions in [19], we are interested in the ones whose number of
measurements, $m$, is closest to $\underline{m}$. These solutions are
"Optimum" in the sense of minimizing the number of measurements. Recall that
$\underline{m}$ is the lower bound of all solutions based on [19] (see Eq.
(38)). The following theorem shows that these optimum solutions scale
similarly to $\underline{m}$, which in turn shows that the lower bound of
Theorem 5 is tight.
###### Theorem 8.
The number of measurements of “Optimum Solutions” of [19] scales as
$m=\Theta\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$
###### Proof.
By Lemma 6, we have that all solutions, including the optimal, have
$m=\Omega\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$.
Moreover, Lemma 7 shows that solutions based on BCH codes result in
$m=O\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$.
Since optimal solutions have a number of measurements smaller than or equal to
those obtained by BCH codes, they also satisfy the same asymptotic upper
bound. Therefore,
$m=\Theta\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$
for optimal solutions. ∎
###### Remark.
Even though we have shown that the bound of Theorem 5 is tight, we have
demonstrated this tightness in the asymptotic regime of $n$ and $k$. The
dependence on the RIP constant, $\delta$, however, remains an open question.
## V Conclusion
In this paper, we study the fundamental lower bound governing the number of
measurements required for estimating sparse, large MIMO channels. We consider
a simple analog transceiver, where each channel measurement is obtained using
a specific combination of beamforming vectors at the transmitter and receiver.
The currently known lower bound on the number of measurements is
$\Omega\left(k\log\left(\frac{n_{r}n_{t}}{k}\right)\right)$. We derive a
tighter lower measurement bound, which scales asymptotically as
$\Omega\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$.
The tightness of our derived bound is demonstrated by showing that there
exists a solution with
$m=O\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)$.
## Appendix A Proof of Lemma 4
###### Proof.
Let $\boldsymbol{A}\in\mathbb{R}^{m_{a}\times n_{a}}$ and
$\boldsymbol{B}\in\mathbb{R}^{m_{b}\times n_{b}}$. Denote by
$\boldsymbol{a_{i}}$ the $i^{\text{th}}$ column of $\boldsymbol{A}$ and let
$a_{i,j}$ be its $j^{\text{th}}$ element. And define
$\boldsymbol{C}\triangleq\boldsymbol{A}\otimes\boldsymbol{B}$. Denote by
$\delta_{c}$ the $k-$RIP constant of $\boldsymbol{C}$.
We will first show that $\delta_{c}\geq\delta_{b}$. To that end, let us define
the sets $\mathcal{X}_{c}$ and $\mathcal{X}_{b}$ as:
$\displaystyle\mathcal{X}_{c}$
$\displaystyle\triangleq\{\boldsymbol{x_{c}}\in\mathbb{R}^{n_{a}n_{b}}:\left\lVert\boldsymbol{x_{c}}\right\rVert_{0}\leq k\}$ (39)
$\displaystyle\mathcal{X}_{b}$
$\displaystyle\triangleq\{\boldsymbol{x_{b}}\in\mathbb{R}^{n_{b}}:\left\lVert\boldsymbol{x_{b}}\right\rVert_{0}\leq k\}$ (40)
Since $\delta_{c}$ is the $k$-RIP constant of $\boldsymbol{C}$, for all
$\boldsymbol{x_{c}}\in\mathcal{X}_{c}$ we have
$(1-\delta_{c})\left\lVert\boldsymbol{x_{c}}\right\rVert^{2}\leq\left\lVert\boldsymbol{C}\boldsymbol{x_{c}}\right\rVert^{2}\leq(1+\delta_{c})\left\lVert\boldsymbol{x_{c}}\right\rVert^{2}$
(41)
Now, we will focus our attention on a smaller class of vectors
$\boldsymbol{x}^{(\boldsymbol{b})}_{\boldsymbol{c}}$, which constitute a
strict subset of $\mathcal{X}_{c}$, defined as follows
$\boldsymbol{x}^{(\boldsymbol{b})}_{\boldsymbol{c}}\triangleq\begin{pmatrix}\boldsymbol{b}\\ \boldsymbol{0}\\ \vdots\\ \boldsymbol{0}\end{pmatrix},$ (42)
where $\boldsymbol{b}\in\mathcal{X}_{b}$ and
$\boldsymbol{x}^{(\boldsymbol{b})}_{\boldsymbol{c}}\in\mathbb{R}^{n_{a}n_{b}}$.
Then, by construction,
$\boldsymbol{x}^{(\boldsymbol{b})}_{\boldsymbol{c}}\in\mathcal{X}_{c}$, and
$\left\lVert\boldsymbol{x}^{(\boldsymbol{b})}_{\boldsymbol{c}}\right\rVert=\left\lVert\boldsymbol{b}\right\rVert$.
Now, observe that
$\displaystyle\left\lVert\boldsymbol{C}\boldsymbol{x}^{(\boldsymbol{b})}_{\boldsymbol{c}}\right\rVert^{2}=\left\lVert\left(\boldsymbol{A}\otimes\boldsymbol{B}\right)\boldsymbol{x}^{(\boldsymbol{b})}_{\boldsymbol{c}}\right\rVert^{2}$
$\displaystyle=\sum_{j=1}^{m_{a}}|a_{1,j}|^{2}\left\lVert\boldsymbol{B}\boldsymbol{b}\right\rVert^{2}$
(43)
$\displaystyle=\left\lVert\boldsymbol{a_{1}}\right\rVert^{2}\left\lVert\boldsymbol{B}\boldsymbol{b}\right\rVert^{2}$
(44)
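As a sanity check, the identity (44) can be verified numerically. The NumPy sketch below (with arbitrarily chosen dimensions) places a $k$-sparse $\boldsymbol{b}$ in the first block of $\boldsymbol{x}^{(\boldsymbol{b})}_{\boldsymbol{c}}$ and compares both sides:

```python
import numpy as np

rng = np.random.default_rng(0)
ma, na, mb, nb, k = 5, 6, 4, 7, 3  # illustrative dimensions

A = rng.standard_normal((ma, na))
B = rng.standard_normal((mb, nb))

# a k-sparse vector b in R^{nb}
b = np.zeros(nb)
b[rng.choice(nb, size=k, replace=False)] = rng.standard_normal(k)

# x_c^(b) = (b, 0, ..., 0): b occupies the first block of R^{na*nb}
x = np.zeros(na * nb)
x[:nb] = b

lhs = np.linalg.norm(np.kron(A, B) @ x) ** 2
rhs = np.linalg.norm(A[:, 0]) ** 2 * np.linalg.norm(B @ b) ** 2
assert np.isclose(lhs, rhs)
```

The check works because the first $n_b$ columns of $\boldsymbol{A}\otimes\boldsymbol{B}$ are exactly the blocks $a_{1,j}\boldsymbol{B}$ built from the first column of $\boldsymbol{A}$.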
Since $\delta_{b}$ is the $k$-RIP constant of $\boldsymbol{B}$, for all
$\boldsymbol{b}\in\mathcal{X}_{b}$ we have
$\small\left\lVert\boldsymbol{a_{1}}\right\rVert^{2}(1-\delta_{b})\left\lVert\boldsymbol{b}\right\rVert^{2}\leq\underbrace{\left\lVert\boldsymbol{a_{1}}\right\rVert^{2}\left\lVert\boldsymbol{B}\boldsymbol{b}\right\rVert^{2}}_{=\left\lVert\boldsymbol{C}\boldsymbol{x}^{(\boldsymbol{b})}_{\boldsymbol{c}}\right\rVert^{2}}\leq\left\lVert\boldsymbol{a_{1}}\right\rVert^{2}(1+\delta_{b})\left\lVert\boldsymbol{b}\right\rVert^{2}$
(45)
Since (i) the space of all possible constructions of
$\boldsymbol{x}^{(\boldsymbol{b})}_{\boldsymbol{c}}$ is a strict subset of
$\mathcal{X}_{c}$, and since (ii) $\delta_{b}$ is the smallest constant such
that Eq. (45) holds, the following two inequalities must always hold true
$\displaystyle(1-\delta_{c})\left\lVert\boldsymbol{b}\right\rVert^{2}\leq$
$\displaystyle\left\lVert\boldsymbol{a_{1}}\right\rVert^{2}(1-\delta_{b})\left\lVert\boldsymbol{b}\right\rVert^{2}$
(B1)
$\displaystyle\left\lVert\boldsymbol{a_{1}}\right\rVert^{2}(1+\delta_{b})\left\lVert\boldsymbol{b}\right\rVert^{2}\leq(1+\delta_{c})\left\lVert\boldsymbol{b}\right\rVert^{2}$
(B2)
If $\left\lVert\boldsymbol{a_{1}}\right\rVert^{2}\leq 1$, then from Eq. (B1)
we have $\delta_{c}\geq\delta_{b}$. Otherwise, if
$\left\lVert\boldsymbol{a_{1}}\right\rVert^{2}\geq 1$, then from Eq. (B2) we
have $\delta_{c}\geq\delta_{b}$. Therefore, $\delta_{b}\leq\delta_{c}$ is
always true. Now, define
$C^{\prime}\triangleq\boldsymbol{B}\otimes\boldsymbol{A}$. By the properties
of the Kronecker product, we know that there exist two “Permutation” matrices,
call them $\boldsymbol{P_{\rho}}$ and $\boldsymbol{P_{c}}$, such that:
$\boldsymbol{C^{\prime}}=\boldsymbol{P_{\rho}}\boldsymbol{C}\boldsymbol{P_{c}}=\boldsymbol{P_{\rho}}\left(\boldsymbol{A}\otimes\boldsymbol{B}\right)\boldsymbol{P_{c}},$
(46)
where $\boldsymbol{P_{\rho}}$ permutes the rows of $\boldsymbol{C}$, and
$\boldsymbol{P_{c}}$ permutes the columns of
$\boldsymbol{P_{\rho}}\boldsymbol{C}$. Then, we have that
$\displaystyle\left\lVert\boldsymbol{C^{\prime}}\boldsymbol{x_{c}}\right\rVert=\left\lVert\boldsymbol{P_{\rho}}\boldsymbol{C}\boldsymbol{P_{c}}\boldsymbol{x_{c}}\right\rVert\stackrel{{\scriptstyle(i)}}{{=}}\left\lVert\boldsymbol{C}\boldsymbol{P_{c}}\boldsymbol{x_{c}}\right\rVert.$
(47)
Also, observe that if $\boldsymbol{x_{c}}\in\mathcal{X}_{c}$, then
$\boldsymbol{P_{c}}\boldsymbol{x_{c}}$ has the same sparsity level as
$\boldsymbol{x_{c}}$ and hence it lies in $\mathcal{X}_{c}$, as well.
Therefore, it follows that
$(1-\delta_{c})\left\lVert\boldsymbol{x_{c}}\right\rVert^{2}\leq\underbrace{\left\lVert\boldsymbol{C^{\prime}}\boldsymbol{x_{c}}\right\rVert^{2}}_{=\left\lVert\boldsymbol{C}\boldsymbol{P_{c}}\boldsymbol{x_{c}}\right\rVert^{2}}\leq(1+\delta_{c})\left\lVert\boldsymbol{x_{c}}\right\rVert^{2},$
(48)
which shows that both $\boldsymbol{C}$ and $\boldsymbol{C^{\prime}}$ have the
same $k$-RIP constant $\delta_{c}$. Then, it follows that
$\delta_{c}\geq\delta_{a}$. Therefore,
$\delta_{c}\geq\max\\{\delta_{a},\delta_{b}\\}$, which concludes our proof. ∎
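Since $\boldsymbol{C^{\prime}}$ and $\boldsymbol{C}$ differ only by row and column permutations (Eq. (46)), they share the same singular values, which is easy to confirm numerically; a small NumPy sketch with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((4, 6))

# kron(A, B) and kron(B, A) are permutation-equivalent,
# so their singular value sets coincide
s1 = np.sort(np.linalg.svd(np.kron(A, B), compute_uv=False))
s2 = np.sort(np.linalg.svd(np.kron(B, A), compute_uv=False))
assert np.allclose(s1, s2)
```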
## Appendix B Proof Of Lemma 6
###### Proof.
First, observe that for $k<\frac{n+1}{2}$ we have ${n\choose i}<{n\choose
k}$ for all $0\leq i<k$. Thus, for $k<\frac{n+1}{2}$, we have that
$\sum_{i=0}^{k}{n\choose i}\leq\left(k+1\right){n\choose k}$ (49)
By taking the logarithm of both sides, we get
$\displaystyle\log\left(\sum_{i=0}^{k}{n\choose i}\right)$
$\displaystyle\leq\log\left(k+1\right)+\log{n\choose k}$ (50)
$\displaystyle\leq\log\left(k+1\right)+k\log\left(\frac{n}{k}\right)+k\log e,$
(51)
where (51) follows from the well-known bounds on ${n\choose k}$ [20]
$\left(\frac{n}{k}\right)^{k}\leq{n\choose k}\leq\left(\frac{ne}{k}\right)^{k}.$ (52)
From (52) we also have that $\left(\frac{n}{k}\right)^{k}\leq{n\choose
k}\leq\sum_{i=0}^{k}{n\choose i}$. This gives us the following upper and lower
bounds on $\Upsilon$, where
$\Upsilon\triangleq\left\lceil\log_{2}\left(\sum_{i=0}^{k}{n\choose i}\right)\right\rceil$ (53)
$k\log\left(\frac{n}{k}\right)\leq\Upsilon\leq\log(k+1)+k\log\left(\frac{n}{k}\right)+k\log e+1$ (54)
Therefore, we have that $\Upsilon$ is in both
$\Omega\left(k\log\frac{n}{k}\right)$ and $O\left(k\log\frac{n}{k}\right)$.
Hence, $\Upsilon\in\Theta\left(k\log\frac{n}{k}\right)$. Finally, we can
conclude that $\underline{m}=\Upsilon|_{n=n_{t}}\Upsilon|_{n=n_{r}}$ is
asymptotically bounded as
$\underline{m}=\Theta\left(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right)\right)\qed$
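The two-sided bound (54) on $\Upsilon$ can also be checked directly for concrete values of $n$ and $k$ (with $k<\frac{n+1}{2}$, and all logarithms taken base $2$); a short Python sketch with illustrative values:

```python
import math

def Upsilon(n, k):
    # ceil(log2(sum_{i<=k} C(n, i))), Eq. (53)
    return math.ceil(math.log2(sum(math.comb(n, i) for i in range(k + 1))))

for n, k in [(100, 5), (1000, 8), (64, 3)]:
    ups = Upsilon(n, k)
    lower = k * math.log2(n / k)
    upper = math.log2(k + 1) + k * math.log2(n / k) + k * math.log2(math.e) + 1
    assert lower <= ups <= upper  # Eq. (54)
```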
## Appendix C Proof Of Lemma 7
###### Proof.
First, we will show that $m_{t}\leq ck\log\frac{n_{t}}{k}$ for some $c>0$. Let
$n_{t}$ be an arbitrary integer such that $n_{t}\geq 7$. Then, there exists a
positive integer $t\geq 3$ such that $2^{t}-1\leq n_{t}<2^{t+1}-1$. If
$n_{t}=2^{t}-1$, then there exists a BCH code with a number of parity check
bits $m_{t}$ such that $m_{t}\leq k\log(n_{t}+1)$. Hence, there exists a
positive constant $c_{0}\in\mathbb{R}$ such that $m_{t}\leq
c_{0}k\log(n_{t})$. On the other hand, if $2^{t}-1<n_{t}<2^{t+1}-1$, then we
can construct a linear block code of length $n_{t}$ by shortening a BCH code
with block length $n_{t}^{\prime}=2^{t+1}-1$ and number of parity check bits
$m_{t}\leq k\log(n_{t}^{\prime}+1)$. This shortening process removes
$n_{t}^{\prime}-n_{t}$ information bits from the codewords but leaves the
number of parity check bits intact, so $m_{t}\leq
k\log\left(n_{t}^{\prime}+1\right)$ still holds. Thus, we have
$m_{t}\leq
k\log\left(2^{t+1}\right)\leq\frac{4}{3}k\log\left(2^{t}\right)<\frac{4}{3}k\log\left(n_{t}+1\right).$
(55)
Now, recall that $n_{t}\geq k^{1+\epsilon}$, where $\epsilon>0$ (by
assumption). Then, we have that $\frac{1}{\epsilon}\log\frac{n_{t}}{k}\geq\log
k$. Therefore,
$\displaystyle\log(n_{t})$ $\displaystyle=\log\left(\frac{n_{t}}{k}\times
k\right)=\left(\log\left(\frac{n_{t}}{k}\right)+\log\left(k\right)\right)$
(56)
$\displaystyle\leq\left(\log\left(\frac{n_{t}}{k}\right)+\frac{1}{\epsilon}\log\frac{n_{t}}{k}\right)=\left(1+\frac{1}{\epsilon}\right)\log\left(\frac{n_{t}}{k}\right)$
(57)
Thus, it follows that $m_{t}=O(k\log\left(\frac{n_{t}}{k}\right))$ for
arbitrary $n_{t}\in\mathbb{N}$.
Similarly, we have $m_{r}=O\left(k\log\left(\frac{n_{r}}{k}\right)\right)$.
Thus, it follows that
$m=O(k^{2}\log\left(\frac{n_{t}}{k}\right)\log\left(\frac{n_{r}}{k}\right))$.
∎
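The step (56)-(57), where the assumption $n_{t}\geq k^{1+\epsilon}$ turns $\log n_{t}$ into $O\left(\log\frac{n_{t}}{k}\right)$, can be sanity-checked numerically; a small Python sketch (the values of $\epsilon$ and $k$ are arbitrary):

```python
import math

eps = 0.5
for k in [4, 8, 16]:
    # any n_t >= k^{1+eps} satisfies the assumption
    nt = math.ceil(k ** (1 + eps)) * 3
    lhs = math.log2(nt)                       # log(n_t)
    rhs = (1 + 1 / eps) * math.log2(nt / k)   # (1 + 1/eps) log(n_t / k), Eq. (57)
    assert lhs <= rhs
```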
## Appendix D Spark of the Kronecker Product
###### Lemma 9.
Let $\boldsymbol{A}\in\mathbb{C}^{m_{a}\times n_{a}}$ and
$\boldsymbol{B}\in\mathbb{C}^{m_{b}\times n_{b}}$ be such that
$\min\{\operatorname{spark}(\boldsymbol{A}),\operatorname{spark}(\boldsymbol{B})\}>k$.
Then, $\operatorname{spark}(\boldsymbol{A}\otimes\boldsymbol{B})>k$.
###### Proof.
Let $\left(\boldsymbol{a_{i}}\right)_{i=1}^{n_{a}}$ and
$\left(\boldsymbol{b_{j}}\right)_{j=1}^{n_{b}}$ be the columns of
$\boldsymbol{A}$ and $\boldsymbol{B}$, respectively. Since
$\operatorname{spark}(\boldsymbol{A})>k$, then any $k$ columns of
$\boldsymbol{A}$ are linearly independent. Similarly, any $k$ columns of
$\boldsymbol{B}$ are also independent. Observe that any column of
$\boldsymbol{A}\otimes\boldsymbol{B}$ is of the form
$\boldsymbol{a_{i}}\otimes\boldsymbol{b_{j}}$. Pick any $k$ columns of
$\boldsymbol{A}\otimes\boldsymbol{B}$, i.e.,
$\boldsymbol{a_{p_{1}}}\otimes\boldsymbol{b_{t_{1}}}$,
$\boldsymbol{a_{p_{2}}}\otimes\boldsymbol{b_{t_{2}}}$, $\dots$,
$\boldsymbol{a_{p_{k}}}\otimes\boldsymbol{b_{t_{k}}}$. We will show that
$\sum_{i=1}^{k}\alpha_{i}\boldsymbol{a_{p_{i}}}\otimes\boldsymbol{b_{t_{i}}}=\boldsymbol{0}$
if and only if $\alpha_{i}=0\;\forall i$.
Assume, without loss of generality, that
$\displaystyle\footnotesize\boldsymbol{a_{p_{1}}}$
$\displaystyle=\dots=\boldsymbol{a_{p_{d_{1}}}},\text{
and}\quad\sum_{i=1}^{d_{1}}\alpha_{i}\boldsymbol{b_{t_{i}}}=\boldsymbol{r_{d_{1}}}$
$\displaystyle\boldsymbol{a_{p_{d_{1}+1}}}$
$\displaystyle=\dots=\boldsymbol{a_{p_{d_{2}}}},\text{
and}\quad\sum_{i=d_{1}+1}^{d_{2}}\alpha_{i}\boldsymbol{b_{t_{i}}}=\boldsymbol{r_{d_{2}}}$
$\displaystyle\quad\quad\vdots\hskip 85.35826pt\vdots$
$\displaystyle\boldsymbol{a_{p_{d_{l-1}+1}}}$
$\displaystyle=\dots=\boldsymbol{a_{p_{d_{l}}}},\text{
and}\quad\sum_{i=d_{l-1}+1}^{d_{l}=k}\alpha_{i}\boldsymbol{b_{t_{i}}}=\boldsymbol{r_{d_{l}}}$
Then, we can rewrite
$\sum_{i=1}^{k}\alpha_{i}\boldsymbol{a_{p_{i}}}\otimes\boldsymbol{b_{t_{i}}}$
as:
$\displaystyle\boldsymbol{a_{p_{d_{1}}}}\otimes\underbrace{\left(\sum_{i=1}^{d_{1}}\alpha_{i}\boldsymbol{b_{t_{i}}}\right)}_{\boldsymbol{r_{d_{1}}}}+\boldsymbol{a_{p_{d_{2}}}}\otimes\underbrace{\left(\sum_{i=d_{1}+1}^{d_{2}}\alpha_{i}\boldsymbol{b_{t_{i}}}\right)}_{\boldsymbol{r_{d_{2}}}}$
$\displaystyle\hskip
85.35826pt+\dots+\boldsymbol{a_{p_{d_{l}}}}\otimes\underbrace{\left(\sum_{i=d_{l-1}+1}^{d_{l}}\alpha_{i}\boldsymbol{b_{t_{i}}}\right)}_{\boldsymbol{r_{d_{l}}}}$
Suppose there exists at least one index $i_{0}$ such that $\alpha_{i_{0}}\neq
0$; then some $\boldsymbol{r_{d_{i}}}\neq 0$, since all the
$\boldsymbol{b_{t_{i}}}$ within each group are linearly independent. Finally,
since all the distinct $\boldsymbol{a_{p_{d_{i}}}}$ are linearly independent,
$\sum_{i=1}^{k}\alpha_{i}\boldsymbol{a_{p_{i}}}\otimes\boldsymbol{b_{t_{i}}}\neq\boldsymbol{0}$.
Therefore, the $k$ columns
$\left(\boldsymbol{a_{p_{i}}}\otimes\boldsymbol{b_{t_{i}}}\right)_{i=1}^{k}$,
of $\boldsymbol{A}\otimes\boldsymbol{B}$, are independent. Hence,
$\operatorname{spark}(\boldsymbol{A}\otimes\boldsymbol{B})>k$. ∎
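Lemma 9 is easy to probe numerically: generic random $m\times n$ matrices have spark $m+1$ with probability one, so for $k\leq m$ any $k$ columns of $\boldsymbol{A}\otimes\boldsymbol{B}$ should be linearly independent. A NumPy sketch (dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
ma = mb = 4
na = nb = 6
k = 4  # generic random matrices have spark m + 1 = 5 > k

A = rng.standard_normal((ma, na))
B = rng.standard_normal((mb, nb))
C = np.kron(A, B)  # 16 x 36

# any k columns of C should have full column rank
for _ in range(20):
    cols = rng.choice(na * nb, size=k, replace=False)
    assert np.linalg.matrix_rank(C[:, cols]) == k
```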
## References
* [1] D. Tse and P. Viswanath, _Fundamentals of wireless communication_. Cambridge university press, 2005.
* [2] Y. Shabara, C. E. Koksal, and E. Ekici, “Beam discovery using linear block codes for millimeter wave communication networks,” _IEEE/ACM Transactions on Networking_ , 2019.
* [3] D. Fan, F. Gao, Y. Liu, Y. Deng, G. Wang, Z. Zhong, and A. Nallanathan, “Angle Domain Channel Estimation in Hybrid Millimeter Wave Massive MIMO Systems,” _IEEE Transactions on Wireless Communications_ , vol. 17, no. 12, pp. 8165–8179, 2018.
* [4] Y. Ding and B. D. Rao, “Dictionary learning-based sparse channel representation and estimation for FDD massive MIMO systems,” _IEEE Transactions on Wireless Communications_ , vol. 17, no. 8, pp. 5437–5451, 2018.
* [5] J. W. Choi, B. Shim, Y. Ding, B. Rao, and D. I. Kim, “Compressed sensing for wireless communications: Useful tips and tricks,” _IEEE Communications Surveys & Tutorials_, vol. 19, no. 3, pp. 1527–1550, 2017.
* [6] D. L. Donoho, “Compressed sensing,” _IEEE Transactions on information theory_ , vol. 52, no. 4, pp. 1289–1306, 2006.
* [7] A. Alkhateeby, G. Leusz, and R. W. Heath, “Compressed sensing based multi-user millimeter wave systems: How many measurements are needed?” in _2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , April 2015, pp. 2909–2913.
* [8] J. W. Choi, B. Shim, Y. Ding, B. Rao, and D. I. Kim, “Compressed sensing for wireless communications : Useful tips and tricks,” _IEEE Communications Surveys Tutorials_ , vol. PP, no. 99, pp. 1–1, 2017.
* [9] A. Alkhateeb, O. El Ayach, G. Leus, and R. W. Heath, “Channel estimation and hybrid precoding for millimeter wave cellular systems,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 8, no. 5, pp. 831–846, 2014.
* [10] Z. Gao, L. Dai, C. Yuen, and Z. Wang, “Asymptotic orthogonality analysis of time-domain sparse massive mimo channels,” _IEEE Communications Letters_ , vol. 19, no. 10, pp. 1826–1829, 2015.
* [11] M. Masood, L. H. Afify, and T. Y. Al-Naffouri, “Efficient coordinated recovery of sparse channels in massive mimo,” _IEEE Transactions on Signal Processing_ , vol. 63, no. 1, pp. 104–118, 2014.
* [12] S. Sun and T. S. Rappaport, “Millimeter wave mimo channel estimation based on adaptive compressed sensing,” in _2017 IEEE International Conference on Communications Workshops (ICC Workshops)_. IEEE, 2017, pp. 47–53.
* [13] W. Ma, C. Qi, Z. Zhang, and J. Cheng, “Deep learning for compressed sensing based channel estimation in millimeter wave massive mimo,” in _2019 11th International Conference on Wireless Communications and Signal Processing (WCSP)_. IEEE, 2019, pp. 1–6.
* [14] D. L. Donoho and M. Elad, “Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization,” _Proceedings of the National Academy of Sciences_ , vol. 100, no. 5, pp. 2197–2202, 2003.
* [15] Y. C. Eldar and G. Kutyniok, _Compressed sensing: theory and applications_. Cambridge university press, 2012.
* [16] M. A. Davenport, “Random observations on random observations: Sparse signal acquisition and processing,” Ph.D. dissertation, Rice University, 2010.
* [17] P. J. Dhrymes, “Mathematics for Econometrics,” Springer, Tech. Rep., 1978.
* [18] S. Jokar and V. Mehrmann, “Sparse solutions to underdetermined kronecker product systems,” _Linear Algebra and its Applications_ , vol. 431, no. 12, pp. 2437–2447, 2009.
* [19] Y. Shabara, E. Ekici, and C. E. Koksal, “Source Coding Based mmWave Channel Estimation with Deep Learning Based Decoding,” _arXiv preprint arXiv:1905.00124_ , 2019.
* [20] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, _Introduction to Algorithms_. MIT press, 2009.
* [21] T. Ancheta, “Syndrome-source-coding and its universal generalization,” _IEEE Transactions on Information Theory_ , vol. 22, no. 4, pp. 432–436, Jul 1976.
# The Perfect Hyperfluid of Metric-Affine Gravity: The Foundation
Damianos Iosifidis Institute of Theoretical Physics, Department of Physics
Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
<EMAIL_ADDRESS>
###### Abstract
We set the foundation and formulate the Perfect (Ideal) Hyperfluid. The latter
represents the natural generalization of the usual perfect fluid structure
where now the microscopic characteristics of matter (spin, shear, dilation)
are also taken into account, sourcing a non-Riemannian arena (i.e spacetime
torsion and non-metricity) for Metric-Affine Gravity. We derive the energy
tensors of this Hyperfluid structure and subsequently present the conservation
laws obeyed by them. Finally, we consider a Cosmological application of this
Perfect Hyperfluid and classify some possible forms of this fluid structure.
###### Contents
1. I Introduction
2. II The Setup
3. III The sources of Metric-Affine Gravity (MAG)
1. III.1 Canonical and Metrical Energy Momentum and Hypermomentum Tensors
2. III.2 Conservation Laws
4. IV Perfect Hyperfluid: Foundation
1. IV.1 Conservation Laws of the Perfect Hyperfluid
2. IV.2 Exterior Calculus representation of Perfect Hyperfluid
5. V Theories with $\mathcal{L}_{G}=\mathcal{L}_{G}(R^{\lambda}_{\;\;\;\alpha\beta\gamma})$
6. VI Cosmological Application of the Perfect Hyperfluid
1. VI.1 Cosmological Hyperfluids Classification
7. VII Discussion
8. VIII Acknowledgments
## I Introduction
The Perfect Fluid notion that we are acquainted with from GR (or even metric
extensions of the latter) has very broad and important applications. However,
this collective fluid representation (the Classical Perfect Fluid) is
applicable only to matter with no internal structure. At the same time, it is
known that in order to probe the non-Riemannian characteristics (torsion and
non-metricity) of the underlying spacetime manifold, matter with intrinsic
hypermomentum has to be used puetzfeld2008probing . It is therefore natural to
ask what the generalization of the classical Perfect Fluid notion of GR to
Metric-Affine Gravity hehl1995metric ; iosifidis2019metric would be.
In this direction, many interesting models of fluids with intrinsic
hypermomentum were considered in the past, almost $20$ years ago (see
obukhov1993hyperfluid ; obukhov1996model ; babourova1998perfect ). However, a
generic formulation of an isotropic hyperfluid which represents the direct
generalization of the Perfect Fluid has been missing so far. Such an attempt
was made recently in damianos2020cosmological . However, that model was
tailored only to Cosmological settings and also relied on the assumption of
equivalence between the canonical and the metrical energy momentum tensors of
matter. It is the purpose of this note to extend this consideration by
dropping the latter postulate along with the homogeneity assumption, in order
to formulate a generalized isotropic hyperfluid.
The paper is organized as follows. Firstly, we set up our conventions and briefly
discuss the basic concepts of non-Riemannian geometry eisenhart2012non . Then
we touch upon the sources of MAG, being the canonical energy momentum, the
metrical energy momentum and the hypermomentum tensor, and also present their
associated conservation laws. Subsequently we formulate the concept of the
Perfect (Ideal) Hyperfluid by first giving its physical definition and later
using the appropriate mathematical formulation in order to extract its energy
tensors by demanding spatial isotropy. Continuing, we apply the conservation
laws of MAG to the energy tensors we derived establishing, therefore, a
complete description of the novel fluid model we propose. Finally, we consider
the Cosmological application of our Hyperfluid model and classify the possible
Perfect Cosmological Hyperfluids one can have depending on the equations of
state among the hyperfluid variables.
## II The Setup
We shall now briefly discuss some basic aspects of non-Riemannian geometry
that are essential for our analysis. We will use the conventions of
damianos2020cosmological ; iosifidis2019metric so we will go through them
rather quickly here.
Let us first fix notation. We shall use letters from the Latin alphabet
$a,b,c,...$ to denote anholonomic indices and Greek letters $\mu,\nu,\rho,...$
for holonomic (coordinate) ones, both ranging over $0,1,2,...,n-1$, where $n$
is the spacetime dimension. We consider an $n$-dimensional non-Riemannian
manifold endowed with a metric and a linear connection
$(\mathcal{M},g,\nabla)$. As usual, at each point $p$ on the manifold we can
define a set of local frames $\{e_{a}\}$ spanning the tangent space
$\mathcal{T}_{p}\mathcal{M}$ of the manifold at that point. We also define the
coframe one-forms $\{\vartheta^{a}\}$, living in the cotangent space
$\mathcal{T}_{p}^{\star}\mathcal{M}$, through the duality relation
$e_{a}\rfloor\vartheta^{b}=\delta_{a}^{b}$ (1)
where $\rfloor$ denotes the interior product. In addition, we assume the
existence of a $GL(n,R)$-valued linear connection $1$-form
$\Gamma^{a}_{\;\;b}$, allowing us to define the exterior gauge covariant
derivative on arbitrary tensor-valued forms. From the gauge potentials
$(g_{ab},\vartheta^{a},\Gamma^{a}_{\;\;b})$ we construct their associated
field strengths corresponding to non-metricity, torsion and curvature,
according to hehl1995metric
$\mathcal{Q}_{ab}:=-Dg_{ab}$ (2)
$\mathcal{T}^{a}:=D\vartheta^{a}=d\vartheta^{a}+\Gamma^{a}_{\;\;b}\wedge\vartheta^{b}$
(3)
$R^{a}_{\;\;b}:=d\Gamma^{a}_{\;\;b}+\Gamma^{a}_{\;\;c}\wedge\Gamma^{c}_{\;\;b}$
(4)
respectively. In the above $D$ represents the exterior gauge covariant
derivative, and obviously the non-metricity is an $1$-form while torsion and
curvature are both $2$-forms. Also, we define the invariant volume element as
$\mu:=\frac{1}{n!}\epsilon_{a_{1}...a_{n}}\vartheta^{a_{1}}\wedge...\wedge\vartheta^{a_{n}}$
and the Hodge dual for an arbitrary $p$-form $\Psi$ as
$\star\Psi:=\frac{1}{(n-p)!p!}g^{a_{1}c_{1}}...g^{a_{p}c_{p}}\epsilon_{a_{1}...a_{p}b_{1}...b_{n-p}}\Psi_{c_{1}...c_{p}}\vartheta^{b_{1}}\wedge...\wedge\vartheta^{b_{n-p}}$.
Now, considering an appropriate local gauge transformation we can set the
gauge such that (see hehl1995metric for more details; note that the
conventions we are using here differ slightly from the ones found there)
$\partial_{\nu}e_{\mu}^{\;\;a}-\Gamma^{\rho}_{\;\;\mu\nu}e_{\rho}^{\;\;a}+\Gamma^{a}_{\;\;b\nu}e_{\mu}^{\;\;b}=0$
(5)
where the expansions $e_{a}=e^{\mu}_{\;\;a}{\partial}_{\mu}$,
$\vartheta^{a}=e_{\mu}^{\;\;a}dx^{\mu}$,
$\Gamma^{a}_{\;\;b}=\Gamma^{a}_{\;\;b\mu}dx^{\mu}$ along with the identity
$e_{\mu}^{\;\;a}e^{\mu}_{\;\;b}=\delta_{a}^{b}$ , have been used. The latter
relation enables one to switch over to a holonomic description based on a
metric $g_{\mu\nu}$ and an independent affine connection
$\Gamma^{\lambda}_{\;\;\;\mu\nu}$. In this instance our definitions for
non-metricity, torsion and curvature respectively read (in the transition from
the anholonomic to the holonomic frame description we have denoted
$\mathcal{T}^{c}_{\;\;\mu\nu}:=-2S_{\mu\nu}^{\;\;\;c}$ and
$\mathcal{Q}_{abc}=Q_{bca}$; note also that the form indices appear on the
very right of the component expansion of each object, for instance
$\mathcal{Q}_{ab}:=\mathcal{Q}_{abc}\vartheta^{c}$ and
$\mathcal{T}^{a}=\frac{1}{2}\mathcal{T}^{a}_{\;\;bc}\vartheta^{b}\wedge\vartheta^{c}$)
$Q_{\alpha\mu\nu}=-\nabla_{\alpha}g_{\mu\nu}$ (6)
$S_{\mu\nu}^{\;\;\;\lambda}:=\Gamma^{\lambda}_{\;\;\;[\mu\nu]}$ (7)
$R^{\mu}_{\;\;\;\nu\alpha\beta}:=2\partial_{[\alpha}\Gamma^{\mu}_{\;\;\;|\nu|\beta]}+2\Gamma^{\mu}_{\;\;\;\rho[\alpha}\Gamma^{\rho}_{\;\;\;|\nu|\beta]}$
(8)
Our definitions for the associated torsion and non-metricity vectors are
$S_{\mu}:=S_{\mu\lambda}^{\;\;\;\;\lambda}\;\;,\;\;\;\tilde{S}_{\mu}:=\epsilon_{\mu\alpha\beta\gamma}S^{\alpha\beta\gamma}$
(9)
$Q_{\alpha}:=Q_{\alpha\mu\nu}g^{\mu\nu}\;,\;\;\tilde{Q}_{\nu}=Q_{\alpha\mu\nu}g^{\alpha\mu}$
(10)
respectively. In addition, without the use of any metric, from the Riemann
tensor we can derive the two contractions
$R_{\nu\beta}:=R^{\mu}_{\;\;\;\nu\mu\beta}\;,\;\widehat{R}_{\alpha\beta}:=R^{\mu}_{\;\;\;\mu\alpha\beta}$
(11)
the first one defines, as usual, the Ricci tensor (which is no longer
symmetric) and the second one is the homothetic curvature. If a metric is
given, another contraction can be formed, which reads
$\breve{R}^{\lambda}_{\;\;\kappa}:=R^{\lambda}_{\;\;\mu\nu\kappa}g^{\mu\nu}$
(12)
Note however that the Ricci scalar is always uniquely defined since
$R:=R_{\mu\nu}g^{\mu\nu}=-\breve{R}_{\mu\nu}g^{\mu\nu}\;\;,\;\;\;\widehat{R}_{\alpha\beta}g^{\alpha\beta}=0$
(13)
Finally, the affine connection can always be decomposed into a Riemannian
piece (the Levi-Civita connection) plus post-Riemannian contributions,
according to schouten1954ricci ; iosifidis2019metric
$\Gamma^{\lambda}_{\;\;\;\mu\nu}=\widetilde{\Gamma}^{\lambda}_{\;\;\;\mu\nu}+N^{\lambda}_{\;\;\;\mu\nu}$
(14)
where
$N_{\alpha\mu\nu}=\frac{1}{2}(Q_{\mu\nu\alpha}+Q_{\nu\alpha\mu}-Q_{\alpha\mu\nu})-(S_{\alpha\mu\nu}+S_{\alpha\nu\mu}-S_{\mu\nu\alpha})$
(15)
is the so-called distortion tensor and
$\widetilde{\Gamma}^{\lambda}_{\;\;\;\mu\nu}$ is the usual Levi-Civita
connection given by
$\widetilde{\Gamma}^{\lambda}_{\;\;\;\mu\nu}:=\frac{1}{2}g^{\alpha\lambda}(\partial_{\mu}g_{\nu\alpha}+\partial_{\nu}g_{\alpha\mu}-\partial_{\alpha}g_{\mu\nu})$
(16)
From now on, all Riemannian quantities (i.e. evaluated with respect to the
Levi-Civita connection) will be denoted by a tilde. We have now briefly
developed the geometric setup to be used for the rest of our analysis. For en
extended exposure on the aspects of non-Riemannian geometry the reader is
referred to eisenhart2012non .
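The decomposition (14)-(15) can be verified numerically: for a constant metric the Levi-Civita part (16) vanishes, so the distortion built from the non-metricity (6) and torsion (7) of an arbitrary connection must reproduce the connection itself. The NumPy sketch below assumes the index conventions of this section (the differentiation index is the last index of $\Gamma^{\lambda}_{\;\;\;\mu\nu}$, and $S_{\mu\nu\alpha}=g_{\alpha\lambda}S_{\mu\nu}^{\;\;\;\lambda}$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

M = rng.standard_normal((n, n))
g = M @ M.T + n * np.eye(n)           # constant symmetric metric, so Gamma_tilde = 0
ginv = np.linalg.inv(g)
Gam = rng.standard_normal((n, n, n))  # Gam[l, m, v] = Gamma^l_{mu nu}

# Q_{a mu nu} = -nabla_a g_{mu nu} (Eq. 6), with partial g = 0
Q = np.einsum('rma,rn->amn', Gam, g) + np.einsum('rna,mr->amn', Gam, g)

# S_{mu nu a} = g_{a l} Gamma^l_{[mu nu]} (Eq. 7, fully lowered)
S = 0.5 * np.einsum('al,lmn->mna', g, Gam - Gam.transpose(0, 2, 1))

# distortion N_{a mu nu}, Eq. (15)
N = (0.5 * (np.einsum('mna->amn', Q) + np.einsum('nam->amn', Q) - Q)
     - (S + np.einsum('anm->amn', S) - np.einsum('mna->amn', S)))

# Eq. (14): Gamma^l_{mu nu} = g^{l a} N_{a mu nu} (Levi-Civita part is zero here)
assert np.allclose(np.einsum('la,amn->lmn', ginv, N), Gam)
```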
## III The sources of Metric-Affine Gravity (MAG)
### III.1 Canonical and Metrical Energy Momentum and Hypermomentum Tensors
As we have already pointed out, in the MAG framework one starts with the three
independent fields $g_{ab},\vartheta^{a}$ and $\Gamma^{a}_{\;\;b}$. The field
equations of a given MAG Theory are obtained by varying the total action
independently with respect to those fields. Then, the variations of the matter
part of the action provide the sources of Gravity. Let $\mathcal{L}_{m}$ be
the matter Lagrangian of the Theory. Then we have the following variations:
Canonical (Noether) Energy Momentum ($n-1$)-form
$\Sigma_{a}:=\frac{\delta\mathcal{L}_{m}}{\delta\vartheta^{a}}$ (17)
Metrical (Hilbert) Energy Momentum $n$-form
$\sigma^{ab}:=2\frac{\delta\mathcal{L}_{m}}{\delta g_{ab}}$ (18)
Hypermomentum ($n-1$)-form
$\Delta_{a}^{\;\;b}:=-2\frac{\delta\mathcal{L}_{m}}{\delta\Gamma^{a}_{\;\;b}}$
(19)
Therefore, the three sources of MAG are the canonical, the metrical and the
hypermomentum currents of matter hehl1976hypermomentum ; hehl1978hypermomentum
. Note that the hypermomentum can be split into its three irreducible pieces
of spin, dilation and shear according to
$\Delta_{ab}=\tau_{ab}+\frac{1}{n}\Delta g_{ab}+\hat{\Delta}_{ab}$ (20)
where $\tau_{ab}:=\Delta_{[ab]}$ is the spin part, $\Delta:=\Delta_{cd}g^{cd}$
the dilation (trace), and $\hat{\Delta}_{ab}$ the shear (symmetric traceless
part). Of course, the physical role of spin and dilation is well known. The
most elusive so far has been the role of shear. There have been some
interesting early attempts to connect it to the hadronic properties of matter
hehl1997ahadronic but this connection is not totally clear. For a recent
study on the role of shear hypermomentum in Cosmology, see iosifidis2020non .
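The split (20) is straightforward to verify: the three pieces are the antisymmetric part, the trace part, and the symmetric traceless remainder. A small NumPy sketch in an orthonormal Minkowski frame:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
eta = np.diag([-1.0, 1, 1, 1])      # frame metric g_{ab}; numerically equal to its inverse
D = rng.standard_normal((n, n))     # hypermomentum Delta_{ab}

tau = 0.5 * (D - D.T)               # spin: Delta_{[ab]}
Dtr = np.einsum('cd,cd->', eta, D)  # dilation trace: Delta_{cd} g^{cd}
dil = (Dtr / n) * eta               # (1/n) Delta g_{ab}
shear = 0.5 * (D + D.T) - dil       # symmetric traceless part

assert np.allclose(tau + dil + shear, D)                    # Eq. (20)
assert np.isclose(np.einsum('cd,cd->', eta, shear), 0.0)    # shear is traceless
```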
In a holonomic frame the above canonical, metrical energy momentum and
hypermomentum tensors read
$t^{\mu}_{\;\;c}=\frac{1}{\sqrt{-g}}\frac{\delta S_{M}}{\delta
e_{\mu}^{\;\;c}}$ (21)
$T^{\alpha\beta}:=+\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{M})}{\delta
g_{\alpha\beta}}$ (22)
$\Delta_{\lambda}^{\;\;\;\mu\nu}:=-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{M})}{\delta\Gamma^{\lambda}_{\;\;\;\mu\nu}}$
(23)
where we have made the identifications $T_{ab}=-\star\sigma_{ab}$,
$t_{ab}=e_{b}\rfloor\star\Sigma_{a}$ and
$\Delta_{a}^{\;\;bd}:=g^{cd}e_{c}\rfloor\star\Delta_{a}^{\;\;b}$ in order to
extract the tensor components. Next we discuss the conservation laws these
sources must obey.
### III.2 Conservation Laws
As we have discussed in the previous section, the sources of MAG are the
canonical energy momentum tensor along with the hypermomentum current of
matter. Of course, there is also the metrical energy momentum tensor, but this
can be seen as a byproduct of the latter two hehl1995metric . These sources
are not quite independent and obey certain conservation laws, generalized
versions of the energy momentum conservation in GR. Indeed, working in the
exterior calculus language, the diffeomorphism invariance of the matter part
of the action gives, on-shell hehl1995metric ; gronwald1997metric
(translated here to our conventions),
$D\Sigma_{a}=\frac{1}{2}(e_{a}\rfloor
R^{b}_{\;\;c})\wedge\Delta_{b}^{\;\;c}-\frac{1}{2}(e_{a}\rfloor\mathcal{Q}_{bc})\sigma^{bc}+(e_{a}\rfloor\mathcal{T}^{b})\wedge\Sigma_{b}$
(24)
In addition, we now also have $GL(n,R)$ invariance which when applied to the
matter part implies (still on-shell)
$D\Delta_{a}^{\;\;b}=2(\vartheta^{b}\wedge\Sigma_{a}-\sigma^{b}_{\;\;a})$ (25)
The above two equations give the set of conservation laws which have to be
obeyed by the matter sources of MAG. Switching to a holonomic frame, they read
obukhov2013conservation ; damianos2020cosmological
$t^{\mu}_{\;\;\lambda}=T^{\mu}_{\;\;\lambda}-\frac{1}{2\sqrt{-g}}\hat{\nabla}_{\nu}(\sqrt{-g}\Delta_{\lambda}^{\;\;\mu\nu})$
(26)
$\frac{1}{\sqrt{-g}}\hat{\nabla}_{\mu}(\sqrt{-g}t^{\mu}_{\;\;\alpha})=-\frac{1}{2}\Delta^{\lambda\mu\nu}R_{\lambda\mu\nu\alpha}+\frac{1}{2}Q_{\alpha\mu\nu}T^{\mu\nu}+2S_{\alpha\mu\nu}t^{\mu\nu}$
(27)
where
$\hat{\nabla}_{\nu}:=2S_{\nu}-\nabla_{\nu}$ (28)
The above are the MAG conservation laws in the holonomic description. For the
most part we will be using this coordinate-based (holonomic) formalism, which
makes things more transparent, but we will give the construction equations of
our Perfect Hyperfluid Model in both languages. Again, the set (26)-(27), or
equivalently (24)-(25), has to be obeyed by all matter types of MAG and will be
crucial in developing our Perfect Hyperfluid Model.
## IV Perfect Hyperfluid: Foundation
We shall now define the Perfect Hyperfluid as a direct generalization of the
Perfect fluid of GR, where now the microstructure of matter (hypermomentum) is
also taken into account. Our definition of the Perfect Hyperfluid goes as
follows.
###### Definition 1
Let $t_{\mu\nu}$ and $T_{\mu\nu}$ represent the canonical and metrical energy
momentum tensors, respectively, and $\Delta_{\alpha\mu\nu}$ the
hypermomentum of the fluid. We define the Perfect Hyperfluid as exactly that
fluid whose associated energy tensors ($t,T,\Delta$) respect spatial
isotropy. In other words, following Weinberg’s definition
weinberg1972gravitation , a perfect hyperfluid is a fluid for which there
exists a velocity $\vec{v}$ such that an observer moving with this velocity
sees their surroundings as isotropic. (I am very grateful to Jose Beltran
Jimenez for bringing this definition to my attention.) Mathematically, we
demand a vanishing Lie derivative along the spatial slices for each (see also
tsamparlis1979cosmological ), i.e.
$\pounds_{\xi}t_{\mu\nu}=0\;\;,\;\;\pounds_{\xi}T_{\mu\nu}=0\;\;,\;\;\pounds_{\xi}\Delta_{\alpha\mu\nu}=0$
(29)
This implies that both the canonical and the metrical energy momentum tensors
have the perfect fluid form (here, as usual, we consider a normalized velocity
field $u_{\mu}u^{\mu}=-1$ and perform a $1+(n-1)$ spacetime split with the
projection tensor $h_{\mu\nu}=g_{\mu\nu}+u_{\mu}u_{\nu}$):
$t_{\mu\nu}=\rho_{c}u_{\mu}u_{\nu}+p_{c}h_{\mu\nu}$ (30) $T_{\mu\nu}=\rho
u_{\mu}u_{\nu}+ph_{\mu\nu}$ (31)
where $\rho_{c},p_{c}$ are the density and pressure associated with the
canonical part and $\rho,p$ the usual ones associated to $T_{\mu\nu}$. In
addition, demanding only spatial isotropy, the hypermomentum takes the
covariant form damianos2020cosmological
$\Delta_{\alpha\mu\nu}^{(n)}=\phi h_{\mu\alpha}u_{\nu}+\chi
h_{\nu\alpha}u_{\mu}+\psi u_{\alpha}h_{\mu\nu}+\omega
u_{\alpha}u_{\mu}u_{\nu}+\delta_{n,4}\epsilon_{\alpha\mu\nu\kappa}u^{\kappa}\zeta$
(32)
This is the most general covariant form of a type $(0,3)$ tensor respecting
isotropy. In the above, $\delta_{n,4}$ is the Kronecker delta. Note that
since we have not imposed homogeneity here, in contrast to
damianos2020cosmological , all the functions of the set
$V=\{\rho_{c},p_{c},...,\omega,\zeta\}$ will be generic spacetime functions,
i.e. $\psi=\psi(x^{\alpha})=\psi(t,\vec{x})$ etc. In a covariant fashion,
these read
$\omega=-u^{\alpha}u^{\mu}u^{\nu}\Delta_{\alpha\mu\nu}$ (33)
$\phi=-\frac{1}{(n-1)}h^{\alpha\mu}u^{\nu}\Delta_{\alpha\mu\nu}$ (34)
$\chi=-\frac{1}{(n-1)}h^{\alpha\nu}u^{\mu}\Delta_{\alpha\mu\nu}$ (35)
$\psi=-\frac{1}{(n-1)}h^{\mu\nu}u^{\alpha}\Delta_{\alpha\mu\nu}$ (36)
$\zeta=+\frac{1}{6}\epsilon^{\alpha\mu\nu\lambda}\Delta_{\alpha\mu\nu}u_{\lambda}\delta_{n,4}$
(37)
These are the $5$ material variables describing the hypermomentum part of the
fluid. These five fields are then rearranged and provide the spin, dilation
and shear parts according to
$\Delta_{[\alpha\mu]\nu}=(\psi-\chi)u_{[\alpha}h_{\mu]\nu}+\delta_{n,4}\epsilon_{\alpha\mu\nu\kappa}u^{\kappa}\zeta$
(38)
$\Delta_{\nu}:=\Delta_{\alpha\mu\nu}g^{\alpha\mu}=\Big{[}(n-1)\phi-\omega\Big{]}u_{\nu}$
(39)
$\breve{\Delta}_{\alpha\mu\nu}=\Delta_{(\alpha\mu)\nu}-\frac{1}{n}g_{\alpha\mu}\Delta_{\nu}=\frac{(\phi+\omega)}{n}\Big{[}h_{\alpha\mu}+(n-1)u_{\alpha}u_{\mu}\Big{]}u_{\nu}+(\psi+\chi)u_{(\mu}h_{\alpha)\nu}$
(40)
Let us stress again that all the fields appearing above are generic spacetime
functions.
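The extraction formulas (33)-(37) can be checked symbolically by inserting the ansatz (32) back into them. The following sketch (my own verification, not from the paper) assumes $n=4$, a Minkowski metric with signature $(-,+,+,+)$, a comoving observer $u_{\mu}=(-1,0,0,0)$, and the convention $\epsilon_{0123}=+1$; it verifies that the contractions recover $\phi$, $\omega$ and $\zeta$ (the checks for $\chi$ and $\psi$ are analogous).

```python
import sympy as sp
from sympy import LeviCivita, Rational

n = 4
g = sp.diag(-1, 1, 1, 1)            # Minkowski metric, signature (-,+,+,+)
gi = g.inv()
u_lo = sp.Matrix([-1, 0, 0, 0])     # u_mu, normalized so that u_mu u^mu = -1
u_up = gi * u_lo
h = g + u_lo * u_lo.T               # projector h_{mu nu} = g_{mu nu} + u_mu u_nu

phi, chi, psi, omega, zeta = sp.symbols('phi chi psi omega zeta')
R = range(n)

# Hypermomentum ansatz, Eq. (32), with the convention epsilon_{0123} = +1
def Delta(a, m, v):
    eps = sum(LeviCivita(a, m, v, k) * u_up[k] for k in R)
    return (phi*h[m, a]*u_lo[v] + chi*h[v, a]*u_lo[m] + psi*u_lo[a]*h[m, v]
            + omega*u_lo[a]*u_lo[m]*u_lo[v] + eps*zeta)

# Eq. (33): omega = -u^a u^m u^v Delta_{a m v}
omega_rec = -sum(u_up[a]*u_up[m]*u_up[v]*Delta(a, m, v)
                 for a in R for m in R for v in R)

# Eq. (34): phi = -1/(n-1) h^{a m} u^v Delta_{a m v}
h_up = gi * h * gi                  # projector with both indices raised
phi_rec = -Rational(1, n - 1) * sum(h_up[a, m]*u_up[v]*Delta(a, m, v)
                                    for a in R for m in R for v in R)

# Eq. (37): zeta = +1/6 eps^{a m v l} Delta_{a m v} u_l
def eps_up(a, m, v, l):             # raise all four indices (diagonal metric)
    return gi[a, a]*gi[m, m]*gi[v, v]*gi[l, l]*LeviCivita(a, m, v, l)

zeta_rec = Rational(1, 6) * sum(eps_up(a, m, v, l)*Delta(a, m, v)*u_lo[l]
                                for a in R for m in R for v in R for l in R)

print(sp.simplify(omega_rec - omega),   # 0
      sp.simplify(phi_rec - phi),       # 0
      sp.simplify(zeta_rec - zeta))     # 0
```

Each residual simplifies to zero because the cross terms of the ansatz drop out under the contractions: symmetric pieces vanish against the Levi-Civita tensor, and $h_{\mu\nu}u^{\nu}=0$ kills the mixed terms.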
### IV.1 Conservation Laws of the Perfect Hyperfluid
We are now in a position to derive the conservation laws that hold true for
our Perfect Hyperfluid Model. As we discussed earlier, in this case the
canonical energy momentum tensor has the perfect fluid form and it is
therefore symmetric as seen from (45). Then, for any symmetric rank-$2$ tensor
$C_{\mu\nu}$ we have the identity
$-\frac{1}{\sqrt{-g}}\hat{\nabla}_{\mu}(\sqrt{-g}C^{\mu}_{\;\;\nu})=\tilde{\nabla}_{\mu}C^{\mu}_{\;\;\nu}-\frac{1}{2}Q_{\nu\alpha\beta}C^{\alpha\beta}-2S_{\nu\alpha\beta}C^{\alpha\beta}$
(41)
which can be proved trivially by expanding the left-hand side. Applying this
to the canonical energy momentum tensor, the conservation law $(\ref{cc2})$
reads
$\tilde{\nabla}_{\mu}t^{\mu}_{\;\;\alpha}=\frac{1}{2}\Delta^{\lambda\mu\nu}R_{\lambda\mu\nu\alpha}+\frac{1}{2}Q_{\alpha\mu\nu}(t^{\mu\nu}-T^{\mu\nu})$
(42)
which is also supplemented by
$t^{\mu}_{\;\;\lambda}=T^{\mu}_{\;\;\lambda}-\frac{1}{2\sqrt{-g}}\hat{\nabla}_{\nu}(\sqrt{-g}\Delta_{\lambda}^{\;\;\mu\nu})$
(43)
From the first one above we see that the difference of the canonical and the
metrical tensors couples directly to spacetime non-metricity. Of course in the
case of the Perfect Cosmological Hyperfluid of damianos2020cosmological this
term drops out since in that case the latter two tensors coincide. Note also
that the fact that, in general, the metrical tensor does not coincide with the
canonical one implies that the hypermomentum is not conserved, as seen from
(49). In addition, as is obvious from the above relations, one may eliminate
the difference of the two energy tensors from (48) by using $(\ref{CL2})$.
Then we get the alternative expression
$\tilde{\nabla}_{\mu}t^{\mu}_{\;\;\alpha}=\frac{1}{2}\Delta^{\lambda\mu\nu}R_{\lambda\mu\nu\alpha}-\frac{1}{4\sqrt{-g}}Q_{\alpha\mu}^{\;\;\;\;\lambda}\hat{\nabla}_{\nu}(\sqrt{-g}\Delta_{\lambda}^{\;\;\;\mu\nu})$
(44)
The advantage of this last expression is that it involves only the canonical
and the hypermomentum tensors, and not the metrical one, which is a byproduct
of the latter two, as seen from (49). Let us collect all the above results and
postulate the Perfect Hyperfluid concept along with its complete mathematical
description.
###### Definition 2
The Perfect Hyperfluid: There exists a Perfect (ideal) Hyperfluid structure,
carrying intrinsic hypermomentum, that generalizes the Perfect Fluid concept.
The description of the Perfect Hyperfluid is given by the energy tensors
$t_{\mu\nu}=\rho_{c}u_{\mu}u_{\nu}+p_{c}h_{\mu\nu}$ (45) $T_{\mu\nu}=\rho
u_{\mu}u_{\nu}+ph_{\mu\nu}$ (46) $\Delta_{\alpha\mu\nu}^{(n)}=\phi
h_{\mu\alpha}u_{\nu}+\chi h_{\nu\alpha}u_{\mu}+\psi
u_{\alpha}h_{\mu\nu}+\omega
u_{\alpha}u_{\mu}u_{\nu}+\delta_{n,4}\epsilon_{\alpha\mu\nu\kappa}u^{\kappa}\zeta$
(47)
subject to the conservation laws
$\tilde{\nabla}_{\mu}t^{\mu}_{\;\;\alpha}=\frac{1}{2}\Delta^{\lambda\mu\nu}R_{\lambda\mu\nu\alpha}+\frac{1}{2}Q_{\alpha\mu\nu}(t^{\mu\nu}-T^{\mu\nu})$
(48)
$t^{\mu}_{\;\;\lambda}=T^{\mu}_{\;\;\lambda}-\frac{1}{2\sqrt{-g}}\hat{\nabla}_{\nu}(\sqrt{-g}\Delta_{\lambda}^{\;\;\mu\nu})$
(49)
providing a direct generalization of the Perfect Fluid continuum when the
intrinsic characteristics of matter (i.e. $\Delta_{\alpha\mu\nu}$) are also
taken into account.
Comment: As seen from the above, the complete description of the Perfect
Hyperfluid is given by the set of $9$ spacetime functions
$\{\rho,\rho_{c},p,p_{c},\phi,\chi,\psi,\omega,\zeta\}$ along with its
associated velocity field $u$.
### IV.2 Exterior Calculus representation of Perfect Hyperfluid
For completeness we will give the forms of energy tensors of the Perfect
Hyperfluid (MAG sources) in the language of exterior differential forms, which
is of great use in MAG. It can be easily seen that the hypermomentum
expression (47), in the language of differential forms, translates to the
hypermomentum ($n-1$) form
$\Delta_{a}^{\;\;b}=\delta_{a}^{b}\phi u+\chi(e^{b}\rfloor\star
u)\star\vartheta_{a}+\psi(e_{a}\rfloor\star
u)\star\vartheta^{b}+(e_{a}\rfloor\star u)(e^{b}\rfloor\star
u)\bar{\omega}u-3!\vartheta_{a}\wedge\vartheta^{b}\wedge Z\delta_{n,4}$ (50)
where $\phi,\chi,\psi$ and $\bar{\omega}=\omega+\phi+\chi+\psi$ are $0$-forms
and $Z=\zeta_{a}\vartheta^{a}=\zeta u_{a}\vartheta^{a}=\zeta\star u$ is a
$1$-form. The associated canonical $(n-1)$-form and metrical $n$-form currents
are also extracted rather trivially and read
$\Sigma_{a}=(\bar{\rho}+\bar{p})(e_{a}\rfloor\star
u)u+\bar{p}\star\vartheta_{a}$ (51)
$\sigma_{ab}=\mu\Big{(}(\rho+p)u_{a}u_{b}+pg_{ab}\Big{)}$ (52)
respectively. In the above we have denoted $\rho_{c}=\bar{\rho}$ and
$p_{c}=\bar{p}$ in order to avoid confusion with the anholonomic index $c$. In
addition, we have considered the flow $(n-1)$-form (see for instance
obukhov1996model ) $u$ with its dual giving us the velocity field
$\star u:=u_{a}\vartheta^{a}$ (53)
and the normalization
$u\wedge\star u=\mu$ (54)
With this we may re-express (50) in the more transparent form
$\Delta_{a}^{\;\;b}=\delta_{a}^{b}\phi u+\chi u^{b}(\star\vartheta_{a})+\psi
u_{a}(\star\vartheta^{b})+u_{a}u^{b}\bar{\omega}u-3!\zeta\vartheta_{a}\wedge\vartheta^{b}\wedge(\star
u)\delta_{n,4}$ (55)
The latter expression along with (51) and (52) represent the material sources
of the Perfect (Ideal) Hyperfluid in the language of differential forms and
are subject to the conservation laws (24) and $(\ref{MCL2})$.
## V Theories with
$\mathcal{L}_{G}=\mathcal{L}_{G}(R^{\lambda}_{\;\;\;\alpha\beta\gamma})$
In the special case of Theories whose Gravitational part is constructed only
from the Riemann tensor and its contractions, we have a very important
restriction on the Hyperfluid sector. Indeed, as can be trivially checked, in
this case the Riemann tensor is invariant under special projective
transformations (also known as $\lambda$-transformations) of the form
$\Gamma^{\lambda}_{\;\;\;\mu\nu}\longrightarrow\bar{\Gamma}^{\lambda}_{\;\;\;\mu\nu}=\Gamma^{\lambda}_{\;\;\;\mu\nu}+\delta^{\lambda}_{\mu}\partial_{\nu}\lambda$
(56)
$R^{\lambda}_{\;\;\mu\nu\alpha}\longrightarrow\bar{R}^{\lambda}_{\;\;\mu\nu\alpha}=R^{\lambda}_{\;\;\mu\nu\alpha}$
(57)
Of course, all contractions of the Riemann tensor will also respect this
symmetry. The above fact has a great impact on the hypermomentum sources.
Indeed, since this symmetry has to be respected by the matter part as well,
we get the constraint (see for instance iosifidis2019scale )
$\partial_{\nu}(\sqrt{-g}\Delta^{\nu})=0$ (58)
that is, the dilation current
$\Delta^{\nu}:=\Delta_{\lambda}^{\;\;\lambda\nu}$ must be conserved.
Equivalently, taking the contraction of $(\ref{CL2})$ the above constraint
translates to
$t=T$ (59)
namely the traces of the canonical and the metrical tensors must coincide.
Note that up to now we have made no assumption on the matter content of the
Theory. If we apply the above result to our Perfect Hyperfluid model, given
the forms (45) and (46) it follows that
$-\rho_{c}+(n-1)p_{c}=-\rho+(n-1)p$ (60)
In addition, assuming that both perfect fluid components (metrical and
canonical) are barotropic, that is $p_{c}=w_{c}\rho_{c}$ and $p=w\rho$, we
have
$\rho_{c}=\left(\frac{1-(n-1)w}{1-(n-1)w_{c}}\right)\rho$ (61)
From the above discussion it is now clear that if the barotropic components of
the canonical and the metrical are of the same kind, in the sense that
$w=w_{c}$, we immediately have that $p=p_{c}$, and from (61) it follows that
$\rho=\rho_{c}$ as well. Conversely, under the (natural) assumption that the
associated densities are equal (i.e. $\rho=\rho_{c}$), we also have that
$p=p_{c}$ from the above equation. Either way, the end result is that both the
pressures and the densities would be identical and consequently
$t_{\mu\nu}\equiv T_{\mu\nu}$. We therefore see how the Perfect Cosmological
Hyperfluid Model of damianos2020cosmological is embedded in our general
construction here. Given that the Gravitation part is built only from the
Riemann tensor and its contractions, the Perfect Hyperfluid of
damianos2020cosmological is the one for which the barotropic fluid components
are of the same kind $w=w_{c}$ or alternatively the canonical and metrical
densities coincide. Recall that in this case the conservation laws take the
form
$\widetilde{\nabla}_{\mu}T^{\mu}_{\;\;\nu}=\frac{1}{2}\Delta^{\alpha\beta\gamma}R_{\alpha\beta\gamma\nu}$
(62)
$\hat{\nabla}_{\nu}\Big{(}\sqrt{-g}\Delta_{\lambda}^{\;\;\;\mu\nu}\Big{)}=0\;\;,\;\;\;t_{\mu\nu}=T_{\mu\nu}$
(63)
For this reason we shall call this Model the Hypermomentum Conserving Perfect
Hyperfluid. The above considerations were rather general with no assumption
about the underlying spacetime. If we consider a Cosmological setting the
fluid obeying the above two conservation laws (that is, the fluid in
damianos2020cosmological ) will be called the Hypermomentum Preserving Perfect
Cosmological Hyperfluid.
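Relation (61) and the statements that follow can be confirmed with a couple of lines of computer algebra: solving the trace condition (60) under the barotropic equations of state $p_{c}=w_{c}\rho_{c}$, $p=w\rho$ reproduces (61), and setting $w=w_{c}$ then forces $\rho_{c}=\rho$. A minimal sketch (symbol names are illustrative):

```python
import sympy as sp

n, w, wc, rho = sp.symbols('n w w_c rho', positive=True)
rho_c = sp.symbols('rho_c', positive=True)

# Trace condition (60) with barotropic equations of state p = w*rho, p_c = w_c*rho_c
trace_cond = sp.Eq(-rho_c + (n - 1)*wc*rho_c, -rho + (n - 1)*w*rho)

# Solving for rho_c reproduces Eq. (61), up to rearrangement
sol = sp.solve(trace_cond, rho_c)[0]
print(sol)

# Equal barotropic indices w = w_c force rho_c = rho (hence p_c = p and t = T)
print(sp.simplify(sol.subs(wc, w) - rho))   # 0
```

This makes explicit how the Hypermomentum Conserving case is embedded in the general construction: the single condition $w=w_{c}$ collapses the canonical and metrical perfect-fluid sectors onto each other.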
## VI Cosmological Application of the Perfect Hyperfluid
Let us now see an immediate application of our fluid model. If we also impose
homogeneity and consider an FLRW Universe, all the variables of the hyperfluid
depend only on time, and in this case our conservation laws (48) and (49) boil
down to (as usual, the dot denotes a time derivative)
$\Big{[}\dot{\rho}+(n-1)H(\rho+p)\Big{]}u_{\nu}+(\rho+p)u^{\mu}\widetilde{\nabla}_{\mu}u_{\nu}=\frac{1}{2}u^{\mu}(\phi\widehat{R}_{\mu\nu}+\chi
R_{\mu\nu}+\psi\breve{R}_{\mu\nu})+\frac{1}{2}u_{\nu}\Big{[}(\rho_{c}-\rho)C+(p_{c}-p)A\Big{]}$
(64) $\displaystyle-\delta^{\mu}_{\lambda}\frac{\partial_{\nu}(\sqrt{-g}\phi
u^{\nu})}{\sqrt{-g}}-u^{\mu}u_{\lambda}\frac{\partial_{\nu}\Big{(}\sqrt{-g}(\phi+\chi+\psi+\omega)u^{\nu}\Big{)}}{\sqrt{-g}}$
$\displaystyle+\left[\Big{(}2S_{\lambda}+\frac{Q_{\lambda}}{2}\Big{)}u^{\mu}-\nabla_{\lambda}u^{\mu}\right]\chi+\left[\Big{(}2S^{\mu}+\frac{Q^{\mu}}{2}-\tilde{Q}^{\mu}\Big{)}u_{\lambda}-g^{\mu\nu}\nabla_{\nu}u_{\lambda}\right]\psi$
$\displaystyle+u^{\mu}u_{\lambda}(\dot{\chi}+\dot{\psi})-(\phi+\chi+\psi+\omega)(\dot{u}^{\mu}u_{\lambda}+u^{\mu}\dot{u}_{\lambda})=2(\rho-\rho_{c})u_{\lambda}u^{\mu}+2(p-p_{c})h_{\lambda}^{\;\;\mu}$
(65)
The above equations contain the full dynamics of the generalized Perfect
Cosmological Hyperfluid. Let us highlight again that the degrees of freedom in
this case number $2+2+5=9$: two come from $\rho$, $p$, another two from
$\rho_{c}$, $p_{c}$, and the rest are the five hypermomentum variables.
However, due to the high symmetry of FLRW spacetime, the above conservation
laws only provide $1+2$ evolution equations. This is no surprise, since a
similar situation appears for the Perfect Fluid of GR, where the continuity
equation only gives the evolution of $\rho$ and $p$ is usually assumed to
satisfy an equation of state $p=w\rho$ so that the system becomes complete. We
expect the same behaviour here, so we will need to provide two barotropic
indices for the metrical and canonical parts according to
$p=w\rho\;\;,\;\;\;p_{c}=w_{c}\rho_{c}$ (66)
In addition, there should be one equation relating the above $4$ functions
(similar to (61)) and we should also have three equations of state among the
hypermomentum variables, in order to have a completely determined dynamics. In
general, the three equations of state among the hypermomentum variables would
have the form
$F_{A}(\phi,\chi,\psi,\omega,\zeta)=0\;,\;\;\;A=1,2,3$ (67)
To further restrict the above possibility it would be most natural to
associate equations of state among the different parts of hypermomentum (spin,
dilation and shear) as was implemented in iosifidis2020non . The exact values
of these equations of state would characterize the nature of the hyperfluid
(see also damianos2020cosmological where such equations of state are
derived). We should also note that it would be possible to have equations of
state that mix up the perfect fluid with the hypermomentum parts. Then, one
needs $6$ independent equations of state, whose generic form would read
$F_{I}(\rho,p,\rho_{c},p_{c},\phi,\chi,\psi,\omega,\zeta)=0\;\;,\;\;\;I=1,2,...,6$
(68)
As a result, the Perfect Cosmological Hyperfluid lies in the intersection of
the aforementioned $6$ hypersurfaces. However, we find the latter mixing
possibility very unlikely, though it may be true for some types of generalized
hyperfluids. In any case, the three conservation laws for any type of
Cosmological Hyperfluid can be extracted from $(\ref{cl1})$ and
$(\ref{conl2})$ by first contracting the former with $u^{\mu}$ and then taking
the $ij$-components and the $00$-component of the latter, to arrive at
$\dot{\rho}_{c}+(n-1)H(\rho_{c}+p_{c})=-\frac{1}{2}u^{\mu}u^{\nu}(\chi
R_{\mu\nu}+\psi\breve{R}_{\mu\nu})+\frac{1}{2}(\rho_{c}-\rho)C+\frac{1}{2}(p_{c}-p)A$
(69) $\dot{\phi}+(n-1)H\phi+H(\chi+\psi)+\psi X-\chi Y=2(p_{c}-p)$ (70)
$\dot{\omega}+(n-1)H(\chi+\psi+\omega)+(n-1)(\psi X-\chi Y)=2(\rho_{c}-\rho)$
(71)
Additionally, one could take the trace of (65) to arrive at
$(n-1)\dot{\phi}-\dot{\omega}+(n-1)H\Big{[}(n-1)\phi-\omega\Big{]}=2\Big{[}(\rho-\rho_{c})-(n-1)(p-p_{c})\Big{]}$
(72)
However, as expected, this gives no further information, since it is easily
seen that the latter is equal to $(n-1)(\ref{hyper1})-(\ref{hyper2})$. (As is
also apparent from $(\ref{dil})$, the dilation current is conserved only when
$(\rho-\rho_{c})=(n-1)(p-p_{c})$; this condition is always true for
frame-rescaling-invariant Theories iosifidis2019scale .) Therefore, as we
have already mentioned, the full dynamics of the Perfect Hyperfluid is
contained in the three equations $(\ref{cont})$, $(\ref{hyper1})$ and
$(\ref{hyper2})$. We should emphasize again that the latter equations are
fairly general and hold true regardless of the equations of state to be
imposed on the hyperfluid variables. For any Metric-Affine Cosmology, the
evolution equations for the sources are the aforementioned three.
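The claimed redundancy of the trace equation (72) is easy to confirm symbolically: forming $(n-1)\times$(70)$-$(71) and subtracting (72) leaves no residue. A minimal sketch (not from the paper), treating the time derivatives and the background functions $X$, $Y$, $H$ as independent symbols:

```python
import sympy as sp

(n, H, X, Y, phi, chi, psi, omega, p, pc, rho, rhoc,
 dphi, domega) = sp.symbols(
    'n H X Y phi chi psi omega p p_c rho rho_c phidot omegadot')

# Eq. (70) and Eq. (71), each written as LHS - RHS = 0
eq70 = dphi + (n-1)*H*phi + H*(chi+psi) + psi*X - chi*Y - 2*(pc - p)
eq71 = domega + (n-1)*H*(chi+psi+omega) + (n-1)*(psi*X - chi*Y) - 2*(rhoc - rho)

# Eq. (72), written the same way
eq72 = ((n-1)*dphi - domega + (n-1)*H*((n-1)*phi - omega)
        - 2*((rho - rhoc) - (n-1)*(p - pc)))

# (n-1)*(70) - (71) coincides with (72): the trace gives no new information
print(sp.simplify((n-1)*eq70 - eq71 - eq72))   # 0
```

The $(\psi X-\chi Y)$ and $H(\chi+\psi)$ terms cancel identically in the combination, which is exactly why only the three equations $(\ref{cont})$, $(\ref{hyper1})$, $(\ref{hyper2})$ carry independent dynamics.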
Let us now organize the above ideas according to the following
classifications. We start with the most general case and subsequently
specialize.
### VI.1 Cosmological Hyperfluids Classification
Below we classify some characteristic cases of the Perfect Cosmological
Hyperfluids.
###### Definition 3
A generalized Perfect Cosmological Hyperfluid consists of a set of energy
tensors given by the expressions (45), (46) and $(\ref{Dform})$ subject to the
conservation laws
$\tilde{\nabla}_{\mu}t^{\mu}_{\;\;\alpha}=\frac{1}{2}\Delta^{\lambda\mu\nu}R_{\lambda\mu\nu\alpha}+\frac{1}{2}Q_{\alpha\mu\nu}(t^{\mu\nu}-T^{\mu\nu})$
(73)
$t^{\mu}_{\;\;\lambda}=T^{\mu}_{\;\;\lambda}-\frac{1}{2\sqrt{-g}}\hat{\nabla}_{\nu}(\sqrt{-g}\Delta_{\lambda}^{\;\;\mu\nu})$
(74)
In addition, its $9$ material sources are related to one another by the $6$
generalized equations of state
$F_{I}(\phi,\chi,\psi,\omega,\zeta,\rho,p,\rho_{c},p_{c})=0\;,\;\;\;I=1,2,...,6$
(75)
###### Definition 4
A Barotropic Perfect Cosmological Hyperfluid represents a special case of the
above for which the equations of state (75) have the barotropic form
$\sum_{i=1}^{9}a_{i}^{I}X_{i}=0\;,\;\;\;I=1,2,...,6$ (76)
where not all $a_{i}^{I}$’s are zero and we have also collectively denoted
$X_{i}=\{\rho_{c},p_{c},\rho,p,\phi,\chi,\psi,\omega,\zeta\}$.
###### Definition 5
A Decoupled Barotropic Perfect Cosmological Hyperfluid is one for which the
barotropic equations among its constituents have the following forms
$p=w\rho\;,\;\;p_{c}=w_{c}\rho_{c}\;,\;\;\rho_{c}=\bar{w}\rho$ (77)
$\sum_{i=1}^{5}a_{i}^{I}Y_{i}=0\;,\;\;\;I=1,2,...,6$ (78)
where not all $a_{i}^{I}$’s are zero and we have collectively denoted
$Y_{i}=\{\phi,\chi,\psi,\omega,\zeta\}$.
Comment: The Hypermomentum Preserving Perfect Cosmological Hyperfluid (see
previous section) represents a special case of the latter with $\rho_{c}=\rho$
and $p_{c}=p$.
## VII Discussion
We have constructed a direct generalization of the familiar Perfect Fluid
notion, now encompassing the microscopic characteristics of matter as well. We
call this extended fluid notion the Perfect (Ideal) Hyperfluid. The latter is
described by the three tensors of the canonical, metrical and hypermomentum
currents, all of which have an isotropic form (see eqn’s
$(\ref{canonical})$-$(\ref{Dform})$) and are subject to the MAG conservation
laws. As soon as the intrinsic properties of the fluid are neglected
($\Delta_{\alpha\mu\nu}=0$) we arrive at the classical Perfect Fluid model as
a limiting case, as expected. The description of the generalized Perfect
Hyperfluid is given by $9$ spacetime functions along with the fluids’ velocity
flow $u$. The specific form of these $9$ functions will characterize the fluid
under consideration.
It should be noted that even though our generalized Perfect Hyperfluid
construction fits most naturally in a Metric-Affine Gravity approach, this is
by no means the only place it can find applications. Indeed, our general
construction here can just as well be applied to all Theories that represent
special cases of MAG. For instance, Einstein-Cartan trautman2006einstein or
more generalized torsionful Theories (like Poincare Gravity hayashi1980gravity
), non-metric torsionless Theories, and also to all teleparallel Theories such
as metric aldrovandi2012teleparallel , symmetric nester1998symmetric ;
jimenez2018teleparallel and generalized teleparallelism jimenez2020general .
Of course, the list could go on. In general, we expect the Perfect Hyperfluid
to describe matter configurations in all Gravity Theories exhibiting a
non-trivial connection.
Furthermore, the Perfect Hyperfluid could find interesting applications in the
Theory of materials with microstructure and elasticity
mindlin1963microstructure . This is plausible since the very concept of
hypermomentum has a close analogy with a notion appearing in materials with
microstructure, which is known as hyper-stress gronwald1997stress . Lastly,
probably the most obvious application of our hyperfluid structure would be to
use it in order to study the full quadratic MAG Theory in a Cosmological
setting and in the presence of the latter fluid configuration. This is
currently under investigation.
## VIII Acknowledgments
I would like to thank Jose Beltran Jimenez and Tomi Sebastian Koivisto for
some useful discussions. This research is co-financed by Greece and the
European Union (European Social Fund- ESF) through the Operational Programme
"Human Resources Development, Education and Lifelong Learning" in the context
of the project "Reinforcement of Postdoctoral Researchers - 2nd Cycle"
(MIS-5033021), implemented by the State Scholarships Foundation (IKY).
## References
* [1] Dirk Puetzfeld and Yuri N Obukhov. Probing non-riemannian spacetime geometry. Physics Letters A, 372(45):6711–6716, 2008.
* [2] Friedrich W Hehl, J Dermott McCrea, Eckehard W Mielke, and Yuval Ne’eman. Metric-affine gauge theory of gravity: field equations, noether identities, world spinors, and breaking of dilation invariance. Physics Reports, 258(1-2):1–171, 1995.
* [3] Damianos Iosifidis. Metric-affine gravity and cosmology/aspects of torsion and non-metricity in gravity theories. arXiv preprint arXiv:1902.09643, 2019.
* [4] Yuri N Obukhov and Romualdo Tresguerres. Hyperfluid—a model of classical matter with hypermomentum. Physics Letters A, 184(1):17–22, 1993.
* [5] Yuri N Obukhov. On a model of an unconstrained hyperfluid. Physics Letters A, 210(3):163–167, 1996.
* [6] Olga V Babourova and Boris N Frolov. Perfect hypermomentum fluid: Variational theory and equations of motion. International Journal of Modern Physics A, 13(31):5391–5407, 1998.
* [7] Iosifidis Damianos. Cosmological hyperfluids, torsion and non-metricity. The European Physical Journal. C, Particles and Fields., 80(11), 2020.
* [8] Luther Pfahler Eisenhart. Non-riemannian geometry. Courier Corporation, 2012.
* [9] JA Schouten. Ricci-calculus. an introduction to tensor analysis and its geometrical applications, 1954.
* [10] Friedrich W Hehl, G David Kerlick, and Paul von der Heyde. On hypermomentum in general relativity i. the notion of hypermomentum. Zeitschrift fuer Naturforschung A, 31(2):111–114, 1976.
* [11] FW Hehl, EA Lord, and Y Ne’Eman. Hypermomentum in hadron dynamics and in gravitation. Physical Review D, 17(2):428, 1978.
* [12] Friedrich W Hehl and Yuri N Obukhov. Is a hadronic shear current one of the sources in metric-affine gravity? arXiv preprint gr-qc/9712089, 1997.
* [13] Damianos Iosifidis. Non-riemannian cosmology: The role of shear hypermomentum. arXiv preprint arXiv:2010.00875, 2020.
* [14] Frank Gronwald. Metric-affine gauge theory of gravity: I. fundamental structure and field equations. International Journal of Modern Physics D, 6(03):263–303, 1997.
* [15] Yuri N Obukhov and Dirk Puetzfeld. Conservation laws in gravitational theories with general nonminimal coupling. Physical Review D, 87(8):081502, 2013.
* [16] Steven Weinberg. Gravitation and cosmology: principles and applications of the general theory of relativity. 1972.
* [17] Michael Tsamparlis. Cosmological principle and torsion. Physics Letters A, 75(1-2):27–28, 1979.
* [18] Damianos Iosifidis and Tomi Koivisto. Scale transformations in metric-affine geometry. Universe, 5(3):82, 2019.
* [19] Andrzej Trautman. Einstein-cartan theory. arXiv preprint gr-qc/0606062, 2006.
* [20] Kenji Hayashi and Takeshi Shirafuji. Gravity from poincaré gauge theory of the fundamental particles. i: General formulation. Progress of Theoretical Physics, 64(3):866–882, 1980.
* [21] Ruben Aldrovandi and Jose G Pereira. Teleparallel gravity: an introduction, volume 173. Springer Science & Business Media, 2012.
* [22] James M Nester and Hwei-Jang Yo. Symmetric teleparallel general relativity. arXiv preprint gr-qc/9809049, 1998.
* [23] Jose Beltrán Jiménez, Lavinia Heisenberg, and Tomi S Koivisto. Teleparallel palatini theories. Journal of Cosmology and Astroparticle Physics, 2018(08):039, 2018.
* [24] Jose Beltrán Jiménez, Lavinia Heisenberg, Damianos Iosifidis, Alejandro Jiménez-Cano, and Tomi S Koivisto. General teleparallel quadratic gravity. Physics Letters B, page 135422, 2020.
* [25] Raymond David Mindlin. Microstructure in linear elasticity. Technical report, Columbia Univ New York Dept of Civil Engineering and Engineering Mechanics, 1963.
* [26] Frank Gronwald and Friedrich W Hehl. Stress and hyperstress as fundamental concepts in continuum mechanics and in relativistic field theory. arXiv preprint gr-qc/9701054, 1997.
# Cross-Layer Network Codes for Content Delivery in Cache-Enabled D2D
Networks
Mohammed S. Al-Abiad, Student Member, IEEE, Md. Zoheb Hassan, Student Member,
IEEE, and Md. Jahangir Hossain, Senior Member, IEEE Mohammed S. Al-Abiad, Md.
Zoheb Hassan, and Md. Jahangir Hossain are with the School of Engineering,
University of British Columbia, Kelowna, BC V1V 1V7, Canada (e-mail:
<EMAIL_ADDRESS><EMAIL_ADDRESS>jahangir.hossain@ubc.ca).
###### Abstract
In this paper, we consider the use of cross-layer network coding (CLNC),
caching, and device-to-device (D2D) communications to jointly optimize the
delivery of a set of popular contents to a set of user devices (UDs). In the
considered D2D network, a group of nearby UDs cooperate with each other and
use NC to combine their cached files, so that the completion time required for
delivering all requested contents to all UDs is minimized. Unlike previous
work that considers only one transmitting UD at a time, our work allows
multiple UDs to transmit simultaneously, provided that the interference among
the active links is small. Such a configuration brings a new trade-off among
assigning UDs to transmitting UDs, selecting the coding decisions, and
choosing the transmission rates/powers. Therefore, we consider the completion time
minimization problem that involves scheduling multiple transmitting UDs,
determining their transmission rates/powers and file combinations. The problem
is shown to be intractable because it involves all future coding decisions. To
tackle the problem at each transmission slot, we first design a graph called
herein the D2D Rate-Aware IDNC graph where its vertices have weights that
judiciously balance between the rates/powers of the transmitting UDs and the
number of their scheduled UDs. Then, we propose an innovative and efficient
CLNC solution that iteratively selects a set of transmitting UDs only if the
interference caused by the transmissions of the newly selected UDs does not
significantly impact the overall completion time. Simulation results show that
the proposed solution offers significant completion time reduction compared
with the existing algorithms.
###### Index Terms:
Cross layer network coding, content delivery, device-to-device communications,
power optimization, real-time applications.
## I Introduction
### I-A Overview
The exploding amount of mobile traffic, e.g., streaming applications, YouTube
videos, and video-on-demand, consumes large bandwidth and high transmission
energy in resource-limited cellular networks. Moreover, if the cloud
base-stations (CBSs) are fully loaded, it is not possible for the CBSs to
schedule all user devices (UDs). To circumvent these challenges, D2D
communications have widely been considered as a promising technology [1, 2,
3]. The performance of D2D communications can be further improved by pushing
some popular contents to the UDs near the CBSs. This integrated system is
referred to as a cache-enabled D2D system, which offers remarkable benefits
for alleviating the traffic congestion of the cellular network and reducing
both the CBS involvement and the end-to-end latency. In this work, we consider
a cache-enabled D2D system, where multiple UDs cache some popular contents and
cooperate among themselves to deliver their cached contents that are requested
by other UDs. As such, all the requested contents are delivered to all UDs
within the lowest possible network delay.
Network coding (NC) has been shown to be promising for improving throughput
and minimizing decoding delay and completion time for numerous applications in
wireless networks [2, 3, 4, 5]. Specifically, random linear NC (RLNC) can
achieve the optimal throughput of wireless broadcast networks [5]. However,
this throughput achievement comes at the expense of complex encoding (i.e.,
mixing contents using coefficients from a large Galois field), high decoding
delay, and prohibitive computational complexity. Thus, RLNC is suitable only
for delay-tolerant contents and for UDs with high capabilities and large
buffer sizes. The report by CISCO [6] shows that a significant portion of
network traffic consists of popular contents (popular videos and photos) that
are frequently requested by UDs within a short time. Therefore, it is crucial
to deliver these delay-sensitive contents
with minimum possible delay. For this purpose, instantly decodable NC (IDNC)
is adopted [7]. IDNC performs a simple XOR encoding operation at the
transmitter and a simple XOR decoding operation at the receiver, thus allowing
instant use of the received contents. Accordingly, it is suitable for
implementation in small and
low cost UDs [8]. Therefore, D2D communication and IDNC technique can be
exploited to deliver popular contents to UDs with the lowest possible delay
while offloading the CBSs. For instance, consider that a content consists of
set of files $f_{1},f_{2},$ and $f_{3}$ is wanted by set of UDs $u_{1},u_{2}$,
and $u_{3}$. Suppose that the CBS transmitted the wanted contents to the UDs
and due to channel impairments UD $u_{i}$ did not obtain file $f_{i}$ for
$1\leq i\leq 3$. These missed files can be traditionally re-transmitted from
the CBS to each UD until all UDs obtain them correctly. As a result, the CBS
requires at least $3$ uncoded transmissions for delivering these files, which
degrade system performance [9]. However, UDs can be either file cachers that
can deliver their cached files to other UDs or file requesters that can
receive the wanted files from other UDs. In our considered example, UD $u_{1}$
holds files $f_{2}$ and $f_{3}$, and accordingly, it can transmit the binary
XOR combination $f_{2}\oplus f_{3}$ to UDs $u_{2}$ and $u_{3}$. Then, UD
$u_{2}$ holds $f_{1}$ and can provide it to UD $u_{1}$. As a result, $2$
transmissions are required for delivering all files to all UDs. Therefore, the
cooperation among UDs can be utilized with IDNC to combine files and transmit
them to interested UDs via D2D links. As such, the requested files can be
delivered to the requesting UDs quickly while offloading the CBS’s resources.
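The two-transmission example above can be sketched in a few lines: files are byte strings, the coded packet is a bitwise XOR, and each UD decodes instantly by XOR-ing out the file it already caches (file contents and names here are illustrative, not from the paper).

```python
# Cache state after the CBS broadcast: UD u_i is missing file f_i
files = {'f1': b'\x01\xaa', 'f2': b'\x02\xbb', 'f3': b'\x03\xcc'}
has = {'u1': {'f2', 'f3'}, 'u2': {'f1', 'f3'}, 'u3': {'f1', 'f2'}}

def xor(a, b):
    """Bitwise XOR of two equal-length byte strings (the IDNC coding operation)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Transmission 1: u1 sends f2 XOR f3; u2 and u3 each decode instantly
coded = xor(files['f2'], files['f3'])
f3_at_u2 = xor(coded, files['f2'])   # u2 XORs out its cached f2 -> recovers f3
f2_at_u3 = xor(coded, files['f3'])   # u3 XORs out its cached f3 -> recovers f2

# Transmission 2: u2 sends f1 uncoded; u1 recovers its missing file
f1_at_u1 = files['f1']

assert f3_at_u2 == files['f3'] and f2_at_u3 == files['f2']
print('all UDs served in 2 transmissions instead of 3 uncoded ones')
```

The XOR structure is what makes the combination "instantly decodable": every targeted UD can decode using only its own cache, with no buffering of coded packets.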
### I-B Related Works and Motivation
The content delivery problem, known as the completion time minimization
problem, in IDNC-enabled networks has been considered based on layer
functionalities as follows. From a purely network-layer perspective, an IDNC
schedule is adopted to solve the problem for real-time applications in terms
of minimizing the number of transmissions [8, 12, 13, 14, 15]. In particular,
these related works modeled the status of physical channels by file erasure
probabilities and integrated such erasures in the coding decisions, e.g., see
[12], [15]. This improves the system's performance from a network-layer
perspective by scheduling many UDs in the same resource block, but it degrades
the performance from a physical-layer perspective, since the transmission rate
is set to the minimum rate of all the scheduled UDs. This results in a
prolonged transmission duration and thus consumes the time resources of the
network. Unlike network-layer IDNC, which depends solely on file combinations
for aiding the coding
decisions, rate-aware IDNC (RA-IDNC) also depends on the channel capacities of
different UDs. This allows new degrees of freedom, such as choosing the
transmitting UDs, their transmission rates, and the IDNC file combinations, to
optimize the content delivery problem. The authors of [16, 17, 18, 19, 20, 21,
22, 23, 24] used RA-IDNC in centralized and decentralized networks for
optimizing different system parameters. For example, the authors of [19] used
RA-IDNC scheme in cloud radio access networks (C-RANs) for completion time
minimization. The authors of [20, 22] developed cross-layer IDNC to optimize
throughput and Quality-of-Service (QoS) of UDs in centralized C-RANs and Fog-
RANs, respectively.
For D2D systems, the authors of [24] considered a vanilla-version of the
completion time minimization problem. Indeed, the problem was considered by
simply selecting the transmitting UD and its NC combination. However, the main
drawback of the work in [24] is that only one UD is allowed to transmit coded
file in each transmission slot. Thus, they ignored the interference caused by
different transmitting UDs to the scheduled UDs. Actually, in D2D networks,
UDs are spatially distributed in a region which creates an opportunity to
judiciously select multiple transmitting UDs that schedule a significant set
of other UDs. Such configuration brings a new trade-off between scheduling UDs
to transmitting UDs and choosing the coded files and the transmission
rate/power. However, solving the completion time minimization problem while
jointly considering the previously cached files at users, their transmission
rates/powers, NC, and D2D communications has not explored yet. Furthermore,
developing a joint cross-layer IDNC for the completion time minimization in
D2D networks is new to the area of NC. Therefore, our setting in this work is
much more realistic than the one used in [24] as it enables for both selection
of multiple transmitting UDs and optimization of the employed transmission
rates using power control on each transmitting UD.
The completion time minimization problem is motivated by real-time
applications, e.g., video streaming. In these applications, UDs need to obtain
a set of popular files from other transmitting UDs with the minimum possible
completion time, given the required minimum rates for QoS. Unlike pre-loads
that can be done at much lower rates or at off-peak times, our work delivers
popular contents to UDs with the minimum possible completion time. Consider a
popular video, represented as a frame of files, that is requested by a set of
UDs located in a playground. Many UDs in the playground are interested in
receiving this frame. At any given time, consider that UDs have already cached
some files and requested some other files from that frame. In order to deliver
the requested files in that frame without any interruption, UDs should receive
their requested files with a minimum possible delay. For such a case, users
can re-XOR the transmitted files from transmitting UDs to progressively and
immediately use the decoded files at the application layer. Such progressive
file decoding at the UDs meets the delay requirements and streaming quality.
### I-C Contributions
Unlike previously-discussed existing works that considered the optimization
factors (e.g., NC, users’ cached and requested files, UD scheduling, QoS
guarantee requirements, and power optimization) and their corresponding
problems separately, our work develops a framework that jointly considers all
the aforementioned factors. To this end, we develop a novel cross-layer
network coding (CLNC) optimization framework taking NC and rate/power
optimization into account. The main contributions of our work are summarized
as follows.
* •
The completion time minimization problem is shown to be computationally
intractable due to the interdependence among variables such as the UDs’ cached
and requested files, power optimization, channel qualities, and coding
decisions. Using the lower bound on the completion time used in the
literature, we tackle the problem and solve it online at each transmission
slot.
* •
We design a D2D-RA-IDNC graph to efficiently transform the completion time
minimization problem to a maximal weight independent set (MWIS) problem. The
designed D2D-RA-IDNC graph represents all the feasible rates and NC decisions
for all potential transmitting UDs. The problem is then reformulated as an
MWIS problem that can be efficiently solved using a low-complexity graph-
theoretical solution. The designed weights of the vertices in this graph
balance the transmission rate against the number of scheduled UDs in each
transmission.
* •
We develop a CLNC solution that efficiently iterates between finding the MWIS
in the designed D2D-RA-IDNC graph and optimizing the power of the transmitting
UDs using a function evaluation (FE) method. In each iteration, a new
transmitting UD is selected only if the resultant interference does not
significantly degrade the completion time performance. The complexity of our
developed CLNC solution is analyzed.
* •
We compare our proposed scheme with existing baseline schemes. Simulation
results demonstrate that our proposed CLNC solution significantly minimizes
the completion time compared with existing algorithms.
The rest of this paper is organized as follows. Section II overviews the
system model. The completion time approximation and problem formulation are
presented in Section III. In Section IV, we present the graph construction
and problem transformation, and we propose the cross-layer network coding
solution in Section V. Finally, we present selected simulation results in
Section VI and conclude the work in Section VII.
## II System Model
### II-A System Overview
We consider a cache-enabled D2D system with one cloud base-station (CBS) and
$N$ user-devices (UDs), denoted by the set
$\mathcal{N}=\\{u_{1},u_{2},\cdots,u_{N}\\}$. We adopt a fully connected D2D
model in which D2D links are implemented with short-range transmission
technologies, such as Bluetooth and WiFi. Therefore, we assume that each UD is
connected to all other UDs. Each UD is assumed to be equipped with a single
antenna and to use a half-duplex channel. Accordingly, each UD can either
transmit or receive at a given transmission slot. Unlike the work in [24],
which ideally assumed an interference-free setup, we consider the more
realistic setting in which UDs use the same frequency band and can cooperate
by utilizing D2D links and transmitting simultaneously. With such cooperation
among UDs for content delivery, the CBS does not need to transmit the
requested contents to UDs. Therefore, the CBS is responsible for selecting the
set of transmitting UDs and their NC combinations and power allocations that
deliver the requested contents to the requesting UDs. Accordingly, the whole
coding decision process in this work is executed at the CBS and depends on
selecting the transmission rate and transmit power allocation of each
transmitting UD.
Let $\mathcal{F}$ denote a frame of $F$ files,
$\mathcal{F}=\\{f_{1},f_{2},\cdots,f_{F}\\}$, each of size $B$ bits. This data
frame represents popular content, e.g., a YouTube video, and constitutes the
set of the most frequently requested files by the UDs over a given time period. We
assume that UDs proactively cached some files from $\mathcal{F}$ and stored
them in their local caches, i.e., $\mathcal{C}_{u_{k}}$ represents the set of
the files locally cached at UD $u_{k}$. We assume that UD $u_{k}$ requests a
set of files from the frame $\mathcal{F}$, denoted by the demand set
$\mathcal{W}_{u_{k}}=\mathcal{F}\backslash\mathcal{C}_{u_{k}}$. Following the
caching model in [24], each file from $\mathcal{F}$ is cached by at least one
UD in $\mathcal{N}$, which implies that
$\cup_{u_{k}\in\mathcal{N}}\mathcal{C}_{u_{k}}=\mathcal{F}$. The set of UDs
that have non-empty demand sets at the $t$-th transmission slot is denoted
by $\mathcal{N}_{w,t}$, i.e.,
$\mathcal{N}_{w,t}=\\{u_{k}\in\mathcal{N}|\mathcal{W}_{u_{k},t}\neq\varnothing\\}$.
Without loss of generality, we assume that all UDs have non-empty demand sets;
otherwise, they can simply be removed from the set $\mathcal{N}$ without
affecting the system performance. When a UD receives its requested files, it
can act as a transmitting UD that provides its received files to the
interested UDs. The goal is to deliver the requested files to the UDs within
the lowest possible completion time by leveraging NC and D2D links.
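The caching model above can be sketched in a few lines. This is an illustrative sketch, not code from the paper: the UD and file names are hypothetical, and the function simply derives the demand sets $\mathcal{W}_{u_{k}}=\mathcal{F}\backslash\mathcal{C}_{u_{k}}$ while enforcing the assumption that every file is cached by at least one UD.

```python
# Hypothetical sketch of the caching model of Sec. II-A: demand sets are the
# complement of the local caches, and every file must be cached somewhere.
def demand_sets(files, caches):
    """caches maps each UD to its set of cached files; returns UD -> demand set."""
    assert set().union(*caches.values()) == set(files), \
        "caching model requires every file to be cached by at least one UD"
    return {ud: set(files) - cached for ud, cached in caches.items()}

files = {"f1", "f2", "f3", "f4"}
caches = {"u1": {"f1", "f3", "f4"}, "u2": {"f1", "f4"}, "u3": {"f2"}}
wants = demand_sets(files, caches)
# The set N_w of UDs with non-empty demand sets:
n_w = {ud for ud, w in wants.items() if w}
```

Here all three UDs still want at least one file, so all of them belong to $\mathcal{N}_{w}$.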
Let $\gamma_{u_{k},u_{i}}$ denote the channel gain between UD $u_{k}$ and UD
$u_{i}$ and $Q_{\text{max}}$ denote the maximum transmit power for D2D link.
We consider slow fading channels, and accordingly, $\gamma_{u_{k},u_{i}}$ is
considered to be fixed during a single transmission but may change
independently from one file transmission to another file transmission. Then,
the achievable capacity of a D2D pair $(u_{k},u_{i})$ is given by
$C_{u_{k},u_{i}}=\log_{2}(1+\text{SINR}_{u_{k},u_{i}}(\textbf{Q})),\forall
u_{k}\in\mathcal{A}$, where $\text{SINR}_{u_{k},u_{i}}(\textbf{Q})$ is the
corresponding signal-to-interference-plus-noise ratio experienced by UD
$u_{i}$ when it is scheduled to UD $u_{k}$. This SINR is given by
$\text{SINR}_{u_{k},u_{i}}(\textbf{Q})=\cfrac{Q_{u_{k}}|\gamma_{u_{k},u_{i}}|^{2}}{N_{0}+\sum_{u_{m}\neq
u_{k}}Q_{u_{m}}|\gamma_{u_{m},u_{i}}|^{2}},\forall u_{k},u_{m}\in\mathcal{A}$,
where $\mathcal{A}$ is the set of transmitting UDs, $N_{0}$ is the noise
power, $Q_{u_{k}}$, $Q_{u_{m}}$ are the transmit powers of UDs $u_{k}$ and
$u_{m}$ which both are bounded by $Q_{\text{max}}$, and
$\textbf{Q}=[Q_{u_{k}}]$ is a row vector containing the power levels of the
transmitting UDs. The channel capacities of all pairs of D2D links can be
stored in an $N\times N$ _capacity status matrix (CSM)_
$\mathbf{C}=[C_{u_{k},u_{i}}],$ $\;\forall(u_{k},u_{i})$. Since UD $u_{k}$
cannot transmit to itself, $C_{u_{k},u_{k}}=0$.
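The SINR and capacity expressions above can be evaluated directly. The sketch below is illustrative (the gain values, power levels, and UD names are hypothetical); it fills the capacity status matrix $\mathbf{C}$ for a given set of simultaneously transmitting UDs.

```python
import math

def capacity_matrix(gains, powers, active, n0):
    """C[k][i] = log2(1 + SINR_{k,i}) for each transmitter u_k in `active` and
    receiver u_i; interference comes from the other active transmitters."""
    C = {k: {} for k in gains}
    for k in active:
        for i in gains:
            if i == k:
                C[k][i] = 0.0  # a UD cannot transmit to itself
                continue
            interference = sum(powers[m] * gains[m][i] ** 2
                               for m in active if m != k)
            sinr = powers[k] * gains[k][i] ** 2 / (n0 + interference)
            C[k][i] = math.log2(1.0 + sinr)
    return C

# Toy example: u1 transmits alone, so u2 experiences no interference.
gains = {"u1": {"u1": 0.0, "u2": 1.0}, "u2": {"u1": 1.0, "u2": 0.0}}
csm = capacity_matrix(gains, {"u1": 1.0, "u2": 1.0}, active={"u1"}, n0=1.0)
```

With unit gain, unit power, and unit noise, the SINR is $1$ and the capacity is $\log_{2}2=1$ bit/s/Hz, which matches `csm["u1"]["u2"]`.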
### II-B Rate-Aware NC and Expression of the Completion Time Metric
Let $\mathtt{f}_{u_{k},t}$ denote the XOR file combination to be sent by UD
$u_{k}$ to the set of scheduled UDs $\mathtt{u}(\mathtt{f}_{{u_{k}},t})$ at
the $t$-th transmission. For simplicity, we use time index $t$ to represent
the $t$-th transmission slot, i.e., $t=1$ refers to the first transmission
slot. The file combination $\mathtt{f}_{u_{k}}$ is an element of the power set
$\mathcal{P}({\mathcal{C}_{u_{k}}})$ of the cached files at UD $u_{k}$. At
every transmission $t$, each scheduled UD in
$\mathtt{u}(\mathtt{f}_{{u_{k},t}})$ can re-XOR $\mathtt{f}_{{u_{k},t}}$ with
its previously cached files to decode a new requested file. To ensure
successful reception at the UDs, the maximum transmission rate of a particular
transmitting UD is equal to the minimum achievable capacity of its scheduled
UDs. Therefore, the set of targeted users (the term "targeted users" refers to
the scheduled UDs that receive an instantly decodable transmission) by UD
$u_{k}$ is expressed as
$\mathtt{u}(\mathtt{f}_{u_{k}})=\left\\{u_{i}\in\mathcal{N}_{w}\
\big{|}|\mathtt{f}_{u_{k}}\cap\mathcal{W}_{u_{i}}|=1~{}\text{and}~{}R_{u_{k}}\leq
C_{u_{k},u_{i}}\right\\}$. Without loss of generality, the set of all targeted
UDs, when $|\mathcal{A}|$ transmitting UDs transmit the set of combinations
$\mathtt{f}(\mathcal{A})$, is represented by
$\mathtt{u}(\mathtt{f}(\mathcal{A}))$, where $u_{k}$, $\mathtt{f}_{u_{k}}$,
$\mathtt{u}({\mathtt{f}}_{u_{k}})$ are elements in $\mathcal{A}$,
$\mathtt{f}(\mathcal{A})$, and $\mathtt{u}(\mathtt{f}(\mathcal{A}))$,
respectively (the symbol $|\mathcal{X}|$ denotes the cardinality of the set
$\mathcal{X}$).
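The targeted-set definition can be written out as a short check. This is an illustrative sketch with hypothetical inputs; in addition to the rate condition $R_{u_{k}}\leq C_{u_{k},u_{i}}$ and the condition $|\mathtt{f}_{u_{k}}\cap\mathcal{W}_{u_{i}}|=1$, the code makes explicit the IDNC side-information requirement that the other files of the XOR combination are already cached by the receiver.

```python
def targeted_uds(combination, caches, wants, caps_from_k, rate):
    """Targeted UDs of a transmitter sending the XOR `combination` at `rate`:
    exactly one file of the combination is wanted, the remaining files are
    already cached (so the XOR is instantly decodable), and the channel
    supports the common transmission rate."""
    targeted = set()
    for ud in wants:
        missing = combination & wants[ud]
        if (len(missing) == 1
                and combination - missing <= caches[ud]
                and rate <= caps_from_k[ud]):
            targeted.add(ud)
    return targeted

# u4 wants f4 and already holds f1, so f1 XOR f4 sent at rate 2.5 targets it;
# u5 also wants f4 but lacks f1, so the combination is not decodable for it.
t = targeted_uds({"f1", "f4"},
                 caches={"u4": {"f1", "f2"}, "u5": {"f2"}},
                 wants={"u4": {"f4"}, "u5": {"f3", "f4"}},
                 caps_from_k={"u4": 3.0, "u5": 3.0},
                 rate=2.5)
```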
Let $T_{u_{k}}$ denote the duration of the transmission from UD $u_{k}$.
The duration for transmitting $\mathtt{f}_{u_{k}}$ from UD $u_{k}$ with rate
$R_{u_{k}}$ to $\mathtt{u}(\mathtt{f}_{u_{k}})$ is
$T_{u_{k}}=\frac{B}{R_{u_{k}}}$ seconds. For transmission synchronization, all
transmitting UDs in the set $\mathcal{A}$ adopt a common transmission rate,
denoted as $R$. Otherwise, different transmitting UDs would have different
transmission rates and thus different transmission durations, so UDs that
finish their transmission first would have to wait for those transmitting at
the slowest rate before starting a new transmission at the same time.
Therefore, we adopt one transmission rate for all transmitting UDs, and
accordingly, the transmission duration for sending any coded/uncoded file from
any transmitting UD is denoted by $T_{t}$ and expressed as $T_{t}=\frac{B}{R}$
seconds. Consequently, UDs that are not targeted at transmission slot $t$
experience $T_{t}$ seconds of delay, which accumulates as defined below.
Definition 1: Any UD with non-empty demand set experiences $T_{t}$ seconds of
time delay if it does not receive any requested file at $t$-th transmission.
The accumulated time delay of UD $u_{i}$ is the sum of $T_{t}$ seconds at each
transmission until $t$-th transmission, denoted by $\mathbb{T}_{u_{i}}(t)$,
and expressed as
$\displaystyle\mathbb{T}_{u_{i}}(t)=\mathbb{T}_{u_{i}}(t-1)+\begin{cases}T_{t}&\text{if}~{}u_{i}\notin\mathtt{u}{(\mathtt{f}(\mathcal{A}))}~{}\text{or}~{}u_{i}\in\mathcal{A}\\\
0&\text{otherwise}.\end{cases}$ (1)
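One step of the delay recursion in Definition 1 can be sketched as follows. This is an illustrative sketch, not the paper's code: per Definition 1, a UD with a non-empty demand set pays $T_{t}$ whenever it does not receive a requested file in slot $t$, which covers both non-targeted UDs and half-duplex transmitting UDs.

```python
def update_delay(prev_delay, ud, targeted, transmitting, wants, slot_time):
    """Accumulated-delay update for one transmission slot: a UD that still
    wants files pays `slot_time` unless it was targeted this slot (transmitting
    UDs cannot receive, so they also pay)."""
    if wants[ud] and (ud not in targeted or ud in transmitting):
        return prev_delay + slot_time
    return prev_delay

wants = {"u3": {"f2"}, "u4": {"f4"}}
# u3 is not targeted in this slot, u4 is:
d_u3 = update_delay(0.0, "u3", targeted={"u4"}, transmitting={"u1"},
                    wants=wants, slot_time=4.0)
d_u4 = update_delay(0.0, "u4", targeted={"u4"}, transmitting={"u1"},
                    wants=wants, slot_time=4.0)
```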
Let $\mathtt{T}_{u_{i}}$ denote the completion time of UD $u_{i}$ until it
receives the requested files. The completion time for UD $u_{i}$ includes two
parts, its accumulated time delay $\mathbb{T}_{u_{i}}$ due to receiving a non-
instantly decodable file and the time duration of sending all instantly
decodable transmissions. In other words, the completion time is divided into
consecutive instantly and non-instantly decodable transmissions for each UD in
$\mathcal{N}_{w}$ until it obtains all requested files. Subsequently, the
overall completion time
$\mathtt{T}=\max_{u_{i}\in\mathcal{N}}\\{\mathtt{T}_{u_{i}}\\}$ is the time
required until all UDs recover all files. The used notations and variables are
summarized in Table I.
To minimize the overall completion time, we need to find the optimal schedule
from the beginning of the D2D transmission phase at $t=1$ until all UDs have
obtained all requested files at $t=|\mathcal{S}|$. Here,
$\mathcal{S}$ is defined as a collection of transmitting UDs, file
combinations and transmission rates/powers until all UDs in $\mathcal{N}_{w}$
receive all $F$ files, i.e.,
$\mathcal{S}=\\{\mathcal{A}(t),\mathcal{P}(\mathcal{C}_{u_{i}}(t)),\mathcal{R}(t)\\},\forall
t\in\\{1,...,|\mathcal{S}|\\}$. Thus, the optimal schedule $\mathcal{S}^{*}$
that minimizes the overall completion time of all UDs is
$\mathcal{S}^{*}=\arg\min_{\mathcal{S}\in\mathbf{S}}\\{\mathtt{T}(\mathcal{S})\\}=\arg\min_{\mathcal{S}\in\mathbf{S}}\left\\{\max_{u_{i}\in\mathcal{N}_{w}}\left\\{\mathtt{T}_{u_{i}}(\mathcal{S})\right\\}\right\\}$,
where $\mathbf{S}$ is the set of all possible D2D transmission schedules,
i.e., $\mathcal{S}\in\mathbf{S}$. This optimal schedule can be formulated as
follows
TABLE I: Variables and parameters of the system Variable | Definition
---|---
$\mathcal{N}$ | Set of $N$ UDs
$\mathcal{N}_{w}$ | Set of $N$ UDs that want files
$\mathcal{F}$ | Set of $F$ popular files
$B$ | File size
$\mathtt{f}_{u_{k}}$ | The encoded file of UD $u_{k}$
$\mathtt{u}(\mathtt{f}_{u_{k}})$ | Set of targeted UDs by UD $u_{k}$
$\mathcal{A}$ | Set of $A$ transmitting UDs
$\mathbf{C}$ | Capacity status matrix of all achievable capacities
$\mathcal{W}_{u_{i}}$ | Set of wanted files by UD $u_{i}$
$\mathcal{C}_{u_{i}}$ | Set of locally cached files by UD $u_{i}$
$R_{u_{k}}$ | Transmission rate of UD $u_{k}$
$T_{u_{k}}$ | The transmission duration of UD $u_{k}$
$\mathtt{T}_{u_{i}}$ | The completion time of UD $u_{i}$
$\mathcal{R}_{u_{k}}$ | Set of all achievable rates of UD $u_{k}$
###### Theorem 1.
The minimum overall completion time problem in a D2D multihop network can be
formulated as a transmission schedule selection problem such that:
$\displaystyle\mathcal{S}^{*}=\arg\min_{\mathcal{S}\in\mathbf{S}}\left\\{\max_{u_{i}\in\mathcal{N}_{w}}\left\\{\frac{B\cdot|\mathcal{W}_{u_{i}}(0)|}{\tilde{R}_{u_{i}}(\mathcal{S})}+\mathbb{T}_{u_{i}}(\mathcal{S})\right\\}\right\\},$
(2)
where $|\mathcal{W}_{u_{i}}(0)|$ is the initial demand set size of UD $u_{i}$,
$\mathbb{T}_{u_{i}}(\mathcal{S})$ is the accumulated time delay of UD $u_{i}$
in schedule $\mathcal{S}$, and $\tilde{R}_{u_{i}}(\mathcal{S})$ is the
harmonic mean of the transmission rates at the time indices that are instantly
decodable for UD $u_{i}$ in schedule $\mathcal{S}$.
###### Proof.
The proof of Theorem 1 is omitted in this paper because we can use the same
steps that were used in [18] for C-RAN networks. Therefore, a sketch of the
proof is given as follows. We first show that the completion time can be
expressed as the sum of instantly and non-instantly decodable transmission
times from $|\mathcal{A}|$ transmitters via D2D links. Afterward, we prove
that the number of instantly decodable transmissions to UD $u_{l}$ is equal to
the number of its requested files $|\mathcal{W}_{u_{l},0}|$ and that the
number of non-instantly decodable transmissions matches the time delay in
Definition 1. Finally, we extend the results of the optimal schedule in
Theorem 1 of [18], which was derived for a C-RAN system, to the coordinated
D2D setting with multiple transmitters. ∎
The optimal NC transmission schedule that reduces the overall completion time
in a D2D network is the solution of the optimization problem in Theorem 1.
Such a schedule requires exploiting the heterogeneity of the UDs' channel
capacities and the interdependence of the UDs' file reception. Actually, the
decision at the
current transmission slot is dependent on the future coding situations, which
makes the optimization problem anti-causal. Therefore, it can be inferred that
finding the optimal schedule $\mathcal{S}^{*}$ is intractable [19], [24].
Figure 1: D2D system containing $6$ UDs and their corresponding
requested/received files and rates. For example, $u_{2}$ receives $f_{1}$,
$f_{4}$ and requests $f_{2}$, $f_{3}$. The sets of files that locally cached
at UD $u_{1}$ is: $\mathcal{C}_{u_{1}}=\\{f_{1},f_{4},f_{3}\\}$.
### II-C Example of RA-IDNC Transmissions in D2D System
This example illustrates the aforementioned definitions and concepts to ease
the analysis of the completion time minimization problem reformulation in the
next section. Consider the simple D2D network shown in Fig. 1, which consists
of $6$ UDs together with their received and requested files and their rates.
For example, $u_{2}$ has received $f_{1}$, $f_{4}$ and requests $f_{2}$,
$f_{3}$. Each file is assumed to have a size of $10$ bits. To minimize the
completion time for this example, one possible schedule is given as follows.
First time slot: The $u_{1}$-th and $u_{2}$-th UDs can use their cached files
to transmit $\mathtt{f}_{u_{1}}=f_{1}\oplus f_{4}$ and
$\mathtt{f}_{u_{2}}=f_{4}$ with rates $R_{u_{1}}=2.5$ and $R_{u_{2}}=2.5$
bits/s, respectively, to the sets
$\mathtt{u}(\mathtt{f}_{u_{1}})=\\{u_{4},u_{6}\\}$ and
$\mathtt{u}(\mathtt{f}_{u_{2}})=\\{u_{3},u_{5}\\}$. Given this, we have the
following transmission durations of $u_{1}$-th UD and $u_{2}$-th UD,
respectively: $T_{u_{1}}=\frac{10}{2.5}=4,T_{u_{2}}=\frac{10}{2.5}=4$ seconds.
The decoding process at UDs side can be explained as follows.
* •
The $u_{4}$-th UD already has $f_{1}$, so it can XOR the combination
($f_{1}\oplus f_{4}$) with $f_{1}$ (i.e., $(f_{1}\oplus f_{4})\oplus f_{1}$)
to retrieve $f_{4}$. Thus, the transmission is instantly decodable for UD
$u_{4}$.
* •
The $u_{6}$-th UD already has $f_{4}$, so it can XOR the combination
($f_{1}\oplus f_{4}$) with $f_{4}$ (i.e., $(f_{1}\oplus f_{4})\oplus f_{4}$)
to retrieve $f_{1}$. Thus, the transmission is instantly decodable for UD
$u_{6}$.
* •
The $u_{3}$-th and $u_{5}$-th UDs can receive $f_{4}$ from $u_{2}$-th UD.
Thus, the transmission is instantly decodable for UDs $u_{3}$ and $u_{5}$.
Therefore, the updated demand sets after the first time slot are:
$\mathcal{W}_{u_{2}}=\\{f_{3}\\}$,
$\mathcal{W}_{u_{3}}=\varnothing,\mathcal{W}_{u_{4}}=\varnothing$,
$\mathcal{W}_{u_{5}}=\\{f_{3}\\}$, $\mathcal{W}_{u_{6}}=\\{f_{2}\\}$. Note
that $T_{t,1}=4$ seconds.
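The XOR decoding steps in the bullets above can be reproduced mechanically. The sketch below uses hypothetical 2-byte stand-ins for the files; the point is only that XORing the coded transmission with a held file recovers the missing one, exactly as $u_{4}$ and $u_{6}$ do in the first slot.

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length files."""
    return bytes(x ^ y for x, y in zip(a, b))

f1, f4 = b"\x0f\xa1", b"\x33\x07"  # toy 2-byte "files"
coded = xor(f1, f4)                # f1 XOR f4, broadcast by u1
recovered_by_u4 = xor(coded, f1)   # u4 holds f1 and recovers f4
recovered_by_u6 = xor(coded, f4)   # u6 holds f4 and recovers f1
```

Since XOR is its own inverse, $(f_{1}\oplus f_{4})\oplus f_{1}=f_{4}$ and $(f_{1}\oplus f_{4})\oplus f_{4}=f_{1}$.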
Second time slot: The $u_{1}$-th UD can use its cached files to transmit
$\mathtt{f}_{u_{1}}=f_{2}\oplus f_{3}$ with rate $R_{u_{1}}=3$ bits/s to the
set $\mathtt{u}(\mathtt{f}_{u_{1}})=\\{u_{2},u_{5},u_{6}\\}$, which requires a
transmission time of $T_{u_{1}}=T_{t,2}=\frac{10}{3}=3.33$ seconds. The decoding
process at the UDs' side can be explained as follows.
process at UDs side can be explained as follows.
* •
The $u_{2}$-th UD already has $f_{2}$, so it can XOR the combination
($f_{2}\oplus f_{3}$) with $f_{2}$ (i.e., $(f_{2}\oplus f_{3})\oplus f_{2}$)
to retrieve $f_{3}$. Thus, the transmission is instantly decodable for UD
$u_{2}$.
* •
The $u_{5}$-th UD already has $f_{2}$, so it can XOR the combination
($f_{2}\oplus f_{3}$) with $f_{2}$ (i.e., $(f_{2}\oplus f_{3})\oplus f_{2}$)
to retrieve $f_{3}$. Thus, the transmission is instantly decodable for UD
$u_{5}$.
* •
The $u_{6}$-th UD already has $f_{3}$, so it can XOR the combination
($f_{2}\oplus f_{3}$) with $f_{3}$ (i.e., $(f_{2}\oplus f_{3})\oplus f_{3}$)
to retrieve $f_{2}$. Thus, the transmission is instantly decodable for UD
$u_{6}$.
By the end of the second time slot, all UDs will have their requested files.
Therefore, the total transmission time is $T_{t,1}+T_{t,2}=4+3.33=7.33$
seconds.
The above example demonstrates the benefit of NC and D2D communications in
minimizing the completion time. We can further improve this result by
allocating the power levels efficiently to the transmitting UDs.
## III Completion Time Approximation and Problem Reformulation
Following [18, 24], we approximate the completion time in Theorem 1 to select
a set of transmitting UDs, file combinations, and transmission rates/powers at
each transmission slot $t$ without going through all future possible coding
decisions. To achieve this, at each transmission slot $t$, a lower bound on
the completion times of all UDs is computed. This lower bound is computed
separately for each UD and does not require exploiting the interdependence of
the UDs' file reception and channel capacities. In fact, this lower bound metric
facilitates the mapping of the transmission schedule selection problem in (2)
into an online maximal independent set selection problem.
###### Corollary 1.
A lower bound on the completion time $\bar{\mathtt{T}}_{u_{i}}(t)$ of UD
$u_{i}\in\mathcal{N}_{w}$ at a given time index $t$ can be approximated as
$\displaystyle\bar{\mathtt{T}}_{u_{i}}(t)\approx\frac{B\cdot|\mathcal{W}_{u_{i}}(0)|}{\tilde{R}_{u_{i}}}+\mathbb{T}_{u_{i}}(t),$
(3)
where $\mathbb{T}_{u_{i}}(t)$ is the accumulative time delay experienced by UD
$u_{i}$ until time index $t$, and $\tilde{R}_{u_{i}}$ is the harmonic mean of
the channel capacities from all other UDs to UD $u_{i}$.
###### Proof.
The expression in (3) matches the expression in Theorem 1, except
$\mathbb{T}_{u_{i}}(\mathcal{S})$ and $\tilde{R}_{u_{i}}(\mathcal{S})$ of
Theorem 1 are replaced by $\mathbb{T}_{u_{i}}(t)$ and $\tilde{R}_{u_{i}}$,
respectively. The best case scenario is that all transmissions starting from
time slot $t$ are instantly decodable for UD $u_{i}$. Thus, it experiences no
further time delay, i.e.,
$\mathbb{T}_{u_{i}}(\mathcal{S})=\mathbb{T}_{u_{i}}(t)$. In addition, since a
fully connected D2D model is adopted, UD $u_{i}$ can receive a missing file
from any other UD until it receives all $F$ files. Therefore,
$\tilde{R}_{u_{i}}(\mathcal{S})$ is replaced by $\tilde{R}_{u_{i}}$, where
$\tilde{R}_{u_{i}}$ is the harmonic mean of the channel capacities from all
other UDs to UD $u_{i}$. This is an approximation, as $\tilde{R}_{u_{i}}$ is
exactly equal to $\tilde{R}_{u_{i}}(\mathcal{S})$ only if UD $u_{i}$ receives
an equal number of files from the other UDs at rates equal to the channel
capacities. ∎
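The lower bound (3) is cheap to compute per UD. The sketch below is illustrative (the file size, demand size, capacities, and delay are hypothetical); it combines the harmonic mean of the channel capacities with the accumulated delay, as in the corollary.

```python
def harmonic_mean(xs):
    """Harmonic mean of positive rates."""
    return len(xs) / sum(1.0 / x for x in xs)

def completion_lower_bound(file_size, n_initial_wanted, caps_to_i, delay):
    """Eq. (3): best-case remaining reception time plus accumulated delay."""
    return file_size * n_initial_wanted / harmonic_mean(caps_to_i) + delay

# B = 10 bits, |W_i(0)| = 2 files, capacities 2 bits/s from both neighbours,
# and 4 s of delay already accumulated:
bound = completion_lower_bound(10, 2, [2.0, 2.0], 4.0)
```

Here the harmonic mean is $2$ bits/s, so the bound is $10\cdot 2/2+4=14$ seconds.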
Using the approximated completion time (3) at each transmission slot $t$, we
are now ready to reformulate the completion time minimization problem in
Theorem 1 with the aim of developing a cross-layer network coding framework
that decides the set of transmitting UDs $\mathcal{A}$ for sending
$\mathtt{f}_{u_{k}}$ to the UDs $\mathtt{u}(\mathtt{f}_{u_{k}})$, and their
transmission rates/powers $\\{R_{u_{k}},Q_{u_{k}}\\}$, $\forall
u_{k}\in\mathcal{A}$, such that all files are delivered to all UDs with the
minimum completion time. Therefore, the completion time minimization problem
in a fully connected D2D system can be formulated as
$\displaystyle\rm P1:\hskip
5.69046pt\min_{\begin{subarray}{c}\mathtt{f}_{u_{k}},R_{u_{k}},Q_{u_{k}}\\\
\mathcal{A}\in\mathcal{P}(\mathcal{N})\end{subarray}}\left\\{\max_{u_{i}\in\mathcal{N}_{w}}\bar{\mathtt{T}}_{u_{i}}(t)\right\\}$
(4a) $\displaystyle\rm subject~{}to\begin{cases}\text{(C1):}\hskip
2.84544pt\mathtt{u}(\mathtt{f}_{u_{k}})\cap\mathtt{u}(\mathtt{f}_{u_{m}})=\varnothing,\forall
u_{k}\neq{u_{m}}\in\mathcal{A},\\\ \text{(C2):}\hskip
2.84544pt\mathtt{f}_{u_{k}}\in\mathcal{P}(\mathcal{C}_{u_{k}}),~{}\forall
u_{k}\in\mathcal{A},\\\ \text{(C3):}\hskip 2.84544pt0\leq Q_{u_{k}}\leq
Q_{\max},~{}\forall u_{k}\in\mathcal{A},\\\ \text{(C4):}\hskip
2.84544ptR_{u_{k}}\geq R_{\text{th}},\forall u_{k}\in\mathcal{A},\end{cases}$
where (C1) states that the sets of targeted UDs from all transmitting UDs are
disjoint, i.e., each UD must be scheduled to only one transmitting UD; (C2)
ensures that all files to be combined using XOR operation at each transmitting
UD $u_{k}$ are stored in its cache; (C3) bounds the maximum transmit power of
transmitting UDs, and (C4) guarantees the minimum transmission rate
$R_{\text{th}}$ required to meet the QoS rate requirements.
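Constraints C1-C4 can be checked mechanically for a candidate slot schedule. This is an illustrative feasibility check, not part of the paper's solution method; the schedule representation (transmitter mapped to its combination, targeted set, rate, and power) is a hypothetical encoding.

```python
def feasible(schedule, caches, q_max, r_th):
    """Check C1-C4 of problem P1 for one transmission slot. `schedule` maps
    each transmitting UD to (combination, targeted_set, rate, power)."""
    seen = set()
    for ud, (comb, targets, rate, power) in schedule.items():
        if targets & seen:            # C1: targeted sets must be disjoint
            return False
        seen |= targets
        if not comb <= caches[ud]:    # C2: only cached files may be XORed
            return False
        if not 0 <= power <= q_max:   # C3: transmit-power budget
            return False
        if rate < r_th:               # C4: QoS rate floor
            return False
    return True

caches = {"u1": {"f1", "f3", "f4"}, "u2": {"f1", "f4"}}
good = {"u1": ({"f1", "f4"}, {"u4", "u6"}, 2.5, 1.0),
        "u2": ({"f4"}, {"u3", "u5"}, 2.5, 1.0)}
bad = {"u1": ({"f1", "f4"}, {"u4", "u6"}, 2.5, 1.0),
       "u2": ({"f4"}, {"u4"}, 2.5, 1.0)}  # u4 scheduled twice: violates C1
```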
The optimization variables in (P1) contain the NC scheduling parameters
$\mathtt{u}(\mathtt{f}_{u_{k}})$, potential set of transmitting UDs
$\mathcal{A}$, and their adopted power allocations. It can be seen that
problem (P1) is intractable. However, by analyzing the problem, the next
section transforms it into an MWIS problem using a graph-theoretical technique.
## IV Graph Construction and Problem Transformation
The formulated problem in ($\rm P1$) is similar to MWIS problems in several
aspects. In an MWIS, any two vertices must be non-adjacent in the graph, and
similarly, in problem ($\rm P1$), the same UD cannot be scheduled to two
different transmitting UDs (i.e., C1). Moreover, the objective of problem
($\rm P1$) is to minimize the maximum completion time, and similarly, the goal
of an MWIS is to maximize the total weight of the selected vertices.
Therefore, the feasible NC schedules can be considered to be the MWISs.
Consequently, we focus on graph-based methods, and in what follows, we
construct a graph that allows us to transform problem ($\rm P1$) into an MWIS
problem.
### IV-A D2D Rate-Aware IDNC Graph
In this sub-section, we construct a weighted undirected graph, referred to as
the D2D-RA-IDNC graph, that captures all possible conflicts for scheduling
UDs, namely transmission, network coding, and transmission rate conflicts. Let
$\mathcal{G}(\mathcal{V},\mathcal{E})$ represent the D2D-RA-IDNC graph where
$\mathcal{V}$, $\mathcal{E}$ stand for the set of all the vertices and the
edges, respectively. In order to construct $\mathcal{G}$, we need first to
generate the vertices and connect them.
Let $\mathcal{N}_{w}\subset\mathcal{N}$ denote the set of UDs that still want
some files. Hence, the D2D-RA-IDNC graph is designed by generating all
vertices for the $u_{k}$-th possible transmitting UD, $\forall
u_{k}\in\mathcal{N}$. The vertex set $\mathcal{V}$ of the entire graph is the
union of vertices of all UDs. Consider, for now, generating the vertices of UD
$u_{k}$. Note that transmitting UD $u_{k}$ can encode its IDNC file
$\mathtt{f}_{u_{k}}$ using its previously received files
$\mathcal{C}_{u_{k}}$. Therefore, each vertex is generated for each single
file $f_{h}\in\mathcal{W}_{u_{i}}\cap\mathcal{C}_{u_{k}}$ that is requested by
each UD $u_{i}\in\mathcal{N}_{w}$ and for each achievable rate $r$ of UD
$u_{k}$ that is defined below.
Definition 2: The set of achievable rates $\mathcal{R}_{u_{k},u_{i}}$ from UD
$u_{k}$ to UD $u_{i}$ is the subset of the achievable rates
$\mathcal{R}_{u_{k}}$ that are less than or equal to the channel capacity
$C_{u_{k},u_{i}}$. It can be expressed as
$\mathcal{R}_{u_{k},u_{i}}=\\{r\in\mathcal{R}_{u_{k}}|r\leq
C_{u_{k},u_{i}}~{}\text{and}~{}u_{i}\in\mathcal{N}_{\text{w}}\\}$.
The above definition emphasizes that the $u_{i}$-th UD can receive a file from
transmitting UD $u_{k}$ if the adopted transmission rate $r$ is in the
achievable set $\mathcal{R}_{u_{k},u_{i}}$. Therefore, we generate
$|\mathcal{R}_{u_{k},u_{i}}|$ vertices for a requested file
$f_{h}\in\mathcal{C}_{u_{k}}\cap\mathcal{W}_{u_{i}},\forall
u_{i}\in\mathcal{N}_{\text{w}}$. In summary, a vertex $v_{r,i,h}^{k}$ is
generated for each association of a transmitting UD $u_{k}$, a rate
$r\in\mathcal{R}_{u_{k},u_{i}}$, and a requested file
$f_{h}\in\mathcal{C}_{u_{k}}\cap\mathcal{W}_{u_{i}}$ of user
$u_{i}\in\mathcal{N}_{\text{w}}$. Similarly, we generate all vertices for all
UDs in $\mathcal{N}$.
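The vertex-generation rule above can be sketched directly. This is an illustrative sketch with hypothetical inputs: one vertex $(k, r, i, f)$ per transmitter $u_{k}$, scheduled UD $u_{i}$, file $f\in\mathcal{C}_{u_{k}}\cap\mathcal{W}_{u_{i}}$, and achievable rate $r\leq C_{u_{k},u_{i}}$.

```python
from itertools import product

def generate_vertices(caches, wants, rate_sets, caps):
    """Enumerate the D2D-RA-IDNC vertices (k, r, i, f)."""
    vertices = []
    for k, i in product(caches, wants):
        if k == i:
            continue  # a UD cannot transmit to itself
        for f in sorted(caches[k] & wants[i]):
            for r in rate_sets[k]:
                if r <= caps[k][i]:   # rate must be achievable for u_i
                    vertices.append((k, r, i, f))
    return vertices

V = generate_vertices(caches={"u1": {"f1"}, "u2": set()},
                      wants={"u1": set(), "u2": {"f1"}},
                      rate_sets={"u1": [1.0, 2.0], "u2": [1.0]},
                      caps={"u1": {"u2": 1.5}, "u2": {"u1": 1.5}})
```

Only `("u1", 1.0, "u2", "f1")` survives: $u_{2}$ caches nothing, and the rate $2.0$ exceeds the capacity $1.5$.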
Given the above generated vertices, in what follows, we connect them to
construct the D2D-RA-IDNC graph. All possible conflict connections between
vertices (conflict edges between circles) in the D2D-RA-IDNC graph are
provided as follows. Two vertices $v_{r,i,h}^{k}$ and
$v_{r^{\prime},i^{\prime},h^{\prime}}^{k}$ representing the same transmitting
UD $u_{k}$ are linked with a coding-conflict edge if the resulting combination
violates the instant decodability constraint. This event occurs if one of the
following holds.
* •
The combination is not instantly decodable, i.e., $f_{h}\neq f_{h^{\prime}}$
and ($f_{h},f_{h^{\prime}}$)
$\notin\mathcal{C}_{u_{i^{\prime}}}\times\mathcal{C}_{u_{i}}$.
* •
The transmission rate is different, i.e., $r\neq r^{\prime}$.
Similarly, two vertices $v_{r,i,h}^{k}$ and
$v_{r^{\prime},i^{\prime},h^{\prime}}^{k^{\prime}}$ representing different
transmitting UDs $u_{k}\neq u_{k^{\prime}}$ are conflicting if
* •
The transmission rate is different, i.e., $r\neq r^{\prime}$.
* •
The same UD is scheduled, i.e., $u_{i}=u_{i^{\prime}}$.
Therefore, two vertices $v_{r,i,h}^{k}$ and
$v_{r^{\prime},i^{\prime},h^{\prime}}^{k^{\prime}}$ are adjacent by a conflict
edge in $\mathcal{E}$ if they satisfy one of the following connectivity
conditions (CC).
* •
CC1: $u_{k}=u_{k^{\prime}}$ and ($f_{h}\neq f_{h^{\prime}}$) and
($f_{h},f_{h^{\prime}}$)
$\notin\mathcal{C}_{u_{i^{\prime}}}\times\mathcal{C}_{u_{i}}$.
* •
CC2: $r\neq r^{\prime}$.
* •
CC3: $u_{k}\neq u_{k^{\prime}}$ and $u_{i}=u_{i^{\prime}}$.
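The three connectivity conditions reduce to a simple pairwise predicate. This is an illustrative sketch, not the paper's code; it reads the CC1 decodability check against the scheduled UDs' caches (each targeted UD must hold the other vertex's file for the XOR to remain instantly decodable).

```python
def conflict(v, w, caches):
    """Conflict edge between vertices v = (k, r, i, f) and w = (k', r', i', f')
    according to CC1-CC3."""
    k, r, i, f = v
    k2, r2, i2, f2 = w
    if r != r2:                      # CC2: different transmission rates
        return True
    if k != k2 and i == i2:          # CC3: same UD scheduled by two senders
        return True
    if (k == k2 and f != f2          # CC1: XOR would not be decodable
            and not (f in caches[i2] and f2 in caches[i])):
        return True
    return False

caches = {"u4": {"f1"}, "u5": {"f2"}, "u6": {"f4"}}
# u4 holds f1 and u6 holds f4, so u1 may XOR f4 and f1 for them:
ok = conflict(("u1", 2.5, "u4", "f4"), ("u1", 2.5, "u6", "f1"), caches)
# Different rates always conflict (CC2):
rate_clash = conflict(("u1", 2.5, "u4", "f4"), ("u2", 3.0, "u5", "f4"), caches)
```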
### IV-B Problem Transformation
In this sub-section, we transform the network-coded user scheduling and power
optimization problem ($\rm P1$) into an MWIS problem, and we start with the
following definitions.
Definition 3: Any independent set (IS) $\mathcal{I}$ in graph $\mathcal{G}$
must satisfy: i) $\mathcal{I}\subseteq\mathcal{V}$; ii) $\forall
v,v^{\prime}\in\mathcal{I}$, we have $(v,v^{\prime})\notin\mathcal{E}$.
Definition 4: A maximal IS in an undirected graph is an IS that cannot be
extended by one more vertex while keeping all of its vertices pairwise
non-adjacent.
Definition 5: An independent set $\mathcal{I}$ is referred to as an MWIS of
$\mathcal{G}$ if it satisfies: i) $\mathcal{I}$ is an IS in graph
$\mathcal{G}$; ii) the sum of the weights of the vertices in $\mathcal{I}$ is
the maximum among all ISs of $\mathcal{G}$. The MWIS will be denoted by
$\mathtt{I}$.
Based on the designed D2D-RA-IDNC graph and the definition of the MWIS, we
have the following proposition.
###### Proposition 1.
The problem of minimizing the approximated completion time in ($\rm P1$) at
the $t$-th transmission is equivalently represented by the MWIS selection
among all the ISs in the $\mathcal{G}$ graph, where the original weight
$\omega_{o}(v)$ of each vertex $v$ is given by
$\displaystyle\omega_{o}(v)=2^{N_{w}-d_{u_{i}}+1}\bar{\mathtt{T}}_{u_{i}}(t)\left(\frac{r}{B}\right),$
(5)
where $d_{u_{i}}$ is the order of UD $u_{i}$ when all UDs in
$\mathcal{N}_{w}(t)$ are arranged in decreasing order of their lower bounds on
the completion times [24].
###### Proof.
The proof of the proposition follows steps similar to those in [18], and
consequently, the detailed steps are omitted. Accordingly, a sketch of the
proof is provided as follows. First, we show that there is a mapping between
the set of maximal ISs in the D2D-RA-IDNC graph and the set of feasible
transmissions. Then, the weight of each IS equals the objective function of
($\rm P1$).
The authors in [18] showed that there exists a one-to-one mapping between the
set of feasible transmissions and the set of ISs in the RA-IDNC graph. Here,
we extend the results of [18] to the D2D-RA-IDNC graph by showing that the
vertices belonging to different feasible transmissions are non-adjacent, i.e.,
they satisfy the constraint CC3. Since each feasible transmission by a
transmitting UD is an IS and such sets are mutually non-adjacent, their union
is also an IS.
From CC3, the same UD cannot be targeted by distinct transmitting UDs.
Therefore, all vertices in the sub-graph representing transmitting UD $u_{k}$
are non-adjacent to vertices in the sub-graph of transmitting UD
$u_{k^{\prime}}$ as long as the targeted UDs are distinct. Therefore, each
feasible association between targeted UDs-transmitting UDs, file combinations,
and the transmission rate is represented by a maximal IS. Conversely, it can
readily be seen that each IS represents a feasible condition as it does not
violate the connectivity conditions CC1, CC2, and CC3. Indeed, for
$\mathcal{I}$, the transmission of the combination
$\mathtt{u}_{u_{k}}=\oplus_{v^{k}_{r,i,h}\in\mathcal{I}}f$ by transmitting UD
$u_{k}$ at rate $r$ is instantly for all UDs
$\mathtt{u}(\mathtt{f}_{u_{k}})=\cup_{v^{k}_{r,i,h}\in\mathcal{I}}u$.
To finish the proof, we show that the weight of the IS equals the objective
function of ($\rm P1$). Let the weight of vertex $v^{k}_{r,i,h}$ be defined as
in (5) and let $\mathcal{I}$ be the set of maximal ISs in the D2D-RA-IDNC graph
$\mathcal{G}$. Let $\mathtt{I}\in\mathcal{I}$ be the MWIS, i.e., the maximal IS
with the maximum sum of vertex weights. By the design of graph $\mathcal{G}$,
all feasible decisions on transmitting UDs, transmitted files, and transmission
rates/powers are mapped to the set of all maximal ISs. Consequently, the
completion time reduction problem can be reformulated as a maximal IS selection
problem in graph $\mathcal{G}$ as follows
$\displaystyle\arg\max_{\begin{subarray}{c}\mathcal{A}\in\mathcal{P}(\mathcal{N})\\\
\mathtt{f}_{u_{k}}\in\mathcal{P}(\mathcal{C}_{u_{k}})\\\
Q_{u_{k}}\in\\{0,\cdots,Q_{\max}\\}\\\
r\in\mathcal{R}_{u_{k}}\end{subarray}}\sum_{u_{i}\in\mathcal{X}}2^{N_{w}-d_{u_{i}}+1}\bar{\mathtt{T}}_{u_{i}}(t)\left(\frac{r}{B}\right)$
$\displaystyle=\max_{\begin{subarray}{c}\mathtt{I}\in\mathcal{I}\end{subarray}}\sum_{v\in\mathtt{I}}2^{N_{w}-d_{u_{i}}+1}\bar{\mathtt{T}}_{u_{i}}(t)\left(\frac{r}{B}\right)=\max_{\begin{subarray}{c}\mathtt{I}\in\mathcal{I}\end{subarray}}\sum_{v\in\mathtt{I}}\omega_{o}(v).$
(6)
Consequently, the problem of choosing transmitting UDs, file combinations, and
transmission rates/powers that results in minimizing the completion time is
equivalent to the MWIS selection problem over the D2D-RA-IDNC graph. ∎
It is well known that finding the MWIS is an NP-complete problem [25].
Consequently, the problem in Proposition 1 is NP-hard. In the next section, we
greedily select a maximal IS using the vertex weights defined in (5).
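To make the weighting in (5) concrete, the following Python sketch computes $\omega_{o}(v)$ for illustrative parameter values (the function and argument names are ours, not from the paper):

```python
def original_weight(N_w, d_ui, T_bar_ui, r, B):
    """Vertex weight omega_o(v) from Eq. (5):
    2^(N_w - d_ui + 1) * T_bar_ui * (r / B).

    N_w      : number of UDs with a non-empty Wants set
    d_ui     : rank of UD u_i when UDs are sorted in decreasing order
               of their completion-time lower bounds
    T_bar_ui : completion-time lower bound of UD u_i at slot t
    r, B     : transmission rate and file size
    """
    return 2 ** (N_w - d_ui + 1) * T_bar_ui * (r / B)

# The UD with the largest lower bound (rank d_ui = 1) receives an
# exponentially higher priority than the UD ranked last (d_ui = N_w).
w_urgent = original_weight(N_w=4, d_ui=1, T_bar_ui=2.0, r=1.0, B=1.0)
w_late = original_weight(N_w=4, d_ui=4, T_bar_ui=2.0, r=1.0, B=1.0)
```

The exponential factor ensures that serving the most critical UD always dominates serving any combination of less critical ones.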
## V Proposed Solution
In this section, we develop an efficient cross-layer network coding solution
that judiciously selects multiple transmitting UDs simultaneously and their
coding decisions and transmitting rates/powers. As the SINR expression shows,
increasing the number of transmitting UDs also increases the interference on
each transmission channel and therefore reduces the channel capacities. To
control the deleterious impact of interference on the channel capacities, a
power allocation mechanism is employed that efficiently selects the set of
transmitting UDs and allocates the transmitting power to them such that: (1) a
large number of UDs can be targeted with an IDNC combination, and (2) the
channel capacities from the transmitting UDs to the targeted UDs still improve
the objective function. The overall steps of our proposed solution are as follows.
We first present a power allocation algorithm for the given set of
transmitting UDs and the scheduled/targeted UDs to these transmitting UDs.
Next, we provide a greedy algorithm that selects a set of transmitting and
targeted UDs considering known/predefined power allocations. Finally, by
combining the aforementioned algorithms, we present an innovative cross-layer
NC solution.
### V-A Transmit Power Allocation Algorithm
In this sub-section, we derive optimal power allocations to maximize sum-
throughput for a given set of transmitting UDs. We assume that the system has
$A$ transmitting UDs, i.e., $\mathcal{A}=\left\\{1,2,\cdots,A\right\\}$ and
the UDs receiving data from the $u_{k}$-th transmitting UD is denoted by the
set $\mathtt{u}(\mathtt{f}_{u_{k}})$. The power optimization problem to
maximize the sum-capacity of $A$ transmitting UDs is formulated as
$\begin{split}\max_{\\{Q_{k}\\}}\sum_{k=1}^{A}\mathcal{N}_{k}\quad\text{s.t.}\quad
0\leq Q_{k}\leq Q_{\max},\forall k\end{split}$ (7)
where
$\mathcal{N}_{k}=\sum_{u_{i}\in\mathtt{u}(\mathtt{f}_{u_{k}})}\log_{2}\left(1+\text{SINR}_{u_{k},u_{i}}\right)$.
The near-optimal power allocation for the $u_{k}$-th transmitting UD is
obtained in the following proposition.
###### Proposition 2.
Let $\widehat{Q}_{k}$ be the given transmit power of the $k$-th transmitting
UD at the $t$-th iteration. A converged power allocation is obtained by
updating the powers at the $(t+1)$-th iteration, $\forall t$, according to the
following power update equation
$\displaystyle
Q_{k}=\left[\frac{\sum_{u_{i}\in\mathtt{u}(\mathtt{f}_{u_{k}})}\frac{\text{SINR}_{u_{k},u_{i}}}{1+\text{SINR}_{u_{k},u_{i}}}}{\sum_{{\begin{subarray}{c}m=1\\\
m\neq
k\end{subarray}}}\sum_{u_{j}\in\mathtt{u}(\mathtt{f}_{u_{m}})}\left(\frac{\text{SINR}_{u_{m},u_{j}}}{1+\text{SINR}_{u_{m},u_{j}}}\right)^{2}\frac{\gamma_{u_{k},u_{i}}}{\widehat{Q}_{m}\gamma_{u_{m},u_{j}}}}\right]_{0}^{Q_{max}}$
(8)
where $\text{SINR}_{u_{m},u_{j}}$, $\forall m,j$, is obtained by applying the
value $\widehat{Q}_{m}$ in the expression of end-to-end SINR.
###### Proof.
The proof follows steps similar to those of [26, Lemma 2]. In particular,
although (7) is a non-convex power allocation problem, a locally optimal
solution to (7) can be obtained by finding a stationary point of the objective
function. To obtain a stationary power allocation for the $u_{k}$-th
transmitting UD, we need to solve $\frac{\partial\mathcal{N}_{k}}{\partial Q_{k}}=0$. In
particular, we obtain
$\displaystyle\frac{\partial\mathcal{N}_{k}}{\partial
Q_{k}}=\frac{1}{Q_{k}}\sum_{u_{i}\in\mathtt{u}(\mathtt{f}_{u_{k}})}\frac{\text{SINR}_{u_{k},u_{i}}}{1+\text{SINR}_{u_{k},u_{i}}}$
(9) $\displaystyle-\sum_{m=1,m\neq
k}\sum_{u_{j}\in\mathtt{u}(\mathtt{f}_{u_{m}})}\left(\frac{\text{SINR}_{u_{m},u_{j}}}{1+\text{SINR}_{u_{m},u_{j}}}\right)^{2}\frac{\gamma_{u_{k},u_{i}}}{\widehat{Q}_{m}\gamma_{u_{m},u_{j}}}.$
Therefore, by solving $\frac{\partial\mathcal{N}_{k}}{\partial Q_{k}}=0$, we
obtain
$Q_{k}=\left[\frac{\sum_{u_{i}\in\mathtt{u}(\mathtt{f}_{u_{k}})}\frac{\text{SINR}_{u_{k},u_{i}}}{1+\text{SINR}_{u_{k},u_{i}}}}{\sum_{m=1,m\neq
k}\sum_{u_{j}\in\mathtt{u}(\mathtt{f}_{u_{m}})}\left(\frac{\text{SINR}_{u_{m},u_{j}}}{1+\text{SINR}_{u_{m},u_{j}}}\right)^{2}\frac{\gamma_{u_{k},u_{i}}}{{Q}_{m}\gamma_{u_{m},u_{j}}}}\right]$
(10)
Solving (10) yields a stationary point of the objective function for the
$u_{k}$-th transmitting UD, $\forall u_{k}$. However, a closed-form power
allocation from (10) is intractable. Accordingly, we adopt an iterative
approach to obtain a near-optimal stationary power allocation. To this end, we
denote by $\widehat{Q}_{k}$ the given power allocation for the $u_{k}$-th
transmitting UD, $\forall u_{k}$, and evaluate the R.H.S. of (10) at the given
power allocations. Finally, by projecting the R.H.S. of (10) onto the feasible
region of the power allocations, we obtain (8).
∎
Based on Proposition 2, an iterative algorithm to obtain transmit power
allocations for a given set of transmitting UDs is provided as Algorithm 1.
The convergence of Algorithm 1 is justified as follows.
Algorithm 1 Transmit Power Allocations for A Given Set of Transmitting UDs
1: Input: Set of transmitting UDs, the file combinations, and the associated
UDs with each transmitting UDs.
2: Initialize: $\widehat{Q}_{u_{k}}=Q_{o}$, $\forall k=1,2,\cdots A$, $t=1$.
3: repeat
4: Update the power allocation of the $u_{k}$-th transmitting UD, $\forall
u_{k}$, by applying (8).
5: Set $\widehat{Q}_{u_{k}}=Q_{u_{k}}$, $\forall k=1,2,\cdots A$, and $t=t+1$
6: until Objective function of (7) converges or $t>t_{\max}$.
7: Output: Final transmission power for all the transmitting UDs.
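The power update in (8) can be read as a Gauss-Jacobi fixed-point sweep, with all SINRs evaluated at the given powers $\widehat{Q}_{m}$. The following Python sketch mimics Algorithm 1 under our own toy assumptions: the gain matrix, noise power, and target sets are made-up values, and we interpret the gain $\gamma_{u_{k},u_{i}}$ in the denominator of (8) as the gain from transmitter $u_{k}$ to the interfered UD $u_{j}$:

```python
def power_update(Q, gamma, N0, targets, Q_max):
    """One Gauss-Jacobi sweep of the fixed-point update in Eq. (8).

    Q       : list of current transmit powers (Q-hat in (8)), length K
    gamma   : gamma[k][i] = channel gain from transmitter k to UD i
    N0      : noise power
    targets : targets[k] = indices of the UDs targeted by transmitter k
    """
    K = len(Q)

    def sinr(k, i):
        # SINR at UD i for transmitter k, at the given powers Q
        interference = sum(Q[m] * gamma[m][i] for m in range(K) if m != k)
        return Q[k] * gamma[k][i] / (N0 + interference)

    Q_new = []
    for k in range(K):
        num = sum(sinr(k, i) / (1.0 + sinr(k, i)) for i in targets[k])
        den = sum((sinr(m, j) / (1.0 + sinr(m, j))) ** 2
                  * gamma[k][j] / (Q[m] * gamma[m][j])
                  for m in range(K) if m != k for j in targets[m])
        # Project the R.H.S. of (10) onto the feasible region [0, Q_max]
        Q_new.append(min(max(num / den, 0.0), Q_max) if den > 0 else Q_max)
    return Q_new

# Toy run of Algorithm 1: two transmitters, each targeting one distinct UD.
gamma = [[1.0, 0.2],
         [0.3, 1.0]]
Q = [0.5, 0.5]
for _ in range(50):                 # repeat until convergence or t_max
    Q = power_update(Q, gamma, N0=1.0, targets=[[0], [1]], Q_max=1.0)
```

In this toy instance the iteration settles at the projected fixed point, mirroring the projection onto $[0, Q_{\max}]$ in the proof of Proposition 2.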
###### Proposition 3.
Algorithm 1 provides a stable and locally optimal solution to (7).
###### Proof.
We can prove Proposition 3 by resorting to game theory. In fact, the
proposed power allocation update can be considered as a non-cooperative power
control game (NCPCG) in which each transmitting UD acts as a rational and
selfish player that wants to maximize its utility by choosing the best possible
power allocation strategy. To this end, the utility function of the $u_{k}$-th
transmitting UD is given at the top of the next page, where $\mathbf{Q_{-k}}$
denotes the power allocations of the transmitting UDs other than the
$u_{k}$-th UD.
$\begin{split}\mathcal{N}_{k}(Q_{k},\mathbf{Q_{-k}})=&\sum_{u_{i}\in\mathtt{u}(\mathtt{f}_{u_{k}})}\log_{2}\left(1+\cfrac{Q_{k}|\gamma_{u_{k},u_{i}}|^{2}}{N_{0}+\sum_{m=1,m\neq
k}^{K}Q_{m}|\gamma_{u_{m},u_{i}}|^{2}}\right)\\\ &+\sum_{m=1,m\neq
k}^{K}\sum_{u_{j}\in\mathtt{u}(\mathtt{f}_{u_{m}})}\log_{2}\left(1+\cfrac{Q_{m}|\gamma_{u_{m},u_{j}}|^{2}}{N_{0}+\sum_{n=1,n\neq
k,m}^{K}Q_{n}|\gamma_{u_{n},u_{j}}|^{2}+Q_{k}|\gamma_{u_{k},u_{j}}|^{2}}\right)\end{split}$
The utility function has two parts: the first is the payoff in terms of the
achievable throughput, and the second is the payoff for creating less
interference to the other players in the system. Obviously, the first and
second terms monotonically increase and decrease, respectively, as the
transmission power $Q_{k}$ increases. We denote the R.H.S. of (8) as
$\mathcal{F}_{k}\left(\\{\widehat{Q}_{k}\\}\right)$. We can readily
demonstrate that if $Q_{k}<\mathcal{F}_{k}\left(\\{\widehat{Q}_{k}\\}\right)$,
$\mathcal{N}_{k}(Q_{k},\mathbf{Q_{-k}})$ monotonically increases, and if
$Q_{k}>\mathcal{F}_{k}\left(\\{\widehat{Q}_{k}\\}\right)$,
$\mathcal{N}_{k}(Q_{k},\mathbf{Q_{-k}})$ monotonically decreases. Therefore,
$\mathcal{N}_{k}(Q_{k},\mathbf{Q_{-k}})$ is a quasi-concave utility function.
From [27, Theorem 3.2], for a non-cooperative game with quasi-concave utility
functions, a Nash equilibrium (NE) point must exist, and it is obtained as the
best response strategy of the players in the game. Note that at an NE point, no
player can improve its utility by taking an alternative strategy, and
consequently, the overall solution must converge. We can easily verify that
(8) is the same as the best response strategy of the $k$-th transmitting UD,
$\forall k$. Consequently, the iterative power allocation procedure, given in
Algorithm 1, must converge to a stable point. We also emphasize that (8) is
derived by satisfying the Karush-Kuhn-Tucker (KKT) conditions for (7). Hence,
a stable power allocation obtained by iteratively solving (8) must converge to
a locally optimal solution to (7). Accordingly, Algorithm 1 provides a stable
and locally optimal solution to (7) (the convergence of the power update
equation (8) is justified in [26] only for the asymptotically high
signal-to-noise ratio (SNR) regime; here, we justify that the considered power
allocation converges without the high-SNR assumption). ∎
### V-B Greedy Maximal Independent Set Selection Algorithm
In this sub-section, we describe a maximal IS selection algorithm based on a
greedy vertex search in the D2D-RA-IDNC graph and the priority of vertices
defined in Proposition 1. Such a greedy vertex search approach was adopted in
[10, 11] without considering the physical-layer rate, and it demonstrated its
efficiency for completion time minimization. For simplicity, we use $v$ and
$v^{\prime}$ instead of $v^{k}_{r,i,h}$ and $v^{k}_{r,i^{\prime},h^{\prime}}$,
respectively. Let $\mathcal{E}_{v,v^{\prime}}$ be the adjacency connector of
vertices $v$ and $v^{\prime}$ in graph $\mathcal{G}$ such that
$\mathcal{E}_{v,v^{\prime}}=\begin{cases}1&\text{if $v$ is not adjacent to
$v^{\prime}$ in $\mathcal{G}$},\\\ 0&\text{otherwise}.\end{cases}$ (11)
Further, let $g_{v}$ denote the weighted degree of vertex $v$, which can be
expressed as
$g_{v}=\sum_{v^{\prime}\in\mathcal{G}}\mathcal{E}_{v,v^{\prime}}\omega_{o}(v^{\prime})$,
where $\omega_{o}(v^{\prime})$ is the priority of vertex $v^{\prime}$ defined
in (5). Finally, the modified weight of vertex $v$ is defined as
$\displaystyle\omega_{m}(v)$
$\displaystyle=\omega_{o}(v)g_{v}=2^{N_{w}-d_{u_{i}}+1}\bar{\mathtt{T}}_{u_{i}}(t)\left(\frac{r}{B}\right)g_{v}.$
(12)
To this end, at each step, the vertex search method adds a new vertex based on
the maximum weight. Essentially, the vertex $v^{*}$ that has the maximum weight
$\omega_{m}(v^{*})$ is selected and added to the maximal independent set
$\mathtt{I}$ (i.e., $\mathtt{I}=\\{v^{*}\\}$). Then, the subgraph
$\mathcal{G}(\mathtt{I})$, which consists of the vertices in graph $\mathcal{G}$
that are not connected to vertex $v^{*}$, is extracted and considered for the
next step. Next, a new maximum weight vertex $v^{{}^{\prime}*}$ is selected
from subgraph $\mathcal{G}(\mathtt{I})$. We repeat this process until no vertex
remains that is non-adjacent to all the vertices in the maximal independent
set $\mathtt{I}$. The steps of the greedy vertex search selection are
summarized in Algorithm 2.
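A minimal Python sketch of this greedy selection, assuming the graph is given as adjacency sets (the function names and the toy instance are ours, not from the paper):

```python
def greedy_mwis(weights, adjacent):
    """Greedy maximal IS selection in the spirit of Algorithm 2.

    weights  : vertex -> original weight omega_o(v) from Eq. (5)
    adjacent : vertex -> set of vertices adjacent (conflicting) in G
    """
    remaining = set(weights)
    I = set()
    while remaining:
        # Modified weight (12): omega_m(v) = omega_o(v) * g_v, where g_v
        # sums omega_o over the NON-adjacent vertices of the subgraph.
        def omega_m(v):
            g = sum(weights[u] for u in remaining
                    if u != v and u not in adjacent[v])
            return weights[v] * g
        v_star = max(remaining, key=omega_m)
        I.add(v_star)
        # Keep only vertices not connected to v*, i.e., the subgraph G(I).
        remaining = {u for u in remaining
                     if u != v_star and u not in adjacent[v_star]}
    return I

# Toy graph: vertices a and b conflict (say, same targeted UD), while c
# is compatible with both, so the greedy picks c first despite its
# smaller original weight.
weights = {'a': 5.0, 'b': 4.0, 'c': 3.0}
adjacent = {'a': {'b'}, 'b': {'a'}, 'c': set()}
I = greedy_mwis(weights, adjacent)
```

The toy instance illustrates the role of $g_{v}$: vertex c has the smallest original weight but the largest modified weight, because it is compatible with every other high-weight vertex.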
The transmitting UD in $\mathtt{I}$ generates a coded file by XORing all the
files identified by the vertices in $\mathtt{I}$. It also adopts the
transmission rate corresponding to the vertices of $\mathtt{I}$. It is worth
mentioning that the MWIS $\mathtt{I}$ and its corresponding modified weights
in (12) provide the following potential benefits:
* •
The modified weight of each vertex in $\mathtt{I}$ reflects the following. The
term $\left(\frac{r}{B}\right)$ provides a balance between the transmission
rate/power and the number of UDs scheduled to transmitting UD $u_{k}$. The
term $2^{N_{w}-d_{u_{i}}+1}\bar{\mathtt{T}}_{u_{i}}(t)$ classifies the UDs
based on their completion-time lower bounds, so that the most critical UDs are
prioritized for scheduling. More importantly, through the weighted degree
$g_{v}$, a vertex $v$ with a large modified weight has a large original weight
and is non-adjacent to many vertices that themselves have high original
weights.
* •
Each UD is scheduled only to a transmitting UD that cached one of its missed
files.
* •
The transmitting UD delivers an IDNC file with an adopted transmission
rate/power that provides a lower completion time to a set of UDs. This adopted
rate ensures the QoS rate guarantee and is no larger than the channel
capacities of all scheduled UDs.
### V-C Cross-layer NC Solution
The proposed iterative CLNC solution maximizes the weighted sum-rate subject
to completion time reduction constraints, i.e., it solves the problem of
determining the transmitting UDs, their transmission rates/powers, and the
transmitted file combinations in a coordinated fashion. Particularly, we
iterate between solving the completion time reduction problem for a fixed
transmit power and optimizing the power levels for a given completion time
reduction schedule.
The main philosophy of this heuristic is to iteratively include more
transmitting UDs and allocate transmission powers subject to the reduction in
the completion time. At each iteration, it first determines the scheduled UDs
by the set of chosen transmitting UDs as described in Algorithm 2. Then, given
the resulting network-coded user scheduling, it executes a power allocation
algorithm to determine the power level of the transmitting UDs that maximizes
the sum-rate and minimizes the completion time as described in Algorithm 1.
The steps of the proposed iterative solution are summarized in Algorithm 3.
1: Generate D2D-RA-IDNC graph $\mathcal{G}$.
2: Initialize $\mathtt{I}=\varnothing$.
3: Set $\mathcal{G}(\mathtt{I})\leftarrow\mathcal{G}$.
4: while $\mathcal{G}(\mathtt{I})\neq\varnothing$ do
5: $\forall v\in\mathcal{G}(\mathtt{I})$: compute $\omega_{o}(v)$ and
$\omega_{m}(v)$ using (5) and (12), respectively.
6: Select
$v^{*}=\arg\max_{v\in\mathcal{G}(\mathtt{I})}\\{\omega_{m}(v)\\}$.
7: Set $\mathtt{I}\leftarrow\mathtt{I}\cup v^{*}$.
8: Obtain $\mathcal{G}(\mathtt{I})$.
9: end while
Algorithm 2 Greedy Maximal Weight Independent Set (MWIS) Selection
In Algorithm 3, $\mathcal{A}$ is the set of selected transmitting UDs,
$\mathcal{M}_{w}$ is the set of UDs having non-empty Wants set, and
$\mathcal{X}$ is the set of all the targeted UDs.
Algorithm 3 Cross-layer Network-Coded (CLNC) Resource Scheduling Algorithm
1: Initialize: $\mathcal{A}$ = $\varnothing$,
$\mathcal{M}_{w}=\mathcal{N}_{w}$, and $\mathcal{X}=\varnothing$.
2: Initialize: Transmission power level $Q_{o}\in Q_{\text{feasible}}$ for
each potential transmitting UD.
3: Compute SINR(s) setting transmission power $Q_{o}$ and considering no
interference.
4: repeat
5: Construct the D2D-RA-IDNC graph as in Section IV-A by considering
$\mathcal{M}_{w}$ as the set of potential transmitting or targeted UDs.
6: Select maximal independent set $\mathtt{I}$ using Algorithm 2. Let the
transmitting UD be $u_{k}$, the file combination be $\mathtt{f}$ in the
maximal independent set $\mathtt{I}$, and the set of potential targeted UDs by
$u_{k}$ be $\mathtt{u}(\mathtt{f}_{u_{k}})$.
7: Compute the lower bound on the individual completion times of the targeted
UDs in $\mathcal{X}$ using (3), and increase the delay of the non-targeted UDs
in $\mathcal{M}_{w}\backslash\mathcal{X}$ by $\frac{B}{r(t)}$.
8: Set $\mathcal{A}=\\{\mathcal{A},u_{k}\\}$.
9: Consider the UDs in
$\mathcal{M}_{w}\backslash(\\{u_{k}\\}\cup\mathcal{X}(\mathtt{f}_{u_{k}}))$ as
future transmitting UDs or more targeted UDs.
10: Compute the SINRs by setting the transmission power as $Q_{o}$ and
considering interference from the UDs in the set $\mathcal{A}$.
11: Optimize the transmission power of the transmitting UDs $\mathcal{A}$
using Algorithm 1 to maximize the sum-capacity.
12: If the sum-capacity is improved, $\mathcal{A}\leftarrow\mathcal{A}\cup
u_{i}$ and $\mathcal{M}_{w}\leftarrow\mathcal{M}_{w}\setminus u_{i}$.
13: if $|\mathcal{A}|>1$ then
14: For each receiving UD, $u_{i}$, compute $R_{u_{i}}$. If $R_{u_{i}}\geq
R_{th}$ and $u_{i}\notin\mathcal{X}$, update
$\mathcal{X}\leftarrow\mathcal{X}\cup u_{i}$ and
$\mathcal{M}_{w}\leftarrow\mathcal{M}_{w}\setminus u_{i}$. On the other hand,
if $R_{u_{i}}<R_{th}$ and $u_{i}\in\mathcal{X}$, update
$\mathcal{X}\leftarrow\mathcal{X}\setminus u_{i}$ and
$\mathcal{M}_{w}\leftarrow\mathcal{M}_{w}\cup u_{i}$. Repeat this step
$\forall u_{i}\in\mathtt{u}(\mathtt{f}_{u_{k}})$ and $\forall
u_{k}\in\mathcal{A}$.
15: end if
16: If there exists $u_{k}\in\mathcal{A}$ such that none of the UDs in the
$\mathtt{u}(\mathtt{f}_{u_{k}})$ set satisfies the rate constraint, set
$\mathcal{M}_{w}\leftarrow\mathcal{M}_{w}\cup u_{k}$ and
$\mathcal{A}\leftarrow\mathcal{A}\setminus u_{k}$.
17: Recompute $\bar{\mathtt{T}}$ and store the solution that achieves the
minimum completion time.
18: until No UDs can be added to the set $\mathcal{A}$.
19: Output: Overall completion time $\bar{\mathtt{T}}$.
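The alternation at the heart of Algorithm 3 can be sketched as the following Python skeleton, with the graph construction, Algorithm 2, and Algorithm 1 abstracted as callables. This is a hedged illustration of the grow-and-re-optimize loop only, not the full algorithm: the QoS re-checking and delay bookkeeping of steps 7 and 13-17 are omitted, and all names are ours.

```python
def clnc_iterate(select_next, allocate_power, sum_capacity, max_rounds):
    """Skeleton of the Algorithm 3 alternation: grow the set of
    transmitting UDs one at a time, re-run the power allocation
    (Algorithm 1) for the enlarged set, and keep each new transmitter
    only while the sum-capacity keeps improving.

    select_next(A)     -> next candidate transmitter (Algorithm 2), or None
    allocate_power(A)  -> power vector for the transmitters in A
    sum_capacity(A, Q) -> resulting sum-capacity
    """
    A, best = [], float('-inf')
    for _ in range(max_rounds):
        cand = select_next(A)
        if cand is None:
            break
        trial = A + [cand]
        Q = allocate_power(trial)        # Algorithm 1 for the enlarged set
        cap = sum_capacity(trial, Q)
        if cap > best:                   # keep only if capacity improves
            A, best = trial, cap
        else:
            break
    return A, best

# Toy stand-ins: capacity improves with two transmitters, then degrades,
# so the loop stops after accepting the second one.
candidates = iter(['u1', 'u2', 'u3'])
caps = {1: 5.0, 2: 8.0, 3: 7.0}
A, best = clnc_iterate(
    select_next=lambda A: next(candidates, None),
    allocate_power=lambda A: [1.0] * len(A),
    sum_capacity=lambda A, Q: caps[len(A)],
    max_rounds=5,
)
```

The stopping rule in the sketch is what makes the sum-rate non-decreasing over the iterations, which is the property used in the proof of Proposition 4.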
###### Proposition 4.
The CLNC solution achieves improved sum-rate compared to the interference free
solution of [24].
###### Proof.
At each iteration of the proposed Algorithm 3, the set of transmitting UDs is
updated. Let $\mathcal{A}^{(i)}$ denote the set of transmitting UDs at the
$i$-th iteration of Algorithm 3. Recall that only a finite number of UDs can
be transmitting UDs in a given TS. Thus, the set of transmitting UDs evolves
as
$\mathcal{A}^{(1)}\rightarrow\mathcal{A}^{(2)}\rightarrow\cdots\rightarrow\mathcal{A}^{(\text{final})}.$
(13)
Note that $\mathcal{A}^{(1)}$ contains only one transmitting UD. Particularly,
the proposed scheme initially selects the transmitting UD with the maximum
number of potential receiving UDs, and such a transmitting UD is included in
$\mathcal{A}^{(1)}$. Subsequently, at each iteration, Algorithm 3 adds one more
transmitting UD to the existing set of transmitting UDs, provided that the
total sum-rate improves. To this end, at each iteration, Algorithm 3 updates
the power allocations of all the transmitting UDs to maximize the overall
sum-capacity. Essentially, the sum-rate is non-decreasing over the iterations
of Algorithm 3. Obviously, the sum-rate of the $\mathcal{A}^{(\text{final})}$
set must be at least that achieved by the $\mathcal{A}^{(1)}$ set. Recall that
the interference-free solution of [24] selects only one transmitting UD in
each TS. Thus, the achievable sum-rate of the interference-free solution of
[24] equals the sum-rate achieved by the $\mathcal{A}^{(1)}$ set. Hence, the
CLNC solution achieves an improved sum-rate compared to the interference-free
solution of [24]. ∎
Remark 1: By exploiting power allocation and the time-varying channels of the
UDs, the proposed CLNC solution activates multiple transmitting UDs at each
TS. However, for severely strong inter-device interference channels, the power
allocation may not improve the sum-capacity. In this case, the proposed
solution activates only one transmitting UD. Consequently, the
interference-free solution of [24] is a special case of the proposed CLNC
solution. Hence, the proposed CLNC solution always achieves a completion time
no larger than that of the interference-free solution.
Remark 2: When many UDs are scheduled to the transmitting UDs, the
side-information optimization increases the number of targeted UDs; however,
the sum-capacity may not be maximized. Consequently, we optimize the
power/rate of the transmitting UDs to maximize the sum-capacity. Thus, our
CLNC solution not only increases the number of targeted UDs, but also
maximizes the sum-capacity.
### V-D Complexity Analysis
For any arbitrary D2D network setting and at any iteration of the proposed
algorithm, we need to construct the D2D-RA-IDNC graph, calculate the power
allocation of the transmitting UDs, and find the MWIS.
Since each UD caches only a subset of the files, the total number of vertices
in the D2D-RA-IDNC graph corresponding to a single UD is $V=|\mathcal{C}|\times N$.
Therefore, we construct the D2D-RA-IDNC graph for all UDs by generating
$O(VN)$ vertices. Building the adjacency matrix needs a computational
complexity of $O(V^{2}N^{2})$. For the vertex search algorithm, we first need
to calculate the weights of all vertices and then find the MWIS. It is easy to
note that all UDs having vertices in the independent set have the same
transmission rate, as they initially correspond to the same transmitting UD.
Thus, the algorithm needs $|\mathcal{R}|$ maximal ISs. Note
that each maximal IS has at most $V$ vertices as each UD can be targeted by at
most one file (i.e., one vertex for each targeted UD) per transmission. Each
iteration with a given rate needs a complexity of $O(VN)$ for weight
calculations of the MWIS. It also needs searching for at most $N-1$ vertices.
Then, the complexity of the algorithm for finding the maximal ISs for all
rates and their sum weights, at most, is
$O(VN|\mathcal{R}|+(N-1)|\mathcal{R}|)=O(|\mathcal{R}|(VN+N-1))$. The
computational complexity of constructing the D2D-RA-IDNC graph, building the
adjacent matrix, and finding the MWIS is
$O(|\mathcal{R}|(VN+N-1))+O(V^{2}N^{2})=O(V^{2}N^{2})$.
On the other hand, calculating the power allocation for any fixed D2D schedule
needs
$C_{p}=O(|\mathtt{u}_{u_{1}}|\times|\mathtt{u}_{u_{2}}|\times\cdots|\mathtt{u}_{u_{K}}|)$.
Finally, Algorithm 3 iterates between constructing the D2D-RA-IDNC graph and
finding its corresponding MWIS and optimizing power levels of the transmitting
UDs, thus leading to an overall computational complexity of
$O(T(V^{2}N^{2}+C_{p}))$, where $T$ is the number of iterations.
## VI Numerical Results
In this section, we present some numerical results that compare the completion
time performance of our proposed CLNC scheme with existing coded and uncoded
schemes. We consider a D2D network where UDs are distributed randomly within a
hexagonal cell of radius $500$m. We assume the channel gains between UDs
follow the standard path-loss model, which consists of three components: 1)
path-loss of $148+37.6\log_{10}(d_{{u_{k}},{u_{i}}})$dB, where
$d_{{u_{k}},{u_{i}}}$ represents the distance between $u_{k}$-th UD and
$u_{i}$-th UD in km; 2) log-normal shadowing with $4$dB standard deviation and
3) Rayleigh channel fading with zero-mean and unit variance. We consider that
the channels are perfectly estimated. The noise power and maximum UD power
are assumed to be $-174$ dBm/Hz and $Q_{\text{max}}=-42.60$ dBm/Hz,
respectively, and the bandwidth is $1$ MHz. Unless otherwise stated, we
initially consider that each UD already caches between $45\%$ and $55\%$ of
the $\mathcal{F}$ files for the considered schemes. To evaluate the performance of
our proposed scheme with different thresholds ($R_{\text{th}1}=0.5$, and
$R_{\text{th}2}=5$), we simulate various scenarios with different number of
UDs, number of files, file sizes, and demand ratio of UDs. These thresholds
represent the minimum transmission rates required for QoS. The performances of
our joint solution for $R_{\text{th}1}=0.5$ and $R_{\text{th}2}=5$ are shown
in solid and dash red lines, respectively.
For the sake of comparison, our proposed schemes are compared with the
following existing schemes.
* •
Uncoded Broadcast: This scheme picks a random UD that broadcasts an uncoded
file from its cache set that is missing at the largest number of other UDs.
Moreover, this scheme uses the minimum channel capacity from the transmitting
UD to all other UDs as the transmission rate.
* •
Cooperative RLNC: This RLNC algorithm picks the UD with the highest side
information rank as the transmitting UD in a D2D transmission [28]. The picked
UD encodes all files using random coefficients from a large Galois field.
However, this algorithm ignores dynamic transmission rates and, for the
transmission to be successfully received by all other UDs, adopts the minimum
channel capacity from the transmitting UD to all other UDs as the
transmission rate.
* •
Cooperative IDNC: This IDNC algorithm considers cooperation among UDs and
jointly selects a set of transmitting UDs and their XOR file combinations
[12]. However, this algorithm focuses on serving a large number of UDs with a
new file in each time index to reduce the overall completion time. Due to
ignoring the dynamic rate adaptation, the minimum channel capacity from the
transmitting UDs to all targeted UDs is adopted as the transmission rate.
For completeness, we also compare our proposed scheme with the recent RA-IDNC
work in [24]. In this scheme, RA-IDNC is employed in a D2D network that allows
only one UD to transmit at a time.
Figure 2: Average completion time in sec. vs the number of UDs $N$. Figure 3:
Average completion time in sec. vs the number of files $F$.
In Fig. 2, we depict the average completion time versus the number of UDs $N$.
We consider a D2D model with a frame of $20$ files and a file size of $1$
Mbits. From this figure, we can observe that the proposed CLNC scheme offers
an improved performance in terms of completion time minimization as compared
to the other schemes for all considered number of UDs. This improved
performance is due to the fact that our proposed scheme judiciously selects
potential UDs for transmitting coded files to a set of scheduled UDs, adapts
the transmission rate, and optimizes the transmission power of each
transmitting UD. This in turn aids the file combination selection process.
The uncoded broadcast scheme sacrifices rate optimality by scheduling the
maximum number of UDs. Although the uncoded scheme needs few transmissions (at
least $F$ transmissions), it requires longer transmission durations for frame
delivery completion. This leads to a high completion time.
On the other hand, the RA-IDNC scheme improves the file selection process by
adapting the transmission rate, but it suffers from activating only one
transmitting UD at each transmission slot. This is a clear limitation of the
RA-IDNC scheme as it does not fully exploit the simultaneous transmissions
from multiple UDs. The proposed CLNC scheme strikes a balance between the
aforementioned aspects by jointly selecting the number of targeted UDs and the
transmission rate of each transmitting UD such that the overall completion
time is minimized. This results in a full utilization of simultaneous
transmissions from multiple transmitting UDs. Consequently, an improved
performance of our proposed scheme compared to the RA-IDNC scheme is achieved.
Moreover, our proposed scheme improves the used rates using power control on
each transmitting UD.
In Fig. 3, we show the average completion time versus the number of files $F$.
Fig. 3 considers different frame sizes. The simulated D2D system is composed
of $20$ UDs with a file size of $1$ Mbit. For the same reason as mentioned for
Fig. 2, our proposed scheme outperforms other schemes. It can be observed from
the figure that increasing the frame size leads to an increased completion
time of all schemes. This is because for few files, the opportunities of
mixing files using IDNC in the proposed and other NC schemes are limited. As a
result, all NC schemes have roughly similar performances. As the number of
files increases, the increase in the completion time with our proposed scheme
is low. This is due to the fact that our proposed scheme judiciously allows
each transmitting UD to decide on a set of files to be XORed. As such, they
are beneficial to a significant set of UDs that have relatively good channel
qualities. Note that the uncoded broadcast and RLNC schemes complete the frame
in fewer transmissions ($F$ transmissions) than our developed scheme. However,
each of their transmission durations is longer than a single transmission of
the proposed scheme, since they set the transmission rate to the minimum of
all achievable capacities.
Figure 4: Average completion time in sec. vs file size $B$. Figure 5: Average
completion time in sec. vs demand ratio $\mu$.
In Fig. 4, we illustrate the impact of increasing the file size $B$ on the
average completion time, i.e., how long the considered solutions take to
deliver a complete frame to the UDs. In this figure, we simulate a D2D system
composed of $20$ UDs and $15$ files.
We observe that the completion time performances of all schemes increase
linearly with the file size. This agrees with the completion time expression
in Corollary 1, where it was shown that $\mathtt{T}$ increases linearly with
$B$. From physical-layer consideration, as $B$ increases, more bits are needed
for delivering files. Thus, time delay is increased to receive files from
transmitting UDs. It can be seen that, in all the above figures, the proposed
scheme outperforms all other schemes for the different rate thresholds, as
shown by the red lines. As the rate threshold increases, the completion time
improvement increases. This is because, as the rate threshold increases, only
UDs satisfying the rate constraint are scheduled, and the transmission rate of
the transmitting UDs becomes high. Thus, the role of our proposed scheme for
completion time minimization and QoS optimization becomes more noticeable.
In Fig. 5, we illustrate the impact of changing the demand ratio $\mu$ on the
average completion time. This ratio represents the demand portion of the
requested files of the UDs. In this figure, we simulate a D2D system composed
of $10$ UDs and $8$ files, each with a size of $1$ Mbit. We can observe that the
completion time performance of our proposed scheme outperforms the
performances of other schemes for the whole range of $\mu$. It can be seen
from the figure that increasing the demand ratio leads to an increased
completion time of all schemes. This is because for high demand ratio, the
number of transmissions for delivering all the files to all the UDs of all
schemes increases. As a result, the completion time performance of the
considered schemes increases.
Finally, we provide some observations from our presented simulation results as
follows. First, it is always beneficial from network-layer perspective to
schedule many UDs with IDNC files as in the classical IDNC scheme. However,
selecting the minimum transmission rate of all the scheduled UDs degrades the
completion time performance of the classical scheme. Second, although the
uncoded broadcast and RLNC schemes schedule almost all the UDs, they set the
transmission rate to the minimum over all the scheduled UDs. Thus, their
completion time performance is degraded, especially for large network sizes,
since the minimum transmission rate over a growing set of UDs can only
decrease. Third, the RA-IDNC scheme overcomes the limitations of the
aforementioned schemes but suffers from selecting only one transmitting UD.
This limitation further degrades the completion time performance of the
RA-IDNC scheme in large networks. This is due to the fact that the RA-IDNC
scheme always selects one transmitter regardless of the size of the network.
Conversely, our transmission framework is more practically relevant as it
considers multiple transmitting UDs and optimizes the employed rates using
power control on each transmitting UD.
## VII Conclusion
In this paper, we have studied the joint optimization of CLNC and D2D
communications for the file delivery phase with the goal of minimizing the
completion time while guaranteeing UD’s QoS, subject to the UD’s cache files,
the required minimum rate, power allocation, and NC constraints. The
completion time minimization problem in interference-allowed setup is solved
over a set of transmitting UDs, their power allocation, dynamic rate selection
and transmitted file combinations. By using a graph theory technique, we
proposed a novel and efficient approach that uses cross-layer NC for power
optimization and UDs coordinated scheduling in D2D networks. Specifically, our
proposed solution judiciously iterates between finding the MWIS in the D2D-RA-
IDNC graph and optimizing the power allocation, subject to the resultant
interference of the newly added transmitting UDs. Simulation results show that
the proposed interference-allowed solution reduces the completion time
compared to the interference-free solution as well as conventional network
coding algorithms.
# Quadrupole absorption rate for atoms in circularly-polarized optical
vortices
Smail Bougouffa <EMAIL_ADDRESS>
Department of Physics, College of Science, Imam Mohammad ibn Saud Islamic
University (IMSIU), P.O. Box 90950, Riyadh 11623, Saudi Arabia
ORCiD: http://orcid.org/0000-0003-1884-4861
###### Abstract
Twisted light beams, or optical vortices, have been used to drive the circular
motion of microscopic particles in optical tweezers and have been shown to
generate vortices in quantum gases. Recent studies have established that
electric quadrupole interactions can mediate an orbital angular momentum
exchange between twisted light and the electronic degrees of freedom of atoms.
Here we consider a quadrupole atomic transition mediated by a circularly-
polarized optical vortex. We evaluate the transfer rate of the optical angular
momentum to a Cs atom involving the $6^{2}S_{1/2}\rightarrow 5^{2}D_{5/2}$
quadrupole transition and explain how the polarization state and the
topological charge of the vortex beam determine the selection rules.
Optical angular momentum transfer, quadrupole interaction, optical vortex
beams, atom-light interactions
###### pacs:
37.10.De; 37.10.Gh
## I Introduction
The present trend towards efficiency and enhanced applications in various
optical fields has led to a growing interest in developing and using twisted
light beams or optical vortices Torres and Torner (2011); Surzhykov _et al._
(2015); Babiker _et al._ (2019); Andrews (2011); Yao and Padgett (2011).
Laguerre-Gaussian beams have been suggested and used for numerous novel
applications such as high-dimensional quantum information Fickler _et al._
(2014), quantum cryptography Souza _et al._ (2008) and quantum memories
Nicolas _et al._ (2014).
Progress in laser cooling and trapping has concentrated on the interaction of
such particular forms of light with atoms Babiker _et al._ (2019);
Scholz-Marggraf _et al._ via electric dipole interactions. Quadrupole
transitions have also featured in other studies, in contexts where they are
significantly enhanced Tojo _et al._ (2005); Hu _et al._
(2012); Kern and Martin (2012). In particular, it has been shown that the
interaction of vortex light, such as Laguerre-Gaussian and Bessel-Gaussian
modes, with atoms involving electric quadrupole transitions, can affect atomic
motion Al-Awfi and Bougouffa (2019); Bougouffa and Babiker (2020a); Ray _et
al._ (2020).
Several theoretical and experimental investigations have also dealt with the
possibility of an exchange of orbital angular momentum (OAM) between
light and the internal motion of atoms Van Enk (1994); Babiker _et al._
(2002); Araoka _et al._ (2005); Löffler and Woerdman (2012); Giammanco _et
al._ (2017). The main established result of these studies is that OAM has no
impact on electric dipole transitions Babiker _et al._ (2002); Lloyd _et al._
(2012a, b).
Recently, the OAM transfer to atoms interacting with optical vortices was
considered and the absorption rate evaluated in the cases of the
$6^{2}S_{1/2}\rightarrow 5^{2}D_{5/2}$ quadrupole transition in Cs when cesium
atoms are subject to the field of a linearly polarized optical vortex
Bougouffa and Babiker (2020b). The results showed that the absorption
rate, albeit lower than the quadrupole spontaneous emission rate, is
measurable within currently available experimental parameters and should be
within the reach of modern spectroscopic techniques. Those studies were
mainly concerned with the case in which the optical vortex light is linearly
polarized, so that optical spin is ignored in the transfer process. On the
other hand, the experimental works by Schmiegelow et al. Schmiegelow _et al._
(2016); Afanasev _et al._ (2018) have indicated that an atom or an ion can
exchange two units of optical angular momentum, one unit from optical spin
and another from the OAM. To the best of our knowledge, however, the rate of
this transfer has not been evaluated. We have therefore set out to develop
the theory needed to evaluate the rate of OAM transfer from circularly
polarized optical vortices to atoms. The theory is applied to the particular
case of the $6^{2}S_{1/2}\rightarrow 5^{2}D_{5/2}$ transition in Cs when
cesium atoms are subject to the field of a circularly polarized optical
vortex. This Cs transition is
well known as a dipole-forbidden but a quadrupole-allowed transition. In
evaluating the rate of OAM transfer involving a quadrupole transition we had
to consider the relevant selection rules which involve both spin and OAM
Rajasree _et al._ (2020).
This paper is structured as follows. In Sec. II the fundamental concepts and
essential formalism involved in the interaction of twisted light with an atom
in a quadrupole-active transition are outlined, leading from the
quadrupole interaction Hamiltonian to the quadrupole Rabi frequency. Section
III is concerned with the process of OAM transfer as the atom interacts with
the optical vortex field at near-resonance, with the aim of evaluating the OAM
transfer rate. The model treats the atom as a two-level system and applies the
Fermi golden rule with appropriate use of the selection rules for quadrupole
transitions. The transfer process, however, demands a treatment that includes
the density of the continuum states as a Lorentzian function, representing the
upper atomic level as an energy band of width $\hbar\gamma$, where
$\gamma^{-1}$ is the lifetime of the upper state. Section IV deals with the
optical vortex as a circularly-polarised Laguerre-Gaussian beam whose winding
number is restricted by the optical spin and quadrupole selection rules. The
results are shown in Sec. V for the quadrupole atomic transition
$6^{2}S_{1/2}\rightarrow 5^{2}D_{5/2}$ in Cs. Section VI contains a summary
and final conclusions.
## II Quadrupole interaction Hamiltonian
We consider a two-level atom interacting with a single optical vortex beam
propagating along the +z axis. The ground and excited states of the two-level-
atom are $\\{\ket{g},\ket{e}\\}$ with level energies
$\mathcal{\varepsilon}_{1}$ and $\mathcal{\varepsilon}_{2}$, respectively,
which correspond to a transition frequency
$\omega_{a}=(\mathcal{\varepsilon}_{2}-\mathcal{\varepsilon}_{1})/\hbar$.
We focus on the case of an optical transition that is dipole-forbidden, but
quadrupole-allowed which allows us to consider only the quadrupole interaction
term in the interaction Hamiltonian, which arises from a multipolar series
expansion about the center of mass coordinate $\mathbf{R}$ as follows:
$\hat{H}_{Q}=-\frac{1}{2}\sum_{ij}\hat{Q}_{ij}\nabla_{i}\hat{E_{j}}.$ (1)
Here $x_{i}$ are the components of the internal position vector
$\mathbf{r}=(x,y,z)$ and $\nabla_{i}$ are components of the gradient operator
which act only on the spatial coordinates of the transverse electric field
vector $\mathbf{E}$ as a function of the centre of mass position vector
variable $\mathbf{R}=(X,Y,Z)$. The quadrupole tensor operator ${\hat{Q}}_{ij}$
can be written in terms of ladder operators as
$\hat{Q}_{ij}=Q_{ij}(\hat{b}+\hat{b}^{{\dagger}})$, where
$Q_{ij}=\bra{e}\hat{Q}_{ij}\ket{g}$ are the quadrupole matrix elements between
the two atomic levels, and $\hat{b}(\hat{b}^{{\dagger}})$ are the atomic level
lowering (raising) operators.
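To make this operator algebra concrete, here is a minimal sketch (not from the paper) of the two-level ladder operators as $2\times 2$ matrices, with a hypothetical numerical value standing in for a matrix element $Q_{ij}$; the basis ordering is a common convention assumed here.

```python
import numpy as np

# Two-level basis (an assumed convention): |g> = (1,0)^T, |e> = (0,1)^T
g = np.array([1.0, 0.0])
e = np.array([0.0, 1.0])

b = np.outer(g, e)         # lowering operator: b|e> = |g>, b|g> = 0
b_dag = b.conj().T         # raising operator: b_dag|g> = |e>

Q_ij = 2.5                 # hypothetical quadrupole matrix element (arbitrary units)
Q_op = Q_ij * (b + b_dag)  # operator form Q_ij (b + b_dag) used in the text
```

Since $\hat{b}+\hat{b}^{\dagger}$ is Hermitian, each tensor component $\hat{Q}_{ij}$ is a Hermitian operator, as required of an observable.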
In the following, we assume that the electric field is circularly polarized
and propagating along the $z$ direction, so optical spin will play a crucial
role here, in which case we have the following form of the quadrupole
interaction Hamiltonian
$\hat{H}_{Q}=-\frac{1}{2}\sum_{i}\left[\hat{Q}_{ix}\frac{\partial\hat{E_{x}}}{\partial
R_{i}}+\hat{Q}_{iy}\frac{\partial\hat{E_{y}}}{\partial R_{i}}\right]$ (2)
The quantized electric field can conveniently be written in terms of the
centre-of-mass position vector in cylindrical polar coordinates
$\mathbf{R}=(\rho,\phi,Z)$ as follows
$\mathbf{\hat{E}}(\mathbf{R})=\left(\mathbf{\hat{i}}\alpha+\mathbf{\hat{j}}\beta\right)u_{\\{k\\}}(\mathbf{R})\hat{a}_{\\{k\\}}e^{i\theta_{\\{k\\}}(\mathbf{R})}+H.c.$
(3)
where the complex numbers, $\alpha$ and $\beta$, determine the polarization
state of the beam, $\sigma_{z}$. In fact,
$\sigma_{z}=i(\alpha\beta^{*}-\beta\alpha^{*})$, where $\alpha$ and $\beta$
are normalized and $|\alpha|^{2}+|\beta|^{2}=1$ so that $\sigma_{z}=+1,0,-1$
for right circular, linear, and left circular polarizations, respectively.
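As a quick check of this parametrization (a sketch, not part of the paper), one can evaluate $\sigma_{z}=i(\alpha\beta^{*}-\beta\alpha^{*})$ for the three standard polarization states:

```python
import numpy as np

def spin_sigma_z(alpha, beta):
    """sigma_z = i(alpha beta* - beta alpha*) for a normalized pair (alpha, beta)."""
    assert abs(abs(alpha)**2 + abs(beta)**2 - 1.0) < 1e-12, "require |alpha|^2 + |beta|^2 = 1"
    return (1j * (alpha * np.conjugate(beta) - beta * np.conjugate(alpha))).real

r = 1 / np.sqrt(2)
print(spin_sigma_z(r, 1j * r))   # right circular -> +1
print(spin_sigma_z(1.0, 0.0))    # linear         ->  0
print(spin_sigma_z(r, -1j * r))  # left circular  -> -1
```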
Also, $u_{\\{k\\}}(\mathbf{R})$ and $\theta_{\\{k\\}}(\mathbf{R})$ are,
respectively, the amplitude function and the phase function of the LG vortex
electric field. Here the subscript $\\{k\\}$ denotes a group of indices that
specify the optical mode in terms of its axial wave-vector $k$, winding number
$\ell$ and radial number $p$. The operators $\hat{a}_{\\{k\\}}$ and
$\hat{a}_{\\{k\\}}^{\dagger}$ are the annihilation and creation operators of
the field mode $\\{k\\}$. Finally $H.c.$ stands for Hermitian conjugate. Using
this form of the electric field, we obtain the desired expression for the
quadrupole interaction Hamiltonian
$\hat{H}_{qp}=\hbar\Omega^{Q}_{\\{k\\}}(\mathbf{R})e^{i\theta_{\\{k\\}}(\mathbf{R})}\hat{a}_{\\{k\\}}(\hat{b}^{{\dagger}}+\hat{b})+H.c.$
(4)
where $\Omega^{Q}_{\\{k\\}}(\mathbf{R})$ is the quadrupole Rabi frequency,
which can be written as
$\Omega^{Q}_{\\{k\\}}(\mathbf{R})=-\frac{1}{2\hbar}\sum_{i}\left(\alpha
Q_{ix}+\beta
Q_{iy}\right)u_{\\{k\\}}\Big{(}\frac{1}{u_{\\{k\\}}}\frac{\partial
u_{\\{k\\}}}{\partial R_{i}}+i\frac{\partial\theta_{\\{k\\}}}{\partial
R_{i}}\Big{)}$ (5)
It is convenient to proceed, as we show below, by supposing a general LG mode
LGℓp of winding number $\ell$ and radial number $p$. The values of $\ell$ and
$p$ relevant to a given quadrupole transition are chosen within the selection
rules of the atomic transition considered.
## III Transition amplitude and absorption rate of OAM by atom
We consider the vortex field with an orbital angular momentum (OAM)
$\pm\ell\hbar$ and a spin angular momentum (SAM) $\pm s\hbar$ per photon where
$\ell$ and $s$ are positive Wätzel and Berakdar (2020). Hence, the transition
matrix element Bougouffa and Babiker (2020b); Forbes and Andrews (2018);
Scholz-Marggraf _et al._ , comprising only the quadrupole coupling, is
specified by $\mathrm{M}^{\\{k\\}}_{if}=\bra{f}\hat{H}_{Q}\ket{i}$, where
$\ket{i}$ and $\ket{f}$ are, respectively, the initial and final states of the
overall quantum system (atom plus optical vortex). We assume that the initial
state $\ket{i}$ has the atom in its ground state with one vortex photon
present, while the final state $\ket{f}$ has the atom in its excited state
with no field mode. Thus
$\ket{i}=\ket{g\\{1\\}_{\\{k\\}}}$ and $\ket{f}=\ket{e\\{0\\}}$.
Using the relations
$\bra{\\{0\\}}\hat{a}^{+}_{\\{k^{\prime}\\}}\ket{\\{1\\}_{\\{k\\}}}=0$, and
$\bra{\\{0\\}}\hat{a}_{\\{k^{\prime}\\}}\ket{\\{1\\}_{\\{k\\}}}=\delta_{\\{k^{\prime}\\}\\{k\\}}$
we obtain
$\displaystyle\mathrm{M}^{\\{k\\}}_{if}$ $\displaystyle=$
$\displaystyle\hbar\Omega^{Q}_{\\{k\\}}(\mathbf{R})e^{i\theta_{\\{k\\}}(\mathbf{R})}$
(6)
where $\Omega^{Q}_{\\{k\\}}(\mathbf{R})$ is the quadrupole Rabi frequency. The
final state of the system in the absorption process comprises a continuous
band of energy of width $\hbar\gamma$ where $\gamma$ is the quadrupole
spontaneous emission rate in free space. In this case the absorption rate is
given by Fermi’s golden rule Bougouffa and Babiker (2020b);
Barnett and Radmore (2002); Loudon (2000); Lloyd _et al._ (2012a); Fox (2006)
with a density of states
$\displaystyle\mathrm{\Gamma}_{if}=2\pi\big{|}\Omega^{Q}_{\\{k\\}}(\mathbf{R})\big{|}^{2}\mathcal{F}_{\omega_{a}}(\omega),$
(7)
where $\mathcal{F}_{\omega_{a}}(\omega)$ is the density of the final state and
it can be well characterized by a Lorentzian distribution of states with a
width (FWHM) matching the spontaneous quadrupole emission rate, thus
$\mathcal{F}_{\omega_{a}}(\omega)=\frac{1}{\pi}\frac{\gamma/2}{(\omega-\omega_{a})^{2}+(\gamma/2)^{2}},$
(8)
The Lorentzian distribution characterizing the density of states sets a limit
on the validity of Fermi’s golden rule for calculating the absorption rate:
the rule is valid only if the frequency width of the upper state $\ket{e}$ is
larger than the excitation rate, i.e., if the spontaneous emission rate is
larger than the Rabi frequency. At high intensities the Rabi frequency may
exceed the spontaneous emission rate, in which case the perturbative approach
culminating in Fermi’s golden rule is no longer valid and the strong-coupling
regime, involving Rabi oscillations, applies.
Substituting Eq. (8) in Eq. (7) we find for the quadrupole absorption rate
$\Gamma_{if}=\frac{\gamma}{(\omega-\omega_{a})^{2}+(\gamma/2)^{2}}\big{|}\Omega^{Q}_{\\{k\\}}(\mathbf{R})\big{|}^{2}$
(9)
In the following, we are concerned with the use of an optical vortex in the
form of a Laguerre-Gaussian (LG) mode.
## IV Circularly polarized Laguerre-Gaussian Mode
In the paraxial regime and using cylindrical coordinates Allen _et al._
(1992), the LGℓp beam is described by the amplitude function
$u_{\\{k\\}}(\rho)=u_{k\ell p}(\rho)=E_{k00}g_{\ell,p}(\rho)$ (10)
with
$g_{\ell,p}(\rho)=\sqrt{\frac{p!}{(|\ell|+p)!}}\Big{(}\frac{\rho\sqrt{2}}{w_{0}}\Big{)}^{|\ell|}L_{p}^{|\ell|}(\frac{2\rho^{2}}{w_{0}^{2}})e^{-\rho^{2}/w_{0}^{2}},$
(11)
where $L_{p}^{|\ell|}$ is the associated Laguerre polynomial and $w_{0}$ is
the radius at beam waist (at $Z=0$). The global factor $E_{k00}$ is the
constant amplitude of the corresponding plane electromagnetic wave. The phase
function of the LG mode in the paraxial regime is given by
$\theta_{klp}(\rho,Z,t)\approx kZ+\ell\phi-\omega t.$ (12)
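The radial profile of Eq. (11) can be sketched in a few lines of pure Python; the associated Laguerre polynomial is evaluated with the standard three-term recurrence (an illustration, not the paper's code):

```python
import math

def assoc_laguerre(p, a, x):
    """Associated Laguerre polynomial L_p^a(x) via the three-term recurrence."""
    if p == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + a - x
    for k in range(2, p + 1):
        prev, cur = cur, ((2 * k - 1 + a - x) * cur - (k - 1 + a) * prev) / k
    return cur

def g_lp(ell, p, rho, w0):
    """Normalized LG amplitude g_{ell,p}(rho) of Eq. (11)."""
    l = abs(ell)
    norm = math.sqrt(math.factorial(p) / math.factorial(l + p))
    x = 2.0 * rho**2 / w0**2
    return (norm * (rho * math.sqrt(2.0) / w0)**l
            * assoc_laguerre(p, l, x) * math.exp(-rho**2 / w0**2))
```

For the doughnut mode ($\ell=1$, $p=0$) the profile vanishes on the axis and peaks at $\rho=w_{0}/\sqrt{2}$, where $g=e^{-1/2}\approx 0.607$.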
For the sake of simplicity, we assume that the projection $\ell$ of the
OAM of the beam upon the $z$ axis is positive. The LGℓp beam is taken to be
circularly polarized, with polarisation vector
$\mathbf{\hat{\varepsilon}}=\mathbf{\hat{i}}\alpha+\mathbf{\hat{j}}\beta$. In
brief, every photon of the beam with waist $w_{0}$ is in the same quantum
state as categorized by the resulting four beam parameters: frequency
$\omega$, radial index $p$, winding number $\ell$, and spin $\sigma_{z}$.
The quadrupole Rabi frequency associated with the LGℓp of frequency $\omega$,
which is circularly polarised in the $x-y$ plane can be written as follows
Andrews (2011); Babiker _et al._ (2019); Klimov and Letokhov (1996); Lin _et
al._ (2016); Fickler _et al._ (2012); Domokos and Ritsch (2003); Deng and Guo
(2008)
$\Omega_{k\ell p}^{Q}(\rho)=\left(u_{p}^{\ell}(\rho)/\hbar\right)\Big{(}G({\bf
R})\mathcal{Q}_{1}+H({\bf R})\mathcal{Q}_{2}+ik\mathcal{Q}_{3}\Big{)}$ (13)
where the modified quadrupole moments are $\mathcal{Q}_{j}=(\alpha
Q_{jx}+\beta Q_{jy})$, with $j=1,2,3$ labelling the $x,y,z$ components, and
the functions $G({\bf R})$ and
$\displaystyle G({\bf
R})=\left(\frac{\left|\ell\right|X}{\rho^{2}}-\frac{2X}{w_{0}^{2}}-\frac{i\ell
Y}{\rho^{2}}+\frac{1}{L_{p}^{\left|\ell\right|}}\frac{\partial
L_{p}^{\left|\ell\right|}}{\partial X}\right),$ (14) $\displaystyle H({\bf
R})=\left(\frac{\left|\ell\right|Y}{\rho^{2}}-\frac{2Y}{w_{0}^{2}}+\frac{i\ell
X}{\rho^{2}}+\frac{1}{L_{p}^{\left|\ell\right|}}\frac{\partial
L_{p}^{\left|\ell\right|}}{\partial Y}\right),$ (15)
Finally, we get the quadrupole absorption rate for an atom interacting with
the LGℓ,p light mode that is circularly polarized in $x-y$ plane, and the atom
is characterised by the quadrupole matrix elements
$Q_{xx},Q_{xy},Q_{yy},Q_{zx}$ and $Q_{zy}$
$\Gamma_{{if}}=\frac{\kappa}{(\omega-\omega_{a})^{2}+(\kappa/2)^{2}}\left|\left(G({\bf
R})\mathcal{Q}_{1}+H({\bf
R})\mathcal{Q}_{2}+ik\mathcal{Q}_{3}\right)\right|^{2}\times\left|u_{p}^{\ell}(\rho)/\hbar\right|^{2},$
(16)
This general result applies to any atom with a quadrupole-allowed but
dipole-forbidden transition at near resonance with a circularly polarized
Laguerre-Gaussian light mode LGℓ,p. The principal constraint is that the
interaction must obey the OAM and SAM selection rules involving the quantum
number $m$ between the ground and excited atomic states $\ket{g}$ and
$\ket{e}$ and for a quadrupole transition, we have
$\Delta m=0,\pm 1,\pm 2$ (17)
The constraint of angular momentum conservation then means that the optical
vortex absorption process in a quadrupole transition can only arise for
optical vortices if the total angular momentum (TAM) of a Laguerre-Gaussian
beam has a value $\ell+\sigma_{z}$ that is not larger than the multipolarity
$L$ of the underlying atomic transition, i.e.,
$\ell+\sigma_{z}\leq L,$ (18)
where for the circularly polarized field $\sigma_{z}=\pm 1$ and the winding
numbers are $\ell\leq L\mp 1$. The details will depend on the specific atom
and its specific quadrupole transition. Note that although the radial quantum
number $p$ is important for the amplitude distribution function of the LGℓ,p
mode, the magnitude of the OAM transferred is determined exclusively by the
value of the winding number $\ell$ and the spin $\sigma_{z}$.
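These selection rules can be enumerated in a short sketch. This is an illustration under our reading of the text: circular polarization $\sigma_{z}=\pm 1$, non-negative winding numbers, total transfer $\Delta m=\ell+\sigma_{z}$, and the constraint $\ell+\sigma_{z}\leq L$ with $L=2$ for a quadrupole transition.

```python
L = 2  # multipolarity of a quadrupole transition

def allowed_pairs(delta_m, max_ell=4):
    """(ell, sigma_z) pairs with ell + sigma_z == delta_m, subject to ell + sigma_z <= L."""
    return [(ell, s) for ell in range(0, max_ell + 1) for s in (+1, -1)
            if ell + s == delta_m and ell + s <= L]

for dm in (0, 1, 2):
    print(dm, allowed_pairs(dm))
```

The $\sigma_{z}=-1$ branch reproduces the cases used in Secs. V.1-V.3: $(\ell,\sigma_{z})=(1,-1)$, $(2,-1)$ and $(3,-1)$ for $\Delta m=0,+1,+2$ respectively; the $\sigma_{z}=+1$ entries correspond to transfers in which the spin contributes in the same sense as the OAM.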
The case $(\ell=0,\sigma_{z}=0)$ is possible, but then no transfer of orbital
angular momentum occurs in the absorption process, while the case
$(\ell=1,\sigma_{z}=0)$, corresponding to a transfer of total angular momentum
of magnitude $\hbar$, is well investigated in Bougouffa and Babiker (2020b).
Here, we explore the cases $\ell\leq L\mp 1$, which are accompanied by a
transfer of total angular momentum (TAM) of magnitude $\geq\hbar$.
In the following, we explore the main result with useful illustrations by
focusing on the case that has lately been examined Bougouffa and Babiker
(2020a); Babiker _et al._ (2019); Lembessis and Babiker (2013); Al-Awfi and
Bougouffa (2019); Bougouffa and Babiker (2020b), namely an LG mode of
winding number $\ell\leq L\mp 1$ and radial number $p$. In the simplest
case, where the mode is a doughnut mode ($p=0$), the last terms
involving the derivatives in $G({\bf R})$ and $H({\bf R})$ given by Eqs.
(14,15) vanish, as $L_{0}^{\left|\ell\right|}$ is a constant for all $\ell$.
However, the case $p\neq 0$ is also of interest, since the value of $p$ is
significant for the intensity distribution. A particular atomic transition we
shall consider in showing the results is that of the neutral Cs atom, namely
the $6^{2}S_{1/2}\rightarrow 5^{2}D_{5/2}$ transition.
## V Results and discussion
To explore the results, we need to evaluate the quadrupole matrix elements
$Q_{xx},Q_{xy},Q_{yy},Q_{zx}$ and $Q_{zy}$, which are related to the
considered atomic transition, depending on the OAM selection rules. Indeed,
using the normalized hydrogen-like wave function $\psi_{nLm}$ Bransden _et
al._ (2003); Fischer (1973), with an appropriate value of the effective
nuclear charge of $Z_{a}=8.56$ Bransden _et al._ (2003); Fischer (1973); Le
Kien _et al._ (2018); Varshalovich _et al._ (1988); Yannopapas and
Paspalakis (2015); Ray _et al._ (2020), the quadrupole matrix elements can be
calculated Bougouffa and Babiker (2020b) from
$Q_{\alpha\beta}=e\bra{\psi_{f}}x_{\alpha}x_{\beta}\ket{\psi_{i}}$, where
$x_{\alpha}=(x,y,z)$. Straightforward evaluations yield the following:
* •
for the transition $\ket{L=0,m=0}\rightarrow\ket{L=2,m=0}$, we find that
$\mathcal{Q}_{1}=\alpha Q_{xx}$, $\mathcal{Q}_{2}=\beta Q_{xx}$, and
$\mathcal{Q}_{3}=0$,
* •
for the case $\ket{L=0,m=0}\rightarrow\ket{L=2,m=\pm 1}$, we have
$\mathcal{Q}_{1}=\mathcal{Q}_{2}=0$ and $\mathcal{Q}_{3}=i|Q_{xz}|(\alpha\pm
i\beta)$,
* •
for the transition $\ket{L=0,m=0}\rightarrow\ket{L=2,m=\pm 2}$, we have
$\mathcal{Q}_{2}=\pm i\mathcal{Q}_{1}=\pm i|Q_{xx}|(\alpha\pm i\beta)$ and
$\mathcal{Q}_{3}=0$.
We consider a quadrupole transition with the selection rules $\Delta L=2$ and
$\Delta m=0,\pm 1,\pm 2$ applicable for the $(6^{2}S_{1/2}\rightarrow
5^{2}D_{5/2})$ quadrupole transition in Cs.
### V.1 No OAM transfer case
For this case $\Delta m=0$. Expressing lengths in units of $w_{0}$, so that
$\bar{\rho}=\rho/w_{0}$ etc., the Rabi frequency can be written as
$\displaystyle\Omega_{k\ell p}^{Q}(\rho)$ $\displaystyle=$
$\displaystyle|Q_{xx}|\left(u_{0}^{|\ell|}(\rho)/\hbar\right)\Big{(}\alpha
G({\bf R})+\beta H({\bf R})\Big{)}$ (19) $\displaystyle=$
$\displaystyle\Omega_{01}g_{\ell,p}(\bar{\rho})\Big{(}(\frac{\ell}{\bar{\rho}^{2}}-2+\frac{1}{\bar{\rho}}\frac{1}{L_{p}^{\left|\ell\right|}}\frac{\partial
L_{p}^{\left|\ell\right|}}{\partial\bar{\rho}})(\alpha\bar{X}+\beta\bar{Y})+\frac{i\ell}{\bar{\rho}^{2}}(\beta\bar{X}-\alpha\bar{Y})\Big{)}$
where $\Omega_{01}$ is a scaling factor for the Rabi frequency
$\Omega_{01}=\frac{1}{\hbar}\frac{E_{k00}|Q_{xx}|}{w_{0}}.$ (20)
The requirement of OAM and SAM conservation implies that $\ell=1$ and
$\sigma_{z}=-1$.
### V.2 OAM transfer case
For this case $\Delta m=\pm 1$, and the quadrupole Rabi frequency can be written as
$\displaystyle\Omega_{k\ell p}^{Q}(\bar{\rho})=-\Omega_{02}(\alpha\pm
i\beta)kw_{0}g_{\ell,p}(\bar{\rho}),$ (21)
where $\Omega_{02}$ is a scaling factor for the Rabi frequency
$\Omega_{02}=\frac{1}{\hbar}\frac{E_{k00}|Q_{xz}|}{w_{0}}.$ (22)
The absorption rate is then given by
$\displaystyle\mathrm{\Gamma}_{if}$ $\displaystyle=$ $\displaystyle\frac{2\pi
w_{0}^{2}}{c^{2}}\big{|}\Omega_{02}\big{|}^{2}\frac{\gamma/2}{\pi}\frac{\omega^{2}}{(\omega-\omega_{a})^{2}+(\gamma/2)^{2}}\big{|}g_{\ell,p}(\bar{\rho})\big{|}^{2}$
(23)
It is clear that for $\Delta m=+1$ and the right circularly polarized field
$(\alpha+i\beta=0)$, the quadrupole Rabi frequency is zero. Also, for $\Delta
m=-1$ and the left circularly polarized field $(\alpha-i\beta=0)$, the
quadrupole Rabi frequency is zero. Then the requirement of OAM and SAM
conservation implies that $\ell=2$ and $\sigma_{z}=-1$ for $\Delta m=+1$.
### V.3 Total angular momentum transfer case
In this case $\Delta m=\pm 2$, the quadrupole moments are $\mathcal{Q}_{2}=\pm
i\mathcal{Q}_{1}=\pm i|Q_{xx}|(\alpha\pm i\beta)$, and $\mathcal{Q}_{3}=0$ and
the Rabi frequency Eq. (13) is as follows:
$\displaystyle\Omega_{k\ell p}^{Q}(\bar{\rho})$ $\displaystyle=$
$\displaystyle|Q_{xx}|(\alpha\pm
i\beta)\left(u_{p}^{|\ell|}(\rho)/\hbar\right)\Big{(}G({\bf R})\pm iH({\bf
R})\Big{)}$ (24)
or
$\displaystyle\Omega_{k\ell
p}^{Q}(\bar{\rho})=\begin{cases}\Omega_{01}g_{\ell,p}(\bar{\rho})(\alpha+i\beta)\Big{(}\bar{X}+i\bar{Y}\Big{)}\Big{(}-2+\frac{1}{\bar{\rho}}\frac{1}{L_{p}^{\left|\ell\right|}}\frac{\partial
L_{p}^{\left|\ell\right|}}{\partial\bar{\rho}}\Big{)},\text{ for }\Delta
m=+2,\\\
\Omega_{01}g_{\ell,p}(\bar{\rho})(\alpha-i\beta)\Big{(}\bar{X}-i\bar{Y}\Big{)}\Big{(}\frac{2|\ell|}{\bar{\rho}^{2}}-2+\frac{1}{\bar{\rho}}\frac{1}{L_{p}^{\left|\ell\right|}}\frac{\partial
L_{p}^{\left|\ell\right|}}{\partial\bar{\rho}}\Big{)},\text{ for }\Delta
m=-2.\end{cases}$ (25)
Similarly, for $\Delta m=+2$ the quadrupole Rabi frequency is zero for the
right circularly polarized field $\sigma_{z}=+1$, and conversely the
quadrupole Rabi frequency vanishes for the left circularly polarized field
for $\Delta m=-2$. The requirement of OAM and SAM conservation then implies
that $\ell=3$ and $\sigma_{z}=-1$ for $\Delta m=+2$.
The absorption rate is then given by
$\displaystyle\mathrm{\Gamma}_{if}=\begin{cases}\frac{\kappa}{(\omega-\omega_{a})^{2}+(\kappa/2)^{2}}\big{|}\Omega_{01}\big{|}^{2}\big{|}\bar{\rho}g_{\ell,p}(\bar{\rho})\big{|}^{2}\Big{|}-2+\frac{1}{\bar{\rho}}\frac{1}{L_{p}^{\left|\ell\right|}}\frac{\partial
L_{p}^{\left|\ell\right|}}{\partial\bar{\rho}}\Big{|}^{2},\text{ for }\Delta
m=+2,\\\
\frac{\kappa}{(\omega-\omega_{a})^{2}+(\kappa/2)^{2}}\big{|}\Omega_{01}\big{|}^{2}\big{|}\bar{\rho}g_{\ell,p}(\bar{\rho})\big{|}^{2}\Big{|}\frac{2|\ell|}{\bar{\rho}^{2}}-2+\frac{1}{\bar{\rho}}\frac{1}{L_{p}^{\left|\ell\right|}}\frac{\partial
L_{p}^{\left|\ell\right|}}{\partial\bar{\rho}}\Big{|}^{2},\text{ for }\Delta
m=-2.\end{cases}$ (26)
where $\Omega_{01}=\frac{1}{\hbar}\frac{E_{k00}|Q_{xx}|}{w_{0}}$ is a scaling
factor for the Rabi frequency and $g_{\ell,p}$ is given by Eq.(11).
Typical parameters in this case are Chan _et al._ (2016) $\lambda=685$ nm,
$|Q_{xx}|\simeq|Q_{xz}|\simeq 10ea_{0}^{2}$, and the spontaneous decay rate is
$\Gamma_{S}=3.34\times 10^{7}$ s$^{-1}$ Tojo _et al._ (2004, 2005). The beam
parameters are chosen such that the beam waist is $w_{0}=\lambda\xi$, where
$\xi$ is a real number, and the intensity is $I=\epsilon_{0}cE_{k00}^{2}/2$.
Assuming a moderate laser intensity $I=40\times 10^{4}$ W m$^{-2}$ Chan _et
al._ (2016), the scaling factor of the Rabi frequency can be written as
$\Omega_{01}\simeq\Omega_{02}\simeq\frac{1}{\hbar}\big{(}\frac{2I}{\epsilon_{0}c}\big{)}^{1/2}\frac{|Q_{xx}|}{w_{0}}=\frac{\Omega_{0}}{\xi},$
(27)
where $\Omega_{0}=3.25\times 10^{-2}\Gamma_{S}$. We must make an appropriate
choice of the beam waist to ensure that $\Omega_{0}\ll\Gamma_{S}$, which is
the condition of the validity of the Fermi golden rule. On the other hand, the
Lorentzian density of states is chosen with a width given by the spontaneous
emission rate $\kappa=\Gamma_{S}$, where $\Gamma_{S}\ll\omega_{a}$.
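As a sanity check (not part of the derivation), the quoted value $\Omega_{0}=3.25\times 10^{-2}\Gamma_{S}$ can be reproduced numerically from Eq. (27) with the parameters above; the sketch below uses CODATA values for the physical constants.

```python
# Numerical check of Eq. (27): Omega_0 = E_k00 |Q_xx| / (hbar * lambda)
# with lambda = 685 nm, |Q_xx| ~ 10 e a0^2, I = 40e4 W/m^2 should come
# out close to the quoted 3.25e-2 Gamma_S.
import math

hbar = 1.054571817e-34      # J s
eps0 = 8.8541878128e-12     # F/m
c = 2.99792458e8            # m/s
e = 1.602176634e-19         # C
a0 = 5.29177210903e-11      # Bohr radius, m

lam = 685e-9                # transition wavelength, m
Q = 10 * e * a0**2          # quadrupole moment |Q_xx|, C m^2
I = 40e4                    # laser intensity, W/m^2
Gamma_S = 3.34e7            # spontaneous decay rate, 1/s

E_k00 = math.sqrt(2 * I / (eps0 * c))    # field amplitude, V/m
Omega_0 = E_k00 * Q / (hbar * lam)       # scaling Rabi frequency, 1/s

print(Omega_0 / Gamma_S)    # ~3.2e-2, consistent with the quoted 3.25e-2
```

The small residual difference from $3.25\times 10^{-2}$ is at the level of the rounding of the quoted inputs.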
In Figure 1, we present the variation of the quadrupole Rabi frequency
$\Omega/\Omega_{0}$ as a function of the radial position of the atom for
different values of the beam waist $w_{0}/\lambda=5,8$, and for two values of
the beam radial number $p=0,1$, for the case $\Delta m=0$, i.e.,
($\ell=1,\sigma_{z}=-1$). The maximum of the function shifts away from the
origin with increasing beam waist, while the value of the maximum decreases.
The insets show the cylindrically symmetric Rabi frequency for the case
$w_{0}/\lambda=5$. The curves for the right circularly polarized field and
$\ell=-1$ are similar.
Figure 1: (Color online) The variation with radial position of the quadrupole
Rabi frequency $\Omega/\Omega_{0}$ for an atom in a Laguerre-Gaussian mode
LG$_{\ell,p}$, $\Delta m=0$, $\ell=1$ and $\sigma_{z}=-1$. (a) $p=0$. (b)
$p=1$. The solid curves are for $w_{0}/\lambda=5$, while the dashed curves are
for $w_{0}/\lambda=8$. The insets show the cylindrically symmetric Rabi
frequency for the case $w_{0}/\lambda=5$. The scaling factor is
$\Omega_{0}=\frac{1}{\hbar}\frac{E_{k00}|Q_{xx}|}{\lambda}=3.25\times
10^{-2}\Gamma_{S}$.
In Figure 2, we present the variation of the quadrupole Rabi frequency
$\Omega/\Omega_{0}$ as a function of the radial position of the atom for
different values of the beam waist $w_{0}/\lambda=5,8$, and for two values of
the beam radial number $p=0,1$, for the case $\Delta m=+1$, i.e., $\ell=2$ and
$\sigma_{z}=-1$. The maximum of the function shifts away from the origin with
increasing beam waist, but the value of the maximum is independent of $w_{0}$.
The insets show the cylindrically symmetric Rabi frequency for the case
$w_{0}/\lambda=5$. In addition, the quadrupole Rabi frequency vanishes for the
right circularly polarized field. For the case $\Delta m=-1$, i.e., ($\ell=-2$
and $\sigma_{z}=+1$), we get similar curves.
Figure 2: (Color online) The variation with radial position of the quadrupole
Rabi frequency $\Omega/\Omega_{0}$ for an atom in a Laguerre-Gaussian mode
LG$_{\ell,p}$, $\Delta m=1$ and ($\ell=2,\sigma_{z}=-1$). Left panel: $p=0$.
Right panel: $p=1$. The solid curves are for $w_{0}/\lambda=5$, while the
dashed curves are for $w_{0}/\lambda=8$. The insets show the cylindrically
symmetric Rabi frequency for the case $w_{0}/\lambda=5$. The scaling factor is
$\Omega_{0}=\frac{1}{\hbar}\frac{E_{k00}|Q_{xx}|}{\lambda}=3.25\times
10^{-2}\Gamma_{S}$.
In Figure 3, we present the variation of the quadrupole Rabi frequency
$\Omega/\Omega_{0}$ as a function of the radial position of the atom for
different values of the beam waist $w_{0}/\lambda=5,8$, and for two values of
the beam radial number $p=0,1$, for the case $\Delta m=+2$, i.e.,
($\ell=3,\sigma_{z}=-1$). The maximum of the function shifts away from the
origin with increasing beam waist, and the value of the maximum decreases.
The insets show the cylindrically symmetric Rabi frequency for the case
$w_{0}/\lambda=5$. We get similar curves for the case $\Delta m=-2$, i.e.,
($\ell=-3,\ \sigma_{z}=+1$).
Figure 3: (Color online) The variation with radial position of the quadrupole
Rabi frequency $\Omega/\Omega_{0}$ for an atom in a Laguerre-Gaussian mode
LG$_{\ell,p}$, $\Delta m=2$, i.e., ($\ell=3,\sigma_{z}=-1$). Left panel: $p=0$.
Right panel: $p=1$. The solid curves are for $w_{0}/\lambda=5$, while the
dashed curves are for $w_{0}/\lambda=8$. The field is left circularly polarized,
$\sigma_{z}=-1$. The insets show the cylindrically symmetric Rabi frequency
for the case $w_{0}/\lambda=5$. The scaling factor is
$\Omega_{0}=\frac{1}{\hbar}\frac{E_{k00}|Q_{xx}|}{\lambda}=3.25\times
10^{-2}\Gamma_{S}$.
Figures 1 and 3 show that the quadrupole Rabi frequency is very small compared
with that of Figure 2; the absorption rate for the case $\Delta m=+1$ is
therefore the most relevant, and the other cases are negligible. In Figure 4,
we present the variation of the absorption rate $\Gamma_{if}/\Gamma_{S}$ as a
function of the radial position of the atom $\rho/\lambda$ for different
values of the beam waist $w_{0}/\lambda=5,8$ for the case $\Delta m=+1$, i.e.,
($\ell=2,\sigma_{z}=-1$). The maximum of the function shifts away from the
origin with increasing beam waist, but the value of the maximum is independent
of $w_{0}$.
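The outward shift of the maximum with the beam waist can be illustrated with a short numerical sketch. We assume here the standard Laguerre-Gaussian radial amplitude for $g_{\ell,p}$, since Eq. (11) is not reproduced in this excerpt, and we ignore the additional bracketed factors in the full Rabi frequency of Eq. (26); the sketch therefore only shows the qualitative trend seen in Figures 1-3.

```python
# Sketch assuming g_{l,p} has the standard LG form
#   g ∝ (sqrt(2) rho / w0)^|l| * L_p^|l|(2 rho^2/w0^2) * exp(-rho^2/w0^2),
# restricted to p = 0 (L_0 = 1). The factor rho^|l| exp(-rho^2/w0^2)
# then peaks at rho = w0 sqrt(|l|/2): the maximum moves outward
# linearly with the beam waist.
import math

def lg_radial(rho, ell, w0):
    """p = 0 Laguerre-Gaussian radial amplitude (unnormalised)."""
    return (math.sqrt(2.0) * rho / w0) ** abs(ell) * math.exp(-rho**2 / w0**2)

def peak_position(ell, w0, rho_max=30.0, n=3000):
    """Locate the maximum on a fine radial grid (rho in units of lambda)."""
    grid = [i * rho_max / n for i in range(1, n + 1)]
    return max(grid, key=lambda r: lg_radial(r, ell, w0))

for w0 in (5.0, 8.0):                  # w0 / lambda as in the figures
    rho_num = peak_position(2, w0)      # |l| = 2 (the Delta m = +1 case)
    rho_ana = w0 * math.sqrt(2 / 2.0)   # analytic w0 sqrt(|l|/2)
    print(w0, rho_num, rho_ana)
```

With this normalisation the peak value $(\sqrt{|\ell|})^{|\ell|}e^{-|\ell|/2}$ is independent of $w_{0}$, consistent with the behaviour reported for Figures 2 and 4.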
Figure 4: (Color online) The variation with radial position of the quadrupole
absorption rate $\Gamma_{if}/\Gamma_{S}$ for $\Delta m=+1$, i.e.,
($\ell=2,\sigma_{z}=-1$), for an atom in a Laguerre-Gaussian mode
LG$_{\ell,p}$. Left panel: $p=0$. Right panel: $p=1$. The solid line is for
$w_{0}/\lambda=5$; the dashed line is for $w_{0}/\lambda=8$.
## VI Conclusion
In summary, we have investigated the interaction of atoms with light carrying
orbital angular momentum (OAM) and spin angular momentum (SAM). The principal
purpose is to evaluate the rate at which total angular momentum is absorbed
from circularly polarized light by atoms in a dipole-forbidden but
quadrupole-allowed transition at near-resonance. We have explored the proposed
technique for a particular case, the $6^{2}S_{1/2}\rightarrow 5^{2}D_{5/2}$
quadrupole transition in the Cs atom, which obeys the requirements of OAM and
SAM conservation consistent with the rules $\Delta m=0,\pm 1,\pm 2$ and
$\ell+\sigma_{z}\leqslant L$. This transition represents the interesting case;
the other cases do not contribute to the process. Our extension to circularly
polarized optical vortices is well suited to practical realizations.
###### Acknowledgements.
The author is grateful to Professor Mohamed Babiker for helpful discussions.
## References
  * Torres and Torner (2011) J. P. Torres and L. Torner, _Twisted photons: applications of light with orbital angular momentum_ , edited by J. P. Torres and L. Torner (John Wiley & Sons, 2011).
* Surzhykov _et al._ (2015) A. Surzhykov, D. Seipt, V. G. Serbo, and S. Fritzsche, Phys Rev A 91 (2015), 10.1103/physreva.91.013403.
* Babiker _et al._ (2019) M. Babiker, D. L. Andrews, and V. Lembessis, J Opt 21, 013001 (2019).
* Andrews (2011) D. L. Andrews, _Structured light and its applications: An introduction to phase-structured beams and nanoscale optical forces_ (Academic press, 2011).
* Yao and Padgett (2011) A. M. Yao and M. J. Padgett, Adv Opt Photonics 3, 161 (2011).
* Fickler _et al._ (2014) R. Fickler, R. Lapkiewicz, M. Huber, M. P. Lavery, M. J. Padgett, and A. Zeilinger, Nat. Commun 5, 1 (2014).
* Souza _et al._ (2008) C. Souza, C. Borges, A. Khoury, J. Huguenin, L. Aolita, and S. Walborn, Phys Rev A 77, 032345 (2008).
* Nicolas _et al._ (2014) A. Nicolas, L. Veissier, L. Giner, E. Giacobino, D. Maxein, and J. Laurat, Nat Photonics 8, 234 (2014).
  * Scholz-Marggraf _et al._ (2014) H. M. Scholz-Marggraf, S. Fritzsche, V. G. Serbo, A. Afanasev, and A. Surzhykov, Phys Rev A 90, 013425 (2014).
* Tojo _et al._ (2005) S. Tojo, T. Fujimoto, and M. Hasuo, Phys Rev A 71, 012507 (2005).
* Hu _et al._ (2012) S.-M. Hu, H. Pan, C.-F. Cheng, Y. Sun, X.-F. Li, J. Wang, A. Campargue, and A.-W. Liu, Astrophys J 749, 76 (2012).
* Kern and Martin (2012) A. Kern and O. J. Martin, Phys Rev A 85, 022501 (2012).
* Al-Awfi and Bougouffa (2019) S. Al-Awfi and S. Bougouffa, Results Phys 12, 1357 (2019).
* Bougouffa and Babiker (2020a) S. Bougouffa and M. Babiker, Phys Rev A 101, 043403 (2020a).
* Ray _et al._ (2020) T. Ray, R. K. Gupta, V. Gokhroo, J. L. Everett, T. N. Nieddu, K. S. Rajasree, and S. N. Chormaic, New J Phys (2020), 10.1088/1367-2630/ab8265.
* Van Enk (1994) S. Van Enk, Quantum Optics: Journal of the European Optical Society Part B 6, 445 (1994).
* Babiker _et al._ (2002) M. Babiker, C. Bennett, D. Andrews, and L. D. Romero, Phys Rev Lett 89, 143601 (2002).
* Araoka _et al._ (2005) F. Araoka, T. Verbiest, K. Clays, and A. Persoons, Phys Rev A 71, 055401 (2005).
* Löffler and Woerdman (2012) W. Löffler and J. Woerdman, in _Complex Light and Optical Forces VI_ , Vol. 8274 (International Society for Optics and Photonics, 2012) p. 827404.
* Giammanco _et al._ (2017) F. Giammanco, A. Perona, P. Marsili, F. Conti, F. Fidecaro, S. Gozzini, and A. Lucchesini, Opt Lett 42, 219 (2017).
* Lloyd _et al._ (2012a) S. Lloyd, M. Babiker, and J. Yuan, Phys Rev Lett 108, 074802 (2012a).
* Lloyd _et al._ (2012b) S. M. Lloyd, M. Babiker, and J. Yuan, Physical Review A 86, 023816 (2012b).
* Bougouffa and Babiker (2020b) S. Bougouffa and M. Babiker, Phys Rev A 102, 063706 (2020b).
* Schmiegelow _et al._ (2016) C. T. Schmiegelow, J. Schulz, H. Kaufmann, T. Ruster, U. G. Poschinger, and F. Schmidt-Kaler, Nat Commun 7, 12998 (2016).
* Afanasev _et al._ (2018) A. Afanasev, C. E. Carlson, C. T. Schmiegelow, J. Schulz, F. Schmidt-Kaler, and M. Solyanik, New J Phys 20, 023032 (2018).
* Rajasree _et al._ (2020) K. S. Rajasree, R. K. Gupta, V. Gokhroo, F. L. Kien, T. Nieddu, T. Ray, S. N. Chormaic, and G. Tkachenko, Phys. Rev. Research 2, 033341 (2020).
* Wätzel and Berakdar (2020) J. Wätzel and J. Berakdar, Phys. Rev. A 102, 063105 (2020).
* Forbes and Andrews (2018) K. A. Forbes and D. L. Andrews, in _Complex Light and Optical Forces XII_ , Vol. 10549 (International Society for Optics and Photonics, 2018) p. 1054915.
* Barnett and Radmore (2002) S. Barnett and P. M. Radmore, _Methods in theoretical quantum optics_ , Vol. 15 (Oxford University Press, 2002).
* Loudon (2000) R. Loudon, _The quantum theory of light_ (OUP Oxford, 2000).
* Fox (2006) M. Fox, _Quantum optics: an introduction_ , Vol. 15 (OUP Oxford, 2006).
* Allen _et al._ (1992) L. Allen, M. W. Beijersbergen, R. Spreeuw, and J. Woerdman, Phys Rev A 45, 8185 (1992).
* Klimov and Letokhov (1996) V. Klimov and V. Letokhov, Phys Rev A 54, 4408 (1996).
* Lin _et al._ (2016) L. Lin, Z. H. Jiang, D. Ma, S. Yun, Z. Liu, D. H. Werner, and T. S. Mayer, Appl Phys Lett 108, 171902 (2016).
* Fickler _et al._ (2012) R. Fickler, R. Lapkiewicz, W. N. Plick, M. Krenn, C. Schaeff, S. Ramelow, and A. Zeilinger, Science 338, 640 (2012).
* Domokos and Ritsch (2003) P. Domokos and H. Ritsch, JOSA B 20, 1098 (2003).
* Deng and Guo (2008) D. Deng and Q. Guo, J Opt A: Pure Appl Opt 10, 035101 (2008).
* Lembessis and Babiker (2013) V. Lembessis and M. Babiker, Phys Rev Lett 110, 083002 (2013).
* Bransden _et al._ (2003) B. H. Bransden, C. J. Joachain, and T. J. Plivier, _Physics of atoms and molecules_ (Pearson education, 2003).
* Fischer (1973) C. F. Fischer, Atom. Data Nucl. Data Tabl. 12, 87 (1973).
* Le Kien _et al._ (2018) F. Le Kien, T. Ray, T. Nieddu, T. Busch, and S. N. Chormaic, Phys Rev A 97, 013821 (2018).
* Varshalovich _et al._ (1988) D. A. Varshalovich, A. N. Moskalev, and V. K. Khersonskii, “Quantum theory of angular momentum,” (1988).
* Yannopapas and Paspalakis (2015) V. Yannopapas and E. Paspalakis, J Mod Opt 62, 1435 (2015).
* Chan _et al._ (2016) E. A. Chan, S. A. Aljunid, N. I. Zheludev, D. Wilkowski, and M. Ducloy, Opt Lett 41, 2005 (2016).
* Tojo _et al._ (2004) S. Tojo, M. Hasuo, and T. Fujimoto, Phys Rev Lett 92, 053001 (2004).
# Boundary value problems for two dimensional steady incompressible fluids
Diego Alonso-Orán Institut für Angewandte Mathematik, Universität Bonn,
Endenicher Allee 60, 53115 Bonn, Germany<EMAIL_ADDRESS>and Juan
J.L. Velázquez Institut für Angewandte Mathematik, Universität Bonn,
Endenicher Allee 60, 53115 Bonn, Germany<EMAIL_ADDRESS>
###### Abstract.
In this paper we study the solvability of different boundary value problems
for the two dimensional steady incompressible Euler equation and for the
magneto-hydrostatic equation. Two main methods are currently available to
study those problems, namely the Grad-Shafranov method and the vorticity
transport method. We describe for which boundary value problems these methods
can be applied. The obtained solutions have non-vanishing vorticity.
## 1. Introduction
In this paper we consider several boundary value problems for the two
dimensional incompressible steady Euler equation describing the motion of an
inviscid fluid, given by
$\left\{\begin{array}{ll}v\cdot\nabla v=-\nabla p,&\mbox{in }\Omega,\\ \nabla\cdot v=0,&\mbox{in }\Omega,\end{array}\right.$ (1.1)
where $v:\Omega\to\mathbb{R}^{2}$ is the fluid velocity field and
$p:\Omega\to\mathbb{R}$ is the scalar pressure on a suitable domain $\Omega$.
System (1.1) can be rewritten as
$\left\{\begin{array}{ll}v\times\omega=\nabla H,&\mbox{in }\Omega,\\ \nabla\times v=\omega,&\mbox{in }\Omega,\\ \nabla\cdot v=0,&\mbox{in }\Omega,\end{array}\right.$ (1.2)
by using the well-known identity
$(v\cdot\nabla)v-\frac{1}{2}\nabla(\left|v\right|^{2})=-v\times\omega$. Here
the function $H=p+\frac{1}{2}\left|v\right|^{2}$ is known as the Bernoulli
function, and the curl of the velocity field, $\nabla\times v=\omega$, is
called the vorticity.
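For completeness, the one-line step from (1.1) to (1.2) via this identity reads

```latex
(v\cdot\nabla)v=\tfrac{1}{2}\nabla\big(|v|^{2}\big)-v\times\omega
\quad\Longrightarrow\quad
\tfrac{1}{2}\nabla\big(|v|^{2}\big)-v\times\omega=-\nabla p
\quad\Longrightarrow\quad
v\times\omega=\nabla\Big(p+\tfrac{1}{2}|v|^{2}\Big)=\nabla H .
```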
Solutions to (1.1) with $\omega=0$ are termed irrotational solutions. It is
well known that boundary value problems for (1.1) (even in the three
dimensional case) reduce to boundary value problems for the Laplace equation.
Indeed, if $\omega=0$ we have that $v=\nabla\psi$, and the second equation in
(1.1) implies that $\Delta\psi=0$. Technical difficulties can arise for
particular types of boundary conditions. Nevertheless, the well-developed
theory of harmonic functions can be applied to study those problems.
Constructing flows with non-vanishing vorticity is more challenging, and the
corresponding boundary value problems have been less studied in the
mathematical literature.
A system of equations which is mathematically equivalent to (1.1) is the set
of equations describing magnetohydrostatics (MHS). This is just the system of
magnetohydrodynamic equations for incompressible fluids with zero fluid
velocity, namely
$\left\{\begin{array}{ll}B\times j=-\nabla p,&\mbox{in }\Omega,\\ \nabla\times B=j,&\mbox{in }\Omega,\\ \nabla\cdot B=0,&\mbox{in }\Omega,\end{array}\right.$ (1.3)
where $B$ denotes the magnetic field, $j$ the current density and $p$ the
fluid pressure. A quick inspection of equations (1.3) reveals the equivalence
of the magnetohydrostatic equations and the Euler equations (1.2) under the
transformation of variables
$\displaystyle B\leftrightarrow v,\quad j\leftrightarrow\omega,\quad p\leftrightarrow-H.$ (1.4)
Magnetohydrostatics is relevant in a wide variety of problems in astrophysical
plasmas, describing coronal field structures and stellar winds. The system
(1.3) is also a central model in the study of plasma confinement for fusion
(cf. [12, 13, 19]).
It is not a priori clear for which types of boundary conditions the problems
(1.1) or (1.3) can be solved. This issue was considered in the seminal paper
of Grad and Rubin [14], where the authors describe several meaningful
boundary value problems related to the MHS equations in the two and three
dimensional cases. The main goal of this article is to study the solvability
of different types of boundary value problems for the two dimensional steady
incompressible Euler equations (1.1) (or, equivalently, the MHS equations
(1.3)). A relevant feature of the solutions constructed in this paper is that
the vorticity $\omega$ (or the current $j$) is different from zero for
generic choices of the boundary values. Since our main goal is to examine the
types of boundary conditions yielding well-posedness for (1.1) or (1.3), we
will restrict ourselves to a very particular geometric setting, namely we
will assume that
$\Omega=\mathbb{S}^{1}\times(0,L),$ (1.5)
with $L>0.$ There are several reasons to choose this particular domain.
First, due to the directionality of the velocity field it is natural to
impose different boundary conditions on different parts of the boundary
$\partial\Omega$. More precisely, one can impose different boundary
conditions on the subsets of $\partial\Omega$ where $v\cdot n>0$ or $v\cdot
n<0$. However, at the points of the boundary where $v\cdot n=0$ singularities
of the solutions can arise, which introduces additional technical
difficulties. The analysis of these singular behaviours is interesting, but
it will not be considered in this paper.
Notice that if $\Omega$ is as in (1.5), we have
$\partial\Omega=\partial\Omega_{+}\cup\partial\Omega_{-}$, where
$\partial\Omega_{+}=\mathbb{S}^{1}\times\{L\}$ and
$\partial\Omega_{-}=\mathbb{S}^{1}\times\{0\}$. In all the cases considered
in this manuscript we will impose different types of boundary conditions on
$\partial\Omega_{+}$ and $\partial\Omega_{-}$. Since
$\partial\Omega_{+}\cap\partial\Omega_{-}=\emptyset$, it is possible to
impose boundary conditions which guarantee $v\cdot n\neq 0$ at all points of
$\partial\Omega$. This is not the case if we consider domains $\Omega$ with a
connected boundary $\partial\Omega$, due to the fact that $\mbox{div }v=0$ in
$\Omega$.
The results of this paper can easily be generalized to domains
$\Omega=\{(x_{1},x_{2}):\gamma_{1}(x_{1})<x_{2}<\gamma_{2}(x_{1})\}$, where
the $\gamma_{j}$ are smooth functions satisfying the periodicity condition
$\gamma_{j}(x_{1}+1)=\gamma_{j}(x_{1})$ for $j=1,2$. In this case we look
for solutions $(v,p)$ such that $v(x_{1}+1,x_{2})=v(x_{1},x_{2})$ and
$p(x_{1}+1,x_{2})=p(x_{1},x_{2})$.
The different types of boundary conditions that we consider in this paper are
collected in the following table:
BVC | 2D Euler equation (1.1) | 2D MHS equation (1.3)
---|---|---
(A) | $v\cdot n=f\ \mbox{on}\ \partial\Omega,\quad p+\frac{|v|^{2}}{2}=h\ \mbox{on}\ \partial\Omega_{-}$ | $B\cdot n=f\ \mbox{on}\ \partial\Omega,\quad p=h\ \mbox{on}\ \partial\Omega_{-}$
(B) | $v\cdot n=f\ \mbox{on}\ \partial\Omega,\quad p=h\ \mbox{on}\ \partial\Omega_{-}$ | $B\cdot n=f\ \mbox{on}\ \partial\Omega,\quad p+\frac{|B|^{2}}{2}=h\ \mbox{on}\ \partial\Omega_{-}$
(C) | $p=h\ \mbox{on}\ \partial\Omega,\quad v\cdot n=f\ \mbox{on}\ \partial\Omega_{-}$ | $p+\frac{|B|^{2}}{2}=h\ \mbox{on}\ \partial\Omega,\quad B\cdot n=f\ \mbox{on}\ \partial\Omega_{-}$
(D) | $p+\frac{|v|^{2}}{2}=h\ \mbox{on}\ \partial\Omega,\quad v\cdot n=f\ \mbox{on}\ \partial\Omega_{-}$ | $p=h\ \mbox{on}\ \partial\Omega,\quad B\cdot n=f\ \mbox{on}\ \partial\Omega_{-}$
(E) | $v\cdot n=f\ \mbox{on}\ \partial\Omega,\quad v\cdot t=h\ \mbox{on}\ \partial\Omega_{-}$ | $B\cdot n=f\ \mbox{on}\ \partial\Omega,\quad B\cdot t=h\ \mbox{on}\ \partial\Omega_{-}$
(F) | $p=h^{+}\ \mbox{on}\ \partial\Omega_{+},\quad p+\frac{|v|^{2}}{2}=h^{-}\ \mbox{on}\ \partial\Omega_{-},\quad v\cdot n=f^{+}\ \mbox{on}\ \partial\Omega_{+}$ | $p+\frac{|B|^{2}}{2}=h^{+}\ \mbox{on}\ \partial\Omega_{+},\quad p=h^{-}\ \mbox{on}\ \partial\Omega_{-},\quad B\cdot n=f^{+}\ \mbox{on}\ \partial\Omega_{+}$
(G) | $p=h^{+}\ \mbox{on}\ \partial\Omega_{+},\quad p+\frac{|v|^{2}}{2}=h^{-}\ \mbox{on}\ \partial\Omega_{-},\quad v\cdot n=f^{-}\ \mbox{on}\ \partial\Omega_{-}$ | $p+\frac{|B|^{2}}{2}=h^{+}\ \mbox{on}\ \partial\Omega_{+},\quad p=h^{-}\ \mbox{on}\ \partial\Omega_{-},\quad B\cdot n=f^{-}\ \mbox{on}\ \partial\Omega_{-}$
Table 1. Different types of boundary conditions
We remark that the boundary value problems appearing in the same row of Table
1 yield the same PDE problem, in spite of the fact that the boundary
conditions imposed for the Euler equation (1.1) and the MHS equations (1.3)
are different. This can be seen using the variable transformation (1.4).
Several of the boundary conditions above have a simple physical
interpretation, since we prescribe the inflow and outflow fluxes together
with either the pressure or the Bernoulli function on part of the boundary or
on the full boundary.
We notice that in [14] the boundary value problems (A) and (E) of Table 1
were posed for the MHS equations (1.3), together with additional boundary
value problems in three dimensions. Moreover, the authors also suggest an
iteration scheme to solve these boundary value problems, but so far the
precise conditions for convergence of the iterative method have not been
studied in detail. Nevertheless, the method has proved successful for
constructing Beltrami fields in 3D, which are particular pressureless
solutions of the Euler equation (1.2) (or, equivalently, magnetic
pressureless solutions of the MHS equations (1.3)); see [2, 4, 9].
### Previous results
In order to solve boundary value problems for the steady Euler or the MHS
equations two main methods have been considered in the literature: the Grad-
Shafranov method [15, 20] and the vorticity transport method introduced by
Alber [1].
The method of Grad-Shafranov is in principle restricted to two dimensional
settings or to problems which can be reduced to two dimensions using
symmetries (for instance axisymmetric or toroidal symmetries). We briefly
describe the main idea behind it in the particular situation of the two
dimensional steady Euler equation, although the method can be adapted to MHS.
Due to the incompressibility condition, there exists a stream function $\psi$
such that the velocity field is
$v=\nabla^{\perp}\psi=(-\frac{\partial\psi}{\partial
y},\frac{\partial\psi}{\partial x})$, and therefore equation (1.2) reduces to
$\Delta\psi\,\nabla\psi=\nabla H,$ (1.6)
where $H=p+\frac{1}{2}\left|v\right|^{2}$ is the Bernoulli function. Hence,
(1.6) implies that $\nabla H$ is parallel to $\nabla\psi$, so there exists a
function $F$ such that $H=F(\psi)$, and then (1.6) also yields
$\Delta\psi=F^{\prime}(\psi)\mbox{ in }\Omega.$ (1.7)
Therefore the analysis of the steady Euler equation has been reduced to the
study of the elliptic equation (1.7), for which a huge number of techniques
are available. The essential difficulty regarding (1.7) is to determine the
function $F$ from the boundary conditions. It turns out that this is possible
for some of the boundary conditions collected in Table 1. Indeed, using that
$v=\nabla^{\perp}\psi$ we have that $v\cdot n=\pm\frac{\partial\psi}{\partial
s}$, where $s$ is the arc length along the boundary and the sign depends on
the orientation chosen for the curve. Suppose, for definiteness, that $H$ and
$v\cdot n$ are known on the same subset of the boundary, say on
$\partial\Omega_{-}$. Then we can determine $\psi$ on $\partial\Omega_{-}$
(up to an additive constant) and, since $H=F(\psi)$, we can also obtain the
function $F$. If the additional boundary condition imposed on
$\partial\Omega_{+}$ gives enough information on $\psi$ to have a
well-defined problem for (1.7), we can then determine the function $\psi$ in
$\Omega$ by solving (1.7) with the boundary conditions obtained for $\psi$ on
$\partial\Omega_{-},\partial\Omega_{+}$. This will be the case for the
problems (A), (D) and (G) in Table 1.
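The reconstruction of $F$ from the boundary data can be illustrated with a minimal sketch. The boundary data $f=v\cdot n$ and $h=H$ below are made-up examples, not taken from the paper; the point is the mechanics: $\psi$ on $\partial\Omega_{-}$ is the arc-length integral of $f$, and $F$ is $h$ composed with the inverse of $\psi$ on a piece where $\psi$ is monotone.

```python
# Illustrative Grad-Shafranov step: tabulate F from boundary data on
# the inflow boundary. All concrete functions here are examples chosen
# so that the exact answer is known (F(psi) = psi^2).
import math

def reconstruct_F(f, h, s0, s1, n=2000):
    """Tabulate (psi, F(psi)) pairs from boundary data on [s0, s1]."""
    ds = (s1 - s0) / n
    s_vals = [s0 + i * ds for i in range(n + 1)]
    # psi(s) = integral of f (trapezoidal rule), fixing psi(s0) = 0
    psi = [0.0]
    for i in range(n):
        psi.append(psi[-1] + 0.5 * (f(s_vals[i]) + f(s_vals[i + 1])) * ds)
    # on a monotone piece, the pairs (psi(s), h(s)) define F = h o psi^{-1}
    return list(zip(psi, [h(s) for s in s_vals]))

def F_of(table, value):
    """Evaluate F by linear interpolation in the tabulated pairs."""
    for (p0, F0), (p1, F1) in zip(table, table[1:]):
        if min(p0, p1) <= value <= max(p0, p1):
            t = 0.0 if p1 == p0 else (value - p0) / (p1 - p0)
            return F0 + t * (F1 - F0)
    raise ValueError("value outside the range of psi on the boundary")

# Example: f > 0, so psi is strictly increasing; h is chosen so that
# h(s) = psi(s)^2 exactly, i.e. the true profile is F(psi) = psi^2.
f = lambda s: 1.0 + 0.5 * math.sin(2 * math.pi * s)
psi_exact = lambda s: s - 0.25 / math.pi * (math.cos(2 * math.pi * s) - 1.0)
h = lambda s: psi_exact(s) ** 2

table = reconstruct_F(f, h, 0.0, 1.0)
print(F_of(table, 0.25))   # close to 0.25**2 = 0.0625
```

In the actual problems the monotonicity of $\psi$ along $\partial\Omega_{-}$ and the single-valuedness of $F$ have to be checked; these technicalities are addressed in Section 2.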
Clearly, when applying this procedure some technicalities will arise in order
to have well-defined functions $F$ and single-valued functions $\psi$. These
issues will be considered in detail in Section 2. As stated above, the main
shortcoming of the Grad-Shafranov method is that its application is
restricted to two dimensional settings. Nevertheless, an extension of the
method to construct rotational solutions of the three dimensional steady
Euler equation in an unbounded domain $(0,L)\times\mathbb{R}^{2}$, with flows
periodic in the unbounded directions, was recently treated in [5]. Employing
the so-called Clebsch variables, the velocity is written as $v=\nabla
f\times\nabla g$, a nonlinear elliptic system is derived, and a Nash-Moser
scheme is performed to solve it. Furthermore, ideas closely related to the
method of Grad-Shafranov have recently been applied to study rigidity and
flexibility properties of solutions of the steady Euler equation in [16, 17, 7].
An alternative method to obtain solutions with non-vanishing vorticity for
the steady Euler equation (1.1) was introduced by Alber [1]. More precisely,
he studied the three dimensional version of problem (A) in Table 1, which
requires an additional boundary condition. In particular, he constructed
solutions where the velocity field $v$ can be split as $v=v_{0}+V$, where
$v_{0}$ is an irrotational solution of (1.1) and $V$ is a small perturbation.
The boundary value problem for the Euler equations is reduced to a
fixed-point problem for the function $V$, combining the fact that the
vorticity satisfies a suitable transport equation with the fact that the
velocity can be recovered from the vorticity using the Biot-Savart law. This
idea will be discussed later in more detail.
Alber's method works in particular domains $\Omega$ satisfying a geometric
constraint relating $\partial\Omega$ and $v$. A key assumption that is needed
is that if $x_{0}\in\partial\Omega$ and $v(x_{0})\cdot n(x_{0})=0$, then the
stream line of $v$ passing through $x_{0}$ is completely contained in the
boundary $\partial\Omega$.
The boundary conditions prescribed in [1] for the three dimensional case are
the normal component of the velocity field on the boundary, i.e. $v\cdot n$
on $\partial\Omega$, as well as the normal component of the vorticity
$\mbox{curl }v\cdot n$ and the Bernoulli function
$p+\frac{\left|v\right|^{2}}{2}$ on the inflow set
$\partial\Omega_{-}=\partial\Omega\cap\{v\cdot n<0\}$. A straightforward
computation shows that the two boundary conditions imposed on the inflow set
$\partial\Omega_{-}$ completely prescribe the vorticity on
$\partial\Omega_{-}$. It is then possible to determine the vorticity at any
point of the domain $\Omega$, using the fact that it satisfies a first order
differential equation, by the method of characteristics.
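The characteristics step can be sketched in a few lines on the strip $\mathbb{S}^{1}\times(0,L)$: trace the streamline through a point backwards until it hits the inflow boundary $x_{2}=0$ and assign the inflow vorticity there. The velocity field below is an illustrative shear flow (not the flow $v_{0}+V$ constructed in the paper), chosen so that $v\cdot n\neq 0$ on both boundary components.

```python
# Minimal sketch of the transport problem v . grad(omega) = 0 on the
# strip, solved by tracing backward characteristics to the inflow
# boundary x2 = 0. Illustrative example only.
import math

L = 1.0

def v(x1, x2):
    """Illustrative velocity field, periodic in x1, with v2 = 1 > 0."""
    return (0.5 * math.sin(2 * math.pi * x2), 1.0)

def omega(x1, x2, omega0, dt=1e-4):
    """omega(x) = omega0 at the foot of the backward characteristic."""
    while x2 > 0.0:
        v1, v2 = v(x1, x2)
        step = min(dt, x2 / v2)      # do not overshoot the boundary
        # explicit Euler backwards in time (enough for a sketch)
        x1 -= v1 * step
        x2 -= v2 * step
    return omega0(x1 % 1.0)          # periodicity in x1

omega0 = lambda s: math.cos(2 * math.pi * s)
# For v = (v1(x2), 1) the backward x1-shift is the integral of v1 in x2,
# so the foot of the characteristic is known in closed form.
print(omega(0.3, 0.7, omega0))
```

Since the chosen $v_{1}$ depends only on $x_{2}$, the foot of the characteristic through $(x_{1},x_{2})$ is $x_{1}-\int_{0}^{x_{2}}v_{1}(s)\,ds$, which gives an exact value to compare against.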
Since Alber's result, there have been several generalizations and extensions.
In [23], the authors provide a modification of Alber's technique to construct
solutions of the three dimensional steady Euler equation where the base flow
does not have to satisfy the Euler equation and the boundary conditions are
given by $\mbox{curl }v=av+b$ on the inflow set, for certain values of $a$
and $b$ satisfying compatibility conditions. Extensions of Alber's results to
compressible flows in non-smooth domains have been obtained in [18].
Solutions of the three dimensional steady Euler equation with boundaries
meeting at right angles have been constructed in [21]; an illustrative
example of this situation is given by curved pipe domains.
### Main results: novelties and key ideas
We describe here the main results and key ideas to construct solutions with
non-vanishing vorticity for the two dimensional incompressible steady Euler
equation (1.1) with the different boundary value conditions collected on Table
1.
The Grad-Shafranov method makes it possible to solve the boundary value
problems (A), (D) and (G). The case (A) has already been considered in [3,
22]. In this paper we will discuss the application of the Grad-Shafranov
approach in Section 2.
Hereafter, we will adapt the arguments of Alber [1] to solve the boundary
value problems (B), (C) and (G) in Table 1. As indicated above, the proof
builds on a ground flow $v_{0}$ solving (1.1), which is perturbed by a
function $V$ determined by solving a fixed-point problem for a suitable
operator $\Gamma$. The construction of the operator $\Gamma$ relies on two
building blocks: a transport-type problem and a div-curl problem. The former
consists in finding, for given $v$ and $\omega_{0}$, a unique function
$\omega$ satisfying
$(\textit{TP})\left\{\begin{array}{ll}v\cdot\nabla\omega=0&\mbox{in }\Omega,\\ \omega=\omega_{0}&\mbox{on }\partial\Omega_{-}.\end{array}\right.$ (1.8)
The value of $\omega_{0}$ is chosen in a particular way in order to obtain a
solution which satisfies the boundary conditions we want to deal with. The
second building block relies on finding, for a given $\omega$, a unique $W$
which solves
$(\textit{DCP})\left\{\begin{array}{ll}\nabla\times W=\omega&\mbox{in }\Omega,\\ \mbox{div }W=0&\mbox{in }\Omega,\\ W\cdot n=g&\mbox{on }\partial\Omega.\end{array}\right.$ (1.9)
We will restrict ourselves to the very particular
domains in (1.5). As indicated before, the main reason for this is that in
general open domains $\Omega\subset\mathbb{R}^{2}$ with smooth connected
boundary $\partial\Omega$ there are necessarily boundary points
$x_{0}\in\partial\Omega$ such that $v(x_{0})\cdot n(x_{0})=0$, which will be
termed tangency points from now on. Integrating along characteristics
(assuming that the vector field $v$ is oriented in such a way that the
problem is solvable), the solutions of (1.8) develop singularities in the
derivatives that make it difficult to solve the combined problem (1.8)-(1.9)
by means of a fixed-point argument. In order to avoid this difficulty, Alber
restricts himself to domains $\Omega$ with Lipschitz boundaries
$\partial\Omega$ satisfying the following condition: if a vector field $v$
has a tangency point at $x_{0}$, then the whole stream line of $v$ through
$x_{0}$ is contained in the boundary $\partial\Omega$ (see equations
(1.22)-(1.23) in [1]).
A benefit of working with the domains (1.5) is that they do not have tangency
points, and hence these difficulties can be ignored. As a drawback, since the
domains (1.5) are not simply connected, we need to impose topological
constraints on our vector fields in order to have well-defined problems. In
particular, problem (1.9) cannot in general be reduced to a Laplace equation
unless the flux of $W$ along a vertical line is zero. However, this can be
achieved by adding a constant horizontal vector.
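The flux correction just mentioned can be sketched explicitly; this is a schematic outline, not the precise construction used later. Writing $W=\nabla^{\perp}\varphi+(c,0)$ for a stream function $\varphi$ with $\Delta\varphi=\omega$, the flux of $W$ across a vertical section $\{x_{1}=\mathrm{const}\}$ is

```latex
\int_{0}^{L} W_{1}(x_{1},x_{2})\,dx_{2}
  = \int_{0}^{L}\Big(-\frac{\partial\varphi}{\partial x_{2}}(x_{1},x_{2})+c\Big)\,dx_{2}
  = \varphi(x_{1},0)-\varphi(x_{1},L)+cL ,
```

so prescribing a flux $\gamma$ fixes the constant as $c=\frac{1}{L}\big(\gamma-\varphi(x_{1},0)+\varphi(x_{1},L)\big)$, the right-hand side being independent of $x_{1}$ when $\varphi$ is constant on each boundary component.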
An important observation, and a difference from the work of Alber, is that we
use Hölder spaces instead of Sobolev spaces to construct our solutions. In
the case treated by Alber, the vorticity $\omega_{0}$ at the boundary can be
readily obtained from the boundary values given in the problem. However, this
is not the case for the problems (B) and (C). In those cases $\omega_{0}$ is
part of the solution and is obtained by means of the fixed-point argument.
More precisely, the value of $\omega_{0}$ is given in terms of $W$ and its
derivatives at the boundary $\partial\Omega_{-}$, where $W$ solves (1.9). If
the estimates for $W$ are given in terms of Sobolev spaces, we obtain less
regularity for $\omega_{0}$, due to the classical trace theorem. The
vorticity $\omega$ is then computed on the whole domain $\Omega$ using the
transport equation (1.8), which gives no gain of regularity for $\omega$. As
a consequence, when we recover a new function $W$ using (1.9) again, it has
less regularity than we had initially, and it is therefore not possible to
close a fixed-point argument. To overcome this obstruction we make use of
Hölder spaces, where this difficulty vanishes. Furthermore, in the case (G),
it is possible to obtain the vorticity $\omega_{0}$ in terms of the boundary
values. However, the normal velocity on $\partial\Omega_{+}$ is not known and
must be obtained using a fixed-point argument, as in the cases described
above.
It is worth noticing that the case (D) seemingly cannot be solved using
Alber's method, due to the fact that a loss of regularity takes place when
one tries to reformulate the problem as a fixed point. In this case the loss
of regularity is an essential difficulty that cannot be fixed even with the
use of Hölder spaces. See Section 3.4 for a detailed explanation of this
fact.
The case (E), which is one of the boundary value problems suggested in the
pioneering work of Grad and Rubin [14], does not seem amenable to either of
the two methods indicated above and will be studied in a forthcoming work by
means of completely different techniques.
Similarly, the case (F) seems to pose essential difficulties for both
methods. Indeed, we cannot apply the Grad-Shafranov method, since we do not
know the values of $H$ and $v\cdot n$ on the same part of the boundary. On
the other hand, we cannot apply Alber's method, due to a loss of regularity
similar to the one occurring in case (D).
To the best of our knowledge, several of the boundary value problems in
Table 1 have not been studied in the scientific literature. One of the main
goals of this paper is to clarify which sets of boundary conditions yield
well-posed problems.
### Notation
We will use the following notation throughout the manuscript. We recall that
we are working on a domain $\Omega=\mathbb{S}^{1}\times(0,L)$ with $L>0$. Let
$C_{b}(\Omega)$ be the set of bounded continuous functions on $\Omega.$ For
any bounded continuous function $f$ and $0<\alpha<1$ we call $f$ uniformly Hölder
continuous with exponent $\alpha$ in $\Omega$ if the quantity
$\left[f\right]_{\alpha,\Omega}:=\displaystyle\sup_{x\neq
y;x,y\in\overline{\Omega}}\frac{\left|f(x)-f(y)\right|}{\left|x-y\right|^{\alpha}}$
is finite. However, this is just a semi-norm and hence in order to work with
Banach spaces we define the space of Hölder continuous functions as
$C^{\alpha}(\Omega)=\\{f\in
C_{b}(\Omega):\left\|f\right\|_{C^{\alpha}(\Omega)}<\infty\\},$
equipped with the norm
$\left\|f\right\|_{C^{\alpha}(\Omega)}:=\displaystyle\sup_{x\in\overline{\Omega}}\left|f(x)\right|+\left[f\right]_{\alpha,\Omega}.$
Similarly, for any non-negative integer $k$ we define the Hölder spaces
$C^{k,\alpha}(\Omega)$ as
$C^{k,\alpha}(\Omega)=\\{f\in
C^{k}_{b}(\Omega):\left\|f\right\|_{C^{k,\alpha}(\Omega)}<\infty\\},$
equipped with the norm
$\left\|f\right\|_{C^{k,\alpha}(\Omega)}=\displaystyle\max_{\left|\beta\right|\leq
k}\sup_{x\in\overline{\Omega}}\left|\partial^{\beta}f(x)\right|+\displaystyle\sum_{\left|\beta\right|=k}\left[\partial^{\beta}f\right]_{\alpha,\Omega}.$
Notice that in the definitions above the Hölder regularity holds up to the
boundary, i.e. in $\overline{\Omega}$. We do not specify in the functional spaces whether
we are working with scalar or vector fields, that is,
$C^{k,\alpha}(\Omega,\mathbb{R})$ or $C^{k,\alpha}(\Omega,\mathbb{R}^{2})$, and
instead just write $C^{k,\alpha}(\Omega)$. Moreover, we will identify
$\mathbb{S}^{1}$ with the interval $[0,1]$ and the functions $f\in
C^{k,\alpha}(\mathbb{S}^{1})$ , $k=0,1,2...$, $\alpha\in[0,1)$ with the
functions $f\in C^{k,\alpha}([0,1])$ satisfying that $f^{\ell}(0)=f^{\ell}(1)$
for $\ell=0,1,\dots,k$. Notice that this space of functions can also be
identified with the space $f\in C^{k,\alpha}(\mathbb{R})$ such that
$f(x+1)=f(x)$.
### Plan of the paper
In Section 2 we show how to solve the boundary value cases (A),(D) and (G)
using the Grad-Shafranov approach. Next, in Section 3 we introduce the
vorticity transport method and apply it to construct solutions to the steady
Euler equations for the boundary value cases (B),(C) and (G). In the last
section, Section 4, we translate the statements of the results shown for the
Euler equation (1.2) in the case of the MHS equations (1.3).
## 2\. The Grad-Shafranov approach
In this section we will use the Grad-Shafranov method to construct solutions
with non-vanishing vorticity to the Euler equation for the boundary value
problem (D). Although the method is also valid to tackle the case (G), we will
give the details in that case using the fixed point method. As explained in
the introduction, the Grad-Shafranov approach reduces the existence problem
for the Euler equation (1.1) to the study of a simpler elliptic equation
$\Delta\psi=F^{\prime}(\psi)$ where $v=\nabla^{\perp}\psi$ and $F(\psi)$ is an
unknown function related to the Bernoulli function $H$ that we need to
determine using the boundary value conditions.
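The reduction itself is a short formal computation; the following sketch (not part of any proof, and with signs depending on the convention $\nabla^{\perp}\psi=(-\partial_{y}\psi,\partial_{x}\psi)$) records how the ansatz $v=\nabla^{\perp}\psi$ turns the Euler system into the scalar equation above:

```latex
% Formal derivation of the Grad-Shafranov reduction in 2D.
% Euler: (v\cdot\nabla)v + \nabla p = 0, div v = 0, vorticity \omega = \partial_x v^2 - \partial_y v^1.
\begin{align*}
(v\cdot\nabla)v &= \nabla\tfrac{|v|^{2}}{2} + \omega\, v^{\perp},
  \qquad v^{\perp}:=(-v^{2},v^{1}),\\
\intertext{so the Euler equation becomes, with the Bernoulli function $H=p+\tfrac{|v|^{2}}{2}$,}
\nabla H &= -\omega\, v^{\perp} = \omega\,\nabla\psi,
  \qquad\text{since } v=\nabla^{\perp}\psi \text{ gives } v^{\perp}=-\nabla\psi.
\end{align*}
% Hence H is constant along the level sets of \psi; writing H = F(\psi) yields
% F'(\psi)\nabla\psi = \omega\nabla\psi, and since \omega = \Delta\psi we obtain
\[
\Delta\psi = F'(\psi).
\]
```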
### 2.1. Boundary value problem (D) for the steady Euler equation
###### Theorem 2.1.
Let $f\in C^{1,\alpha}(\partial\Omega_{-}),h^{-}\in
C^{1,\alpha}(\partial\Omega_{-})$, $h^{+}\in C^{1,\alpha}(\partial\Omega_{+})$
and $h^{+}=h^{-}\circ T$ where $T:\mathbb{S}^{1}\to\mathbb{S}^{1}$ is a given
diffeomorphism with $C^{2,\alpha}$ regularity. Then if $f>0$ for
$x\in\partial\Omega_{-}$, there exists a solution $(v,p)\in
C^{{1,\alpha}}(\Omega)\times C^{{1,\alpha}}(\Omega)$ solving the Euler
equation (1.1) such that
$v\cdot n=f\ \mbox{on}\ \partial\Omega_{-},\
p+\frac{\left|v\right|^{2}}{2}=h^{-}\ \mbox{on}\ \partial\Omega_{-}\mbox{ and
}p+\frac{\left|v\right|^{2}}{2}=h^{+}\ \mbox{on}\ \partial\Omega_{+}.$ (2.1)
Moreover, there exists $\delta>0$ such that if
$\left\|h^{-}\right\|_{C^{{1,\alpha}}(\partial\Omega_{-})}+\left\|f\right\|_{C^{{1,\alpha}}(\partial\Omega_{-})}\leq\delta,$
(2.2)
the solution $(v,p)$ is unique.
###### Proof.
First we let $f(x)$ and $h^{-}(x)$ be extended periodically to the whole real
line $\mathbb{R}$. Then we define $\psi_{-}(x)=\int_{0}^{x}f(s)\ ds$ and
notice that the function $\psi_{-}(x)$ is invertible since $f>0$ on
$\partial\Omega_{-}$; that is, there exists a function $X$ such that
$X(\xi)=\psi^{-1}_{-}(\xi)$ for all $\xi\in\mathbb{R}$. Moreover,
$\psi_{-}(x+1)=\psi_{-}(x)+J,\ \mbox{ and
}\psi_{-}^{-1}(\xi+J)=\psi_{-}^{-1}(\xi)+1,$ (2.3)
where $J=\int_{0}^{1}f(s)ds>0$. Next, we define the function
$F(\xi)=h^{-}(\psi^{-1}_{-}(\xi))$ for every $\xi\in\mathbb{R}$, which is a
periodic function of period $J$. Indeed,
$F(\xi+J)=h^{-}(\psi^{-1}_{-}(\xi+J))=h^{-}(\psi^{-1}_{-}(\xi)+1)=h^{-}(\psi^{-1}_{-}(\xi))=F(\xi),$
where we have used the periodicity of $h^{-}$ and the property of $\psi^{-1}_{-}$ in (2.3).
Notice that the function $F\in C^{1,\alpha}$ since $f,h^{-}\in
C^{1,\alpha}(\partial\Omega_{-})$. Finally, we define the function $\psi_{+}$ given by
$\psi_{+}(x)=(\psi_{-}\circ T)(x)$. With these definitions and constructions
at hand, we are interested in solving the following elliptic boundary value
problem
$\left\\{\begin{array}[]{lll}\Delta\psi=F^{\prime}(\psi),\ \mbox{in }\Omega\\\
\psi(x,0)=\psi_{-}(x),\ \mbox{on }\partial\Omega_{-}\\\ \psi(x,L)=\psi_{+}(x),\
\mbox{on }\partial\Omega_{+}\\\ \psi(1,y)=\psi(0,y)+J,\ \mbox{for
}y\in(0,L).\end{array}\right.$ (2.4)
In order to obtain a minimization problem in the whole manifold
$\mathbb{S}^{1}\times[0,L]$ we make the following change of variables
$\phi=\psi-Jx$ where the new function $\phi$ solves
$\left\\{\begin{array}[]{lll}\Delta\phi=F^{\prime}(Jx+\phi),\ \mbox{in
}\Omega\\\ \phi(x,0)=\psi_{-}(x)-Jx,\ \mbox{on }\partial\Omega_{-}\\\
\phi(x,L)=\psi_{+}(x)-Jx,\ \mbox{on }\partial\Omega_{+}\\\
\phi(1,y)=\phi(0,y),\ \mbox{for }y\in(0,L)\end{array}\right.$ (2.5)
The new functions $F^{\prime}(Jx+\phi)$, $\psi_{-}(x)-Jx$ and $\psi_{+}(x)-Jx$ are
periodic in $x$ and hence well defined on the manifold $\mathbb{S}^{1}\times[0,L]$. To show the
existence of solutions to (2.5) we use the classical variational calculus
theory. To that purpose, we introduce the following energy functional
$I[\phi]=\frac{1}{2}\int_{\Omega}\left|\nabla\phi\right|^{2}+\int_{\Omega}F(Jx+\phi)\
dx,$ (2.6)
the admissible space of functions
$\mathcal{A}=\\{\phi\in H^{1}(\Omega):\phi(x,0)=\psi_{-}(x)-Jx\ \mbox{on
}\partial\Omega_{-},\phi(x,L)=\psi_{+}(x)-Jx\ \mbox{on }\partial\Omega_{+}\
\mbox{in the trace sense}\\},$ (2.7)
and set
$\bar{\phi}=\mbox{arg min}\\{I[\phi]:\phi\in\mathcal{A}\\}.$ (2.8)
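For completeness, the standard first-variation computation showing that a minimizer is a weak solution of (2.5) reads as follows (we take variations $\eta\in H^{1}(\Omega)$ vanishing on $\partial\Omega_{\pm}$ and periodic in $x$; this is only a sketch):

```latex
\begin{align*}
0 &= \frac{d}{dt}\Big|_{t=0} I[\bar{\phi}+t\eta]
   = \int_{\Omega}\nabla\bar{\phi}\cdot\nabla\eta\ dx
   + \int_{\Omega}F'(Jx+\bar{\phi})\,\eta\ dx\\
  &= \int_{\Omega}\big(-\Delta\bar{\phi}+F'(Jx+\bar{\phi})\big)\,\eta\ dx,
\end{align*}
% after an integration by parts (the boundary terms vanish since \eta = 0 on
% \partial\Omega_{\pm} and everything is periodic in x). Since \eta is arbitrary,
% \bar{\phi} is a weak solution of \Delta\bar{\phi} = F'(Jx+\bar{\phi}).
```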
It is well-known that this minimization problem has at least one solution and
moreover that it is a weak solution to (2.5) which also verifies the boundary
conditions in the trace sense, see [10]. Moreover, since $F\in C^{1,\alpha}$
and $\psi_{-},\psi_{+}\in C^{2,\alpha}$, an application of standard elliptic
regularity theory in Hölder spaces shows that $\phi\in
C^{2,\alpha}(\mathbb{S}^{1}\times[0,L])$ (cf. [11]). Therefore, by construction we
have that
$v=\nabla^{\perp}\psi\in C^{1,\alpha}(\Omega)\mbox{ and
}p=F(\psi)-\frac{\left|\nabla^{\perp}\psi\right|^{2}}{2}\in
C^{1,\alpha}(\Omega),$
solves the Euler equation (1.1) with boundary value conditions (2.1), which
proves existence. To show uniqueness, let $\phi^{1}$ and $\phi^{2}$ be two
different solutions to (2.5) and set $\widehat{\phi}=\phi^{1}-\phi^{2}$. Then
we have that
$\left\\{\begin{array}[]{lll}\Delta\widehat{\phi}=F^{\prime}(Jx+\phi^{1})-F^{\prime}(Jx+\phi^{2}),\
\mbox{in }\Omega\\\ \widehat{\phi}(x,0)=0,\ \mbox{on }\partial\Omega_{-}\\\
\widehat{\phi}(x,L)=0,\ \mbox{on }\partial\Omega_{+}\\\
\widehat{\phi}(1,y)=\widehat{\phi}(0,y),\ \mbox{for
}y\in(0,L).\end{array}\right.$ (2.9)
Applying classical regularity theory for elliptic problems (cf. [11]) we have
that
$\left\|\widehat{\phi}\right\|_{C^{2,\alpha}(\Omega)}\leq
C\left\|F^{\prime}(Jx+\phi^{1})-F^{\prime}(Jx+\phi^{2})\right\|_{C^{\alpha}(\Omega)}.$
(2.10)
Using the smallness condition on $h^{-},f$ in (2.2) and the fact that
$F=h^{-}\circ\psi^{-1}$ we infer that
$\left\|F^{\prime}(Jx+\phi^{1})-F^{\prime}(Jx+\phi^{2})\right\|_{C^{\alpha}(\Omega)}\leq
C\delta\left\|\widehat{\phi}\right\|_{C^{2,\alpha}(\Omega)},$ (2.11)
and hence combining estimates (2.10)-(2.11) and taking $\delta$ small enough
yields $\widehat{\phi}=0$, i.e. $\phi^{1}=\phi^{2}$. ∎
###### Remark 2.2.
We should notice that we only need that the normal component of $v$ is
different from zero at the inflow boundary $\partial\Omega_{-}$, and hence the
vector field $v$ could in general have null points. On the other hand, we
should remark that the diffeomorphism $T$ does not have to be unique for
certain choices of the functions $h^{-}$ and $h^{+}$, and therefore for
each $T$ we would have a different solution $(v,p)$. Indeed, we can have two
different diffeomorphisms $T$ such that $T(A)=\hat{A},T(B)=\hat{B}$ or
$T(A)=\hat{B},T(B)=\hat{A}$ as in Figure 1.
Figure 1. Different diffeomorphisms $T$.
###### Remark 2.3 (Boundary value problem (G) using Grad-Shafranov).
We give here a brief explanation on how to modify the arguments above in order
to treat the boundary value problem (G). However, since this case will be
tackled later with the vorticity transport method (cf. Subsection 3.3), we do
not provide the full details but only the idea behind them. The statement of the result
reads:
###### Theorem 2.4.
Let $f\in C^{1,\alpha}(\partial\Omega_{-}),h^{-}\in
C^{1,\alpha}(\partial\Omega_{-})$, $h^{+}\in
C^{1,\alpha}(\partial\Omega_{+})$. Then there exists a constant $\delta>0$
such that if
$\left\|h^{-}\right\|_{C^{{1,\alpha}}(\partial\Omega_{-})}+\left\|h^{+}\right\|_{C^{{1,\alpha}}(\partial\Omega_{+})}+\left\|f\right\|_{C^{{1,\alpha}}(\partial\Omega_{-})}\leq\delta,$
(2.12)
there exists a unique solution $(v,p)\in C^{{1,\alpha}}(\Omega)\times
C^{{1,\alpha}}(\Omega)$ solving the Euler equation (1.1) such that
$v\cdot n=1+f\ \mbox{on}\ \partial\Omega_{-},\
p+\frac{\left|v\right|^{2}}{2}=h^{-}\ \mbox{on}\ \partial\Omega_{-}\mbox{ and
}p=h^{+}\ \mbox{on}\ \partial\Omega_{+}.$ (2.13)
Mimicking the same argument as in the proof of Theorem 2.1, we have reduced
the existence of solutions to the following boundary value problem
$\left\\{\begin{array}[]{lll}\Delta\psi=F^{\prime}(\psi),\ \mbox{in }\Omega\\\
\psi(x,0)=\psi_{-}(x),\ \mbox{on }\partial\Omega_{-}\\\
h^{+}+\frac{\left|\nabla^{\perp}\psi\right|^{2}}{2}=F(\psi),\ \mbox{on
}\partial\Omega_{+}\\\ \psi(1,y)=\psi(0,y)+J,\ \mbox{for
}y\in(0,L)\end{array}\right.$ (2.14)
where $\psi_{-}(x)=\int_{0}^{x}f(s)\ ds$ and $F(\xi)=h^{-}(\psi^{-1}_{-}(\xi))$ for
every $\xi\in\mathbb{R}$. Due to the nonlinear character of the boundary
condition on $\partial\Omega_{+}$ it is not a priori clear if the problem
(2.14) can be solved for arbitrary functions $\psi_{-},F$ and $h^{+}$.
However, under the smallness assumption (2.12), we can linearize the equation
$h^{+}+\frac{\left|\nabla^{\perp}\psi\right|^{2}}{2}=F(\psi)\ \mbox{ on
}\partial\Omega_{+}$ for a suitable perturbation $\psi=\psi_{0}+\delta\phi$
where $\psi_{0}=x$ and $\delta\ll 1$ and solve the resulting problem by means
of a fixed point argument. Actually analogous perturbative arguments will be
applied recurrently in the rest of the paper.
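Schematically, writing $\psi=\psi_{0}+\delta\phi$ with $\psi_{0}=x$ (so that $\nabla^{\perp}\psi_{0}$ is the constant vertical flow), the nonlinear boundary condition in (2.14) expands as follows; this is only a sketch of the perturbative step:

```latex
% Expansion of the nonlinear boundary condition on \partial\Omega_{+}
% for \psi = x + \delta\phi, using |\nabla^{\perp}\psi|^{2} = |\nabla\psi|^{2}:
\begin{align*}
h^{+}+\tfrac{1}{2}\left|\nabla(x+\delta\phi)\right|^{2}
  &= h^{+}+\tfrac{1}{2}+\delta\,\partial_{x}\phi
     +\tfrac{\delta^{2}}{2}\left|\nabla\phi\right|^{2},\\
F(x+\delta\phi) &= F(x)+\delta F'(x)\,\phi+O(\delta^{2}).
\end{align*}
% Equating both sides, the O(\delta) terms give a linear (Robin-type) boundary
% condition for \phi on \partial\Omega_{+}, while the quadratic remainder is
% absorbed in the fixed point argument thanks to the smallness assumption (2.12).
```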
###### Remark 2.5 (Boundary value problem (A) using Grad-Shafranov).
In the case of the boundary value problem (A), the construction of solutions
to the Euler equation (1.1) reduces also to the study of an elliptic equation
which can be treated with classical calculus of variations tools. This case
has been studied before in the literature in [3, 22] and we will not provide
more details here.
###### Theorem 2.6.
Let $f^{-}\in C^{1,\alpha}(\partial\Omega_{-}),f^{+}\in
C^{1,\alpha}(\partial\Omega_{+})$ and $h^{-}\in
C^{1,\alpha}(\partial\Omega_{-})$. Then if $f^{-}>0$ for
$x\in\partial\Omega_{-}$, there exists a solution $(v,p)\in
C^{{1,\alpha}}(\Omega)\times C^{{1,\alpha}}(\Omega)$ solving the Euler
equation (1.1) such that
$v\cdot n=f^{-}\ \mbox{on }\ \partial\Omega_{-},v\cdot n=f^{+}\ \mbox{on}\
\partial\Omega_{+}\mbox{ and }p+\frac{\left|v\right|^{2}}{2}=h^{-}\ \mbox{on}\
\partial\Omega_{-}.$ (2.15)
Moreover, there exists $\delta>0$ such that if
$\left\|f^{-}\right\|_{C^{{1,\alpha}}(\partial\Omega_{-})}+\left\|f^{+}\right\|_{C^{{1,\alpha}}(\partial\Omega_{+})}+\left\|h^{-}\right\|_{C^{{1,\alpha}}(\partial\Omega_{-})}\leq\delta,$
(2.16)
the solution $(v,p)$ is unique.
## 3\. The vorticity transport method
In this section we will apply the fixed point method approach to construct
non-vanishing solutions to the Euler equation for the boundary value problems
(B),(C) and (G). In order to avoid repetition we will show cases (B) and (C)
in full detail and just provide a sketch of the proof for case (G)
highlighting the main differences.
### 3.1. Boundary value problem (B) for the steady Euler equation
We will construct solutions to the Euler equation with boundary conditions (B)
using a suitable modification of the vorticity transport method introduced by
Alber [1]. Let us state the result precisely:
###### Theorem 3.1.
Let $\Omega=\\{(x,y)\in\mathbb{S}^{1}\times(0,L)\\}$, with $L>0$ and
$\alpha\in(0,1)$. Suppose that $(v_{0},p_{0})\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ is a solution of (1.1) with
$\bar{v}^{2}_{0}=\displaystyle\inf_{(x,y)\in\bar{\Omega}}\left|v^{2}_{0}(x,y)\right|>0$
and $\mbox{curl }v_{0}=0$. For $\mathcal{C}=\\{(0,y),y\in[0,L]\\}$ we have
that the integral $\int_{\mathcal{C}}(v_{0}\cdot n)\ dS$ is a real constant
that we will denote by $J_{0}$. There exist $\epsilon>0$, $M>0$ sufficiently
small, as well as $K>0$, such that for $v_{0}$ as above with
$\left\|v^{1}_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq\epsilon$ and $h\in
C^{{2,\alpha}}(\partial\Omega_{-})$, $f\in C^{{2,\alpha}}(\partial\Omega)$ and
$J\in\mathbb{R}$ satisfying
$\left\|h\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left\|f\right\|_{C^{{2,\alpha}}(\partial\Omega)}+\left|J-J_{0}\right|\leq
KM,$ (3.1)
and
$\int_{\partial\Omega_{-}}f\ dS=\int_{\partial\Omega_{+}}f\ dS,$ (3.2)
there exists a unique solution $(v,p)\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ to (1.1) with
$\left\|v-v_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq M$ such that
$v\cdot n=v_{0}\cdot n+f\ \mbox{on}\ \partial\Omega,\ p=p_{0}+h\ \mbox{on}\
\partial\Omega_{-}\mbox{ and }\int_{\mathcal{C}}v\cdot n\ dS=J.$ (3.3)
The constants $M,K,$ as well as $\epsilon,$ depend only on
$\alpha,L,\bar{v}^{2}_{0}$.
###### Remark 3.2.
Notice that we have chosen our base flow $v_{0}$ to be irrotational. From the
mathematical point of view the strategy of the proof is sufficiently flexible
to cover the case when $\mbox{curl }v_{0}$ is different from zero but
sufficiently small. However, for each specific boundary condition it is not
obvious if suitable rotational solutions exist.
###### Remark 3.3.
It is not a priori clear whether the smallness assumption on $v_{0}^{1}$ in
Theorem 3.1 can be removed. This is due to the fact that a crucial step of the
argument is to solve the equation (3.32) in order to obtain the value of the
vorticity at $y=0$. The term
$-\frac{1}{v_{0}^{2}}\partial_{x}(v_{0}^{1}V^{1})$ on the right hand side in
(3.32) is linear in $V$ and therefore we cannot treat it perturbatively if
$v^{1}_{0}$ is not small. If $v=v_{0}+V$ with $V\ll 1$ it is natural to try a
linearization approach that yields a problem of the form
$\left\\{\begin{array}[]{lll}\Delta\psi=\omega(x,y),\ \mbox{in }\Omega\\\
v_{0}\cdot\nabla\omega=Q_{1},\ \mbox{in }\Omega\\\
\omega(x,0)=\partial_{x}(v_{0}^{1}V^{1})+Q_{2},\mbox{ on
}\partial\Omega_{-}\\\ \end{array}\right.$ (3.4)
where $Q_{1},Q_{2}$ contain terms that are quadratic in $V$ or small source
terms due to the boundary data and $V=\nabla^{\perp}\psi$. The existence of an
operator yielding $(V,\omega)$ in terms of $Q_{1},Q_{2}$ can fail if the
homogeneous problem obtained setting $Q_{1}=Q_{2}=0$ has non-trivial
solutions.
###### Remark 3.4.
The curve $\mathcal{C}=\\{(0,y),y\in[0,L]\\}$ along which we fix the flux $J$ can
be chosen in a more general way. Indeed, we can choose two different curves
$\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ on which we have the same flux $J$ if we impose that
$\int_{\mathcal{C}_{1}}v\cdot n\ dS=\int_{\mathcal{C}_{2}}v\cdot n\ dS$ as in Figure 2.
Figure 2. General curves $\mathcal{C}$.
As we have mentioned in the introduction, the proof is based on defining an
adequate operator $\Gamma$ on a subspace of $C^{2,\alpha}(\Omega)$ which has a
fixed point $V$ such that $v=v_{0}+V$ is a solution to (1.1) and (3.3). To
that purpose, let us define the following subspace
$C_{\star}^{2,\alpha}(\Omega)$ given by
$C_{\star}^{2,\alpha}(\Omega)=\\{g\in C^{2,\alpha}(\Omega):\mbox{div }g=0\
\mbox{in }\Omega\\}.$ (3.5)
For any $M>0$, let us denote by $B_{M}$ the closed ball in
$C_{\star}^{2,\alpha}(\Omega)$ with radius $M$, i.e.,
$B_{M}=\\{g\in
C_{\star}^{2,\alpha}(\Omega):\left\|g\right\|_{C^{2,\alpha}(\Omega)}\leq
M\\}.$ (3.6)
###### Remark 3.5.
Notice that if $v_{0}$ satisfies the hypothesis of Theorem 3.1, there exists a
fixed constant $0<M_{0}<\frac{\bar{v}^{2}_{0}}{2}$ such that $v=v_{0}+V$ with
$V\in B_{M_{0}}(\Omega)$ satisfies
$\displaystyle\inf_{(x,y)\in\overline{\Omega}}\left|v^{2}(x,y)\right|\geq\bar{v}^{2}_{0}-M_{0}>\frac{\bar{v}^{2}_{0}}{2},$
(3.7)
since $V^{2}\leq\left\|V\right\|_{C^{2,\alpha}(\Omega)}\leq M_{0}$. Throughout
the article we will always assume (sometimes it will be written explicitly) that
$M\leq M_{0}$, so that the lower bound (3.7) holds true for any function $V\in
B_{M}(\Omega)$.
#### 3.1.1. The building blocks: the transport problem and the div-curl
system
In this subsection we will provide regularity results and show several
estimates regarding the hyperbolic transport problem and the div-curl problem
which are the building blocks to construct the operator $\Gamma$. Notice that
the results will be used not only to solve the boundary value problem (B) but
will be instrumental to construct solutions to boundary value problems (C),
(G) treated in this article (cf. Section 3.2 and Section 3.3).
Before proceeding any further, let us show a regularity result for the
trajectories associated to a vector field. We define the flow of a continuous
and bounded vector field $b(x,y)$ as the map $X:\Omega\to\mathbb{S}^{1}$ which
satisfies
$\left\\{\begin{array}[]{lll}\frac{\partial X}{\partial
y}(a,y)&=b(X(a,y),y)\\\ X(a,0)&=a.\end{array}\right.$ (3.8)
We refer to $a$ as the particle label, since it marks the beginning point of
the path $a\to X(a,y)$.
###### Lemma 3.6.
Let us assume that $b\in C^{1,\alpha}(\Omega)$. Then, there exists a unique
solution $X\in C^{1,\alpha}(\Omega)$ solving (3.8). Moreover, the following
estimates are satisfied:
$\displaystyle\left\|X\right\|_{C^{1,\alpha}(\Omega)}\ $ $\displaystyle\leq
C\left(L,\left\|b\right\|_{C^{1,\alpha}(\Omega)}\right),$ (3.9)
$\displaystyle\left\|X^{-1}\right\|_{C^{1,\alpha}(\Omega)}$ $\displaystyle\leq
C\left(L,\left\|b\right\|_{C^{1,\alpha}(\Omega)}\right).$ (3.10)
###### Proof.
To derive (3.9) we need to estimate $\frac{\partial X}{\partial
y},\frac{\partial X}{\partial a}$ as well as their Hölder norms. Notice that a
bound for $\left\|\frac{\partial X}{\partial
y}\right\|_{L^{\infty}(\Omega)}$ follows directly from (3.8). Furthermore,
standard results of differentiability with respect to parameters of ordinary
differential equations (cf. [6]) yield
$\displaystyle\frac{\partial X}{\partial
a}(a,y)=\mbox{exp}\left(\int_{0}^{y}\frac{\partial b}{\partial X}(X(a,s),s)\
ds\right),$ (3.11)
hence an estimate for $\left\|\frac{\partial X}{\partial
a}\right\|_{L^{\infty}(\Omega)}$ follows directly. To estimate the Hölder norm
of $\frac{\partial X}{\partial a}$ we compute the difference
$\displaystyle\left|\dfrac{\partial X}{\partial a}(a_{1},y)-\dfrac{\partial
X}{\partial a}(a_{2},y)\right|$ $\displaystyle\leq
C\int_{0}^{y}\left|\frac{\partial b}{\partial X}(X(a_{1},s),s)-\frac{\partial
b}{\partial X}(X(a_{2},s),s)\right|ds$ $\displaystyle\leq
C\int_{0}^{y}\left\|b\right\|_{C^{1,\alpha}}\left|X(a_{1},s)-X(a_{2},s)\right|^{\alpha}ds$
$\displaystyle\leq CL\left\|b\right\|_{C^{1,\alpha}}\left\|\frac{\partial
X}{\partial a}\right\|_{L^{\infty}}^{\alpha}|a_{1}-a_{2}|^{\alpha}.$ (3.12)
Therefore,
$\frac{\left|\frac{\partial X}{\partial a}(a_{1},y)-\frac{\partial X}{\partial
a}(a_{2},y)\right|}{\left|a_{1}-a_{2}\right|^{\alpha}}\leq
CL\left\|b\right\|_{C^{1,\alpha}}\left\|\frac{\partial X}{\partial
a}\right\|_{L^{\infty}}^{\alpha}\leq
C\left(L,\left\|b\right\|_{C^{1,\alpha}(\Omega)}\right),$ (3.13)
for $a_{1},a_{2}\in\mathbb{S}^{1},y\in(0,L)$. On the other hand, given that
$\left\|\frac{\partial X}{\partial y}\right\|_{L^{\infty}}$ and
$\left\|\frac{\partial X}{\partial a}\right\|_{L^{\infty}}$ are bounded we
have that the function $b(X(a,y),y)$ is Lipschitz in both variables $a,y$
(and hence Hölder in both variables $a,y$). Hence,
$\frac{\left|\frac{\partial X}{\partial y}(a_{1},y_{1})-\frac{\partial
X}{\partial
y}(a_{2},y_{2})\right|}{\left|a_{1}-a_{2}\right|^{\alpha}+|y_{1}-y_{2}|^{\alpha}}\leq
C\left(L,\left\|b\right\|_{C^{1,\alpha}(\Omega)}\right),\quad
a_{1},a_{2}\in\mathbb{S}^{1},y\in(0,L).$ (3.14)
Combining bounds (3.13)-(3.14) we infer that
$\left\|X\right\|_{C^{1,\alpha}(\Omega)}\leq
C\left(L,\left\|b\right\|_{C^{1,\alpha}(\Omega)}\right)$ and hence (3.9) is
satisfied.
We are left to show (3.10). To that purpose, notice that formula (3.11)
implies that $\frac{\partial X}{\partial
a}\geq\exp\left(-L\left\|b\right\|_{C^{1}(\Omega)}\right)>0$ and therefore the
mapping $a\to X(a,y)$ is invertible for any $y\in(0,L)$ with inverse
$X^{-1}(x,y)$ such that $X(X^{-1}(x,y),y)=x$. Moreover,
$\frac{\partial X^{-1}}{\partial x}(x,y)=\frac{1}{\frac{\partial X}{\partial
a}(X^{-1}(x,y),y)},\quad\frac{\partial X^{-1}}{\partial
y}(x,y)=-\frac{\frac{\partial X}{\partial y}(X^{-1}(x,y),y)}{\frac{\partial
X}{\partial a}(X^{-1}(x,y),y)}.$ (3.15)
Using that $\left\|\frac{\partial X^{-1}}{\partial
x}\right\|_{L^{\infty}(\Omega)},\left\|\frac{\partial X^{-1}}{\partial
y}\right\|_{L^{\infty}(\Omega)}$ are bounded, it then follows that $X^{-1}$ is
Lipschitz for $(x,y)\in\Omega$.
Moreover, we can also estimate the Hölder semi-norms of $\frac{\partial
X^{-1}}{\partial x},\frac{\partial X^{-1}}{\partial y}$ as above using that
$\left\|X\right\|_{C^{1,\alpha}(\Omega)}\leq
C(L,\left\|b\right\|_{C^{1,\alpha}(\Omega)})$; then the following bound holds
$\left\|X^{-1}\right\|_{C^{1,\alpha}(\Omega)}\leq
C(L,\left\|b\right\|_{C^{1,\alpha}(\Omega)}),$
proving (3.10) and concluding the proof. ∎
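For the reader's convenience, the variational formula (3.11) used in the proof follows from differentiating the ODE (3.8) with respect to the label $a$; we record this standard computation as a sketch:

```latex
% Differentiating (3.8) in a gives the linear variational equation for \partial X/\partial a:
\begin{align*}
\frac{\partial}{\partial y}\frac{\partial X}{\partial a}(a,y)
  &= \frac{\partial b}{\partial X}\big(X(a,y),y\big)\,
     \frac{\partial X}{\partial a}(a,y),
  \qquad \frac{\partial X}{\partial a}(a,0)=1,\\
\intertext{which is a scalar linear ODE in $y$; solving it explicitly recovers (3.11):}
\frac{\partial X}{\partial a}(a,y)
  &= \exp\left(\int_{0}^{y}\frac{\partial b}{\partial X}\big(X(a,s),s\big)\ ds\right)>0.
\end{align*}
```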
Next, we will derive Hölder estimates for solutions to the hyperbolic
transport type problem given by
$(\textit{TP})\left\\{\begin{array}[]{lll}(v_{0}+V)\cdot\nabla\omega=0\
\mbox{in}\ \Omega,\\\ \omega=\omega_{0}\ \mbox{on}\
\partial\Omega_{-}\end{array}\right.$ (3.16)
which is the first building block to construct the fixed point operator.
###### Proposition 3.7.
Let $v_{0}$ be as in Theorem 3.1 and $0<M_{0}<\frac{\bar{v}^{2}_{0}}{2}$. Then
for every $M\leq M_{0}$, $V\in B_{M}(\Omega)$ and $\omega_{0}\in
C^{1,\alpha}(\partial\Omega_{-})$, there exists a unique $\omega\in
C^{1,\alpha}(\Omega)$ solving (3.16). Moreover, there exists a constant
$C=C(\alpha,\Omega,L,\bar{v}^{2}_{0})>0$ such that the following estimate
holds
$\left\|\omega\right\|_{C^{1,\alpha}(\Omega)}\leq
C\left\|\omega_{0}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}.$ (3.17)
Furthermore, let $\omega^{1},\omega^{2}\in C^{\alpha}(\Omega)$ be two
different solutions to (3.16) with $V$ given by $V^{1},V^{2}$ respectively.
Then
$\left\|\omega^{1}-\omega^{2}\right\|_{C^{\alpha}(\Omega)}\leq
C\left(\left\|\omega^{1}_{0}-\omega^{2}_{0}\right\|_{C^{\alpha}(\partial\Omega_{-})}+\left\|\omega^{1}_{0}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}\left\|V^{1}-V^{2}\right\|_{C^{\alpha}(\Omega)}\right)$
(3.18)
where $C=C(\alpha,L,\bar{v}^{2}_{0})>0$.
###### Remark 3.8.
It is important to emphasize that the positive constants $C$ depend only on
the fixed quantities $\alpha,L,\bar{v}^{2}_{0}$, i.e. $C=C(\alpha,L,\bar{v}^{2}_{0})$. Note also that
they might change from line to line. For clarity of exposition we will avoid
writing explicitly the dependence of the constants along the proofs throughout the
manuscript.
###### Proof.
We can solve equations (3.16) using the integral curves of $v=v_{0}+V$. More
precisely, the explicit solution to (3.16) is given by
$\omega(x,y)=\omega_{0}(X^{-1}(x,y)),$
where $X^{-1}$ is the inverse of the mapping $a\to X(a,y)$ solving the
ordinary differential equation (3.8) with
$b(x,y)=\frac{(v^{1}_{0}+V^{1})(x,y)}{(v^{2}_{0}+V^{2})(x,y)}$. Since
$0<M_{0}<\frac{\bar{v}^{2}_{0}}{2}$ and $M\leq M_{0}$, we have that
$\displaystyle\inf_{(x,y)\in\overline{\Omega}}\left|v^{2}_{0}+V^{2}\right|>0$
(cf. Remark 3.5), hence $b(x,y)$ has $C^{1,\alpha}(\Omega)$ regularity and
satisfies the bound
$\left\|b(x,y)\right\|_{C^{1,\alpha}(\Omega)}\leq C.$ (3.19)
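Granted the flow $X$ associated to $b$, the fact that $\omega(x,y)=\omega_{0}(X^{-1}(x,y))$ solves (3.16) is a direct chain rule computation, which we record as a sketch:

```latex
% Chain rule verification that \omega(x,y) = \omega_0(X^{-1}(x,y)) solves (3.16).
% Equivalently, \omega(X(a,y),y) = \omega_0(a) for all (a,y); differentiating in y:
\begin{align*}
0 &= \frac{\partial}{\partial y}\,\omega\big(X(a,y),y\big)
   = \partial_{x}\omega\,\frac{\partial X}{\partial y}+\partial_{y}\omega
   = b\,\partial_{x}\omega+\partial_{y}\omega,
\end{align*}
% and multiplying by v_0^2+V^2 (bounded away from zero by Remark 3.5) gives
\[
(v_{0}^{1}+V^{1})\,\partial_{x}\omega+(v_{0}^{2}+V^{2})\,\partial_{y}\omega
  =(v_{0}+V)\cdot\nabla\omega=0,
\]
% while X(a,0)=a yields the boundary condition \omega(x,0)=\omega_0(x).
```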
Using Lemma 3.6, we have that there exists a unique $X\in
C^{1,\alpha}(\Omega)$ solving the system (3.8) with inverse $X^{-1}$.
Therefore, invoking the estimate (3.10) in Lemma 3.6 and the bound (3.19) we
have that
$\displaystyle\left\|\omega(x,y)\right\|_{C^{1,\alpha}(\Omega)}=\left\|\omega_{0}(X^{-1}(x,y))\right\|_{C^{1,\alpha}(\Omega)}$
$\displaystyle\leq
C\left\|\omega_{0}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}\left(1+\left\|\nabla
X^{-1}\right\|_{C^{\alpha}(\Omega)}\right)^{1+\alpha}$ $\displaystyle\leq
C\left\|\omega_{0}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}.$ (3.20)
To show (3.18), we use the notation $\widehat{\omega}=\omega_{1}-\omega_{2}$
and $\widehat{V}=V_{1}-V_{2}$. From (3.16), we have that
$\left\\{\begin{array}[]{lll}(v_{0}+V_{2})\cdot\nabla\widehat{\omega}=-\widehat{V}\cdot\nabla\omega_{1}\quad\mbox{in}\
\Omega,\\\ \widehat{\omega}=\widehat{\omega_{0}}\quad\mbox{on
}\partial\Omega_{-}.\end{array}\right.$ (3.21)
Solving (3.21) using characteristics we have that
$\widehat{\omega}(x,y)=\widehat{\omega_{0}}(X^{-1}(x,y))-\int_{0}^{y}\left(\frac{\widehat{V}\cdot\nabla\omega_{1}}{v^{2}_{0}+V^{2}_{2}}\right)(X(X^{-1}(x,y),s),s)\
ds$ (3.22)
where $X$ solves the ordinary differential equation (3.8) with
$b(x,y)=\frac{(v^{1}_{0}+V^{1}_{2})(x,y)}{(v^{2}_{0}+V^{2}_{2})(x,y)}$.
Therefore, we infer that
$\displaystyle\left\|\widehat{\omega}(x,y)\right\|_{C^{\alpha}(\Omega)}$
$\displaystyle\leq$ $\displaystyle
C\left\|\widehat{\omega_{0}}\right\|_{C^{\alpha}(\partial\Omega_{-})}\left(1+\left\|\nabla
X^{-1}\right\|_{C^{0}(\Omega)}\right)$
$\displaystyle+C\left\|\displaystyle\int_{0}^{y}\left(\frac{\widehat{V}\cdot\nabla\omega_{1}}{v^{2}_{0}+V^{2}_{2}}\right)\left(X(X^{-1}(x,y),s),s\right)\
ds\right\|_{C^{\alpha}(\Omega)}=I_{1}+I_{2}.$
Applying estimate (3.10) in Lemma 3.6 and bound (3.19), we can estimate the
first term as before
$\left|I_{1}\right|\leq
C\left\|\widehat{\omega}_{0}\right\|_{C^{\alpha}(\partial\Omega_{-})}.$ (3.23)
Next, we estimate the second term $I_{2}$. To that purpose, let us define
$\phi(x,y)=\displaystyle\int_{0}^{y}\mathcal{H}(x,y,s)ds$ for any function
$\mathcal{H}\in C^{\alpha}(\Omega\times(0,L))$. Then we have that
$\left|\phi(x_{1},y)-\phi(x_{2},y)\right|\leq
L\left\|\mathcal{H}\right\|_{C^{\alpha}}\left|x_{1}-x_{2}\right|^{\alpha},\quad
y\in(0,L),$ (3.24)
and for any $0\leq y_{1}\leq y_{2}\leq L$
$\displaystyle\left|\phi(x,y_{1})-\phi(x,y_{2})\right|$ $\displaystyle\leq$
$\displaystyle\left(\int_{0}^{y_{2}}\left\|\mathcal{H}\right\|_{C^{\alpha}}\
ds\right)\left|y_{1}-y_{2}\right|^{\alpha}+\int_{y_{1}}^{y_{2}}\left\|\mathcal{H}\right\|_{L^{\infty}}\
ds$ (3.25) $\displaystyle\leq$ $\displaystyle
L\left\|\mathcal{H}\right\|_{C^{\alpha}}|y_{1}-y_{2}|^{\alpha}+\left\|\mathcal{H}\right\|_{L^{\infty}}|y_{1}-y_{2}|,\quad
x\in[0,1].$
Hence, we have shown that $\left\|\phi\right\|_{C^{\alpha}}\leq
C(L)\left\|\mathcal{H}\right\|_{C^{\alpha}}$ for $x\in[0,1],y\in(0,L)$.
Applying this result for
$\mathcal{H}=\left(\frac{\widehat{V}\cdot\nabla\omega_{1}}{v^{2}_{0}+V^{2}_{2}}\right)\left(X(X^{-1}(x,y),s),s\right)$,
we conclude that
$\displaystyle\left|I_{2}\right|\leq\left\|\left(\frac{\widehat{V}\cdot\nabla\omega_{1}}{v^{2}_{0}+V^{2}_{2}}\right)(X(X^{-1}(x,y),s),s)\right\|_{C^{\alpha}(\Omega)}$
$\displaystyle\leq$ $\displaystyle
C\left\|\frac{\widehat{V}\cdot\nabla\omega_{1}}{v^{2}_{0}+V^{2}_{2}}\right\|_{C^{\alpha}(\Omega)}$
(3.26) $\displaystyle\leq$ $\displaystyle
C\left\|\omega_{1}\right\|_{C^{1,\alpha}(\Omega)}\left\|\widehat{V}\right\|_{C^{\alpha}(\Omega)}$
$\displaystyle\leq$ $\displaystyle
C\left\|\omega_{0,1}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}\left\|\widehat{V}\right\|_{C^{\alpha}(\Omega)},$
where in the second inequality we have used the fact that $X$ and $X^{-1}$ are
Lipschitz, so bound (3.9) in Lemma 3.6 applies, and in the latter we have
invoked bound (3.20).
Hence, collecting estimates (3.23)-(3.26) we deduce that
$\left\|\widehat{\omega}(x,y)\right\|_{C^{\alpha}(\Omega)}\leq
C\left(\left\|\widehat{\omega}_{0}\right\|_{C^{\alpha}(\partial\Omega_{-})}+\left\|\omega_{0,1}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}\left\|\widehat{V}\right\|_{C^{\alpha}(\Omega)}\right),$
(3.27)
which shows (3.18) and concludes the proof. ∎
On the other hand, we have the following result for the div-curl problem:
###### Proposition 3.9.
For every $j\in\mathbb{R}$, $\omega\in C^{1,\alpha}(\Omega)$ and $f\in
C^{2,\alpha}(\partial\Omega)$ satisfying (3.2), there exists a unique solution
$W\in C^{2,\alpha}(\Omega)$ solving
$\left\\{\begin{array}[]{lll}\nabla\times W=\omega,\quad\mbox{in }\Omega\\\
\mbox{div }W=0,\quad\mbox{in }\Omega\\\ W\cdot n=f,\quad\mbox{on
}\partial\Omega\\\ \int_{\mathcal{C}}W\cdot n\ dS=j.\end{array}\right.$ (3.28)
Moreover, the solution satisfies the inequality
$\left\|W\right\|_{C^{2,\alpha}(\Omega)}\leq
C\left(\left\|\omega\right\|_{C^{1,\alpha}(\Omega)}+\left\|f\right\|_{C^{2,\alpha}(\partial\Omega)}+\left|j\right|\right),$
(3.29)
where $C=C(L,\alpha)>0$.
###### Proof.
To solve system (3.28), we examine the following auxiliary problem, namely
$\left\\{\begin{array}[]{lll}\Delta\psi=\omega,\quad x\in\Omega\\\
\psi=-j+\displaystyle\int_{0}^{x}(f(\xi)-A)\ d\xi,\quad
x\in\partial\Omega_{+}\\\ \psi=\displaystyle\int_{0}^{x}(f(\xi)-A)\ d\xi,\quad
x\in\partial\Omega_{-}\\\ \end{array}\right.$ (3.30)
where $A=\int_{\partial\Omega_{+}}f\ dS=\int_{\partial\Omega_{-}}f\ dS.$
Notice that this particular choice of $A$ yields a well-defined single-valued
function $\psi$. Moreover, if $\psi$ is a solution to (3.30) of sufficient
regularity (actually $\psi\in C^{3,\alpha}(\Omega)$), we get a solution to
(3.28) by defining $W=(0,A)+\nabla^{\perp}\psi$ where
$\nabla^{\perp}\psi=(-\frac{\partial\psi}{\partial
y},\frac{\partial\psi}{\partial x})$. We only verify the last condition in
(3.28), since the other ones are straightforward to check. Indeed, we have
that
$\displaystyle\int_{\mathcal{C}}W\cdot n\
dS=-\int_{0}^{L}\frac{\partial\psi}{\partial y}(0,y)\
dy=-\psi(0,L)+\psi(0,0)=j.$
Hence applying classical regularity theory for elliptic problems (cf. [11]) we
infer that there exists a unique solution $W\in C^{2,\alpha}(\Omega)$
satisfying the following bound:
$\left\|W\right\|_{C^{2,\alpha}(\Omega)}\leq
C\left(\left\|\omega\right\|_{C^{1,\alpha}(\Omega)}+\left\|f\right\|_{C^{2,\alpha}(\partial\Omega)}+\left|j\right|\right),$
as desired. ∎
#### 3.1.2. The fixed point argument and construction of the solution
The proof is based on defining an adequate operator $\Gamma$ which has a fixed
point $V$ such that $v=v_{0}+V$ is a solution to (1.1) and (3.3).
We define the operator $\Gamma:B_{M}(\Omega)\to C_{\star}^{2,\alpha}(\Omega)$
in two steps. First, given $V\in B_{M}(\Omega)$ we define $\omega\in
C^{1,\alpha}(\Omega)$ solving the following transport-type problem
$\left\\{\begin{array}[]{lll}(v_{0}+V)\cdot\nabla\omega=0,\ \mbox{in}\
\Omega\\\ \omega=\omega_{0},\ \mbox{on}\ \partial\Omega_{-}\end{array}\right.$
(3.31)
with $\omega_{0}$ given by
$\omega_{0}=\partial_{x}f+\frac{1}{f+v^{2}_{0}}\left(-\partial_{x}h+\partial_{x}(v^{1}_{0}V^{1})+\frac{1}{2}\partial_{x}\left|V^{1}\right|^{2}+f\partial_{y}v^{1}_{0}\right),\
\forall x\in\partial\Omega_{-},$ (3.32)
where $v_{0}=(v^{1}_{0},v^{2}_{0})$ and $V=(V^{1},V^{2})$. As a second step,
we define $W\in C^{2,\alpha}(\Omega)$ as the unique solution to the following
div-curl problem
$\left\\{\begin{array}[]{lll}\nabla\times W=\omega,\ \mbox{in}\ \Omega\\\
\mbox{div }W=0,\ \mbox{in}\ \Omega\\\ W\cdot n=f,\ \mbox{on}\
\partial\Omega\\\ \displaystyle\int_{\mathcal{C}}W\cdot n\
dS=J-J_{0}.\end{array}\right.$ (3.33)
Thus we define $\Gamma(V)=W$.
###### Lemma 3.10.
Let $v_{0}$ be as in Theorem 3.1, $h\in C^{2,\alpha}(\partial\Omega_{-})$ and
$f\in C^{2,\alpha}(\partial\Omega)$ satisfying (3.2). There exists
$\delta_{0}=\delta_{0}(\Omega,\alpha,L,\bar{v}^{2}_{0})$ and
$M_{0}\in(0,\frac{\bar{v}^{2}_{0}}{2})$ such that if $M\in(0,M_{0})$ and
$\epsilon\leq\delta_{0},\
\left\|h\right\|_{C^{2,\alpha}(\partial\Omega_{-})}+\left\|f\right\|_{C^{2,\alpha}(\partial\Omega)}\leq
M_{0}\delta_{0}\mbox{ and }\left|J-J_{0}\right|\leq M_{0}\delta_{0},$ (3.34)
then $\Gamma(B_{M}(\Omega))\subset B_{M}(\Omega)$. Moreover, the operator
$\Gamma$ has a unique fixed point in $B_{M}(\Omega).$
###### Remark 3.11.
Notice that the operator $\Gamma$ is not a compact operator, and hence we
cannot prove the result applying Schauder’s fixed point theorem.
###### Proof.
The well-definedness of the operator $\Gamma$ follows directly from
Proposition 3.7 and Proposition 3.9. Indeed, since $v_{0}$ satisfies the
hypothesis in Theorem 3.1 and $h\in C^{2,\alpha}(\partial\Omega_{-}),f\in
C^{2,\alpha}(\partial\Omega)$, we have that $\omega_{0}$ as in (3.32)
satisfies $\omega_{0}\in C^{1,\alpha}(\partial\Omega_{-})$. Applying
Proposition 3.7, there exists a unique $\omega\in C^{1,\alpha}(\Omega)$
satisfying (3.31). Therefore Proposition 3.9 gives a unique $W\in
C_{\star}^{2,\alpha}(\Omega)$ satisfying (3.33).
We now show that the operator $\Gamma$ maps $V\in B_{M}(\Omega)$ into itself.
Using inequality (3.29) with $j=J-J_{0}$ and then (3.17), we have that
$\displaystyle\left\|\Gamma(V)\right\|_{C^{2,\alpha}(\Omega)}$
$\displaystyle\leq$ $\displaystyle
C\left(\left\|\omega\right\|_{C^{1,\alpha}(\Omega)}+\left\|f\right\|_{C^{2,\alpha}(\partial\Omega)}+\left|J-J_{0}\right|\right)$
(3.35) $\displaystyle\leq$ $\displaystyle
C\left(\left\|\omega_{0}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}+\left\|f\right\|_{C^{2,\alpha}(\partial\Omega)}+\left|J-J_{0}\right|\right).$
In addition, since $V\in B_{M}(\Omega)$, the function $\omega_{0}$ defined in
(3.32) can be estimated as follows
$\displaystyle\left\|\omega_{0}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}\ $
$\displaystyle=$
$\displaystyle\left\|\partial_{x}f+\frac{1}{f+v^{2}_{0}}\left(\partial_{x}h-\partial_{x}(v^{1}_{0}V^{1})+\frac{1}{2}\partial_{x}\left|V^{1}\right|^{2}+f\partial_{y}v^{1}_{0}\right)\right\|_{C^{1,\alpha}(\partial\Omega_{-})}$
(3.36) $\displaystyle\leq$ $\displaystyle
C\left(\left\|h\right\|_{C^{2,\alpha}(\partial\Omega_{-})}+\left\|f\right\|_{C^{2,\alpha}(\partial\Omega)}+\epsilon\left\|V\right\|_{C^{2,\alpha}(\Omega)}+\left\|V\right\|^{2}_{C^{2,\alpha}(\Omega)}+\epsilon\left\|f\right\|_{C^{2,\alpha}(\partial\Omega)}\right)$
$\displaystyle\leq$ $\displaystyle
C\left(\left\|h\right\|_{C^{2,\alpha}(\partial\Omega_{-})}+\left\|f\right\|_{C^{2,\alpha}(\partial\Omega)}+(\epsilon
M_{0}+M_{0}^{2})\right)\leq C(\delta_{0}M_{0}+M_{0}^{2}).$
Combining estimates (3.35)-(3.36) we have that
$\left\|\Gamma(V)\right\|_{C^{2,\alpha}(\Omega)}\leq
C(\delta_{0}M_{0}+M_{0}^{2}).$ (3.37)
Choosing $C\delta_{0}\leq\frac{1}{4}$ and $CM_{0}\leq\frac{1}{4}$, we obtain
$\Gamma(B_{M}(\Omega))\subset B_{M}(\Omega)$, for $M\in(0,M_{0})$.
We now claim that $B_{M}(\Omega)$ endowed with the topology $C^{1,\alpha}$ is
a complete metric space which we will denote as
$(B_{M}(\Omega),\left\|\cdot\right\|_{C^{1,\alpha}})$. We also claim that
$\Gamma:(B_{M}(\Omega),\left\|\cdot\right\|_{C^{1,\alpha}})\to(B_{M}(\Omega),\left\|\cdot\right\|_{C^{1,\alpha}})$
is a contraction mapping.
In order to prove the first claim, it is enough to show that $B_{M}(\Omega)$
is a closed subset of $C^{1,\alpha}(\Omega)$. Assume that the sequence
$\\{V_{n}\\}\subset B_{M}(\Omega)$ converges to some $V$ strongly in
$C^{1,\alpha}(\Omega)$. Since $\\{V_{n}\\}$ is bounded in
$C^{2,\alpha}(\Omega)$, the Arzelà-Ascoli theorem implies that there exists a
subsequence $\\{V_{{n}_{k}}\\}$ which converges in $C^{2}(\Omega)$ to some
$U$. Hence, $V=U\in C^{2}(\Omega)$ and $\\{V_{n}\\}\ \to V\in C^{2}(\Omega)$.
Moreover, it turns out that the limit $V\in B_{M}(\Omega)$. Indeed, using the
definition of the norm $\left\|V\right\|_{C^{2,\alpha}(\Omega)}$ we have that
$\left\|V_{n}\right\|_{C^{2}(\Omega)}+\frac{\left|\partial^{\beta}V_{n}(x)-\partial^{\beta}V_{n}(y)\right|}{\left|x-y\right|^{\alpha}}\leq
M,\quad x,y\in\Omega,x\neq y,\mbox{ for }|\beta|=2.$
Using that $\partial^{\beta}V_{n}\to\partial^{\beta}V$ uniformly in $\Omega$
we can take the limit in the previous inequality to obtain
$\left\|V\right\|_{C^{2}(\Omega)}+\frac{\left|\partial^{\beta}V(x)-\partial^{\beta}V(y)\right|}{\left|x-y\right|^{\alpha}}\leq
M,\quad x,y\in\Omega,x\neq y,\mbox{ for }|\beta|=2.$
Taking the supremum in $x,y\in\Omega,x\neq y$ and the maximum in
$\left|\beta\right|$ we obtain that $\left\|V\right\|_{C^{2,\alpha}}\leq M$,
thus $V\in B_{M}(\Omega)$.
We are just left to show our second claim, namely that
$\Gamma:(B_{M}(\Omega),\left\|\cdot\right\|_{C^{1,\alpha}})\to(B_{M}(\Omega),\left\|\cdot\right\|_{C^{1,\alpha}})$
is a contraction. For $V^{1},V^{2}\in B_{M}(\Omega)$, we need to estimate the
difference
$\left\|\Gamma(V^{1})-\Gamma(V^{2})\right\|_{C^{1,\alpha}(\Omega)}$. Due to
the linearity of the div-curl problem (3.33) we can use inequality (3.29) and
bound (3.18) to get
$\displaystyle\left\|\Gamma(V^{1})-\Gamma(V^{2})\right\|_{C^{1,\alpha}(\Omega)}$
$\displaystyle\leq C\left\|\omega^{1}-\omega^{2}\right\|_{C^{\alpha}(\Omega)}$
$\displaystyle\leq
C\left(\left\|\omega^{1}_{0}-\omega^{2}_{0}\right\|_{C^{\alpha}(\partial\Omega_{-})}+\left\|\omega_{0,1}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}\left\|V^{1}-V^{2}\right\|_{C^{\alpha}(\Omega)}\right).$
Computing the difference $\omega^{1}_{0}-\omega^{2}_{0}$ using equation (3.32)
we have that
$\displaystyle\left\|\omega^{1}_{0}-\omega^{2}_{0}\right\|_{C^{\alpha}(\partial\Omega_{-})}$
$\displaystyle=$
$\displaystyle\left\|\frac{1}{f+v_{0}^{2}}\left(-\partial_{x}(v^{1}_{0}(x)(V^{1}_{1}-V^{2}_{1}))-V^{1}_{2}\partial_{x}(V^{1}_{1}-V^{2}_{1})-(V^{1}_{1}-V^{2}_{1})\partial_{x}V^{1}_{1}\right)\right\|_{C^{\alpha}(\partial\Omega_{-})}$
(3.38) $\displaystyle\leq$ $\displaystyle
C\left(\delta_{0}M_{0}\left\|V^{1}-V^{2}\right\|_{C^{1,\alpha}(\Omega)}+\delta_{0}M_{0}\left\|V^{1}-V^{2}\right\|_{C^{1,\alpha}(\Omega)}\right).$
On the other hand, we notice that applying bound (3.36) yields
$\left\|\omega_{0,1}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}\leq
C(\delta_{0}M_{0}+M_{0}^{2}),$ (3.39)
and hence combining estimates (3.38) and (3.39) we have shown that
$\displaystyle\left\|\Gamma(V^{1})-\Gamma(V^{2})\right\|_{C^{1,\alpha}(\Omega)}\leq
C(\delta_{0}M_{0}+M_{0}^{2})\left\|V^{1}-V^{2}\right\|_{C^{1,\alpha}}<\theta\left\|V^{1}-V^{2}\right\|_{C^{1,\alpha}},$
where $\theta$ is strictly less than one for $C\delta_{0}\leq\frac{1}{4}$ and
$CM_{0}\leq\frac{1}{4}$. Therefore,
$\Gamma:(B_{M}(\Omega),\left\|\cdot\right\|_{C^{1,\alpha}})\to(B_{M}(\Omega),\left\|\cdot\right\|_{C^{1,\alpha}})$
is a contraction mapping, for $M\in(0,M_{0})$.
Invoking Banach’s fixed point theorem we can conclude that $\Gamma$ admits a
unique fixed point $V\in B_{M}(\Omega)$, i.e. $\Gamma(V)=V$, which concludes
the proof. ∎
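Concretely, the fixed point of Lemma 3.10 can be obtained as the limit of the Picard iteration; a standard sketch (taking, say, $V_{0}=0$):

```latex
V_{n+1}=\Gamma(V_{n}),\qquad
\left\|V_{n+1}-V_{n}\right\|_{C^{1,\alpha}}
\leq\theta\left\|V_{n}-V_{n-1}\right\|_{C^{1,\alpha}}
\leq\theta^{n}\left\|V_{1}-V_{0}\right\|_{C^{1,\alpha}},\qquad 0<\theta<1,
```

so $\{V_{n}\}$ is a Cauchy sequence in the complete metric space $(B_{M}(\Omega),\left\|\cdot\right\|_{C^{1,\alpha}})$, and its limit $V$ satisfies $\Gamma(V)=V$.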
#### 3.1.3. Proof of Theorem 3.1
Let $\delta_{0}=\delta_{0}(\Omega,\alpha,L,\bar{v}^{2}_{0})$ and $M_{0}$ be
the constants defined in Lemma 3.10. Take $0<\epsilon\leq\delta_{0}$,
$K=\delta_{0}$ and $M\in(0,M_{0})$. Therefore, Lemma 3.10 implies that
$\Gamma$ has a unique fixed point $V\in B_{M}(\Omega)$.
We claim that $V\in B_{M}(\Omega)$ is a fixed point of the operator $\Gamma$
if and only if $v=v_{0}+V$ is the velocity field of a solution $(v,p)\in
C^{{2,\alpha}}(\Omega)\times C^{{2,\alpha}}(\Omega)$ to (1.1) and (3.3).
Indeed, on the one hand assume that $V\in B_{M}(\Omega)$ is a fixed point of
$\Gamma$ and write $v=v_{0}+V$. Then using the definition of the space
$B_{M}(\Omega)$ (3.5)-(3.6) the following properties hold
$\mbox{div }v=\mbox{div }v_{0}+\mbox{div }V=0,\quad\mbox{in }\Omega,\quad
v\cdot n=v_{0}\cdot n+V\cdot n=v_{0}\cdot n+f,\quad\mbox{on }\partial\Omega.$
Moreover, from the last equation in (3.33) we have that
$\int_{\mathcal{C}}v\cdot n\ dS=\int_{\mathcal{C}}v_{0}\cdot n\ dS+J-J_{0}=J.$
Since $V$ is fixed point of $\Gamma$ and $\nabla\times v_{0}=0$ we infer that
$\nabla\times v=\nabla\times(v_{0}+V)=\nabla\times
V=\nabla\times[\Gamma(V)]=\omega,$
where in the last equality we have used the first equation in (3.33) with
$\omega$ solving the transport type system (3.31)-(3.32). Hence,
$0=v\cdot\nabla\omega=v\cdot\nabla\left[\nabla\times
v\right]=\nabla\times\left[(v\cdot\nabla)v\right],\mbox{ in }\Omega$
and $\omega_{0}=(\nabla\times v)_{0}$ in $\partial\Omega_{-}$. Defining the
function $g$ by means of
$g=-\int_{0}^{x}(v\cdot\nabla)v(y)\cdot dy,$ (3.40)
we have that
$v\cdot\nabla v=-\nabla g.$ (3.41)
Notice that since $v=v_{0}+V\in C^{2,\alpha}(\Omega)$, we infer that $g\in
C^{2,\alpha}(\Omega)$. The integral of $g$ is computed along any curve
connecting the origin with $x$. We now claim that $g$ is a single-valued
function in $\Omega=\mathbb{S}^{1}\times(0,L)$. To this end it suffices to
check that $\int_{0}^{1}(v\cdot\nabla)v_{1}\ dx=0$. Indeed, since $(\nabla\times
v)(x,0)=-\frac{\partial V_{1}}{\partial y}(x,0)+\partial_{x}f$ and
$(\nabla\times
v)=\partial_{x}f+\frac{1}{f+v^{2}_{0}}\left(\partial_{x}h-\partial_{x}(v^{1}_{0}V^{1})+\frac{1}{2}\partial_{x}\left|V^{1}\right|^{2}+f\partial_{y}v^{1}_{0}\right),\
\forall x\in\partial\Omega_{-}$
we can check that $v=v_{0}+V$ solves
$(v\cdot\nabla)v^{1}=-\partial_{x}(h+p_{0}),\quad\forall
x\in\partial\Omega_{-}.$ (3.42)
Thus,
$\int_{0}^{1}(v\cdot\nabla)v_{1}\ dx=-\int_{0}^{1}\partial_{x}(h+p_{0})\
dx=0,$ (3.43)
by periodicity in $\mathbb{S}^{1}$. Hence, $g$ is a single-valued function. We
define the pressure function $p$ as $p=g+g_{0}$, where $g_{0}=h(0)+p_{0}(0)$.
Then, using (3.41), it follows that $-\nabla p=(v\cdot\nabla)v$ and
$p(0)=h(0)+p_{0}(0)$ at the point $x=0$ of $\partial\Omega_{-}$. Moreover,
evaluating (3.41) on $\partial\Omega_{-}$ and using (3.42) we observe that
$\partial_{x}p=\partial_{x}(h+p_{0})$, and then $p=p_{0}+h$ for all
$x\in\partial\Omega_{-}$. Therefore, $p$ solves equation (1.1) with regularity
$p\in C^{2,\alpha}(\Omega)$ and satisfies the boundary condition (3.3).
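A step used implicitly above: the circulation of $(v\cdot\nabla)v$ along the horizontal loops $\mathbb{S}^{1}\times\{y\}$ does not depend on $y$, since the field is curl-free; by Stokes' theorem on the strip between heights $0$ and $y$,

```latex
\oint_{\mathbb{S}^{1}\times\{y\}}(v\cdot\nabla)v\cdot d\ell
-\oint_{\mathbb{S}^{1}\times\{0\}}(v\cdot\nabla)v\cdot d\ell
=\int_{\mathbb{S}^{1}\times(0,y)}\nabla\times\left[(v\cdot\nabla)v\right]\,dA=0,
```

so the vanishing of the circulation at $y=0$, established in (3.43), propagates to every height, and $g$ is single-valued on all of $\Omega$.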
On the other hand, let us now assume that $v=v_{0}+V$ with $V\in
B_{M}(\Omega)$ is the velocity field of a solution $(v,p)\in
C^{2,\alpha}(\Omega)\times C^{2,\alpha}(\Omega)$ to (1.1) and (3.3). Then,
applying the curl operator to (1.1) we have that
$(v_{0}+V)\cdot\nabla(\nabla\times V)=0,\ \mbox{in }\ \Omega.$ (3.44)
Combining (1.1) and the boundary conditions (3.3), we obtain that
$(\nabla\times
V)=\partial_{x}f+\frac{1}{f+v^{2}_{0}}\left(\partial_{x}h-\partial_{x}(v^{1}_{0}V^{1})+\frac{1}{2}\partial_{x}\left|V^{1}\right|^{2}+f\partial_{y}v^{1}_{0}\right),\
\mbox{on}\ \partial\Omega_{-}.$ (3.45)
Therefore $\nabla\times V$ satisfies equation (3.31) with the same boundary
condition (3.32). Integrating along characteristics it follows that
$\nabla\times V=\omega$. Since both $W=\Gamma(V)$ and $V$ satisfy the
equations (3.33), and Proposition 3.9 implies that the solution is unique, we
have that $\Gamma(V)-V=0$ and thus $V$ is a fixed point of the operator $\Gamma$.
Given that the fixed point of $\Gamma$ in $B_{M}(\Omega)$ is unique (i.e. for
$\left\|v-v_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq M$), the uniqueness of
solutions $(v,p)$ satisfying (1.1) and boundary conditions (3.3) follows.
$\square$
### 3.2. Boundary value (C) for the 2D steady Euler equation
In this section we will deal with the boundary value (C) and construct
solutions to the Euler equation (1.1) satisfying those boundary conditions.
However, fixing two arbitrary pressure values on the boundaries
$\partial\Omega_{-}$ and $\partial\Omega_{+}$ is not possible in general and
some compatibility condition is needed (cf. Theorem 3.13 below). Nevertheless,
the problem (1.1) with the following boundary conditions is solvable for a
large class of functions $h^{-},h^{+},f^{-}$:
$p=h^{-}\mbox{ on }\partial\Omega_{-},\partial_{x}p=\partial_{x}h^{+}\mbox{ on
}\partial\Omega_{+}\mbox{ and }v\cdot n=f^{-}\mbox{ on }\partial\Omega_{-}.$
(3.46)
We then have the following result:
###### Theorem 3.12.
Let $\Omega=\\{(x,y)\in\mathbb{S}^{1}\times(0,L)\\}$, with $L>0$ and
$\alpha\in(0,1)$. Suppose that $(v_{0},p_{0})\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ is a solution of (1.1) with
$\bar{v}^{2}_{0}=\displaystyle\inf_{(x,y)\in\bar{\Omega}}\left|v^{2}_{0}(x,y)\right|>0$
and $\mbox{curl }v_{0}=0$. For $\mathcal{C}=\\{(0,y),y\in[0,L]\\}$ we have
that the integral $\int_{\mathcal{C}}(v_{0}\cdot n)\ dS$ is a real constant
that we will denote as $J_{0}$. There exist $\epsilon>0$ , $M>0$ sufficiently
small as well as $K>0$ such that for $v_{0}$ as above with
$\left\|v^{1}_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq\epsilon$ and $h^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$, $h^{+}\in
C^{{2,\alpha}}(\partial\Omega_{+}),f^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$ and $J\in\mathbb{R}$ satisfying
$\left\|h^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left\|h^{+}\right\|_{C^{{2,\alpha}}(\partial\Omega_{+})}+\left\|f^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left|J-J_{0}\right|\leq
KM,$ (3.47)
there exists a unique $(v,p)\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ to (1.1) with
$\left\|v-v_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq M$ such that
$p=p_{0}+h^{-}\ \mbox{on}\
\partial\Omega_{-},\ \partial_{x}p=\partial_{x}p_{0}+\partial_{x}h^{+}\ \mbox{on}\
\partial\Omega_{+},\ v\cdot n=v_{0}\cdot n+f^{-}\ \mbox{on}\
\partial\Omega_{-}\mbox{ and }\int_{\mathcal{C}}v\cdot n\ dS=J.$ (3.48)
The constants $M,K,$ as well as $\epsilon,$ depend only on
$\alpha,L,\bar{v}^{2}_{0}$.
The proof follows the same lines as the proof of Theorem 3.1, hence we will
only highlight the main modifications and important points towards the proof.
To that purpose, let us start by defining the new functional spaces where we
will perform the fixed point argument.
For any $M>0$, let us denote by $\hat{B}_{M}$ the closed ball in
$C^{2,\alpha}(\Omega)\times C^{2,\alpha}(\partial\Omega_{+})$ with radius $M$,
i.e.,
$\hat{B}_{M}=\\{g=(g_{1},g_{2})\in C^{2,\alpha}(\Omega)\times
C^{2,\alpha}(\partial\Omega_{+}):\left\|g\right\|_{C^{2,\alpha}(\Omega)\times
C^{2,\alpha}(\partial\Omega_{+})}\leq M\\}.$ (3.49)
We define the operator $\Gamma:\hat{B}_{M}\to C^{2,\alpha}(\Omega)\times
C^{2,\alpha}(\partial\Omega_{+})$ in two steps. First, given
$(V,f^{+})\in\hat{B}_{M}$ we define $\omega\in C^{1,\alpha}(\Omega)$ solving
the following transport-type problem
$\left\\{\begin{array}[]{lll}(v_{0}+V)\cdot\nabla\omega=0,\ \mbox{in}\
\Omega\\\ \omega=\omega_{0},\ \mbox{on}\ \partial\Omega_{-}\end{array}\right.$
(3.50)
with $\omega_{0}$ given by
$\omega_{0}=\partial_{x}f^{-}+\frac{1}{f^{-}+v^{2}_{0}}\left(-\partial_{x}h^{-}+\partial_{x}(v^{1}_{0}V^{1})+\frac{1}{2}\partial_{x}\left|V^{1}\right|^{2}+f^{-}\partial_{y}v^{1}_{0}\right),\
\forall x\in\partial\Omega_{-},$ (3.51)
where $v_{0}=(v^{1}_{0},v^{2}_{0})$ and $V=(V^{1},V^{2})$. As a second step,
we define $(W,\tilde{f}^{+})\in C^{2,\alpha}(\Omega)\times
C^{2,\alpha}(\partial\Omega_{+})$ as
$\tilde{f}^{+}=\int_{0}^{x}\omega(x^{\prime},L)+\frac{1}{v^{2}_{0}+f^{+}}\left(\partial_{x}h^{+}-\partial_{x}(v^{1}_{0}V^{1})-\frac{1}{2}\partial_{x}\left|V^{1}\right|^{2}-f^{+}\partial_{y}v^{1}_{0}\right)\
dx^{\prime}+\tilde{f}^{+}(0,L),\ $ (3.52)
where the constant $\tilde{f}^{+}(0,L)$ is given by
$\tilde{f}^{+}(0,L)=\int_{0}^{1}f^{-}(x)dx-\int_{0}^{1}\mathcal{T}(x)(1-x)\
dx$ (3.53)
with
$\mathcal{T}(x)=\omega+\frac{1}{v^{2}_{0}+f^{+}}\left(\partial_{x}h^{+}-\partial_{x}(v^{1}_{0}V^{1})-\frac{1}{2}\partial_{x}\left|V^{1}\right|^{2}-f^{+}\partial_{y}v^{1}_{0}\right).$
The function $W$ is the unique solution to the following div-curl problem
$\left\\{\begin{array}[]{lll}\nabla\times W=\omega,\ \mbox{in}\ \Omega\\\
\mbox{div }W=0,\ \mbox{in}\ \Omega\\\ W\cdot n=\tilde{f}^{+},\ \mbox{on}\
\partial\Omega_{+}\\\ W\cdot n=f^{-},\ \mbox{on}\ \partial\Omega_{-}\\\
\displaystyle\int_{\mathcal{C}}W\cdot n\ dS=J-J_{0}.\end{array}\right.$ (3.54)
Thus we define $\Gamma(V,f^{+})=(W,\tilde{f}^{+})$.
#### 3.2.1. Proof of Theorem 3.12
Similarly as in Theorem 3.1, we will show that the operator $\Gamma$ has a
fixed point $(V,f^{+})$ such that $v=v_{0}+V$ is a solution to (1.1) and
(3.48). To that purpose let $v_{0}$ be as in Theorem 3.12, $h^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$, $h^{+}\in
C^{{2,\alpha}}(\partial\Omega_{+})$ and $f^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$ satisfying (3.2). We will show that there
exists $\delta_{0}=\delta_{0}(\Omega,\alpha,L,\bar{v}^{2}_{0})$ and
$M_{0}\in(0,\frac{\bar{v}^{2}_{0}}{2})$ such that if $M\in(0,M_{0})$ and
$\epsilon\leq\delta_{0},\left\|h^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left\|h^{+}\right\|_{C^{{2,\alpha}}(\partial\Omega_{+})}+\left\|f^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}\leq
M_{0}\delta_{0}\mbox{ and }\left|J-J_{0}\right|\leq M_{0}\delta_{0},$ (3.55)
then $\Gamma(\hat{B}_{M}(\Omega))\subset\hat{B}_{M}(\Omega)$. Moreover, the
operator $\Gamma$ has a unique fixed point in $\hat{B}_{M}(\Omega).$
First, let us show that the operator $\Gamma$ maps $(V,f^{+})\in\hat{B}_{M}$
into itself. Using inequality (3.29) with $j=J-J_{0}$ and then (3.17), we have
that
$\displaystyle\left\|\Gamma(V,f^{+})\right\|_{C^{2,\alpha}}$
$\displaystyle\leq$ $\displaystyle
C(\left\|\omega\right\|_{C^{1,\alpha}(\Omega)}+\left\|f^{-}\right\|_{C^{2,\alpha}(\partial\Omega_{-})}+\left\|\tilde{f}^{+}\right\|_{C^{2,\alpha}(\partial\Omega_{+})}+\left|J-J_{0}\right|)$
$\displaystyle\leq$ $\displaystyle
C(\left\|\omega_{0}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}+\left\|f^{-}\right\|_{C^{2,\alpha}(\partial\Omega_{-})}+\left\|\tilde{f}^{+}\right\|_{C^{2,\alpha}(\partial\Omega_{+})}+\left|J-J_{0}\right|)$
On the one hand, since $(V,f^{+})\in\hat{B}_{M}(\Omega)$ and (3.55) holds, the
function $\omega_{0}$ defined in (3.51) can be estimated as in (3.36)
$\displaystyle\left\|\omega_{0}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}\ $
$\displaystyle=$
$\displaystyle\left\|\partial_{x}f^{-}+\frac{1}{f^{-}+v^{2}_{0}}\left(\partial_{x}h^{-}-\partial_{x}(v^{1}_{0}V^{1})+\frac{1}{2}\partial_{x}\left|V^{1}\right|^{2}+f^{-}\partial_{y}v^{1}_{0}\right)\right\|_{C^{1,\alpha}(\partial\Omega_{-})}$
(3.57) $\displaystyle\leq$ $\displaystyle C(\delta_{0}M+M^{2}).$
On the other hand, using (3.52), (3.17) and bound (3.57) we have that
$\displaystyle\left\|\tilde{f^{+}}\right\|_{C^{2,\alpha}(\partial\Omega_{+})}$
$\displaystyle\leq$ $\displaystyle
C\bigl{(}\left\|\omega_{0}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}+\left\|h^{+}\right\|_{C^{2,\alpha}(\partial\Omega_{+})}+\epsilon\left\|V\right\|_{C^{2,\alpha}(\Omega)}+\left\|V\right\|^{2}_{C^{2,\alpha}(\Omega)}$
(3.58) $\displaystyle\
+\epsilon\left\|f^{+}\right\|_{C^{2,\alpha}(\partial\Omega_{+})}+\left\|f^{-}\right\|_{C^{2,\alpha}}\bigr{)}\leq
C\left(\delta_{0}M+M^{2}\right).$
Combining the estimate for $\Gamma(V,f^{+})$ above with (3.57)-(3.58) we have that
$\displaystyle\left\|\Gamma(V,f^{+})\right\|_{C^{2,\alpha}}$
$\displaystyle\leq$ $\displaystyle
C\left(\delta_{0}M+M^{2}+\left\|f^{-}\right\|_{C^{2,\alpha}(\partial\Omega_{-})}+\left|J-J_{0}\right|\right)$
(3.59) $\displaystyle\leq$ $\displaystyle C\left(\delta_{0}M+M^{2}\right),$
where we have used the smallness assumption (3.55) in the last inequality.
Choosing $C\delta_{0}\leq\frac{1}{4}$ and $CM_{0}\leq\frac{1}{4}$, we obtain
$\Gamma(\hat{B}_{M}(\Omega))\subset\hat{B}_{M}(\Omega)$, for $M\in(0,M_{0})$.
By mimicking the arguments of Lemma 3.10, we can show that
$\hat{B}_{M}(\Omega)$ endowed with the topology $C^{1,\alpha}\times
C^{1,\alpha}$ is a complete metric space denoted by
$(\hat{B}_{M},\left\|\cdot\right\|_{C^{1,\alpha}(\Omega)\times
C^{1,\alpha}(\partial\Omega_{+})})$. We claim that
$\Gamma:(\hat{B}_{M},\left\|\cdot\right\|_{C^{1,\alpha}(\Omega)\times
C^{1,\alpha}(\partial\Omega_{+})})\to(\hat{B}_{M},\left\|\cdot\right\|_{C^{1,\alpha}(\Omega)\times
C^{1,\alpha}(\partial\Omega_{+})})$ is a contraction mapping.
To that purpose, we need to estimate the difference
$\left\|\Gamma(V^{1},f^{1,+})-\Gamma(V^{2},f^{2,+})\right\|_{C^{1,\alpha}}$
for $(V^{1},f^{1,+}),(V^{2},f^{2,+})\in\hat{B}_{M}(\Omega)$. Using the
linearity of problem (3.54), we have that bound (3.29) yields
$\left\|\Gamma(V^{1},f^{1,+})-\Gamma(V^{2},f^{2,+})\right\|_{C^{1,\alpha}(\Omega)}\leq
C\left\|\omega^{1}-\omega^{2}\right\|_{C^{\alpha}(\Omega)}+\left\|\tilde{f}^{1,+}-\tilde{f}^{2,+}\right\|_{C^{1,\alpha}(\partial\Omega_{+})}.$
(3.60)
Using (3.38), (3.39) and the smallness assumption (3.55) we infer that the
difference $\omega^{1}-\omega^{2}$ can be bounded by
$\left\|\omega^{1}-\omega^{2}\right\|_{C^{\alpha}(\Omega)}\leq
C\left(\delta^{2}_{0}M+\delta_{0}M+M^{2}\right)\left\|V^{1}-V^{2}\right\|_{C^{1,\alpha}(\Omega)}.$
(3.61)
To estimate the latter term on the right hand side in (3.60), we notice that
using (3.52)
$\left\|\tilde{f}^{1,+}-\tilde{f}^{2,+}\right\|_{C^{1,\alpha}(\partial\Omega_{+})}\leq I_{1}+I_{2}$
with
$\displaystyle
I_{1}=\left\|\displaystyle\int_{0}^{x}\omega^{1}(s,L)-\omega^{2}(s,L)\
ds\right\|_{C^{1,\alpha}(\partial\Omega_{+})}$ $\displaystyle\leq$
$\displaystyle\left\|\omega^{1}-\omega^{2}\right\|_{C^{\alpha}(\partial\Omega_{+})}$
$\displaystyle\leq$ $\displaystyle
C\left(\delta^{2}_{0}M+\delta_{0}M+M^{2}\right)\left\|V^{1}-V^{2}\right\|_{C^{1,\alpha}(\Omega)}$
and
$I_{2}=\left\|\displaystyle\int_{0}^{x}\frac{1}{(v^{2}_{0}+f^{1,+})(v^{2}_{0}+f^{2,+})}\mathcal{M}(s)\
ds\right\|_{C^{1,\alpha}(\partial\Omega_{+})}\leq
C\left\|\mathcal{M}\right\|_{C^{\alpha}(\partial\Omega_{+})}.$
To bound the last term $I_{2}$, we have used estimate (3.24) with
$\mathcal{H}(x)=\frac{1}{(v^{2}_{0}+f^{1,+})(v^{2}_{0}+f^{2,+})}\mathcal{M}(x)$
where
$\mathcal{M}(x)=\left(-\partial_{x}(v^{1}_{0}(x)(V^{1}_{1}-V^{2}_{1}))-V^{1}_{2}\partial_{x}(V^{1}_{1}-V^{2}_{1})-(V^{1}_{1}-V^{2}_{1})\partial_{x}V^{1}_{1}-\partial_{y}v^{1}_{0}(f^{1,+}-f^{2,+})\right).$
Hence, by the smallness assumption (3.55)
$\left\|\tilde{f}^{1,+}-\tilde{f}^{2,+}\right\|_{C^{1,\alpha}(\partial\Omega_{+})}\leq
C\left((\delta_{0}M+M^{2})\left\|V^{1}-V^{2}\right\|_{C^{1,\alpha}(\Omega)}+\delta_{0}M_{0}\left\|f^{1,+}-f^{2,+}\right\|_{C^{1,\alpha}(\partial\Omega_{+})}\right).$
(3.62)
Therefore, collecting (3.61)-(3.62) yields
$\displaystyle\left\|\Gamma(V^{1},f^{1,+})-\Gamma(V^{2},f^{2,+})\right\|_{C^{1,\alpha}(\Omega)}$
$\displaystyle\leq$ $\displaystyle
C\left((\delta_{0}M_{0}+M_{0}^{2})\left\|V^{1}-V^{2}\right\|_{C^{1,\alpha}(\Omega)}+\delta_{0}M_{0}\left\|f^{1,+}-f^{2,+}\right\|_{C^{1,\alpha}(\partial\Omega_{+})}\right)$
(3.63) $\displaystyle<$
$\displaystyle\theta(\left\|V^{1}-V^{2}\right\|_{C^{1,\alpha}}+\left\|f^{1,+}-f^{2,+}\right\|_{C^{1,\alpha}})$
where $\theta$ is strictly less than one for $C\delta_{0}\leq\frac{1}{4}$ and
$CM_{0}\leq\frac{1}{4}$. It follows that
$\Gamma:(\hat{B}_{M},\left\|\cdot\right\|_{C^{1,\alpha}(\Omega)\times
C^{1,\alpha}(\partial\Omega_{+})})\to(\hat{B}_{M},\left\|\cdot\right\|_{C^{1,\alpha}(\Omega)\times
C^{1,\alpha}(\partial\Omega_{+})})$ is a contraction mapping. Invoking
Banach’s fixed point theorem we can conclude that $\Gamma$ admits a unique
fixed point $(V,f^{+})\in\hat{B}_{M}(\Omega)$, i.e.
$\Gamma(V,f^{+})=(V,f^{+})$, which concludes the proof.
To conclude the proof of Theorem 3.12, we need to check that
$(V,f^{+})\in\hat{B}_{M}(\Omega)$ is a fixed point of the operator $\Gamma$ if
and only if $v=v_{0}+V$ is the velocity field of a solution $(v,p)\in
C^{{2,\alpha}}(\Omega)\times C^{{2,\alpha}}(\Omega)$ to (1.1) and (3.48).
Assuming that $(V,f^{+})\in\hat{B}_{M}(\Omega)$ is a fixed point of the
operator $\Gamma$ we can conclude by repeating the arguments of the proof of
Theorem 3.1 that
$\mbox{div }v=0,\quad\mbox{in }\Omega,\quad v\cdot n=v_{0}\cdot
n+f^{-},\quad\mbox{on }\partial\Omega_{-},\quad\mbox{and
}\int_{\mathcal{C}}v\cdot n\ dS=J.$
Moreover, $v$ satisfies the vorticity formulation of Euler equation
$0=v\cdot\nabla\omega=v\cdot\nabla\left[\nabla\times
v\right]=\nabla\times\left[(v\cdot\nabla)v\right],\mbox{ in }\Omega$
and $\omega_{0}=(\nabla\times v)_{0}$ in $\partial\Omega_{-}$. The difference
with respect to Theorem 3.1 relies on how to construct a pressure field $p$
which satisfies the boundary conditions (3.48). By similar arguments as the
ones in Theorem 3.1, defining a uni-valued function $g$ in
$\Omega=\mathbb{S}^{1}\times(0,L)$ as
$g=-\int_{0}^{x}(v\cdot\nabla)v(y)\cdot dy,$ (3.64)
we have that
$v\cdot\nabla v=-\nabla g,$ (3.65)
with $g\in C^{2,\alpha}(\Omega)$. Defining $p=g+g_{0}$ with
$g_{0}=h^{-}(0)+p_{0}(0)$ we can check using (3.64)-(3.65) that
$p=p_{0}+h^{-}$ for $x\in\partial\Omega_{-}$ and
$\partial_{x}p=\partial_{x}p_{0}+\partial_{x}h^{+}$ for
$x\in\partial\Omega_{+}$. Hence, $p$ solves equation (1.1) with regularity
$p\in C^{2,\alpha}(\Omega)$ and satisfies the boundary condition (3.48).
$\square$
In order to solve the boundary value problem (C) for the Euler equation as
stated in Table 1 (i.e. a boundary value problem for $p\text{ on
}\partial\Omega,v\cdot n\text{ on }\partial\Omega_{-}$ and the flux $J$) we
need to impose a compatibility condition on the boundary values
$h^{-},h^{+},f^{-}$ as well as on the flux $J$. To that purpose, we define a
subset $\mathcal{S}_{K,M}\subset C^{2,\alpha}(\partial\Omega_{-})\times
C^{2,\alpha}(\partial\Omega_{+})\times
C^{2,\alpha}(\partial\Omega_{-})\times\mathbb{R}$ given by
$\mathcal{S}_{K,M}=\\{(h^{-},h^{+},f^{-},J)\text{ satisfying
}(3.47)\\},$ (3.66)
and the operator $\Lambda:\mathcal{S}_{K,M}\to\mathbb{R}$ given by
$\Lambda(h^{-},h^{+},f^{-},J)=p(0,L)-p_{0}(0,L)$, where $p$ is the pressure
function obtained in Theorem 3.12. Notice that by construction
$\Lambda(h^{-},h^{+}+a,f^{-},J)=\Lambda(h^{-},h^{+},f^{-},J)$ for any real
constant $a$. The definition of the operator $\Lambda$ suggests that the
following compatibility condition is required to solve the boundary value
problem (C):
$h^{+}(0)=\Lambda(h^{-},h^{+},f^{-},J).$ (3.67)
More precisely the following theorem holds:
###### Theorem 3.13.
Let $\Omega=\\{(x,y)\in\mathbb{S}^{1}\times(0,L)\\}$, with $L>0$ and
$\alpha\in(0,1)$. Suppose that $(v_{0},p_{0})\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ is a solution of (1.1) with
$\bar{v}^{2}_{0}=\displaystyle\inf_{(x,y)\in\bar{\Omega}}\left|v^{2}_{0}(x,y)\right|>0$
and $\mbox{curl }v_{0}=0$. For $\mathcal{C}=\\{(0,y),y\in[0,L]\\}$ we have
that the integral $\int_{\mathcal{C}}(v_{0}\cdot n)\ dS$ is a real constant
that we will denote as $J_{0}$. There exist $\epsilon>0$ , $M>0$ sufficiently
small as well as $K>0$ such that for $v_{0}$ as above with
$\left\|v^{1}_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq\epsilon$ and $h^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$, $h^{+}\in
C^{{2,\alpha}}(\partial\Omega_{+}),f^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$ and $J\in\mathbb{R}$ satisfying
$\left\|h^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left\|h^{+}\right\|_{C^{{2,\alpha}}(\partial\Omega_{+})}+\left\|f^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left|J-J_{0}\right|\leq
KM,$ (3.68)
there exists a unique $(v,p)\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ to (1.1) with
$\left\|v-v_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq M$ such that
$p=p_{0}+h^{-}\ \mbox{on}\ \partial\Omega_{-},\ p=p_{0}+h^{+}\ \mbox{on}\
\partial\Omega_{+},\ v\cdot n=v_{0}\cdot n+f^{-}\ \mbox{on}\
\partial\Omega_{-}\mbox{ and }\int_{\mathcal{C}}v\cdot n\ dS=J$ (3.69)
if and only if
$h^{+}(0)=\Lambda(h^{-},h^{+},f^{-},J).$ (3.70)
###### Proof of Theorem 3.13.
By Theorem 3.12, we have that there exists a unique solution $(v,p)\in
C^{{2,\alpha}}(\Omega)\times C^{{2,\alpha}}(\Omega)$ to (1.1) satisfying the
boundary conditions (3.48), that is,
$p=p_{0}+h^{-}\ \mbox{on}\
\partial\Omega_{-},\ \partial_{x}p=\partial_{x}p_{0}+\partial_{x}h^{+}\ \mbox{on}\
\partial\Omega_{+},\ v\cdot n=v_{0}\cdot n+f^{-}\ \mbox{on}\
\partial\Omega_{-}\mbox{ and }\int_{\mathcal{C}}v\cdot n\ dS=J.$
Integrating the pressure boundary condition at $\partial\Omega_{+}$ yields
$\displaystyle p(x,L)$ $\displaystyle=$ $\displaystyle
p(0,L)+p_{0}(x,L)-p_{0}(0,L)+\int_{0}^{x}\partial_{x}h^{+}(\xi)d\xi$
$\displaystyle=$ $\displaystyle
p_{0}(x,L)+h^{+}(x,L)+\Lambda(h^{-},h^{+},f^{-},J)-h^{+}(0,L).$
Then the problem (1.1) with boundary conditions (3.69) has a unique solution
if and only if $h^{+}(0)=\Lambda(h^{-},h^{+},f^{-},J).$ ∎
### 3.3. Boundary value (G) for the 2D steady Euler equation
In this section we will sketch the proof regarding the construction of
solutions to the Euler equation (1.1) satisfying boundary conditions (G),
which is a slight modification of the proof of Theorem 3.1 and Theorem 3.12.
###### Theorem 3.14.
Let $\Omega=\\{(x,y)\in\mathbb{S}^{1}\times(0,L)\\}$, with $L>0$ and
$\alpha\in(0,1)$. Suppose that $(v_{0},p_{0})\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ is a solution of (1.1) with
$\bar{v}^{2}_{0}=\displaystyle\inf_{(x,y)\in\bar{\Omega}}\left|v^{2}_{0}(x,y)\right|>0$
and $\mbox{curl }v_{0}=0$. For $\mathcal{C}=\\{(0,y),y\in[0,L]\\}$ we have
that the integral $\int_{\mathcal{C}}(v_{0}\cdot n)\ dS$ is a real constant
that we will denote as $J_{0}$. There exist $\epsilon>0$ , $M>0$ sufficiently
small as well as $K>0$ such that for $v_{0}$ as above with
$\left\|v^{1}_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq\epsilon$ and $h^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$, $h^{+}\in
C^{{2,\alpha}}(\partial\Omega_{+}),f^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$ and $J\in\mathbb{R}$ satisfying
$\left\|h^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left\|h^{+}\right\|_{C^{{2,\alpha}}(\partial\Omega_{+})}+\left\|f^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left|J-J_{0}\right|\leq
KM,$ (3.71)
there exists a unique $(v,p)\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ to (1.1) with
$\left\|v-v_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq M$ such that
$p+\frac{\left|v\right|^{2}}{2}=p_{0}+\frac{\left|v_{0}\right|^{2}}{2}+h^{-}\
\mbox{on}\ \partial\Omega_{-},\ p=p_{0}+h^{+}\ \mbox{on}\ \partial\Omega_{+},\
v\cdot n=v_{0}\cdot n+f^{-}\ \mbox{on}\ \partial\Omega_{-}\mbox{ and
}\int_{\mathcal{C}}v\cdot n\ dS=J.$ (3.72)
The constants $M,K,$ as well as $\epsilon,$ depend only on
$\alpha,L,\bar{v}^{2}_{0}$.
###### Proof of Theorem 3.14.
For any $M>0$, let us denote by $\hat{B}_{M}$ the closed ball in
$C^{2,\alpha}(\Omega)\times C^{2,\alpha}(\partial\Omega_{+})$ with radius $M$,
i.e.,
$\hat{B}_{M}=\\{g=(g_{1},g_{2})\in C^{2,\alpha}(\Omega)\times
C^{2,\alpha}(\partial\Omega_{+}):\left\|g\right\|_{C^{2,\alpha}(\Omega)\times
C^{2,\alpha}(\partial\Omega_{+})}\leq M\\}.$ (3.73)
We define the operator $\Gamma:\hat{B}_{M}\to C^{2,\alpha}(\Omega)\times
C^{2,\alpha}(\partial\Omega_{+})$ in two steps. First, given
$(V,f^{+})\in\hat{B}_{M}$ we define $\omega\in C^{1,\alpha}(\Omega)$ solving
the following transport-type problem
$\left\\{\begin{array}[]{lll}(v_{0}+V)\cdot\nabla\omega=0,\ \mbox{in}\
\Omega\\\ \omega=\omega_{0},\ \mbox{on}\ \partial\Omega_{-}\end{array}\right.$
(3.74)
with $\omega_{0}$ given by
$\omega_{0}=\frac{\partial_{x}h^{-}}{v^{2}_{0}+f^{-}},\ \forall
x\in\partial\Omega_{-},$ (3.75)
where $v_{0}=(v^{1}_{0},v^{2}_{0})$ and $V=(V^{1},V^{2})$. As a second step,
we define $(W,\tilde{f}^{+})\in C^{2,\alpha}(\Omega)\times
C^{2,\alpha}(\partial\Omega_{+})$ as
$\tilde{f}^{+}=\int_{0}^{x}\omega(x^{\prime},L)+\frac{1}{v^{2}_{0}+f^{+}}\left(h^{+}-\partial_{x}(v^{1}_{0}V^{1})-\frac{1}{2}\partial_{x}\left|V^{1}\right|^{2}-f^{+}\partial_{y}v^{1}_{0}\right)\
dx^{\prime}+\tilde{f}^{+}(0,L),\ $ (3.76)
where the constant $\tilde{f}^{+}(0,L)$ is given by
$\tilde{f}^{+}(0,L)=\int_{0}^{1}f^{-}(x)dx-\int_{0}^{1}\mathcal{T}(x)(1-x)\
dx$ (3.77)
with
$\mathcal{T}(x)=\omega+\frac{1}{v^{2}_{0}+f^{+}}\left(h^{+}-\partial_{x}(v^{1}_{0}V^{1})-\frac{1}{2}\partial_{x}\left|V^{1}\right|^{2}-f^{+}\partial_{y}v^{1}_{0}\right).$
The function $W$ is the unique solution to the following div-curl problem
$\left\\{\begin{array}[]{lll}\nabla\times W=\omega,\ \mbox{in}\ \Omega\\\
\mbox{div }W=0,\ \mbox{in}\ \Omega\\\ W\cdot n=\tilde{f}^{+},\ \mbox{on}\
\partial\Omega_{+}\\\ W\cdot n=f^{-},\ \mbox{on}\ \partial\Omega_{-}\\\
\displaystyle\int_{\mathcal{C}}W\cdot n\ dS=J-J_{0}.\end{array}\right.$ (3.78)
Thus we define $\Gamma(V,f^{+})=(W,\tilde{f}^{+})$. We need to show that there
exists a sufficiently small
$\delta_{0}=\delta_{0}(\Omega,\alpha,L,\bar{v}^{2}_{0})$ and
$M_{0}\in(0,\frac{\bar{v}^{2}_{0}}{2})$ such that if $M\in(0,M_{0})$ and
$\epsilon\leq\delta_{0},\left\|h^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left\|h^{+}\right\|_{C^{{2,\alpha}}(\partial\Omega_{+})}+\left\|f^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}\leq
M_{0}\delta_{0}\mbox{ and }\left|J-J_{0}\right|\leq M_{0}\delta_{0},$ (3.79)
then $\Gamma(\hat{B}_{M}(\Omega))\subset\hat{B}_{M}(\Omega)$. Similarly to
case (C), using inequalities (3.29) with $j=J-J_{0}$ and (3.17), we have
$\left\|\Gamma(V,f^{+})\right\|_{C^{2,\alpha}(\Omega)}\leq
C\left(\left\|\omega_{0}\right\|_{C^{1,\alpha}(\partial\Omega_{-})}+\left\|f^{-}\right\|_{C^{2,\alpha}(\partial\Omega_{-})}+\left\|\tilde{f}^{+}\right\|_{C^{2,\alpha}(\partial\Omega_{+})}+\left|J-J_{0}\right|\right).$
(3.80)
Furthermore, by the definition of $\omega_{0}$ and $\tilde{f^{+}}$ in (3.75)
and (3.76), respectively, and using the smallness assumption (3.71), we have that
$\left\|\omega_{0}\right\|_{C^{1,\alpha}(\Omega)}\leq C\delta_{0}M,$ (3.81)
and
$\left\|\tilde{f^{+}}\right\|_{C^{2,\alpha}(\partial\Omega_{+})}\leq
C\left(\delta_{0}M+M^{2}\right).$ (3.82)
Hence, using (3.81) and (3.82) we have that
$\left\|\Gamma(V,f^{+})\right\|_{C^{2,\alpha}(\Omega)}\leq
C\left(\delta_{0}M+M^{2}\right).$
Choosing $C\delta_{0}\leq\frac{1}{4}$ and $CM_{0}\leq\frac{1}{4}$, so that
$C\left(\delta_{0}M+M^{2}\right)\leq\frac{M}{4}+\frac{M}{4}\leq M$, we obtain
$\Gamma(\hat{B}_{M}(\Omega))\subset\hat{B}_{M}(\Omega)$ for $M\in(0,M_{0})$.
We claim that
$\Gamma:(\hat{B}_{M},\left\|\cdot\right\|_{C^{1,\alpha}(\Omega)\times
C^{1,\alpha}(\partial\Omega_{+})})\to(\hat{B}_{M},\left\|\cdot\right\|_{C^{1,\alpha}(\Omega)\times
C^{1,\alpha}(\partial\Omega_{+})})$ is a contraction mapping. To that purpose,
we estimate the difference
$\left\|\Gamma(V^{1},f^{1,+})-\Gamma(V^{2},f^{2,+})\right\|_{C^{1,\alpha}}$
with $(V^{1},f^{1,+}),(V^{2},f^{2,+})\in\hat{B}_{M}(\Omega)$. Mimicking the
estimates (3.60), (3.61) and (3.62), it follows that
$\left\|\Gamma(V^{1},f^{1,+})-\Gamma(V^{2},f^{2,+})\right\|_{C^{1,\alpha}(\Omega)}\leq\theta(\left\|V^{1}-V^{2}\right\|_{C^{1,\alpha}}+\left\|f^{1,+}-f^{2,+}\right\|_{C^{1,\alpha}}),$
where $\theta$ is strictly less than one for $C\delta_{0}\leq\frac{1}{4}$ and
$CM_{0}\leq\frac{1}{4}$; hence
$\Gamma:(\hat{B}_{M},\left\|\cdot\right\|_{C^{1,\alpha}(\Omega)\times
C^{1,\alpha}(\partial\Omega_{+})})\to(\hat{B}_{M},\left\|\cdot\right\|_{C^{1,\alpha}(\Omega)\times
C^{1,\alpha}(\partial\Omega_{+})})$ is a contraction mapping. We conclude the
proof by using Banach’s fixed point theorem which shows the existence of a
unique fixed point $(V,f^{+})\in\hat{B}_{M}(\Omega)$. ∎
### 3.4. Loss of regularity for the boundary value problem (D) using the
vorticity transport method
In this section we justify the loss of regularity of the vorticity transport
method when applied to boundary value problem (D) (i.e. a boundary value
problem prescribing $p+\frac{\left|v\right|^{2}}{2}=h^{-}$ on
$\partial\Omega_{-}$, $p+\frac{\left|v\right|^{2}}{2}=h^{+}$ on
$\partial\Omega_{+}$ and $v\cdot n$ on $\partial\Omega_{-}$).
Suppose that we try to solve this problem using the vorticity transport method
used before to deal with the cases (B),(C). In a similar way as above, we
define the operator $\Gamma:\hat{B}_{M}\to C^{2,\alpha}(\Omega)\times
C^{2,\alpha}(\partial\Omega_{+})$ in two steps. First, given
$(V,f^{+})\in\hat{B}_{M}$ we define $\omega\in C^{1,\alpha}(\Omega)$ solving
the following transport-type problem
$\left\\{\begin{array}[]{lll}(v_{0}+V)\cdot\nabla\omega=0,\ \mbox{in}\
\Omega\\\ \omega=\omega_{0},\ \mbox{on}\ \partial\Omega_{-}\end{array}\right.$
(3.83)
with $\omega_{0}$ given by
$\omega_{0}=\frac{\partial_{x}h^{-}}{v^{2}_{0}+f^{-}},\ \forall
x\in\partial\Omega_{-},$ (3.84)
where $v_{0}=(v^{1}_{0},v^{2}_{0})$ and $V=(V^{1},V^{2})$. As a second step we
define $(W,\tilde{f}^{+})$ where $W$ is the unique solution to the following
div-curl problem
$\left\\{\begin{array}[]{lll}\nabla\times W=\omega,\ \mbox{in}\ \Omega\\\
\mbox{div }W=0,\ \mbox{in}\ \Omega\\\ W\cdot n=\tilde{f}^{+},\ \mbox{on}\
\partial\Omega_{+}\\\ W\cdot n=f^{-},\ \mbox{on}\ \partial\Omega_{-}\\\
\displaystyle\int_{\mathcal{C}}W\cdot n\ dS=J-J_{0},\end{array}\right.$ (3.85)
and the function $\tilde{f}^{+}$ is given by
$\tilde{f}^{+}=\frac{\partial_{x}h^{+}}{\omega},\ \forall
x\in\partial\Omega_{+}.$ (3.86)
Thus we define $\Gamma(V,f^{+})=(W,\tilde{f}^{+})$. Since $(V,f^{+})\in
C^{2,\alpha}(\Omega)\times C^{2,\alpha}(\partial\Omega_{+})$ and
$\omega_{0}\in C^{1,\alpha}(\partial\Omega_{-})$, we infer that $\omega\in
C^{1,\alpha}(\Omega)$. The function $\tilde{f}^{+}$ then only has regularity
$C^{1,\alpha}(\partial\Omega_{+})$, and the function $W$ can only be expected
to have at most $C^{1,\alpha}$ regularity. Therefore it is not possible to
close the fixed point argument.
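Schematically, the regularity bookkeeping of this argument reads (no new estimates are claimed here):

```latex
(V,f^{+})\in C^{2,\alpha}(\Omega)\times C^{2,\alpha}(\partial\Omega_{+})
\;\Longrightarrow\; \omega\in C^{1,\alpha}(\Omega)
\;\Longrightarrow\; \tilde{f}^{+}=\frac{\partial_{x}h^{+}}{\omega}\in C^{1,\alpha}(\partial\Omega_{+})
\;\Longrightarrow\; W\in C^{1,\alpha}(\Omega),
```

so each application of $\Gamma$ loses one derivative, and the ball $\hat{B}_{M}\subset C^{2,\alpha}(\Omega)\times C^{2,\alpha}(\partial\Omega_{+})$ is not preserved.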
## 4\. The MHS boundary value theorems
For the sake of completeness, we will present in this section the statements
of the theorems we have shown for the boundary value problems for the Euler
equation (1.2) in the case of the MHS equations (1.3). Since, from the PDE
point of view, the Euler and MHS equations are equivalent problems under the
identification of variables (1.4), the proofs are exactly the same as in the
Euler case.
###### Theorem 4.1 (Case A).
Let $f^{-}\in C^{1,\alpha}(\partial\Omega_{-}),f^{+}\in
C^{1,\alpha}(\partial\Omega_{+})$ and $h^{-}\in
C^{1,\alpha}(\partial\Omega_{+})$. Then if $f^{-}>0$ for
$x\in\partial\Omega_{-}$, there exists a solution $(B,p)\in
C^{{1,\alpha}}(\Omega)\times C^{{1,\alpha}}(\Omega)$ solving the MHS equation
(1.3) such that
$B\cdot n=f^{-}\ \mbox{on}\ \partial\Omega_{-},B\cdot n=f^{+}\ \mbox{on}\
\partial\Omega_{+}\mbox{ and }p=h^{-}\ \mbox{on}\ \partial\Omega_{-}.$ (4.1)
Moreover, there exists $\delta>0$ such that if
$\left\|f^{-}\right\|_{C^{{1,\alpha}}(\partial\Omega_{-})}+\left\|f^{+}\right\|_{C^{{1,\alpha}}(\partial\Omega_{+})}+\left\|h^{-}\right\|_{C^{{1,\alpha}}(\partial\Omega_{-})}\leq\delta,$
(4.2)
the solution $(B,p)$ is unique.
###### Theorem 4.2 (Case B).
Let $\Omega=\\{(x,y)\in\mathbb{S}^{1}\times(0,L)\\}$, with $L>0$ and
$\alpha\in(0,1)$. Suppose that $(B_{0},p_{0})\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ is a solution of (1.3) with
$\bar{B}^{2}_{0}=\displaystyle\inf_{(x,y)\in\bar{\Omega}}\left|B^{2}_{0}(x,y)\right|>0$
and $\mbox{curl }B_{0}=0$. For $\mathcal{C}=\\{(0,y),y\in[0,L]\\}$ we have
that the integral $\int_{\mathcal{C}}(B_{0}\cdot n)\ dS$ is a real constant
that we will denote as $J_{0}$. There exist $\epsilon>0$ , $M>0$ sufficiently
small as well as $K>0$ such that for $B_{0}$ as above with
$\left\|B^{1}_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq\epsilon$ and $h\in
C^{{2,\alpha}}(\partial\Omega_{-})$, $f\in C^{{2,\alpha}}(\partial\Omega)$ and
$J\in\mathbb{R}$ satisfying
$\left\|h\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left\|f\right\|_{C^{{2,\alpha}}(\partial\Omega)}+\left|J-J_{0}\right|\leq
KM,$ (4.3)
and
$\int_{\partial\Omega_{-}}f\ dS=\int_{\partial\Omega_{+}}f\ dS,$ (4.4)
there exists a unique $(B,p)\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ to (1.3) with
$\left\|B-B_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq M$ such that
$B\cdot n=B_{0}\cdot n+f\ \mbox{on}\ \partial\Omega,\
p+\frac{\left|B\right|^{2}}{2}=p_{0}+\frac{\left|B_{0}\right|^{2}}{2}+h\
\mbox{on}\ \partial\Omega_{-}\mbox{ and }\int_{\mathcal{C}}B\cdot n\ dS=J.$
(4.5)
The constants $M,K,$ as well as $\epsilon,$ depend only on
$\alpha,L,\bar{B}^{2}_{0}$.
###### Theorem 4.3 (Case C. Solvability).
Let $\Omega=\\{(x,y)\in\mathbb{S}^{1}\times(0,L)\\}$, with $L>0$ and
$\alpha\in(0,1)$. Suppose that $(B_{0},p_{0})\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ is a solution of (1.3) with
$\bar{B}^{2}_{0}=\displaystyle\inf_{(x,y)\in\bar{\Omega}}\left|B^{2}_{0}(x,y)\right|>0$
and $\mbox{curl }B_{0}=0$. For $\mathcal{C}=\\{(0,y),y\in[0,L]\\}$ we have
that the integral $\int_{\mathcal{C}}(B_{0}\cdot n)\ dS$ is a real constant
that we will denote as $J_{0}$. There exist $\epsilon>0$ , $M>0$ sufficiently
small as well as $K>0$ such that for $B_{0}$ as above with
$\left\|B^{1}_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq\epsilon$ and $h^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$, $h^{+}\in
C^{{2,\alpha}}(\partial\Omega_{+})$, $f^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$ and $J\in\mathbb{R}$ satisfying
$\left\|h^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left\|h^{+}\right\|_{C^{{2,\alpha}}(\partial\Omega_{+})}+\left\|f^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left|J-J_{0}\right|\leq
KM,$ (4.6)
there exists a unique $(B,p)\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ to (1.3) with
$\left\|B-B_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq M$ such that
$\displaystyle p+\frac{\left|B\right|^{2}}{2}$
$\displaystyle=p_{0}+\frac{\left|B_{0}\right|^{2}}{2}+h^{-}\ \mbox{on}\
\partial\Omega_{-},\partial_{x}(p+\frac{\left|B\right|^{2}}{2})=\partial_{x}(p_{0}+\frac{\left|B_{0}\right|^{2}}{2})+\partial_{x}h^{+}\
\mbox{on}\ \partial\Omega_{+}$ $\displaystyle B\cdot n$
$\displaystyle=B_{0}\cdot n+f^{-}\ \mbox{on}\ \partial\Omega_{-},\ \mbox{ and
}\int_{\mathcal{C}}B\cdot n\ dS=J.$
The constants $M,K,$ as well as $\epsilon,$ depend only on
$\alpha,L,\bar{B}^{2}_{0}$.
###### Theorem 4.4 (Case C. Compatibility condition).
Let $\Omega=\\{(x,y)\in\mathbb{S}^{1}\times(0,L)\\}$, with $L>0$ and
$\alpha\in(0,1)$. Suppose that $(B_{0},p_{0})\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ is a solution of (1.3) with
$\bar{B}^{2}_{0}=\displaystyle\inf_{(x,y)\in\bar{\Omega}}\left|B^{2}_{0}(x,y)\right|>0$
and $\mbox{curl }B_{0}=0$. For $\mathcal{C}=\\{(0,y),y\in[0,L]\\}$ we have
that the integral $\int_{\mathcal{C}}(B_{0}\cdot n)\ dS$ is a real constant
that we will denote as $J_{0}$. There exist $\epsilon>0$ , $M>0$ sufficiently
small as well as $K>0$ such that for $B_{0}$ as above with
$\left\|B^{1}_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq\epsilon$ and $h^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$, $h^{+}\in
C^{{2,\alpha}}(\partial\Omega_{+}),f^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$ and $J\in\mathbb{R}$ satisfying
$\left\|h^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left\|h^{+}\right\|_{C^{{2,\alpha}}(\partial\Omega_{+})}+\left\|f^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left|J-J_{0}\right|\leq
KM,$ (4.7)
there exists a unique $(B,p)\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ to (1.3) with
$\left\|B-B_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq M$ such that
$p=p_{0}+h^{-}\ \mbox{on}\ \partial\Omega_{-},p=p_{0}+h^{+}\ \mbox{on}\
\partial\Omega_{+},\ B\cdot n=B_{0}\cdot n+f^{-}\ \mbox{on}\
\partial\Omega_{-}\mbox{ and }\int_{\mathcal{C}}B\cdot n\ dS=J$ (4.8)
if and only if
$h^{+}(0)=\Lambda(h^{-},h^{+},f^{-},J).$ (4.9)
###### Theorem 4.5 (Case D).
Let $f\in C^{1,\alpha}(\partial\Omega_{-}),h^{-}\in
C^{1,\alpha}(\partial\Omega_{-})$, $h^{+}\in C^{1,\alpha}(\partial\Omega_{+})$
and $h^{+}=h^{-}\circ T$ where $T:\mathbb{S}^{1}\to\mathbb{S}^{1}$ is a given
diffeomorphism with $C^{2,\alpha}$ regularity. Then if $f>0$ for
$x\in\partial\Omega_{-}$, there exists a solution $(B,p)\in
C^{{1,\alpha}}(\Omega)\times C^{{1,\alpha}}(\Omega)$ solving the MHS equation
(1.3) such that
$B\cdot n=f\ \mbox{on}\ \partial\Omega_{-},\ p=h^{-}\ \mbox{on}\
\partial\Omega_{-}\mbox{ and }p=h^{+}\ \mbox{on}\ \partial\Omega_{+}.$ (4.10)
Moreover, there exists $\delta>0$ such that if
$\left\|h^{-}\right\|_{C^{{1,\alpha}}(\partial\Omega_{-})}+\left\|f\right\|_{C^{{1,\alpha}}(\partial\Omega)}\leq\delta,$
(4.11)
the solution $(B,p)$ is unique.
###### Theorem 4.6 (Case G).
Let $\Omega=\\{(x,y)\in\mathbb{S}^{1}\times(0,L)\\}$, with $L>0$ and
$\alpha\in(0,1)$. Suppose that $(B_{0},p_{0})\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ is a solution of (1.3) with
$\bar{B}^{2}_{0}=\displaystyle\inf_{(x,y)\in\bar{\Omega}}\left|B^{2}_{0}(x,y)\right|>0$
and $\mbox{curl }B_{0}=0$. For $\mathcal{C}=\\{(0,y),y\in[0,L]\\}$ we have
that the integral $\int_{\mathcal{C}}(B_{0}\cdot n)\ dS$ is a real constant
that we will denote as $J_{0}$. There exist $\epsilon>0$ , $M>0$ sufficiently
small as well as $K>0$ such that for $B_{0}$ as above with
$\left\|B^{1}_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq\epsilon$ and $h^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$, $h^{+}\in
C^{{2,\alpha}}(\partial\Omega_{+}),f^{-}\in
C^{{2,\alpha}}(\partial\Omega_{-})$ and $J\in\mathbb{R}$ satisfying
$\left\|h^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left\|h^{+}\right\|_{C^{{2,\alpha}}(\partial\Omega_{+})}+\left\|f^{-}\right\|_{C^{{2,\alpha}}(\partial\Omega_{-})}+\left|J-J_{0}\right|\leq
KM,$ (4.12)
there exists a unique $(B,p)\in C^{{2,\alpha}}(\Omega)\times
C^{{2,\alpha}}(\Omega)$ to (1.3) with
$\left\|B-B_{0}\right\|_{C^{2,\alpha}(\Omega)}\leq M$ such that
$p=p_{0}+h^{-}\ \mbox{on}\
\partial\Omega_{-},p+\frac{\left|B\right|^{2}}{2}=p_{0}+\frac{\left|B_{0}\right|^{2}}{2}+h^{+}\ \mbox{on}\
\partial\Omega_{+},\ B\cdot n=B_{0}\cdot n+f^{-}\ \mbox{on}\
\partial\Omega_{-}\mbox{ and }\int_{\mathcal{C}}B\cdot n\ dS=J.$ (4.13)
The constants $M,K,$ as well as $\epsilon,$ depend only on
$\alpha,L,\bar{B}^{2}_{0}$.
Acknowledgment. D. Alonso-Orán is supported by the Alexander von Humboldt
Foundation. J. J. L. Velázquez acknowledges support through the CRC 1060 (The
Mathematics of Emergent Effects) that is funded through the German Science
Foundation (DFG), and the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) under Germany's Excellence Strategy – EXC-2047/1 –
390685813.
# Understanding the $K^{*}/K$ ratio in heavy ion collisions
C. Le Roux and F. S. Navarra
Instituto de Física, Universidade de São Paulo, Rua do Matão, 1371, CEP 05508-090, São Paulo, SP, Brazil
L. M. Abreu
Instituto de Física, Universidade Federal da Bahia, Campus Universitário de Ondina, 40170-115, Bahia, Brazil
###### Abstract
We study the $K^{*}$ meson dissociation in heavy ion collisions during the
hadron gas phase. We use the production and absorption cross sections of the
$K^{*}$ and $K$ mesons in a hadron gas, which were calculated in a previous
work. We compute the time evolution of the $K^{*}$ abundance and the $K^{*}/K$
ratio during the hadron gas phase of heavy ion collisions. Assuming a Bjorken
type cooling and using an empirical relation between the freeze-out
temperature and the central multiplicity density, we are able to write
$K^{*}/K$ as a function of $dN/d\eta(\eta=0)$. The obtained function is in
very good agreement with recent experimental data.
## I Introduction
In recent heavy ion collision experiments nuclei are accelerated towards each
other with energies of the order of GeV or TeV. These extremely high energies
allow for the production of a deconfined phase of quarks and gluons. This
phase where the fundamental particles are able to travel freely is known as
the quark gluon plasma (QGP) shu ; gm . It exists for a short time and as the
system expands and cools down, quarks, antiquarks and gluons recombine to form
hadrons. This phase transition back to the hadron phase is also called
hadronization. The abundances of particles formed during the hadronization
depend on the temperature and on the baryon chemical potential. After
hadronization the system becomes a hot hadron gas in which inelastic reactions
occur, changing the relative abundance of the hadrons. The system further
expands and cools down until the point when all interactions cease. This is
known as kinetic or thermal freeze-out. The final yield of hadrons in a
collision is influenced not only by their production rate at the quark-hadron
transition point but also by the interactions that they undergo after
hadronization, which might increase or decrease their abundances. At the
thermal freeze-out, particle abundances are frozen and the hadrons flow freely
to the detectors.
The $K^{*}$ meson is a resonance and may change its abundance also by the
strong decay $K^{*}\to K\pi$. This meson has a lifetime of 4 fm/c, smaller
than the duration of the hadron gas phase, which is believed to be of the
order of 10 fm/c. When the decay happens in the hadronic medium, the daughter
particles ($K$ and $\pi$) interact further with other particles in the
environment, changing their energy and momentum, and even if they can be
measured at the end of the heavy ion collision, the invariant mass of the pair
is no longer equal to the $K^{*}$ mass. A $K^{*}$ which can no longer be
reconstructed is lost, and we would observe a reduction in the final yield of
this resonance, which would then be attributed to the existence of the hadron
gas phase. This means that the existence of the hadron gas phase could be
tested by the study of the abundances of such particles. This idea has been
discussed in several publications torra ; bleiche ; rafeleto ; knospe16 ;
stein17 .
From the experimental point of view, the abundance of the $K^{*}$ meson can be
studied through the yield ratio $K^{*}$/$K$. Experiments have measured it to
be $0.33\pm 0.01$ in Au+Au collisions at $\sqrt{s_{NN}}=130$ GeV, $0.23\pm
0.05$ in Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV at RHIC star05 ; star11 ,
$0.19\pm 0.05$ at $\sqrt{s_{NN}}=2.76$ TeV in Pb+Pb collisions at the LHC
alice15 ; alice17 and very recently alice20 it was found to be $0.2\pm 0.01$
in Pb+Pb at $\sqrt{s_{NN}}=5.02$ TeV in collisions at the LHC. Model
calculations suggest that the lifetime of the hadron gas phase grows with the
mass of the colliding nuclei, with centrality and with the collision energy.
We can see that the ratio $K^{*}/K$ decreases as the collision energy and/or
system size increases, giving support to the conjecture made in the previous
paragraph. However, in order to reach a firm conclusion a comprehensive
quantitative calculation must be done.
In the hadron gas formed in heavy ion collisions, the temperatures range from
$\simeq 175$ MeV, where hadronization takes place, to $\simeq 100$ MeV, where
kinetic freeze-out takes place. The temperature defines the order of magnitude
of the hadron momenta in the gas and also the energy with which hadrons
collide in the medium. Energies of a few hundred MeV are too high to
allow the use of chiral perturbation theory and too low to allow the use
of perturbative QCD. One has to resort to models involving mesons and baryons.
In principle baryons could be efficient in absorbing $K^{*}$’s. Although the
coupling constants $BB^{\prime}K^{(*)}$ (baryon-baryon-strange meson) are
relatively small miru99 , it has been shown in torres15 that the interaction
cross sections are significant and the $K^{*}N$ total cross section can be as
large as 20 mb. On the other hand, the particles which emerge from the hadron
gas have low or moderate rapidities and in this rapidity region there are no
remnant baryons from the projectile or from the target. There are only newly
created baryons, which are relatively rare. Therefore, here we follow suhoung
and neglect $K^{*}$ interactions with baryons.
In Ref. suhoung the authors computed the cross sections of several types of
interactions suffered by $K^{*}$ and $K$ mesons in the hadron gas and showed
that, due to these interactions and to the strong decay, the final yield ratio
$K^{*}$/$K$ measured in central Au+Au collision at $\sqrt{s_{NN}}=200$ GeV
decreases by 37 % during the hadron gas phase, resulting in a final ratio
comparable to STAR measurements. In suhoung , the change in the abundances of
the $K^{*}$ and $K$ mesons was computed by solving a system of differential
rate equations which use as input the cross sections for different
interactions involving the $K^{*}$ and $K$ mesons with each other and with the
light mesons $\rho$ and $\pi$. The authors found that the leading processes
contributing to the abundance dynamics are: $K^{*}\pi\rightarrow K\rho$,
$K^{*}\rho\rightarrow K\pi$ and $K^{*}\rightarrow K\pi$, as well as the
inverse ones.
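Structurally, the rate equations of Ref. suhoung couple gain terms (regeneration) to loss terms (absorption and decay). The sketch below, in Python, is only a structural illustration with a forward-Euler step: the functions `gain`, `loss` and `width` stand for $\braket{\sigma v}\,n_{\mathrm{light}}(\tau)$ and $\braket{\Gamma_{K^{*}}}$ and are placeholders, not the actual inputs of suhoung .

```python
def evolve(n_k, n_kstar, tau0, tauf, dt, gain, loss, width):
    """Forward-Euler integration of the schematic pair of rate equations
        dN_K*/dtau =  gain(tau)*N_K - (loss(tau) + width(tau))*N_K*
        dN_K /dtau = -gain(tau)*N_K + (loss(tau) + width(tau))*N_K*
    Here K <-> K* conversions and the decay K* -> K pi feed one species into
    the other, so N_K + N_K* is conserved by construction."""
    tau = tau0
    while tau < tauf:
        flow = gain(tau) * n_k - (loss(tau) + width(tau)) * n_kstar
        n_kstar += flow * dt
        n_k -= flow * dt
        tau += dt
    return n_k, n_kstar
```

With `gain = loss = 0` this reduces to an exponential decay of the $K^{*}$ yield at the rate given by `width`.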
In suhoung some interaction mechanisms that might be relevant were not
included in the calculations. In a subsequent work abreu the cross sections
for production and annihilation of $K^{*}$ and $K$ mesons were recalculated
with the inclusion of new reaction mechanisms. The relevant Feynman diagrams
for the $K^{*}\pi\to\rho K$ and $K^{*}\rho\to K\pi$ reactions are shown in
Fig. 1.
The most important (though not the only) changes made in abreu are:
I) Inclusion of anomalous parity vector-vector-pseudoscalar (VVP)
interactions.
II) Inclusion of the exchange of axial resonances $K_{1}(1270)$,
$h_{1}(1170)$, $h_{1}(1380)$, $f_{1}(1285)$, $a_{1}(1260)$ and $b_{1}(1235)$
in the s and t channels.
Modification I) introduces new vertices, modifies several Feynman diagrams and
changes the amplitudes of all processes discussed previously in suhoung . In
Refs. oh ; babi05 ; torres14 , it was shown that interaction terms with
anomalous parity couplings have a strong impact on the corresponding cross
sections. The relevance of such anomalous terms in the determination of the
abundance of $X(3872)$ in heavy ion collisions was computed in Ref. abreu16 .
In abreu these interaction terms were found to be relevant also in the
calculation of $K^{*}$ absorption processes. Modification II) introduces
several new diagrams. The presence of the resonance $K_{1}(1270)$, for
example, had been found to be important geng07 in describing the invariant
mass distribution of the process $K^{-}p\to K^{-}\pi^{+}\pi^{-}p$ at
$\sqrt{s_{NN}}=63$ GeV measured by the WA3 collaboration at CERN daum81 . In
abreu it was seen that the diagram with $K_{1}(1270)$ in the s-channel is the
most important contribution to the absorption process $\pi K^{*}\to\omega K$
and also to the production process $\rho K\to\pi K^{*}$.
The results in Ref. abreu show that the new mechanisms are rather
significant, changing the cross sections up to one or two orders of magnitude
in some cases, suggesting that these new cross sections would result in a very
different dynamics for the abundances of $K^{*}$ and $K$ mesons. A comparison
between the results obtained in Refs. suhoung and in abreu is presented in
Fig. 2, where we show the thermally averaged cross sections (see below) of the
main processes of absorption and regeneration of $K^{*}$. From the figures we
see that the cross sections found in abreu are much larger than those found
in suhoung , both for absorption and for regeneration of $K^{*}$.
In this work we use the improved cross sections of abreu and solve the
differential rate equations proposed in Ref. suhoung , obtaining the $K^{*}/K$
ratio as a function of the proper time. We use a Bjorken type cooling to
connect the proper time and the temperature. The evolution stops at the
freeze-out temperature $T_{f}$. Finally, we use the empirical relation between
$T_{f}$ and the central multiplicity density found in alice13 , to obtain a
direct relation between the $K^{*}/K$ ratio and $dN/d\eta(\eta=0)$. The
obtained relation is in very good agreement with experimental data. In the
next section we briefly describe the formalism and in the following section we
present our results and compare them with experimental data.
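For orientation, the Bjorken-type cooling invoked above follows, in the simplest boost-invariant picture, from entropy conservation per unit rapidity; the exponent $1/3$ below holds for an ideal massless gas and is quoted only as an illustrative assumption, not necessarily the exact profile used in our calculation:

```latex
s(\tau)\,\tau=\mathrm{const},\qquad s\propto T^{3}
\quad\Longrightarrow\quad
T(\tau)=T_{0}\left(\frac{\tau_{0}}{\tau}\right)^{1/3}.
```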
Figure 1: Diagrams for the relevant processes considered in the calculation of
the cross sections in Ref. abreu . a) $K^{*}\pi\to\rho K$ reactions. $R$
represents the resonances $h_{1}(1170)$, $h_{1}(1380)$, $f_{1}(1285)$,
$a_{1}(1260)$ and $b_{1}(1235)$. b) $K^{*}\rho\to K\pi$ reactions.
## II Formalism
### II.1 Thermal cross sections
In suhoung the interactions of $K$ and $K^{*}$ with light non-strange mesons
were described by effective Lagrangians of the type $\mathcal{L}_{PPV}$ and
$\mathcal{L}_{VVV}$, where $P$ and $V$ are pseudoscalar and vector mesons,
respectively. The Lagrangians were obtained from free pseudoscalar and vector
meson Lagrangians by introducing the minimal substitution. In abreu , in
addition to these Lagrangians, the Lagrangian $\mathcal{L}_{VVP}$ was
included, representing the so called “anomalous parity” interactions. From the
Lagrangians it is straightforward to evaluate the amplitudes of $K^{*}$
absorption by pions, kaons, $\rho$’s and by $K^{*}$’s. In order to take the
finite size of the hadrons into consideration when evaluating amplitudes, one
uses form factors at each interaction vertex. These form factors contain a
cut-off parameter. In suhoung the authors adopted a value from previous
phenomenological analyses brown91 . With the amplitudes it is easy to compute
the cross sections of the corresponding processes. With the same Lagrangians
one can calculate all the interaction cross sections of kaons. Moreover, with
the use of detailed balance one can calculate the inverse processes, i.e. one
can compute the $K^{*}+\pi\to K+\rho$ and $K+\rho\to K^{*}+\pi$ cross
sections. Finally, one has to consider the processes $K+\pi\to K^{*}$ and
$K^{*}\to K+\pi$. The authors of suhoung also found that the cross section
for the formation of the $K^{*}$ meson from pions and $K$ mesons is not at all
small compared to those of the other processes.
All the reactions mentioned above happen within a hadron gas at temperatures
ranging from 100 to 200 MeV. These temperatures determine the collision
energies. Moreover, the densities of the colliding particles are determined by
the temperature. Therefore, in this context, the most relevant dynamical
quantity is the thermally averaged cross section. For a process $a+b\to c+d$
it is defined as:
$\braket{\sigma_{ab\rightarrow cd}v_{ab}}=\frac{1}{1+\delta_{ab}}\frac{\int
d^{3}\vec{p_{a}}d^{3}\vec{p_{b}}f_{a}(\vec{p_{a}})f_{b}(\vec{p_{b}})\sigma_{ab\rightarrow
cd}v_{ab}}{\int
d^{3}\vec{p_{a}}d^{3}\vec{p_{b}}f_{a}(\vec{p_{a}})f_{b}(\vec{p_{b}})},$ (1)
where $v_{ab}$ is the relative velocity between the initial particles
$v_{ab}=\sqrt{(p_{a}\cdot p_{b})^{2}-m^{2}_{a}m^{2}_{b}}/(E_{a}E_{b})$
and $f_{i}(\vec{p_{i}})$ is the thermal momentum distribution of particle $i$,
which is given by a Bose-Einstein distribution:
$f_{i}(\vec{p_{i}})=\frac{1}{e^{\sqrt{\vec{p_{i}}^{2}+m_{i}^{2}}/T}-1}.$
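The thermal average in Eq. (1) can be estimated by brute-force Monte Carlo: sample each momentum modulus from the Bose-Einstein weight $p^{2}f_{i}(p)$, draw a random relative angle, and average $\sigma v_{ab}$. The Python sketch below is illustrative only: masses and temperature are in GeV, the identical-particle factor $1/(1+\delta_{ab})$ is omitted (distinct species), and the cross section $\sigma(s)$ is supplied by the user rather than taken from suhoung or abreu .

```python
import math
import random

def make_sampler(m, T, rng):
    """Rejection sampler for |p| with weight p^2 f(p) = p^2 / (exp(E/T) - 1)."""
    pmax = m + 10.0 * T  # the tail beyond this is exponentially suppressed
    def w(p):
        return p * p / (math.exp(math.sqrt(p * p + m * m) / T) - 1.0)
    wmax = max(w(pmax * i / 400.0) for i in range(1, 401))
    def draw():
        while True:
            p = rng.uniform(0.0, pmax)
            if p > 0.0 and rng.uniform(0.0, wmax) < w(p):
                return p
    return draw

def thermal_average_sigma_v(sigma, ma, mb, T, n=5000, seed=1):
    """Monte Carlo estimate of <sigma v_ab>, Eq. (1), for distinct species a, b."""
    rng = random.Random(seed)
    draw_a, draw_b = make_sampler(ma, T, rng), make_sampler(mb, T, rng)
    num = 0.0
    for _ in range(n):
        pa, pb = draw_a(), draw_b()
        c = rng.uniform(-1.0, 1.0)  # cosine of the relative angle (isotropic)
        ea = math.sqrt(pa * pa + ma * ma)
        eb = math.sqrt(pb * pb + mb * mb)
        pab = ea * eb - pa * pb * c            # four-product p_a . p_b
        vab = math.sqrt(max(pab * pab - (ma * mb) ** 2, 0.0)) / (ea * eb)
        s = ma * ma + mb * mb + 2.0 * pab      # Mandelstam variable s
        num += sigma(s) * vab
    return num / n
```

For a constant cross section this reduces to $\sigma\braket{v_{ab}}$, and for massive particles the averaged relative (Møller) velocity is strictly below 1.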
The production and absorption rates of $K^{*}$ or $K$ obviously depend on the
densities of particles in the hadron gas at proper time $\tau$ which are given
by
$n_{i}(\tau)=\frac{g_{i}}{2\pi^{2}}\,\int_{0}^{\infty}\frac{p^{2}dp}{e^{\sqrt{p^{2}_{i}+m^{2}_{i}}/T(\tau)}-1}\simeq\frac{g_{i}}{2\pi^{2}}m_{i}^{2}T(\tau)K_{2}\left(\frac{m_{i}}{T(\tau)}\right),$
(2)
where $g_{i}$ is the degeneracy factor of meson $i$ and $m_{i}$ its mass.
$K_{2}$ is the modified Bessel function of the second kind and $T(\tau)$
is the temperature. The total number of particles of species $i$,
$N_{i}(\tau)$, is obtained by multiplying the density given by (2) by the
system volume $V(\tau)$. At last, the thermally averaged decay width of
$K^{*}$ was computed using the following expression introduced in Ref. suhoung
:
$\braket{\Gamma_{K^{*}}}=\Gamma_{K^{*}}(m_{K^{*}})\frac{K_{1}\left(\frac{m_{K^{*}}}{T(\tau)}\right)}{K_{2}\left(\frac{m_{K^{*}}}{T(\tau)}\right)},$ (3)
where $K_{1}$ and $K_{2}$ are the modified Bessel functions of the first and
second kind, $T(\tau)$ is the temperature as a function of proper time $\tau$,
$m_{K^{*}}$ is the mass of $K^{*}$ and $\Gamma_{K^{*}}$, its decay width,
which was computed as:
$\Gamma_{K^{*}}(\sqrt{s})=\frac{g^{2}_{\pi KK^{*}}}{2\pi
s}p_{cm}^{3}(\sqrt{s}),$ (4)
with $g_{\pi KK^{*}}$ being the coupling constant, $p_{cm}$ the momentum at
the center of mass frame and $s$ the Mandelstam variable.
For our purposes it is not necessary to recalculate all the thermally averaged
cross sections, $\braket{\sigma_{ab\rightarrow cd}v_{ab}}$, which are smooth
functions of the temperature. It is enough to parametrize the results obtained
in Refs. suhoung and abreu by the polynomial functions which are given in
the Appendix. The resulting parametrizations are shown in Fig. 2.
Figure 2: Comparison between the cross sections obtained in Ref. suhoung and
those obtained in Ref. abreu . The lines are obtained with Eq. (16) which is a
parametrization of the results obtained in the mentioned papers. a)
$K^{*}\pi\to\rho K$ reactions. b) $K^{*}\rho\to K\pi$ reactions.
In Fig. 2 we can compare the results obtained in suhoung with those obtained
in abreu . The inclusion of modifications I and II increased the cross
sections typically by one order of magnitude. In both approaches the
absorption of $K^{*}$ is stronger than its production. However, with the
formalism considered in abreu , at higher temperatures we observe the
dominance of the processes of creation of $K^{*}$. So, when the hadron gas
starts its expansion at high temperatures, we expect to see first the growth
of the $K^{*}$ multiplicity which is later followed by its reduction. In
contrast, with the formalism of suhoung we only see a monotonic reduction of
the $K^{*}$ multiplicity.
### II.2 Evolution equations
With the ingredients presented in the previous subsection, it is possible to
write rate equations, which describe the time evolution of the $K^{*}$ and $K$
multiplicities, incorporating the gain and loss terms due to production and
absorption respectively. These equations are:
$\displaystyle\frac{dN_{K^{*}}}{d\tau}=$
$\displaystyle\braket{\sigma_{K\rho\rightarrow
K^{*}\pi}v_{K\rho}}n_{\rho}(\tau)N_{K}(\tau)-\braket{\sigma_{K^{*}\pi\rightarrow
K\rho}v_{K^{*}\pi}}n_{\pi}(\tau)N_{K^{*}}(\tau)+\braket{\sigma_{K\pi\rightarrow
K^{*}\rho}v_{K\pi}}n_{\pi}(\tau)N_{K}(\tau)$
$\displaystyle-\braket{\sigma_{K^{*}\rho\rightarrow
K\pi}v_{K^{*}\rho}}n_{\rho}(\tau)N_{K^{*}}(\tau)+\braket{\sigma_{\pi\rho\rightarrow
K^{*}\bar{K}}v_{\pi\rho}}n_{\pi}(\tau)N_{\rho}(\tau)-\braket{\sigma_{K^{*}\bar{K}\rightarrow\rho\pi}v_{K^{*}\bar{K}}}n_{\bar{K}}(\tau)N_{K^{*}}(\tau)$
$\displaystyle+\braket{\sigma_{\pi\pi\rightarrow
K^{*}\bar{K}^{*}}v_{\pi\pi}}n_{\pi}(\tau)N_{\pi}(\tau)-\braket{\sigma_{K^{*}\bar{K}^{*}\rightarrow\pi\pi}v_{K^{*}\bar{K}^{*}}}n_{\bar{K}^{*}}(\tau)N_{K^{*}}(\tau)+\braket{\sigma_{\rho\rho\rightarrow
K^{*}\bar{K}^{*}}v_{\rho\rho}}n_{\rho}(\tau)N_{\rho}(\tau)$
$\displaystyle-\braket{\sigma_{K^{*}\bar{K}^{*}\rightarrow\rho\rho}v_{K^{*}\bar{K}^{*}}}n_{\bar{K}^{*}}(\tau)N_{K^{*}}(\tau)+\braket{\sigma_{K\pi\rightarrow
K^{*}}v_{K\pi}}n_{\pi}(\tau)N_{K}(\tau)-\braket{\Gamma_{K^{*}}}N_{K^{*}}(\tau),$
$\displaystyle\frac{dN_{K}}{d\tau}=$
$\displaystyle\braket{\sigma_{\pi\pi\rightarrow
K\bar{K}}v_{\pi\pi}}n_{\pi}(\tau)N_{\pi}(\tau)-\braket{\sigma_{K\bar{K}\rightarrow\pi\pi}v_{K\bar{K}}}n_{\bar{K}}(\tau)N_{K}(\tau)+\braket{\sigma_{\rho\rho\rightarrow
K\bar{K}}v_{\rho\rho}}n_{\rho}(\tau)N_{\rho}(\tau)$
$\displaystyle-\braket{\sigma_{K\bar{K}\rightarrow\rho\rho}v_{K\bar{K}}}n_{\bar{K}}(\tau)N_{K}(\tau)+\braket{\sigma_{K^{*}\pi\rightarrow
K\rho}v_{K^{*}\pi}}n_{\pi}(\tau)N_{K^{*}}(\tau)-\braket{\sigma_{K\rho\rightarrow
K^{*}\pi}v_{K\rho}}n_{\rho}(\tau)N_{K}(\tau)$
$\displaystyle+\braket{\sigma_{K^{*}\rho\rightarrow
K\pi}v_{K^{*}\rho}}n_{\rho}(\tau)N_{K^{*}}(\tau)-\braket{\sigma_{K\pi\rightarrow
K^{*}\rho}v_{K\pi}}n_{\pi}(\tau)N_{K}(\tau)+\braket{\sigma_{\pi\rho\rightarrow
K^{*}\bar{K}}v_{\pi\rho}}n_{\pi}(\tau)N_{\rho}(\tau)$
$\displaystyle-\braket{\sigma_{K^{*}\bar{K}\rightarrow\rho\pi}v_{K^{*}\bar{K}}}n_{\bar{K}}(\tau)N_{K^{*}}(\tau)+\braket{\Gamma_{K^{*}}}N_{K^{*}}(\tau)-\braket{\sigma_{K\pi\rightarrow
K^{*}}v_{K\pi}}n_{\pi}(\tau)N_{K}(\tau).$ (5)
The above equations include all relevant creation and annihilation reactions.
However, as shown in Refs. suhoung and abreu , some of them have very small
thermally averaged cross sections and can be safely neglected. According to
both references, the truly important interactions of the $K^{*}$ meson are
the following:
$\displaystyle K^{*}\rho$ $\displaystyle\rightarrow K\pi,$ $\displaystyle
K^{*}\pi$ $\displaystyle\rightarrow K\rho,$ $\displaystyle K^{*}$
$\displaystyle\rightarrow K\pi,$ (6)
as well as the respective inverse processes. This should not be surprising
since $\pi$’s are the most abundant particles in a hadron gas and $\rho$’s are
vector particles and, as discussed above, have a large interaction cross
section with other vector particles. Restricting ourselves to the processes
above, the system of differential equations Eq.(5) can be written as:
$\displaystyle\frac{dN_{K^{*}}(\tau)}{d\tau}$
$\displaystyle=\gamma_{K}N_{K}(\tau)-\gamma_{K^{*}}N_{K^{*}}(\tau),$
$\displaystyle\frac{dN_{K}(\tau)}{d\tau}$
$\displaystyle=-\gamma_{K}N_{K}(\tau)+\gamma_{K^{*}}N_{K^{*}}(\tau),$ (7)
where $N_{K}$ and $N_{K^{*}}$ are the abundances of K and $K^{*}$ mesons
respectively. They are functions of the proper time $\tau$. The factors
$\gamma_{K}$ and $\gamma_{K^{*}}$ depend on the interaction cross sections and
the light meson densities in the following way:
$\displaystyle\gamma_{K}$
$\displaystyle=\braket{\sigma_{K\pi\xrightarrow{}K^{*}\rho}v_{K\pi}}n_{\pi}+\braket{\sigma_{K\rho\xrightarrow{}K^{*}\pi}v_{K\rho}}n_{\rho}+\braket{\sigma_{K\pi\xrightarrow{}K^{*}}v_{K\pi}}n_{\pi},$
$\displaystyle\gamma_{K^{*}}$
$\displaystyle=\braket{\sigma_{K^{*}\rho\xrightarrow{}K\pi}v_{K^{*}\rho}}n_{\rho}+\braket{\sigma_{K^{*}\pi\xrightarrow{}K\rho}v_{K^{*}\pi}}n_{\pi}+\braket{\Gamma_{K^{*}}}.$
(8)
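A minimal sketch of how Eqs. (7)–(8) can be integrated numerically; the constant rates used in the example call are arbitrary placeholders, since the actual $\gamma_{K}(\tau)$ and $\gamma_{K^{*}}(\tau)$ require the parametrized cross sections and densities:

```python
import numpy as np
from scipy.integrate import solve_ivp

def evolve(N_kstar0, N_k0, gamma_k, gamma_kstar, tau_h=1.0, tau_f=25.0):
    """Integrate the rate equations (7). gamma_k and gamma_kstar are callables
    of the proper time tau, bundling the <sigma v> n terms of Eq. (8)."""
    def rhs(tau, y):
        N_kstar, N_k = y
        dN_kstar = gamma_k(tau) * N_k - gamma_kstar(tau) * N_kstar
        return [dN_kstar, -dN_kstar]  # dN_K/dtau = -dN_K*/dtau: the sum is conserved
    return solve_ivp(rhs, (tau_h, tau_f), [N_kstar0, N_k0],
                     dense_output=True, rtol=1e-9, atol=1e-12)

# example: constant rates (the no-cooling limit); the ratio relaxes to gamma_K/gamma_K*
sol = evolve(0.5, 1.0, gamma_k=lambda t: 0.1, gamma_kstar=lambda t: 0.3)
```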
It is interesting to consider the limiting case where the temperature and
light meson densities stay constant in time. In this case $\gamma_{K^{*}}$ and
$\gamma_{K}$ are constant and the system (II.2) can be solved analytically
giving the following result:
$\displaystyle N_{K^{*}}(\tau)$
$\displaystyle=\frac{\gamma_{K}}{\gamma}N^{0}+\left(N_{K^{*}}^{0}-\frac{\gamma_{K}}{\gamma}N^{0}\right)e^{-\gamma(\tau-\tau_{h})},$
$\displaystyle N_{K}(\tau)$
$\displaystyle=\frac{\gamma_{K^{*}}}{\gamma}N^{0}+\left(N_{K}^{0}-\frac{\gamma_{K^{*}}}{\gamma}N^{0}\right)e^{-\gamma(\tau-\tau_{h})},$
(9)
where $N^{0}=N_{K^{*}}^{0}+N_{K}^{0}$, i.e., the sum of the initial abundances
of $K$ and $K^{*}$. Moreover, $\gamma=\gamma_{K^{*}}+\gamma_{K}$, as computed
in expressions (8). At the hadronization time, $\tau_{h}$, the system of
$K^{*}$’s and $K$’s starts to evolve and collide with the light particles from
the reservoir, which is kept at constant temperature. At large times
$N_{K^{*}}$ and $N_{K}$ reach their asymptotic constant values. This
is our operational definition of chemical equilibrium.
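The closed-form solution (9) can be checked directly: differentiating it must reproduce the right-hand side of (7). A quick numerical verification, with arbitrary placeholder rates:

```python
import numpy as np

def analytic(tau, tau_h, N_kstar0, N_k0, g_k, g_kstar):
    """Eq. (9) for constant gamma_K and gamma_K*."""
    g, N0 = g_k + g_kstar, N_kstar0 + N_k0
    damp = np.exp(-g * (tau - tau_h))
    N_kstar = g_k / g * N0 + (N_kstar0 - g_k / g * N0) * damp
    N_k = g_kstar / g * N0 + (N_k0 - g_kstar / g * N0) * damp
    return N_kstar, N_k

# finite-difference derivative of (9) versus the right-hand side of (7)
g_k, g_kstar, h = 0.1, 0.3, 1e-6
for tau in (1.0, 5.0, 20.0):
    N_kstar, N_k = analytic(tau, 1.0, 0.5, 1.0, g_k, g_kstar)
    N_kstar_p, _ = analytic(tau + h, 1.0, 0.5, 1.0, g_k, g_kstar)
    assert abs((N_kstar_p - N_kstar) / h - (g_k * N_k - g_kstar * N_kstar)) < 1e-4
```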
Once we define the temperature evolution (“cooling”) of the hadron gas
$T(\tau)$ and the initial conditions $N_{K^{*}}(\tau_{h})$ and
$N_{K}(\tau_{h})$, the system of differential equations (II.2) can be solved,
yielding $N_{K^{*}}$, $N_{K}$ and the ratio $R(\tau)$ :
$R(\tau)=\frac{N_{K^{*}}}{N_{K}}=\frac{K^{*}}{K}.$ (10)
We follow the time evolution of the abundances until the kinetic freeze-out of
the gas, which is defined by the freeze-out temperature $T_{f}$ and occurs at
time $\tau_{f}$. Assuming that the hadronic system undergoes a Bjorken-like
expansion, we may write:
$T=T_{h}\left(\frac{\tau_{h}}{\tau}\right)^{1/3},$ (11)
where $T_{h}=175$ MeV is the universal hadronization temperature discussed
above and $\tau_{h}$ is the hadronization time, which may change from system
to system. We take the above expression at the particular freeze out time,
$\tau_{f}$ and freeze-out temperature, $T_{f}$, and invert it to obtain:
$\tau_{f}=\tau_{h}\left(\frac{T_{h}}{T_{f}}\right)^{3}.$ (12)
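In code, Eqs. (11) and (12) are mutual inverses (a sketch, with the stated $T_{h}=175$ MeV and an arbitrary $\tau_{h}=1$ fm/c as defaults):

```python
def T_of_tau(tau, T_h=175.0, tau_h=1.0):
    """Bjorken-like cooling, Eq. (11): T = T_h (tau_h/tau)^(1/3)."""
    return T_h * (tau_h / tau) ** (1.0 / 3.0)

def freeze_out_time(T_f, T_h=175.0, tau_h=1.0):
    """Eq. (12): tau_f = tau_h (T_h/T_f)^3, obtained by inverting Eq. (11)."""
    return tau_h * (T_h / T_f) ** 3
```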
We solve (II.2) until $\tau_{f}$ and compute the ratio $R[\tau_{f}(T_{f})]$.
As pointed out long ago hama92 , the kinetic freeze-out temperature is
not a universal constant. It depends on the size of the hadronic system and
hence on the collision energy, on the mass number of the colliding nuclei and
on the centrality of the collision. A recent blastwave fit analysis made by
the ALICE Collaboration alice13 has confirmed that the kinetic freeze-out
temperature decreases with the system size, customarily associated with the
multiplicity density of charged particles, $dN/d\eta$, measured at
midrapidity. The empirical relation between $T_{f}$ and $\mathcal{N}$ found in
alice13 can be parametrized as:
$T_{f}=\frac{T_{f0}}{\mathcal{N}^{a}},$ (13)
where $T_{f0}$ and $a$ are constants. Inserting (13) into (12) we find that
$\tau_{f}\propto\mathcal{N}^{3a}.$ (14)
This relation tells us that $\mathcal{N}$ gives a measure of the duration of
the hadronic phase. Larger systems (with larger $\mathcal{N}$) live longer.
Using the obtained $\tau_{f}$ to determine the end of the evolution of (II.2),
we find $R$ as a function of $\mathcal{N}$. This function can then be directly
compared with the data on $R$ versus $\mathcal{N}$ presented very recently in
alice20 . This will be done in the next section.
## III Results and Discussion
From what was said above we see that the final multiplicities of $K^{*}$ and
$K$ may depend on: i) the collision dynamics, i.e., on the production and
absorption cross sections discussed above; ii) the initial conditions of the
evolution equations (II.2), i.e., the initial values of $N_{K^{*}}$ and
$N_{K}$; iii) the expansion dynamics, i.e., the cooling function $T(\tau)$ and
iv) the system size, characterized by $dN/d\eta(\eta=0)$.
We solve the equations (II.2) using as input the cross sections calculated in
suhoung and in abreu . The initial temperature is $T_{h}=175$ MeV and the
initial conditions are $K^{*}/K=$ $0.2$, $0.5$ and $0.8$. The results are
shown in Fig. 3. On the left (right) panel the inputs are from Ref. suhoung
(abreu ).
Figure 3: $K^{*}/K$ ratio as a function of the proper time $\tau$. Dashed
lines correspond to the initial conditions 0.2, 0.5 and 0.8 and no cooling.
Solid lines correspond to the initial conditions 0.2, 0.5 and 0.8 and cooling.
a) Cross sections from S. H. Lee et al suhoung . b) Cross sections from A.
Martinez et al abreu .
First we observe that, as anticipated from Eqs. (II.2), when there is no
cooling the system evolves to an asymptotic state where the abundances become
constant. When cooling (11) is included, the ratio $K^{*}/K$ drops and at
typical freeze-out times of 20 - 25 fm/c reaches 0.2 - 0.3. These numbers are
close to the measured ones. This suggests that a cooling faster than (11),
such as the Hubble-like cooling discussed in singh20 ; ghosh20 , is probably
incompatible with data. Another interesting aspect of the figure is that, even
with cooling, after some time of evolution the $K^{*}/K$ ratio becomes the
same for all initial conditions. Comparing the left and right panels we
observe the effect of changing the microscopic cross sections from those
calculated in suhoung to those calculated in abreu . When there is no cooling
the ratio shown on the left (with the inputs from suhoung ) is significantly
smaller than the one on the right (with the inputs from abreu ). This is a
consequence of Fig. 2: at higher temperatures, with abreu the cross section
for $K^{*}$ production is bigger and so is the ratio $R$. It is also for this
reason that on the right panel we observe a growth, in some cases very
pronounced, of all lines at early times.
In abreu all the cross sections are bigger and all the reactions happen
faster, hence the system loses the memory of the initial conditions sooner
(the three lines become a single line). Interestingly, at very long times in
both cases (right and left panels) the ratio goes to the same value.
From Fig. 3 it is clear that the new reaction mechanisms considered in abreu
have an impact on the evolution of the abundances of $K^{*}$ and K mesons in
the hadronic medium. They predict a time evolution of the abundances which is
considerably different from previously thought: there is an initial increase
in the yield ratio which would not exist without taking into account all the
possible mechanisms for the processes in (II.2). Unfortunately, the
differences with respect to the previous calculations of Ref. suhoung are
washed out during the evolution and in the end the improved cross sections
lead to a final yield ratio very close to that computed in Ref. suhoung .
In order to understand this behavior, it is important to notice from Fig. 2
that even though the cross sections for the annihilation of $K^{*}$ are larger
in Ref. abreu than in Ref. suhoung , those for the creation of $K^{*}$ are
also larger. For example, Fig. 2b clearly shows that in the case of the
creation of $K^{*}$ through $K\pi\rightarrow K^{*}\rho$, the cross sections
from Ref. abreu are one order of magnitude larger than those from suhoung
and, as time passes, i.e., the gas cools down, the difference between them
decreases considerably. The opposite happens for the creation of $K^{*}$
through $K\rho\rightarrow K^{*}\pi$ (Fig. 2a), but in this case the difference
between the cross sections of Ref. abreu and Ref. suhoung is much smaller.
In order to compare our results with data, we will make use of the connection
established in alice13 between $T_{f}$ and $\mathcal{N}$. Although the power
law fit (13) is very useful because it leads immediately to (14), a somewhat
better fit of the points shown in alice13 can be obtained with the form:
$T_{f}={T_{f0}}\,e^{-b\,\mathcal{N}},$ (15)
where $T_{f0}=132.5$ MeV and $b=0.02$. The above expression is compared to the
data points from alice13 in Fig. 4. We emphasize that Eq. (15) is not the
result of a global best $\chi^{2}$ fit. We try to get a better description of
the higher energy data points, which will be relevant for the study of the
$K^{*}/K$ ratio measured at the LHC. The STAR points are shown just for
comparison.
Figure 4: Freeze-out temperature as a function of
$\left[dN/d\eta(\eta=0)\right]^{1/3}$. The circles are the result of the
blastwave fits of data on Pb + Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV taken
by the ALICE Collaboration alice13 . The squares represent blastwave fits of
data on Au + Au collisions $\sqrt{s_{NN}}=200$ GeV taken by the STAR
Collaboration star05 . The line represents the expression (15).
We first choose the system under consideration, fixing $\mathcal{N}$. This
determines the freeze-out temperature, $T_{f}$, and the endpoint of the
evolution, $\tau_{f}$. Then, we read the ratio $K^{*}/K$ from Fig. 3. Finally,
we plot $K^{*}/K$ as a function of $\mathcal{N}$ and compare the results with
the data compilation published in alice20 . The comparison is presented in
Fig. 5.
Figure 5: $K^{*}/K$ as a function of $\left[dN/d\eta(\eta=0)\right]^{1/3}$.
Data are from alice20 .
As it can be seen in Fig. 3, the longer the hadronic system lasts, the smaller
is the ratio $R$. Indeed, for each (increasing) value of $\mathcal{N}$ we stop
the evolution at an (increasing) value of $\tau$ (which is $\tau_{f}$) and
read from Fig. 3 a (decreasing) value of the ratio $K^{*}/K$.
There is a strong correlation between Fig. 4 and Fig. 5. A steeper function in
the first figure implies a steeper function in the second. In fact $R\simeq
T_{f}$. Interestingly, the data seem to exclude a flat horizontal line in Fig.
4, i.e., a freeze-out temperature which is universal, independent of the
system size.
Knowing that the existence of a hadron gas phase leads to a reduction in the
ratio $R=K^{*}/K$, the systematic study of this ratio in different collisions
and at different energies will help us in better determining the properties of
the hadron gas. In proton-proton collisions, where there is no hadron gas and
hence no $K^{*}$ absorption, $R$ should be maximal. Moving to p-A and A-A
collisions we expect to see the formation of a larger and longer-living hadron
gas. Also, when we move to larger systems we observe a growth of the
multiplicity of produced particles and also of the multiplicity density in the
central rapidity region $\mathcal{N}=dN/d\eta(\eta=0)$, which is usually taken
as a measure of the size of the system.
In our approach to study the ratio $R=K^{*}/K$ we made some simplifications.
The goal was to determine which ingredients are really crucial to understand
the observed behavior. One of the simplifications was to neglect the volume of
the system. The colliding systems mentioned in Fig. 5 are different and so are
the corresponding hadronic gases, which have different volumes. In our study
these differences are partly considered in Eq. (15). Moreover the details of
the light flavor composition of these different systems were not taken into
account. In each of the systems considered in Fig. 5 the finally measured
$\pi$ and $\rho$ multiplicities are different. In thermal models, this
difference is usually accounted for by the fugacity factor, $\gamma$, which
should appear multiplying the right-hand side of Eq. (2). We have taken $\gamma=1$ for
$\pi$’s and $\rho$’s in $p$-Pb and Pb-Pb collisions. Previous studies with
thermal models showed that, in Pb-Pb collisions, we could have
$\gamma_{\pi}\simeq 1.3$ and $\gamma_{\rho}\simeq 1.2$. Changes in these
quantities would modify Eqs. (8) and (II.2). We have checked that, using
these values for $\gamma_{\pi}$ and $\gamma_{\rho}$, we would obtain, in
Fig. 5, curves with the same shape as the solid line but shifted upwards. For
conciseness we decided not to include them in the figure. Furthermore,
different numerical values could have been used, such as the hadronization
temperature, $T_{h}$, or the numbers contained in the parametrizations. None
of these changes, however, would substantially change the curve shown in Fig. 5.
To summarize: we have improved the treatment of the microscopic dynamics of
$K^{*}$’s. We used all the relevant reaction cross sections involving
$K^{*}$’s calculated in Ref. abreu as input in the evolution equations
(II.2). We included cooling and the dependence of the freeze-out temperature
on the system size. We obtained a very good description of the data published
in alice20 on $R=K^{*}/K$ as a function of $dN/d\eta(\eta=0)$. In order to
reproduce the features of Fig. 5 we need the three aspects of the process: i)
dominance of the $K^{*}$ absorption reactions; ii) cooling and iii) system
size dependent freeze-out.
## IV Appendix
In this appendix we have included the parametrization used to reproduce the
thermally averaged cross sections calculated in Ref. suhoung and in Ref.
abreu . It is given by:
$\braket{\sigma\,v}(T)=p_{0}+p_{1}T+p_{2}T^{2}+p_{3}T^{3}.$ (16)
The coefficients $p_{i}$ are given in Table I.
| $p_{0}$ | $p_{1}$ | $p_{2}$ | $p_{3}$
---|---|---|---|---
$K^{*}\rho\to K\pi$ abreu | 92 | -0.91 | 0.0043 | $-7.2\times 10^{-6}$
$K^{*}\rho\to K\pi$ suhoung | 1.78 | -0.0052 | 0.000007 | 0
$K\pi\to K^{*}\rho$ abreu | -20 | 0.6 | -0.007 | $3.5\times 10^{-5}$
$K\pi\to K^{*}\rho$ suhoung | 0.2 | -0.004 | 0.00001 | $6.0\times 10^{-8}$
$K^{*}\pi\to K\rho$ abreu | -8.5 | 0.200 | -0.00085 | $1.23\times 10^{-6}$
$K^{*}\pi\to K\rho$ suhoung | -0.1 | -0.002 | 0.00007 | $-1.5\times 10^{-7}$
$K\rho\to K^{*}\pi$ abreu | 25.3 | -0.143 | 0.00052 | $-8.0\times 10^{-7}$
$K\rho\to K^{*}\pi$ suhoung | 0 | 0.010 | -0.000014 | 0
$K\pi\to K^{*}$ suhoung | -3 | 0.27 | -0.0019 | $3.8\times 10^{-6}$
$K^{*}\to K\pi$ suhoung | 0.2579 | $-4.32\times 10^{-4}$ | $6.0\times 10^{-7}$ | $-6.5\times 10^{-10}$
Table 1: Parameters used in (16). With the above numbers the temperature is
given in MeV and the output is the thermally averaged cross section in mb.
In the last line, the average decay width is given in $\mathrm{fm}^{-1}$.
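Evaluating the parametrization (16) with the Table 1 coefficients reproduces the roughly order-of-magnitude gap between the two sets of cross sections noted in Fig. 2. For example, for $K^{*}\rho\to K\pi$ at $T=150$ MeV:

```python
def sigma_v(T, p):
    """Eq. (16): <sigma v>(T) = p0 + p1*T + p2*T^2 + p3*T^3 (T in MeV, result in mb)."""
    p0, p1, p2, p3 = p
    return p0 + p1 * T + p2 * T**2 + p3 * T**3

# K* rho -> K pi coefficients, read off Table 1
coef_abreu = (92.0, -0.91, 0.0043, -7.2e-6)
coef_suhoung = (1.78, -0.0052, 7e-6, 0.0)

ratio = sigma_v(150.0, coef_abreu) / sigma_v(150.0, coef_suhoung)  # roughly 24
```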
###### Acknowledgements.
This work was partially financed by the Brazilian funding agencies CAPES and
CNPq.
## References
* (1) E. Shuryak, Prog. Part. Nucl. Phys. 62, 48 (2009).
* (2) M. Gyulassy and L. McLerran, Nucl. Phys. A 750, 30 (2005).
* (3) G. Torrieri and J. Rafelski, Phys. Lett. B 509, 239 (2001).
* (4) M. Bleicher and J. Aichelin, Phys. Lett. B 530, 81 (2002).
* (5) J. Rafelski, J. Letessier, and G. Torrieri, Phys. Rev. C 64, 054907 (2001).
* (6) A. G. Knospe, C. Markert, K. Werner, J. Steinheimer and M. Bleicher, Phys. Rev. C 93, 014911 (2016).
* (7) J. Steinheimer, J. Aichelin, M. Bleicher and H. Stöcker, Phys. Rev. C 95, 064902 (2017).
* (8) J. Adams et al. (STAR Collaboration), Phys. Rev. C 71, 064902 (2005).
* (9) M. M. Aggarwal et al. (STAR Collaboration), Phys. Rev. C 84, 034909 (2011).
* (10) B. B. Abelev et al. [ALICE], Phys. Rev. C 91, 024609 (2015).
* (11) J. Adam et al. [ALICE], Phys. Rev. C 95, 064606 (2017).
* (12) S. Acharya et al. [ALICE Collaboration], Phys. Lett. B 802, 135225 (2020).
* (13) M. E. Bracco, F. S. Navarra and M. Nielsen, Phys. Lett. B 454, 346 (1999).
* (14) K. P. Khemchandani, A. Martinez Torres, F. S. Navarra, M. Nielsen and L. Tolos, Phys. Rev. D 91, 094008 (2015).
* (15) S. Cho and S. H. Lee, Phys. Rev. C 97, 034908 (2018).
* (16) A. Martinez Torres, K. P. Khemchandani, L. M. Abreu, F. S. Navarra and M. Nielsen, Phys. Rev. D 97, 056001 (2018).
* (17) Y. S. Oh, T. Song, and S. H. Lee, Phys. Rev. C 63, 034901 (2001).
* (18) F. Carvalho, F. O. Duraes, F. S. Navarra and M. Nielsen, Phys. Rev. C 72, 024902 (2005).
* (19) A. Martinez Torres, K. P. Khemchandani, F. S. Navarra, M. Nielsen, and L. M. Abreu, Phys. Rev. D 90, 114023 (2014); 93, 059902(E) (2016).
* (20) L. M. Abreu, K. P. Khemchandani, A. Martinez Torres, F. S. Navarra, and M. Nielsen, Phys. Lett. B 761, 303 (2016).
* (21) L. S. Geng, E. Oset, L. Roca, and J. A. Oller, Phys. Rev. D 75, 014017 (2007).
* (22) C. Daum et al. [ACCMOR Collaboration], Nucl. Phys. B 187, 1 (1981).
* (23) B. Abelev et al. [ALICE Collaboration], Phys.Rev. C 88, 044910 (2013).
* (24) G. E. Brown, C. M. Ko, Z. G. Wu and L. H. Xia, Phys. Rev. C 43, 1881 (1991).
* (25) A. Ilner, J. Blair, D. Cabrera, C. Markert and E. Bratkovskaya, Phys. Rev. C 99, 024914 (2019).
* (26) A. Ilner, D. Cabrera, C. Markert and E. Bratkovskaya, Phys. Rev. C 95, 014903 (2017).
* (27) Z. Lin and C. M. Ko, Phys. Rev. C 62, 034903 (2000).
* (28) Y. Hama and F. S. Navarra, Z. Phys. C 53, 501 (1992).
* (29) J. Adams et al. [STAR Collaboration], Nucl. Phys. A 757, 102 (2005).
* (30) S. K. Singh, P. Ghosh and J. K. Nayak, arXiv:2007.00053
* (31) P. Ghosh, J. K. Nayak, S. K. Singh and S. K. Agarwalla, Phys. Rev. D 101, 094004 (2020).
# Buying Data Over Time: Approximately Optimal Strategies for Dynamic Data-
Driven Decisions
Nicole Immorlica Microsoft Research<EMAIL_ADDRESS>Ian A. Kash
Department of Computer Science, University of Illinois at Chicago,
<EMAIL_ADDRESS>Brendan Lucier Microsoft Research<EMAIL_ADDRESS>
###### Abstract
We consider a model where an agent has a repeated decision to make and wishes
to maximize their total payoff. Payoffs are influenced by an action taken by
the agent, but also an unknown state of the world that evolves over time.
Before choosing an action each round, the agent can purchase noisy samples
about the state of the world. The agent has a budget to spend on these
samples, and has flexibility in deciding how to spread that budget across
rounds. We investigate the problem of choosing a sampling algorithm that
optimizes total expected payoff. For example: is it better to buy samples
steadily over time, or to buy samples in batches? We solve for the optimal
policy, and show that it is a natural instantiation of the latter. Under a
more general model that includes per-round fixed costs, we prove that a
variation on this batching policy is a $2$-approximation.
## 1 Introduction
The growing demand for machine learning practitioners is a testament to the
way data-driven decision making is shaping our economy. Data has proven so
important and valuable because so much about the current state of the world is
_a priori_ unknown. We can better understand the world by investing in data
collection, but this investment can be costly; deciding how much data to
acquire can be a non-trivial undertaking, especially in the face of budget
constraints. Furthermore, the value of data is typically not linear. Machine
learning algorithms often see diminishing returns to performance as their
training dataset grows [22, 10]. This non-linearity is further complicated by
the fact that a data-driven decision approach is typically intended to replace
some existing method, so its value is relative to the prior method’s
performance.
As a motivating example for these issues, consider a politician who wishes to
accurately represent the opinion of her constituents. These constituents have
a position on a policy, say the allocation of funding to public parks. The
politician must choose her own position on the policy or abstain from the
discussion. If she states a position, she experiences a disutility that is
increasing in the distance of her position from that of her constituents. If
she abstains, she incurs a fixed cost for failing to take a stance. To help
her make an optimal decision she can hire a polling firm that collects data on
the participants’ positions.
We focus on the dynamic element of this story. In many decision problems, the
state of the world evolves over time. In the example above, the opinions of
the constituents might change as time passes, impacting the optimal position
of the politician. As a result, data about the state of the world becomes
stale. Furthermore, many decisions are not made a single time; instead,
decisions are made repeatedly. In our example, the politician can update
funding levels each fiscal quarter.
When faced with budget constraints on data collection and the issue of data
staleness, decisions need to be made about when to collect data and when to
save budget for the future, and whether to make decisions based on stale data
or apply a default, non-data-driven policy. Our main contribution is a
framework that models the impact of such budget constraints on data collection
strategies. In our example, the politician has a budget for data collection. A
polling firm charges a fixed cost to initiate a poll (e.g., create the survey)
plus a fee per surveyed participant. The politician may not have enough budget
to hire the firm to survey every constituent every quarter. Should she then
survey fewer constituents every quarter? Or survey a larger number of
constituents every other quarter, counting on the fact that opinions do not
drift too rapidly?
We initiate the study with arguably the simplest model that exhibits this
tension. The state of the world (constituents’ opinions) is hidden but drawn
from a known prior distribution, then evolves stochastically. Each round, the
decision-maker (politician) can collect one or more noisy samples that are
correlated with the hidden state at a cost affine in the number of samples
(conduct a poll). Then she chooses an action and incurs a loss. Should the
decision-maker not exhaust her budget in a given round, she can bank it for
future rounds. A sampling algorithm describes an online policy for scheduling
the collection of samples given the budget and past observations.
We instantiate this general framework by assuming a Gaussian prior, Gaussian
perturbations and Gaussian sample noise.[1] We capture the decisions that need
to be made as the problem of estimating the current state value, using the
classic squared loss to capture the cost of making a decision with imprecise
information. Alternatively, there is always the option to not make a decision
based on the data and instead accept a default constant loss. We assume a
budget on the number of samples collected per unit time; importantly, this
budget can be banked for future rounds if desired.

[1] A Gaussian prior is justified in our running example if we assume a
large-population limit of constituents’ opinions. That the prior estimate of
drift is also Gaussian is likewise motivated as the number of periods grows
large. We discuss alternative distributional assumptions on the prior,
perturbations and noise in Section 6.
### 1.1 A Simple Example.
To illustrate our technical model, suppose the hidden state (constituents’
average opinion) is initially drawn from a mean-zero Gaussian of variance $1$.
In each round, the state is subject to mean-zero Gaussian noise of variance
$1$ (the constituents update their opinions), which is added to the previous
round’s state. Any samples we choose to take are likewise subject to mean-
zero Gaussian noise of variance $1$ (polls are imperfect). Our budget for
samples is $1$ per period, and one can either guess at the hidden state
(incurring a penalty equal to the squared loss) or pass and take a default
loss of $3/4$. What is the expected average loss of the policy that takes a
single sample each round, and then takes the optimal action? As it turns out,
the expected loss is precisely $\phi-1\approx 0.618$, where $\phi$ is the
golden ratio $\frac{1+\sqrt{5}}{2}$ (see Section 3.5 for the analysis).
However, this is not optimal: saving up the allotted budget and taking two
samples every other round leads to an expected loss of
$\frac{0.75+\sqrt{2}-1}{2}\approx 0.582$. The intuition behind the improvement
is that taking a single sample every round beats the outside option, but not
by much; it is better to beat the outside option significantly on even-
numbered rounds (by taking 2 samples), then simply use the outside option on
odd-numbered rounds. It turns out that one cannot improve on this by saving up
for 3 or more rounds to take even more samples all at once. However, one can
do better by alternating between taking no samples for two periods and then
two samples each for two periods, which results in a long-run average loss of
$\approx 0.576$.
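The numbers in this example follow from a one-dimensional Kalman-filter variance recursion: each round the posterior variance $v$ grows by the process noise, then $k$ fresh samples (acting like a single sample of variance $1/k$) shrink it, and the round's loss is the smaller of $v$ and the outside option. A sketch that reproduces the three figures quoted above (budget accounting is left implicit in the fixed cyclic patterns):

```python
import math

def steady_loss(pattern, process_var=1.0, obs_var=1.0, outside=0.75):
    """Long-run average per-round loss of a cyclic sampling policy.
    pattern[i] is the number of samples bought in round i of the cycle."""
    v = 1.0  # prior variance; the cycle's steady state is independent of this
    for _ in range(500):
        losses = []
        for k in pattern:
            v += process_var                  # the hidden state drifts
            if k > 0:
                r = obs_var / k               # k samples ~ one sample of variance 1/k
                v = v * r / (v + r)           # Kalman/Bayesian variance update
            losses.append(min(v, outside))    # guess, or take the outside option
    return sum(losses) / len(losses)

phi = (1.0 + math.sqrt(5.0)) / 2.0
one_per_round = steady_loss([1])           # = phi - 1, about 0.618
two_every_other = steady_loss([0, 2])      # = (0.75 + sqrt(2) - 1)/2, about 0.582
off_off_on_on = steady_loss([0, 0, 2, 2])  # about 0.576
```

The fixed point of the pattern `[1]` solves $v=(v+1)/(v+2)$, i.e. $v^{2}+v-1=0$, which is exactly $\phi-1$.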
### 1.2 Our Results.
As we can see from the example above, the space of policies to consider is
quite large. One simple observation is that since samples become stale over
time it is never optimal to collect samples and then take the outside option
(i.e., default fixed-cost action) in the same round; it would be better to
defer data collection to later rounds where decisions will be made based on
data. As a result, a natural class of policies to consider is those which
alternate between collecting samples and saving budget. Such “on-off” policies
can be thought of as engaging in “data drives” while neglecting data
collection the rest of the time.
Our main result is that these on-off policies are asymptotically optimal, with
respect to all dynamic policies. Moreover, it suffices to collect samples at a
constant rate during the sampling part of the policy’s period. Our argument is
constructive, and we show how to compute an asymptotically optimal policy.
This policy divides time into exponentially-growing chunks and collects data
in the latter end of each chunk.
The solution above assumes that costs are linear in the number of samples
collected. We next consider a more general model with a fixed up-front cost
for the first sample collected in each round. This captures the costs
associated with setting up the infrastructure to collect samples on a given
round, such as hiring a polling firm which uses a two-part tariff. Under such
per-round costs, it can be suboptimal to sample in sequential periods (as in
an on-off policy), as this requires paying the fixed cost twice. For this
generalized cost model, we consider simple and approximately optimal policies.
When evaluating performance, we compare against a null “baseline” policy that
eschews data collection and simply takes the outside option every period. We
define the value of a policy to be its improvement over this baseline, so that
the null policy has a value of $0$ and every policy has non-negative value.
While this is equivalent to simply comparing the expected costs of policies,
this alternative measure is intended to capture how well a policy leverages
the extra value obtainable from data; we feel that this more accurately
reflects the relative performance of different policies.
We focus on a class of lazy policies that collect samples only at times when
the variance of the current estimate is worse than the outside option. This
class captures a heuristic based on a threshold rule: the decision-maker
chooses to collect data when they do not have enough information to gain over
the outside option. We show the optimal lazy policy is a $1/2$-approximation
to the optimal policy. The result is constructive, and we show how to compute
an asymptotically optimal lazy policy. Moreover, this approximation factor is
tight for lazy policies.
To derive these results, we begin with the well-known fact that the expected
loss under the squared loss cost function is the variance of the posterior. We
use an analysis based on Kalman filters [23], which are used to solve
localization problems in domains such as astronautics [27], robotics [35], and
traffic monitoring [37], to characterize the evolution of variance given a
sampling policy. We show how to maximize value using geometric arguments and
local manipulations to transform an optimal policy into either an on-off
policy or a lazy policy, respectively.
We conclude with two extensions. We described our results for a discrete-time
model, but one might instead consider a continuous-time variant in which
samples, actions, and state evolution occur continuously. We show how to
extend all of our results to such a continuous setting. Second, we describe a
non-Gaussian instance of our framework, where the state of the world is binary
and switches with some small probability each round. We solve for the optimal
policy, and show that (like the Gaussian model) it is characterized by non-
uniform, bursty sampling.
### 1.3 Other Motivating Examples.
We motivated our framework with a toy example of a politician polling his or
her constituents. But we note that the model is general and applies to other
scenarios as well. For example, suppose a phone uses its GPS to collect
samples, each of which provides a noisy estimate of location (reasonably
approximated by Gaussian noise). The “cost” of collecting samples is energy
consumption, and the budget constraint is that the GPS can only reasonably use
a limited portion of the phone’s battery capacity. The worse the location
estimate is, the less useful this information is to apps; sufficiently poor
estimates might even have negative value. However, as an alternative, apps
always have the outside option of providing location-unaware functionality.
Our analysis shows that it is approximately optimal to extrapolate from
existing data to estimate the user’s location most of the time, and only use
the GPS in “bursts” once the noise of the estimate exceeds a certain
threshold. Note that in this scenario the app never observes the “ground
truth” of the phone’s location. Similarly, our model might capture the problem
faced by a firm that runs user studies when deciding which features to include
in a product, given that such user studies are expensive to run and
preferences may shift within the population of customers over time.
### 1.4 Future Directions.
Our results provide insight into the trade-offs involved in designing data
collection policies in dynamic settings. We construct policies that navigate
the trade-off between cost of data collection and freshness of data, and show
how to optimize data collection schedules in a setting with Gaussian noise.
But perhaps our biggest contribution is conceptual, in providing a framework
in which these questions can be formalized and studied. We view this work as a
first step toward a broader study of the dynamic value of data. An important
direction for future work is to consider other models of state evolution
and/or sampling within our framework, aimed at capturing other applications.
For example, if the state evolves in a heavy-tailed manner, as in the non-
Gaussian instance explored in Section 6, then we show it is beneficial to take
samples regularly in order to detect large, infrequent jumps in state value,
and then adaptively take many samples when such a jump is evident. We solve
this extension only for a simple two-state Markov chain. Can we quantify the
dynamic value of data and find an (approximately) optimal and simple data
collection policy in a general Markov chain?
### 1.5 Related work
While we are not aware of other work addressing the value of data in a dynamic
setting, there has been considerable attention paid to the value of data in
static settings. Arrieta-Ibarra et al. [4] argue that the data produced by
internet users is so valuable that they should be compensated for their labor.
Similarly, there is growing appreciation for the value of the data produced on
crowdsourcing platforms like Amazon Mechanical Turk [6, 20]. Other work has
emphasized that not all crowdsourced data is created equal and studied the way
tasks and incentives can be designed to improve the quality of information
gathered [17, 31]. Similarly, data can have non-linear value if individual
pieces are substitutes or complements [8]. Prediction markets can be used to
gather information over time, with participants controlling the order in which
information is revealed [11].
There is a growing line of work attempting to determine the marginal value of
training data for deep learning methods. Examples include training data for
classifying medical images [9] and chemical processes [5], as well as for more
general problems such as estimating a Gaussian distribution [22]. These
studies consider the static problem of learning from samples, and generally
find that additional training data exhibits decreasing marginal value. Koh and
Liang [25] introduced the use of influence functions to quantify how the
performance of a model depends on individual training examples.
While we assume samples are of uniform quality, other work has studied agents
who have data of different quality or cost [29, 7, 16]. Another line studies
the way that data is sold in current marketplaces [33], as well as proposing
new market designs [28]. This includes going beyond markets for raw data to
markets which acquire and combine the outputs of machine learning models [34].
Our work is also related to statistical and algorithmic aspects of learning a
distribution from samples. A significant body of recent work has considered
problems of learning Gaussians using a minimal number of noisy and/or
adversarial samples [21, 13, 14, 26, 15]. In comparison, we are likewise
interested in learning a hidden Gaussian from which we obtain noisy samples
(as a step toward determining an optimal action), but rather than robustness to
adversarial noise, we are concerned with optimizing the split of samples across
time periods in a purely stochastic setting.
Our investigation of data staleness is closely related to the issue of concept
drift in streaming algorithms; see, e.g., Chapter 3 of [2]. Concept drift
refers to scenarios where the data being fed to an algorithm is pulled from a
model that evolves over time, so that, for example, a solution built using
historical data will eventually lose accuracy. Such scenarios arise in
problems of histogram maintenance [18], dynamic clustering [3], and others.
One problem is to quantify the amount of drift occurring in a given data
stream [1]. Given that such drift is present, one approach to handling concept
drift is via sliding-window methods, which limit dependence on old data [12].
The choice of window size captures a tension between using a lot of stale data
or a smaller amount of fresh data. However, in work on concept drift one
typically cannot control the rate at which data is collected.
Another concept related to staleness is the “age of information.” This
captures scenarios where a source generates frequent updates and a receiver
wishes to keep track of the current state, but due to congestion in the
transmission technology (such as a queue or database locks) it is optimal to
limit the rate at which updates are sent [24, 32]. Minimizing the age of
information can be captured as a limit of our model where a single sample
suffices to provide perfect information. Recent work has examined variants of
the model where generating updates is costly [19], but the focus in this
literature is more on the management of the congestible resource. Closer to
our work, several recent papers have eliminated the congestible resource and
studied issues such as an energy budget that is stochastic and has limited
storage capacity [38] and pricing schemes for when sampling costs are non-
uniform [36, 39]. Relative to our work these papers have simpler models of the
value of data and focus on features of the sampling policy given the energy
technology and pricing scheme, respectively.
## 2 Model
We first describe our general framework, then describe a specific
instantiation of interest in Section 2.1. Time occurs in rounds, indexed by
$t=1,2,\dotsc$. There is a hidden state variable $x_{t}\in\Omega$ that evolves
over time according to a stochastic process. The initial state $x_{1}$ is
drawn from known distribution $F_{1}$. Write $m_{t}$ for the (possibly
randomized) evolution mapping applied at round $t$, so that $x_{t+1}\leftarrow
m_{t}(x_{t})$.
In every round, the decision-maker chooses an action $y_{t}\in A$, and then
suffers a loss $\ell(y_{t},x_{t})$ that depends on both the action and the
hidden state. The evolution functions $(m_{t})$ and loss function $\ell$ are
known to the decision-maker, but neither the state $x_{t}$ nor the loss
$\ell(y_{t},x_{t})$ is directly observed. (Assuming that the ground truth for
$\ell(y_{t},x_{t})$ is unobserved captures scenarios like our political
example, and approximates settings where the decision-maker only gets weak
feedback, feedback at a delay, or feedback in aggregate over a long period of
time. Observing the loss would provide additional information about $x_{t+1}$;
this could be considered a variant of our model in which the decision-maker
gets some number of samples “for free” each round from observing a noisy
version of the loss.) Rather, on each round before choosing an action, the decision-maker
can request one or more independent samples that are correlated with $x_{t}$,
drawn from a known distribution $\Gamma(x_{t})$.
Samples are costly, and the decision-maker has a budget that can be used to
obtain samples. The budget is $B$ per round, and can be banked across rounds.
A sampling policy results in a number of samples $s_{t}$ taken in each round
$t$, which can depend on all previous observations. The cost of taking $s_{t}$
samples in round $t$ is $C(s_{t})\geq 0$. We assume that $C$ is non-decreasing
and $C(0)=0$. A sampling policy is _valid_ if $\sum_{t=1}^{T}C(s_{t})\leq
B\cdot T$ for all $T$. For example, $C(s_{t})=s_{t}$ corresponds to a cost of
$1$ per sample, and setting $C(s_{t})=s_{t}+z\cdot\mathbbm{1}_{s_{t}>0}$ adds
an additional cost of $z$ for each round in which at least one sample is
collected.
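As a small illustration (the function name and floating-point tolerance are our own, not from the paper), the validity condition for this cost model can be checked with a running prefix sum:

```python
def is_valid(samples, B, z=0.0):
    """Check the budget constraint sum_{t<=T} C(s_t) <= B*T for every prefix T,
    where C(s) = s + z * 1{s > 0}."""
    spent = 0.0
    for T, s in enumerate(samples, start=1):
        spent += s + (z if s > 0 else 0.0)
        if spent > B * T + 1e-9:  # small tolerance for floating-point sums
            return False
    return True
```

For example, with $B=1$ and $z=0$ the schedule $(0,2,0,2,\dotsc)$ is valid, but adding a fixed per-round cost $z=1$ makes it infeasible, and front-loading the samples as $(2,0,\dotsc)$ violates the prefix constraint.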
To summarize: on each round, the decision-maker chooses a number of samples
$s_{t}$ to observe, then chooses an action $y_{t}$. Their loss
$\ell(y_{t},x_{t})$ is then realized, the value of $x_{t}$ is updated to
$x_{t+1}$, and the process proceeds with the next round. The goal is to
minimize the expected long-run average of $\ell(y_{t},x_{t})$, in the limit as
$t\to\infty$, subject to $\sum_{t=0}^{T}C(s_{t})\leq B\cdot T$ for all $T\geq
1$.
### 2.1 Estimation under Gaussian Drift
We will be primarily interested in the following instantiation of our general
framework. The hidden state variable is a real number (i.e.,
$\Omega=\mathbb{R}$) and the decision-maker’s goal is to estimate the hidden
state in each round. The initial state is $x_{1}\sim N(0,\rho)$, a Gaussian
with mean $0$ and variance $\rho>0$. Moreover, the evolution process $m_{t}$
sets $x_{t+1}=x_{t}+\delta_{t}$, where each $\delta_{t}\sim N(0,\rho)$
independently. We recall that the decision-maker knows the evolution process
(and hence $\rho$) but does not directly observe the realizations
$\delta_{t}$.
Each sample in round $t$ is drawn from $N(x_{t},\sigma)$ where $\sigma>0$.
Some of our results will also allow fractional sampling, where we think of an
$\alpha\in(0,1)$ fraction of a sample as a sample drawn from
$N(x_{t},\sigma/\alpha)$. (One can view fractional sampling as modeling
scenarios where the value of any single sample is quite small, i.e., has
high variance, so that a single “unit” of variance is derived from taking many
samples, e.g., sampling a single constituent in our polling example. It also
captures settings where it is possible to obtain samples of varying quality
with different levels of investment.) The action space is
$A=\mathbb{R}\cup\\{\perp\\}$. If the decision-maker chooses
$y_{t}\in\mathbb{R}$, her loss is the squared error of her estimate
$(y_{t}-x_{t})^{2}$. If she is too unsure of the state, she may instead take a
default action $y_{t}=\perp$, which corresponds to not making a guess; this
results in a constant loss of $c>0$. Let $G_{t}$ be a random variable whose
law is the decision maker’s posterior after observing whatever samples are
taken in round $t$ as well as all previous samples. The decision maker’s
subjective expected loss when guessing $y_{t}\in\mathbb{R}$ is
$E[(y_{t}-G_{t})^{2}]$. This is well known to be minimized by taking
$y_{t}=E[G_{t}]$, in which case the expected loss is
$E[(E[G_{t}]-G_{t})^{2}]=\text{Var}(G_{t})$. It is therefore optimal to guess
$y_{t}=E[G_{t}]$ if and only if $\text{Var}(G_{t})<c$, and to pass otherwise.
We focus on deriving approximately optimal sampling algorithms. To do so, we
need to track the variance of $G_{t}$ as a function of the sampling strategy.
As the sample noise and random state perturbations are all zero-mean Gaussians,
$G_{t}$ is a zero-mean Gaussian as well, and the evolution of its variance has
a simple form.
###### Lemma 1.
Let $v_{t}$ be the variance of $G_{t}$ and suppose each $\delta_{t}\sim
N(0,\rho)$ independently, and that each sample is subject to zero-mean
Gaussian noise with variance $\sigma$. Then, if the decision-maker takes $s$
samples in round $t+1$, the variance of $G_{t+1}$ is
$v_{t+1}=\frac{v_{t}+\rho}{1+\frac{s}{\sigma}(v_{t}+\rho)}.$
The proof, which is deferred to the appendix along with all other proofs,
follows from our model being a special case of the model underlying a Kalman
filter.
The optimization problem therefore reduces to choosing a number of samples
$s_{t}$ to take in each round $t$ in order to minimize the long-run average of
$\min(v_{t},c)$, the loss of the optimal action. That is, the goal is to
minimize $\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\min(v_{t},c),$ where
we take the superior limit so that the quantity is defined even when the
average is not convergent. We choose
$C(s_{t})=s_{t}+z\cdot\mathbbm{1}_{s_{t}>0}$, so this optimization is subject
to the budget constraint that, at each time $T\geq 1$,
$\sum_{t=1}^{T}s_{t}+z\cdot\mathbbm{1}_{s_{t}>0}\leq BT$. This captures two
kinds of information acquisition costs faced by the decision-maker. First she
faces a cost per sample, which we have normalized to one. Second, she faces a
fixed cost $z$ (which may be 0) on each day she chooses to take samples,
expressed in terms of the number of samples that could instead have been taken
on some other day had this cost not been paid. This captures the costs
associated with setting up the infrastructure to collect samples on a given
round, such as getting data collectors to the location where they are needed,
hiring a polling firm which uses a two-part tariff, or establishing a
satellite connection to begin using a phone’s GPS.
A useful baseline performance is the cost of a policy that takes no samples
and simply chooses the outside option at all times. We refer to this as the
_null policy_. The _value_ of a sampling policy $s$, denoted $\text{Val}(s)$,
is defined to be the amount by which the cost of the null policy exceeds its
own cost: $\liminf_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\max(c-v_{t},0).$ Note
that maximizing value is equivalent to minimizing cost, which we illustrate in
Section 3.1. We say that a policy is $\alpha$-approximate if its value is at
least an $\alpha$ fraction of the optimal policy’s value.
## 3 Analyzing Variance Evolution
Before moving on to our main results, we show how to analyze the evolution of
the variance resulting from a given sampling policy. We first illustrate our
model with a particularly simple class of policies: those where $s_{t}$ takes
on only two possible values. We then analyze arbitrary periodic policies and
show, via a contraction argument, that the variance they induce converges to a
periodic evolution.
### 3.1 Visualizing the Decision Problem
To visualize the problem, we begin by plotting the result of an example policy
where the spending rate is constant for some interval of rounds, then shifts
to a different constant spending rate. Figure 1 illustrates one such policy.
The spending rates are indicated as alternating line segments, while the
variance is an oscillating curve, always converging toward the current
spending rate. Note that this particular policy is periodic, in the sense that
the final variance is the same as the initial variance. The horizontal line
gives one possible value for the cost of the outside option. Given this, the
optimal policy is to guess whenever the orange curve is below the green line
and take the outside option whenever it is above it. Thus, the loss associated
with this spending policy is given by the orange shaded area in Figure 1.
Minimizing this loss is equivalent to maximizing the green shaded area, which
corresponds to the value of the spending policy. The null policy, which takes
no samples and has variance greater than $c$ always (possibly after an initial
period if $v_{0}<c$), has value $0$.
Figure 1: The variance for a piecewise-constant sampling policy, and its loss
and benefit
### 3.2 Periodic Policies
We next consider policies that are periodic. A _periodic policy_ with period
$R$ has the property that $s_{t}=s_{t+R}$ for all $t\geq 1$. Such policies are
natural and have useful structure. In a periodic policy, the variance
$(v_{t})$ converges uniformly to being periodic in the limit as $t\to\infty$.
This follows because the impact of sampling on variance is a contraction map.
###### Definition 1.
Given a normed space $X$ with norm $||\cdot||$, a mapping $\Psi\colon X\to X$
is a _contraction map_ if there exists a $k<1$ such that, for all $x,y\in X$,
$||\Psi(x)-\Psi(y)||\leq k||x-y||$.
###### Lemma 2.
Fix a sampling policy $s$, and a time $R\geq 1$, and suppose that $s$ takes a
strictly positive number of samples in each round $t\leq R$. Let $\Psi$ be the
mapping defined as follows: supposing that $v_{0}=x$ and $v$ is the variance
function resulting from sampling policy $s$, set $\Psi(x):=v_{R}$. Then $\Psi$
is a contraction map over the non-negative reals, under the absolute value
norm.
The proof appears in Appendix C. It is well known that a contraction map has a
unique fixed point, and repeated application will converge to that fixed
point. Since we can view the impact of the periodic sampling policy as
repeated application of mapping $\Psi$ to the initial variance in order to
obtain $v_{0},v_{R},v_{2R},\dotsc$, we conclude that the variance will
converge uniformly to a periodic function for which $v_{t}=v_{t+R}$. Thus, for
the purpose of evaluating long-run average cost, it will be convenient (and
equivalent) to replace the initial condition on $v_{0}$ with a periodic
boundary condition $v_{0}=v_{R}$, and then choose $s$ to minimize the average
cost over a single period, $\frac{1}{R}\sum_{t=1}^{R}\min\\{v_{t},c\\},$
subject to the budget constraint that, at any round $T\leq R$, we have
$\sum_{t=1}^{T}s_{t}\leq BT$.
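As a numerical illustration of this convergence (a sketch built on the Lemma 1 update; function names are ours), two very different initial variances collapse onto the same periodic limit under a common schedule. With the schedule $(0,0,2,2)$ the end-of-period fixed point is $(\sqrt{14}-3)/2$, matching the worked example in Section 3.5:

```python
def variance_after_periods(v0, schedule, periods, rho=1.0, sigma=1.0):
    """Apply a periodic sampling schedule repeatedly, returning the variance
    observed at the end of each full period."""
    v = v0
    trace = []
    for _ in range(periods):
        for s in schedule:
            prior = v + rho
            v = prior / (1.0 + (s / sigma) * prior)  # Lemma 1 update
        trace.append(v)
    return trace

# Same periodic policy, very different starting variances:
low = variance_after_periods(0.01, [0, 0, 2, 2], periods=40)
high = variance_after_periods(50.0, [0, 0, 2, 2], periods=40)
```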
### 3.3 Lazy Policies
Write $\tilde{v}=v_{t-1}+\rho$ for the variance that would be obtained in
round $t$ if $s_{t}=0$. We say that a policy is _lazy_ if $s_{t}=0$ whenever
$\tilde{v}_{t}<c$. That is, samples are collected only at times where the
variance would otherwise be at or above the outside option value $c$.
Intuitively, we can think of such a policy as collecting a batch of samples in
one round, then “free-riding” off of the resulting information in subsequent
rounds. The free-riding occurs until the posterior variance grows large enough
that it becomes better to select the outside option, at which point the policy
may collect another batch of samples.
If a policy is lazy, then its variance function $v$ increases by $\rho$
whenever $\tilde{v}_{t}<c$, with downward steps only at times corresponding to
when samples are taken. Furthermore, the value of such a policy decomposes
among these sampling instances: for any $t$ where $s_{t}>0$, resulting in a
variance of $v_{t}<c$, if we write $h=\lfloor c-v_{t}\rfloor$ (normalizing the
drift to $\rho=1$) then we can
attribute a value of $\frac{1}{2}h(h+1)+(h+1)(c-v_{t}-h)$. Geometrically, this
is the area of the “discrete-step triangle” formed between the increasing
sequence of variances $v_{t}$ and the constant line at $c$, over the time
steps $t,\dotsc,t+h+1$.
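This decomposition can be checked numerically. The sketch below (our own code, normalizing the drift to $\rho=1$ as the floor in the formula implicitly does) compares the closed form against direct per-round summation:

```python
import math

def attributed_value(v, c):
    """Closed-form value credited to a sampling instance that leaves
    variance v < c (drift normalized to rho = 1)."""
    h = math.floor(c - v)
    return 0.5 * h * (h + 1) + (h + 1) * (c - v - h)

def summed_value(v, c):
    """Direct summation of the per-round gain max(c - variance, 0) as the
    variance climbs by 1 per round until it reaches c."""
    total = 0.0
    while v < c:
        total += c - v
        v += 1.0
    return total
```

The two agree exactly, since both compute the area of the discrete-step triangle between the rising variance and the line at $c$.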
### 3.4 On-Off Policies
An On-Off policy is a periodic policy parameterized by a time interval $T$ and
a sampling rate $S$. Roughly speaking, the policy alternates between intervals
where it samples at a rate of $S$ each round, and intervals where it does not
sample. The two interval lengths sum to $T$, and the length of the sampling
interval is set as large as possible subject to budget feasibility. More
formally, the policy sets $s_{t}=0$ for all $t\leq(1-\alpha)\cdot T$, where
$\alpha=\min\\{B/S,1\\}\in[0,1]$ and $s_{t}=S$ for all $t$ such that
$(1-\alpha)T<t\leq T$. This policy is then repeated, on a cycle of length $T$.
The fraction $\alpha$ is chosen to be as large as possible, subject to the
budget constraint.
### 3.5 Simple Example Revisited
We can now justify the simple example we presented in the introduction, where
$\rho=\sigma=1$, $B=1$, and $c=0.75$. The policy that takes a single sample
each round is periodic with period $1$, and hence will converge to a variance
that is likewise equal each round. This fixed point variance, $v^{*}$,
satisfies $v^{*}=\frac{v^{*}+1}{1+(v^{*}+1)}$ by Lemma 1. Solving for $v^{*}$
yields $v^{*}=\frac{\sqrt{5}-1}{2}<0.75$, which is the average cost per round.
If instead the policy takes $k$ samples every $k$ rounds, this results in a
variance that is periodic of period $k$. After the round in which samples are
taken, the fixed-point variance satisfies
$v^{*}=\frac{v^{*}+k}{1+k(v^{*}+k)}$, again by Lemma 1. Solving for $v^{*}$,
and noting that $v^{*}+1\geq 1>c$, yields that the cost incurred by this
policy is minimized when $k=2$.
To solve for the policy that alternates between taking no samples for two
rounds and taking two samples on each of the next two rounds, suppose the
long-run periodic variances are $v_{1},v_{2},v_{3},v_{4}$, where samples are taken
on rounds $3$ and $4$. Then we have $v_{2}=v_{1}+1$,
$v_{3}=\frac{v_{2}+1}{1+2(v_{2}+1)}$, $v_{4}=\frac{v_{3}+1}{1+2(v_{3}+1)}$,
and $v_{1}=v_{4}+1$. Combining this sequence of equations yields
$4v_{1}^{2}+4v_{1}-13=0$, which we can solve to find
$v_{1}=\frac{-1+\sqrt{14}}{2}\approx 1.3708$. Plugging this into the equations
for $v_{2},v_{3},v_{4}$ and taking the average of $\min\\{v_{i},0.75\\}$ over
$i\in\\{1,2,3,4\\}$ yields the reported average cost of $\approx 0.576$.
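These fixed-point calculations can be verified by direct simulation. The following sketch (parameter names ours) estimates the long-run average cost of a periodic schedule by iterating the Lemma 1 update past the transient, reproducing $\approx 0.618$ for one sample per round, $\approx 0.582$ for two samples every other round, and $\approx 0.576$ for the off-off-on-on policy:

```python
def average_cost(schedule, c=0.75, rho=1.0, sigma=1.0, periods=300):
    """Estimate the long-run average of min(v_t, c) under a periodic sampling
    schedule, discarding the first half of the run as burn-in."""
    v, total, n = 1.0, 0.0, 0
    for p in range(periods):
        for s in schedule:
            prior = v + rho
            v = prior / (1.0 + (s / sigma) * prior)  # Lemma 1 update
            if p >= periods // 2:
                total += min(v, c)
                n += 1
    return total / n
```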
## 4 Solving for the Optimal Policy
In this section we show that when the cost of sampling is linear in the total
number of samples taken (i.e., $z=0$; recall that $z$ is the fixed per-round
cost of taking a positive number of samples, and that even when $z=0$ there is
still a positive per-sample cost), and when fractional sampling is allowed, then the
supremum value over all on-off policies is an upper bound on the value of any
policy. This supremum is achieved in the limit as the time interval $T$ grows
large. So, while no individual policy achieves the supremum, one can get
arbitrarily close with an on-off policy of sufficiently long period. Proofs
appear in Appendix C.
We begin with some definitions. For a given period length $T>0$, write $s^{T}$
for the on-off policy of period $T$ with optimal long-run average value.
Recall that $\text{Val}(s^{T})$ is the value of the policy $s^{T}$. We first argue that
larger time horizons lead to better on-off policies.
###### Lemma 3.
With fractional samples, for all $T>T^{\prime}$, we have
$\text{Val}(s^{T})>\text{Val}(s^{T^{\prime}})$.
Write $V^{*}=\sup_{T>0}\text{Val}(s^{T})$. Lemma 3 implies that
$V^{*}=\lim_{T\to\infty}\text{Val}(s^{T})$ as well. We show that no policy
satisfying the budget constraint can achieve value greater than $V^{*}$.
###### Theorem 1.
With fractional samples, the value of any valid policy $s$ is at most $V^{*}$.
The proof of Theorem 1 proceeds in two steps. First, for any given time
horizon $T$, it is suboptimal to move from having variance below the outside
option to above the outside option; one should always save up budget over the
initial rounds, then keep the variance below $c$ from that point onward. This
follows because the marginal sample cost of reducing variance diminishes as
variance grows, so it is more sample-efficient to recover from very high
variance once than to recover from moderately high variance multiple times.
Second, one must show that it is asymptotically optimal to keep the variance
not just below $c$, but uniform. This is done by a potential argument,
illustrating that a sequence of moves aimed at “smoothing out” the sampling
rate can only increase value and must terminate at a uniform policy. The
difficulty is that a sample affects not only the value in the round it is
taken, but in all subsequent rounds. We make use of an amortization argument
that appropriately credits value to samples, and use this to construct the
sequence of adjustments that increase overall value while bringing the
sampling sequence closer to uniform in an appropriate metric.
We also note that it is straightforward to compute the optimal on-off policy
for a given time horizon $T$, by choosing the sampling rate that maximizes
[value per round] $\times$ [fraction of time the policy is “on”]. One can
implement a policy whose value asymptotically approaches $V^{*}$ by repeated
doubling of the time horizon. Alternatively, since
$\lim_{T\to\infty}\text{Val}(s^{T})=V^{*}$, $s^{T}$ will be an approximately
optimal policy for sufficiently large $T$.
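A hypothetical implementation of this computation (our own sketch, not the paper's) simulates the variance recursion for candidate sampling rates and keeps the best one; rounding the off-interval to an integer number of rounds is an approximation of the exact budget split:

```python
def on_off_value(T, S, B=1.0, c=0.75, rho=1.0, sigma=1.0, periods=400):
    """Long-run average value (gain over the null policy) of an on-off policy:
    off for roughly (1 - alpha) * T rounds, then sampling at rate S per round,
    where alpha = min(B / S, 1) exhausts the per-period budget."""
    alpha = min(B / S, 1.0)
    off = round((1.0 - alpha) * T)  # rounding approximates the exact split
    schedule = [0.0] * off + [S] * (T - off)
    v, total, n = 1.0, 0.0, 0
    for p in range(periods):
        for s in schedule:
            prior = v + rho
            v = prior / (1.0 + (s / sigma) * prior)  # Lemma 1 update
            if p >= periods // 2:
                total += max(c - v, 0.0)
                n += 1
    return total / n

def best_on_off(T, rates, **kwargs):
    """Grid search over candidate sampling rates for a fixed period T."""
    return max(rates, key=lambda S: on_off_value(T, S, **kwargs))
```

With $B=1$, $c=0.75$, and $T=4$, the search selects $S=2$, i.e., the off-off-on-on policy from Section 3.5.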
## 5 Approximate Optimality of Lazy Policies
In the previous section we solved for the optimal policy when $z=0$, meaning
that there is no fixed per-round cost when sampling. We now show that for
general $z$, lazy policies are approximately optimal, obtaining at least $1/2$
of the value of the optimal policy. All proofs are deferred to Appendix D.
We begin with a lemma that states that, for any valid sampling policy and any
sequence of timesteps, it is possible to match the variance at those timesteps
with a policy that only samples at precisely those timesteps, and the
resulting policy will be valid.
###### Lemma 4.
Fix any valid sampling policy $s$ (not necessarily lazy) with resulting
variances $(v_{t})$, and any sequence of timesteps
$t_{1}<t_{2}<\dotsc<t_{\ell}<\dotsc$. Then there is a valid policy
$s^{\prime}$ such that
$\\{t~{}|~{}s^{\prime}_{t}>0\\}\subseteq\\{t_{1},\dotsc,t_{\ell},\dotsc\\}$,
resulting in variances $(\breve{v}_{t})$ with $\breve{v}_{t_{i}}\leq
v_{t_{i}}$ for all $i$.
The intuition is that if we take all the samples we would have spent between
timesteps $t_{\ell}$ and $t_{\ell+1}$ and instead spend them all at
$t_{\ell+1}$ the result will be a (weakly) lower variance at $t_{\ell+1}$. We
next show that any policy can be converted into a lazy policy at a loss of at
most half of its value.
Figure 2: Visualizing the construction in the proof of Theorem 2. Variance
(vertical) is plotted against time (horizontal). We approximate the value of
an optimal policy’s variance (orange) given $c$ (green). The squares (drawn in
blue) cover the gap between the curves, except possibly when
$|v_{t}-c|<\epsilon$ (for technical reasons). The lazy policy samples on
rounds corresponding to the left edge of each square, bringing the variance to
each square’s bottom-left corner.
###### Theorem 2.
The optimal lazy policy is $1/2$-approximate.
See Figure 2 for an illustration of the intuition behind the result. Consider
an arbitrary policy $s$, with resulting variance sequence $(v_{t})$. Imagine
covering the area between $(v_{t})$ and $c$ with squares, drawn left to right
with their upper faces lying on the outside option line, each chosen just
large enough so that $v_{t}$ never falls below the area covered by the
squares. The area of the squares is an upper bound on $\text{Val}(s)$.
Consider a lazy policy that drops a single atom on the left endpoint of each
square, bringing the variance to the square’s lower-left corner. The value of
this policy covers at least half of each square. Moreover, Lemma 4 implies
this policy is (approximately) valid, as it matches variances from the
original policy, possibly shifted early by a constant number of rounds. This
shifting can introduce non-validity; we fix this by delaying the policy’s
start by a constant number of rounds without affecting the asymptotic
behavior.
The factor $1/2$ in Theorem 2 is tight. To see this, fix the value of $c$ and
allow the budget $B$ to grow arbitrarily large. Then the optimal value tends
to $c$ as the budget grows, since the achievable variance on all rounds tends
to $0$. However, the lazy policy cannot achieve value greater than $c/2$, as
this is what would be obtained if the variance reached $0$ on the rounds on
which samples are taken.
Finally, while this result is non-constructive, one can compute a policy whose
value approaches an upper bound on the optimal lazy policy, in a similar
manner to the optimal on-off policy. One can show the best lazy policy over
any finite horizon has an “off” period (with no sampling) followed by an “on”
period (where $v_{t}\leq c$). One can then solve for the optimal number of
samples to take whenever $\tilde{v}_{t}>c$ by optimizing either value per unit
of (fixed plus per-sample) sampling cost, or by fully exhausting the budget,
whichever is better. See Lemma 8 in the appendix for details.
## 6 Extensions and Future Directions
We describe two extensions of our model in the appendix. First, we consider a
continuous-time variant where samples can be taken continuously subject to a
flow cost, in addition to being requested as discrete atoms. The decision-
maker selects actions continuously, and aims to minimize loss over time. All
of our results carry forward to this continuous extension.
Second, returning to discrete time, we consider a non-Gaussian instance of our
framework. In this model, there is a binary hidden state of the world, which
flips each round independently with some small probability $\epsilon>0$. The
decision-maker’s action in each round is to guess the hidden state of this
simple two-state Markov process, and the objective is to maximize the fraction
of time that this guess is made correctly. Each sample is a binary signal
correlated with the hidden state, matching the state of the world with
probability $\tfrac{1}{2}+\delta$ where $\delta>0$. The decision-maker can
adaptively request samples in each round, subject to the accumulating budget
constraint, before making a guess.
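The posterior in this binary-state model admits a simple closed-form update. The sketch below (our own naming) gives the flip-probability drift step, the Bayes update for a single sample, and the binary entropy used by the threshold policy:

```python
from math import log2

def drift(p, eps):
    """One round of state evolution: the state flips w.p. eps, pulling the
    posterior probability of state 1 toward 1/2."""
    return p * (1.0 - eps) + (1.0 - p) * eps

def observe(p, signal, delta):
    """Bayes update for a binary signal matching the state w.p. 1/2 + delta."""
    q = 0.5 + delta
    like1 = q if signal == 1 else 1.0 - q
    like0 = (1.0 - q) if signal == 1 else q
    return p * like1 / (p * like1 + (1.0 - p) * like0)

def entropy(p):
    """Binary entropy; a threshold policy samples until this drops below theta."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1.0 - p) * log2(1.0 - p))
```

Repeated matching signals drive the posterior (and hence the entropy) down quickly, while each round of drift pushes it back toward the uniform prior.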
Figure 3: Simulating the optimal policy for the non-Gaussian extension. The
round number is on the horizontal axis. The hidden state of the world is
binary and evolves stochastically (blue). The optimal policy tracks a
posterior distribution over the hidden state (red), and takes samples in order
to maintain a tuned level of certainty (dashed green). Note that most rounds
have only a small number of samples, with occasional spikes triggered
adaptively in response to uncertainty.
In this extension, as in our Gaussian model, the optimal policy collects
samples non-uniformly. In fact, the optimal policy has a simple form: it sets
a threshold $\theta>0$ and takes samples until the entropy of the posterior
distribution falls below $\theta$. Smaller $\theta$ leads to higher accuracy,
but also requires more samples on average, so the best policy will set
$\theta$ as low as possible subject to the budget constraint. Notably, the
result of this policy is that sampling tends to occur at a slow but steady
rate, keeping the entropy around $\theta$, except for occasional spikes of
samples in response to a perceived change in the hidden state. See Figure 3
for a visualization of a numerical simulation with a budget of $6$ samples (on
average) per round.
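The threshold policy described above is straightforward to simulate. The sketch below is illustrative only: the flip probability `eps`, signal correlation `delta`, and entropy threshold `theta` are arbitrary choices rather than the paper's tuned values, the budget constraint is replaced by a fixed threshold, and the per-round sample cap exists purely as a simulation safeguard.

```python
import math
import random

def entropy(p):
    """Binary entropy (bits) of a posterior Pr[state = 1] = p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def propagate(p, eps):
    """Push the posterior through one round of independent state flips."""
    return p * (1 - eps) + (1 - p) * eps

def observe(p, y, delta):
    """Bayes update on a binary signal y that matches the state w.p. 1/2 + delta."""
    q = 0.5 + delta
    like1 = q if y == 1 else 1 - q      # likelihood of y under state 1
    like0 = q if y == 0 else 1 - q      # likelihood of y under state 0
    return p * like1 / (p * like1 + (1 - p) * like0)

def simulate(rounds=2000, eps=0.01, delta=0.2, theta=0.5, seed=0):
    rng = random.Random(seed)
    state, p = 1, 0.5
    correct = samples = 0
    for _ in range(rounds):
        if rng.random() < eps:
            state = 1 - state
        p = propagate(p, eps)
        # Threshold policy: sample until posterior entropy drops below theta
        # (the cap of 100 samples per round is only a simulation safeguard).
        taken = 0
        while entropy(p) > theta and taken < 100:
            y = state if rng.random() < 0.5 + delta else 1 - state
            p = observe(p, y, delta)
            samples += 1
            taken += 1
        correct += ((p >= 0.5) == (state == 1))
    return correct / rounds, samples / rounds

accuracy, sample_rate = simulate()
```

As in Figure 3, most rounds need no new samples because the posterior is already confident; bursts of sampling occur just after the hidden state flips, when the propagated posterior's entropy rises back above the threshold.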
More generally, whenever the state evolves in a heavy-tailed manner, it is
tempting to take samples regularly in order to detect large, infrequent jumps
in state value, and then adaptively take many samples when such a jump is
evident. This simple model is one scenario where such behavior is optimal.
Looking ahead, can we quantify the dynamic value of data and find an
(approximately) optimal data collection policy for more complex Markov chains,
or other practical applications?
## References
* [1] Charu C. Aggarwal. A framework for diagnosing changes in evolving data streams. In Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, SIGMOD ’03, pages 575–586, New York, NY, USA, 2003. ACM.
* [2] Charu C. Aggarwal. Data Streams: Models and Algorithms (Advances in Database Systems). Springer-Verlag, Berlin, Heidelberg, 2006.
* [3] Charu C. Aggarwal, Jiawei Han, Jianyong Wang, and Philip S. Yu. A framework for clustering evolving data streams. In Proceedings of the 29th International Conference on Very Large Data Bases - Volume 29, VLDB ’03, pages 81–92. VLDB Endowment, 2003.
* [4] Imanol Arrieta-Ibarra, Leonard Goff, Diego Jiménez-Hernández, Jaron Lanier, and E Glen Weyl. Should we treat data as labor? Moving beyond "free". In AEA Papers and Proceedings, volume 108, pages 38–42, 2018.
* [5] Claudia Beleites, Ute Neugebauer, Thomas Bocklitz, Christoph Krafft, and Jürgen Popp. Sample size planning for classification models. Analytica chimica acta, 760:25–33, 2013.
* [6] Michael Buhrmester, Tracy Kwang, and Samuel D Gosling. Amazon’s mechanical turk: A new source of inexpensive, yet high-quality, data? Perspectives on psychological science, 6(1):3–5, 2011.
* [7] Yiling Chen, Nicole Immorlica, Brendan Lucier, Vasilis Syrgkanis, and Juba Ziani. Optimal data acquisition for statistical estimation. In Proceedings of the 2018 ACM Conference on Economics and Computation, pages 27–44. ACM, 2018.
* [8] Yiling Chen and Bo Waggoner. Informational substitutes. In 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pages 239–247. IEEE, 2016.
* [9] Junghwan Cho, Kyewook Lee, Ellie Shin, Garry Choy, and Synho Do. How much data is needed to train a medical image deep learning system to achieve necessary high accuracy? CoRR, abs/1511.06348, 2015.
* [10] Corinna Cortes, Lawrence D Jackel, Sara A Solla, Vladimir Vapnik, and John S Denker. Learning curves: Asymptotic values and rate of convergence. In Advances in Neural Information Processing Systems, pages 327–334, 1994.
* [11] Bo Cowgill, Justin Wolfers, and Eric Zitzewitz. Using prediction markets to track information flows: Evidence from google. In AMMA, page 3, 2009.
* [12] Mayur Datar, Aristides Gionis, Piotr Indyk, and Rajeev Motwani. Maintaining stream statistics over sliding windows: (extended abstract). In Proceedings of the Thirteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’02, pages 635–644, Philadelphia, PA, USA, 2002. Society for Industrial and Applied Mathematics.
* [13] I. Diakonikolas, G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart. Robust estimators in high dimensions without the computational intractability. In 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pages 655–664, 2016.
* [14] I. Diakonikolas, D. M. Kane, and A. Stewart. Statistical query lower bounds for robust estimation of high-dimensional gaussians and gaussian mixtures. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 73–84, 2017.
* [15] Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Robustly learning a gaussian: Getting optimal error, efficiently. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’18, pages 2683–2702, Philadelphia, PA, USA, 2018. Society for Industrial and Applied Mathematics.
* [16] Fang Fang, Maxwell Stinchcombe, and Andrew Whinston. "Putting your money where your mouth is": a betting platform for better prediction. Review of Network Economics, 6(2), 2007.
* [17] Simon Fothergill, Helena Mentis, Pushmeet Kohli, and Sebastian Nowozin. Instructing people for training gestural interactive systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1737–1746. ACM, 2012.
* [18] Anna C. Gilbert, Sudipto Guha, Piotr Indyk, Yannis Kotidis, S. Muthukrishnan, and Martin J. Strauss. Fast, small-space algorithms for approximate histogram maintenance. In Proceedings of the Thiry-fourth Annual ACM Symposium on Theory of Computing, STOC ’02, pages 389–398, New York, NY, USA, 2002. ACM.
* [19] Shugang Hao and Lingjie Duan. Regulating competition in age of information under network externalities. IEEE Journal on Selected Areas in Communications, 38(4):697–710, 2020.
* [20] Panagiotis G Ipeirotis. Analyzing the amazon mechanical turk marketplace. XRDS: Crossroads, The ACM Magazine for Students, 17(2):16–21, 2010.
* [21] Adam Tauman Kalai, Ankur Moitra, and Gregory Valiant. Efficiently learning mixtures of two gaussians. In Proceedings of the Forty-Second ACM Symposium on Theory of Computing, STOC ’10, page 553–562, New York, NY, USA, 2010. Association for Computing Machinery.
* [22] H. M. Kalayeh and D. A. Landgrebe. Predicting the required number of training samples. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5(6):664–667, Nov 1983.
* [23] Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. Journal of basic Engineering, 82(1):35–45, 1960.
* [24] Sanjit Kaul, Roy Yates, and Marco Gruteser. Real-time status: How often should one update? In 2012 Proceedings IEEE INFOCOM, pages 2731–2735. IEEE, 2012.
* [25] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In International Conference on Machine Learning, pages 1885–1894, 2017.
* [26] K. A. Lai, A. B. Rao, and S. Vempala. Agnostic estimation of mean and covariance. In 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pages 665–674, Los Alamitos, CA, USA, oct 2016. IEEE Computer Society.
* [27] Ern J Lefferts, F Landis Markley, and Malcolm D Shuster. Kalman filtering for spacecraft attitude estimation. Journal of Guidance, Control, and Dynamics, 5(5):417–429, 1982\.
* [28] Chao Li and Gerome Miklau. Pricing aggregate queries in a data marketplace. In WebDB, pages 19–24, 2012.
* [29] Annie Liang, Xiaosheng Mu, and Vasilis Syrgkanis. Dynamic information acquisition from multiple sources. arXiv preprint arXiv:1703.06367, 2017.
* [30] Sam Roweis and Zoubin Ghahramani. A unifying review of linear gaussian models. Neural computation, 11(2):305–345, 1999.
* [31] Nihar Bhadresh Shah and Denny Zhou. Double or nothing: Multiplicative incentive mechanisms for crowdsourcing. In Advances in neural information processing systems, pages 1–9, 2015.
* [32] Xiaohui Song and Jane W-S Liu. Performance of multiversion concurrency control algorithms in maintaining temporal consistency. In Proceedings Fourteenth Annual International Computer Software and Applications Conference, pages 132–133. IEEE Computer Society, 1990.
* [33] Florian Stahl, Fabian Schomm, and Gottfried Vossen. The data marketplace survey revisited. Technical report, Working Papers, ERCIS-European Research Center for Information Systems, 2014.
* [34] Amos Storkey. Machine learning markets. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 716–724, 2011.
* [35] Sebastian Thrun. Probabilistic algorithms in robotics. Ai Magazine, 21(4):93, 2000.
* [36] Xuehe Wang and Lingjie Duan. Dynamic pricing for controlling age of information. In 2019 IEEE International Symposium on Information Theory (ISIT), pages 962–966. IEEE, 2019.
* [37] Daniel B Work, Olli-Pekka Tossavainen, Sébastien Blandin, Alexandre M Bayen, Tochukwu Iwuchukwu, and Kenneth Tracton. An ensemble kalman filtering approach to highway traffic estimation using gps enabled mobile devices. In Decision and Control, 2008. CDC 2008. 47th IEEE Conference on, pages 5062–5068. IEEE, 2008.
* [38] Xianwen Wu, Jing Yang, and Jingxian Wu. Optimal status update for age of information minimization with an energy harvesting source. IEEE Transactions on Green Communications and Networking, 2(1):193–204, 2017.
* [39] Meng Zhang, Ahmed Arafa, Jianwei Huang, and H Vincent Poor. How to price fresh data. arXiv preprint arXiv:1904.06899, 2019.
## Appendix A Appendix: A Continuous Model
We now define a continuous version of our optimization problem, which is
useful for modeling big-data situations in which the value of individual data
samples is small, but the budget is large enough to allow the accumulation of
large datasets. Our continuous model will correspond to a limit of discrete
models as the variance of the sampling errors and the budget $B$ grow large.
Our first step is to consider relaxing the discrete model to allow fractional
samples. This extends Lemma 1 so that $s$ can be fractional. Note that since
the effect of taking $s$ samples, from Lemma 1, depends on the ratio
$s/\sigma$, we can think of taking a fraction $\alpha<1$ of a sample with
variance $\sigma$ as equivalent to taking a single sample with variance
$\sigma/\alpha$. With this equivalence in mind, we can without loss of
generality scale the variance of samples so that $\sigma=1$; this requires
only that we interpret the sample budget and numbers of samples taken as
scaled in units of inverse variance.
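This equivalence can be checked directly against the update rule of Lemma 1 (stated in Appendix B); the numerical values below are arbitrary.

```python
def update_variance(v, rho, s, sigma):
    """Lemma 1: posterior variance after process noise rho and s samples of variance sigma."""
    vt = v + rho
    return vt / (1 + (s / sigma) * vt)

alpha, sigma, v, rho = 0.3, 2.0, 5.0, 0.1
# A fraction alpha of one sample with variance sigma ...
frac = update_variance(v, rho, alpha, sigma)
# ... has the same effect as one whole sample with variance sigma / alpha,
# since only the ratio s / sigma enters the update.
whole = update_variance(v, rho, 1.0, sigma / alpha)
assert abs(frac - whole) < 1e-12
```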
Next, in the continuous model, the hidden state $x_{t}$ evolves continuously
for $t\geq 0$. The initial prior is Gaussian with some fixed variance $v_{0}$.
At each time $t$ we will have a posterior distribution over the hidden state.
Write $v(t)$ for the variance of the posterior distribution at time $t\geq 0$.
In particular, we have the initial condition $v(0)=v_{0}$.
Samples can be collected continuously over time at a specified density, as
well as in atoms at discrete points of time. As discussed above, we assume
without loss of generality that the variance of a single sample is equal to
$1$. Write $s(t)$ for the density at which samples are extracted at time $t$,
and write $a(t)$ for the mass of samples collected as an atom at time $t$.
Assume atoms are collected at times $\\{t_{i},i\in\mathbb{N}\\}$, i.e.,
$a(t)>0$ only if $t\in\\{t_{i}\\}$. Both $s$ and $a$, as well as the times
$\\{t_{i}\\}$ are chosen by the decision-maker.
To derive the evolution of the hidden state and the variance $v(t)$ of the
posterior at time $t$, we interpret this continuous model as a limit of the
following discretization. Partition time into intervals of length $\epsilon$,
say $[t,t+\epsilon)$ for each $t$ a multiple of $\epsilon$. We will consider a
discrete problem instance, with discrete rounds corresponding to times
$\epsilon,2\epsilon,3\epsilon,\dotsc$. At round $i$, corresponding to time
$t=i\cdot\epsilon$, a zero-mean Gaussian with variance $\epsilon$ is added to
the state. I.e., we take $\rho=\epsilon$ in our discrete model. We then
imagine drawing
$\int_{t=(i-1)\epsilon}^{i\epsilon}s(t)dt+\sum_{j:t_{j}\in[(i-1)\epsilon,i\epsilon)}a(t_{j})$
samples at round $i$, corresponding to time $t=i\cdot\epsilon$. This
represents the samples that would have been drawn over the course of the
interval $[(i-1)\epsilon,i\epsilon)$. We will also take the budget in this
discrete approximation to be $B/\epsilon$, so that the approximation is valid
if the continuous policy satisfies its budget requirement. Note that we can
think of this discretization as an approximation to the continuous problem,
with the same budget $B$, but with time scaled by a factor of $\epsilon$ so
that a single round in the discrete model corresponds to a time interval of
length $\epsilon$ in the continuous model.
Approximate $s$ by a function that is constant over intervals of length
$\epsilon$, say equal to $s(t)$ over the time interval $[t,t+\epsilon)$ for
each $t$ a multiple of $\epsilon$. Suppose for now that there are no atoms
over this period. We are then drawing $\epsilon s(t)$ samples at time
$t+\epsilon$. By Lemma 1, this causes the variance to drop by a factor of
$1+\epsilon s(t)(v(t)+\epsilon)$. The new variance at time $t+\epsilon$ is therefore
$v(t+\epsilon)=\frac{v(t)+\epsilon}{1+\epsilon s(t)(v(t)+\epsilon)}.$
As this change occurs over a time window of length $\epsilon$, the average
rate of change of $v$ over this interval is
$\frac{v(t+\epsilon)-v(t)}{\epsilon}=\frac{1}{\epsilon}\left(\frac{v(t)+\epsilon}{1+\epsilon
s(t)(v(t)+\epsilon)}-v(t)\right)=\frac{1-s(t)v^{2}(t)-\epsilon
s(t)v(t)}{1+\epsilon s(t)(v(t)+\epsilon)}$
Taking the limit as $\epsilon\to 0$, the instantaneous rate of change of $v$
at $t$ is
$v^{\prime}(t)=1-s(t)\cdot v^{2}(t).$
The variance function $v$ is therefore described by the differential equation
above, for any $t$ at which $a(t)=0$.
If there is an atom at $t$, so that $a(t)>0$, then for sufficiently small
$\epsilon$ the number of samples in the range $[t,t+\epsilon)$ is instead
$a(t)+\epsilon s(t)$. As $\epsilon\to 0$, this introduces a discontinuity in
$v(\cdot)$ at $t$, since the number of samples taken does not vanish in the
limit. With this in mind, we will take the convention that $v(t)$ represents
$\lim_{t^{\prime}\to t^{+}}v(t^{\prime})$, the right-limit of $v$; this informally
corresponds to the variance “after” having taken atoms at $t$. We then define
$\tilde{v}(t)=\lim_{t^{\prime}\to t^{-}}v(t^{\prime})$ to be the variance “before”
applying any such atom. Lemma 1 then yields that
$v(t)=\frac{\tilde{v}(t)}{1+a(t)\tilde{v}(t)}.$
We emphasize that, under this notation, $v(t)$ represents the variance after
having applied the atom at time $t$, if any. These discontinuities combined
with the differential equation above provide a full characterization of the
evolution of the variance $v(\cdot)$, given $s(\cdot)$ and $a(\cdot)$ and the
initial condition $v(0)=v_{0}$.
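The characterization above can be integrated numerically. The sketch below uses a forward-Euler step for the flow equation $v^{\prime}(t)=1-s(t)v^{2}(t)$ and applies atoms as discrete jumps; the constant flow rate and the single atom are illustrative choices, not values from the paper. With constant flow $s$ and no atoms, the variance settles at the rest point $v=1/\sqrt{s}$ of the differential equation.

```python
def simulate_variance(v0, T, dt, s_rate, atoms):
    """Euler-integrate v'(t) = 1 - s_rate * v(t)^2, applying each atom
    (t_i, a_i) as a discontinuous jump v <- v / (1 + a_i * v)."""
    pending = sorted(atoms)
    v, t = v0, 0.0
    while t < T:
        v += dt * (1.0 - s_rate * v * v)   # unit-rate diffusion minus flow sampling
        t += dt
        while pending and t >= pending[0][0]:
            _, a = pending.pop(0)
            v = v / (1 + a * v)
    return v

# Flow rate s = 4 has rest point 1/sqrt(4) = 0.5; the atom at t = 10 causes a
# transient dip, after which the variance relaxes back to the rest point.
v_final = simulate_variance(v0=4.0, T=50.0, dt=1e-3, s_rate=4.0, atoms=[(10.0, 2.0)])
```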
Then the total number of samples acquired over a time period $[0,T]$ is
$\int_{0}^{T}s(t)dt+\sum_{i:t_{i}\in[0,T]}a(t_{i})$. We normalize the cost per
sample to 1 as in the discrete case, but modeling the fixed costs is more
subtle. In particular, consider some intermediate discretization. If we apply
the fixed cost $f$ at each interval, we get the counterintuitive result that
taking $s$ samples today is less expensive than taking $s/2$ samples in the
morning and $s/2$ in the afternoon, when logically the two should have at
least similar costs. On the other hand, if we scale the cost to $f/2$, we
could take samples only in the morning and implement the same policy as we
would have at the “day” level for half the fixed cost. To avoid this, we make
the fixed cost history-dependent. If the fixed cost was not paid in the
previous interval, the fixed cost is $f$. If the fixed cost was paid in the
previous interval, the cost is instead $\epsilon f$. This ensures that
implementing a policy from the original level of discretization at a finer
level incurs exactly the same cost, while keeping policies that spread
samples evenly over the interval at a similar cost. Furthermore, to allow
similar properties to hold when considering multiple possible levels of
discretization, we allow the decision maker to pay the fixed cost even in
periods when samples are not taken. Thus, for example, taking samples in early
morning and early afternoon but none in the late morning has the same cost as
taking the same samples in the morning and in the afternoon. This
interpretation has natural analogs in some of our example scenarios, such as
maintaining the satellite lock for the GPS even if limited numbers of samples
are taken.
In the continuous limit, this cost model becomes a flow cost of $f$, which
is paid at all times when samples are taken, as well as during sample-free
intervals of length at most 1. There is also a fixed cost $f$ when sampling
resumes after an interval of length greater than 1. Let $\phi$ be a measure
with density $f$ wherever the flow cost is paid and an atom of mass $f$ at
times when the fixed cost is paid. Our budget requirement is that
$\int_{0}^{T}s(t)dt+\sum_{i:t_{i}\in[0,T]}a(t_{i})+\int_{0}^{T}d\phi\leq BT$
for all $T\geq 0$.
The optimization problem in the continuous setting is to choose the functions
$s(t)$ and $a(t)$ that minimize the long-run average cost incurred by the
decision-maker,
$\limsup_{T\to\infty}\frac{1}{T}\int_{0}^{T}\min(v(t),c)dt,$
subject to the budget constraint.
Similar to the discrete setting, we define the _null policy_ to be the
sampling policy that takes no samples (i.e., $s(t)=0$ and $a(t)=0$ for all
$t$) and selects the outside option at all times, for an average cost of $c$.
Again, we define the value of a policy to be the difference between its
average cost and the average cost of the null policy:
$\liminf_{T\to\infty}\frac{1}{T}\int_{0}^{T}\max(c-v(t),0)dt.$
We say a policy is $\alpha$-approximate if it achieves an $\alpha$ fraction of
the value of the optimal policy.
## Appendix B Appendix: Proofs from Section 2
###### Lemma 1.
Let $v_{t}$ be the variance of $G_{t}$ and suppose each $F_{t}$ is a zero-mean
Gaussian with variance $\rho$, and that each sample is subject to zero-mean
Gaussian noise with variance $\sigma$. Then, if the decision-maker takes $s$
samples in round $t+1$, the variance of $G_{t+1}$ is
$v_{t+1}=\frac{v_{t}+\rho}{1+\frac{s}{\sigma}(v_{t}+\rho)}.$
###### Proof.
Our model is a special case of the model underlying a Kalman filter. There,
generally, the evolution of the state can depend on a linear transformation of
$x_{t}$, a control input, and some Gaussian noise. In our model the
transformation of $x_{t}$ is the identity, there is no control input, and by
assumption the Gaussian noise is mean 0 variance $\rho$. Similarly, our
sampling model corresponds to the observation model assumed by a Kalman
filter.
Therefore, using the standard update rules for a Kalman filter [30], the
innovation variance at time $t+1$ (i.e., the variance of the posterior after
$x_{t}$ is updated by $\delta\sim F_{t+1}$ but before observing the samples)
is $\tilde{v}_{t+1}=v_{t}+\rho$. (Alternatively this can be observed directly
as we are summing two Gaussians.) This matches the desired quantity for the
case $s=0$, where no samples are taken.
For $s=1$, we can again apply the standard update rules for a Kalman filter to
get a posterior variance of
$\frac{\tilde{v}_{t+1}}{1+\frac{1}{\sigma}\tilde{v}_{t+1}}$, as desired. By
induction, if the decision maker instead takes $s>1$ samples, the posterior
variance will instead be
$\frac{\tilde{v}_{t+1}}{1+\frac{s}{\sigma}\tilde{v}_{t+1}}$, as desired. ∎
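The induction step in this proof amounts to the additivity of precision (inverse variance) in Gaussian updates. A quick numerical sanity check, with arbitrary values:

```python
def update_variance(v, rho, s, sigma):
    """Lemma 1: posterior variance after process noise rho and s samples with noise variance sigma."""
    vt = v + rho
    return vt / (1 + (s / sigma) * vt)

v, rho, sigma = 3.0, 0.5, 2.0
# One batch of three samples ...
batch = update_variance(v, rho, 3, sigma)
# ... matches three sequential single-sample updates (process noise added once),
# because each sample adds 1/sigma to the posterior precision.
seq = update_variance(v, rho, 1, sigma)
seq = update_variance(seq, 0.0, 1, sigma)
seq = update_variance(seq, 0.0, 1, sigma)
assert abs(batch - seq) < 1e-12
```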
## Appendix C Appendix: Proofs from Sections 3 and 4
All of these results hold in both the discrete and continuous models with
essentially the same proofs. Therefore, we provide a unified treatment of them
for both cases.
###### Lemma 2.
Fix a sampling policy $s$, and a time $R>0$, in either the continuous or
discrete setting, and suppose that $s$ takes a strictly positive number of
samples in $(0,R]$. Let $\Psi$ be the mapping defined as follows: supposing
that $v_{0}=x$ and $v$ is the variance function resulting from sampling policy
$s$, set $\Psi(x):=v(R)$. Then $\Psi$ is a contraction map over the non-
negative reals, under the absolute value norm.
###### Proof.
We need to show $|\Psi(x)-\Psi(y)|\leq k\cdot|x-y|$ whenever $x>y$, where
$k<1$ is some constant that depends on $s$. We’ll prove this first for a
discrete policy. Write $v^{x}_{t}$ and $v^{y}_{t}$ for the variance at time
$t$ with starting condition $x$ and $y$, respectively. Take $v^{x}_{0}=x$ and
$v^{y}_{0}=y$ for notational convenience. We then have that
$v^{x}_{1}=\frac{x+\rho}{1+(s_{1}/\sigma)(x+\rho)}$ and
$v^{y}_{1}=\frac{y+\rho}{1+(s_{1}/\sigma)(y+\rho)}$ by Lemma 1. We then have
that $v^{x}_{1}\geq v^{y}_{1}$, and moreover
$v^{x}_{1}-v^{y}_{1}=\frac{x-y}{(1+(s_{1}/\sigma)(x+\rho))(1+(s_{1}/\sigma)(y+\rho))}\leq
x-y$
and the inequality is strict if $s_{1}>0$. We can therefore apply induction on
the rounds in $(0,R]$, plus the assumption that at least one of these rounds
has a positive number of samples, to conclude that $\Psi(x)-\Psi(y)<x-y$. To
bound the value of $k$, suppose that $t\geq 1$ is the first round in which a
positive number of samples is taken. Then we can find some sufficiently small
$\epsilon>0$ so that $v_{t-1}^{x}>\epsilon$ and $s_{t}/\sigma>\epsilon$. Then
we will have
$\displaystyle\Psi(x)-\Psi(y)$ $\displaystyle\leq v_{t}^{x}-v_{t}^{y}$
$\displaystyle=\frac{v_{t-1}^{x}-v_{t-1}^{y}}{(1+(s_{t}/\sigma)(v_{t-1}^{x}+\rho))(1+(s_{t}/\sigma)(v_{t-1}^{y}+\rho))}$
$\displaystyle\leq\frac{x-y}{1+(s_{t}/\sigma)(v_{t-1}^{x}+\rho)}$
$\displaystyle<\frac{x-y}{1+\epsilon^{2}}$
and hence we have $k\leq\frac{1}{1+\epsilon^{2}}<1$ as required.
To extend to continuous policies, take $v^{x}(t)$ and $v^{y}(t)$ for the
variances under the two respective start conditions, and note first that there
must exist some sufficiently small $\epsilon$ such that $v^{x}(t)>\epsilon$
for all $t\in[0,R]$, and for which the total mass of samples taken over range
$(0,R]$ is at least $\epsilon$. Take any discretization of the range $[0,R]$,
say into $r>1$ rounds, and consider the corresponding discretization of the
continuous policy, so that the sum of the number of samples taken over all
discrete rounds in the interval $[0,R]$ is at least $\epsilon$. As above, take
$v_{i}^{x}$ and $v_{i}^{y}$ to be the variances resulting from these
discretized policies after $i$ discrete rounds. Say $s_{i}$ samples are taken
at round $i$. Then, considering each round in sequence and applying the same
reasoning as in the discrete case above, we have that
$\displaystyle v^{x}_{r}-v^{y}_{r}$
$\displaystyle\leq(x-y)\cdot\prod_{i=1}^{r}\frac{1}{1+s_{i}v^{x}_{i-1}}$
$\displaystyle\leq(x-y)\cdot\prod_{i=1}^{r}\frac{1}{1+s_{i}\epsilon}$
$\displaystyle\leq(x-y)\cdot\frac{1}{1+\epsilon\sum_{i=1}^{r}s_{i}}$
$\displaystyle\leq\frac{x-y}{1+\epsilon^{2}}.$
Thus, for each such discretization, we have a contraction by a factor of at
least $\frac{1}{1+\epsilon^{2}}$. Taking a limit of such discretizations, we
conclude that this holds in the continuous limit as well, so that
$\Psi(x)-\Psi(y)<\frac{1}{1+\epsilon^{2}}(x-y)$ as required. ∎
It is well known that a contraction mapping has a unique fixed point, and
repeated application will converge to that fixed point. Since we can view the
impact of the periodic sampling policy as repeated application of mapping
$\Psi$ to the initial variance in order to obtain $v(0),v(R),v(2R),\dotsc$, we
conclude that the variance will converge uniformly to a periodic function for
which $v(t)=v(t+R)$. Thus, for the purpose of evaluating long-run average cost
of a periodic policy, it will be convenient (and equivalent) to replace the
initial condition on $v$, $v(0)=v_{0}$, with a periodic boundary condition
$v(0)=v(R)$, and then choose $s$ to minimize the average cost over a single
period:
$\frac{1}{R}\int_{0}^{R}\min\\{v(t),c\\}dt,$
subject to the budget constraint that, at any time $T\in(0,R]$, we have
$\int_{0}^{T}s(t)dt+\sum_{i:t_{i}\in(0,T]}a(t_{i})\leq BT$
(note that we omit $t_{i}=0$ from the summation over atoms, to handle the edge case where there is an atom at time $T$, and hence at time $0$ as well, which should not be counted twice).
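This fixed-point view can be demonstrated numerically: iterating the period map $\Psi$ of a discrete periodic policy (here a hypothetical off-then-on policy with $\rho=\sigma=1$, chosen purely for illustration) drives any two initial variances to the same periodic orbit.

```python
def period_map(v, samples, rho=1.0, sigma=1.0):
    """One period of a discrete policy, applying Lemma 1's update each round:
    Psi(v) = variance at the end of the period, starting from variance v."""
    for s in samples:
        vt = v + rho
        v = vt / (1 + (s / sigma) * vt)
    return v

policy = [0, 0, 0, 2, 2]   # an "off" stretch followed by an "on" stretch
x, y = 10.0, 0.1           # two very different initial variances
for _ in range(100):
    x, y = period_map(x, policy), period_map(y, policy)
assert abs(x - y) < 1e-9   # the contraction forces both to the same fixed point
```

Since $x$ converges to the unique fixed point of $\Psi$, the trajectory it generates satisfies the periodic boundary condition $v(0)=v(R)$ used in the text.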
For the remainder of the proofs in this section, we will allow fractional
sampling rates even in the discrete setting. Recall that one can define an
$\alpha\in(0,1)$ fraction of a sample to be one in which the variance is
increased by a factor of $1/\alpha$.
###### Lemma 3.
With fractional samples, for all $T>T^{\prime}$, we have
$\text{Val}(s^{T})>\text{Val}(s^{T^{\prime}})$.
###### Proof.
If $s$ is an on-off policy with period $T^{\prime}$, then the policy that uses
the same “on” sampling rate with a period of $T$ has weakly better average
value. This is because the variance of this policy is decreasing over the “on”
period, so is lowest at the end of the period. Thus the optimal on-off policy
with period $T$ has better average value than $s$. ∎
We will write $V^{*}=\sup_{T}\text{Val}(s^{T})$. From the lemma
above, we have that $V^{*}=\lim_{T\to\infty}\text{Val}(s^{T})$ as well. The
following lemma will be useful for analyzing non-periodic policies.
###### Lemma 5.
For any policy $s$, there is a sequence of policies $\\{s^{T}\\}$ for
$T=1,2,\dotsc$, where $s^{T}$ is periodic with period $T$, such that
$\lim_{T\to\infty}Val(s^{T})\geq Val(s)$.
###### Proof.
(sketch) Fix $T$, let $W^{T}$ be the average value of policy $s$ over rounds
$[0,T]$, and let $s^{T}$ be a periodic policy that mimics policy $s$ over
rounds $[0,T]$. Note that $\lim_{T\to\infty}W^{T}=Val(s)$. The difference
between $W^{T}$ and $Val(s^{T})$ is driven by the initial condition: by Lemma
2, $Val(s^{T})$ is simply the average period value under the boundary
condition $v_{0}=v_{T}$, whereas $s$ may have some alternative initial
condition (say $v_{0}=v$). If the initial condition for $s$ lies below that of
$s^{T}$, we will modify policy $s^{T}$ into a new periodic policy, as follows.
First, we’ll note the (constant) number of extra samples needed on round $1$
to reach variance $v$ from an initially unbounded variance. Our modified
policy will first wait the (constant) number of rounds (say $r$) needed to
acquire this much budget, without sampling. Then, starting at round $r$, it
will bring the variance to $v$ (using this accumulated budget) and then
simulate policy $s$ over rounds $[0,T-r]$. Relative to $W^{T}$, this policy
obtains all of the value except for that accumulated by policy $s$ over rounds
$[T-r,T]$, which is at most a constant. This policy’s value therefore matches
$W^{T}$ in the limit as $T$ grows large. ∎
The following lemma shows that if there are intervals during which one takes
the outside option, then it is better to have them occur at the beginning of
the range $[0,R]$. The intuition is that it is cheaper to reduce the variance
to $c$ from a large value once than to reduce from a small value to $c$ many
times.
###### Lemma 6.
Consider a valid policy, given by $s(\cdot)$ and $a(\cdot)$, and any time
$R>0$. Then there is another valid policy $s^{*}(\cdot)$, $a^{*}(\cdot)$ and
time $r\in[0,R]$ such that $a^{*}(t)=s^{*}(t)=0$ for all $t<r$, $v^{*}(t)\leq
c$ for all $t\in[r,R]$, and the average value of the new policy up to time $R$
is at least the average value of the original policy up to time $R$. This is
true in both the discrete and continuous models (with fractional samples).
###### Proof.
We will write the following proof in the continuous model, but we note that
the same proof applies to the discrete model with just minor adjustments to
the notation. Let $s(\cdot)$ and $a(\cdot)$ be a sampling policy, and suppose
that it does not satisfy the conditions of $s^{*}$ and $a^{*}$ in the lemma
statement. Write $v(\cdot)$ for the resulting variance, and suppose that
$t_{1}$ is the infimum of all times for which $v(t)<c$. Note that we can
assume that $s(t)=0$ for any $t$ such that $v(t)>c$, without loss of generality.
Suppose time interval $[t_{2},t_{3})$ is the earliest maximal interval
following $t_{1}$ such that $v(t)\geq c$ for all $t\in[t_{2},t_{3})$. In other
words, $[t_{2},t_{3})$ is an interval during which the decision-maker would
choose the outside option, and this interval occurs after some point at which
the decision-maker has not chosen the outside option. Such an interval must
exist, since we assumed that the given policy does not satisfy the conditions
of the lemma.
Our strategy will be to transform this policy $a(\cdot)$ into a different
policy that is closer to satisfying the conditions of the lemma. Roughly
speaking, we will do this by “shifting” the interval $[t_{2},t_{3})$ so that
it lies before $t_{1}$: we will push the sampling policy over the range
$[t_{1},t_{2})$ forward $(t_{3}-t_{2})$ units of time. This will result in a
policy with one fewer interval of time in which the variance lies above $c$.
And, as we will show, this policy has the same total value as the original and
is valid. We can apply this construction to each such interval to construct
the policy $s^{*}$ and $a^{*}$ required by the lemma.
Let us more formally describe what we mean by shifting the interval
$[t_{2},t_{3})$. We have that $v(t_{2})=c$ and $a(t_{3})>0$ from the
definitions of $t_{2}$ and $t_{3}$. Write $\delta=t_{3}-t_{2}$. Then we have
$\tilde{v}(t_{3})=c+\delta$. Let $\gamma>0$ be such that
$\frac{c+\delta}{1+\gamma(c+\delta)}=c$. That is, $\gamma$ is the size of atom
such that, if $a(t_{3})=\gamma$, then we would have $v(t_{3})=c$. Since in
fact we have $v(t_{3})\leq c$ by maximality of the interval, and since
$s(t)=0$ for all $t\in(t_{2},t_{3})$ by assumption, it must be that
$a(t_{3})\geq\gamma$. On the other hand, let $\gamma^{\prime}>0$ be such that
$\frac{\tilde{v}(t_{1})+\delta}{1+\gamma^{\prime}(\tilde{v}(t_{1})+\delta)}=\tilde{v}(t_{1})$.
Since $\tilde{v}(t_{1})\geq c$, we must have $\gamma^{\prime}\leq\gamma$.
We are now ready to describe the shifted policy, given by $s^{*}$ and $a^{*}$.
We set $s^{*}(t)=a^{*}(t)=0$ for all $t<t_{1}+\delta$,
$a^{*}(t_{1}+\delta)=a(t_{1})+\gamma^{\prime}$, $a^{*}(t)=a(t-\delta)$ and
$s^{*}(t)=s(t-\delta)$ for all $t\in(t_{1}+\delta,t_{3})$,
$a^{*}(t_{3})=a(t_{3})-\gamma$, and $a^{*}(t)=a(t)$ and $s^{*}(t)=s(t)$ for
all $t>t_{3}$. Roughly speaking, the new policy “moves” the interval
$[t_{2},t_{3})$, where the variance lies above $c$, to occur before the
sampling behavior that began at time $t_{1}$. It also reduces the atom at
$t_{3}$ and increases the atom at $t_{1}$ (if any); the amounts are chosen so
that $v^{*}(t_{3})=v(t_{3})$, as we shall see.
We claim that $v^{*}(t)\geq c$ for all $t<t_{1}+\delta$,
$v^{*}(t)=v(t-\delta)$ for $t\in[t_{1}+\delta,t_{3})$, and $v^{*}(t)=v(t)$ for
$t\geq t_{3}$. This will imply that the average value of policy $a^{*}$ is
equal to that of policy $a$, since they differ only in that a portion of the
variance curve lying below $c$ has been shifted by $\delta$. That
$v^{*}(t)\geq c$ for all $t<t_{1}+\delta$ follows from the definition of
$a^{*}$. That $v^{*}(t_{1}+\delta)=v(t_{1})$ follows because
$\tilde{v}^{*}(t_{1}+\delta)=\tilde{v}(t_{1})+\delta$, and
$a^{*}(t_{1}+\delta)$ consists of an atom $\gamma^{\prime}$ that shifts the
variance from $\tilde{v}(t_{1})+\delta$ to $\tilde{v}(t_{1})$, plus another
atom $a(t_{1})$ that shifts the variance from $\tilde{v}(t_{1})$ to
$v(t_{1})$. Given that $v^{*}(t_{1}+\delta)=v(t_{1})$, we also have
$v^{*}(t+\delta)=v(t)$ for all $t\in[t_{1},t_{2})$, as the policy $a^{*}$ is
simply $a$ shifted by $\delta$ within this range. Finally, we have
$\tilde{v}^{*}(t_{3})=c$, and $a^{*}(t_{3})=a(t_{3})-\gamma$, which is
precisely the size of atom needed to shift the variance from $c$ to
$v(t_{3})$. So we have that $v^{*}(t)=v(t)$ for all $t\geq t_{3}$, as $a^{*}$
and $a$ coincide for all such $t$.
We conclude that $s^{*}$ and $a^{*}$ have the same average value as $s$ and
$a$. Also, $s^{*}$ and $a^{*}$ use less total budget than $s$ and $a$, and
shift some usage of budget to later points in time, so the new policy is
valid. Finally, the new policy has at least one fewer maximal interval in which
the variance lies strictly above $c$. By repeating this construction
inductively, we obtain the policy required by the lemma. ∎
We are now ready to show that no policy that satisfies the budget constraint
can achieve value greater than $V^{*}$. This establishes our first main claim,
that on-off policies are optimal when $z=0$.
###### Theorem 1.
With fractional samples, the value of any valid policy $s$ is at most $V^{*}$.
###### Proof.
We will prove this claim under the discrete model. Taking the limit over ever-
finer discretizations then establishes the result for the continuous model as
well.
Choose some $T>0$, and fix any policy $s$ that is periodic with period $T$. We
will show that the average value of $s$ is at most $\text{Val}(s^{T})+o(1)$,
where the asymptotic notation is with respect to $T\to\infty$. Taking the
limit as $T\to\infty$ and applying Lemma 5 above will then complete the
result.
Our approach to showing that the average value is at most
$\text{Val}(s^{T})+o(1)$ will be to convert $s$ into an on-off policy
$s^{\prime}$ of period $T$, without decreasing its average value. Recall from
Lemma 2 (and the discussion following its proof) that the long-run average
value of $s^{\prime}$ is simply the average period value under the periodic
boundary condition $v^{\prime}_{0}=v^{\prime}_{T}$. Moreover, when
constructing $s^{\prime}$, we can without loss of generality relax the
validity condition to be that at most $BT$ samples are taken at any point over
the interval $[0,T]$. This is because we can strengthen any policy under this
weaker condition to satisfy the original budget constraint by delaying the
start of the policy for $T$ rounds without taking any samples, and only then
starting policy $s^{\prime}$. This will have the same long-run average value
as our relaxed policy, again by Lemma 2.
We now describe a sequence of operations to convert $s$ into an on-off policy.
First, by Lemma 6, we can assume that $s$ spends no samples in the range
$[0,T^{\prime})$ for some $T^{\prime}<T$, and then has $v_{t}\leq c$ for all
$t\in[T^{\prime},T]$. We will show that, given any policy of this form, one
can convert it into an on-off policy without degrading the total value over
the range $[0,T]$ by more than a constant.
We will apply a potential argument. Given a policy with variances given by
$v_{t}$, we will write $\bar{v}$ for the average variance in the range
$[T^{\prime},T]$. That is,
$\bar{v}\colon=\frac{1}{T-T^{\prime}+1}\sum_{T^{\prime}\leq t\leq T}v_{t}.$
Let $v^{+}=\max_{T^{\prime}\leq t\leq T}v_{t}$ and $v^{-}=\min_{T^{\prime}\leq
t\leq T}v_{t}$. That is, $v^{+}$ is the maximum variance and $v^{-}$ is the
minimum variance achieved during the interval where the variance lies below
$c$. Note that we must have $v^{+}\geq\bar{v}$ and $v^{-}\leq\bar{v}$. Let
$\Psi$ be the total number of timesteps $t$ between $T^{\prime}$ and $T$ in
which the variance is equal to either $v^{-}$ or $v^{+}$, plus $1$ if
$v^{+}=v^{-}=\bar{v}$. Then $\Psi$ is an integer lying between $2$ and
$T-T^{\prime}+2$. Also, $\Psi=T-T^{\prime}+2$ only if $v^{+}=v^{-}$, which
implies that all variances are precisely equal to $\bar{v}$. We will show how
to modify a policy $s$ (in which $v^{+}>v^{-}$) into a new policy $s^{\prime}$
so that $\Psi$ strictly increases, without changing the average policy value.
Write $A$ for the set of timesteps with variance equal to $v^{+}$, and $B$ for
the set of timesteps with variance equal to $v^{-}$. Say $|A|=a$ and $|B|=b$.
Note that $A\cap B=\emptyset$, since we are assuming $v^{+}>v^{-}$. We will
update the sampling policy so that, roughly speaking, the variances of the
timesteps in $A$ are all decreased, and the variances of the timesteps in $B$
are all increased, until either a new timestep becomes a maximal or
minimal point, or until all timesteps have variance $\bar{v}$. More formally,
our update is parameterized by some $\epsilon>0$, and satisfies the following
conditions:
* at all timesteps $t\in A$, the variance is reduced by $\epsilon/a$,
* at all timesteps $t\in B$, the variance is increased by $\epsilon/b$,
* at timesteps not in $A\cup B$, the variance is unchanged,
* $\epsilon$ is maximal so that all elements of $A$ still have the maximum variance, and all elements of $B$ still have the minimum variance.
Note that by the final condition, after making this change, either there will
be one more maximal timestep or one more minimal timestep, or else the minimum
equals the maximum. In either case, $\Psi$ will strictly increase. Moreover,
this update does not change the average variance of the policy.
It remains to show that we can implement this update without increasing the
total spend of the policy. To see this, consider updating just a single
timestep $t\in A$. The change involves adding samples at time $t$ to decrease
the variance by $\epsilon$, then removing samples from time $t+1$ to offset
the resulting decrease at that point. To decrease variance from $v_{t}$ to
$v_{t}-\epsilon/a$ requires an extra
$\frac{\epsilon/a}{v_{t}(v_{t}-\epsilon/a)}$ samples. The number of samples
that can be saved in the subsequent round is the amount required to move the
variance from $v_{t}+1$ to $v_{t}+1-\epsilon/a$, which is
$\frac{\epsilon/a}{(v_{t}+1)(v_{t}+1-\epsilon/a)}$. The net increase in
samples is therefore
$\left(\frac{\epsilon/a}{v_{t}(v_{t}-\epsilon/a)}-\frac{\epsilon/a}{(v_{t}+1)(v_{t}+1-\epsilon/a)}\right)$
Applying this operation to all timesteps in $A$ (of which there are $a$), and
recalling that they all have variance $v^{+}$, we have that the total cost in
samples is
$\left(\frac{\epsilon}{v^{+}(v^{+}-\epsilon/a)}-\frac{\epsilon}{(v^{+}+1)(v^{+}+1-\epsilon/a)}\right)=\epsilon\cdot\frac{2v^{+}+1-\epsilon/a}{v^{+}(v^{+}-\epsilon/a)(v^{+}+1)(v^{+}+1-\epsilon/a)}.$
A similar calculation yields that the total number of samples saved by
increasing the variance by $\epsilon/b$ for all timesteps in $B$ is
$\left(\frac{\epsilon}{v^{-}(v^{-}+\epsilon/b)}-\frac{\epsilon}{(v^{-}+1)(v^{-}+1+\epsilon/b)}\right)=\epsilon\cdot\frac{2v^{-}+1+\epsilon/b}{v^{-}(v^{-}+\epsilon/b)(v^{-}+1)(v^{-}+1+\epsilon/b)}.$
Recalling that $v^{+}-\epsilon/a\geq v^{-}+\epsilon/b$ and $v^{+}>v^{-}$, we
have that
$\displaystyle\epsilon\cdot\frac{2v^{-}+1+\epsilon/b}{v^{-}(v^{-}+\epsilon/b)(v^{-}+1)(v^{-}+1+\epsilon/b)}$
$\displaystyle=\epsilon\cdot\frac{(v^{-}+1)+(v^{-}+\epsilon/b)}{v^{-}(v^{-}+\epsilon/b)(v^{-}+1)(v^{-}+1+\epsilon/b)}$
$\displaystyle=\epsilon\cdot\frac{(v^{+}+1)\tfrac{v^{+}-\epsilon/a}{v^{-}+\epsilon/b}+(v^{+}-\epsilon/a)\tfrac{v^{+}+1}{v^{-}+1}}{v^{-}(v^{+}-\epsilon/a)(v^{+}+1)(v^{-}+1+\epsilon/b)}$
$\displaystyle>\epsilon\cdot\frac{(v^{+}+1)+(v^{+}-\epsilon/a)}{v^{-}(v^{+}-\epsilon/a)(v^{+}+1)(v^{-}+1+\epsilon/b)}$
$\displaystyle>\epsilon\cdot\frac{(v^{+}+1)+(v^{+}-\epsilon/a)}{v^{+}(v^{+}-\epsilon/a)(v^{+}+1)(v^{+}+1-\epsilon/a)}$
and hence the total number of samples saved is greater than the total number
of samples spent in making this change. Thus, the new policy also satisfies
the average budget constraint.
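The cost and savings expressions above can be checked numerically. The sketch below is illustrative only (not part of the proof): it samples random parameters satisfying $v^{+}-\epsilon/a\geq v^{-}+\epsilon/b$ and verifies that the samples saved at the minima exceed the samples spent at the maxima.

```python
import random

def spent(v_plus, eps, a):
    # total net samples spent lowering each of the a maxima by eps/a
    return (eps / (v_plus * (v_plus - eps / a))
            - eps / ((v_plus + 1) * (v_plus + 1 - eps / a)))

def saved(v_minus, eps, b):
    # total net samples saved raising each of the b minima by eps/b
    return (eps / (v_minus * (v_minus + eps / b))
            - eps / ((v_minus + 1) * (v_minus + 1 + eps / b)))

random.seed(0)
for _ in range(10_000):
    v_minus = random.uniform(0.1, 5.0)
    v_plus = v_minus + random.uniform(0.01, 5.0)
    a, b = random.randint(1, 10), random.randint(1, 10)
    # largest eps keeping v_plus - eps/a >= v_minus + eps/b
    eps = random.uniform(0.0, (v_plus - v_minus) / (1 / a + 1 / b))
    assert saved(v_minus, eps, b) >= spent(v_plus, eps, a) - 1e-12
```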
Repeating this procedure, we conclude that we must eventually reach a state in
which the resulting policy has constant variance equal to $\bar{v}$ in the
range $[T^{\prime},T]$. This policy has $s_{t}=\tfrac{1}{\bar{v}(\bar{v}+1)}$ for all
$t\in(T^{\prime},T]$, and possibly a larger number of samples at time
$T^{\prime}$. The on-off policy that sets
$s^{\prime}_{t}=\tfrac{1}{\bar{v}(\bar{v}+1)}$ for all $t\in[T^{\prime},T]$ is therefore
also valid. Moreover, from our previous analysis, this on-off policy has
average value within $o(1)$ of policy $s$. We conclude that the value of $s$
is at most $\text{Val}(s^{T})+o(1)$, as required. ∎
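As a side check on the constant-variance policy: under the update rule $v\mapsto v/(1+sv)$ used throughout, with unit upward drift per round, the per-round number of samples that holds the variance fixed at $\bar{v}$ solves $\bar{v}=(\bar{v}+1)/(1+s(\bar{v}+1))$. A minimal sketch verifying the fixed point:

```python
def hold_steady(v_bar):
    # samples per round returning the variance to v_bar after unit drift:
    # solving v_bar = (v_bar + 1) / (1 + s * (v_bar + 1)) gives
    # s = 1 / (v_bar * (v_bar + 1))
    return 1.0 / (v_bar * (v_bar + 1.0))

for v in (0.5, 1.0, 2.0, 7.5):
    s = hold_steady(v)
    v_next = (v + 1.0) / (1.0 + s * (v + 1.0))  # drift up, then sample
    assert abs(v_next - v) < 1e-12
```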
## Appendix D: Proofs from Section 5 in the Continuous Model
It is technically more convenient to present these results for the continuous
model first, as that allows us to avoid rounding issues. We therefore first
prove these results for the continuous model, and then in Appendix E give
discrete-model proofs for those results that do not have a unified proof here.
We first prove a structural result about transforming policies.
###### Lemma 4.
Fix any valid sampling policy (not necessarily lazy) with resulting variance
function $v(\cdot)$, and any sequence of timestamps
$t_{1}<t_{2}<\dotsc<t_{\ell}<\dotsc$. Then there is a valid policy that spends
samples only in atoms, with
$\\{t~{}|~{}a(t)>0\\}\subseteq\\{t_{1},\dotsc,t_{\ell},\dotsc\\}$, resulting
in a variance function $\breve{v}(\cdot)$ with $\breve{v}(t_{i})\leq v(t_{i})$
for all $i$.
###### Proof.
We will prove this result for the continuous model. The proof for the discrete
model follows similarly, and is given in Appendix E.
For the continuous model, we’ll first prove the result for the case of a
single timestep $t_{1}$. The result then follows by repeated application to
each subsequent $t_{i}$ inductively. Let $s(t)$ and $a_{0}(t)$ be the
continuous sampling rate and atoms of the original policy, respectively, with
resulting variance $v(t)$. Assume first that $s(t)=0$ for all $t$ and that the
atoms in the interval $[0,t_{1}]$ occur at times
$0<\tau_{1}<\tau_{2}<\dotsc<\tau_{k}=t_{1}$, with corresponding number of
samples $a_{0}(\tau_{1}),\dotsc,a_{0}(\tau_{k})$. Note that the assumption
$\tau_{k}=t_{1}$ is without loss, as we could set $a_{0}(\tau_{k})=0$. If
$k=1$ then we are done, so assume $k\geq 2$. We will show that the alternative
policy which lumps together the first two atoms, given by $a_{1}$, where
$a_{1}(\tau_{1})=0$ and $a_{1}(\tau_{2})=a_{0}(\tau_{1})+a_{0}(\tau_{2})$, and
$a_{1}(\tau_{i})=a_{0}(\tau_{i})$ for all $i>2$, results in a variance
function $v_{1}$ such that $v_{1}(\tau_{i})\leq v(\tau_{i})$ for all $i\geq
2$. This will complete the claim, by repeated application to the first non-
zero atom in the sequence.
Recall that $\tilde{v}(\tau_{1})=v(0)+\tau_{1}$ is the variance just prior
to applying the atom at $\tau_{1}$. Then we have that
$v(\tau_{1})=\frac{\tilde{v}(\tau_{1})}{1+a_{0}(\tau_{1})\tilde{v}(\tau_{1})}$.
Similarly, $\tilde{v}(\tau_{2})=v(\tau_{1})+(\tau_{2}-\tau_{1})$, so
$\tilde{v}(\tau_{2})=\frac{\tilde{v}(\tau_{1})}{1+a_{0}(\tau_{1})\tilde{v}(\tau_{1})}+\tau_{2}-\tau_{1}.$
We then have that
$v(\tau_{2})=\tilde{v}(\tau_{2})/(1+a_{0}(\tau_{2})\tilde{v}(\tau_{2}))$.
Alternatively, with $a_{1}$ we have $\tilde{v}_{1}(\tau_{2})=v(0)+\tau_{2}$.
If we let $\tilde{\tilde{v}}_{1}(\tau_{2})$ denote the variance after having
applied an atom of size $a_{0}(\tau_{1})$ at time $\tau_{2}$ but before
applying an atom of size $a_{0}(\tau_{2})$, we have
$\tilde{\tilde{v}}_{1}(\tau_{2})=\frac{\tilde{v}(\tau_{1})+\tau_{2}-\tau_{1}}{1+a_{0}(\tau_{1})(\tilde{v}(\tau_{1})+\tau_{2}-\tau_{1})}.$
We will then have that
$v_{1}(\tau_{2})=\tilde{\tilde{v}}_{1}(\tau_{2})/(1+a_{0}(\tau_{2})\tilde{\tilde{v}}_{1}(\tau_{2}))$.
We now note that $\tilde{\tilde{v}}_{1}(\tau_{2})\leq\tilde{v}(\tau_{2})$,
since
$\displaystyle\tilde{v}(\tau_{2})$
$\displaystyle=\frac{\tilde{v}(\tau_{1})}{1+a_{0}(\tau_{1})\tilde{v}(\tau_{1})}+\tau_{2}-\tau_{1}$
$\displaystyle\geq\frac{\tilde{v}(\tau_{1})+\tau_{2}-\tau_{1}}{1+a_{0}(\tau_{1})\tilde{v}(\tau_{1})}$
$\displaystyle\geq\frac{\tilde{v}(\tau_{1})+\tau_{2}-\tau_{1}}{1+a_{0}(\tau_{1})(\tilde{v}(\tau_{1})+\tau_{2}-\tau_{1})}$
$\displaystyle=\tilde{\tilde{v}}_{1}(\tau_{2}).$
We can therefore conclude that $v(\tau_{2})\geq v_{1}(\tau_{2})$. This further
implies that $v(\tau_{i})\geq v_{1}(\tau_{i})$ for all $i>2$ as well, since
$a_{1}(\tau_{i})=a(\tau_{i})$ for all $i>2$ and, inductively, the variance is
weakly lower under $v_{1}$ than under $v$ just prior to each atom, and hence
is lower after the application of each atom as well.
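The key inequality $\tilde{\tilde{v}}_{1}(\tau_{2})\leq\tilde{v}(\tau_{2})$ can be sanity-checked numerically. The sketch below is illustrative only, with arbitrary parameters, using the update rule $v\mapsto v/(1+av)$ for an atom of $a$ samples:

```python
import random

def apply_atom(v, n):
    # posterior variance after an atom of n samples at prior variance v
    return v / (1 + n * v)

random.seed(1)
for _ in range(10_000):
    v0 = random.uniform(0.01, 10.0)   # variance just before tau_1
    gap = random.uniform(0.0, 10.0)   # tau_2 - tau_1 (variance drifts up by gap)
    n1 = random.uniform(0.0, 5.0)
    n2 = random.uniform(0.0, 5.0)
    # original: atom n1 at tau_1, drift, then atom n2 at tau_2
    v_orig = apply_atom(apply_atom(v0, n1) + gap, n2)
    # lumped: drift first, then atoms n1 and n2 together at tau_2
    v_lump = apply_atom(apply_atom(v0 + gap, n1), n2)
    assert v_lump <= v_orig + 1e-12
```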
We now turn to the more general case where $s(t)$ is not identically $0$. We
again consider the case of a single timestep $t_{1}$, and the more general
result will follow inductively. We can view $s$ as the limit, as $\epsilon\to
0$, of a sequence of discretized policies that only use atoms, and only at
times that are multiples of $\epsilon$. We will take our sequence to be
$\epsilon=t_{1}/k$ for $k=1,2,3,\dotsc$, so that time $t_{1}$ is present in
each of these discretizations. For each such $\epsilon=t_{1}/k$, take $s_{k}$
to be the discretized version of $s$, so that $s_{k}\to s$ as $k\to\infty$.
Applying our atom-lumping result above to policy $s_{k}$, we have that for each
$s_{k}$, there is a policy whose variance at time $t_{1}$ is no greater than
that of $s_{k}$, and that only applies an atom at $t_{1}$. Taking the limit as $k$
grows, we conclude that there is a policy that only takes an atom at time
$t_{1}$, whose variance at $t_{1}$ is no greater than $v(t_{1})$, the variance
generated by policy $s$. ∎
We can now prove our main approximation result for the continuous setting
without costs. We actually prove two versions, as a stronger bound is possible
for the case of $z=0$.
###### Theorem 2 (Version 1).
The optimal lazy policy is $1/2$-approximate, in the continuous setting and
with $z=0$.
###### Proof.
Write $s^{*}(t),a^{*}(t)$ for an optimal policy, and $v^{*}(t)$ for the
corresponding variance function. That is, $s^{*}$ is a policy that minimizes
$\limsup_{T\to\infty}\frac{1}{T}\int_{t=0}^{T}\min\\{c,v^{*}(t)\\}dt$
subject to budget constraints.
We note that, without loss of generality, we can assume that $s^{*}(t)=0$
whenever $v^{*}(t)>c$. That is, the policy has no continuous sampling when the
variance is above the outside option. This follows from our structural lemma
above, since any policy can be replaced by one that takes no sampling in an
interval where the variance lies above the outside option, and instead uses an
atom at the end of such an interval to bring the variance back down to the
outside option level $c$.
We now define a lazy policy that approximates $s^{*},a^{*}$ up to a fixed
$\epsilon>0$. We do so by defining a sequence of intervals iteratively. We
begin by setting $t_{0}=0$. For each $t_{i}$, we will define $t_{i+1}>t_{i}$
as follows. If $v^{*}(t)>c-\epsilon$ for all $t\in[t_{i},t_{i}+\epsilon]$,
take
$t_{i+1}=\inf\\{t>t_{i}\ \colon\ v^{*}(t)\leq c-\epsilon\\}.$
Otherwise, choose $t_{i+1}=t_{i}+\delta$, where
$\delta=\inf\\{d>\epsilon\ \colon\ c-v^{*}(t)\leq d\ \forall\
t\in[t_{i},t_{i}+d)\\}.$
Note that in this latter case we must always have $\delta\geq\epsilon$, and
hence $t_{i+1}\geq t_{i}+\epsilon$. We also must have $\delta\leq c$, since
certainly $v^{*}(t)\geq 0$ everywhere.
For each $i\geq 0$, let $m_{i}=\arg\inf_{m\in(t_{i},t_{i+1}]}\\{v^{*}(m)\\}$.
That is, $m_{i}$ is the time in the subinterval $(t_{i},t_{i+1}]$ where $v^{*}$
takes its lowest value.
Consider the policy that applies atoms at times $t_{0},t_{1},\dotsc$ and at
times $m_{0},m_{1},\dotsc$, so as to match the variance of $v^{*}$ at each of
those times. By Lemma 4, this policy is valid.
Next consider the policy that applies atoms only at times
$t_{0},t_{1},\dotsc$, and applies those atoms so that $v(t_{i})=v^{*}(m_{i})$
for each $i$. This policy is not necessarily valid, but we claim that this
policy can only ever go budget negative by at most $c\cdot B$, the amount of
budget accrued in $c$ time units. This is because within each interval
$[t_{i},t_{i+1}]$, this policy uses no more budget than the previous policy.
It may cause budget to be spent earlier than before, within the same window.
However, it only differs from the previous policy on subintervals where
$v^{*}(m_{i})<c-\epsilon$, and hence can only shift the spending of budget
earlier by at most $c$ time units as, in these cases, $\delta$ (the difference
between $t_{i}$ and $t_{i+1}$) is at most $c$. Thus, this new policy can go
budget-negative, but never by more than $cB$.
We next claim this policy is lazy. Indeed, for each sub-interval
$[t_{i},t_{i+1}]$, we have that $v(t_{i})\geq c-\delta$ where
$\delta=t_{i+1}-t_{i}$. Thus, our policy has the property that $\lim_{t\to
t_{i+1}}v(t)\geq c$, so each atom occurs at a point where the variance is at
or above $c$. Note that this makes use of the fact that variance drifts upward
at a rate of $1$ per unit time, in the continuous model.
We claim this (budget-infeasible) policy is $1/2$-approximate, up to an
additive $\epsilon$ term on the average cost. Since the policy is lazy, its
value (relative to the outside option) is the area of a sequence of isosceles
right-angled triangles. By construction, the squares that form the completion
of those triangles cover the entirety of the value of $s^{*},a^{*}$, except
possibly for regions where $v^{*}(t)\geq c-\epsilon$. See Figure 2 for a
visualization. So our policy is $1/2$-approximate, possibly excluding regions
where $v^{*}$ has an average contribution of $\epsilon$.
Finally, we note we can transform this policy to a budget-feasible one without
loss in the approximation factor. First, if we shift our policy to start at
time $c$, rather than time $0$, then it is precisely valid. As this decreases
its value by at most a bounded amount, the loss in average value over a time
horizon $T$ vanishes as $T\to\infty$. So we have a valid $1/2$-approximation
subject to an arbitrarily small additional additive loss. Taking $\epsilon\to
0$, and noting that there is a universally optimal lazy policy, gives us that
the optimal lazy policy is exactly a $1/2$-approximation. ∎
###### Theorem 2 (Version 2).
The optimal lazy policy is $1/3$-approximate, in the continuous setting and
with $z>0$.
###### Proof.
We consider the same construction as in Theorem 2 (Version 1). The
only step that could increase costs is the step that shifts atoms to the “left
endpoint” of its corresponding square. This might change the distance between
atoms, possibly increasing costs by requiring us to pay the fixed startup cost
more times.
To fix this, we’ll change the definition of a square so that a new square
cannot begin until the original policy takes a sample. This might “extend”
some squares to the right, forming rectangles. The value generated from the
extended part of the rectangle can be at most half the area of the square,
since by definition the original policy isn’t sampling during this time so its
variance rises at rate 1. This change therefore worsens the approximation
factor to at most $1/3$, since now one “triangle” is covering a square plus one
extra triangle (i.e., three triangles in total).
Having made this change, we know by definition that the original policy
sampled at the left endpoint of each rectangle. We can therefore sample (only)
at the left endpoint of each rectangle, without increasing costs relative to
the original policy. Note that this _can_ increase costs relative to the
policy that takes atoms at the minimum-variance point within each rectangle;
so our cost comparison will only be with respect to the original policy. ∎
We show how to extend these results to the discrete setting in Appendix E. We
note that, unlike in the continuous setting, the discrete setting does not
suffer a loss in approximation factor when adding fixed costs.
We next show how to compute the best lazy policy, a result which was sketched
in Section 5, but not formally stated.
###### Lemma 7.
One can compute an asymptotically optimal valid lazy policy in closed form.
###### Proof.
We will consider some large $R$ and solve for a policy that maximizes average
value up to time $R$, then take a limit as $R\to\infty$. By Lemma 6, we can
assume the optimal policy sets $a(t)=0$ for all $t<r$, and has $v(t)\leq c$
for all $t\geq r$, where $r\leq R$.
For a given atom of size $s$, say taken at time $t\geq r$, recall that the
subsequent atom will be taken at time $t+(c-\tfrac{c}{1+sc})$, the next time
at which the variance is equal to $c$. We will therefore define the cost of
this atom as $s+\min\\{f,f(c-\tfrac{c}{1+sc})\\}$. This is the cost of the $s$
samples, plus the cost of either maintaining flow until the next atom is taken
(if $c-\tfrac{c}{1+sc}<1$) or of paying the start-up cost when the next atom
is taken (otherwise). The sum of costs over all atoms is equal to the total
cost of the policy. We therefore define the value density of the atom to be
$\frac{1}{2}\cdot\frac{1}{s+\min\\{f,f(c-\tfrac{c}{1+sc})\\}}\cdot\left(c-\frac{c}{1+sc}\right)^{2}.$
This expression can be maximized with respect to $s$ by considering separately
the cases $(c-\tfrac{c}{1+sc})>1$ and $(c-\tfrac{c}{1+sc})\leq 1$. The
resulting solution will be the optimal choice of atom size, assuming that it
does exhaust the total budget, i.e., when the total cost of the resulting
policy is at least $BR$.
If the optimal value of $s$ corresponds to a policy that does not exhaust the
total budget, then this means that it is time, rather than budget, that is the
binding constraint. In this case the optimal policy takes $r=0$ and chooses
$s$ so that the budget is exhausted at time $R$. That is, $s$ is chosen so
that $s+\min\\{f,f(c-\tfrac{c}{1+sc})\\}=B(c-\tfrac{c}{1+sc})$, meaning that
the total cost of an atom equals the budget acquired over the time interval
between that atom and the next. Again, one can solve for $s$ by considering
cases $(c-\tfrac{c}{1+sc})>1$ and $(c-\tfrac{c}{1+sc})\leq 1$. This $s$ will
correspond to the optimal sampling policy, which will take atoms at regular
intervals up to time $R$. ∎
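The optimization in the proof can also be carried out numerically. The sketch below uses hypothetical parameter values and a grid search in place of the case analysis, evaluating the value density of an atom of size $s$:

```python
def value_density(s, c, f):
    # value density of an atom of size s, per the proof of Lemma 7
    gap = c - c / (1 + s * c)      # time until the variance returns to c
    cost = s + min(f, f * gap)     # samples plus flow/start-up cost
    return 0.5 * gap * gap / cost

c, f = 4.0, 0.5                    # hypothetical outside option and fixed cost
grid = [k / 1000.0 for k in range(1, 20_001)]
s_best = max(grid, key=lambda s: value_density(s, c, f))
assert 0 < s_best < 20 and value_density(s_best, c, f) > 0
```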
## Appendix E: Proofs from Section 5 in the Discrete Model
In this section we complete proofs of statements that hold in both the
discrete and continuous models, that we have previously proved only in the
continuous model. We begin with Lemma 4.
###### Lemma 4.
Fix any valid (not necessarily lazy) sampling policy with resulting variances
$v_{t}$, and any sequence of timesteps $t_{1}<t_{2}<\dotsc<t_{k}<\dotsc$. Then
there is a valid policy that spends samples only at timesteps that lie in the
set $\\{t_{1},\dotsc,t_{k},\dotsc\\}$, resulting in variances $\breve{v}_{t}$
with $\breve{v}_{t_{i}}\leq v_{t_{i}}$ for all $i$.
###### Proof.
We’ll prove this for the interval $[0,t_{1}]$, and the result then follows by
repeated application to each subsequent $t_{i}$ inductively. Assume that the
original policy spends samples in the interval $(0,t_{1}]$ at times
$0<a_{1}<a_{2}<\dotsc<a_{k}=t_{1}$, with corresponding number of samples
$s_{1},\dotsc,s_{k}$, where $s_{i}\geq 0$ for each $i$. Note that the
assumption $a_{k}=t_{1}$ is without loss, as we could set $s_{k}=0$. If $k=1$
then we are done, so assume $k\geq 2$. We will show that the alternative
policy with samples $s^{\prime}_{1},\dotsc,s^{\prime}_{k}$, where
$s^{\prime}_{1}=0$ and $s^{\prime}_{2}=s_{1}+s_{2}$, and
$s^{\prime}_{i}=s_{i}$ for all $i>2$, results in a variance function
$v^{\prime}$ such that $v^{\prime}(a_{i})\leq v(a_{i})$ for all $i\geq 2$.
This will complete the claim, by repeated application to the first non-zero
atom in the sequence.
Write $v_{1}=v(0)+a_{1}$ for the variance just prior to taking the samples at
$a_{1}$. Then we have that $v(a_{1})=\frac{v_{1}}{1+s_{1}v_{1}}$. Writing
$v_{2}=v(a_{1})+(a_{2}-a_{1})$ for the variance just prior to taking the
samples at $a_{2}$, we have
$v_{2}=\frac{v_{1}}{1+s_{1}v_{1}}+a_{2}-a_{1}.$
We then have that $v(a_{2})=v_{2}/(1+s_{2}v_{2})$.
Alternatively, if we write $v^{\prime}_{2}$ for the variance of the policy
with $s^{\prime}_{1}=0$, after having taken $s_{1}$ samples at time $a_{2}$
but before taking $s_{2}$ more samples,
$v^{\prime}_{2}=\frac{v_{1}+a_{2}-a_{1}}{1+s_{1}(v_{1}+a_{2}-a_{1})}.$
We will then have that
$v^{\prime}(a_{2})=v^{\prime}_{2}/(1+s_{2}v^{\prime}_{2})$. We now note that
$v^{\prime}_{2}\leq v_{2}$, since
$v_{2}=\frac{v_{1}}{1+s_{1}v_{1}}+a_{2}-a_{1}\geq\frac{v_{1}+a_{2}-a_{1}}{1+s_{1}v_{1}}\geq\frac{v_{1}+a_{2}-a_{1}}{1+s_{1}(v_{1}+a_{2}-a_{1})}=v^{\prime}_{2}.$
We can therefore conclude that $v(a_{2})\geq v^{\prime}(a_{2})$. This further
implies that $v(a_{i})\geq v^{\prime}(a_{i})$ for all $i>2$ as well, since
$s^{\prime}_{i}=s_{i}$ for all $i>2$ and, inductively, the variance is weakly
lower under $v^{\prime}$ than under $v$ just prior to each set of samples, and
hence is lower after the application of each set of samples. ∎
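The discrete recurrence in this proof is easy to simulate. A minimal sketch (with an arbitrary illustrative schedule) checks that lumping the first two sets of samples, $s^{\prime}_{1}=0$ and $s^{\prime}_{2}=s_{1}+s_{2}$, never raises the variance at any later sampling time:

```python
def simulate(v0, times, samples, horizon):
    # discrete dynamics: variance drifts up by 1 each step, then is
    # updated to v / (1 + s * v) when s samples are taken
    sched = dict(zip(times, samples))
    v, out = v0, {}
    for t in range(1, horizon + 1):
        v += 1.0
        s = sched.get(t, 0.0)
        v = v / (1.0 + s * v)
        out[t] = v
    return out

times = [2, 5, 9]                                  # a_1 < a_2 < a_3 = t_1
orig = simulate(1.0, times, [0.7, 0.4, 1.1], 12)
lump = simulate(1.0, times, [0.0, 1.1, 1.1], 12)   # s'_1 = 0, s'_2 = s_1 + s_2
for t in times[1:]:
    assert lump[t] <= orig[t] + 1e-12
```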
We next complete the proof of our main approximation result in the discrete
setting.
###### Theorem 2.
In the discrete model, the optimal lazy policy is $1/2$-approximate.
###### Proof.
Write $s^{*}_{t}$ for the optimal sampling policy, and $v^{*}_{t}$ for the
corresponding variances. We note that, without loss of generality, we can
assume that if $s^{*}_{t}>0$ then $v^{*}_{t}<c$. That is, the policy does not
take samples if the resulting variance is still above the outside option. This
follows from our structural lemma above, since any policy can be replaced by
one that takes no samples until the variance would be below $c$, and then
takes all the forgone samples at that point.
We now define a lazy policy that approximates $s^{*}$. We do so by defining a
sequence of intervals iteratively. We begin by setting $t_{0}=0$. For each
$t_{i}$, we will define $t_{i+1}>t_{i}$ as follows. Choose
$t_{i+1}=t_{i}+\delta$, where
$\delta=\inf\\{d\in\mathbb{N}^{+}\colon\ c-v^{*}_{t}\leq d\ \forall\
t\in[t_{i},t_{i}+d)\\}.$
We must have $\delta\leq c$, since certainly $v^{*}_{t}\geq 0$ everywhere.
For each $i\geq 0$, let $m_{i}=\arg\inf_{m\in(t_{i},t_{i+1}]}\\{v^{*}_{m}\\}$.
That is, $m_{i}$ is the time in the subinterval $(t_{i},t_{i+1}]$ where $v^{*}$
takes its lowest value.
Consider the policy that takes samples at times $t_{0},t_{1},\dotsc$ and at
times $m_{0},m_{1},\dotsc$, so as to match the variance of $v^{*}$ at each of
those times. By Lemma 4, this policy uses no more budget than $s^{*}$ at any
given point of time, and it only takes samples on rounds when the original
policy took samples, so is therefore valid even when $z>0$.
Next consider the policy that applies atoms only at times
$t_{0},t_{1},\dotsc$, and applies those atoms so that
$v_{t_{i}}=v^{*}_{m_{i}}$ for each $i$. This policy is not necessarily valid,
but we claim that this policy can only ever go budget negative by at most
$c\cdot B$, the amount of budget accrued in $c$ time units. Proof of claim:
within each interval $(t_{i},t_{i+1}]$, this policy uses no more budget than
the previous one. It may cause budget to be spent earlier than before, within
the same window. However, it can only shift the spending of budget earlier by
at most $c$ time units. Thus, this new policy can go budget-negative, but
never by more than $cB$.
We next claim this policy is lazy. Indeed, for each sub-interval
$[t_{i},t_{i+1}]$, we have that $v_{t_{i}}\geq c-\delta$ where
$\delta=t_{i+1}-t_{i}$. Thus, our policy has the property that the variance
just prior to the samples at $t_{i+1}$ is at least $c$, so each set of samples
occurs at a point where the variance is at or above $c$.
Finally, we claim that our policy is $1/2$-approximate. Since the policy is
lazy, its value (relative to the outside option) is lower bounded by the area
of a sequence of isosceles right-angled triangles. By construction, the squares
that form the completion of those triangles cover the entirety of the value of
$s^{*}$.
To complete the proof, we need to restore validity by shifting the policy to start at
time $c$, rather than time $0$. As this decreases its value by at most a
bounded amount, the loss in average value over a time horizon $T$ vanishes as
$T\to\infty$. ∎
###### Lemma 8.
One can compute a valid policy whose asymptotic value is at least the value of
the optimal lazy policy.
###### Proof.
We will consider some large $R$ and solve for a policy that maximizes average
value up to time $R$, then take a limit as $R\to\infty$. By Lemma 6, we can
assume the optimal policy sets $s_{t}=0$ for all $t<r$, and has $v(t)\leq c$
for all $t\geq r$, where $r\leq R$. Suppose for now that we are allowed to
take fractional samples.
Consider a lazy policy which takes samples until the variance is $v_{r}$ at
time $r$, then every time it is due to take samples takes the number $s$ such
that the variance becomes $v_{r}$ again. By construction, this means $s$ samples
are taken at each time other than $r$ at which they are taken, which happens
every $\delta=\lceil c-v_{r}\rceil$ periods. Therefore, $s$ satisfies the
equation $v_{r}=\tfrac{v_{r}+\delta}{1+s(v_{r}+\delta)}$, or
$s=\tfrac{\delta}{v_{r}(v_{r}+\delta)}$. We therefore define the value density
of this policy as
$\frac{1}{\frac{\delta}{v_{r}(v_{r}+\delta)}+f}\cdot\left(\sum_{i=0}^{\delta-1}(c-v_{r}-i)\right).$
(1)
We can then optimize over $v_{r}$ in the same manner as in Lemma 7 to find the
optimal way for a lazy policy to exhaust its budget. For any given $R$ there
may be a slight suboptimality due to the need to spend some samples to reach
$v_{r}$ the first time, and some budget may go unspent if there is not enough
budget for an integer number of spending periods, but these effects vanish as
$R$ grows large.
This gives an asymptotically valid lazy policy using fractional samples. Note
that the objective in Equation (1) is quasiconcave in $v_{r}$ for each fixed
$\delta$ (the derivative is a quadratic in $v_{r}$ with negative coefficient
on the $v_{r}^{2}$ term and positive coefficients on the other two terms). Therefore the
optimal integer choice for a given $\delta$ can be found by “rounding” $v_{r}$
up or down the smallest amount that gives an integer $s$ and choosing one of
these. While the resulting policy of waiting and taking $s$ samples every
$\delta$ periods may not be lazy, it is asymptotically so for large $R$ by
Lemma 2.
∎
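The objective in Equation (1) can be explored numerically. A minimal sketch (hypothetical parameter values) evaluates the value density of a lazy cycle starting at variance $v_{r}$, taking the per-round value to be $c-v_{r}-i$ as the variance drifts upward, and grid-searches for the best starting variance:

```python
import math

def density(v_r, c, f):
    # value density of a lazy cycle starting at variance v_r (cf. Eq. (1));
    # the variance takes values v_r, v_r + 1, ..., v_r + delta - 1 per cycle
    delta = math.ceil(c - v_r)
    value = sum(c - v_r - i for i in range(delta))
    cost = delta / (v_r * (v_r + delta)) + f
    return value / cost

c, f = 6.0, 0.3                                        # hypothetical parameters
grid = [0.1 + 5.7 * k / 5000.0 for k in range(5001)]   # v_r in (0, c)
v_best = max(grid, key=lambda v: density(v, c, f))
assert 0 < v_best < c and density(v_best, c, f) > 0
```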
## Appendix F: Optimal Policy for the Non-Gaussian Extension
In this section we solve for the form of the optimal policy in the binary
extension discussed in Section 6.
First we recall the model. There is a binary hidden state of the world,
$x_{t}\in\\{0,1\\}$, which flips each round independently with some small
probability $\epsilon>0$. The decision-maker’s action in each round is to
guess the hidden state and the objective is to maximize the fraction of time
that this guess is made correctly. Write $y_{t}\in\\{0,1\\}$ for the guess
made in round $t$. Each sample is a binary signal correlated with the hidden
state, equal to $x_{t}$ with probability $\tfrac{1}{2}+\delta$ where
$\delta>0$. The decision-maker can adaptively request samples in each round,
subject to the budget constraint (of $B>0$ samples per round on average),
before making a guess. Note that sampling is adaptive: the decision-maker can
observe the outcome of one sample in a round before choosing whether to take
the next. While this adaptivity was unimportant in the Gaussian setting (since
the sample outcomes were not payoff-relevant, only the induced variance), it
is significant for non-Gaussian evolution.
After sampling in each round, the decision-maker has a posterior distribution
$G_{t}$ over the state of the world. We’ll write $G_{tk}$ for the posterior
after $k\geq 0$ samples have been taken in round $t$. Note that $G_{tk}$ is
fully described by the probability that $x_{t}=1$, which we will denote by
$p_{tk}$. We claim that there is an optimal policy that sets a threshold
$\theta\in(0,1/2)$, and after having taken $k\geq 0$ samples in round $t$, it
takes another sample if and only if $p_{tk}\in[\theta,1-\theta]$. We call such
policies _threshold policies_.
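The posterior dynamics of a threshold policy can be sketched directly. In the code below (illustrative only; the function names are ours), `bayes_update` applies Bayes' rule for a signal matching the state with probability $\tfrac{1}{2}+\delta$, `drift` accounts for the $\epsilon$ flip probability between rounds, and sampling continues while $p_{tk}\in[\theta,1-\theta]$:

```python
def bayes_update(p, signal, delta):
    # posterior P(x_t = 1) after a signal that equals x_t w.p. 1/2 + delta
    q = 0.5 + delta
    like1 = q if signal == 1 else 1.0 - q
    like0 = 1.0 - q if signal == 1 else q
    return p * like1 / (p * like1 + (1.0 - p) * like0)

def drift(p, eps):
    # between rounds the state flips independently with probability eps
    return p * (1.0 - eps) + (1.0 - p) * eps

def sample_round(p, theta, delta, draw_signal):
    # threshold policy: keep sampling while the posterior is uncertain
    k = 0
    while theta <= p <= 1.0 - theta:
        p = bayes_update(p, draw_signal(), delta)
        k += 1
    return p, k

# example: starting from p = 1/2, one strongly informative signal suffices
p, k = sample_round(0.5, 0.4, 0.25, lambda: 1)
assert k == 1 and abs(p - 0.75) < 1e-12
```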
###### Theorem 3.
For any $\epsilon$ and $\delta$, there is an optimal threshold policy.
###### Proof (sketch)..
Fix some large $T$. We will evaluate performance over the first $T$ rounds,
and show optimality up to a loss that is vanishing as $T$ grows large.
We first note that it suffices to consider policies that are admissible
subject to a budget constraint that binds in expectation. To see this, take a
policy that satisfies the budget constraint in expectation; we will construct
a policy with the same asymptotic value that satisfies the budget constraint
ex post. To do so, we’ll make two changes to the budget-in-expectation policy.
First, delay the policy’s execution by $\Theta(\sqrt{T})$ rounds. This has
negligible impact on asymptotic performance, but begins the policy with a pool
of funds to pull from. Standard concentration bounds will then imply that
the policy will exceed its ex post budget exponentially rarely; whenever it
does, simply pause execution by another $\Theta(\sqrt{T})$ rounds to recover
the pool of funds and then continue. The resulting policy satisfies the budget
ex post, and the impact of such pauses on asymptotic performance will vanish
in the limit as $T$ grows large.
Next, we claim that it suffices to consider policies whose action after having
taken $k\geq 0$ samples in round $t$ depends only on $p_{tk}$. That is, the
actions are otherwise history independent. This is because the optimal policy
starting from a state with posterior $p_{tk}$ depends only on $p_{tk}$, and in
particular does not depend on the number of samples that have been taken on
round $t$ or any previous round (as the constraints bind only in expectation).
Thus, for any optimal policy that sometimes takes an additional sample when
the posterior is $p_{tk}$ and sometimes does not, it would likewise be optimal
to simply ignore the history and choose a decision independently at random,
according to a distribution consistent with how frequently each choice is made
when the posterior is $p_{tk}$ over the long-run execution of the policy.
We next claim that the long-run payoff of the optimal policy, starting in a
round $t$ where the posterior begins at $p_{t0}$, is weakly increasing in
$|p_{t0}-1/2|$. This follows from the fact that $p_{t0}$ being farther from
$1/2$ corresponds to additional certainty about the hidden state. For any
$1/2<p^{\prime}_{t0}<p_{t0}$, any policy with posterior mean $p_{t0}$ could
choose to “forget” information and behave as though its posterior is
$p^{\prime}_{t0}$, and achieve at least as high a payoff (in expectation) as a
policy whose true posterior in round $t$ is $p^{\prime}_{t0}$.
We next claim that the total long-run payoff of guessing after having taken
$k$ samples is increasing and weakly concave in $p_{tk}$, for $p_{tk}\geq
1/2$. (The case $p_{tk}<1/2$ will follow similarly by symmetry.) The fact that
payoffs are increasing follows by backward induction: this is certainly true
in the last round $T$, where the in-round payoff is strictly increasing in $p_{Tk}$.
Then, for any $t<T$, the payoff in the subsequent threshold policy likewise
depends only on the value of the posterior at the beginning of the round (as
argued above), which is increasing in $p_{tk}$ and will be at least $1/2$ if
$p_{tk}\geq 1/2$. Concavity follows from the fact that in-round payoffs
increase linearly in $p_{tk}$, but inter-round reversion to the mean is more
pronounced for larger $p_{tk}$, so an increase in $p_{tk}$ has a sublinear
effect on the payoffs from subsequent rounds.
Since the payoff function is increasing in $p_{tk}$ in each round, the optimal
policy will be monotone: in each round, there will be a threshold above which
a guess is made, and below which samples are taken. This choice of thresholds
will be made to optimize long-run payoff given the average budget constraint.
Since the payoffs are concave in $p_{tk}$, and this value function is
identical across rounds, the optimal choice of thresholds will likewise be
uniform across rounds. By symmetry, if $\theta$ is chosen as the threshold for
$p_{tk}>1/2$, the threshold for $p_{tk}<1/2$ will be $1-\theta$. ∎
August 27, 2024

On the Relation between Kappa Distribution Functions and the Plasma Beta Parameter in the Earth Magnetosphere: THEMIS observations

Adetayo V. Eyelade (ORCID 0000-0002-2301-307X), Departamento de Física, Universidad de Santiago de Chile (USACH), Santiago, Chile
Marina Stepanova (ORCID 0000-0002-1053-3375), Departamento de Física, Universidad de Santiago de Chile (USACH), Santiago, Chile
Cristóbal M. Espinoza (ORCID 0000-0003-2481-2348), Departamento de Física, Universidad de Santiago de Chile (USACH), Santiago, Chile
Pablo S. Moya (ORCID 0000-0002-9161-0888), Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago, Chile
The Earth's magnetosphere represents a natural plasma laboratory that allows us to study the behavior of particle distribution functions in the absence of Coulomb collisions, typically described by Kappa distributions. We have investigated the properties of these functions for ions and electrons in different magnetospheric regions, thereby making it possible to reveal the $\kappa$-parameters for a wide range of plasma beta ($\beta$) values (from $10^{-3}$ to $10^{2}$). This was done using simultaneous ion and electron measurements from the five Time History of Events and Macroscale Interactions during Substorms (THEMIS) spacecraft, spanning the years 2008 to 2018. It was found that for a fixed plasma $\beta$, the $\kappa$-index and core energy ($E_c$) of the distribution can be modeled by the power law $\kappa=AE_c^\gamma$ for both species, and that the relation between $\beta$, $\kappa$, and $E_c$ is much more complex than earlier reported: both $A$ and $\gamma$ exhibit systematic dependencies on $\beta$. Our results indicate that $\beta \sim 0.1-0.3$ is a range where the plasma is more dynamic, since it is influenced by both magnetic field and temperature fluctuations, which suggests that the transition from magnetically dominated to kinetically dominated plasmas occurs at these values of $\beta$. For $\beta > 1$, both $A$ and $\gamma$ take nearly constant values, a feature that is especially notable for the electrons and might be related to their demagnetization. The relation between $\beta$, $\kappa$, and $E_c$ that we present is an important result that can be used by theoretical models in the future.
§ INTRODUCTION
Understanding the dynamics of charged energetic particle interactions in space and astrophysical plasmas has been one of the main challenges in the space physics community over several decades.
Owing to the scarcity of collisions, these plasmas are usually observed in quasi-equilibrium stationary states that differ from thermodynamic equilibrium.
The kinetics of the relaxation process needed to reach a stationary state, and the properties of such a state, in which the plasma and electromagnetic turbulence coexist, are still not well understood for a long list of space and astrophysical objects [Marsch, 2006, Bruno & Carbone, 2013, Yoon, 2017].
These unsolved problems stem from an insufficient understanding of the interaction of charged particles in plasma environments, such as the solar wind and the Earth's magnetosphere, which are essentially collisionless plasma systems in non-equilibrium stationary states.
Over half a century since the introduction of the Kappa distribution function by Montgomery et al., 1965, several studies have shown that the properties of collisionless plasmas can be well modeled by distributions with enhanced suprathermal power-law tails, rather than Maxwellian distributions.
The Kappa distributions play a crucial role in the description of plasma objects such as the solar wind [Collier et al., 1996, Pierrard et al., 1999, Mann et al., 2002, Livadiotis & McComas, 2011, Yoon, 2014, Pierrard & Pieters, 2014], the Earth's magnetosheath [Vasyliunas, 1968, Ogasawara et al., 2013, Ogasawara et al., 2015], and several regions in the magnetosphere like the magnetotail [Grabbe, 2000], the ring current [Pisarenko et al., 2002], and the plasma sheet [Christon et al., 1988, Kletzing et al., 2003, Stepanova & Antonova, 2015, Espinoza et al., 2018, Kirpichev & Antonova, 2020].
The general form of the Kappa distribution function $f$ is:
\begin{eqnarray}
\label{eq:1}
f(E;n_{\alpha},\kappa_{\alpha},E_{c_{\alpha}}) =& n_{\alpha} \left( \frac{m_{\alpha}}{2 \pi E_{c_{\alpha}}} \right)^{\frac{3}{2}}\frac{\Gamma(\kappa_{\alpha})}{\Gamma(\kappa_{\alpha}-\frac{1}{2})\sqrt{\kappa_{\alpha}}} \nonumber \\
& \times \left[ 1+ \frac{E}{\kappa_{\alpha} E_{c_{\alpha}}} \right]^{- \kappa_{\alpha}-1} \quad,
\end{eqnarray}
where $f$ is the phase space density, $E$ is the kinetic energy, the sub-index $\alpha$ corresponds to the particle index, which can be electron (e) or ion (i); $n_{\alpha}$ is the particle density, $m_{\alpha}$ is the particle mass, $\Gamma$ is the Euler gamma function, and $\kappa_{\alpha}$ and $E_{c_{\alpha}}$ are the $\kappa$-parameter and characteristic or core energy, respectively. Kappa distributions given by Equation (<ref>) exhibit a thermal core with characteristic energy $E_{c_{\alpha}}$ and suprathermal tails, such that the total characteristic particle kinetic energy $E_{\rm{total}}$ is given by
\begin{equation}
\label{eq:2}
E_{\rm{total}} = E_{c_{\alpha}}
\,\frac{\kappa_{\alpha}}{\kappa_{\alpha} - 3/2}\,,
\label{eq:energy}
\end{equation}
which enables a straightforward comparison between Kappa and Maxwellian distributions, and to outline the effects of suprathermals as shown by Lazar et al., 2015, Lazar et al., 2016.
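Equation (<ref>) is simple enough to state in code; the helper below (our own naming) is a minimal sketch of the conversion from core to total energy.

```python
def total_energy(Ec, kappa):
    """Total characteristic kinetic energy of a Kappa distribution,
    E_total = Ec * kappa / (kappa - 3/2); defined only for kappa > 3/2."""
    if kappa <= 1.5:
        raise ValueError("kappa must exceed 3/2")
    return Ec * kappa / (kappa - 1.5)
```

For $\kappa = 6$ and $E_c = 3$ keV this gives $E_{\rm total} = 4$ keV, while $E_{\rm total} \to E_c$ as $\kappa \to \infty$, recovering the Maxwellian case.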
The spectral index $\kappa_{\alpha}$ is a measure of the slope of the energy spectrum of the suprathermal particles that form the tail of the velocity distribution function. Hence $\kappa_{\alpha}$ primarily provides a measure of the departure of the stationary states from thermal equilibrium [Burlaga & F.-Viñas, 2005]. For $\kappa_{\alpha} \rightarrow \infty$, Equation (<ref>) reduces to the Maxwellian distribution, which describes the quasi-thermal core of the observed distribution:
\begin{eqnarray}
\label{eq:3}
f(E;n_{\alpha},E_{c_{\alpha}})= n_{\alpha}\left( \frac{m_{\alpha}}{2 \pi E_{c_{\alpha}}}\right)^{3/2}\exp\left(-\frac{E}{E_{c_{\alpha}}} \right)
\end{eqnarray}
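The convergence to the Maxwellian can be checked numerically. The sketch below is our own code (in units where $m = 1$), using the normalization $\Gamma(\kappa)/(\Gamma(\kappa-\tfrac{1}{2})\sqrt{\kappa})$, which is the one consistent with the flux expression used for the fits in Section 2.

```python
import math

def kappa_dist(E, n, kappa, Ec, m=1.0):
    """Isotropic Kappa phase-space density as a function of kinetic energy E."""
    norm = (n * (m / (2.0 * math.pi * Ec)) ** 1.5
            * math.exp(math.lgamma(kappa) - math.lgamma(kappa - 0.5))
            / math.sqrt(kappa))
    return norm * (1.0 + E / (kappa * Ec)) ** (-kappa - 1.0)

def maxwellian(E, n, Ec, m=1.0):
    """Maxwellian limit of the Kappa distribution (kappa -> infinity)."""
    return n * (m / (2.0 * math.pi * Ec)) ** 1.5 * math.exp(-E / Ec)
```

For $\kappa = 10^{4}$ the two functions already agree to better than $0.1\%$ over the thermal range, while for $\kappa \sim 5$ the Kappa tail at $E \gg E_c$ is strongly enhanced relative to the Maxwellian.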
In the solar wind, several mechanisms have been reported in the literature that lead to the generation of Kappa distributions [Livadiotis et al., 2018].
For instance, Lazar et al., 2017 observed the presence of suprathermal electron fluxes, which can be well modeled by Kappa distributions.
They argued that the use of Kappa and bi-Kappa distributions allows a realistic interpretation of non-thermal electrons and their effects on the electron firehose instability, where growth rates are observed to increase while the instability thresholds and electron kappa ($\kappa_e$) are seen to decrease.
The study by Maksimovic et al., 1997 describes the first exospheric model of the solar wind based on Kappa Velocity Distribution Functions for protons and electrons escaping from the corona.
Their model provides a possible hint regarding key features of the solar wind flow.
It indicates that the fastest solar wind flows, which originate from coronal holes, are high-speed streams with an enhanced high-velocity tail simulated by a Kappa function with a small electron kappa $\kappa_{e}$ value, whereas for the hot equatorial regions where the slow solar wind originates, the electron velocity distribution functions are closer to the Maxwellian equilibrium, corresponding to $\kappa_{e} = \infty$.
More recently, a significant relationship has been found between the $\kappa$-index and other plasma parameters. For instance, it was established that $\kappa$-index correlates with the solar wind density and temperature [Livadiotis et al., 2018].
Besides, the $\kappa$-index was also found to be connected with the polytropic index and magnetic field [Livadiotis, 2017].
In addition, Livadiotis et al., 2018 remarked that the $\kappa$-index decreases when the magnetic field's long-range interactions induce correlations among particles, as the system is turned away from thermal equilibrium.
They found a strong correlation between kappa and the plasma $\beta$ parameter, defined as $\beta = 2\mu_{0}p/B^{2}$, where $p$ is the plasma pressure, $B$ is the magnetic field strength, and $\mu_{0}$ is the vacuum magnetic permeability. This correlation takes place when $\beta$ increases, as thermal pressure becomes dominant.
Similarly, as the $\kappa$-index increases, the long-range interactions due to the magnetic field become weaker. This observation further revealed that kappa regulation in the low beta regime is due to the magnetic field, which induces the correlation between particles.
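For concreteness, the plasma beta used throughout can be computed with a trivial helper (our own naming, SI units):

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability [T m / A]

def plasma_beta(pressure_pa, b_tesla):
    """beta = thermal pressure / magnetic pressure = 2 * mu0 * p / B^2."""
    return 2.0 * MU0 * pressure_pa / b_tesla ** 2
```

Here $\beta = 1$ marks equal thermal and magnetic pressure; for a plasma-sheet-like field of $10$ nT, the magnetic pressure $B^2/2\mu_0$ is about $4\times10^{-11}$ Pa.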
Furthermore, many studies of specific regions of the Earth's magnetosphere have utilized Kappa distributions.
For instance, Christon et al., 1989, Christon et al., 1991 obtained ion and electron Kappa distributions in the plasma sheet using the particle instruments onboard the International Sun-Earth Explorer 1 (ISEE 1).
They found that the $\kappa$-index ranges between 4 and 8 for both ions and electrons, with a most probable value between 5 and 6, which shows that the spectral shape is distinctly non-Maxwellian.
Later, Haaland et al., 2010 found that the $\kappa$-index ranges between 3 and 6, using data of the Cluster satellites. Stepanova & Antonova, 2015 utilized Kappa distributions to fit ion and electron flux spectra for five events in which the THEMIS satellites were aligned along the plasma sheet. They obtained snapshots of kappa properties that show a tendency for the $\kappa$-index to increase in the tailward direction. Espinoza et al., 2018 also used the Kappa distribution to model ions and electrons flux spectra along the plasma sheet.
Their results reveal that $\kappa_{i}>\kappa_{e}$,
which suggests that non-thermal properties of the electrons are stronger than ions.
Besides, their results show a persistent dawn-dusk asymmetry in the relative numbers of energetic ions, which increases during substorms. This is consistent with the previous study of Wing et al., 2005.
Recently, Kirpichev & Antonova, 2020 measured the $\kappa$-parameters for ions in different magnetospheric regions and during quiet magnetospheric conditions.
They found that kappa depends on the core energy ($E_{c}$) for a wide energy range and a broad range of the plasma beta ($\beta$) parameter. Their results support earlier findings, which showed that $\kappa$-index increases with $E_{c}$ in the magnetosphere of the Earth [Christon et al., 1989] and the solar wind [Collier, 1999].
However, despite the aforementioned studies, there is no systematic experimental analysis focusing on the coupling between the Kappa distribution parameters (density, core energy, and $\kappa$-index) and the plasma beta $\beta$ parameter in the Earth's magnetosphere. In this study, considering simultaneous electron and ion measurements for the first time, we explore more precisely the relationship between the $\kappa$-index, core energy $E_{c}$, and plasma beta $\beta$, which provides a better understanding of plasma thermalization, the relation between ion and electron properties, and the importance of the level of magnetization of each species within the Earth's magnetosphere, regardless of the details of a given magnetospheric region.
The paper is organized as follows: In section <ref> we describe the data and methodology for obtaining the ion and electron Kappa distribution parameters and plasma beta;
In section <ref> we present the results of the analyses and explore the relationship between kappa, beta, and core energy $E_{c}$.
In section <ref> we discuss the observed results, and in section <ref> we summarize and conclude our findings.
§ INSTRUMENTATION AND DATA ANALYSIS
The spatial coverage of the measured spectra for which both electron and ion Kappa fits were successful.
The color code represents the value of the electron plasma $\beta_{e}$.
Upper panel: the $X-Y_{GSM}$ plane. Lower panel: the $X-Z_{GSM}$ plane.
The present study combines data sets of the multi-satellite mission Time History of Events and Macroscale Interactions during Substorms (THEMIS), using all its satellites (TH-A, TH-B, TH-C, TH-D, and TH-E), and spanning the years 2008 to 2018.
The data was downloaded via the THEMIS ftp website[http://themis.ssl.berkeley.edu/index.shtml].
All measurements were constrained to the following region in Geocentric Solar Magnetospheric (GSM) coordinates: $-35 \leq X \leq 7$ R$_{E}$, $-30 \leq Y \leq 30$ R$_{E}$, $-10 \leq Z \leq 10$ R$_{E}$, and at distances larger than $5\,$R$_{E}$ from the center of the Earth.
This region is depicted in Figure <ref>, where panels (a) and (b) show the spatial coverage in the $X-Y_{GSM}$ and $X-Z_{GSM}$ planes, respectively. Each position is color-coded with the measured electron plasma beta $(\beta_e)$.
All measurements used in this study were averaged over 12-minute-long intervals, which are long enough to obtain stable measurements of the particle fluxes and, at the same time, short enough to ensure that the distributions do not change significantly over this time [Stepanova & Antonova, 2015, Espinoza et al., 2018].
The magnetic field data used in this study were obtained from the Flux Gate Magnetometer (FGM) onboard the THEMIS satellites [Auster et al., 2008].
The plasma particle data were obtained from the Electrostatic Analyzers <cit.>, with an energy range from a few eV up to $30$ keV for electrons and $25$ keV for ions, and the Solid State Telescopes <cit.> with an energy range from $25$ keV to $6$ MeV.
Further, we used level 2 full-mode particle energy fluxes, which are averaged over the satellite rotation period of the particle instruments, thereby significantly improving the statistics for each energy channel.
Magnetospheric plasmas are composed mainly of protons and electrons but we should also expect a low fraction of heavy ions. Unfortunately, THEMIS particle instruments do not distinguish protons from other ion species, hence, we refer to all of them as ions.
In our study, only the central energies of the combined range of the particle instruments were considered for fitting.
This was done to avoid contamination from the spacecraft potential and photoelectrons at low energies ($<40$ eV), and from cosmic rays and low statistics at high energies.
The analyses were limited to the ranges $1.75$ to $210$ keV for ions, and $0.36$ to $203.5$ keV for electrons.
However, the statistics at both ends of the distribution were poor when fluxes were very low. Hence, depending on the particle number density, we sometimes included fewer channels by imposing control conditions on the high-energy SST and low-energy ESA channels.
The observed ion and electron energy spectra were fitted by transforming the three-dimensional Kappa distribution of Equation (<ref>) into particle energy fluxes, denoted as $F$, as shown below:
\begin{equation}
F_{\alpha}(E) = \frac{n_{\alpha}}{\sqrt{2 \pi^3 m_{\alpha}}}\frac{E^{2}}{E^{3/2}_{c_{\alpha}}}\frac{\Gamma(\kappa_{\alpha})}{\Gamma(\kappa_{\alpha}-\frac{1}{2}) \sqrt{\kappa_{\alpha}}}\left[ 1+ \frac{E}{\kappa_{\alpha} E_{c_{\alpha}}} \right]^{-\kappa_{\alpha}-1} \label{eq:3}
\end{equation}
Figure <ref> illustrates examples of ion (upper panels) and electron (bottom panels) energy flux spectra measured by combining both particle instruments (ESA and SST) onboard the THEMIS satellites (solid lines). The circles on the plots are the average of the spectra obtained for the 12 minutes time windows. The open circles represent measurements from the ESA, while the filled circles represent the SST.
The error bars for the averaged flux data represent the spread between the maximum and minimum observed values.
They were found to vary significantly between the ESA and SST data, so they were normalized in the same way as in Espinoza et al., 2018.
The inverse squares of the error bars were used as weights for the fits, which were performed using a non-linear least-squares method combined with the Levenberg-Marquardt algorithm.
We visually inspected hundreds of spectra and decided to work only with the fits that give a reduced chi-squared $\chi^{2}<100$. In addition, only measurements for which both the electron and ion fits were successful in the same time interval were used in the subsequent analysis. In order to ensure that we study only plasmas in the magnetosphere, we restricted the particle density, the $X$ component of the magnetic field, and the ion bulk velocity to the values shown in Table <ref>. In all, we found 47,058 time intervals that satisfied these restrictions and could be well fitted by a Kappa distribution function for both ions and electrons. The final data set, analyzed in the next section, includes the three Kappa parameters from the fits and the plasma beta for ions and electrons.
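The flux model of Equation (<ref>) and the weighted objective used for quality control can be sketched as follows. This is our own code (names and units, with $m = 1$, are ours); the paper minimizes this objective with a Levenberg-Marquardt solver, for which something like `scipy.optimize.curve_fit` could be used.

```python
import math

def kappa_flux(E, n, kappa, Ec, m=1.0):
    """Particle energy flux F(E) of a Kappa distribution."""
    norm = (n / math.sqrt(2.0 * math.pi ** 3 * m)
            * math.exp(math.lgamma(kappa) - math.lgamma(kappa - 0.5))
            / math.sqrt(kappa))
    return norm * E ** 2 / Ec ** 1.5 * (1.0 + E / (kappa * Ec)) ** (-kappa - 1.0)

def reduced_chi2(energies, fluxes, errors, params, n_fit=3):
    """Weighted reduced chi-squared; weights are the inverse squared errors."""
    n, kappa, Ec = params
    resid = sum(((f - kappa_flux(E, n, kappa, Ec)) / err) ** 2
                for E, f, err in zip(energies, fluxes, errors))
    return resid / (len(energies) - n_fit)
```

A fit minimizes `reduced_chi2` over `(n, kappa, Ec)`; spectra whose best fit still gives $\chi^{2} \geq 100$ are discarded.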
Examples of fits of Kappa distributions to the flux spectra of ions (a-c) and electrons (d-f), which are simultaneously measured on 29th March 2008 between 10.4 UT and 10.5 UT by THA (first column), THD (second column), and THE (third column). In each panel, the black solid lines are sub-datasets taken over a total of 12 minutes. Open circles represent averages of the subsets measurements of low energy particles (ESA) and filled circles represent averages of the subsets of measurements of high energy particles (SST).
The red dash-line is the kappa function curve fitted to the open and filled circles.
Criteria applied to select magnetospheric plasmas.
Parameter Symbol Condition
Plasma ion density (cm$^{-3}$) $n_{i}$ $ \geqslant 0.1$
Magnetic field (nT) $B_{x}$ $ \leqslant \; 100$
Ion bulk velocity (km s$^{-1}$) $v_{T}$ $\leqslant 250$
Statistics of ion $\beta$ and kappa parameters.
Parameter Minimum Maximum Mean Median $Q_{1}$ $Q_{3}$ $\sigma$
For all $\beta_{i}$: 47,058 spectra
$\kappa_{i}$ 1.51 42.9 6.67 6.44 4.96 8.08 2.28
$E_{c_{i}}$ 0.25 10.17 3.45 2.97 1.91 4.73 2.01
$E^T_{i}$ 0.25 15.25 4.78 4.16 2.93 6.43 2.63
$n_{i}$ 0.10 3.70 0.65 0.37 0.23 0.94 0.60
$S_{k_{i}}$ -1.60 4.09 0.80 0.73 0.01 1.50 0.96
$\beta_{i}$ 0.001 92.2 0.86 0.29 0.13 0.71 2.69
For the low $\beta_{i}$ regime, $\beta_{i} \leqslant 1$: 38,708 spectra
$\kappa_{i}$ 1.51 42.9 6.64 6.38 4.87 8.08 2.33
$E_{c_{i}}$ 0.25 10.17 3.84 3.31 2.13 5.50 2.14
$E^T_{i}$ 0.25 15.25 5.54 5.16 3.53 7.20 2.74
$n_{i}$ 0.10 2.80 0.61 0.32 0.21 0.94 0.56
$S_{k_{i}}$ -1.59 4.09 0.82 0.80 0 1.65 1.06
$\beta_{i}$ 0.001 1.00 0.29 0.22 0.11 0.42 0.24
For the high $\beta_{i}$ regime, $\beta_{i} \geqslant 1$: 8,350 spectra
$\kappa_{i}$ 1.75 25.2 6.82 6.67 5.40 8.06 2.00
$E_{c_{i}}$ 0.25 7.50 2.71 2.51 1.61 3.46 1.48
$E^T_{i}$ 0.25 8.37 3.26 3.17 2.14 4.06 1.49
$n_{i}$ 0.10 3.70 0.75 0.46 0.31 1.04 0.67
$S_{k_{i}}$ -0.42 3.42 0.78 0.71 0.26 1.16 0.72
$\beta_{i}$ 1.00 92.2 3.47 1.82 1.30 3.23 5.69
Energies are given in keV, and densities in cm$^{-3}$. $E^T_i$ is the ion total energy, $S_{k_{i}}$ is the skewness of the ion $E_c$ distribution. $Q_1$, $Q_3$, and $\sigma$ represent the lower quartile, upper quartile, and standard deviation of each quantity, respectively.
Statistics for electron $\beta$ and kappa parameters.
Parameter Minimum Maximum Mean Median $Q_{1}$ $Q_{3}$ $\sigma$
For all $\beta_{e}$: 47,058 spectra
$\kappa_{e}$ 1.51 34.3 4.60 4.41 3.75 5.20 1.37
$E_{c_{e}}$ 0.25 6.25 1.33 0.83 0.42 1.74 1.25
$E^T_{e}$ 0.25 7.12 1.85 1.41 0.77 2.51 1.44
$n_{e}$ 0.10 3.30 0.57 0.35 0.25 0.61 0.55
$S_{k_{e}}$ -0.98 5.07 1.24 1.07 0.38 1.97 1.09
$\beta_{e}$ 0.002 87.5 0.31 0.11 0.04 0.27 1.29
For the low $\beta_{e}$ regime, $\beta_{e} \leqslant 1$: 44,774 spectra
$\kappa_{e}$ 1.51 26.5 4.59 4.40 3.74 5.19 1.36
$E_{c_{e}}$ 0.25 6.25 1.33 0.86 0.44 1.72 1.24
$E^T_{e}$ 0.25 7.12 1.98 1.48 0.87 2.67 1.48
$n_{e}$ 0.10 3.30 0.56 0.34 0.25 0.64 0.54
$S_{k_{e}}$ -0.98 5.07 1.41 1.35 0.53 2.10 1.13
$\beta_{e}$ 0.002 1.00 0.17 0.09 0.04 0.23 0.19
For the high $\beta_{e}$ regime, $\beta_{e} \geqslant 1$: 2,284 spectra
$\kappa_{e}$ 2.21 34.3 4.82 4.54 3.91 5.41 1.59
$E_{c_{e}}$ 0.25 5.62 1.34 0.81 0.38 1.96 1.28
$E^T_{e}$ 0.25 5.00 1.38 0.95 0.50 1.65 1.20
$n_{e}$ 0.1 3.30 0.59 0.38 0.27 0.60 0.58
$S_{k_{e}}$ -0.47 2.64 0.71 0.59 0 1.18 0.71
$\beta_{e}$ 1.00 87.5 3.08 1.68 1.23 2.78 5.06
Energies are given in keV, and densities in cm$^{-3}$. $E^T_e$ is the electron total energy, $S_{k_{e}}$ is the skewness of the electron $E_c$ distribution. $Q_1$, $Q_3$, and $\sigma$ represent the lower quartile, upper quartile, and standard deviation of each quantity, respectively.
Tables <ref> and <ref> give some statistical properties of the distributions of all the beta and kappa parameters obtained from the fits for ions and electrons, respectively.
The data set was also divided into two groups, depending on beta, and the same quantities are given for each of these groups ($\beta<1$ or $\beta>1$).
Figures <ref>(a), (b), and (c) show $\beta_{e}$ versus $\beta_{i}$, $\kappa_{e}$ versus $\kappa_{i}$, and $E_{c_{e}}$ versus $E_{c_{i}}$, respectively.
As expected, $\beta_{i}$ and $\beta_{e}$ are correlated since the plasma on large scales is quasi-neutral, and the majority of the data we have are from the plasma sheet and from the region that surrounds the Earth, which is filled with plasma sheet-like plasma.
In this case we expect ion temperatures to be higher than electron temperatures, with typical ion-to-electron temperature ratios $ T_{i}/ T_{e}$ between $4$ and $6$ according to Baumjohann, 1993, Borovsky et al., 1997, Espinoza et al., 2018.
Further, we find no correlation between $\kappa_{e}$ and $\kappa_{i}$, which is consistent with the low-statistics results of Christon et al., 1989.
The correlation we find between $E_{ce}$ and $E_{ci}$ is rather low, and lower than what was obtained by Christon et al., 1989.
This could be related to the fact that we did not limit our study to the plasma sheet only. Nevertheless, these results suggest that a more detailed analysis is needed to establish whether there is a relation between $\beta$, $\kappa$, and $E_{c}$, which is done in the next section.
Interrelationship of ion and electron parameters $\beta$ (a), $\kappa$ (b), and $E_{c}$ (c).
§ RELATIONSHIP BETWEEN BETA ($\BETA$), KAPPA ($\KAPPA$) AND CORE ENERGY ($E_{C}$)
To assess the relation between beta, kappa, and core energy, we define a grid in the logarithmic ($\beta, \kappa$) space using a cell size of $\Delta \log_{10} \beta = \Delta \log_{10} \kappa = 0.1$. The grid is defined for the ranges $-3 < \log_{10} \beta < 2$ and $-0.6 < \log_{10} \kappa < 1.6$, and is used to create the color-coded plots frequently presented in this study. Figure <ref> shows the number of measurements in each bin ($N$), where empty bins contain fewer than ten measurements.
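This binning scheme can be implemented directly; the sketch below is our own code (a tiny epsilon guards against floating-point edge effects at cell boundaries).

```python
import math
from collections import defaultdict

DLOG = 0.1                # cell size in log10(beta) and log10(kappa)
LOG_BETA = (-3.0, 2.0)    # grid range in log10(beta)
LOG_KAPPA = (-0.6, 1.6)   # grid range in log10(kappa)
EPS = 1e-9

def cell(beta, kappa):
    """Grid cell of a (beta, kappa) pair, or None if outside the grid."""
    lb, lk = math.log10(beta), math.log10(kappa)
    if not (LOG_BETA[0] <= lb < LOG_BETA[1] and LOG_KAPPA[0] <= lk < LOG_KAPPA[1]):
        return None
    return (int((lb - LOG_BETA[0]) / DLOG + EPS),
            int((lk - LOG_KAPPA[0]) / DLOG + EPS))

def grid_counts(pairs):
    """Number of measurements N in each occupied (beta, kappa) cell."""
    counts = defaultdict(int)
    for beta, kappa in pairs:
        c = cell(beta, kappa)
        if c is not None:
            counts[c] += 1
    return counts
```

Per-cell statistics of $E_c$ (mean, median, skewness) are then computed over the measurements accumulated in each cell.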
2-D histogram of the number of observations grouped into bins of sizes $0.1$ in the $\beta - \kappa$ plane, for (a) ions, and (b) electrons. The colorbar indicates the number of observations $\langle N \rangle$ in logarithmic (base 10) color scale. The black and white solid lines represent Mean and Median kappa values in each beta bin, respectively.
The observed ion and electron kappa indices vary from 1.5 (lowest possible state) to a little above $40$, and the observed beta values are from $10^{-3}$ to nearly $10^2$ (Tables <ref> and <ref>).
The distribution of the observations in this space, however, presents a clear maximum, for both ions and electrons.
While for the ions, most cases are in the range $5 \leq \kappa_{i} \leq 8$ and $0.1 \leq \beta \leq 0.7$,
the electrons exhibit slightly smaller kappa and beta values in general:
$4 \leq \kappa_{e} \leq 5$ and $0.04 \leq \beta_{e} \leq 0.3$ (see Tables <ref> and <ref>).
Interestingly, for both species there are combinations of kappa and beta that are not observed.
For example, plasmas with $\beta_{i}<0.01$ and $\kappa_{i}>10$, or plasmas with $\beta_{e}>30$ and $\kappa_{e}>6$ appear not to occur often in the magnetosphere. As illustrated in Figure <ref>, the mean and median (black and white solid lines, respectively) kappa values increase with beta, for both species, up to $\beta\sim1$.
For $\beta>1$, the mean (and median) kappa is almost constant, with a slight decrease towards larger beta values. This feature is similar to the result obtained by Kirpichev & Antonova, 2020 (see their Figure 3) in the case of ions, but with smaller kappa values.
The core energies obtained from the considered fits cover the range $0.25$ keV to $15.25$ keV in the case of ions, and $0.25$ keV to $9.25$ keV for the electrons.
In order to study the relation between $\kappa$ and $E_c$ for a fixed $\beta$, it is necessary to obtain the most representative $E_{c}$ for each ($\beta,\kappa$) bin. A popular candidate would be the mean value, which works well in the case of normal distributions. In order to determine the degree to which they depart from a normal distribution, we analyze the distribution of core energies in each bin. As observed in Figures <ref>(b) and <ref>(b), the obtained distributions are not always Gaussian. To characterize this, we determined the mean, median, and skewness for the $E_{c}$ distribution in each ($\beta,\kappa$) bin. Figures <ref> and <ref> show the mean $E_{c}$ values and the skewness for ions and electrons.
As seen for both species, the hot plasma is negatively skewed, whereas the cold plasma is positively skewed.
Therefore, to ensure the robustness of our study, the analyses were performed twice: once for the mean $E_{c}$ values and once for the median $E_{c}$ values.
We can confirm that the results are qualitatively very similar for both cases.
Median and also mode $E_{c}$ values as a function of $\kappa$ and $\beta$, for both species, can be seen in the Appendix (Figures <ref> and <ref>).
Previous studies of Kappa distributions for ions have reported that kappa increases with core energy in a linear fashion [Christon et al., 1989, Collier, 1999]. Meanwhile, Kirpichev & Antonova, 2020 has recently established that a power-law function of the form $\kappa = AE^{\gamma}$ can be used to describe the relationship between kappa and core energy in the case of ions.
Here we use the same function as Kirpichev & Antonova, 2020 for both species.
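Fitting $\kappa = A E_{c}^{\gamma}$ within a fixed-beta slice reduces to linear least squares in log-log space; a self-contained sketch (our own code) is:

```python
import math

def fit_power_law(Ec, kappa):
    """Fit kappa = A * Ec**gamma by linear least squares on
    log10(kappa) = log10(A) + gamma * log10(Ec).  Returns (A, gamma, R2)."""
    xs = [math.log10(e) for e in Ec]
    ys = [math.log10(k) for k in kappa]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    gamma = sxy / sxx                      # slope = power-law index
    logA = my - gamma * mx                 # intercept = log10(amplitude)
    ss_res = sum((y - logA - gamma * x) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 10.0 ** logA, gamma, 1.0 - ss_res / ss_tot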
Core energy distributions and skewness examples.
(a) 2-D plot of average ion core energy $\langle E_{ci} \rangle$ (color bar) in the $\beta_{i} - \kappa_{i}$ plane.
(b) The skewness $S_{ki}$ of the distributions of $E_{ci}$ in each cell of the $\beta_{i} - \kappa_{i}$ plane.
(c) Histogram of one $E_{ci}$ distribution to illustrate a negative skewness. This particular distribution corresponds to the bin marked with a black cell in panels (a) and (b).
(d) Histogram of one $E_{ci}$ distribution to illustrate a positive skewness. This particular distribution corresponds to the bin marked with a red cell in panels (a) and (b).
Core energy distributions and skewness examples.
(a) 2-D plot of average electron core energy $\langle E_{ce} \rangle$ (color bar) in the $\beta_{e} - \kappa_{e}$ plane.
(b) The skewness $S_{ke}$ of the distributions of $E_{ce}$ in each cell of the $\beta_{e} - \kappa_{e}$ plane.
(c) Histogram of one $E_{ce}$ distribution to illustrate a negative skewness. This particular distribution corresponds to the bin marked with a black cell in panels (a) and (b).
(d) Histogram of one $E_{ce}$ distribution to illustrate a positive skewness. This particular distribution corresponds to the bin marked with a red cell in panels (a) and (b).
Examples of kappa versus core energy, for some selected beta values, are shown in Figure <ref>, where the left column (Panels (a),(c),(e), and (g)) show the results for ions, and the right column (Panels (b), (d), (f), and (h)) for electrons. The horizontal error bars correspond to the standard deviation of the core energy $E_{c}$ in each ($\kappa, \beta$) bin.
The fits to the log-log data have coefficients of determination $R^2>0.7$.
Thus we conclude that under fixed $\beta $ conditions, the $\kappa$-index increases with $E_{c}$ for both ions and electrons, and that they relate via a power-law.
Table <ref> details the results of these fits for a few selected beta values.
In order to establish whether there is a dependence of $A$ and $\gamma$ on $\beta$, we plot all the obtained fitting coefficients in Figure <ref>. It was found that for both species the relation between $A$ and beta has a clear minimum near $\beta=0.1$, and that it is symmetric with respect to this point up to at least $\beta=1$. A similar relation is observed for $\gamma$, which exhibits a maximum at approximately the same value of $\beta$.
To characterize this behavior, we use an empirical relation between the power-law coefficients $A$ or $\gamma$ and $\beta$ of the form $a\vert \log_{10}(\beta/\beta_{0}) \vert + b$, where $\beta_{0}$ is the location of the extremum.
This function was fitted to $A$ and $\gamma$ data around the minimum or maximum (as examples, the data used for the fits in the cases shown in Figure <ref> are plotted with filled circles).
The results of these fits can be found in Table <ref>.
In both cases, we find that $\beta_0 \sim 0.1$ and the slopes involved are different for $ A $ and $ \gamma $, for ions and electrons.
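The empirical V-shaped relation $a\vert \log_{10}(\beta/\beta_{0}) \vert + b$ reduces to a linear least-squares problem once $\beta_{0}$ is fixed, and a coarse scan over candidate $\beta_{0}$ values can then locate the extremum. The following is a sketch under those assumptions; the candidate grid and data are hypothetical, not the fitting procedure actually used by the authors.

```python
import math

def vee_fit(betas, values, beta0):
    """Least-squares fit of values = a * |log10(beta / beta0)| + b
    for a fixed extremum location beta0."""
    xs = [abs(math.log10(b / beta0)) for b in betas]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(values) / n
    a = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, values))
         / sum((x - xbar) ** 2 for x in xs))
    return a, ybar - a * xbar

def scan_beta0(betas, values, candidates):
    """Pick the candidate beta0 that minimises the residual sum of squares."""
    def rss(beta0):
        a, b = vee_fit(betas, values, beta0)
        return sum((y - (a * abs(math.log10(x / beta0)) + b)) ** 2
                   for x, y in zip(betas, values))
    return min(candidates, key=rss)
```

With data generated from the V-shaped model itself, the fit recovers the slope and offset exactly, and the scan identifies the true extremum location.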
As we have mentioned in section <ref>, the total characteristic particle kinetic energy given by Equation (<ref>) enables a straightforward comparison between Kappa and Maxwellian distributions, as it also accounts for the effect of the suprathermal particles within the Kappa distribution model.
To assess how the inclusion of this effect may alter our conclusions, we have repeated our analysis using $E_{total}$ instead of $E_c$.
Comparison of Fig. <ref> with Figs. <ref> and <ref> shows that for every combination between $\beta$ and $\kappa$, $\langle E_{total}\rangle$ is larger than $\langle E_c\rangle$ (see also Tables <ref> and <ref>).
Nevertheless, the statistical results are similar and, as expected, noticeable differences appear only for very small $\kappa$ values close to 3/2, where $E_{total} \gg E_{c}$. In addition, we have corroborated that for both species $\kappa$ and $E_{total}$ also follow a power-law relation $\kappa = A^T E^{\gamma^T}_{total}$, with different $A^T$ and $\gamma^T$ parameters for different plasma beta (see Figure <ref>). Further, computing our $A^T$ and $\gamma^T$ as functions of beta, we find that the relation between $E_{total}$ and $\kappa$ is qualitatively the same as in the case of $E_{c}$ (see Figure <ref>).
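Equation (<ref>) is not reproduced in this excerpt; for a standard Kappa distribution the tail-inclusive characteristic energy is often written as $E_{total} = E_c\,\kappa/(\kappa - 3/2)$, which diverges as $\kappa \to 3/2$ and tends to $E_c$ for large $\kappa$, consistent with the behaviour described above. The sketch below assumes that form and is not a transcription of the paper's equation.

```python
def total_energy(e_core, kappa):
    """Tail-inclusive characteristic energy, ASSUMING the common Kappa
    relation E_total = E_c * kappa / (kappa - 3/2).  Diverges as kappa
    approaches 3/2, where the suprathermal tail dominates."""
    if kappa <= 1.5:
        raise ValueError("kappa must exceed 3/2 for a finite energy")
    return e_core * kappa / (kappa - 1.5)
```

Under this form, $E_{total} > E_c$ for every finite $\kappa$, and the two coincide only in the Maxwellian limit, matching the qualitative statements in the text.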
Plots of the dependence of ion (left) and electron (right) core energies with $\kappa$, for different constant $\beta$ values.
Correlation coefficients and power-law fitting coefficients $A$ (Amplitude), and $\gamma$ (power-law index) are shown on the plots.
Power-law fitting coefficients $A$ (amplitude) and $\gamma$ (power-law index) obtained from the dependence of $\kappa$ on $E_c$, and their respective correlation coefficients ($R^{2}$). Values are given for both ions and electrons and for some selected values of $\beta$.
$\beta$ $A_{i}$ $\gamma_{i}$ $R^2$ $ A_{e}$ $\gamma_{e}$ $R^2$
0.01 2.60 0.56 0.94 4.88 0.82 0.93
0.05 0.63 1.36 0.72 3.06 1.14 0.82
0.1 1.02 1.22 0.79 2.72 1.15 0.74
0.5 2.02 0.90 0.60 3.65 0.73 0.99
1 2.55 0.81 0.88 4.25 0.61 0.99
5 3.10 0.72 0.94 4.44 0.43 0.90
10 3.91 0.49 0.92 3.91 0.49 0.92
Dependency of the fitted power-law coefficient $A$ (top) and $\gamma$ (bottom) with $\beta$.
The left panels are for ions and the right panels for electrons.
(a) 2-D plot of ion total energy $E^T_{i}$ (color bar) in the $\beta_{i} - \kappa_{i}$ plane.
(b) The skewness $S_{ki}$ of the distributions of $E^T_{i}$ in each cell of the $\beta_{i} - \kappa_{i}$ plane.
(c) 2-D plot of electron total energy $E^T_{e}$ (color bar) in the $\beta_{e} - \kappa_{e}$ plane.
(d) The skewness $S_{ke}$ of the distributions of $E^T_{e}$ in each cell of the $\beta_{e} - \kappa_{e}$ plane.
Plots of the dependence of ion (left) and electron (right) Total energies with $\kappa$, for different constant $\beta$ values.
Correlation coefficients and power-law fitting coefficients $A^T$ (Amplitude), and $\gamma^T$ (power-law index) are shown on the plots.
Dependency of the fitted power-law coefficient $A^{T}$ (top) and $\gamma^{T}$ (bottom) with $\beta$.
The left panels are for ions and the right panels for electrons.
Measured parameters for the empirical relation $a\vert \log_{10}(\beta/\beta_{0}) \vert + b$ that describes the dependency between $A$ and $\beta$ or between $\gamma$ and $\beta$.
Fitting coefficient $a$ $\beta_{0}$ $b$ $R^2$
$A_{i}$ 1.37 0.06 0.76 0.80
$A^T_{i}$ 1.26 0.07 0.84 0.46
$\gamma_{i}$ -0.53 0.09 1.28 0.68
$\gamma^T_{i}$ -0.40 0.11 1.10 0.31
$A_{e}$ 2.21 0.15 2.25 0.95
$A^T_{e}$ 1.69 0.15 1.83 0.78
$\gamma_{e}$ -0.48 0.09 1.17 0.82
$\gamma^T_{e}$ -0.68 0.10 1.33 0.59
The last column gives the correlation coefficient. $A^T$ and $\gamma^T$ are the amplitude and power-law index obtained from the relation between kappa and the total energy.
§ DISCUSSION
The most evident result to emerge from our observations is the clear positive correlation between $\beta_{e}$ and $\beta_{i}$ (see Figure <ref>(a)), which would be impossible in the absence of a correlation between the ion and electron temperatures, and hence between their pressures.
In this case, we expect the ion temperature to be higher than the electron temperature, with a ratio $T_{i}/T_{e}$ between $4$ and $6$ [Baumjohann, 1993, Borovsky et al., 1997, Espinoza et al., 2018, Wang et al., 2012].
In contrast, there is a lack of correlation between $\kappa_{e}$ and $\kappa_{i}$, which suggests that the processes regulating the macroscopic properties of each species are not related; perhaps the time scales for the energy relaxation of electrons and ions are different.
Figure <ref>(c) clearly emphasizes this, as $E_{c_{e}}$ and $E_{c_{i}}$ are only slightly correlated.
Despite the strong scattering observed for the $\kappa$ values, on average, $\kappa$ increases with $\beta$ up to $\beta=1$, and then it slowly decreases as $\beta$ increases beyond $1$.
This trend is observed for both species, although the effect is more pronounced for the electrons. This type of analysis was carried out in the solar wind, and a similar trend was noted. Our results are consistent with the previous studies of ion distributions by Livadiotis et al., 2018 (see their Figure 10(b)). The fact that the same relation between kappa and beta is observed in both systems, and at similar beta values, suggests this behavior is controlled by some universal property present in collisionless plasma systems.
In the case of the magnetosphere, comparing with similar previous studies, we note that in our case, ion $\kappa$ values are lower than those reported by Kirpichev & Antonova, 2020 (see their Figure 3), which may be related to the fact that they studied only some particular regions of the magnetosphere.
However, for both ions and electrons, the $\kappa$ values we found across different regions of the magnetosphere are similar to those obtained by Espinoza et al., 2018, even though that study considered only the plasma sheet and restricted the dataset to $\beta > 1$. This good agreement also suggests that the relation between core energy, plasma beta and the kappa power-index may be explained by basic plasma physics processes rather than by phenomenological or specific properties of the Earth's magnetosphere.
Our analysis of the relationship between $\beta$, $\kappa$, and $E_{c}$ (section <ref>) shows that some combinations of $\kappa$ and $\beta$ are not present in the analysed plasmas.
For low $\beta$ regimes this may be due to plasma dynamics in which the magnetic field dominates and drives the system towards specific configurations.
On the other hand, for large $\beta$ regimes, temperature fluctuations or kinetic instabilities are expected to dominate, which might make certain plasma configurations more stable than others.
A similar phenomenon was previously reported by Livadiotis et al., 2018 for solar wind observations. Thus, in the low and high plasma $\beta$ regimes, the degrees of freedom decrease, making the plasma remain in just a few possible states.
In Figure <ref>(a) we observe that, on average, ion core energies ($E_{c_{i}}$) increase as the plasma becomes closer to a Maxwellian: $\kappa_{i} > 8$ for $\langle E_{c_{i}} \rangle > 7 \,$keV.
The figure also shows that the colder plasma, corresponding to $\kappa_{i} \leq 8$ with energies $\langle E_{c_{i}} \rangle \leq 3 \,$keV, is dominant across all plasma $\beta_{i}$ regimes. In the case of electrons, as shown in Figure <ref>(a), core energies $E_{c_{e}} > 4 \,$keV correspond to larger $\kappa$ values ($\kappa_{e} \geq 10$), and thus tend to be closer to thermodynamic equilibrium. Meanwhile, smaller kappa values ($\kappa_{e} \leq 8$) are typical of smaller energies $E_{c_{e}} \leq 2 \,$keV across all plasma $\beta_{e}$ regimes.
This correlation between high energies and high values of kappa suggests that plasma heating and the thermalization of the distribution proceed together, in agreement with Collier, 1999, who suggested that the increase in kappa depends on the ion core temperature, indicating that the particle distributions of hotter plasmas tend towards Maxwellian functions.
Previous studies have also shown that the $\kappa$-index usually increases with core energy. For instance, Collier, 1999 proposed that these two parameters have a linear relation; meanwhile, Kirpichev & Antonova, 2020 opted for a power-law dependency, but using only ion measurements. Our study strongly agrees with the latter, as power-law functions fit the relation between $\kappa$ and $E_{c}$ well for all values of the $\beta$ parameter and for both species, as illustrated in Figure <ref>. Similar results are obtained for the relation between $\kappa$ and $E_{total}$ (see Figure <ref>).
Moreover, we have found that the power-law fitting coefficients $A$ (amplitude) and $\gamma$ (power-law index) depend strongly on $\beta$ for both species, such that different $\beta$ values yield different values of $A$ and $\gamma$ (see Figures <ref> and <ref>).
These results contrast with Kirpichev & Antonova, 2020, who obtained a practically constant value of the power index, $\sim 0.5$.
The discrepancy can be attributed to the fact that they applied different, stronger restrictions to select plasmas for their analyses.
Further, the increase in kappa with energy corresponds to positive values of the power-law index ($\gamma$), consistent with the result that larger values of the core energy correspond to larger $\kappa$.
However, we found the relation to be non-linear.
In low beta plasmas ($\beta < 0.1$), the relation between kappa and core energy exhibits a small $\gamma$ value for both ions and electrons, and the same happens at beta values well above $0.1$, as seen in Figure <ref>. Meanwhile, for a specific range of beta values ($\beta \sim 0.1-0.3$), the relation between kappa and energy is stronger, with $\gamma$ reaching a maximum for both species. As previously mentioned, this is consistent with Livadiotis et al., 2018. Thus, the presence of extrema near $\beta \sim 0.1$, as shown in Figure <ref>, indicates the existence of two regimes. Regardless of the details of the solar wind or the magnetosphere, in poorly collisional space plasma systems plasma magnetization is expected to dominate in the low beta regime, and therefore the relation between $\kappa$ and core energy should be stronger. On the other hand, in a large beta plasma, correlations are mainly due to temperature fluctuations, such that the magnetic field becomes less relevant for the determination of $\kappa$.
Between these two separate regimes there is an intermediate range in which other plasma effects should be relevant: for instance, the difference between cold and hot plasmas, or the effectiveness of the instabilities that may be responsible for the trend of the power-law coefficients in Figure <ref>. This is consistent with the theoretical linear analysis by Mace & Sydora, 2010, in which deviations from the cold plasma approximation for whistler waves propagating through a bi-Kappa distributed plasma were found to be relevant for $\beta>0.1$, such that $\beta \sim 0.1$ may correspond to a transition between cold plasmas ($\beta <0.1$), where the magnetic field controls the dynamics, and hot plasmas ($\beta >0.1$), in which temperature effects become relevant.
Recently, a similar behavior was described by López et al., 2020, in which different regimes of plasma waves and instabilities were found to depend strongly on plasma beta.
For instance, at $\beta < 0.1$ modes propagating parallel or obliquely to the magnetic field dominate, while at $\beta > 0.1$ instabilities driven by the relative drifts of suprathermal electrons become dominant, which can be electrostatic in nature.
Moreover, this is also consistent with Moya et al., 2020, who allowed kappa to evolve in time due to the quasi-linear relaxation of the electron cyclotron instability.
Their results suggest that kappa is not affected by the instability in the low plasma beta regime ($\beta < 0.1$), that for $\beta\sim0.3$ the relaxation of the instability results in a larger variation of $\kappa$, and that in the high plasma beta regime ($\beta > 0.5$) kappa remains approximately constant.
§ SUMMARY AND CONCLUSION
The observations of ion and electron energy fluxes, made by the THEMIS mission instruments (section <ref>), have provided the opportunity to study the behavior of the Kappa distribution parameters for 47,058 cases, in which the spectra of both species were successfully modelled with Kappa functions.
The plasma measurements were made in the geomagnetic tail and in the plasma that surrounds the Earth beyond 7 R$_{E}$.
More specifically, the studied region corresponds to $-35 \leq X \leq 7$ R$_{E}$, $-30 \leq Y \leq 30$ R$_{E}$, and $-10 \leq Z \leq 10$ R$_{E}$ and only plasmas that satisfy the restrictions summarised in Table <ref> were considered.
We studied the relationship between $\kappa$ and $E_c$ for a wide range of the plasma $\beta$ parameter, from $10^{-3}$ to $10^{2}$.
This is the first time this type of study has been carried out using THEMIS data covering different regions of the magnetosphere, and one important finding of our research is the presence of Kappa distributions in many regions of the magnetosphere. On average, the $\kappa$ indices of ions and electrons were found to increase with $\beta$, up to $\beta\sim1$ (Figure <ref>).
This tendency seems more pronounced for the electrons.
However, for $\beta>1$ the $\kappa$ indices tend to decrease slowly.
In addition, certain combinations of $\kappa$ and $\beta$ are absent in the studied plasmas (for instance $\beta_{i}<0.01$ and $\kappa_{i}>10$ for ions, or $\beta_{e}>30$ and $\kappa_{e}>6$ for electrons; see Fig. <ref>).
Such a division means that the magnetic field plays a crucial role in the relative number of energetic particles and in the presence of high-energy tails in the distributions. Our results also show that systems with large ion kappa indices ($\kappa \geq 10$) appear to have higher core energies, $E_{c_i}\geq5$ keV (Figure <ref>(a)).
The same trend is observed for electrons, as systems with $\kappa_e>8$ have energies $E_{c_e}\geq3$ keV (Figure <ref>(a)). Thus, the hottest plasmas tend to be in states closer to thermodynamic equilibrium (as their distributions approach a Maxwellian).
A more detailed study for both species shows a robust correlation of the form $\kappa=AE_c^\gamma$, indicating that kappa increases with energy, and that the relation between $\kappa$ and core energy is stronger for a specific range of beta values ($\beta \sim 0.1-0.3$), for both ions and electrons.
This observation may reflect the level of influence of the magnetization of the system on the propagation of plasma waves and instabilities <cit.>, or the effectiveness of the relaxation of kinetic instabilities in inducing changes in plasma parameters, including $\kappa$ [Moya et al., 2020].
Moreover, we found that both $A$ and $\gamma$ depend on $\beta$, the values of $A$ are found to exhibit a minimum near $\beta\sim0.1$; while the power-law indices $\gamma$ exhibit a maximum at around the same value of $\beta$. The observed trend for both species suggests a universal plasma transition at around $\beta\sim0.1$.
Waves, instabilities, and temperature fluctuations may have a stronger or weaker effect on the observed $\kappa$ indices depending on the value of $\beta$.
When $\beta$ is small (i.e. in a strongly magnetized plasma), $\kappa$ depends more on the magnetic field than on instabilities or fluctuations, and therefore plasma waves and instabilities have little effect on kappa.
There exists a transition near $\beta \sim 0.1-0.5$ (i.e. for less magnetized plasmas), in which the system becomes more susceptible to plasma waves and instabilities, whose relaxation can result in significant changes of the plasma parameters, including $\kappa$; this is consistent with the results presented in Figure <ref>. Finally, for $\beta>0.5$, kinetic instability thresholds prevent the plasma parameters from deviating from a quasi-equilibrium state, and the value of $\kappa$ is expected to be determined mostly by temperature fluctuations.
Consistently, Figure <ref> shows that for $\beta>1.0$, both $A$ and $\gamma$ deviate from the observed trend and tend to take nearly constant values.
This effect is particularly noticeable for $A_{e}$, which may be evidence of electron demagnetization, a process that often takes place in highly fluctuating electric and magnetic fields [Antonova et al., 1999] commonly observed at high plasma beta ($\beta\geq1$) [Valdivia et al., 2016].
The results we have obtained are remarkably similar to those found in the solar wind, suggesting a universal behaviour of Kappa distributions in poorly collisional plasmas. The relation we have found between kappa, energy, and beta, with a transition at a particular value of beta, is a significant result that can inform theoretical models in the future. Moreover, this relation is much more complex than what was reported in previous studies. A comprehensive insight into the behavior of these parameters should be pursued with theoretical models. We expect our results to motivate the space and astrophysical plasma physics community to consider more in-depth studies, using more realistic theoretical models and simulations, to unravel the dynamics responsible for such behavior.
This work was supported by Agencia Nacional de Investigación y Desarrollo de Chile (ANID) grants 21181777 and 1191351 (P.S.M.). M.S. acknowledges support from Universidad de Santiago de Chile through the grant DICYT 042031S. C.M.E. and M.S. acknowledge support from AFOSR (FA9550-19-1-0384).
We acknowledge NASA contract
NAS5-02099 and V. Angelopoulos for the use of data from the THEMIS mission, specifically C.W. Carlson and J.P. McFadden for the use of ESA data, D. Larson for the use of SST data, and K.H. Glassmeier, U. Auster, and W. Baumjohann for
the use of FGM data. The data of the THEMIS satellite mission used in this paper are available on THEMIS mission website: http://themis.ssl.berkeley.edu/index.shtml.
§ MEDIAN AND MODE $E_c$ VALUES AS A FUNCTION OF $\kappa$ AND $\beta$
Figs. <ref> and <ref> show the distributions of the median and mode $E_c$ values, respectively, in the $\kappa$-$\beta$ space.
These figures can be compared to Figs. <ref> and <ref>.
2-D distribution of the median core energy values for each cell in the $\beta - \kappa$ plane for ions (left) and electrons (right).
2-D distribution of the mode core energy values for each cell in the $\beta - \kappa$ plane for ions (left) and electrons (right).
[Angelopoulos, 2008]
Angelopoulos, V. 2008, Space Science Reviews, 141, 5,
[Antonova et al., 1999]
Antonova, E., Stepanova, M., Vikhreva, E., Ovchinnikov, I., & Teltzov, M.
1999, Journal of Geophysical Research: Space Physics, 104, 19941
[Auster et al., 2008]
Auster, H., Glassmeier, K., Magnes, W., et al. 2008, Space Science Reviews,
141, 235
[Baumjohann, 1993]
Baumjohann, W. 1993, Space science reviews, 64, 141
[Borovsky et al., 1997]
Borovsky, J. E., Elphic, R. C., Funsten, H. O., & Thomsen, M. F. 1997, Journal
of Plasma Physics, 57, 1
[Bruno & Carbone, 2013]
Bruno, R., & Carbone, V. 2013, Living Reviews in Solar Physics, 10, 2
[Burlaga & F.-Viñas, 2005]
Burlaga, L., & F.-Viñas, A. 2005, Journal of Geophysical Research: Space
Physics, 110
[Christon et al., 1988]
Christon, S. P., Mitchell, D. G., Williams, D. J., et al. 1988, Journal
of Geophysical Research, 93, 2562, 10.1029/JA093iA04p02562
[Christon et al., 1989]
Christon, S. P., Williams, D. J., Mitchell, D. G., Frank, L. A., &
Huang, C. Y. 1989, Journal of Geophysical Research, 94, 13409,
[Christon et al., 1991]
Christon, S. P., Williams, D. J., Mitchell, D. G., Huang, C. Y., &
Frank, L. A. 1991, Journal of Geophysical Research, 96, 1,
[Collier, 1999]
Collier, M. R. 1999, Journal of Geophysical Research, 104, 28559,
[Collier et al., 1996]
Collier, M. R., Hamilton, D., Gloeckler, G., Bochsler, P., & Sheldon, R. 1996,
Geophysical research letters, 23, 1191
[Espinoza et al., 2018]
Espinoza, C. M., Stepanova, M., Moya, P. S., Antonova, E. E., & Valdivia,
J. A. 2018, Geophysical Research Letters, 45, 6362,
[Grabbe, 2000]
Grabbe, C. 2000, Physical review letters, 84, 3614
[Haaland et al., 2010]
Haaland, S., Kronberg, E. A., Daly, P. W., et al. 2010, Annales Geophysicae,
28, 1483, 10.5194/angeo-28-1483-2010
[Kirpichev & Antonova, 2020]
Kirpichev, I., & Antonova, E. 2020, The Astrophysical Journal, 891, 35
[Kletzing et al., 2003]
Kletzing, C., Scudder, J., Dors, E., & Curto, C. 2003, Journal of Geophysical
Research: Space Physics, 108
[Lazar et al., 2016]
Lazar, M., Fichtner, H., & Yoon, P. 2016, Astronomy & Astrophysics, 589, A39
[Lazar et al., 2015]
Lazar, M., Poedts, S., & Fichtner, H. 2015, Astronomy & Astrophysics, 582,
[Lazar et al., 2017]
Lazar, M., Shaaban, S., Poedts, S., & Štverák, Š. 2017,
Monthly Notices of the Royal Astronomical Society, 464, 564
[Livadiotis, 2017]
Livadiotis, G. 2017, Kappa distributions: Theory and applications in plasmas
[Livadiotis et al., 2018]
Livadiotis, G., Desai, M., & Wilson III, L. 2018, The Astrophysical Journal,
853, 142
[Livadiotis & McComas, 2011]
Livadiotis, G., & McComas, D. 2011, The Astrophysical Journal, 738, 64
[López et al., 2020]
López, R. A., Lazar, M., Shaaban, S. M., Poedts, S., & Moya, P. S. 2020,
The Astrophysical Journal Letters, 900, L25, 10.3847/2041-8213/abaf56
[Mace & Sydora, 2010]
Mace, R., & Sydora, R. 2010, Journal of Geophysical Research: Space Physics,
115, A07206, 10.1029/2009JA015064
[Maksimovic et al., 1997]
Maksimovic, M., Pierrard, V., & Lemaire, J. 1997, Astronomy and Astrophysics,
324, 725
[Mann et al., 2002]
Mann, G., Classen, H., Keppler, E., & Roelof, E. 2002, Astronomy &
Astrophysics, 391, 749
[Marsch, 2006]
Marsch, E. 2006, Living Reviews in Solar Physics, 3, 1
[McFadden et al., 2008]
McFadden, J., Carlson, C., Larson, D., et al. 2008, Space Science Reviews,
141, 277, 10.1007/s11214-008-9440-2
[Montgomery et al., 1965]
Montgomery, M. D., Singer, S., Conner, J. P., & Stogsdill, E. E. 1965,
Physical Review Letters, 14, 209, 10.1103/PhysRevLett.14.209
[Moya et al., 2020]
Moya, P. S., Lazar, M., & Poedts, S. 2020, Plasma Physics and Controlled
Fusion, 63, 025011, 10.1088/1361-6587/abce1a
[Ogasawara et al., 2013]
Ogasawara, K., Angelopoulos, V., Dayeh, M., et al. 2013, Journal of
Geophysical Research: Space Physics, 118, 3126
[Ogasawara et al., 2015]
Ogasawara, K., Dayeh, M., Funsten, H., et al. 2015, Journal of Geophysical
Research: Space Physics, 120, 964
[Pierrard et al., 1999]
Pierrard, V., Maksimovic, M., & Lemaire, J. 1999, Journal of Geophysical
Research: Space Physics, 104, 17021
[Pierrard & Pieters, 2014]
Pierrard, V., & Pieters, M. 2014, Journal of Geophysical Research: Space
Physics, 119, 9441
[Pisarenko et al., 2002]
Pisarenko, N., Budnik, E., Ermolaev, Y., et al. 2002, Journal of Atmospheric
and Solar-Terrestrial Physics, 64, 573 ,
[Stepanova & Antonova, 2015]
Stepanova, M., & Antonova, E. E. 2015, Journal of Geophysical Research: Space
Physics, 120, 2014JA020684, 10.1002/2014JA020684
[Valdivia et al., 2016]
Valdivia, J., Toledo, B., Gallo, N., et al. 2016, Advances in Space Research,
58, 2126
[Vasyliunas, 1968]
Vasyliunas, V. M. 1968, Journal of Geophysical Research, 73, 2839,
[Wang et al., 2012]
Wang, C.-P., Gkioulidou, M., Lyons, L. R., & Angelopoulos, V. 2012, Journal of
Geophysical Research: Space Physics, 117
[Wing et al., 2005]
Wing, S., Johnson, J., Newell, P., & Meng, C.-I. 2005, Journal of Geophysical
Research: Space Physics, 110
[Yoon, 2014]
Yoon, P. H. 2014, Journal of Geophysical Research (Space Physics), 119, 7074,
[Yoon, 2017]
—. 2017, Reviews of Modern Plasma Physics, 1, 4
IEEE Copyright Notice
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works.
# Formal Verification of a Fail-Operational Automotive Driving System
Tobias Schmid, Stefanie Schraufstetter, Jonas Fritzsch, Dominik Hellhake,
Greta Koelln and Stefan Wagner T.Schmid, S.Schraufstetter and D.Hellhake are
with the Department of Development of Driving Dynamics, BMW AG, Munich,
Germany.
G.Koelln is with the Department of Development of Autonomous Driving, BMW AG,
Munich, Germany.
T.Schmid, J. Fritzsch, D. Hellhake and S. Wagner are with the Institute of
Software Engineering, University of Stuttgart, Stuttgart, Germany. Manuscript
received July 19, 2020; revised August 26, 2020.
###### Abstract
A fail-operational system for highly automated driving must complete the
driving task even in the presence of a failure. This requires redundant
architectures and a mechanism to reconfigure the system in case of a failure.
Therefore, an arbitration logic is used. For functional safety, the switch-
over to a fall-back level must be conducted in the presence of any electric
and electronic failure. To provide evidence for a safety argumentation in
compliance with ISO 26262, verification of the arbitration logic is necessary.
The verification process provides confirmation of the correct failure
reactions and that no unintended system states are attainable. Conventional
safety analyses, such as the failure mode and effects analysis, have their
limits in this regard. We present an analytical approach based on formal
verification, in particular model checking, to verify the fail-operational
behaviour of a driving system. For that reason, we model the system behaviour
and the relevant architecture and formally specify the safety requirements.
The scope of the analysis is defined according to the requirements of ISO
26262. We verify a fail-operational arbitration logic for highly automated
driving in compliance with the industry standard. Our results show that formal
methods for safety evaluation in automotive fail-operational driving systems
can be successfully applied. We were able to detect failures that would have
been overlooked by other analyses, thus contributing to the development of
safety-critical functions.
###### Index Terms:
Automotive, highly automated driving, fail-operational, Fault-tolerance,
Functional Safety, Dependability, ISO 26262, Formal Verification, Model
Checking
## 1 Introduction
In partial driving automation, the driver supervises the driving system at all
times (shown in figure 1) and is therefore able to act as a backup in case of
a failure. A failure in this context is the consequence of a fault leading to
the loss of an element, for example, a steering system. The system’s failure
reaction in partial driving automation is defined as fail-silent. At higher
levels of automation, the driver does not necessarily intervene immediately,
since s/he is not in charge of supervising the system permanently [25].
Therefore, a highly automated driving or conditional driving system is
required to operate even in case of a failure and initiates the transition to
a safe state to attain functional safety. Those systems are defined as fail-
operational. Functional safety is the absence of unreasonable risk due to
malfunctions of electric and electronic (E/E) systems and is mandatory for the
homologation of vehicle systems [24]. The standard for functional safety in
the automotive industry is ISO 26262 [11], which includes guidelines for the
evaluation, management, and development of safety critical systems.
Figure 1: Levels of Driving Automation [25]
The fail-operational behaviour of the driving system allows the availability
of driving functions to prevent a hazardous situation. Since every single E/E
failure must be tolerated, redundant architectures are inevitable. In the case
of the driving system, this includes redundant steering and braking systems as
well as sensors and control algorithms. When a failure occurs, a
reconfiguration and activation of a backup operation are necessary. The
switch-over is ensured by an implemented arbitration logic.
### 1.1 Problem Statement
The industry standard for functional safety, ISO 26262 [11], requires safety
argumentation to evaluate the absence of unreasonable risk due to malfunctions
of an E/E system. Verification provides evidence for the argumentation and the
appropriateness of the safety concept. The verification analysis process
verifies that the defined specifications and design meet higher-level
requirements, such as system-level requirements. In the context of
fail-operational system behaviour, this specifically includes the arbitration
logic, which ensures the switch-over to a backup level. Established methods in the
automotive industry, such as failure mode and effect analysis (FMEA), do not
meet the challenges of such systems because the large number of states and
propagation paths limits their practical application. To date, there is no
published approach for a safety argumentation or verification process of such
systems.
### 1.2 Research Objective
The objective of our research is the verification of the fail-operational
behaviour of a driving system and in particular, the arbitration logic in
accordance with ISO 26262. Our verification approach should be reliable to
contribute to an automotive safety case. In addition, the objective is to
demonstrate the application of formal methods, in particular model checking
for a safety argumentation of an industry-relevant, complex problem, a fail-
operational driving system. The scope of our investigation especially includes
the arbitration logic, and respective hardware interfaces such as power
supply.
### 1.3 Contribution
Our approach demonstrates that model checking is a reliable analytical
technique for the verification of a fail-operational driving system in
compliance with the industry standard for functional safety, ISO 26262. The
model integrates the arbitration logic and the relevant architecture using an
open-source tool (NuSMV, http://nusmv.fbk.eu/) and displays the
transformation of the safety requirements into a formal specification using
linear time logic. Furthermore, the requirements of the industry standard ISO
26262 regarding the scope and the verification process are identified.
Relevant failure cases are derived from the architecture and considered in the
model. We overcome the state space explosion problem during formal
verification, by limiting the failure combinations to the scope of ISO 26262,
segmenting the checking procedure, and bounding the search depth. Then the
validation of the model and the formal specifications is conducted. In
addition, a tool qualification is necessary for compliance with ISO 26262. The
qualification ensures the reliability of the tool and therefore the resilience
of the generated results. Our study demonstrates that verification via model
checking is applicable for sophisticated and extensive automotive problems and
it provides verification of the correctness of the system design in compliance
with ISO 26262. Thus, this study develops a safety argumentation of a fail-
operational driving system in the development of highly automated vehicles.
Complementary work [9] addresses the same project but focusses not on the
safety perspective and ISO 26262 but on the implementation of the model
checking. Within [9] we describe the handling of large scale model checking
problems and discuss different implementations.
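The effect of limiting failure combinations to the single- and dual-point scope of ISO 26262 can be quantified with a short calculation. This is an illustrative sketch, not taken from the paper; the component count of 20 is a hypothetical example.

```python
from math import comb

def unrestricted_combinations(n: int) -> int:
    """All non-empty subsets of n independent failure sources."""
    return 2 ** n - 1

def iso26262_scope(n: int) -> int:
    """Single-point plus dual-point failure combinations only."""
    return comb(n, 1) + comb(n, 2)

# Hypothetical example: 20 independent failure sources in the model.
n = 20
print(unrestricted_combinations(n))  # 1048575
print(iso26262_scope(n))             # 210
```

Restricting the scope turns an exponential number of failure combinations into a quadratic one, which is one reason the state space remains tractable for the model checker.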
After presenting the related work in section 2, we introduce a
fail-operational driving system including the arbitration logic in section 3. The
verification process is explained in two parts in section 4. Section 4.1
describes the model; section 4.2 describes the formal specifications. The
scope is defined by the requirements in ISO 26262. Section 4.3 further
presents the implementation process. The validation approach and tool
qualification are provided in section 4.4 to comply with ISO 26262. The
results and application are discussed in section 4.5, including threats to
validity.
## 2 Related Work
Previous work that addresses the verification of fail-operational automotive
systems exists. Additionally, some studies address the verification of
automotive systems by formal methods. This section reviews the literature as it
relates to the compliance of the industry standard ISO 26262.
### 2.1 Safety Analysis of Fail-Operational Automotive Systems
Since safety is a major aspect of fail-operational systems, it is discussed in
most related publications. However, previous research mainly focuses on
reliability factors.
The most comprehensive work is presented by Schnellbach [26]. This work
addresses the limitations of the first version of the industry standard ISO
26262, discusses relevant aspects of fail-operational systems, such as the
definition of the emergency operation time, and provides an approach to design
the redundant architecture based on reliability analysis. The arbitration
logic as a core element for fail-operational behaviour is addressed, but not
its verification. Another comprehensive work addressing fail-operational
systems is published by Sari [12]. An architecture model is designed based on
a functional safety concept. The necessity of an arbitration logic is stated
but the analysis is not detailed. However, the author focuses on the analysis
of dependencies in redundant elements, another highly relevant aspect of fail-
operational systems.
The functional verification of a fail-operational system using a formal
approach is presented by Koelbl and Leue [17]. The system is modeled as a
state diagram at the vehicle level and takes into account an emergency
operation mode after a failure. The state model is checked using statistical
model checking to analyze the reliability of the concept. Although this model
aggregates component states to a system-level state, it does not reflect the
complexity of modern driving systems.
Comprehensive work regarding the functional safety of fail-operational systems
focusses on the analysis at the vehicle level, including safety, reliability,
and dependability. The verification of the fail-operational behavior is
addressed at the vehicle level and relevant analyses at the system level are
identified. The purpose of this study is to close the gap and provide a
verification procedure at the system level.
### 2.2 Formal Verification of Automotive Systems
Formal verification and in particular, model checking have generated great
interest in research. The research focuses on modeling, algorithms, including
the handling of the state-space problem, and application. Clarke [8] gives an
overview of the state of the art in model checking. According to Singh et al.
[27], formal methods provide accuracy, consistency and unambiguous
specifications, using mathematical theorems. Thus, the application to safety
critical systems has been addressed in the literature. We present publications
to specifically address the safety of automotive systems.
Formal verification in compliance with various aspects of the industry
standard ISO 26262 has been covered in several publications. Leitner-Fischer
and Leue [19] discuss the recommendations in ISO 26262 regarding the
verification and suitability of formal methods. The authors conclude that
formal methods support the safety life cycle described in ISO 26262 by
formalization and the proof of accuracy. In particular, formal methods provide
a systematic approach and allow the traceability of safety requirements. In
addition, the authors emphasize the necessity for a tool qualification. Bahig
and El-Kadi [4] come to similar conclusions and present a general process
based on formal modeling and model checking.
Further work addresses the application of formal methods to automotive
problems and partly the industry standard ISO 26262.
Abdulkhaleq and Wagner [1] combine system theoretic process analysis with
formal verification. Safety requirements are identified via system theoretic
process analysis and formally specified. Then, model checking is used to
verify a process model regarding the specifications. The basic concept is
comparable to our work. It shows the benefits of a structured approach and
provides a complete verification by focusing on system-level behaviour. Their
approach is applied to a cruise control system with limited states and
complies with the software safety requirements in ISO 26262 through traceable
and formal specifications.
Nyberg et al. [21] present their experience using formal verification
techniques including model checking at Scania, a Scandinavian truck company.
The results show the applicability of model checking in general, and the
challenges of the formalization of models and requirements in particular. They
conclude that the verification method needs to be chosen based on system and
modeling complexity. Large systems involve more complexity and effort and may
not be verified comprehensively. ISO 26262 is only referenced as required for
a verification procedure.
Another application is presented by Nellen et al. [22]. The authors verify a
controller for a parking function using MathWorks Simulink
(https://mathworks.com/products/simulink.html). They recognize a
dependency of the computing time on the model size and the time interval to be
analyzed. They conclude that model checking supports safety analysis and
increases the quality of the safety requirements by using the process of
formal specification. However, computing time and complexity limited the range
of applications.
Todorov et al. [29] use model checking to verify a cruise control algorithm.
Once the model is formalized and implemented, verification is very efficient
for large models. However, similar to Nellen et al. [22], long time spans of
constraints limited the analysis due to runtime problems despite the limited
model size.
Other work states that model checking can be used as a complement to other
verification techniques to limit time and effort. Da Silva et al. [3]
combined model checking with testing and simulation. They state that formal
verification is time consuming and, therefore, cannot be used extensively.
Similarly, Aniculaesei [2] uses model checking to identify test cases.
Work from other industries, such as aviation, focuses on code level evaluation
and rarely considers distributed functions [20]. Thus, it is not relevant to
our study.
In conclusion, the functional verification of fail-operational functions has
not been sufficiently investigated. Related work demonstrates the
applicability of model checking to ISO 26262 in the automotive industry; in
particular, these studies examine the formal specification of requirements. However,
these studies are limited and the verification of large, distributed systems
has not yet been demonstrated. In addition, a complete coverage of ISO 26262
has only been addressed by [19] without an application. Current verification
approaches cover only some aspects, such as the need for verification and the
identification of safety requirements; they do not include validation or tool
qualification. The objective of this study is to close this gap.
## 3 Fail-Operational Driving Systems
Fail-operational behaviour in an automotive context is defined as the ability of
a system to be operable in the presence of a failure. Such behaviour is
required when it is not immediately possible to reach a safe state by
deactivating the system [13]. That corresponds to fault-tolerance according to
the industry standard ISO 26262 [11]. Since safety is a key requirement, the
fall-back operation time and functionality are limited in response to a system
failure [26].
### 3.1 Fail-Operational Safety Goals
Fail-operational behaviour requires redundant architectures because a
fall-back level is necessary to resume the driving task in case of a failure [23]. The
necessity of such fail-operational system behaviour is defined as part of the
safety concept, resulting from the safety goals. Safety goals are safety
requirements at the system level and are identified from a hazard and risk
analysis. These safety goals are given in our analysis and included for
comprehensibility. Table I shows the applicable safety goals specified
according to ISO 26262 [11] and the literature [28] [17]. Those safety goals
include the activation, deactivation and operation of the highly automated
driving mode. The fail-operational behaviour is described in case of failure.
Each safety goal includes the relevant specification, a respective integrity
in accordance with ISO 26262 and the fault-tolerant-time interval (FTTI). The
fault-tolerant time interval describes the time span after which the failure
becomes critical and a transition to a safe state must have been achieved. The
safe state can either be a state without any hazard or a state whose residual
risk corresponds to the system's integrity level.
Table I: Safety Goals for a Fail-Operational Driving System

ID | Safety Goal | Integrity | FTTI
---|---|---|---
SG 1 | The function must not be activated when the components do not signal readiness. | ASIL B | 0 ms
SG 2 | The function must not be deactivated falsely. | ASIL D | 0 ms
SG 3 | A collision by leaving the trajectory in the nominal operation must be prevented. | ASIL D | 0 ms
SG 4 | The arbitration logic must activate a functioning fall-back operation in case of a failure. | ASIL D | 200 ms
SG 5 | The system must decelerate after a switch-over. | ASIL B | 200 ms
SG 6 | A collision by leaving the trajectory in the fall-back operation must be prevented. | ASIL B | 0 ms
Activation is only allowed if no failure is present and no false deactivation
is possible. Furthermore, the driving task must be performed in accordance
with the respective integrity. Due to the possible severity, this is ASIL D
during normal operation. As specified by ISO 26262, the risk is limited by
deceleration, as stated in safety goal 5 (SG 5) in Table I.
Therefore, the fall-back operation meets the specifications of ASIL B. The
transition between the nominal- and fall-back operation is ensured by an
arbitration logic, which must determine the functional channel correctly with
the specifications of ASIL D and within the fault-tolerant time interval. That
means systematic failures in the logic must be prevented and all sequences of
a switch-over must be within the time interval.
### 3.2 Fail-Operational Architecture
In this section we present the architecture of a fail-operational driving
system in accordance with the safety goals shown in Table I. Electric and
electronic (E/E) architectures at the vehicle level for fault tolerant driving
systems have been discussed in the literature by Kron et al. [18], Niebdalla
and Reuss [23], Schnellbach [26] and Sari [12]. All of these E/E architectures
use dynamic redundancy but differ in their fall-back configurations. Dynamic
redundancy, in contrast to static redundancy, uses fewer components and a
self-diagnostic system for each channel [12] [13].
This study, in collaboration with the BMW Group, investigates the
fail-operational driving system of BMW as described by Kron et al. [18]. The system
is already in road testing and therefore in a mature stage of development. The
driving system consists of two redundant channels, each capable of conducting
the driving task. Figure 2 shows the architecture.
Figure 2: Architecture of a Fail-Operational Driving System [18]
The nominal channel is at the bottom and consists of sensors and modules to
determine the driving-strategy (DS1), the vehicle-control (VC1) and actuators,
each partitioned on separate electronic control units (ECUs). The fall-back
channel at the top includes sensors, a single ECU for the driving strategy,
and the vehicle control (DS2). The braking-systems (BRS) and electronic power
steering-system (EPS) are redundant in each channel, whereas the drive-unit is
only integrated into the nominal channel. That is possible since fail-silent
behaviour is safe in the drive-unit and therefore fault-tolerance is not
required. The system is able to conduct the driving-task in nominal operation
mode and three fall-back operation modes. These define the system
configurations. In case of a failure, a backup operation is activated. The
current fall-back operation mode depends on the failure. The nominal operation
and fall-back operation mode 1 are both conducted on the nominal channel; mode
1 applies in case of a failure in the fall-back channel. Fall-back operation
mode 3 is the operation on the fall-back channel in case of a failure in the
nominal channel. Since the primary actuators comprise a greater range of functions, a
prioritization system is implemented [15]. That means the actuators are used
in the corresponding channel, and in addition, the primary actuators (BR1,
EPS1) can be controlled by the control systems on the fall-back channel (VC2)
which is shown in operation mode 2. A direct loss results from an undetected
failure in the nominal operation mode or a second failure in the fall-back
operation. The switch-over is ensured by an arbitration logic that is
distributed throughout the cooperative modules in the applicable control
units. In addition, the driver is requested to take over in case of a failure.
If no take over is conducted after a certain time, the vehicle executes an
emergency operation, such as a deceleration.
The arbitration logic is implemented by a set of distributed state machines,
shown in Figure 3. The naming corresponds to figure 2. The logic has to ensure
the fail-operational behaviour and thus determines the operation mode, i.e.
nominal or fall-back. The composite of the state machines conducts the switch-
over cooperatively. Each state machine is partitioned on an electronic control
unit (ECU) and is connected to at least one communication bus and power
supply.
Figure 3: Arbitration Logic as Composite of State-Machines
Figure 4 displays a single state machine consisting of an initial state (Init)
and states representing ready, active and passive operations. The state
machines are connected via the transition conditions.
Figure 4: Exemplary State-Machine
After the start-up of the control-unit, the state machines are in the
initial state (Init). When the diagnostics are completed and no failure has
been detected, the state-machines switch to Ready. By activation of the highly
automated driving mode, the state-machines on the nominal channel switch to
Active, whereas the state-machines at the fall-back level stay in Ready state.
In case of a failure, the respective state-machines switch to Passive and the
others react by deactivating the failed channel and activating the respective
fall-back channel. That also occurs if the other state machines do not receive
any signal from the failed state machine. As mentioned, a prioritization is
considered, which enables the operation of a primary system in combination
with the fall-back channel.
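The transition behaviour described above can be sketched as a plain transition function. This is an illustrative Python sketch of the exemplary state machine in Figure 4, not the authors' NuSMV model; the input names (`failure`, `activation`, `others_ready`) are simplified placeholders for the actual signals.

```python
def next_state(state, failure, activation, others_ready):
    """One step of the exemplary state machine (Figure 4).

    failure: a relevant failure is present or reported
    activation: the automated driving mode is requested
    others_ready: the other state machines on the channel signal readiness
    """
    if state == "Init" and not failure:
        return "Ready"
    if state == "Ready" and failure:
        return "Passive"
    if state == "Ready" and activation and others_ready:
        return "Active"
    if state == "Active" and failure:
        return "Passive"
    if state == "Active" and not activation:
        return "Ready"
    if state == "Passive" and not failure:
        return "Init"  # failure cured: back to Init, no direct reactivation
    return state

# Start-up, activation, failure, recovery:
s = "Init"
trace = []
for failure, activation in [(False, False), (False, True), (True, True), (False, False)]:
    s = next_state(s, failure, activation, others_ready=True)
    trace.append(s)
print(trace)  # ['Ready', 'Active', 'Passive', 'Init']
```

The trace mirrors the prose: the machine reaches Ready after diagnostics, Active on activation, Passive on a failure, and returns only to Init (not directly to Active) once the failure is cured.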
It is assumed that the fail-operational behaviour is independent of the
failure leading to the fall-back operation and the operation mode. In the
context of ISO 26262, that means dependent failures must be prevented.
## 4 Verification of the Arbitration Logic
In this section, we present the verification of the arbitration logic. We
first explain the overall requirements of ISO 26262 and then discuss the
individual sections in detail.
The development of critical safety software with regard to industry standards
provides comprehensive and trackable means to fulfill the safety goals. Thus,
structured evidence in the form of a verification is provided. The level of
detail is defined according to the industry standard using integrity levels and
failure probabilities. Therefore, the method must be complete and
well-structured with regard to failures and their relevant behaviour. The safety argumentation
procedure must be consistent within the subsystems. A validation must be
provided and when tools are used, a tool qualification needs to be conducted
as well. The procedure and implementation must be independently reviewed to
assure compliance with ISO 26262.
We used the established and well documented open-source tool NuSMV
(http://nusmv.fbk.eu/) to check the model. This tool can be applied to
bounded and unbounded models and formal specifications using linear-time and
computational-tree logic. The algorithm is based on the construction of binary
decision diagrams and reachability algorithms [6]. The optimisation
strategy of other tools with regard to computing time is to detect failures
earlier. Since our goal is complete verification, this strategy is not
applicable to our research. Instead, we implement the appropriate state-
oriented syntax. See documentation [6] [7] for a detailed explanation.
In this section, we first construct the model. Then, we explain how to check
the model by formulating formal specifications and explaining the
implementation. Additionally, we present our validation procedure and explain
the tool qualification process to meet the requirements of ISO 26262. In the
last section, we present the verification result of a fail-operational driving
system at BMW and discuss its application including a comparison to other
safety analyses.
### 4.1 Modelling of the arbitration logic
This section discusses the modeling of the fail-operational system presented
in section 3. The modeling process of the system behaviour is also affected by
the definition of relevant failures. The relevant failures of the architecture
and external signals need to be considered in the implementation of the state
machine logic. We conduct the failure identification by an exploratory
approach following a failure mode and effect analysis and consider consistency
of the related safety analysis for a comprehensive safety argumentation.
Internal as well as external failures which trigger a reaction in the
arbitration logic are considered in the model. Internal failures include
architectural failures, such as control units, power supply, and
communication. External failures are signals which do not originate in the
arbitration logic. These are reported functional failures and signals for
activation and deactivation. Safety analysis at the software level verifies
the functional trigger. Safety analysis at the component level (equivalent to
the ECU level in our context) ensures a shut-down of the ECU and the
separation of the voltage source for safety-critical failures.
That includes lower-level software failures, hardware failures, CPU parts
failures and communication failures. Central Processing Unit failures and
power supply failures are equivalent to a loss of communication. Both, power
supply and communication failures can affect either single connections or the
complete wiring for both sender and receiver. Sender and receiver failures are
included in ECU failures. The corruption of signals is prevented by integrity
measures, i.e. end-to-end protection, so all of these failures share the
identical failure mode of no transmission.
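The reduction of architectural failure sources to the failure modes observable by the arbitration logic can be sketched as a simple lookup. This is an illustrative Python sketch; the failure names are hypothetical examples, not identifiers from the paper's model.

```python
# Per the text: CPU and power-supply failures are equivalent to a loss of
# communication, sender/receiver failures are included in ECU failures, and
# end-to-end protection excludes corruption, leaving "no transmission" as the
# single communication failure mode. Reported functional failures remain a
# separate external trigger.
FAILURE_EFFECT = {
    "ecu_failure": "no_transmission",
    "cpu_failure": "no_transmission",
    "power_supply_failure": "no_transmission",
    "bus_failure": "no_transmission",
    "single_link_failure": "no_transmission",
    "reported_functional_failure": "functional_failure",
}

def observable_effect(failure: str) -> str:
    """Map an architectural failure source to its modelled effect."""
    return FAILURE_EFFECT[failure]

print(observable_effect("power_supply_failure"))  # no_transmission
```

Collapsing many failure sources onto few observable effects is what keeps the number of distinct failure behaviours in the model manageable.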
The purpose of the arbitration logic is to ensure fail-operational behaviour
and to determine the operation mode with regard to the system configurations.
As explained, this includes the activation, deactivation, and transition to a
fall-back operation in case of a failure. The model consists of modules for
each state machine and communication connection along with a main module,
where the instantiation occurs.
⬇
MODULE M_DS1 (VC1, VC2, EPS1, BRS1, Failure$_{Function DS1}$, Activation)
VAR
DS1: {Init, Ready, Passive, Active};
ASSIGN
init(DS1) := Init;
next(DS1) :=
case
DS1 = Init & !Failure$_{Function DS1}$: {Ready};
DS1 = Ready & Activation & VC1 = Ready & VC2 = Ready: {Active};
DS1 = Ready & Failure$_{Function DS1}$: {Passive};
DS1 = Active & !Activation: {Ready};
DS1 = Active & (Failure$_{Function DS1}$ | VC1 = Passive | BRS1 = Passive | EPS1 = Passive): {Passive};
DS1 = Passive & !Failure$_{Function DS1}$: {Init};
TRUE: DS1;
esac;
Implementation 1: Implementation of a state machine, using the driving
strategy as an example
Each state machine is modeled as an instance with corresponding inputs, as
shown for driving strategy 1 (DS1) in Implementation 1.
The code in Implementation 1 displays a simplified state machine for the
driving strategy similar to Figure 4 and uses a switch-case statement. The
state machines read the other states, symbolised by the respective variables
VC1, VC2, and so on, and switch accordingly. After the system has been started
and no failures have been diagnosed, the state machine signals Ready. The driving
strategy switches to Active when an activation is triggered and the other
state machines signal readiness. If a failure occurs, the system transitions
to the Passive state. In the Active state, the states of the other state
machines on the channel are additionally evaluated to ensure a deactivation of
the complete functional channel. If the failure is cured, the state machine
switches to Init but does not directly reactivate. Additionally, the driver
may deactivate the operation in any state, even though this is not displayed.
In the corresponding Implementation 1, system specifications are commented in
the code to support comprehensibility.
Furthermore, the system architecture must be evaluated because the interface
to the hardware affects functionality. As previously stated, that includes the
partitioning on CPUs, power supply, and communication as shown in Figure 3.
Multiple buses are used to achieve communication between the state machines.
Each communication link between the dispatch of a state to an individual
state-machine is modeled. For robustness, signals are debounced: the receiver
assumes a signal is faulty only after it has not been detected multiple
consecutive times. Since the signals
are discrete and end-to-end protection with adequate integrity is implemented,
the failure mode is evaluated as no signal transmission. The power supply can
affect a complete power circuit or a single supply module and lead to a shut-down
of the corresponding CPUs, also resulting in no signal transmission.
Therefore, we model architectural failures as a trigger for communication
failures. Implementation 2 shows an example of a communication failure of the
connection between the state machine of the vehicle-control unit (VC1) and the
driving-strategy (DS1) as a result of a failure in the power supply.
⬇
init(Comm$_{DS1-VC1}$) := Init;
next(Comm$_{DS1-VC1}$) :=
case
Failure$_{Energy}$ & t$_{debounce}$ >= 3 : {Failure};
Failure$_{Energy}$ & t$_{debounce}$ < 3 : Comm$_{DS1-VC1}$;
TRUE: VC1;
esac;
Implementation 2: Implementation of communication as architectural failure
The communication signal from the vehicle-control unit (VC1) to the
driving-strategy (DS1) passes on the state of VC1 if no failure has been
diagnosed or if the debounce-time has not yet been reached. If that is not the
case, the corresponding state machine switches to the Passive mode. Failures
in control units, communication buses, and communication links are evaluated
in the same fashion. All communication connections are combined in one
communication bus.
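The debouncing behaviour of Implementation 2 can be mirrored in a few lines. This is an illustrative Python sketch, not the NuSMV model; the threshold of 3 cycles follows the SMV fragment above, and the class and signal names are hypothetical.

```python
class DebouncedLink:
    """Forward the sender state; declare Failure only after the signal has
    been missing for `threshold` consecutive cycles (debouncing)."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.missing = 0        # consecutive cycles without a signal
        self.state = "Init"     # last value seen by the receiver

    def step(self, sender_state, signal_present: bool):
        if signal_present:
            self.missing = 0
            self.state = sender_state   # pass the sender state through
        else:
            self.missing += 1
            if self.missing >= self.threshold:
                self.state = "Failure"  # debounce time reached
        return self.state

link = DebouncedLink()
out = [link.step("Ready", present) for present in (True, False, False, False)]
print(out)  # ['Ready', 'Ready', 'Ready', 'Failure']
```

Only the third consecutive missed cycle turns the link state into Failure; a single missed cycle is tolerated, which is the robustness property the debouncing is meant to provide.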
The instantiation of the state machines and the communication bus occurs in
the main module. Implementation 3 shows the structure.
⬇
Failure$_{Function DS1}$
…
Bus: M_Bus(DS1,VC1,BR1,EPS1,VC2,BR2,EPS2)
DS1: M_DS1 (Bus.VC1, Bus.VC2, Bus.EPS1, Bus.BRS1, Failure$_{Function DS1}$,
Activation)
Implementation 3: Main Module
In the first part, all variables and failures are initialised. The second part
shows the instantiation of the bus, which includes the communication
connections, followed by the state machines. The state of each machine is
passed via the bus instead of being read directly, which enables the failure
effects to be modeled.
The model depicts a simplified representation of the arbitration logic. First,
we model the communication connections independently of the actual bus system.
In particular, asynchronous communication behaviour is not modeled.
Furthermore, the computation is conducted cyclically and synchronously, which
generally does not represent the behaviour of distributed systems. A delay of
one time-step is produced when the communication is implemented via the bus
module. We explain the significance of these simplifications in Section 4.5.
### 4.2 Formal Specification of the Safety Goals and Requirements
In addition to modeling, system constraints need to be formally specified to
verify the system. Safety requirements for the arbitration logic are
formulated in a similar fashion to Abdulkhaleq and Wagner [1]. The
requirements are derived from the safety goals in Table I. Then, they are
translated from a narrative form to a formal specification which can then be
used for the model checking procedure.
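The intent of such a translated requirement, e.g. that every failure is followed by the fall-back operation within the fault-tolerant time interval, can be checked over a finite trace with a small helper. This is an illustrative Python sketch of a bounded-response check, simplified relative to the paper's `G [FTTI-5,FTTI+5]` windows; the proposition names and the step-based FTTI are assumptions for the example.

```python
def globally_responds_within(trace, trigger, response, bound):
    """Evaluate G (trigger -> F[0,bound] response) over a finite trace.

    trace: list of sets of atomic propositions holding at each step.
    Every occurrence of `trigger` must be followed by `response`
    within `bound` steps (the trigger step itself counts as step 0).
    """
    for t, props in enumerate(trace):
        if trigger in props:
            window = trace[t:t + bound + 1]
            if not any(response in step for step in window):
                return False
    return True

# Failure in the nominal channel at step 2; FB2 is reached two steps later.
trace = [set(), set(), {"failure_nc"}, set(), {"fb2"}, {"fb2"}]
print(globally_responds_within(trace, "failure_nc", "fb2", bound=3))  # True
print(globally_responds_within(trace, "failure_nc", "fb2", bound=1))  # False
```

A model checker evaluates the same kind of property over all reachable paths rather than a single trace, which is exactly what makes the formal verification complete.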
The requirements describe how the system reacts to an external trigger or a
failure, i.e. which operational mode is reached via the transitions of each
state machine. Specification 1 shows the definition of fall-back mode 1 (FB1).
⬇
init(FB1) := FALSE;
next(FB1) :=
case
DS1 = Active & VC1 = Active & EPS1 = Active &
VC2 != Active & BR2 != Active & EPS2 != Active &
!(VC2 = Ready & BR2 = Ready & EPS2 = Ready) : TRUE;
TRUE: FB1;
esac;
Specification 1: Definition of fall-back operation mode
The state machines in the nominal channel are all in the Active state; the
ones in the fall-back channel are not Active. At least one of them must be
unequal to Ready, since only a detected failure in the fall-back channel leads
to fall-back operation mode 1.
Important safety requirements include activation and deactivation, as well as
the actual fail-operational behavior to react to failures. The following
section describes the formal specifications of the requirements defined from
the safety goals.
Activation is only possible when all state machines signal readiness and a
request for activation has been sent. The requirements correspond to Safety
goal No. 1, which is ready to be implemented. Therefore, no further level of
detail is necessary.
⬇
G (NO -> (O (DS1 = Ready & VC1 = Ready & … &
EPS2 = Ready & Activation = 1)))
Specification 2: Condition for Activation
The formal specification states that whenever normal operation (NO) is active,
all state machines have been in the Ready state and the trigger for the
activation has been present (O, once in the past).
The fault-tolerant behaviour reacts to a failure and is thus essential for
fail-operational systems. The safety goals (e.g. Table I Safety Goal 4) do not
specify the explicit configuration of the system dependent on the failure.
However, that is necessary for verification since the fall-back operation
modes include degraded functionality and thus the operation is prioritized. In
addition, we specify the requirements for single and dual-point failures,
since the target operation mode might differ. The justification for this is
provided later.
As mentioned before, we differentiate between failures that are followed by a
direct reaction, and failures that are debounced for robustness. The
requirement states that for specific failures in the nominal channel, the
system must switch to the fall-back channel.
G (((DS1 | VC1) = Failure |
    Failure_NC.t_debounce = 3)
  -> (G [FTTI-5, FTTI+5] (FB2)))
Specification 3: Switch-over to fall-back after single point failure
Thus, the formal specification includes the state machines in the nominal
channel whose failures lead to fall-back operation 2, together with the
corresponding debounced failures, e.g. in the power supply. In this case the
arbitration logic must activate the fall-back operation using the fall-back
channel. This has to occur within the fault-tolerant time interval in all
cases (G, globally).
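The interval operator in Specification 3 can be read as a trace property: after a qualifying failure at step t, FB2 must hold at every step whose offset from t lies in [FTTI-5, FTTI+5]. A minimal Python sketch of this check; the trace encoding and the small toy numbers are illustrative assumptions.

```python
def switch_within_ftti(trace, ftti, tol=5):
    """Check that FB2 holds throughout [ftti-tol, ftti+tol] after a failure.

    trace: list of dicts with boolean keys 'failure' and 'FB2'.
    """
    for t, step in enumerate(trace):
        if step["failure"]:
            lo = t + ftti - tol
            hi = min(t + ftti + tol, len(trace) - 1)
            # FB2 must hold at every step inside the tolerance window.
            if any(not trace[u]["FB2"] for u in range(max(lo, 0), hi + 1)):
                return False
    return True

# Toy trace with ftti=2, tol=1: FB2 must hold at offsets 1..3 after a failure.
ok = [{"failure": True, "FB2": False}] + [{"failure": False, "FB2": True}] * 3
print(switch_within_ftti(ok, ftti=2, tol=1))  # True
```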
For double failures, we need a condition to verify that more than one failure
has occurred (see Specification 4). The upper limit is set by the failure
combinations, a subset of failures, which are verified via a negated
exclusive-or construction. A single-point failure needs to be excluded.
G ((((DS1 | VC1) = Failure
      | Failure_NC.t_debounce = 3)
    & !((DS1 xor VC1) = Failure
        xor Failure_NC.t_debounce = 3))
  -> (G [FTTI-5, FTTI+5] (FB2)))
Specification 4: Switch-over to fall-back after multi-point failure
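The negated exclusive-or construction can be illustrated for two failure sources: "some failure present, but not exactly one" is equivalent to "at least two". A hypothetical Python rendering (the boolean encoding is ours, not the SMV model):

```python
def multi_point_xor(a, b):
    """Specification 4 pattern for two failure sources: some failure is
    present, but the exactly-one case (the xor) is negated away."""
    return (a or b) and not (a ^ b)

def is_multi_point(failures):
    """General reading: a multi-point failure means at least two sources fail.

    failures: iterable of booleans, one per monitored failure source.
    """
    return sum(failures) >= 2

print(multi_point_xor(True, True), multi_point_xor(True, False))  # True False
```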
Other specifications are not directly identified from the safety goals but
result from the safety concept of the arbitration logic. It is also necessary
to prevent the system from toggling between the channels and operation modes
even in the case of oscillation failures to ensure a stationary state.
G ((FB1 -> !FB2) & (FB2 -> !FB1) &
((FB1 | FB2) -> !NO))
Specification 5: Prevention of Toggling and Reactivation
Therefore, we specify that a switch from the primary fall-back operation to
the secondary fall-back operation and vice versa, as well as a switch from any
fall-back operation back to the nominal operation mode, are both treated as
violations.
We also exclude operation in more than one operation mode at a time. This
could also be covered in the mode definitions themselves, as in
Specification 1.
G (!(NO & FB1) & !(NO & FB2) & !(FB1 & FB2))
Specification 6: Exclusiveness of operation modes
Specification 6 must be valid globally and ensures that only one mode is
active.
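Specification 6 is a simple mutual-exclusion invariant. Enumerating all mode assignments shows that exactly four of the eight combinations satisfy it: no mode active, or exactly one. A sketch (the boolean encoding is an illustrative assumption):

```python
from itertools import product

def exclusive(no, fb1, fb2):
    """Invariant of Specification 6: at most one operation mode is active."""
    return not (no and fb1) and not (no and fb2) and not (fb1 and fb2)

# Enumerate all 2^3 mode assignments and keep those satisfying the invariant.
valid = [m for m in product([False, True], repeat=3) if exclusive(*m)]
print(len(valid))  # 4
```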
Deactivation, corresponding to Safety Goal No. 2, is formulated similarly to
the activation in Specification 2. The target state for each state machine
must not equal Active once the corresponding deactivation signal has been
received.
### 4.3 Extent and Implementation of the Model Checking
The scope of analysis is given by the requirements of ISO 26262 [11]. First,
we define the scope and extent of the analysis. Then, we explain the
implementation and in particular, the assignment of specifications to
failures.
In ISO 26262, fail-operational fault tolerance is defined as the ability for a
functionality to operate in the presence of one or more faults. That means the
analysis needs to cover at least single point failures. The ISO 26262
considers multi-point failures with higher order than two as safe, unless the
safety concept requires the contrary. In dual-point failures, plausibility has
to be evaluated based on the probability of occurrence and dependencies. Since
a fail-operational driving system fulfills both requirements, double-faults do
not need to be fully controlled. However, the specifications should cover all
reactions, and the analysis of dependent failures is based on the failures
leading to a loss of functionality. All double-fault scenarios are evaluated.
Not every constraint is relevant to every failure combination. To limit the
constraints to the corresponding subset of each failure combination, we
separate the failure scenarios. In accordance with ISO 26262, failure
combinations are limited to second order to further counteract the state space
explosion. In addition, this separation allows us to verify the target channel
without limiting the failure scenarios in the constraints. Table II shows a
subset of the matrix to determine the relevant constraints for each failure
combination, depending on the target condition.
Table II: Matrix of failure combinations and target channel

1st Fault \ 2nd Fault | Steering Function 1 | Power Supply fall-back
---|---|---
Steering Function 1 | fall-back operation mode 3 | Inactive
Power Supply fall-back | Inactive | fall-back operation mode 1
The primary faults are listed vertically; the secondary faults are listed
horizontally. Single-point failures lie on the diagonal; the other cells list
double-point failures. Table II shows that a failure of the primary steering
function leads to a fall-back operation in mode 3. This leads to
Specification 4. The matrix is not symmetric in general. For example, an
emergency braking function might only be triggered by a certain order of
events, or by a prioritization of components.
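The matrix can be encoded as a lookup from an ordered (1st fault, 2nd fault) pair to the target operation mode. The names and dictionary below are a hypothetical encoding of the Table II subset, not the authors' implementation; a `None` second fault denotes a single-point failure (the diagonal).

```python
# Illustrative encoding of Table II: ordered failure pairs -> target mode.
TARGET = {
    ("steering_1", None): "fall-back mode 3",       # single-point failure
    ("power_supply_fb", None): "fall-back mode 1",  # single-point failure
    ("steering_1", "power_supply_fb"): "inactive",  # double-point failure
    ("power_supply_fb", "steering_1"): "inactive",  # order matters in general
}

def target_mode(first, second=None):
    """Look up the target operation mode for a failure combination."""
    return TARGET[(first, second)]

print(target_mode("steering_1"))  # fall-back mode 3
```

Keeping the two orderings as separate keys preserves the asymmetry mentioned in the text, even though this particular subset happens to be symmetric.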
The failure times for the primary and secondary failures are limited in order
to verify the order of the failures. The fault-tolerant time interval refers
to the last possible occurrence of the second failure and is implemented by a
counter. As shown in Section 4.1, we verify the state over a time interval to
ensure a stationary state. Therefore, we can use bounded model checking, which
guarantees termination of the checking procedure. The search depth is defined
according to the stationary state. All failures can occur at any time, for any
possible time span within the defined interval, and do not disappear. This
results in an approximated asynchronous behaviour, which compensates for the
simplifications of the model.
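The bounded-checking idea used here, unrolling the transition relation to a fixed search depth and testing a property on every state reached, can be sketched generically. The toy four-state transition system below is an illustrative assumption, not the arbitration model.

```python
def bounded_check(initial, step, invariant, depth):
    """Explore all states up to `depth` transitions from `initial`; return
    True iff `invariant` holds on every state visited (bounded checking)."""
    frontier, seen = {initial}, {initial}
    for _ in range(depth):
        if not all(invariant(s) for s in frontier):
            return False
        # Expand one transition step; drop states already explored.
        frontier = {t for s in frontier for t in step(s)} - seen
        seen |= frontier
    return all(invariant(s) for s in frontier)

step = lambda s: {(s + 1) % 4}  # toy system: a 4-state cycle
print(bounded_check(0, step, lambda s: s < 4, depth=10))  # True
```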
### 4.4 Validation and Tool Qualification
A reliable implementation requires validation to ensure correctness.
Furthermore, ISO 26262 [11] requires a tool qualification as proof of reliable
verification results.
#### Validation Procedure
The validation procedure needs to address the model and the formal
specifications.
We use individual failure combinations as stimuli and verify the model
behaviour. The target states are evaluated for every failure combination
according to Table II. The complete model validation, however, is conducted
during verification. A precondition for this is that the constraints cover
every failure, debounced in the relevant cases. Those constraints are similar
to Specifications 3 and 4. This only holds when violations are detected
correctly, as addressed in the following section.
The validation of the formal specifications requires more manual effort. We
use failure injections based on equivalence groups in accordance with Hoffmann
[10] to show the appropriate formulation. In addition, expert reviews ensure
completeness. The formal specifications shown in Subsection 4.1 consist of
preconditions and target criteria; we validate both to identify failures of
the arbitration logic. First, we identify a subset of failure combinations
which cover all specifications. Then, we conduct the validation steps listed
in Table III.
Table III: Matrix for validation of specifications

condition \ target criteria | true | false
---|---|---
true | ok | !target criteria
false | - | !target criteria & !condition
As a baseline situation, we use the specifications without any failure
injections. The verification result should be no violation. There is no
benefit in solely manipulating the condition since we check for violations.
Not fulfilling the condition would not change the result. However, a negation
of the target criteria would cause a failure. Therefore, we can show that it
is formulated correctly. We negate both the target criteria and the condition
to ensure the accuracy of the condition. This prevents a failure for the same
failure combination since the condition is not fulfilled. We conduct reviews
to ensure the completeness of the conditions. This allows us to validate only
a subset of conditions.
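The validation matrix in Table III mutates a specification of the form "condition implies target": the baseline must pass, negating the target must fail, and negating both must pass again because the condition no longer fires. A minimal sketch; the trace and the lambda encoding are illustrative assumptions.

```python
def holds(trace, condition, target):
    """A specification 'condition -> target' holds on a trace iff the
    target is satisfied at every step where the condition fires."""
    return all(target(s) for s in trace if condition(s))

trace = [{"failure": True, "FB2": True}, {"failure": False, "FB2": False}]
cond = lambda s: s["failure"]
tgt = lambda s: s["FB2"]

baseline = holds(trace, cond, tgt)                         # baseline: passes
negated_target = holds(trace, cond, lambda s: not tgt(s))  # must be detected
negated_both = holds(trace, lambda s: not cond(s), lambda s: not tgt(s))
print(baseline, negated_target, negated_both)  # True False True
```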
#### Tool Qualification
ISO 26262 requires a qualification of the software tools used for the
development of safety-critical products. The qualification is necessary
whenever a work product of the safety life-cycle, such as a verification,
relies on software tools, as is the case in our study; it must ensure the
reliability of the results. The qualification needs to incorporate NuSMV as
well as the control program used to generate the relevant checking cases. We
first explain the qualification procedure according to ISO 26262 and then how
we implement it in our use case.
The ISO 26262 provides a guideline for the qualification process, which covers
an analysis and verification of the impact, either through a validation
procedure or a guideline for the development.
First, the relevant use cases and the respective failure cases are
identified; the software tool is treated as a black-box system. An
evaluation of the tool impact (TI) and the tool error detection level (TD),
resulting in a tool confidence level (TCL), is used to determine the
necessary qualification measures. Figure 5 gives an overview of the
evaluation process as described in ISO 26262.
Figure 5: Evaluation of the Tool Qualification Level [11]
The levels consist of several gradations; however, only level 1 is adequate
for safety-critical utilization without further measures. First, the tool
impact level is determined for every use case. If there is a safety-critical
impact because of failures during the verification process, the tool error
detection level is evaluated. If safety-critical failures are not
systematically and comprehensively detected, qualification is required. It
can be achieved through increased confidence from use, evaluation of the tool
development process, validation of the software tool, or development in
compliance with a safety standard.
We illustrate the procedure for our verification tool in the following
section. The relevant use case is the detection of violations. Detecting a
false-negative is critical because it would lead to potential faults in the
product. Detecting a false-positive requires more effort in the evaluation but
it is not a safety-critical issue. A false-negative verification of critical
failures can occur because the model checking procedure was not executed or
because the specifications were applied incorrectly.
Since we use an available tool, we conduct a qualification by validation using
a test-set. The aim of the tool qualification is the proof of the tool
accuracy and not the full coverage of all test cases; a subset is sufficient.
The first relevant cause, non-execution of the model checking, is addressed
by defining the target state in the formal specifications and by the
validation of the specifications described above. Again, this is only
applicable under the
premise that failures are detected. Therefore, we want to prove that this is
the case during the verification via model checking. We use failure injection
in the state-machines, implementation of the communication connections, and
manipulation of the matrix for failure combinations. In addition, faulty
usage of the tool is minimized by restricting the group of users.
Compliance with ISO 26262 is obtained by validation and tool certification.
Reviews confirm this process. Our tool can thus be utilized for the
verification of the safety critical system in accordance with the industry
standards.
### 4.5 Results of the model checking
In the following section, we present the results and findings of the
verification process. We applied our procedure to a fail-operational driving
system of BMW. The architecture is similar to that presented in Section 4. The
system consists of seven coupled state machines with thirty individual states.
Those result in more than six thousand potential combinations. Through
prioritization, four operation modes are possible (the nominal mode and three
fall-back modes). In addition, five communication buses and 25 individual
signal connections are evaluated. The system’s failure matrix has a size of
$48\times 48$, including the power supply. The failure types are described in
subsection 4.1. The failure matrix is asymmetric due to the prioritization of
primary actuators.
First, we present the findings and results. Then, we evaluate the application
of the verification approach.
#### Contribution to the Safety Case
Through the verification process, we were able to verify the system’s
functional safety in accordance with the industry standard for functional
safety, ISO 26262. The verification directly supports the safety case of a
fail-operational driving system, which is a mandatory requirement for the
homologation of automotive systems. Compliance with the industry standard was
confirmed by an independent review, and further testing verified the results.
We improved the quality of the system specifications by making the
requirements unambiguous. However, we did identify one case which led to an
unspecified state: a theoretical failure, requiring specific timing during
the recovery of a communication bus, led to the activation of two operation
modes. Other than that, no unspecified or ambiguous behaviour was detected.
This demonstrates the high level of maturity of this project.
#### Application of the Approach
In addition to improving the evaluation process of critical safety
requirements, we evaluated the implementation of the verification system. The
application consists of two main aspects: implementation effort and the
calculation time. As stated, we used the open-source tool NuSMV [6] for
verification and a simple control program to depict failure cases.
The implementation of the model was fairly straightforward since the
state-based formulation of the system’s requirements is easily transferable
to the model checking syntax. The implementation of the architecture required
an adequate concept, but limited effort, since all architectural failures
were treated similarly through a loss of communication.
The identification of the formal specifications of the requirements required
more effort. This is reflected in the literature [21] [22]. Knowledge of the
LTL and CTL syntax, the model checking approach, and the system to be
modeled is required. We iteratively reworked the constraints using the
validation procedure and checked for plausibility in the case of deviations.
Despite our understanding of the issues, identifying the formal specifications
took a significant amount of time due to the system’s complexity.
Modifications to the requirements or the model require rework of the
specifications as well.
The computing time for a given model depends on the number of constraints,
their formulation, and the number of possible states. The latter is
determined by the model and the combination of failures. The formal
specifications are grouped by the resulting operation mode, as described in
Section 4.3.
Figure 6 and 7 show the distribution of computing times in seconds for single
and double failures depending on the subsequent operation mode using boxplots.
The whiskers display the minimum and maximum values, whereas the box shows the
respective quartiles. We conducted the verification process on a conventional
computer (2.4 GHz, 8 GB RAM, 4 cores, 64 bit). We evaluated all failures and
failure combinations resulting in 48 different single failures and 2256 double
faults. A failure case includes the verification of all possible occurrences
of that failure and, therefore, requires multiple sequences to be checked.
These cases are not equally distributed among the operation modes, but at
least ten single failures and 120 double failures were evaluated.
Figure 6 displays the computing time in seconds for the single failures,
depending on the operation mode entered as a reaction to the failure. In this
case, the number of constraints differs only slightly. The computing time for
the nominal channel is slightly higher than that for failures leading to
fall-back 1 or fall-back 2; the mean time is between 220 s and 240 s. It
increases when verifying failures leading to fall-back mode 3, which is the
operation in the fall-back channel. The increased time is due to the number
of required state transitions: the deactivation of the nominal channel and
the activation of the fall-back channel. The variance of the computing time
among failures with the same target channel depends on the failure and the
resulting state sequences. For example, a loss of the power supply affects
all connected state machines. The checking of the single-point failures took
three and a half hours.
Figure 6: Computing time for the model checking of single-point failures
The computing time is much higher for double-failures. This is due to the
larger state space spanned by the failure combinations.
Figure 7: Computing time for the model checking of double-point failures
Failure combinations not leading to a switch over take approximately the same
computing time for single and double-failures since the state space only
increases with increased failure states. In case of a switch-over the
computing time is longer. The mean time increases by more than 500 s. This
results from the increase in states and sequences because the constraints are
formulated similarly. In contrast, a system failure is verified in under
100 s. In that case, only one constraint, the attainment of the target state
including the deactivation of all state machines, is verified. The checking
of the double-point failures took seven days.
A comparison to other safety analysis techniques is necessary when evaluating
the application of our approach. Both inductive and deductive analysis are
used in the automotive industry to comply with ISO 26262. Since the goal in
our use case is to prevent systematic failures and investigate failure
reactions, probabilities are not relevant. It is common in the automotive
industry to use an inductive method such as the failure mode and effect
analysis. Although this method evaluates relevant failures and stationary
states, it cannot cope with the number of possible state transitions, which
may easily reach billions of possible sequences. Furthermore, such safety
analyses often take months and may need to be carried out over the complete
product life-cycle, since they do not provide fast feedback that can be
directly incorporated into the product development.
#### Threats to Validity
Threats to internal validity may lead to unreliable or faulty results.
Threats to external validity are also discussed.
Potential failures may arise in each step of the approach, during the modeling
and formal specifications procedure, as well as during the validation process.
The scope of the model is based on the identification of failures in
compliance with ISO 26262. We use a structured procedure to address these
threats. The ISO standard requires verification and validation of the
specifications as well as confirmation of every work product by independent
reviews. The review includes boundary conditions and simplifications.
Simplifications relate to the system behavior and in particular, the
corresponding timing, as mentioned in Section 4.1. First, the model is
implemented with synchronous behavior of the state machines. That means that
all outputs are determined in the same cycle. Then, the communication occurs
via an intermediate step using a communication bus. This causes a delay.
Different bus topologies are not considered. Every failure scenario, including
all timing combinations, is checked and validated. The assumptions are thus
substantiated. A stationary state is needed when using bounded model checking.
The complexity of the arbitration logic corresponds to the magnitude of
problems in the automotive industry. However, our approach is transferable to
other applications since it follows a systematic procedure as defined by ISO
26262. Nevertheless, our results are based on the verification of a specific
system. The simplifications of this model will determine where else it can be
used. The assumptions must be evaluated for every application since the timing
behaviour might differ. In conclusion, we have developed a procedure, which
can be effectively used to meet the requirements of the ISO 26262 norm in the
automotive industry. The time constraints need to be evaluated for each use-
case.
## 5 Conclusion and Future Work
This work presents an approach to verify a fail-operational automotive system
using model checking. The approach complies with the industry standard for
functional safety, ISO 26262, and strengthens the safety case by providing
evidence of functional safety through the verification of a fail-operational
system.
Related work addresses the verification of fail-operational systems mainly by
focusing on reliability. Previous applications of formal methods in the
automotive context have only addressed problems of low complexity. Our
application addresses industry-relevant problems of high
complexity.
In particular, our approach includes all the necessary steps to check the
model and verify the safety system in compliance with ISO 26262. The model of
the arbitration logic with regard to safety critical failures, formal
specifications, is presented. We used the existing literature to define the
system states by consolidating the individual states of each element. We
defined the specification terms consisting of preconditions and target
criteria. This enabled a systematic verification of the safety criteria and
defined the scope of the verification in compliance with ISO 26262. Our
approach includes a method for allocating constraints to failure combinations
via the target operation mode, limiting the model checking effort and
providing evidence for functional safety in compliance with ISO 26262.
Therefore, we
dealt with the state space explosion problem and overcame the limitations
described in the literature. We did this by limiting the failure combinations
to a required subset in compliance with ISO 26262, segmenting the analysis,
and using bounded model checking. The verification covers both the model and
the formal specifications. Furthermore, we addressed the tool qualification,
which
is a requirement of ISO 26262 to utilize software tools in the development of
safety critical systems. By verifying an actual arbitration logic, we could
clarify the requirements and more importantly, identify failures in the
system. Furthermore, we were able to show the application and confirm
compliance with the industry standard ISO 26262 with independent reviews.
In summary, our work contributes to three major aspects of fail-operational
systems. First, we focused on systematic failures, not reliability analysis,
during the verification of fail-operational automotive systems. Second, our
approach fully complies with ISO 26262, as confirmed by external reviews;
this is in contrast to related work, which has only partially complied with
ISO 26262. Third, our work uses a highly complex use case from industry to
check the models for fail-operational systems in the automotive industry.
Future work should focus on optimizing computing time and integrating the
analysis into the development process. We observed that switching to a
Linux-based system, for example, and adding computing power drastically
reduced the computing time. Formal verification could be integrated into a
model-based analysis using MATLAB/Simulink, where initial research has
already been conducted. The implementation of safety requirements should be
conducted stepwise, as formal verification can support all phases of product
maturity.
## Acknowledgments
The authors would like to thank all the colleagues at the Institute for
Software Engineering at the University of Stuttgart and BMW, for their support
regarding this work, including those who are not specifically listed as
authors.
## References
* [1] Abdulkhaleq A., Wagner S. Integrated Safety Analysis Using Systems-Theoretic Process Analysis and Software Model Checking. In: Koornneef F., van Gulijk C. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2014. Lecture Notes in Computer Science, vol 9337. Springer, Cham. (2015) doi:10.1007/978-3-319-24255-2_10
* [2] Aniculaesei, A., Howar, F., Denecke, P., Rausch, A.: Automated generation of requirements-based test cases for an adaptive cruise control system. In: 2018 IEEE Workshop on Validation, Analysis and Evolution of Software Tests (VST). 2018 IEEE Workshop on Validation, Analysis and Evolution of Software Tests (VST), Campobasso, 20.03.2018 - 20.03.2018, pp. 11–15. IEEE (2018). doi: 10.1109/VST.2018.8327150
* [3] Augusto da Silva, F., Bagbaba, A.C., Hamdioui, S., Sauer, C.: Combining Fault Analysis Technologies for ISO26262 Functional Safety Verification. In: 2019 IEEE 28th Asian Test Symposium (ATS). 2019 IEEE 28th Asian Test Symposium (ATS), Kolkata, India, 10.12.2019 - 13.12.2019, pp. 129–1295. IEEE (2019). doi: 10.1109/ATS47505.2019.00024
* [4] G. Bahig and A. El-Kadi, Formal Verification of Automotive Design in Compliance With ISO 26262 Design Verification Guidelines, in IEEE Access, vol. 5, pp. 4505-4516 (2017) doi: 10.1109/ACCESS.2017.2683508.
* [5] Becker, K., Voss, S., Schätz, B.: Formal analysis of feature degradation in fault-tolerant automotive systems. Science of Computer Programming 154, 89–133 (2018). doi: 10.1016/j.scico.2017.10.007
* [6] Cavada, R., Cimatti, A., Jochim, C., Keighren, G., Olivetti, E., Pistore, M., Roveri, M., Tchaltsev, A.: NuSMV 2.6 User Manual. FBK-irst, Trento, Italy. http://nusmv.fbk.eu/NuSMV/userman/v26/nusmv.pdf (2018)
* [7] Cavada, R., Cimatti, A., Keighren, G., Olivetti, E., Pistore, M., Roveri, M.: NuSMV 2.5 Tutorial. ITC-irst, Trento, Italy. http://nusmv.fbk.eu/NuSMV/tutorial/v25/tutorial.pdf (2004)
* [8] Clarke, E.M., Henzinger, T.A., Veith, H., Bloem, R.: Handbook of Model Checking. Springer International Publishing, Cham (2018) doi:10.1007/978-3-319-10575-8
* [9] Fritzsch, J., Schmid, T., Wagner, S.: Experiences from Large-Scale Model Checking: Verification of a Vehicle Control System. https://arxiv.org/abs/2011.10351
* [10] Hoffmann, Dirk W.: Software-Qualität. Springer Vieweg, Heidelberg: Springer Berlin Heidelberg.(2013) doi: 10.1007/978-3-642-35700-8
* [11] International Organization for Standardization: ISO 26262: 2018 Road vehicles - Functional safety. ISO, Genf (2018)
* [12] B. Sari, Fail-operational Safety Architecture for ADAS/AD Systems and a Model-driven Approach for Dependent Failure Analysis. [S.l.]: Morgan Kaufmann, (2020) doi: 10.1007/978-3-658-29422-9
* [13] Isermann, R. Fehlertoleranz bei mechatronischen Systemen. Forsch Ingenieurwes 80, 41–56 (2016). https://doi.org/10.1007/s10010-016-0200-2
* [14] Kohn, A., Kabmeyer, M., Schneider, R., Roger, A., Stellwag, C., Herkersdorf, A.: Fail-operational in safety-related automotive multi-core systems. In: 10th IEEE International Symposium on Industrial Embedded Systems (SIES). 2015 10th IEEE International Symposium on Industrial Embedded Systems (SIES), Siegen, 08.06.2015 - 10.06.2015, pp. 1–4. IEEE (2015). doi: 10.1109/SIES.2015.7185051
* [15] Kuemmel, M.: Verfahren zur fehlerrobusten Regelung von hochautomatisierten Fahrzeugen Patent DE102017218395A1
* [16] Kaiser, B., Schneider D., Adler R., Domis D., Möhrle F., Berres A., Zeller M., Höfig K., Rothfelder M.: Advances in Component Fault Trees. In: Safety and Reliability ‐ Safe Societies in a Changing World: Proceedings of ESREL 2018, June 17-21, 2018, Trondheim, Norway. Taylor & Francis (CRC Press) (2018)
* [17] Kölbl M., Leue S. (2018) Automated Functional Safety Analysis of Automated Driving Systems. In: Howar F., Barnat J. (eds) Formal Methods for Industrial Critical Systems. FMICS 2018. Lecture Notes in Computer Science, vol 11119. Springer, Cham. (2018) https://doi.org/10.1007/978-3-030-00244-2_3
* [18] Kron A., Schaffer I., Tchai J., Meitinger KH., Schraufstetter S. (2019) Motion control solutions for automated driving systems at BMW. In: Bargende M., Reuss HC., Wagner A., Wiedemann J. (eds) 19. Internationales Stuttgarter Symposium. Proceedings. Springer Vieweg, Wiesbaden. (2019) https://doi.org/10.1007/978-3-658-25939-6_1
* [19] Leitner-Fischer, F., Leue, S.: The QuantUM approach in the context of the ISO Standard 26262 for automotive systems. Technical Report, Chair for Software Engineering, University of Konstanz. (2011)
* [20] Moy Y., Ledinot E., Delseny H., Wiels V. and Monate B., ”Testing or Formal Verification: DO-178C Alternatives and Industrial Experience,” in IEEE Software, vol. 30, no. 3, pp. 50-57 (2013) doi: 10.1109/MS.2013.43.
* [21] Nyberg, M., Gurov, D., Lidström, C., Rasmusson, A., Westman, J.: Formal Verification in Automotive Industry: Enablers and Obstacles. In: Margaria, T., Steffen, B. (eds.) Leveraging Applications of Formal Methods, Verification and Validation. Industrial Practice, vol. 11247. Lecture Notes in Computer Science, pp. 139–158. Springer International Publishing, Cham (2018)
* [22] Nyberg M., Gurov D., Lidström C., Rasmusson A., Westman J. (2018) Formal Verification in Automotive Industry: Enablers and Obstacles. In: Margaria T., Steffen B. (eds) Leveraging Applications of Formal Methods, Verification and Validation. Industrial Practice. ISoLA 2018. Lecture Notes in Computer Science, vol 11247. Springer, Cham. (2018) https://doi.org/10.1007/978-3-030-03427-6_14
* [23] Niedballa D., Reuss H.-C., Concepts of functional safety in E/E-architectures of highly automated and autonomous vehicles, in Proceedings, 20. Internationales Stuttgarter Symposium: Automobil- und Motorentechnik, M. Bargende, H.-C. Reuss, and A. Wagner, Eds., 1st ed., Wiesbaden: Springer Fachmedien Wiesbaden GmbH; Springer Vieweg, 2020, pp. 457–470.
* [24] H. Winner, S. Hakuli, F. Lotz, and C. Singer, Eds., Handbuch Fahrerassistenzsysteme: Grundlagen, Komponenten und Systeme für aktive Sicherheit und Komfort, 3rd ed. Wiesbaden: Springer Vieweg (2015) [Online]. doi: 10.1007/978-3-658-05734-3
* [25] SAE International Standard: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems, 30 September 2016
* [26] Schnellbach, A.: Fail-operational automotive systems. Doctoral Thesis, TU Graz (2016)
* [27] M. Singh, A. K. Sharma, and R. Saxena, “Why Formal Methods Are Considered for Safety Critical Systems?,” JSEA, vol. 08, no. 10, pp. 531–538, 2015, doi: 10.4236/jsea.2015.810050.
* [28] Stolte, T., Bagschik, G, Maurer, M.: Safety goals and functional safety requirements for actuation systems of automated vehicles, in 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 2016, pp. 2191–2198.
* [29] Todorov, V., Boulanger, F., Taha, S.: Formal verification of automotive embedded software. In: Formalise: 6th International Conference on Formal Methods in Software Engineering, pp. 84–87, Gotheborg, Sweden (2018) doi: 10.1145/3193992.3194003
Tobias Schmid received his Bachelor’s and Master’s degree from the Technical
University of Munich in mechanical engineering. Currently he is a PhD
candidate at the Institute for Software Technology at the University of
Stuttgart. He conducts research on safety analysis of fail-operational
driving systems in collaboration with the BMW Group.

Stefanie Schraufstetter holds a diploma in technical mathematics and a PhD in
computer science, both from the Technical University of Munich. After
receiving her PhD, she joined the BMW Group, where she is currently working
as an R&D engineer in the field of highly autonomous driving and functional
safety.

Jonas Fritzsch, M.Sc., researches software engineering and architectures at
the University of Stuttgart. He gathered over ten years of experience in
enterprise software development while working for HPE (formerly HP). As a
university lecturer he teaches programming and algorithms to computer science
students.

Dominik Hellhake received his master’s degree from the Technical University
of Munich in software engineering. He is a PhD candidate at the Institute for
Software Technology at the University of Stuttgart. His research concerns a
systematic approach for integration testing of distributed software systems
in the context of automotive series development.

Greta Koelln received her master’s degree from the Otto von Guericke
University Magdeburg in mechanical engineering. She is a PhD candidate at the
Department of Mechanical Engineering at the University of Magdeburg in
collaboration with the BMW Group. She conducts research in the field of
safety analysis with a focus on System-Theoretic Process Analysis for
automated/autonomous vehicles.

Stefan Wagner is a full professor of empirical software engineering and
managing director of the Institute of Software Engineering at the University
of Stuttgart, Germany. He studied computer science in Augsburg and Edinburgh
and received a PhD in software engineering from TU Munich. His research
interests include software quality, requirements engineering,
safety/security engineering and agile/continuous software development. He is
a member of IEEE, ACM and the German GI.
# Asymptotic expansion of Fourier coefficients of reciprocals of Eisenstein series
Bernhard Heim, Faculty of Mathematics, Computer Science, and Natural Sciences, RWTH Aachen University, 52056 Aachen, Germany<EMAIL_ADDRESS>
Markus Neuhauser, Kutaisi International University, 5/7 Youth Avenue, Kutaisi, 4600 Georgia<EMAIL_ADDRESS>
###### Abstract.
In this paper we give a classification of the asymptotic expansion of the
$q$-expansion of reciprocals of Eisenstein series $E_{k}$ of weight $k$ for
the modular group $\mathop{\rm SL}_{2}(\mathbb{Z})$. For $k\geq 12$ even, this
extends results of Hardy and Ramanujan, and Berndt, Bialek and Yee, utilizing
the Circle Method on the one hand, and results of Petersson, and Bringmann and
Kane, developing a theory of meromorphic Poincaré series on the other. We
follow a uniform approach, based on the zeros of the Eisenstein series with
the largest imaginary part. These special zeros provide information on the
singularities of the Fourier expansion of $1/E_{k}(z)$ with respect to
$q=e^{2\pi iz}$.
###### Key words and phrases:
Eisenstein series, Fourier coefficients, meromorphic modular forms,
polynomials, Ramanujan, recurrence relations
###### 2010 Mathematics Subject Classification:
Primary 11F30, 11M36, 26C10; Secondary 05A16, 11B37
## 1. Introduction
In this paper we provide a new approach to determine the main asymptotic
growth terms in the Fourier expansion of the reciprocals $1/E_{k}$ of
Eisenstein series of weight $k$. We refer to [BFOR17], Chapter 15, for a very good introduction to the topic.
Eisenstein series are defined by
(1)
$E_{k}(z):=1-\frac{2k}{B_{k}}\sum_{n=1}^{\infty}\sigma_{k-1}\left(n\right)\,q^{n}\quad(q:=e^{2\pi
iz}).$
They are modular forms [O03] on the upper half of the complex plane
$\mathbb{H}$. The algebra of modular forms with respect to the modular group
$\mathop{\rm SL}_{2}(\mathbb{Z})$ is generated by $E_{4}$ and $E_{6}$. As
usual $B_{k}$ denotes the $k$-th Bernoulli number and
$\sigma_{\ell}\left(n\right):=\sum_{d\mid n}d^{\ell}$.
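As a quick illustration (not part of the original paper; the function names are ours), the coefficients in (1) can be generated exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli(k):
    # B_0, ..., B_k via the recurrence sum_{j=0}^{n} C(n+1, j) B_j = 0 (B_1 = -1/2)
    B = [Fraction(1)]
    for n in range(1, k + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1))
    return B[k]

def sigma(ell, n):
    # divisor power sum sigma_ell(n) = sum of d^ell over the divisors d of n
    return sum(d ** ell for d in range(1, n + 1) if n % d == 0)

def eisenstein_coeffs(k, N):
    # q-expansion of E_k = 1 - (2k/B_k) sum_{n>=1} sigma_{k-1}(n) q^n, up to q^N
    c = Fraction(2 * k) / bernoulli(k)
    return [Fraction(1)] + [-c * sigma(k - 1, n) for n in range(1, N + 1)]
```

For instance, `eisenstein_coeffs(4, 2)` returns the familiar $1+240q+2160q^{2}$.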
Hardy and Ramanujan [HR18B] launched, in their last joint paper, the study of coefficients of meromorphic modular forms with a simple pole in the standard
fundamental domain $\mathbb{F}$. They demonstrated that, similar to their
famous asymptotic formula for the partition numbers
(2)
$p(n)\sim\frac{1}{4n\sqrt{3}}\,e^{\pi\sqrt{\frac{2}{3}n}},\qquad\sum_{n=0}^{\infty}p(n)\,q^{n}:=\frac{q^{\frac{1}{24}}}{\eta(z)},$
which gave birth to the Circle Method [HR18A], formulas for the
coefficients of reciprocals of modular forms can be obtained. The reciprocal
of the Dedekind $\eta$-function is a weakly holomorphic modular form of weight $-1/2$ on
$\mathbb{H}$.
Hardy and Ramanujan focused on the reciprocal of the Eisenstein series
$E_{6}$. They proved an explicit formula for the coefficients. Shortly
afterwards, in a letter to Hardy, Ramanujan stated several formulas of the
same type, including the $q$-expansion of $1/E_{4}$. No proofs were given.
Bialek in his Ph.D. thesis, written under the guidance of Berndt [BB05], and
finally Berndt, Bialek and Yee [BBY02] have proven the claims in the letter of
Ramanujan by extending the methods applied in [HR18B].
We illustrate the case $k=4$. Following Ramanujan, we frequently put
$\mathrm{E}_{k}(q_{z}):=E_{k}(z)$ for $q=q_{z}:=e^{2\pi iz}$. Let $\rho$ be
the unique zero of $E_{4}$ in $\mathbb{F}$. Let $\lambda$ run over the
integers of the form $3^{a}\prod_{\ell=1}^{r}p_{\ell}^{a_{\ell}}$, where $a=0$
or $1$. Here, $p_{\ell}$ is a prime of the form $6m+1$, and
$a_{\ell}\in\mathbb{N}_{0}$. Then [BB05]:
(3)
$\beta_{4}(n)=(-1)^{n}\frac{3}{\mathrm{E}_{6}(q_{\rho})}\sum_{(\lambda)}\sum_{(c,d)}\frac{h_{(c,d)}(n)}{\lambda^{3}}\,\,e^{\frac{\pi
n\sqrt{3}}{\lambda}}.$
Here, $(c,d)$ runs over _distinct_ solutions to $\lambda=c^{2}-cd+d^{2}$, such
that integers $a,b$ exist solving $ad-bc=1$. Let $h_{(1,0)}(n):=1$,
$h_{(2,1)}(n):=(-1)^{n}$, and for $\lambda\geq 7$:
(4) $h_{(c,d)}(n):=2\mathop{\rm cos}\left((ad+bc-2ac-2bd+\lambda)\frac{\pi\,n}{\lambda}-6\mathop{\rm arctan}\left(\frac{c\sqrt{3}}{2d-c}\right)\right).$
For the definition of distinct we refer to [BB05], Section 3. From the
explicit formula (3) one observes that the main asymptotic growth comes from
$(c,d)=(1,0)$. This yields ([BK17], Introduction):
(5) $\displaystyle\beta_{4}(n)$ $\displaystyle\sim$
$\displaystyle(-1)^{n}\frac{3}{E_{6}(\rho)}\,e^{\pi n\sqrt{3}},$ (6)
$\displaystyle\beta_{6}(n)$ $\displaystyle\sim$
$\displaystyle\frac{2}{E_{8}(i)}\,e^{2\pi n},$
where $\sum_{n=0}^{\infty}\beta_{k}(n)\,q^{n}:=\frac{1}{E_{k}\left(z\right)}$.
We added the asymptotic (6), which can be obtained in a similar way.
Petersson [P50] offered an alternative approach to study the $q$-expansion of
meromorphic modular forms. He defined Poincaré series with poles at arbitrary
points in $\mathbb{H}$ and of arbitrary order, to provide a basis for the
underlying vector spaces. Recently, Bringmann and Kane [BK17] have generalized
Petersson’s method. They have also recorded several important examples.
In this paper we study the asymptotic expansions for all reciprocals of
Eisenstein series. Instead of proving first an explicit formula and then
detecting the main growth terms, we provide a direct approach. This is based
on the distribution of the zeros in the standard fundamental domain with the
largest imaginary part.
Before we state our results, we want to point out, as a warning, that the limits
as $n\rightarrow\infty$ for
$\beta_{4}\left(n\right)/\beta_{4}\left(n+1\right)$ and
$\beta_{6}\left(n\right)/\beta_{6}\left(n+1\right)$ exist, but that this is
not true for all $k$ as indicated in Table 1.
$n$ | $\frac{\beta_{4}\left(n\right)}{\beta_{4}\left(n+1\right)}\approx$ | $\frac{\beta_{6}\left(n\right)}{\beta_{6}\left(n+1\right)}\approx$ | $\frac{\beta_{12}\left(n\right)}{\beta_{12}\left(n+1\right)}\approx$ | $\frac{\beta_{14}\left(n\right)}{\beta_{14}\left(n+1\right)}\approx$
---|---|---|---|---
$1$ | $-4.3290\cdot 10^{-3}$ | $1.8622\cdot 10^{-3}$ | $5.1172\cdot 10^{-4}$ | $1.2170\cdot 10^{-4}$
$2$ | $-4.3333\cdot 10^{-3}$ | $1.8677\cdot 10^{-3}$ | $-9.6536\cdot 10^{-3}$ | $4.1330\cdot 10^{-3}$
$3$ | $-4.3334\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $5.4260\cdot 10^{-4}$ | $1.1240\cdot 10^{-3}$
$4$ | $-4.3334\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $-8.9832\cdot 10^{-3}$ | $2.3564\cdot 10^{-3}$
$5$ | $-4.3334\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $5.8359\cdot 10^{-4}$ | $1.6491\cdot 10^{-3}$
$6$ | $-4.3334\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $-8.3936\cdot 10^{-3}$ | $1.9821\cdot 10^{-3}$
$7$ | $-4.3334\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $6.2477\cdot 10^{-4}$ | $1.8133\cdot 10^{-3}$
$\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$
$19$ | $-4.3334\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $8.8114\cdot 10^{-4}$ | $1.8674\cdot 10^{-3}$
$20$ | $-4.3334\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $-5.6773\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$
Table 1. Quotients of successive coefficients of $1/\mathrm{E}_{k}$ for
$k\in\left\\{4,6,12,14\right\\}$
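The entries of Table 1 can be reproduced directly: writing $E_{k}=1-\sum_{j\geq 1}\varepsilon_{k}(j)q^{j}$, the reciprocal coefficients satisfy the convolution recurrence $\beta_{k}(n)=\sum_{j=1}^{n}\varepsilon_{k}(j)\beta_{k}(n-j)$. A sketch (ours, not from the paper) that computes $\beta_{4}$ and $\beta_{6}$ in exact integer arithmetic and checks the limits $q_{\rho}=-e^{-\pi\sqrt{3}}$ and $q_{i}=e^{-2\pi}$:

```python
import math

def sigma(ell, n):
    return sum(d ** ell for d in range(1, n + 1) if n % d == 0)

def recip_coeffs(eps, N):
    # beta(n) of 1/(1 - sum_{j>=1} eps(j) q^j) via the convolution recurrence
    beta = [1]
    for n in range(1, N + 1):
        beta.append(sum(eps(j) * beta[n - j] for j in range(1, n + 1)))
    return beta

# E_4 = 1 + 240 sum sigma_3(n) q^n  =>  eps_4(j) = -240 sigma_3(j)
# E_6 = 1 - 504 sum sigma_5(n) q^n  =>  eps_6(j) =  504 sigma_5(j)
b4 = recip_coeffs(lambda j: -240 * sigma(3, j), 40)
b6 = recip_coeffs(lambda j: 504 * sigma(5, j), 40)

ratio4 = b4[39] / b4[40]   # tends to q_rho = -e^{-pi sqrt(3)} ~ -4.3334e-3
ratio6 = b6[39] / b6[40]   # tends to q_i   =  e^{-2 pi}       ~  1.8674e-3
```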
## 2. Results
The constants in the asymptotic expansion of $\beta_{k}(n)$, the coefficients
of the $q$-expansion of the reciprocal of $\mathrm{E}_{k}$, involve the
Ramanujan $\Theta$-operator [R16, BKO04] induced by residue calculation. The
differential operator $\Theta:=q\frac{\mathrm{d}}{\mathrm{d}q}$ acts on formal
power series by:
(7)
$\Theta\left(\sum_{n=h}^{\infty}a(n)\,q^{n}\right):=\sum_{n=h}^{\infty}n\,a(n)\,q^{n}.$
Let $\mathrm{E}_{2}(q):=1-24\sum_{n=1}^{\infty}\sigma_{1}(n)\,q^{n}$.
Ramanujan observed that
(8)
$\Theta(\mathrm{E}_{4})=\left(\mathrm{E}_{4}\mathrm{E}_{2}-\mathrm{E}_{6}\right)/3\text{
and
}\Theta(\mathrm{E}_{6})=\left(\mathrm{E}_{6}\mathrm{E}_{2}-\mathrm{E}_{8}\right)/2.$
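Both identities in (8) are identities of $q$-series and can be verified coefficientwise in exact arithmetic. The following sketch (ours; series truncated at $q^{25}$) compares $3\,\Theta(\mathrm{E}_{4})$ with $\mathrm{E}_{4}\mathrm{E}_{2}-\mathrm{E}_{6}$ and $2\,\Theta(\mathrm{E}_{6})$ with $\mathrm{E}_{6}\mathrm{E}_{2}-\mathrm{E}_{8}$:

```python
def sigma(ell, n):
    return sum(d ** ell for d in range(1, n + 1) if n % d == 0)

def eis(factor, ell, N):
    # q-expansion [1, factor*sigma_ell(1), ..., factor*sigma_ell(N)]
    return [1] + [factor * sigma(ell, n) for n in range(1, N + 1)]

def mul(a, b):
    # Cauchy product, truncated to the common length
    return [sum(a[j] * b[n - j] for j in range(n + 1)) for n in range(len(a))]

N = 25
E2, E4, E6, E8 = eis(-24, 1, N), eis(240, 3, N), eis(-504, 5, N), eis(480, 7, N)

lhs4 = [3 * n * E4[n] for n in range(N + 1)]      # 3 Theta(E_4)
rhs4 = [x - y for x, y in zip(mul(E4, E2), E6)]   # E_4 E_2 - E_6
lhs6 = [2 * n * E6[n] for n in range(N + 1)]      # 2 Theta(E_6)
rhs6 = [x - y for x, y in zip(mul(E6, E2), E8)]   # E_6 E_2 - E_8
```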
Our first results give an explicit interpretation of the data presented in
Table 1 for $k=6$ and $k=14$.
###### Theorem 1.
Let $k\geq 4$ and $k\equiv 2\pmod{4}$ be an integer. Then $1/\mathrm{E}_{k}$
has a $q$-expansion with radius $q_{i}=e^{-2\pi}$:
(9) $\frac{1}{\mathrm{E}_{k}(q)}=\sum_{n=0}^{\infty}\beta_{k}(n)\,q^{n}.$
The coefficients $\beta_{k}(n)$ are non-zero and have the asymptotic expansion
(10) $\beta_{k}(n)\sim-\frac{1}{\Theta(\mathrm{E}_{k})(q_{i})}\,\,q_{i}^{-n}.$
The number $q_{i}=e^{-2\pi}\approx 1.867442\cdot 10^{-3}$ is transcendental.
It is well-known that the so-called Gel′fond constant $e^{\pi}$ is
transcendental. This was first proven by Gel′fond in 1929. It can also be
deduced from the Gel′fond–Schneider Theorem, which solved Hilbert’s seventh
problem [W08]. We refer to a result by Nesterenko (also [W08], Section 5.6).
Let $z\in\mathbb{H}$. Then at least three of the four numbers
(11) $q_{z},\mathrm{E}_{2}(q_{z}),\mathrm{E}_{4}(q_{z}),\text{ and
}\mathrm{E}_{6}(q_{z})$
are algebraically independent. Since
$\mathrm{E}_{4}(q_{\rho})=\mathrm{E}_{6}(q_{i})=0$, we obtain that
$q_{i},\mathrm{E}_{4}(q_{i})$ and $q_{\rho},\mathrm{E}_{6}(q_{\rho})$ are
transcendental.
Moreover, $\Theta(\mathrm{E}_{k})(q_{i})$ for $k=6,10,14$ can be explicitly
expressed by $\Gamma(\frac{1}{4})$ and $\pi$. For example,
(12)
$\Theta(\mathrm{E}_{6})(q_{i})=-\frac{1}{2}\mathrm{E}_{4}(q_{i})^{2},\text{
where }\mathrm{E}_{4}(q_{i})=\frac{3\,\Gamma(\frac{1}{4})^{8}}{(2\pi)^{6}}.$
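Numerically, (12) is easy to confirm: at $q_{i}=e^{-2\pi}$ the series for $\mathrm{E}_{4}$ and $\Theta(\mathrm{E}_{6})$ converge extremely fast. A sketch (ours):

```python
import math

def sigma(ell, n):
    return sum(d ** ell for d in range(1, n + 1) if n % d == 0)

qi = math.exp(-2 * math.pi)   # terms decay like qi^n, so 40 terms are ample
E4_qi = 1 + 240 * sum(sigma(3, n) * qi ** n for n in range(1, 41))
theta_E6_qi = -504 * sum(n * sigma(5, n) * qi ** n for n in range(1, 41))

closed_form = 3 * math.gamma(0.25) ** 8 / (2 * math.pi) ** 6   # claimed value of E_4(q_i)
```

One finds $\mathrm{E}_{4}(q_{i})\approx 1.4557628923$, in agreement with Table 3 below.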
We can also extract the numbers $q_{i}$ and $\mathrm{E}_{4}(q_{i})$ from the
coefficients.
###### Corollary 1.
Let $k\geq 4$ and $k\equiv 2\pmod{4}$. Then
(13)
$\displaystyle\lim_{n\to\infty}\frac{\beta_{k}\left(n\right)}{\beta_{k}\left(n+1\right)}$
$\displaystyle=$ $\displaystyle q_{i},$ (14)
$\displaystyle\lim_{n\to\infty}\frac{\beta_{6}\left(n\right)}{\beta_{10}\left(n\right)}$
$\displaystyle=$
$\displaystyle\lim_{n\to\infty}\frac{\beta_{10}\left(n\right)}{\beta_{14}\left(n\right)}=\mathrm{E}_{4}(q_{i}).$
Hardy and Ramanujan stated lower and upper bounds at the end of their initial
work [HR18B] on the coefficients of the reciprocal $1/E_{6}$. We generalize
their idea to all cases $k\equiv 2\pmod{4}$ including $k=2$ and also improve
their result in the original case $k=6$.
###### Theorem 2.
Let $k$ be a positive integer with $k\equiv 2\pmod{4}$. Let $x_{0}:=\frac{2k}{B_{k}}$. Then we have for all $n\in\mathbb{N}$
(15)
$\frac{\left(\frac{x_{0}+\sqrt{\Delta_{k}}}{2}\right)^{n+1}-\left(\frac{x_{0}-\sqrt{\Delta_{k}}}{2}\right)^{n+1}}{\sqrt{\Delta_{k}}}\leq\beta_{k}(n)$
with $\Delta_{k}=x_{0}^{2}+4\left(2^{k-1}+1\right)x_{0}$ and
(16)
$\beta_{k}(n)\leq\frac{\left(x_{0}-\frac{b_{k}-\sqrt{D_{k}}}{2}\right)\left(\frac{b_{k}+\sqrt{D_{k}}}{2}\right)^{n}+\left(\frac{b_{k}+\sqrt{D_{k}}}{2}-x_{0}\right)\left(\frac{b_{k}-\sqrt{D_{k}}}{2}\right)^{n}}{\sqrt{D_{k}}}$
with $b_{k}=x_{0}+a_{k}$, $c_{k}=\left(2^{k-1}+1-a_{k}\right)x_{0}$, and
$D_{k}=b_{k}^{2}+4c_{k}$ for all $k$ where $a_{2}=\sqrt{7/3}$ and
$a_{k}=\frac{3^{k-1}+1}{2^{k-1}+1}$ for $k\geq 6$.
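For $k=6$ the sandwich $\alpha_{6}(n)\leq\beta_{6}(n)\leq\gamma_{6}(n)$ behind Theorem 2 can be checked directly with the recurrences from the proof in Section 3.2 (a sketch, ours, using $\varepsilon_{6}(j)=504\,\sigma_{5}(j)$):

```python
from fractions import Fraction

def sigma(ell, n):
    return sum(d ** ell for d in range(1, n + 1) if n % d == 0)

def eps(j):
    return 504 * sigma(5, j)          # epsilon_6(j)

N = 12
beta = [1]                            # exact coefficients of 1/E_6
for n in range(1, N + 1):
    beta.append(sum(eps(j) * beta[n - j] for j in range(1, n + 1)))

alpha = [1, eps(1)]                   # lower bound recurrence
for n in range(2, N + 1):
    alpha.append(eps(1) * alpha[-1] + eps(2) * alpha[-2])

a6 = Fraction(3 ** 5 + 1, 2 ** 5 + 1)         # a_6 = 244/33
b6, c6 = a6 + eps(1), eps(2) - a6 * eps(1)    # b_6 = 16876/33, c_6 = 141960/11
gamma = [Fraction(1), Fraction(eps(1))]       # upper bound recurrence
for n in range(2, N + 1):
    gamma.append(b6 * gamma[-1] + c6 * gamma[-2])
```

The first values $\beta_{6}(1)=504$ and $\beta_{6}(2)=270648$ match Table 4 below.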
The case $k\equiv 0\pmod{4}$ is more complicated. For large $k$, we cannot
expect that the limit as ${n\to\infty}$ of $\beta_{k}(n)/\beta_{k}(n+1)$
exists, since we have two poles on the circle of convergence. But for $k=4$
and $k=8$ there is still only one pole.
###### Proposition 1.
Let $q_{\rho}=e^{2\pi i\rho}=-e^{-\pi\sqrt{3}}$. Let $m\in\mathbb{N}$. Then the coefficients $\beta_{4,m}(n)$ of the $m$th power of $\mathrm{E}_{4}^{-1}$, i.e.
(17)
$\sum_{n=0}^{\infty}\beta_{4,m}(n)\,q^{n}:=\left(\frac{1}{\mathrm{E}_{4}(q)}\right)^{m}$
satisfy for all $m$:
(18) $\lim_{n\to\infty}\frac{\beta_{4,m}(n)}{\beta_{4,m}(n+1)}=q_{\rho}.$
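Proposition 1 can be observed numerically for $m=2$: squaring the expansion of $1/\mathrm{E}_{4}$ and forming coefficient quotients approaches $q_{\rho}$, though only at rate $O(1/n)$ since $q_{\rho}$ is now a double pole (a sketch, ours):

```python
import math

def sigma(ell, n):
    return sum(d ** ell for d in range(1, n + 1) if n % d == 0)

N = 60
beta4 = [1]                           # coefficients of 1/E_4 (exact integers)
for n in range(1, N + 1):
    beta4.append(-sum(240 * sigma(3, j) * beta4[n - j] for j in range(1, n + 1)))

# beta_{4,2}(n): Cauchy square of the series 1/E_4
beta42 = [sum(beta4[j] * beta4[n - j] for j in range(n + 1)) for n in range(N + 1)]

q_rho = -math.exp(-math.pi * math.sqrt(3))
ratio = beta42[N - 1] / beta42[N]     # slowly approaches q_rho
```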
###### Remarks.
a) For small weights the following identities exist:
(19) $E_{8}=E_{4}^{2},\,E_{10}=E_{4}\cdot E_{6}\text{ and
}E_{14}=E_{4}^{2}\cdot E_{6}.$
b) Let the principal part of $\mathrm{E}_{4}^{-m}$ at the pole $q_{\rho}$ be
given by
(20) $\sum_{k=1}^{m}\frac{\lambda_{m,k}}{\left(q-q_{\rho}\right)^{k}},$
then $\lambda_{m,m}=\mathop{\rm res}_{q_{\rho}}\left(\mathrm{E}_{4}^{-1}\right)^{m}.$ It would be interesting to get explicit formulas for all $\lambda_{m,k}$, $1\leq k\leq m$, especially for the case $m=2$.
c) We have $\mathop{\rm res}_{q_{\rho}}(\mathrm{E}_{4}^{-1})=\frac{-3\,q_{\rho}}{\mathrm{E}_{6}(q_{\rho})}$.
We know that $\beta_{4}(n)$ and $\beta_{8}(n)$ are non-zero for all
$n\in\mathbb{N}_{0}$ [HN20B]. We provide a new proof of the asymptotic expansion
for $k=4$. This is the main term of a formula first conjectured by Ramanujan
and proven about 80 years later by Bialek [BB05]. For the case $k=8$, we also
refer to [BK17].
###### Theorem 3.
We have $(-1)^{n}\beta_{4}(n)\in 240\mathbb{N}$ for all $n\in\mathbb{N}$.
Further, we have the asymptotic expansion
(21)
$\beta_{4}(n)\sim-\frac{1}{\Theta(\mathrm{E}_{4})({q_{\rho}})}\,q_{\rho}^{-n},$
where $\Theta(\mathrm{E}_{4})({q_{\rho}})=-\mathrm{E}_{6}(q_{\rho})/3$.
F. K. C. Rankin and H. P. F. Swinnerton-Dyer [RS70] have proven that all the
zeros of $E_{k}(z)$ in the standard fundamental domain $\mathbb{F}$ are in
$C=\\{z\in\mathbb{F}\,:\,|z|=1\\}\subset\mathbb{F}$. We recall the following
basic facts [O03]. The modular group $\Gamma:=\mathop{\rm SL}_{2}(\mathbb{Z})$
operates on the complex upper half plane $\mathbb{H}$, denoted by $\gamma(z)$,
where $\gamma\in\Gamma$ and $z\in\mathbb{H}$. The standard fundamental domain
$\mathbb{F}$ is given by
$\displaystyle\mathbb{F}$ $\displaystyle=$
$\displaystyle\left\\{z\in\mathbb{H}\,:\,|z|\geq 1\text{ and }0\leq\mathop{\rm
Re}\left(z\right)\leq 1/2\right\\}\cup$
$\displaystyle\left\\{z\in\mathbb{H}\,:\,|z|>1\text{ and }-1/2<\mathop{\rm
Re}\left(z\right)<0\right\\}.$
###### Proposition 2 (Rankin, Swinnerton-Dyer [RS70]).
Let $k\geq 4$ be an even integer. Let $z_{k}$ be the zero of $E_{k}$ with the
largest imaginary part. Then
(22) $z_{4}=z_{8}=\rho\text{ and }z_{k}=i\text{ for }k\equiv 2\pmod{4}.$
All other zeros satisfy $z_{k}\in C\,\backslash\,\\{i,\rho\\}$. Only for $k=8$ is the zero $z_{k}$ not simple.
Further, from [RS70] and Kohnen [K04] we obtain
###### Corollary 2.
Let $k\geq 12$ and $k\equiv 0\pmod{4}$. Let $k=12\,N+s$ for $s\in\\{0,4,8\\}$.
Then $z_{k}=e^{\frac{1}{2}\pi i\,\varphi}$, where
$\varphi\in\left(\frac{N-1}{N},1\right)$.
###### Theorem 4.
Let $k$ be a positive integer with $k\geq 12$ and $k\equiv 0\pmod{4}$. Then $1/\mathrm{E}_{k}$ has a $q$-expansion with radius of convergence $|q_{z_{k}}|$, where $z_{k}$ is the zero of $E_{k}$ with the largest imaginary part, and
(23)
$\beta_{k}(n)\,q_{z_{k}}^{n}+\frac{1}{\Theta(\mathrm{E}_{k})(q_{z_{k}})}+\frac{1}{\Theta(\mathrm{E}_{k})(\overline{q}_{z_{k}})}\left(\frac{q_{z_{k}}}{\overline{q}_{z_{k}}}\right)^{n}$
constitutes a zero sequence.
The expression
(24)
$\frac{1}{\Theta(\mathrm{E}_{k})(q_{z_{k}})}+\frac{1}{\Theta(\mathrm{E}_{k})(\overline{q}_{z_{k}})}\left(\frac{q_{z_{k}}}{\overline{q}_{z_{k}}}\right)^{n}$
is bounded. But this is not sufficient to obtain an asymptotic expansion.
Nevertheless we have discovered a new property of the coefficients of
$1/\mathrm{E}_{k}$ for $k\equiv 0\pmod{4}$.
###### Theorem 5.
Let $k\equiv 0\pmod{4}$ and $k\geq 12$. Then there exists a subsequence
$\\{n_{t}\\}_{t=1}^{\infty}$ of $\\{n\\}_{n=1}^{\infty}$ such that
(25)
$\lim_{t\to\infty}\frac{\beta_{k}(n_{t})}{-q_{z_{k}}^{-n_{t}}\left(\frac{1}{\Theta(\mathrm{E}_{k})(q_{z_{k}})}+\frac{1}{\Theta(\mathrm{E}_{k})(\overline{q}_{z_{k}})}\left(\frac{q_{z_{k}}}{\overline{q}_{z_{k}}}\right)^{n_{t}}\right)}=1.$
The statement of this theorem is equivalent to
(26) $\lim_{t\to\infty}\frac{\beta_{k}(n_{t})}{-2\mathop{\rm
Re}\left(\frac{q_{z_{k}}^{-n_{t}}}{\Theta(\mathrm{E}_{k})(q_{z_{k}})}\right)}=1.$
We further have the following properties.
###### Theorem 6.
Let $k$ be a positive integer. Let $k\geq 12$ and $k\equiv 0\pmod{4}$.
* a)
Let $A_{k}(n)$ denote the number of changes of sign in the sequence
$\\{\beta_{k}(m)\\}_{m=0}^{n}$ and let $z_{k}=x_{k}+i\,y_{k}\in\mathbb{F}$ be
the zero of $E_{k}$ with the largest imaginary part. Then
$\lim_{n\to\infty}\frac{A_{k}(n)}{n}=2x_{k}.$
* b)
Let $B_{k}(n)$ be the number of non-zero coefficients among the $n$
coefficients $\\{\beta_{k}(m)\\}_{m=0}^{n-1}$. Then
$\limsup_{n\to\infty}\frac{n}{B_{k}(n)}\leq 2.$
We end this section with a rather surprising result.
###### Corollary 3.
For large weights $k$ divisible by $4$, the coefficients of
$1/\mathrm{E}_{k}(q)$ satisfy
(27)
$\lim_{\ell\rightarrow\infty}\lim_{n\rightarrow\infty}\frac{A_{4\ell}(n)}{n}=0.$
## 3. Proofs
### 3.1. Proof of Corollary 1, Proposition 1, and Theorem 1
We first recall a result from complex analysis. Let
$f(q)=\sum_{n=0}^{\infty}a(n)\,q^{n}$ be a power series regular at $q=0$ with
finite radius of convergence. Assume that there is only one singular point
$q_{0}$ on the circle of convergence. Let $q_{0}$ be a pole. Then it is known
([PS78], Part 3) that
(28) $\lim_{n\to\infty}\frac{a(n)}{a(n+1)}=q_{0}.$
This follows from the Laurent expansion of $f(q)$, which has a finite
principal part.
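A toy example (ours, not from the paper) makes (28) concrete: for $f(q)=\frac{1}{(1-q/2)(1-q/5)}$ the only singularity on the circle of convergence is the simple pole $q_{0}=2$, and the coefficient quotients converge to $2$:

```python
# a(n) = sum_{j=0}^{n} (1/2)^j (1/5)^{n-j}, the Cauchy product of two geometric series
N = 50
a = [sum(0.5 ** j * 0.2 ** (n - j) for j in range(n + 1)) for n in range(N + 1)]
ratio = a[N - 1] / a[N]   # tends to the pole q_0 = 2
```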
Suppose $\mathrm{E}_{k}(q)$ has exactly one zero $q_{0}\in B_{1}(0)$ whose absolute value is strictly smaller than that of all other zeros. Then we obtain the property (28)
for the coefficients of $1/\mathrm{E}_{k}$. Note that every zero of a modular
form has one representative in the fundamental domain $\mathbb{F}$.
The zeros of $E_{k}$ are controlled by a theorem by Rankin and Swinnerton-Dyer
([RS70], see also Section 2). They proved that every zero in $\mathbb{F}$ has
absolute value $1$. Further, let $k$ be a positive, even integer and $k\geq
4$. Let $k=12N+s$, where $s\in\\{0,4,6,8,10,14\\}$. Then $E_{k}$ has $N$
simple zeros in $C\setminus\\{i,\rho\\}$. Additionally we have simple zeros
$\rho$ for $s=4$ and $i$ for $s=6$. Further, $E_{k}$ has the double zero
$\rho$ for $s=8$, the simple zeros $i$ and $\rho$ for $s=10$, and the simple
zero $i$ and the double zero $\rho$ for $s=14$. Further, let $z_{k}$ be the
zero of $E_{k}$ with the largest imaginary part. Note that
(29) $z_{k}^{\prime}:=J(z_{k})=\left(\begin{array}[]{cc}0&-1\\\
1&0\end{array}\right)z_{k}$
and $z_{k}$ have the same imaginary part. Note that $J(i)=i$ and
$J(\rho)=\rho-1$. Thus, $1/\mathrm{E}_{k}$ has exactly one pole on the circle of convergence if and only if $z_{k}=i$ or $z_{k}=\rho$.
###### Proof of Corollary 1.
From the theorem of Rankin and Swinnerton-Dyer we obtain that for $k\equiv
2\pmod{4}$ we have $z_{k}=i$ and $q_{i}=e^{-2\pi}$. This gives a first proof
of Corollary 1 (13). Corollary 1 (13) also follows directly from Theorem 1.
The quotients for small $k$ converge very quickly. We refer to Table 1 and
Table 2.
$n$ | $\frac{\beta_{8}\left(n\right)}{\beta_{8}\left(n+1\right)}\approx$ | $\frac{\beta_{10}\left(n\right)}{\beta_{10}\left(n+1\right)}\approx$ | $\frac{\beta_{12}\left(n\right)}{\beta_{12}\left(n+1\right)}\approx$ | $\frac{\beta_{14}\left(n\right)}{\beta_{14}\left(n+1\right)}\approx$ | $\frac{\beta_{16}\left(n\right)}{\beta_{16}\left(n+1\right)}\approx$
---|---|---|---|---|---
$17$ | $-4.1044\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $8.3715\cdot 10^{-4}$ | $1.8674\cdot 10^{-3}$ | $1.6465\cdot 10^{-3}$
$18$ | $-4.1159\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $-5.9626\cdot 10^{-3}$ | $1.8675\cdot 10^{-3}$ | $-1.7502\cdot 10^{-2}$
$19$ | $-4.1263\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $8.8114\cdot 10^{-4}$ | $1.8674\cdot 10^{-3}$ | $2.3584\cdot 10^{-4}$
$20$ | $-4.1357\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $-5.6773\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $3.8543\cdot 10^{-3}$
$21$ | $-4.1443\cdot 10^{-3}$ | $1.8674\cdot 10^{-3}$ | $9.2572\cdot 10^{-4}$ | $1.8674\cdot 10^{-3}$ | $-1.8095\cdot 10^{-3}$
Table 2. Quotients of successive coefficients of $1/\mathrm{E}_{k}$ for
$k\in\left\\{8,10,12,14,16\right\\}$.
Since $\Theta(\mathrm{E}_{6})(q_{i})=-\frac{1}{2}\mathrm{E}_{4}(q_{i})^{2}$
and $\Theta(\mathrm{E}_{10})(q_{i})=-\frac{1}{2}\mathrm{E}_{4}(q_{i})^{3}$,
the second part of the Corollary also follows from Theorem 1 and (19). An
approximate numerical value of $\mathrm{E}_{4}(q_{i})$ can be read off Table
3. The theorem by Nesterenko implies that this number is transcendental, since
$\mathrm{E}_{6}(q_{i})=0$. ∎
$n$ | $\frac{\beta_{6}(n)}{\beta_{10}(n)}\approx$
---|---
$0$ | $1.000000000000000000000000000000$
$1$ | $1.909090909090909090909090909091$
$2$ | $1.319410319410319410319410319410$
$3$ | $1.523715744177431256188987060285$
$4$ | $1.428309534304946335598514019013$
$\vdots$ | $\vdots$
$80$ | $1.455762892268709322462422003594$
$90$ | $1.455762892268709322462422003599$
$100$ | $1.455762892268709322462422003599$
Table 3. Quotients of $\beta_{6}\left(n\right)$ and
$\beta_{10}\left(n\right)$.
Note that for each integer $\ell\geq 2$, the limit as $n\to\infty$ of
$\frac{\beta_{4\ell-2}(n)}{\beta_{4\ell+2}(n)}$ exists, but it is generally
not equal to $\mathrm{E}_{4}(q_{i})$.
###### Proof of Proposition 1.
Since $\left(1/\mathrm{E}_{4}\right)^{m}$ has only the pole $q_{\rho}$ on the
circle of convergence, again we have formula (28), which proves the
proposition. ∎
###### Proof of Theorem 1.
Let $w$ be any complex number. Let $B_{r}(w)=\\{z\in\mathbb{C}\,:\,|z-w|<r\\}$
be the open ball with radius $r$ around $w$. We denote the closure by
$\overline{B_{r}\left(w\right)}$ and its boundary by $\partial
B_{r}\left(w\right)$. Let $k\equiv 2\pmod{4}$. Then $\mathrm{E}_{k}$ has the
special property that restricted to $\overline{B_{|q_{i}|}(0)}$ it has exactly
one zero at $q_{i}$, which is also simple. This implies that the Taylor series
expansion of the reciprocal of $\mathrm{E}_{k}$ has radius of convergence
$\left|q_{i}\right|$ and only a simple pole at $q_{i}$:
(30)
$\frac{1}{\mathrm{E}_{k}(q)}=\sum_{n=0}^{\infty}\beta_{k}(n)\,q^{n}\qquad(|q|<|q_{i}|).$
Note that subtracting the principal part at $q_{i}$ provides a new Taylor
series expansion with a larger radius of convergence:
(31) $\frac{1}{\mathrm{E}_{k}(q)}-\frac{\mathop{\rm
res}_{q_{i}}(1/\mathrm{E}_{k})}{q-q_{i}}=\sum_{n=0}^{\infty}b(n)\,q^{n}.$
This implies that $b(n)q_{i}^{n}$ constitutes a zero sequence. Here,
$\mathop{\rm res}_{q_{i}}(1/\mathrm{E}_{k})$ denotes the residue at the pole
$q_{i}$. We obtain that
(32) $q_{i}^{n+1}\beta_{k}(n)+\mathop{\rm res}{}_{q_{i}}(1/\mathrm{E}_{k})$
constitutes a zero sequence. By a standard argument, we obtain that
(33) $\mathop{\rm
res}{}_{q_{i}}(1/\mathrm{E}_{k})=\frac{1}{\frac{\mathrm{d}}{\mathrm{d}q}\mathrm{E}_{k}(q_{i})}.$
Finally, we obtain the asymptotic behavior
(34) $\beta_{k}(n)\sim-\frac{1}{\Theta(\mathrm{E}_{k})(q_{i})}\,q_{i}^{-n}.$
∎
### 3.2. Proof of Theorem 2
We use the following easy-to-prove lemmata.
###### Lemma 1.
$\sigma_{\ell}\left(n\right)<\frac{\ell}{\ell-1}n^{\ell}$ for $\ell>1$ and
$\sigma_{1}\left(n\right)\leq\left(1+\ln n\right)n$.
###### Proof.
For $\ell>1$ we have $\sigma_{\ell}\left(n\right)\leq\left(1+\int_{1}^{n}t^{-\ell}\,\mathrm{d}t\right)n^{\ell}<\frac{\ell}{\ell-1}n^{\ell}$, and the same estimate gives $\sigma_{1}\left(n\right)\leq\left(1+\ln n\right)n$ for $\ell=1$. ∎
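A numerical sanity check of Lemma 1 over a small range (ours):

```python
import math

def sigma(ell, n):
    return sum(d ** ell for d in range(1, n + 1) if n % d == 0)

ok_power = all(sigma(ell, n) < ell / (ell - 1) * n ** ell
               for ell in range(2, 8) for n in range(1, 60))
ok_linear = all(sigma(1, n) <= (1 + math.log(n)) * n for n in range(1, 60))
```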
###### Lemma 2.
For $\ell\geq 5$ holds $3\sqrt[\ell]{\frac{1+3^{-\ell}}{1+2^{-\ell}}}>2.98$.
###### Proof.
Considering $\ell$ as a real variable $\geq 5$, we obtain the following
logarithmic derivative
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}\ell}\frac{1}{\ell}\ln\left(\frac{1+3^{-\ell}}{1+2^{-\ell}}\right)$
$\displaystyle=$
$\displaystyle-\frac{1}{\ell^{2}}\ln\left(\frac{1+3^{-\ell}}{1+2^{-\ell}}\right)+\frac{1}{\ell}\frac{1+2^{-\ell}}{1+3^{-\ell}}\left(-\frac{3^{-\ell}\ln
3}{1+3^{-\ell}}+\frac{2^{-\ell}\ln 2}{1+2^{-\ell}}\right)>0$
since $-\frac{\ln 3}{3^{\ell}+1}+\frac{\ln 2}{2^{\ell}+1}>-\frac{\ln
3}{3^{\ell}}+\frac{\ln 2}{2^{\ell+1}}>0$ for $\ell\geq 5$. Therefore, the
values of the original sequence are increasing and we take the smallest value
for $\ell=5$. ∎
###### Proof of Theorem 2.
With
$\varepsilon_{k}\left(n\right)=\frac{2k}{B_{k}}\sigma_{k-1}\left(n\right)$ we
obtain
$E_{k}\left(z\right)=1-\sum_{n=1}^{\infty}\varepsilon_{k}\left(n\right)q^{n}.$
Let
$1/\left(1-\varepsilon_{k}\left(1\right)q-\varepsilon_{k}\left(2\right)q^{2}\right)=\sum_{n=0}^{\infty}\alpha_{k}\left(n\right)q^{n}$.
The $\alpha_{k}\left(n\right)$ fulfill the recurrence relation
$\alpha_{k}\left(n\right)=\varepsilon_{k}\left(1\right)\alpha_{k}\left(n-1\right)+\varepsilon_{k}\left(2\right)\alpha_{k}\left(n-2\right)$
for $n\geq 2$. Obviously, $\alpha_{k}\left(0\right)=\beta_{k}\left(0\right)$,
$\alpha_{k}\left(1\right)=\beta_{k}\left(1\right)$, and by induction
$\alpha_{k}\left(n\right)=\varepsilon_{k}\left(1\right)\alpha_{k}\left(n-1\right)+\varepsilon_{k}\left(2\right)\alpha_{k}\left(n-2\right)\leq\sum_{j=1}^{n}\varepsilon_{k}\left(j\right)\beta_{k}\left(n-j\right)=\beta_{k}\left(n\right)$
using the power series expansion of $1/\mathrm{E}_{k}$.
For the upper bound let $a_{2}=\sqrt{7/3}$ and for $k\geq 6$ let
$a_{k}=\frac{\varepsilon_{k}\left(3\right)}{\varepsilon_{k}\left(2\right)}=\frac{\sigma_{k-1}\left(3\right)}{\sigma_{k-1}\left(2\right)}=\frac{3^{k-1}+1}{2^{k-1}+1}$.
For all $k\equiv 2\pmod{4}$ let $b_{k}=a_{k}+\varepsilon_{k}\left(1\right)$,
$c_{k}=\varepsilon_{k}\left(2\right)-a_{k}\varepsilon_{k}\left(1\right)$, and
$\frac{1-b_{k}q-c_{k}q^{2}}{1-a_{k}q}=1-\sum_{n=1}^{\infty}\delta_{k}\left(n\right)q^{n}$.
Therefore,
$\delta_{k}\left(1\right)=b_{k}-a_{k}=\varepsilon_{k}\left(1\right)$,
$\delta_{k}\left(2\right)=c_{k}+a_{k}\delta_{k}\left(1\right)=\varepsilon_{k}\left(2\right)$,
and $\delta_{k}\left(n\right)=a_{k}\delta_{k}\left(n-1\right)$ for $n\geq 3$.
Therefore $\delta_{k}\left(n\right)=\varepsilon_{k}\left(2\right)a_{k}^{n-2}$.
1. First, let $k=2$. Then
$\delta_{2}\left(n\right)=72\left(7/3\right)^{\left(n-2\right)/2}$. For
$n\in\left\\{3,4,5,6\right\\}$ we obtain
$24\sigma_{1}\left(n\right)\leq\delta_{2}\left(n\right)$. Using Lemma 1 we
obtain $\varepsilon_{2}\left(n\right)\leq 24\left(1+\ln n\right)n$. For $n=7$
we obtain $24\cdot\left(1+\ln 7\right)\cdot
7<504<72\left(7/3\right)^{\left(7-2\right)/2}$ and for $n\geq 7$ we obtain
$\frac{1+\ln\left(n+1\right)}{1+\ln
n}\frac{n+1}{n}\leq\left(1+\frac{\ln\left(1+\frac{1}{7}\right)}{1+\ln
n}\right)\frac{8}{7}<1.2<\sqrt{7/3}$. Therefore,
$\varepsilon_{2}\left(n\right)\leq\delta_{2}\left(n\right)$.
2. Now, let $k\geq 6$. Then
$\delta_{k}\left(n\right)=\varepsilon_{k}\left(3\right)a_{k}^{n-3}=\frac{2k}{B_{k}}\left(3^{k-1}+1\right)\left(\left(\frac{3}{2}\right)^{k-1}\frac{1+3^{1-k}}{1+2^{1-k}}\right)^{n-3}.$
Using Lemma 1 we obtain $\sigma_{k-1}\left(n\right)<\frac{k-1}{k-2}n^{k-1}$.
Since $k\geq 6$ by Bernoulli’s inequality
$\frac{k-1}{k-2}\leq\frac{5}{4}=1+\frac{1}{4}<\left(1+\frac{1}{20}\right)^{5}\leq\left(\frac{21}{20}\right)^{k-1}$.
Therefore
$\sqrt[k-1]{\frac{B_{k}}{2k}\varepsilon_{k}\left(n\right)}=\sqrt[k-1]{\sigma_{k-1}\left(n\right)}<\sqrt[k-1]{\frac{k-1}{k-2}n^{k-1}}<\frac{21}{20}n.$
Using Lemma 2 implies
$\sqrt[k-1]{\frac{B_{k}}{2k}\delta_{k}\left(n\right)}>2.98\left(\frac{3}{2}\right)^{n-3}$.
Now $\frac{21}{20}n<2.98\left(\frac{3}{2}\right)^{n-3}$ for $n\geq 4$ as
$4.2<4.47$ for $n=4$ and $\frac{n}{n-1}<\frac{3}{2}$ for $n>4$.
We have shown $\varepsilon_{k}\left(n\right)=\delta_{k}\left(n\right)$ for
$n\in\left\\{1,2\right\\}$ and
$\varepsilon_{k}\left(n\right)\leq\delta_{k}\left(n\right)$ for all $n\geq 3$.
Let now
$\frac{1-a_{k}q}{1-b_{k}q-c_{k}q^{2}}=\sum_{n=0}^{\infty}\gamma_{k}\left(n\right)q^{n}$.
Then $\beta_{k}\left(n\right)=\gamma_{k}\left(n\right)$ for
$n\in\left\\{1,2\right\\}$ and by induction
$\gamma_{k}\left(n\right)=\sum_{j=1}^{n}\delta_{k}\left(j\right)\gamma_{k}\left(n-j\right)\geq\sum_{j=1}^{n}\varepsilon_{k}\left(j\right)\beta_{k}\left(n-j\right)=\beta_{k}\left(n\right)$
for $n\geq 3$.
We have shown
$\alpha_{k}\left(n\right)\leq\beta_{k}\left(n\right)\leq\gamma_{k}\left(n\right)$
for all $n\geq 1$. From the generating functions we can now determine formulas
for $\alpha_{k}\left(n\right)$ and $\gamma_{k}\left(n\right)$. The
characteristic equation for $\alpha_{k}\left(n\right)$ is
$\lambda_{k}^{2}-\varepsilon_{k}\left(1\right)\lambda_{k}-\varepsilon_{k}\left(2\right)=0$.
Let
$\Delta_{k}=\varepsilon_{k}\left(1\right)^{2}+4\varepsilon_{k}\left(2\right)=\left(\frac{2k}{B_{k}}\right)^{2}+\frac{8k}{B_{k}}\left(2^{k-1}+1\right)$.
Then
$\lambda_{k,\pm}=\frac{1}{2}\left(\varepsilon_{k}\left(1\right)\pm\sqrt{\Delta_{k}}\right)$.
We obtain $\left(\begin{array}[]{c}L_{k,+}\\\
L_{k,-}\end{array}\right)=\left(\begin{array}[]{cc}1&1\\\
\lambda_{k,+}&\lambda_{k,-}\end{array}\right)^{-1}\left(\begin{array}[]{c}1\\\
\varepsilon_{k}\left(1\right)\end{array}\right)=\frac{1}{\lambda_{k,+}-\lambda_{k,-}}\left(\begin{array}[]{c}\varepsilon_{k}\left(1\right)-\lambda_{k,-}\\\
\lambda_{k,+}-\varepsilon_{k}\left(1\right)\end{array}\right)=\frac{1}{\sqrt{\Delta_{k}}}\left(\begin{array}[]{c}\lambda_{k,+}\\\
-\lambda_{k,-}\end{array}\right)$. Therefore,
$\alpha_{k}\left(n\right)=L_{k,+}\lambda_{k,+}^{n}+L_{k,-}\lambda_{k,-}^{n}=\frac{\lambda_{k,+}^{n+1}-\lambda_{k,-}^{n+1}}{\sqrt{\Delta_{k}}}$
for all $n$.
The characteristic equation for $\gamma_{k}\left(n\right)$ is
$\mu_{k}^{2}-b_{k}\mu_{k}-c_{k}=0$. Let $D_{k}=b_{k}^{2}+4c_{k}$. Then
$\mu_{k,\pm}=\frac{1}{2}\left(b_{k}\pm\sqrt{D_{k}}\right)$,
$\left(\begin{array}[]{c}M_{k,+}\\\
M_{k,-}\end{array}\right)=\left(\begin{array}[]{cc}1&1\\\
\mu_{k,+}&\mu_{k,-}\end{array}\right)^{-1}\left(\begin{array}[]{c}1\\\
\varepsilon_{k}\left(1\right)\end{array}\right)=\frac{1}{\sqrt{D_{k}}}\left(\begin{array}[]{c}\varepsilon_{k}\left(1\right)-\mu_{k,-}\\\
\mu_{k,+}-\varepsilon_{k}\left(1\right)\end{array}\right),$
and $\gamma_{k}\left(n\right)=M_{k,+}\mu_{k,+}^{n}+M_{k,-}\mu_{k,-}^{n}$. ∎
###### Example (Slight improvement of [HR18B]).
Let $k=6$. Then
$\displaystyle\alpha_{6}\left(n\right)$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{320544}}\left(\left(\frac{504+\sqrt{320544}}{2}\right)^{n+1}-\left(\frac{504-\sqrt{320544}}{2}\right)^{n+1}\right)$
$\displaystyle\approx$
$\displaystyle\frac{1}{566.16}\left(535.08^{n+1}-\left(-31.083\right)^{n+1}\right).$
With $x_{0}=\frac{12}{B_{6}}=504$, $a_{6}=\frac{244}{33}$,
$b_{6}=\frac{16876}{33}$, $c_{6}=\frac{141960}{11}$,
$D_{6}=\frac{341015536}{1089}$ and $\sqrt{D_{6}}\approx 559.59$ we obtain
$\mu_{6,\pm}=\frac{b_{6}\pm\sqrt{D_{6}}}{2}$,
$M_{6,+}=\frac{1}{\sqrt{D_{6}}}\left(x_{0}-\frac{b_{6}-\sqrt{D_{6}}}{2}\right),\qquad
M_{6,-}=\frac{1}{\sqrt{D_{6}}}\left(\frac{b_{6}+\sqrt{D_{6}}}{2}-x_{0}\right).$
By (16) this finally yields
$\gamma_{6}\left(n\right)=M_{6,+}\mu_{6,+}^{n}+M_{6,-}\mu_{6,-}^{n}\approx\frac{528.10\cdot
535.49^{n}+31.494\cdot\left(-24.100\right)^{n}}{559.59}.$
The second and the last columns in Table 4 contain the lower and upper bounds from [HR18B].
$\begin{array}{|c||c|c|c|c|c|}
\hline
n & \frac{535^{n+1}-\left(-31\right)^{n+1}}{566} & \alpha_{6}\left(n\right) & \beta_{6}\left(n\right) & \gamma_{6}\left(n\right) & \frac{352\cdot 535.5^{n}+21\left(-24\right)^{n}}{373}\\
\hline\hline
1 & 5.0400\cdot 10^{2} & 5.0400\cdot 10^{2} & 5.0400\cdot 10^{2} & 5.0400\cdot 10^{2} & 5.0400\cdot 10^{2}\\
\hline
2 & 2.7060\cdot 10^{5} & 2.7065\cdot 10^{5} & 2.7065\cdot 10^{5} & 2.7065\cdot 10^{5} & 2.7065\cdot 10^{5}\\
\hline
3 & 1.4474\cdot 10^{8} & 1.4479\cdot 10^{8} & 1.4491\cdot 10^{8} & 1.4491\cdot 10^{8} & 1.4491\cdot 10^{8}\\
\hline
4 & 7.7438\cdot 10^{10} & 7.7475\cdot 10^{10} & 7.7600\cdot 10^{10} & 7.7600\cdot 10^{10} & 7.7602\cdot 10^{10}\\
\hline
5 & 4.1429\cdot 10^{13} & 4.1456\cdot 10^{13} & 4.1554\cdot 10^{13} & 4.1554\cdot 10^{13} & 4.1556\cdot 10^{13}\\
\hline
6 & 2.2165\cdot 10^{16} & 2.2182\cdot 10^{16} & 2.2252\cdot 10^{16} & 2.2252\cdot 10^{16} & 2.2253\cdot 10^{16}\\
\hline
7 & 1.1858\cdot 10^{19} & 1.1869\cdot 10^{19} & 1.1916\cdot 10^{19} & 1.1916\cdot 10^{19} & 1.1917\cdot 10^{19}\\
\hline
8 & 6.3441\cdot 10^{21} & 6.3511\cdot 10^{21} & 6.3807\cdot 10^{21} & 6.3809\cdot 10^{21} & 6.3813\cdot 10^{21}\\
\hline
9 & 3.3941\cdot 10^{24} & 3.3983\cdot 10^{24} & 3.4168\cdot 10^{24} & 3.4169\cdot 10^{24} & 3.4172\cdot 10^{24}\\
\hline
\end{array}$

Table 4. Improvement of upper and lower bounds
(approximation) for $\beta_{6}(n)$.
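As a sanity check, the closed forms for $\alpha_{6}(n)$ and $\gamma_{6}(n)$ stated in the Example above can be evaluated numerically and compared against Table 4. A minimal Python sketch (floating-point only, so agreement is to a few digits):

```python
import math

# Closed form alpha_6(n) = (lambda_+^{n+1} - lambda_-^{n+1}) / sqrt(Delta_6)
Delta6 = 320544.0
lp = (504 + math.sqrt(Delta6)) / 2          # lambda_{6,+}
lm = (504 - math.sqrt(Delta6)) / 2          # lambda_{6,-}

def alpha6(n):
    return (lp ** (n + 1) - lm ** (n + 1)) / math.sqrt(Delta6)

# Closed form gamma_6(n) = M_+ mu_+^n + M_- mu_-^n with the constants above
b6 = 16876 / 33
D6 = 341015536 / 1089
mp = (b6 + math.sqrt(D6)) / 2               # mu_{6,+}
mm = (b6 - math.sqrt(D6)) / 2               # mu_{6,-}
x0 = 504.0
Mp = (x0 - mm) / math.sqrt(D6)
Mm = (mp - x0) / math.sqrt(D6)

def gamma6(n):
    return Mp * mp ** n + Mm * mm ** n

# Compare against Table 4, row n = 2: both values are ~2.7065e5,
# and alpha_6 is the lower, gamma_6 the upper bound for beta_6(n).
assert abs(alpha6(2) - 2.7065e5) / 2.7065e5 < 1e-3
assert abs(gamma6(2) - 2.7065e5) / 2.7065e5 < 1e-3
assert all(alpha6(n) <= gamma6(n) for n in range(1, 10))
```

At $n=1$ both closed forms collapse to the exact value $504$, matching the first row of the table.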
### 3.3. Proof of Theorem 3
For the special case of $k=4$ we refer to a result of [HN20B]. We have proven
that $(-1)^{n}\beta_{4}(n)\in 240\,\mathbb{N}$ for all $n\in\mathbb{N}$ (see
also [AKN97], last section, for an announcement of the result on strict sign
changes). We are mainly interested in the implication $\beta_{4}(n)\neq 0$.
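The divisibility and alternating-sign claims for $\beta_{4}(n)$ can be checked numerically by inverting the standard $q$-expansion $\mathrm{E}_{4}(q)=1+240\sum_{n\geq 1}\sigma_{3}(n)\,q^{n}$. A short Python sketch in exact integer arithmetic (the helper names are our own):

```python
def sigma3(n):
    # sum of cubes of the divisors of n
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

def beta4(N):
    # Coefficients b(n) of 1/E_4 where E_4(q) = 1 + 240 * sum sigma_3(n) q^n.
    a = [1] + [240 * sigma3(n) for n in range(1, N + 1)]
    b = [1]  # b(0) = 1 since a(0) = 1
    for n in range(1, N + 1):
        # from sum_{j=0}^{n} a(j) b(n-j) = 0 for n >= 1
        b.append(-sum(a[j] * b[n - j] for j in range(1, n + 1)))
    return b

b = beta4(30)
# (-1)^n beta_4(n) should be a positive multiple of 240 for all n >= 1
assert all(((-1) ** n) * b[n] > 0 and b[n] % 240 == 0 for n in range(1, 31))
print(b[:4])  # [1, -240, 55440, -12793920]
```

The first coefficients $1,-240,55440,-12793920$ indeed alternate in sign and are divisible by $240$.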
###### Proof of Theorem 3.
Let $k=4$. Then $z_{4}=\rho$ and $J(z_{4})=\rho-1$. This implies that
$1/\mathrm{E}_{4}(q)=\sum_{n=0}^{\infty}\beta_{4}(n)\,q^{n}$ has $|q_{\rho}|$
as the radius of convergence. Further, the only singularity on the circle of
convergence is given by the pole $q_{\rho}$. Now we can proceed as in the
proof of Theorem 1 and obtain the asymptotic expansion of $\beta_{4}(n)$. Here
we use the fact that $\mathop{\rm res}_{q_{\rho}}\mathrm{E}_{4}^{-1}$ is equal
to
(35)
$\frac{q_{\rho}}{\Theta\left(\mathrm{E}_{4}\right)\left(q_{\rho}\right)}=\frac{-3\,q_{\rho}}{\mathrm{E}_{6}(q_{\rho})}.$
∎
### 3.4. Proof of Theorem 4 and Theorem 5
###### Proof of Theorem 4.
Let $k\equiv 0\pmod{4}$. We are interested in the zeros of $E_{k}$ which
contribute to poles on the circle of convergence of the power series
(36) $\frac{1}{\mathrm{E}_{k}(q)}=\sum_{n=0}^{\infty}\beta_{k}(n)\,q^{n}.$
Let $k\geq 12$ then Proposition 2 and Corollary 2 imply that there are exactly
two singularities provided by the two poles at $q_{z_{k}}$ and
$\overline{q}_{z_{k}}$. This implies that the radius of convergence is equal
to $\left|q_{z_{k}}\right|$. Here we also used the well-known fact that the
imaginary part of $\gamma(z)$, for $\gamma$ in the modular group and $z$
in the fundamental domain, does not increase. Next we consider the Laurent
expansion of $1/\mathrm{E}_{k}(q)$ around $q_{z_{k}}$. We subtract the
principal part from $1/\mathrm{E}_{k}(q)$ and obtain a holomorphic function at
$q_{z_{k}}$. We iterate this procedure and consider the Laurent expansion
around the other pole $\overline{q}_{z_{k}}$ and subtract again the principal
part. Note that we have poles of order one. This implies that
(37) $\frac{1}{\mathrm{E}_{k}\left(q\right)}-\frac{\mathop{\rm
res}_{q_{z_{k}}}\mathrm{E}_{k}^{-1}}{q-q_{z_{k}}}-\frac{\mathop{\rm
res}_{\overline{q}_{z_{k}}}\,\,\mathrm{E}_{k}^{-1}}{q-\overline{q}_{z_{k}}}$
has a holomorphic expansion $\sum_{n=0}^{\infty}b(n)\,q^{n}$, with a radius of
convergence larger than $|q_{z_{k}}|=\left|\overline{q_{z_{k}}}\right|$. This
implies that $b(n)q_{z_{k}}^{n}$ and $b(n)\overline{q}_{z_{k}}^{n}$ are
null sequences, that is, they tend to zero. The residue values can be expressed by
$\Theta(\mathrm{E}_{k})$ evaluated at the poles. This leads to an expression
which allows in the final formula the number $q_{z_{k}}^{-n}$ to appear
instead of $q_{z_{k}}^{-(n+1)}$. See also the proof of Theorem 1. By the
identity principle $b(n)$ is equal to
(38)
$\beta_{k}(n)+\frac{1}{\Theta\left(\mathrm{E}_{k}\right)(q_{z_{k}})}\,q_{z_{k}}^{-n}+\frac{1}{\Theta\left(\mathrm{E}_{k}\right)(\overline{q}_{z_{k}})}\,\overline{q}_{z_{k}}^{-n}.$
This implies that
$\sum_{n=0}^{\infty}\left(\beta_{k}(n)+\frac{1}{\Theta\left(\mathrm{E}_{k}\right)(q_{z_{k}})}\,q_{z_{k}}^{-n}+\frac{1}{\Theta\left(\mathrm{E}_{k}\right)(\overline{q}_{z_{k}})}\,\overline{q}_{z_{k}}^{-n}\right)q^{n}=\sum_{n=0}^{\infty}b(n)q_{z_{k}}^{n}\,\left(\frac{q}{{q}_{z_{k}}}\right)^{n}$
for $q\in\mathbb{C}$ and $\left|q\right|<|q_{z_{k}}|$. Let $w=q/q_{z_{k}}$.
Then
$\sum_{n=0}^{\infty}\left(\beta_{k}(n)q_{z_{k}}^{n}+\frac{1}{\Theta\left(\mathrm{E}_{k}\right)(q_{z_{k}})}+\frac{1}{\Theta\left(\mathrm{E}_{k}\right)(\overline{q}_{z_{k}})}\,\left(\frac{q_{z_{k}}}{\overline{q}_{z_{k}}}\right)^{n}\right)w^{n}=\sum_{n=0}^{\infty}b(n)q_{z_{k}}^{n}\,w^{n}.$
In the final step we compare the coefficients with respect to $w^{n}$ and use
the identity principle for regular power series. Since $b(n)\,q_{z_{k}}^{n}$
is a null sequence, the claim of the theorem follows. ∎
###### Proof of Theorem 5.
Let $k\equiv 0\pmod{4}$ and $k\geq 12$. Let $z_{k}=x_{k}+iy_{k}$ be the zero
of $E_{k}$ in $\mathbb{F}$ with the largest imaginary part. Then $z_{k}\neq
i,\rho$. This implies, by results of Kanou [K00] and Kohnen [K03], that $z_{k}$
is transcendental. Since $z_{k}$ lies on the unit circle, we can
conclude that $x_{k}$ and $y_{k}$ are also transcendental. By a well-known
result by Kronecker [K84], since $x_{k}$ is irrational, the orbit
(39)
$\mathbb{O}_{k}:=\left\\{\left(\frac{q_{z_{k}}}{\overline{q}_{z_{k}}}\right)^{n}\,:n\in\mathbb{N}\right\\}$
is dense in $\left\\{w=e^{2\pi i\alpha}\,:\,\alpha\in[0,1)\right\\}$. Let
$C_{k}:=1/\Theta(\mathrm{E}_{k})(q_{z_{k}})$. Since
(40) $\overline{C}_{k}=1/\Theta(\mathrm{E}_{k})(\overline{q}_{z_{k}}),$
we obtain that the closure of the set
(41)
$D_{k}:=\left\{\frac{1}{\Theta(\mathrm{E}_{k})(q_{z_{k}})}+\frac{1}{\Theta(\mathrm{E}_{k})(\overline{q}_{z_{k}})}\left(\frac{q_{z_{k}}}{\overline{q}_{z_{k}}}\right)^{n}\,:\,n\in\mathbb{N}\right\}$
is the circle with center $C_{k}$ and radius $|C_{k}|$:
(42) $\partial
B_{|C_{k}|}(C_{k})=\Big{\\{}z\in\mathbb{C}\,:\,|z-C_{k}|=|C_{k}|\Big{\\}}.$
We note that $0$ and $2\,C_{k}$ are not elements of $D_{k}$. Let
$d_{k}\in\partial B_{|C_{k}|}(C_{k})\setminus\left\\{0\right\\}$. Then there
exists a subsequence $\left\{n_{t}\right\}_{t=1}^{\infty}$ of
$\{n\}_{n=1}^{\infty}$ such that
(43)
$\lim_{t\to\infty}\frac{1}{\Theta(\mathrm{E}_{k})(q_{z_{k}})}+\frac{1}{\Theta(\mathrm{E}_{k})(\overline{q}_{z_{k}})}\left(\frac{q_{z_{k}}}{\overline{q}_{z_{k}}}\right)^{n_{t}}=d_{k}.$
Combining this result with Theorem 4 proves the claim. ∎
### 3.5. Proof of Theorem 6 and Corollary 3
We recall a result from complex analysis. Pólya and Szegő recorded the
following beautiful property ([PS78], Part Three, Chapter 5). Let
$f(x)=\sum_{n=0}^{\infty}a(n)\,x^{n}$ be a power series with radius of
convergence $0<r<\infty$ and real coefficients. We assume that we have only
two singularities on the circle of convergence and that these two
singularities are poles: $x_{1}=re^{i\alpha}$ and $x_{2}=re^{-i\alpha}$ with
$0<\alpha<\pi$. Let $A(n)$ denote the number of changes of sign in the
sequence $\\{a(m)\\}_{m=0}^{n}$. Then
$\lim_{n\to\infty}\frac{A(n)}{n}=\frac{\alpha}{\pi}$. The number of sign
changes of a sequence of real numbers is counted after removing all zero
terms. Results in this direction had also been given by König [K75] in 1875.
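The Pólya–Szegő limit can be illustrated with a toy series whose only singularities are two conjugate simple poles. The choice below is ours, not from the text: $a(n)=\sin((n+1)\alpha)$ are, up to a constant, the coefficients of $1/(1-2\cos(\alpha)x+x^{2})$, whose poles sit at $e^{\pm i\alpha}$, so $A(n)/n\to\alpha/\pi$.

```python
import math

def sign_changes(seq, tol=0.0):
    # Count sign changes after removing (near-)zero terms.
    signs = [1 if x > tol else -1 for x in seq if abs(x) > tol]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

alpha = 1.0                         # irrational multiple of pi: no exact zeros
N = 200000
a = [math.sin((n + 1) * alpha) for n in range(N)]
ratio = sign_changes(a) / N
# The proportion of sign changes converges to alpha/pi ~ 0.31831
assert abs(ratio - alpha / math.pi) < 1e-3
```

The same counting routine, applied to the coefficients of $1/\mathrm{E}_{16}$, would reproduce the values $A_{16}(n)/n\approx 0.393\approx 2x_{16}$ shown in Table 5.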
###### Proof of Theorem 6, part a).
Let $k\equiv 0\pmod{4}$. Then
$1/\mathrm{E}_{k}(q)=\sum_{n=0}^{\infty}\beta_{k}(n)\,q^{n}$ has a radius of
convergence $|q_{z_{k}}|$, where $z_{k}=x_{k}+iy_{k}$ is the zero of $E_{k}$
with the largest imaginary part with $0<x_{k}<1/2$. We stated already that
$q_{z_{k}}$ and $\overline{q}_{z_{k}}$ are the single two singularities on the
circle of convergence. Note that $q_{z_{k}}=r_{k}\cdot e^{2\pi ix_{k}}$, where
$r_{k}=e^{-2\pi y_{k}}=|q_{z_{k}}|$. Further, $\overline{q}_{z_{k}}=r_{k}\cdot
e^{-2\pi ix_{k}}$. Thus all assumptions are fulfilled to apply the result
cited above with $A(n)=A_{k}(n)$ and $\alpha=2\pi x_{k}$, which gives the
limit $\alpha/\pi=2x_{k}$. ∎
###### Example.
We have $z_{16}\approx 0.196527+0.980498\,i$. See Table 5 for values
$A_{16}\left(n\right)/n$.
$\begin{array}{|r||r|r|r|}
\hline
n & \frac{A_{16}\left(n\right)}{n}\approx & \frac{A_{16}\left(10n\right)}{10n}\approx & \frac{A_{16}\left(100n\right)}{100n}\approx\\
\hline\hline
2 & 0.50000000 & 0.40000000 & 0.39500000\\
\hline
3 & 0.33333333 & 0.40000000 & 0.39333333\\
\hline
4 & 0.50000000 & 0.40000000 & 0.39250000\\
\hline
5 & 0.40000000 & 0.40000000 & 0.39400000\\
\hline
6 & 0.33333333 & 0.40000000 & 0.39333333\\
\hline
7 & 0.42857143 & 0.40000000 & 0.39285714\\
\hline
8 & 0.37500000 & 0.40000000 & 0.39375000\\
\hline
9 & 0.44444444 & 0.38888889 & 0.39333333\\
\hline
10 & 0.40000000 & 0.39000000 & 0.39300000\\
\hline
\end{array}$

Table 5. Proportion of sign changes for $k=16$.
We also recall another interesting result stated in [PS78] (Part Three,
Chapter 5). Let $f(x)=\sum_{n=0}^{\infty}a(n)\,x^{n}$ be a power series with
finite positive radius of convergence. We assume that there are only poles on
the circle of convergence. Let $B(n)$ be the number of non-zero coefficients
among the first $n$ coefficients $\\{a(m)\\}_{m=0}^{n-1}$. Then the number of
poles is not smaller than
(44) $\limsup_{n\to\infty}\frac{n}{B(n)}.$
###### Proof of Theorem 6, part b).
The number of poles is $2$. Thus, by the result above, two is an upper bound
for the term (44), which completes the proof. ∎
###### Example.
We have $B_{12}(n)=B_{16}(n)=B_{20}(n)=n$ for $n\leq 1000$.
###### Proof of Corollary 3.
From Theorem 6 we obtain
(45) $\lim_{n\to\infty}\frac{A_{4\ell}(n)}{n}=2\,x_{4\ell},$
where $x_{4\ell}$ is the real part of the zero of $E_{4\ell}$ with the largest
imaginary part. Finally, from Corollary 2 the claim follows, since $x_{4\ell}$
tends to zero. ∎
###### Acknowledgments.
To be entered later.
## References
* [AKN97] T. Asai, M. Kaneko, H. Ninomiya: Zeros of certain modular functions and an application. Commentarii Mathematici Univ. Sancti Pauli 46 No. 1 (1997), 93–101.
* [BB05] B. Berndt, P. Bialek: On the power series coefficients of certain quotients of Eisenstein series. Trans. American Math. Society 357 No. 11 (2005), 4379–4412.
* [BBY02] B. Berndt, P. Bialek, A. Yee: Formulas of Ramanujan for the power series coefficients of certain quotients of Eisenstein series. Int. Math. Res. Not. 21 (2002), 1077–1109.
* [BFOR17] K. Bringmann, A. Folsom, K. Ono, L. Rolen: Harmonic Maass Forms and Mock Modular Forms: Theory and Applications. Colloq. Publ., Amer. Math. Soc. 64 (2017).
* [BK17] K. Bringmann, B. Kane: Ramanujan and coefficients of meromorphic modular forms. J. Math. Pures Appl. 107 (2017), 100–122.
* [BKO04] J. Bruinier, W. Kohnen, K. Ono: The arithmetic of the values of modular functions and the divisors of modular forms. Compositio Math. 140 (2004), 552–566.
* [HR18A] G. Hardy, S. Ramanujan: Asymptotic formulae in combinatory analysis. Proc. London Math. Soc. (2) 17 (1918), 75–118.
* [HR18B] G. Hardy, S. Ramanujan: On the coefficients in the expansion of certain modular functions. Proc. R. Soc. Lond. A 95 (1918), 144–155.
* [HN20B] B. Heim, M. Neuhauser: Polynomials and reciprocals of Eisenstein series. Int. Journal of Number Theory 15 No. 6 (2020), pp. 11.
* [K00] N. Kanou: Transcendency of zeros of Eisenstein series. Proc. Japan Acad., Ser. A 76 No. 5 (2000), 51–54.
* [K03] W. Kohnen: Transcendence of zeros of Eisenstein series and other modular functions. Commentarii Mathematici Univ. Sancti Pauli 52 No. 1 (2003), 55–57.
* [K04] W. Kohnen: Zeros of Eisenstein series. Kyushu J. Math. 58 (2004), 251–256.
* [K75] J. König: Ein allgemeiner Ausdruck für die ihrem absoluten Betrage nach kleinste Wurzel der Gleichung $n$ten Grades. Math. Ann. 9 (1875), 530–540.
* [K84] L. Kronecker: Näherungsweise ganzzahlige Auflösung linearer Gleichungen. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften (1884), 1179–1193.
* [O03] K. Ono: The Web of Modularity: Arithmetic of the Coefficients of Modular Forms and q-series. CBMS Regional Conference Series in Mathematics 102, American Mathematical Society, Providence, RI (2004).
* [P50] H. Petersson: Konstruktion der Modulformen und der zu gewissen Grenzkreisgruppen gehörigen automorphen Formen von positiver reeller Dimension und die vollständige Bestimmung ihrer Fourierkoeffizienten. S.-B. Heidelb. Akad. Wiss. Math.-Nat. Kl. (1950), 415–474.
* [PS78] G. Pólya, G. Szegő: Problems and Theorems in Analysis I. Classics in Mathematics, Springer (1978).
* [R16] S. Ramanujan: On certain arithmetical functions. Trans. Cambridge Philos. Soc. 22 (1916), 159–184. In: G. H. Hardy, P. V. Seshu Aiyar, B. M. Wilson (eds.) Collected Papers of Srinivasa Ramanujan. AMS Chelsea Publishing, American Mathematical Society, Providence, RI (2000), 136–162.
* [RS70] F. K. C. Rankin, H. P. F. Swinnerton-Dyer: On the zeros of Eisenstein series. Bull. London Math. Soc. 2 (1970), 169–170.
* [W08] M. Waldschmidt: Elliptic functions and transcendence. In: K. Alladi (ed.) Surveys in Number Theory. Developments in Mathematics 17, Springer (2008), 143–188.
# Characterising heavy-tailed networks using q-generalised entropy and
q-adjacency kernels
Ismo T. Koponen (corresponding author,<EMAIL_ADDRESS>), Elina Palmgren, Esko Keski-Vakkuri
Department of Physics, P.O. Box 64, FI-00014 University of Helsinki, Finland
###### Abstract
Heavy-tailed networks, which have degree distributions characterised by slower
than exponentially bounded tails, are common in many different situations.
Some interesting cases, where heavy tails are characterised by inverse powers
$\lambda$ in the range $1<\lambda<2,$ arise for associative knowledge
networks, and semantic and linguistic networks. In these cases, the
differences between the networks are often delicate, calling for robust
methods to characterise the differences. Here, we introduce a method for
comparing networks using a density matrix based on q-generalised adjacency
matrix kernels. It is shown that comparison of networks can then be performed
using the q-generalised Kullback-Leibler divergence. In addition, the
q-generalised divergence can be interpreted as a q-generalised free energy,
which enables the thermodynamic-like macroscopic description of the heavy-
tailed networks. The viability of the q-generalised adjacency kernels and the
thermodynamic-like description in characterisation of complex networks is
demonstrated using a simulated set of networks, which are modular and heavy-
tailed with a degree distribution of inverse power law in the range
$1<\lambda<2$.
###### keywords:
Heavy-tailed networks, adjacency kernels, q-entropy, generalised thermodynamics
## 1 Introduction
When associative knowledge or associations between words and terms are
represented as networks, the networks are often found to be heavy-tailed [1,
2, 3]. Heavy-tailed means here that the low-frequency trailing edge of the
distribution of node degrees $d$ is not exponentially bounded but instead
exhibits an inverse power-law type of decay $P(d)\propto d^{-\lambda}$ with
$\lambda\in\,]1,3]$. In addition, such networks may also have a clear modular
or community structure, originating, for example, from the thematic content of
nodes [4]. For example, networks representing students’ associative knowledge
have distributions of degree and Katz centralities which are heavy-tailed and
can be fitted with inverse power laws with exponents in the range of 1 to 2
[1, 2]. The heavy tails with such values of $\lambda$ also appear to be
ubiquitous in word frequency, ranking distributions and many linguistic
networks [3, 5]. Powers from 1.5 to 2 are encountered in information and
knowledge networks like Wikipedia and in semantic networks resulting from
acquiring knowledge (terms) from large knowledge networks [5, 6].
The research on associative networks often focuses on finding individual key
items in the networks and their connectivities. Communicability centrality, as
introduced by Estrada, and the closely related Katz centrality have been
useful in exploring the connectivities of the nodes through long paths (or
walks) [1, 2]. In practice, the real networks in such applications are always
quite small (about 1000-1500 nodes), and centrality measures usually have very
large statistical variation. Therefore, it would be advantageous to seek
descriptions that use all the available information about the network states.
However, taking the networks as holistic systems, that is, as connected sets
of items, requires different approaches based on the description of the
networks on macro-level [7, 8]. For this, matrix transformations based on a
complete adjacency matrix that describes the network are a promising starting
point. For example, the communicability centrality is related to a matrix
transformation which is obtained as an exponential transformation, while Katz
centrality is related to Neumann transformation [9, 10]. Here, we introduce a
q-generalised [11, 12, 13] matrix transformation (called a q-adjacency kernel
in what follows), which interpolates between these two well-known
transformations. It is shown that the q-adjacency kernel provides a starting
point for a thermodynamic-like macroscopic description of heavy-tailed
networks and allows us to define a quantity that can be interpreted as the
q-generalised free energy of a network.
The q-generalised free energy for a heavy-tailed network is obtained here by a
derivation based on q-generalised information theoretic entropy. The
information theoretic entropic measures have recently found many applications
in the characterisation of complex networks, multilayer and multiplex networks
[7, 8, 14, 15]. For example, the Jensen-Shannon divergence, which can be seen
as a symmetrised and regularised form of the Kullback-Leibler divergence, has
proved to be a flexible and robust information theoretic entropic measure for
comparison of the networks, because it is symmetric, bounded and it provides
connection points to several related divergence measures within statistical
physics [14, 15]. The quantum version of the Jensen-Shannon divergence can be
utilised effectively to measure the similarity between networks, as based on
quantum walks [16]. Another class of information theoretic entropies is
provided by Rényi and Tsallis entropies [8, 11, 17, 18] which are closely
related classes of entropies. These entropies belong to a class of
q-generalised information theoretic measures, which have recently found
applications in many complex systems, quantum systems and entangled systems
[8]. The Rényi and Tsallis entropies form a family of information theoretic
measures that generalise the Shannon entropy, and can be used as measures of
mutual information and relative entropies (see e.g. [17]). Regarding the
characterisation of heavy-tailed networks, the generalised q-entropies have
the advantage of providing a parameter (q-index) to tune for specific
properties of heavy-tailed networks, either for the low or high frequency part
of the distributions, thus allowing one to extract information about the
desired properties of the distribution [19, 20, 21]. Consequently, the Tsallis and
Rényi entropies have found applications in linguistic and semantic networks
[19] as well as in comparisons of texts based on word frequency occurrence
[20, 21]; all of these are cases where heavy-tailed networks are of interest.
Here, we show that the Tsallis entropy and the q-generalised Kullback-Leibler
(termed Kullback-Leibler-Tsallis in what follows) divergence [22, 23] (see
also [24, 25]) are particularly suitable as a starting point to derive a
q-generalised free energy to characterise the heavy-tailed network, thus
providing access to a macro-level, thermodynamic-like description of the
heavy-tailed networks.
To test the applicability of q-adjacency kernels for networks that are heavy-
tailed and modular, such as associative knowledge networks, we use a recently
introduced generative model to produce networks that resemble the empirical
networks [26]. The minimal model to generate the networks is introduced first,
in section 2. In section 3 we discuss in detail how q-generalised adjacency
kernels are obtained, and how the thermodynamic-like interpretation is based
on the q-generalised Kullback-Leibler-Tsallis divergence. Section 3 ends with
a discussion of how the validity of the derived theoretical results can be
tested using ensembles of simulated heavy-tailed networks. Section 4
provides the results showing that the heavy-tailed networks yield to the
thermodynamic-like macro-scale characterisation, based on the q-generalised
entropy and free energy. In addition, Appendix A contains a related result of
practical utility for the comparison of heavy-tailed networks. Finally, the
implications and the practical use of the results are discussed.
## 2 Minimal generative model and simulations
Many real networks have hub-like nodes so that their degree distributions can
be characterised as heavy-tailed distributions. Here, heavy tails refer to a
trailing edge of a distribution that decays considerably more slowly than that
of a normal or exponential distribution, so that the trailing edge resembles
to some degree the inverse power-law type distribution given as
$P(d)\propto d^{-\lambda^{\prime}},\>\>\>\lambda^{\prime}\in\,]1,3].$ (1)
The inverse power-law type distributions are used here as a model of heavy-
tailed distributions. However, it is not assumed that the heavy-tailed
distributions are genuinely inverse power-law or can be strictly characterised
as such, since this rarely seems to be the case [27]. Nevertheless, for
practical reasons, it is useful to consider the networks with heavy-tailed
distributions using the model of inverse power-laws, because many essential
characteristics of the networks are captured by that class of distributions
[28]. In addition to heavy tails, many real networks are also highly modular,
with the modularity emerging e.g. from content based features [4] or
thematisation [2]. Here, we focus on empirically interesting cases of
associative knowledge networks with $\lambda^{\prime}\in\,]1,2]$ with high
modularity $Q$ (as defined following Newman [29]) within range $0.8<Q<0.9$ [1,
2].
We utilise here the generative model to produce a set of heavy-tailed networks
that have a highly modular structure [26]. The model is based on generation of
affinities for a fixed set of nodes, with the nodes assigned to different,
pre-fixed classes. Here, we use parameters which generate networks of
approximately $N=1000$ connected nodes, with the inverse powers $\lambda$ of
the degree distributions taking the values $\lambda=1.3$, 1.5, 1.7 and 1.9. The
networks are generated using, for affinities $\pi_{k}$ of nodes, the
distribution
$P(\pi_{k})=P_{0}\;[1-(\lambda-1)\,\Lambda\pi_{k}]^{-1/(\lambda-1)},\;\;\;\lambda\in\,]1,2],\;\;\;\Lambda>0.$
(2)
It is interesting to note that Eq. (2) can also be written in the form of a
q-exponential $P(\pi_{k})/P_{0}={\rm exp}_{q}[\Lambda\pi_{k}]$, which is the
q-generalisation of the exponential function [11, 12, 13]. The parameter
$\lambda$ determines the inverse power of the resulting degree distribution of
the network, while $\Lambda$ controls the cut-off of affinities, and
$\pi_{k}<\Lambda(\lambda-1)$. The affinity distribution can be derived in
several ways and by using several parametrisations [26, 30, 31]. What is
essential here is that the affinity distribution in Eq. (2) can be used to
generate networks with the desired inverse power. Due to modularity, however,
the resulting inverse power of degree distribution is never exactly $\lambda$
but usually somewhat larger, $\lambda^{\prime}>\lambda$ [26].
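The q-exponential appearing in Eq. (2) can be made concrete with a scalar sketch. This is our own illustration, assuming the convention ${\rm exp}_{q}[x]=[1-(q-1)x]^{-1/(q-1)}$ used later in Eq. (5):

```python
import math

def exp_q(x, q):
    # q-generalised exponential: exp_q(x) = [1 - (q-1) x]^{-1/(q-1)}.
    # Reduces to the ordinary exponential in the limit q -> 1.
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    return (1.0 - (q - 1.0) * x) ** (-1.0 / (q - 1.0))

# q -> 1 recovers exp(x); q = 2 gives the geometric-series form 1/(1 - x).
assert abs(exp_q(0.5, 1.0 + 1e-8) - math.exp(0.5)) < 1e-6
assert abs(exp_q(0.5, 2.0) - 2.0) < 1e-12
```

The two endpoints of this interpolation, $q\to 1$ and $q\to 2$, reappear below as the exponential and Neumann adjacency kernels.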
In the simulations, only one nested modular structure is used, which is three-
tiered: the smallest modules contain $N'$ potential nodes, the second tier
contains three of these smaller modules and thus $3N'$ nodes, and the third
and highest tier contains three of the $3N'$-node modules, forming a set of
$N=3\times(3\times N')=9N'$ nodes in total. The connections between the $N$
nodes are established probabilistically, as governed by the affinity
distribution. Therefore, not all nodes in the nested tiers will be connected.
The parameters are chosen so that the resulting network has about $N=1000$
connected nodes, an inverse power in the range $1.3<\lambda<2.0$, and a
modularity in the range $0.8<Q<0.9$. The modularity of the networks is then
relaxed step by step by rewiring, until no further changes due to rewiring
are observed. In the rewiring, the
degree sequence of the nodes is preserved. We have shown that such affinity-
based linking of nodes with minimal other assumptions reproduces the
empirically discovered properties of associative knowledge networks quite
adequately [26].
The simulations to generate networks and their analysis are carried out in
Mathematica using the IGraph package [32]. IGraph provides functionality for
efficiently generating affinity-based networks: one simply provides the
affinities $\pi_{k}$ to the routine IGStaticFitnessGame. The output of
the routine is a network with a predetermined number of links, linked
according to the probabilities $\pi_{k}$ drawn from distribution $P(\pi_{k})$
in Eq. (2). The relaxation of modularity by rewiring is done using routine
IGRewire. For each network, the simulations provide the adjacency matrix ${\bf
A}$, which is then used to obtain the desired adjacency kernels to
characterise the networks. Next, we turn to how the adjacency kernels are
constructed and how the heavy-tailed networks can be characterised using them.
## 3 Thermodynamic-like description of networks with heavy tails
The thermodynamic-like description of the networks with heavy tails can be
constructed using a suitable generalisation of the matrix transforms that have
been used more traditionally to describe and characterise networks. It is
shown how these generalisations provide a basis for constructing the
thermodynamic-like description and a generalised free-energy-like quantity
for characterising the network. In this, the discussion closely follows
the study by Abe and Rajagopal [33], who derived the q-generalised
thermodynamics for q-generalised ensembles. The motivation of [33] was to
develop a non-extensive version of thermodynamic laws for finite systems where
the infinite volume thermodynamic limit is not applicable and surface effects
violate extensivity. Similar constraints apply to finite complex networks,
hence we are interested in exploring connections to non-extensive
thermodynamics by applying the results and approaches of [33] to networks with
heavy tails.
### 3.1 Adjacency kernels and q-kernels
On the most basic level, networks are described by the set of nodes (vertices)
and links (edges) between the nodes. This information is provided by the
network’s adjacency matrix ${\bf A}$, whose element $a_{ij}=[{\bf A}]_{ij}$
has the value 1 if the nodes $i$ and $j$ are connected. Otherwise its value is
0. From the adjacency matrix, many other relevant properties of the network
become available through matrix transformations. Following previous studies on
adjacency kernels [9, 10] we define a graph kernel as a symmetric and positive
semidefinite function which denotes a certain similarity or proximity between
two nodes in the graph. All such graph kernels of interest here are based on
the adjacency matrix ${\bf A}$ of the graph and are thus called adjacency
kernels. Two well-known adjacency kernels, the exponential and the Neumann
kernel, are related to path counting between nodes, thus providing information
on how the nodes are globally connected [9, 10].
The exponential kernel is one of the matrix transformations most widely used
to analyse networks. The exponential kernel is defined as [9, 10, 34, 35, 36,
37, 38]
${\rm exp}[\beta{\bf A}]={\bf I}+\frac{\beta{\bf A}}{1!}+\frac{\beta^{2}{\bf A}^{2}}{2!}+\frac{\beta^{3}{\bf A}^{3}}{3!}+\ldots,$ (3)
where matrix power ${\bf A}^{k}$ counts paths (walks) of length $k$. The
exponential adjacency kernel weights the paths according to their length $k$
by a weight factor $\beta^{k}$ and inverse of the factorial $k!$. Therefore,
connections with short paths gain more importance than those with long paths.
The diagonal elements of the exponential kernel are called the Estrada index
and the so-called communicability centrality is obtained as a row sum of the
off-diagonal elements [35]. The exponential kernel and the associated measures
(the Estrada index and the communicability centrality) perform well in finding
globally important nodes. In addition, the exponential kernel is
computationally very robust and stable, which is an advantage for analysis of
complex networks [34, 35, 36, 39].
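For a concrete illustration, the exponential kernel, the Estrada index and the communicability centrality can be computed for a small toy graph. The three-node path graph below is our own example, not from the text (numpy/scipy sketch):

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of the path graph 1-2-3 (hypothetical toy example).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
beta = 1.0
K = expm(beta * A)                             # exponential kernel, Eq. (3)

estrada_index = np.trace(K)                    # sum of diagonal elements
communicability = K.sum(axis=1) - np.diag(K)   # off-diagonal row sums

# The middle node is the best connected and hence the most central.
assert communicability[1] > communicability[0]
# The Estrada index equals the sum of e^{lambda_i} over eigenvalues of A.
w = np.linalg.eigvalsh(A)
assert abs(estrada_index - np.exp(w).sum()) < 1e-10
```

The second assertion reflects that, for a symmetric adjacency matrix, the kernel acts on the eigenvalues: ${\rm exp}[\beta{\bf A}]$ has eigenvalues $e^{\beta\lambda_{i}}$.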
The Neumann kernel [9, 10] is another well-known adjacency kernel with many
applications in characterising complex networks [40, 41]. The Neumann kernel
is given by [9, 10]
$[{\bf I}-\beta{\bf A}]^{-1}={\bf I}+\beta{\bf A}+\beta^{2}{\bf A}^{2}+\beta^{3}{\bf A}^{3}+\ldots,$ (4)
where the weight factor $\beta$ must be chosen so that $\beta^{-1}$ is larger
than the largest eigenvalue of matrix ${\bf A}$. The row sum of the off-
diagonal elements of the $i$th row in the Neumann kernel provides the Katz
centrality [40, 41] of node $i$, widely applied in social network analysis in
finding the influential nodes [42]. The Katz centrality has also been
successfully used in finding the central key nodes in heavy-tailed associative
networks, which are of interest here [2]. One disadvantage of the Katz
centrality is that it is difficult to optimise the weight factor $\beta$ to
maintain the stability of computation and the desired resolving power [10,
43]. On the basis of how the choice of parameters affects the behaviour of
the kernels and, consequently, the ranking of nodes, it seems advisable to
strive for the largest feasible value of the weight factor, but to check how
the results depend on this choice [10].
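A corresponding sketch for the Neumann kernel and the Katz centrality, again on a hypothetical three-node path graph of our own choosing:

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
# beta^{-1} must exceed the largest eigenvalue of A (here sqrt(2) ~ 1.414).
beta = 0.3
K = np.linalg.inv(np.eye(3) - beta * A)    # Neumann kernel, Eq. (4)

# Katz centrality: off-diagonal row sums of the Neumann kernel.
katz = K.sum(axis=1) - np.diag(K)
assert katz[1] > katz[0]                   # the middle node is most central
assert np.allclose(K, K.T)                 # symmetric kernel for symmetric A
```

Choosing $\beta$ closer to the inverse of the largest eigenvalue sharpens the ranking but degrades numerical stability, which is the trade-off noted above.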
The generalisation of the walk counting, which interpolates between the
exponential and Neumann kernels, is the q-generalised exponential kernel [33]
${\rm exp}_{q}[\beta{\bf A}]=[{\bf I}-(q-1)\beta{\bf
A}]^{-1/(q-1)},\>\>\>q\in\,]1,2]\>\>{\rm and}\>\>\beta>0,$ (5)
where $\beta$ must be chosen so that $\beta<1/B$ with $B$ being the largest
eigenvalue of matrix ${\bf A}$. The q-exponential adjacency kernel can be
expanded in series
$\displaystyle{\rm exp}_{q}[\beta{\bf A}]={\bf I}+\beta{\bf A}+\frac{q}{2!}\beta^{2}{\bf A}^{2}+\frac{2q^{2}-q}{3!}\beta^{3}{\bf A}^{3}+\ldots=\sum_{k}c_{k}(q)\,\beta^{k}{\bf A}^{k}$ (6)
$\displaystyle c_{k}(q)=(1-q)^{k}\binom{\frac{1}{1-q}}{k}=\frac{(1-q)^{k}}{k!}\left(\frac{1}{1-q}+1-k\right)_{k}$ (7)
where $\binom{\,\cdot\,}{\cdot}$ is the binomial coefficient and $(\cdot)_{k}$
is the Pochhammer ascending factorial [44]. In the limit $q\rightarrow 1$ the
q-adjacency kernel approaches the exponential kernel since $c_{k}\rightarrow
1/k!$ while for $q\rightarrow 2$ it approaches the Neumann kernel when
$c_{k}\rightarrow 1$. The q-adjacency kernel interpolates between walk
counting where walks of length $k$ are either weighted by $\beta^{k}$ (Neumann
kernel) only or much more heavily with $\beta^{k}/k!$ (exponential kernel).
The q-adjacency kernel is also a positive semidefinite matrix and has non-
negative eigenvalues when $\beta$ is smaller than the inverse of the largest
eigenvalue of the adjacency matrix. The q-adjacency kernel in Eq. (5) opens up
a path to two interesting generalisations. First, the q-kernel provides a
starting point for constructing the thermodynamic-like description of the
networks with a heavy-tailed degree distribution. Second, it leads to
q-generalisations of the Estrada-type communicability centrality, which is of
high practical utility in characterising heavy-tailed networks. However,
useful as it is, we have not found a thermodynamic-like interpretation for it.
Hence we have relegated the discussion of the generalised communicability
centrality to Appendix A. We proceed next to the details of the thermodynamic-
like interpretation of the kernels.
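As a sketch, the kernel of Eq. (5) and its two limiting cases can be verified numerically on a toy graph, assuming a symmetric adjacency matrix so that the matrix power can be taken through the eigendecomposition; the 4-cycle below is illustrative only.

```python
import numpy as np

def q_exp_kernel(A, beta, q):
    """q-generalised exponential kernel exp_q[beta*A] of Eq. (5),
    computed through the eigendecomposition of the symmetric A.
    For q = 1 the ordinary matrix exponential is returned."""
    w, V = np.linalg.eigh(A)
    if abs(q - 1.0) < 1e-12:
        f = np.exp(beta * w)
    else:
        f = (1.0 - (q - 1.0) * beta * w) ** (-1.0 / (q - 1.0))
    return (V * f) @ V.T

# Toy graph: a 4-cycle, largest adjacency eigenvalue B = 2
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
beta = 0.7 / np.max(np.linalg.eigvalsh(A))      # respects beta < 1/B

expm = q_exp_kernel(A, beta, 1.0)               # exponential kernel
neumann = np.linalg.inv(np.eye(4) - beta * A)   # Neumann kernel (I - beta*A)^-1

print(np.allclose(q_exp_kernel(A, beta, 1.0001), expm, atol=1e-3))  # q -> 1
print(np.allclose(q_exp_kernel(A, beta, 2.0), neumann))             # q -> 2
```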
### 3.2 Thermodynamic-like description of heavy-tailed networks
The heavy-tailed networks generated by simulations and based on the generic
model introduced in section 2 are described using the adjacency matrix ${\bf A}$. Using
the q-adjacency kernels we characterise the connectivity (or communicability)
properties of the network by defining the density matrix
$\boldsymbol{\rho}=Z_{q}^{-1}[{\bf I}-(q-1)\beta{\bf A}]^{-1/(q-1)}\equiv
Z_{q}^{-1}\exp_{q}(\beta{\bf A}),$ (8)
where $Z_{q}={\rm Tr}[{\bf I}-(q-1)\beta{\bf A}]^{-1/(q-1)}$ is the
q-generalised partition function (compare with [33]). The density matrix
provides a complete description of the network and, in that sense, is a
holistic measure for the characterisation of networks. Moreover, it can be
taken as a starting point for robust comparison of the structure and
structural similarity of networks [7, 8].
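A minimal sketch of the density matrix of Eq. (8), again assuming a symmetric adjacency matrix; the 4-node graph and the choice $q=1.5$ are illustrative only.

```python
import numpy as np

def q_density_matrix(A, beta, q):
    """Density matrix of Eq. (8): rho = exp_q(beta*A)/Z_q with
    Z_q = Tr exp_q(beta*A), via the eigendecomposition of A."""
    w, V = np.linalg.eigh(A)
    f = (1.0 - (q - 1.0) * beta * w) ** (-1.0 / (q - 1.0))
    K = (V * f) @ V.T
    return K / np.trace(K)

# Illustrative 4-node graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
beta = 0.7 / np.max(np.linalg.eigvalsh(A))   # respects beta < 1/B
rho = q_density_matrix(A, beta, q=1.5)

print(np.isclose(np.trace(rho), 1.0))              # unit trace
print(np.min(np.linalg.eigvalsh(rho)) > -1e-12)    # positive semidefinite
```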
#### 3.2.1 Thermodynamics corresponding to Gibbs states with $q=1$
We review first a thermodynamic interpretation of the density matrix in the
case $q=1$, where
$\boldsymbol{\rho}=Z^{-1}\exp({\beta{\bf A}})\ .$ (9)
There are two obvious ways to interpret (9) as a thermal (Gibbs) state. First,
in a system with a Hamiltonian ${\bf H}=-{\bf A}$,
$\boldsymbol{\rho}=Z^{-1}\exp({-\beta{\bf H}})\ ,$ (10)
with a positive inverse temperature $\beta>0$, where the prefactor
$Z=Z(\beta)={\rm Tr}(e^{-\beta{\bf H}})$ is the partition function. For
example, this Hamiltonian ${\bf H}=-{\bf A}$ can be viewed as the graph Laplacian ${\bf
H}={\bf D}-{\bf A}$ with the diagonal matrix ${\bf D}$ of the node degrees
omitted (compare with [7, 8]). Second, one can choose the adjacency matrix directly as
the Hamiltonian, ${\bf H^{\prime}}={\bf A}$ (the prime is used to distinguish
this alternative choice), but then one must choose a negative temperature
$\beta^{\prime}=-\beta<0$ to write $\exp(\beta{\bf
A})=\exp(-\beta^{\prime}{\bf H^{\prime}})$ [34, 36]. Negative temperatures
arise naturally in systems with a bounded spectrum of energy eigenvalues (as
in this case, ${\bf H}$ has a finite set of eigenvalues). For convenience, we proceed
with the first interpretation with the Hamiltonian ${\bf H}$.
The Gibbs state maximizes the von Neumann entropy
$S({\boldsymbol{\sigma}})=-{\rm
Tr}({\boldsymbol{\sigma}}\log{\boldsymbol{\sigma}})\,$ (11)
with the constraint that the expectation value of the Hamiltonian $\langle{\bf
H}\rangle_{{\boldsymbol{\sigma}}}={\rm Tr}({\boldsymbol{\sigma}}{\bf H})$ is
kept fixed. That is, with the constraint, the density matrix
${\boldsymbol{\sigma}}$ for which $S({\boldsymbol{\sigma}})$ is maximised, is
the Gibbs state ${\boldsymbol{\sigma}}={\boldsymbol{\rho}}$. Substituting the
Gibbs state to the von Neumann entropy, so that it becomes the thermal entropy
$S(\beta)=S({\boldsymbol{\rho}})$ of the ensemble, the first law of
thermodynamics holds,
$F(\beta)=E(\beta)-\beta^{-1}S(\beta)\ ,$ (12)
with the free energy $F(\beta)=-\beta^{-1}\log Z(\beta)$ and thermal energy
$E(\beta)=\langle H\rangle_{\boldsymbol{\rho}}$.
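The relation of Eq. (12) can be checked numerically for the Gibbs state with ${\bf H}=-{\bf A}$; the 3-node path graph and the value of $\beta$ below are illustrative only.

```python
import numpy as np

# Numerical check of the first law, Eq. (12): F = E - S/beta for the
# Gibbs state with H = -A on a small toy graph.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], float)
H = -A
beta = 0.3

w = np.linalg.eigvalsh(H)
Z = np.sum(np.exp(-beta * w))        # partition function
p = np.exp(-beta * w) / Z            # Gibbs eigenvalues of rho

S = -np.sum(p * np.log(p))           # von Neumann entropy, Eq. (11)
E = np.sum(p * w)                    # thermal energy <H>
F = -np.log(Z) / beta                # free energy

print(np.isclose(F, E - S / beta))   # True
```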
We now move to consider comparisons between density matrices
${\boldsymbol{\rho}}$ and ${\boldsymbol{\sigma}}$. Suppose
${\boldsymbol{\sigma}}=Z^{-1}(\beta)\exp(-\beta{\bf H})$ and
${\boldsymbol{\rho}}={\boldsymbol{\sigma}}+\delta{\boldsymbol{\rho}}$ by a
small perturbation $\delta{\boldsymbol{\rho}}$ with ${\rm
Tr}(\delta{\boldsymbol{\rho}})=0$. We define the change in entropy
$\delta S\equiv
S({\boldsymbol{\rho}})-S({\boldsymbol{\sigma}})=S({\boldsymbol{\rho}})-S(\beta)$
(13)
and a change in energy
$\delta E\equiv\langle{\bf H}\rangle_{\boldsymbol{\rho}}-\langle{\bf
H}\rangle_{\boldsymbol{\sigma}}={\rm Tr}({\boldsymbol{\rho}}{\bf H})-E(\beta)\
.$ (14)
Then, it is known that the relative entropy (Kullback-Leibler divergence)
between ${\boldsymbol{\rho}},{\boldsymbol{\sigma}}$,
$K[\boldsymbol{\rho}||\boldsymbol{\sigma}]={\rm Tr}[\boldsymbol{\rho}({\rm
ln}\boldsymbol{\rho}-{\rm ln}\boldsymbol{\sigma})]$ (15)
for the infinitesimal variation
${\boldsymbol{\rho}}={\boldsymbol{\sigma}}+\delta{\boldsymbol{\rho}}$ from the
thermal density matrix ${\boldsymbol{\sigma}}$ gives an infinitesimal change
in free energy $\delta F$ with (compare with [34, 36])
$\beta\,\delta F=K[{\boldsymbol{\rho}}||{\boldsymbol{\sigma}}]=\beta\,\delta
E-\delta S\ .$ (16)
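This identity can be checked numerically: for a Gibbs state $\boldsymbol{\sigma}$ and any traceless perturbation that keeps $\boldsymbol{\rho}$ positive, $K[\boldsymbol{\rho}||\boldsymbol{\sigma}]=\beta\,\delta E-\delta S$ holds exactly. The 3-node toy Hamiltonian and the random perturbation below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], float)
H = -A
beta = 0.4

def entropy(M):
    w = np.linalg.eigvalsh(M)
    w = w[w > 1e-15]
    return -np.sum(w * np.log(w))

def logm(M):                                   # matrix logarithm via eigh
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

w, V = np.linalg.eigh(H)
g = np.exp(-beta * w)
sigma = (V * (g / g.sum())) @ V.T              # Gibbs state, Eq. (10)

P = rng.standard_normal((3, 3))
P = (P + P.T) / 2.0
P -= np.trace(P) / 3.0 * np.eye(3)             # symmetric and traceless
rho = sigma + 1e-4 * P                         # small enough to stay positive

K = np.trace(rho @ (logm(rho) - logm(sigma)))  # relative entropy, Eq. (15)
dE = np.trace((rho - sigma) @ H)               # change in energy, Eq. (14)
dS = entropy(rho) - entropy(sigma)             # change in entropy, Eq. (13)

print(np.isclose(K, beta * dE - dS))           # True
```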
Next we proceed to show that by analogous reasoning similar results can be
obtained for q-generalised states.
#### 3.2.2 Thermodynamics corresponding to q-generalised states with $q>1$
It is possible to generalise the above derivation based on Gibbs states with
$q=1$ for q-generalised states corresponding to the index $q>1$. One starts
with the constrained extremisation problem, now with the goal of maximising
the generalised q-entropy (Tsallis entropy) of form [11, 22, 23, 33, 45, 46]
$S_{q}[{\bf\boldsymbol{\sigma}}]=\frac{1}{1-q}({\rm
Tr}{\boldsymbol{\sigma}}^{q}-1).$ (17)
In the limit $q\rightarrow 1$ the q-entropy reduces to the usual von Neumann
entropy (11). Now we add a constraint that the expectation value $\langle{\bf
H}\rangle_{{\boldsymbol{\sigma}}}={\rm Tr}({\boldsymbol{\sigma}}{\bf H})$ is
kept fixed and ask what is the form of the density matrix
${\boldsymbol{\sigma}}$ that maximises entropy in Eq. (17). The result turns
out to be ${\boldsymbol{\sigma}}={\boldsymbol{\rho}}_{q}$, with [33, 45]
${\boldsymbol{\rho}}_{q}=Z_{q}^{-1}\exp_{q}({-\beta{\bf H}})\ ,$ (18)
where $Z_{q}=Z_{q}(\beta)={\rm Tr}[\exp_{q}({-\beta{\bf H}})]$. Note that
setting ${\bf H}=-{\bf A}$, Eq. (18) agrees with Eq. (8) as expected. This
gives a q-generalised density matrix in Eq. (8) for description of the
networks with adjacency matrix ${\bf A}$.
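As a quick numerical check, the Tsallis entropy of Eq. (17) reduces to the von Neumann entropy (11) as $q\rightarrow 1$; the diagonal toy state below is illustrative only.

```python
import numpy as np

def tsallis_entropy(rho, q):
    """Tsallis q-entropy of Eq. (17): (Tr rho^q - 1)/(1 - q)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-15]                 # restrict to the support of rho
    return (np.sum(w ** q) - 1.0) / (1.0 - q)

# Illustrative diagonal state
p = np.array([0.5, 0.3, 0.2])
rho = np.diag(p)
S_vN = -np.sum(p * np.log(p))        # von Neumann entropy, Eq. (11)

print(abs(tsallis_entropy(rho, 1.0001) - S_vN) < 1e-3)   # q -> 1 limit
```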
Next we will consider two closely related networks, the original modular one
with a state ${\boldsymbol{\sigma}}$ and the rewired counterpart with
${\boldsymbol{\rho}}={\boldsymbol{\sigma}}+\delta{\boldsymbol{\rho}}$. The
original state is q-distributed, with ${\boldsymbol{\sigma}}$ of the form in
Eq. (8). We are interested in finding the q-generalised relation of (16)
between changes in free energy, energy, and entropy. The q-distribution arises
from constrained maximisation of the Tsallis q-entropy, generalising the von
Neumann entropy. Correspondingly, the appropriate quantity that generalises
the Kullback-Leibler divergence is the q-generalised Kullback-Leibler-Tsallis
divergence (henceforth q-divergence for brevity), given by [22, 23, 33, 45]
$K_{q}[\boldsymbol{\rho}||\boldsymbol{\sigma}]=\frac{1}{1-q}[1-{\rm
Tr}(\boldsymbol{\rho}^{q}\boldsymbol{\sigma}^{1-q})],$ (19)
which at the limit $q\rightarrow 1$ approaches the Kullback-Leibler
divergence. The q-divergence is always positive, and zero for identical
states. Note that when $q\in\,]1,2]$ and density matrices are positive
semidefinite, the term $\boldsymbol{\sigma}^{1-q}$ needs to be interpreted as
$[\boldsymbol{\sigma}^{-1}]^{q-1}$, where $\boldsymbol{\sigma}^{-1}$ is the
pseudo-inverse of density matrix $\boldsymbol{\sigma}$. Here, we have followed
the ordering of density matrices $\boldsymbol{\rho}\leq\boldsymbol{\sigma}$ so
that states of $\boldsymbol{\rho}$ (corresponding to the rewired networks) are
obtained from $\boldsymbol{\sigma}$ (corresponding to the original modular
network), meaning that the number of links in the rewired network is always
equal or smaller than in the original one.
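The limiting behaviour of the q-divergence of Eq. (19) can be sketched in the classical (diagonal) case, which avoids the matrix-power subtleties of non-commuting states; the probability vectors below are illustrative only.

```python
import numpy as np

def q_divergence(p, s, q):
    """Classical (diagonal) analogue of the q-divergence, Eq. (19):
    (1 - sum p^q s^(1-q)) / (1 - q).  For q in ]1,2], s^(1-q) is the
    pseudo-inverse power: zero entries of s (outside its support)
    are excluded from the sum."""
    mask = s > 0
    return (1.0 - np.sum(p[mask] ** q * s[mask] ** (1.0 - q))) / (1.0 - q)

p = np.array([0.6, 0.3, 0.1])
s = np.array([0.4, 0.4, 0.2])
kl = np.sum(p * np.log(p / s))                 # Kullback-Leibler divergence

print(abs(q_divergence(p, s, 1.001) - kl) < 1e-2)   # q -> 1 limit
print(abs(q_divergence(p, p, 1.5)) < 1e-12)         # zero for identical states
print(q_divergence(p, s, 1.5) > 0)                  # positivity
```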
For the q-generalisation of the infinitesimal first law in Eq. (16), we follow
[33]. We first need a q-generalisation of the expectation value of the
Hamiltonian, defined by [33]
$\bar{E}_{q}={\rm Tr}({\bf H}\boldsymbol{\rho}^{q})/{\rm
Tr}(\boldsymbol{\rho}^{q})\ ,$ (20)
the trace in the denominator is needed for an expectation value with respect
to a unit normalised density matrix ${\boldsymbol{\rho}}^{q}/{\rm
Tr({\boldsymbol{\rho}}^{q})}$. With this definition, by a direct calculation
one obtains [33] an extension of the usual thermodynamic relation,
$\delta S_{q}/\delta\bar{E}_{q}=\beta,\;\;\;q\in\,]1,2]\>\>{\rm
and}\>\>\beta>0\ ,$ (21)
in agreement with the interpretation of $\beta$ as the inverse temperature.
Starting from the q-divergence as defined in Eq. (19) and assuming that
changes $\delta\boldsymbol{\rho}$ in state $\boldsymbol{\rho}$ are small
enough so that ${\rm Tr}\delta\boldsymbol{\rho}\approx 0$ and
$\boldsymbol{\rho}\approx\boldsymbol{\sigma}$ in all cases, we can show by
direct calculation (compare with derivation in ref. [33]) that
$({\rm Tr}\boldsymbol{\sigma}^{q})\;\delta
K_{q}[\boldsymbol{\rho}||\boldsymbol{\sigma}]=\beta\;\delta\bar{E}_{q}-\delta
S_{q}[\boldsymbol{\rho}],\>\>\>q\in\,]1,2].$ (22)
It is now possible to interpret the term $({\rm
Tr}\boldsymbol{\sigma^{q}})\;\delta
K_{q}[\boldsymbol{\rho}||\boldsymbol{\sigma}]$ as analogous to a change in
the free energy of a thermodynamic system, by defining the change in q-free energy
as
$\delta F_{q}=\beta^{-1}({\rm Tr}\boldsymbol{\sigma}^{q})\;\delta
K_{q}[\boldsymbol{\rho}||\boldsymbol{\sigma}].$ (23)
Then, a relation resembling the infinitesimal form of the first law of
thermodynamics holds,
$\delta F_{q}=\delta\bar{E}_{q}-\beta^{-1}\delta S_{q}.$ (24)
We now restore the adjacency matrix by replacing ${\bf H}=-{\bf A}$, and
denote its q-expectation value by $\bar{A}_{q}={\rm Tr}({\bf
A}\boldsymbol{\rho}^{q})/{\rm Tr}(\boldsymbol{\rho}^{q})$, so that
$\delta\bar{E}_{q}=-\delta\bar{A}_{q}$. The generalised first law then reads
$\delta F_{q}=-\delta\bar{A}_{q}-\beta^{-1}\delta S_{q}\ .$ (25)
In the alternative interpretation, where the adjacency matrix is chosen as the
Hamiltonian, ${\bf H}^{\prime}={\bf A}$, we would have arrived at this same form.
The minus sign in front of $\bar{A}_{q}$ is consistent with the inverse
temperature $\beta^{\prime}=-\beta$ being negative in the alternative
interpretation. In that case, the alternative form of (24) is rewritten as
$\delta F^{\prime}_{q}=\delta\bar{E^{\prime}}_{q}-(\beta^{\prime})^{-1}\delta
S_{q},$ (26)
where we have defined $\delta F^{\prime}_{q}=(\beta^{\prime})^{-1}({\rm
Tr}\boldsymbol{\sigma}^{q})\;\delta
K_{q}[\boldsymbol{\rho}||\boldsymbol{\sigma}]=-\delta F_{q}$ as the change in
q-free energy.
At the limit $q\rightarrow 1$ the result in Eq. (25) agrees with the results
and interpretation suggested by Estrada [34, 35, 36], who has proposed a
similar relation by using the exponential adjacency kernel and Kullback-
Leibler divergence. The result in Eqs. (22)-(25) can be taken as
q-generalisation of the previous results by Estrada. The advantage of the
q-generalised adjacency kernels is that using the freedom provided by choice
of parameters $\beta$ and $q$, the contribution of paths of different lengths
to the connectivity (or communicability) of nodes can be explored. At the
limit $q\rightarrow 1$ short paths are more dominant for connectivity than
long ones, while at the limit $q\rightarrow 2$ the contribution of longer
paths becomes more important. In both cases, also weight factors can be used
to tune the scale of paths that one wishes to take into account in the
connectivity of nodes. At the limit $\beta\rightarrow 0$, a vanishing weight
is attached to all links, so that the system is essentially a set of
disconnected nodes, while the larger the value of $\beta$, the more tightly
connected the system becomes.
### 3.3 Generalised free energy and response function from simulations
The viability of the thermodynamic-like description of heavy-tailed networks
is tested by first generating a set of networks using the generative model,
designed to produce networks with inverse power law type degree distribution
with power $1<\lambda<2$ and with high modularity. To obtain the q-generalised
free energy from simulations is, however, not entirely straightforward for two
reasons. First, the systems are always finite and thus the density matrix as
defined in Eq. (8) does not exactly maximise the q-entropy. Secondly, due to
the small size of the systems, statistical fluctuations are large, and it is
difficult to obtain the quantities ${\rm Tr}\boldsymbol{\sigma}^{q}$, $\delta
S_{q}$ and $\delta\bar{A}_{q}$ accurately, while change in divergence $\delta
K_{q}$ is computationally more stable. Therefore, based on the simulations
where networks are generated, we first justify that the relation in Eq. (22) holds
by demonstrating the validity of the equivalence
$K_{0}(\beta)\;\delta\tilde{K}_{q}=-\beta A_{0}(\beta)\delta\bar{A}_{q}-\delta
S_{q}.$ (27)
Here we have assumed that $K_{0}\simeq{\rm Tr}\boldsymbol{\sigma}^{q}{\rm
Max}[\delta K_{q}]$ depends only on $\beta$, and
$\delta\tilde{K}_{q}=\tilde{K}_{q}/{\rm Max}[\tilde{K}_{q}]$ only on the rewiring
frequency $\nu$. The normalisation parameter $A_{0}(\beta)$ is allowed to
compensate for the $\beta$-dependence of the normalisation factor ${\rm
Tr}\boldsymbol{\rho}^{q}$, due to the finite size of networks affecting the
cut-off of eigenvalues of the adjacency matrix. The validity of Eq. (22) is
then guaranteed in the region of parameters where the scaling functions
$K_{0}$ and $\tilde{K}_{q}$ can be found so that Eq. (27) holds. As will be
seen, this is possible for networks with heavy tails characterised by
$1<\lambda<2$ when $1.3<q<1.7$ (in some cases also for lower and higher values
of $q$). In this region of parameter $q$, the thermodynamic-like description
is viable and the q-generalised free energy change is
$\delta F_{q}=\beta^{-1}K_{0}\;\tilde{K}_{q},$ (28)
where functions $K_{0}$ and $\tilde{K}_{q}$ are obtained by fitting to
simulation data.
The q-generalised free energy allows us to define how the network responds to
changes in the external parameters $\beta$ and $\nu$. In the thermodynamic
interpretation, the response to changes in $\beta$ is analogous to heat
capacity
$\chi=\beta^{2}\frac{\partial^{2}(\beta
F_{q})}{\partial\beta^{2}}\propto\beta^{2}\,\frac{\partial^{2}K_{0}}{\partial\beta^{2}},$
(29)
where it is assumed that the scaling with regard to $\beta$ is the same for
the rewired and non-rewired (fully modular) networks. Then, the response is
determined up to an unknown constant (only the change in free energy due to
rewiring is known). The response function allows a simple description how and
in what state the network is most responsive to changes in the strength of the
links. Different definitions of the response function are possible (see e.g.
[47]), but here we have preferred to retain the analogy with a thermal
response (i.e. heat capacity).
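As a sketch, the $\beta$-dependent part of the response function of Eq. (29) can be evaluated from any fitted scaling function $K_{0}(\beta)$ by central differences; the cubic placeholder below has a known second derivative and is not the fitted function of the paper.

```python
# Sketch: chi proportional to beta^2 * d^2 K0 / d beta^2 (Eq. (29)),
# evaluated as a central finite difference.
def chi(K0, beta, h=1e-4):
    d2 = (K0(beta + h) - 2.0 * K0(beta) + K0(beta - h)) / h**2
    return beta**2 * d2

K0 = lambda b: b**3          # placeholder with known second derivative 6*b
beta = 0.03                  # a value in the range of Table I
print(abs(chi(K0, beta) - beta**2 * 6 * beta) < 1e-9)   # True
```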
## 4 Results and discussion
The results for the diagonal values of the q-adjacency kernels are discussed
first, because they have the closest relation to a thermodynamic-like
interpretation in the case $q=1$, when diagonal values provide the so-called
Estrada index, which yields to a thermodynamic interpretation in terms of free
energy [34, 36]. In small systems, however, the distribution of eigenvalues
has so large fluctuations that the utility of such interpretation is in
practice diminished. Here, we show results that demonstrate that a
thermodynamic-like interpretation similar to that proposed by Estrada for
diagonal components is viable also for generalised q-adjacency kernels, which
describe the state of the network as a whole, but with much reduced
fluctuations.
### 4.1 Choice of parameters
The simulation results to be discussed here show that within a certain region
of parameters $q$ and $\beta$ we can maintain the interpretation of the divergence
difference $\Delta K_{q}$ as a change of free energy $\Delta F_{q}$ of the
network (note that for simulation results, the symbol $\Delta(\cdot)$ is used
instead of $\delta(\cdot)$, without the subscript $q$). One of the central choices is
the choice of $\beta_{{\rm MAX}}$, which must be lower than the inverse of the
highest eigenvalue of the adjacency matrix. Since the highest eigenvalue of
the adjacency matrix depends on the details of the network, the exact choice
of maximum value is case-dependent. In practice, however, for the heavy-tailed
networks of interest here, values of $\beta$ which are from 50% to 70% of the
maximum value $\beta_{{\rm MAX}}$ provide the best compromise between
stability of results and resolving power [2]. On the other hand, when $\beta$
is too low, resolving power with regard to differences in characteristics of
nodes is lost. Therefore, $\beta_{{\rm MAX}}$ is first chosen to be about
70% of the highest possible value, and it depends on $\lambda$ linearly so
that $\beta_{{\rm MAX}}=-0.02+0.033\lambda$ for $\lambda\in]1,2]$. The values
of $\beta_{{\rm MAX}}$ for different choices of $\lambda$ are summarised in
Table I. The adjacency kernel for each value of $\lambda$ is then evaluated
for seven values ranging from $\beta_{{\rm MAX}}$ to 0.7$\beta_{{\rm MAX}}$.
Table I: The parameters $\beta_{{\rm MAX}}$ in simulations and the maximum
number of attempted links $M_{{\rm A}}$ and realised links $M_{{\rm R}}$ (on
average) with $M_{0}$=2000. In all cases, the number of connected nodes is on
average about N=1000. The realised powers $\lambda^{\prime}$ for given
$\lambda$ are also shown.
| | $\lambda=1.3$ | $\lambda=1.5$ | $\lambda=1.7$ | $\lambda=1.9$ |
|---|---|---|---|---|
| $\beta_{{\rm MAX}}$ | 0.023 | 0.030 | 0.036 | 0.043 |
| ${\rm M}_{{\rm A}}/{\rm M}_{0}$ | 2.0 | 1.4 | 1.1 | 1.0 |
| ${\rm M}_{{\rm R}}/{\rm M}_{0}$ | 1.8 | 1.3 | 1.0 | 0.9 |
| $\lambda^{\prime}$ | 1.7 | 1.8 | 1.9 | 2.1 |
In the simulations to produce generic networks, only one type of modularity is
chosen, corresponding to a three-tiered modular structure where the first
tier consists of units of N’=200 nodes, the second tier of three such units
(600 nodes), and the third tier of three units of 600 nodes, giving N=1800
nodes in total. Not all nodes, however, are connected, and the choice of
parameters leads to a total of about N=1000 connected nodes.
The structure is thus a block model with a three-tiered hierarchy of
connections. This selection results in the initial modular structure with
modularity in the range $0.80<Q<0.90$, depending on the case. As was explained in
section 2, the parameters are selected so that they roughly correspond to
values found empirically for networks of associative, thematic knowledge [1,
2]. However, provided that the modularity is high enough, the exact structure
of initial modularity and its variation are not crucial for the properties of
interest here (for effects of modularity, see ref. [26]). Therefore one
initial state is enough. Instead, the relaxation of the initial modularity and
its effects are of interest. The modularity of the networks is relaxed by
rewiring the links with relative frequency $\nu\in[0,\nu_{{\rm MAX}}/M]$. In
practice, the value $\nu_{{\rm MAX}}=5.0\cdot 10^{5}$ is chosen to guarantee
complete rewiring, while $\nu=1$ means that, on average, each link is rewired
once. In the rewiring, performed with IGraph algorithm IGRewire, the degree
sequence of the nodes is preserved.
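The rewiring step can be sketched as a degree-preserving double-edge swap; the paper itself uses IGraph's IGRewire routine, so the minimal pure-Python stand-in below (the names `double_edge_swap` and `degree_sequence`, and the small ring graph, are ours and illustrative only) only shows the principle.

```python
import random

def double_edge_swap(edges, nswap, seed=0):
    """Degree-preserving rewiring: repeatedly pick two links (a,b), (c,d)
    and replace them with (a,d), (c,b) when no self-loop or duplicate
    link would be created."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    swaps = 0
    while swaps < nswap:
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if len({a, b, c, d}) < 4:
            continue                      # would create a self-loop
        if frozenset((a, d)) in present or frozenset((c, b)) in present:
            continue                      # would duplicate an existing link
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        swaps += 1
    return edges

def degree_sequence(E, n):
    deg = [0] * n
    for a, b in E:
        deg[a] += 1
        deg[b] += 1
    return sorted(deg)

# A small ring with chords; nswap = len(edges) mimics nu = 1,
# i.e. on average one rewiring per link
edges = [(i, (i + 1) % 10) for i in range(10)] + [(0, 5), (2, 7)]
rewired = double_edge_swap(edges, nswap=len(edges))

print(degree_sequence(edges, 10) == degree_sequence(rewired, 10))  # True
```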
### 4.2 The diagonal values of density matrices
The degree centrality distributions $P(d)$ for degrees of nodes are shown in
Figure 1 (the upper row). The distributions are clearly heavy-tailed and can
follow a power law in a broad range of degrees. It should be noted that
irrespective of the choice of $\lambda$, defining the affinity distribution,
the powers corresponding to the degree distribution are rather narrowly
distributed from $\lambda^{\prime}\approx 1.7$ for $\lambda=1.3$ up to
$\lambda^{\prime}\approx 2.1$ for $\lambda=1.9$ (see Table I). This behaviour is most likely
caused by the fact that due to constraints set by modularity, high affinity
nodes are more likely to be connected when only about 1000 of all the
potential 1800 nodes are connected. A similar effect of values
$\lambda^{\prime}$ exceeding the values of $\lambda$ was also observed in
simulations where a broader set of modularities was explored [26].
The density distributions of the diagonal values $[\boldsymbol{\rho}]_{ii}$ of the
density matrix $\boldsymbol{\rho}$ are related to the eigenvalues of the
density matrix and are shown in the two lower rows of Figure 1 for the fully
modular and fully rewired networks. The diagonal values have different
distributions corresponding to different values of $q$ and thus show that they
are more sensitive to the details of the networks than the degree
distribution. Moreover, the relaxation of modularity substantially affects the
leading edge of the distribution. The modular networks have a high number of low
values of $[\boldsymbol{\rho}]_{ii}$ due to the modular structure, in which local
connections within modules are tight. However, with relaxation of the modularity,
the distributions become more power-law type and higher values of
$[\boldsymbol{\rho}]_{ii}$ become more dominant, indicating that the number of
longer paths increases with loss of modularity. This effect is more pronounced
the higher the value of $q$. For the lowest values of $q$ the distribution of
the diagonal values of $[\boldsymbol{\rho}]_{ii}$ is in all cases close to an
inverse power law distribution. For higher values of $q$, the high frequency
leading edge corresponding to the low values of $[\boldsymbol{\rho}]_{ii}$
begins to deviate from power law behaviour. Also, in the low frequency part,
the statistical fluctuations are high. In practice, for example in the case of
associative knowledge networks, precisely this part of the distribution is of
interest. With small sample sizes, however, the large statistical fluctuations
become an obstacle for similarity comparisons based on Kullback-Leibler
divergence; the statistical fluctuations are usually too large to allow robust
results [2, 26].
Figure 1: Distributions of degree $d$ for parameters $\lambda=$ 1.3, 1.5, 1.7
and 1.9 (upper row, with estimates of power-law fits shown by
$\lambda^{\prime}$). The distributions of diagonal elements
$[\boldsymbol{\rho}]_{ii}$ of density matrix $\boldsymbol{\rho}$ for each
$\lambda$ and values $q=$ 1.3, 1.5 and 1.7 are shown on two lowest rows,
respectively. In each case, the distributions are shown for original modular
(symbol $\star$, upper sets of data points) and completely rewired networks
(symbol $\bullet$, lower sets of data points) with $\beta=\beta_{{\rm max}}$.
The results for degree distributions are with $10^{4}$ repetitions while for
$[\boldsymbol{\rho}]_{ii}$ 100 repetitions are used.
### 4.3 The thermodynamic-like interpretation
To compare heavy-tailed networks and the effects of modularity we use the
q-generalised entropy, q-generalised divergences, and the complete density
matrix $\boldsymbol{\rho}$ to characterise the networks. In addition, a
suitably chosen q-adjacency kernel may allow macroscopic description based on
q-generalised free energy. To secure this, we need to show that the relation
in Eq. (27) holds for simulation results. Figure 2 shows the change $\Delta
K_{q}$ (simulation results for $\delta K_{q}$ and with subscript $q$ omitted
in what follows) for different values of $q$ and $\beta$ as a function of
rewiring frequency $\nu$ so that all divergences are scaled to the maximal
value $K_{0}={\rm Max}[\Delta K]$. The reference (density matrix
$\boldsymbol{\sigma}$ in Eq. (19)) is chosen to be the original, fully modular
network. The curves for $\Delta K$ are sigmoidal, but their maximum values
$K_{0}={\rm Max}[\Delta K]$ are different for different choices of $q$ and
$\beta$ as shown in Fig. 2 in the middle panel. However, the scaled values
$\Delta\hat{K}=\Delta K/K_{0}$ collapse to a single curve (shown at left in
Fig. 2) that is the same for all choices of $q\in[1.3,1.9]$ and
$\beta/\beta_{{\rm MAX}}\in[0.7,1.0]$, thus demonstrating the scaling of the
$\Delta K$. The transitional region where the properties of the network change
substantially is now seen to be located in $-1<{\rm Log}\,\nu<1$, while in the
region ${\rm Log}\,\nu>2$ (note that here ${\rm Log}$ refers to the natural
logarithm) the network is essentially completely rewired (i.e. random, with only
the degree sequence preserved). The most interesting properties of the changes of
the network, however, are contained in the maximal value
$K_{0}(\beta;q,\lambda)={\rm Max}[\Delta K]$, which depends on $\beta$ and
parameters $q$ and $\lambda$. This quantity is discussed in more detail below.
To demonstrate the validity of Eq. (27) the scaled values of simulation
results for $\Delta K+\Delta S$ and $-\beta\Delta A$ (corresponding to
quantities at left and right side in Eq. (27), respectively) are shown
separately on the panel at right of Fig. 2. If Eq. (27) holds, the scaled
curves should be linearly proportional. This is suggested by the appearance of
the curves shown. The curves, however, are averaged for each set of parameters
over the 100 repetitions where the statistical fluctuations are still large,
as shown in Fig. 2. The Pearson correlation of data points for $\Delta
K+\Delta S$ and $-\beta\Delta A$ is in all cases from 0.97 to 0.99 with
p-values well below $10^{-4}$, indicating good correlation in most cases, and
at worst close to $10^{-3}$ for high values of $q$, still indicating a
reasonable correlation. This means that we can take the relation in Eq. (27)
to be valid in the cases studied. The dependence is also linear with good
accuracy; the residuals of the averaged values from linear fits remain below
10% of the mean values for the average values shown in Fig. 2 for $q<1.5$,
increasing to about 30% in the worst cases for $q=1.9,\lambda=1.3$ and
$\beta\approx\beta_{{\rm MAX}}$. The results indicate that the interpretation
of the divergence in terms of free energy as suggested by Eqs. (25) and (28)
is viable provided that parameter $q$ is in the range $1.3<q<1.7$ and, for
$q<1.7$, that $\lambda$ is close enough. Values $q<1.3$ could not be tested, because
computations rapidly become unstable for low values of $q$.
Figure 2: The scaled divergence difference $\Delta\hat{K}$ (see text for
definition) is shown on the left panel as a function of rewiring frequency
$\nu$. In the middle panel are shown values of $\Delta K$ (scaled to maximal
value in the ensemble) for heavy-tailed networks with different values of
$\lambda$. At right are shown the (scaled) values of $\Delta K+\Delta S$ (the
upper figure) and $-\beta\Delta A$ (the lower figure). Note that
interpretation of $\Delta K$ as free energy requires that these terms are
linearly proportional. In all cases, data-points are plotted showing the
variance in data. The natural logarithm ${\rm Log}$ is used throughout.
The dependence of the maximal value $K_{0}={\rm Max}[\Delta K]$ on $\beta$ and
parameters $q$ and $\lambda$ is somewhat complex but regular. In Figure 3, at
the left panel, $K_{0}$ is shown scaled in a suitable way, to obtain a linear
dependence. The linear form $\kappa_{0}(\beta/\beta_{{\rm
MAX}})=a+b(1-\beta/\beta_{{\rm MAX}})$ with $a=-1.86$ and $b=47.1$ shown in
Fig. 3 is obtained by scaling
$\kappa_{0}=\bar{\kappa}_{\lambda}K_{0}^{-\alpha_{\lambda}}-C_{\lambda}.$ (30)
Here coefficients $\bar{\kappa}_{\lambda}(q)$ and $C_{\lambda}(q)$ and power
$\alpha_{\lambda}(q)$ depend on parameters $\lambda$ and $q$ as shown in Fig.
3 in the middle panel. The sigmoidal fitting functions displayed in the
figures for these parameters are obtained from simulation results through
simple, descriptive fits to data. It should be noted that there is no
theoretical basis to motivate the form of the fits so they are only practical
vehicles for interpolation and moderate extrapolations. Moreover, the number
of parameter combinations feasible and reasonable here is quite limited, which
means large insecurity in the fits. However, these limitations are not severe,
because we are not aiming here at quantitatively accurate description but
rather to demonstrate the generic behaviour of the networks when control
parameters are changed. The generic behaviour is adequately obtained despite
the limited accuracy of the fits. The linear behaviour of $\kappa_{0}$ with
the fitting function for the corresponding coefficients allows us to find the
functional dependence of $\Delta K$ on $\beta$, needed for macroscopic
description of the network.
Figure 3: The scaling functions for construction of free energy $F_{q}$. The
dependence of maximum value $\kappa_{0}$ of $\Delta K$ is shown at the left
panel, in scaled form and as function of $1-\beta/\beta_{{\rm max}}$. On the
middle panel we show the fits to empirical data as used to construct $\alpha$,
$\bar{\kappa}$ and $C_{\lambda}$ for the semi-empirical functional form of free
energy $F_{q}$. In these plots, the data-points are shown. On the right panel
we show the semi-empirical construction of functional form of $\kappa_{0}$ for
the maximum value of change in divergence (upper figure) and the corresponding
function for the thermal response $\chi$. In both figures, the dominant
dependence is on parameter $q$ and within each set of results, corresponding
to given choice of $q$, a bunch of results corresponding to different choices
of $\lambda$ is visible. In each bunch, the highest value $\lambda=1.9$
corresponds to the topmost curve, and the curves below it correspond to the
lower values $\lambda$ = 1.7, 1.5 and 1.3, in that order.
The scaling function $\kappa_{0}$ allows us finally to obtain the maximum
value $K_{0}$ and through it, the response function $\chi$, which for a true
thermodynamic system would correspond to the thermal capacity. The maximal
value $K_{0}$ and response function $\chi$ are shown in Figure 3 at right.
Their most important feature is that they increase with increasing value of
$\beta$, which means that with higher $\beta$ better resolving power (larger
differences) is obtained. Also, the response function allows the
interpretation that the larger the $\beta$, the more sensitive the value of
divergence is to changes in structure of the network (i.e. with a large value
of $\beta$ better resolving power is possible). Another interesting notion is
that the value of $q$ mostly determines the behaviour of both, while values of
$\lambda$ affect them much less; the results are bunched according to the
value of $q$, and within these bunches, small differences originate from
different choices of parameter $\lambda$. Roughly, this means that differences in
$\Delta K$ are more pronounced the smaller the value of $q$. This, of course,
is expected in the case of heavy-tailed distributions, since the smaller the
value of $q$, the more sensitive the divergence is to low frequency values
in the distributions. Note, however, that if one wishes to retain the validity of
Eq. (27) (and thus, also the validity of Eqs. (25) and (28)), the value of $q$
for the divergence cannot be chosen freely but must be the same as for the q-adjacency
kernel.
The present study leaves unanswered the question of how the current model behaves
when $q\rightarrow 1$. The theoretical argumentation presented here suggests
that at that limit one should arrive at a description based on canonical
ensemble (Gibbs’ ensemble), von Neumann entropy and Kullback-Leibler
divergence. However, for values $q<1.3$ reaching good computational
stability is challenging and computations become quite unstable. Similarly,
when $q\rightarrow 2$ the computational results become very unstable. In
both cases, reaching good stability would require very extensive ensembles.
The detailed reason for this is not known to us, but such behaviour is
compatible with the notion that extrapolation from non-extensive
thermodynamics of finite systems ($q>1$) to the usual canonical extensive
thermodynamics ($q=1$) also requires the thermodynamic limit of infinite
systems (compare with ref. [33]).
In summary, the main result contained in Figs. 2-3 is that the theoretically
defined q-generalised free energy provides a viable description of how the
macro-level state of the network changes when the modularity of the network is reduced.
Within the thermodynamic-like interpretation, the change can be interpreted as
lowering of the free energy due to relaxation of the modularity. In connection
with this change, the probability of long paths increases. The effect is best
seen for the adjacency kernels with low values of $q<1.7$. The model also
contains parameter $\beta$, which allows a different weighting of links,
$\beta>1$ corresponding to strong links, while $\beta<1$ means making the
links weaker, with the extreme case $\beta\rightarrow 0$ corresponding to
totally disintegrating the network.
## 5 Conclusions
The practical motivation to conceptualise the properties of heavy-tailed
networks from the macroscopic viewpoint derives from the notion that
associative knowledge networks tend to be heavy-tailed, with broad tails that
can be fitted with inverse power laws with powers in the range $1<\lambda<2$
and that in addition these networks are often highly modular [1, 2, 3].
However, real empirical networks are often quite small, and they cannot be
easily or reliably characterised by pre-selected distributions of certain
centrality measures, such as communicability or Katz centrality, and similarity
comparisons between them are often very awkward [1, 2]. This prompts an attempt
to use more holistic descriptions based on the network’s density matrix, which
provides all available information about the network [7, 8]. Such a description
can be based on the q-generalised adjacency kernels introduced here, which allow
interpolation between the known exponential and Neumann kernels.
The q-generalised adjacency kernel opens a door to the macro-level description
of heavy-tailed networks, through the q-generalised Kullback-Leibler-Tsallis
divergence. A parallel result has previously been obtained for closely similar
q-generalised ensembles in the case of non-extensive statistics in general
[33, 45]. Here we have shown that analogously, the Kullback-Leibler-Tsallis
divergence can be interpreted as a change of a q-generalised free energy of a
modular heavy-tailed complex network, when modularity becomes relaxed. The
finding that it is possible to define q-generalised free energy of heavy-
tailed networks, and based on it, to obtain a thermodynamic-like
interpretation, opens up interesting avenues to describe networks using
thermodynamic-like concepts. For example, it is shown that with the
q-generalised free energy it is possible to derive the response of the network
to changes in the value of $\beta$, paralleling the thermal response if the
thermodynamic interpretation is invoked. The response with increasing $\beta$
indicates that changes in the rigidity (resilience) of the network are smaller
the larger the value of $\beta$; that is, the thermal capacity of the network
increases as $\beta$ increases.
The present study leaves many fundamental questions unanswered, most notably
the question of how the role of the adjacency matrix should be interpreted in
terms of the Hamiltonian, and how far one can pursue the physical
interpretations on the basis of structural, mathematical analogies.
Nevertheless, we believe that the results provided here are promising steps
towards a more general theory of heavy-tailed networks.
## Appendix A: The q-communicabilities and q-divergence
This Appendix is a sequel to the main text; in it, we discuss
communicability distributions. Starting from the q-generalised kernels, we
define the q-generalised communicabilities. We then study their distributions
arising from networks with heavy-tailed degree distributions, and use the
q-divergences to differentiate between q-communicability distributions arising
from modular and rewired networks.
The q-exponential adjacency kernel in Eq. 5 provides a basis for defining a
q-generalised total communicability (in brief, q-communicability) of node $i$
as a row sum of the q-adjacency kernel, in the form
$\Gamma_{i}(q,\beta)=Z^{-1}\sum_{j}{\rm exp}_{q}[\beta\,{\bf A}]_{ij}\,$ (A.1)
where $Z={\rm Tr}\;{\rm exp}_{q}[\beta{\bf A}]$ is the normalisation factor.
The q-generalised total communicability provides Estrada’s total
communicability in the limit $q\rightarrow 1$ and the Katz centrality is
obtained in the limit $q\rightarrow 2$, corresponding to the exponential and
Neumann kernels, respectively.
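As a concrete illustration, the kernel ${\rm exp}_{q}[\beta{\bf A}]$ and the q-communicabilities of Eq. (A.1) can be evaluated by applying the scalar Tsallis q-exponential, ${\rm exp}_{q}(x)=[1+(1-q)x]^{1/(1-q)}$, to the eigenvalues of the symmetric adjacency matrix. The sketch below is our own and not part of the original study; it assumes $\beta$ is small enough that $1+(1-q)\beta\lambda>0$ for all eigenvalues $\lambda$ (cf. $\beta<\beta_{\rm max}$ below):

```python
import numpy as np

def q_exp_kernel(A, beta, q):
    """q-exponential adjacency kernel exp_q[beta*A], computed by applying
    the scalar q-exponential to the eigenvalues of the symmetric matrix A.
    q -> 1 recovers the matrix exponential (Estrada), q -> 2 the Neumann
    kernel (I - beta*A)^{-1}; beta must keep 1+(1-q)*beta*lambda positive."""
    w, V = np.linalg.eigh(A)
    if abs(q - 1.0) < 1e-12:
        f = np.exp(beta * w)
    else:
        f = (1.0 + (1.0 - q) * beta * w) ** (1.0 / (1.0 - q))
    return (V * f) @ V.T  # equals V @ diag(f) @ V.T

def q_communicability(A, beta, q):
    """Normalised q-communicabilities of Eq. (A.1):
    Gamma_i = Z^{-1} sum_j exp_q[beta*A]_ij with Z = Tr exp_q[beta*A]."""
    K = q_exp_kernel(A, beta, q)
    return K.sum(axis=1) / np.trace(K)

# Example: complete graph on three nodes; all Gamma_i are equal by symmetry
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
Gamma = q_communicability(A, beta=0.2, q=1.5)
```

For $q=2$ the kernel reduces, as stated above, to the resolvent $(I-\beta{\bf A})^{-1}$ underlying the Katz centrality.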
For heavy-tailed networks, the q-communicability values are distributed
according to an inverse power law, with inverse powers $\gamma$ roughly in the
range from 2 to 3, but with a cut-off corresponding to the largest observed
value $\Gamma_{\rm max}={\rm Max}[\{\Gamma_{i}\}]$. By rescaling the values of
$\Gamma_{i}$ by the maximum $\Gamma_{\rm max}$, and relabelling
$\Gamma_{i}/\Gamma_{\rm max}\rightarrow\Gamma_{i}\in[0,1]$, the new values can
be represented in the form of the discrete distribution
$p_{i}(\Gamma_{i})=N^{-1}\left(1+(\alpha-1)\frac{1}{\epsilon}\,\Gamma_{i}\right)^{-1/(\alpha-1)},\>\>{\rm with}\>\alpha\in\left]1,2\right],$ (A.2)
where $p_{i}$ is the probability of a given value $\Gamma_{i}\in[0,1]$ and $N$
is the normalisation factor. The exponent $\alpha$ is related to inverse power
$\gamma$ as $\alpha=1+1/\gamma$. In what follows, we use $\alpha$ in
discussing the theoretical results while for simulation data $\gamma$ is often
more convenient. The parameter $\epsilon=\gamma/\Gamma_{\rm max}\ll 1$ has
small values, but must always be different from zero. The distribution in Eq.
(A.2) is q-exponential with exponent $q=\alpha$, where $\alpha$ is now used to
refer to the q-index to avoid any confusion with index $q$ in Eq. (5)
associated with a q-adjacency kernel.
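For concreteness, the distribution in Eq. (A.2) can be evaluated and normalised on a grid of rescaled communicability values. The sketch below is our own; the parameter values are illustrative, not taken from the fits:

```python
import numpy as np

def q_exp_pdf(Gamma, gamma, eps):
    """Discrete distribution of Eq. (A.2) on rescaled communicabilities
    Gamma in [0, 1]; the q-index is alpha = 1 + 1/gamma, so the tail
    falls off as Gamma^(-gamma)."""
    alpha = 1.0 + 1.0 / gamma
    p = (1.0 + (alpha - 1.0) * Gamma / eps) ** (-1.0 / (alpha - 1.0))
    return p / p.sum()  # the factor N is fixed by normalisation

# Illustrative values: gamma in ]2,3] and a small but nonzero eps
grid = np.linspace(0.0, 1.0, 1001)
p = q_exp_pdf(grid, gamma=2.5, eps=0.01)
```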
Results for the power $\gamma$ characterising the q-communicabilities in the
case $\lambda=1.5$ are shown in Figure A.1. The results are obtained for
$q=\{1.05,1.1,1.2,\ldots,2.0\}$ and $\beta/\beta_{\rm max}=\{0.1,0.2,0.3,\ldots,1.0\}$,
where $\beta_{\rm max}$ is about 80% of
the inverse of the largest estimated eigenvalue of the adjacency matrix. The
values of $\gamma$, and $\gamma^{\prime}$ in Fig. A.1 for fully rewired
($Q$=0) and original modular ($Q$=0.86) networks, respectively, are obtained
by least-squares fits to log-log distributions. In fitting the coefficients,
attention is paid to the tail of the distribution, and small values
$\Gamma<10^{-3}$ are allowed to deviate from the fits. The difference between
powers $\gamma$ and $\gamma^{\prime}$ is generally quite small, but
detectable, as is seen in Fig. A.1 from the relative change
$(\gamma^{\prime}-\gamma)/\gamma$ (in the middle).
We can now utilise the q-divergence to compare q-communicability distributions
with better resolution than provided by simply comparing the powers $\gamma$
and $\gamma^{\prime}$. With the parametrisation
$\alpha=\alpha^{\prime}(1+\eta)$, where
$\eta=(\alpha-\alpha^{\prime})/\alpha^{\prime}>0$ is the relative difference,
we can now obtain the q-divergence from Eq. (19) by replacing the density
matrices with the discrete distributions and the traces of matrices by sums
over the distributions, in the conventional way. A closed analytical form for
the q-divergence is then obtained by replacing the discrete distribution (A.2)
with the corresponding continuous distribution and evaluating the sum in Eq.
(19) as a Riemann integral. These replacements are not trivial, but they can
be justified for q-exponentials [48]. The resulting expression for
the q-divergence is then
$K_{\alpha}(\eta,\epsilon)=\frac{1}{\alpha-1}\left[\kappa_{\alpha}(\eta,\epsilon)\bar{K}_{\alpha}(\eta,\epsilon)\left(\epsilon^{\frac{-(2-\alpha)}{\alpha-1}}-(1+\epsilon)^{\frac{-(2-\alpha)}{\alpha-1}}\right)^{\alpha-1}-1\right]$ (A.3)
$\kappa_{\alpha}(\eta,\epsilon)=\frac{(\alpha-1)^{\alpha-1}(2-\alpha(1+\eta))^{\alpha}}{(2-\alpha)^{\alpha-1}[2-\alpha(1+2\eta)][\alpha(1+\eta)-1]^{\alpha-1}}$ (A.4)
$\bar{K}_{\alpha}(\eta,\epsilon)=\frac{\epsilon^{\frac{\alpha(1+2\eta)-2}{\alpha(1+\eta)-1}}-(1+\epsilon)^{\frac{\alpha(1+2\eta)-2}{\alpha(1+\eta)-1}}}{\left(\epsilon^{1-\frac{1}{\alpha(1+\eta)-1}}-(1+\epsilon)^{1-\frac{1}{\alpha(1+\eta)-1}}\right)^{\alpha}}$ (A.5)
The parameter $\eta=(\alpha-\alpha^{\prime})/\alpha^{\prime}>0$ is the
relative difference of the q-indices, characterising the distribution, while
the parameter $\epsilon$ is assumed to be the same for both distributions.
Note that the parameter $\eta$ always attains small values in the region of
interest, where $\alpha=1+1/\gamma$ with $\gamma\in]2,3]$.
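For numerical use, Eqs. (A.3)–(A.5) can be transcribed directly into code. The sketch below is our own transcription; as a sanity check, the divergence should vanish at $\eta=0$ (identical distributions) and be positive for $\eta>0$:

```python
def q_divergence(alpha, eta, eps):
    """Closed-form q-divergence K_alpha(eta, eps) of Eqs. (A.3)-(A.5),
    for alpha in ]1, 2[, small relative difference eta >= 0 and cut-off
    parameter 0 < eps << 1."""
    a, e = alpha, eps
    # kappa_alpha, Eq. (A.4)
    kappa = ((a - 1.0) ** (a - 1.0) * (2.0 - a * (1.0 + eta)) ** a) / (
        (2.0 - a) ** (a - 1.0)
        * (2.0 - a * (1.0 + 2.0 * eta))
        * (a * (1.0 + eta) - 1.0) ** (a - 1.0))
    # Kbar_alpha, Eq. (A.5)
    s = (a * (1.0 + 2.0 * eta) - 2.0) / (a * (1.0 + eta) - 1.0)
    t = 1.0 - 1.0 / (a * (1.0 + eta) - 1.0)
    kbar = (e ** s - (1.0 + e) ** s) / (e ** t - (1.0 + e) ** t) ** a
    # K_alpha, Eq. (A.3)
    u = -(2.0 - a) / (a - 1.0)
    return (kappa * kbar * (e ** u - (1.0 + e) ** u) ** (a - 1.0)
            - 1.0) / (a - 1.0)
```

At $\eta=0$ one has $\kappa_{\alpha}=1$ and the remaining factors cancel analytically, so $K_{\alpha}(0,\epsilon)=0$ as expected for a divergence.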
Figure A.1: The contour plots of inverse powers $\gamma^{\prime}$ for modular
network (at the left) and the relative difference
$(\gamma^{\prime}-\gamma)/\gamma$ (in the middle) with $\gamma$ corresponding
to a non-modular (rewired) distribution. The logarithm of q-divergence
$K_{\alpha}$ is shown at the right. Results are shown for parameters
$q\in]1,2]$ and $\beta/\beta_{\rm max}\in[0.1,1.0]$. Note that the values of
$\alpha$ and $\gamma$ are related through $\alpha=1+1/\gamma$. In reporting
inverse powers we have used values of $\gamma$ because the range of their
variation is [2,3] while for $\alpha$ the corresponding range is narrower
[1.33,1.5].
By using the above results with the values $\gamma$ and $\gamma^{\prime}$ from
Fig. A.1 we eventually obtain the q-divergence $K_{\alpha}$ shown in Fig. A.1
(at the right) as a function of the parameters $q$ and $\beta$. We also used
the analytical results in Eqs. (A.3)–(A.5), with the conversion
$\eta=(\alpha^{\prime}-\alpha)/\alpha$, where $\alpha=1+1/\gamma$ and
$\alpha^{\prime}=1+1/\gamma^{\prime}$. The main message of the result in Fig.
A.1 is the following: From the values of $K_{\alpha}$ in Fig. A.1, we see that
although the region close to the Katz centrality (the upper right corner of
contour plot with $\beta>0.8$ and $q>1.7$) displays better resolution (i.e.
large values of q-divergence), the region corresponding to Estrada’s
communicability at $q<1.2$ and $\beta>0.3$ also performs reasonably well in
providing large enough divergences for good resolution. In this region the
choice of $\beta$ is not so crucial for the stability of the calculations as
in the case of the region corresponding to the Katz centrality.
Finally, it should be mentioned that although a thermodynamic-like
interpretation of the q-divergence in Eqs. (A.3)–(A.5) is tempting, it is not
viable here. The major complication is that communicability is a derived
quantity which depends on parameter choices and does not refer to a
parameter-independent, constitutive property of the network. Therefore, Eq.
(A.5) is better interpreted as
divergence only. The utility of the analytical result in Eq. (A.5) is,
however, that it can be used to estimate the optimal region for the parameters
with a compromise between a good resolution of divergence and computational
simplicity.
The results demonstrate the practical utility and robustness of Estrada’s
communicability for applications where heavy-tailed distributions are
involved. The results support Estrada’s communicability as the most obvious
choice, in many cases when a compromise between computational stability, ease,
and sufficient accuracy is needed. This conclusion is of course already well-
known [39]; the present study additionally shows, using the generalised
q-adjacency kernels, that there is a smooth transition in the q-divergence
between the regions corresponding to Estrada’s communicability and the Katz
centrality.
## References
* [1] I. T. Koponen, M. Nousiainen, University Students’ Associative Knowledge of History of Science: Matthew Effect in Action?, Eur. J. Sci. Math. Educ. 6 (2018) 69–81.
* [2] H. Lommi, I. T. Koponen, Network cartography of university students’ knowledge landscapes about the history of science: landmarks and thematic communities, Appl. Netw. Sci. 4 (2019) 6.
* [3] A. S. Morais, H. Olsson, L. J. Schooler, Mapping the Structure of Semantic memory, Cogn. Sci. 37 (2013) 125–145.
* [4] R. Interdonato, M. Atzmueller, S. Gaito, R. Kanawati, C. Largeron, A. Sala, Feature-rich networks: going beyond complex network topologies, Appl. Netw. Sci. 4 (2019) 4.
* [5] G. W. Thompson, C. T. Kello, Walking across Wikipedia: a scale-free network model of semantic memory retrieval, Front. Psychol. 5 (2014) 86.
* [6] A. P. Masucci, A. Kalampokis, V. M. Equíluz, E. Hernández-García, Wikipedia Information Flow Analysis Reveals the Scale-Free Architecture of the Semantic Space, PLoS ONE 6 (2011) e17333.
* [7] J. Biamonte, M. Faccin, M. De Domenico, Complex networks from classic to quantum, Comm. Phys. 2 (2019) 53.
* [8] M. De Domenico, J. Biamonte, Spectral Entropies as Information-Theoretic Tools for Complex Network Comparison, Phys. Rev. X. 6 (2016) 041062.
* [9] J. Kunegis, D. Fay, C. Bauckhage, Spectral evolution in dynamic networks, Knowl. Inf. Syst. 37 (2013) 1–36.
* [10] M. Benzi, C. Klymko, On the Limiting Behavior of Parameter-Dependent Network Centrality Measures, SIAM J. Matrix Anal. Appl. 36 (2015) 686–706.
* [11] C. Tsallis, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys. 52 (1988) 479–487.
* [12] E. P. Borges, On a $q$-generalization of circular and hyperbolic functions, J. Phys. A: Math. Gen. 31 (1998) 5281–5288.
* [13] T. Yamano, Some properties of $q$-logarithm and $q$-exponential functions in Tsallis statistics, Physica A. 305 (2002) 486–496.
* [14] M. A. Ré, R. K. Azad, Generalization of Entropy Based Divergence Measures for Symbolic Sequence Analysis, PLoS ONE 9 (2014) e93532.
* [15] J. P. Bagrow, E. M. Bollt, An information-theoretic, all-scales approach to comparing networks, Appl. Netw. Sci. 4 (2019) 45.
* [16] G. Minello, L. Rossi, A. Torsello, Can a Quantum Walk Tell Which Is Which? A Study of Quantum Walk-Based Graph Similarity, Entropy 21 (2019) 328.
* [17] M. Müller-Lennert, F. Dupuis, O. Szehr, S. Fehr, M. Tomamichel, On quantum Rényi entropies: A new generalization and some properties, J. Math. Phys. 54 (2013) 122203.
* [18] G. A. Tsekouras, C. Tsallis, Generalized entropy arising from a distribution of $q$ indices, Phys. Rev. E. 71 (2005) 046144.
* [19] M. Gerlach, F. Font-Clos, E. G. Altmann, Similarity of Symbol Frequency Distributions with Heavy Tails, Phys. Rev. X 6, (2016) 021009.
* [20] L. Dias, M. Gerlach, J. Scharloth, E. G. Altmann, Using text analysis to quantify the similarity and evolution of scientific disciplines. R. Soc. Open Sci. 5 (2018) 171545.
* [21] E. G. Altmann, L. Dias, M. Gerlach, Generalized entropies and the similarity of texts, J. Stat. Mech. (2017) 014002.
* [22] S. Abe, Nonadditive generalization of the quantum Kullback-Leibler divergence for measuring the degree of purification, Phys. Rev. A 68, (2003) 032302.
* [23] S. Abe, Quantum q-divergence, Physica A 344 (2004) 359–365.
* [24] A. F. T. Martins, N. A. Smith, E. P. Xing, P. M. Q. Aguiar, M. A. T. Figueiredo, Nonextensive Information Theoretic Kernels on Measures, J. Mach. Learn. Res. 10 (2009) 935–975.
* [25] S. Furuichi, F.-C. Mitroi-Symeonidis, E. Symeonidis, On Some Properties of Tsallis Hypoentropies and Hypodivergences, Entropy 16 (2014) 5377-5399.
* [26] I. T. Koponen, Modelling Students’ Thematically Associated Knowledge: Networked Knowledge from Affinity Statistics, in: Complex Networks X. CompleNet 2019. Springer Proceedings in Complexity, Springer, Cham, 2019, pp. 123–134.
* [27] A. D. Broido, A. Clauset, Scale-free networks are rare, Nat. Commun. 10 (2019) 1017.
* [28] P. Holme, Rare and everywhere: Perspectives on scale-free networks, Nat. Commun. 10 (2019) 1016.
* [29] M.E. J. Newman, M. Girvan, Finding and evaluating community structure in networks, Phys. Rev. E 69 (2004) 026113.
* [30] V. D. P. Servedio, G. Caldarelli, P. Buttà, Vertex intrinsic fitness: How to produce arbitrary scale-free networks, Phys. Rev. E. 70 (2004) 056126.
* [31] G. Caldarelli, A. Capocci, P. De Los Rios, M. A. Muñoz, Scale-Free Networks from Varying Vertex Intrinsic Fitness, Phys. Rev. Lett. 89 (2002) 258702.
* [32] G. Csárdi, T. Nepusz, The Igraph software package for complex network research, InterJournal. Complex Systems (2006) 1695.
* [33] S. Abe, A. K. Rajagopal, Validity of the Second Law in Nonextensive Quantum Thermodynamics, Phys. Rev. Lett. 91 (2003) 120601.
* [34] E. Estrada, N. Hatano, M. Benzi, The physics of communicability in complex networks, Phys. Rep. 514 (2012) 89–119.
* [35] E. Estrada, D. J. Higham, N. Hatano, Communicability betweenness in complex networks, Physica A. 388 (2009) 764–774.
* [36] E. Estrada, The Structure of Complex Networks, Oxford University Press, Oxford, 2012.
* [37] M. Dehmer, Information processing in complex networks: Graph entropy and information functionals. Appl. Math. Comput. 201 (2008) 82–94.
* [38] M. Dehmer, A. Mowshowitz, A history of graph entropy measures, Inf. Sci. 181 (2011) 57–78.
* [39] M. Benzi, E. Estrada, C. Klymko, Ranking hubs and authorities using matrix functions, Lin. Algb. Appl. 438 (2013) 2447–2474.
* [40] L. Katz, A New Status Index Derived from Sociometric Analysis, Psychometrika 18 (1953) 39–43.
* [41] K. J. Sharkey, A control analysis perspective on Katz centrality, Sci. Rep. 7 (2017) 17247.
* [42] S. P. Borgatti, Centrality and network flow, Soc. Netw. 27 (2005) 55–71.
* [43] R. Ghosh, K. Lerman, Parameterized centrality metric for network analysis, Phys. Rev. E. 83 (2011) 066118.
* [44] M. Abramowitz, I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover, New York, 1972.
* [45] S. Abe, Temperature of nonextensive systems: Tsallis entropy as Clausius entropy, Physica A. 368 (2006) 430–434.
* [46] C. Tsallis, R. S. Mendes, A. R. Plastino, The role of constraints within generalized nonextensive statistics, Physica A. 261 (1998) 543–554.
* [47] A. Fronczak, P. Fronczak, J. A. Hołyst, Fluctuation-dissipation relations in complex networks, Phys. Rev. E. 73 (2006) 016108.
* [48] A. Plastino, M. C. Rocca, On the putative essential discreteness of q-generalized entropies, Physica A 488 (2017) 56–59.
# Inverse design of broadband and lossless topological photonic crystal
waveguide modes
Eric Nussbaum Department of Physics, Engineering Physics and Astronomy,
Queen’s University, Kingston, ON K7L 3N6, Canada Erik Sauer Department of
Physics, Engineering Physics and Astronomy, Queen’s University, Kingston, ON
K7L 3N6, Canada Stephen Hughes Department of Physics, Engineering Physics
and Astronomy, Queen’s University, Kingston, ON K7L 3N6, Canada
###### Abstract
Topological photonic crystal waveguides can create edge states that may be
more robust against fabrication disorder, and can yield propagation modes
below the light line. We present a fully three-dimensional method to modify
state-of-the-art designs to achieve a significant bandwidth improvement for
lossless propagation. Starting from an initial design with a normalized
bandwidth of 7.5$\%$ (13.4 THz), the modification gives more than 100%
bandwidth improvement to 16.2$\%$ (28.0 THz). This new design is obtained
using automatic differentiation enabled inverse design and a guided mode
expansion technique to efficiently calculate the band structure and edge state
modes.
## 1 Introduction
Photonic crystal slab (PCS) waveguides can slow down light and even stop light
(theoretically) in sub-wavelength dimensions, which can be exploited for
enhancing various light-matter interactions for applications in sensing,
nonlinear optics and quantum optics [1]. Moreover, when integrated with
semiconductor quantum dots, PCS waveguides can enable single photons to be
produced with almost 100% on-chip radiative emission [2, 3, 4]. However, a
significant problem for exploiting such waveguides is that they suffer, in
many cases dramatically, from disorder-induced scattering [5], which is
particularly severe in the slow-light regime. This effect is now well
understood theoretically and experimentally [6, 7, 8, 9], and seems to be a
fundamental problem, since the slow-light regime is also the regime for
exploiting enhanced light-matter interactions.
In recent years, topological PCS waveguides have been demonstrated [10, 11],
which may suppress disorder-induced backscattering through “topological
protection.” Moreover, PCS waveguides also exhibit Bloch modes with local
chirality [12, 13, 14], which can be used to couple to spin charged quantum
dots, manifesting in unidirectional single photon emission. Unfortunately,
many of the topological PCS waveguide modes fall above the light line and are
thus intrinsically lossy, with significant propagation losses [15]. Very
recently, new classes of topological PCS waveguides have been presented, using
the so-called Valley Hall effect [16, 17], which more easily allow edge state
modes to fall below the light line. For future applications of these
topological PCS structures, one goal is to increase the operation bandwidth
for single mode operation below the light line.
In this Letter, we demonstrate an efficient way to significantly increase the
lossless bandwidth for such devices, by combining inverse design techniques
[18, 19] with the semi-analytical approach to computing photonic band
structures known as the guided mode expansion (GME) technique [20]. Starting
with a state-of-the-art design from He et al. [17], we show how to increase
the bandwidth by more than 110$\%$, yielding a significant operation bandwidth
of 28.0 THz below the light line, with single mode operation that is
compatible with telecom wavelengths. Moreover, our methodology can be used to
optimize many target figures of merit for PCS structures, in an intuitive and
efficient way.
## 2 Methodology and Theory
The photonic band structure of PCS waveguides can be efficiently calculated
using the GME method, a semi-analytical method [20] that formulates Maxwell’s
equations as an eigenvalue problem in a basis built from the guided modes of
the slab expanded over the reciprocal lattice, allowing one to accurately
calculate both the real and imaginary band structure with fast computational
run times.
In this work, we use a GME Python package named Legume, from Minkov et al.
[21, 22]. We then combine this already very efficient approach with inverse
design techniques to significantly improve the operational bandwidth of
intrinsically lossless topological waveguide modes, i.e., to increase the
bandwidth of single mode operation below the light line.
We first summarize the salient details of the GME. One can rewrite Maxwell’s
equations in the frequency domain, to obtain an eigenvalue equation in terms
of the magnetic field, $\bm{H}(\bm{r})$:
$\bm{\nabla}\times\left(\frac{1}{\epsilon(\bm{r})}\bm{\nabla}\times\bm{H}(\bm{r})\right)=\left(\frac{\omega}{c}\right)^{2}\bm{H}(\bm{r}),$ (1)
with the condition $\nabla\cdot\bm{H}(\bm{r})=0$, where $\epsilon(\bm{r})$ is
the dielectric constant. To solve (1), the GME technique expands the magnetic
field into an orthonormal set of basis states as
$\bm{H}(\bm{r})=\sum_{\mu}c_{\mu}\bm{H}_{\mu}(\bm{r}),$ (2)
and so (1) can be written as
$\sum_{\nu}\mathcal{H}_{\mu\nu}c_{\nu}=\frac{\omega^{2}}{c^{2}}c_{\mu},$ (3)
where the elements of the Hermitian matrix $\mathcal{H}_{\mu\nu}$ are defined
as:
$\mathcal{H}_{\mu\nu}=\int\cfrac{1}{\epsilon(\bm{r})}\left(\nabla\times\bm{H}^{*}_{\mu}(\bm{r})\right)\cdot\left(\nabla\times\bm{H}_{\nu}(\bm{r})\right)d\bm{r}.$ (4)
To define an appropriate basis set $\bm{H}_{\mu}(\bm{r})$, the GME technique
uses the guided modes of the effective homogeneous slab waveguide, with a
dielectric constant taken as the spatial average of the dielectric constant in
the slab layer. The guided modes of the effective waveguide depend on a wave
vector, which can take any value in the slab plane. The modes of the PCS
depend on the wave vector, $\bm{k}$, which can be restricted to the first
Brillouin zone. Thus, only the effective waveguide modes with wave vector
$\bm{k}+\bm{G}$, where $\bm{G}$ is a reciprocal lattice vector of the PCS, are
included in the basis. The guided mode expansion is then
$\bm{H}_{\bm{k}}(\bm{r})=\sum_{\bm{G},\alpha}c(\bm{k}+\bm{G},\alpha)\bm{H}_{\bm{k}+\bm{G},\alpha}^{\rm guided}(\bm{r}),$ (5)
where $\bm{H}_{\bm{k}+\bm{G},\alpha}^{\rm guided}(\bm{r})$ is a guided mode of
the effective waveguide and $\alpha$ is the index of the guided mode [20].
Once the magnetic field of a photonic mode is found, the orthonormal electric
field eigenmodes can be straightforwardly obtained [20].
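Eq. (3) is a standard Hermitian eigenvalue problem, so once the matrix $\mathcal{H}_{\mu\nu}$ of Eq. (4) has been assembled in a truncated basis, the band frequencies follow from a dense diagonalisation. A minimal sketch (the small matrix below is purely illustrative, standing in for a real GME matrix; this is not the Legume implementation):

```python
import numpy as np

c = 299792458.0  # speed of light (m/s)

def mode_frequencies(H):
    """Solve the Hermitian eigenproblem of Eq. (3), H c = (omega/c)^2 c.
    Returns angular frequencies (ascending) and the expansion
    coefficients of Eq. (2) as columns."""
    evals, evecs = np.linalg.eigh(H)
    # eigenvalues are (omega/c)^2 >= 0 for a lossless structure
    omega = c * np.sqrt(np.clip(evals, 0.0, None))
    return omega, evecs

# Toy 3x3 Hermitian matrix standing in for H_{mu nu} (illustrative only)
H = np.array([[2.0, 0.5, 0.0],
              [0.5, 3.0, 0.2],
              [0.0, 0.2, 4.0]]) * 1e13
omega, coeffs = mode_frequencies(H)
```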
We next describe the inverse design approach, and how it can be implemented
with the GME. Inverse design treats the design process as an optimization
problem. An objective function is written, which accepts a device
parameterization and returns an appropriate figure of merit (FOM). This
function is then optimized, typically with a gradient-based optimization
algorithm. The gradient calculations are normally performed using the adjoint
variable method; however, this method cannot be easily implemented with the GME
technique [21]. Instead, we use automatic differentiation (AD), which allows
the gradient of an arbitrarily complex function to be calculated.
The fundamental idea behind AD is that the gradient of a function can be
composed from the gradients of its sub-functions. When using an AD library, the objective
function is defined, and then by tracking sub-function executions, the AD
library creates an operator to return the objective function gradient. In this
work, we use the AD library Autograd, which allows the objective function to
be written in normal Python code and is compatible with most of the NumPy
library [23]. The GME library being used, Legume, allows its back end to be
set to Autograd compatible code, making calculating the objective function
gradient straightforward.
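The chain-rule bookkeeping behind AD can be illustrated with a minimal forward-mode (dual-number) sketch. Note that Autograd itself uses reverse-mode AD; this toy class only illustrates the principle of composing derivatives from sub-operations:

```python
import math

class Dual:
    """Minimal forward-mode AD: a value paired with its derivative, so the
    derivative of a composite function is assembled from the derivatives
    of its sub-operations via the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def derivative(f, x):
    """d f / d x at x, exact to machine precision (no finite differences)."""
    return f(Dual(x, 1.0)).dot

# f(x) = x^2 sin(x); tracking sub-operations yields f'(x) automatically
f = lambda x: x * x * x.sin()
```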
## 3 Design process and results
We start our inverse design optimization from a state-of-the-art topological
PCS waveguide design recently introduced by He et al. [17]. As shown in Fig.
1, this lattice structure is formed from two photonic crystals (PCs), which
can be described as a triangular lattice of unit cells containing two air
holes, with lattice constant $a$ (see Fig. 1), or alternatively as a
triangular lattice of 6 hole honeycomb unit cells with lattice constant
$\sqrt{3}a$ and cluster radius fixed at $R=a/\sqrt{3}$. In each two-hole unit
cell, the two circular air holes have different radii, $r_{1}$ and $r_{2}$.
Above the interface, the relative positions of the smaller and bigger holes
are flipped about the $x$ axis. Both PCs therefore have the same band
structure, which contains a photonic bandgap (PBG). We define the PBG as a
frequency range below the light line, where there are no radiation modes and
intrinsic losses. In Fig. 2, we show the PBG properties for various system
parameters; the PBG is shaded dark blue in panels 2(b-d). The interface is
constructed by truncating each PCS at a value of $y$ at which a row of holes
with radius $r_{2}$ exists and positioning the PCs such that the triangular
lattice is continuous across the interface. To be consistent with He et al.
[17], the PCS and the waveguide are modeled with a silica substrate with
$\epsilon=2.1$, and the slab dielectric constant is $\epsilon=12.0$.
Figure 1: Two schematics of the topological waveguide proposed by He et al.
[17]. The 3D schematic (top) is composed of a Si PCS (blue) on top of a SiO2
substrate (red). The 2D lattice schematic (bottom) has relevant geometric
quantities labeled in red; the air holes are shown in white and the direction
of propagation is designated as the $x$ direction.
The objective of the optimization is to increase the bandwidth of a target
single guided mode. We define the bandwidth as the frequency range, under the
light line, in which no PCS bands exist and the guided mode band exists as a
one-to-one transformation of frequency.
First, the hole sizes in the PCS from which the waveguide is built are
optimized to increase the size of the PBG; then the interface holes are
optimized for the bandwidth. The PCS is modeled as a triangular lattice of
two-hole cells, which are outlined with a green line in Fig. 2(a). Only the
hole radii in each unit cell are allowed to be modified; assuming a lattice
constant, $a$, of 453 nm, the radii are not allowed to fall below 30 nm so
that the holes can be fabricated with modern fabrication technology [24]. The
holes are also required to maintain 30 nm between their edges. The
optimization quickly reduced the smaller hole’s radius to the minimum allowed
size. At this point, a methodical search across the possible values of
$r_{2}$, with $r_{1}$ held at 30 nm, is conducted. At each hole size, the band
structure and the PBG are calculated. In Fig. 2(b), the PBG range across
values of $r_{2}$ is shaded in dark blue. In Fig. 2(c) and (d), sample band
structures are shown for the values of $r_{2}$ indicated by the vertical
dashed black lines in Fig. 2(b). Using the GME technique, the methodical
search across 50 values of $r_{2}$ takes less than 5 minutes to run on a
laptop. From the PBG, we calculate the gap-midgap ratio,
$\Delta\omega/\omega_{m}$, which we define as the PBG width divided by its
central frequency value. For this calculation, the region $\omega a/2\pi
c>0.34$ (shaded in light grey in Fig. 2(b)) was not considered as part of the
PBG because this region is above the light line for the waveguide. A radius of
approximately $0.31a$ (140 nm) is chosen for the second PCS hole radius.
Figure 2: Summary of a methodical search across values of $r_{2}$ to
increase the band gap, with $r_{1}=30\,{\rm nm}$. (a) Top down view of the PC,
modeled with a triangular lattice, the unit cell is outlined in green. (b) The
PBG shaded in blue ("Gap map"), sweeping across $r_{2}$ with $r_{1}$ held at
30 nm. The gap-midgap ratio is plotted in red, not considering $\omega a/2\pi
c>0.34$ (shaded in light grey) as part of the PBG. (c) Sample band structure
with $r_{2}$ at the first vertical dashed line in (b). (d) Sample band
structure with $r_{2}$ at the second vertical dashed line in (b). Figure 3:
(a) Schematic (top down view) of the initial design with the interface holes
indicated by the red line. (b) Schematic of the new design. (c) Group index,
$n_{g}=|c/v_{g}|$ and band structure for the initial design. The light line
for the silica substrate is shaded in grey; the light line for an air bridge
is the dashed black line. The PCS bands are shaded in light blue, the
bandwidth is shaded turquoise, and the two guided bands are plotted in
magenta. All bandwidth and $n_{g}$ calculations are performed for the guided
band drawn with a solid line. (d) Group index and band structure for the new
design.
This PCS is then used to form a waveguide (cf. [17]) as shown in Fig. 1,
except the interface hole radii are chosen to be smaller than $r_{2}$ to
increase the initial distance between the edges of the interface holes. For
our purposes, using the GME technique, the waveguide is modeled with a super
cell $a$ wide in the $x$ direction. As the computation time increases with the
size of the super cell, we chose a length of approximately $13a$ in the $y$
direction for the optimization calculations. To obtain more accurate
calculations of the relevant PCS bands which define the PBG, such as those
performed for Figs. 3 and 4, a super cell approximately $30a$ long in the $y$
direction is used. The super cell with width $a$ contains two holes which form
the interface. The two interface holes’ radii and position are allowed to be
modified as well as the slab thickness, while enforcing the same geometric
constraints as described above for the PC. The FOM used is the bandwidth to
mid-bandwidth ratio, which we define as the bandwidth divided by its central
frequency value. We then used the stochastic optimization algorithm termed
“Adam” [25] with an objective function returning this FOM.
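For reference, the Adam update [25] applied to a toy one-dimensional objective can be sketched as follows; hyperparameter values and the quadratic test function are illustrative, not those used in our optimization:

```python
import numpy as np

def adam_minimise(grad, x0, lr=0.01, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=3000):
    """Minimal Adam update loop; grad returns the gradient of the
    objective (e.g. the negated FOM) w.r.t. the design parameters x."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)  # first-moment (mean) estimate
    v = np.zeros_like(x)  # second-moment (uncentred variance) estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1.0 - beta1) * g
        v = beta2 * v + (1.0 - beta2) * g * g
        m_hat = m / (1.0 - beta1 ** t)  # bias corrections
        v_hat = v / (1.0 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy check: minimise the quadratic (x - 3)^2 via its gradient 2(x - 3)
x_opt = adam_minimise(lambda x: 2.0 * (x - 3.0), [0.0])
```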
Figure 3 presents schematics of the initial and final designs in (a) and (b),
and their respective band structures in (c) and (d). The initial design has a
bandwidth of 169.4 to 182.7 THz (1.770 to 1.641 $\mu$m), while the new design
has a bandwidth of 159.1 to 187.1 THz (1.885 to 1.603 $\mu$m). Thus, the new
edge mode improves the operation bandwidth for bound mode operation by more
than 110%, and the mode is also
better confined within the PBG. As shown in Fig. 3, both designs have a
similar $n_{g}$ dispersion curve, though the new design has a region of
slightly larger $n_{g}$ near the bandwidth edge, and far better dispersive
properties in momentum space.
Figure 4: (a) Zoom in of the Bloch mode intensity $|\bm{E}|^{2}$ for the
initial design; calculations performed for a super cell approximately $34a$
long. Cross sections are taken at the center of the slab thickness. The field
is plotted for the mode and wave vector $\bm{k}$ indicated by the circular
marker in Fig. 3(c), at $k_{x}a=15\pi/16$. (b) Bloch mode intensity for the
new design. (c) The associated $S_{3}$ plot for the initial design, further
zoomed in around the waveguide interface. (d) The associated $S_{3}$ plot for
the new design.
Carrying out 10 optimization iterations takes approximately 25 minutes to run
on a standard computer workstation. The presented final design is from the
347th iteration; the optimization took approximately 14 hours to run,
which is quite remarkable given the complexity of solving a full 3D design
problem in nanophotonics.
Finally, we study the spatial details of the Bloch modes. The polarization of
the electric field $\bm{E}(\bm{r})$ can be described by the four Stokes
parameters [26], and we study the normalized third Stokes parameter to
quantify the local chiral nature of the modes,
$S_{3}(\bm{r})=\frac{2\operatorname{Im}[E_{x}^{*}(\bm{r})E_{y}(\bm{r})]}{|E_{x}(\bm{r})|^{2}+|E_{y}(\bm{r})|^{2}},$
(6)
where points at which $S_{3}(\bm{r})$ is equal to $\pm 1$ are points where the
electric field is right- or left-circularly polarized, respectively.
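A minimal NumPy sketch of Eq. (6) (illustrative; the field arrays would come from the GME solver):

```python
import numpy as np

def stokes_s3(Ex, Ey):
    """Normalized third Stokes parameter S3 (Eq. 6) from the in-plane
    field components; values of +1 / -1 mark points of circular
    polarization (right- / left-circular in the convention of Eq. 6)."""
    num = 2.0 * np.imag(np.conj(Ex) * Ey)
    den = np.abs(Ex) ** 2 + np.abs(Ey) ** 2
    return num / den

# Example: a field with Ey = i * Ex gives S3 = +1
Ex, Ey = np.array([1.0 + 0j]), np.array([1j])
s3 = stokes_s3(Ex, Ey)
```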
In Fig. 4, cross-sections of the normalized electric field Bloch mode
intensity $|\bm{E}(\bm{r})|^{2}$, as well as the associated $S_{3}(\bm{r})$,
are plotted for the initial design in (a) and (c), and the new design in (b)
and (d); these plots are all taken at the center of the slab thickness. The
Bloch modes show that the field is well confined to the interface for both
designs, and there is reasonable overlap with the chiral points, which can be
exploited for unidirectional single photon emission [12, 13].
## 4 Conclusions
Using an efficient computation method to obtain PCS waveguide band structures
(GME) and automatic differentiation, we have achieved a significant
improvement to a state-of-the-art design. The same techniques can be applied
to optimize such devices for a variety of different objectives, such as
quantum dot coupling at a chiral point, achieving extremely slow light, etc.
In this work, the chosen device parameterization significantly constrained the
optimization to only modify the size of the PCS holes and the size and
position of holes which define the interface. Increasing the degrees of
freedom while still using a shape parameterization by, e.g., modifying the
position of the holes in each PCS super cell, allowing the shape of the holes
to change, or allowing more holes to be modified while optimizing the
interface, should allow even greater improvements to be obtained, while still
maintaining the ease of using a shape parameterization.
## 5 Funding
Canadian Foundation for Innovation; the Natural Sciences and Engineering
Research Council of Canada; and Queen’s University, Canada.
## 6 Acknowledgments
We thank Juan Pablo Vasco and Nir Rotenberg for useful discussions.
## References
* [1] T. F. Krauss, “Why do we need slow light?” Nature Photonics 2, 448–450 (2008).
* [2] V. S. C. Manga Rao and S. Hughes, “Single quantum-dot purcell factor and $\beta$ factor in a photonic crystal waveguide,” Physical Review B 75, 205437 (2007).
* [3] M. Arcari, I. Söllner, A. Javadi, S. Lindskov Hansen, S. Mahmoodian, J. Liu, H. Thyrrestrup, E. H. Lee, J. D. Song, S. Stobbe, and P. Lodahl, “Near-unity coupling efficiency of a quantum emitter to a photonic crystal waveguide,” Phys. Rev. Lett. 113, 093603 (2014).
* [4] A. Laucht, S. Pütz, T. Günthner, N. Hauke, R. Saive, S. Frédérick, M. Bichler, M.-C. Amann, A. W. Holleitner, M. Kaniber, and J. J. Finley, “A waveguide-coupled on-chip single-photon source,” Phys. Rev. X 2, 011014 (2012).
* [5] S. Hughes, L. Ramunno, J. F. Young, and J. E. Sipe, “Extrinsic Optical Scattering Loss in Photonic Crystal Waveguides: Role of Fabrication Disorder and Photon Group Velocity,” Physical Review Letters 94, 033903 (2005).
* [6] E. Kuramochi, M. Notomi, S. Hughes, A. Shinya, T. Watanabe, and L. Ramunno, “Disorder-induced scattering loss of line-defect waveguides in photonic crystal slabs,” Physical Review B 72, 161318(R) (2005).
* [7] M. Patterson, S. Hughes, S. Combrié, N.-V.-Q. Tran, A. De Rossi, R. Gabet, and Y. Jaouën, “Disorder-Induced Coherent Scattering in Slow-Light Photonic Crystal Waveguides,” Physical Review Letters 102, 253903 (2009).
* [8] M. Patterson, S. Hughes, S. Schulz, D. M. Beggs, T. P. White, L. O’Faolain, and T. F. Krauss, “Disorder-induced incoherent scattering losses in photonic crystal waveguides: Bloch mode reshaping, multiple scattering, and breakdown of the Beer-Lambert law,” Physical Review B 80, 195305 (2009).
* [9] L. O’Faolain, T. P. White, D. O’Brien, X. Yuan, M. D. Settle, and T. F. Krauss, “Dependence of extrinsic loss on group velocity in photonic crystal waveguides,” Opt. Express 15, 13129–13138 (2007).
* [10] S. Barik, H. Miyake, W. DeGottardi, E. Waks, and M. Hafezi, “Two-dimensionally confined topological edge states in photonic crystals,” New Journal of Physics 18, 113013 (2016).
* [11] P. D. Anderson and G. Subramania, “Unidirectional edge states in topological honeycomb-lattice membrane photonic crystals,” Optics Express 25, 23293–23301 (2017).
* [12] A. Young, A. Thijssen, D. Beggs, P. Androvitsaneas, L. Kuipers, J. Rarity, S. Hughes, and R. Oulton, “Polarization Engineering in Photonic Crystal Waveguides for Spin-Photon Entanglers,” Physical Review Letters 115 (2015).
* [13] I. Söllner, S. Mahmoodian, S. L. Hansen, L. Midolo, A. Javadi, G. Kiršanskė, T. Pregnolato, H. El-Ella, E. H. Lee, J. D. Song, S. Stobbe, and P. Lodahl, “Deterministic photon–emitter coupling in chiral photonic circuits,” Nature Nanotechnology 10, 775–778 (2015).
* [14] S. Barik, A. Karasahin, C. Flower, T. Cai, H. Miyake, W. DeGottardi, M. Hafezi, and E. Waks, “A topological quantum optics interface,” Science 359, 666–668 (2018).
* [15] E. Sauer, J. P. Vasco, and S. Hughes, “Theory of intrinsic propagation losses in topological edge states of planar photonic crystals,” Physical Review Research 2, 043109 (2020).
* [16] M. I. Shalaev, W. Walasik, A. Tsukernik, Y. Xu, and N. M. Litchinitser, “Robust topologically protected transport in photonic crystals at telecommunication wavelengths,” Nature Nanotechnology 14, 31–34 (2019).
* [17] X.-T. He, E.-T. Liang, J.-J. Yuan, H.-Y. Qiu, X.-D. Chen, F.-L. Zhao, and J.-W. Dong, “A silicon-on-insulator slab for topological valley transport,” Nature Communications 10, 872 (2019).
* [18] A. Y. Piggott, J. Lu, K. G. Lagoudakis, J. Petykiewicz, T. M. Babinec, and J. Vučković, “Inverse design and demonstration of a compact and broadband on-chip wavelength demultiplexer,” Nature Photonics 9, 374–377 (2015).
* [19] S. Molesky, Z. Lin, A. Y. Piggott, W. Jin, J. Vucković, and A. W. Rodriguez, “Inverse design in nanophotonics,” Nature Photonics 12, 659–670 (2018).
* [20] L. C. Andreani and D. Gerace, “Photonic-crystal slabs with a triangular lattice of triangular holes investigated using a guided-mode expansion method,” Physical Review B 73, 235114 (2006).
* [21] M. Minkov, I. A. D. Williamson, L. C. Andreani, D. Gerace, B. Lou, A. Y. Song, T. W. Hughes, and S. Fan, “Inverse Design of Photonic Crystals through Automatic Differentiation,” ACS Photonics 7, 1729–1741 (2020).
* [22] M. Minkov, I. Williamson, and S. Fan, “legume: Differentiable guided mode expansion methods,” GitHub (2020).
* [23] D. Maclaurin, D. Duvenaud, M. Johnson, and J. Townsend, “Autograd,” GitHub.
* [24] R. E. Christiansen, F. Wang, and O. Sigmund, “Topological Insulators by Topology Optimization,” Physical Review Letters 122, 234502 (2019).
* [25] D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” in _arXiv:1412.6980 [Cs],_ (International Conference on Learning Representations (ICLR), 2015).
* [26] B. Lang, D. M. Beggs, A. B. Young, J. G. Rarity, and R. Oulton, “Stability of polarization singularities in disordered photonic crystal waveguides,” Physical Review A 92, 063819 (2015).
# Classification of pedagogical content using conventional machine learning
and deep learning model
Vedat Apuk, Krenare Pireva Nuçi
Department of Computer Science and Engineering
University for Business and Technology
10000 Prishtine, Kosovo
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
The advent of the Internet and a large number of digital technologies has
brought with it many different challenges. A large amount of data is found on
the web, which in most cases is unstructured and unorganized, and this
contributes to the fact that the use and manipulation of this data is quite a
difficult process. Due to this fact, the usage of different machine and deep
learning techniques for Text Classification has gained importance, improving
this discipline and making it more interesting for scientists and researchers
to study further. This paper aims to classify pedagogical content using two
different models: the K-Nearest Neighbor (KNN) from the conventional models,
and the Long Short-Term Memory (LSTM) recurrent neural network from the deep
learning models. The results indicate that the accuracy of classifying the
pedagogical content reaches 92.52% using the KNN model and 87.71% using the
LSTM model.
_Keywords_ Document Classification $\cdot$ KNN $\cdot$ LSTM $\cdot$ Coursera
dataset $\cdot$ education $\cdot$ text classification $\cdot$ deep learning
models $\cdot$ machine learning models
## 1 Introduction
Billions of users create a large amount of data every day, which in a sense
comes from various types of sources. This data is in most cases unorganized
and unclassified and is presented in various formats such as text, video,
audio, or images. Processing and analyzing this data is a major challenge that
we face every day. The problem of unstructured and unorganized text dates back
to ancient times, but Text Classification as a discipline first appeared in
the early 1960s; roughly 30 years later, interest in it increased across
various fields [1], and it began to be applied in many types of domains and
applications, such as movie reviews [2], document classification [3],
e-commerce [4], social media [5], and online courses [6, 7]. As interest grew
in the following years, these applications started solving problems with more
accurate results in more flexible ways. Knowledge Engineering (KE) was
one of the applications of text classification in the late 80s, where the
process took place by manually defining rules based on expert knowledge in
terms of categorization of the document for a particular category [1]. After
this time, there was a great wave of use of various modern and advanced
methods for text classification, which all improved this discipline and made
it more interesting for scientists and researchers, more specifically the use
of machine learning techniques. These techniques bring many advantages; they
now exist in very large numbers and provide solutions to almost every problem
we may encounter.
The need for education and learning dates back to ancient times, and people
are constantly improving and trying to gain as much knowledge as possible.
There are various sources of learning available today including various MOOC
platforms such as Coursera, Khan Academy, Udemy, Udacity, edX, to name a few,
and as technology has evolved it has contributed to better methods of
acquiring knowledge that will facilitate this process. The data coming from
these sources are in most cases in digital form, more specifically in the form
of video and text lessons. The platforms that contain these lessons are called
Massive Open Online Courses (MOOCs), where in addition to the video lesson, it
also contains its textual representation, called a transcript. The duration of
a video lesson depends on several parameters, such as the category of the
video material, the platform on which the lesson is provided, the complexity
of the topic, the number of instructors, and the group of lesson attendants.
The duration of a lesson, in turn, indirectly dictates how long its transcript
will be, in other words how many words it can contain. The category
shows the nature of the video and the topics that will be presented in it. As
it is already known that each video lesson belongs to a certain category or
group of categories, so does its transcript. Given this, we can conclude that
text classification is becoming quite an extensive discipline, and that its
use can solve many challenging problems in every domain, specifically in the
education domain.
The aim of this paper is to investigate two techniques for classifying
pedagogical content, focusing on comparing conventional machine learning
models with deep learning models by selecting the KNN algorithm for the former
approach and the LSTM architecture for the latter. The paper is organized as
follows: the literature review explains the main processes of classifying
documents and then surveys related work conducted so far in this area. In the
experimental section, the design of the conventional machine learning and
deep learning models is elaborated, and the results for each of the
architectures are presented using a number of evaluation metrics (recall,
precision, F-score, accuracy). The paper ends with conclusions and future
work.
## 2 Text Classification Processes and Related Work
Text mining or text analytics is one of the artificial intelligence techniques
that uses Natural Language Processing (NLP) to transform unorganized and
unstructured text into an appropriately structured format that will make it
easier to process and analyze data. For businesses and other corporations,
generating large amounts of data has become a daily routine. Analysis of this
data helps companies gain smarter and more creative insights regarding their
services or products, collected from a variety of sources in an automated
manner. But this analysis step requires processing and preparing a huge amount
of data, which in most cases is the cause of various problems. NLP is made up
of five steps or phases: Lexical
Analysis, Syntax Analysis, Semantic Analysis, Pragmatics, and Discourse [8].
Figure 1: Natural Language Processing steps.
Figure 1 shows these steps within NLP; each of them is briefly described
below to clarify the main concepts:
1. 1.
Lexical Analysis - involves identifying the structure of the text by
separating it into individual words, sentences, or paragraphs, which also
includes separating punctuation from words.
2. 2.
Syntax Analysis - involves parsing words and arranging words in a sentence to
have a certain meaning and relationship between them, where it is based
exclusively on grammar.
3. 3.
Semantic Analysis - involves analyzing the grammatical structure of a word
and seeking a specific meaning in it. Semantic analysis makes it possible to
understand the relationship between lexical items.
4. 4.
Pragmatics - concerns how the interpretation of a sentence is affected by its
use in different situations, i.e., what it means and encompasses in context.
5. 5.
Discourse - recognizes that the meaning of the current sentence may depend on
the previous sentence, and can also affect the meaning of the sentence that
comes after it.
So, the goal of text classification or text analysis is to structure and
classify data to facilitate the analysis process. Today, as shown in Figure 2,
in order to perform text classification on existing data, we follow the
four phases emphasized by [9]:
1. 1.
Feature Extraction
2. 2.
Dimension Reductions
3. 3.
Classifier Selection
4. 4.
Evaluation
Figure 2: Four-phase model of a text classification system.
### 2.1 Feature Extraction
As shown in Figure 2, in the initial feature extraction phase a piece of text
or a document is converted into a so-called structured feature space, which is
useful when applying a classifier. Prior to this, one needs to perform data
cleaning: handling missing data and removing unnecessary characters or letters
in order to bring the data into an appropriate shape for feature extraction.
Omitting data cleaning can directly and negatively affect the performance and
accuracy of the final results.
Figure 3: Techniques of data preprocessing phase.
Emphasizing the importance of pre-processing data, Figure 3 depicts a number
of processes that are followed to clean the data and prepare it for further
processing [9], such as:
* •
Tokenization - is the process of separating a piece of text into smaller units
called tokens. The way the token is formed is based on a delimiter, which in
most cases is space. Also, tokens can be words or sub-words, but also at a
lower level, based on characters.
* •
Stop Words - are words that are commonly used in one language, that are
unnecessary in the data processing part, and in most cases are ignored because
they take up more space in the database, and affect longer processing times.
In English stop words are words like: "a", "the", "an", "it", "in", "because",
"what", to name a few.
* •
Capitalization - is the step of normalizing the capitalization of words
(typically converting text to lowercase), given that, for example, the first
word of a sentence is automatically capitalized.
* •
Noise Removal - is the process of removing characters, numbers, and parts of
text that affect your analysis. These characters can be some special
characters, punctuation, source code, HTML code, unique
characters that represent a particular word, numbers, and many other
identifiers.
* •
Spelling Correction - addresses the problem of a word being misspelled and
thus losing its meaning. This problem can be solved in two ways: with edit
distance, or with overlap using k-grams.
* •
Stemming - is a process that reduces the morphological variants of a word to
its base or so-called root word. For example, "likes", "liked", "liking", and
"likely" are morphological variants of the root word "like".
* •
Lemmatization - in this technique words are replaced with root words or words
that have a similar meaning, and such words are called lemmas.
* •
Syntactic Word Representation (such as N-Gram) - is a contiguous sequence of n
items from one part of the text.
* •
Syntactic N-Gram - are n-grams that are constructed using paths in syntactic
trees.
* •
Weighted Words (such as TF and TF*IDF)
* •
Word Embedding (such as Word2Vec, GloVe, FastText)
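Several of the cleaning steps above (lowercasing, noise removal, tokenization, stop-word filtering) can be sketched in plain Python; the stop-word set here is a tiny illustrative subset, not a full list:

```python
import re

# Tiny illustrative stop-word subset (a real system would use a full list)
STOP_WORDS = {"a", "the", "an", "it", "in", "because", "what", "is"}

def preprocess(text):
    text = text.lower()                    # capitalization normalization
    text = re.sub(r"[^a-z\s]", " ", text)  # noise removal (non-letters)
    tokens = text.split()                  # whitespace tokenization
    return [t for t in tokens if t not in STOP_WORDS]  # stop-word removal

tokens = preprocess("The [MUSIC] tag is noise in a transcript!")
# tokens -> ['music', 'tag', 'noise', 'transcript']
```

Stemming or lemmatization would then be applied to the surviving tokens.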
### 2.2 Dimension Reductions
As the name itself suggests, the goal of this step is to transform the data
from a high-dimensional space to a low-dimensional one. The reason is that we
strive to improve performance, speed up computation, and reduce memory
complexity. There are a number of algorithms or techniques in this step
that could be implemented, such as: (i) Principal Component Analysis (PCA),
(ii) Non-negative Matrix Factorization (NMF), (iii) Linear Discriminant
Analysis (LDA) and (iv) Kernel PCA.
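As an illustrative sketch (not from the paper), the first of these techniques, PCA, can be applied with scikit-learn as follows; the random matrix stands in for real document feature vectors:

```python
import numpy as np
from sklearn.decomposition import PCA

# 100 samples with 50 features each (illustrative stand-in data)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))

# Project down to 10 principal components
pca = PCA(n_components=10)
X_low = pca.fit_transform(X)
# X_low.shape -> (100, 10)
```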
Figure 4: Categorization of dimension reductions algorithms.
### 2.3 Classifier Selection
One of the main concerns is to choose the right classifier model that will be
able to perform with a certain set of data to achieve the desired results.
Choosing the right classifier model is not an easy task, and is a challenge
that is also referred to in the literature as the Algorithm Selection Problem
(ASP). Every day we come across applications that use classification
algorithms in some form. The results of the task depend on choosing the right
algorithm that will complete a particular job while showing very good
performance and problem optimization. In general, there is no single algorithm
that can work for every type of problem and learn all the tasks while still
being efficient; this phenomenon is also known as performance
complementarity [10]. Many factors affect the performance of a particular
algorithm, some of which are the amount of data assigned to it for testing and
training, the operating system on which it is executed, the specifications of the
directly or indirectly affect the selection of the algorithm.
Some of the algorithms used for text classification are: Logistic Regression,
Naive Bayes, K-Nearest Neighbor (KNN), Support Vector Machines (SVM), Decision
Trees, Random Forests, Neural Network algorithms (such as DNN, CNN, RNN) and
Combination Techniques.
In our experiment, we have used K-Nearest Neighbor (KNN) from the conventional
models and the LSTM recurrent neural network from the deep learning models.
### 2.4 Evaluation
One of the most important steps when creating a model for text classification
is the evaluation phase. In this phase, algorithms are analyzed or scored to
assess how efficiently they performed. It should also be noted that comparing
different parameters or metrics is not an easy task.
There is a so-called confusion matrix table (see Figure 5) in which
classification metrics such as True Positives (TPs), False Positives (FPs),
False Negatives (FNs) and True Negatives (TNs) are calculated and presented
[11].
Figure 5: Confusion Matrix
Figure 5 shows a confusion matrix table in which the prediction results are
displayed horizontally, while a label that is positive or negative is shown
vertically. Another evaluation metric that has lately been widely used is the
F-score. In this paper, precision, recall, F-score, and accuracy are used to
evaluate the experimental models.
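These four metrics can be computed directly from the confusion-matrix counts (TP, FP, FN, TN) described above; a minimal sketch with illustrative counts:

```python
# Evaluation metrics from confusion-matrix counts.

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f_score(tp, fp, fn):
    """Harmonic mean of precision and recall (F1)."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

# Illustrative counts: tp=8, fp=2, fn=2, tn=8 give 0.8 for all four metrics
scores = (precision(8, 2), recall(8, 2), f_score(8, 2, 2), accuracy(8, 2, 2, 8))
```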
### 2.5 Related Work
The various technologies available today have drastically improved the way
people try to gain new knowledge. Technology has greatly influenced the
improvement of this process, and at the same time contributed to the
development of systems that enable a more efficient and easier learning
process. With this fact the use of various Massive Open Online Courses (MOOCs)
begins to increase, which bring with them various opportunities, but also
challenges. Attempts to identify and analyze the opportunities and challenges
of MOOCs from both pedagogical and business standpoints have led to an
understanding of how some well-known and successful platforms like Coursera,
edX, and Udacity have improved their business models through various aspects,
using models such as the certification model, freemium model, advertising
model, job-matching model, and subcontractor model [12].
During the analysis of these platforms, the authors in [9] concluded that
quite a low number of students actually take assessment exams at the end of a
MOOC, which makes it difficult to assess whether students joining a MOOC are
actually learning the content, and hence whether the MOOC is achieving its
goal. One of the main components of these platforms is Learning Objects (LOs).
Various techniques for representing Learning Objects (LOs), which contain
pedagogical values, are presented in [13]. Using the representation features
of Learning Objects provides possibilities to personalize and customize
content presented to learners, along with the ability to choose an individual
learning path that best suits them, aiming to maximize the learning outcome,
as claimed in [13]. There are plenty of
examples where K-Means, Decision Trees, Deep Neural Network (DNN) and other
machine learning techniques have been used for classification purposes [14].
eLearning platforms are becoming more accessible, with the main goal of
providing a smarter way of learning. The new e-Learning paradigm, also known
as Cloud eLearning, aims to offer personalised learning using Cloud resources,
where the main challenge is the process of classifying content and matching it
with learners' preferences. As part of this work, the author of [15]
integrated recommendation systems as a middle layer, using a hierarchical
clustering technique to recommend to learners courses or materials that are
similar to their needs before proposing a learning path using an
artificial-intelligence automated planner. Also, paper [16] contributes to the
classification systems in pedagogical content, with the main focus on the
content classification of video lectures. The authors proposed a visual
content classification system (VCCS) for multimedia lecture videos that
classifies the content displayed on the blackboard. Through this model, they
showed over several stages how lecture videos are processed, and how a
combination of support vector machines (SVM) and optical character recognition
(OCR) then classifies visual content, text, and equations [16]. Furthermore,
in [17], researchers presented the classification
and organization of pedagogical documents using domain ontology.
In one of the previous studies [18], the authors of this paper presented a
technique for automatic classification of MOOC videos, where the first step is
to extract transcripts from videos and then convert them into an image
representation using a statistical co-occurrence transform. After that, in
order to evaluate the presented technique, a CNN model was implemented on a
dataset collected from Khan Academy with a total of 2545 videos. Based on
label accuracy, the best results were achieved with the
CNN model, with the value of 97.87%. Also, similar work has been carried out
by Imran, Kurti and Kastrati in [19] where they have proposed a video
classification framework, consisting of three main modules: pre-processing,
transcript representation, and classifier. In that paper, it was concluded
that much better classification results were achieved at the general level
than at the specific level, which was attributed to the class overlap that the
specific-level categories contain.
This paper aims to classify the pedagogical content using two different
algorithms: K-Nearest Neighbour as a conventional machine learning model and
Long short-term memory (LSTM) as an artificial recurrent neural network
architecture used in deep learning.
## 3 Methodology
This section presents the methodology used during the research and the
experimental part. Initially, a brief introduction to the dataset is given,
followed by an explanation of the architectures that are modelled to classify
pedagogical content.
Python is used for the whole experiment: the KNN model is implemented using
the built-in functions and modules of the scikit-learn library, whereas the
RNN model is implemented using the Keras library, which runs on top of
TensorFlow. In the following subsections, the dataset used in this experiment
is described in detail, followed by both models, KNN and LSTM.
### 3.1 Dataset
The process of collecting and reviewing data is not an easy task, and in most
cases requires a lot of research and finding relevant data that are used to
achieve the desired results. The dataset [20] used in this paper for the
experimental purposes is used in [19] and it is modelled from scratch. This
dataset consists of a total of 12,032 videos collected from the Coursera
platform from more than 200 different courses. Coursera categorizes courses
into a 2-level hierarchical structure from general level to fine-grained
level. The general level consists of 8 categories, the specific level of 40
categories, and the course level of a total of 200 categories. In addition to
these three levels that made up the course, a video lesson transcript was also
included.
Figure 6: Top five most frequent categories for all three levels. Figure 7:
Top five least frequent categories for all three levels.
Figure 6 presents the top five most frequent categories, while Figure 7
presents the top five least frequent categories by the number of transcripts
that these categories contained. In order for the data to be in the correct
format for further analysis and modeling process, the data needs to go under
pre-process phase, by preparing, cleaning, and transformed in a desired shape.
The data preparation and preprocessing part depends on the given dataset, and
in our case the first step after the review is to remove the noisy data (such
as ’[MUSIC]’ which are recorded very frequently in all transcript records).
Following the steps depiced in Figure 3 the entire textual content of the
transcript is converted into lowercase, and removed the non-letters
characters. Further, the stopwords are removed from the transcript where it
helped us reduce the derived words to their particular word stem or root word
as explained in 2.1. The dataset is transformed finally into the desired shape
after finishing the lemmatization process, and it is ready to be used for both
architectures that we have modelled, KNN and LSTM described further in the
following subsections.
### 3.2 K-Nearest Neighbour model
K-Nearest Neighbors (KNN) is one of the techniques that is used in both
classification and regression. It is known that KNN builds no model other than
storing the entire dataset, so there is no explicit learning phase.
Predictions for a new data point are made by searching the entire dataset for
the K most similar instances (the so-called neighbors) and summarizing the
output variable of those K instances [21].
There are a number of steps that the KNN algorithm goes through, such as:
1. 1.
Set K to the chosen number of neighbors.
2. 2.
Calculate the distance between the available raw data examples.
3. 3.
Sort the calculated distances.
4. 4.
Get the labels of top K entries.
5. 5.
Generate prediction results for the test case.
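The five steps above can be sketched in plain NumPy; the two-dimensional feature vectors and labels are purely illustrative (the experiment itself works on tf-idf vectors):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):       # step 1: set K
    dists = np.linalg.norm(X_train - x, axis=1)  # step 2: distances
    order = np.argsort(dists)                    # step 3: sort distances
    top_labels = [y_train[i] for i in order[:k]] # step 4: top-K labels
    # step 5: majority vote over the K neighbour labels
    return max(set(top_labels), key=top_labels.count)

# Toy training data: two clusters with illustrative labels
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = ["math", "math", "arts", "arts"]
pred = knn_predict(X, y, np.array([0.2, 0.0]))   # -> "math"
```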
In this experiment, immediately after cleaning and preparing the data for the
KNN model, a dictionary of features is built that transforms documents into
feature vectors, converting the transcripts into a matrix of token counts
using the CountVectorizer method. Then, the count matrix is transformed into a
normalized tf-idf representation using the TfidfTransformer method. After
this, the number of neighbors is identified, which in our case resulted in 7
neighbors. To train the classifier, the dataset is divided into two subsets:
80% for training and 20% for testing, where the latter subset is used to
predict the category of each input text record.
Figure 8: Implementation of KNN Classifier.
Figure 8 shows a screenshot of implementation of our KNN classifier using
Python technology and scikit-learn library, as mentioned in Section 3.
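Since Figure 8 is a screenshot, a minimal sketch of the described pipeline (CountVectorizer, then TfidfTransformer, then KNN) might look as follows; the toy transcripts, labels, and the choice of 3 neighbours are illustrative, whereas the experiment uses the Coursera transcripts and 7 neighbours:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

# Toy stand-in transcripts and category labels (illustrative only)
texts = ["linear algebra and matrices", "matrices and vectors",
         "poetry and prose", "prose and rhyme"]
labels = ["math", "math", "arts", "arts"]

clf = Pipeline([
    ("counts", CountVectorizer()),        # token-count matrix
    ("tfidf", TfidfTransformer()),        # normalized tf-idf weights
    ("knn", KNeighborsClassifier(n_neighbors=3)),  # paper uses 7
])
clf.fit(texts, labels)
pred = clf.predict(["vectors and matrices"])  # -> ["math"]
```

An 80/20 division as described above would be obtained with `sklearn.model_selection.train_test_split(texts, labels, test_size=0.2)` before fitting.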
### 3.3 Long short-term memory model
Recurrent Neural Networks (RNN) are types of artificial neural networks that
allow previous outputs to be used as inputs while having hidden states [22].
These algorithms are mostly used in fields such as: Natural Language
Processing (NLP), Speech Recognition, Robot Control, Machine Translation,
Music Composition, Grammar Learning, and many others. Typically, a feedforward
network maps one input to one output, whereas in recurrent networks the inputs
and outputs can vary in length, with different types of networks used for
different examples and applications [23].
Figure 9: Implementation of LSTM model.
Figure 9 shows the implementation of our LSTM model. To implement the RNN, we
used the LSTM architecture, which can remember values over arbitrary
intervals. First, a Sequential model is created as the input layer of our
network, followed by an Embedding layer that encodes the input text as
integers, so that each word is represented by a unique integer.
For this layer, we specified three required parameters with their respective
values:
* Maximum number of words - 50000 in our case.
* Embedding dimension - 100.
* Input length - the shape of X, which for us is 3002.
Dropout is then applied to the hidden and visible units between the layers of
the network with a rate of 0.2; the same value is used for the recurrent
dropout. This is followed by the LSTM layer and a Dense layer, whose first
parameter is the number of units, i.e. the dimensionality of the output space,
which in our case depends on the number of categories selected for
classification, and whose second parameter is the activation function, here
softmax. Finally, categorical_crossentropy is used as the loss function and
Adam as the optimizer of the network. To prevent underfitting or overfitting
and to select an appropriate number of training epochs, EarlyStopping is used
with ’val_loss’ as the monitored metric and a patience of 3 epochs.
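The architecture described above can be sketched in Keras as follows (the vocabulary size, embedding dimension, input length, dropout rates, loss, optimizer, and early stopping follow the text; the LSTM width of 100 units is our assumption, and the 8-class output corresponds to the general level):

```python
# Sketch of the described LSTM classifier (LSTM width is an assumption).
from tensorflow.keras import Input, Sequential
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import Dense, Embedding, LSTM

MAX_WORDS = 50000   # maximum number of words (vocabulary size)
EMBED_DIM = 100     # embedding dimension
INPUT_LEN = 3002    # input length (shape of X)
N_CLASSES = 8       # e.g. the 8 general-level categories

model = Sequential([
    Input(shape=(INPUT_LEN,)),
    Embedding(MAX_WORDS, EMBED_DIM),                # each word -> 100-d vector
    LSTM(100, dropout=0.2, recurrent_dropout=0.2),  # width 100 is illustrative
    Dense(N_CLASSES, activation="softmax"),         # one probability per class
])
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])

# Stop training once validation loss stops improving for 3 epochs.
early_stop = EarlyStopping(monitor="val_loss", patience=3)
# model.fit(X_train, y_train, validation_split=0.1, callbacks=[early_stop])
```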
## 4 Results
Table 1 shows the classification results of the conventional model using the
K-Nearest Neighbours algorithm. As shown in Table 1, the general level
achieves a very good result of 92.63% precision, the specific level 87.89%
precision, and the course level 78.59% precision. Analyzing the results for
all three levels, we notice that performance decreases from the upper
(general) level to the lower (course) level. In our case, the general level
consists of 8 sub-categories, the specific level of 40 sub-categories, and the
course level of 200 sub-categories; the number of sub-categories by which a
video is classified on the Coursera platform thus differs at each level.
Table 1: Classification results with K-Nearest Neighbours.
Category | Precision (%) | Recall (%) | F1 Score (%) | Accuracy (%)
---|---|---|---|---
General Level | 92.63 | 92.52 | 92.53 | 92.52
Specific Level | 87.89 | 87.58 | 87.49 | 87.58
Course Level | 78.59 | 76.73 | 76.11 | 76.73
Table 2 shows the classification results with a Recurrent Neural Network, more
specifically a Long Short-Term Memory (LSTM) architecture. Using the LSTM
classifier, the general level reaches 88.22% precision, the specific level
72.31%, and the course level 59.49%. With the LSTM architecture, the highest
accuracy is thus achieved at the general level, followed by the specific
level, while the lowest accuracy is achieved at the course level.
Table 2: Classification results with Recurrent Neural Networks.
Category | Precision (%) | Recall (%) | F1 Score (%) | Accuracy (%)
---|---|---|---|---
General Level | 88.22 | 87.71 | 87.68 | 87.71
Specific Level | 72.31 | 69.93 | 70.13 | 69.93
Course Level | 59.49 | 52.91 | 53.99 | 52.91
## 5 Conclusion and future work
This paper presented and discussed the classification results of the conducted
experiment for all three category levels (general, specific and course level)
using both architectures, KNN and LSTM. We can conclude that better results
are achieved for levels with a smaller number of categories than for levels
with a larger number: as the number of classes increased, the results
decreased. We therefore argue that the classification results are directly
affected by the number of categories each level contains. From the results
shown in Table 1 and Table 2, KNN reached 92.52% accuracy compared to 87.71%
for LSTM at the general level, 87.58% compared to 69.93% at the specific
level, and 76.73% compared to 52.91% at the course level. Several factors
could have affected these results. First, the quantity of data required by the
LSTM: a large number of categories increases the complexity of the problem and
thus requires more data to train the model. The results could also have been
affected by the high similarity of different transcripts. Many transcripts
belonging to different classes at the course level shared many similarities in
sentence context and keywords, so the model could not properly distinguish
which class a transcript belonged to. Nevertheless, the final results motivate
future work investigating recurrent neural networks further, for example by
applying hyperparameter tuning or by expanding the set of architectures used
for pedagogical content classification.
## References
* [1] Fabrizio Sebastiani. Machine learning in automated text categorization. ACM computing surveys (CSUR), 34(1):1–47, 2002.
* [2] Cicero Dos Santos and Maira Gatti. Deep convolutional neural networks for sentiment analysis of short texts. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 69–78, 2014.
* [3] Zenun Kastrati, Ali Shariq Imran, and Sule Yildirim Yayilgan. A general framework for text document classification using semcon and acvsr. In International Conference on Human Interface and the Management of Information, pages 310–319. Springer, 2015.
* [4] Arfinda Ilmania, Samuel Cahyawijaya, Ayu Purwarianti, et al. Aspect detection and sentiment classification using deep neural network for indonesian aspect-based sentiment analysis. In 2018 International Conference on Asian Language Processing (IALP), pages 62–67. IEEE, 2018.
* [5] Ali Shariq Imran, Sher Muhammad Daudpota, Zenun Kastrati, and Rakhi Batra. Cross-cultural polarity and emotion detection using sentiment analysis and deep learning on COVID-19 related tweets. IEEE Access, 8:181074–181090, 2020.
* [6] Zenun Kastrati, Ali Shariq Imran, and Arianit Kurti. Weakly supervised framework for aspect-based sentiment analysis on students’ reviews of MOOCs. IEEE Access, 8:106799–106810, 2020.
* [7] Alya Itani, Laurent Brisson, and Serge Garlatti. Understanding learner’s drop-out in MOOCs. In international conference on intelligent data engineering and automated learning, pages 233–244. Springer, 2018.
* [8] Hannes Max Hapke, Hobson Lane, and Cole Howard. Natural language processing in action, 2019.
* [9] Kamran Kowsari, Kiana Jafari Meimandi, Mojtaba Heidarysafa, Sanjana Mendu, Laura Barnes, and Donald Brown. Text classification algorithms: A survey. Information, 10(4):150, 2019.
* [10] Irfan Khan, Xianchao Zhang, Mobashar Rehman, and Rahman Ali. A literature survey and empirical study of meta-learning for classifier selection. IEEE Access, 8:10262–10281, 2020.
* [11] Jake Lever, Martin Krzywinski, and Naomi Altman. Erratum: Corrigendum: Classification evaluation. Nature Methods, 13(10):890–890, 2016.
* [12] Fisnik Dalipi, Sule Yayilgan, Ali Shariq Imran, and Zenun Kastrati. Towards understanding the MOOC trend: pedagogical challenges and business opportunities. In International Conference on Learning and Collaboration Technologies, pages 281–291. Springer, 2016.
* [13] Krenare Pireva, Ali Shariq Imran, and Fisnik Dalipi. User behaviour analysis on LMS and MOOC. In 2015 IEEE Conference on e-Learning, e-Management and e-Services (IC3e), pages 21–26. IEEE, 2015.
* [14] Fisnik Dalipi, Ali Shariq Imran, and Zenun Kastrati. MOOC dropout prediction using machine learning techniques: Review and research challenges. In 2018 IEEE Global Engineering Education Conference (EDUCON), pages 1007–1014. IEEE, 2018.
* [15] Krenare Pireva and Petros Kefalas. A recommender system based on hierarchical clustering for cloud e-learning. In International Symposium on Intelligent and Distributed Computing, pages 235–245. Springer, 2017.
* [16] Ali Shariq Imran and Faouzi Alaya Cheikh. Blackboard content classification for lecture videos. In 2011 18th IEEE International Conference on Image Processing, pages 2989–2992. IEEE, 2011.
* [17] Ali Shariq Imran and Zenun Kastrati. Pedagogical document classification and organization using domain ontology. In International Conference on Learning and Collaboration Technologies, pages 499–509. Springer, 2016.
* [18] Houssem Chatbri, Kevin McGuinness, Suzanne Little, Jiang Zhou, Keisuke Kameyama, Paul Kwan, and Noel E O’Connor. Automatic mooc video classification using transcript features and convolutional neural networks. In Proceedings of the 2017 ACM Workshop on Multimedia-based Educational and Knowledge Technologies for Personalized and Social Online Training, pages 21–26, 2017.
* [19] Zenun Kastrati, Ali Shariq Imran, and Arianit Kurti. Integrating word embeddings and document topics with deep learning in a video classification framework. Pattern Recognition Letters, 128:85–92, 2019.
* [20] Zenun Kastrati, Arianit Kurti, and Ali Shariq Imran. Wet: Word embedding-topic distribution vectors for mooc video lectures dataset. Data in brief, 28:105090, 2020.
* [21] Jason Brownlee. Master Machine Learning Algorithms: discover how they work and implement them from scratch. Machine Learning Mastery, 2016.
* [22] Afshine Amidi and Shervine Amidi. Vip cheatsheet: Recurrent neural networks, 2018.
* [23] Larry Medsker and Lakhmi C Jain. Recurrent neural networks: design and applications. CRC press, 1999.
# Impact of the protein composition on the structure and viscoelasticity of
polymer-like gluten gels
Laurence Ramos1, Amélie Banc1, Ameur Louhichi1, Justine Pincemaille1,2,
Jacques Jestin3, Zhendong Fu4, Marie-Sousai Appavou4, Paul Menut2,5, Marie-
Hélène Morel2 1Laboratoire Charles Coulomb (L2C), Univ. Montpellier, CNRS,
Montpellier, France 2Ingénierie des Agro-polymères et Technologies Emergentes
(IATE), Univ. Montpellier, CIRAD, INRAE, Montpellier SupAgro, Montpellier,
France 3Laboratoire Léon Brillouin, UMR 12, Université Paris-Saclay,
IRAMIS/CEA Saclay, 91191 Gif-sur-Yvette Cedex, France 4Forschungszentrum
Jülich GmbH JCNS am MLZ Lichtenbergstr. 1, 85748 Garching, Germany
5Université Paris-Saclay, INRAE, AgroParisTech, UMR SayFood, 91300 Massy,
France<EMAIL_ADDRESS>
###### Abstract
We investigate the structure of gluten polymer-like gels in a binary mixture
of water/ethanol, $50/50$ v/v, a good solvent for gluten proteins. Gluten
comprises two main families of proteins, monomeric gliadins and polymeric
glutenins. In the semi-dilute regime, scattering experiments highlight two
classes of behavior, akin to standard polymer solution and polymer gel,
depending on the protein composition. We demonstrate that these two classes
are encoded in the structural features of the proteins in very dilute
solution, and are correlated with the presence of protein assemblies of
typical size tens of nanometers. The assemblies only exist when the protein
mixture is sufficiently enriched in glutenins. They are found to be directly
associated with the presence in the gel of domains enriched in non-exchangeable
H-bonds and of size comparable to that of the protein assemblies. The domains
are probed in neutron scattering experiments thanks to their unique contrast.
We show that the sample visco-elasticity is also directly correlated to the
quantity of domains enriched in H-bonds, showing the key role of H-bonds in
controlling the visco-elasticity of polymer-like gluten gels.
* January 18, 2021
## 1 Introduction
Polymer materials may exhibit a large variety of unique properties, ranging
from high water content, softness, and flexibility for hydrogels to resilience
and temperature sensitivity for elastomers. The specific properties of polymer
materials entail specific uses in different contexts including biomedical
applications for hydrogels made of natural or biodegradable synthetic polymers
[1], construction materials, and sensors for elastomers [2]. Nature also
abounds in polymer gels and elastomers with unique properties dedicated to
specific functions, as mucus [3], synovial fluid [4], the gelatinous layer of
tension wood [5], seed mucilage hydrogels [6] and natural rubber [7]. In all
these examples, the complex multi-component composition and the many types of
interactions at play are intimately connected to drive the hierarchical
structures of the polymer-like gel materials and control their mechanical
properties.
This complexity and this intricacy also hold for gluten. Gluten, the insoluble
protein of wheat, forms in its water hydrated state a highly cohesive and
viscoelastic mass (comprising typically $2$ g of water per g of protein), akin
to an elastomer [8]. Gluten visco-elasticity is crucial in food science as it
allows wheat flour to be baked into bread and biscuit. Gluten proteins belong
to the broad family of prolamins; they are proteins rich in proline and
glutamine amino-acids, which may confer texture to food materials because of
the formation of protein strands under extensional flow [9]. Gluten is a
complex mixture of several types of protein, which can be divided into two
main classes, monomeric gliadins and polymeric glutenins. In glutenins,
glutenin sub-units are linked together by disulfide bonds yielding polymers
with molar mass up to several millions g/mol [10, 11]. Gluten proteins belong
to the wider class of intrinsically disordered proteins that are currently
extensively investigated because of their crucial role in many biological
processes. Gluten proteins are certainly the most documented elastomeric plant
proteins [12]. However, the study of gluten is very difficult because gluten
proteins are broadly polydisperse and essentially insoluble in water. Hence,
despite several decades of investigation, a full understanding of the
structure of gluten in relation to its viscoelastic properties is still
lacking. The active debate about gluten being regarded preferentially as a
particulate gel or as a polymer gel is not fully closed (see [13] and the
references therein), although we strongly believe that the polymeric nature
plays a major role, as inferred especially from our recent investigations [14,
15, 16, 17, 18]. Previous works have also pointed out the important role of
disulfide bonds and hydrogen bonds in the structuration of gluten [19, 20,
21].
In order to shed light on the structure of gluten and on the relationship
between structure and viscoelasticity, our strategy is to study model systems,
produced by dispersing dedicated protein extracts in a good solvent, a mixture
of water and ethanol, allowing an efficient solubilization of the proteins.
Thanks to this approach, homogeneous samples with a wide range of protein
concentration, spanning several orders of magnitude, can be studied. Note that
this could not be performed with pure water as a solvent: water being a bad
solvent for gluten, homogeneous samples can only be produced at very high
protein concentration. In our case, thanks to the choice of water/ethanol good
solvent, unprecedented structural and mechanical data have been obtained. In
particular, we have shown, for a given protein extract comprising equal
amounts of gliadin and glutenin, that most structural and viscoelastic
properties of protein dispersions can be qualitatively and quantitatively
rationalized in the framework of polymer gels [14, 15]. More recently, we have
developed a protocol to obtain from industrial gluten model protein extracts
with controlled and tunable composition in gliadin and glutenin [22]. In this
paper, we leverage on this recent advancement and investigate the structure
and visco-elasticity of suspensions of gluten proteins of various composition.
This study allows us to demonstrate a correlation between the presence of
large protein assemblies in the very dilute regime and the presence of domains
enriched in H-bonds, of size comparable to the assemblies, in the semi-dilute
regime, as well as a direct link between the hierarchical structure of the
proteins in the dilute regime and the sample visco-elasticity.
The paper is organized as follows. We first describe the materials and the
different experimental methods. We then present and discuss the experimental
results regarding the structure and visco-elasticity of the samples as probed
thanks to a combination of complementary scattering and rheology techniques.
We finally conclude by emphasizing the crucial role of protein assemblies in
the materials.
## 2 Materials and methods
### 2.1 Materials and sample preparation
Gluten protein extracts are prepared from an industrial gluten (courtesy of
TEREOS-SYRAL, Aalst, Belgium), following a protocol described elsewhere [17,
22]. In brief, gluten is first dispersed at room temperature in a
water/ethanol $50/50$ v/v solvent. Only the proteins well dispersed in the
solvent are kept (the insoluble part is discarded). The dispersion is then
quenched to a low temperature $T_{q}$, leading to a liquid-liquid phase
separation into a light phase and a heavy phase. In the following, both the
light and heavy phases are used after freeze-drying. The composition of the
freeze-dried protein extracts are characterized by chromatography in a
denaturating solvent, in which weak intra- and intermolecular interactions are
suppressed [17]. Overall, the light phase is enriched in gliadin, the
monomeric proteins, whose molar mass, $M_{\rm{w}}$, lies in the range
$(25-65)\times 10^{3}$ g/mol. Conversely, the heavy phase is enriched in
glutenin, the polymeric proteins, with $M_{\rm{w}}$ from $90\times 10^{3}$
g/mol to several $10^{6}$ g/mol. Interestingly, the exact composition of the
protein extracts is varied by changing $T_{q}$, allowing a fine tuning of the
protein composition. In our experiments, $T_{q}$ ranges between $-0.8$ and
$12.5^{\circ}$C. The composition of the protein extracts is characterized by
its mass fraction of glutenin as
$GLU=\frac{m_{\rm{Glu}}}{m_{\rm{Glu}}+m_{\rm{Gli}}}$ with $m_{\rm{Glu}}$,
respectively $m_{\rm{Gli}}$, the mass of glutenin, respectively gliadin, in
the extract. We obtain protein extracts with $GLU$ in the range $(1-66)$ %.
Samples are prepared by dispersing the freeze-dried protein extracts in the
appropriate volume of solvent, a water/ethanol $50/50$ v/v mixture.
Hydrogenated and deuterated solvents are used. Deuterated solvent comprises OD
ethanol (C2H5OD) and heavy water (D2O). The protein concentration $C$ ranges
between $4$ and $400$ mg/mL.
### 2.2 Methods
#### 2.2.1 Small-angle X-Ray scattering
Small-angle X-ray scattering experiments are performed in house and at the
European Synchrotron Radiation Facility, ESRF (Grenoble, France). The in-
house set-up comprises a high brightness X-ray tube with low power and an
aspheric multilayer optic (GeniX 3D from Xenocs) delivering an ultra low
divergent beam ($0.5$ mrad); a two-dimensional Schneider 2D image plate
detector prototype is used to collect the scattering intensity. The sample-
detector distance is set at $1.9$ m. Synchrotron experiments are conducted at
the ID02 beamline of ESRF [23], using three different sample-detector
distances ($d=1.5$, $7$ and $30$ m) in combination with a wavelength $0.0995$
nm, yielding scattering vectors $q$ in the range $(2\times 10^{-3}-7)$ nm-1.
In all experiments, the samples (prepared with hydrogenated or deuterated
solvents and with protein concentration in the range ($10-400$ mg/mL)) are held
in glass capillaries of diameter $1.5$ mm. Standard non-linear fitting
procedures are used to analyze the data.
#### 2.2.2 Small-angle and very small-angle neutron scattering
Several facilities are used for small-angle neutron scattering experiments
(SANS) and very small-angle neutron scattering experiments (VSANS). SANS
measurements at Laboratoire Léon Brillouin (Saclay, France) are performed on
instrument PA20 using three configurations with the following wavelength
$\lambda$ and sample-detector distance $d$ ($\lambda=0.6$ nm and $d=1.5$ m;
$\lambda=0.6$ nm and $d=8$ m; $\lambda=1.5$ nm and $d=19$ m) yielding
scattering vector $q$ in the range $(10^{-2}-2)$ nm-1. VSANS and SANS are also
conducted on two instruments operated by JCNS at the Heinz Maier-Leibnitz
Zentrum (MLZ, Garching Germany). SANS experiments are performed on KWS2 [24]
using three configurations with $\lambda=0.7$ nm and $d=2$ m; $\lambda=0.7$ nm
and $d=8$ m; $\lambda=1$ nm and $d=20$ m, yielding scattering vector $q$ in
the range $(2\times 10^{-2}-3)$ nm-1. VSANS experiments are performed at KWS3
[25] with $\lambda=1.28$ nm and $d=10$ m yielding $q$ in the range ($2.2\times
10^{-3}-2\times 10^{-2})$ nm-1 and at KWS2 with $\lambda=0.7$ nm and $d=20$ m
using the focusing mode with MgF2 lenses [26], yielding $q$ in the range
$(3\times 10^{-3}-3\times 10^{-2})$ nm-1. In all cases, samples (prepared with
hydrogenated or deuterated solvents and with protein concentration in the
range ($210-302$ mg/mL)) are held in quartz Hellma cells with thickness of $1$
mm or $2$ mm. Standard data correction and calibration are performed to
analyze the data. Data are corrected for empty cell scattering, solvent
scattering, transmission, and detector sensitivity. Absolute scale
transformation is performed using standard procedures [27]. Standard non-
linear fitting procedures are used to analyze the scattering profiles.
#### 2.2.3 Rheological measurements
Linear viscoelastic measurements are performed using an Anton Paar MC302
stress-controlled rheometer. We use cone and plate geometries, with different
diameters ($8$, $25$ or $50$ mm), depending on the sample visco-elasticity.
The whole geometry is immersed in a bath of silicon oil to avoid solvent
evaporation. After loading the sample (with a spatula for gels or simply by
pouring liquid samples), the gap between the cone and the plate is set to its
prescribed value ($100\mu$m, for the $50$ mm diameter plate, $50\mu$m for
smaller plates). We let the samples equilibrate until the normal force acting
on the cone relaxes to zero before starting the measurements. The
frequency-dependent storage, $G^{\prime}$, and loss, $G^{\prime\prime}$, moduli are measured in the
linear regime, with a typical strain amplitude of $1$ %. Measurements are
performed with samples prepared with deuterated solvents with a fixed protein
concentration $C=237$ mg/mL and different $GLU$ content. The temperature is
fixed at $25^{\circ}$C.
## 3 Results and discussion
### 3.1 Structural features in very dilute regime
Figure 1 reports the composition of the species present in very dilute
suspensions ($C=4$ mg/mL) of the different protein extracts as determined
thanks to asymmetrical flow field-flow fractionation [17]. Three classes of
objects, monomeric gliadins, glutenin polymers and protein assemblies, are
identified depending on their average size, $<R>$, and molar mass
$<M_{\rm{w}}>$. For gliadins $<R>=7$ nm, $<M_{\rm{w}}>=8\times 10^{4}$ g/mol,
for glutenin polymers $<R>=20$ nm and $<M_{\rm{w}}>=4\times 10^{5}$ g/mol and
for protein assemblies $<R>=85$ nm, $<M_{\rm{w}}>=3\times 10^{7}$ g/mol.
(Note that for gliadins and polymers, $<R>$ refers to a mean hydrodynamic
radius, whereas it refers to a mean radius of gyration for the assemblies.) We
find that the proportion of the different species evolves with the protein
composition. As anticipated, at small $GLU$ ($GLU<$ ca $30$ %), the proportion
of polymers increases with $GLU$. Interestingly, above $30$ %, the proportion
of polymer is roughly constant and one observes the emergence of large protein
assemblies as a third species, whose amount increases with $GLU$.
Figure 1: Mass fraction of monomers, polymers and assemblies (see text) in
dilute suspensions of the different protein extracts. Adapted from [17]. The
relative error, as evaluated from 3 replicated measurement with the extract
with $GLU=47$ % is less than $5$ %.
### 3.2 Structural features in the semi-dilute regime
#### 3.2.1 Spatial distribution of the gluten proteins
X-ray scattering is sensitive to the electronic densities of the species.
Hence in small-angle X-ray scattering experiments, the contrast originates
from the difference in the scattering densities between the gluten proteins
and the solvent. The experiments therefore probe the spatial distribution of the
proteins in the solvent. We show (inset Fig. 2) that the scattering profile of
a low concentration sample depleted in glutenin ($C=10$ mg/mL, $GLU=13$ %) is
typical of a solution of polymer coils in the dilute regime. Note that this is
in accordance with the fact that gluten proteins are intrinsically disordered
proteins [28, 29, 30]. At intermediate scattering vectors ($0.5$
nm${}^{-1}<q<3$ nm-1), the scattered intensity, $I$, varies as $q^{-p}$, with
$p=2$. This power law scaling is characteristic of Gaussian chains in a theta-
solvent [31]. At smaller length scale, the transition from the $q^{-2}$
scaling to a $q^{-1}$ scaling, at a scattering vector $q_{c}$, allows the
determination of the persistence length $l_{p}$ of the chains, following
$q_{c}\times l_{p}=1.9$ [32]. We measure $q_{c}$ of the order of $2.8$ nm-1
yielding $l_{p}$ of the order of $0.7$ nm. This small value indicates that the
polypeptide chains of the gluten proteins are very flexible, which is typical
for intrinsically disordered proteins [28, 33]. On the other hand, for length
scales larger than the radius of gyration of the coils, at small $q$, a
plateau of the scattered intensity is observed in a log-log representation. By
modeling the transition from the plateau to the power law decrease with a
Lorentzian function, as predicted by an Ornstein-Zernike (OZ) model,
$I(q)=\frac{A}{1+(q\xi)^{2}}$, one can evaluate the correlation length $\xi$
[31]. This length is equal to the radius of gyration of the scattering objects
in dilute regime, and is expected to decrease with concentration in the semi-
dilute regime. The fit with OZ (line in the inset Fig. 2) gives $\xi=3.1$ nm,
a numerical value consistent with the size of gliadin [17, 34]. Note that data
are equally well fitted using a Debye function [35], which is the form factor
for Gaussian chains, yielding a comparable size ($4.6$ nm).
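As a hedged illustration of this fitting step, the OZ form $I(q)=A/(1+(q\xi)^{2})$ can be fitted with standard non-linear least squares (the synthetic profile below stands in for a measured one; the amplitude, noise level and the target $\xi=3.1$ nm are illustrative values):

```python
# Sketch: fitting the Ornstein-Zernike form to a scattering profile.
import numpy as np
from scipy.optimize import curve_fit

def ornstein_zernike(q, A, xi):
    """I(q) = A / (1 + (q*xi)^2), the OZ (Lorentzian) model."""
    return A / (1.0 + (q * xi) ** 2)

q = np.linspace(0.05, 3.0, 200)              # scattering vector, nm^-1
I_true = ornstein_zernike(q, 1.0, 3.1)       # xi comparable to gliadin size
rng = np.random.default_rng(0)
I_obs = I_true * (1 + 0.02 * rng.standard_normal(q.size))  # 2% noise

(A_fit, xi_fit), _ = curve_fit(ornstein_zernike, q, I_obs, p0=[1.0, 1.0])
print(round(xi_fit, 1))  # recovered correlation length, close to 3.1 nm
```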
Figure 2: Correlation length, $\xi$, as a function of the protein
concentration for a protein extract with $GLU=13$ %. $\xi$ is extracted from a
fit with a Lorentzian function (see text). The inset shows the scattered
intensity measured at room temperature as a function of the scattering vector
for a sample with $C=10$ mg/mL and $GLU=13$ %. Black symbols are experimental
data points and the green solid line is the best fit. Measurements are
performed at room temperature. Figure 3: Scattering profiles measured by
small-angle X-ray scattering, for samples prepared with a hydrogenated solvent
at various protein concentrations, as indicated in the legends, for protein
extracts with (a) $GLU=13$ %, and (b) $GLU=45$ %. Measurements are performed
at room temperature. Data are acquired at ESRF facility.
Data acquired at different protein concentrations (Fig. 3a) all superpose at
large scattering vectors, when the scattered intensity, $I$, is normalized by
$C$, indicating a unique structure at small length scales, independent of the
protein concentration, as expected. All data also exhibit a plateau at small
$q$ but whose height, normalized by $C$ decreases as $C$ increases. This
indicates a higher compressibility of the samples as $C$ increases, due to the
interpenetration of the polymer coils, which signals the transition from dilute
to semi-dilute regimes, at $C^{*}$. Above the overlap concentration $C^{*}$,
the correlation length $\xi$ decreases due to coil interpenetration. We find
that $\xi$, as obtained from fits of the scattering profiles with the OZ
model, decreases from about $3$ nm down to $1$ nm, as $C$ increases from $10$
to $400$ mg/mL (Fig. 2). The overlap concentration $C^{*}$ is defined as the
concentration from which $\xi$ decreases with increasing $C$. From purely
geometrical arguments, the overlap concentration reads
$C^{*}=\frac{3M_{\rm{w}}}{4\pi R_{\rm{G}}^{3}N_{\rm{A}}}$, with $R_{\rm{G}}$ the
radius of gyration of the coils, $M_{\rm{w}}$ their molar mass and $N_{\rm{A}}$ the
Avogadro number. Experimentally, one measures $C^{*}\simeq 100$ mg/mL. Taking
$M_{w}=40000$ g/mol as average molar mass for gliadins, one determines a
characteristic size of the order of $5.4$ nm, a numerical value in agreement
with the values directly measured in the dilute regime. Finally, we note that
the experimental evolution of $\xi$ with $C>C^{*}$ is consistent with the
theoretically expected $C^{-1}$ scaling for a polymer in a theta-solvent [36],
in full agreement with the $p=2$ power law exponent found for the scattered
intensity at intermediate $q$-range.
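The quoted characteristic size follows from inverting the overlap-concentration formula, $R_{\rm G}=\left(3M_{\rm w}/4\pi C^{*}N_{\rm A}\right)^{1/3}$; a quick numerical check with the values given in the text:

```python
# Numerical check of R_G from C* = 3 M_w / (4 pi R_G^3 N_A).
import math

N_A = 6.022e23   # Avogadro number, 1/mol
M_w = 40000.0    # average gliadin molar mass, g/mol
C_star = 0.1     # overlap concentration, g/mL (i.e. 100 mg/mL)

R_G_cm = (3 * M_w / (4 * math.pi * C_star * N_A)) ** (1 / 3)
R_G_nm = R_G_cm * 1e7   # 1 cm = 1e7 nm
print(round(R_G_nm, 1))  # → 5.4
```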
Overall, glutenin depleted samples exhibit the structural features of polymer
coils in theta-solvent conditions. By contrast, the structural features of the
samples enriched in glutenin polymer are more complex. We show in Figure 3b
the scattering profiles, normalized by the protein concentration, for samples
prepared with a gluten extract with $GLU=45$ %. Data overlap at large $q$ and
display the scattering features expected for polymer chains in theta-solvent,
in agreement with what is measured for glutenin depleted samples (Fig. 3a). At
larger length scale, i.e. at small $q$, on the other hand, the scattered
intensity is found to vary as a power law with the scattering vector: $I\sim
q^{-d_{\rm{f}}}$. This power law indicates large length scale heterogeneities
in the spatial organization of the chains, which are characterized by a
fractal dimension $d_{\rm{f}}$. The fractal dimension is the same at all
concentrations but the amplitude of the power law slightly varies with $C$.
The fractal structure is measured up to the smallest accessible $q$
($q_{\rm{min}}=10^{-2}$ nm-1), hence up to length scales of the order of
$2\pi/q_{\rm{min}}\approx 600$ nm. This length scale is much larger than the
typical size of the protein assemblies ($<R>=85$ nm, see above). Considering
the presence of much larger polymer-like objects for the samples prepared with
$GLU=45$ % than for the samples prepared with $GLU=13$ %, we expect for the
samples with $GLU=45$ % an overlap concentration smaller than the one for the
samples with $GLU=13$ % ($C^{*}\simeq 100$ mg/mL). Hence, it is reasonable to
state that data shown in Figure 3b very likely correspond to the semi-dilute
regime.
Analyses similar to those described above are performed for samples
prepared with different protein extracts, with $GLU$ ranging from $4$ to $66$
%. The evolutions with the glutenin content of several structural parameters,
the persistence length, the power law exponent at intermediate $q$, and the
fractal dimension at smaller $q$, are plotted in Figure 4. Results show a
polymer-like behavior of all extracts, independently of their composition,
with the same persistence length ($l_{p}=0.74\pm 0.1$ nm) and the same behavior at
small length scale ($p=2.0\pm 0.2$). Interestingly, however, two regimes are
clearly evidenced by the onset of a power law evolution of the scattered
intensity at small $q$, characterized by a fractal dimension of the order of
$2$. This onset coincides with the threshold for the presence of protein
assemblies detected in very dilute regime ($GLU$ larger than $30$ %
typically).
Figure 4: Persistence length $l_{p}$ (a), exponent $p$ of the scattered intensity
with wave vector $q$ at large $q$ (b), and fractal dimension measured at
small $q$ (c), as a function of the glutenin fraction of the protein extract.
Open symbols correspond to data obtained with slightly different set-ups and
protocols [14], which may explain the differences, in particular for the
parameter $p$. The dotted lines in (a, b) correspond to the averages of $l_{p}$
(a) and $p$ (b).
#### 3.2.2 Indirect probing of H-bonds between proteins
Figure 5: Scattering profiles measured by (a) SAXS and VSANS/SANS, for a
sample with $GLU=66$ % prepared in a hydrogenated or deuterated solvent, (b)
SANS and VSANS for samples prepared in deuterated solvent with different
protein extracts as indicated in the legend. The symbols are data points and
the lines are best fits using a Debye-Bueche model (see text). In (a, b) the
protein concentration is $C=237$ mg/mL. In (a), because of different contrast
in SAXS and VSANS/SANS, data are shifted vertically to allow an overlap of the
scattered intensity at large $q$. Measurements are performed at room
temperature. SAXS data are acquired at ESRF facility and VSANS/SANS data are
acquired at MLZ facility.
Figure 5a summarizes the features of the scattering profiles, as measured by
small-angle X-ray scattering (SAXS), and by very small-angle and small-angle
neutron scattering (VSANS and SANS), for samples prepared in hydrogenated and
deuterated solvents. Data are only displayed for fixed protein concentration
($C=237$ mg/mL) and composition ($GLU=66$ %) but similar results are obtained
for other $GLU$ (data not shown). As mentioned above, in SAXS, one probes the
spatial distribution of the protein chains in the solvent. A neutron
scattering experiment by contrast is sensitive to the scattering length
densities of the different species, and the main contrast probed in the
experiment changes depending on whether the solvent is hydrogenated or
deuterated. In the case of a hydrogenated solvent, the main contrast is the
one between proteins and solvent. We find that the scattering profiles
measured in SAXS (for both solvents) and in SANS using a hydrogenated solvent
nicely overlap over the whole range of scattering vectors. The perfect overlap
of the SAXS data for hydrogenated and deuterated solvents in the whole
$q$-range demonstrates that replacing H by D in the solvent molecules does not
change the spatial organization of the proteins, for length scales ranging
from $\sim 1$ nm to $\sim 1$ $\mu$m, suggesting identical interactions at
play. In the case of neutron scattering measurements with a deuterated
solvent, the contrast mainly originates from the differences between the
scattering length density of H and D. In that case, we observe that data at
large $q$ ($q>0.2$ nm$^{-1}$) also overlap with the other scattering profiles,
showing a unique structure of the protein chains at small length scale. In
sharp contrast, however, drastically different scattering profiles are
measured by VSANS and SANS for a deuterated solvent in the low $q$ region:
instead of the $q^{-2}$ scaling, due to the large-scale fractal organization of
heterogeneities in the spatial organization of the proteins, a $q^{-4}$ scaling
followed by a pseudo-plateau at very small $q$ is measured. As already
discussed previously for a sample with $GLU=52$ % [37], the striking
difference between the scattering profiles originates from the heterogeneous
exchange between the deuterium atoms comprised in the solvent and the labile
hydrogen atoms of the protein chains. In certain regions of the sample, strong
and/or multiple H-bonds between proteins prevent the standard D/H exchange
between solvent and proteins. In these domains, one expects an enrichment in H
as opposed to other parts of the samples. Neutron scattering experiments are
sensitive to the H/D contrast, hence are probing the H-rich domains, which are
domains enriched in H-bonds between proteins. The $q^{-4}$ scaling indicates
well defined H-rich domains with sharp interfaces. The transition from this
scaling towards a plateau at smaller $q$ allows an evaluation of the
characteristic size of these domains. More quantitatively, the scattering
profiles can be fitted with a Debye-Bueche (DB) model, conventionally used to
describe micro-phase separated solids with sharp interfaces [38]:
$I=\frac{I_{0}}{[1+(q\Xi)^{2}]^{2}}$. Here $I_{0}$ is the plateau value of the
scattering intensity at low $q$ and $\Xi$ is the characteristic size of the
phase-separated domains. We find that the DB model accounts well for the
experimental data, as exemplified by the fits of some selected data (Fig. 5b).
We find moreover that the characteristic size of the domains enriched in
H-bonds between proteins is constant and independent of the protein
composition, $\Xi=(67\pm 2)$ nm. Interestingly, the size $\Xi$ is comparable
to the size of the protein assemblies measured in very dilute solution
(average size $85$ nm).
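A DB fit of the low-$q$ neutron data can be sketched as follows with scipy's `curve_fit`; the synthetic parameters below are chosen to mimic the reported $\Xi=67$ nm, and are not the actual measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def debye_bueche(q, I0, Xi):
    """Debye-Bueche intensity for micro-phase separated domains with
    sharp interfaces: I(q) = I0 / (1 + (q*Xi)**2)**2."""
    return I0 / (1.0 + (q * Xi) ** 2) ** 2

# Synthetic, noise-free VSANS/SANS-like data with I0 = 50 and Xi = 67 nm.
q = np.logspace(-3, -1, 60)        # nm^-1
I = debye_bueche(q, 50.0, 67.0)
popt, _ = curve_fit(debye_bueche, q, I, p0=[10.0, 30.0])
I0_fit, Xi_fit = popt              # recovers the input I0 and Xi
```

On measured profiles, the fit would be restricted to the low-$q$ region where the $q^{-4}$ decay and the pseudo-plateau are observed.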
Figure 6: Scattered intensity at very low $q$ as measured in VSANS normalized
by the protein concentration as a function of the amount of assemblies in the
samples. Full and empty symbols correspond to measurements performed with
protein extracts as obtained with the protocol described in the text. Gluten
concentration is $C=237$ mg/mL for full symbols (as in Fig. 5b), for the
cross-filled symbol (resp. dot-filled), $C=210$ mg/mL (resp. $302$ mg/mL) and
$GLU=56$ % (resp. $44$ %). Half-filled symbols correspond to measurements
performed with mixtures of two extracts with $GLU=13$ % and $GLU=66$ %.
To check more quantitatively the link between the domains enriched in H-bonds
between proteins, measured in semi-dilute regime, and the presence of protein
assemblies as inferred from measurements in very dilute regime, we plot in
Figure 6 $I_{0}/C$, with $I_{0}$ the value of the low $q$ plateau of the
scattered intensity, as a function of the % of assemblies in the sample. The %
of assemblies is evaluated in dilute samples using asymmetrical flow field-
flow fractionation coupled to a differential refractive index detection. We
measure that $I_{0}/C$ varies roughly linearly with the % of assemblies in the
samples. Note that because the size of the domains is measured to be constant
and independent of the composition of the protein extract, $I_{0}$ is expected
to be directly proportional to the number of domains enriched in H-bonds between
proteins per unit volume in the sample (assuming a constant H/D contrast and a
constant composition of the domains). Hence Figure 6 suggests a direct
proportionality between the number of protein assemblies and the
number of H-bond-rich domains. In Figure 6, full symbols correspond to
measurements performed with different protein extracts as obtained with the
protocol described above with different quenching temperature $T_{q}$, and
half-filled symbols correspond to VSANS measurements performed with mixtures
of two extracts with $GLU=13$ % and $GLU=66$ %. The reasonable collapse of all
data onto a single curve indicates that a simple dilution law holds. It
moreover suggests that the protein assemblies are stable whatever their
environment (quantity of solvent and presence of gliadins).
To better assess the stability of the regions enriched in H-bonds between
proteins, we investigate the evolution of the scattering profiles with
temperature. Measurements acquired at different temperatures, from $8$ to
$35^{\circ}$C, in SAXS and SANS and for hydrogenated and deuterated solvents,
are shown in Figure 7 for a sample with $C=237$ mg/mL and $GLU=66$ %. As
temperature decreases, the onset of a liquid-liquid phase-separation is
evidenced in SAXS by a transition from a $q^{-2}$ to a $q^{-4}$ power law
dependence at small $q$. As anticipated from previous experiments in a fully
hydrogenated solvent [39], the liquid-liquid phase separation is detected,
both by SAXS and SANS, for samples prepared with a hydrogenated solvent
(H-solvent). We find that a liquid-liquid phase separation also takes place
for a sample prepared in a deuterated solvent (D-solvent). For a sample
prepared in a H-solvent, slightly different temperatures are determined with
the two techniques (which might be explained by the different set-ups used).
Using the same SAXS apparatus, we measure a significantly higher transition
temperature in a D-solvent (about $18^{\circ}$C) than in an H-solvent (about
$14^{\circ}$C), consistent with previous measurements by differential scanning calorimetry
for other samples [18]. This is in line with differences in the strength of
hydrogen bonds for H and for D and hints at a role of hydrogen bonding as the
driving force for liquid-liquid phase separation. Interestingly, we do not
find any modification with temperature of the SANS pattern of a sample
prepared in a D-solvent (Fig. 7d). Hence, the number and size of the domains
enriched in H-bonds between proteins are not affected by the liquid-liquid
phase separation. We believe these domains get concentrated in the protein-rich
phase, which is enriched in glutenins [22], in agreement also with infrared data
showing that the heavy phase is enriched in H (and depleted in D) as compared
to the light phase [39]. In addition, the fact that these domains are not
perturbed when the temperature varies in the one-phase region (in the range
$[18-35]^{\circ}$C) strongly suggests that they are very stable and robust
structures.
Figure 7: Scattering profiles measured at different temperatures, as indicated
in the legend, by small angle X-ray scattering (a, b) and neutron scattering
(c, d), for samples with a fixed protein concentration ($C=237$ mg/mL) and a
fixed glutenin content ($GLU=66$ %) prepared in a hydrogenated solvent (a, c)
or deuterated solvent (b, d). SAXS measurements have been acquired using an
in-house set-up and SANS measurements have been acquired at LLB (c) and MLZ (d)
facilities.
### 3.3 Linear visco-elasticity
We report in Figure 8 the frequency dependence of the storage, $G^{\prime}$,
and loss, $G^{\prime\prime}$, moduli for samples with a fixed
concentration ($C=237$ mg/mL), but different compositions of the gluten
extract. Samples depleted in protein assemblies ($GLU=13$ % and $GLU=19$ %)
are purely viscous. The storage modulus is too low to be measured reliably and
the loss modulus is found to be proportional to the frequency:
$G^{\prime\prime}=\eta\omega$, with a viscosity $\eta\simeq 100$ mPa s. This value is in
agreement with measurements performed with gliadin suspensions prepared using
a different protocol [40]. By contrast, the samples prepared with protein
extracts comprising protein assemblies ($GLU>30$ %) display a marked
visco-elastic signature. The two samples most enriched in glutenin ($GLU=56$ and
$66$ %) are essentially elastic: their storage modulus is nearly
frequency-independent, and is larger than their loss modulus over most or all of the
experimentally investigated frequency range (from $10^{-2}$ to $10^{2}$
rad/s).
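For the purely viscous samples, the viscosity follows from a linear fit of $G^{\prime\prime}$ against $\omega$ through the origin. A sketch on synthetic data generated with the quoted value $\eta\simeq 100$ mPa s (the frequency grid is illustrative):

```python
import numpy as np

# Newtonian response: G'' = eta * omega.  The viscosity is the slope of a
# least-squares fit through the origin.
omega = np.logspace(-2, 2, 30)       # rad/s
G_loss = 0.1 * omega                 # Pa, i.e. eta = 0.1 Pa s = 100 mPa s
eta = np.sum(G_loss * omega) / np.sum(omega ** 2)   # closed-form LSQ slope
print(abs(1000.0 * eta - 100.0) < 1e-6)             # viscosity in mPa s
```

Forcing the fit through the origin is what distinguishes a purely viscous response from one with a residual elastic contribution.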
Concentration-dependent aging of gels prepared with $GLU=52$ % has been
previously investigated in detail by some of us and we have shown that the
visco-elasticity and the gelation process could be quantitatively rationalized
in the framework of near-critical gels [15, 41, 42, 43]. We show here that the
same features occur for a whole class of gluten gels. The near-critical gel
behavior is especially exemplified by the sample with $GLU=44$ %. In the
window of experimentally accessible frequencies, a fresh sample exhibits the
visco-elastic properties of a critical gel, with $G^{\prime}\sim
G^{\prime\prime}\sim\omega^{0.85}$ and $G^{\prime\prime}>G^{\prime}$. For an aged gel, on the other hand,
an elastic plateau ($G^{\prime}>G^{\prime\prime}$) is measured at low frequency and the
transition to a power-law evolution of the two moduli is measured at higher
frequency. This is the signature of near-critical visco-elasticity above the
gel point.
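A critical-gel check on moduli data can be sketched as below: both moduli should share a common power-law exponent $n$, and at the gel point the Winter-Chambon criterion fixes their ratio, $G^{\prime\prime}/G^{\prime}=\tan(n\pi/2)$. The data here are synthetic, built with the reported exponent $n=0.85$:

```python
import numpy as np

def gel_exponent(omega, G):
    """Power-law exponent n of a modulus, G ~ omega**n, from a log-log fit."""
    n, _ = np.polyfit(np.log(omega), np.log(G), 1)
    return n

# Synthetic critical-gel moduli; since n > 1/2, tan(n*pi/2) > 1 and G'' > G'.
n_true = 0.85
omega = np.logspace(-2, 2, 40)                  # rad/s
G_loss = 12.0 * omega ** n_true                 # G'' (amplitude arbitrary)
G_store = G_loss / np.tan(n_true * np.pi / 2)   # Winter-Chambon ratio
print(round(gel_exponent(omega, G_store), 3), bool(np.all(G_loss > G_store)))  # → 0.85 True
```

On aged samples, the low-frequency plateau in $G^{\prime}$ would make the log-log fit valid only above the crossover frequency.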
Visco-elasticity of the gluten gels is governed by H-bonds [15, 44, 18], and
aging, as related to the increase of the elastic modulus with the time elapsed
since sample preparation, is likely due to the reorganization of the H-bonds
in the sample. Accordingly, we note that the purely viscous samples do not
exhibit any aging features, as opposed to the visco-elastic samples.
Interestingly, we also find that the sample that is the most enriched in
glutenin ($GLU=66$ %) does not seem to exhibit any aging over the investigated
period (data for a fresh sample and a $35$ day-old sample perfectly overlap),
as opposed to the samples with $GLU=44$ % and $GLU=56$ %, which exhibit
significant increase of their complex modulus with time. Although this
finding deserves a deeper investigation, we believe this might be due to
the hindrance of H-bond reorganization in a highly elastic material.
Overall we find that the rheological properties of the samples are directly
related to their structure. The emergence of visco-elasticity is directly
correlated with the presence of protein assemblies in the dilute regime, which
is also associated with the presence of H-bond-rich domains in the semi-dilute
regime.
Figure 8: Storage and loss moduli as a function of frequency, for fresh and
$35$-day-old samples prepared with a fixed protein concentration $C=237$
mg/mL, and with protein extracts comprising various amounts of glutenin, as
indicated in the legend. In (a, b), the lines are power-law evolutions with an
exponent of $1$ (a) and $0.85$ (b).
## 4 Conclusion
We have investigated the structure of dispersions of gluten proteins, with
tuneable composition, in a good solvent. Gluten proteins are mainly composed
of a blend of monomeric proteins, gliadins, and polymeric proteins, glutenins.
In principle, gluten proteins are by themselves at the cross-road between
polymers and colloids. Despite the complexity and the numerous interactions at
play in gluten, however, our experiments show that the material properties are
dominated by the polymer nature of the constituents. Thanks to a combination
of several techniques that probe the sample properties in different
concentration regimes, we have evidenced the major role played by the protein
assemblies. These assemblies are non-compact and very stable structures with a
size of the order of $100$ nm, which form even in very dilute regime, once the
proportion of glutenins in the protein blend is sufficiently high. They can be
assimilated to microgels with polymer chains held together by weak hydrogen
bonds. Thanks to their distinctive contrast in neutron scattering, we have
been able to identify their signature in semi-dilute regime and quantify their
amount as a function of the initial protein composition. The fact that sizes
of the same order of magnitude are measured when the protein concentration
varies by almost two orders of magnitude is intriguing and deserves
further investigation. Finally, when varying the protein composition, we find
that the emergence of visco-elasticity coincides with the emergence of the
protein assemblies, demonstrating their crucial function in tuning gluten gel
mechanical properties. Gluten being an essential ingredient of dough, and
being the ingredient largely responsible for the unique visco-elastic
properties of wheat dough, characterizing and rationalizing the properties of
gluten gels is obviously crucial in many technological and industrial
applications.
## Acknowledgements
Financial supports from ANR Elastobio (ANR 18 CE06 0012 01) and from Labex
Numev (ANR-10-LAB-20) are acknowledged. This work is also based upon
experiments performed at the KWS-2 and KWS-3 instruments operated by JCNS at
the Heinz Maier-Leibnitz Zentrum (MLZ), Garching, Germany, at PA-20 beamline
operated by Laboratoire Léon Brillouin, Gif-sur-Yvette, France and at ID02
beamline operated at the European Synchrotron Radiation Facility (ESRF),
Grenoble, France. The authors thank Theyencheri Narayanan and Alessandro
Mariani for assistance in using beamline ID02 at ESRF and Philippe Dieudonné
for help in the in-house SAXS measurements.
## Bibliography
## References
* [1] Li Y 2012 Chemical Society Reviews 41 2193–2221
* [2] Cankaya N 2017 Elastomers (IntechOpen)
* [3] Meldrum O W, Yakubov G E, Bonilla M R, Deshmukh O, McGuckin M A and Gidley M J 2018 Scientific Reports 8 5802
* [4] Jay G D, Torres J R, Warman M L, Laderer M C and Breuer K S 2007 Proceedings of the National Academy of Sciences 104 6194–6199
* [5] Nishikubo N, Awano T, Banasiak A, Bourquin V, Ibatullin F, Funada R, Brumer H, Teeri T T, Hayashi T, Sundberg B and Mellerowicz E J 2007 Plant and Cell Physiology 48 843–855
* [6] Yu L, Yakubov G E, Gilbert E P, Sewell K, van de Meene A M and Stokes J R 2019 Carbohydrate Polymers 207 333–342
* [7] Sabu T, Chin H C, Laly P, Rajisha K R, Jithin J and Maria H 2013 Natural Rubber Materials (RSC Polymer Chemistry Series)
* [8] Wrigley C, Bekes F and Bushuk W 2006 Gliadin and Glutenin: The Unique Balance of Wheat Quality (American Association of Cereal Chemists, Inc.)
* [9] Shewry P R and Tatham A S 1990 Biochemical Journal 267 1–12
* [10] Wrigley C W 1996 Nature 381 738–739
* [11] Wieser H 2007 Food Microbiology 24 115–119
* [12] Tatham A S, Hayes L, Shewry P R and Urry D W 2001 Biochimica et Biophysica Acta (BBA)-Protein Structure and Molecular Enzymology 1548 187–193
* [13] MacRitchie F 2014 Journal of Cereal Science 60 4–6 ISSN 07335210
* [14] Dahesh M, Banc A, Duri A, Morel M H and Ramos L 2014 J. Phys. Chem. B 118 11065–11076
* [15] Dahesh M, Banc A, Duri A, Morel M H and Ramos L 2016 Food Hydrocolloids 52 1–10
* [16] Banc A, Dahesh M, Wolf M, Morel M H and Ramos L 2017 Journal of Cereal Science 75 175–178
* [17] Morel M H, Pincemaille J, Chauveau E, Louhichi A, Violleau F, Menut P, Ramos L and Banc A 2020 Food Hydrocolloids 103 105676
* [18] Costanzo S, Banc A, Louhichi A, Chauveau E, Wu B, Morel M H and Ramos L 2020 Macromolecules 53 9470–9479
* [19] Belton P S 1999 Journal of Cereal Science 29 103–107
* [20] Morel M H, Redl A and Guilbert S 2002 Biomacromolecules 3 488–497
* [21] Ng T S K, McKinley G H and Ewoldt R H 2011 Journal of Rheology 55 627–654
* [22] Pincemaille J, Lecacheaux L, Banc A, Menut P, Ramos L and Morel M H manuscript in preparation
* [23] Narayanan T, Sztucki M, Van Vaerenbergh P, Léonardon J, Gorini J, Claustre L, Sever F, Morse J and Boesecke P 2018 Journal of Applied Crystallography 51 1511–1524
* [24] Radulescu A, Szekely N K, Appavou M S, Pipich V, Kohnke T, Ossovyi V, Staringer S, Schneider G J, Amann M, Zhang-Haagen B, Brandl G, Drochner M, Engels R, Hanslik R and Kemmerling G 2016 Journal of Visualized Experiments 54639
* [25] Pipich, V and Fu, Z 2015 Journal of Large-Scale Research Facilities 1 A31
* [26] Frielinghaus H, Pipich V, Radulescu A, Heiderich M, Hanslik R, Dahlhoff K, Iwase H, Koizumi S and Schwahn D 2009 Journal of Applied Crystallography 681–690
* [27] http://qtisas.com/
* [28] Hofmann H, Soranno A, Borgia A, Gast K, Nettels D and Schuler B 2012 Proceedings of the National Academy of Sciences 109 16155–16160
* [29] Kikhney A G and Svergun D I 2015 FEBS Letters 589 2570–2577
* [30] Balu R, Mata J P, Knott R, Elvin C M, Hill A J, Choudhury N R and Dutta N K 2016 The Journal of Physical Chemistry B 120 6490–6503
* [31] Rubinstein M and Colby R H 2003 Polymer Physics (Oxford University Press, Oxford)
* [32] Denkinger P and Burchard W 1991 Journal of Polymer Science Part B: Polymer Physics 29 589–600
* [33] Ohashi T, Galiacy S D, Briscoe G and Erickson H P 2007 Protein Science 16 1429–1438
* [34] Thomson N H, Miles M J, Popineau Y, Harries J, Shewry P and Tatham A S 1999 Biochimica et Biophysica Acta (BBA)-Protein Structure and Molecular Enzymology 1430 359–366
* [35] Pedersen J S and Schurtenberger P 2004 Journal of Polymer Science Part B: Polymer Physics 42 3081–3094 ISSN 0887-6266, 1099-0488 URL http://doi.wiley.com/10.1002/polb.20173
* [36] Schaefer D W 1984 Polymer 25 387–394
* [37] Banc A, Charbonneau C, Dahesh M, Appavou M S, Fu Z, Morel M H and Ramos L 2016 Soft Matter 12 5340–5352
* [38] Debye P and Bueche A M 1949 Journal of Applied Physics 20 518–525
* [39] Banc A, Pincemaille J, Costanzo S, Chauveau E, Appavou M S, Morel M H, Menut P and Ramos L 2019 Soft Matter 15 6160–6170
* [40] Boire A, Menut P, Morel M H and Sanchez C 2015 The Journal of Physical Chemistry B 119 5412–5421
* [41] Winter H H and Chambon F 1987 Journal of Rheology 30 367–382
* [42] Martin J E, Adolf D and Wilcoxon J P 1988 Physical Review Letters 61 2620
* [43] Martin J E, Adolf D and Wilcoxon J P 1989 Physical Review A 39 1325–1332
* [44] Ng T S K and McKinley G H 2008 Journal of Rheology 52 417–449
# Non-local in time telegraph equations and very slowly growing variances
Francisco Alegría Instituto de Ciencias Físicas y Matemáticas. Facultad de
Ciencias. Universidad Austral de Chile, Valdivia, Chile. Departamento de
Matemática y Estadística. Facultad de Ingeniería y Ciencias. Universidad de La
Frontera, Temuco, Chile<EMAIL_ADDRESS>and Juan C. Pozo
Departamento de Matemáticas, Facultad de Ciencias, Universidad de Chile, Las
Palmeras 3425, Ñuñoa, Santiago, Chile. Partially supported by Fondecyt grant
1181084<EMAIL_ADDRESS>
###### Abstract.
In this paper we consider a class of non-local in time telegraph equations.
Recently, in [16] it has been proved that the fundamental solutions of such
equations can be interpreted as the probability density function of a
stochastic process. We study the asymptotic behavior of the variance of this
process at large and short times. In this context, we develop a method to
construct new examples such that the variance exhibits a slow growth behavior,
extending the earlier results. Finally, we show that our approach can be
adapted to define new integro-differential operators which are of interest in
sub-diffusion processes.
###### 2010 Mathematics Subject Classification:
Primary 45K05, 34K25, 35R10
## 1\. Introduction
The study of non-local in time differential equations has received a lot of
attention in recent years due to its deep connection with non-local transport
phenomena, control of stochastic jump processes, the description of anomalous
diffusion in physics and memory effects in parabolic equations; see [8, 9, 20,
16, 15, 10] and the references therein.
Let $k\in L_{1,loc}(\mathbb{R}_{+})$ and $\eta,\nu$ be positive constants. In
this paper we consider the non-local in time telegraph equation
$\partial_{t}^{2}\bigl{(}k\ast k\ast
u(\cdot,z)\bigr{)}(t)+\eta\,\partial_{t}\bigl{(}k\ast
u(\cdot,z)\bigr{)}(t)-\nu\,\partial_{z}^{2}u(t,z)=0,\ t>0,\ z\in\mathbb{R},$
(1.1)
which has been recently proposed in [16]. We are interested in studying some
properties of the fundamental solution of (1.1). For this reason we consider
the initial conditions
$u(0,z)=\delta_{0}(z)\quad\text{and}\quad\partial_{t}u(0,z)=0,\
z\in\mathbb{R},$ (1.2)
where the condition $\partial_{t}u(0,z)=0$ must be imposed whenever the time
derivative exists. We
point out that the fundamental solution of (1.1) has been already studied in
[16, Section 4] assuming that $k$ is a kernel of type $(\mathcal{PC})$, which
means that the following condition is satisfied.
1. $(\mathcal{PC})$
$k\in L_{1,loc}(\mathbb{R}_{+})$ is nonnegative, nonincreasing, and there is
$\ell\in L_{1,loc}(\mathbb{R}_{+})$ such that $(k*\ell)=1$ in $(0,\infty)$. In
this case we also write $(k,\ell)\in(\mathcal{PC})$.
In this paper we also assume that $k$ is a kernel of type $(\mathcal{PC})$ and
we extend some of the results obtained in [16].
It is worth mentioning that the most classical example of a pair
$(k,\ell)\in(\mathcal{PC})$ is given by $(g_{1-\alpha},g_{\alpha})$ with
$\alpha\in(0,1)$, where $g_{\beta}$ with $\beta>0$ is the standard notation
for the function
$g_{\beta}(t)=\dfrac{t^{\beta-1}}{\Gamma(\beta)},\quad t>0.$
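The pair $(g_{1-\alpha},g_{\alpha})$ belongs to $(\mathcal{PC})$ because convolution becomes a product under the Laplace transform and $\widehat{g_{\beta}}(\lambda)=\lambda^{-\beta}$, so $\widehat{g_{1-\alpha}}\,\widehat{g_{\alpha}}=\lambda^{-1}$, the transform of the constant function $1$. A numerical sketch of the transform identity (the choices $\beta=1/2$ and $\lambda=2$ are arbitrary):

```python
import math
from scipy.integrate import quad

def g(beta, t):
    """Standard kernel g_beta(t) = t**(beta - 1) / Gamma(beta)."""
    return t ** (beta - 1) / math.gamma(beta)

# Laplace transform of g_beta is lam**(-beta); the singularity of the
# integrand at t = 0 is integrable for beta in (0, 1), and the integral
# is split at t = 1 so that quad handles it robustly.
beta, lam = 0.5, 2.0
f = lambda t: math.exp(-lam * t) * g(beta, t)
numeric = quad(f, 0.0, 1.0)[0] + quad(f, 1.0, math.inf)[0]
print(abs(numeric - lam ** (-beta)) < 1e-6)
```

Since $\widehat{g_{1-\alpha}}(\lambda)\,\widehat{g_{\alpha}}(\lambda)=\lambda^{-1}$ for every $\lambda>0$, the convolution $g_{1-\alpha}\ast g_{\alpha}$ equals $1$ on $(0,\infty)$, which is exactly the condition $k\ast\ell=1$ in $(\mathcal{PC})$.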
In this case (1.1)-(1.2) takes the form of the time fractional telegraph
equation
$\displaystyle\partial_{t}^{2\alpha}u(t,z)+\eta\,\partial_{t}^{\alpha}u(t,z)-\nu\,\partial_{z}^{2}u(t,z)$
$\displaystyle=0,\ t>0,\ z\in\mathbb{R},$ (1.3) $\displaystyle
u(0,z)=\delta_{0}(z)\quad\text{and}\quad\partial_{t}u(0,z)$ $\displaystyle=0,\
z\in\mathbb{R},$ (1.4)
which have been extensively studied, for instance see [14, 19] and references
therein. In this case the initial condition on the derivative of $u$ must be
considered only if $\alpha\in(\frac{1}{2},1]$. The solution $U_{\alpha}$ of
(1.3)-(1.4) exhibits several interesting properties, one of them being that
$U_{\alpha}$ can be viewed as the probability density function of a stochastic
process denoted by $X_{\alpha}(t)$, (cf. [14]). Moreover, the variance of the
process $X_{\alpha}(t)$ is given by
$\textrm{Var}[X_{\alpha}(t)]=2\nu t^{2\alpha}E_{\alpha,2\alpha+1}(-\eta
t^{\alpha}),\ t\geq 0,$
where $E_{\alpha,2\alpha+1}$ is an example of the so-called Mittag-Leffler
function of two parameters, see [5, Chapter 4] or [12, Appendix E] for several
properties and results of this function.
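The two-parameter Mittag-Leffler function is defined by the power series $E_{a,b}(z)=\sum_{k\geq 0}z^{k}/\Gamma(ak+b)$, so the variance above can be evaluated directly for moderate arguments. A minimal sketch (the truncation at 150 terms is a practical choice, not part of the theory):

```python
import math

def mittag_leffler(a, b, z, terms=150):
    """E_{a,b}(z) = sum_{k>=0} z**k / Gamma(a*k + b), truncated.  The series
    converges for every z, but direct summation is reliable only for
    moderate |z|."""
    return sum(z ** k / math.gamma(a * k + b) for k in range(terms))

def variance(t, alpha, eta, nu):
    """Var[X_alpha(t)] = 2*nu*t**(2*alpha)*E_{alpha,2*alpha+1}(-eta*t**alpha)."""
    return 2.0 * nu * t ** (2 * alpha) * mittag_leffler(
        alpha, 2 * alpha + 1, -eta * t ** alpha)

# Consistency checks: E_{1,1}(z) = exp(z), and for alpha = 1 the variance
# reduces to the classical form 2*nu*(eta*t - 1 + exp(-eta*t)) / eta**2.
print(abs(mittag_leffler(1.0, 1.0, 1.0) - math.e) < 1e-12)
print(abs(variance(1.0, 1.0, 2.0, 1.0)
          - 2.0 * (2.0 - 1.0 + math.exp(-2.0)) / 4.0) < 1e-12)
```

For large $|z|$ one would instead use the asymptotic expansion of $E_{a,b}$, as the truncated series suffers from catastrophic cancellation.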
Motivated by this result, it has been established in [16, Theorem 1.1] that
for every $(k,\ell)\in(\mathcal{PC})$ the fundamental solution $U(t,z)$ of
(1.1) can be interpreted as a probability density function on $\mathbb{R}$ and
there exists a process, denoted by $X(t)$, whose distribution coincides with
$U(t,\cdot)$ for all time $t>0$. Further, the corresponding variance of this
process is positive, increasing on $(0,\infty)$ and it is given by
$\mathrm{Var}[X(t)]=2\nu(1\ast r_{\eta}\ast\ell)(t),\quad t\geq 0,$ (1.5)
where $r_{\eta}$ is the integrated resolvent associated to $\ell$, (see
Definition 2.2 below).
The importance of knowing the variance of a stochastic process lies in the
fact that it allows one to measure how far the random values are spread out
from their average. Although (1.5) provides an exact representation of
$\textrm{Var}[X(t)]$, in general the function $r_{\eta}$ cannot be computed
explicitly. In consequence, obtaining further analytical properties of the variance
can be a hard task. For example, we are particularly interested in knowing
how slow its growth rate can be. In this context, in [16, Section 4] it has
been proved that the asymptotic behavior of $\textrm{Var}[X(t)]$ can be of
very different kinds, e.g. exponential, algebraic and logarithmic.
To the best of our knowledge, the slowest growth rate of $\textrm{Var}[X(t)]$
known in the literature is logarithmic, and this rate is obtained by considering
$k(t)=\int_{0}^{1}\dfrac{t^{\alpha-1}}{\Gamma(\alpha)}d\alpha,\quad\text{and}\quad\ell(t)=\int_{0}^{\infty}\dfrac{e^{-st}}{1+s}ds,\quad
t>0.$ (1.6)
So a basic question arises: is there a pair $(k,\ell)\in(\mathcal{PC})$ such
that the variance grows slower than a logarithmic function at infinity? In
this paper we show that the answer is affirmative. Indeed, we develop a method
to construct infinitely many examples of pairs $(k,\ell)\in(\mathcal{PC})$
answering this question affirmatively. These examples cover a broad range of
slowly growing functions.
The paper is organized as follows. In Section 2, we give some preliminary
concepts needed for our work. Section 3 is the central part of this work; in
this section we prove our main results. In Section 4, we explain why this
method could be of interest in other contexts such as sub-diffusion processes.
## 2\. Preliminaries
The Laplace transform of a subexponential function
$f\colon[0,\infty)\to\mathbb{R}$ defined on the half-line will be denoted by
$\widehat{f}(\lambda)=\int_{0}^{\infty}e^{-\lambda t}f(t)dt,\quad\lambda>0.$
###### Definition 2.1.
An infinitely differentiable function $f\colon(0,\infty)\to\mathbb{R}$ is
called completely monotonic if $(-1)^{n}f^{(n)}(\lambda)\geq 0$ for all
$n\in\mathbb{N}_{0}$ and $\lambda>0$. An infinitely differentiable function
$f\colon(0,\infty)\to\mathbb{R}$ is called a Bernstein function if
$f(\lambda)\geq 0$ for $\lambda>0$ and $f^{\prime}$ is a completely monotonic
function. We will denote the class of completely monotonic functions by
$(\mathcal{CM})$, and the class of Bernstein functions by $(\mathcal{BF})$.
A detailed collection of the most important properties and results about the
classes $(\mathcal{CM})$ and $(\mathcal{BF})$ can be found in [7, Chapter 3]
and [18].
###### Definition 2.2.
Let $\eta\in\mathbb{C}$ and $\ell\in L_{1,loc}(\mathbb{R}_{+})$. The solution
$r_{\eta}\colon\mathbb{R}_{+}\to\mathbb{C}$ of the scalar Volterra equation
$r_{\eta}(t)+\eta(r_{\eta}\ast\ell)(t)=\ell(t),\quad t>0,$ (2.1)
is called the integrated resolvent associated to $\ell$.
The integrated resolvent $r_{\eta}$ has proved to be essential for the
treatment of non-homogeneous Volterra equations; see [2, 3, 6, 15] and the
references therein.
We remark that if $\eta\in\mathbb{R}_{+}$ then the condition
$(k,\ell)\in(\mathcal{PC})$ implies that $r_{\eta}$ is positive, see [17,
Proposition 4.5]. Further, if additionally $\ell$ is completely monotonic then
$r_{\eta}$ is completely monotonic as well, (see [17, Lemma 4.1]). More
properties and results about $r_{\eta}$ can be found in [6, Chapter 5].
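Although $r_{\eta}$ is rarely available in closed form, (2.1) is straightforward to solve numerically once the convolution is discretized. A trapezoidal sketch for the nonsingular kernel $\ell(t)=1$, for which the exact resolvent is $r_{\eta}(t)=e^{-\eta t}$ (this kernel is chosen only to allow a check against a closed form; singular kernels like $g_{\alpha}$ need a product-integration rule instead):

```python
import math

def resolvent_constant_kernel(eta, T, N):
    """Solve r + eta*(r * l) = l for l(t) = 1 on [0, T], discretizing the
    convolution with the trapezoidal rule; exact solution: exp(-eta*t)."""
    h = T / N
    r = [1.0]          # at t = 0 the convolution vanishes, so r(0) = l(0) = 1
    interior = 0.0     # running sum of r[1], ..., r[n-1]
    for n in range(1, N + 1):
        acc = 0.5 * r[0] + interior                    # trapezoid minus r[n]/2
        rn = (1.0 - eta * h * acc) / (1.0 + 0.5 * eta * h)
        r.append(rn)
        interior += rn
    return r

eta, T, N = 1.5, 2.0, 2000
r = resolvent_constant_kernel(eta, T, N)
print(abs(r[-1] - math.exp(-eta * T)) < 1e-4)   # → True
```

The same marching structure carries over to any locally integrable $\ell$, with the quadrature weights adapted to the kernel's singularity at the origin.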
Now we introduce a version of the Karamata-Feller Tauberian theorem. The proof
can be found in [1, Section 1.7, Chapter I] or [4, Chapter XIII].
###### Definition 2.3.
Let $\varrho\in\mathbb{R}$. We say that a function $L:(0,\infty)\to(0,\infty)$
is a regularly varying function of index $\varrho$ if for every fixed $x>0$ we
have that
$\lim_{t\to\infty}\dfrac{L(tx)}{L(t)}=x^{\varrho}.$
The regularly varying functions at infinity of index $\varrho=0$ are called
slowly varying functions.
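For instance, $\log t$ is slowly varying while $\sqrt{t}\,\log t$ is regularly varying of index $1/2$. The defining limit can be checked numerically; the evaluation point $t=10^{300}$ below is only there because the convergence in $t$ is slow:

```python
import math

def index_estimate(L, x, t):
    """If L is regularly varying of index rho, then L(t*x)/L(t) -> x**rho as
    t -> infinity, so log(L(t*x)/L(t)) / log(x) estimates rho."""
    return math.log(L(t * x) / L(t)) / math.log(x)

t = 1e300
print(abs(index_estimate(math.log, 2.0, t)) < 0.01)           # log: rho = 0
print(abs(index_estimate(lambda s: math.sqrt(s) * math.log(s),
                         2.0, t) - 0.5) < 0.01)               # rho = 1/2
```

By Remark 2.4 the second function factors as $t^{1/2}L(t)$ with $L=\log$ slowly varying, which is what the estimate recovers.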
###### Remark 2.4.
Let $F\colon(0,\infty)\to(0,\infty)$ be a regularly varying function at
infinity of index $\varrho$. It follows from [1, Theorem 1.4.1] that there is
a slowly varying function $L\colon(0,\infty)\to(0,\infty)$ such that
$F(x)=x^{\varrho}L(x)$ for $x>0$.
###### Theorem 2.5 (Karamata-Feller Theorem).
Let $L_{1},L_{2}\colon(0,\infty)\to(0,\infty)$ be slowly varying functions. Let
$\beta>0$ and $w:(0,\infty)\rightarrow\mathbb{R}$ be a monotone function whose
Laplace transform $\widehat{w}(\lambda)$ exists for all
$\lambda\in\mathbb{C}_{+}$. Then
$\widehat{w}(\lambda)\sim\,\frac{1}{\lambda^{\beta}}\,L_{1}(\lambda),\
\text{as}\;\lambda\to\infty,\ \;\mbox{if and only if}\
\;w(t)\sim\frac{t^{\beta-1}}{\Gamma(\beta)}L_{1}\left(\dfrac{1}{t}\right),\
\mbox{as}\;t\to 0^{+},$
and
$\widehat{w}(\lambda)\sim\,\frac{1}{\lambda^{\beta}}\,L_{2}\left(\frac{1}{\lambda}\right),\
\mbox{as}\ \lambda\to 0^{+},\ \;\mbox{if and only if}\ \
w(t)\sim\frac{t^{\beta-1}}{\Gamma(\beta)}L_{2}(t),\ \mbox{as}\ t\to\infty.$
Here the limits are taken along the positive real axis, and the notation $f(t)\sim
g(t)$ as $t\to t_{*}$ means that $\lim_{t\to t_{*}}f(t)/g(t)=1$.
## 3\. Very slowly growing variances
We begin this section by recalling the result established in [16, Theorem 1.1],
which is the basis of our work.
###### Theorem 3.1.
Let $\eta,\nu$ be positive constants and $(k,\ell)\in(\mathcal{PC})$. The
variance of the process $X(t)$, whose density function coincides with the
fundamental solution of (1.1), satisfies the following Volterra equation
$\textrm{Var}[X(t)]+\eta(\ell\ast\textrm{Var}[X(\cdot)])(t)=2\nu(1\ast\ell\ast\ell)(t),\quad
t\geq 0.$ (3.1)
Further, $\textrm{Var}[X(t)]$ is positive and increasing on $(0,\infty)$ and
it satisfies the formula
$\textrm{Var}[X(t)]=2\nu(1\ast\ell\ast r_{\eta})(t),\quad t\geq 0.$ (3.2)
Since (3.2) is given by means of convolutions, we can use Theorem 2.5 to study
the behavior of $\mathrm{Var}[X(t)]$ at large and short times. The
following theorem is the main result of our work.
###### Theorem 3.2.
Let $(k,\ell)\in(\mathcal{PC})$. If the Laplace transform $\widehat{\ell}$ is
a regularly varying function of index $\varrho_{1}<\frac{1}{2}$, then
$\mathrm{Var}[X(t)]\sim\dfrac{2\nu}{\Gamma(1-2\varrho_{1})}\left(\widehat{\ell}\bigl{(}t^{-1}\bigr{)}\right)^{2},\
\text{as}\ t\to 0^{+}.$ (3.3)
Further, if the function $t\mapsto\widehat{\ell}(t^{-1})$ is a regularly
varying function of index $\varrho_{2}>-1$, then
$\mathrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta\,\Gamma(1+\varrho_{2})}\,\widehat{\ell}\Bigl{(}\frac{1}{t}\Bigr{)},\quad\text{as}\quad
t\to\infty.$ (3.4)
###### Proof.
Set $V(t)=\mathrm{Var}[X(t)]$ for $t\geq 0$. It follows from (3.2) that the
Laplace transform of $V$ is given by
$\widehat{V}(\lambda)=\dfrac{2\nu}{\lambda}\cdot\dfrac{\widehat{\ell}(\lambda)}{1+\eta\widehat{\ell}(\lambda)}\cdot\widehat{\ell}(\lambda),\quad\lambda>0.$
Since $\widehat{\ell}(\lambda)\to 0$ as $\lambda\to\infty$, we have that
$\widehat{V}(\lambda)\sim\dfrac{2\nu}{\lambda^{1-2\varrho_{1}}}L_{1}(\lambda),\quad\text{as}\quad\lambda\to\infty,$
where $L_{1}(t)=t^{-2\varrho_{1}}\bigl{(}\widehat{\ell}(t)\bigr{)}^{2}$. Since
$\widehat{\ell}$ is a regularly varying function of index $\varrho_{1}$, by
Remark 2.4, we have that $L_{1}$ is a slowly varying function. Since
$\varrho_{1}<\frac{1}{2}$, it follows from Theorem 2.5 that
$\mathrm{Var}[X(t)]\sim\dfrac{2\nu}{\Gamma(1-2\varrho_{1})}\left(\widehat{\ell}\bigl{(}t^{-1}\bigr{)}\right)^{2},\
\text{as}\ t\to 0^{+}.$
On the other hand, we note that
$\widehat{V}(\lambda)=\dfrac{2\nu}{\lambda^{1+\varrho_{2}}}L_{2}\left(\dfrac{1}{\lambda}\right),\quad\lambda>0,$
where $L_{2}\colon(0,\infty)\to(0,\infty)$ is defined by
$L_{2}(t)=\frac{\widehat{\ell}(t^{-1})}{1+\eta\widehat{\ell}(t^{-1})}\cdot\frac{\widehat{\ell}(t^{-1})}{t^{\varrho_{2}}},\quad\text{for}\quad
t>0.$
Since $t\mapsto\widehat{\ell}(t^{-1})$ is a regularly varying function of
index $\varrho_{2}$, by Remark 2.4 we have that $L_{2}$ is a slowly varying
function. Furthermore, since $\widehat{\ell}(\lambda)\to\infty$ as $\lambda\to 0^{+}$
(note that $\ell\notin L_{1}(\mathbb{R}_{+})$ for all the pairs considered below),
$L_{2}(t)\sim\frac{1}{\eta\,t^{\varrho_{2}}}\,\widehat{\ell}\bigl{(}t^{-1}\bigr{)},\quad\text{as}\quad
t\to\infty.$
Since $\varrho_{2}>-1$, it follows from Theorem 2.5 that
$\mathrm{Var}[X(t)]\sim\dfrac{2\nu\,t^{\varrho_{2}}}{\Gamma(1+\varrho_{2})}\,L_{2}(t)\sim\dfrac{2\nu}{\eta\,\Gamma(1+\varrho_{2})}\,\widehat{\ell}\Bigl{(}\frac{1}{t}\Bigr{)},\quad\text{as}\quad
t\to\infty.$
∎
We remark that the conditions of Theorem 3.2 are satisfied for many pairs of
functions $(k,\ell)\in(\mathcal{PC})$. For instance, all the examples
presented in [16, Section 5] satisfy these conditions. For brevity we omit
their proofs.
###### Example 3.3.
(Time fractional telegraph equation). If $k=g_{1-\alpha}$ with
$\alpha\in(0,1)$, then
$\textrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta}\dfrac{t^{\alpha}}{\Gamma(1+\alpha)},\quad\text{as}\quad
t\to\infty,$
and
$\textrm{Var}[X(t)]\sim
2\nu\,\dfrac{t^{2\alpha}}{\Gamma(1+2\alpha)},\quad\text{as}\quad t\to 0^{+}.$
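As a numerical aside of ours (not part of the source), Example 3.3 can be checked directly at the Laplace level: for $k=g_{1-\alpha}$ one has $\widehat{\ell}(\lambda)=\lambda^{-\alpha}$, and the transform of the variance from the proof of Theorem 3.2 is $\widehat{V}(\lambda)=\frac{2\nu}{\lambda}\,\widehat{\ell}^{2}/(1+\eta\widehat{\ell})$. The two asymptotic regimes correspond to $2\nu\lambda^{-1-2\alpha}$ (short time, $\lambda\to\infty$) and $\frac{2\nu}{\eta}\lambda^{-1-\alpha}$ (large time, $\lambda\to 0^{+}$):

```python
import math

# Laplace-domain check of Example 3.3 (time-fractional telegraph equation).
# For k = g_{1-alpha}: hat{ell}(lambda) = lambda**(-alpha), and
#   hat{V}(lambda) = (2 nu / lambda) * hat{ell}**2 / (1 + eta * hat{ell}).
# We verify the ratios to the two claimed asymptotes tend to 1.

def V_hat(lam, alpha, nu=1.0, eta=1.0):
    ell = lam ** (-alpha)
    return (2.0 * nu / lam) * ell * ell / (1.0 + eta * ell)

alpha, nu, eta = 0.7, 1.0, 1.0
big, small = 1e8, 1e-8
ratio_short = V_hat(big, alpha) / (2.0 * nu * big ** (-1.0 - 2.0 * alpha))
ratio_long = V_hat(small, alpha) / ((2.0 * nu / eta) * small ** (-1.0 - alpha))
print(ratio_short, ratio_long)  # both tend to 1
```

By Theorem 2.5 these Laplace-domain asymptotes translate into the stated time-domain behavior of $\textrm{Var}[X(t)]$.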
###### Example 3.4.
(Sum of two time fractional derivatives). If $k=g_{1-\alpha}+g_{1-\beta}$ with
$0<\alpha<\beta<1$, then
$\textrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta}\dfrac{t^{\alpha}}{\Gamma(1+\alpha)},\quad\text{as}\quad
t\to\infty,$
and
$\textrm{Var}[X(t)]\sim
2\nu\,\dfrac{t^{2\beta}}{\Gamma(1+2\beta)},\quad\text{as}\quad t\to 0^{+}.$
###### Example 3.5.
(Time-fractional telegraph equation with Mittag-Leffler weight). If
$k(t)=t^{\beta-1}E_{\alpha,\beta}(-\omega t^{\alpha})$ with $0<\alpha,\beta<1$
and $\omega>0$, then
$\textrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta}\dfrac{\omega\,t^{\alpha+1-\beta}}{\Gamma(2+\alpha-\beta)},\quad\text{as}\quad
t\to\infty,$
and
$\textrm{Var}[X(t)]\sim
2\nu\,\dfrac{t^{2-2\beta}}{\Gamma(3-2\beta)},\quad\text{as}\quad t\to
0^{+}.$
###### Example 3.6.
(Time distributed order telegraph equation). If $\displaystyle
k(t)=\int_{a}^{b}g_{\alpha}(t)d\alpha$, with $0\leq a<b\leq 1$, then
$\textrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta}\dfrac{t^{1-b}\log(t)}{\Gamma(2-b)},\quad\text{as}\quad
t\to\infty,$
and
$\textrm{Var}[X(t)]\sim
2\nu\,\dfrac{t^{2-2a}\bigl{(}\log(t^{-1})\bigr{)}^{2}}{\Gamma(3-2a)},\quad\text{as}\quad
t\to 0^{+}.$
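As a hedged numerical aside (ours, not part of the source), the two regimes of $\widehat{\ell}$ for the distributed-order kernel can be checked directly: from $\widehat{k}(\lambda)=\int_{a}^{b}\lambda^{-\alpha}d\alpha=(\lambda^{-a}-\lambda^{-b})/\log\lambda$ and $\widehat{k}\,\widehat{\ell}=1/\lambda$ one gets $\widehat{\ell}(\lambda)=\log\lambda/(\lambda^{1-a}-\lambda^{1-b})$, which behaves like $\lambda^{a-1}\log\lambda$ at infinity and like $\lambda^{b-1}\log(1/\lambda)$ at the origin:

```python
import math

# Check of the two asymptotic regimes of hat{ell} for the distributed-order
# kernel of Example 3.6, with hat{ell}(lam) = log(lam)/(lam^(1-a) - lam^(1-b)).
# This is an illustration only; the function name is ours.

def ell_hat(lam, a, b):
    return math.log(lam) / (lam ** (1.0 - a) - lam ** (1.0 - b))

a, b = 0.2, 0.8
big, small = 1e10, 1e-10
# short-time regime (lambda -> infinity): hat{ell}(lam) ~ lam^(a-1) log(lam)
r1 = ell_hat(big, a, b) / (big ** (a - 1.0) * math.log(big))
# long-time regime (lambda -> 0+): hat{ell}(lam) ~ lam^(b-1) log(1/lam)
r2 = ell_hat(small, a, b) / (small ** (b - 1.0) * math.log(1.0 / small))
print(r1, r2)  # both close to 1
```

In the notation of Theorem 3.2 this corresponds to the indices $\varrho_{1}=a-1$ and $\varrho_{2}=1-b$.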
###### Remark 3.7.
We note that if $b=1$ and $a\in(0,1)$ in Example 3.6, then we have an infinite
family of processes whose variance behaves like a logarithmic function at
infinity.
The following example has not appeared in the literature before. In
order to present it, we recursively define the functions $\Theta_{n}$ as
follows
$\Theta_{1}(t,x)=\int_{0}^{x}g_{\alpha}(t)d\alpha\quad\text{and}\quad\Theta_{n+1}(t,x)=\int_{0}^{x}\Theta_{n}(t,y)dy,\quad
t>0,\ x>0.$ (3.5)
Further, for $n\in\mathbb{N}$ we define the functions
$\theta_{n}\colon(0,\infty)\to(0,\infty)$ by
$\theta_{n}(t)=\Theta_{n}(t,1),\ \text{for}\ t>0.$ (3.6)
###### Example 3.8.
Let $n\in\mathbb{N}$. Consider $\displaystyle k=\theta_{n}$ where $\theta_{n}$
has been defined in (3.6), then
$\textrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta}\bigl{(}\log(t)\bigr{)}^{n},\quad\text{as}\quad
t\to\infty,$
and
$\textrm{Var}[X(t)]\sim\nu\big{(}(n-1)!\bigr{)}^{2}\bigl{(}t\cdot\log(t)\bigr{)}^{2},\quad\text{as}\quad
t\to 0^{+}.$
###### Proof.
Let $n\in\mathbb{N}$. We note that $\theta_{n}$ is positive and locally
integrable on $\mathbb{R}_{+}$. Since $\alpha\in(0,1)$ we have that
$g_{\alpha}\in(\mathcal{CM})$. We recall that the class of completely
monotonic functions is closed under addition and pointwise limits, see [18,
Corollary 1.6 and Corollary 1.7]. Therefore, we have that
$\theta_{n}\in(\mathcal{CM})$ as well. It follows from [6, Theorem 5.4 and
Theorem 5.5] that there exists $\zeta_{n}\in(\mathcal{CM})$ such that
$\theta_{n}\ast\zeta_{n}=1.$ (3.7)
Consequently, we have that $(\theta_{n},\zeta_{n})\in(\mathcal{PC})$. Further,
for $\lambda>0$ and $x>0$ we note that
$\widehat{\Theta}_{1}(\lambda,x)=\dfrac{\lambda^{x}-1}{\lambda^{x}\log(\lambda)}$.
Hence, it follows from (3.5) and an inductive procedure that
$\widehat{\Theta}_{n}(\lambda,x)=\dfrac{1}{\lambda^{x}\bigl{(}\log(\lambda)\bigr{)}^{n}}\left((-1)^{n}+\lambda^{x}\sum_{k=1}^{n}\dfrac{(-1)^{k-1}x^{n-k}(\log(\lambda))^{n-k}}{(n-k)!}\right),\
\lambda>0,\ x>0,$
for $n\geq 2$. This in turn implies that
$\widehat{\theta}_{n}(\lambda)=\dfrac{1}{\lambda\bigl{(}\log(\lambda)\bigr{)}^{n}}\left((-1)^{n}+\lambda\sum_{k=1}^{n}\dfrac{(-1)^{k-1}(\log(\lambda))^{n-k}}{(n-k)!}\right),\quad\lambda>0,$
and
$\widehat{\zeta}_{n}(\lambda)=\dfrac{\bigl{(}\log(\lambda)\bigr{)}^{n}}{\displaystyle(-1)^{n}+\lambda\sum_{k=1}^{n}\dfrac{(-1)^{k-1}(\log(\lambda))^{n-k}}{(n-k)!}},\quad\lambda>0,\
$
for $n\geq 2$. We note that $\widehat{\zeta}_{n}(\lambda)$ can be rewritten as
follows
$\widehat{\zeta}_{n}(\lambda)=\dfrac{\log(\lambda)}{\displaystyle\frac{(-1)^{n}}{\big{(}\log(\lambda)\big{)}^{n-1}}+\frac{\lambda}{(n-1)!}+\lambda\sum_{k=2}^{n}\dfrac{(-1)^{k-1}}{(n-k)!(\log(\lambda))^{k-1}}},\quad\lambda>0$
Hence, $\widehat{\zeta}_{n}(\lambda)\sim\frac{(n-1)!\log(\lambda)}{\lambda}$
as $\lambda\to\infty$. This implies that $\widehat{\zeta}_{n}$ is a regularly
varying function of index $\varrho=-1$. On the other hand, we have that
$\widehat{\zeta}_{n}(t^{-1})=\dfrac{(-1)^{n}\big{(}\log(t)\big{)}^{n}}{\displaystyle(-1)^{n}+\frac{(-1)^{n-1}}{t}\sum_{k=1}^{n}\frac{\big{(}\log(t)\big{)}^{n-k}}{(n-k)!}},\quad
t>0,$
which implies that $\widehat{\zeta}_{n}(t^{-1})\sim\big{(}\log(t)\big{)}^{n}$
as $t\to\infty$. Since the logarithmic function is regularly varying, and the
product of regularly varying functions is regularly varying as well,
this in turn implies that the function $t\mapsto\widehat{\zeta}_{n}(t^{-1})$ is a
regularly varying function. Moreover,
$\widehat{\zeta}_{n}(t^{-1})=\dfrac{(-1)^{n}\,t\log(t)}{\displaystyle\frac{(-1)^{n}\,t}{(\log(t))^{n-1}}+\frac{(-1)^{n-1}}{(n-1)!}+\sum_{k=2}^{n}\frac{1}{(n-k)!\,\big{(}\log(t)\big{)}^{k-1}}},\quad
t>0,$
which implies that $\widehat{\zeta}_{n}(t^{-1})\sim(-1)(n-1)!\,t\log(t)$ as
$t\to 0^{+}$. Consequently, as a direct application of Theorem 3.2 we obtain
that
$\textrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta}\bigl{(}\log(t)\bigr{)}^{n},\quad\text{as}\quad
t\to\infty,$
and
$\textrm{Var}[X(t)]\sim\nu\bigl{(}(n-1)!\bigr{)}^{2}\bigl{(}t\cdot\log(t)\bigr{)}^{2},\quad\text{as}\quad
t\to 0^{+}.$
∎
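The closed-form Laplace transforms used in the proof above can be sanity-checked numerically. The following sketch (ours; plain composite Simpson quadrature) verifies $\widehat{\theta}_{1}(\lambda)=(\lambda-1)/(\lambda\log\lambda)$ and, for $n=2$, the value $\widehat{\theta}_{2}(\lambda)=(\lambda\log\lambda-\lambda+1)/(\lambda(\log\lambda)^{2})$ obtained from the general formula:

```python
import math

# Numerical check of the Laplace transforms underlying Example 3.8:
# hat{theta}_1(lam) = integral_0^1 lam**(-a) da = (lam - 1)/(lam log lam),
# and, via Theta_2(., x) = integral_0^x Theta_1(., y) dy,
# hat{theta}_2(lam) = (lam log lam - lam + 1)/(lam (log lam)**2).

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

lam = 3.5
log_lam = math.log(lam)

theta1_num = simpson(lambda a: lam ** (-a), 0.0, 1.0)
theta1_exact = (lam - 1.0) / (lam * log_lam)

# hat{Theta}_1(lam, y) = (1 - lam**(-y))/log(lam), integrated over y in [0, 1]
theta2_num = simpson(lambda y: (1.0 - lam ** (-y)) / log_lam, 0.0, 1.0)
theta2_exact = (lam * log_lam - lam + 1.0) / (lam * log_lam ** 2)

print(theta1_num, theta1_exact)
print(theta2_num, theta2_exact)
```

Both quadratures agree with the closed forms to high precision, consistent with the inductive formula for $\widehat{\theta}_{n}$ at $n=1,2$.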
We remark that all the examples above are interesting. However, as we
have mentioned in the Introduction, we are interested in finding examples of
$(k,\ell)\in(\mathcal{PC})$ such that $\textrm{Var}[X(t)]$ grows more slowly than a
logarithmic function at infinity. To this end we prove the following result.
###### Lemma 3.9.
Let $f,g\in L_{1,loc}(\mathbb{R}_{+})$ and assume that $f,g\in(\mathcal{CM})$.
Then there exists $h\in(\mathcal{CM})$ such that
$\widehat{h}(\lambda)=\widehat{f}(\lambda)\
\widehat{g}\bigl{(}\widehat{f}(\lambda)\bigr{)},\quad\lambda>0.$
###### Proof.
Consider $a=1\ast f$, $b\equiv 1$, $c=1\ast g$. It is clear that $a,b$ and $c$
are Bernstein functions. According to [17, Lemma 4.3], there exists a
Bernstein function $e\colon(0,\infty)\to(0,\infty)$ such that
$e(0^{+})=a(0^{+})=0$ and
$\widehat{e}(\lambda)=\widehat{a}(\lambda)\widehat{dc}\left(\dfrac{\widehat{a}(\lambda)}{\widehat{b}(\lambda)}\right),\
\lambda>0,$
where $\widehat{dc}$ stands for the Laplace transform of the measure $dc$
which is defined by
$\widehat{dc}(\lambda)=\int_{0}^{\infty}e^{-\lambda t}dc(t),\quad\lambda>0.$
Since $c=1\ast g$, we have that $dc(t)=g(t)dt$, and consequently
$\widehat{dc}=\widehat{g}$. Let us now define $h(t)=\frac{d}{dt}e(t)$ for
$t>0$. Since $e\in(\mathcal{BF})$ we have that $h\in(\mathcal{CM})$ and
$\widehat{h}(\lambda)=\lambda\widehat{e}(\lambda)-e(0^{+})=\widehat{f}(\lambda)\
\widehat{g}\bigl{(}\widehat{f}(\lambda)\bigr{)},\quad\lambda>0.$
∎
###### Corollary 3.10.
For all $\delta\in(0,1]$ there exists a pair
$(\phi^{\delta}_{1},\psi^{\delta}_{1})\in(\mathcal{PC})$ such that
$\widehat{\psi^{\delta}_{1}}\bigl{(}t^{-1}\bigr{)}\sim\bigl{(}\log(t)\bigr{)}^{\delta},\quad\text{as}\quad
t\to\infty,$
and
$\widehat{\psi^{\delta}_{1}}\bigl{(}t^{-1}\bigr{)}\sim(t\cdot\log(t^{-1})\bigr{)}^{\delta}\quad\text{as}\quad
t\to 0^{+}.$
If $\delta=1$, we will simply write $(\phi_{1},\psi_{1})\in(\mathcal{PC})$.
###### Proof.
Let $\delta\in(0,1]$. Consider the pair $(\theta,\zeta)\in(\mathcal{PC})$
given by
$\theta(t)=\int_{0}^{1}\dfrac{t^{\alpha-1}}{\Gamma(\alpha)}d\alpha,\quad\text{and}\quad\zeta(t)=\int_{0}^{\infty}\dfrac{e^{-st}}{1+s}ds,\quad
t>0.$ (3.8)
It is a well known fact that both $\theta$ and $\zeta$ are completely
monotonic functions. So, applying Lemma 3.9 with $f=\theta$ and
$g=g_{1-\delta}$, we conclude that there exists $h_{1\delta}\in(\mathcal{CM})$
such that
$\widehat{h_{1\delta}}(\lambda)=\left(\dfrac{\lambda-1}{\lambda\,\log(\lambda)}\right)^{\delta},\quad\lambda>0.$
Applying again Lemma 3.9 with $f=\zeta$ and $g=g_{1-\delta}$, we note that
there exists $h_{2\delta}\in(\mathcal{CM})$ such that
$\widehat{h_{2\delta}}(\lambda)=\left(\dfrac{\log(\lambda)}{\lambda-1}\right)^{\delta},\quad\lambda>0.$
Now define the kernels
$\phi^{\delta}_{1}=g_{1-\delta}\ast
h_{1\delta},\quad\text{and}\quad\psi^{\delta}_{1}=h_{2\delta}.$
Applying directly the Laplace transform, we have
$\widehat{\phi^{\delta}_{1}}(\lambda)=\dfrac{1}{\lambda}\left(\dfrac{\lambda-1}{\log(\lambda)}\right)^{\delta},\quad\text{and
}\quad\widehat{\psi^{\delta}_{1}}(\lambda)=\left(\dfrac{\log(\lambda)}{\lambda-1}\right)^{\delta},\quad\lambda>0.$
(3.9)
We note that by construction $\psi^{\delta}_{1}\in(\mathcal{CM})$. Hence, it
follows from [6, Theorem 5.4 and Theorem 5.5] that
$\phi^{\delta}_{1}\in(\mathcal{CM})$. In consequence
$(\phi^{\delta}_{1},\psi^{\delta}_{1})\in(\mathcal{PC})$. Furthermore, it is
clear that
$\widehat{\psi^{\delta}_{1}}(t^{-1})=\left(\dfrac{\log(t)}{1-t^{-1}}\right)^{\delta},\quad
t>0,$
which in turn implies that
$\widehat{\psi^{\delta}_{1}}(t^{-1})\sim\bigl{(}\log(t)\bigr{)}^{\delta},\quad\text{as}\quad
t\to\infty.$
To compute the behavior of $\widehat{\psi^{\delta}_{1}}(t^{-1})$ as $t\to 0^{+}$,
we rewrite this function as follows
$\widehat{\psi^{\delta}_{1}}\bigl{(}t^{-1}\bigr{)}=\left(\dfrac{t\,\log(t)}{t-1}\right)^{\delta},\quad
t>0,$
which implies that
$\widehat{\psi^{\delta}_{1}}\bigl{(}t^{-1}\bigr{)}\sim\bigl{(}t\cdot\log(t^{-1})\bigr{)}^{\delta}\quad\text{as}\quad
t\to 0^{+}.$
∎
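As a quick numerical sketch (ours, not part of the source), the pairing and the two asymptotic regimes of Corollary 3.10 can be checked directly from the transforms in (3.9): the product $\widehat{\phi^{\delta}_{1}}\,\widehat{\psi^{\delta}_{1}}$ must equal $1/\lambda$, which is the Laplace-transform characterisation of $\phi^{\delta}_{1}\ast\psi^{\delta}_{1}=1$:

```python
import math

# Consistency check for Corollary 3.10:
#   hat{phi}(lam) = (1/lam) * ((lam - 1)/log(lam))**delta,
#   hat{psi}(lam) = (log(lam)/(lam - 1))**delta,
# so hat{phi} * hat{psi} == 1/lam (the (PC) pairing), and
#   hat{psi}(1/t) ~ (log t)**delta   (t -> infinity),
#   hat{psi}(1/t) ~ (t log(1/t))**delta   (t -> 0+).

def phi_hat(lam, delta):
    return ((lam - 1.0) / math.log(lam)) ** delta / lam

def psi_hat(lam, delta):
    return (math.log(lam) / (lam - 1.0)) ** delta

delta = 0.6
for lam in (0.3, 2.0, 50.0):
    assert abs(phi_hat(lam, delta) * psi_hat(lam, delta) - 1.0 / lam) < 1e-12

t_big, t_small = 1e12, 1e-12
r_inf = psi_hat(1.0 / t_big, delta) / math.log(t_big) ** delta
r_zero = psi_hat(1.0 / t_small, delta) / (t_small * math.log(1.0 / t_small)) ** delta
print(r_inf, r_zero)  # both close to 1
```

The pairing identity holds exactly (up to floating-point rounding), and both asymptotic ratios are already very close to 1 at moderate values of $t$.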
###### Remark 3.11.
Let $\delta\in(0,1]$ and let $(\phi^{\delta}_{1},\psi^{\delta}_{1})\in(\mathcal{PC})$
be the pair given in Corollary 3.10. If $\delta=1$, then $(\phi_{1},\psi_{1})$ is the same
pair of functions defined by Kochubei in [11]. On the other hand, since
$\widehat{\psi^{\delta}_{1}}(t^{-1})\sim(\log(t))^{\delta}$ as $t\to\infty$ we
have that $\psi^{\delta}_{1}\notin L_{1}(\mathbb{R}_{+})$.
###### Example 3.12.
Let $\delta\in(0,1]$. Consider the pair
$\displaystyle(k,\ell)=(\phi^{\delta}_{1},\psi^{\delta}_{1})$ given in
Corollary 3.10. Then
$\textrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta}\bigl{(}\log(t)\bigr{)}^{\delta},\quad\text{as}\quad
t\to\infty,$
and
$\textrm{Var}[X(t)]\sim
2\nu\dfrac{\bigl{(}t\cdot\log(t^{-1})\bigr{)}^{2\delta}}{\Gamma(1+2\delta)},\quad\text{as}\quad
t\to 0^{+}.$
###### Proof.
It follows from (3.9) that $\widehat{\ell}$ is a regularly varying function of
index $\varrho=-\delta$. Further, since
$\widehat{\ell}(t^{-1})\sim(\log(t))^{\delta},\ \text{as}\ t\to\infty,$
it follows that $\widehat{\ell}(t^{-1})$ is a slowly varying function.
Therefore, Theorem 3.2 implies that
$\textrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta}\bigl{(}\log(t)\bigr{)}^{\delta},\quad\text{as}\quad
t\to\infty,$
and
$\textrm{Var}[X(t)]\sim
2\nu\dfrac{\bigl{(}t\cdot\log(t^{-1})\bigr{)}^{2\delta}}{\Gamma(1+2\delta)},\quad\text{as}\quad
t\to 0^{+}.$
∎
###### Corollary 3.13.
For all $n\in\mathbb{N}$ there exists a pair
$(\phi_{n},\psi_{n})\in(\mathcal{PC})$ such that $\phi_{n}\in(\mathcal{CM})$
and
$\widehat{\psi_{n}}(t^{-1})\sim\log^{[n]}(t),\ \text{as}\ t\to\infty,$
and
$\widehat{\psi_{n}}(t^{-1})\sim t\,\bigl{(}\log(t^{-1})\bigr{)}^{n}\
\text{as}\ t\to 0^{+},$
where
$\log^{[n]}=\underbrace{\log\circ\log\circ\dots\circ\log}_{n-\rm{times}}$.
###### Proof.
The proof will be done by induction. For $n=1$, we consider the pair
$(\phi_{1},\psi_{1})\in(\mathcal{PC})$ given by Corollary 3.10 with
$\delta=1$.
Let $n>1$. Assume that there exists a pair
$(\phi_{n},\psi_{n})\in(\mathcal{PC})$ such that $\phi_{n}\in(\mathcal{CM})$
satisfying
$\widehat{\psi}_{n}(t^{-1})\sim\log^{[n]}(t),\ \text{as}\ t\to\infty.$
and
$\widehat{\psi_{n}}(t^{-1})\sim t\,\bigl{(}\log(t^{-1})\bigr{)}^{n},\
\text{as}\ t\to 0^{+}.$
Since $\phi_{n}\in(\mathcal{CM})$ and $\phi_{n}\ast\psi_{n}=1$, it follows
from [6, Theorem 5.4 and Theorem 5.5] that $\psi_{n}\in(\mathcal{CM})$. Hence,
applying Lemma 3.9 with $f=\psi_{n}$ and $g=g_{1-\delta}$ with
$\delta\in(0,1)$, we conclude that for all $\delta\in(0,1)$ there exists
$\omega_{n}^{\delta}\in(\mathcal{CM})$ such that
$\widehat{\omega_{n}^{\delta}}(\lambda)=\big{(}\widehat{\psi}_{n}(\lambda)\big{)}^{\delta},\quad\lambda>0.$
Using again [6, Theorem 5.4 and Theorem 5.5] we can establish the existence of
$\varphi^{\delta}_{n}\in(\mathcal{CM})$ such that
$\omega_{n}^{\delta}\ast\varphi_{n}^{\delta}=1$, which in turn implies that
$\widehat{\varphi_{n}^{\delta}}(\lambda)=\frac{1}{\lambda}\bigl{(}\widehat{\psi}_{n}(\lambda)\bigr{)}^{-\delta},\quad\lambda>0.$
Let us now define
$\phi_{n+1}(t)=\int_{0}^{1}\varphi_{n}^{\delta}(t)d\delta,\quad t>0.$ (3.10)
Since the class of completely monotonic functions is closed under addition and
pointwise limits, we conclude that $\phi_{n+1}\in(\mathcal{CM})$. In
consequence, it follows from [6, Theorem 5.4 and Theorem 5.5] that there
exists $\psi_{n+1}\in(\mathcal{CM})$ such that
$\phi_{n+1}\ast\psi_{n+1}=1.$ (3.11)
Furthermore, we have that
$\widehat{\phi_{n+1}}(\lambda)=\int_{0}^{1}\widehat{\varphi^{\delta}_{n}}(\lambda)d\delta=\int_{0}^{1}\dfrac{1}{\lambda}\Bigl{(}\widehat{\psi}_{n}(\lambda)\Bigr{)}^{-\delta}d\delta=\dfrac{\widehat{\psi}_{n}(\lambda)-1}{\lambda\,\widehat{\psi}_{n}(\lambda)\,\log\bigl{(}\widehat{\psi}_{n}(\lambda)\bigr{)}},\quad\lambda>0.$
Since $(\phi_{n+1},\psi_{n+1})\in(\mathcal{PC})$, this in turn implies that
$\widehat{\psi_{n+1}}(\lambda)=\dfrac{\widehat{\psi}_{n}(\lambda)\log\bigl{(}\widehat{\psi}_{n}(\lambda)\bigr{)}}{\widehat{\psi}_{n}(\lambda)-1},\quad\lambda>0.$
(3.12)
Therefore, we have that
$\widehat{\psi_{n+1}}(t^{-1})=\dfrac{\widehat{\psi_{n}}(t^{-1})\log\bigl{(}\widehat{\psi_{n}}(t^{-1})\bigr{)}}{\widehat{\psi_{n}}(t^{-1})-1},\quad
t>0.$
We note that the inductive hypothesis implies that
$\dfrac{\widehat{\psi}_{n}(t^{-1})}{\widehat{\psi}_{n}(t^{-1})-1}\sim\dfrac{\log^{[n]}(t)}{\log^{[n]}(t)-1}\sim
1,\ \text{as}\ t\to\infty.$
In consequence we have
$\widehat{\psi_{n+1}}\bigl{(}t^{-1}\bigr{)}\sim\log\bigl{(}\widehat{\psi}_{n}(t^{-1})\bigr{)}\sim\log^{[n+1]}(t),\
\text{as}\ t\to\infty.$
On the other hand, since $\widehat{\psi_{n}}(t^{-1})\to 0$ as $t\to 0^{+}$, we
have that
$\displaystyle\widehat{\psi_{n+1}}\bigl{(}t^{-1}\bigr{)}\sim\widehat{\psi}_{n}(t^{-1})\log\Bigl{(}\dfrac{1}{\widehat{\psi}_{n}(t^{-1})}\Bigr{)},\
\text{as}\quad t\to 0^{+},$
which by the inductive hypothesis is equivalent to
$\displaystyle\widehat{\psi_{n+1}}\bigl{(}t^{-1}\bigr{)}\sim t\,\bigl{(}\log(t^{-1})\bigr{)}^{n}\Bigl{(}\log(t^{-1})-\log\bigl{(}(\log(t^{-1}))^{n}\bigr{)}\Bigr{)},\quad\text{as}\
t\to 0^{+}.$
We recall that
$\lim_{t\to 0^{+}}\dfrac{\log\bigl{(}(\log(t^{-1}))^{n}\bigr{)}}{\log(t^{-1})}=0,$
for all $n\in\mathbb{N}$. Hence
$\widehat{\psi_{n+1}}(t^{-1})\sim t\,\bigl{(}\log(t^{-1})\bigr{)}^{n+1},\
\text{as}\ t\to 0^{+},$
and the proof is complete. ∎
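A numerical illustration of the recursion (3.12) is straightforward (this sketch is ours, not part of the source): starting from $\widehat{\psi}_{1}(\lambda)=\log\lambda/(\lambda-1)$ and iterating $\widehat{\psi}_{n+1}=\widehat{\psi}_{n}\log\widehat{\psi}_{n}/(\widehat{\psi}_{n}-1)$, the ratio $\widehat{\psi}_{n}(1/t)/\log^{[n]}(t)$ approaches $1$, although extremely slowly in $n$:

```python
import math

# Numerical illustration of Corollary 3.13: the recursion (3.12) applied to
# hat{psi}_1(lam) = log(lam)/(lam - 1) yields hat{psi}_n(1/t) ~ log^[n](t)
# as t -> infinity. Convergence is very slow, so only n = 2 is checked.

def psi_hat(n, lam):
    v = math.log(lam) / (lam - 1.0)
    for _ in range(n - 1):
        v = v * math.log(v) / (v - 1.0)
    return v

t = 1e300
iter_log = math.log(math.log(t))      # log^[2](t)
ratio = psi_hat(2, 1.0 / t) / iter_log
print(ratio)  # close to 1 (about 1.0015 for t = 1e300)
```

Even at $t=10^{300}$ the agreement for $n=3$ is only within roughly $20\%$, reflecting how slowly iterated logarithms stabilise.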
###### Remark 3.14.
Let $n\in\mathbb{N}$ and $(\phi_{n},\psi_{n})\in(\mathcal{PC})$ given in
Corollary 3.13. Since $\widehat{\psi_{n}}(t^{-1})\sim\log^{[n]}(t)$ as
$t\to\infty$, we have that for all $n\in\mathbb{N}$ the functions
$\psi_{n}\notin L_{1}(\mathbb{R}_{+})$.
###### Example 3.15.
Let $n\in\\{2,3,\cdots\\}$. Consider the pair $(k,\ell)=(\phi_{n},\psi_{n})$ given
in Corollary 3.13. Then
$\textrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta}\log^{[n]}(t),\quad\text{as}\quad
t\to\infty,$
and
$\textrm{Var}[X(t)]\sim\nu
t^{2}\,\bigl{(}\log(t^{-1})\bigr{)}^{2n},\quad\text{as}\quad t\to 0^{+},$
where
$\log^{[n]}=\underbrace{\log\circ\log\circ\dots\circ\log}_{n-\rm{times}}$.
###### Proof.
We note that for every $n\geq 2$ the function $\widehat{\psi_{n}}$ is a
regularly varying function of index $\varrho=-1$. Indeed, recall that
$\widehat{\psi}_{1}$ is defined by
$\widehat{\psi}_{1}(t)=\dfrac{\log(t)}{t-1},\quad t>0.$
Hence, it is clear that $\widehat{\psi}_{1}$ is a regularly varying function of
index $\varrho=-1$. Assume now that $\widehat{\psi}_{n}$ is a regularly
varying function of index $\varrho=-1$. Since $\widehat{\psi_{n}}(t)\to 0$ as
$t\to\infty$, we have that
$t\mapsto\dfrac{\widehat{\psi}_{n}(t)}{\widehat{\psi}_{n}(t)-1},\ t>0,$
is a regularly varying function of index $\varrho=-1$. Moreover, by properties
of the logarithmic function we have that
$t\mapsto\log(\widehat{\psi}_{n}(t)),\ t>0,$
is a slowly varying function. Therefore, it follows from (3.12) that
$\widehat{\psi}_{n+1}$ is a regularly varying function of index $\varrho=-1$.
On the other hand, it follows from Corollary 3.13 that
$\widehat{\psi}_{n}(t^{-1})\sim\log^{[n]}(t),\ \text{as}\ t\to\infty.$ (3.13)
Therefore, $\widehat{\ell}(t^{-1})$ is a slowly varying function and Theorem
3.2 implies that
$\textrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta}\log^{[n]}(t),\quad\text{as}\quad
t\to\infty,$
and
$\textrm{Var}[X(t)]\sim\nu
t^{2}\,\bigl{(}\log(t^{-1})\bigr{)}^{2n},\quad\text{as}\quad t\to 0^{+}.$
∎
###### Corollary 3.16.
For all $n\in\mathbb{N}$ and $\delta\in(0,1)$ there exists a pair
$(\phi_{n}^{\delta},\psi_{n}^{\delta})\in(\mathcal{PC})$ such that
$\phi^{\delta}_{n}\in(\mathcal{CM})$ and
$\widehat{\psi^{\delta}_{n}}(t^{-1})\sim\bigl{(}\log^{[n]}(t)\bigr{)}^{\delta},\
\text{as}\ t\to\infty,$
and
$\widehat{\psi^{\delta}_{n}}(t^{-1})\sim
t^{\delta}\,\bigl{(}\log(t^{-1})\bigr{)}^{\delta n}\ \text{as}\ t\to 0^{+}.$
###### Proof.
Let $n\in\mathbb{N}$ and $\delta\in(0,1)$. Consider the pair
$(\phi_{n},\psi_{n})\in(\mathcal{PC})$ given in Corollary 3.13. According to
Lemma 3.9 there are completely monotonic functions $h_{1n}^{\delta}$ and
$h_{2n}^{\delta}$ such that
$\widehat{h_{1n}^{\delta}}(\lambda)=\bigl{(}\widehat{\phi_{n}}(\lambda)\bigr{)}^{\delta},\quad\lambda>0,$
and
$\widehat{h_{2n}^{\delta}}(\lambda)=\bigl{(}\widehat{\psi_{n}}(\lambda)\bigr{)}^{\delta},\quad\lambda>0.$
Now define the kernels
$\phi_{n}^{\delta}=g_{1-\delta}\ast
h_{1n}^{\delta},\quad\text{and}\quad\psi_{n}^{\delta}=h_{2n}^{\delta}.$ (3.14)
Applying directly the Laplace transform, we have
$\widehat{\phi_{n}^{\delta}}(\lambda)=\lambda^{\delta-1}\left(\widehat{\phi_{n}}(\lambda)\right)^{\delta}\quad\text{and
}\quad\widehat{\psi_{n}^{\delta}}(\lambda)=\left(\widehat{\psi_{n}}(\lambda)\right)^{\delta},\quad\lambda>0,$
so that $\widehat{\phi_{n}^{\delta}}(\lambda)\,\widehat{\psi_{n}^{\delta}}(\lambda)=\lambda^{\delta-1}\bigl{(}\widehat{\phi_{n}}(\lambda)\widehat{\psi_{n}}(\lambda)\bigr{)}^{\delta}=\frac{1}{\lambda}.$
We note that by construction $\psi_{n}^{\delta}\in(\mathcal{CM})$. Hence, it
follows from [6, Theorem 5.4 and Theorem 5.5] that
$\phi^{\delta}_{n}\in(\mathcal{CM})$. In consequence
$(\phi^{\delta}_{n},\psi^{\delta}_{n})\in(\mathcal{PC})$. The rest of the
proof follows the same ideas as in the proof of Corollary 3.13. ∎
###### Example 3.17.
Let $n\in\\{2,3,\cdots\\}$ and $\delta\in(0,1)$. Consider the pair
$(k,\ell)=(\phi^{\delta}_{n},\psi^{\delta}_{n})$ given in Corollary 3.16. Then
$\textrm{Var}[X(t)]\sim\dfrac{2\nu}{\eta}\bigl{(}\log^{[n]}(t)\bigr{)}^{\delta},\quad\text{as}\quad
t\to\infty,$
and
$\textrm{Var}[X(t)]\sim
2\nu\dfrac{t^{2\delta}\,\bigl{(}\log(t^{-1})\bigr{)}^{2n\delta}}{\Gamma(1+2\delta)},\quad\text{as}\quad
t\to 0^{+},$
where
$\log^{[n]}=\underbrace{\log\circ\log\circ\dots\circ\log}_{n-\rm{times}}$.
###### Proof.
Following the same ideas as in Corollary 3.13, we note that for all
$n\in\mathbb{N}$ the functions $\widehat{\psi_{n}^{\delta}}$ are regularly
varying functions of index $\varrho=-\delta$ and
$\widehat{\psi_{n}^{\delta}}(t^{-1})$ is a slowly varying function. The rest
of the proof is similar to the proof of Example 3.15. ∎
## 4\. Application to ultra slow diffusion equations
There are other contexts in which the pairs $(k,\ell)\in(\mathcal{PC})$ play a
fundamental role. For example, this type of function has been successfully
exploited to study the so-called sub-diffusion processes, see [8, 15, 20] and
references therein. In order to fix ideas and explain why the results
developed in this work could be of interest in the theory of subdiffusion
processes, we consider the following equation
$\displaystyle\partial_{t}(k\ast(u(\cdot,x)-u_{0}(x)))(t)-\Delta u(t,x)$
$\displaystyle=0,\quad t>0,x\in\mathbb{R}^{d},$ (4.1) $\displaystyle u(0,x)$
$\displaystyle=u_{0}(x),\quad x\in\mathbb{R}^{d},$ (4.2)
where $k$ is a kernel of type $(\mathcal{PC})$ and $u_{0}$ is a given
function. It has been proved in [8, Section 2] that the fundamental solution
of (4.1) coincides with the probability density function of a stochastic
process $X(t)$. Moreover, in [8, Lemma 2.1] it has been proved that the mean
square displacement $M(t)$ of such process is given by
$M(t)=2d(1\ast\ell)(t),\quad t\geq 0.$ (4.3)
The function $M(t)$ measures how fast or slow the diffusion governed by
equation (4.1) is. It is worthwhile to mention that the slowest known rate of
growth of $M(t)$ follows a logarithmic law, for instance see [11] and
references therein. In such work Kochubei considered functions of the form
$k(t)=\int_{0}^{1}g_{\alpha}(t)\sigma(\alpha)d\alpha,\quad t>0,$
where $\sigma(\alpha)$, $\alpha\in[0,1]$, is a continuous, non-negative function
different from zero on a set of positive measure.
Those equations of the form (4.1) whose mean square displacement follows a
logarithmic rate (or even slower) are known in the specialized literature as
ultra slow diffusion equations and they are strongly related with ultraslow
inverse subordinators, see [13].
Our work allows the study of some ultra-slow diffusion equations that have not
been considered before. For instance, the equation (4.1) with
$(k,\ell)=(\phi_{n},\psi_{n})$ for some $n\in\mathbb{N}$, where
$(\phi_{n},\psi_{n})$ has been defined in Corollary 3.13. According to (4.3)
we have that the Laplace transform of $M$ is given by
$\widehat{M}(\lambda)=\frac{2d}{\lambda}\,\widehat{\psi_{n}}(\lambda),\quad\lambda>0,$
which, by the Karamata–Feller theorem (Theorem 2.5) and the asymptotic behavior of
$\widehat{\psi_{n}}$ given in (3.13), implies that
$M(t)\sim 2d\,\log^{[n]}(t),\quad\text{as}\quad t\to\infty.$
In consequence, the mean square displacement $M(t)$ grows at infinity more
slowly than a logarithmic function.
This procedure can be applied to all the pairs of functions in
$(\mathcal{PC})$ defined in Corollary 3.10 and Corollary 3.16. As far as we
know, this implies that there are infinitely many ultra-slow diffusion
equations which have not been analyzed before. All these new interesting
examples will be studied in a forthcoming work.
## References
* [1] N. H. Bingham, C. M. Goldie, and J. L. Teugels, _Regular variation_ , Encyclopedia of Mathematics and its Applications, vol. 27, Cambridge University Press, Cambridge, 1989. MR 1015093
* [2] Ph. Clément and J. A. Nohel, _Abstract linear and nonlinear Volterra equations preserving positivity_ , SIAM J. Math. Anal. 10 (1979), no. 2, 365–388. MR 523852
* [3] by same author, _Asymptotic behavior of solutions of nonlinear Volterra equations with completely positive kernels_ , SIAM J. Math. Anal. 12 (1981), no. 4, 514–535. MR 617711
* [4] William Feller, _An introduction to probability theory and its applications. Vol. II_ , Second edition, John Wiley & Sons, Inc., New York-London-Sydney, 1971. MR 0270403
* [5] Rudolf Gorenflo, Anatoly A. Kilbas, Francesco Mainardi, and Sergei V. Rogosin, _Mittag-Leffler functions, related topics and applications_ , Springer Monographs in Mathematics, Springer, Heidelberg, 2014. MR 3244285
* [6] G. Gripenberg, S.-O. Londen, and O. Staffans, _Volterra integral and functional equations_ , Encyclopedia of Mathematics and its Applications, vol. 34, Cambridge University Press, Cambridge, 1990. MR 1050319
* [7] Niels Jacob, _Pseudo-differential operators and Markov processes_ , Mathematical Research, vol. 94, Akademie Verlag, Berlin, 1996. MR 1409607
* [8] Jukka Kemppainen, Juhana Siljander, Vicente Vergara, and Rico Zacher, _Decay estimates for time-fractional and other non-local in time subdiffusion equations in $\mathbb{R}^{d}$_, Math. Ann. 366 (2016), no. 3-4, 941–979. MR 3563229
* [9] Jukka Kemppainen, Juhana Siljander, and Rico Zacher, _Representation of solutions and large-time behavior for fully nonlocal diffusion equations_ , J. Differential Equations 263 (2017), no. 1, 149–201. MR 3631303
* [10] Jukka Kemppainen and Rico Zacher, _Long-time behavior of non-local in time Fokker-Planck equations via the entropy method_ , Math. Models Methods Appl. Sci. 29 (2019), no. 2, 209–235. MR 3917402
* [11] Anatoly N. Kochubei, _Distributed order calculus and equations of ultraslow diffusion_ , J. Math. Anal. Appl. 340 (2008), no. 1, 252–281. MR 2376152
* [12] Francesco Mainardi, _Fractional calculus and waves in linear viscoelasticity_ , Imperial College Press, London, 2010, An introduction to mathematical models. MR 2676137
* [13] Mark M. Meerschaert and Hans-Peter Scheffler, _Stochastic model for ultraslow diffusion_ , Stochastic Process. Appl. 116 (2006), no. 9, 1215–1235. MR 2251542
* [14] Enzo Orsingher and Luisa Beghin, _Time-fractional telegraph equations and telegraph processes with Brownian time_ , Probab. Theory Related Fields 128 (2004), no. 1, 141–160. MR 2027298
* [15] Juan C. Pozo and Vicente Vergara, _Fundamental solutions and decay of fully non-local problems_ , Discrete $\&$ Continuous Dynamical Systems - A 39 (2019), 639–666.
* [16] by same author, _A non-local in time telegraph equation_ , Nonlinear Anal. 193 (2020), 111411. MR 4062965
* [17] Jan Prüss, _Evolutionary integral equations and applications_ , Modern Birkhäuser Classics, Birkhäuser/Springer Basel AG, Basel, 1993, [2012] reprint of the 1993 edition. MR 2964432
* [18] René L. Schilling, Renming Song, and Zoran Vondraček, _Bernstein functions_ , De Gruyter Studies in Mathematics, vol. 37, Walter de Gruyter & Co., Berlin, 2010, Theory and applications. MR 2598208
* [19] Vicente Vergara, _Asymptotic behaviour of the time-fractional telegraph equation_ , J. Appl. Probab. 51 (2014), no. 3, 890–893. MR 3256235
* [20] Vicente Vergara and Rico Zacher, _Optimal decay estimates for time-fractional and other nonlocal subdiffusion equations via energy methods_ , SIAM J. Math. Anal. 47 (2015), no. 1, 210–239. MR 3296607
1. INAF - Istituto di Astrofisica e Planetologia Spaziali, via del Fosso del Cavaliere, 100, 00133, Rome, Italy
2. IAA - Instituto de Astrofísica de Andalucía-CSIC, Glorieta de la Astronomía s/n, E-18008 Granada, Spain
3. MPIfR - Max Planck Institute for Radio Astronomy, Auf dem Hügel 69, 53121, Bonn, Germany
4. Aalto University Metsähovi Radio Observatory, Metsähovintie 114, 02540 Kylmälä, Finland
5. Aalto University Department of Electronics and Nanoengineering, PL 15500, 00076 Aalto, Finland
6. Lebedev Physical Institute, Leninsky prosp. 53, Moscow, 119991, Russia
7. Moscow Institute of Physics and Technology, Dolgoprudny, Institutsky per., 9, Moscow region, 141700, Russia
8. Departament d’Astronomia i Astrofísica, Universitat de València, C/ Dr. Moliner, 50, 46100 Burjassot, València, Spain
9. Observatori Astronòmic, Universitat de València, C/ Catedràtic José Beltran, 2, 46980 Paterna, València, Spain
10. CSIRO Astronomy and Space Science, PO Box 76, Epping, NSW 1710, Australia
11. JIVE - Joint Institute for VLBI ERIC, Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands
12. Dept. of Astrodynamics and Space Missions, Delft University of Technology, Kluyverweg 1, 2629 HS Delft, The Netherlands
13. Crimean Astrophysical Observatory, Nauchny 298409, Crimea, Russia
14. Department of Physics and Astronomy, Michigan State University, East Lansing, Michigan 48824, USA
15. Sternberg Astronomical Institute, Moscow State University, Universitetskii pr. 13, 119992 Moscow, Russia
# _RadioAstron_ reveals a spine-sheath jet structure in 3C 273
G. Bruni (1, corresponding author), J. L. Gómez (2), L. Vega-García (3), A. P. Lobanov (3,7), A. Fuentes (2), T. Savolainen (4,5,3), Y. Y. Kovalev (6,7,3), M. Perucho (8,9), J.-M. Martí (8,9), P. G. Edwards (10), L. I. Gurvits (11,12,10), M. M. Lisakov (3,6), A. B. Pushkarev (13,6,7), K. V. Sokolovsky (14,15)
(Received 14 September 2020; accepted …)
We present Space-VLBI _RadioAstron_ observations at 1.6 GHz and 4.8 GHz of the
flat spectrum radio quasar 3C 273, with detections on baselines up to 4.5 and
3.3 Earth Diameters, respectively. Achieving the best angular resolution at
1.6 GHz to date, we have imaged limb-brightening in the jet, not previously
detected in this source. In contrast, at 4.8 GHz, we detected emission from a
central stream of plasma, with a spatial distribution complementary to the
limb-brightened emission, indicating an origin in the spine of the jet. While
a stratification across the jet width in the flow density, internal energy,
magnetic field, or bulk flow velocity is usually invoked to explain the limb-
brightening, the different jet structure detected at the two frequencies
probably requires a stratification in the emitting electron energy
distribution. Future dedicated numerical simulations will allow the
determination of which combination of physical parameters is needed to
reproduce the spine/sheath structure observed by _RadioAstron_ in 3C 273.
###### Key Words.:
galaxies: active – galaxies: jets – quasars: individual: 3C273
## 1 Introduction
About 10% of Active Galactic Nuclei (AGN) have prominent relativistic jets of
plasma extending up to megaparsec-scale distances from the supermassive black
hole (SMBH) which powers them (see e.g. Sikora et al. 2007 and references
therein). These are readily studied at radio wavelengths, and propagate in the
inner, near-nucleus segment as an outflow with a parabolic shape, switching to a
nearly conical one downstream (Asada & Nakamura, 2012; Nokhrina et al.,
2019; Kovalev et al., 2020). The launching, acceleration, and collimation of
such relativistic jets are the subjects of ongoing theoretical and
observational studies. The two most widely accepted models are the Blandford &
Znajek (1977) mechanism, in which the rotational energy of the black hole
powers jet launching, and the Blandford & Payne (1982) mechanism, in which the
accretion disk produces a magnetically driven wind. More recently, a spine-
sheath model, involving a stratified inner jet, has been proposed and
developed by different authors (e.g. Pelletier & Roland 1989; Sol et al. 1989;
Celotti et al. 2001; Ghisellini et al. 2005; Tavecchio & Ghisellini 2008;
D’Arcangelo et al. 2009; Xie et al. 2012; Mimica et al. 2015). In this
scenario, the jet flow consists of two different fluids: a fast, low-density
component, streaming along the central axis (spine) and emerging from the
immediate vicinity of the black hole, and a slower, denser component at the
edges (sheath) of the conical jet, emerging from the accretion disk. This
would imply that both the Blandford & Znajek (1977) and the Blandford & Payne
(1982) mechanisms can be invoked to produce the two components of the outflow
(Hardee, 2007; Xie et al., 2012). Notably, as an effect of relativistic
Doppler boosting, the spine would dominate the overall jet emission
in blazars (seen at small viewing angles from the jet axis), while the sheath
would be more evident in radio galaxies (seen at large viewing angles).
The Very Long Baseline Interferometry (VLBI) technique enables the parsec-
scale structure of nearby AGN jets to be resolved and the structure of such
outflows to be studied in great detail, including the inner regions in the
proximity of the jet base. This powerful technique has allowed prominent
emission from the jet edges — also known as limb-brightening, and related to
the sheath structure mentioned above — to be detected in a number of sources.
Among the pioneering works on this topic, Attridge et al. (1999) presented the
first linear polarization observations of a spine/sheath structure in the
quasar 1055+018, performed with the Very Long Baseline Array (VLBA). Later, a
similar structure in the BL Lac object Mrk 501 was found by different authors:
Giroletti et al. (2004), through observations with the first dedicated Space
VLBI (SVLBI) mission, VSOP/HALCA, revealed a limb brightening in the jet
structure, and explained it in terms of a velocity gradient in the jet.
Later, Pushkarev et al. (2005) confirmed the result detecting a spine-sheath
polarization structure with the VLBA. More recently, Boccardi et al. (2016)
presented Global Millimeter VLBI Array (GMVA) observations of Cygnus A at 86
GHz, revealing a limb-brightening in the jet flow, with a transverse width
suggesting a launching point in the accretion disk rather than in the SMBH
vicinity. Kim et al. (2018) stacked five GMVA epochs to image the jet base in
M87, finding that the limb-brightened structure could be anchored in the inner
portion of the accretion disk, similarly to Cygnus A. A stratification of the
jet flow in the same source was previously found by Mertens et al. (2016),
through the kinematic analysis of multiple VLBA images at 43 GHz. Finally,
Giovannini et al. (2018) used Space VLBI _RadioAstron_ observations (see
below) to reveal a bright outer jet layer in 3C 84, with a wide jet base
suggesting either a rapid lateral expansion of the jet within 100
$r_{\mathrm{g}}$ from the black hole or an origin in the accretion disk.
The _RadioAstron_ (hereafter RA) Space VLBI mission (Kardashev et al., 2013),
led by the Astro Space Center (ASC, Moscow, Russia) and the Lavochkin
Scientific and Production Association (Khimki, Russia), operated between 2011
and 2019. With a diameter of 10 meters, the RA space radio telescope performed
interferometric observations with arrays of ground radio telescopes, with a
maximum Earth-space baseline at apogee of $\sim$350,000 km. It operated at
0.32 GHz (P-band), 1.6 GHz (L-band), 4.8 GHz (C-band), and 22 GHz (K-band).
Three Key Science Programmes (KSPs) on AGN imaging have collected data since
2013 to study the launching, collimation, and magnetic field properties of
jets in known AGN (see Bruni et al. 2020 for a summary of previous results and
observed targets). In particular, the RA AGN polarization KSP aims to probe
the jet structure and magnetic field configuration at angular resolutions down
to a few tens of $\mu$as, in AGN known to have the most prominent polarization
properties. More than 20 imaging experiments have been performed. Within the
project, observations at K-band of BL Lac from the first observing period
(Announcement of Opportunity 1, AO-1, July 2013 – June 2014) were presented in
Gómez et al. (2016), producing the image with the highest angular resolution
to date (21 $\mu$as) and revealing helical magnetic fields in the jet.
The flat-spectrum radio source 3C 273, the subject of this work, is the first
identified quasar (Hazard et al., 1963; Oke, 1963; Schmidt, 1963). One of the
most observed VLBI targets, 3C 273 offered an opportunity for studying the jet
cross-section morphology with VSOP/HALCA (Lobanov & Zensus, 2001). The quasar
3C 273 was also investigated by RA in a non-imaging mode. These observations
at 18, 6, and 1.3 cm resulted in the detection of brightness temperatures in
the core of the source exceeding the inverse Compton limit (Kovalev et al.,
2016) and of potential refractive substructure (Johnson et al., 2016). The source
was targeted twice as a part of the RA AGN polarization KSP: Bruni et al.
(2017) performed 22 GHz observations of 3C 273, showing a brightness
temperature drop of two orders of magnitude in only one year compared to
Kovalev et al. (2016).
Here we present a two-frequency, 1.6 and 4.8 GHz, study of the jet structure
in 3C 273. Taking advantage of the unprecedented angular resolution offered by
RA at 1.6 GHz, we imaged for the first time a limb-brightened jet structure
for this source, with a spatial distribution complementary to a spine-
dominated emission detected at 4.8 GHz during previous RA observations,
performed in the framework of the Strong AGN KSP. A comparison with the 4.8
GHz RA images provides new insight into the properties of the jet flow.
Figure 1: The $u,v$-coverage for _RadioAstron_ observations at 1.6 GHz (left
panel) and 4.8 GHz (right panel) presented in this work. The central bulge (in
grey) of $u,v$-tracks spans about one Earth Diameter (ground-station
baselines), while the “wings” (in colors) represent the _RadioAstron_ space-
baseline contribution. Only space segments giving fringes are plotted, i.e.,
up to a maximum projected space–ground baseline of $\sim$4.5 ED at 1.6 GHz and
$\sim$3.3 ED at 4.8 GHz.

Figure 2: _RadioAstron_ images of 3C 273 at 1.6 GHz (left panel, June 2014)
and 4.8 GHz (right panel, April 2014). The beam is shown in the lower-left
corner: 1.04$\times$0.58 mas, P.A. 79.4° at 1.6 GHz, and 0.86$\times$0.46 mas,
P.A. 22.0° at 4.8 GHz. The two lowest contour levels are $\pm$3 and $\pm$7
times the RMS of the image noise level (3$\times$3.5 mJy/beam at 1.6 GHz,
7$\times$2.2 mJy/beam at 4.8 GHz). Successive contours are drawn as
$c_{n}=(3/2)\times c_{n-1}$ up to 90$\%$ of the total intensity peak (0.57
Jy/beam at 1.6 GHz, 2.01 Jy/beam at 4.8 GHz).

Figure 3: Example of jet profiles (right) for different positions along the
stream axis (left) at 1.6 GHz. The distance from the axis is indicated in mas
in the right panel.
## 2 _RadioAstron_ observations and data processing
Observations at 1.6 GHz were performed on 2014 June 13, under project code
GA030F for ground antennas and raks04f for RA. The array was composed of
antennas in Russia (Kalyazin 64 m, Badary 32 m, Zelenchukskaya 32 m), Japan
(Usuda 64 m), Australia (ATCA 5$\times$22 m, Ceduna 30 m, Hobart 26 m, Mopra
22 m, Parkes 64 m), New Zealand (Warkworth 12 m) and South Africa
(Hartebeesthoek 26 m). The tracking station for RA was Pushchino for the
entire experiment. The total observing time was 9 hours (05–14 UT). RA
participated with 10 scans, 14.5 minutes each, for a total of 2.4 hours on
target, covering the space-baselines between 1.6 and 4.5 Earth Diameters (ED).
Observations at 4.8 GHz were performed on 2014 April 30 with the project code
GL038F for the ground array and raks05d for RA. The antennas composing the
ground array were in Russia (Kalyazin 64 m), Australia (Ceduna, Hobart), South
Africa (Hartebeesthoek), USA (Mauna Kea 25 m), and Europe (Effelsberg 100 m,
Noto 32 m, Onsala 25 m, Torun 32 m, Yebes 40 m, Westerbork 11$\times$25 m).
The tracking station for RA was Pushchino for the entire experiment. The total
observing time was 12 hours (10–22 UT). RA participated with 27 scans, 9.5
minutes each, for a total of 4.3 hours on target, covering the space-baselines
between 0.9 and 3.3 ED. Fig. 1 presents the $u,v$-coverage of the two RA
experiments discussed in this work. The data from both observing bands were
processed at the MPIfR correlator, making use of the RA-dedicated version of
the DiFX software correlator (Bruni et al., 2016). Fringe-fitting at the
correlator stage was performed using the largest available antennas as
references for each experiment (ATCA, Parkes, Effelsberg), first setting the
clock value for the ground array antennas, and then searching for signal in
each RA scan: this allowed us to have a first-order solution for each space-
ground scan that could later be refined through baseline-stacking (see below
for details). For scans giving no fringes, we applied extrapolated clock
values from the successful part of the experiment.
The following data reduction strategy in AIPS
(http://www.aips.nrao.edu/index.shtml), proven to be successful for RA
AGN imaging projects, was adopted for both experiments. First, the a priori
amplitude calibration was applied using the values for the antenna gains and
system temperature measured at each antenna during the observations. A
parallactic angle correction was applied to the ground array antennas to
account for the axis rotation of the antenna feeds with respect to the target
source. The data were then fringe-fitted. The ground array data were fringe-
fitted first, then the SVLBI baselines, using stacked solutions of the ground
baselines and a model of the source (baselines stacking within the FRING task
in AIPS). This procedure was repeated scan by scan for the SVLBI baselines in
order to use a more accurate model describing the source structure. The
solution interval was set to 2 min for ground array observations and to 4 min
for the SVLBI scans, adopting an SNR threshold of 5. At 1.6 GHz, SVLBI fringes
were found on baselines up to $\sim$4.5 ED, while at 4.8 GHz up to $\sim$3.3
ED.
Finally, we imaged the calibrated data in Difmap
(ftp://ftp.astro.caltech.edu/pub/difmap/difmap.html; Shepherd, 1997).
First, we flagged RA scans for which SVLBI fringes gave an SNR$<$5 in AIPS.
Visibilities were then averaged over 30 seconds at 1.6 GHz and over 10
seconds at 4.8 GHz, and the standard uniform weighting scheme (uvw=2,–1) was
applied at both frequencies. The source model was built through the standard
iterative CLEANing and phase self-calibration technique, adopting solution
intervals equal to the data averaging time. The feasibility of phase self-
calibration was assured by the high SNR of the visibilities, and the abundance
of baselines. Once a satisfactory model was obtained, amplitude self-
calibration was performed as well. The final angular resolution and RMS noise
of the images were 1.04$\times$0.58 mas and 3.5 mJy/beam at 1.6 GHz, and
0.86$\times$0.46 mas and 2.2 mJy/beam at 4.8 GHz.
## 3 Results
### 3.1 Limb-brightened jet emission
The RA observations presented in this work allowed us to reach an
unprecedented angular resolution at 1.6 GHz for this source, and to detect for
the first time a limb-brightened jet in 3C 273 (Fig. 2, left panel). The beam
in this image obtained with a uniform weighting scheme is 1.04$\times$0.58
mas, with a position angle (PA) of 79.4° and an image RMS noise of 3.5 mJy/beam.
As clearly visible, the structure of the jet is dominated by an elongated
structural feature unresolved in the direction orthogonal to the jet for the
inner 5 mas from the core, while two distinct emission trails are visible
along the rest of the jet, extending towards the south-west for a further
$\sim$20 mas. A brighter region is present at about 15 mas from the core,
where the jet bends towards the south, that may be associated with a shock
produced by the interaction of the jet with the ambient medium (cf. the cases
of 4C 41.17, Gurvits et al. 1997, and J0906$+$6930, An et al. 2020). This
limb-brightened emission was not seen in previous observations of 3C 273, most
probably because of the lower angular resolution of previous images (see,
e.g., the VLBA image, obtained at 1.7 GHz in 2011, reported in Kovalev et al.
2016).
We further investigated the limb-brightening properties through a detailed
tomography along the jet. The MOJAVE team produced stacked images of 373 jets,
including 3C 273, at 15 GHz with a time span of 20 years (Pushkarev et al.,
2017). As demonstrated in that work, it is necessary to stack the images in
order to properly map the full jet width at 15 GHz. In Fig. 4 (right panel),
we present the contours from the MOJAVE stacked image overlaid on our L-band
image, with the purpose of comparing the different structures. A shift of
43$\times$55 pixels in RA and Dec, respectively, was applied to align the two
images, corresponding to 2.15 mas eastward and 2.75 mas northward (see section
3.2 for a full description of the method). No evident limb-brightened emission
is detected in the MOJAVE data. Furthermore, the emission is not symmetric
along the full length of the observed jet: the first $\sim$5 mas from the core
shows an enhanced emission in the southern region, while this shifts to the
northern side from $\sim$5 to $\sim$10 mas. Remarkably, the cores and the
innermost $\sim$5 mas jet emission are consistent in the two images, and the
limb-brightening lies at the edges of the MOJAVE contours. Overall, the MOJAVE
intensity distribution, including the jet curvature, is in agreement with our
SVLBI results.
We used the jet ridge-line from MOJAVE observations to measure the jet profile
along slices perpendicular to it in our RA image, for a total of 440 profiles
of 6 mas width. The first 72 profiles, corresponding to the inner 3 mas, do
not show a clear double-peaked structure. Farther down the flow, between
$\sim 3$ and $\sim 4$ mas from the core, a double structure is visible, but
the two peaks are very close to each other.
Beyond 4 mas, the double-peaked structure is more prominent, showing a clear
limb-brightening. In Fig. 3, we present five representative jet profiles,
clearly showing the double peak resulting from the limbs. The latter can be
roughly four times brighter than the central region of the jet. Moreover, as
seen in the MOJAVE stacked image, the southern region is brighter in the first
few mas from the core, while the northern side is stronger in the second half
of the jet.
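The slicing used for these profiles can be sketched in a few lines. This is a minimal illustration rather than the actual pipeline: the image is a synthetic two-ridge test pattern, and nearest-neighbour sampling is used instead of interpolation.

```python
import numpy as np

def transverse_profile(image, point, direction, half_width_pix, n_samples=121):
    """Sample a brightness profile perpendicular to the local ridge-line
    direction at `point` = (x, y), using nearest-neighbour sampling."""
    dx, dy = direction / np.linalg.norm(direction)
    nx, ny = -dy, dx                          # unit normal to the ridge line
    t = np.linspace(-half_width_pix, half_width_pix, n_samples)
    xs = np.round(point[0] + t * nx).astype(int)
    ys = np.round(point[1] + t * ny).astype(int)
    return t, image[ys, xs]                   # NumPy indexing is (row, col) = (y, x)

# Synthetic limb-brightened cross-section: two Gaussian ridges along x
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((y - 24) ** 2) / 8.0) + np.exp(-((y - 40) ** 2) / 8.0)

# Cut perpendicular to a horizontal ridge line at (32, 32)
t, prof = transverse_profile(img, point=(32, 32),
                             direction=np.array([1.0, 0.0]), half_width_pix=30)
# prof is double-peaked, mimicking the limb-brightened profiles of Fig. 3
```

In the actual analysis the cuts follow the curved MOJAVE ridgeline, so the local direction changes from point to point.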
Figure 4: Left panel: the _RadioAstron_ images of 3C 273 at 1.6 GHz (contours)
and 4.8 GHz (in colors) obtained in 2014. Both are convolved with the MOJAVE
circular beam (0.83$\times$0.83 mas). Right panel: the _RadioAstron_ 1.6 GHz
image with the stacked image from MOJAVE, both convolved with the MOJAVE
circular beam of the same size as in the left panel. The two lowest contour
levels are $\pm$5 and $\pm$9 times the RMS noise level (5$\times$2.8 mJy/beam
at 1.6 GHz, 9$\times$0.5 mJy/beam at 15 GHz, respectively). Successive
contours are drawn as $c_{n}=(3/2)\times c_{n-1}$ up to 90$\%$ of the total
intensity peak (0.7 Jy/beam at 1.6 GHz, 6.1 Jy/beam at 15 GHz).
Figure 5: Left panel: spectral index map obtained from the _RadioAstron_
images of 3C 273 at 1.6 GHz and 4.8 GHz, both convolved with the MOJAVE
circular beam (0.83$\times$0.83 mas). Pixel values below 5$\sigma$ for both
the 1.6 GHz and 4.8 GHz images were blanked. The dotted line in the color bar
indicates a value of 1. Right panel: spectral index error map. The adopted
convention for the spectral index $\alpha$ is $S\propto\nu^{\alpha}$.
Figure 6: The 300 jet streamlines used to calculate the integrated flux
density along the flow, parallel to the jet axis from MOJAVE (central line in
black), overplotted on the 1.6 GHz (left panel) and 4.8 GHz (right panel)
_RadioAstron_ contour maps (both convolved with the MOJAVE beam). The
starting point is set at 4 mas from the core, where the spine/sheath structure
becomes evident. Central panel: integrated flux density, versus distance from
jet axis, calculated along jet streamlines for the 1.6 GHz (left) and 4.8 GHz
(right) _RadioAstron_ images. The profile uncertainty is reported as a shaded
area.
### 3.2 Evidence of a spine/sheath structure along the jet
Given the results from the 1.6 GHz observations described above, we considered
the RA 4.8 GHz image of the same source, obtained less than 2 months earlier,
in order to compare the two jet structures. The 4.8 GHz image is presented in
Fig. 2: an angular resolution of 0.86$\times$0.46 mas was obtained (uniform
weighting) with a beam PA of 22°. The average image RMS noise is $\sim$2.2
mJy/beam. In this image, a single stream is visible for the whole jet, with no
indication of any limb-brightening, contrary to what is seen at 1.6 GHz.
Remarkably, the jet curvature is the same as reconstructed in the 1.6 GHz
image, and the brighter regions are also in agreement between the two maps.
To better compare the different jet morphologies visible in the two images, we
superimposed the 1.6 GHz map with the one at 4.8 GHz, restored with a matched
circular beam equal to the MOJAVE one (0.83$\times$0.83 mas). Image
registration was performed via a cross-correlation analysis of the total
intensity maps (see Gómez et al. 2016, and references therein). In particular,
we considered the inner jet spine only, which looks similar in the two images,
covering the first $\sim$5 mas. Several spectral index maps were produced,
adopting a different shift between the two maps, to identify by visual
inspection the one showing the smallest gradients across the jet width (Fig.
5, see Plavin et al. 2019 for more details about this method). The final shift
adopted was 36$\times$48 pixels in RA and Dec, respectively, for the 4.8 GHz
image, corresponding to 1.8 mas eastwards and 2.4 mas northward.
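For reference, the quoted pixel shifts and angular offsets (36 and 48 pixels for 1.8 and 2.4 mas here; 43 and 55 pixels for 2.15 and 2.75 mas in Sect. 3.1) imply a common map scale of 0.05 mas per pixel. A minimal sketch of the conversion, with that scale treated as an inferred assumption rather than a stated parameter:

```python
# Pixel scale implied by the shifts quoted in the text (inferred, not stated).
PIXEL_SCALE_MAS = 0.05  # mas per pixel

def shift_to_mas(dx_pix, dy_pix, scale=PIXEL_SCALE_MAS):
    """Convert a registration shift in pixels to (RA, Dec) offsets in mas."""
    return dx_pix * scale, dy_pix * scale

ra_mas, dec_mas = shift_to_mas(36, 48)  # shift applied to the 4.8 GHz map
```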
The resulting image is presented in Fig. 4 (left panel), where 1.6 GHz is
represented in contours and 4.8 GHz in colors. It is evident that the emission
at 4.8 GHz falls between the two limbs detected at 1.6 GHz, suggesting that
the former traces the jet spine, while the latter the sheath. Although
dominance of the jet spine or the jet sheath has been observed in other
sources, for the first time here we detect the two structures at frequencies
so closely spaced (1.6 GHz vs. 4.8 GHz).
Finally, in order to quantify the prominence of the limbs and the spine at the
two frequencies, we traced 300 lines parallel to the jet axis (i.e., the
MOJAVE ridgeline), also called fluid lines, covering the entire jet width, and
calculated the integrated flux density along each line at both frequencies. To
sample the jet, we cut it orthogonally to the MOJAVE ridgeline: since the
ridgeline is curved, a line is first fitted between two adjacent points along
the ridgeline, then a cut perpendicular to this line is made; this process is
repeated for the whole length of the ridgeline, and the flux density is
derived along each cut. The transverse profile of the jet flux density at 1.6
GHz and 4.8 GHz is then obtained by summing the flux density along the fluid
lines. The fluid lines used to integrate the flux density along the flow,
together with the estimated integrated profiles, are shown in Fig. 6.
Uncertainties have been computed for each streamline as:
$S_{err}=\sqrt{\displaystyle\sum_{i=1}^{N}\left(\frac{RMS}{S_{i}}\right)^{2}},$ (1)
where $N$ is the number of pixels along the streamline, and RMS is the
standard deviation of the pixel flux densities ($S_{i}$). We again found a
transverse double-peaked profile at 1.6 GHz, and a single-peaked one at 4.8
GHz (see Fig. 6), confirming the spine/sheath jet structure.
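Equation (1) is straightforward to implement; the sketch below is illustrative only, with a hypothetical streamline of pixel flux densities rather than the actual map data:

```python
import numpy as np

def streamline_uncertainty(s):
    """Per-streamline uncertainty following Eq. (1):
    S_err = sqrt(sum_i (RMS / S_i)^2), with RMS the standard
    deviation of the pixel flux densities S_i along the line."""
    s = np.asarray(s, dtype=float)
    rms = s.std()
    return np.sqrt(np.sum((rms / s) ** 2))

# Hypothetical streamline of pixel flux densities (Jy/beam)
profile = np.array([0.10, 0.12, 0.11, 0.13, 0.09])
err = streamline_uncertainty(profile)
```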
It should be noted that the double helical structure reported in the C-band
image of 3C 273 obtained from the VSOP observation (Lobanov et al., 2000;
Lobanov & Zensus, 2001) may remain undetected in our C-band image, owing to an
$\approx 3.5$ times smaller beam and a factor of $\approx 2.5$ lower dynamic
range of the RA C-band image presented here. This results in a sensitivity to
weak and extended emission almost an order of magnitude lower, which may
preclude effective detection of the transverse structure of the flow in the RA
image at distances larger than $\approx 5$ mas.
## 4 Possible physical factors concurring in the observed jet structure
### 4.1 Propagating structures along the jet
Vega-García (2018) reported an oscillation in the jet direction of 3C 273
between the previous VSOP observation (Lobanov & Zensus, 2001) and the RA
observation, separated by seventeen years. The author reported an oscillation
velocity of $\simeq 0.5\,c$, and a pattern speed of $(0.070\pm 0.016)\,c$.
These velocities, clearly smaller than the flow speed revealed by, e.g., the
strong brightness asymmetry, show that the observed structures are caused by
waves that propagate through the jet (i.e. helical patterns, see Perucho et al.
2012; Cohen et al. 2015; Vega-García et al. 2019). The wavelengths that can be
derived from the RA observations reported there coincide with those given in
Lobanov & Zensus (2001), plus a further, longer wavelength of $\simeq 50\,{\rm
mas}$, due to the sensitivity achieved at larger scales.
In this respect, the limb brightening observed in this work can be caused by a
combination of the off-axis pressure enhancements caused by the helical and
elliptical waves (see Lobanov & Zensus 2001) and the interaction of the jet
with its environment as it oscillates, in the same way as reported by Walker
et al. (2018) for the case of M87. The compression of the gas and the magnetic
lines due to the medium resistance to jet expansion can cause the observed
rise in emissivity.
### 4.2 Velocity stratification in the jet flow
The observed limb-brightening of the RA image at 1.6 GHz requires a
stratification across the jet width in the flow density, internal energy,
magnetic field, and/or bulk flow velocity. A “hollow” jet, with a steep
increase in the particle and/or magnetic field energy towards the jet edges
can naturally explain the limb brightening, as shown for instance in Ogihara
et al. (2019). Numerical simulations also predict a faster jet spine,
surrounded by a slower, denser sheath (see, e.g., the previously mentioned
two-fluids model, developed by various authors starting in the 1980s). For a
given viewing angle, $\theta$, we can compute the Lorentz factor that
maximizes the Doppler boosting, which is given by
$\Gamma_{\max}=1/\sin\theta$. Jets with a bulk Lorentz factor at the jet
spine significantly larger than $\Gamma_{\max}$ will show a limb-brightening,
while those with a value similar or smaller than this will show a spine
brightening instead. For 3C 273, the estimated viewing angle is
$\theta=6^{\circ}$, which yields $\Gamma_{\max}\approx 10$ (Jorstad et
al., 2005; Savolainen et al., 2006). Hence, a stratification in the jet flow,
with a progressive deceleration towards the jet edges, will easily produce the
limb-brightening emission seen in the 1.6 GHz RA image.
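This behaviour can be checked numerically with the standard Doppler factor $\delta=[\Gamma(1-\beta\cos\theta)]^{-1}$; the spine Lorentz factor of 20 below is a hypothetical value chosen only to illustrate the de-boosting of a flow much faster than $\Gamma_{\max}$:

```python
import numpy as np

def doppler_factor(gamma, theta_deg):
    """Doppler factor delta = 1 / (Gamma * (1 - beta * cos(theta)))."""
    theta = np.radians(theta_deg)
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

theta = 6.0                                      # viewing angle of 3C 273 (deg)
gamma_max = 1.0 / np.sin(np.radians(theta))      # ~9.6, maximizes the boosting

delta_sheath = doppler_factor(gamma_max, theta)  # layer moving at Gamma_max
delta_spine = doppler_factor(20.0, theta)        # hypothetical faster spine
# delta_spine < delta_sheath: the fast spine is de-boosted relative to
# slower layers, favouring limb-brightened emission.
```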
### 4.3 Energy stratification across the jet
The aforementioned stratification in the jet flow, as expected for a two-fluid
model, should however produce a very similar limb-brightening at both the
observed frequencies of 1.6 GHz and 4.8 GHz. To explain why at 4.8 GHz we
observe a spine-brightening instead, a stratification across the jet width in
the emitting electron energy distribution is required. For the case of a two-
fluid model (with a variable flow velocity), such a stratification will be
amplified via Doppler boosting, which in addition will shift the emitting
frequency of the electrons in the fluid frame to the one measured in the
observer’s frame. This stratification in energy can be produced by larger
radiative losses at the jet axis (where most of the emission is produced at
high frequencies), leading to higher energy electrons and therefore increased
emission at progressively higher frequencies towards the jet axis. This should
lead to a clear stratification in the spectral index across the jet width, as
visible in Fig. 5.
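With the adopted convention $S\propto\nu^{\alpha}$, the two-point spectral index follows pixel by pixel from the beam-matched, aligned maps; a minimal sketch with hypothetical flux densities:

```python
import numpy as np

def spectral_index(s1, s2, nu1=1.6e9, nu2=4.8e9):
    """Two-point spectral index alpha for S proportional to nu^alpha:
    alpha = log(S1 / S2) / log(nu1 / nu2)."""
    return np.log(np.asarray(s1) / np.asarray(s2)) / np.log(nu1 / nu2)

# Hypothetical pixel flux densities (Jy/beam)
alpha_flat = spectral_index(0.5, 0.5)    # equal flux at both bands -> alpha = 0
alpha_steep = spectral_index(0.8, 0.4)   # brighter at 1.6 GHz -> alpha < 0
```

In practice, pixels below the 5$\sigma$ blanking threshold used for Fig. 5 would be masked before this ratio is taken.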
### 4.4 Instrumental effects evaluation
Significant gaps in the _u,v_ -plane can potentially generate artifacts in the
final image. The experimental setup of SVLBI observations is prone to this
issue when performed with a space antenna at a distance larger than
$\sim$1 ED from the ground. However, depending on the brightness, extension, and
morphology of the source under study, the potential artifacts often have a
limited impact on the final images. Through a multi-epoch analysis of the jet
morphology in S5 0836+710, Perucho et al. (2012) and Vega-García et al. (2019)
discussed how _u,v_ -coverage should not introduce relevant differences in the
observed jet structure, but only minor ripples along the ridge lines. Indeed,
they found the latter to be consistent among different epochs of observations,
and instrumental setups. The features discussed in the present work are
prominent (SNR$>$100), with an extension of several beams, and visible as long
waves along the jet ($\sim$10 mas), so we can reasonably exclude an
instrumental effect due to gaps in _u,v_ -coverage. In addition, it has been
shown that spectral indices (and hence flux distributions) obtained from
observations with uneven _u,v_ -coverage can be trusted (i.e. have an accuracy
$>$90%) for a sufficient pixel SNR ($>$5), and when within 10–15 mas of the
phase center (Lobanov, 1998). Both these conditions are satisfied by the
structures presented in this work.
## 5 Conclusions
We have presented an analysis of RA images at 1.6 GHz and 4.8 GHz for 3C 273,
the former being the highest angular resolution image to date for this source
at this frequency. Our findings can be summarized as follows:
* •
For the first time, a limb-brightened emission is evident in the image at 1.6
GHz, showing an enhanced emission for the two edges of the jet, starting from
about 4 mas from the core and following the whole jet extension.
* •
Conversely, at 4.8 GHz only a single stream is detected, located between the
edges of the 1.6 GHz jet. This is confirmed by the jet
profile drawn from the integrated flux density along the jet streamlines,
indicating a double-peaked profile at 1.6 GHz and a single-peaked one at 4.8
GHz.
* •
The observed morphology is indicative of a spine/sheath structure in the jet.
This can be explained in terms of the following, concurring, physical factors:
1) helical patterns propagating along the jet, similar to the ones reported in
previous Space-VLBI observations of this source (Lobanov & Zensus, 2001); 2) a
velocity stratification in the jet flow, with a faster jet spine and a slower,
denser, sheath; 3) an energy stratification across the jet, necessary to
produce the noticeable morphological differences between the two observed
close frequencies. This stratification scenario is supported by the spectral
index gradient measured across the jet, and consistent along its extension.
A more detailed quantitative consideration through dedicated general
relativistic magneto-hydrodynamical numerical simulations, able to reproduce
the spine/sheath structure seen in these RA observations of 3C 273, will be
published in a future work.
###### Acknowledgements.
The research at the IAA-CSIC was supported in part by the Spanish Ministerio
de Economía y Competitividad through grant AYA2016-80889-P and the State Agency
for Research of the Spanish MCIU through the “Center of Excellence Severo
Ochoa” award for the Instituto de Astrofísica de Andalucía grant
SEV-2017-0709. APL, YYK, and ABP were supported by the Russian Science
Foundation (project 20-62-46021). TS was partly supported by the Academy of
Finland projects 274477 and 315721. MP acknowledges the support by the Spanish
Ministerio de Ciencia e Innovación (MICINN) under grant PID2019-105510GB-C31.
MP and JMM acknowledge financial support from the Spanish Ministry of Science
through Grants PID2019-107427GB-C33 and AYA2016-77237-C3-3-P, and from the
Generalitat Valenciana through grant PROMETEU/2019/071. LIG acknowledges
support by the CSIRO Distinguished Visitor Programme. The _RadioAstron_
project is led by the Astro Space Center of the Lebedev Physical Institute of
the Russian Academy of Sciences and the Lavochkin Scientific and Production
Association under a contract with the Roscosmos State Corporation, in
collaboration with partner organizations in Russia and other countries. This
publication has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No 730562 [RadioNet].
This paper includes data observed with the 100-m Effelsberg radio-telescope,
which is operated by the Max-Planck-Institut für Radioastronomie in Bonn
(Germany). The National Radio Astronomy Observatory is a facility of the
National Science Foundation operated under cooperative agreement by Associated
Universities, Inc. The European VLBI Network is a joint facility of
independent European, African, Asian, and North American radio astronomy
institutes. The Long Baseline Array is part of the Australia Telescope
National Facility which is funded by the Australian Government for operation
as a National Facility managed by CSIRO. This research made use of Python
(http://www.python.org), Numpy (van der Walt et al., 2011), Pandas (McKinney,
2010), and Matplotlib (Hunter, 2007). We also made use of Astropy
(http://www.astropy.org), a community-developed core Python package for
Astronomy (Astropy Collaboration et al., 2013, 2018).
## References
* An et al. (2020) An, T., Mohan, P., Zhang, Y., et al. 2020, Nature Communications, 11, 143
* Asada & Nakamura (2012) Asada, K. & Nakamura, M. 2012, ApJ, 745, L28
# Reducing Malware Analysis Overhead with Coverings
Mohsen Ahmadi1, Kevin Leach2, Ryan Dougherty3, Stephanie Forrest1, and Westley Weimer2
1Arizona State University, Tempe, Arizona
2University of Michigan, Ann Arbor, Michigan
3United States Military Academy, West Point, New York
{pwnslinger<EMAIL_ADDRESS>{kjleach<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
There is a growing body of malware samples that evade automated analysis and
detection tools. Malware may measure fingerprints (“artifacts”) of the
underlying analysis tool or environment, and change its behavior when
artifacts are detected. While analysis tools can mitigate artifacts to reduce
exposure, such concealment is expensive. However, not every sample checks for
every type of artifact, so analysis efficiency can be improved by mitigating only
those artifacts most likely to be used by a sample. Using that insight, we
propose MIMOSA, a system that identifies a small set of “covering” tool
configurations that collectively defeat most malware samples with increased
efficiency. MIMOSA identifies a set of tool configurations that maximize analysis
throughput and detection accuracy while minimizing manual effort, enabling
scalable automation for analyzing stealthy malware.
We evaluate our approach against a benchmark of 1535 labeled stealthy malware
samples. Our approach increases analysis throughput over the state of the art
on over 95% of these samples. We also investigate cost-benefit tradeoffs
between the fraction of successfully-analyzed samples and the computing resources
required. MIMOSA provides a practical, tunable method for efficiently deploying
analysis resources.
###### Index Terms:
Malware analysis, covering sets, artifact mitigation
## 1 Introduction
Malware continues to proliferate, significantly eroding user and corporate
privacy and trust in computer systems [34, 44, 43, 63]. Malwarebytes Threat
Landscape reported a 13% increase in malware targeting businesses in 2019
[42]. SonicWall detected around 10 billion malware attacks in 2019 [59].
Although Symantec notes a 61% decrease in the number of new malware variants
between 2017 and 2018, the distribution of specific samples like
Adware/InstallCore increased 360% from 2018 to 2019 [64, 42]. Keeping abreast
of this large volume of malware requires effective, scalable malware analysis
techniques.
Once a malware sample has been detected and analyzed, automated techniques
such as signature matching can quickly identify other copies. Understanding
novel malware samples, however, requires lengthy analysis using both
automated and manual techniques [73, 23]. Analysts frequently execute samples
under laboratory setups [19, 68] using virtualization. This includes not only
virtual machine monitors like VMWare [67], Xen [21], and VirtualBox [47], but
also tools that depend on virtualization such as Ether [18], HyperDbg [24], or
Spider [16]. Executing the malware sample in a controlled environment allows
the analyst to observe its behavior safely. If malware causes damage, the
damage is limited to the virtualized environment, which can be destroyed and
restarted to analyze subsequent samples. Virtualization is now a lynchpin of
computer security and analysis applications [38, 5, hay2008forensics, 30, 51].
As these malware analysis methods have matured, malware authors have in turn
adopted evasive, or stealthy, techniques to avoid or subvert automated
analysis [62, 11, 61]. Chen _et al._ [11], for example, reported that 40% of
malware samples hide or reduce malicious behavior when run in a VM or with a
debugger attached. Stealthy malware techniques include anti-debugging [11, 9,
22], anti-virtualization [52, 6], and anti-emulation [54]. These methods
detect a particular feature, or _artifact_ , of the analysis environment which
allows the malware to determine if it is being analyzed. When an artifact is
detected, the malware can avoid executing its malicious payload, thereby
hiding its true function from the analyst. Table I summarizes common
artifacts, derived from Zhang _et al._ [74]. Studying the behavior of stealthy
malware requires that the analyst mitigate the artifacts by configuring the
environment in a way that prevents detection by the malware. Over time,
malware authors have discovered a wide diversity of artifact types, which has
increased the time required to manually determine the best mitigation strategy
for each malware sample. This process has proven difficult to automate.
Given current trends, we expect that malware authors will continue discovering
new artifacts, forcing analysts to develop new mitigations, leading to
continued escalation of the complexity and cost of conducting malware
analysis. Balzarotti _et al._ describe how stealthy malware samples check for
evidence of analyses and behave differently when they are present [7]. They
classify stealthy malware by running samples under multiple environments and
using the differences between those runs, especially in terms of patterns of
system call execution, to characterize evasive behavior. For example, if a
sample is executed under both VMWare [67] and VirtualBox [47], and the VMWare
instance does not exhibit malicious behavior, one can conclude that the sample
detects VMWare-specific artifacts (e.g., [49]). Many techniques, from machine
learning [50] to symbolic execution and traces [26] to hybrid dynamic analyses
[40], among others, have been proposed to tackle this problem of environment-
aware malware—even as new black hat approaches for more insidious stealthy
evasion (e.g., [65, 45]) are proposed as well.
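The differencing idea above can be sketched in a few lines: compare behavioral traces of the same sample from two environments and flag large divergence as potential evasion. This is an illustrative sketch under an assumed trace format and an arbitrary threshold, not Balzarotti _et al._'s actual system.

```python
from difflib import SequenceMatcher

def divergence(trace_a, trace_b):
    """Fraction of the two system-call traces that does not match."""
    return 1.0 - SequenceMatcher(None, trace_a, trace_b).ratio()

def looks_evasive(trace_a, trace_b, threshold=0.5):
    """Flag a sample whose behavior differs sharply between environments."""
    return divergence(trace_a, trace_b) >= threshold

# A sample that runs its payload under VMWare but stalls under VirtualBox:
vmware_trace = ["NtCreateFile", "NtWriteFile", "NtCreateProcess", "connect"]
vbox_trace   = ["NtDelayExecution", "NtTerminateProcess"]
benign_trace = ["NtCreateFile", "NtWriteFile", "NtCreateProcess", "connect"]
```

In practice the traces would come from instrumented runs, and the threshold would be calibrated against known benign variation between environments.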
This paper presents MIMOSA (Malware Instrumentation with Minimized Overhead for
Stealthy Analysis) to address the need for high-throughput, low-overhead
automated analysis of stealthy malware. MIMOSA's key insight is that any malware
sample is likely to use a small set of artifact mitigation strategies out of
the large set of possible mitigations. We propose using _coverings_ to find a
small set of analysis configurations that collectively cover (mitigate) the
techniques used by most stealthy malware samples while minimizing the cost of
each individual analysis configuration. MIMOSA can be used as part of an automated
malware analysis or triage system to help detect and understand new stealthy
malicious samples.
We extend the previous state-of-the-art to consider both the _cost_ and
_coverage_ of artifact mitigation strategies. Given the popularity of stealthy
malware and the increasing number of anti-stealth techniques, the question is
no longer whether or not evasion _should_ be mitigated, but _which set_ of
techniques should be used for a particular sample. Since samples often use
combinations of artifacts to evade detection [62], this is not a simple
decision. First, each stealth mitigation technique comes with associated
costs—development time, deployment time, CPU time, memory and disk
utilization, runtime overhead, etc.—compared to a bare-metal or bare-VM setup.
These costs are critical because the rate at which new malware is deployed
[66] combined with the time and resources required to complete each analysis
has led to a situation in which analysis time can be a bottleneck [10].
Second, some stealth mitigation techniques supplant or subsume others but with
different costs. For example, an API call can be hooked to read VMWare-
specific registry keys to prevent malware targeting that registry key from
detecting the environment. Such a strategy is more efficient than using an
alternate approach to hide the registry key.
TABLE I: Example artifacts used by stealthy malware [74].
Artifact Name | Artifact Description
---|---
Hardware ID | VMs have devices with obvious strings (e.g., “VMWare Hard Drive”) or specific identifiers (e.g., MAC address).
Registry Key | Windows VMs have telling registry keys (e.g., unique dates and times associated with VM creation).
CPU behavior | VMs may not faithfully reproduce CPU instructions.
Resource constraint | Malware analysis VMs may be given sparse resources (e.g., $<$20GB hard disk).
Timing | VMs may not virtualize internal timers, or may incur noticeable overhead.
Debugger presence | Tools like gdb that instrument samples are detectable.
API calls | API calls that are hooked for analysis can be detected.
Process names | VMs, analysis and monitoring tools have some processes with specific predefined names (e.g., vmtoolsd in VMWare).
HCI | Checks for human interaction with the system (e.g., mouse and keyboard activity).
To summarize, the main contributions of this paper are:
* •
A new algorithm for identifying a low-cost set of artifact covering
configurations;
* •
MIMOSA, a system for selecting and deploying covering combinations of artifact
mitigations to maximize analysis throughput and accuracy;
* •
An empirical study of 1535 labeled stealthy malware samples from the wild,
demonstrating that MIMOSA achieves high coverage of stealthy malware and high
automated analysis throughput; and
* •
Open-source software that provides a unified framework for conducting scalable
evasive malware analysis. We release the codebase of MIMOSA in the following
GitHub repository for public access: github.com/AdaptiveComputationLab/MIMOSA.
## 2 Background
We call malware _stealthy_ if it actively seeks to detect, disable, or
otherwise subvert malware analysis tools. Stealthy malware operates by
checking for signatures, or _artifacts_ , associated with various analysis
tools or techniques. For example, a malware sample may invoke the
IsDebuggerPresent Win32 API call to determine whether a debugger is attached
to the process—if a debugger is attached, the sample may conclude that an
analyst is instrumenting it and change its behavior accordingly. There are
many different types of artifacts exposed by the wide variety of analysis
tools and frameworks used today. Briefly, stealthy malware samples use
artifacts as heuristics to determine if they are under analysis, and change
their behavior to subvert the tool.
Stealthy, evasive malware has been studied extensively [10], and is of
increasing concern in industrial settings, with companies such as Minerva and
Lastline marketing solutions for detecting stealthy malware. In addition,
stealth is often a property gained through the use of packers [2, 1, 55] that
can systematically change malware statically to evade detection and subvert
analysis. Thus, there is a need for defensive methods that can keep up with
the escalating arms race with malware.
An _artifact_ is information about the execution environment that a malware
sample can use to determine if it is running non-natively. For example, if a
malware sample checks whether a debugger is attached to it, that sample may
behave differently in an attempt to conceal its true behavior from an analyst
using the debugger. For any given artifact, there can be multiple _artifact
mitigation strategies_ for preventing exposure of the artifact to the sample.
Each such strategy comes with an associated (1) _mitigation cost_ , which
captures overhead, development effort, or other economic disadvantage, and (2)
generality, or _artifact coverage_ , which is the fraction of stealthy samples
defeated by the strategy.
We consider three broad malware analysis methods:
1. 1.
_manual analysis_ , in which a human analyst reverse engineers, modifies, and
analyzes the sample. This laborious process can take many hours of effort per
sample.
2. 2.
_bare metal analysis_ , in which the sample is run natively rather than in a
VM and thus exposes no artifacts to the sample but also incurs risk to the
host environment.
3. 3.
_combined environment analysis_ , in which the sample is run in multiple
disparate environments so that the sample is exposed to disjoint sets of
artifacts.
In this paper, we focus on the third approach, namely combined environment
analysis. Earlier work [7, 36] used observed differences between runs in
disparate environments to determine which artifacts are used by a stealthy
malware sample. Historically, however, such approaches have not involved many
analysis environments, instead focusing on case studies that compare runs
between limited numbers of virtualization environments. Given the growing
number of malware mitigation techniques, there is a need for techniques that
enable fine-grained control over the artifacts exposed by the analysis
environment. By precomputing a set of configurations that can be tested in
parallel and reused for different malware samples, we hypothesize that will
both increase coverage and analysis scalability of stealthy malware.
## 3 Motivating Examples
In this section, we consider two artifact families commonly used by stealthy
malware to detect an analysis environment: debugger-related API calls and hard
disk capacity. For each artifact family, we highlight multiple mitigation
strategies an analyst might use to defeat such evasion, and we illustrate how
each strategy can have a different cost and effectiveness. These tradeoffs
motivate MIMOSA's design.
At one extreme, the analyst could run the sample on a bare metal machine
without virtualization, exposing the fewest artifacts (high coverage). This
strategy has high cost because it precludes parallel analyses involving multi-
tenant VMs and it can be expensive to recover from the malware payload. At the
other extreme, the analyst might use a single mitigation strategy (low
overhead). Recall that stealthy malware operates by executing myriad checks
for such artifacts, sometimes six or more [62], so this approach is likely to
have low coverage. Even if the single mitigation strategy chosen defeats one
check, it is unlikely to defeat all of them. As a third alternative, the
analyst could apply every known mitigation strategy simultaneously. However,
in practice, a single sample rarely checks for the majority of known
artifacts (Advanced Persistent Threats are an exception, which we exclude
from consideration). The third alternative is also not practical because some
mitigations are incompatible: they may require specific VMs or incompatible
hardware configurations, and combining all possible mitigations will often
incur unacceptable overheads. Given a set of available analysis tools, MIMOSA can
produce sets of configurations that occupy different points in the cost-
coverage space.
### 3.1 Debugger Presence Artifact Family
Some stealthy malware samples explicitly check for the presence of standard
debugging software. Analyzing stealthy malware requires tools that do not
expose related artifacts to the malware sample under test. In some cases, this
can be trivial. For example, we can mitigate the IsDebuggerPresent API call by
hooking it and returning a spoofed value so that, from the malware’s
perspective, it appears as though no debugger is attached. Such a hook is
fairly simple and requires low runtime overhead (indeed, some debuggers used
for malware analysis, such as OllyDbg [72], offer an option to hook this API
call). However, sometimes this is ineffective: other techniques exist, beyond
that single API call, that can be used by malware to determine the presence of
a debugger (e.g., CheckRemoteDebuggerPresent or fields in process control
structures). We could employ one of many strategies to mitigate this “debugger
presence” artifact family:
1. 1.
do nothing, risking exposure to samples that invoke any API calls related to
debugger presence;
2. 2.
hook one or more API calls within the OS;
3. 3.
run in an instrumented virtual machine that does not directly attach a
debugger to the sample; or
4. 4.
use a physical machine to preclude exposure.
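To make strategy (2) concrete, the sketch below models API hooking with a plain Python stand-in: the hook swaps the queried function for a stub that returns a spoofed value. `FakeWin32` is hypothetical scaffolding, not a real Win32 binding; a real hook patches the call inside the guest OS.

```python
# Conceptual sketch of strategy (2): hooking an API so it returns a
# spoofed value. FakeWin32 is a hypothetical stand-in for the Win32 API
# surface; a real mitigation would patch the call in the guest.

class FakeWin32:
    def IsDebuggerPresent(self):
        return True  # a debugger *is* attached inside the analysis VM

def hook(api, name, spoofed_value):
    """Replace attribute `name` on `api` with a stub returning `spoofed_value`."""
    original = getattr(api, name)
    setattr(api, name, lambda *args, **kwargs: spoofed_value)
    return original  # saved so the hook can be removed later

win32 = FakeWin32()
unhooked = hook(win32, "IsDebuggerPresent", False)
```

After the hook is installed, the malware's query reports that no debugger is attached, while the saved original still reflects the true state.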
Strategy (2) is attractive because hooking these API calls would incur
relatively low runtime overhead. However, hooking API calls like this requires
development effort specific to the platform being used for analysis. Moreover,
deciding to hook API calls may introduce subsequent mechanisms for determining
the presence of a different artifact. For example, hooking API calls in
Windows requires modifying a process data structure, which could itself be
checked by the malware. Alternatively, we could opt to run the sample in
another environment such as an instrumented virtual machine (e.g., Ether
[18]), but this would incur more significant runtime overhead, reducing
efficiency. In brief, moving from strategy (1) to strategy (4) increases the
coverage of stealthy malware samples, but at a greater cost.
### 3.2 Hard Disk Capacity Artifact Family
As a more complex example, many stealthy samples will check the size of the
hard disk. If the hard disk capacity is below some threshold, the sample may
conclude that it is executing in a resource-constrained virtual machine for
automated analysis. Depending on the guest OS, there may be a variety of API
calls that would, either directly or indirectly, measure the available hard
disk space. Based on our experience, pafish and some other loaders check for a
threshold of 60GB, and if the hard drive size is less than this value, they
consider it as a potential analysis environment. An analyzer has several
strategies for addressing this “hard disk capacity” artifact family:
1. 1.
do nothing, risking exposure to samples that look for specific hard drive
capacities;
2. 2.
hook one or more API calls associated with disk space;
3. 3.
externally hook the API call from a hypervisor context;
4. 4.
allocate a larger virtual hard disk to a virtual machine used for malware
analysis; or
5. 5.
run the sample on a physical machine to preclude artifact exposure.
Strategy (2) is cheaper in terms of analysis cost, but requires more effort to
research and understand each of the (potentially many) associated API calls
(e.g., in addition to measuring disk size directly by querying disk
information, a malware sample could write a large amount of data and check if
the OS raises an exception once space is depleted). On the other hand,
strategy (4) requires resources and effort at runtime, restricting the number
of parallel VMs that could be used for malware analysis. Finally, we could
instead allocate an entire physical analysis machine for the sample, which
would successfully mitigate all artifacts in the disk space family for the
largest subset of malware, but also inhibits analysis scalability.
These examples show how multiple mitigation strategies can exist for the same
artifact family, how those strategies can have different costs, and how those
strategies can vary in coverage or effectiveness. However, although we have
thus far presented them in linear lists, the conflicts we demonstrated mean
that a more nuanced representation is merited. For example, for the debugging
presence artifact family, strategies (3) and (4) conflict and cannot be
employed simultaneously. These observations motivate our adoption of the
lattice data structure (to address coverage and conflict concerns) and our
extension of the covering array algorithm (to address coverage and cost
concerns).
## 4 Proposed Workflow
Figure 1: A simplified illustration of our workflow, which consists of four major
engines: the covering algorithm, VMCloak [33], the Dispatcher, and Detox.
In this section, we describe the workflow we envision MIMOSA to support. We seek to
make the automated analysis of stealthy malware more efficient. Current
techniques either rely on human creativity (e.g., debugging with IDA Pro [28]
or OllyDbg [72]) or heavy-weight analysis techniques that incur significant
overhead (e.g., MalT [74] or Ether [18]). Moreover, differencing approaches,
such as that of Balzarotti _et al._ [7], execute a sample in multiple
instrumented environments and use the differences between runs to determine which
artifact is used by the sample, potentially wasting resources.
Given a list of available artifacts, the strategies available for mitigating
them, and a cost model, MIMOSA's objective is to select a small set of
configurations, which can be deployed in parallel on a given malware sample.
That is, given a fixed number of available servers, each will be configured to
mitigate a different specific subset of artifacts, with lower total cost
(e.g., runtime) to lower analysis latency compared to existing methods. Once
the covering configurations are identified, MIMOSA deploys each of the configurations
as a separate instance of an instrumented VM. MIMOSA manages the VMs to gather
logging information and support malware analysis.
Figure 2: MIMOSA workflow: In step ①, we develop a set of known artifacts, a set of
known mitigation strategies, and a cost model for each artifact, all of which
serve as input to our algorithm. In step ②, our covering algorithm generates a
set of mitigation configurations for each server in a particular cluster.
Generated configurations are inputs to our VMCloak engine that provisions VMs
that mitigate subsets of artifacts. In step ③, a malware sample repository, a
list of configurations and corresponding VMs are passed to the Dispatcher. In
step ④, the Dispatcher spawns and manages the analysis of VM instances based
on those provisioned by VMCloak. We record API call traces, which are analyzed
to inspect and monitor VM state. In Step ⑤, our Detox engine correlates the
collected API logs and VMI data using heuristics to determine whether the malware
sample detected any of the VM instances.
MIMOSA's high-level workflow is illustrated in Figure 1 with details given in Figure
2. In Step 1, we apply our covering algorithm (Algorithm 1), which takes (1) a
list of artifacts, (2) corresponding costs for each artifact (Section 5.1),
and (3) a set of mitigation strategies for each artifact (Section 5.2) as
input. The algorithm returns a covering set of configurations for designing
and deploying different virtual machines. Each covering is represented as a
vector of bits, where each element indicates whether that artifact should be
mitigated in the server’s configuration.
The cost model can include a multitude of factors, as determined by the
analyst, including VM run-time, memory usage, development time of the
mitigation, etc. MIMOSA uses our covering algorithm (Section 5.3) to determine a set
of mitigation configurations for each server in a malware analysis cluster
based on the cost model.
Next, these coverings are realized in a malware analysis cluster by
configuring specific virtual machines. The VMCloak module receives the set of
configurations generated previously and maps VM snapshots to nodes in the
cluster. VMCloak is MIMOSA's custom VM provisioning and deployment framework.
Coverings may entail specific virtualization backends (e.g., QEMU vs.
VirtualBox), hooking API calls, or modifying the guest kernel (e.g., network
drivers).
With a configuration established for each server in the analysis cluster, MIMOSA next
allocates samples to servers. We assume access to a suite of hypervisors and
hardware resources that can be configured a priori to realize the set of
mitigations specified by the covering. As described in Section 6.2, we
implemented 13 such hypervisor and hardware configurations (Table III),
managed by our dispatcher module to spin up and spin down analysis resources
as samples are processed.
Each sample is then executed as a process within each configuration. As the
process runs, MIMOSA collects API traces and VM state logs through Virtual Machine
Introspection (VMI) and heuristically determines whether the sample
executed successfully. These heuristics are stored in the Detox engine,
which correlates process and VM execution logs to infer more semantic
patterns. In particular, Detox detects if the process exits, if certain
network communication patterns exist, or if certain process names are created.
We conclude that a sample has been executed successfully if it runs to
termination under each environment. Section 6 uses a well-labeled corpus of
stealthy malware to evaluate MIMOSA's effectiveness.
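The run-to-termination heuristic can be sketched as a simple predicate over per-environment logs. The log fields below are hypothetical; the real Detox engine correlates much richer API-call and VMI data.

```python
def executed_successfully(run_logs):
    """True if the sample ran to termination under every environment.

    `run_logs` maps a configuration name to hypothetical per-run
    observations collected during analysis.
    """
    return all(log.get("process_exited", False) for log in run_logs.values())

runs = {
    "kvm-hooked": {"process_exited": True, "network_beacon": True},
    "vbox-large": {"process_exited": True, "network_beacon": True},
}
evaded = {
    "kvm-hooked": {"process_exited": True},
    "vbox-plain": {"process_exited": False},  # sample bailed out early
}
```

Additional signals mentioned in the text (network communication patterns, created process names) would be folded in as further conjuncts.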
## 5 Algorithmic Approach
A key insight of our approach is that efficient analysis of malware samples
must balance two competing factors: the number of artifacts that are mitigated
and the cost of deploying multiple mitigations. Because stealthy malware uses
artifacts to evade detection, it is desirable to mitigate as many artifacts as
possible to minimize the chance of disclosing to the sample that it is being
analyzed. However, mitigating all artifacts simultaneously imposes
unreasonable costs, so the goal is to find sets of configurations, where each
configuration is a subset of the available artifact mitigations. The idea then
is that each configuration can be run simultaneously, will individually be
relatively inexpensive to deploy, but collectively most malware samples will
be defeated by at least one configuration.
Given a set of artifact mitigation strategies (configurations) and a model
that assigns a cost to each strategy, we describe an algorithm for efficiently
selecting a set that maximizes coverage while minimizing cost. At a high
level, there are three main components:
1. 1.
The analyst decides on a cost model. Any non-negative cost function can be
used. For example, the model might include development effort and analysis
efficiency, combined linearly to compute a total cost.
2. 2.
For each artifact family, each mitigation strategy is represented as a
configuration. Each configuration has an associated cost, computed via the
cost model.
3. 3.
The covering algorithm then selects from the many possible configurations to
produce a small set that optimizes the trade-off between cost and coverage.
We next describe each component in more detail.
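One standard way to realize component (3) is a greedy weighted set cover: repeatedly choose the configuration with the lowest cost per newly covered sample. This is an illustrative sketch with toy data, not the paper's Algorithm 1.

```python
def greedy_covering(configs, costs, coverage):
    """Select a low-cost subset of `configs` whose coverage sets together
    cover every sample that any configuration can defeat."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        # Cost-effectiveness: cost paid per newly covered sample.
        best = min(
            (c for c in configs if coverage[c] & uncovered),
            key=lambda c: costs[c] / len(coverage[c] & uncovered),
        )
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

# Hypothetical configurations, costs, and per-configuration coverage sets:
configs = ["api_hooks", "big_vm", "bare_metal"]
costs = {"api_hooks": 1.0, "big_vm": 2.0, "bare_metal": 8.0}
coverage = {
    "api_hooks": {"s1", "s2"},
    "big_vm": {"s2", "s3"},
    "bare_metal": {"s1", "s2", "s3", "s4"},
}
```

On this toy input the greedy rule picks the cheap API-hooking configuration first and resorts to expensive bare metal only for the sample nothing else defeats.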
### 5.1 Cost Model
Abstractly, we model cost as a function mapping each artifact mitigation
strategy to $\mathbb{R}_{\geq 0}$. Our approach operates regardless of how
this cost function is defined, but we consider, and provide qualitative
details for, two exemplar cost functions: development time and analysis
efficiency (Section 6).
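A minimal sketch of such a cost function combines the two exemplar factors linearly; the weights and units are assumptions an analyst would choose, not values from this paper.

```python
def mitigation_cost(dev_hours, runtime_overhead, w_dev=1.0, w_run=1.0):
    """Hypothetical cost model: a weighted linear combination of
    development effort (hours) and runtime overhead (e.g., slowdown
    factor). Always maps to a non-negative real, as required."""
    if dev_hours < 0 or runtime_overhead < 0:
        raise ValueError("cost inputs must be non-negative")
    return w_dev * dev_hours + w_run * runtime_overhead
```

An organization where developer time is the scarce resource would raise `w_dev`; a throughput-bound analysis cluster would raise `w_run`.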
If a mitigation strategy is known (e.g., from a published paper) but an
implementation is not available, the analysis organization incurs a software
development cost to implement it. Software engineers must be paid to design,
implement, test and deploy the mitigation. A full discussion of software
engineering costs is beyond our scope [27], but we note that there are many
organizations or situations in which developer time is an expensive, limiting
resource compared to abundant server, cloud, or compute time.
Given an available set of implemented mitigations, a second cost is the
overhead of deploying them. There are a number of relevant metrics here such
as throughput and energy consumption. Given the rate at which new stealthy
samples are discovered [66, 36] and the costs associated with zero-day
exploits, rapid analysis response is often paramount. Given a fixed computing
budget, if one approach admits analysis after 100 time units and another
approach only admits analysis after 800 time units, the former is preferred.
For example, consider a scenario in which ten servers are available. One could
deploy heavyweight tools (such as Ether [18], BareCloud [36], or MalT [74]) on
all ten servers; this would produce a suitable analysis but is not efficient:
it would take a long time for an analysis to run to completion. Alternatively,
one could deploy lighter-weight systems such as LO-PHI [61] or VMI-based
introspection. This would be more efficient, but risks detection by samples in
the input corpus, at which point the analysis fails.
Note that while more expensive mitigations usually have higher coverage, this
is not always the case. For example, if a cost model is used that captures
only developer-hours, then the mitigation strategy of hooking API calls is
both more expensive (it requires a developer to write code) and less effective
than using an alternative analysis environment (which may require little
developer time in such a model).
### 5.2 Selecting Artifacts and Mitigations
First, we enumerated a number of potential artifacts commonly used by our
corpus of stealthy malware samples on Windows systems (Section 6.1). We
followed existing literature [74, 10] and the pafish tool [48] to group these
artifacts into a taxonomy of categories. We consider nine artifact families
for a total of 39 specific artifacts, which together form a representative
sample of the behavior of stealthy malware.
For each artifact, we implemented several mitigation strategies across a
number of hypervisor backends. The mitigation strategies ranged in complexity
from straightforward scripting (e.g., synthetic mouse movements) to more
complex patches to the hypervisor source code (e.g., to hook kernel API calls
made within the guest). Table II lists each artifact we considered in our
prototype, and the Appendix shows several example mitigation implementations.
As new artifacts are discovered and exploited by adversaries in the future,
mitigations can be implemented and added to MIMOSA incrementally. However, our
current implementation includes the artifacts exploited by our representative
dataset of 1536 manually analyzed stealthy samples. The cost analysis and
coverings construction generalize regardless of artifact behavior
or exploitation.
TABLE II: Summary of mitigated artifacts in Mimosa. We categorize artifacts according to conceptual similarity.

Artifact Family | Mitigation Examples
---|---
VM-specific Registry Keys | Hook RegOpenKeyEx API
| Hook RegQueryValueEx API
| Remove offending keys from guest (e.g., HARDWARE\ACPI\DSDT\VBOX__)
| Use alternate VM
| Run on bare metal
Mouse / Keyboard / Video Detection | Spoof peripheral input
| Replace spoofed driver files (e.g., VBoxMouse.sys)
| Use higher resolution (e.g., $>$800x600)
| Pass through graphics adapter
Internal Timing | Hook instructions that read MSRs
| Hook GetLastInputInfo API
| Hook GetTickCount API
| Virtualize time stamp counter (TSC)
| Run on bare metal
Device Properties | Spoof device names
| Allocate bridged network
| Hook Device Query APIs
| Hook I/O APIs
| Allocate more virtual CPUs
| Modify BIOS, system, baseBoard, chassis, and OEM Strings
| Change NIC MAC address
| Use alternate VM
| Run on bare metal
Drive capacity check | Hook CreateFile API
| Hook DeviceIoControl API
| Hook GetDiskFreeSpaceExA API
| Hook WriteFile API
| Hook GetDriveTypeA API
| Hook GetVolumeInformationA API
| Allocate large virtual disk
| Allocate physical disk
Memory capacity check | Hook GlobalMemoryStatusEx API
| Allocate larger VM guest
| Run on bare metal
Hooked API detection | Externally Hook APIs (e.g., hook hypercalls)
| Use hardware breakpoints
| Run on bare metal
Retrieving CPU Vendor Name | Patch VMM
| Change VM config
| Run on QEMU full system emulation
Process / Drive Name Detection | Patch VMM
| Change VM config
| Run on QEMU full system emulation
Invalid Instruction Behavior | Patch VMM
| Use alternate VM
| Run on bare metal
### 5.3 Generating Coverings
Next, we present our algorithm for generating a set of low-cost covers. Let
$\mathcal{A}=\{A_{1},\cdots,A_{n}\}$ be the set of $n$ artifacts, and
$\mathcal{C}=\{C_{1},\cdots,C_{s}\}$ be the set of configurations. Let
$S_{1},\cdots,S_{p}$ be the set of samples observed. For each sample $S_{i}$,
we associate with it a set of behaviors $B(S_{i})$, which is a subset of the
artifacts $\mathcal{A}$. For each configuration $C_{j}$, we associate it with
a set of mitigations $M(C_{j})$, which is also a subset of the artifacts
$\mathcal{A}$.
Our goal is to construct a binary array (called a covering), where each of the
rows corresponds to a configuration, and each of the columns corresponds to an
artifact, with the following property. For any sample $S_{i}$, there are
configurations $C_{j_{1}},\cdots,C_{j_{\ell}}$ for which $B(S_{i})$ is a
subset of $M(C_{j_{1}})\cup\cdots\cup M(C_{j_{\ell}})$; in other words, for
any sample, there are some configurations that together fully mitigate the
sample. In terms of the array itself, suppose that $B(S_{i})$ involves the
columns $b_{1},\cdots,b_{m}$; a 1 in row $r$, column $b_{k}$ indicates that
configuration $r$ mitigates artifact $b_{k}$, and a 0 that it does not. The
sample is covered when the entrywise union (logical OR) of the chosen rows
contains a 1 in each of these columns. If this property cannot be maintained
for every sample, we instead generate an array that mitigates as many samples
as possible (high coverage) while keeping the cost(s) of the chosen
configurations as low as possible.
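To make the covering concrete, the binary array and the coverage property can be sketched in a few lines of Python (artifact and configuration names here are illustrative, not those of our prototype):

```python
# A covering as a binary array: rows = configurations, columns = artifacts.
# All names below are hypothetical, for illustration only.
artifacts = ["registry_key", "mouse_input", "tsc_timing", "disk_size"]

# M(C_j): the set of artifacts mitigated by each configuration.
mitigations = {
    "conf_a": {"registry_key", "mouse_input"},
    "conf_b": {"tsc_timing", "disk_size"},
}

# covering[j][i] = 1 iff configuration j mitigates artifact i.
covering = {
    conf: [1 if a in mit else 0 for a in artifacts]
    for conf, mit in mitigations.items()
}

def is_covered(sample_behaviors, configs):
    """A sample is covered if its artifact set B(S) is contained in the
    union of the chosen configurations' mitigation sets."""
    mitigated = set().union(*(mitigations[c] for c in configs))
    return sample_behaviors <= mitigated

# B(S): a sample probing the registry and the TSC needs both configurations.
print(is_covered({"registry_key", "tsc_timing"}, ["conf_a"]))            # False
print(is_covered({"registry_key", "tsc_timing"}, ["conf_a", "conf_b"]))  # True
```

The subset test `sample_behaviors <= mitigated` is exactly the "union of rows has a 1 in every relevant column" condition stated above.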
In addition, we maintain a set of desirably high (DH) and desirably low (DL)
characteristics, where each configuration has a valuation for each of these.
For a covering corresponding to a set of configurations, the measure for the
covering of the same characteristic may be the average from each
configuration, the total, or some other measure. For example, if the
characteristic is measuring the deployment time, then the total deployment
time for a set of configurations is the total over all of their deployment
times. In general, we want to generate a set of configurations such that the
covering’s DH characteristics are as large as possible, and the DL
characteristics are as low as possible. For the deployment time example, this
would be a DL characteristic; coverage would be a DH characteristic. Because
different characteristics can have different impacts on a system, we aim to
produce a collection of coverings such that none of them “overshadow” any
other one.
Next, we walk through the algorithm. First, we will discuss generating a
covering of all the considered configurations and artifacts relevant for any
sample. For each configuration and artifact, we mark whether or not the
configuration mitigates the artifact; the array corresponding to the covering
is the natural one. We determine the cost and coverage of each configuration
in turn. Next, we generate all subsets of configurations; suppose these
subsets are $S_{1},\cdots,S_{k}$. We say that a subset of configurations
$S_{i}$ dominates another subset $S_{j}$ if the following properties hold:
* •
$S_{i}$’s DH characteristics are all at least those of $S_{j}$,
* •
$S_{i}$’s DL characteristics are all at most those of $S_{j}$, and
* •
either (1) some DH characteristic of $S_{i}$ is strictly larger than that of
$S_{j}$, or (2) some DL characteristic of $S_{i}$ is strictly smaller than
that of $S_{j}$.
The Pareto front is the collection $\mathcal{S}$ of subsets that are not
dominated by any other subset; it can be found by non-dominated sorting [15].
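The domination relation and the resulting Pareto front follow directly from the definition; a naive quadratic filter suffices as a sketch (characteristic names and values are hypothetical):

```python
def dominates(si, sj):
    """True iff subset si dominates subset sj: si is no worse on every
    desirably-high (DH) and desirably-low (DL) characteristic, and strictly
    better on at least one. si/sj are dicts {"dh": {...}, "dl": {...}}."""
    dh_ok = all(si["dh"][k] >= sj["dh"][k] for k in si["dh"])
    dl_ok = all(si["dl"][k] <= sj["dl"][k] for k in si["dl"])
    strict = any(si["dh"][k] > sj["dh"][k] for k in si["dh"]) or \
             any(si["dl"][k] < sj["dl"][k] for k in si["dl"])
    return dh_ok and dl_ok and strict

def pareto_front(subsets):
    """Keep every subset not dominated by any other (naive O(n^2) sort)."""
    return [s for s in subsets
            if not any(dominates(t, s) for t in subsets if t is not s)]

pts = [
    {"dh": {"coverage": 0.90}, "dl": {"time": 10}},
    {"dh": {"coverage": 0.50}, "dl": {"time": 20}},  # dominated by the first
    {"dh": {"coverage": 0.95}, "dl": {"time": 30}},
]
print(len(pareto_front(pts)))  # 2
```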
This algorithm is not efficient because it examines every subset, which takes
exponential time in the number of configurations. We give an optimization that
improves the running time in practice, contingent on the following assumption.
Suppose that all of the DH and DL characteristics (other than coverage) are
monotonic, which means that if a new configuration $c$ is added to a set of
configurations $\mathcal{S}$, then $\mathcal{S}\cup\{c\}$ cannot have larger
DH characteristics nor smaller DL characteristics than those of $\mathcal{S}$.
For example, adding a configuration does not decrease the total deployment
time, so this is a monotone DL characteristic. Note that coverage as defined
here is always monotone.
Let $\mathcal{A}_{i}$ be all subsets of size $i$, and suppose all non-coverage
characteristics are monotone. Let $a_{i}$ be a subset in $\mathcal{A}_{i}$,
and let $c$ be any configuration not in $a_{i}$. If the coverage of
$a_{i}\cup\{c\}$ exceeds that of $a_{i}$, then we need to examine some subset
in $\mathcal{A}_{i+1}$ (because $a_{i}\cup\{c\}$ is one such subset).
However, if the coverage does not increase for any subset in $\mathcal{A}_{i}$
with any new configuration $c$, then we can terminate the algorithm because
(1) the coverage does not increase, and (2) the characteristics are monotone.
We give a more detailed description in Algorithm 1. In practice, all of the
characteristics we have used are monotone, and the algorithm benefits because
most points on the Pareto frontier consist of fewer than four configurations,
a significant improvement over the brute-force strategy. We present and
discuss various points of Pareto frontiers derived from this algorithm in
Section 6.
An advantage of our algorithm is that it is highly likely that some selected
configuration set will cover the artifacts employed by any observed sample.
The construction of Algorithm 1 produces minimal subsets of configurations (i.e., deleting any
configuration from any subset will cause coverage to decrease). Indeed, as
demonstrated in Section 6, most of the points found on the Pareto frontier
involved a very small number of configurations.
Generate the covering $C$ with rows $R$ as configurations, and columns as
(monotone) characteristics.
PreviousCoverage $\leftarrow$ $\emptyset$; PointsToConsider $\leftarrow$ $\emptyset$.
for _$i=1$ to $|R|$_ do
    NewCoverage $\leftarrow$ $\emptyset$.
    for _each subset $S$ of size $i$ of $R$_ do
        Add the coverage of $S$ to NewCoverage, and both the coverage and
        costs of $S$ to PointsToConsider.
        Define the parents of $S$ to be every subset of $S$ of size $|S|-1$
        (i.e., deletion of a single element).
    end for
    if _the coverage of each subset in NewCoverage is the same as that of its
    parents in PreviousCoverage_ then
        Exit this loop.
    else
        PreviousCoverage $\leftarrow$ NewCoverage.
    end if
end for
Output the Pareto frontier of PointsToConsider using non-dominated sorting.
Algorithm 1 Pareto Generation of Configurations via Coverings when the
characteristics are monotone.
## 6 Empirical Evaluation
Mimosa adapts coverings to choose artifact mitigation strategies that enable
the accurate and rapid analysis of stealthy malware that would otherwise take
significant effort to analyze and understand. In this section, we present
results from two empirical evaluations of Mimosa.
We begin by introducing an indicative use case (see Section 4). Consider an
enterprise that desires to use a set of servers with finite capacity for
automated malware classification and triage. We assume that low-latency
analysis of stealthy samples is paramount: given a fixed set of computing
resources, we want the analysis of a given sample to complete as quickly as
possible (e.g., to support subsequent human analysis, defense creation,
signature generation, etc.). We further assume that the input samples are
stealthy, and the analysis tool must mitigate the artifacts exposed to each
sample to prevent subversion. Although it might be possible to use all servers
available to the enterprise to mitigate all potential artifacts, this is not
an efficient use of resources and does not provide the lowest analysis
latency. Instead, we apply our algorithm to determine which sets of artifacts
are to be mitigated by each server. This minimizes the latency of analyzing
each sample across all available servers while maximizing the combined
analysis power of all available servers.
To evaluate our approach, we consider three research questions:
RQ1
Coverage — Does Mimosa produce artifact mitigation configurations that
effectively cover stealthy malware samples?
RQ2
Scalability — Does Mimosa produce artifact mitigation configurations that
admit low-cost, high-throughput automated stealthy malware analyses?
RQ3
Efficiency — What tradeoffs exist in the resource costs and coverage space
among the configuration sets produced by Mimosa?
We first discuss the corpus of malware we used in our evaluation. Then, we
discuss each research question in turn.
### 6.1 Malware Corpus Selection
We consider stealthy malware that targets Windows. Of the many available
malware corpora, only a few focus directly on stealthy malware, in part
because they are so difficult to analyze automatically. We studied two of
these in detail (BareCloud [36] and an anonymous security company) and found
that the labels were inadequate for our purpose because they did not
label the specific artifacts used by each sample. A sample might be labeled
“device id detection,” for example, rather than listing the specific device it
checked for. Instead, we obtained a set of 1535 unique samples from
independent security researchers, which are analyzed according to the
artifacts they use. This dataset consists only of malware samples that have
been manually identified as stealthy and precisely curated. Other work has used
larger malware databases for similar experiments [8, 13], but as mentioned
above these datasets are not labeled with enough specificity for our study.
Figure 3: Distribution of malware samples in our dataset according to the
number of unique artifacts employed. For example, more than 600 of our 1535
samples employed a single artifact. The graph is not cumulative.
Figure 3 shows that each individual malware sample in our corpus uses between
one and five evasion techniques, thus confirming our hypothesis that most
malware considers only a few artifacts and supporting our design decisions for
. In addition, we show a taxonomy of malware families in our corpus in Figure
4.
Figure 4: Taxonomy of malware samples contained in the corpus, which includes
samples of several popular families such as Lethic (Trojan), Nymaim (Trojan),
and InstallCore (Potentially Unwanted Program (PUA)). Figure 5: Frequency of
artifacts detected by samples in the malware dataset.
We categorized the samples based on the artifacts they are looking for in the
system, summarized in Figure 5. Among these artifacts, checking for BIOS and
SCSI device metadata were common. Additionally, many of our samples checked
for the existence of specific processes (e.g., helper programs for in-guest
clipboard access, video acceleration, etc.). We categorized which specific
process was used by each stealthy malware sample, shown in Figure 6. In
particular, Xen service (xenservice.exe) is the most frequently-checked
process among other processes used.
Figure 6: Distribution of process names checked for by stealthy malware.
#### 6.1.1 Pafish
In addition, we used pafish [48], an open source tool that enumerates common
checks used by stealthy malware, to determine whether a given configuration
could provide coverage over specific artifacts. Pafish is well-suited to this
task because it can be configured to check or ignore specific artifacts. We
used pafish to confirm the sets of artifacts mitigated by each configuration
before we applied each configuration to malware samples in our dataset.
Figure 7: Success and failure counts for each tested configuration, when run
against the 1535 stealthy malware samples.
### 6.2 RQ1: Coverage — Artifact Mitigation
In this experiment, Mimosa assigns artifact mitigation strategies to analysis
servers. We say that the _configuration set size_ is the number of
configurations combined together — this is an input parameter that represents
the number of distinct configurations that the user is willing to run
concurrently. For example, if more servers are available for analysis, a
larger configuration set size can be selected. We say that a stealthy malware
sample is successfully analyzed if at least one configuration in the
configuration set produced by Mimosa mitigates all of the artifacts it uses.
TABLE III: List of tested configurations and the artifacts they mitigate. Each
column corresponds to whether a specific category of artifact is mitigated in
that configuration. Note that some configurations support different backends
(e.g., qemu_legacy_conf1 can be run in both KVM and QEMU), yielding differing
artifact mitigations.
Index | Backend | Configuration | Process | Debugger | CPUID | RDTSC | CPU # | Invalid Inst. | TickCount | HCI | BIOS | File Check | HDD - SCSI | Disk size | Memory | MAC | ACPI
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | KVM | qemu_patched_conf1 | ✓ | ✓ | – | – | ✓ | ✓ | – | ✓ | ✓ | – | – | – | – | ✓ | ✓
2 | KVM | qemu_patched_conf2 | ✓ | ✓ | – | – | ✓ | ✓ | – | ✓ | ✓ | – | ✓ | – | ✓ | – | ✓
3 | VMWare | vmware_conf3 | ✓ | ✓ | – | – | ✓ | ✓ | – | ✓ | ✓ | – | – | – | – | ✓ | ✓
4 | VMWare | vmware_conf2 | ✓ | ✓ | – | – | – | – | – | ✓ | – | ✓ | – | – | ✓ | – | ✓
5 | VMWare | vmware_conf2_vmtools | – | ✓ | – | – | ✓ | – | – | ✓ | – | – | – | – | ✓ | – | ✓
6 | KVM | qemu_legacy_conf1 | ✓ | ✓ | – | – | ✓ | ✓ | – | ✓ | ✓ | ✓ | – | – | ✓ | ✓ | ✓
7 | KVM | qemu_legacy_conf2 | ✓ | ✓ | – | – | – | ✓ | – | ✓ | ✓ | ✓ | – | – | – | – | ✓
8 | VirtualBox | vbox_conf1_guestadditions | – | ✓ | – | – | – | ✓ | – | ✓ | ✓ | – | ✓ | ✓ | – | – | –
9 | VirtualBox | vbox_conf2_guestadditions | – | ✓ | – | – | ✓ | ✓ | – | ✓ | – | – | ✓ | ✓ | – | ✓ | ✓
10 | VirtualBox | vbox_conf1 | ✓ | ✓ | ✓ | – | ✓ | ✓ | – | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | – | –
11 | VirtualBox | vbox_conf2 | ✓ | ✓ | – | – | – | ✓ | – | ✓ | – | ✓ | ✓ | ✓ | – | ✓ | –
12 | QEMU | qemu_legacy_conf1 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | – | – | ✓ | ✓
13 | QEMU | qemu_legacy_conf2 | ✓ | ✓ | ✓ | – | ✓ | ✓ | – | ✓ | ✓ | ✓ | ✓ | – | ✓ | – | ✓
For each configuration, we represent artifact coverage as a bit-array in which
each set bit implies that that particular artifact has been successfully
mitigated in the environment. Table III gives details about each configuration
instance.
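As a sketch of this representation (bit positions and mask values are illustrative, not the exact rows of Table III), the coverage test reduces to a single bitwise operation:

```python
# Bit-array view of a configuration's mitigated artifacts. The encoding is
# hypothetical: bit 0 = Process, bit 1 = Debugger, bit 2 = CPUID, bit 3 = RDTSC.
conf_masks = {
    "qemu_patched_conf1": 0b0011,  # mitigates Process and Debugger (illustrative)
    "vbox_conf1":         0b1111,  # mitigates all four (illustrative)
}

def covers(conf_mask: int, sample_mask: int) -> bool:
    """A configuration covers a sample iff every artifact bit the sample
    probes is also set in the configuration's mitigation mask."""
    return sample_mask & ~conf_mask == 0

sample = 0b0101  # a sample probing Process (bit 0) and CPUID (bit 2)
print(covers(conf_masks["qemu_patched_conf1"], sample))  # False
print(covers(conf_masks["vbox_conf1"], sample))          # True
```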
We use VMWare, VirtualBox, KVM, and QEMU backends for virtualizing guests to
complete an analysis of each sample. We use 13 different configurations across
each of these backends for conducting analyses. Each configuration implements
a subset of mitigations against each class of artifacts. For example, the
_qemu_patched_conf1_ configuration has an intentionally low RAM size ($<1$GB), exposing
the RAM detection family of artifacts, but also contains custom patches that
remove all QEMU-related hardcoded strings throughout the source. In contrast,
the _VMWare_conf2_ configuration employs the VMWare Tools suite for faster
execution, exposing process names (i.e., of VMWare Tools). Broadly, we
designed and implemented these configurations by considering the families of
artifacts exposed by our dataset (Section 6.1) and the expected complexity in
mitigating each artifact family across each virtualization backend.
We compute whether at least one configuration covers each evasive sample by
analyzing traces of API call invocations, including arguments passed to each
call and the corresponding output. We developed a module (“Detox” engine in
Figure 2) to wrap and unify multiple Virtual Machine Introspection (VMI) APIs,
including Icebox [3], PyReBox, DRAKVUF [39], and VMWare VProbes [69]. Thus, we
collect multiple API trace logs for each sample for each configuration, based
on the virtualization backend used. Next, we aggregate these API trace logs to
bridge the semantic gap [57, 29]: doing so allows reconstructing higher
abstraction API traces invoked against the guest OS.
Given each trace of each sample, we confirm detection results based on the
malware’s behavior across all configurations. Specifically, we follow the
malware execution trace up to the point when it starts to create, manipulate,
or remove a memory section, segment, or page using APIs such as
NtCreateSection, NtMapViewOfSection, or NtSetContext. Then, we compare these
results against ground truth established in our corpus to ensure that the
malicious process executed completely. If the analysis differs from the ground
truth (e.g., if the sample detects the environment and hides its behavior), we
say that configuration does _not_ cover the sample. If there exists at least
one configuration that _does_ cover the sample, we call that sample covered.
We show the detection rate of each configuration across our entire corpus of
malware in Figure 7.
Figure 8: Proportion of stealthy malware samples covered for different
configuration set sizes for various techniques. Random indicates a randomly-
generated coverage vector. Semi-random represents a randomly-selected subset
of our 13 configurations. King-of-the-Hill represents the best single
configuration from our set of 13. Our approach achieves higher levels of
coverage compared to the best available single configuration.
#### 6.2.1 RQ1 Result Summary
Our mitigation strategies and corresponding configurations provide varying
coverage levels across an indicative dataset of 1535 stealthy malware samples,
allowing us to explore the trade-off space between coverage provided by
analysis tools and the cost of deploying those tools or acquiring analyses. In
Figure 8, we show the level of coverage achieved by our approach compared to
other approaches versus configuration set size. Specifically, we measure the
coverage achieved by a set of configurations of a specific size for (1) Random
— a randomly-generated coverage vector, (2) Semi-random — a randomly-selected
configuration from our set of 13 configurations (shown in Table III), (3)
King-of-the-Hill (KoTH) — the best single configuration from our set of 13
configurations, and (4) Mimosa, the set of configurations selected by our
approach. For this set of experiments, we report the average over 10 trials.
We view KoTH as the baseline for automated malware analysis systems that do
not use our approach (e.g., companies that pick the “best” sandbox they can,
and scale it up to multiple machines in a cluster). Our approach achieves 97%
coverage when combining five configurations, compared to KoTH, which achieves
65% coverage. This suggests our approach can generate configurations of
malware analysis environments that can apply to most stealthy malware samples
in an indicative corpus.
### 6.3 RQ2: Scalability — Automated Analysis
We also evaluated our approach with respect to malware analysis throughput.
Because our approach also considers the relative costs (e.g., overhead, disk
utilization) of each configuration, we can measure our system’s effectiveness
at scale. For example, if a given configuration _does not_ cover a given
sample, that configuration wastes time and resources attempting to execute
that sample. Thus, we can compute the amount of resources _wasted_ by
considering the total resources consumed by configurations executing samples
that were not covered by those configurations.
We measure the time wasted by a configuration using virtual machine
introspection (VMI) to reconstruct events that occur within each
configuration's guest environment from low-level execution traces collected
for each sample.
We compared these execution traces against ground truth execution traces
gathered for each sample (provided as part of our malware dataset). Each
sample’s collected and ground truth traces were compared using the trace
merging algorithm introduced by Virtuoso [20], VMWatcher [31], and VMWare
VProbes [69]. For each sample in each configuration, we report the time $t$ at
which the measured and ground truth traces diverged — where a sample’s anti-
analysis technique caused the execution to differ from the ground truth. If
the traces never diverge, then we conclude the sample was covered. Thus, for
each uncovered sample and configuration, we report the time wasted as the
difference between time $t$ and some maximum timeout (configured as 2s here;
state-of-the-art typically uses 5s timeouts [39]).
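The wasted-time accounting reduces to a simple per-run calculation; this sketch assumes the 2-second timeout described above and hypothetical divergence times:

```python
TIMEOUT_S = 2.0  # per-sample analysis timeout used in our setup

def wasted_time(divergence_t):
    """Time wasted on one sample in one configuration: after the measured
    trace diverges from ground truth at time t, the run continues until the
    timeout without yielding faithful behavior. None means the traces never
    diverged (the sample was covered), so no time is wasted."""
    if divergence_t is None:
        return 0.0
    return TIMEOUT_S - divergence_t

# Divergence times for four hypothetical runs; two samples were covered.
runs = [0.3, None, 1.5, None]
total = sum(wasted_time(t) for t in runs)
print(round(total, 2))  # 2.2
```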
Figure 9 shows a comparison of time wasted of various approaches versus
configuration set size, as described in Section 6.2.1. Our approach spends 3X
less CPU time executing samples that _are not covered_ by configurations. As a
result, our approach can scale analysis of malware samples 3X over state-of-
the-art by accurately analyzing a higher proportion of samples in less time.
Figure 9: Average analysis time wasted executing each sample for sets of
different configuration sizes. Random refers to a randomly-generated coverage
vector. Semi-random refers to randomly-selected subsets of our 13 configurations. King-
of-the-Hill represents the best single configuration selected from our 13
configurations. Our approach wastes the least amount of time failing to
execute stealthy samples, enabling higher automated analysis throughput.
### 6.4 RQ3: Efficiency — Analysis Tradeoffs
In this section, we consider tradeoffs between analysis resource cost and
stealthy malware sample coverage. Recall that a stealthy malware sample is
_covered_ if all of the artifacts it uses are mitigated. We evaluate coverage
with respect to two cost functions: memory utilization and disk throughput.
Both are relevant for scalable automated malware analysis.
We analyzed each of the 1535 stealthy malware samples. For each sample, we
determined which set of configurations would mitigate the artifacts used by
that sample, then measured how much of a resource was used during that
sample’s execution. In particular, we measured disk throughput (bytes per
second) and memory utilization (approximated by measuring average free bytes
during execution). We used Mimosa to generate a Pareto front by considering which
subsets of configurations would require which levels of resource to achieve a
particular degree of coverage.
Table IV shows the Pareto front and indicative points for the memory
utilization cost function. As an example, the point with the highest coverage
(i.e., 1432 out of 1535 samples analyzed successfully) required an average of
718MB during execution, while the configurations with lower coverage (e.g.,
1020 samples) used only 316MB of memory. Overall, this graph shows how to
balance the tradeoff between malware analysis tool configurations with respect
to memory usage.
Similarly, Table V shows the Pareto front for the disk throughput cost
function. As before, there is a tradeoff between how many samples are covered
and how much disk usage is required to obtain analyses per sample.
### 6.5 RQ3 Tradeoffs Summary
Mimosa enables finding a Pareto-optimal point that provides accurate stealthy malware
analyses while minimizing the resource allocation required to obtain those
analyses.
TABLE IV: Indicative points in the Pareto front comparing samples covered with memory utilization.

Configuration Set | Samples Covered | Avg. Available Memory (MB)
---|---|---
vbox_conf2, vbox_conf1, qemu_legacy_conf1, qemu_patched_conf1, vmware_conf3 | 1432 | 718
qemu_legacy_conf2, qemu_legacy_conf1, vbox_conf2_guestadditions, vbox_conf1_guestadditions, vmware_conf3 | 1020 | 316
TABLE V: Indicative points in the Pareto front comparing samples covered with disk throughput.

Config Set | Samples Covered | Avg. Disk Write (KBytes/sec)
---|---|---
vbox_conf1, qemu_legacy_conf1, qemu_legacy_conf2 | 1432 | 397
qemu_legacy_conf2 | 1044 | 209
## 7 Discussion
In this section, we discuss (1) potential threats to the validity of the
experimental results, (2) using Mimosa for controlling an adaptive malware analysis
system, and (3) potential future improvements that can be made to cost
functions.
### 7.1 Threats to Validity
First, we characterized artifact families according to conceptual similarity.
The artifact families ultimately inform what structure the corresponding
covering takes. There is no standard method for classifying artifacts in this
manner; the effectiveness or utility of Mimosa could change depending on the
specific assumptions we made about which artifacts are categorically similar.
Second, our experimental approach for RQ2 measured execution time only while
the sample was actively executing. In practice, there are other considerations
that have impact on the overall efficiency of malware analysis (e.g.,
restoring clean virtual disks, reloading the OS image, etc.).
Third, although our evaluation incorporated 1535 stealthy malware samples from
the wild, we produced configurations whose costs were measured in isolation
(e.g., we measured CPU utilization separately from memory utilization).
Additional engineering effort is required to construct a production-quality
end-to-end system that applies the configurations produced by Mimosa to a real
set of hardware.
### 7.2 Remarks on Adaptability
Mimosa takes as input a set of modeled mitigation strategies and associated costs,
and it produces as output a coverage-optimal, low-heuristic-cost array of
strategies. This approach can be extended to adapt over time to changes in the
distribution of stealthy malware. For example, if new artifacts are discovered
or if the costs associated with mitigating each one changes with technology,
our overall approach and algorithms will still be applicable as a tool for
finding cost-optimal analysis configurations.
As a specific example, recent work leveraged “wear and tear” of virtual
machine environments [46]. In essence, malware samples can look for evidence
that an environment is “aged.” An analyst that spins up a vanilla VM image may
fall victim to a sample that detects if the environment is pristine and newly-
created. That is, the perceived “age” of the virtualized environment is the
artifact. Malware campaigns like Dyre and Dridex use heuristics like (1)
investigating the clipboard for evidence of random strings associated with
normal use, and (2) registry keys that track historical use of common programs
(e.g., Microsoft Word). We do not include such artifacts in our prototype
coverage calculation because our dataset did not contain samples that
exploited wear and tear artifacts; however, they can be readily incorporated
by implementing a corresponding mitigation. For example, our prototype
currently moves random files to the Desktop, Recycle bin, and Temp
directories, and it also injects decoy entries in the Registry. We could
introduce this as a full mitigation in our framework: the coverings vector
would be augmented to reflect this new artifact so that it is covered in the
optimally-generated configurations.
### 7.3 Remarks on Cost Functions
Mimosa currently considers optimizing for cost, which can be captured in several
ways: CPU utilization, memory utilization, and runtime overhead with respect
to latency. However, these one-dimensional approaches may admit coverings that
are difficult to interpret. For example, in a cluster of 10 servers, assigning
nine servers to do no mitigation (minimal cost) and one server to run bare
metal (maximal coverage) is a well-formed solution.
We also discussed a second parameter that captures _benefit_ : coverage of
stealthy malware samples is important for acquiring faithful, interpretable
analyses. For example, if we know a mitigation strategy will cover 90% of
stealthy malware, we may be willing to pay a higher cost to use that strategy
because of its overall coverage. On the other hand, a strategy that only
covers 2% of stealthy malware in the wild may be disregarded. While we
examined the cost-benefit space in our evaluation, future work will include a
multidimensional heuristic search to find optimal coverings with respect to
more complex cost functions.
## 8 Related Work
Various projects have focused on detecting and evading analysis systems in
both x86 executables [54, 12, 56, 53] and mobile devices (e.g., Android [32]).
In this section, we discuss this work in three categories: (1) malware
detection using behavioral analysis, (2) malware analysis using virtual
machine infrastructure, and (3) malware analysis using bare-metal machines.
### 8.1 Stealthy Malware Detection
Current stealthy malware analysis techniques generally rely either on human
creativity (e.g., debugging with IDA Pro [28] or OllyDbg [72]) or heavy-weight
analysis tools that incur significant overhead (e.g., MalT [74] or Ether
[18]). Moreover, differencing approaches, such that of as Balzarotti _et al._
[7], work by executing a sample in multiple instrumented environments and use
the difference in runs to determine which artifact is used by the sample,
potentially wasting resources.
Balzarotti _et al._ [7] demonstrate the ability to detect evasive behaviors by
running malware in various runtime environments and comparing their system
calls. Lindorfer _et al._ [41] later employed a similar technique, but used
various malware sandboxes and scored their evasive behaviors. HASTEN [37]
specifically focuses on stalling malware, which is a particularly difficult
evasion technique to analyze because the malware appears benign for an
extended period of time. TriggerScope [25] similarly examines Android programs
that mask their malicious behavior until a certain _trigger_ is observed. Our
technique instead leverages a combination of multiple environments that
separately mitigate different artifact families, providing environments in
which a sample is more likely to execute faithfully.
Our approach is conceptually related to SLIME [14], an automated tool for
disarming anti-sandboxing techniques employed by stealthy malware. SLIME runs
a sample many times, each time configuring the environment to explicitly
expose certain artifacts to the sample. In contrast, our approach seeks to
minimize the total cost of execution (or the resources consumed) to either
identify or analyze the sample under test. In addition, we introduce a novel
structure called a covering that helps identify the optimal configuration for
an analysis system.
### 8.2 Virtual Machine Analysis
Ether [18] is a malware analysis framework based on hardware virtualization
extensions (e.g., Intel VT). It runs outside of the guest operating systems,
in the hypervisor, by relying on underlying hardware features. BitBlaze [58]
and Anubis [4] are QEMU-based malware analysis systems. They focus on
understanding malware behavior, instead of achieving better transparency. V2E
[71] combines both hardware virtualization and software emulation. HyperDbg
[24] uses hardware virtualization support for late launching of VMX modes to
install a virtual machine monitor and runs its analysis code in VMX root
mode. SPIDER [17] uses Extended Page Tables to implement invisible
breakpoints and hardware virtualization to hide its side-effects. DRAKVUF [39]
is another VMI-based system capable of both user and kernel-level analysis.
We note that recent work has investigated changes to the sandboxing
environment to give it the appearance of age or use [46]. For example, a
dearth of Documents, Downloads, event logs, or installed software could be a
hint that the sample is not executing in a real, vulnerable environment.
Although our current prototype does not address samples exhibiting such “age”
checks, as discussed above, we could readily incorporate it. As with other new
or yet-undiscovered artifacts, our overall framework would not change. One
would simply implement a configurable mitigation against that new artifact and
include it as a strategy used by our coverings algorithm.
### 8.3 Bare-metal Analysis
BareBox [35] is a malware analysis framework based on a bare-metal machine
without any virtualization or emulation techniques, which is used for
analyzing user mode malware. Follow up work, BareCloud [36], uses mostly un-
instrumented bare-metal machines, and is capable of analyzing stealthy malware
by detecting file system changes. Willems _et al._ [70] propose a method for
using branch tracing, implemented on a physical CPU, to analyze stealthy
malware. LO-PHI [60] is a system capable of both live memory and disk
introspection on bare-metal machines, which can be used for analyzing stealthy
malware. MalT [74] uses System Management Mode to instrument a bare-metal
system at the instruction level, exposing very few artifacts to the system.
While LO-PHI and MalT both have high deployment overheads, they also expose
very few artifacts to samples under test; thus, either could conceptually
serve as our highest coverage (and highest cost) configuration.
## 9 Conclusion
Stealthy and obfuscated malware is expanding rapidly. As the security arms
race continues, malware authors use increasingly sophisticated techniques to
subvert analysis. The large volume of new malware released every year makes
automated analysis increasingly mandatory to identify and understand new
malware samples. Techniques to address the volume of stealthy malware are
critical.
In this paper, we introduced coverings, a novel way of representing the
problem of analyzing stealthy malware efficiently, and a prototype
implementation called . We studied a broad set of artifacts exposed by
analysis environments and the mitigation strategies required to prevent
malware samples from using those artifacts to subvert detection. We modeled
the mitigations using a partially ordered structure according to the number of
artifacts mitigated and the cost associated with deploying that strategy. We
developed 32 such mitigation strategies. We presented an algorithm that finds
the lowest-cost selection of mitigation strategies to implement while
guaranteeing a total coverage of the artifacts. Finally, we empirically
evaluated using 1535 stealthy malware samples from the wild. We found that can
find mitigation strategies that reduce the overhead and memory utilization
associated with mitigating all artifacts considered.
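The covering selection described above is closely related to weighted set cover. As an illustration only (the strategy names, artifact sets, and costs below are invented, and the paper's algorithm guarantees full coverage rather than using this greedy heuristic), a minimal sketch:

```python
# Greedy weighted set cover as a sketch of choosing mitigation strategies
# that together cover all artifacts at low total cost. All strategy names,
# artifact families, and costs are illustrative, not from the paper.

def greedy_cover(strategies, artifacts):
    """strategies: {name: (cost, set_of_artifacts_mitigated)}.
    Assumes the union of all strategies covers `artifacts`."""
    uncovered = set(artifacts)
    chosen, total_cost = [], 0.0
    while uncovered:
        # Pick the strategy with the lowest cost per newly covered artifact.
        name, (cost, covers) = min(
            ((n, s) for n, s in strategies.items() if s[1] & uncovered),
            key=lambda item: item[1][0] / len(item[1][1] & uncovered))
        chosen.append(name)
        total_cost += cost
        uncovered -= covers
    return chosen, total_cost

strategies = {
    "bare_metal":    (10.0, {"cpuid", "timing", "mac_addr", "bios"}),
    "patched_vm":    (2.0,  {"cpuid", "mac_addr"}),
    "time_dilation": (1.5,  {"timing"}),
    "bios_spoof":    (1.0,  {"bios"}),
}
plan, cost = greedy_cover(strategies, {"cpuid", "timing", "mac_addr", "bios"})
```

Here the greedy pass prefers two cheap targeted mitigations plus a timing fix over the expensive bare-metal configuration, mirroring the paper's observation that total coverage need not mean maximal-overhead analysis.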
## 10 Acknowledgments
We thank Giovanni Vigna, Christopher Kruegel, and Hojjat Aghakhani for
graciously providing the well-labeled corpus of evasive malware samples used
in our evaluation. This work would not be possible without the tireless
engineering effort invested to construct such a dataset, so we are grateful
for members of the community who share data to improve the state-of-the-art.
We also thank the anonymous reviewers for their valuable comments and
suggestions, and the Avira company, Alexander Vukcevic, Director of Protection
Labs and QA, and Shahab Hamzeloofard for helping us with determining
provenance of our malware samples.
We gratefully acknowledge the partial support of NSF (CCF 1908633, 1763674),
DARPA (FA8750-19C-0003, N6600120C4020), AFRL (FA8750-19-1-0501), and the Santa
Fe Institute. Any opinions, findings, and conclusions in this paper are those
of the authors and do not necessarily reflect the views of our sponsors.
The opinions in the work are solely of the authors, and do not necessarily
reflect those of the U.S. Army, U.S. Army Research Labs, the U.S. Military
Academy, or the Department of Defense.
## References
* [1] “ASPack,” http://www.aspack.com, retrieved November 2016.
* [2] “UPX: The Ultimate Packer for eXecutables,” https://upx.github.io, retrieved November 2016.
* [3] B. Amiaux, L. Farey, J.-M. Borello, and T. Rennes, “Icebox: analyse de malwares par introspection de machine virtuelle” [Icebox: malware analysis via virtual machine introspection].
* [4] Anubis, “Analyzing unknown binaries,” http://anubis.iseclab.org.
* [5] M. Auty, A. Case, M. Cohen, B. Dolan-Gavitt, M. H. Ligh, J. Levy, and A. Walters. Volatility framework - volatile memory extraction utility framework. [Online]. Available: http://www.volatilityfoundation.org/
* [6] E. Bachaalany, “Detect if your program is running inside a Virtual Machine,” http://www.codeproject.com/Articles/9823/Detect-if-your-program-is-running-inside-a-Virtual.
* [7] D. Balzarotti, M. Cova, C. Karlberger, and G. Vigna, “Efficient detection of split personalities in malware.” in _Networks and Distributed Systems Security Symposium_ , 2010.
* [8] U. Bayer, P. M. Comparetti, C. Hlauschek, C. Kruegel, and E. Kirda, “Scalable, behavior-based malware clustering.” in _NDSS_ , vol. 9. Citeseer, 2009, pp. 8–11.
* [9] R. Branco, G. Barbosa, and P. Neto, “Scientific but Not Academical Overview of Malware Anti-Debugging, Anti-Disassembly and Anti-VM Technologies,” in _Black Hat_ , 2012.
* [10] A. Bulazel and B. Yener, “A survey on automated dynamic malware analysis evasion and counter-evasion: Pc, mobile, and web,” in _Proceedings of the 1st Reversing and Offensive-oriented Trends Symposium_. ACM, 2017, p. 2.
* [11] X. Chen, J. Andersen, Z. Mao, M. Bailey, and J. Nazario, “Towards an understanding of anti-virtualization and anti-debugging behavior in modern malware,” in _Proceedings of the 38th Annual IEEE International Conference on Dependable Systems and Networks (DSN ’08)_ , 2008.
* [12] X. Chen, J. Andersen, Z. M. Mao, M. Bailey, and J. Nazario, “Towards an understanding of anti-virtualization and anti-debugging behavior in modern malware,” in _Dependable Systems and Networks With FTCS and DCC, 2008. DSN 2008. IEEE International Conference on_. IEEE, 2008, pp. 177–186.
* [13] B. Cheng, J. Ming, J. Fu, G. Peng, T. Chen, X. Zhang, and J.-Y. Marion, “Towards paving the way for large-scale windows malware analysis: Generic binary unpacking with orders-of-magnitude performance boost,” in _Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security_ , ser. CCS ’18. New York, NY, USA: Association for Computing Machinery, 2018, p. 395–411. [Online]. Available: https://doi.org/10.1145/3243734.3243771
* [14] Y. Chubachi and K. Aiko, “Slime: Automated anti-sandboxing disarmament system,” https://www.blackhat.com/docs/asia-15/materials/asia-15-Chubachi-Slime-Automated-Anti-Sandboxing-Disarmament-System.pdf, 2015.
* [15] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” _IEEE Transactions on Evolutionary Computation_ , vol. 6, no. 2, pp. 182–197, 2002.
* [16] Z. Deng, X. Zhang, and D. Xu, “Spider: Stealthy binary program instrumentation and debugging via hardware virtualization,” in _Proceedings of the Annual Computer Security Applications Conference (ACSAC’13)_ , 2013.
* [17] ——, “Spider: stealthy binary program instrumentation and debugging via hardware virtualization,” in _Proceedings of the 29th Annual Computer Security Applications Conference_. ACM, 2013, pp. 289–298.
* [18] A. Dinaburg, P. Royal, M. Sharif, and W. Lee, “Ether: Malware analysis via hardware virtualization extensions,” in _Proceedings of the 15th ACM Conference on Computer and Communications Security (CCS ’08)_ , 2008.
* [19] D. Distler, _Malware Analysis: An Introduction_. SANS Institute, December 2007, available via https://www.sans.org/reading-room/whitepapers/malicious/malware-analysis-introduction-2103.
* [20] B. Dolan-Gavitt, T. Leek, M. Zhivich, J. Giffin, and W. Lee, “Virtuoso: Narrowing the semantic gap in virtual machine introspection,” in _Security and Privacy (SP), 2011 IEEE Symposium on_. IEEE, 2011, pp. 297–312.
* [21] B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, I. Pratt, A. Warfield, P. Barham, and R. Neugebauer, “Xen and the art of virtualization,” in _Proceedings of the ACM Symposium on Operating Systems Principles_, 2003.
* [22] N. Falliere, “Windows anti-debug reference,” http://www.symantec.com/connect/articles/windows-anti-debug-reference, 2010.
* [23] D. Farmer and W. Venema, _Forensic Discover_. Addison-Wesley, 2005.
* [24] A. Fattori, R. Paleari, L. Martignoni, and M. Monga, “Dynamic and Transparent Analysis of Commodity Production Systems,” in _Proceedings of the IEEE/ACM International Conference on Automated Software Engineering (ASE’10)_ , 2010.
* [25] Y. Fratantonio, A. Bianchi, W. Robertson, E. Kirda, C. Kruegel, and G. Vigna, “TriggerScope: Towards Detecting Logic Bombs in Android Apps,” in _Proceedings of the IEEE Symposium on Security and Privacy (S &P)_, San Jose, CA, May 2016.
* [26] Y. Hong, Y. Hu, C.-M. Lai, S. Felix Wu, I. Neamtiu, P. McDaniel, P. Yu, H. Cam, and G.-J. Ahn, “Defining and detecting environment discrimination in android apps,” in _Security and Privacy in Communication Networks_ , X. Lin, A. Ghorbani, K. Ren, S. Zhu, and A. Zhang, Eds. Cham: Springer International Publishing, 2018, pp. 510–529.
* [27] H. Huijgens, A. Van Deursen, L. L. Minku, and C. Lokan, “Effort and cost in software engineering: A comparison of two industrial data sets,” in _Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering_. ACM, 2017, pp. 51–60.
* [28] IDA Pro, www.hex-rays.com/products/ida/.
* [29] B. Jain, M. B. Baig, D. Zhang, D. E. Porter, and R. Sion, “Sok: Introspections on trust and the semantic gap,” in _2014 IEEE symposium on security and privacy_. IEEE, 2014, pp. 605–620.
* [30] X. Jiang, X. Wang, and D. Xu, “Stealthy malware detection through vmm-based out-of-the-box semantic view reconstruction,” in _Proceedings of the 14th ACM conference on Computer and communications security_. ACM, 2007, pp. 128–138.
* [31] ——, “Stealthy malware detection through vmm-based ”out-of-the-box” semantic view reconstruction,” in _Proceedings of the 14th ACM Conference on Computer and Communications Security_ , ser. CCS ’07. New York, NY, USA: Association for Computing Machinery, 2007, p. 128–138. [Online]. Available: https://doi.org/10.1145/1315245.1315262
* [32] Y. Jing, Z. Zhao, G.-J. Ahn, and H. Hu, “Morpheus: Automatically generating heuristics to detect android emulators,” in _Proceedings of the 30th Annual Computer Security Applications Conference_ , ser. ACSAC ’14. New York, NY, USA: ACM, 2014, pp. 216–225. [Online]. Available: http://doi.acm.org/10.1145/2664243.2664250
* [33] Jurriaan Bremer, Thorsten Sick, Rasmus Männa, and Mohsen Ahmadi, “VMCloak,” https://github.com/AdaptiveComputationLab/vmcloak, 2020.
* [34] Kaspersky Lab, “Kaspersky Security Bulletin 2017,” https://media.kaspersky.com/jp/pdf/pr/Kaspersky_KSB2017_Statistics-PR-1045.pdf.
* [35] D. Kirat, G. Vigna, and C. Kruegel, “BareBox: Efficient malware analysis on bare-metal,” in _Proceedings of the 27th Annual Computer Security Applications Conference (ACSAC’11)_ , 2011.
* [36] ——, “Barecloud: Bare-metal analysis-based evasive malware detection.” in _USENIX Security Symposium_ , 2014, pp. 287–301.
* [37] C. Kolbitsch, E. Kirda, and C. Kruegel, “The power of procrastination: detection and mitigation of execution-stalling malicious code,” in _Proceedings of the 18th ACM conference on Computer and communications security_. ACM, 2011, pp. 285–296.
* [38] A. Kopytov, “Draugr—live memory forensics on linux,” http://code.google.com/p/draugr.
* [39] T. K. Lengyel, S. Maresca, B. D. Payne, G. D. Webster, S. Vogl, and A. Kiayias, “Scalability, fidelity and stealth in the drakvuf dynamic malware analysis system,” in _Proceedings of the 30th Annual Computer Security Applications Conference_ , 2014, pp. 386–395.
* [40] Q. Li, Y. Zhang, L. Su, Y. Wu, X. Ma, and Z. Yang, “An improved method to unveil malware’s hidden behavior,” in _Information Security and Cryptology_ , X. Chen, D. Lin, and M. Yung, Eds. Cham: Springer International Publishing, 2018, pp. 362–382.
* [41] M. Lindorfer, C. Kolbitsch, and P. Milani Comparetti, “Detecting environment-sensitive malware,” in _Recent Advances in Intrusion Detection_ , R. Sommer, D. Balzarotti, and G. Maier, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 338–357.
* [42] Malwarebytes Labs, “State of Malware Report 2020,” https://resources.malwarebytes.com/files/2020/02/2020_State-of-Malware-Report.pdf.
* [43] McAfee, “Threats Report: Fourth Quarter 2014,” http://www.mcafee.com/us/resources/reports/rp-quarterly-threat-q4-2014.pdf.
* [44] ——, “Threats Report: March 2016,” http://www.mcafee.com/us/resources/reports/rp-quarterly-threats-mar-2016.pdf.
* [45] N. Miramirkhani, M. P. Appini, N. Nikiforakis, and M. Polychronakis, “Spotless sandboxes: Evading malware analysis systems using wear-and-tear artifacts,” in _2017 IEEE Symposium on Security and Privacy (SP)_ , May 2017, pp. 1009–1024.
* [46] ——, “Spotless sandboxes: Evading malware analysis systems using wear-and-tear artifacts,” in _2017 IEEE Symposium on Security and Privacy (SP)_. IEEE, 2017, pp. 1009–1024.
* [47] Oracle, “VirtualBox,” http://www.virtualbox.com, 2007.
* [48] A. Ortega, “Paranoid fish.” [Online]. Available: http://github.com/a0rtega/pafish
* [49] Y. Oyama, “Trends of anti-analysis operations of malwares observed in api call logs,” _Journal of Computer Virology and Hacking Techniques_ , vol. 14, no. 1, pp. 69–85, 2018.
* [50] X. Pan, X. Wang, Y. Duan, X. Wang, and H. Yin, “Dark hazard: Learning-based, large-scale discovery of hidden sensitive operations in android apps.” in _NDSS_ , 2017.
* [51] N. L. Petroni, J. Aaron, W. Timothy, F. William, and A. Arbaugh, “Fatkit: A framework for the extraction and analysis of digital forensic data from volatile system memory,” _Digital Investigation_ , vol. 3, 2006.
* [52] D. Quist and V. Val Smith, “Detecting the Presence of Virtual Machines Using the Local Data Table,” http://www.offensivecomputing.net/.
* [53] D. Quist, V. Smith, and O. Computing, “Detecting the presence of virtual machines using the local data table,” _Offensive Computing_ , 2006.
* [54] T. Raffetseder, C. Kruegel, and E. Kirda, “Detecting system emulators,” in _Information Security_. Springer Berlin Heidelberg, 2007.
* [55] Reversing Labs, “RLPack,” https://reversinglabs.com, retrieved November 2016.
* [56] J. Rutkowska, “Red Pill,” http://www.ouah.org/Red_Pill.html.
* [57] A. Saberi, Y. Fu, and Z. Lin, “Hybrid-bridge: Efficiently bridging the semantic gap in virtual machine introspection via decoupled execution and training memoization,” in _Proceedings of the 21st Annual Network and Distributed System Security Symposium (NDSS’14)_ , 2014.
* [58] D. Song, D. Brumley, H. Yin, J. Caballero, I. Jager, M. Kang, Z. Liang, J. Newsome, P. Poosankam, and P. Saxena, “Bitblaze: A new approach to computer security via binary analysis,” in _Proceedings of the 4th International Conference on Information Systems Security (ICISS’08)_ , 2008.
* [59] Sonicwall, “Sonicwall Cyber Threat Report 2020,” https://www.sonicwall.com/resources/2020-cyber-threat-report-pdf/.
* [60] C. Spensky, H. Hu, and K. Leach., “LO-PHI: Low Observable Physical Host Instrumentation,” in _Proceedings of 2016 Network and Distributed System Security Symposium (NDSS’16)_ , 2016.
* [61] C. Spensky, H. Hu, and K. Leach, “LO-PHI: Low observable physical host instrumentation,” in _Networks and Distributed Systems Security Symposium 2016 (NDSS 2016)_ , San Diego, CA, February 2016, acceptance rate: 15.8%.
* [62] S. Stefnisson, “Evasive malware now a commodity,” https://www.securityweek.com/evasive-malware-now-commodity, 2018.
* [63] Symantec, “Internet security threat report,” https://www.symantec.com/content/dam/symantec/docs/reports/istr-22-2017-en.pdf, 2017.
* [64] Symantec Labs, “Internet Security Threat Report (ISTR) 2019,” https://docs.broadcom.com/doc/istr-24-2019-en, February 2019.
* [65] R. Tanabe, W. Ueno, K. Ishii, K. Yoshioka, T. Matsumoto, T. Kasama, D. Inoue, and C. Rossow, “Evasive malware via identifier implanting,” in _Detection of Intrusions and Malware, and Vulnerability Assessment_ , C. Giuffrida, S. Bardin, and G. Blanc, Eds. Cham: Springer International Publishing, 2018, pp. 162–184.
* [66] ——, “Evasive malware via identifier implanting,” in _International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment_. Springer, 2018, pp. 162–184.
* [67] VMWare, Inc., “Vmware server,” http://www.vmware.com/products/server, 2008.
* [68] P. L. Wedum, _Malware Analysis: A Systematic Approach_. Norwegian University of Science and Technology, 2008, Master's thesis, available via https://brage.bibsys.no/xmlui//bitstream/handle/11250/261770/-1/347719_FULLTEXT01.pdf.
* [69] F. Westphal, S. Axelsson, C. Neuhaus, and A. Polze, “Vmi-pl: A monitoring language for virtual platforms using virtual machine introspection,” _Digital Investigation_ , vol. 11, pp. S85–S94, 2014.
* [70] C. Willems, R. Hund, A. Fobian, D. Felsch, T. Holz, and A. Vasudevan, “Down to the bare metal: Using processor features for binary analysis,” in _Proceedings of the 28th Annual Computer Security Applications Conference_. ACM, 2012, pp. 189–198.
* [71] L.-K. Yan, M. Jayachandra, M. Zhang, and H. Yin, “V2E: Combining hardware virtualization and software emulation for transparent and extensible malware analysis,” in _Proceedings of the 8th ACM SIGPLAN/SIGOPS Conference on Virtual Execution Environments (VEE’12)_ , 2012. [Online]. Available: http://doi.acm.org/10.1145/2151024.2151053
* [72] O. Yuschuk, “OllyDbg,” www.ollydbg.de.
* [73] L. Zelster, “Mastering 4 stages of malware analysis,” https://zeltser.com/mastering-4-stages-of-malware-analysis/, February 2015.
* [74] F. Zhang, K. Leach, H. Wang, A. Stavrou, and K. Sun, “Using Hardware Features to Increase Debugging Transparency,” in _Proceedings of the 36th IEEE Symposium on Security and Privacy_ , 2015.
Mohsen Ahmadi is a Senior Application Security Engineer. He received his MSc
degree in Computer Science from Arizona State University and his BS from the
University of Isfahan. His main research is focused on program analysis,
improving fuzzing techniques, and embedded security. He is a big open-source
software fanatic and an active security researcher.
Kevin Leach is a postdoctoral researcher and lecturer at the University of
Michigan. He earned the PhD from the University of Virginia in 2016. His
research combines the areas of systems security and software engineering; he
has developed techniques for transparent system introspection, kernel
hotpatching, and stealthy malware analysis.
Ryan Dougherty is an assistant professor in the Department of Electrical
Engineering and Computer Science at West Point. He earned his B.S. and Ph.D.
from Arizona State University in 2019, and was a Visiting Assistant Professor
at Colgate University before joining West Point. His academic interests
include software engineering, theory of computation, and combinatorial design
theory.
Stephanie Forrest is a professor of Computer Science at Arizona State
University, where she directs the Biodesign Center for Biocomputation,
Security and Society. Her interdisciplinary research focuses on the
intersection of biology and computation, including cybersecurity, software
engineering, and biological modeling.
Westley Weimer is a professor of Computer Science and Engineering at the
University of Michigan. His main research interests include static, dynamic,
and medical imaging-based techniques to improve program quality, fix defects,
and understand how humans engineer software. He received a BA degree in
computer science and mathematics from Cornell and MS and PhD degrees from
Berkeley.
# Quantum-accurate magneto-elastic predictions with classical spin-lattice
dynamics
Svetoslav Nikolov Computational Multiscale Department, Sandia National
Laboratories, P.O. Box 5800, MS 1322, Albuquerque, NM 87185 Mitchell A. Wood
Computational Multiscale Department, Sandia National Laboratories, P.O. Box
5800, MS 1322, Albuquerque, NM 87185 Attila Cangi Center for Advanced
Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf, 02826
Görlitz, Germany Jean-Bernard Maillet CEA - DAM, DIF, Arpajon Cedex F-91297,
France Université Paris-Saclay, CEA, LMCE, 91680 Bruyères-le-Châtel, France
Mihai-Cosmin Marinica Université Paris-Saclay, CEA, Service de Recherches de
Métallurgie Physique, Gif-sur-Yvette 91191, France Aidan P. Thompson
Computational Multiscale Department, Sandia National Laboratories, P.O. Box
5800, MS 1322, Albuquerque, NM 87185 Michael P. Desjarlais Sandia National
Laboratories, P.O. Box 5800, MS 1322, Albuquerque, NM 87185 Julien Tranchida
Computational Multiscale Department, Sandia National Laboratories, P.O. Box
5800, MS 1322, Albuquerque, NM 87185<EMAIL_ADDRESS>
###### Abstract
A data-driven framework is presented for building magneto-elastic machine-
learning interatomic potentials (ML-IAPs) for large-scale spin-lattice
dynamics simulations. The magneto-elastic ML-IAPs are constructed by coupling
a collective atomic spin model with an ML-IAP. Together they represent a
potential energy surface from which the mechanical forces on the atoms and the
precession dynamics of the atomic spins are computed. Both the atomic spin
model and the ML-IAP are parametrized on data from first-principles
calculations. We demonstrate the efficacy of our data-driven framework across
magneto-structural phase transitions by generating a magneto-elastic ML-IAP
for $\alpha$-iron. The combined potential energy surface yields excellent
agreement with first-principles magneto-elastic calculations and quantitative
predictions of diverse materials properties including bulk modulus,
magnetization, and specific heat across the ferromagnetic-paramagnetic phase
transition.
## Introduction
Magnetism strongly influences thermomechanical properties in a large variety
of materials, such as single-element magnetic metals [1, 2], steels [3], high-
entropy alloys [4, 5], nuclear fuels such as uranium dioxide [6], magnetic
oxides [7, 8], and numerous other classes of functional materials [9]. Despite
the critical role of magnetism in the aforementioned materials classes,
modeling efforts to study the interplay between structural and magnetic
properties have been notably lacking. Furthermore, there are unanswered
scientific questions regarding the significance of magnetism in matter that is
shock-compressed [10, 11] or exposed to strong electromagnetic fields such as
in coherent lights sources[12, 13], pulsed power and high magnetic fields
facilities [14, 15]. Properties of interest include phase transitions, thermal
stability of magnetic defects, and magneto-mechanical couplings, but studying
many of these is challenging or impossible with state-of-the-art computational
tools.
Figure 1: Constant pressure heat capacity of $\alpha$-iron versus temperature.
The black triangles denote experimental measurements [16, 17], the red squares
our simulation results, and black dashed line indicates the experimental Curie
transition temperature. This illustrates the well-known ferromagnetic-
paramagnetic phase transition, where the heat capacity diverges at the Curie
temperature.
A prime but simple example of the computational advance made herein is the
heat capacity of $\alpha$-iron displayed in Figure 1. The experimental
measurement of the heat capacity Cp diverges at the magnetic Curie transition,
characteristic of a second-order phase transition [18]. Without a scalable
coupled spin-lattice dynamics simulation environment that properly accounts
for thermal expansion and the magnetic contribution to the pressure,
reproducing the divergence of Cp (and of other thermomechanical properties)
at the critical point is not possible.
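One common way to obtain a curve like Figure 1 from simulation output is to finite-difference the sampled enthalpy with respect to temperature at constant pressure, Cp = dH/dT. The sketch below uses synthetic enthalpy data with an inflection near 1045 K to make Cp peak at the "transition"; it is illustrative only and not the paper's iron data or protocol.

```python
import numpy as np

# Estimate Cp(T) = dH/dT at constant pressure by finite differences over
# enthalpies sampled on a temperature grid. The enthalpy model below is
# synthetic (a smooth ramp plus a tanh step near 1045 K), for illustration.

def heat_capacity(T, H):
    """Central-difference dH/dT on a (possibly non-uniform) grid."""
    return np.gradient(H, T)

T = np.linspace(300.0, 1300.0, 21)  # K
H = 25.0 * T + 4.0e4 * np.tanh((T - 1045.0) / 60.0)  # schematic enthalpy
Cp = heat_capacity(T, H)

print(T[np.argmax(Cp)])  # Cp peaks near the synthetic transition temperature
```

In an actual spin-lattice simulation, H(T) would come from constant-pressure runs at each temperature, which is precisely why the pressure control discussed above matters for recovering the divergence.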
Accurate numerical simulations are critical for enabling technological
advances, as they shape our fundamental understanding of the underlying solid
state physics that dictates material behavior. Developing high-fidelity
models, however, is challenging, because it necessitates capturing physical
phenomena
that occur across several length and time scales. This can only be achieved
with sufficiently accurate multiscale simulation tools [19, 20], which is the
focus of this work.
Classical molecular dynamics (MD) simulations [21] provide a useful framework
for multiscale modeling by leveraging interatomic potentials (IAPs) to
represent the dynamics of atoms on a Born-Oppenheimer potential energy surface
(PES) [22]. By utilizing massively parallel algorithms [23] and long time-
scale methodologies[24], MD enables bridging _first-principles_ with
continuum-scale simulations [25].
The incorporation of machine learning (ML) techniques into the construction of
interatomic potentials has led to classical MD simulations that approach the
accuracy of _first-principles_ methods. A large number of these highly
accurate ML-IAPs [26, 27, 28, 29, 30, 31, 32] have been developed. In general,
they are parameterized on training data (configuration energy, atomic forces)
from _first-principles_ methods like density functional theory (DFT) [33] and
utilize different flavors of ML model forms to construct the PES. While they
have proven to be useful for large-scale simulations of materials properties
[34, 35], further progress in multiscale modeling is hampered by the
limitation of ML-IAPs to non-magnetic materials phenomena. Even with highly
accurate ML-IAPs, state-of-the-art MD simulations cannot reproduce the
divergent behavior of Cp near the critical point (Figure 1) because they fail
to account for the magnetic degrees of freedom [36].
Coupling atomic spin dynamics with classical MD has been pioneered by Ma _et
al._ [37, 38, 39]. In this approach, a classical magnetic spin is assigned to
each atom in addition to its position, leading to a 6N-dimensional PES (5N if
the magnetic spin norms are fixed) instead of the common 3N-dimensional PES in
classical MD:
$E=\sum_{i=1}^{N}\epsilon\left(\{\bm{r}_{ij},\bm{s}_{i}\}\right)\,,$ (1)
where $\bm{r}_{ij}=\bm{r}_{i}-\bm{r}_{j}$ denotes the relative position
between atoms i and j, $\bm{s}_{i}$ the classical spin assigned to atom $i$,
and $N$ the number of atoms in the system. In most classical spin-lattice
calculations, the 6N-dimensional PES is constructed by introducing an atomic
spin model on top of a mechanical IAP [37]. For example, a common approach is
to combine a distance-dependent Heisenberg Hamiltonian with an embedded-atom-
method (EAM) potential [40, 39].
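The combination of a distance-dependent Heisenberg term with a mechanical IAP can be sketched for a two-atom toy system. The Lennard-Jones mechanical term and the exponential exchange J(r) below are illustrative stand-ins, not the fitted forms used in this work:

```python
import numpy as np

# Toy spin-lattice energy: E = E_mech + E_spin, with a Lennard-Jones
# stand-in for the mechanical IAP and a distance-dependent Heisenberg
# exchange J(r). All functional forms and parameters are illustrative.

def lj(r, eps=1.0, sigma=1.0):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def exchange(r, J0=0.1, r0=1.0, alpha=2.0):
    """Exchange coupling decaying with interatomic distance."""
    return J0 * np.exp(-alpha * (r - r0))

def spin_lattice_energy(pos, spins):
    """E = sum_{i<j} [ V(r_ij) - J(r_ij) s_i . s_j ], ferromagnetic for J > 0."""
    E = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = np.linalg.norm(pos[i] - pos[j])
            E += lj(r) - exchange(r) * np.dot(spins[i], spins[j])
    return E

pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
aligned = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
anti = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])

# Aligned spins lower the energy when J > 0 (ferromagnetic coupling).
print(spin_lattice_energy(pos, aligned) < spin_lattice_energy(pos, anti))
```

Because J depends on r, forces on the atoms acquire a magnetic contribution (the gradient of the exchange term), which is exactly the coupling between lattice and spin degrees of freedom that the 6N-dimensional PES encodes.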
While these prior approaches recover experimental properties on a qualitative
level [41, 42], their combined representation of phononic and magnetic degrees
of freedom is not sufficiently consistent for providing quantitative
predictions at the level of _first-principles_ results. More recently, Ma _et
al._ developed a magneto-elastic IAP for magnetic iron based on data from
_first-principles_ calculations [43]. However, this remained an isolated
attempt as there is no general methodology for generating a magneto-elastic
PES in a classical context that enables large-scale spin-lattice dynamics
simulations for any magnetic material.
In this work, we overcome this methodological obstacle by providing a data-
driven framework for generating magneto-elastic ML-IAPs that (1) provide a
consistent representation of both mechanical and magnetic degrees of freedom
and (2) achieve near _first-principles_ accuracy. We refer to our new class of
IAPs as "magneto-elastic ML-IAPs" as they generate a consistent PES accurately
representing the magnetic degrees of freedom and the interplay between
magnetic and elastic phenomena. Our framework couples an atomic spin model
(Heisenberg Hamiltonian) with an ML-IAP and provides a unified magneto-elastic
PES which yields the correct mechanical forces on the atoms in the MD
framework. The Heisenberg Hamiltonian is parameterized with data from DFT
spin-spiral calculations at different degrees of lattice compression. In
constructing the ML-IAP, we leverage the flexible and data-driven spectral
neighbor analysis potential (SNAP) methodology [32] which is trained on a
database of magnetic configurations generated using DFT calculations.
We apply our framework to generate a magneto-elastic ML-IAP for the $\alpha$
phase of iron. We demonstrate that our potential is transferable to an
extended area of the phase diagram, corresponding to a temperature and
pressure range of 0 to 1200 K and 0 to 13 GPa (up to the $\alpha\to\gamma$ and
$\alpha\to\epsilon$ transitions, respectively). The Curie temperature, which
experimentally occurs at approximately 1045 K, lies within this parameter
space. After presenting our training workflow, the "Results" section will
probe the "quantum-accuracy" of our magneto-elastic ML-IAP by performing
magneto-static comparisons to _first-principles_ measurements. We then stress
that our generated magneto-elastic ML-IAP can also be directly used in the
LAMMPS package [23] to perform magneto-dynamic simulations that take into
account both the thermal expansion of the lattice and magnetic pressure due to
spin disorder. This enables us to maintain a constant ambient pressure
throughout all calculations of thermomechanical properties, consistent with
conditions prevalent in experiments. As illustrated in Figure 1, our framework
allows us to perform the first pressure-controlled quantitative prediction of
the critical behavior across a second-order phase transition within a
classical spin-lattice dynamics simulation.
## Results
Figure 2: Magneto-elastic ML-IAP training workflow. A training set of DFT
calculations is partitioned into those that train the SNAP interatomic
potential and those that train the spin Hamiltonian, respectively. A non-
magnetic interatomic potential is fit to configuration energies and atomic
forces after the spin Hamiltonian contribution is subtracted and is validated
against magneto-elastic properties computed in LAMMPS. Optimization of the
spin Hamiltonian and interatomic potential parameters is handled by DAKOTA.
In this section we outline our advancements in magnetic materials modeling. We
first present our training workflow and subsequently assess our results by
comparing both static and dynamic properties in $\alpha$-iron against _first-
principles_ calculations and experiments.
Figure 2 displays our training workflow. Further details to each box in this
diagram are presented as a subsection in the "Methods" section. All atomic
configurations in the training set result from _first-principles_ calculations
performed with the same DFT setup (same pseudo-potential and energy cutoff,
similar k-point densities) as detailed in the "Methods" section. In contrast
to traditional force-matching approaches in the development of classical IAPs,
we treat the magnetic and phononic degrees of freedom in the PES in a
consistent and unified manner, as indicated by the exchange of information
between spin Hamiltonian and SNAP potential parametrization steps. After
parameterizing our atomic spin Hamiltonian by leveraging DFT spin-spiral
results, its energy, forces, and stress contributions are subtracted from each
atomic configuration in the _first-principles_ training set. The ML-IAP is
then trained to reproduce the non-magnetic component of the _first-principles_
data. Finally, both components of the magneto-elastic PES are recombined to
construct a unified magneto-elastic ML-IAP that is consistently trained on
_first-principles_ data. Optimization is handled by the DAKOTA software
package[44] in both fitting steps. For the SNAP component of the potential,
DAKOTA optimizes the radial cutoff of the interaction along with the weights
of each training data set (energy and force weights) to generate different
candidate potentials. Those candidate potentials are then recombined with the
spin Hamiltonian and tested against selected objective functions (mean-
absolute errors (MAEs) in lattice constants, cohesive energies, elastic
constants, forces and total energies). Table 1 summarizes the different groups
of training data, the optimal weights obtained for each of those groups, and
the corresponding energy and force MAEs. The target values for the objective
functions are based on both experimental and DFT data, as outlined in Table 2.
Objective function evaluations are done within LAMMPS [23].
This data-driven workflow is the critical innovation that enables a leap
forward in predictive simulations of magnetic materials. Magnetic and
phononic contributions to the PES are taken into account explicitly and any
miscounting is avoided (for example, no double counting of the magnetic energy
or contribution to the pressure). The obtained magneto-elastic ML-IAP can
directly be used to run spin-lattice calculations in LAMMPS [23, 45, 40].
### Magneto-Static Accuracy
Figure 3: Plots of the equation of state data from _first-principles_
calculations (VASP computations) and our magneto-elastic ML-IAP (LAMMPS
computations) for seven different spin-spirals: a) the $\Gamma$ point, b)
vectors along the $\Gamma H$ high-symmetry line, and c) vectors along the $\Gamma P$
high-symmetry line. Visualizations of the corresponding spin-spiral supercells
and associated q-vectors are shown to the right of and above each plot,
respectively.
We first assess the quantitative agreement of our magneto-elastic ML-IAP by
comparing with DFT results where magnetic order and elastic deformations are
coupled. This is done by leveraging a particular subset of spin configurations
referred to as spin-spirals, for which the energy and corresponding pressure
can be evaluated from both DFT and classical magneto-elastic potential
calculations. Details about definition and computation of spin-spirals can be
found in the "Methods" section. Equation-of-state calculations (energy and
pressure versus volume) are performed at the $\Gamma$ point (corresponding to
the purely ferromagnetic state) and for spin-spirals corresponding to
q-vectors along the $\Gamma$H and $\Gamma$P high-symmetry lines. The
calculations at the $\Gamma$ point represent the magnetic ground state and,
hence, serve as a point of reference for the spin spiral calculations. The
geometric orientation of the various computed spin spirals is visualized in
Figure 3. The first set ($q=0.01$ along $\Gamma H$ and $q=0.07$ along $\Gamma
P$) represents "long" spirals, close to the $\Gamma$ point, the second set
($q=0.1$ along $\Gamma H$ and $q=0.14$ along $\Gamma P$) represents spirals with
intermediate periodicity, and the last set ($q=0.2$ along $\Gamma H$ and
$q=0.21$ along $\Gamma P$) is chosen close to the borders of the magnetic
training set (see red demarcation lines in Figure 5 in the "Methods" section).
The DFT results are obtained by leveraging the generalized Bloch theorem,
whereas our classical spin-lattice calculations were performed by generating
the corresponding supercells (details given in the "Methods" section).
Excellent agreement between our classical spin-lattice model and DFT is
achieved at the $\Gamma$ point and for the first two q-vectors on each high-
symmetry line ($q=0.01$ and $q=0.1$ along $\Gamma H$, $q=0.07$ and $q=0.14$
along $\Gamma P$) in the pressure range relevant for the $\alpha$-phase of
iron (up to 13 GPa, which corresponds to the $\alpha\to\epsilon$ transition).
At higher q-vector values, the
energy and pressure predictions of our atomic spin-lattice model still agree
reasonably well with the DFT calculations. The observed deviation from the DFT
results can be explained by the limitations of our atomic spin-lattice model:
as both the pressure and the relative angle between neighboring spins
increase, fluctuations of the atomic spin norms become more important. As
discussed in the "Methods" and "Discussion" sections, these are not included
in the Hamiltonian of our atomic spin-lattice model.
### Magneto-Dynamic Accuracy
Figure 4: Plots a-f show magnetoelastic data obtained with our magneto-elastic
ML-IAP. The green, blue, and red markers indicate the choice of equilibration
conditions: "fixed-volume conditions" (FVC), "pressure-controlled conditions"
(PCC), and "pressure-controlled and magnetization-controlled conditions"
(PCMCC), respectively. In all plots, experimental data (extracted from five
different references [46, 17, 16, 47, 48]) is denoted by the filled triangles
($\blacktriangle$), and the dotted black lines represent the experimental
Curie temperature. Plots a-b) show magnetization and specific heat comparisons
between different ensembles and experiments. The light blue region in (b)
indicates the low-temperature regime $T\lesssim$ 250 K where quantum effects
reduce the experimental heat capacity below the classical Dulong-Petit
limiting value of $3R$ [49]. Plot c) illustrates how the lattice expands with
temperature. An inherent offset exists between our model (trained to match the
DFT data at 0 K) and experimental measurements. Plots d-f show (d) the bulk
modulus, (e) the $(c_{11}-c_{12})/2$ shear constant, and (f) the $c_{44}$
shear constant for the three aforementioned sets of conditions.
Turning now to spin-lattice dynamics calculations based on our magneto-elastic
ML-IAP (as detailed in the "Methods" section), we assess the quantitative
accuracy with respect to experimental measurements of changes in magnetic and
thermoelastic properties as the material is heated. In making this comparison,
it is necessary to choose which thermodynamic state variables will be held
fixed and which will be allowed to vary with temperature. Spin-lattice
dynamics algorithms have been developed for simulations in a canonical
ensemble (CE) which preserves the number of particles, the volume, and the
temperature in the system [39]. Our first set of simulation conditions,
referred to as "fixed-volume conditions" (FVC), hold the volume fixed while
running dynamics in the CE at specified values of the lattice and spin
temperatures. A disadvantage of this choice is that the pressure steadily
increases as heat is added to the material, in contradiction to the
experimental observations, which are conducted at constant pressure. To date,
no isobaric spin-lattice algorithm (one that preserves the system's pressure
rather than its volume) has been developed. However, our methodology as
implemented in LAMMPS enables us to compute the magnetic contribution to the
pressure. By alternating thermalization (coupled spin-lattice dynamics in a
CE) and pressure equilibration (frozen spin configuration in an isobaric
ensemble) steps, it is possible to control the pressure of our spin-lattice
system. Hence, we refer to calculations performed in this pressure-controlled
CE as "pressure-controlled conditions" (PCC). In both conditions, the
temperature of the spin and lattice subsystems is set using two separate
Langevin thermostats (one acting on the spins, the other on the lattice) [39].
Finally, this enables us to define a third set of conditions: in addition to
controlling the pressure, the spin thermostat can be set to match a given
magnetization value (i.e., the experimental magnetization) rather than a
temperature. We refer to this as "pressure-controlled and magnetization-
controlled conditions" (PCMCC). Figure 6 in the "Methods" section displays the
different definitions of the spin temperature and the evolution of the
pressure for those three different conditions.
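As a schematic illustration, the alternating scheme behind the PCC can be sketched as follows. This is a toy Python model: the hypothetical `thermal_pressure` equation of state stands in for the actual LAMMPS spin-lattice and isobaric steps, and all parameter values are illustrative.

```python
# Toy sketch of the alternating pressure-control (PCC) scheme: thermalize at
# fixed volume, then relax the cell until the pressure matches a target.
# The equation of state below is hypothetical, not the alpha-iron model.

def thermal_pressure(volume, temperature, v0=1.0, b0=100.0, alpha=0.01):
    """Hypothetical equation of state (GPa): elastic term plus thermal term."""
    return -b0 * (volume - v0) / v0 + alpha * temperature

def pressure_controlled_step(volume, temperature, target_p=0.0,
                             v0=1.0, b0=100.0, alpha=0.01):
    # 1) "Thermalization": coupled spin-lattice dynamics at fixed volume
    #    builds up a thermal + magnetic pressure contribution.
    p_before = thermal_pressure(volume, temperature, v0, b0, alpha)
    # 2) "Pressure equilibration": with the spin configuration frozen,
    #    rescale the cell so the pressure matches the target
    #    (here we solve thermal_pressure(v, T) = target_p for v directly).
    volume = v0 * (1.0 + (alpha * temperature - target_p) / b0)
    return volume, p_before

volume = 1.0
for temperature in (100.0, 300.0, 500.0):
    volume, p_before = pressure_controlled_step(volume, temperature)

residual = thermal_pressure(volume, 500.0)  # pressure after the last step
```

In the real workflow both sub-steps are molecular/spin dynamics runs rather than closed-form solves, but the control structure is the same: the pressure measured after thermalization is driven back to the target before the next temperature is sampled.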
In practice, FVC, PCC, and PCMCC only differ in their equilibration conditions
(control of pressure and/or magnetization), as each of the corresponding
simulations is performed in a canonical ensemble. We illustrate the
predictive capability of our magneto-elastic ML-IAP in $\alpha$-iron for these
equilibration conditions in Figure 4.a-f. The
agreement of the following magneto-elastic properties with experimental
results is assessed: magnetization (Figure 4.a), heat-capacity Cp (Figure
4.b), thermal expansion (cell volume on Figure 4.c), bulk modulus (Figure
4.d), and two shear constants, $(c_{11}-c_{12})/2$
and $c_{44}$ (Figure 4.e-f). The "Spin-Lattice Dynamics"
subsection of the "Methods" section details the computation of those
temperature-dependent elastic constants.
We first work under the FVC, keeping a constant volume and equal spin and
lattice temperatures (Figure 4.c and Figure 6). At constant volume, our model
predicts a Curie temperature of approximately 716K (Figure 4.a). Specific heat
calculations shown in Figure 4.b were performed by computing the derivative of
the internal energy, taking both the lattice and magnetic contributions into
account. The SNAP contribution (lattice only) was first isolated and
determined to be a constant value of 26.4 $\rm{J\,mol^{-1}\,K^{-1}}$, in good
agreement with the Dulong-Petit value of $3R$ [49]. The magnetic contribution
offsets the total specific heat at low temperature, as the magnetization
steadily decreases (thus steadily increasing the magnetic energy). Also at low
temperature, deviation between simulations and experiment (highlighted by the
semi-transparent blue region in Figure 4.b) occurs due to quantum effects
which reduce the experimental heat capacity below the $3R$ value. The FVC
heat capacity is determined at constant volume; we nevertheless use the symbol
Cp on the axis label because the enhanced simulations described below are
conducted at constant pressure. In those constant-volume
conditions, the pressure evolution with temperature increase is substantial
(up to 12 GPa, almost corresponding to the $\alpha\to\epsilon$ transition, as
can be seen on Figure 6), which has a strong impact on the underlying elastic
properties. Interestingly, at the Curie temperature (here 716K), the
increasing pressure exhibits an inflection point, confirming the importance of
spin fluctuations on the thermoelastic properties. The temperature dependence
of three elastic constants is shown in Figure 4.d-f. For the bulk modulus, FVC
does not agree well with experimental data, especially at higher temperatures.
The FVC results tend to overestimate the stiffness, which most likely arises
from the build-up of thermal stresses in the material. Under these conditions
a nearly temperature-invariant $c_{44}$ response is predicted, which is in strong
contrast to trends in experiment. Despite these shortcomings, the FVC
calculations actually match the experimental data for shear constant
$(c_{11}-c_{12})/2$ relatively well throughout the
entire temperature range. In general, the fixed volume assumption made under
FVC fails to account for thermal expansion, leading to incorrect elastic
predictions.
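The specific-heat evaluation used above, differentiating the internal energy with respect to temperature, can be sketched as follows; the $E(T)$ data here are synthetic placeholders, not simulation output.

```python
import numpy as np

# Sketch of the specific-heat evaluation: C = dE/dT by finite differences.
# The internal-energy curve below is a synthetic stand-in for the
# spin-lattice simulation output, not the actual alpha-iron data.

R = 8.314  # gas constant, J mol^-1 K^-1

temperatures = np.linspace(100.0, 1000.0, 10)  # K, uniform 100 K spacing
# Hypothetical internal energy per mole: harmonic lattice term (3RT) plus a
# small quadratic term mimicking the magnetic energy growing as the
# magnetization decays with temperature.
energy = 3.0 * R * temperatures + 0.05 * R * temperatures**2 / 1000.0

# Specific heat as the temperature derivative of the internal energy;
# the lattice-only part recovers the Dulong-Petit value 3R.
heat_capacity = np.gradient(energy, temperatures)  # J mol^-1 K^-1
```

With real simulation data the same `np.gradient` call applies; the interesting structure (the discontinuity at the Curie transition) then appears in the magnetic part of $E(T)$.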
We correct this shortcoming of the model by working under PCC, which allows
for thermal expansion. As can be seen on Figure 4.c, the cell volumes are
relaxed at each finite temperature, until the pressure in the system drops to
approximately 0 GPa. As shown in Figure 4.a, the thermal expansion incorrectly
moves the onset of Curie transition to approximately 536K. As the average
interatomic distance increases, the strength of the exchange interaction is
lowered, thus decreasing the transition temperature. The computed heat-
capacity (Figure 4.b) now corresponds to the derivative of the free energy,
and to an actual Cp measurement. However, as in the FVC, the poor agreement
between the experimental and computed magnetization evolution leads to an
offset in the initial Cp, which does not match the Dulong-Petit value at low
temperature. The PCC fares better in reproducing the experimental bulk modulus
up to the Curie transition (no hardening observed). PCC also does better in
terms of the shear constant $c_{44}$, as it is able to reproduce the thermal
softening seen in experiments. However, for the shear constant
$(c_{11}-c_{12})/2$, PCC underestimates the extent
of the thermal softening. Overall, PCC does better than FVC in terms of
elastic properties, but deviates more in terms of magnetic predictions
compared to experiment. By shifting the Curie transition towards lower
temperatures, it reduces the range of validity of our elastic calculations.
In order to improve the magnetic predictions of $\alpha$-iron, we finally
consider the PCMCC scheme. In addition to allowing for thermal expansion
similarly to the PCC, we also set the spin thermostat temperature in order to
reproduce the experimental magnetization. Below the Curie transition, the spin
temperature increases more slowly than the lattice temperature, while above
the Curie transition, it increases at the same rate as the lattice temperature
(see Figure 6 in the "Methods" section). Figure 4.a shows that the obtained
magnetization under PCMCC closely matches that of experiment. Most
prominently, the resulting Cp agrees well with experiments (Figure 4.b). The
Dulong-Petit value is recovered at low temperature, and the Cp discontinuity
at the Curie transition is well captured. The thermal expansion trend is also
in much better agreement with experiments, with very comparable slopes between
approximately 200 and 750 K (Figure 4.c). Up to approximately 600 K, PCMCC
agrees very well with the experimental values for
$(c_{11}-c_{12})/2$ (Figure 4.e), but at 800-1000 K a
slight hardening is observed, which contradicts experimental data. For the
bulk modulus, PCMCC correctly predicts the nearly linear trend up to the Curie
temperature.
We note that in all three sets of conditions, a rapid increase of about 25-30
GPa in the bulk modulus is observed as we move across the critical point. This
jump was found to be strongly impacted by the underlying mechanical potential.
The prediction accuracy could possibly be improved by including additional,
finite-temperature objective functions in the fitting procedure. The PCMCC
prediction of the shear constant $c_{44}$ closely matches the PCC data. This
indicates that $c_{44}$ is not impacted significantly by the
spin dynamics. For both pressure controlled conditions (PCC and PCMCC) the
maximum deviation from experiments occurs near 700K and is approximately 14%.
## Discussion
We presented a data-driven framework for automated generation of magneto-
elastic ML-IAPs which enable large-scale spin-lattice dynamics simulations for
any magnetic material in LAMMPS. This framework was demonstrated by generating
a robust magneto-elastic ML-IAP for $\alpha$-iron. First we investigated the
magneto-static accuracy (energy and pressure) with respect to equivalent
_first-principles_ calculations. It was demonstrated that the generated
magneto-elastic ML-IAP (which represents the corresponding 5N-dimensional
PES) is in close agreement with _first-principles_ magneto-elastic
calculations. This was achieved by properly partitioning the PES into magnetic
and mechanical degrees of freedom. Subsequently, we investigated the magneto-
dynamic accuracy by comparing predicted finite temperature magneto-elastic
properties (magnetization, heat-capacity, thermal-expansion, bulk modulus, and
shear constants) across the ferromagnetic-paramagnetic phase transition from
spin-lattice dynamics simulations against data from experiments. In the course
of this, we analyzed the choice of simulation conditions (control of pressure
and magnetization) and highlighted the importance of thermal and magnetic
pressure contributions. This is an important advance over traditional
classical magnetization dynamics methods, where contributions from thermal
expansion or spin pressure due to disorder are neglected. We demonstrated that
spin-lattice dynamics simulations with controlled pressure and constrained
magnetization yield qualitative agreement with the measured magneto-elastic
properties.
Our framework enables predictions of critical properties across the second-
order phase transition within classical spin-lattice dynamics simulations,
such as the divergent behavior of the heat capacity around the Curie
temperature (Figures 1 and 4.b). We provide a more comprehensive perspective on
our results by comparing them within the context of other first-principles and
classical methods. At low temperature, _first-principles_ methods can capture
the electronic component of the heat-capacity, up to the Dulong-Petit value
[50, 49] (the difference with our model is highlighted by the blue area on
Figure 4.b). However, computing Cp across the Curie transition requires a
dynamic treatment of large spin-ensembles whose calculation is computationally
expensive in terms of _first-principles_ methods. Classical IAPs do not
explicitly treat magnetic degrees of freedom and, thus, cannot reproduce the
effects of this magnetic phase-transition [51]. An empirical model which is
based on _first-principles_ calculations and accounts for electronic, phononic
and magnetic degrees of freedom gave excellent agreement with the experimental
Cp curve of $\alpha$-iron up to the Curie temperature [52]. However, this
model does not extend above the Curie temperature, does not account for the
pressure generated by the corresponding spin configurations, and cannot be
easily extended to other thermomechanical properties. Thus, over a temperature
range from about 250 K to 1200 K, our model provides a set of very good
predictions, obtained at the computational cost of classical MD calculations
only.
We conclude the discussion of our results by pointing out limitations of the
present method and future prospects. First, note that the agreement with the
experimental Curie temperature ($T_{c}\approx 716$ K in a fixed-volume
calculation) could have been improved by parameterizing the spin potential on
a smaller range of the high-symmetry lines (see Figure 5), or by adding an
objective function aimed at matching the experimental value in the spin-
potential fitting procedure. However, this additional constraint would have
worsened the agreement of our model with the DFT energy and pressure results
(as displayed on Figure 3) and would contradict the overall objective of this
work.
For temperatures below approximately 250 K, our classical framework cannot
access the quantized free energy, and is thus unable to accurately reproduce
the trends of the quantities that are its derivatives (Cp, elastic constants,
…). This reduces the agreement with experiments of the magneto-dynamic
accuracy measurements displayed in Figure 4 at low temperature, and can be
seen as a limitation of our classical approach [53].
Another limitation of our work lies in the simplicity of the spin Hamiltonian
model used. Extended spin Hamiltonians, such as spin-cluster expansions, might
be a promising route to improving the accuracy of the magnetic component of
the PES by both accounting for the fluctuation of the magnetic moment
magnitudes and many-body spin interactions [54, 55]. A straightforward
extension of this work could combine recently developed extended spin
Hamiltonians with _first-principles_ studies, and apply our formalism to
extend our $\alpha$-iron magneto-elastic ML-IAP to account for defect
configurations [56, 57], Cr clustering [58, 59], and magneto-structural phase-
transitions [11, 60].
Enhanced magnetic thermostats have also been proposed in order to better match
the experimental magnetic transition versus temperature [61, 62]. Such
thermostats could be implemented in LAMMPS and used to replace the
magnetization-controlled conditions defined in the "Results" section. This
could extend the range of validity of our framework to areas of phase diagrams
where the magnetization distribution is not well measured (for example in the
$\epsilon$ phase of iron).
A recent study added a magnetic contribution to the set of descriptors used in
a moment-tensor ML-IAP [63]. Although this approach does not explicitly
simulate the magnetization dynamics (and its effects on thermomechanical
properties), the authors demonstrated remarkable improvement in terms of error
convergence. At this stage of our work, we believe improving the modeling of
the magnetic component of the PES remains our first priority (and thus
implementing and fitting improved spin Hamiltonians, as discussed above).
However, depending on the success of this first effort, this complementary
approach could be leveraged to improve the quantum-accuracy of our magneto-
elastic ML-IAPs.
In summary, we have presented a new computational framework for near quantum-
accuracy simulations of magneto-elastic materials properties. By leveraging
the flexibility of ML-IAPs, our data-driven workflow enables modeling of the
interplay between magnetic and phononic dynamics for a large class of magnetic
materials. Furthermore, our straightforward connection to the LAMMPS package
makes it possible to perform large-scale quantitative magneto-elastic
predictions over controlled pressure and temperature spaces, and hence to
study hitherto unexplored magneto-dynamic properties of materials.
## Methods
### Density functional theory calculations
Parameterizing both the ML-IAP and the magnetic Heisenberg Hamiltonian relies
on data computed using spin-dependent DFT calculations. They were performed
using VASP [64, 65]. In all calculations the PBE [66] exchange-correlation
functional was employed. We used PAW pseudopotentials [67] with 8 valence
electrons and a core radius of $r_{c}=2.3\,a_{\mathrm{B}}$. The plane wave cutoff was set
to $320$ eV and the convergence in each self-consistency cycle was set to
$10^{-8}$. The Fermi-Dirac smearing scheme with a width of 0.026 eV was used.
The Brillouin zone was sampled on a $10\times 10\times 10$ grid of _k_
-points. The number of bands used was 224 per atom.
### Spin-Spiral Calculations
Spin-spirals define a subset of non-collinear magnetic states. In this work,
we leverage spin-spirals as a convenient tool to perform one-to-one
comparisons between _first-principles_ and classical magneto-elastic
calculations. They can be defined as follows:
$\bm{s}_{j}=\sin\theta\cos(\bm{q}\cdot\bm{R}_{0j})\bm{\hat{x}}+\sin\theta\sin(\bm{q}\cdot\bm{R}_{0j})\bm{\hat{y}}+\cos\theta\bm{\hat{z}}\,,$
(2)
where $\bm{q}$ is the spin-spiral vector, $\bm{R}_{0j}$ is the position of
atom $j$ relative to a central atom $0$, $\bm{s}_{j}$ is the spin on atom $j$,
and $\theta$ is a constant angle between the spins and the spin-spiral vector
(often referred to as "cone angle") [68]. $\bm{\hat{x}}$, $\bm{\hat{y}}$, and
$\bm{\hat{z}}$ are the unit vectors along $[100]$, $[010]$, and $[001]$,
respectively. Our calculations are restricted to $\theta=\pi/2$, corresponding
to flat spin-spirals in the (001) plane.
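As an illustration, Eq. (2) with $\theta=\pi/2$ can be used to populate a supercell with a flat spin-spiral. The lattice constant and q-vector below are illustrative choices, not the values used in our calculations.

```python
import numpy as np

# Sketch of Eq. (2): build a flat spin-spiral (theta = pi/2) on a small BCC
# supercell. Lattice constant and q-vector values are illustrative only.

a = 2.83  # Angstrom, approximate alpha-iron lattice constant (illustrative)

def spin_spiral(positions, q, theta=np.pi / 2):
    """Return unit spins s_j for atomic positions R_0j and spiral vector q."""
    phase = positions @ q  # q . R_0j for every atom
    return np.stack([np.sin(theta) * np.cos(phase),
                     np.sin(theta) * np.sin(phase),
                     np.cos(theta) * np.ones_like(phase)], axis=1)

# BCC positions: corner + body-centre sites of an n x 1 x 1 supercell.
n = 10
corners = np.array([[i * a, 0.0, 0.0] for i in range(n)])
centres = corners + 0.5 * a
positions = np.vstack([corners, centres])

# q along [100] with reduced coordinate q_red (fraction of 2*pi/a).
q_red = 0.1
q = np.array([2.0 * np.pi * q_red / a, 0.0, 0.0])
spins = spin_spiral(positions, q)
```

Every spin produced this way is a unit vector, and for $\theta=\pi/2$ the spiral lies entirely in the plane perpendicular to $\bm{\hat{z}}$, as required for the flat spirals used in our comparisons.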
_First-principles_ calculations of the per-atom energy and the pressure
corresponding to spin-spiral states are performed using DFT by leveraging the
frozen-magnon approach [69, 70] and the generalized Bloch theorem [71] as
implemented in VASP [72]. We consider a primitive cell of one atom. A
$10\times 10\times 10$ k-point grid, an energy cutoff of 320 eV, and 224 bands
proved sufficient to reach the level of accuracy expected in our model (as can
be seen in Figure 5).
Classical calculations are performed by using Eq. (2) to generate supercells
accommodating the spin-spirals corresponding to the $\bm{q}$-vectors used in
the DFT calculations. Based on a given supercell and a spin Hamiltonian, the
per-atom energy and pressure are computed using the SPIN package of LAMMPS
[23, 40].
### Spin Hamiltonian
A spin Hamiltonian is used to model the energy, mechanical forces, and
pressure contributions of magnetic configurations. Rosengaard and Johansson
[73] and Szilva _et al._ [74] showed that adding a biquadratic term to the
classical Heisenberg Hamiltonian improves the accuracy of magnetic excitations
in 3-d transition ferromagnets. We adopted their Hamiltonian form:
$\mathcal{H}_{mag}=-\sum_{i\neq j}^{N}{J}\left(r_{ij}\right)\left[\bm{s}_{i}\cdot\bm{s}_{j}-1\right]-\sum_{i\neq j}^{N}{K}\left(r_{ij}\right)\left[\left(\bm{s}_{i}\cdot\bm{s}_{j}\right)^{2}-1\right]\,,$ (3)
where $\bm{s}_{i}$ and $\bm{s}_{j}$ are classical atomic spins of unit length
located on atoms $i$ and $j$, ${J}\left(r_{ij}\right)$ and
${K}\left(r_{ij}\right)$ (in eV) are magnetic exchange functions, and $r_{ij}$
is the interatomic distance between magnetic atoms $i$ and $j$. The two terms
in Eq. 3 are offset by subtracting the spin ground state (corresponding to a
purely ferromagnetic situation), as detailed in Ma _et al._ [37]. Although
this offset of the exchange energy does not affect the precession dynamics of
the spins, it also offsets the corresponding mechanical forces. Without
this additional term, the magnetic contributions to the forces and the pressure
are not zero at the energy ground state. For the exchange interaction terms
${J}\left(r_{ij}\right)\ $ and ${K}\left(r_{ij}\right)$, the interatomic
dependence is taken into account through the following function based on an
approximation of the Bethe-Slater curve [75, 76]:
$f\left(r\right)=4\alpha\left(\frac{r}{\delta}\right)^{2}\left(1-\gamma\left(\frac{r}{\delta}\right)^{2}\right)\exp\left(-\left(\frac{r}{\delta}\right)^{2}\right)\Theta\left(R_{c}-r\right)\,,$
(4)
where $\alpha$ denotes the interaction energy, $\delta$ the interaction decay
length, $\gamma$ a dimensionless curvature parameter, $r$ a distance, and
$\Theta\left(R_{c}-r\right)$ a Heaviside step function for the radial cutoff
$R_{c}$. This assumes that the interaction decays rapidly with the interatomic
distance, consistent with former calculations [77, 74]. We set
$R_{c}=5$ Å to include five neighbor shells, as Pajda _et
al._ [77] showed that the exchange interaction decays more slowly along the
$[111]$ direction in $\alpha$-iron.
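A minimal sketch of the radial function of Eq. (4), with placeholder parameters rather than the fitted $\alpha$-iron coefficients:

```python
import numpy as np

# Sketch of the Bethe-Slater-type radial function of Eq. (4). Parameter
# values here are placeholders, not the fitted alpha-iron coefficients.

def bethe_slater(r, alpha, delta, gamma, r_cut):
    """f(r) = 4*alpha*(r/delta)^2 * (1 - gamma*(r/delta)^2) * exp(-(r/delta)^2),
    truncated at the radial cutoff r_cut (the Heaviside factor in Eq. (4))."""
    x = (np.asarray(r, dtype=float) / delta) ** 2
    f = 4.0 * alpha * x * (1.0 - gamma * x) * np.exp(-x)
    return np.where(np.asarray(r) < r_cut, f, 0.0)

# Example evaluation of J(r) on a radial grid (eV, illustrative parameters).
r = np.linspace(0.5, 6.0, 200)
J = bethe_slater(r, alpha=0.02, delta=1.8, gamma=0.2, r_cut=5.0)
```

The functional form vanishes at $r=0$, peaks near the decay length $\delta$, and is cut to zero beyond $R_c$, which is the qualitative behavior assumed for the fitted $J(r_{ij})$ and $K(r_{ij})$.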
Using Eq. (3) and leveraging the generalized spin-lattice Poisson bracket as
defined by Yang _et al._ [78], the magnetic precession vectors
($\bm{\omega}_{i}$), mechanical forces ($\bm{F}_{i}$), and their corresponding
virial components ($W\left(\bm{r}^{N}\right)$) are derived:
$\bm{\omega}_{i}=\frac{1}{\hbar}\sum_{j}^{N_{i}}{J}\left(r_{ij}\right)\,\bm{s}_{j}+K\left(r_{ij}\right)\left(\bm{s}_{i}\cdot\bm{s}_{j}\right)\bm{s}_{j}\,,$ (5)
$\bm{F}_{i}=\sum_{j}^{N_{i}}\frac{d{J}\left(r_{ij}\right)}{dr_{ij}}\left[\bm{s}_{i}\cdot\bm{s}_{j}-1\right]\bm{e}_{ij}+\frac{d{K}\left(r_{ij}\right)}{dr_{ij}}\left[\left(\bm{s}_{i}\cdot\bm{s}_{j}\right)^{2}-1\right]\bm{e}_{ij}\,,$ (6)
$W\left(\bm{r}^{N}\right)=\sum_{i=1}^{N}\bm{r}_{i}\cdot\bm{F}_{i}\,,$ (7)
where $\bm{r}^{N}$ denotes a $3N$ size vector of all atomic positions and
$\bm{r}_{i}$ the position vector of atom $i$. We note that the virial
components enable computing the spin contribution to the pressure.
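The effect of the energy offset in Eq. (3) can be illustrated with a toy two-spin configuration; here $J$ and $K$ are constants rather than the fitted Bethe-Slater functions, and all values are illustrative.

```python
import numpy as np

# Toy illustration of the offset exchange energy of Eq. (3): the -1 terms
# make the ferromagnetic ground state the energy zero. J and K are constants
# here (illustrative values), not the fitted radial functions.

def exchange_energy(spins, J, K):
    """H_mag = -sum_{i!=j} J [s_i.s_j - 1] - sum_{i!=j} K [(s_i.s_j)^2 - 1]."""
    e = 0.0
    n = len(spins)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            c = float(np.dot(spins[i], spins[j]))
            e += -J * (c - 1.0) - K * (c * c - 1.0)
    return e

ferro = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])   # aligned spins
tilted = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])  # 90 degrees apart

e_ground = exchange_energy(ferro, J=0.02, K=0.005)   # offset -> exactly zero
e_excited = exchange_energy(tilted, J=0.02, K=0.005)  # positive excitation
```

Because the ground-state energy is exactly zero by construction, the derivatives of Eq. (3) (forces, and hence the virial of Eq. (7)) also vanish in the purely ferromagnetic configuration, as stated above.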
Figure 5: Comparison of spin-spiral results along sections of the $\Gamma H$
and $\Gamma P$ high-symmetry lines. The upper plot displays the per-atom
energy, the middle plot the atomic moment fluctuations (in Bohr magnetons per
atom), and the bottom plot the evolution of the pressure. The energy and
pressure fluctuations are plotted with respect to the magnetic ground state at
the $\Gamma$ point. The green and red dots represent experimental measurements
obtained by Loong _et al._ [79] and Lynn [80]. In all three plots, the dashed
lines correspond to the DFT results and the continuous lines to our classical
model results, while the line color (black or blue) corresponds to the
lattice compression (0 or 2%, respectively). In the middle plot, the green
dashed horizontal line represents the experimental equilibrium value (2.2
$\mu_{B}$ per atom), which is the constant value chosen in our model. In all
three plots, the red vertical dashed lines delimit the $\bm{q}$-vectors
on which our spin Hamiltonian was parametrized.
The spin Hamiltonian is used to reproduce spin-spiral energy and pressure
reference results obtained from DFT. They are sampled along two high-symmetry
lines, $\Gamma$H and $\Gamma$P, and for two different lattice constant values
(corresponding to the equilibrium bulk value and to a lattice compression of
2%). This allows us to encapsulate in the model the influence of lattice
compression on the spin stiffness and the Curie temperature, which was
experimentally and theoretically predicted to be small [81, 82, 83]. Figure 5
displays the excellent agreement obtained between our _first-principles_ spin-
spiral energies and experimental measurements.
Our current spin Hamiltonian does not account for fluctuations of the magnetic
moment magnitudes, i.e. the norm of atomic spins remains constant in our
calculations. As can be seen in Figure 5, this is not the case for our DFT
results, as those fluctuations can become important when departing from the
$\Gamma$ point. We thus decided to parameterize our model only on spin-spirals
corresponding to $\bm{q}$-vectors for which the spin norm deviates from the
ferromagnetic value ($\approx 2.2\leavevmode\nobreak\ \mu_{B}$/atom at the
$\rm{\Gamma}$ point) by less than 5%. The red dashed lines in Figure 5 delimit
this $\bm{q}$-vector range.
Finally, we used the single objective genetic algorithm within the DAKOTA
software package [44] to optimize the six coefficients of
${J}\left(r_{ij}\right)$ and ${K}\left(r_{ij}\right)$ in order to obtain the
best possible agreement between our reference DFT spin-spiral energy and
pressure results and our spin model. Figure 5 displays the obtained result. As
can be seen in Figure 4, for a fixed-volume calculation, our spin Hamiltonian
predicts a Curie temperature of 716K. Note that a better match of the DFT
spin-spiral energies would yield a larger spin-stiffness, and thus a better
agreement for the Curie temperature. However, this would worsen the pressure
agreement.
Spin-orbit coupling effects were included by accounting for an iron-type cubic
anisotropy [84]:
$H_{cubic}=-\sum_{i=1}^{N}K_{1}\left[(\bm{s}_{i}\cdot\bm{\hat{x}})^{2}(\bm{s}_{i}\cdot\bm{\hat{y}})^{2}+(\bm{s}_{i}\cdot\bm{\hat{y}})^{2}(\bm{s}_{i}\cdot\bm{\hat{z}})^{2}+(\bm{s}_{i}\cdot\bm{\hat{x}})^{2}(\bm{s}_{i}\cdot\bm{\hat{z}})^{2}\right]+K_{2}^{(c)}(\bm{s}_{i}\cdot\bm{\hat{x}})^{2}(\bm{s}_{i}\cdot\bm{\hat{y}})^{2}(\bm{s}_{i}\cdot\bm{\hat{z}})^{2}\,,$ (8)
where $K_{1}=0.001$ eV and $K_{2}^{(c)}=0.0005$ eV are the intensity
coefficients corresponding to $\alpha$-iron. The cubic anisotropy was included
when running calculations, but ignored in the fitting procedure, as its
intensity is below the accuracy range of our ML-IAP.
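For illustration, the per-spin cubic anisotropy term of Eq. (8) can be evaluated directly with the quoted $K_{1}$ and $K_{2}^{(c)}$ values (a sketch for a single spin; the sum over atoms is omitted):

```python
import numpy as np

# Per-spin evaluation of the cubic anisotropy of Eq. (8), using the K1 and
# K2^(c) values quoted in the text for alpha-iron.

K1, K2C = 0.001, 0.0005  # eV

def cubic_anisotropy(s):
    """Single-spin contribution to H_cubic for a unit spin s."""
    sx2, sy2, sz2 = s[0] ** 2, s[1] ** 2, s[2] ** 2
    return (-K1 * (sx2 * sy2 + sy2 * sz2 + sx2 * sz2)
            + K2C * sx2 * sy2 * sz2)

e_100 = cubic_anisotropy(np.array([0.0, 0.0, 1.0]))                  # [001]
e_111 = cubic_anisotropy(np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0))   # [111]
```

The term vanishes for a spin along a cube axis and is of order $10^{-4}$ eV along $[111]$, consistent with the statement that its intensity lies below the accuracy range of the ML-IAP fit.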
In all our classical spin-lattice dynamics calculations, our system size
remained small compared to the typical magnetic domain-wall width in iron
[84]. Thus, long-range dipole-dipole interactions could safely be neglected.
### SNAP Potential
For this work, an interatomic potential for iron was developed that is
specifically parameterized for use in coupled spin and molecular dynamics
simulations. Training data for a Spectral Neighborhood Analysis Potential
(SNAP)[45, 85, 86] was collected to constrain the fit to the pressure and
temperature phase space below $20$ GPa and $2000$ K. The set of non-collinear,
spin-polarized VASP calculations includes $\alpha$- (BCC), $\epsilon$- (HCP)
and liquid iron; Table 1 displays the quantity of each training type and
target properties that are captured therein. Optimization of a SNAP potential
necessitates that the generated training database be broken into these groups
(rows in Table 1) such that the weighted linear regression can (de-)emphasize
different parts in search of a global minimum of the objective function
errors. Each training group is assigned a unique weight for its associated
energies and atomic forces in each candidate potential; the optimization of
these weights is controlled by DAKOTA. Regression is carried out using singular value
decomposition with a squared loss function (L2 norm). In order to avoid double
counting, and properly simulate the magnetic properties of iron in classical
MD, we have adapted the SNAP fitting protocol[45] to isolate the non-magnetic
energy and forces from the generated training data. To do so, the fitted
biquadratic spin Hamiltonian is evaluated for every atom in the training set,
and its contribution to the total energy and per-atom forces is subtracted.
This is akin to previous uses of an ion core repulsion[87] or electrostatic
interaction term[88] as a reference potential while fitting SNAP models.
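The subtraction-then-regression workflow can be sketched as follows on synthetic data; all variable names are ours, and `np.linalg.lstsq` is used as a stand-in for the SVD-based weighted linear solve (it performs an SVD internally):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-ins for the real training data: A holds bispectrum
# descriptor rows, beta_true the "exact" SNAP coefficients
n_rows, n_desc = 200, 31
A = rng.normal(size=(n_rows, n_desc))
beta_true = rng.normal(size=n_desc)

# pretend DFT energies = non-magnetic (SNAP-representable) part + spin part
e_spin = rng.normal(scale=0.05, size=n_rows)  # fitted H_mag evaluated per row
y_dft = A @ beta_true + e_spin

# isolate the non-magnetic target by subtracting the spin contribution,
# then solve the weighted least-squares problem (L2 loss) via SVD
y_target = y_dft - e_spin
w = np.full(n_rows, 2.0)  # per-row group weight, cf. Table 1
beta, *_ = np.linalg.lstsq(A * w[:, None], w * y_target, rcond=None)
```

With the spin energies removed, the regression recovers the non-magnetic coefficients exactly in this idealized setting.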
Training Group | $\#$ of Config. | $\#$ of Forces | Target Property | Energy Fit Weight | Forces Fit Weight | Energy MAE (eV) | Forces MAE (eV$\cdot$Å$^{-1}$)
---|---|---|---|---|---|---|---
Eq. of State | 403 | 65286 | Volumetric Deform | $4.2\cdot 10^{3}$ | $2.0\cdot 10^{5}$ | $1.6\cdot 10^{-2}$ | $2.4\cdot 10^{-1}$
DFT-MD, 300K | 40 | 15360 | Bulk phonons | $2.9\cdot 10^{5}$ | $1.1\cdot 10^{5}$ | $5.2\cdot 10^{-4}$ | $2.4\cdot 10^{-1}$
Liquid w/ Spins | 10 | 3000 | Magnetic Disorder | $5.5\cdot 10^{1}$ | $1.9\cdot 10^{4}$ | $2.0\cdot 10^{-1}$ | $5.9\cdot 10^{-1}$
Liquid w/o Spins | 52 | 15300 | Structural Disorder | $3.3\cdot 10^{3}$ | $2.0\cdot 10^{4}$ | $2.2\cdot 10^{-1}$ | $8.0\cdot 10^{-1}$
Point Defects | 10 | 3096 | Defect Energetics | $1.4\cdot 10^{2}$ | $3.5\cdot 10^{4}$ | $2.8\cdot 10^{-2}$ | $1.1\cdot 10^{-1}$
Martensitic Transform | 168 | 1008 | $\alpha\to\epsilon$ | $4.0\cdot 10^{2}$ | $2.3\cdot 10^{3}$ | $9.2\cdot 10^{-2}$ | $2.3\cdot 10^{-1}$
Table 1: Training set for the linear SNAP model, adapted from Ref. [89] to
include explicit spin degrees of freedom. Regression of the SNAP coefficients
takes into account both configuration energies and forces from DFT;
optimization of the group weights is applied to each term independently.
Weighted linear regression is carried out with the reported optimal fit
weights, whose values have already been scaled by the number of training
points each group contributes. The last two columns report the obtained
mean absolute errors (MAEs), in eV per atom for energies and eV$\cdot$Å$^{-1}$
for forces.
Optimization of the SNAP potential was achieved using a single objective
genetic algorithm within the DAKOTA software package[44]. Radial cutoff
distance, training group weights and the number of bispectrum descriptors were
varied to minimize a set of objective functions, expressed as percent errors
relative to available DFT or experimental[90] data, that encapsulate the
desired mechanical properties of Fe. These objective functions, specific to
$\alpha$-iron, are listed in Table 2; the RMSE energy and force regression
errors are included in the optimization as well. Our linear SNAP model with 31
bispectrum descriptors achieves accuracy in all mechanical properties within a
few percent of experiment/DFT. Additionally, the lattice constants and
cohesive energies of the $\gamma$- (FCC) and $\epsilon$-iron (HCP) phases were
fit, but given far less priority than the $\alpha$-iron mechanical properties,
resulting in $\sim 6-7\%$ errors with respect to DFT. Importantly, each of the
objective functions was evaluated including the magnetic spin contributions to
avoid unforeseen changes in property predictions. A full breakdown of the
optimal training group weights and mean absolute energy/force errors is given
in Table 1. The listed group weights have been scaled by the number of
configurations or forces they are applied to, so that larger group weights can
be (cautiously) interpreted as more valuable for meeting the set of targeted
objective functions. This optimized Fe-SNAP interatomic
potential is contained as Supplemental Material along with LAMMPS input
scripts used in the following section.
Property | SNAP | Exp/DFT | Units | Error $\%$
---|---|---|---|---
c11 | 243.25 | 239.55 | GPa | 1.54$\%$
c12 | 135.65 | 138.1 | GPa | 1.77$\%$
c44 | 118.73 | 120.75 | GPa | 1.67$\%$
Bulk modulus | 171.52 | 169.55 | GPa | 1.16$\%$
$0.5$(c11-c12) | 53.8 | 51.9 | GPa | 3.66$\%$
Poisson ratio | 0.358 | 0.36 | - | 1.10$\%$
bcc energy | -8.25 | -8.26 | eV | 0.02$\%$
bcc lat. const. | 2.838 | 2.83 | Å | 0.30$\%$
Table 2: Objective functions of the DAKOTA optimization, with ground-truth
values taken from the present DFT calculations (at zero Kelvin) or
experiments[90]. Percent error is used as the objective function to avoid
artificial importance scaling based on the units of the target property.
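The percent-error objectives of Table 2 amount to the following (a sketch with our own names; the actual DAKOTA objective wiring is configured in its input deck):

```python
def percent_error(model, reference):
    """Unitless objective used in the optimization: percent error vs reference."""
    return abs(model - reference) / abs(reference) * 100.0

# a few of the alpha-iron targets from Table 2 (values in GPa)
objectives = {
    "c11": percent_error(243.25, 239.55),
    "c12": percent_error(135.65, 138.1),
    "bulk": percent_error(171.52, 169.55),
}
```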
### Spin-Lattice Dynamics
Calculations are performed following the spin-lattice dynamics approach as
implemented in the SPIN package of LAMMPS [23, 40], and set by the spin-
lattice Hamiltonian below:
$\mathcal{H}_{sl}(\bm{r},\bm{p},\bm{s})=\mathcal{H}_{mag}(\bm{r},\bm{s})+\sum_{i=1}^{N}\frac{\lvert\bm{p}_{i}\rvert^{2}}{2m_{i}}+\sum_{i,j=1}^{N}V_{SNAP}(r_{ij})$ (9)
where $\mathcal{H}_{mag}$ is the spin Hamiltonian defined by the combination
of Eq. (3) and Eq. (8). The term $V_{SNAP}(r_{ij})$ is our SNAP ML-IAP. The
second term on the right in Eq. (9) represents the kinetic energy, where the
momentum of particle $i$ is $\bm{p}_{i}$ and its mass is
$m_{i}$. Based on this spin-lattice Hamiltonian and leveraging the generalized
spin-lattice Poisson bracket as defined by Yang _et al._ [78], the equations
of motion can be defined as:
$\frac{d\bm{r}_{i}}{dt}=\frac{\bm{p}_{i}}{m_{i}}$ (10)
$\frac{d\bm{p}_{i}}{dt}=\sum_{j,i\neq j}^{N}\bigg[-\frac{dV_{SNAP}(r_{ij})}{dr_{ij}}+\frac{dJ(r_{ij})}{dr_{ij}}(\bm{s}_{i}\cdot\bm{s}_{j})+\frac{dK(r_{ij})}{dr_{ij}}(\bm{s}_{i}\cdot\bm{s}_{j})^{2}\bigg]\bm{e}_{ij}-\frac{\gamma_{L}}{m_{i}}\bm{p}_{i}+\bm{f}(t)$ (11)
$\frac{d\bm{s}_{i}}{dt}=\frac{1}{1+\lambda^{2}}\bigg[(\bm{\omega}_{i}+\bm{\eta}(t))\times\bm{s}_{i}+\lambda\,\bm{s}_{i}\times(\bm{\omega}_{i}\times\bm{s}_{i})\bigg]$ (12)
Particle positions are advanced according to Eq. (10). The derivative of the
momentum, given in Eq. (11), is dependent not only on the mechanical potential
but the magnetic exchange functions as well. Here $\gamma_{L}$ is the Langevin
damping constant for the lattice and $\bm{f}$ is a fluctuating force following
Gaussian statistics given below[40].
$\langle\bm{f}(t)\rangle=0$ (13)
$\langle f_{\alpha}(t)f_{\beta}(t^{\prime})\rangle=2k_{B}T_{l}\gamma_{L}\,\delta_{\alpha\beta}\,\delta(t-t^{\prime})$ (14)
The fluctuating force $\bm{f}$ is coupled to $\gamma_{L}$ via the fluctuation
dissipation theorem as shown in Eq. (14). Here $k_{B}$ is the Boltzmann
constant, $T_{l}$ is the lattice temperature, and $\alpha$ and $\beta$ are
Cartesian coordinate indices. Eq. (12) is the stochastic Landau-Lifshitz-Gilbert
equation which describes the precessional motion of spins under the influence
of thermal noise. In Eq. (12), $\lambda$ is the transverse damping constant
and $\bm{\omega}_{i}$ is a spin force analogue as shown in Eq. (5). The
variable $\bm{\eta}(t)$ is a random vector whose components are drawn from a
Gaussian probability distribution given below:
$\langle\bm{\eta}(t)\rangle=0$ (15)
$\langle\eta_{\alpha}(t)\eta_{\beta}(t^{\prime})\rangle=D_{S}\,\delta_{\alpha\beta}\,\delta(t-t^{\prime})$ (16)
where the amplitude of the noise $D_{S}$ can be related to the temperature of
the external spin bath $T_{s}$ according to $D_{S}=2\pi\lambda
k_{B}T_{s}/\hbar$ [39].
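As a minimal illustration (our own, not the LAMMPS SPIN implementation), the thermostat noise of Eqs. (13)-(16) can be sampled by discretizing $\delta(t-t^{\prime})$ as $1/\Delta t$ over a finite timestep, so that each force component is Gaussian with variance $2k_{B}T_{l}\gamma_{L}/\Delta t$. The snippet below works in reduced units and checks the statistics:

```python
import numpy as np

def langevin_force_samples(kB_Tl, gamma_L, dt, n, rng):
    """Sample the fluctuating force f of Eqs. (13)-(14) in reduced units.
    Discretizing the white noise over a timestep dt turns delta(t-t') into
    1/dt, so each Cartesian component has variance 2*kB*Tl*gamma_L/dt."""
    sigma = np.sqrt(2.0 * kB_Tl * gamma_L / dt)
    # the spin noise eta of Eqs. (15)-(16) would be sampled the same way,
    # with variance D_S/dt where D_S = 2*pi*lambda*kB*Ts/hbar
    return rng.normal(0.0, sigma, size=(n, 3))

rng = np.random.default_rng(7)
# target variance: 2 * 1.0 * 0.5 / 0.01 = 100 per component
f = langevin_force_samples(kB_Tl=1.0, gamma_L=0.5, dt=0.01, n=100_000, rng=rng)
```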
SD-MD calculations are carried out using a $20\times 20\times 20$ BCC cell. The BCC lattice
is oriented along each of the coordinate directions. The MD timestep in all
cases is set to 0.1 femtoseconds. The damping constants are set to 0.1
(Gilbert damping, no units) for the spin thermostat, and to 0.1 picoseconds
for the lattice thermostat. Initially all spins start out aligned in the
z-direction. To measure the magnetic properties for the canonical ensemble we
initially thermalize the system under NVT dynamics at the target spin/lattice
temperatures for 40 picoseconds and then sample the target properties for 10
picoseconds. For pressure-controlled simulations (see PCC and MCPCC in the
"Results" section), after the initial 40 picoseconds of temperature
equilibration we freeze the spin configuration and run isobaric-isothermal NPT
dynamics in order to allow the system to thermally expand (still accounting
for the effect of the "magnetic" pressure, generated by the spin Hamiltonian).
The pressure damping parameter is set to 10 picoseconds. The pressure
equilibration run is terminated once the system pressure drops below 0.05 GPa.
After this, the spin configuration is unfrozen and another equilibration run
is carried out under NVT dynamics for 20 picoseconds. Unfreezing the spin
configurations causes a small jump in the pressure, typically within the range
of +/- 2 GPa. To reduce this pressure fluctuation, a series of uniform
isotropic box deformations are performed under the NVE ensemble. During this
procedure the box is deformed in 0.02% increments every 2 picoseconds until
the magnitude of the pressure is reduced to negligible values (< 10 MPa).
Figure 6 displays the pressure profiles obtained within the FVC and PCMCC
(similar to the PCC).
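The incremental box-deformation procedure can be sketched as a simple feedback loop. The linear pressure response $\Delta P\approx-3B\varepsilon$ for an isotropic strain $\varepsilon$, and the step-halving on overshoot, are our own simplifications (the real runs rely on NVE dynamics between increments):

```python
def zero_pressure(p0, bulk=171.52e3, step=2e-4, tol=10.0, max_iter=1000):
    """Reduce |P| below tol (MPa) by 0.02% isotropic box increments.
    bulk is a stand-in bulk modulus in MPa (171.52 GPa, cf. Table 2)."""
    p = p0
    for n in range(max_iter):
        if abs(p) < tol:
            return p, n
        # expand the box under compression, shrink it under tension
        eps = step if p > 0 else -step
        new_p = p - 3.0 * bulk * eps
        if new_p * p < 0:  # overshot zero pressure: refine the increment
            step *= 0.5
        p = new_p
    return p, max_iter
```

Starting from the typical +/- 2 GPa jump mentioned above, the loop converges to the sub-10 MPa target in a few tens of increments.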
Figure 6: Symbols associated with the left axis represent the pressure
evolution in the FVC and PCMCC (similar to the PCC) as a function of lattice
temperature. Dotted lines associated with the right axis represent the spin
thermostat temperature for the PCMCC and FVC (similar to the PCC) as a
function of the lattice temperature.
For the magnetization-controlled conditions (PCMCC in the "Results" section),
the spin temperature is adjusted to match the experimental magnetization
values. Spin temperature adjustments are made based on the magnetization curve
obtained in the pressure-controlled conditions (PCC in the "Results" section).
The corresponding spin-lattice temperature relationship is shown in Eqs.
(17-20). Here the fitting coefficients are given as $a_{1}=471.6$,
$a_{2}=6362$, $a_{3}=2774$, $a_{4}=1119$, $a_{5}=13.6$, $a_{6}=1043.3$, and
$a_{7}=0.1$. The functions $T_{s,pre}$ and $T_{s,post}$
prescribe how the spin temperature varies before and after the critical point.
At the critical point we use a switching function $f_{sw}$ to smoothly
transition from $T_{s,pre}$ to $T_{s,post}$:
$T_{s,post}(T_{l})=T_{l}-a_{1}$ (17)
$T_{s,pre}(T_{l})=a_{2}\exp\left[-\left(\frac{T_{l}-a_{3}}{a_{4}}\right)^{2}\right]-a_{5}$ (18)
$f_{sw}(T_{l})=\frac{1}{2}\left[1+\tanh\left(\frac{T_{l}-a_{6}}{a_{7}}\right)\right]$ (19)
$T_{s}(T_{l})=f_{sw}T_{s,post}+(1-f_{sw})T_{s,pre}$ (20)
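The spin-lattice temperature map of Eqs. (17)-(20) is straightforward to evaluate; a direct transcription with the fitted coefficients (function and variable names are ours):

```python
import math

# fitting coefficients a1..a7 of Eqs. (17)-(20)
A1, A2, A3, A4, A5, A6, A7 = 471.6, 6362.0, 2774.0, 1119.0, 13.6, 1043.3, 0.1

def spin_temperature(T_l):
    """Spin-bath temperature T_s as a function of lattice temperature T_l."""
    T_post = T_l - A1                                      # Eq. (17)
    T_pre = A2 * math.exp(-((T_l - A3) / A4) ** 2) - A5    # Eq. (18)
    f_sw = 0.5 * (1.0 + math.tanh((T_l - A6) / A7))        # Eq. (19)
    return f_sw * T_post + (1.0 - f_sw) * T_pre            # Eq. (20)
```

Well above the critical point the switching function saturates at 1 and the map reduces to $T_{s}=T_{l}-a_{1}$.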
Figure 6 displays the spin temperature profiles for the FVC (and, similarly,
the PCC), and the PCMCC. After the magnetic measurements we compute elastic
constants by performing both uniaxial and shear deformations along each of the
coordinate directions and planes. The magnitude of these deformations in all
cases is $2\%$ of the box length. Following each deformation the box is
relaxed for 3 picoseconds. After this relaxation the stresses are sampled for
2 picoseconds.
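The elastic constants then follow from a finite-difference ratio of the sampled stress change to the applied strain. This toy check uses a perfectly linear material and is not the stress-sampling workflow itself:

```python
def elastic_constant(stress_deformed, stress_ref, strain=0.02):
    """Finite-difference elastic constant C = d(sigma)/d(eps) from a single
    box deformation; stresses in GPa, strain dimensionless (2% here)."""
    return (stress_deformed - stress_ref) / strain

# toy check: a linear material of stiffness 243.25 GPa (SNAP c11, Table 2)
c11_est = elastic_constant(stress_deformed=243.25 * 0.02, stress_ref=0.0)
```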
## Data Availability
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## Code Availability
The code which was used to train the SNAP potential is available from:
https://github.com/FitSNAP/FitSNAP.
## References
* [1] Tatsumoto, E. & Okamoto, T. Temperature dependence of the magnetostriction constants in iron and silicon iron. _Journal of the Physical Society of Japan_ 14, 1588–1594 (1959).
* [2] Bahl, C. R. H. & Nielsen, K. K. The effect of demagnetization on the magnetocaloric properties of gadolinium. _Journal of Applied Physics_ 105, 013916 (2009).
* [3] Tavares, S., Fruchart, D., Miraglia, S. & Laborie, D. Magnetic properties of an aisi 420 martensitic stainless steel. _Journal of alloys and compounds_ 312, 307–314 (2000).
* [4] Huang, S., Holmström, E., Eriksson, O. & Vitos, L. Mapping the magnetic transition temperatures for medium-and high-entropy alloys. _Intermetallics_ 95, 80–84 (2018).
* [5] Rao, Z. _et al._ Unveiling the mechanism of abnormal magnetic behavior of fenicomncu high-entropy alloys through a joint experimental-theoretical study. _Physical Review Materials_ 4, 014402 (2020).
* [6] Jaime, M. _et al._ Piezomagnetism and magnetoelastic memory in uranium dioxide. _Nature communications_ 8, 1–7 (2017).
* [7] Nussle, T., Thibaudeau, P. & Nicolis, S. Dynamic magnetostriction for antiferromagnets. _Physical Review B_ 100, 214428 (2019).
* [8] Lejman, M. _et al._ Magnetoelastic and magnetoelectric couplings across the antiferromagnetic transition in multiferroic bifeo 3. _Physical Review B_ 99, 104103 (2019).
* [9] Patrick, C. E., Marchant, G. A. & Staunton, J. B. Spin orientation and magnetostriction of tb 1- x dy x fe 2 from first principles. _Physical Review Applied_ 14, 014091 (2020).
* [10] Graham, R., Morosin, B., Venturini, E. & Carr, M. Materials Modification and Synthesis Under High Pressure Shock Compression. _Annual Review of Materials Science_ 16, 315–341, DOI: 10.1146/annurev.ms.16.080186.001531 (1986).
* [11] Surh, M. P., Benedict, L. X. & Sadigh, B. Magnetostructural transition kinetics in shocked iron. _Physical review letters_ 117, 085701 (2016).
* [12] Moses, E. I., Boyd, R. N., Remington, B. A., Keane, C. J. & Al-Ayat, R. The national ignition facility: Ushering in a new age for high energy density science. _Physics of Plasmas_ 16, 041006, DOI: 10.1063/1.3116505 (2009).
* [13] Tschentscher, T. _et al._ Photon beam transport and scientific instruments at the european xfel. _Applied Sciences_ 7, DOI: 10.3390/app7060592 (2017).
* [14] Tan, X., Chan, S., Han, K. & Xu, H. Combined effects of magnetic interaction and domain wall pinning on the coercivity in a bulk nd 60 fe 30 al 10 ferromagnet. _Scientific reports_ 4, 6805 (2014).
* [15] Gràcia-Condal, A. _et al._ Multicaloric effects in metamagnetic heusler ni-mn-in under uniaxial stress and magnetic field. _Applied Physics Reviews_ 7, 041406 (2020).
* [16] Wallace, D. C., Sidles, P. & Danielson, G. Specific heat of high purity iron by a pulse heating method. _Journal of applied physics_ 31, 168–176 (1960).
* [17] Touloukian, Y. & Buyco, E. Thermophysical properties of matter, vol. 4, specific heat. _IFI/Plenum, New York_ (1970).
* [18] Chandler, D. _Introduction to modern statistical mechanics_ (1987).
* [19] Horstemeyer, M. F. The near Future: ICME for the Creation of New Materials and Structures. In _Integrated Computational Materials Engineering (ICME) for Metals_ , chap. 10, 410–423, DOI: 10.1002/9781118342664.ch10 (John Wiley & Sons, Ltd, 2012).
* [20] van der Giessen, E. _et al._ Roadmap on multiscale materials modeling. _Modelling and Simulation in Materials Science and Engineering_ 28, 043001 (2020).
* [21] Alder, B. J. & Wainwright, T. E. Studies in Molecular Dynamics. I. General Method. _The Journal of Chemical Physics_ 31, 459–466, DOI: 10.1063/1.1730376 (1959).
* [22] Rapaport, D. C. _The art of molecular dynamics simulation_ (Cambridge university press, 2004).
* [23] Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. _Journal of computational physics_ 117, 1–19 (1995).
* [24] Voter, A. F., Montalenti, F. & Germann, T. C. Extending the time scale in atomistic simulation of materials. _Annual Review of Materials Research_ 32, 321–346 (2002).
* [25] Zepeda-Ruiz, L. A., Stukowski, A., Oppelstrup, T. & Bulatov, V. V. Probing the limits of metal plasticity with molecular dynamics simulations. _Nature_ 550, 492–495 (2017).
* [26] Huan, T. D. _et al._ A universal strategy for the creation of machine learning-based atomistic force fields. _npj Computational Materials_ 3, DOI: 10.1038/s41524-017-0042-y (2017).
* [27] Smith, J. S., Isayev, O. & Roitberg, A. E. Ani-1: an extensible neural network potential with dft accuracy at force field computational cost. _Chem. Sci._ 8, 3192–3203, DOI: 10.1039/C6SC05720A (2017).
* [28] Zhang, L., Han, J., Wang, H., Car, R. & E, W. Deep Potential Molecular Dynamics: A Scalable Model with the Accuracy of Quantum Mechanics. _Phys. Rev. Lett._ 120, 143001, DOI: 10.1103/PhysRevLett.120.143001 (2018).
* [29] Bartók, A. P., Payne, M. C., Kondor, R. & Csányi, G. Gaussian Approximation Potentials: The Accuracy of Quantum Mechanics, without the Electrons. _Phys. Rev. Lett._ 104, 136403, DOI: 10.1103/PhysRevLett.104.136403 (2010).
* [30] Jaramillo-Botero, A., Naserifar, S. & Goddard, W. A. General Multiobjective Force Field Optimization Framework, with Application to Reactive Force Fields for Silicon Carbide. _Journal of Chemical Theory and Computation_ 10, 1426–1439, DOI: 10.1021/ct5001044 (2014).
* [31] Lubbers, N., Smith, J. S. & Barros, K. Hierarchical modeling of molecular energies using a deep neural network. _The Journal of Chemical Physics_ 148, 241715, DOI: 10.1063/1.5011181 (2018).
* [32] Thompson, A. P., Swiler, L. P., Trott, C. R., Foiles, S. M. & Tucker, G. J. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials. _Journal of Computational Physics_ 285, 316–330, DOI: 10.1016/j.jcp.2014.12.018 (2015).
* [33] Kohn, W. & Sham, L. J. Self-Consistent Equations Including Exchange and Correlation Effects. _Phys. Rev._ 140, A1133–A1138, DOI: 10.1103/PhysRev.140.A1133 (1965).
* [34] Li, X.-G., Chen, C., Zheng, H., Zuo, Y. & Ong, S. P. Complex strengthening mechanisms in the nbmotaw multi-principal element alloy. _npj Computational Materials_ 6, 1–10 (2020).
* [35] Cusentino, M., Wood, M. & Thompson, A. Suppression of helium bubble nucleation in beryllium exposed tungsten surfaces. _Nuclear Fusion_ 60, 126018 (2020).
* [36] Dragoni, D., Daff, T. D., Csányi, G. & Marzari, N. Achieving dft accuracy with a machine-learning interatomic potential: Thermomechanics and defects in bcc ferromagnetic iron. _Physical Review Materials_ 2, 013808 (2018).
* [37] Ma, P.-W., Woo, C. & Dudarev, S. Large-scale simulation of the spin-lattice dynamics in ferromagnetic iron. _Physical review B_ 78, 024434 (2008).
* [38] Ma, P.-W., Dudarev, S. & Woo, C. Spilady: A parallel cpu and gpu code for spin–lattice magnetic molecular dynamics simulations. _Computer Physics Communications_ 207, 350–361 (2016).
* [39] Ma, P.-W. & Dudarev, S. Atomistic spin-lattice dynamics. _Handbook of Materials Modeling: Methods: Theory and Modeling_ 1017–1035 (2020).
* [40] Tranchida, J., Plimpton, S., Thibaudeau, P. & Thompson, A. P. Massively parallel symplectic algorithm for coupled magnetic spin dynamics and molecular dynamics. _Journal of Computational Physics_ 372, 406–425 (2018).
* [41] Dos Santos, G. _et al._ Size-and temperature-dependent magnetization of iron nanoclusters. _Physical Review B_ 102, 184426 (2020).
* [42] Zhou, Y., Tranchida, J., Ge, Y., Murthy, J. & Fisher, T. S. Atomistic simulation of phonon and magnon thermal transport across the ferromagnetic-paramagnetic transition. _Physical Review B_ 101, 224303 (2020).
* [43] Ma, P.-W., Dudarev, S. & Wróbel, J. S. Dynamic simulation of structural phase transitions in magnetic iron. _Physical Review B_ 96, 094418 (2017).
* [44] Eldred, M. S. _et al._ Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Tech. Rep., Citeseer (2006).
* [45] Thompson, A. P., Swiler, L. P., Trott, C. R., Foiles, S. M. & Tucker, G. J. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials. _Journal of Computational Physics_ 285, 316–330 (2015).
* [46] Crangle, J. & Goodman, G. The magnetization of pure iron and nickel. _Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences_ 321, 477–491 (1971).
* [47] Seki, I. & Nagata, K. Lattice constant of iron and austenite including its supersaturation phase of carbon. _ISIJ international_ 45, 1789–1794 (2005).
* [48] Basinski, Z. S., Hume-Rothery, W. & Sutton, A. The lattice expansion of iron. _Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences_ 229, 459–467 (1955).
* [49] Ashcroft, N. W., Mermin, N. D. & Wei, D. _Solid State Physics_ (Cengage Learning Asia Pte Limited, 2016).
* [50] Dragoni, D., Ceresoli, D. & Marzari, N. Thermoelastic properties of $\alpha$-iron from first-principles. _Physical Review B_ 91, 104105 (2015).
* [51] Dragoni, D., Ceresoli, D. & Marzari, N. Vibrational and thermoelastic properties of bcc iron from selected eam potentials. _Computational Materials Science_ 152, 99–106 (2018).
* [52] Körmann, F. _et al._ Free energy of bcc iron: Integrated ab initio derivation of vibrational, electronic, and magnetic contributions. _Physical Review B_ 78, 033102 (2008).
* [53] Arakawa, K. _et al._ Quantum de-trapping and transport of heavy defects in tungsten. _Nature materials_ 19, 508–511 (2020).
* [54] Drautz, R. & Fähnle, M. Spin-cluster expansion: Parametrization of the general adiabatic magnetic energy surface with ab initio accuracy. _Physical Review B_ 69, 104404 (2004).
* [55] Drautz, R. Atomic cluster expansion of scalar, vectorial, and tensorial properties including magnetism and charge transfer. _Physical Review B_ 102, 024104 (2020).
* [56] Marinica, M.-C., Willaime, F. & Crocombette, J.-P. Irradiation-induced formation of nanocrystallites with c 15 laves phase structure in bcc iron. _Physical review letters_ 108, 025501 (2012).
* [57] Chapman, J. B., Ma, P.-W. & Dudarev, S. L. Effect of non-heisenberg magnetic interactions on defects in ferromagnetic iron. _Physical Review B_ 102, 224106 (2020).
* [58] Chapman, J. B., Ma, P.-W. & Dudarev, S. L. Dynamics of magnetism in fe-cr alloys with cr clustering. _Physical Review B_ 99, 184413 (2019).
* [59] Klaver, T., Drautz, R. & Finnis, M. Magnetism and thermodynamics of defect-free fe-cr alloys. _Physical Review B_ 74, 094435 (2006).
* [60] Kalantar, D. _et al._ Direct observation of the $\alpha$\- $\varepsilon$ transition in shock-compressed iron via nanosecond x-ray diffraction. _Physical review letters_ 95, 075502 (2005).
* [61] Woo, C., Wen, H., Semenov, A., Dudarev, S. & Ma, P.-W. Quantum heat bath for spin-lattice dynamics. _Physical Review B_ 91, 104306 (2015).
* [62] Bergqvist, L. & Bergman, A. Realistic finite temperature simulations of magnetic systems using quantum statistics. _Physical Review Materials_ 2, 013802 (2018).
* [63] Novikov, I., Grabowski, B., Kormann, F. & Shapeev, A. Machine-learning interatomic potentials reproduce vibrational and magnetic degrees of freedom. _arXiv preprint arXiv:2012.12763_ (2020).
* [64] Kresse, G. & Furthmüller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. _Computational Materials Science_ 6, 15 – 50, DOI: https://doi.org/10.1016/0927-0256(96)00008-0 (1996).
* [65] Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. _Phys. Rev. B_ 59, 1758–1775, DOI: 10.1103/PhysRevB.59.1758 (1999).
* [66] Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. _Physical review letters_ 77, 3865 (1996).
* [67] Blöchl, P. E. Projector augmented-wave method. _Physical review B_ 50, 17953 (1994).
* [68] Zimmermann, B. _et al._ Comparison of first-principles methods to extract magnetic parameters in ultrathin films: Co/pt (111). _Physical Review B_ 99, 214426 (2019).
* [69] Halilov, S., Perlov, A., Oppeneer, P. & Eschrig, H. Magnon spectrum and related finite-temperature magnetic properties: A first-principle approach. _EPL (Europhysics Letters)_ 39, 91 (1997).
* [70] Kurz, P., Förster, F., Nordström, L., Bihlmayer, G. & Blügel, S. Ab initio treatment of noncollinear magnets with the full-potential linearized augmented plane wave method. _Physical Review B_ 69, 024415 (2004).
* [71] Sandratskii, L. Noncollinear magnetism in itinerant-electron systems: theory and applications. _Advances in Physics_ 47, 91–160 (1998).
* [72] Marsman, M. & Hafner, J. Broken symmetries in the crystalline and magnetic structures of $\gamma$-iron. _Physical Review B_ 66, 224409 (2002).
* [73] Rosengaard, N. & Johansson, B. Finite-temperature study of itinerant ferromagnetism in fe, co, and ni. _Physical Review B_ 55, 14975 (1997).
* [74] Szilva, A. _et al._ Interatomic exchange interactions for finite-temperature magnetism and nonequilibrium spin dynamics. _Physical review letters_ 111, 127204 (2013).
* [75] Kaneyoshi, T. _Introduction to amorphous magnets_ (World Scientific Publishing Company, 1992).
* [76] Yosida, K., Mattis, D. C. & Yosida, K. _THEORY OF MAGNETISM.: Edition en anglais_ , vol. 122 (Springer Science & Business Media, 1996).
* [77] Pajda, M., Kudrnovskỳ, J., Turek, I., Drchal, V. & Bruno, P. Ab initio calculations of exchange interactions, spin-wave stiffness constants, and curie temperatures of fe, co, and ni. _Physical Review B_ 64, 174402 (2001).
* [78] Yang, K.-H. & Hirschfelder, J. O. Generalizations of classical poisson brackets to include spin. _Physical Review A_ 22, 1814 (1980).
* [79] Loong, C.-K., Carpenter, J., Lynn, J., Robinson, R. & Mook, H. Neutron scattering study of the magnetic excitations in ferromagnetic iron at high energy transfers. _Journal of applied physics_ 55, 1895–1897 (1984).
* [80] Lynn, J. Temperature dependence of the magnetic excitations in iron. _Physical Review B_ 11, 2624 (1975).
* [81] Leger, J., Loriers-Susse, C. & Vodar, B. Pressure effect on the curie temperatures of transition metals and alloys. _Physical Review B_ 6, 4250 (1972).
* [82] Morán, S., Ederer, C. & Fähnle, M. Ab initio electron theory for magnetism in fe: Pressure dependence of spin-wave energies, exchange parameters, and curie temperature. _Physical Review B_ 67, 012407 (2003).
* [83] Körmann, F., Dick, A., Hickel, T. & Neugebauer, J. Pressure dependence of the curie temperature in bcc iron studied by ab initio simulations. _Physical Review B_ 79, 184406 (2009).
* [84] Skomski, R. _et al._ _Simple models of magnetism_ (Oxford University Press on Demand, 2008).
* [85] Wood, M. A. & Thompson, A. P. Extending the accuracy of the snap interatomic potential form. _The Journal of Chemical Physics_ 148, 241721 (2018).
* [86] Zuo, Y. _et al._ Performance and cost assessment of machine learning interatomic potentials. _The Journal of Physical Chemistry A_ 124, 731–745 (2020).
* [87] Wood, M. A., Cusentino, M. A., Wirth, B. D. & Thompson, A. P. Data-driven material models for atomistic simulation. _Physical Review B_ 99, 184305 (2019).
* [88] Deng, Z., Chen, C., Li, X.-G. & Ong, S. P. An electrostatic spectral neighbor analysis potential for lithium nitride. _npj Computational Materials_ 5, 1–8 (2019).
* [89] Goryaeva, A. M., Maillet, J.-B. & Marinica, M.-C. Towards better efficiency of interatomic linear machine learning potentials. _Computational Materials Science_ 166, 200–209 (2019).
* [90] Adams, J. J., Agosta, D., Leisure, R. & Ledbetter, H. Elastic constants of monocrystal iron from 3 to 500 k. _Journal of applied physics_ 100, 113530 (2006).
## Acknowledgements
All authors thank Mark Wilson for his detailed review and edits. Sandia
National Laboratories is a multimission laboratory managed and operated by
National Technology & Engineering Solutions of Sandia, LLC, a wholly owned
subsidiary of Honeywell International Inc., for the U.S. Department of
Energy’s National Nuclear Security Administration under contract DE-NA0003525.
This paper describes objective technical results and analysis. Any subjective
views or opinions that might be expressed in the paper do not necessarily
represent the views of the U.S. Department of Energy or the United States
Government. AC acknowledges funding from the Center for Advanced Systems
Understanding (CASUS) which is financed by the German Federal Ministry of
Education and Research (BMBF) and by the Saxon State Ministry for Science,
Art, and Tourism (SMWK) with tax funds on the basis of the budget approved by
the Saxon State Parliament.
## Author contributions statement
AC, MAW, MPD and JT performed the DFT calculations. JBM, MCM, JT and MAW
generated the Database of configurations. JT implemented the extended spin
Hamiltonian and the magnetic pressure computation in LAMMPS, and parametrized
it on _first-principles_ calculations. MAW and SN trained the SNAP potential.
JT and SN performed the magneto-static calculations. SN, APT, MAW and JT
performed the magneto-dynamics calculations. All authors participated in
conceiving the research and writing the manuscript.
## Competing interests
The authors declare no competing interests.
# Sharp large time behaviour in $N$-dimensional reaction-diffusion equations
of bistable type
Jean-Michel Roquejoffre
Institut de Mathématiques de Toulouse; UMR 5219
Université de Toulouse; CNRS
Université Toulouse III, 118 route de Narbonne, 31062 Toulouse, France
<EMAIL_ADDRESS>
Violaine Roussier-Michon
Institut de Mathématiques de Toulouse; UMR 5219
Université de Toulouse; CNRS
INSA Toulouse, 135 av. Rangueil, 31077 Toulouse, France
<EMAIL_ADDRESS>
###### Abstract
We study the large time behaviour of the reaction-diffusion equation
$\partial_{t}u=\Delta u+f(u)$ in spatial dimension $N$, when the nonlinear
term is bistable and the initial datum is compactly supported. We prove the
existence of a Lipschitz function $s^{\infty}$ of the unit sphere, such that
$u(t,x)$ converges uniformly in $\mathbb{R}^{N}$, as $t$ goes to infinity, to
$U_{c_{*}}\bigg{(}|x|-c_{*}t+\displaystyle\frac{N-1}{c_{*}}\mathrm{ln}t+s^{\infty}\Big{(}\displaystyle\frac{x}{|x|}\Big{)}\bigg{)}$,
where $U_{c_{*}}$ is the unique 1D travelling profile. This extends earlier
results that identified the locations of the level sets of the solutions with
$o_{t\to+\infty}(t)$ precision, or precisely identified the level-set
locations for almost radial initial data.
## 1 Introduction
### 1.1 Question under study
The paper is devoted to the large time behaviour of the solution of the
reaction-diffusion equation
$\partial_{t}u=\Delta u+f(u),\quad t>0\,,\,x\in\mathbb{R}^{N}$ (1)
$u(0,x)=u_{0}(x),\quad x\in\mathbb{R}^{N}$ (2)
where $f\in{\cal C}^{\infty}([0,1],\mathbb{R})$. We will assume the existence
of $\theta\in(0,1)$ such that
$f<0\ \hbox{on $(0,\theta)$},\ f>0\ \hbox{on $(\theta,1)$},\quad
f^{\prime}(0)<0,\ f^{\prime}(1)<0,\quad\int_{0}^{1}f>0.$
Thus $f$ is said to be of bistable type, in reference to the equation
$\dot{u}=f(u)$. A typical example is
$f(u)=u(u-\theta)(1-u),\ \ \ 0<\theta<\frac{1}{2}.$
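For this cubic example, the sign condition on the integral of $f$ is explicit: expanding $u(u-\theta)(1-u)=(1+\theta)u^{2}-u^{3}-\theta u$ and integrating term by term gives

$\int_{0}^{1}f(u)\,du=\frac{1+\theta}{3}-\frac{1}{4}-\frac{\theta}{2}=\frac{1-2\theta}{12},$

which is positive precisely when $0<\theta<\frac{1}{2}$, consistent with the assumption $\int_{0}^{1}f>0$.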
We consider compactly supported initial datum $u_{0}$ of the form
$\displaystyle\exists R_{2}>R_{1}>0\,,\,\forall
x\in\mathbb{R}^{N}\,,\quad{\mathbf{1}}_{B_{R_{1}}}(x)\leq
u_{0}(x)\leq{\mathbf{1}}_{B_{R_{2}}}(x),$ (3)
where ${\mathbf{1}}_{A}$ is the indicator of the set $A$ and $B_{R}$ is the
ball of $\mathbb{R}^{N}$ of radius $R$ centered at the origin. Equation (1)
has a unique classical solution $u(t,x)$ in ${\cal
C}^{\infty}([0,+\infty[\times\mathbb{R}^{N},[0,1])$ emanating from $u_{0}$,
see [11] for instance. From Aronson and Weinberger [1], as soon as $R_{1}>0$
is large enough, the solution $u$ spreads at a fixed speed $c_{*}>0$, in the
following sense:
$\min_{|x|\leq ct}u(t,x)\to 1\mbox{ as }t\to+\infty\,,\mbox{ for all }0\leq
c<c_{*}$
and
$\sup_{|x|\geq ct}u(t,x)\to 0\mbox{ as }t\to+\infty\,,\mbox{ for all
}c>c_{*}.$
The goal of this work is to sharpen this result by proving the following
theorem.
###### Theorem 1.1
Let $u_{0}$ satisfy assumption (3). There is a Lipschitz function
$s^{\infty}$, defined on the unit sphere of $\mathbb{R}^{N}$, such that the
solution $u$ of (1) emanating from $u_{0}$ satisfies
$\lim\limits_{t\to+\infty}\sup_{x\in\mathbb{R}^{N}}\left|u(t,x)-U_{*}\biggl{(}|x|-c_{*}t+\frac{N-1}{c_{*}}{\mathrm{ln}}t+s^{\infty}\Big{(}\frac{x}{|x|}\Big{)}\biggl{)}\right|=0.$
### 1.2 Relation to existing works
In the case $N=1$, equation (1) reads
$\partial_{t}u=\partial_{xx}u+f(u),\quad t>0\,,\,x\in\mathbb{R}.$ (4)
It admits one-dimensional travelling fronts $U(x-ct)$ if and only if
$c=c_{*}$, the spreading speed just mentioned. The profile $U$ satisfies
$U^{\prime\prime}+c_{*}\,U^{\prime}+f(U)=0,\quad x\in\mathbb{R},$ (5)
together with the conditions at infinity
$\lim\limits_{x\to-\infty}U(x)=1\quad\mbox{ and
}\quad\lim\limits_{x\to+\infty}U(x)=0.$ (6)
Any solution $U$ to (5)-(6) is a shift of a fixed profile $U_{*}$:
$U(x)=U_{*}(x+s)$ with some fixed $s\in\mathbb{R}$. The large time behaviour
of (4) has a history of important contributions, the most fundamental being
perhaps that of Fife and McLeod [8]. They proved that the solution of (4)
starting from an initial datum $u_{0}(x)$ that is roughly front-like at
infinity, namely
$\liminf_{x\to-\infty}u_{0}(x)>\theta,\quad\limsup_{x\to+\infty}u_{0}(x)<\theta$
gives rise to a solution $u(t,x)$ that converges to a travelling wave
exponentially in time. Precisely, there exists $x_{0}\in\mathbb{R}$ (depending
on $u_{0}$ in a way that is not explicit in general) such that
$\sup_{x\in\mathbb{R}}|u(t,x)-U_{*}(x-c_{*}t+x_{0})|\leq Ce^{-\omega t},$
where $\omega>0$ is essentially the first nonzero eigenvalue of the linear
operator
$-\partial_{xx}-f^{\prime}(U_{*}(.+x_{0})).$
The large time behaviour of the solutions to (1) has not been described at
that level of precision, with the exception of an earlier paper by the second
author [20], tackling the case of almost spherically symmetric initial data,
which will be the starting point of our work. That contribution proves the
convergence to travelling waves, shifted by the logarithmic delay
$\displaystyle\frac{N-1}{c_{*}}\mathrm{ln}t$ plus an additional, possibly
angle dependent, constant. While the result takes advantage of the nearly
spherically symmetric setting, it emphasises the fact that the part of the
shift that is constant in time is, in general, angle dependent. We also refer
to [21], a work that also identifies the fact that almost spherically
symmetric, but nonsymmetric, initial data will remain so for all later times.
In several space dimensions $N\geq 2$, a line of results, in a spirit
different from that of Theorem 1.1, is the convergence in profile of the
solutions. Namely, $u(t,x)$ is followed in the reference frame where it is
bounded away from 0 or 1, and its asymptotic shape is characterised. We
mention a very interesting contribution of Jones [12], stating that the level
sets of the solution of (1), whatever the nonlinearity is, will have
oscillations only of the size $O_{t\to+\infty}(1)$. This is a consequence of
the following fact: if $\lambda$ is a regular value of $u$, the normal to the
$\lambda$-level set of $u$ meets the convex hull of the support of the initial
datum. A simple proof of this fact is given by Berestycki in [2]. This work
has been revisited in [19].
Instead of a bistable nonlinearity, we may consider (1) with a nonlinearity
$f$ that is positive and concave on $(0,1)$ (the so-called Fisher-KPP
nonlinearity, in reference to the seminal paper [13]). It is a well-known fact
that one-dimensional waves exist for all speeds $c\geq
c_{*}=2\sqrt{f^{\prime}(0)}$. If
$U_{*}$ is a wave with bottom speed, we recently proved in [17], in
collaboration with L. Rossi, that the dynamics of $u$ is
$\lim\limits_{t\to+\infty}\sup_{x\in\mathbb{R}^{N}}\left|u(t,x)-U_{*}\biggl{(}|x|-c_{*}t+\frac{N+2}{c_{*}}{\mathrm{ln}}t+s^{\infty}\Big{(}\frac{x}{|x|}\Big{)}\biggl{)}\right|=0.$
(7)
Thus, in both cases, there is a logarithmic delay. However, the two delays are
of a different nature. In the bistable case, the delay is purely due to curvature
terms, as will be clear from Section 3, and as had already been elucidated in
[20]. In the Fisher-KPP case, there is an additional shift
$\displaystyle\frac{3}{c_{*}}\mathrm{ln}t$, which is already present in one
space dimension, and that is called the Bramson shift [4], [5]. It comes from
the fact that, as 0 is the most unstable value in the range of $u$ \- that is,
the growth for the linearised equation $\dot{v}=f^{\prime}(u)v$ is maximal
when $u=0$ -, the dynamics of $u(t,x)$ is driven by its tail, which implies a
different behaviour that is very much related to the one-dimensional Dirichlet
heat equation. Bramson’s proof is probabilistic, and a new interpretation of
this result is proposed in [14]. Before the complete proof of [17], the
position of the level sets had been identified with $O_{t\to+\infty}(1)$
precision by Gärtner [9], that is, they expand like
$c_{*}t-\displaystyle\frac{N+2}{c_{*}}\mathrm{ln}t$. Estimate (7) is proved on the
basis of the ideas of [17].
As an illustration of this point, we mention the recent contribution [6],
which treats the porous medium equation with Fisher-KPP nonlinearity. It
identifies the position of the level sets with $O_{t\to+\infty}(1)$ precision,
that is, they expand like $c_{*}t-\displaystyle\frac{N-1}{c_{*}}\mathrm{ln}t$.
It may look surprising, as the nonlinearity is the Fisher-KPP one. However,
this can be explained by the fact that the porous medium equation is really a
free boundary problem, so that the solution has no tail. This entails a
behaviour that is more closely related to what is observed in the bistable
case.
### 1.3 Strategy of the proof of Theorem 1.1, organisation of the paper
Let us explain how the proof of Theorem 1.1 proceeds. The first step is to
identify the reference frame in which $u(t,x)$ is nontrivial; for this we
apply the existing analysis of the second author [20]. Once this is done, we
write, as in [17], equation (1) in polar coordinates, shifted in the correct
reference frame. This has the inconvenience of cancelling out, at large times,
the angular diffusion, which deprives us of an important source of
compactness. To recover it we estimate the angular derivative, something that
was quite useful in the Fisher-KPP case [17]. However, while the maximum
principle could be used in [17] in a relatively easy fashion (the asymptotic
equation was the linear heat equation in the tail of the solution), one cannot
do it here. Indeed, what drives the propagation is the body of the solution,
not its tail. As a result, there is no obvious application of the maximum
principle, and the estimate proceeds by applying a Fife-McLeod type idea to
the angular derivative of $u$, by comparing it to its radial derivative. This
is done in three successive steps detailed in Section 3. Once this is under
control, a stability result, once again of Fife-McLeod type, but
complicated by the presence of angular terms, allows us to conclude the proof.
The organisation of the paper follows the main steps of this strategy. In
Section 2, we trap the solution between two 1D travelling waves moving like
$c_{*}t-\displaystyle\frac{N-1}{c_{*}}\mathrm{ln}t$, thus characterising the
reference frame in which the solution is nontrivial; we also prepare the
equations. Section 3 is devoted to the main estimate, namely an estimate on
the angular variable of $u$. The proof of Theorem 1.1 is concluded in Section
4. We make some final remarks in Section 5.
## 2 Radial bounds and preparation of the equations
The main result of this section, which we will deduce from Theorem 1 of [20],
is the following.
###### Proposition 2.1
Let $u$ solve (1) with initial datum $u_{0}$ satisfying (3). There are four
real numbers $t_{0}>0$, $C>0$ and $s_{-}<s_{+}$ such that, for all $t\geq
t_{0}$ and $x\in\mathbb{R}^{N}$, we get
$U_{*}(|x|-c_{*}t+\frac{N-1}{c_{*}}\mathrm{ln}t-s_{-})-C\,\frac{{\mathrm{ln}}t}{t}\leq
u(t,x)\leq
U_{*}(|x|-c_{*}t+\frac{N-1}{c_{*}}\mathrm{ln}t-s_{+})+C\,\frac{{\mathrm{ln}}t}{t}.$
(8)
Proof. Let $u_{0}$ satisfy assumption (3) and $u$ be the unique solution to
(1) emanating from $u_{0}$. Define $R_{0}>0$ and $\delta_{0}>0$, depending on
the nonlinearity $f$, as in Theorem 1 of [20].
We first build a super-solution named $\bar{u}$ as follows. Choose
$\varepsilon\in(0,\displaystyle\frac{\delta_{0}}{\sqrt{R_{2}+1}})$ and
$\bar{R}>R_{0}$ such that
$\forall x\in\mathbb{R}^{N}\,,\quad
u_{0}(x)\leq{\mathbf{1}}_{B_{R_{2}}}(x)\leq
U_{*}(|x|-\bar{R})+\varepsilon{\mathbf{1}}_{B_{R_{2}+1}}(x)$
Let $\bar{u}$ be the solution to (1) emanating from
$U_{*}(|x|-\bar{R})+\varepsilon{\mathbf{1}}_{B_{R_{2}+1}}(x)$. By the maximum
principle, we get that for all $t>0$ and all $x\in\mathbb{R}^{N}$,
$u(t,x)\leq\bar{u}(t,x)$ and we just have to compare $\bar{u}$ with a front.
This is done by theorem 1 in [20]. Indeed, defining
$X=\\{u:\mathbb{R}^{N}\to\mathbb{R}\,|\,\exists\tilde{u}\in
H^{1}(\mathbb{R}^{+})\mbox{ such that }u(x)=\tilde{u}(|x|)\mbox{ for
}x\in\mathbb{R}^{N}\\}$
we get
$\|\bar{u}(0,x)-U_{*}(|x|-\bar{R})\|_{X}\leq\varepsilon\sqrt{R_{2}+1}\leq\delta_{0}$
and therefore, by theorem 1 and the remarks below in [20], there exist
$L\in\mathbb{R}$ and $C>0$ such that for all $t>0$ and $x\in\mathbb{R}^{N}$,
$|\bar{u}(t,x)-U_{*}(|x|-c_{*}t+\frac{N-1}{c_{*}}\ln t+L)|\leq C\,\frac{\ln
t}{t}$
Defining $s_{+}=-L$ leads to the right-hand side of (8) for any $t\geq t_{0}$.
Dealing with a sub-solution is not so simple, because a small perturbation such
as $\max(U_{*}(|x|-\underline{R})-\varepsilon,0)$ may not develop a front. We
therefore use Aronson and Weinberger’s result [1] to wait until the solution
$u$ has propagated enough. Fix $\varepsilon>0$ and $\underline{R}>R_{0}$ such
that $U_{*}(-\underline{R})\leq 1-\varepsilon$. Then, for all
$x\in\mathbb{R}^{N}$, we have
$U_{*}(|x|-\underline{R})\leq U_{*}(-\underline{R})\leq 1-\varepsilon.$
On the other hand, by Aronson and Weinberger’s result [1] with $c=c_{*}/2$,
there exists $t_{\varepsilon}>0$ such that for any $t\geq t_{\varepsilon}$ and
$|x|\leq ct$, $1-\varepsilon\leq u(t,x)\leq 1$. Choose $t_{0}\geq
t_{\varepsilon}$ such that
$\|U_{*}(\cdot-\underline{R})\|_{H^{1}(ct_{0},\infty)}\leq\delta_{0}$
and define
$\underline{u}(t_{0},x)=U_{*}(|x|-\underline{R}){\mathbf{1}}_{B_{ct_{0}}}(x)$.
Then, $\underline{u}(t_{0},x)\leq u(t_{0},x)$ for any $x\in\mathbb{R}^{N}$.
Let $\underline{u}$ be the solution to (1) emanating from
$\underline{u}(t_{0},x)$ at $t=t_{0}$. The maximum principle ensures that
$\underline{u}(t,x)\leq u(t,x)$ for any $t\geq t_{0}$ and $x\in\mathbb{R}^{N}$
and we just have to compare $\underline{u}$ with a front. Since
$\|\underline{u}(t_{0},x)-U_{*}(|x|-\underline{R})\|_{X}=\|U_{*}(\cdot-\underline{R})\|_{H^{1}(ct_{0},\infty)}\leq\delta_{0}$,
Theorem 1 in [20] applies and there exist $L\in\mathbb{R}$ and $C>0$ such
that for all $t\geq t_{0}$ and $x\in\mathbb{R}^{N}$,
$|\underline{u}(t,x)-U_{*}(|x|-c_{*}t+\frac{N-1}{c_{*}}\ln t+L)|\leq
C\,\frac{\ln t}{t}$
Defining $s_{-}=-L$ leads to the left hand side of (8) for any $t\geq t_{0}$.
$\Box$
This proposition makes it clear that the transition zone, where $u$ is neither
close to $1$ nor $0$, is located around
$R(t)=c_{*}t-\displaystyle\frac{N-1}{c_{*}}\ln t$. We therefore choose to
handle the initial equation (1) in a frame moving at speed $\dot{R}(t)$ in
the radial direction. Let us detail these transformations of the equations.
From now on, we take $t=1$ as initial time and (2) is replaced by
$u(1,x)=u_{0}(x)$. This will be more convenient for the following
transformations and, since equation (1) is invariant under time translation,
entails no loss of generality. We first use the polar coordinates
$x\mapsto(r=|x|>0,\Theta=\frac{x}{|x|}\in\mathbb{S}^{N-1})$
then (1) becomes
$\partial_{t}u=\partial_{rr}u+\frac{N-1}{r}\partial_{r}u+\frac{\Delta_{\Theta}u}{r^{2}}+f(u),\quad{t>1,\
r>0,\ \Theta\in\mathbb{S}^{N-1}}.$
Here, $\Delta_{\Theta}$ is the Laplace-Beltrami operator on the unit sphere of
$\mathbb{R}^{N}$. Its precise expression will not be needed in the sequel. The
initial condition reads $u(1,r,\Theta)=u_{0}(r,\Theta)$.
Since we mentioned that the transition zone is located around
$R(t)=c_{*}t-k\ln t$ with $k=(N-1)/c_{*}$, we choose the change of variables
$r^{\prime}=r-R(t)$ and $u(t,r,\Theta)=u_{1}(t,r-R(t),\Theta)$. We drop the
primes and indexes, and (1) becomes
$\partial_{t}u=\partial_{rr}u+c_{*}\partial_{r}u+\biggl{(}\frac{N-1}{r+c_{*}t-k{\mathrm{ln}}t}-\frac{k}{t}\biggl{)}\partial_{r}u+\frac{\Delta_{\Theta}u}{(r+c_{*}t-k{\mathrm{ln}}t)^{2}}+f(u).$
(9)
The equation is valid for $t>1$, $r>-c_{*}t+k{\mathrm{ln}}t$, and
$\Theta\in\mathbb{S}^{N-1}$, and the initial condition becomes
$u(1,r,\Theta)=u_{0}(r+c_{*},\Theta)$.
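For the reader's convenience, here is the chain-rule computation behind (9); it uses only $R(t)=c_{*}t-k\,\mathrm{ln}\,t$, hence $\dot{R}(t)=c_{*}-k/t$:

```latex
% Set r' = r - R(t) and u(t,r,\Theta) = u_1(t,r',\Theta). Then
\partial_t u = \partial_t u_1 - \dot{R}(t)\,\partial_{r'} u_1
             = \partial_t u_1 - \Bigl(c_* - \frac{k}{t}\Bigr)\partial_{r'} u_1,
\qquad
\partial_r u = \partial_{r'} u_1.
% Substituting into the polar form of the equation and writing
% r = r' + c_* t - k\ln t gives
\partial_t u_1 = \partial_{r'r'} u_1 + c_*\,\partial_{r'} u_1
  + \Bigl(\frac{N-1}{r' + c_* t - k\ln t} - \frac{k}{t}\Bigr)\partial_{r'} u_1
  + \frac{\Delta_\Theta u_1}{(r' + c_* t - k\ln t)^2} + f(u_1),
% which is (9) once primes and indexes are dropped.
```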
To unravel the mechanisms at work, our first guess is that the term in
$\Delta_{\Theta}u$ will not matter too much, because it decays like $t^{-2}$
(an integrable power of $t$), except in the zone $r\sim-c_{*}t$, where we know
(for instance [1]) that $u(t,r,\Theta)$ goes to 1 as $t\to+\infty$. This
confirms the information given by proposition 2.1 that the dynamics is like
that of the one-dimensional equation. On the other hand, in the advection
term, we have
$\displaystyle\frac{N-1}{r+c_{*}t-k{\mathrm{ln}}t}\sim_{t\to+\infty}\frac{N-1}{c_{*}t},$
except for extremely large $r$. This is nonintegrable in $t$, but we balance
it with the $\displaystyle\frac{k}{t}$ term since we chose
$k=\frac{N-1}{c_{*}}.$ (10)
This heuristic confirms that $R(t)=c_{*}t-\displaystyle\frac{N-1}{c_{*}}\ln
t$ is the right moving frame to observe the large time dynamics of (1). In the
sequel, we will keep the notation $k$, keeping in mind that $k$ is given by
formula (10). Also, from now on, we will only consider solutions of (9).
## 3 Boundedness of the angular derivative
This section is devoted to the following estimate
###### Theorem 3.1
Let $u$ solve (9) with initial datum $u_{0}(\cdot+c_{*},\cdot)$. Then, there
is $C>0$ such that
$\forall t\geq
1\,,\quad\|\nabla_{\Theta}u(t,.,.)\|_{L^{\infty}((-c_{*}t/2,+\infty)\times\mathbb{S}^{N-1})}\leq
C.$ (11)
The proof of Theorem 3.1 is based on a bootstrap argument. We will first
prove that the quantity on the left-hand side is an $o(t)$, which will allow us
to prove that it is an $O(t^{\varepsilon})$ for all $\varepsilon>0$, which
will in turn lead us to $O(1)$. The main idea is to adapt the construction by
Fife and McLeod [8] of sub and super-solutions, but at the level of the linear
equation. The main ingredient is that $\partial_{r}u$ becomes bounded away
from 0 on every compact set, so that it may serve as a comparison function.
And so, the main step (namely section 3.2) will consist in comparing
$|\nabla_{\Theta}u|$ to a suitable multiple of $\partial_{r}u$, as it almost
satisfies the same equation. The main body of the work will consist in
quantifying what this innocent “almost” means. This idea of using the
longitudinal derivative of the solution as a comparison tool (as opposed to
that of the wave, which has a long history dating back to Fife-McLeod) was
first used in [16], to prove the convergence to travelling waves in
cylindrical geometry.
Proof of Theorem 3.1. Let $u$ solve equation (9) with datum
$u_{0}(\cdot+c_{*},\cdot)$.
### 3.1 The $o(t)$ estimate
Let us perform the reverse change of variables explained in the previous
section, to come back to the solution $u$ of equation (1). Pick any direction
$\Theta$ on the unit sphere. We may, after a rotation, assume that
$\Theta=0$, so that we are looking in the direction $Ox_{1}$. Let
$x^{\prime}=(x_{2},...,x_{N})$ be the coordinates orthogonal to the direction
$Ox_{1}$. Consider the sector
$\Sigma_{t}=\\{x\in\mathbb{R}^{N}\,|\,x_{1}>0,\
\frac{|x^{\prime}|}{x_{1}}\leq\frac{1}{t^{3/4}}\\},$
Notice that for $x\in\Sigma_{t}$, when $x_{1}\sim t$, we have
$|x^{\prime}|\leq t^{1/4}$. We write (1) in $\Sigma_{t}$, in the reference frame
moving like $c_{*}t-k{\mathrm{ln}}t$ in the direction $Ox_{1}$: setting
$X_{1}=x_{1}-c_{*}t+k\ln t$ and
$u(t,x)=u_{1}(t,X_{1},x^{\prime})=u_{1}(t,x_{1}-c_{*}t+k\ln t,x^{\prime})$, we
get, dropping indexes,
$\partial_{t}u=\Delta u+\left(c_{*}-\frac{k}{t}\right)\partial_{1}u+f(u).$
We also have, because $r=|(X_{1}+c_{*}t-k{\mathrm{ln}}t,x^{\prime})|$ in
$\Sigma_{t}-(c_{*}t-k{\mathrm{ln}}t)e_{1}$:
$r-c_{*}t+k{\mathrm{ln}}t=X_{1}+o_{t\to+\infty}(1),\ \hbox{uniformly in
$\Sigma_{t}-(c_{*}t-k{\mathrm{ln}}t)e_{1}$}$
Proposition 2.1 and parabolic regularity imply that the trajectories
$(u(T+t,X_{1},x^{\prime}))_{T>0}$ are relatively compact in
$C^{2}([-\tau,\tau]\times\mathbb{R}\times[-M,M]^{N-1})$ for all $\tau>0$ and
$M>0$. If $u_{\infty}(t,X_{1},x^{\prime})$ is a limiting trajectory we have
for $(t,X_{1},x^{\prime})\in\mathbb{R}^{N+1}$
$\displaystyle\partial_{t}u_{\infty}=\Delta
u_{\infty}+c_{*}\partial_{1}u_{\infty}+f(u_{\infty})$
(12) $\displaystyle U_{*}(X_{1}-s_{-})\leq u_{\infty}(t,X_{1},x^{\prime})\leq
U_{*}(X_{1}-s_{+}).$ (13)
From Theorem 1.1 of [18] there is $s_{\infty}\in\mathbb{R}$ such that for
$(t,X_{1},x^{\prime})\in\mathbb{R}^{N+1}$
$u_{\infty}(t,X_{1},x^{\prime})=U_{*}(X_{1}-s_{\infty}).$
Parabolic regularity implies
$\lim_{t\to+\infty}|\nabla_{x^{\prime}}u(t,X_{1},x^{\prime})|=0,\ \hbox{
uniformly in }X_{1}\in\mathbb{R}\mbox{ and }x^{\prime}\mbox{ on every compact
set.}$
Let us translate this result in the variables of equation (9). Because
$\nabla_{\Theta}u(t,r,0)=(r+c_{*}t-k{\mathrm{ln}}t)\nabla_{x^{\prime}}u(t,X_{1},0),$
(14)
we have the expected estimate on $\nabla_{\Theta}u$ for $\Theta=0$. Note that
our argument is uniform in the direction considered, so that we have in the
end, for $u$ solution to (9),
$\lim_{t\to+\infty}\big{\|}\frac{\nabla_{\Theta}u(t,.,.)}{r+c_{*}t-k{\mathrm{ln}}t}\big{\|}_{L^{\infty}((-c_{*}t/2,+\infty)\times\mathbb{S}^{N-1})}=0.$
Parabolic regularity again implies the following corollary.
###### Corollary 3.2
We have
$\lim_{t\to+\infty}\big{\|}\frac{\Delta_{\Theta}u(t,.,.)}{(r+c_{*}t-k{\mathrm{ln}}t)^{2}}\big{\|}_{L^{\infty}((-c_{*}t/2,+\infty)\times\mathbb{S}^{N-1})}=0.$
We also extract from the preceding argument an additional corollary.
###### Corollary 3.3
For every $M>0$, there is $T_{M}>0$ and $\delta_{M}>0$, the function
$M\mapsto\delta_{M}$ decreasing, such that
$-\partial_{r}u(t,r,\Theta)\geq\delta_{M}\ \hbox{for $t\geq T_{M}$, $-M\leq
r\leq M$, $\Theta\in{\mathbb{S}}^{N-1}$}.$
### 3.2 The $O(t^{\varepsilon})$ estimate
For $u$ solution to (9), denote
$V(t,r,\Theta)=-\partial_{r}u(t,r,\Theta).$
The equation for $V$ is
$\biggl{(}\partial_{t}+L(t)-f^{\prime}(u)\biggl{)}V=-\frac{N-1}{(r+c_{*}t-k{\mathrm{ln}}t)^{2}}V+\frac{2\Delta_{\Theta}u}{(r+c_{*}t-k{\mathrm{ln}}t)^{3}},$
(15)
the expression of $L(t)$ being
$L(t)=-\partial_{rr}-c_{*}\partial_{r}-\biggl{(}\frac{N-1}{r+c_{*}t-k{\mathrm{ln}}t}-\frac{k}{t}\biggl{)}\partial_{r}-\frac{\Delta_{\Theta}}{(r+c_{*}t-k{\mathrm{ln}}t)^{2}}.$
(16)
If $\Theta=(\theta_{1},...,\theta_{N-1})$, we set
$u_{i}=\partial_{\theta_{i}}u,$ (17)
we have
$\biggl{(}\partial_{t}+L(t)-f^{\prime}(u)\biggl{)}u_{i}=0.$ (18)
We look for a super-solution to (18) of the form
$\overline{v}(t,r,\Theta)=\xi(t)V(t,r,\Theta)+q(t),$ (19)
that is, almost as in Fife-McLeod [8], up to the fact that we work at the
level of the linearised equation. In the sequel, we will always study
$\overline{v}$ in the range $r\geq-c_{*}t/2$, the range
$[-c_{*}t+k{\mathrm{ln}}t,-c_{*}t/2)$ being taken care of by Proposition 2.1.
Also, it is enough to construct $\overline{v}$ for $t$ large enough.
We shall now introduce new notations to explain how we will choose the
functions $\xi(t)$ and $q(t)$. Denote $F=\|f^{\prime}\|_{\infty}>0$ and
$\varepsilon(t)=\big{\|}\frac{N-1}{r+c_{*}t-k{\mathrm{ln}}t}V-\frac{2\Delta_{\Theta}u}{(r+c_{*}t-k{\mathrm{ln}}t)^{2}}\big{\|}_{\infty}\,.$
Then, $\varepsilon(t)$ is a nonnegative function tending to $0$ at infinity;
this follows from Corollary 3.2 and the boundedness of $V$. Pick $M>0$ and $\delta>0$ such
that
$f^{\prime}(u(t,r,\Theta))\leq-\delta\ \hbox{if $|r|\geq M.$}$
By Corollary 3.3, there exist $T_{M}>0$ and $\delta_{M}>0$ such that
$V\geq\delta_{M}$ for $t\geq T_{M}$, $|r|\leq M$ and
$\Theta\in{\mathbb{S}}^{N-1}$. Moreover, if $V^{-}$ stands for the negative
part of $V$, we also get from Corollary 3.3 that
$\lim_{R\to+\infty}\limsup_{t\to+\infty}\sup_{r\in(-c_{*}t/2,+\infty)\backslash(-R,R)}V^{-}(t,r,\Theta)=0.$
Then, for any $\eta>0$, there exist $T>0$ and $R>0$ such that, for $t\geq T$,
$V(t,r,\Theta)\geq 0$ for $r\in[-R,R]$ and $|V^{-}(t,r,\Theta)|\leq\eta$ for
$r\in(-c_{*}t/2,+\infty)\backslash(-R,R)$. Choose now $\eta>0$ small enough so
that $0<\eta<\min(\delta_{M},\displaystyle\frac{\delta\delta_{M}}{2F})$ and
$R>M$.
Now that all those preliminaries are given, let us define $\xi$ and $q$ as the
unique solutions to the ODE system, for $t>T$,
$\left\\{\begin{array}[]{rll}\dot{q}+\displaystyle\displaystyle\frac{\delta}{4}q=&\displaystyle\frac{\varepsilon(t)\xi}{c_{*}t/2-k{\mathrm{ln}}t}\,,\\\
\dot{\xi}=&\displaystyle\frac{\delta+F}{\delta_{M}+\eta}q\,,\end{array}\right.$
(20)
the initial data $(q(T),\xi(T))$ being nonnegative and sufficiently large
so that $\overline{v}(T,r,\Theta)\geq|u_{i}(T,r,\Theta)|$. Then, for any
$t\geq T$, $q(t)\geq 0$ and $\xi(t)\geq 0$ and we shall prove that
$\overline{v}$ defined by (19) with $\xi$ and $q$ verifying (20) is a
supersolution to (18). We have
$\begin{array}[]{rll}&\biggl{(}\partial_{t}+L(t)-f^{\prime}(u)\biggl{)}\overline{v}\\\
=&\dot{\xi}(t)V+\xi(t)\biggl{(}\partial_{t}V+L(t)V-f^{\prime}(u)V\biggl{)}+\dot{q}-f^{\prime}(u)q\\\
\geq&\dot{\xi}(t)V+\dot{q}-f^{\prime}(u)q-\displaystyle\frac{\varepsilon(t)\xi}{r+c_{*}t-k{\mathrm{ln}}t},\end{array}$
Consider first the set where $|r|\geq M$. Since $\dot{\xi}\geq 0$, $q\geq 0$,
$|V^{-}|\leq\eta$ and
$\eta<\min(\delta_{M},\displaystyle\frac{\delta\delta_{M}}{2F})$,
$\begin{array}[]{rll}\biggl{(}\partial_{t}+L(t)-f^{\prime}(u)\biggl{)}\overline{v}\geq&-\dot{\xi}\eta+\dot{q}+\delta
q-\displaystyle\frac{\varepsilon(t)\xi}{c_{*}t/2-k{\mathrm{ln}}t},\\\
=&-\eta\displaystyle\frac{\delta+F}{\delta_{M}+\eta}q+\dot{q}+\delta
q-\displaystyle\frac{\varepsilon(t)\xi}{c_{*}t/2-k{\mathrm{ln}}t}\\\
\geq&\dot{q}+\displaystyle\frac{\delta}{4}q-\displaystyle\frac{\varepsilon(t)\xi}{c_{*}t/2-k{\mathrm{ln}}t}=0,\end{array}$
In the range $|r|\leq M$, the super-solution condition is true since
$\begin{array}[]{rll}\biggl{(}\partial_{t}+L(t)-f^{\prime}(u)\biggl{)}\overline{v}\geq&\dot{\xi}\delta_{M}+\dot{q}-Fq-\displaystyle\frac{\varepsilon(t)\xi}{c_{*}t/2-k{\mathrm{ln}}t},\\\
=&\delta_{M}\displaystyle\frac{\delta+F}{\delta_{M}+\eta}q+\dot{q}-Fq-\displaystyle\frac{\varepsilon(t)\xi}{c_{*}t/2-k{\mathrm{ln}}t}\\\
\geq&\dot{q}+\displaystyle\frac{\delta}{4}q-\displaystyle\frac{\varepsilon(t)\xi}{c_{*}t/2-k{\mathrm{ln}}t}=0,\end{array}$
Finally, for any $r\geq-c_{*}t/2$, $\overline{v}$ is a supersolution to (18).
Fix $\varepsilon>0$. In order to prove that $|\nabla_{\Theta}u|$ is an
$O(t^{\varepsilon})$, it suffices to study $(q(t),\xi(t))$ as time goes to
infinity. The equation being linear, it is enough to study it with
$(q(T),\xi(T))=(1,1)$. The first equation in (20) gives
$\displaystyle q(t)\leq$ $\displaystyle
e^{-\delta(t-T)/4}+\int_{T}^{t}e^{-\delta(t-s)/4}\displaystyle\frac{\varepsilon(s)\xi(s)}{c_{*}s/2-k\ln
s}ds\,,$ $\displaystyle\leq$ $\displaystyle
e^{-\delta(t-T)/4}+C\varepsilon\xi(t)\int_{T}^{t}\displaystyle\frac{e^{-\delta(t-s)/4}}{1+s}ds\leq
C\varepsilon\displaystyle\frac{\xi(t)}{1+t}$
with $C$ a universal constant. Indeed, $\dot{\xi}\geq 0$ and $T$ can be chosen
large enough so that for any $s\geq T$, $0\leq\varepsilon(s)\leq\varepsilon$.
We have also bounded $c_{*}s/2-k{\mathrm{ln}}s$ from below by a suitable
multiple of $1+s$. Plugging this result in the second equation of (20), we get for any
$t\geq T$
$\dot{\xi}(t)\leq C\varepsilon\frac{\xi(t)}{1+t},$
with a universal constant $C$. Therefore, $\xi(t)\leq(1+t)^{C\varepsilon}$,
which is precisely the desired estimate. The same will hold for $q$.
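The Gronwall step above can be sanity-checked numerically. The sketch below integrates system (20) with toy constants (all values are illustrative, chosen only so that the hypotheses make sense; they are not those produced by the proof), with $\varepsilon(t)\equiv\varepsilon$ small, and measures the growth exponent of $\xi$ over a late decade; it should be a small power of $t$, of the order of a constant times $\varepsilon$:

```python
import math

# Toy integration of system (20):
#   q' + (delta/4) q = eps * xi / (c_* t/2 - k ln t),
#   xi' = ((delta + F)/(delta_M + eta)) q.
# All constant values are illustrative only.
delta, F, delta_M, eta = 1.0, 1.0, 0.5, 0.1
c_star, k, eps = 1.0, 1.0, 0.005

T, t_end, dt = 10.0, 1.0e4, 0.05
q, xi, t = 1.0, 1.0, T
checkpoints = {}                      # records xi when t first passes 1e3 and 1e4
while t < t_end:
    denom = c_star * t / 2.0 - k * math.log(t)
    dq = -delta / 4.0 * q + eps * xi / denom
    dxi = (delta + F) / (delta_M + eta) * q
    q += dt * dq
    xi += dt * dxi
    t += dt
    for tp in (1.0e3, 1.0e4):
        if tp not in checkpoints and t >= tp:
            checkpoints[tp] = xi

# Growth exponent of xi over the decade [1e3, 1e4]; a small value is consistent
# with the bound xi(t) <= (1 + t)^{C eps}.
slope = math.log(checkpoints[1.0e4] / checkpoints[1.0e3]) / math.log(10.0)
print(round(slope, 3))
```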
### 3.3 Conclusion
We start from
$|\nabla_{\Theta}u(t,r,\Theta)|\leq Ct^{1/20}.$
Setting $u_{ij}=\partial_{\theta_{i}\theta_{j}}u$, we have
$\biggl{(}\partial_{t}+L(t)-f^{\prime}(u)\biggl{)}u_{ij}=f^{\prime\prime}(u)u_{i}u_{j}=O(t^{1/10}).$
Parabolic regularity implies (this involves a rescaling of $\Theta$ by $t$,
then scaling back):
$|u_{ij}(t,r,\Theta)|\leq Ct^{21/20}.$
This allows in turn a more precise estimate in the equation for
$V=-\partial_{r}u$, because we have now
$\biggl{(}\partial_{t}+L(t)-f^{\prime}(u)\biggl{)}V=O(t^{-39/20}),$
in the range $r\in(-c_{*}t/2,+\infty)$. Clearly, the right-hand side is an
integrable power of $t$; we may therefore redo the previous step, replacing
$\displaystyle\frac{\varepsilon(t)\xi(t)}{1+t}$
by $\displaystyle\frac{\xi(t)}{(1+t)^{39/20}}$. This leads to the desired
estimate on $\overline{v}$, and then on $\nabla_{\Theta}u$, ending the proof of
Theorem 3.1. $\Box$
## 4 Convergence
Theorem 1.1 will result from the following stability result, once again close
in spirit to Fife and McLeod [8].
###### Theorem 4.1
Let $u$ be a solution to (9) and $s$ a Lipschitz function of
${\mathbb{S}}^{N-1}$. For every $\varepsilon>0$, there is $T_{\varepsilon}>0$
and $\eta_{\varepsilon}>0$ (depending possibly on $\|s\|_{\infty}$ and
$\|\nabla s\|_{\infty}$) such that, if we have
$|u(T_{\varepsilon},r,\Theta)-U_{*}(r+s(\Theta))|\leq\eta_{\varepsilon},\quad
r>-c_{*}T_{\varepsilon}+k{\mathrm{ln}}T_{\varepsilon}\,,\,\Theta\in{\mathbb{S}}^{N-1},$
(21)
then there is $T_{\varepsilon}^{\prime}\geq T_{\varepsilon}$ such that, for
all $t\geq T_{\varepsilon}^{\prime}$ we have:
$|u(t,r,\Theta)-U_{*}(r+s(\Theta))|\leq\varepsilon,\quad
r>-c_{*}T_{\varepsilon}^{\prime}+k{\mathrm{ln}}T_{\varepsilon}^{\prime}\,,\,\Theta\in{\mathbb{S}}^{N-1}.$
Let us postpone the proof of Theorem 4.1 and first deduce Theorem 1.1 from it.
Proof of Theorem 1.1: Let $u$ solve (1) with initial datum $u_{0}$ satisfying
(3). Perform the transformations listed in Section 2, to deal with polar
coordinates in the radially moving frame. We still denote by $u$ the solution of
(9). Define, for all $\tau>0$,
$\Omega_{\tau}=\\{t\in[-\tau,\tau],r\in[-c_{*}t+k{\mathrm{ln}}t,+\infty),\Theta\in{\mathbb{S}}^{N-1}\\}$
Then, by parabolic regularization and Ascoli’s theorem, the family
$(u(T+.,.,.))_{T>0}$ is relatively compact in $C(\Omega_{\tau})$ for every
$\tau>0$. Therefore, there is a sequence $(t_{n})_{n}$ going to infinity such
that $(u(t_{n}+.,.,.))_{n}$ converges, uniformly in every $\Omega_{\tau}$, to
a uniformly continuous function $u_{\infty}(t,r,\Theta)$ satisfying for
$t\in\mathbb{R}$
$\begin{array}[]{rll}\partial_{t}u_{\infty}=\partial_{rr}u_{\infty}+c_{*}\partial_{r}u_{\infty}+f(u_{\infty})&\hbox{in
${\mathcal{D}}^{\prime}(\mathbb{R}^{2}\times{\mathbb{S}}^{N-1})$}\\\
U_{*}(r-s_{-})\leq u_{\infty}(t,r,\Theta)\leq U_{*}(r-s_{+}).&\end{array}$
thanks to the radial barriers obtained in section 2. Moreover,
$u_{\infty}(.,.,\Theta)$ is $C^{1}$ in $t$ and $C^{2}$ in $r$ due to parabolic
regularity, for every $\Theta\in{\mathbb{S}}^{N-1}$. From [8], for every
$\Theta\in{\mathbb{S}}^{N-1}$, there is $s(\Theta)\in\mathbb{R}$ such that
$u_{\infty}(t,r,\Theta)=U_{*}(r+s(\Theta)).$
From Theorem 3.1, the function $s$ is Lipschitz. Moreover, from Theorem 4.1,
the whole family $(u(t,.,.))_{t>0}$ converges uniformly on
$[0,+\infty)\times\mathbb{S}^{N-1}$ to the function $(r,\Theta)\mapsto
U_{*}(r+s(\Theta))$. Reverting to the original variables proves Theorem 1.1.
$\Box$
As said above, Theorem 4.1 is a Fife-McLeod type result, and so will be
obtained from the construction of sub- and super-solutions very much inspired
by [8]. However, while those in [8] were explicit, the ones constructed here
solve a nonlinear differential system, and so must be studied with a little
care. This is the object of the following intermediate lemma.
For any $\varepsilon>0$, $C>0$ and $T_{0}>0$ such that
$c_{*}T_{0}-k{\mathrm{ln}}T_{0}\geq 100$, we define
$g_{\varepsilon}(t,\xi)=\frac{C}{\varepsilon}\big{|}\frac{N-1}{c_{*}t-k{\mathrm{ln}}t+\xi}-\frac{k}{t}\big{|}\,,\,t>T_{0}\,,\,\xi\in\mathbb{R}$
###### Lemma 4.2
Let $\delta$, $\gamma$ and $\eta$ be strictly positive constants. Consider the
differential system
$\left\\{\begin{array}[]{rlll}\dot{q}+\delta q=&g_{\varepsilon}(t,\xi)&t>T\\\
\gamma\dot{\xi}=&Cq+g_{\varepsilon}(t,\xi)&t>T\\\ q(T)=&\eta,\
\xi(T)=0.&\end{array}\right.$ (22)
where $\varepsilon$, $C$ and $g_{\varepsilon}$ are defined above. There are
$T\geq T_{0}$ and $K>0$, depending on all the constants involved in (22)
except $\eta$, such that (22) has a solution defined on $[T,+\infty)$ which
satisfies, for any $t\geq T$,
$0\leq q(t)\leq K(\eta+\frac{1}{\varepsilon
T^{1/2}})e^{-\delta/2(t-T)}+\frac{K}{t^{3/2}},\quad\xi(t)\leq
K(\eta+\frac{1}{\varepsilon T^{1/2}}).$ (23)
Proof. In what follows, $K$ denotes a generic positive constant that may
differ from line to line. We first derive a logarithmic bound on $\xi$, then
the desired bound. By definition, there exists $K>0$ such that for any $t>T$,
$0\leq g_{\varepsilon}(t,\xi(t))\leq\frac{K}{\varepsilon(t+\xi(t))},$
so that we have
$q(t)\leq\eta
e^{-\delta(t-T)}+\frac{K}{\varepsilon}\int_{T}^{t}\frac{e^{-\delta(t-s)}}{s+\xi(s)}ds.$
Let $T^{*}\geq T$ be the largest $\tau\geq T$ such that
$\hbox{for all $t\in[T,\tau]$,}\ \xi(t)\leq({\mathrm{ln}}t)^{2}.$ (24)
So, for any $s\in[T,T^{*}]$, we may write
$\frac{1}{\varepsilon(s+\xi(s))}\leq\frac{K}{\varepsilon s},$
so that for any $t\in[T,T^{*}]$, cutting the integral at
$s=\displaystyle\frac{t}{2}$, we get
$q(t)\leq\frac{K}{\varepsilon t}+(\frac{K}{\varepsilon
T}+\eta)e^{-\frac{\delta}{2}(t-T)}.$
This implies in turn
$\dot{\xi}(t)\leq\frac{K}{\varepsilon t},$
hence $\xi(t)\leq\displaystyle\frac{K}{\varepsilon}{\mathrm{ln}}t$, a
contradiction with the definition of $T^{*}$, as soon as $T$ is large enough,
say, of the order $\varepsilon^{-2}$. So for any $t\geq T$,
$\xi(t)\leq({\mathrm{ln}}t)^{2}$. But then, we have a more precise estimate of
$g_{\varepsilon}(t,\xi(t))$. Using the actual value of $k$ we have
$g_{\varepsilon}(t,\xi(t))\leq\frac{C(N-1)}{\varepsilon
c_{*}t}\biggl{(}\frac{1}{1-\ln
t/t+({\mathrm{ln}}t)^{2}/(c_{*}t)}-1\biggl{)}\leq\frac{K{\mathrm{ln}}t}{\varepsilon
t^{2}}\leq\frac{K}{\varepsilon t^{3/2}},$
thus an integrable power of $t$. Plugging this new estimate into (22) yields
(23), hence the lemma. $\Box$
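Lemma 4.2 can also be checked numerically. The sketch below integrates the nonlinear system (22) with toy constants (illustrative values only; in particular $T_{0}$ is chosen so that $c_{*}T_{0}-k\,\mathrm{ln}\,T_{0}\geq 100$) and observes the behaviour predicted by (23): $q$ decays to zero while $\xi$ stays bounded:

```python
import math

# Toy integration of system (22):
#   q' + delta q = g_eps(t, xi),   gamma xi' = C q + g_eps(t, xi),
# with g_eps(t, xi) = (C/eps) | (N-1)/(c_* t - k ln t + xi) - k/t |.
# All constant values are illustrative only.
N, c_star = 3, 1.0
k = (N - 1) / c_star
delta, gamma, C, eps, eta = 1.0, 1.0, 1.0, 0.5, 0.1

def g(t, xi):
    return (C / eps) * abs((N - 1) / (c_star * t - k * math.log(t) + xi) - k / t)

T0 = 110.0                      # so that c_* T0 - k ln T0 >= 100
q, xi, t, dt = eta, 0.0, T0, 0.1
while t < 1.0e4:
    dq = -delta * q + g(t, xi)
    dxi = (C * q + g(t, xi)) / gamma
    q += dt * dq
    xi += dt * dxi
    t += dt

# q has decayed to (almost) zero, while xi has settled at a bounded value,
# in agreement with the estimate (23).
print(q, xi)
```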
Proof of Theorem 4.1. Let $u$ be a solution to (9) and $s$ as in the statement
of Theorem 4.1. Let $\varepsilon>0$ and choose
$T_{\varepsilon}=O(\frac{1}{\varepsilon^{2}})$ and
$\eta_{\varepsilon}=O(\varepsilon^{2})$. We first regularise $s$: set
$s^{\pm}_{\varepsilon}(\Theta)=(\rho_{\varepsilon}*s)(\Theta)\pm
C\varepsilon,$
where $\rho_{\varepsilon}$ is an $\varepsilon$-approximation of the identity
and $C>0$ large enough (say, a multiple of $\|\nabla s\|_{\infty}$). Then, by
assumption (21), we have
$U_{*}(r+s^{-}_{\varepsilon}(\Theta))-C\varepsilon\leq
u(T_{\varepsilon},r,\Theta)\leq
U_{*}(r+s^{+}_{\varepsilon}(\Theta))+C\varepsilon.$
Also notice that
$\|\nabla_{\Theta}s^{+}_{\varepsilon}\|_{\infty}\leq\|\nabla_{\Theta}s\|_{\infty},\
\|\Delta_{\Theta}s^{+}_{\varepsilon}\|_{\infty}\leq\frac{\|\nabla_{\Theta}s\|_{\infty}}{\varepsilon}.$
Write equation (9) for $u$ as
$N\\!L[u]=0.$
We are going to construct sub- and super-solutions to (9), starting from
$t=T_{\varepsilon}$, at a small distance from $u(t,r,\Theta)$. We give details
for the super-solution, as the sub-solution case goes in the same way.
We try the form, in the spirit of Fife and Mc Leod [8],
$\overline{u}(t,r,\Theta)=U_{*}(r+s^{+}_{\varepsilon}(\Theta)-\xi^{+}(t))+q^{+}(t),$
(25)
where $\xi^{+}$ and $q^{+}$ are chosen in the following way.
We first define $\mu_{0}\in(0,\min(\theta,1-\theta))$ and $\delta>0$ such
that
$f^{\prime}(u)\leq-\delta\ \hbox{on }[0,\mu_{0}]\cup[1-\mu_{0},1].$
Choose $M>0$ such that $U_{*}(\rho)\in[0,\mu_{0}]\cup[1-\mu_{0},1]$ if
$\rho\notin[-M,M]$ and $\delta_{M}>0$ such that
$U^{\prime}_{*}(\rho)\leq-\delta_{M}$ if $\rho\in[-M,M]$.
Now define $\xi^{+}$ and $q^{+}$ as the unique solution to (22) with
$\gamma=\delta_{M}$ and $\eta=C\varepsilon$. Then, choosing $T_{\varepsilon}$
possibly larger, we know by Lemma 4.2 that there exists $K>0$ such that for
any $t\geq T_{\varepsilon}$,
$0\leq q(t)\leq K(C\varepsilon+\frac{1}{\varepsilon
T^{3/2}})e^{-\delta/2(t-T)}+\frac{K}{t^{3/2}},\quad\xi(t)\leq
K(C\varepsilon+\frac{1}{\varepsilon T^{1/2}}).$
Adjusting $\varepsilon>0$ such that
$2K(C\varepsilon+\frac{1}{\sqrt{T_{\varepsilon}}})<\mu_{0}$, it implies that
$0\leq q^{+}(t)\leq\mu_{0}\mbox{ and }\dot{\xi}^{+}(t)\geq 0\,.$
With these preliminaries in place, we may compute $N\\!L[\overline{u}]$. Set
$\rho:=r+s^{+}_{\varepsilon}(\Theta)-\xi^{+}(t)$. Then,
$\begin{array}[]{rl}N\\!L[\overline{u}]=&-\dot{\xi}^{+}(t)U_{*}^{\prime}(\rho)+\dot{q}^{+}(t)-f(U_{*}(\rho)+q^{+})+f(U_{*}(\rho))\\ &-\biggl(\displaystyle\frac{N-1}{\rho+c_{*}t-k{\mathrm{ln}}t+\xi^{+}(t)-s^{+}_{\varepsilon}(\Theta)}-\displaystyle\frac{k}{t}\biggr)U_{*}^{\prime}(\rho)-\displaystyle\frac{U_{*}^{\prime}(\rho)\Delta_{\Theta}s^{+}_{\varepsilon}+U_{*}^{\prime\prime}(\rho)|\nabla_{\Theta}s^{+}_{\varepsilon}|^{2}}{(\rho+c_{*}t-k{\mathrm{ln}}t+\xi^{+}(t)-s^{+}_{\varepsilon}(\Theta))^{2}}.\end{array}$
Consider first the range $|\rho|\geq M$. Then, plugging $\overline{u}$ in the
expression of $N\\!L$ reveals that a sufficient condition for
$N\\!L[\overline{u}]\geq 0$ is
$\dot{q}^{+}+\delta q^{+}\geq\frac{\|\nabla_{\Theta}s\|_{\infty}}{\varepsilon\bigl(\rho+c_{*}t-k{\mathrm{ln}}t+\xi^{+}(t)\bigr)^{2}}+\biggl|\frac{N-1}{\rho+c_{*}t-k{\mathrm{ln}}t+\xi^{+}(t)}-\frac{k}{t}\biggr|.$
(26)
In the range $|\rho|\leq M$, a sufficient condition is
$\delta_{M}\dot{\xi}^{+}-Cq^{+}\geq\frac{\|\nabla_{\Theta}s\|_{\infty}}{\varepsilon\bigl(\rho+c_{*}t-k{\mathrm{ln}}t+\xi^{+}(t)\bigr)^{2}}+\biggl|\frac{N-1}{\rho+c_{*}t-k{\mathrm{ln}}t+\xi^{+}(t)}-\frac{k}{t}\biggr|.$
(27)
Both conditions are satisfied since $\xi^{+}$ and $q^{+}$ satisfy (22). Note that
we may always absorb the first term on the right-hand side of (26) or (27) into a
function of the type $g_{\varepsilon}(t,\xi^{+}(t))$. Thus $\overline{u}$ is a
super-solution to (9), and for any $t\geq T_{\varepsilon}$
$u(t,r,\Theta)\leq U_{*}(r+s(\Theta))+\varepsilon\,.$
Dealing in the same way with a sub-solution finishes the proof of the theorem.
$\Box$
## 5 Further questions and final remarks
Instead of working with a bistable nonlinearity, one could think of dealing
with a nonlinearity $f$ for which there is $\theta\in(0,1)$ such that
$f\equiv 0\ \hbox{on $[0,\theta]$},\ \ \ f>0\ \hbox{on $(\theta,1)$},\ \ \
f(1)=0.$
We believe that Theorem 1.1 holds in this case. However, the degeneracy of $f$
near 0 would certainly impose the construction of sub- and super-solutions with
exponential weights. The main step would once again be the gradient estimate,
which already carries more technicalities than the basic Fife-McLeod
construction. Some rather tedious computations should therefore be expected.
A more challenging question is to assess when the shift $s^{\infty}(\Theta)$
is trivial, that is, constant with respect to $\Theta$. The analysis of the
second author in [20] shows that the set of (almost spherically symmetric)
initial data giving rise to a nontrivial shift is quite big (open and dense).
The issue is therefore whether any non spherically symmetric initial datum
will generate a nontrivial shift. We note that [20] does not preclude
codimension 1 sets of initial data generating trivial shifts. We hope to
address this question in the future.
Another, possibly easier, question concerns further regularity of
$s^{\infty}$. In [20] it is shown to be $L^{2}$; in this work we upgrade its
regularity to Lipschitz. Whether it is $C^{2}$, or whether the derivative may
develop discontinuities even if the initial datum is smooth, is something that
does not immediately stem from our analysis. We simply note that, given the
goal of this paper, additional regularity is only anecdotal, in the sense that
it would have slightly simplified the convergence proof.
However the question is interesting in its own right and we note that, in [17]
we could not go further than Lipschitz regularity either. Whether more
involved considerations would have allowed us to reach further regularity, or,
as opposed to this, a new phenomenon occurs, is something that we do not know.
Finally, we believe that it would be interesting to understand the rate of
convergence of $u(t,x)$ to the shifted wave. In the Fisher-KPP case, it is a
very interesting problem that was first raised by Ebert and Van Saarloos [7],
in one spatial dimension. They proposed a full expansion in terms of powers of
$t^{-1/2}$ and found universal (that is, not depending on the initial datum
and not even on the nonlinearity $f$) terms. The analysis of [7] was carried
out in the formal style, and a first mathematically rigorous proof of the
expansion up to the order $t^{-1+\delta}$ ($\delta>0$ is any small positive
number) was given in [15]. An analysis, still in the formal style, of [3],
finds an expansion that is different from that of [7] at higher orders, in the
sense that $t^{-1}\mathrm{ln}t$ terms pop up. The analysis of [3] is confirmed
in a mathematically rigorous way by Graham in [10]. So, coming back to our
model (1), we believe that pushing the expansion, perhaps to exponentially
small terms, would be of interest. Making a more extensive use of the concept
of radial waves, as defined in [20], could be a starting point.
## References
* [1] D.G. Aronson, H.F. Weinberger, Multidimensional nonlinear diffusion arising in population genetics, Adv. in Math. 30 (1978), 33–76.
* [2] H. Berestycki, The influence of advection on the propagation of fronts in reaction-diffusion equations, in: Nonlinear PDE’s in Condensed Matter and Reactive Flows, eds. H. Berestycki, Y. Pomeau, NATO Science Series C, Mathematical and Physical Sciences, Kluwer Acad. Publ., Dordrecht, NL, 569 (2002).
* [3] J. Berestycki, E. Brunet, J. Derrida, A new approach to computing the asymptotics of the position of Fisher-KPP fronts, Europhysics Letters 122 (2018), 10001.
* [4] M.D. Bramson, Maximal displacement of branching Brownian motion, Comm. Pure Appl. Math. 31, 1978, 531–581.
* [5] M. Bramson, Convergence of solutions of the Kolmogorov equation to travelling waves, Mem. Amer. Math. Soc. 44, 1983.
* [6] Y. Du, F. Quiros, M. Zhou, Logarithmic corrections in Fisher-KPP type Porous Medium Equations, J. Math. Pures Appl. 136 (2020), 415–455.
* [7] U. Ebert, W. Van Saarloos, Front propagation into unstable states: universal algebraic convergence towards uniformly translating pulled fronts, Phys. D 146 (2000), 1–99.
* [8] P.C. Fife, B. McLeod, The approach of solutions of nonlinear diffusion equations to travelling front solutions, Arch. Ration. Mech. Anal. 65 (1977), 335–361.
* [9] J. Gärtner, Location of wave fronts for the multidimensional KPP equation and Brownian first exit densities, Math. Nachr. 105 (1982), 317–351.
* [10] C. Graham, Precise asymptotics for Fisher-KPP fronts, Nonlinearity 32 (2019), 1967–1998.
* [11] D. Henry, Geometric theory of semilinear parabolic equations, Lecture Notes in Mathematics, 840, Springer-Verlag, Berlin-New York, 1981.
* [12] C.K.R.T. Jones, Asymptotic behaviour of a reaction-diffusion equation in higher space dimensions, Rocky Mountain J. Math. 13 (1983), 355–364.
* [13] A.N. Kolmogorov, I.G. Petrovskii, N.S. Piskunov, Étude de l’équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique, Bull. Univ. État Moscou, Sér. Inter. A 1 (1937), 1–26.
* [14] J. Nolen, J.-M. Roquejoffre, L. Ryzhik, Convergence to a single wave in the Fisher-KPP equation, Chinese Ann. Math. Ser. B (special issue in honour of H. Brezis) 38 (2017), 629–646.
* [15] J. Nolen, J.-M. Roquejoffre, L. Ryzhik, Refined long time asymptotics for the Fisher-KPP fronts, Comm. Contemp. Math, 21 (2019), 1850072.
* [16] J.-M. Roquejoffre, Convergence to travelling waves for solutions of a class of semilinear parabolic equations, J. Diff. Eq, 108 (1994), 262–295.
* [17] J.-M. Roquejoffre, L. Rossi, V. Roussier-Michon, Sharp large time behaviour in $N$\- dimensional Fisher-KPP equations, DCDS A, 39 (2019), 7265–7290.
* [18] J.-M. Roquejoffre, V. Roussier-Michon, Nontrivial large-time behaviour in bistable reaction-diffusion equations, Annali Mat. Pura Appl. 188 (2009), 207–233.
* [19] L. Rossi, Symmetrization and anti-symmetrization in parabolic equations, Proc. Amer. Math. Soc. 145 (2017), 2527–2537.
* [20] V. Roussier, Stability of radially symmetric travelling waves in reaction-diffusion equations, Ann. Inst. Henri Poincaré, Analyse non linéaire 21 (2004), 341–379.
* [21] H. Yagisita, Nearly spherically symmetric expanding fronts in a bistable reaction-diffusion equation, J. Dynam. Differential Equations 13 (2001), 323–353.
# Rabi oscillations, entanglement and teleportation in the anti-Jaynes-Cummings model
Christopher Mayero, Joseph Akeyo Omolo and Onyango Stephen Okeyo (Maseno University, Kenya)
###### Abstract
This paper provides a scheme for generating maximally entangled qubit states
in the anti-Jaynes-Cummings interaction mechanism, the so-called entangled
anti-polariton qubit states. We demonstrate that, for an initial vacuum field,
Rabi oscillations of a cavity mode in the anti-Jaynes-Cummings interaction
process occur in the reverse sense relative to the Jaynes-Cummings interaction
process, and that the time evolution of entanglement in the anti-Jaynes-Cummings
interaction process takes the same form as in the Jaynes-Cummings interaction
process. With the generated anti-polariton qubit state as one of the initial
qubits, we present quantum teleportation of an atomic quantum state by
applying the entanglement swapping protocol, achieving maximal teleportation
fidelity $F_{\rho}=1$.
Keywords: Jaynes-Cummings, anti-Jaynes-Cummings, Rabi oscillations,
entanglement, entanglement swapping, teleportation, maximal teleportation
fidelity
## I Introduction
The basic model of quantized light-matter interaction describing a two-level
atom coupled to a single mode of quantized electromagnetic radiation is the
quantum Rabi model [1, 2, 3, 4]. Recently, it has been shown that the operator
ordering principle distinguishes the Jaynes-Cummings (JC) and anti-Jaynes-
Cummings (AJC) Hamiltonians [2, 3, 4] as normal and anti-normal order
components of the quantum Rabi model. In this approach the JC interaction
represents the coupling of a two-level atom to the rotating positive frequency
component of the field mode while the AJC interaction represents the coupling
of the two-level atom to the anti-rotating (anti-clockwise) negative frequency
component of the field mode, because the electromagnetic field mode is
composed of positive and negative frequency components [5].
The long-standing challenge of determining a conserved excitation number and
corresponding $U(1)$ symmetry operators for the AJC component was finally
solved in [2]. The discovery and proof of a conserved excitation number
operator of the AJC Hamiltonian [2] now means that dynamics generated by the
AJC Hamiltonian is exactly solvable, as demonstrated in the polariton and
anti-polariton qubit (photospin qubit) models in [3, 4].
Noting that the JC model has been extensively studied in both theory and
experiment in quantum optics, we now focus attention on the AJC model which
has not received much attention over the years due to the erroneously assumed
lack of a conserved excitation number operator. The reformulation developed in
[2, 3, 4], drastically simplifies exact solutions of the AJC model, which we
shall here apply.
In this paper, we are interested in analysis of quantum state configuration of
the qubit states, entanglement of qubits in the AJC model and the application
of the entangled qubit state vectors in teleportation of an entangled atomic
quantum state. The content of this paper is therefore summarized as follows.
Section II presents an overview of the theoretical model. In Section III, Rabi
oscillations in the AJC model are studied. In Section IV, entanglement of AJC
qubit state vectors is analysed. In Section V, teleportation as an application
of entanglement is presented, and finally Section VI presents the conclusion.
## II The model
The quantum Rabi model of a quantized electromagnetic field mode interacting
with a two-level atom is generated by the Hamiltonian [2]
$\hat{H}_{R}=\frac{1}{2}\hbar\omega(\hat{a}^{\dagger}\hat{a}+\hat{a}\hat{a}^{\dagger})+\hbar\omega_{0}\hat{s}_{z}+\hbar\lambda(\hat{a}+\hat{a}^{\dagger})(\hat{s}_{+}+\hat{s}_{-})$
(1)
noting that the free field mode Hamiltonian is expressed in normal and anti-
normal order form
$\frac{1}{2}\hbar\omega(\hat{a}^{\dagger}\hat{a}+\hat{a}\hat{a}^{\dagger})$.
Here, $\omega$, $\hat{a}$, $\hat{a}^{\dagger}$ are the quantized field mode
angular frequency and the annihilation and creation operators, while
$\omega_{0}$, $\hat{s}_{z}$, $\hat{s}_{+}$, $\hat{s}_{-}$ are the atomic state
transition angular frequency and operators. The Rabi Hamiltonian in Eq. (1) is
expressed in a symmetrized two-component form [2, 3, 4]
$\hat{H}_{R}=\frac{1}{2}(\hat{H}+\hat{\overline{H}}\,)$ (2)
where $\hat{H}$ is the standard JC Hamiltonian interpreted as a polariton
qubit Hamiltonian expressed in the form [2]
$\hat{H}=\hbar\omega\hat{N}+2\hbar\lambda\hat{A}-\frac{1}{2}\hbar\omega\quad;\quad\hat{N}=\hat{a}^{\dagger}\hat{a}+\hat{s}_{+}\hat{s}_{-}$
$\hat{A}=\alpha\hat{s}_{z}+\hat{a}\hat{s}_{+}+\hat{a}^{\dagger}\hat{s}_{-}\quad;\quad\alpha=\frac{\omega_{0}-\omega}{2\lambda}$
(3)
while $\hat{\overline{H}}$ is the AJC Hamiltonian interpreted as an anti-
polariton qubit Hamiltonian in the form [2]
$\hat{\overline{H}}=\hbar\omega\hat{\overline{N}}+2\hbar\lambda\hat{\overline{A}}-\frac{1}{2}\hbar\omega\quad;\quad\hat{\overline{N}}=\hat{a}\hat{a}^{\dagger}+\hat{s}_{-}\hat{s}_{+}$
$\hat{\overline{A}}=\overline{\alpha}\hat{s}_{z}+\hat{a}\hat{s}_{-}+\hat{a}^{\dagger}\hat{s}_{+}\quad;\quad\overline{\alpha}=\frac{\omega_{0}+\omega}{2\lambda}\;.$
(4)
In Eqs. (3) and (4), $\hat{N}$, $\hat{\overline{N}}$ and $\hat{A}$,
$\hat{\overline{A}}$ are the respective polariton and anti-polariton qubit
conserved excitation numbers and state transition operators.
Following the physical property established in [4], that for the field mode in
an initial vacuum state only an atom in an initial excited state $|e\rangle$
entering the cavity couples to the rotating positive frequency field component
in the JC interaction mechanism, while only an atom in an initial ground state
$|g\rangle$ entering the cavity couples to the anti-rotating negative
frequency field component in an AJC interaction mechanism, we generally take
the atom to be in an initial excited state $|e\rangle$ in the JC model and in
an initial ground state $|g\rangle$ in the AJC model.
Considering the AJC dynamics, applying the state transition operator
$\hat{\overline{A}}$ from Eq. (4) to the initial atom-field $n$-photon ground
state vector $|g,n\rangle$, the basic qubit state vectors $|\psi_{gn}\rangle$
and $|\overline{\phi}_{gn}\rangle$ are determined in the form ($n=0,1,2,\dots$) [4]
$|\psi_{gn}\rangle=|g,n\rangle\quad;\quad|\overline{\phi}_{gn}\rangle=-\overline{c}_{gn}|g,n\rangle+\overline{s}_{gn}|e,n+1\rangle$
(5)
with dimensionless interaction parameters $\overline{c}_{gn}$,
$\overline{s}_{gn}$ and Rabi frequency $\overline{R}_{gn}$ defined as
$\overline{c}_{gn}=\frac{\overline{\delta}}{2\overline{R}_{gn}}\quad;\quad\overline{s}_{gn}=\frac{2\lambda\sqrt{n+1}}{\overline{R}_{gn}}\quad;\quad\overline{R}_{gn}=2\lambda\overline{A}_{gn}$
$\overline{A}_{gn}=\sqrt{(n+1)+\frac{\overline{\delta}^{2}}{16\lambda^{2}}}\quad;\quad\overline{\delta}=\omega_{0}+\omega$
(6)
where we have introduced sum frequency $\overline{\delta}=\omega_{0}+\omega$
to redefine $\overline{\alpha}$ in Eq. (4).
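As a quick numerical sanity check (not part of the original analysis; the helper name and the sample parameter values below are ours), the interaction parameters of Eq. (6) should satisfy $\overline{c}_{gn}^{2}+\overline{s}_{gn}^{2}=1$, so that $|\overline{\phi}_{gn}\rangle$ in Eq. (5) is normalised:

```python
import math

def ajc_params(n, lam, delta_bar):
    # Dimensionless AJC interaction parameters of Eq. (6):
    # lam is the coupling strength lambda, delta_bar the sum
    # frequency omega_0 + omega (helper name is ours).
    R = 2.0 * lam * math.sqrt((n + 1) + delta_bar**2 / (16.0 * lam**2))
    c = delta_bar / (2.0 * R)
    s = 2.0 * lam * math.sqrt(n + 1) / R
    return c, s, R

# |phi_gn> = -c|g,n> + s|e,n+1> is normalised: c^2 + s^2 = 1 for all n.
for n in range(4):
    c, s, R = ajc_params(n, lam=0.5, delta_bar=2.0)
    assert abs(c**2 + s**2 - 1.0) < 1e-12
```

The check follows directly from $\overline{R}_{gn}^{2}=4\lambda^{2}(n+1)+\overline{\delta}^{2}/4$.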
The qubit state vectors in Eq. (5) satisfy the qubit state transition
algebraic operations
$\hat{\overline{A}}|\psi_{gn}\rangle=\overline{A}_{gn}|\overline{\phi}_{gn}\rangle\quad;\quad\hat{\overline{A}}|\overline{\phi}_{gn}\rangle=\overline{A}_{gn}|\psi_{gn}\rangle$
(7)
In the AJC qubit subspace spanned by normalized but non-orthogonal basic qubit
state vectors $|\psi_{gn}\rangle$, $|\overline{\phi}_{gn}\rangle$ the basic
qubit state transition operator $\hat{\overline{\varepsilon}}_{g}$ and
identity operator $\hat{\overline{I}}_{g}$ are introduced according to the
definitions [4]
$\hat{\overline{\varepsilon}}_{g}=\frac{\hat{\overline{A}}}{\overline{A}_{gn}}\quad;\quad\hat{\overline{I}}_{g}=\frac{\hat{\overline{A}}^{2}}{\overline{A}_{gn}^{2}}\quad\Rightarrow\quad\hat{\overline{I}}_{g}=\hat{\overline{\varepsilon}}_{g}^{2}$
(8)
which on substituting into Eq. (7) generates the basic qubit state transition
algebraic operations
$\hat{\overline{\varepsilon}}_{g}|\psi_{gn}\rangle=|\overline{\phi}_{gn}\rangle\quad;\quad\hat{\overline{\varepsilon}}_{g}|\overline{\phi}_{gn}\rangle=|\psi_{gn}\rangle\quad;\quad\hat{\overline{I}}_{g}|\psi_{gn}\rangle=|\psi_{gn}\rangle\quad;\quad\hat{\overline{I}}_{g}|\overline{\phi}_{gn}\rangle=|\overline{\phi}_{gn}\rangle$
(9)
The algebraic properties
$\hat{\overline{\varepsilon}}_{g}^{2k}=\hat{\overline{I}}_{g}$ and
$\hat{\overline{\varepsilon}}_{g}^{2k+1}=\hat{\overline{\varepsilon}}_{g}$
easily give the final property [4]
$e^{-i\theta\hat{\overline{\varepsilon}}_{g}}=\cos(\theta)\hat{\overline{I}}_{g}-i\sin(\theta)\hat{\overline{\varepsilon}}_{g}$
(10)
which is useful in evaluating time-evolution operators.
The AJC qubit Hamiltonian defined within the qubit subspace spanned by the
basic qubit state vectors $|\psi_{gn}\rangle$ , $|\overline{\phi}_{gn}\rangle$
is then expressed in terms of the basic qubit states transition operators
$\hat{\overline{\varepsilon}}_{g}$, $\hat{\overline{I}}_{g}$ in the form [4]
$\hat{\overline{H}}_{g}=\hbar\omega(n+\frac{3}{2})\hat{\overline{I}}_{g}+\hbar\overline{R}_{gn}\hat{\overline{\varepsilon}}_{g}\,.$
(11)
We use this form of the AJC Hamiltonian to determine the general time-evolving
state vector describing Rabi oscillations in the AJC dynamics in Sec. III
below.
## III Rabi oscillations
The general dynamics generated by the AJC Hamiltonian in Eq. (11) is described
by a time evolving AJC qubit state vector
$\displaystyle|\overline{\Psi}_{gn}(t)\rangle$ obtained from the time-
dependent Schrödinger equation in the form [4]
$|\overline{\Psi}_{gn}(t)\rangle=\hat{\overline{U}}_{g}(t)|\psi_{gn}\rangle\quad;\quad\hat{\overline{U}}_{g}(t)=e^{-\frac{i}{\hbar}\hat{\overline{H}}_{g}t}$
(12)
where $\hat{\overline{U}}_{g}(t)$ is the time evolution operator. Substituting
$\hat{\overline{H}}_{g}$ from Eq. (11) into Eq. (12) and applying appropriate
algebraic properties [4], we use the relation in Eq. (10) to express the time
evolution operator in its final form
$\hat{\overline{U}}_{g}(t)=e^{-i\omega{t}(n+\frac{3}{2})}\left\\{\cos(\overline{R}_{gn}t)\hat{\overline{I}}_{g}-i\sin(\overline{R}_{gn}t)\hat{\overline{\varepsilon}}_{g}\right\\}$
(13)
which we substitute into Eq. (12) and use the qubit state transition
operations in Eq. (9) to obtain the time-evolving AJC qubit state vector in
the form
$|\overline{\Psi}_{gn}(t)\rangle=e^{-i\omega{t}(n+\frac{3}{2})}\Big{\\{}\cos(\overline{R}_{gn}t)|\psi_{gn}\rangle-i\sin(\overline{R}_{gn}t)|\overline{\phi}_{gn}\rangle\Big{\\}}$
(14)
This time evolving state vector describes Rabi oscillations between the basic
qubit states $|\psi_{gn}\rangle$ and $|\overline{\phi}_{gn}\rangle$ at Rabi
frequency $\overline{R}_{gn}$.
In order to determine the length of the Bloch vector associated with the state
vector in Eq. (14), we introduce the density operator
$\hat{\overline{\rho}}_{gn}(t)=|\overline{\Psi}_{gn}(t)\rangle\langle{\overline{\Psi}_{gn}}(t)|$
(15a)
which we expand to obtain
$\hat{\overline{\rho}}_{gn}(t)=\cos^{2}(\overline{R}_{gn}t)|\psi_{gn}\rangle\langle\psi_{gn}|+\frac{i}{2}\sin(2\overline{R}_{gn}t)|\psi_{gn}\rangle\langle\overline{\phi}_{gn}|-\frac{i}{2}\sin(2\overline{R}_{gn}t)|\overline{\phi}_{gn}\rangle\langle\psi_{gn}|+\sin^{2}(\overline{R}_{gn}t)|\overline{\phi}_{gn}\rangle\langle\overline{\phi}_{gn}|\;.$
(15b)
Defining the coefficients of the projectors in Eq. (15b) as
$\overline{\rho}_{gn}^{11}(t)=\cos^{2}(\overline{R}_{gn}t)\quad;\quad\overline{\rho}_{gn}^{12}(t)=\frac{i}{2}\sin(2\overline{R}_{gn}t)\quad;\quad\overline{\rho}_{gn}^{21}(t)=-\frac{i}{2}\sin(2\overline{R}_{gn}t)\quad;\quad\overline{\rho}_{gn}^{22}(t)=\sin^{2}(\overline{R}_{gn}t)$
(15c)
and interpreting the coefficients in Eq. (15c) as elements of a $2\times 2$
density matrix $\overline{\rho}_{gn}(t)$, which we express in terms of the
standard Pauli operator matrices $I$, $\sigma_{x}$, $\sigma_{y}$ and
$\sigma_{z}$ as
$\overline{\rho}_{gn}(t)=\begin{pmatrix}\overline{\rho}_{gn}^{11}(t)&\overline{\rho}_{gn}^{12}(t)\\ \overline{\rho}_{gn}^{21}(t)&\overline{\rho}_{gn}^{22}(t)\end{pmatrix}=\frac{1}{2}\left(I+\vec{\overline{\rho}}_{gn}(t)\cdot\vec{\sigma}\right)$
(15d)
where $\vec{\sigma}=\left(\sigma_{x},\sigma_{y},\sigma_{z}\right)$ is
the Pauli matrix vector and we have introduced the time-evolving Bloch vector
$\vec{\overline{\rho}}_{gn}(t)=\left(\overline{\rho}_{gn}^{x}(t),\overline{\rho}_{gn}^{y}(t),\overline{\rho}_{gn}^{z}(t)\right)$
(15e)
with components defined as
$\overline{\rho}_{gn}^{x}(t)=\overline{\rho}_{gn}^{12}(t)+\overline{\rho}_{gn}^{21}(t)=0\quad;\quad\overline{\rho}_{gn}^{y}(t)=i\left(\overline{\rho}_{gn}^{12}(t)-\overline{\rho}_{gn}^{21}(t)\right)=-\sin(2\overline{R}_{gn}t)\quad;\quad\overline{\rho}_{gn}^{z}(t)=\overline{\rho}_{gn}^{11}(t)-\overline{\rho}_{gn}^{22}(t)=\cos(2\overline{R}_{gn}t)$
(15f)
The Bloch vector in Eq. (15e) takes the explicit form
$\vec{\overline{\rho}}_{gn}(t)=\Big{(}0,-\sin(2\overline{R}_{gn}t),\cos(2\overline{R}_{gn}t)\Big{)}$
(15g)
which has unit length, obtained easily as
$|\vec{\overline{\rho}}_{gn}(t)|=1\;.$
(15h)
The property that the Bloch vector $\vec{\overline{\rho}}_{gn}(t)$ is of unit
length (the Bloch sphere has unit radius), clearly shows that the general time
evolving state vector $|\overline{\Psi}_{gn}(t)\rangle$ in Eq. (14) is a pure
state.
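The unit length of the Bloch vector in Eq. (15g) is easy to verify numerically. The following minimal Python sketch (illustrative only; the function names and the sample Rabi frequency are our own choices) confirms $|\vec{\overline{\rho}}_{gn}(t)|=1$ at several times:

```python
import math

def bloch_vector(R_gn, t):
    # Bloch vector of Eq. (15g) for the time-evolving AJC qubit state.
    return (0.0, -math.sin(2.0 * R_gn * t), math.cos(2.0 * R_gn * t))

def length(v):
    return math.sqrt(sum(x * x for x in v))

# The vector stays on the unit Bloch sphere for all t: the state is pure.
for t in (0.0, 0.3, 1.7, 4.2):
    assert abs(length(bloch_vector(1.25, t)) - 1.0) < 1e-12
```

Purity here is immediate from $\sin^{2}+\cos^{2}=1$, mirroring the closing remark of this section.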
We now proceed to demonstrate the time evolution of the Bloch vector
$\vec{\overline{\rho}}_{gn}(t)$ which in effect describes the geometric
configuration of states. We have adopted class 4 Bloch-sphere entanglement of
a quantum rank-2 bipartite state [6, 7] to bring a clear visualization of this
interaction. In this respect, we consider the specific example (which also
applies to the general n-photon case) of an atom initially in ground state
$|g\rangle$ entering a cavity with the field mode starting off in an initial
vacuum state $|0\rangle$, such that the initial atom-field state is
$|g,0\rangle$. It is important to note that in the AJC interaction process the
initial atom-field ground state $|g,0\rangle$ is an absolute ground state with
both atom and field mode in the ground state $|g\rangle$, $|0\rangle$, in
contrast to the commonly applied initial atom-field ground state $|e,0\rangle$
in the JC model where only the field mode $|0\rangle$ is in the ground state
and the atom in the excited state $|e\rangle$.
In the specific example starting with an atom in the ground state $|g\rangle$
and the field mode in the vacuum state $|0\rangle$ the basic qubit state
vectors $|\psi_{g0}\rangle$ and $|\overline{\phi}_{g0}\rangle$, together with
the corresponding entanglement parameters, are obtained by setting $n=0$ in
Eqs. (5) and (6) in the form
$|\psi_{g0}\rangle=|g,0\rangle\quad;\quad|\overline{\phi}_{g0}\rangle=-\overline{c}_{g0}|g,0\rangle+\overline{s}_{g0}|e,1\rangle$
$\overline{c}_{g0}=\frac{\overline{\delta}}{2\overline{R}_{g0}}\quad;\quad\overline{s}_{g0}=\frac{2\lambda}{\overline{R}_{g0}}\quad;\quad\overline{R}_{g0}=\frac{1}{2}\sqrt{16\lambda^{2}+\overline{\delta}^{2}}$
$|g,0\rangle=|g\rangle\otimes|0\rangle\quad;\quad|e,1\rangle=|e\rangle\otimes|1\rangle$
(16)
The corresponding Hamiltonian in Eq. (11) becomes ($n=0$)
$\hat{\overline{H}}_{g}=\frac{3}{2}\hbar\omega\hat{\overline{I}}_{g}+\hbar\overline{R}_{g0}\hat{\overline{\varepsilon}}_{g}$
(17)
The time-evolving state vector in Eq. (14) takes the form ($n=0$)
$|\overline{\Psi}_{g0}(t)\rangle=e^{-i\frac{3}{2}\omega{t}}\left\\{\cos(\overline{R}_{g0}t)|\psi_{g0}\rangle-i\sin(\overline{R}_{g0}t)|\overline{\phi}_{g0}\rangle\right\\}$
(18)
which describes Rabi oscillations at frequency $\overline{R}_{g0}$ between the
initial separable qubit state vector $|\psi_{g0}\rangle$ and the entangled
qubit state vector $|\overline{\phi}_{g0}\rangle$.
The Rabi oscillation process is best described by the corresponding Bloch
vector which follows from Eq. (15g) in the form ($n=0$)
$\vec{\overline{\rho}}_{g0}(t)=\left(0,-\sin(2\overline{R}_{g0}t),\cos(2\overline{R}_{g0}t)\right)$
(19)
The time evolution of this Bloch vector reveals that the Rabi oscillations
between the basic qubit state vectors $|\psi_{g0}\rangle$,
$|\overline{\phi}_{g0}\rangle$ describe circles on which the states are
distributed on the Bloch sphere as we demonstrate in Fig. 1 below.
In Fig. 1 we have plotted the AJC Rabi oscillation process with respective
Rabi frequencies $\overline{R}_{g0}$ determined according to Eq. (16) for
various values of sum frequency $\overline{\delta}=\omega_{0}+\omega$. We have
provided a comparison with plots of the corresponding JC process in Fig. 2.
To facilitate the desired comparison of the AJC Rabi oscillation process with
the standard JC Rabi oscillation process plotted in Fig. 2, we substitute the
redefinition $\overline{\delta}=\omega_{0}+\omega=\delta+2\omega$ to express
the Rabi frequency $\overline{R}_{g0}$ in Eq. (16) in the form
$\overline{R}_{g0}=\frac{1}{2}\sqrt{16\lambda^{2}+(\delta+2\omega)^{2}}\,.$
(20)
In the present work, we have chosen the field mode frequency $\omega=2\lambda$
($\lambda=0.5\omega$) such that for both AJC and JC processes we vary only the
detuning frequency $\delta=\omega_{0}-\omega$. The resonance case $\delta=0$
in the JC interaction now means $\overline{\delta}=2\omega=4\lambda$ in the
AJC interaction.
For various values of $\delta=\lambda\,,\,3\lambda\,,\,0$, we use the general
time evolving state vector in Eq. (18), with $\overline{R}_{g0}$ as defined in
Eq. (20) to determine the coupled qubit state vectors $|\psi_{g0}\rangle$ ,
$|\overline{\phi}_{g0}\rangle$ in Eq. (16) by setting
$\overline{R}_{g0}t=\frac{\pi}{2}$, describing half cycle of Rabi oscillation
as presented below. In each case we have an accumulated global phase factor
which does not affect measurement results [8, 9, 10], but we have maintained
them here in Eqs. (21a) - (21c) to explain the continuous time evolution over
one cycle.
$\delta=\lambda\quad;\quad\overline{\delta}=5\lambda\,:$ (21)
$|g,0\rangle\rightarrow{e}^{-i\pi\frac{79}{82}}\left\\{-\frac{5}{\sqrt{41}}|g,0\rangle+\frac{4}{\sqrt{41}}|e,1\rangle\right\\}\rightarrow{e}^{-i\pi\frac{79}{41}}|g,0\rangle$
(21a)
$\delta=3\lambda\quad;\quad\overline{\delta}=7\lambda\,:$
$|g,0\rangle\rightarrow{e}^{-i\pi\frac{113}{130}}\left\\{-\frac{7}{\sqrt{65}}|g,0\rangle+\frac{4}{\sqrt{65}}|e,1\rangle\right\\}\rightarrow{e}^{-i\pi\frac{113}{65}}|g,0\rangle$
(21b)
$\delta=0\quad;\quad\overline{\delta}=4\lambda\,:$
$|g,0\rangle\rightarrow{e}^{-i\pi}\left\\{{-\frac{1}{\sqrt{2}}}|g,0\rangle+\frac{1}{\sqrt{2}}|e,1\rangle\right\\}\rightarrow{e}^{-i2\pi}|g,0\rangle$
(21c)
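The coefficients appearing in Eqs. (21a) - (21c) can be reproduced directly from Eqs. (16) and (20). The sketch below (our own illustrative code; the helper name is hypothetical, with $\lambda=1$ and $\omega=2\lambda$ as chosen in the text) recovers $5/\sqrt{41}$, $7/\sqrt{65}$ and $1/\sqrt{2}$ for $\delta=\lambda,3\lambda,0$:

```python
import math

LAM = 1.0            # coupling lambda (arbitrary units)
OMEGA = 2.0 * LAM    # field mode frequency omega = 2*lambda, as in the text

def half_cycle_coeffs(delta):
    # After a half Rabi cycle, |g,0> -> -c|g,0> + s|e,1> up to a global
    # phase; c and s follow from Eqs. (16) and (20) (helper name is ours).
    delta_bar = delta + 2.0 * OMEGA
    R = 0.5 * math.sqrt(16.0 * LAM**2 + delta_bar**2)
    return delta_bar / (2.0 * R), 2.0 * LAM / R

for delta, c_exp, s_exp in (
        (LAM,     5 / math.sqrt(41), 4 / math.sqrt(41)),   # Eq. (21a)
        (3 * LAM, 7 / math.sqrt(65), 4 / math.sqrt(65)),   # Eq. (21b)
        (0.0,     1 / math.sqrt(2),  1 / math.sqrt(2))):   # Eq. (21c)
    c, s = half_cycle_coeffs(delta)
    assert abs(c - c_exp) < 1e-12 and abs(s - s_exp) < 1e-12
```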
The AJC Rabi oscillations for cases $\delta=\lambda\,,3\lambda\,,0$ are
plotted as red, black and blue circles in Fig. 1, while the corresponding
plots in the JC process are provided in Fig. 2 for comparison. Here, Fig. 1
is a Bloch-sphere entanglement [6] corresponding to a 2-dimensional
subspace of $\mathbb{C}^{2}\otimes\mathbb{C}^{2}$,
Span$\left\\{|g,0\rangle\,,\,-\overline{c}_{g0}|g,0\rangle+\overline{s}_{g0}|e,1\rangle\right\\}$
with $\overline{c}_{g0}=\frac{\overline{\delta}}{2\overline{R}_{g0}}$ and
$\overline{s}_{g0}=\frac{2\lambda}{\overline{R}_{g0}}$, while Fig. 2 is a Bloch-sphere
entanglement corresponding to a 2-dimensional subspace of
$\mathbb{C}^{2}\otimes\mathbb{C}^{2}$,
Span$\left\\{|e,0\rangle\,,\,c_{e0}|e,0\rangle+s_{e0}|g,1\rangle\right\\}$ with
$c_{e0}=\frac{\delta}{2R_{e0}}$ and $s_{e0}=\frac{2\lambda}{R_{e0}}$, where we
recall that in the JC interaction the initial atom-field ground state with
the field mode in the vacuum state is $|e,0\rangle$.
Figure 1: Rabi oscillations in AJC interaction mechanism. The Rabi
oscillations for values of sum frequencies are shown by red
($\overline{\delta}=5\lambda\,;\,\delta=\lambda$), black
($\overline{\delta}=7\lambda\,;\;\delta=3\lambda$) and blue
($\overline{\delta}=4\lambda\,;\,\delta=\omega_{0}-\omega=0$). Figure 2: Rabi
oscillations in JC interaction mechanism. Here, blue circle is at resonance
with detuning $\delta=\omega_{0}-\omega=0$, red circle is for detuning
$\delta=\lambda$ and black circle for detuning $\delta=3\lambda$.
In Fig. 1 we observe:
1. (i)
that due to the larger sum frequency $\overline{\delta}=\delta+2\omega$ in the
AJC interaction process as compared to the detuning frequency $\delta$ in the
JC interaction process, the Rabi oscillation circles in the much faster AJC
process are much smaller compared to the corresponding Rabi oscillation
circles in the slower JC interaction process. This effect is in agreement with
the assumption usually adopted to drop the AJC interaction components in the
rotating wave approximation (RWA), noting that the fast oscillating AJC
process averages out over time. We have demonstrated the physical property
that the size of the Rabi oscillation curves decreases with increasing Rabi
oscillation frequency by plotting the AJC oscillation curves for a
considerably larger Rabi frequency $\overline{R}_{g0}$ where we have set the
field mode frequency $\omega=10\lambda$ ($\lambda=0.1\omega$) in Fig. 3. It is
clear in Fig. 3 that for this higher value of the Rabi frequency
$\overline{R}_{g0}$ the Rabi oscillation curves almost converge to a point-
like form;
2. (ii)
that Rabi oscillations in the AJC interaction process as demonstrated in Fig.
1 occur in the left hemisphere of the Bloch sphere while in the JC interaction
process the oscillations occur in the right hemisphere as demonstrated in Fig.
2. This demonstrates an important physical property that the AJC interaction
process occurs in the reverse sense relative to the JC interaction process;
3. (iii)
an interesting feature that appears at resonance specified by $\delta=0$.
While in the JC model plotted in Fig 2 the Rabi oscillation at resonance
$\delta=0$ (blue circle) lies precisely on the yz-plane normal to the
equatorial plane, the corresponding AJC Rabi oscillation (blue circle in Fig.
1) is at an axis away from the yz-plane about the south pole of the Bloch
sphere. This feature is due to the fact that the frequency detuning
$\overline{\delta}=2\omega$ takes a non-zero value under resonance $\delta=0$
such that the AJC oscillations maintain their original forms even under
resonance.
Figure 3: Rabi oscillations in AJC interaction mechanism. The Rabi
oscillations for values of sum frequencies are shown by red
($\overline{\delta}=21\lambda\,;\,\delta=\lambda$) and black
($\overline{\delta}=23\lambda\,;\,\delta=3\lambda$).
We note that the qubit state transitions described by the Bloch vector in the
AJC process (Fig. 1) are blue-sideband transitions characterized by the sum
frequency $\overline{\delta}=\omega_{0}+\omega=\delta+2\omega$ according to
the definition of the Rabi frequency $\overline{R}_{g0}$ in eq. (20).
The geometric configuration of the state space demonstrated on the Bloch-
sphere in Fig. 2 determined using the approach in [4] agrees precisely with
that determined using the semi-classical approach in [11] corresponding to a
2-dimensional subspace of $\mathbb{C}^{2}$ Span
$\left\\{|e\rangle\,,|g\rangle\right\\}$. In the approach of [11], at resonance,
where the detuning $\delta=0$, the atomic population is inverted from $|e\rangle$
to $|g\rangle$ and the Bloch-vector
$\vec{r}=(\sin(\theta)\cos(\phi)\,,\sin(\theta)\sin(\phi)\,,\cos(\theta))$
describes a path along the yz-plane on the Bloch-sphere. For other values of
detuning, the atom evolves from $|e\rangle$ to a linear superposition of
$|e\rangle$ and $|g\rangle$ and back to $|e\rangle$ and the Bloch-vector
$\vec{r}$ describes a circle about the north pole of the Bloch-sphere.
## IV Entanglement properties
In quantum information, it is of interest to measure or quantify the
entanglement of states. In this paper we apply the von Neumann entropy as a
measure of entanglement. The von Neumann entropy [12, 13, 14, 15, 16] of a
quantum state $\hat{\rho}$ is defined as
$S(\hat{\rho})=-tr\left(\hat{\rho}\log\hat{\rho}\right)=-\sum_{i}\lambda_{i}\log\lambda_{i}$
(22)
where the logarithm is taken to base $d$, with $d$ the dimension of the Hilbert
space containing $\hat{\rho}$, and the $\lambda_{i}$ are the eigenvalues that
diagonalize $\hat{\rho}$. It follows that
$0\leqslant{S(\hat{\rho})}\leqslant{1}$, where $S(\hat{\rho})=0$ if and only
if $\hat{\rho}$ is a pure state.
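As a concrete illustration (not from the paper), the entropy in Eq. (22) can be computed numerically from the eigenvalues of a density matrix; the function name and the small-eigenvalue cut-off below are our own choices:

```python
import numpy as np

def von_neumann_entropy(rho, base=2):
    """S(rho) = -sum_i lam_i log(lam_i) over nonzero eigenvalues, Eq. (22)."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]          # the convention 0*log(0) = 0
    return float(-np.sum(lam * np.log(lam) / np.log(base)))

# Pure qubit state |g><g|: entropy 0; maximally mixed qubit I/2: entropy 1 (base 2)
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
mixed = np.eye(2) / 2
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))
```

The two printed values realise the bounds $0\leqslant S(\hat{\rho})\leqslant 1$ stated above.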
Further, the von Neumann entropy of the reduced density matrices of a
bipartite pure state $\hat{\rho}_{AB}=|\psi_{AB}\rangle\langle\psi_{AB}|$ is a
good and convenient entanglement measure $E(\hat{\rho}_{AB})$. The
entanglement measure, defined as the entropy of either quantum subsystem,
is obtained as
$E(\hat{\rho}_{AB})=-tr(\hat{\rho}_{A}\log_{2}\hat{\rho}_{A})=-tr(\hat{\rho}_{B}\log_{2}\hat{\rho}_{B})$
(23)
where for all states we have $0\leq{E(\hat{\rho}_{AB})}\leq{1}$. Here the
limit $0$ is achieved if the pure state is a product
$|\psi\rangle=|\psi_{A}\rangle\otimes|\psi_{B}\rangle$ and $1$ is achieved for
maximally entangled states, noting that the reduced density matrices are
maximally mixed states.
In this section we analyse the entanglement properties of the qubit state
vectors and the dynamical evolution of entanglement generated in the AJC
interaction.
### IV.1 Entanglement analysis of basic qubit state vectors
$|\psi_{g0}\rangle$ and $|\overline{\phi}_{g0}\rangle$
Let us start by considering the entanglement properties of the initial state
$|\psi_{g0}\rangle$ which according to the definition in Eq. (16) is a
separable pure state. The density operator of the qubit state vector
$|\psi_{g0}\rangle=|g,0\rangle$ is obtained as
$\hat{\rho}_{g0}=|g,0\rangle\langle{g,0}|$ (24a) Using the definition
$|g,0\rangle=|g\rangle\otimes|0\rangle$, we take the partial trace of
$\hat{\rho}_{g0}$ in Eq. (24a) with respect to the field mode and atom states
respectively, to obtain the respective atom and field reduced density
operators $\hat{\rho}_{A}$, $\hat{\rho}_{F}$ in the form (subscripts
$A\equiv{atom}$ and $F\equiv{field}$)
$\hat{\rho}_{A}=tr_{F}(\hat{\rho}_{g0})=|g\rangle\langle{g}|\quad;\quad\hat{\rho}_{F}=tr_{A}(\hat{\rho}_{g0})=|0\rangle\langle{0}|$
(24b) which take explicit $2\times 2$ matrix forms
$\hat{\rho}_{A}=\begin{pmatrix}0&0\\\ 0&1\\\
\end{pmatrix}\quad;\quad\hat{\rho}_{F}=\begin{pmatrix}1&0\\\ 0&0\\\
\end{pmatrix}$ (24c) The traces of $\hat{\rho}_{A}$, $\hat{\rho}_{A}^{2}$,
$\hat{\rho}_{F}$ and $\hat{\rho}_{F}^{2}$ for the matrices in Eq. (24c) are
$tr(\hat{\rho}_{A})=tr(\hat{\rho}_{A}^{2})=1\quad;\quad{tr}(\hat{\rho}_{F})=tr(\hat{\rho}_{F}^{2})=1$
(24d) The unit value of $tr(\hat{\rho}_{A}^{2})$ and $tr(\hat{\rho}_{F}^{2})$
determined in Eq. (24d) confirms that the initial qubit
state vector $|\psi_{g0}\rangle=|g,0\rangle$ is a pure state.
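The reduced density operators in Eqs. (24b)-(24d) can be verified with a short numerical partial trace; the basis ordering below (atom factor first, field factor second, with $|e\rangle=(1,0)^{T}$, $|g\rangle=(0,1)^{T}$) is an assumption we make for illustration:

```python
import numpy as np

# Assumed basis ordering: atom {|e>, |g>}, field {|0>, |1>}, so |g,0> = |g> (x) |0>
g = np.array([0.0, 1.0]); vac = np.array([1.0, 0.0])
psi = np.kron(g, vac)                      # |g,0>
rho = np.outer(psi, psi)                   # |g,0><g,0|, Eq. (24a)

rho4 = rho.reshape(2, 2, 2, 2)             # indices: atom, field, atom', field'
rho_A = np.trace(rho4, axis1=1, axis2=3)   # trace over field -> |g><g|, Eq. (24b)
rho_F = np.trace(rho4, axis1=0, axis2=2)   # trace over atom  -> |0><0|, Eq. (24b)

print(np.trace(rho_A @ rho_A), np.trace(rho_F @ rho_F))  # both 1 -> pure reduced states
```

The printed traces reproduce Eq. (24d), so both subsystems are in pure states and the joint state is separable.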
Next, we substitute the matrix form of $\hat{\rho}_{A}$ and $\hat{\rho}_{F}$
from Eq. (24c) into Eq. (23) to obtain equal von Neumann entanglement
entropies
$E(\hat{\rho}_{g0})=S(\hat{\rho}_{A})=S(\hat{\rho}_{F})=0$ (24e)
which together with the property in Eq. (24d) quantifies the initial qubit
state vector $|\psi_{g0}\rangle=|g,0\rangle$ as a pure separable state,
agreeing with the definition in Eq. (16).
We proceed to determine the entanglement properties of the (transition) qubit
state vector $|\overline{\phi}_{g0}\rangle$ defined in Eq. (16). For parameter
values $\delta=\lambda\,,\,\overline{\delta}=5\lambda$ we ignore the phase factor in Eq. (21a),
to write the transition qubit state vector in the form
$\delta=\lambda\quad;\quad\overline{\delta}=5\lambda\,:\quad|\overline{\phi}_{g0}\rangle=-\frac{5}{\sqrt{41}}|g,0\rangle+\frac{4}{\sqrt{41}}|e,1\rangle$
(25a) The corresponding density operator of the state in Eq. (25a) is
$\displaystyle\hat{\overline{\rho}}_{g0}$ $\displaystyle=$
$\displaystyle\frac{25}{41}|g,0\rangle\langle{g,0}|-\frac{20}{41}|g,0\rangle\langle{e,1}|-\frac{20}{41}|e,1\rangle\langle{g,0}|$
(25b) $\displaystyle+\frac{16}{41}|e,1\rangle\langle{e,1}|$ which takes the
explicit $4\times 4$ matrix form
$\hat{\overline{\rho}}_{g0}=\begin{pmatrix}0&0&0&0\\\
0&\frac{16}{41}&-\frac{20}{41}&0\\\ 0&-\frac{20}{41}&\frac{25}{41}&0\\\
0&0&0&0\\\ \end{pmatrix}$ (25c) with eigenvalues $\lambda_{1}=1$,
$\lambda_{2}=0$, $\lambda_{3}=0$, $\lambda_{4}=0$. Applying Eq. (22), its von
Neumann entropy is $S(\hat{\overline{\rho}}_{g0})=0$ (25d) which quantifies the state
$|\overline{\phi}_{g0}\rangle$ in Eq. (25a) as a bipartite pure state.
Taking the partial trace of $\hat{\overline{\rho}}_{g0}$ in Eq. (25b) with
respect to the field mode and atom states respectively, we obtain the
respective atom and field reduced density operators
$\hat{\overline{\rho}}_{A}$, $\hat{\overline{\rho}}_{F}$ together with their
squares in the form
$\displaystyle\hat{\overline{\rho}}_{A}$ $\displaystyle=$ $\displaystyle
tr_{F}(\hat{\overline{\rho}}_{g0})=\frac{25}{41}|g\rangle\langle{g}|+\frac{16}{41}|e\rangle\langle{e}|\quad;\quad$
$\displaystyle\hat{\overline{\rho}}_{A}^{2}$ $\displaystyle=$
$\displaystyle\frac{625}{1681}|g\rangle\langle{g}|+\frac{256}{1681}|e\rangle\langle{e}|$
$\displaystyle\hat{\overline{\rho}}_{F}$ $\displaystyle=$ $\displaystyle
tr_{A}(\hat{\overline{\rho}}_{g0})=\frac{25}{41}|0\rangle\langle{0}|+\frac{16}{41}|1\rangle\langle{1}|\quad;\quad$
$\displaystyle\hat{\overline{\rho}}_{F}^{2}$ $\displaystyle=$
$\displaystyle\frac{625}{1681}|0\rangle\langle{0}|+\frac{256}{1681}|1\rangle\langle{1}|$
(25e)
The trace of $\hat{\overline{\rho}}_{A}^{2}$ and
$\hat{\overline{\rho}}_{F}^{2}$ in Eq. (25e) gives
$tr(\hat{\overline{\rho}}_{A}^{2})=tr(\hat{\overline{\rho}}_{F}^{2})=\frac{881}{1681}<1$
(25f)
demonstrating that $\hat{\overline{\rho}}_{A}$ and $\hat{\overline{\rho}}_{F}$
are mixed states. To quantify the mixedness we determine the length of the
Bloch vector along the $z$-axis as follows
$r_{z}=tr(\hat{\overline{\rho}}_{A}\hat{\sigma}_{z})=tr(\hat{\overline{\rho}}_{F}\hat{\sigma}_{z})=\frac{9}{41}$
(25g)
which shows that the reduced density operators $\hat{\overline{\rho}}_{A}$,
$\hat{\overline{\rho}}_{F}$ are non-maximally mixed states.
The eigenvalues $\left(\lambda_{1},\,\lambda_{2}\right)$ of
$\hat{\overline{\rho}}_{A}$ and $\hat{\overline{\rho}}_{F}$ are
$\left(\frac{16}{41},\,\frac{25}{41}\right)$ and
$\left(\frac{25}{41},\,\frac{16}{41}\right)$ respectively, which on
substituting into Eq. (22), gives equal von Neumann entanglement entropies
$\displaystyle E(\hat{\overline{\rho}}_{g0})$ $\displaystyle=$ $\displaystyle
S(\hat{\overline{\rho}}_{A})=S(\hat{\overline{\rho}}_{F})$ $\displaystyle=$
$\displaystyle-\frac{16}{41}\log_{2}\left(\frac{16}{41}\right)-\frac{25}{41}\log_{2}\left(\frac{25}{41}\right)=0.964957$
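The entanglement entropy above follows directly from the eigenvalues $16/41$ and $25/41$ of the reduced states in Eq. (25e); a one-line numerical check:

```python
import numpy as np

p = np.array([16 / 41, 25 / 41])     # eigenvalues of the reduced states, Eq. (25e)
E = float(-np.sum(p * np.log2(p)))
print(E)                              # ~ 0.964957, matching the value above
```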
Taking the properties in Eqs. (25d) and (25f)-(25g) together with the entanglement entropy above
clearly characterizes the qubit state $|\overline{\phi}_{g0}\rangle$ in Eq.
(25a) as an entangled bipartite pure state. However, since
$S(\hat{\overline{\rho}}_{A})=S(\hat{\overline{\rho}}_{F})<1$ the state is not
maximally entangled. Similarly, the transition qubit state vector
$|\overline{\phi}_{g0}\rangle=-\frac{7}{\sqrt{65}}|g,0\rangle+\frac{4}{\sqrt{65}}|e,1\rangle$
obtained for $\delta=3\lambda,\,\overline{\delta}=7\lambda$ in Eq. (21b) is an
entangled bipartite pure state, but not maximally entangled.
Finally, we consider the resonance case $\delta=0$, characterized by
$\overline{\delta}=4\lambda$ in the AJC. Ignoring the phase factor in Eq.
(21c) the transition qubit state vector $|\overline{\phi}_{g0}\rangle$ takes
the form
$\delta=0\quad;\quad\overline{\delta}=4\lambda\,:\quad|\overline{\phi}_{g0}\rangle=-\frac{1}{\sqrt{2}}|g,0\rangle+\frac{1}{\sqrt{2}}|e,1\rangle$
(26a) The corresponding density operator of the state in Eq. (26a) is
$\hat{\overline{\rho}}_{g0}=\frac{1}{2}|g,0\rangle\langle{g,0}|-\frac{1}{2}|g,0\rangle\langle{e,1}|-\frac{1}{2}|e,1\rangle\langle{g,0}|+\frac{1}{2}|e,1\rangle\langle{e,1}|$
(26b) which takes the explicit $4\times{4}$ matrix form
$\hat{\overline{\rho}}_{g0}=\begin{pmatrix}0&0&0&0\\\
0&\frac{1}{2}&-\frac{1}{2}&0\\\ 0&-\frac{1}{2}&\frac{1}{2}&0\\\ 0&0&0&0\\\
\end{pmatrix}$ (26c) with eigenvalues
$\lambda_{1}=1\,,\,\lambda_{2}=0\,,\,\lambda_{3}=0\,,\,\lambda_{4}=0$.
Applying Eq. (22), its von Neumann entropy is $S(\hat{\overline{\rho}}_{g0})=0$
(26d) which quantifies the state in Eq. (26a) as a bipartite pure state.
Taking the partial trace of $\hat{\overline{\rho}}_{g0}$ in Eq. (26b) with
respect to the field mode and atom states respectively, we obtain the
respective atom and field reduced density operators
$\hat{\overline{\rho}}_{A}$, $\hat{\overline{\rho}}_{F}$ together with their
squares in the form
$\displaystyle\hat{\overline{\rho}}_{A}$ $\displaystyle=$ $\displaystyle
tr_{F}(\hat{\overline{\rho}}_{g0})=\frac{1}{2}|g\rangle\langle{g}|+\frac{1}{2}|e\rangle\langle{e}|\quad;\quad$
$\displaystyle\hat{\overline{\rho}}_{A}^{2}$ $\displaystyle=$
$\displaystyle\frac{1}{4}|g\rangle\langle{g}|+\frac{1}{4}|e\rangle\langle{e}|$
$\displaystyle\hat{\overline{\rho}}_{F}$ $\displaystyle=$ $\displaystyle
tr_{A}(\hat{\overline{\rho}}_{g0})=\frac{1}{2}|0\rangle\langle{0}|+\frac{1}{2}|1\rangle\langle{1}|\quad;\quad$
$\displaystyle\hat{\overline{\rho}}_{F}^{2}$ $\displaystyle=$
$\displaystyle\frac{1}{4}|0\rangle\langle{0}|+\frac{1}{4}|1\rangle\langle{1}|$
(26e)
The trace of $\hat{\overline{\rho}}_{A}^{2}$ and
$\hat{\overline{\rho}}_{F}^{2}$ in Eq. (26e) is
$tr(\hat{\overline{\rho}}_{A}^{2})=tr(\hat{\overline{\rho}}_{F}^{2})=\frac{1}{2}<1$
(26f)
which reveals that the reduced density operators $\hat{\overline{\rho}}_{A}$,
$\hat{\overline{\rho}}_{F}$ are mixed states. To quantify the mixedness, we
determine the length of the Bloch vector along the z-axis as follows
$r_{z}=tr(\hat{\overline{\rho}}_{A}\hat{\sigma}_{z})=tr(\hat{\overline{\rho}}_{F}\hat{\sigma}_{z})=0$
(26g)
showing that the reduced density operators $\hat{\overline{\rho}}_{A}$ and
$\hat{\overline{\rho}}_{F}$ are maximally mixed states.
The eigenvalues $(\lambda_{1},\lambda_{2})$ of both $\hat{\overline{\rho}}_{A}$ and
$\hat{\overline{\rho}}_{F}$ are $(\frac{1}{2},\frac{1}{2})$, which
on substituting into Eq. (22) gives equal von Neumann entanglement entropies
$\displaystyle E(\hat{\overline{\rho}}_{g0})$ $\displaystyle=$ $\displaystyle
S(\hat{\overline{\rho}}_{A})=S(\hat{\overline{\rho}}_{F})$ (26h)
$\displaystyle=$
$\displaystyle-\frac{1}{2}\log_{2}\left(\frac{1}{2}\right)-\frac{1}{2}\log_{2}\left(\frac{1}{2}\right)=1$
The unit entropy determined in Eq. (26h) together with the properties in Eqs.
(26d) - (26g) quantifies the transition qubit state determined at resonance
$\delta=0$ in Eq. (26a) (or Eq. (21c)) as a maximally entangled bipartite pure
state. Due to this maximal entanglement property, we shall use the resonance
transition qubit state $|\overline{\phi}_{g0}\rangle$ in Eq. (26a) to
implement teleportation by entanglement swapping protocol in Sec. V below.
Similar proof of entanglement of the AJC qubit states is easily achieved for
all possible values of sum frequency parameter
$\overline{\delta}=\omega_{0}+\omega$, confirming that in the initial vacuum-
field AJC interaction, reversible transitions occur only between a pure
initial separable qubit state vector $|\psi_{g0}\rangle$ and a pure entangled
qubit state vector $|\overline{\phi}_{g0}\rangle$. This property of Rabi
oscillations between an initial separable state and an entangled transition
qubit state occurs in the general AJC interaction described by the general
time evolving state vector $|\overline{\Psi}_{gn}(t)\rangle$ in Eq. (14).
### IV.2 Entanglement evolution
Let us consider the general dynamics of AJC interaction described by the
general time-evolving qubit state vector $|\overline{\Psi}_{gn}(t)\rangle$ in
Eq. (14). Substituting $|\overline{\Psi}_{gn}(t)\rangle$ from Eq. (14) into
Eq. (15a) and using the definitions of $|\psi_{gn}\rangle$,
$|\overline{\phi}_{gn}\rangle$ in Eq. (5) the density operator takes the form
$\displaystyle\displaystyle\hat{\overline{\rho}}_{gn}(t)=\left\\{\cos^{2}(\overline{R}_{gn}t)+\overline{c}_{gn}^{2}\sin^{2}(\overline{R}_{gn}t)\right\\}|g,n\rangle\langle{g,n}|+\left\\{{i}~{}\overline{s}_{gn}\cos(\overline{R}_{gn}t)\sin(\overline{R}_{gn}t)-\overline{c}_{gn}\overline{s}_{gn}\sin^{2}(\overline{R}_{gn}t)\right\\}|g,n\rangle\langle{e,n+1}|$
$\displaystyle+\left\\{{-i}~{}\overline{s}_{gn}\cos(\overline{R}_{gn}t)\sin(\overline{R}_{gn}t)-\overline{c}_{gn}\overline{s}_{gn}\sin^{2}(\overline{R}_{gn}t)\right\\}|e,n+1\rangle\langle{g,n}|+\left\\{\overline{s}_{gn}^{2}\sin^{2}(\overline{R}_{gn}t)\right\\}|e,n+1\rangle\langle{e,n+1}|$
(27)
The reduced density operator of the atom is determined by tracing over the
field states, thus taking the form
$\hat{\overline{\rho}}_{A}(t)=P_{g}(t)|g\rangle\langle{g}|+P_{e}(t)|e\rangle\langle{e}|$
(28)
after introducing the general time evolving atomic state probabilities
$P_{g}(t)$, $P_{e}(t)$ obtained as
$\displaystyle P_{g}(t)$ $\displaystyle=$
$\displaystyle\cos^{2}(\overline{R}_{gn}t)+\overline{c}_{gn}^{2}\sin^{2}(\overline{R}_{gn}t)$
$\displaystyle P_{e}(t)$ $\displaystyle=$
$\displaystyle\overline{s}_{gn}^{2}\sin^{2}(\overline{R}_{gn}t)$ (29)
where the dimensionless interaction parameters $\overline{c}_{gn}$,
$\overline{s}_{gn}$ are defined in Eq. (6) and the Rabi frequency takes the
form
$\overline{R}_{gn}=\frac{1}{2}\sqrt{16\lambda^{2}(n+1)+\overline{\delta}^{2}}$
(30)
Expressing $\hat{\overline{\rho}}_{A}(t)$ in Eq. (28) in $2\times 2$ matrix
form
$\hat{\overline{\rho}}_{A}(t)=\begin{pmatrix}P_{e}(t)&0\\\ 0&P_{g}(t)\\\
\end{pmatrix}$ (31)
we determine the quantum system entanglement degree $E(t)$ defined in Eq. (23)
as
$\displaystyle E(t)$ $\displaystyle=$ $\displaystyle-
tr(\hat{\overline{\rho}}_{A}(t)\log_{2}\hat{\overline{\rho}}_{A}(t))$
$\displaystyle=$ $\displaystyle-tr\left(\begin{pmatrix}P_{e}(t)&0\\\
0&P_{g}(t)\\\ \end{pmatrix}\begin{pmatrix}\log_{2}P_{e}(t)&0\\\
0&\log_{2}P_{g}(t)\\\ \end{pmatrix}\right)$
which takes the final form
$E(t)=-P_{e}(t)\log_{2}P_{e}(t)-P_{g}(t)\log_{2}P_{g}(t)$ (33)
Using the definitions of the dimensionless parameters $\overline{c}_{gn}$,
$\overline{s}_{gn}$ and the Rabi frequency $\overline{R}_{gn}$ in Eqs. (6),
(30), we evaluate the probabilities in Eq. (29) and plot the quantum system
entanglement degree $E(\tau)$ in Eq. (33) against scaled time
$\tau=\lambda{t}$ for arbitrarily chosen values of sum frequency
$\overline{\delta}=2\lambda\,,\,6\lambda\,,\,8\lambda$ and photon number
$n=1,\,2,\,3,\,6$ in Figs. 4 - 6 below.
Figure 4: Degree of entanglement against scaled time for sum frequency
$\overline{\delta}=2\lambda$ when $n=1$ and $n=2$ Figure 5: Degree of
entanglement against scaled time for sum frequency
$\overline{\delta}=6\lambda$ and $\overline{\delta}=8\lambda$ when $n=1$
Figure 6: Degree of entanglement against scaled time for sum frequency
$\overline{\delta}=8\lambda$ when $n=1$, $n=2$, $n=3$ and $n=6$
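For readers who wish to reproduce the curves in Figs. 4 - 6, a minimal sketch of $E(\tau)$ from Eqs. (29), (30) and (33) follows. Since Eq. (6) is not reproduced here, we assume $\overline{c}_{gn}=\overline{\delta}/(2\overline{R}_{gn})$ and $\overline{s}_{gn}=2\lambda\sqrt{n+1}/\overline{R}_{gn}$, the choice consistent with $P_{g}(t)+P_{e}(t)=1$:

```python
import numpy as np

lam = 1.0  # work in units where lambda = 1, so tau = lambda * t

def E_tau(tau, n, dbar):
    """Degree of entanglement E(tau) from Eqs. (29), (30), (33).
    c_gn, s_gn are assumed as dbar/(2R) and 2*lam*sqrt(n+1)/R (normalised)."""
    R = 0.5 * np.sqrt(16 * lam**2 * (n + 1) + dbar**2)   # Eq. (30)
    c, s = dbar / (2 * R), 2 * lam * np.sqrt(n + 1) / R
    Pg = np.cos(R * tau)**2 + c**2 * np.sin(R * tau)**2  # Eq. (29)
    Pe = s**2 * np.sin(R * tau)**2
    terms = [-p * np.log2(p) for p in (Pe, Pg) if p > 1e-15]
    return float(sum(terms))                              # Eq. (33)

# Separable (E = 0) at tau = 0 and again whenever sin(R*tau) = 0: sudden death
n, dbar = 1, 2 * lam
R = 0.5 * np.sqrt(16 * lam**2 * (n + 1) + dbar**2)
print(E_tau(0.0, n, dbar), E_tau(np.pi / R, n, dbar))    # both ~ 0
```

Sweeping `tau` over a grid and plotting `E_tau` for the listed values of `n` and `dbar` reproduces the periodic ESB/ESD pattern discussed below.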
The graphs in Figs. 4 - 6 show the effect of photon number $n$ and sum
frequency $\overline{\delta}=\omega_{0}+\omega$ on the dynamical behavior of
quantum entanglement measured by the von Neumann entropy $E(\tau)$ (min
$E(\tau)=0$ ; max $E(\tau)=1$). In the three figures, the phenomenon of
entanglement sudden birth (ESB) and sudden death (ESD) is observed during the
time evolution of entanglement similar to that observed in the JC model [17,
18, 19]. In ESB, the initially unentangled qubits become entangled after a very
short time interval.
For fairly low values of photon numbers $n$ and sum frequency
$\overline{\delta}$ as demonstrated in Fig. 4 for $\overline{\delta}=2\lambda$
plotted when $n=1$, $n=2$, the degree of entanglement rises sharply to a
maximum value of unity ($E(\tau)_{max}$) at an entangled state, stays at the
maximum level for a reasonably short duration, decreases to a local minimum,
then rises back to the maximum value before falling sharply to zero
($E(\tau)_{min}$) at the separable state. The local minimum disappears for
larger values of sum frequency $\overline{\delta}\geq 6\lambda$ at low photon
number $n$ and re-emerges at high photon number $n\geq 4$ (see Fig. 5
and Fig. 6, for example). However, in comparison to the resonance case
$\delta=0$ in the JC model [19] we notice a long-lived entanglement at
$E(\tau)_{max}=1$ in the cases of $\overline{\delta}=6\lambda$ plotted when
$n=1$ in Fig. 5 and $\overline{\delta}=8\lambda$ plotted when $n=3$ in Fig. 6.
The process of ESB and ESD then repeats periodically, consistent with Rabi
oscillations between the qubit states.
In Fig. 4 and Fig. 6 sum frequencies are kept constant at
$\overline{\delta}=2\lambda$ and $\overline{\delta}=8\lambda$ respectively and
photon number $n$ is varied in each case. We clearly see that the frequency of
oscillation of $E(\tau)$ increases with increasing photon number $n$, a
phenomenon also observed in the JC model [18, 19].
To visualize the effect of sum frequency parameter $\overline{\delta}$ on the
dynamics of $E(\tau)$, we considered values of sum frequency set at
$\overline{\delta}=6\lambda$ and $\overline{\delta}=8\lambda$ for photon
number $n=1$ in Fig. 5. It is clear that the frequency of oscillation of
$E(\tau)$ increases with an increase in sum frequency
$\overline{\delta}=\omega_{0}+\omega$. In the JC model when detuning
$\delta=\omega_{0}-\omega$ is set at off resonance $\delta\neq 0$ results into
a decrease in the frequency of oscillation of $E(\tau)$ as seen in [18, 19,
20] in comparison to the resonance case $\delta=0$.
Finally, comparing $\overline{\delta}=8\lambda$ plotted for $n=1$ in Figs. 5
and 6 with $\overline{\delta}=6\lambda$ plotted for $n=1$ in Fig. 5, it is
clear that the degree of entanglement $E(\tau)$ decreases at a high value of
sum frequency, a phenomenon similar to the JC model in [20]. The observed
decrease in the degree of entanglement is due to the property that the system
loses its purity and the entropy decreases when the effect of sum frequency is
considered for a small number of photons $n$. This is remedied when the effect
of sum frequency is considered for higher photon numbers $n$, as shown in
Fig. 6.
## V Teleportation
In the present work we consider an interesting case of quantum teleportation
by applying entanglement swapping protocol (teleportation of entanglement)
[21, 22, 23, 24] where the teleported state is itself entangled. The state we
want to teleport is a two-atom maximally entangled state in which we have
assigned subscripts to distinguish the atomic qubit states in the form [25]
$|\varphi\rangle_{12}=\frac{1}{\sqrt{2}}(|e\rangle_{1}|g\rangle_{2}-|g\rangle_{1}|e\rangle_{2})$
(34)
and it is in Alice’s possession. In another location Bob is in possession of a
maximally entangled qubit state $|\overline{\phi}_{g0}\rangle$ generated in
the AJC interaction in Eq. (21c) and expressed as
$|\Phi\rangle_{3\textit{x}}=-\frac{1}{\sqrt{2}}|g\rangle_{3}|0\rangle_{x}+\frac{1}{\sqrt{2}}|e\rangle_{3}|1\rangle_{x}$
(35)
where we have also assigned subscripts to the qubits in Eq. (35) to clearly
distinguish them.
An observer, Charlie, receives qubit-1 from Alice and qubit-$x$ from Bob. The
entire state of the system is
$|\chi\rangle=|\varphi\rangle_{12}\otimes|\Phi\rangle_{3\textit{x}}$ (36a)
which on substituting $|\varphi\rangle_{12}$ and $|\Phi\rangle_{3\textit{x}}$
from Eqs. (34), (35) and reorganizing takes the form
$\displaystyle|\chi\rangle=\frac{1}{2}\Bigg{[}|\Psi^{+}\rangle_{1\textit{x}}\left(\frac{|e\rangle_{3}|g\rangle_{2}+|g\rangle_{3}|e\rangle_{2}}{\sqrt{2}}\right)+|\Psi^{-}\rangle_{1\textit{x}}\left(\frac{|e\rangle_{3}|g\rangle_{2}-|g\rangle_{3}|e\rangle_{2}}{\sqrt{2}}\right)-|\Phi^{-}\rangle_{1\textit{x}}\left(\frac{|g\rangle_{3}|g\rangle_{2}-|e\rangle_{3}|e\rangle_{2}}{\sqrt{2}}\right)$
$\displaystyle-|\Phi^{+}\rangle_{1\textit{x}}\left(\frac{|g\rangle_{3}|g\rangle_{2}+|e\rangle_{3}|e\rangle_{2}}{\sqrt{2}}\right)\Bigg{]}\hfill$
(36b)
after introducing the emerging Bell states obtained as
$\displaystyle|\Psi^{+}\rangle_{1\textit{x}}$ $\displaystyle=$
$\displaystyle\frac{|e\rangle_{1}|1\rangle_{x}+|g\rangle_{1}|0\rangle_{x}}{\sqrt{2}}$
$\displaystyle|\Psi^{-}\rangle_{1\textit{x}}$ $\displaystyle=$
$\displaystyle\frac{|e\rangle_{1}|1\rangle_{x}-|g\rangle_{1}|0\rangle_{x}}{\sqrt{2}}$
$\displaystyle|\Phi^{-}\rangle_{1\textit{x}}$ $\displaystyle=$
$\displaystyle\frac{|e\rangle_{1}|0\rangle_{x}-|g\rangle_{1}|1\rangle_{x}}{\sqrt{2}}$
$\displaystyle|\Phi^{+}\rangle_{1\textit{x}}$ $\displaystyle=$
$\displaystyle\frac{|e\rangle_{1}|0\rangle_{x}+|g\rangle_{1}|1\rangle_{x}}{\sqrt{2}}$
(37)
Charlie performs a Bell state projection between qubit-1 and qubit-$x$ (a Bell
state measurement (BSM)) and communicates his results to Bob, as presented in
Sec. V.1 below.
### V.1 Bell state measurement
BSM is realized at Charlie’s end. Projection of a state $|\Lambda\rangle$ onto
$|\Sigma\rangle$ is defined as [26]
$P_{\Sigma}:=\langle{\Sigma}|\Lambda\rangle\hskip 2.84526pt|\Sigma\rangle$
(38)
Using $|\chi\rangle$ from Eq. (36b) and applying Eq. (38) we obtain a Bell
state projection outcome communicated to Bob in the form
${}_{1\textit{x}}\langle{\Psi^{-}}|\chi\rangle=\frac{1}{2}\left(\frac{|e\rangle_{3}|g\rangle_{2}-|g\rangle_{3}|e\rangle_{2}}{\sqrt{2}}\right)=\frac{1}{2}|\Psi^{-}\rangle_{32}$
(39a) The Bell state $|\Psi^{-}\rangle_{32}$ in Eq. (39a) is in the form of
Alice’s qubit in Eq. (34). Alice and Bob now have a Bell pair between qubit-2
and qubit-3. Similarly the other three Bell projections take the forms
${}_{1\textit{x}}\langle{\Psi^{+}}|\chi\rangle=\frac{1}{2}\left(\frac{|e\rangle_{3}|g\rangle_{2}+|g\rangle_{3}|e\rangle_{2}}{\sqrt{2}}\right)=\frac{1}{2}|\Psi^{+}\rangle_{32}$
(39b)
${}_{1\textit{x}}\langle{\Phi^{-}}|\chi\rangle=\frac{1}{2}\left(\frac{|e\rangle_{3}|e\rangle_{2}-|g\rangle_{3}|g\rangle_{2}}{\sqrt{2}}\right)=\frac{1}{2}|\Phi^{-}\rangle_{32}$
(39c)
${}_{1\textit{x}}\langle{\Phi^{+}}|\chi\rangle=-\frac{1}{2}\left(\frac{|e\rangle_{3}|e\rangle_{2}+|g\rangle_{3}|g\rangle_{2}}{\sqrt{2}}\right)=-\frac{1}{2}|\Phi^{+}\rangle_{32}$
(39d)
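The projection outcomes (39a)-(39d) and their common probability can be verified numerically. The tensor-index construction below, with qubit order $(1,2,3,x)$ and the conventions $|e\rangle=(1,0)^{T}$, $|g\rangle=(0,1)^{T}$, $|0\rangle=(1,0)^{T}$, $|1\rangle=(0,1)^{T}$, is our own choice for illustration:

```python
import numpy as np

e, g = np.array([1.0, 0.0]), np.array([0.0, 1.0])    # atom basis
v0, v1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # field basis |0>, |1>

# |phi>_12 (Eq. 34) and |Phi>_3x (Eq. 35) as tensors; chi[a,b,c,x] over (1,2,3,x)
phi12 = (np.einsum('a,b->ab', e, g) - np.einsum('a,b->ab', g, e)) / np.sqrt(2)
Phi3x = (-np.einsum('c,x->cx', g, v0) + np.einsum('c,x->cx', e, v1)) / np.sqrt(2)
chi = np.einsum('ab,cx->abcx', phi12, Phi3x)         # Eq. (36a)

# Bell state |Psi^->_1x from Eq. (37); project it out of chi as in Eq. (38)
psi_minus_1x = (np.einsum('a,x->ax', e, v1) - np.einsum('a,x->ax', g, v0)) / np.sqrt(2)
out = np.einsum('ax,abcx->bc', psi_minus_1x, chi)    # unnormalised state of (2,3)

prob = np.sum(out**2)
print(prob)    # ~ 0.25 -> p = 1/4 per Bell outcome, as stated below
```

The residual tensor `out` equals $\frac{1}{2}|\Psi^{-}\rangle_{32}$, matching Eq. (39a), and the other three Bell projections follow by swapping in the remaining states of Eq. (37).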
For these cases of Bell state projections in Eqs. (39b), (39c) and (39d) it
will be necessary for Bob to perform local corrections to qubit-3 by Pauli
operators as shown in Tab. 1. We also see that the probability of measuring
states $|\psi\rangle_{32}$ in Eqs. (39a)-(39d) in Charlie’s lab is
$p=\frac{1}{4}$. In general, by application of the entanglement swapping
protocol (teleportation of entanglement), qubit-2 belonging to Alice and
qubit-3 belonging to Bob, despite never having interacted before, become
entangled. Further, we see that a maximally entangled anti-symmetric atom-field
transition state $|\overline{\phi}_{g0}\rangle$ (in Eq. (21c)) easily
generated in the AJC interaction, can be used in quantum information
processing (QIP) protocols like entanglement swapping (teleportation of
entanglement) which we have demonstrated in this work. We note that it is not
possible to generate such an entangled anti-symmetric state in the JC
interaction starting with the atom initially in the ground state and the field
mode in the vacuum state [4]. Recall that the JC interaction produces a
meaningful physical effect, namely, spontaneous emission only when the atom is
initially in the excited state and the field mode in the vacuum state.
Table 1: How Bob applies an appropriate gate to his qubit based on the BSM result from Charlie

$|\varphi\rangle_{12}$ | $|\psi\rangle_{32}$ | UNITARY OPERATION
---|---|---
$\frac{1}{\sqrt{2}}(|e\rangle_{1}|g\rangle_{2}+|g\rangle_{1}|e\rangle_{2})$ | $\frac{1}{\sqrt{2}}(-|g\rangle_{3}|g\rangle_{2}+|e\rangle_{3}|e\rangle_{2})$ | $-\hat{\sigma}_{x(atom3)}\otimes\hat{I}_{(atom2)}$
| $\frac{1}{\sqrt{2}}(-|g\rangle_{3}|g\rangle_{2}-|e\rangle_{3}|e\rangle_{2})$ | $-i\hat{\sigma}_{y(atom3)}\otimes\hat{I}_{(atom2)}$
| $\frac{1}{\sqrt{2}}(|e\rangle_{3}|g\rangle_{2}+|g\rangle_{3}|e\rangle_{2})$ | $\hat{\sigma}_{z(atom3)}\otimes\hat{I}_{(atom2)}$
### V.2 Maximal teleportation fidelity
For any two-qubit state $\hat{\rho}$ the maximal fidelity is given by [27]
$F_{\hat{\rho}}=\frac{2f_{\hat{\rho}}+1}{3}$ (40)
where $f_{\hat{\rho}}$ is the fully entangled fraction defined in the form
[15]
$f_{\hat{\rho}}=\underset{|\Psi\rangle}{max}\langle\Psi|\hat{\rho}|\Psi\rangle=\left\\{{tr}\sqrt{\hat{\rho}_{expected}^{\frac{1}{2}}\hat{\rho}_{measured}\hat{\rho}_{expected}^{\frac{1}{2}}}\right\\}^{2}$
(41)
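The fully entangled fraction in Eq. (41) can be evaluated numerically. The matrix square root via eigendecomposition below is a standard construction for symmetric positive semi-definite matrices, not taken from the paper:

```python
import numpy as np

def psd_sqrt(m):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.T

def fully_entangled_fraction(rho_exp, rho_meas):
    """f = ( tr sqrt( sqrt(rho_exp) rho_meas sqrt(rho_exp) ) )^2, Eq. (41)."""
    s = psd_sqrt(rho_exp)
    return float(np.trace(psd_sqrt(s @ rho_meas @ s)).real) ** 2

# Singlet-type state on two qubits; here rho_expected = rho_measured (Eqs. 42, 43)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # (|e,g> - |g,e>)/sqrt(2)
rho = np.outer(psi, psi)
f = fully_entangled_fraction(rho, rho)
F = (2 * f + 1) / 3                                   # Eq. (40)
print(f, F)                                           # both ~ 1
```

With identical expected and measured density operators the fraction is unity, which feeds Eq. (40) to give the maximal fidelity $F_{\hat{\rho}}=1$ derived below.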
From Tab. 1
$\displaystyle\hat{\rho}_{expected}$ $\displaystyle=$
$\displaystyle|\varphi_{12}\rangle\langle\varphi_{12}|$ (42) $\displaystyle=$
$\displaystyle\frac{1}{2}[(|e_{1}\rangle|g_{2}\rangle-|g_{1}\rangle|e_{2}\rangle)\otimes(\langle{e_{1}}|\langle{g_{2}}|-\langle{g_{1}}|\langle{e_{2}}|)]$
$\displaystyle=$
$\displaystyle\frac{1}{2}[|e_{1}\rangle\langle{e_{1}}||g_{2}\rangle\langle{g_{2}}|-|e_{1}\rangle\langle{g_{1}}||g_{2}\rangle\langle{e_{2}}|$
$\displaystyle-|g_{1}\rangle\langle{e_{1}}||e_{2}\rangle\langle{g_{2}}|+|g_{1}\rangle\langle{g_{1}}||e_{2}\rangle\langle{e_{2}}|]$
$\displaystyle=$ $\displaystyle\frac{1}{2}\begin{pmatrix}-1&0\\\ 0&-1\\\
\end{pmatrix}$
$\displaystyle\hat{\rho}_{measured}$ $\displaystyle=$
$\displaystyle|\psi_{32}\rangle\langle\psi_{32}|$ (43) $\displaystyle=$
$\displaystyle\frac{1}{2}[(|e_{3}\rangle|g_{2}\rangle-|g_{3}\rangle|e_{2}\rangle)\otimes(\langle{e_{3}}|\langle{g_{2}}|-\langle{g_{3}}|\langle{e_{2}}|)]$
$\displaystyle=$
$\displaystyle\frac{1}{2}\Big{[}|e_{3}\rangle\langle{e_{3}}||g_{2}\rangle\langle{g_{2}}|-|e_{3}\rangle\langle{g_{3}}||g_{2}\rangle\langle{e_{2}}|$
$\displaystyle-|g_{3}\rangle\langle{e_{3}}||e_{2}\rangle\langle{g_{2}}|+|g_{3}\rangle\langle{g_{3}}||e_{2}\rangle\langle{e_{2}}|\Big{]}$
$\displaystyle=$ $\displaystyle\frac{1}{2}\begin{pmatrix}-1&0\\\ 0&-1\\\
\end{pmatrix}$
Substituting the results in Eq. (42) and Eq. (43) into the fully entangled
fraction Eq. (41) we obtain
$f_{\hat{\rho}}=\left\\{{tr}\begin{pmatrix}\frac{1}{2}&0\\\ 0&\frac{1}{2}\\\
\end{pmatrix}\right\\}^{2}=1$ (44)
Substituting the value of the fully entangled fraction into Eq. (40) we get
$F_{\hat{\rho}}=1$ (45)
a maximal teleportation fidelity of unity, showing that the state was fully
recovered, i.e., Alice’s qubit in Eq. (34) was successfully teleported to Bob.
We obtain the same outcome for all the other measured states. We have thus
achieved teleportation using a maximally entangled qubit state generated in an
AJC interaction, using the case where the atom and field are initially in the
absolute ground state $|g\rangle$, $|0\rangle$ as an example.
## VI Conclusion
In this paper we have analysed entanglement of a two-level atom and a
quantized electromagnetic field mode in an AJC qubit formed in the AJC
interaction mechanism. The effect of sum-frequency parameter and photon number
on the dynamical behavior of entanglement measured by von Neumann entropy was
studied which brought a clear visualization of this interaction similar to the
graphical representation on Bloch sphere. The graphical representation of Rabi
oscillations on the Bloch sphere demonstrated an important physical property,
that the AJC interaction process occurs in the reverse sense relative to the
JC interaction process. We further generated an entangled AJC qubit state in
the AJC interaction mechanism which we used in the entanglement swapping
protocol as Bob’s qubit. We obtained a maximal teleportation fidelity
$F_{\rho}=1$, showing that the state was fully recovered. This result opens
directions for future research in teleportation strictly within the AJC model.
In conclusion, we observe that the operator ordering that distinguishes the
rotating (JC) component from the anti-rotating (AJC) component has an important
physical foundation: the rotating positive- and anti-rotating negative-frequency
components of the field mode dictate the coupling of the degenerate states of a
two-level atom to the frequency components of the field mode, an important
basis for understanding the workings of the AJC and JC interaction mechanisms.
## Acknowledgment
I thank Maseno University Department of Physics and Materials Science for
providing a conducive environment to do this work.
# MONAH: Multi-Modal Narratives for Humans to analyze conversations
Joshua Y. Kim (University of Sydney, New South Wales, Australia), Greyson Y. Kim (Success Beyond Pain, Western Australia, Australia), Chunfeng Liu (Hello Sunday Morning, New South Wales, Australia), Rafael A. Calvo (Imperial College London, London, United Kingdom), Silas C.R. Taylor (University of New South Wales, New South Wales, Australia), Kalina Yacef* (University of Sydney, New South Wales, Australia)

*Corresponding author<EMAIL_ADDRESS>
###### Abstract
In conversational analyses, humans manually weave multimodal information into
the transcripts, which is significantly time-consuming. We introduce a system
that automatically expands the verbatim transcripts of video-recorded
conversations using multimodal data streams. This system uses a set of
preprocessing rules to weave multimodal annotations into the verbatim
transcripts and promote interpretability. Our feature engineering
contributions are two-fold: firstly, we identify the range of multimodal
features relevant to detect rapport-building; secondly, we expand the range of
multimodal annotations and show that the expansion leads to statistically
significant improvements in detecting rapport-building.
## 1 Introduction
Dyadic human-human dialogs are rich in multimodal information. Both the visual
and the audio characteristics of how the words are said reveal the emotions
and attitudes of the speaker. Given the richness of multimodal information,
analyzing conversations requires both domain knowledge and time. The
discipline of conversational analysis is a mature field. In this discipline,
conversations could be manually transcribed using a technical system developed
by Jefferson (2004), containing information about intonation, lengths of
pauses, and gaps. Hence, it captures both what was said and how it was said (see www.universitytranscriptions.co.uk/jefferson-transcription-example/ for an audio example). However, such manual annotations take a great deal of time. Individuals must watch the conversations attentively, often replaying the conversations to ensure completeness.
Automated Jefferson (2004) transcripts could be generated from video-
recordings Moore (2015). However, the potential issue with Jeffersonian
annotations is that there are often within-word annotations and symbols which
makes it hard to benefit from pre-trained word embeddings. Inspired by the
Jeffersonian annotations, we expand the verbatim transcripts with multimodal
annotations such that downstream classification models can easily benefit from
pre-trained word embeddings.
Our paper focuses on the classification task of predicting rapport building in
conversations. Rapport has been defined as a state experienced in interaction
with another with interest, positivity, and balance Cappella (1990). If we can
model rapport building in the medical school setting, the system can give feedback on unofficial practice sessions in place of the volunteer actors, and students therefore get more practice with feedback. The lecturer could also study the conversations of the top performers and choose interesting segments to discuss. As student doctors get better at rapport building, treatments become more effective and long-term once they graduate and practice as doctors (Egbert et al., 1964; DiMatteo, 1979; Travaline et al., 2005).
Outside of the healthcare domain, understanding and extracting the features
required to detect rapport-building could help researchers build better
conversational systems. Our first contribution is the identification of
multimodal features that have been found to be associated with rapport
building and using them to predict rapport building automatically. Our second
contribution is to include them into a text-based multimodal narrative system
Kim et al. (2019b). Why go through text? It is because this is how human
experts have been manually analyzing conversations in the linguistics
community. Our text-based approach has the merit of emulating the way human
analysts analyze conversations, and hence supporting better interpretability.
We demonstrate that the additions bring statistically significant
improvements. This feature-engineering system (open-sourced at https://github.com/SpectData/MONAH) could potentially be used to accomplish a
highly attention-demanding task for an analyst. With an automated text-based
approach, we aim to contribute towards the research gap of automatic
visualizations that support multimodal analysis Kim et al. (2019a). The
created multimodal transcript itself is a conversational analysis product,
which can be printed out on paper.
In this paper, we first introduce the problem domain and data (section 3). Secondly, we motivate the new features (detailed in Fig. 1) to be extracted (section 4). Then, we extract the features from videos and encode them as text together with the verbatim transcripts (section 4). To evaluate whether the text narratives are useful, we run experiments that predict rapport-building using texts containing different amounts of multimodal annotations (section 5). Finally, we discuss the results and visualize the outputs of the system (section 6).
Figure 1: High-level features introduction. We build on our previous work (Kim
et al., 2019b) – the new features introduced in this work are coloured in
blue, whilst the existing set of features are in white.
## 2 Related Works
The automated analysis of conversations has been the subject of considerable
interest in recent years. Within the domain of doctor-patient communication,
Sen et al. (2017) calculated session-level input features, including affective
features Gilbert (2014). Analyses using session-level features have a drawback
of not being able to identify specific defining multimodal interactions in the
conversation Zhao et al. (2016); Heylen et al. (2007). Therefore, we build
upon the works of Sen et al. (2017) – in addition to the use of session-level
features, we propose using a finer level of talk-turn multimodal text
representation as inputs into a hierarchical attention network (HAN) Yang et
al. (2016).
We also build upon our previous work (Kim et al., 2019b) by broadening the
range of multimodal features considered. As for the different methods of
multimodal information fusion, Poria et al. (2017) completed an extensive
review of the different state-of-the-art multimodal fusion techniques. Recent
multimodal fusion research (such as ICON (Hazarika et al., 2018a), CMN
(Hazarika et al., 2018b), MFN (Zadeh et al., 2018), DialogueRNN (Majumder et
al., 2019), M3ER (Mittal et al., 2020)) has focussed on end-to-end approaches.
Unlike the typical end-to-end approach of representing and fusing multimodal
features using numeric vectors, our contribution is an entirely text-based
multimodal narrative, thereby improving downstream analysis’s
interpretability. The approach of this system not only annotates the presence
of nonverbal events Eyben et al. (2011), but also the degree of the nonverbal
event intensity at both the session-level and talkturn-level.
## 3 Data
This study uses data from the EQClinic platform Liu et al. (2016). Students in
an Australian medical school were required to complete at least one medical
consultation on the online video conferencing platform EQClinic with a
simulated patient who is a human actor trained to act as a patient. Each
simulated patient was provided with a patient scenario, which mentioned the
main symptoms experienced. The study was approved by the Human Research Ethics
Committee of the University of New South Wales (project number HC16048).
The primary outcome measurement was the response to the rapport-building
question on the Student-Patient Observed Communication Assessment (SOCA) form,
an adapted version of the Calgary-Cambridge Guide Kurtz and Silverman (1996).
Simulated patients used the SOCA form to rate the students’ performances after
each video consultation. Our dataset comprises 873 sessions, all from
distinct students. Since we have two recordings per session (one of the
student, the second of the simulated patient), the number of recordings
analyzed is 1,746. The average length per recording is 928 seconds (sd=253
seconds), amounting to a total of about 450 hours of recordings analyzed. The
dataset’s size is small relative to the number of multimodal features
extracted; therefore, there is a risk of overfitting.
We used the YouTube platform to obtain the transcript per speaker from the
recordings. We chose YouTube because we (Kim et al., 2019c) found that it was
the most accurate transcription service (word error rate: 0.28) compared to
Google Cloud (0.34), Microsoft Azure (0.40), Trint (0.44), IBM Watson (0.50),
when given dyadic video-conferences of an Australian medical school. Jeong-Hwa
and Cha (2020) found that among the four categories of YouTube errors
(omission, addition, substitution, and word order), substitution recorded the
highest amount of errors. Specifically, they found that phrase repetitions
could be mis-transcribed into non-repetitions. From our experience, (a)
repair-initiation techniques such as sound stretches (e.g. “ummmm”) (Hosoda,
2006), were either omitted or substituted with “um”; (b) overlapping speech
was not a problem because our speakers were physically separated and recorded
into separate files.
We brought together the two speakers’ transcripts into a session-level
transcript through word-level timings and grouped together words spoken by one
speaker until the sequence is interrupted by the other speaker. When the
interruption occurs, we deem that the talk-turn of the current speaker has
ended, and a new talk-turn by the interrupting speaker has begun. The average
number of talk-turns per session is 296 (sd=126), and the average word count
per talk-turn is 7.62 (sd=12.2).
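The grouping rule above can be sketched in Python. This is a minimal sketch: the `group_talkturns` name and the `(speaker, word)` input format are our own assumptions, with words assumed pre-sorted by their word-level start times.

```python
def group_talkturns(words):
    """Group timestamped words into talk-turns.

    `words` is a list of (speaker, word) pairs ordered by word-level
    start time; a talk-turn ends when the other speaker interrupts
    the current speaker's sequence of words.
    """
    talkturns = []
    for speaker, word in words:
        if talkturns and talkturns[-1][0] == speaker:
            talkturns[-1][1].append(word)  # same speaker: extend the current turn
        else:
            talkturns.append((speaker, [word]))  # interruption: start a new turn
    return [(spk, " ".join(ws)) for spk, ws in talkturns]
```

For example, an interleaved word stream from a doctor and a patient collapses into three alternating talk-turns.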
At this point, we note that acted dialogues differ from naturally occurring
dialogues in a few ways. Firstly, naturally occurring dialogues tend to be
more vague (phrases like “sort of”, “kinda”, “or something”) due to the shared
understanding between the speakers (Quaglio, 2008). Secondly, taboo words or
expletives that convey emotions (like “shit”, “pissed off”, “crap”) are likely
to be less common in an acted medical setting than naturally occurring
conversations. Some conversations transform into genuine dialogues where the
speakers “shared parts of themselves they did not reveal to everyone and, most
importantly, this disclosure was met with acceptance” (Montague, 2012). This
definition of genuine conversation is similarly aligned to our definition of
rapport-building in section 4.1.
Table 1: Session-level input features for each participant. * indicates new features outside of Kim et al. (2019b).

Family | Child | Template
---|---|---
Demographics | Talkativeness | Total word count, total distinct word count, and proportion of word count
Demographics | Big 5 Personality | Percentile scores for each of the big 5 personality traits
Demographics | Gender | Male or Female
Actions | Laughter | Total laughter count
Actions | Head Nodding* | Count of nods
Actions | Forward Trunk Leaning* | Count of leaning in
Actions | Smiling* | Count of smiles
Actions | PosiFace* | Counts of positive and negative facial expressions
Actions | AU | Summary statistics of the selected AU (05, 17, 20, 25) intensities
Prosody | Delay | Summary statistics of time gaps between talk-turns
Prosody | Speech rate | Average speech rate
Prosody | Tone* | Happy, sad, angry tone
Semantics | Sentiment* | Composite, positive, neutral, and negative sentiment
Semantics | Questions* | Proportion of talk-turns that are open/closed questions
Mimicry | Speech Rate* | Dynamic time warping distance for speech rate
Mimicry | Tone* | Dynamic time warping distance for tone
History | Num. Sessions* | Number of past sessions the assessor has scored before this
History | Proportion given extreme marks* | Proportion of past sessions in which the assessor has given an extreme score
Figure 1 shows a summary of the features extracted. We annotated verbatim
transcripts with two different levels of multimodal inputs – annotations at
the session-level are labeled coarse, whilst annotations at the talk-turn-
level are labeled fine. To facilitate comparisons, all input families
belonging to the coarse (fine) level would be annotated with uppercase
(lowercase) letters, respectively. In this paper, we refer to the previously
existing set of features (with white background) as the “prime” (´)
configuration. Families are also abbreviated by their first letter. For
example, the coarse $P\textprime$ family would consist of only speech rate and
delay, whilst the coarse $P$ family would consist of $P\textprime$ plus tone.
As another example, the coarse $D\textprime$ family is the same as the $D$
family because there are no newly added features (in blue). We introduce the
framework of our multimodal feature extraction pipeline in Figure 2.
Figure 2: MONAH (Multi-Modal Narratives for Humans) Framework.
## 4 Multimodal features extractions
As an overview, we extracted the timestamped verbatim transcripts and used a
range of pre-trained models to extract temporal, modality-specific features.
We relied on pre-trained models for feature extraction and did not attempt to
improve on them – demonstrating the value of using multidisciplinary pre-
trained models from natural language processing, computer vision, and speech
processing for conversational analysis.
Effectively, we extracted structured data from unstructured video data
(section 4.2). With the structured data and verbatim transcript, we weaved a
multimodal narrative using a set of predefined templates (sections 4.3 and
4.4). With the multimodal narrative, we employed deep learning techniques and
pre-trained word embeddings to predict the dependent variable (section 5).
### 4.1 Dependent variable - rapport building
The dependent variable is defined as the success in rapport building. Rapport
building is one of the four items scored in the SOCA. The original 4-point
Likert scale is Fail, Pass-, Pass, Pass+, we converted this scale into a
binary variable where it is true if the rapport-building score is “Pass+” as
we are concerned here with identifying good rapport building. “Pass+” means
that the actor felt rapport such that all information could be comfortably
shared. 38 percent of the population achieved “Pass+”. All actors followed the same pre-interview brief. Because only one actor scored each student performance and there is no overlap between raters, a limitation is that we do not have measures of inter-rater agreement.
### 4.2 Description of features
Table 1 gives an overview of all features for each speaker. We define six
families of coarse-level inputs – demographics, actions, prosody, semantics,
mimicry, and history. We computed the features per speaker. From all families,
there are a total of 77 features per session.
We first discuss the family of demographics. Talkativeness is chosen because
the patient’s talkativeness would initiate the doctor’s active listening while
aiding identification of patient’s concerns – processes that could establish
rapport. In Hall et al. (2009), it appears that patients appreciate a certain
degree of doctor’s dominance in the conversation, which itself is also
correlated with higher rapport. Big 5 Personality consists of Extraversion,
Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience
(McCrae and Costa, 1987). This personality structure is widely used in
research and practice to quantify aspects of a person’s natural tendency in
thought, feeling, and action, with good validity and reliability indicators
(McCrae, 2017). It is chosen because traits of agreeableness and openness on
the part of both doctor and patient predict higher rapport. Among doctors,
higher openness and agreeableness predict higher empathy towards patients
Costa et al. (2014). Among patients, higher agreeableness predicted higher
trust towards doctors Cousin and Mast (2013), and higher openness predicted
higher doctor affectionate communication Hesse and Rauscher (2019). Big 5
Personality is extracted through feeding transcripts to the IBM Watson
Personality Insights API (version 2017-10-13), costing a maximum of 0.02 USD
per call. Gender is chosen because personality differences between genders
were observed cross-culturally. Among twenty-three thousand participants
across cultures for both college-age and adult samples, females reported
higher agreeableness, warmth, and openness to feelings than males Costa Jr et
al. (2001), traits that could be linked to rapport building.
Secondly, for the family of actions, laughter is chosen because humor (which
was defined in part by the presence of laughter) on the part of both doctor
and patient was found to be twice as frequent in high-satisfaction than low-
satisfaction visits Sala et al. (2002). Laughter events were detected using
the Ryokai et al. (2018) algorithm. Facial expressions that resemble smiling
are another behavioral indicator of humor appreciation and approval of one
another Tickle-Degnen and Rosenthal (1990). Head nodding is a type of
backchannel response (i.e., response tokens) that has been shown to reflect
rapport between doctor and patient, especially when the primary activity is
face to face communications Manusov (2014). Forward trunk leaning is chosen
because it has long been found to reflect an expression of interest and
caring, which are foundational to rapport building Scheflen (1964).
Additionally, facial positivity (posiface) is included as it is useful in
rapport building detection in small groups Müller et al. (2018). Lastly,
action units (AU) that describe specific facial expressions, in particular AU
05 (upper lid raiser), 17 (chin raiser), 20 (lip stretcher), 25 (lips part),
are also included as they were useful in automated dyadic conversational
analyses to detect depression in our previous work Kim et al. (2019b). All
features introduced in this paragraph were calculated using the AU and
landmark positioning features extracted using OpenFace Baltrušaitis et al.
(2016).
Thirdly, for the family of prosody, delay is chosen because it has been shown
to be an indicator of doctor-to-patient influence – patients of low rapport
with their doctors were found to speak less in response to doctor’s comments
Sexton et al. (1996). Speech rate is chosen because doctor’s fluent speech
rate and patient’s confident communication have been positively correlated
with the patient’s perception of rapport Hall et al. (2009). Delay and speech
rate are calculated using the time-stamped transcripts. Tone is chosen because
a warm and respectful tone on the part of both doctor and patient is
positively correlated with the patient’s perception of rapport Hall et al.
(2009). Tone is calculated using the Vokaturi algorithm (version 3.3) Vokaturi
(2019).
Table 2: Templates for the session-level coarse summary.

Family | Child | ID | Template
---|---|---|---
Demographics | Talkativeness | 1 | doctor number of words high, doctor number of distinct words high
Demographics | Big 5 Personality | 2 | doctor openness high
Demographics | Gender | 3 | The patient is female
Actions | Laughter | 4 | doctor laughter counts high
Actions | Head Nodding | 5 | doctor head nod counts high
Actions | Forward Trunk Leaning | 6 | doctor forward trunk leaning high
Actions | Smiling | 7 | doctor smiling counts high
Actions | PosiFace | 8 | doctor positive face expression counts high
Actions | AU | 9 | doctor minimum lip depressor very low, maximum lip depressor low, average lip depressor low, variance lip depressor low
Prosody | Delay | 10 | minimum delay very low, maximum delay low, average delay low, variance delay low
Prosody | Speech rate | 11 | speech rate high
Prosody | Tone | 12 | angry tone high
Semantics | Sentiment | 13 | positive sentiment high
Semantics | Questions | 14 | open questions high
Mimicry | Speech Rate | 15 | speech rate mimicry high
Mimicry | Tone | 16 | tone mimicry high
History | Num. Sessions | 17 | patient number of sessions before this very high
History | Proportion given extreme marks | 18 | patient question four proportion given maximum marks high
Fourthly, for the family of semantics, sentiment is chosen because the
provision of positive regard from a practitioner to a patient is an important
factor to foster therapeutic alliance; additionally, this process may be
further enhanced if the patient also demonstrates positive behaviors towards
the practitioners Farber and Doolin (2011). Sentiment is extracted using the
VADER algorithm Gilbert (2014), in line with Sen et al. (2017). Questions is
chosen because higher engagement by the doctor (e.g., asking questions) with
the patient and the patient asking fewer questions have been shown to
positively correlate with the patient’s perception of rapport Hall et al.
(2009). Questions are detected using Stanford CoreNLP Parser Manning et al.
(2014) and the Penn Treebank Bies et al. (1995) tag sets.
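The parse-based question detector is not reproduced here, but the open/closed distinction can be approximated with a rough first-word heuristic. The function name, word lists, and labels below are our own illustrative assumptions; the actual system inspects Penn Treebank clause tags (e.g. SBARQ vs. SQ) from the CoreNLP constituency parse.

```python
# Wh-led questions are treated as open; auxiliary-led as closed (yes/no).
WH_WORDS = {"what", "why", "how", "where", "when", "who", "which"}
AUX_WORDS = {"do", "does", "did", "is", "are", "was", "were", "have",
             "has", "had", "can", "could", "will", "would", "should"}

def classify_question(talkturn):
    """Label a talk-turn as an open question, a closed question,
    or not a question, based only on its first word."""
    tokens = talkturn.strip().lower().split()
    first = tokens[0] if tokens else ""
    if first in WH_WORDS:
        return "open"
    if first in AUX_WORDS:
        return "closed"
    return "none"
```

This heuristic misses embedded questions ("I wonder whether...") that a full parse would catch, which is why the paper relies on the parser.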
Next, mimicry is chosen because doctor-patient synchrony is an established
proxy for rapport. In a review paper, rapport is theorized to be grounded in
the coupling of practitioner’s and patient’s brains Koole and Tschacher
(2016). Such a coupling process would eventuate in various forms of mimicry in
the dyad, for instance, vocally (e.g., matching speech rate and tone),
physiologically (e.g., turn-taking, breathing), physically (e.g., matching
body language) Wu et al. (2020). In this study, we aim to use vocal mimicry to
capture this underlying phenomenon. Session level mimicry scores are
approximated through Dynamic Time Warping distances Giorgino and others
(2009), in line with Müller et al. (2018).
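A minimal dynamic time warping distance can be written with the textbook recurrence. This is our own implementation, standing in for the dtw computation of Giorgino and others (2009), applied to, say, the per-talk-turn speech rates of the two speakers; a lower distance is read as stronger vocal mimicry.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Unlike a Euclidean distance, DTW tolerates sequences of different lengths and local tempo differences, which suits turn-by-turn speech-rate comparisons.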
Lastly, history is chosen because the scores given by the assessors could be
subjective evaluations where the evaluations are unduly influenced by the
assessor’s leniency bias Moers (2005). We attempted to mitigate the leniency
bias by introducing history features that indicate the assessor’s leniency and
its consistency.
### 4.3 Generation of coarse multimodal narrative
In this section, we discuss the coarse multimodal narrative. We summarized the
automatic generation of the text representation in Table 2.
Table 3: Templates for the talkturn-level fine summary.

Family | Child | ID | Template
---|---|---|---
Verbatim | Transcript | 19 | Transcript returned from the ASR system
Prosody | Speech rate | 20 | the doctor quickly said
Prosody | Tone | 21 | the doctor said angrily
Prosody | Delay | 22 | after two hundred milliseconds
Prosody | Delay | 23 | a long delay
Actions | Laughter | 24 | the doctor laughed
Actions | Nodding | 25 | the doctor nodded
Actions | Forward trunk leaning | 26 | the doctor leaned forward
Actions | Smiling | 27 | the doctor smiled
Actions | PosiFace | 28 | the doctor displayed positive facial expression
Actions | AU05, 17, 20, 25 | 29 | the doctor exhibited lip depressor
We calculated the z-score for all the above templates (except Template 3, which is categorical) using the following formula. The average ($\mu$) and standard deviation ($\sigma$) are computed from the training observations. Using the z-score, we bucketed the values into “very low” ($z<-2$), “low” ($-2\leq z<-1$), “high” ($1<z\leq 2$) and “very high” ($z>2$). The reason for the z-transformation is to create human-readable text by bucketing continuous variables into easy-to-understand buckets (“high” vs. “low”).

$z=\frac{x-\mu_{\text{Train}}}{\sigma_{\text{Train}}}$ (1)
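The bucketing under Equation (1) can be sketched as follows. The function names, the `laughter_template` renderer, and the disjoint bucket boundaries are our own assumptions for illustration.

```python
def z_bucket(x, mu_train, sigma_train):
    """Map a raw feature value to a coarse-narrative bucket via
    Equation (1); mid-range values yield no annotation."""
    z = (x - mu_train) / sigma_train
    if z < -2:
        return "very low"
    if z < -1:
        return "low"
    if z > 2:
        return "very high"
    if z > 1:
        return "high"
    return None  # unremarkable: the template is omitted

def laughter_template(count, mu_train, sigma_train, speaker="doctor"):
    """Hypothetical renderer for template 4 of Table 2."""
    bucket = z_bucket(count, mu_train, sigma_train)
    return f"{speaker} laughter counts {bucket}" if bucket else ""
```

Note that $\mu$ and $\sigma$ come from the training fold only, so the buckets never leak test-fold statistics.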
### 4.4 Generation of fine multimodal narrative
In addition to the verbatim transcript, we introduced two new families of
information – prosody, and actions. Table 3 gives an overview of the
templates, and the bold-face indicates a variable. The motivations of the
features have been discussed; we discuss the rules of insertion in the next
few paragraphs.
Template 19 is the verbatim transcript returned from the ASR system. Before
each talk-turn, we identified the speaker (doctor/patient) and added
multimodal information using templates 20-29. Speech rate and tone were
standardized across all training observations. We appended templates 20 and 21, where possible values depend on the z-score – “quickly” ($1<z<2$) and “very quickly” ($z\geq 2$). For delay, we used time intervals of 100 milliseconds, between 200 and 1200 milliseconds – in line with Roberts and Francis (2013). We appended template 22 at the front of the talk-turn if a delay of at least 200 milliseconds is present between talk-turns. In addition, we appended template 23, where possible values depend on the standardized duration of delay – “short” ($z<1$), “long” ($1\leq z<2$) and “significantly long” ($z\geq 2$). Template
23 captures longer than usual delay, considering the unique turn-taking
dynamics of each conversation. The standardized duration of delay is
calculated using talk-turn delays from the respective session. Lastly, as for
the actions family, templates 24 – 28 were added if any of the actions are
detected during the talk-turn. For template 29, it was only added if the AU is
detected throughout the entire duration of the talk-turn.
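Pulling these rules together, a minimal sketch of the fine-narrative weaving might look as follows. The `weave_fine_narrative` name and the `turn` dict fields are our own assumptions; tone (template 21) and AU (template 29) handling is omitted for brevity.

```python
def weave_fine_narrative(turn):
    """Prepend talk-turn-level annotations (a subset of templates
    20-28) to the verbatim transcript. `turn` is a hypothetical dict:
    {"speaker": ..., "text": ..., "delay_ms": ..., "speech_rate_z": ...,
     "actions": [...]}.
    """
    parts = []
    delay = turn.get("delay_ms", 0)
    if delay >= 200:  # template 22: 100 ms intervals, capped at 1200 ms
        parts.append(f"after {min(delay, 1200) // 100 * 100} milliseconds")
    z = turn.get("speech_rate_z", 0)
    if z >= 2:  # template 20: "very quickly" (z >= 2), "quickly" (1 < z < 2)
        parts.append(f"the {turn['speaker']} very quickly said")
    elif z > 1:
        parts.append(f"the {turn['speaker']} quickly said")
    for action in turn.get("actions", []):  # templates 24-28
        parts.append(f"the {turn['speaker']} {action}")
    parts.append(turn["text"])
    return " ".join(parts)
```

Because the output is plain English, the downstream HAN can consume it with ordinary pre-trained word embeddings, which is the point of the text-based design.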
## 5 Experimental settings
There are two main types of inputs – (1) numeric inputs at the session-level,
and (2) coarse and/or fine multimodal narrative text inputs. As an overview,
for (1), we trained the decision tree classifier using session-level numeric
inputs. As for (2), we trained the HAN Yang et al. (2016). We aim to
facilitate how humans analyze conversations – HAN can work with text and has
easy interpretation with single-headed attention, making it a suitable
candidate. Relative to BERT (Devlin et al., 2018), the HAN is faster to train
and easier to interpret.
### 5.1 Research questions
Table 4: Summary of the model performances. We report the average five-fold cross-validation AUC and its standard deviation in brackets. Row-wise: we begin with $D'A'P'$, which is the full existing feature set from Kim et al. (2019b), and progressively compare it against the new sets of features to answer Q1. Column-wise: we compare the difference in AUC between the classification tree and the coarse-only HAN to answer Q2, and the difference in AUC between the coarse-only HAN and the coarse + fine HAN to answer Q3. Asterisks (*) indicate significance relative to the $D'A'P'$ row. Carets (^) indicate significance in the column-wise comparisons; we also provide the confidence intervals in square brackets [] for the difference in performance. The number of symbols indicates the level of statistical significance, e.g., ***: 0.01, **: 0.05, *: 0.10.

Coarse Inputs | Tree | Coarse-only (HAN) | Significance of Difference (Coarse-only vs. Tree) | Coarse + Fine (HAN) | Significance of Difference (Coarse + Fine vs. Coarse-only)
---|---|---|---|---|---
$D'A'P'$ (Existing Features) | 0.577 (0.011) | 0.637 (0.018) | ^^^ [0.038, 0.082] | 0.629 (0.041) | [-0.054, 0.038]
$H$ | 0.613 ** (0.036) | 0.642 (0.038) | [-0.025, 0.083] | 0.652 (0.048) | [-0.053, 0.073]
$DH$ | 0.670 *** (0.049) | 0.670 ** (0.034) | [-0.062, 0.062] | 0.654 (0.030) | [-0.063, 0.031]
$PAH$ | 0.684 *** (0.022) | 0.645 (0.043) | [-0.089, 0.011] | 0.661 (0.029) | [-0.038, 0.070]
$APMH$ | 0.664 *** (0.037) | 0.643 (0.036) | [-0.074, 0.032] | 0.657 (0.037) | [-0.039, 0.067]
$APSMH$ | 0.649 *** (0.021) | 0.644 (0.049) | [-0.060, 0.050] | 0.653 (0.051) | [-0.064, 0.082]
$DAPSMH$ | 0.630 *** (0.032) | 0.661 * (0.030) | [-0.014, 0.076] | 0.650 (0.028) | [-0.053, 0.031]
The proposed features have been motivated by scientific studies in Section 4.
A natural next question is, “what are the impacts of these proposed features
on model performance?” We break this broad question into three questions.
Firstly, (Q1) do the newly added features improve performance over the
existing set of features for the classification tree and/or HAN?
Secondly, modelling using unstructured text input data (as opposed to using
numeric inputs) has the risk of introducing too much variability in the
inputs. Therefore, we investigate (Q2) – given the coarse-only inputs, do the
performance between the HAN and classification tree differ significantly?
Lastly, adding more granular talkturn-level inputs to the coarse session-level
inputs has the benefit of deeper analyses, because it allows the analyst to
analyze important talkturns of the conversation. On top of this benefit, (Q3)
do we also have significant performance improvement between coarse-only vs.
both coarse and fine inputs?
For all models, the area under the receiver-operator curve (AUC) was used as
the evaluation metric. The AUC measures the goodness of ranking (Hanley and
McNeil, 1982) and therefore does not require an arbitrary threshold to turn
the probabilities into classes. The partitioning of the dataset into the five folds is constant for the decision tree and HAN to facilitate comparison. The five
folds are created through stratified sampling of the dependent variable.
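The ranking interpretation of the AUC can be made concrete with a small pairwise implementation. This is our own sketch of the Hanley and McNeil (1982) view of the AUC as the probability that a randomly chosen positive is scored above a randomly chosen negative, with ties counting one half.

```python
def auc_score(y_true, y_score):
    """AUC as the proportion of (positive, negative) pairs in which
    the positive receives the higher score; no threshold is needed."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form is quadratic in the number of examples; rank-based formulations give the same value in near-linear time.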
### 5.2 Classification tree set-up
To answer (Q1) and (Q2), we tested all 72 configurations of prime
($2^{3}=8$) plus full ($2^{6}=64$) family inputs for the decision tree. We
performed the same z-transformation pre-processing (as in section 4.3) on the
decision tree input variables and limited random search to twenty trials.
The algorithm used is from the rpart package in R. As part of hyperparameter tuning, we tuned the cp (log-uniform between $10^{-9}$ and $10^{-7}$), maximum depth (uniform between 1 and 20), and minimum split (uniform between 20 and 80) through five-fold cross-validation and random search.
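A Python analogue of one random-search draw might look as follows. This is illustrative only: the paper tunes rpart in R, and the dict keys and seeding scheme here are our own naming.

```python
import math
import random

def sample_tree_params(rng):
    """One random-search draw over the stated tuning ranges:
    cp log-uniform in [1e-9, 1e-7], maximum depth uniform in
    [1, 20], minimum split uniform in [20, 80]."""
    log_cp = rng.uniform(math.log10(1e-9), math.log10(1e-7))
    return {
        "cp": 10 ** log_cp,
        "max_depth": rng.randint(1, 20),
        "min_split": rng.randint(20, 80),
    }

# Twenty trials, matching the stated random-search budget.
trials = [sample_tree_params(random.Random(seed)) for seed in range(20)]
```

Sampling cp on a log scale rather than uniformly ensures the two orders of magnitude in its range are explored evenly.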
### 5.3 HAN set-up
To answer (Q1) and (Q2), we chose the input configurations that performed the best for the classification tree, and used the same input configurations in
HAN to compare the difference. Therefore, this test is biased in favour of the
classification tree. To answer (Q3), we added the fine narratives to each
coarse-only configuration, and compared the difference.
The model architecture is the HAN architecture by Yang et al. (2016), with
about 5 million parameters. We used the pre-trained Glove word embeddings
Pennington et al. (2014) of 300-dimensions to represent each word. Words not
found in the Glove vocabulary are replaced with the “unk” token. The
hyperparameter tuning procedure is reported in Appendix A, and the best
hyperparameter configurations are reported in Appendix B. There are twenty hyperparameter search trials for each input configuration (we conducted additional tuning experiments for the tree in Appendix C to observe potential improvements in performance).
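The word-to-embedding lookup with an out-of-vocabulary fallback can be sketched as follows (a toy embedding table stands in for the real 300-dimensional GloVe file):

```python
import numpy as np

# Toy stand-in for the 300-d GloVe table; real use would load
# glove.6B.300d.txt (or similar) into this dict.
rng = np.random.default_rng(0)
glove = {w: rng.standard_normal(300) for w in ["the", "patient", "smiled"]}
glove["unk"] = np.zeros(300)

def embed(tokens):
    # Words missing from the GloVe vocabulary fall back to the "unk" vector.
    return np.stack([glove.get(t, glove["unk"]) for t in tokens])

vecs = embed(["the", "patient", "frowned"])
print(vecs.shape)  # → (3, 300); "frowned" maps to the unk vector
```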
## 6 Experimental results
The results are summarized in Table 4. The key findings are: (Q1) with the
extended inputs, we observed statistically significant improvements in both
the HAN and tree over the existing full set of features (one-tailed t-test);
(Q2) given the coarse-only inputs, the performances of the HAN and the classification tree did not differ significantly (two-tailed t-test), so it is plausible that engineering the features into text does not risk performance; (Q3) although adding the fine narratives allows deeper analyses by the analyst, it does not lead to significant differences over the coarse-only inputs (two-tailed t-test).
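These fold-wise comparisons can be sketched with a paired t-test over the five fold AUCs; the per-fold values below are invented for illustration (the paper reports only summary statistics):

```python
from scipy.stats import ttest_rel

# Hypothetical per-fold AUCs for two models evaluated on the same five folds.
auc_base = [0.61, 0.63, 0.58, 0.62, 0.60]
auc_ext = [0.65, 0.66, 0.61, 0.64, 0.63]

# Two-tailed test (Q2/Q3-style comparisons) and one-tailed test
# (Q1: does the extension improve performance?).
t2, p_two = ttest_rel(auc_ext, auc_base)
t1, p_one = ttest_rel(auc_ext, auc_base, alternative="greater")
print(round(p_one, 4), round(p_two, 4))
```

Pairing by fold removes the between-fold variance, which is why a constant fold partition across models matters.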
(Q1) When compared to the full set of existing features, the classification
tree achieved statistically significant improvements (at $\alpha=0.05$) in all
six out of six coarse input families. For HAN, it achieved statistically
significant improvements in one (at $\alpha=0.05$) or two (at $\alpha=0.10$)
out of six coarse input families. This demonstrates the value of the newly
introduced coarse features (we performed additional tests in Appendix D to observe the impact of the additions to the fine narratives, and found small but statistically insignificant improvements in all three input families, $va$, $vp$, $vpa$).
(Q2) Across the seven coarse input configurations, there are no significant differences in performance between the classification tree and the HAN in six out of seven. The only exception is the
baseline $D\textprime A\textprime P\textprime$ configuration where the HAN is
significantly better. However, the lack of statistically significant
differences does not mean that the performances are the same. In line with the recommendation of Quertemont (2011), we provide the confidence interval around the difference in performance for discussion. Of the confidence intervals in the fourth column of Table 4 that include zero, none suggests that the effect sizes are negligible (for example, less than 0.01). In summary, we can conclude neither that the performance of the HAN differs significantly from that of the tree nor that they are the same.
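A confidence interval of this kind can be computed from the per-fold performance differences; the numbers here are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold AUC differences (HAN minus tree) on the same five folds.
diffs = np.array([0.03, -0.01, 0.02, 0.00, 0.01])

n = len(diffs)
mean = diffs.mean()
se = diffs.std(ddof=1) / np.sqrt(n)
# 95% CI from the t distribution with n - 1 degrees of freedom.
half = stats.t.ppf(0.975, df=n - 1) * se
ci = (mean - half, mean + half)
print(tuple(round(float(x), 3) for x in ci))
```

An interval that straddles zero but is wide (here spanning several hundredths of AUC) is exactly the situation discussed above: no significant difference, but no evidence of equivalence either.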
(Q3) The addition of fine narratives to the coarse narrative did not result in
significantly stronger (nor weaker) performance in any of the seven input
configurations. We posit that this negative finding is due to the difficulty
in prioritizing the back-propagation updates to the parts of the network
interacting with the coarse features, where there is likely a high signal-to-
noise ratio. Despite the negative finding, we think it is important to explore
fine features’ addition onto coarse features because it produces a complete
transcript for the human to understand how the conversation proceeded.
### 6.1 Qualitative Analysis
We visualized the talkturn-level and word-level attention weights from the
model. Attention weights are normalized using z-transformation and bucketed
into four buckets ($<0$, $<1$, $<2$, $\geq$ 2) Kim et al. (2019b). The analyst
could analyze an important segment in detail (as in Fig. 3) or see an overview
of the important segments in the conversation (see appendix E). In the example
(Fig. 3), we observed that the multimodal annotations of leaning forward and
positive expression were picked up as important words by the model.
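The z-transform-and-bucket step for the attention weights can be sketched as follows (raw weights invented for illustration):

```python
import numpy as np

# Hypothetical raw attention weights over the words of one talkturn.
w = np.array([0.01, 0.05, 0.40, 0.02, 0.30, 0.22])

# z-transform, then bucket into four bands: z < 0, 0 <= z < 1, 1 <= z < 2, z >= 2.
z = (w - w.mean()) / w.std(ddof=0)
buckets = np.digitize(z, bins=[0.0, 1.0, 2.0])
print(buckets.tolist())  # → [0, 0, 2, 0, 1, 1]
```

Each bucket then maps to a rendering style (font size and darkness) in the visualization.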
Figure 3: Conversation analysis for a true positive. The talkturn-level
attentions are labelled Low (L), Medium (M) and High (H), while the words with
higher attention have a larger and darker font. We also transcribed this
segment using the Jefferson system in Appendix F.
## 7 Conclusion
In this paper, we build upon a fully text-based feature-engineering system. We
motivated the added features with existing literature, and demonstrated the
value of the added features through experiments on the EQClinic dataset. This
approach emulates how humans have been analyzing conversations with the
Jefferson (2004) transcription system, and hence is human-interpretable. It is
highly modular, thereby allowing practitioners to inject modalities. In this
paper, we have used a wide range of modalities, including demographics, actions, prosody, mimicry, and history. The ablation tests showed
that the added coarse features significantly improve the performance for both
decision tree and HAN models.
Future research could (1) investigate whether this feature engineering system
is generalizable to wider applications of conversational analysis; (2) conduct
user studies to validate the usability and ease of interpretability of the
visualization.
## Acknowledgments
We acknowledge the Sydney Informatics Hub and the University of Sydney’s high-
performance computing cluster, Artemis, for providing the computing resources
and Marriane Makahiya for supporting the data manipulation work. Video data
collection was carried out as part of the OSPIA platform project, funded by
the Department of Health Clinical Training Fund from the Australian
Government.
## References
* Baltrušaitis et al. (2016) Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency. 2016. Openface: an open source facial behavior analysis toolkit. In _Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on_ , pages 1–10. IEEE.
* Bies et al. (1995) Ann Bies, Mark Ferguson, Karen Katz, Robert MacIntyre, Victoria Tredinnick, Grace Kim, Mary Ann Marcinkiewicz, and Britta Schasberger. 1995. Bracketing guidelines for Treebank II style Penn Treebank project. _University of Pennsylvania_ , 97:100.
* Cappella (1990) Joseph N Cappella. 1990. On defining conversational coordination and rapport. _Psychological Inquiry_ , 1(4):303–305.
* Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. _arXiv preprint arXiv:1406.1078_.
* Costa et al. (2014) Patricio Costa, Raquel Alves, Isabel Neto, Pedro Marvao, Miguel Portela, and Manuel Joao Costa. 2014. Associations between medical student empathy and personality: a multi-institutional study. _PloS one_ , 9(3).
* Costa Jr et al. (2001) Paul T Costa Jr, Antonio Terracciano, and Robert R McCrae. 2001. Gender differences in personality traits across cultures: robust and surprising findings. _Journal of personality and social psychology_ , 81(2):322.
* Cousin and Mast (2013) Gaëtan Cousin and Marianne Schmid Mast. 2013. Agreeable patient meets affiliative physician: how physician behavior affects patient outcomes depends on patient personality. _Patient Education and Counseling_ , 90(3):399–404.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* DiMatteo (1979) M Robin DiMatteo. 1979. A social-psychological analysis of physician-patient rapport: toward a science of the art of medicine. _Journal of Social Issues_ , 35(1):12–33.
* Egbert et al. (1964) Lawrence D Egbert, George E Battit, Claude E Welch, and Marshall K Bartlett. 1964\. Reduction of postoperative pain by encouragement and instruction of patients: a study of doctor-patient rapport. _New England Journal of Medicine_ , 270(16):825–827.
* Eyben et al. (2011) Florian Eyben, Martin Wöllmer, Michel F Valstar, Hatice Gunes, Björn Schuller, and Maja Pantic. 2011. String-based audiovisual fusion of behavioural events for the assessment of dimensional affect. In _Face and Gesture 2011_ , pages 322–329. IEEE.
* Farber and Doolin (2011) Barry A Farber and Erin M Doolin. 2011. Positive regard. _Psychotherapy_ , 48(1):58.
* Gal and Ghahramani (2016) Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In _Advances in neural information processing systems_ , pages 1019–1027.
* Gilbert (2014) CJ Hutto Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. _Eighth International Conference on Weblogs and Social Media (ICWSM-14)._
* Giorgino and others (2009) Toni Giorgino and others. 2009. Computing and visualizing dynamic time warping alignments in R: the dtw package. _Journal of statistical Software_ , 31(7):1–24.
* Hall et al. (2009) Judith A Hall, Debra L Roter, Danielle C Blanch, and Richard M Frankel. 2009. Observer-rated rapport in interactions between medical students and standardized patients. _Patient Education and Counseling_ , 76(3):323–327.
* Hanley and McNeil (1982) James A Hanley and Barbara J McNeil. 1982. The meaning and use of the area under a receiver operating characteristic (roc) curve. _Radiology_ , 143(1):29–36.
* Hazarika et al. (2018a) Devamanyu Hazarika, Soujanya Poria, Rada Mihalcea, Erik Cambria, and Roger Zimmermann. 2018a. Icon: Interactive conversational memory network for multimodal emotion detection. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2594–2604.
* Hazarika et al. (2018b) Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, and Roger Zimmermann. 2018b. Conversational memory network for emotion recognition in dyadic dialogue videos. In _Proceedings of the conference. Association for Computational Linguistics. North American Chapter. Meeting_ , volume 2018, page 2122. NIH Public Access.
* Hesse and Rauscher (2019) Colin Hesse and Emily A Rauscher. 2019. The relationships between doctor-patient affectionate communication and patient perceptions and outcomes. _Health communication_ , 34(8):881–891.
* Heylen et al. (2007) Dirk Heylen, Elisabetta Bevacqua, Marion Tellier, and Catherine Pelachaud. 2007\. Searching for prototypical facial feedback signals. In _International Workshop on Intelligent Virtual Agents_ , pages 147–153. Springer.
* Hosoda (2006) Yuri Hosoda. 2006. Repair and relevance of differential language expertise in second language conversations. _Applied linguistics_ , 27(1):25–50.
* Jefferson (2004) Gail Jefferson. 2004. Glossary of transcript symbols with an introduction. _Pragmatics and Beyond New Series_ , 125:13–34.
* Jeong-Hwa and Cha (2020) Lee Jeong-Hwa and Kyung-Whan Cha. 2020. An analysis of the errors in the auto-generated captions of university commencement speeches on youtube. _Journal of Asia TEFL_ , 17(1):143.
* Kim et al. (2019a) Joshua Kim, Rafael A Calvo, Kalina Yacef, and N J Enfield. 2019a. A Review on Dyadic Conversation Visualizations - Purposes, Data, Lens of Analysis. _arXiv preprint arXiv:1905.00653_.
* Kim et al. (2019b) Joshua Y Kim, Greyson Y Kim, and Kalina Yacef. 2019b. Detecting depression in dyadic conversations with multimodal narratives and visualizations. In _Australasian Joint Conference on Artificial Intelligence_ , pages 303–314. Springer.
* Kim et al. (2019c) Joshua Y Kim, Chunfeng Liu, Rafael A Calvo, Kathryn McCabe, Silas CR Taylor, Björn W Schuller, and Kaihang Wu. 2019c. A comparison of online automatic speech recognition systems and the nonverbal responses to unintelligible speech. _arXiv preprint arXiv:1904.12403_.
* Koole and Tschacher (2016) Sander L Koole and Wolfgang Tschacher. 2016. Synchrony in psychotherapy: A review and an integrative framework for the therapeutic alliance. _Frontiers in psychology_ , 7:862.
* Kurtz and Silverman (1996) Suzanne M Kurtz and Jonathan D Silverman. 1996. The Calgary-Cambridge Referenced Observation Guides: an aid to defining the curriculum and organizing the teaching in communication training programmes. _Medical education_ , 30(2):83–89.
* Liu et al. (2016) Chunfeng Liu, Renee L Lim, Kathryn L McCabe, Silas Taylor, and Rafael A Calvo. 2016\. A web-based telehealth training platform incorporating automated nonverbal behavior feedback for teaching communication skills to medical students: A randomized crossover study. _Journal of Medical Internet Research_ , 18(9).
* Majumder et al. (2019) Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. Dialoguernn: An attentive rnn for emotion detection in conversations. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pages 6818–6825.
* Manning et al. (2014) Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In _Association for Computational Linguistics (ACL) System Demonstrations_ , pages 55–60.
* Manusov (2014) Valerie Lynn Manusov. 2014. _The sourcebook of nonverbal measures: Going beyond words_. Psychology Press.
* McCrae (2017) Robert R McCrae. 2017. _The Five-Factor Model across cultures._ Praeger/ABC-CLIO.
* McCrae and Costa (1987) Robert R McCrae and Paul T Costa. 1987. Validation of the five-factor model of personality across instruments and observers. _Journal of personality and social psychology_ , 52(1):81.
* Mittal et al. (2020) Trisha Mittal, Uttaran Bhattacharya, Rohan Chandra, Aniket Bera, and Dinesh Manocha. 2020. M3er: Multiplicative multimodal emotion recognition using facial, textual, and speech cues. In _AAAI_ , pages 1359–1367.
* Moers (2005) Frank Moers. 2005. Discretion and bias in performance evaluation: the impact of diversity and subjectivity. _Accounting, Organizations and Society_ , 30(1):67–80.
* Montague (2012) Ryan R Montague. 2012. Genuine dialogue: Relational accounts of moments of meeting. _Western Journal of Communication_ , 76(4):397–416.
* Moore (2015) Robert J Moore. 2015. Automated transcription and conversation analysis. _Research on Language and Social Interaction_ , 48(3):253–270.
* Müller et al. (2018) Philipp Müller, Michael Xuelin Huang, and Andreas Bulling. 2018. Detecting low rapport during natural interactions in small groups from non-Verbal behaviour. In _23rd International Conference on Intelligent User Interfaces_ , pages 153–164. ACM.
* Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In _Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)_ , pages 1532–1543.
* Poria et al. (2017) Soujanya Poria, Erik Cambria, Rajiv Bajpai, and Amir Hussain. 2017. A review of affective computing: From unimodal analysis to multimodal fusion. _Information Fusion_ , 37:98–125.
* Quaglio (2008) Paulo Quaglio. 2008. Television dialogue and natural conversation: Linguistic similarities and functional differences. _Corpora and discourse: The challenges of different settings_ , pages 189–210.
* Quertemont (2011) Etienne Quertemont. 2011. How to statistically show the absence of an effect. _Psychologica Belgica_ , 51(2):109–127.
* Roberts and Francis (2013) Felicia Roberts and Alexander L Francis. 2013. Identifying a temporal threshold of tolerance for silent gaps after requests. _The Journal of the Acoustical Society of America_ , 133(6):EL471–EL477.
* Ryokai et al. (2018) Kimiko Ryokai, Elena Durán López, Noura Howell, Jon Gillick, and David Bamman. 2018. Capturing, Representing, and Interacting with Laughter. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_ , page 358. ACM.
* Sala et al. (2002) Fabio Sala, Edward Krupat, and Debra Roter. 2002. Satisfaction and the use of humor by physicians and patients. _Psychology and Health_ , 17(3):269–280.
* Scheflen (1964) Albert E Scheflen. 1964. The significance of posture in communication systems. _Psychiatry_ , 27(4):316–331.
* Sen et al. (2017) Taylan Sen, Mohammad Rafayet Ali, Mohammed Ehsan Hoque, Ronald Epstein, and Paul Duberstein. 2017. Modeling doctor-patient communication with affective text analysis. _2017 7th International Conference on Affective Computing and Intelligent Interaction, ACII 2017_ , 2018-Janua:170–177.
* Sexton et al. (1996) Harold C Sexton, Kristin Hembre, and Guri Kvarme. 1996. The interaction of the alliance and therapy microprocess: A sequential analysis. _Journal of Consulting and Clinical Psychology_ , 64(3):471.
* Tickle-Degnen and Rosenthal (1990) Linda Tickle-Degnen and Robert Rosenthal. 1990. The nature of rapport and its nonverbal correlates. _Psychological inquiry_ , 1(4):285–293.
* Travaline et al. (2005) John M Travaline, Robert Ruchinskas, and Gilbert E D’Alonzo Jr. 2005. Patient-physician communication: why and how. _Journal of the American Osteopathic Association_ , 105(1):13.
* Vokaturi (2019) Vokaturi. 2019. Vokaturi Overview.
* Wu et al. (2020) Kaihang Wu, Chunfeng Liu, and Rafael A Calvo. 2020. Automatic Nonverbal Mimicry Detection and Analysis in Medical Video Consultations. _International Journal of Human–Computer Interaction_ , pages 1–14.
* Yang et al. (2016) Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016\. Hierarchical attention networks for document classification. In _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 1480–1489.
* Zadeh et al. (2018) Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018. Memory fusion network for multi-view sequential learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 32.
* Zhao et al. (2016) Ran Zhao, Tanmay Sinha, Alan W Black, and Justine Cassell. 2016. Socially-aware virtual agents: Automatically assessing dyadic rapport from temporal patterns of behavior. In _International conference on intelligent virtual agents_ , pages 218–233. Springer.
## Appendices
Table 5: Best HAN configurations for the development set.

Config. | Batch Size | Num. of GRU | Learning Rate | GRU dropout | GRU recurrent dropout | L2 regularization | Epoch
---|---|---|---|---|---|---|---
$H$ | 19 | 42 | 0.010 | 0.10 | 0.23 | $1\times 10^{-4}$ | 223
$DH$ | 11 | 46 | 0.010 | 0.07 | 0.09 | $3\times 10^{-6}$ | 74
$PAH$ | 14 | 47 | 0.005 | 0.16 | 0.50 | $2\times 10^{-5}$ | 329
$APMH$ | 8 | 44 | 0.005 | 0.29 | 0.16 | $1\times 10^{-3}$ | 275
$APSMH$ | 9 | 43 | 0.005 | 0.16 | 0.48 | $4\times 10^{-5}$ | 305
$DAPSMH$ | 14 | 41 | 0.010 | 0.49 | 0.48 | $2\times 10^{-5}$ | 138
$D\textprime A\textprime P\textprime$ | 19 | 46 | 0.004 | 0.06 | 0.50 | $1\times 10^{-4}$ | 260
$v$ | 16 | 40 | 0.009 | 0.15 | 0.09 | $2\times 10^{-5}$ | 316
$va$ | 13 | 43 | 0.007 | 0.13 | 0.48 | $1\times 10^{-6}$ | 347
$vp$ | 8 | 42 | 0.006 | 0.13 | 0.05 | $2\times 10^{-5}$ | 310
$vpa$ | 9 | 48 | 0.010 | 0.45 | 0.46 | $1\times 10^{-5}$ | 349
$va\textprime$ | 12 | 40 | 0.006 | 0.11 | 0.30 | $1\times 10^{-4}$ | 346
$vp\textprime$ | 11 | 42 | 0.007 | 0.44 | 0.19 | $2\times 10^{-5}$ | 341
$vp\textprime a\textprime$ | 10 | 45 | 0.008 | 0.31 | 0.41 | $4\times 10^{-6}$ | 267
$H$-$vpa$ | 8 | 42 | 0.005 | 0.38 | 0.33 | $2\times 10^{-5}$ | 346
$DH$-$vpa$ | 12 | 44 | 0.009 | 0.25 | 0.14 | $1\times 10^{-5}$ | 316
$PAH$-$vpa$ | 11 | 47 | 0.005 | 0.08 | 0.49 | $5\times 10^{-5}$ | 349
$APMH$-$vpa$ | 18 | 46 | 0.008 | 0.13 | 0.50 | $1\times 10^{-5}$ | 339
$APSMH$-$vpa$ | 9 | 43 | 0.010 | 0.13 | 0.21 | $2\times 10^{-6}$ | 240
$DAPSMH$-$vpa$ | 15 | 46 | 0.009 | 0.15 | 0.50 | $2\times 10^{-5}$ | 340
$D\textprime A\textprime P\textprime$ \- $vp\textprime a\textprime$ | 13 | 46 | 0.008 | 0.26 | 0.16 | $1\times 10^{-5}$ | 262
Table 6: Best Tree configurations for the development set.

Config. | Min. split | Max. depth | cp
---|---|---|---
$H$ | 27 | 17 | $3.13\times 10^{-6}$
$DH$ | 72 | 18 | $1.14\times 10^{-6}$
$PAH$ | 70 | 15 | $8.84\times 10^{-5}$
$APMH$ | 72 | 18 | $1.14\times 10^{-6}$
$APSMH$ | 68 | 14 | $5.26\times 10^{-5}$
$DAPSMH$ | 37 | 10 | $2.94\times 10^{-5}$
$D\textprime A\textprime P\textprime$ | 68 | 21 | $3.74\times 10^{-5}$
### A Tuning procedure
We tuned the SGD optimizer with a learning rate between 0.003 and 0.010, a batch size between 4 and 20, and L2 regularization between $10^{-6}$ and $10^{-3}$, and trained for up to 350 epochs without early stopping. We tuned the number of gated recurrent units (GRU) Cho et al. (2014) between 40 and 49 in both the word-level and talkturn-level layers, with both the GRU dropout and recurrent dropout Gal and Ghahramani (2016) between 0.05 and 0.50. Hyperparameters were chosen through uniform sampling between the above-mentioned bounds, except for the learning rate, where log-uniform sampling was used. Training was performed on an RTX2070 GPU or a V100 GPU.
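The sampling scheme can be sketched as follows (a minimal stand-alone illustration of uniform versus log-uniform draws within the stated bounds):

```python
import math
import random

random.seed(0)

def sample_config():
    # Uniform sampling within the stated bounds; the learning rate alone
    # is drawn log-uniformly.
    return {
        "learning_rate": math.exp(
            random.uniform(math.log(0.003), math.log(0.010))
        ),
        "batch_size": random.randint(4, 20),
        "num_gru": random.randint(40, 49),
        "gru_dropout": random.uniform(0.05, 0.50),
        "recurrent_dropout": random.uniform(0.05, 0.50),
        "l2": random.uniform(1e-6, 1e-3),
    }

trials = [sample_config() for _ in range(20)]
print(len(trials))
```

Log-uniform sampling spreads trials evenly across orders of magnitude, which suits a learning rate whose useful range spans a multiplicative scale.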
### B Hyperparameter configurations for best-performing models
Table 5 (HAN) and Table 6 (Tree) report the hyperparameter configurations for each of the best-performing models reported in Table 4.
### C Performance of additional tuning
We conducted additional experiments on the tree configurations to (1) compare the improvements in performance when tuning the HAN and the tree, and (2) evaluate the increase in performance when the tree is allowed twenty more hyperparameter random-search trials (Fig. 4).
Figure 4: Best cumulative AUC performance given N random search trials.
From the larger increases in the HAN performances, it is plausible that the HAN is more sensitive to hyperparameter tuning than the tree.
### D Additional tests for additions to the fine narratives
Table 7 reports the additional tests on the impact of the added fine features.
We observe that whilst all three input configurations ($va$, $vp$, $vpa$) have
small increases in performance, none of them are statistically significant.
Table 7: Summary of the model performances for the fine narratives. We report the average five-fold cross-validation AUC and its standard deviation in brackets. Row-wise, we begin with the $v$ configuration to show the impact of fine multi-modal annotations over the verbatim transcript. Then, we show the impact of the additions (Q1) over the existing fine annotations from Kim et al. (2019b) using column-wise comparisons. Asterisks (*) indicate significance relative to the $v$ row. Carets (^) indicate significance relative to column-wise comparisons; we also provide the confidence intervals in square brackets [] for the difference in performance. The number of symbols indicates the level of statistical significance, e.g., ***: 0.01, **: 0.05, *: 0.10.

Config. | Existing inputs | New inputs | Significance of Difference (existing vs. new)
---|---|---|---
$v$ | 0.617 (0.053) | N/A | N/A
$vp\textprime$ | 0.630 (0.037) | 0.636 (0.055) | [-0.062, 0.074]
$va\textprime$ | 0.616 (0.055) | 0.622 (0.033) | [-0.060, 0.072]
$vp\textprime a\textprime$ | 0.630 (0.038) | 0.648 (0.027) | [-0.030, 0.066]
### E Conversation thumbnail visualization
By illustrating the talkturn-level attention weights as a heatmap thumbnail (Fig. 5), the analyst can quickly get a sense of the important segments of the conversation without reading the content, and zoom in if required.
Figure 5: Heatmap thumbnail. Darker blue indicates higher talkturn attention
weights.
### F Jefferson example
As an optional reference, we engaged a professional transcriptionist to
transcribe the conversation segment presented (Fig. 3) using the Jefferson
system. The Jefferson example is presented in Fig. 6. The verbal content is
slightly different due to (1) different methods to determine talkturn transitions and (2) automatic speech recognition accuracy.
Figure 6: Jefferson transcription example. : (colon) - stretched sound; (0.2)
- a pause of 0.2 seconds; .hhh - in breath, .h - short in breath; $\uparrow$
\- Rise in intonation; underline - emphasis; $<>$ \- slowed speech rate, $><$
\- quickened speech rate; $[]$ \- overlapping speech.
Departamento de Física, Instituto Tecnológico de Aeronáutica, DCTA, 12228-900, São José dos Campos, SP, Brazil
# Thermodynamical phases in a PNJL model at zero temperature
O. A. Mattos, T. Frederico, and O. Lourenço
###### Abstract
The confinement/deconfinement transition described by the Polyakov-Nambu-Jona-Lasinio (PNJL) model is extended to be operative in the zero temperature regime. In this study, the scalar and vector channel interaction strengths of the original PNJL model are modified by introducing a dependence on the traced Polyakov loop. In this way the effective interactions depend on the quark phase and in turn provide a backreaction of the quarks on the gluonic sector, also at zero temperature. On general grounds from quantum chromodynamics this is an expected feature. The thermodynamics of the extended model (PNJL0) is studied in detail. With a suitable choice of the Polyakov potential, it presents a first order confined/deconfined quark phase transition even at $T=0$. We also show that the vector channel plays an important role in allowing $\Phi\neq 0$ solutions for the PNJL0 model. Furthermore, the sensitivity of the combined quarkyonic and deconfinement phases to the vector interaction strength and to the proposed parametrization of the Polyakov-loop potential at $T=0$ allowed us to set a window for the bulk values of the relevant parameters.
## 1 Introduction
The physics of strongly interacting matter is described by Quantum
Chromodynamics (QCD) Weinberg ; Fritzsch ; Huang with quarks and gluons as
the fundamental degrees of freedom, while the colorless hadrons are emergent
phenomena originated from the complex nonperturbative physics of this theory.
Although QCD is well established, its nonperturbative aspects require a nontrivial treatment and are far from understood, which is critical for the description of the equation of state of strongly interacting matter at finite densities and low temperatures.
The challenge is to compute hadronic observables when the strength of the QCD
running coupling presents the well known strong infrared enhancement to embody
spontaneous chiral symmetry breaking and at the same time quark and gluon
confinement. These properties are the opposite of what is known in Quantum Electrodynamics (QED), where perturbative methods are successful and the fundamental degrees of freedom appear as asymptotic states. That is not the case for QCD, where quarks and gluons are confined.
Ab-initio nonperturbative approaches, such as Lattice Quantum Chromodynamics (LQCD) Kogut1 ; Kogut2 ; Rothe , are called for to solve QCD. This well known method starts with the QCD action, $S_{\mbox{\tiny QCD}}$, evaluates the generating functional $Z=\int\mathcal{D}A\mathcal{D}\psi\mathcal{D}\bar{\psi}e^{iS_{\mbox{\tiny QCD}}}$ by numerical simulations, and finally accesses the matrix elements of the relevant operators between hadronic states. Such a procedure is performed through both space-time discretization and Wick rotation, which changes $Z$ to $Z^{\prime}=\int\mathcal{D}A\mathcal{D}\psi\mathcal{D}\bar{\psi}e^{-S_{\mbox{\tiny QCD}}}$, i.e., putting it in correspondence with a Statistical Mechanics formulation. The numerical calculations are performed with Monte Carlo simulations. Despite being powerful, LQCD faces some intrinsic problems, such as the need to extrapolate the outcomes to vanishing lattice spacing, the need for powerful dedicated computational facilities, and the “fermion sign problem” Muroya .
Another nonperturbative continuum method to treat QCD is through Dyson-
Schwinger equations (DSE) Roberts ; Alkofer . They are derived from the
generating functional $Z$ and allow one to obtain the equations of motion of the n-point functions, also known as the Euler-Lagrange equations for the QCD Green's functions. Both LQCD and the standard DSE methods have the issue of being formulated in Euclidean space. However, access to all the observables obtained from QCD requires their representation in Minkowski space, which calls for a careful and not yet fully known analytical extension, or other particular methods, to obtain, for example, the light-front wave function of the hadronic state. A possible alternative to circumvent this problem is to use the Nakanishi integral representation, built directly in Minkowski space, and to solve the DSE or even the Bethe-Salpeter equation with it Paula ; Frederico ; Pimentel . Sum rules Shifman ; REINDERS , and the
connection between gauge and string theories also aim to treat QCD in the
infrared region. This last method was introduced by Gerard ’t Hooft HOOFT by
suggesting the analytical calculation of amplitudes by using string theories.
A clear mapping between the two theories was proposed by Juan Maldacena
MALDACENA with the famous Anti-de Sitter/Conformal Field Theory (AdS/CFT)
conjecture.
Another practical possibility to incorporate the infrared physics is the use of effective quark models, built with the aim of reproducing most of the QCD
phenomenology, as for instance, their symmetries and also dynamical chiral
symmetry breaking. By exploiting such approaches, many models were developed.
The Nambu-Jona-Lasinio (NJL) model Nambu1 ; Nambu2 ; buballa ; Vogl ;
Klevansky ; Hatsuda ; ric1 ; ric2 ; ric3 is an example. Its improved version,
namely, the Polyakov-Nambu-Jona-Lasinio (PNJL) model FUKUSHIMA ; fukushima3 ;
fuku1 ; fuku2 ; weise1 ; weise2 ; weise4 ; weise6 ; ratti ; costa ; scoccola ;
nosso1 ; nosso2 ; nosso3 ; nosso4 includes effects of the confinement/deconfinement phase transition, introduced in the theory through an effective field of gluonic origin, namely, the Polyakov loop ($\Phi$ and $\Phi^{*}$)
POLYAKOV ; SUSSKIND ; SVETITSKY1 ; SVETITSKY2 . Even though the PNJL model shows a clear improvement over the original NJL model with respect to the QCD phase structure, the Polyakov loop decouples from the baryonic fields in the zero temperature regime. This missing interaction is evident from analyzing, for instance, the pressure ($P$) and the energy density ($\mathcal{E}$) of PNJL models at low temperatures. The Polyakov potential, $\mathcal{U}(\Phi,\Phi^{*},T)$, vanishes at $T=0$ in most parametrizations of this quantity. Therefore, the equations of state of the NJL model are recovered at $T=0$ and, consequently, the physics of confinement implemented at $T\neq 0$ in PNJL models is simply lost.
In this work we explore the approach started with the initial study of Ref.
mattos and present the thermodynamics of a PNJL model at zero temperature,
named here as PNJL0 model. In the following, Sec. 2, we briefly review the
basics of the original PNJL model. Its version at $T=0$, along with its main
results, are discussed in Sec. 3. Finally, we present the summary and
concluding remarks of our study in Sec. 4.
## 2 Polyakov-Nambu-Jona-Lasinio model
The PNJL model was first proposed in Ref. FUKUSHIMA as a generalization of the original NJL model that includes confinement effects. From this point of view, the PNJL model is an effective model describing QCD more realistically than the previous NJL version.
Basically, the gluon dynamics is implemented in the NJL model by replacing the
derivative $\partial^{\mu}$ by $D^{\mu}\equiv\partial^{\mu}+iA^{\mu}$ where
$A^{\mu}=\delta^{\mu}_{0}A_{0}$ and $A_{0}=gA_{\alpha}^{0}\lambda_{\alpha}/2$
($g$ is the gauge coupling and $\lambda_{\alpha}$ are the Gell-Mann matrices).
The Lagrangian density of the SU(2) PNJL model is then written as
$\mathcal{L}_{\mbox{\tiny PNJL}}=\bar{\psi}(i\gamma_{\mu}D^{\mu}-m)\psi+G_{s}\left[(\bar{\psi}\psi)^{2}-(\bar{\psi}\gamma_{5}\tau\psi)^{2}\right]-G_{V}(\bar{\psi}\gamma_{\mu}\psi)^{2}-\mathcal{U}(\Phi,\Phi^{*},T),\qquad(1)$
with $m$ being the current quark mass (in our case $m=m_{u}=m_{d}$). In this formulation, we also add a vector channel, regulated by the coupling constant $G_{V}$.
Another clear difference between PNJL and NJL models is the inclusion of the
Polyakov potential $\mathcal{U}(\Phi,\Phi^{*},T)$ that depends on the traced
Polyakov loop and its conjugate, $\Phi$ and $\Phi^{*}$, respectively. $\Phi$
is defined in terms of $A_{4}=iA_{0}\equiv T\phi$ as
$\displaystyle\Phi=\frac{1}{3}\rm{Tr}\left[\exp\left(i\int_{0}^{1/T}d\tau\,A_{4}\right)\right]=\frac{1}{3}\rm{Tr}\left[\exp(i\phi)\right]=\frac{1}{3}\rm{Tr}\left\{\exp[i(\phi_{3}\lambda_{3}+\phi_{8}\lambda_{8})]\right\}=\frac{1}{3}\left[e^{i(\phi_{3}+\phi_{8}/\sqrt{3})}+e^{i(-\phi_{3}+\phi_{8}/\sqrt{3})}+e^{-2i\phi_{8}/\sqrt{3}}\right],$ (2)
in a gauge (Polyakov gauge) in which the gluon field is written in terms of
the diagonal Gell-Mann matrices as
$\phi=\phi_{3}\lambda_{3}+\phi_{8}\lambda_{8}$, with
$\phi_{3},\phi_{8}\in\mathbb{R}$ (the definitions $\phi_{3}=A_{4}^{3}/T$ and
$\phi_{8}=A_{4}^{8}/T$ were taken into account). Here we also use the mean-
field approximation described in Refs. weise4 ; weise6 ; nosso3 that
considers the mean-field configuration in which $\phi_{8}=0$ in Eq. (2). In
this case, $\Phi=\Phi^{*}=[2\cos(\phi_{3})+1]/3$ even for nonvanishing quark
chemical potentials ($\mu$).
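The closed form $\Phi=[2\cos(\phi_{3})+1]/3$ can be checked directly against the matrix definition of Eq. (2). A minimal numerical sketch (our own illustration, using SciPy's matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

# Gell-Mann lambda_3 (diagonal); Polyakov gauge with phi_8 = 0
lam3 = np.diag([1.0, -1.0, 0.0])

def Phi_trace(phi3):
    """Traced Polyakov loop from Eq. (2): (1/3) Tr exp(i phi_3 lambda_3)."""
    return np.trace(expm(1j * phi3 * lam3)).real / 3.0

def Phi_closed(phi3):
    """Closed form quoted in the text: [2 cos(phi_3) + 1] / 3."""
    return (2.0 * np.cos(phi3) + 1.0) / 3.0

for phi3 in (0.0, 0.7, 1.3, np.pi):
    assert abs(Phi_trace(phi3) - Phi_closed(phi3)) < 1e-12
```

At $\phi_{3}=0$ the trace gives $\Phi=1$ (deconfined limit), while $\phi_{3}=2\pi/3$ gives $\Phi=0$.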
The thermodynamics of this model is obtained through the calculation of its
grand canonical potential density, namely $\Omega_{\mbox{\tiny
PNJL}}=-T\mbox{ln}(Z_{\mbox{\tiny PNJL}})/V$, with $Z_{\mbox{\tiny PNJL}}$
being the partition function of the model. The final expression is given by
FUKUSHIMA ; weise1 ; weise2 ; rossner1 ; rossner2
$\displaystyle\Omega_{\mbox{\tiny PNJL}}=\mathcal{U}(\Phi,T)+G_{s}\rho_{s}^{2}-G_{V}\rho^{2}-\frac{\gamma}{2\pi^{2}}\int_{0}^{\Lambda}E\,k^{2}dk-\frac{\gamma T}{2\pi^{2}N_{c}}\int_{0}^{\infty}\mbox{ln}\Big[1+3\Phi e^{-(E-\mu)/T}+3\Phi e^{-2(E-\mu)/T}+e^{-3(E-\mu)/T}\Big]k^{2}dk-\frac{\gamma T}{2\pi^{2}N_{c}}\int_{0}^{\infty}\mbox{ln}\Big[1+3\Phi e^{-(E+\mu)/T}+3\Phi e^{-2(E+\mu)/T}+e^{-3(E+\mu)/T}\Big]k^{2}dk,$ (3)
with $E=(k^{2}+{M}^{2})^{1/2}$,
$\rho_{s}=\left<\bar{\psi}\psi\right>=\left<\bar{u}u\right>+\left<\bar{d}d\right>=2\left<\bar{u}u\right>\,$
and the degeneracy factor given by $\gamma=N_{s}\times N_{f}\times
N_{c}=12\,$, due to the spin, flavor and color numbers, respectively
($N_{s}=N_{f}=2$ and $N_{c}=3$). The quantity $\Lambda$ defines the cutoff of
the divergent integral. As in the NJL model, the constituent quark mass $M$ is
given in terms of the quark condensate $\rho_{s}$ as
$\displaystyle M=m-2G_{s}\rho_{s},$ (4)
with $\rho_{s}$, obtained by the condition $\partial\Omega_{\mbox{\tiny
PNJL}}/\partial\rho_{s}=0$, given by
$\displaystyle\rho_{s}=\frac{\gamma}{2\pi^{2}}\int_{0}^{\infty}\frac{M}{E(M)}k^{2}dk\left[f(k,T,\Phi)+\bar{f}(k,T,\Phi)\right]-\frac{\gamma}{2\pi^{2}}\int_{0}^{\Lambda}\frac{M}{E(M)}k^{2}dk.$ (5)
The functions $f(k,T,\Phi)$ and $\bar{f}(k,T,\Phi)$, given as follows,
$\displaystyle f(k,T,\Phi)=\frac{\Phi e^{2(E-\mu)/T}+2\Phi e^{(E-\mu)/T}+1}{3\Phi e^{2(E-\mu)/T}+3\Phi e^{(E-\mu)/T}+e^{3(E-\mu)/T}+1}$ (6)
and
$\displaystyle\bar{f}(k,T,\Phi)=\frac{\Phi e^{2(E+\mu)/T}+2\Phi e^{(E+\mu)/T}+1}{3\Phi e^{2(E+\mu)/T}+3\Phi e^{(E+\mu)/T}+e^{3(E+\mu)/T}+1},$ (7)
are the generalized Fermi-Dirac distributions, also used to obtain the quark
density through
$\displaystyle\rho=-\frac{\partial\Omega_{\mbox{\tiny PNJL}}}{\partial\mu}=\frac{\gamma}{2\pi^{2}}\int_{0}^{\infty}k^{2}dk\left[f(k,T,\Phi)-\bar{f}(k,T,\Phi)\right].$ (8)
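The limits quoted in the text for these generalized distributions can be verified numerically: for $\Phi=1$ the numerator and denominator of Eq. (6) factor as $(e^{x}+1)^{2}$ and $(e^{x}+1)^{3}$ with $x=(E-\mu)/T$, so $f$ collapses to the ordinary Fermi-Dirac function, while at small $T$ it approaches a step function. A short sketch (our own check):

```python
import math

def f_gen(E, mu, T, Phi):
    """Generalized Fermi-Dirac distribution of Eq. (6)."""
    x = (E - mu) / T
    num = Phi * math.exp(2 * x) + 2 * Phi * math.exp(x) + 1.0
    den = 3 * Phi * math.exp(2 * x) + 3 * Phi * math.exp(x) + math.exp(3 * x) + 1.0
    return num / den

# Phi = 1 (deconfined): numerator (e^x + 1)^2 over denominator (e^x + 1)^3,
# so f collapses to the ordinary Fermi-Dirac function 1 / (e^x + 1)
for E in (350.0, 400.0, 450.0):
    fd = 1.0 / (math.exp((E - 400.0) / 50.0) + 1.0)
    assert abs(f_gen(E, 400.0, 50.0, 1.0) - fd) < 1e-12

# T -> 0: f approaches the step function theta(mu - E) for any fixed Phi
assert abs(f_gen(300.0, 400.0, 1.0, 0.3) - 1.0) < 1e-6
assert f_gen(500.0, 400.0, 1.0, 0.3) < 1e-6
```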
Note the similarity between the grand canonical potentials of the PNJL and NJL
models, where in the former there is the replacement of the usual Fermi-Dirac
functions of quarks and antiquarks by the generalized functions given in Eqs. (6) and (7). Furthermore, in the PNJL model there is also the inclusion of an
effective gluon potential represented by $\mathcal{U}(\Phi,T)$ in the grand
canonical potential density.
The effective scalar field $\Phi$ is found through the solution of
$\partial\Omega_{\mbox{\tiny PNJL}}/\partial\Phi=0$. This quantity is determined simultaneously with $M$, which is found from Eqs. (4) and (5), or
equivalently, through the condition $\partial\Omega_{\mbox{\tiny
PNJL}}/\partial\rho_{s}=0$. Pressure and energy density are obtained from Eq.
(3) as
$\displaystyle P_{\mbox{\tiny PNJL}}(\rho,T)=-\Omega_{\mbox{\tiny PNJL}}=-\mathcal{U}(\Phi,T)+G_{V}\rho^{2}-G_{s}\rho_{s}^{2}+\frac{\gamma}{2\pi^{2}}\int_{0}^{\Lambda}(k^{2}+M^{2})^{1/2}\,k^{2}dk+\mathcal{E}_{\rm vac}+\frac{\gamma}{6\pi^{2}}\int_{0}^{\infty}\frac{k^{4}}{(k^{2}+M^{2})^{1/2}}dk\left[f(k,T,\Phi)+\bar{f}(k,T,\Phi)\right]$ (9)
and
$\displaystyle\mathcal{E}_{\mbox{\tiny PNJL}}(\rho,T)=-T^{2}\frac{\partial(\Omega_{\mbox{\tiny PNJL}}/T)}{\partial T}+\mu\rho=\mathcal{U}(\Phi,T)-T\frac{\partial\mathcal{U}}{\partial T}+G_{V}\rho^{2}+G_{s}\rho_{s}^{2}-\frac{\gamma}{2\pi^{2}}\int_{0}^{\Lambda}(k^{2}+M^{2})^{1/2}\,k^{2}dk-\mathcal{E}_{\rm vac}+\frac{\gamma}{2\pi^{2}}\int_{0}^{\infty}(k^{2}+M^{2})^{1/2}\,k^{2}dk\left[f(k,T,\Phi)+\bar{f}(k,T,\Phi)\right],$ (10)
respectively, with the vacuum quantity $\mathcal{E}_{\rm vac}$ included in the
equations in order to ensure $P=\mathcal{E}=0$ at vanishing temperature and
quark density. Finally, the entropy density can be obtained from
$\mathcal{S}_{\mbox{\tiny PNJL}}=-\partial\Omega_{\mbox{\tiny PNJL}}/\partial
T$, or from $\mathcal{S}_{\mbox{\tiny PNJL}}=(P_{\mbox{\tiny
PNJL}}+\mathcal{E}_{\mbox{\tiny PNJL}}-\mu\rho)/T\,$. The thermodynamics of
the PNJL model is quantitatively defined once the potential
$\mathcal{U}(\Phi,T)$, and the constants $G_{s}$, $\Lambda$ and $m$ are
chosen. These last quantities are the same as the ones obtained in the quark sector (NJL model), with $G_{V}$ being a free parameter.
## 3 PNJL model at zero temperature (PNJL0)
### 3.1 Construction of the model
It is worth noticing that the limit of vanishing temperature in Eqs. (9) and
(10) leads to
$\displaystyle P_{\mbox{\tiny PNJL}}(\rho,0)=G_{V}\rho^{2}-G_{s}\rho^{2}_{s}+\frac{\gamma}{2\pi^{2}}\int^{\Lambda}_{0}dk\,k^{2}(k^{2}+M^{2})^{1/2}+\frac{\gamma}{6\pi^{2}}\int_{0}^{k_{F}}dk\frac{k^{4}}{(k^{2}+M^{2})^{1/2}}+\mathcal{E}_{\rm vac}=-\Omega_{\mbox{\tiny PNJL}}(\rho,0)$ (11)
and
$\displaystyle\mathcal{E}_{\mbox{\tiny PNJL}}(\rho,0)=G_{V}\rho^{2}+G_{s}\rho^{2}_{s}-\mathcal{E}_{\rm vac}-\frac{\gamma}{2\pi^{2}}\int_{k_{F}}^{\Lambda}dk\,k^{2}(k^{2}+M^{2})^{1/2},$ (12)
with $\mu=2G_{V}\rho+(k_{F}^{2}+M^{2})^{1/2}$, i.e., at $T=0$ one has
$P_{\mbox{\tiny PNJL}}(\rho)=P_{\mbox{\tiny NJL}}(\rho)$ and
$\mathcal{E}_{\mbox{\tiny PNJL}}(\rho)=\mathcal{E}_{\mbox{\tiny NJL}}(\rho)$,
with $P_{\mbox{\tiny NJL}}(\rho)$ and $\mathcal{E}_{\mbox{\tiny NJL}}(\rho)$
being the pressure and energy density, respectively, of the original NJL model
at zero temperature Nambu1 ; Nambu2 ; buballa ; Vogl ; Klevansky ; Hatsuda .
Therefore, the confinement physics from the Polyakov potential is lost at $T=0$. This problem has two causes. The first is that the generalized Fermi-Dirac distributions, Eqs. (6)-(7), become the traditional step function $\theta(k_{F}-k)$ at $T=0$ ($k_{F}$ is the quark Fermi momentum). The second is that the gluonic contribution of the PNJL model, described by the Polyakov potential $\mathcal{U}(\Phi,T)$, vanishes at $T=0$ in the best-known versions.
The three best-known forms of the Polyakov potential, namely RTW05 weise1 , RRW06 weise2 ; weise4 , and FUKU08 fukushima3 , are given respectively by (taking $\Phi=\Phi^{*}$)
$\displaystyle\frac{\mathcal{U}_{\mbox{\tiny RTW05}}}{T^{4}}=-\frac{b_{2}(T)}{2}\Phi^{2}-\frac{b_{3}}{3}\Phi^{3}+\frac{b_{4}}{4}\Phi^{4},$ (13)
$\displaystyle\frac{\mathcal{U}_{\mbox{\tiny RRW06}}}{T^{4}}=-\frac{b_{2}(T)}{2}\Phi^{2}+b_{4}(T)\,\mbox{ln}(1-6\Phi^{2}+8\Phi^{3}-3\Phi^{4}),$ (14)
$\displaystyle\frac{\mathcal{U}_{\mbox{\tiny FUKU08}}}{b\,T}=-54\,e^{-a/T}\Phi^{2}+\mbox{ln}(1-6\Phi^{2}+8\Phi^{3}-3\Phi^{4}),$ (15)
with
$\displaystyle
b_{2}(T)=a_{0}+a_{1}\left(\frac{T_{0}}{T}\right)+a_{2}\left(\frac{T_{0}}{T}\right)^{2}+a_{3}\left(\frac{T_{0}}{T}\right)^{3},$
(16)
and
$\displaystyle b_{4}(T)=b_{4}\left(\frac{T_{0}}{T}\right)^{3}.$ (17)
These potentials are constructed to reproduce data from lattice calculations
of the pure gauge sector concerning the temperature dependence of $\Phi$ and
its first order phase transition, characterized by the jump of $\Phi$ from
zero to a finite value at $T_{0}=270$ MeV. In Eqs. (13)-(15), $a$, $b$,
$a_{0}$, $a_{1}$, $a_{2}$, $a_{3}$, $b_{3}$ and $b_{4}$ are dimensionless free
parameters. Notice that for all these potentials one has $\mathcal{U}=0$ at $T=0$. This phenomenology forces the thermodynamics of the PNJL model to reduce to the NJL one at zero temperature. One exception to this result is the DS10
Schramm1 ; Schramm2 potential given by
$\displaystyle\mathcal{U}_{\mbox{\tiny DS10}}=(a_{0}T^{4}+a_{1}\mu^{4}+a_{2}T^{2}\mu^{2})\Phi^{2}+\mathcal{U}_{0}(\Phi)$ (18)
with
$\displaystyle\mathcal{U}_{0}(\Phi)\equiv
a_{3}T_{0}^{4}\mbox{ln}(1-6\Phi^{2}+8\Phi^{3}-3\Phi^{4}).$ (19)
Our aim is to avoid the lack of confinement physics in the PNJL model at $T=0$
by taking into account the effects of the traced Polyakov loop $\Phi$ in both
the Polyakov potential and the effective interaction between the quarks, as
performed in a previous initial investigation mattos where preliminary results were presented. The idea here is to introduce the traced Polyakov loop in the NJL equations of state by imposing vanishing scalar and vector couplings as the quarks become deconfined, a situation predicted to occur as $\Phi\rightarrow 1$. This phenomenology can be achieved by making the scalar
and vector coupling strengths dependent on $\Phi$ as follows,
$\displaystyle G_{s}\longrightarrow G_{s}(\Phi)=G_{s}(1-\Phi^{2}),$ (20)
and
$\displaystyle G_{V}\longrightarrow G_{V}(\Phi)=G_{V}(1-\Phi^{2}).$ (21)
In fact, such changes can be seen as a simpler version of the Entanglement
PNJL (EPNJL) epnjl . The implementation of Eqs. (20)-(21) in the PNJL/NJL
model at $T=0$ still demands the determination of the possible $\Phi$ values,
obtained through $\partial\Omega_{\mbox{\tiny PNJL}}/\partial\Phi=0$. However,
the replacement of $G_{s}$ and $G_{V}$ by their $\Phi$ dependent versions is
not enough to ensure $\Phi\neq 0$ solutions, i.e., the NJL model is recovered
once more. In order to avoid this problem, besides the modifications proposed
in Eqs. (20)-(21) we also add to $\Omega_{\mbox{\tiny PNJL}}(\rho,0)$ the term
$\mathcal{U}_{0}(\Phi)$ given by Eq. (19), inspired by the
$\mathcal{U}_{\mbox{\tiny DS10}}$ Polyakov potential, with $T_{0}=190$ MeV
(value very often used in the Polyakov potentials of the PNJL models weise1 ).
As we will discuss later on, the effect of $\mathcal{U}_{0}(\Phi)$ is to
ensure $\Phi\neq 0$ solutions and also limit the traced Polyakov loop in the
range of $0\leqslant\Phi\leqslant 1$. This term was also used in Refs. Schramm1 ; Schramm2 to generate $\Phi\neq 0$, albeit in a much more sophisticated model that takes into account hadron and quark degrees of freedom in the same Lagrangian density.
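One can check numerically why $\mathcal{U}_{0}(\Phi)$ restricts the traced Polyakov loop to $0\leqslant\Phi<1$: the argument of the logarithm in Eq. (19) factorizes as $(1-\Phi)^{3}(1+3\Phi)$, which vanishes as $\Phi\rightarrow 1^{-}$ and is negative beyond it, so for $a_{3}<0$ the term acts as a positive barrier diverging at $\Phi=1$. A short sketch (our own check, with the $a_{3}$ and $T_{0}$ values used in the text):

```python
import numpy as np

a3, T0 = -0.1, 190.0  # values used in the text (T0 in MeV)

def log_arg(Phi):
    """Argument of the logarithm in Eq. (19)."""
    return 1.0 - 6.0 * Phi**2 + 8.0 * Phi**3 - 3.0 * Phi**4

def U0(Phi):
    """Eq. (19): a3 * T0^4 * ln(1 - 6 Phi^2 + 8 Phi^3 - 3 Phi^4)."""
    return a3 * T0**4 * np.log(log_arg(Phi))

Phi = np.linspace(0.0, 0.999, 1000)
# The argument factorizes as (1 - Phi)^3 (1 + 3 Phi), vanishing at Phi = 1
assert np.allclose(log_arg(Phi), (1.0 - Phi)**3 * (1.0 + 3.0 * Phi))
# With a3 < 0 the term is a positive barrier that diverges as Phi -> 1^-
assert U0(0.0) == 0.0
assert U0(0.999) > U0(0.5) > 0.0
```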
The quark couplings given by Eqs. (20)-(21) along with the
$\mathcal{U}_{0}(\Phi)$ potential added to $\Omega_{\mbox{\tiny
PNJL}}(\rho,0)$ lead to the following equations for the pressure and energy
density, respectively,
$\displaystyle P_{\mbox{\tiny PNJL0}}(\rho)=-\mathcal{U}_{\mbox{\tiny PNJL0}}(\rho,\rho_{s},\Phi)+G_{V}\rho^{2}-G_{s}\rho^{2}_{s}+\frac{\gamma}{2\pi^{2}}\int^{\Lambda}_{0}dk\,k^{2}(k^{2}+M^{2})^{1/2}+\mathcal{E}_{\rm vac}+\frac{\gamma}{6\pi^{2}}\int_{0}^{k_{F}}dk\frac{k^{4}}{(k^{2}+M^{2})^{1/2}}=-\Omega_{\mbox{\tiny PNJL0}}(\rho)$ (22)
and
$\displaystyle\mathcal{E}_{\mbox{\tiny PNJL0}}(\rho)=\mathcal{U}_{\mbox{\tiny PNJL0}}(\rho,\rho_{s},\Phi)-2G_{V}\Phi^{2}\rho^{2}+G_{V}\rho^{2}+G_{s}\rho^{2}_{s}-\frac{\gamma}{2\pi^{2}}\int_{k_{F}}^{\Lambda}dk\,k^{2}(k^{2}+M^{2})^{1/2}-\mathcal{E}_{\rm vac},$ (23)
in which it was possible to define a Polyakov potential as
$\displaystyle\mathcal{U}_{\mbox{\tiny PNJL0}}(\rho_{s},\rho,\Phi)=G_{V}\Phi^{2}\rho^{2}-G_{s}\Phi^{2}\rho_{s}^{2}+\mathcal{U}_{0}(\Phi),$ (24)
which includes quarks as a source of $\Phi$, as one should expect from QCD, where quarks are also sources of the gluon field.
The model built above, named the PNJL0 model, has constituent quark mass and quark chemical potential given by
$\displaystyle M=m-2G_{s}(1-\Phi^{2})\rho_{s},$ (25)
and
$\displaystyle\mu=2G_{V}(1-\Phi^{2})\rho+(k_{F}^{2}+M^{2})^{1/2},$ (26)
respectively. Furthermore, the quark density is written in terms of the quark
Fermi momentum as $\rho=(\gamma/6\pi^{2})k_{F}^{3}$, and the quark condensate
reads
$\displaystyle\rho_{s}=-\frac{\gamma M}{2\pi^{2}}\int_{k_{F}}^{\Lambda}dk\frac{k^{2}}{(k^{2}+M^{2})^{1/2}},$ (27)
which plays the role of the scalar quark density.
It is worth noticing that Eqs. (22) and (23) can be seen as the $T\rightarrow 0$ limit of Eqs. (9) and (10). Another important aspect of defining the Polyakov potential $\mathcal{U}_{\mbox{\tiny PNJL0}}(\rho_{s},\rho,\Phi)$ is the presence of the backreaction of quarks on the gluon sector, as we have already pointed out. The inverse backreaction, namely, gluons affecting the quark sector, is already intrinsic to the original PNJL model; see, for instance, the generalized Fermi-Dirac distribution functions in Eqs. (6) and (7). In the PNJL0 model the backreaction is complete (each sector acts on the other): the effective quark interactions vanish in the deconfined phase, and $\mathcal{U}_{0}(\Phi)$ is included in the grand canonical potential to assure confinement physics at $T=0$.
For completeness, we mention that another way of including quark
effects in the gluon sector was given, for example, in Refs. herbst ; schaefer
, in which the authors impose a $N_{f}$ and $\mu$ dependence on $T_{0}$ in the
Polyakov potentials, namely, $T_{0}\rightarrow T_{0}(N_{f},\mu)$.
### 3.2 $\Phi\neq 0$ solutions for the PNJL0 model
The inclusion of $\mathcal{U}_{0}(\Phi)$ in the Polyakov potential given by
Eq. (24) enables the PNJL0 model to present $\Phi\neq 0$ solutions for the
condition $\partial\Omega_{\mbox{\tiny PNJL0}}/\partial\Phi=0$ at zero
temperature. Therefore, it becomes possible to study the deconfinement dynamics in the $T=0$ regime, with the dimensionless constant $a_{3}$ in Eq. (19) regulating this effect.
We investigate how $\Omega_{\mbox{\tiny PNJL0}}$ behaves as a function of $\Phi$, first for $G_{V}=0$. In Fig. 1 we display this thermodynamical potential, obtained from $\Omega_{\mbox{\tiny PNJL0}}=-P_{\mbox{\tiny PNJL0}}$, Eq. (22), with the self-consistent equations (25) and (27) implemented, and without the condition $\partial\Omega_{\mbox{\tiny PNJL0}}/\partial\Phi=0$ taken into account. We also use a parametrization of Ref. buballa , namely, $\Lambda=587.9$ MeV, $G_{s}\Lambda^{2}=2.44$ and $m=5.6$ MeV, for this and all the other cases. Such values produce $M_{\mbox{\tiny vac}}=400$ MeV and $\langle\overline{u}u\rangle^{1/3}_{\mbox{\tiny vac}}=-240.8$ MeV for the vacuum values, and $m_{\pi}=135$ MeV and $f_{\pi}=92.4$ MeV for the pion mass and decay constant, respectively.
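As a consistency check of this parametrization, the vacuum constituent mass follows from the gap equation, Eq. (25) with $\Phi=0$, together with the condensate of Eq. (27) evaluated at $k_{F}=0$. A minimal sketch (our own check; the bracketing interval and tolerance are illustrative):

```python
import math
from scipy.integrate import quad
from scipy.optimize import brentq

# Buballa parametrization quoted in the text (all energies in MeV)
Lam = 587.9               # cutoff
Gs = 2.44 / Lam**2        # scalar coupling, from Gs * Lam^2 = 2.44
m = 5.6                   # current quark mass
gamma = 12                # spin x flavor x color degeneracy

def rho_s_vac(M):
    """Vacuum condensate, Eq. (27) with k_F = 0 and Phi = 0."""
    I, _ = quad(lambda k: k**2 / math.sqrt(k**2 + M**2), 0.0, Lam)
    return -gamma * M * I / (2.0 * math.pi**2)

def gap(M):
    """Gap equation M = m - 2 Gs rho_s, Eq. (25) at Phi = 0, as a root problem."""
    return m - 2.0 * Gs * rho_s_vac(M) - M

M_vac = brentq(gap, 100.0, 700.0)
assert abs(M_vac - 400.0) < 5.0   # reproduces M_vac ~ 400 MeV
```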
Figure 1: $\Omega_{\mbox{\tiny PNJL0}}\times\Phi$ for $G_{V}=0$ with different
values of $a_{3}$. For each panel the quark chemical potential is given by (a)
$\mu=M_{\mbox{\tiny vac}}=400$ MeV, (b) $\mu=415$ MeV, (c) $\mu=580$ MeV, and
(d) $\mu=\Lambda=587.9$ MeV.
Some features can be observed from the results depicted in Fig. 1. The first
one is that varying $a_{3}$ does not change the value of $\Phi$ at which $\Omega_{\mbox{\tiny PNJL0}}$ attains its minimum. For all $a_{3}$ values one has $\partial\Omega_{\mbox{\tiny PNJL0}}/\partial\Phi=0$
only at $\Phi=0$. Notice also that positive $a_{3}$ values induce
$\Omega_{\mbox{\tiny PNJL0}}$ to change its concavity, i.e., a nonphysical configuration. This effect is also verified for other $\mu$ values different
from those shown in Fig. 1. Even for the extreme value of $\mu=\Lambda$, Fig.
1d, there is no indication that the system presents $\Phi\neq 0$. Notice that
the variation of $\mu$ for a fixed $a_{3}$ is also not enough to produce
$\Phi\neq 0$ solutions for $\partial\Omega_{\mbox{\tiny
PNJL0}}/\partial\Phi=0$. The only effect of increasing $\mu$ is to decrease $\Omega_{\mbox{\tiny PNJL0}}(\Phi=0)$. This lack of $\Phi\neq 0$ solutions means that the model cannot produce confinement effects in the analyzed region of
the quark chemical potential, namely, $M_{\mbox{\tiny
vac}}\leqslant\mu\leqslant\Lambda$. Therefore, for this case the PNJL0 model
reduces to the NJL one as the PNJL model does. However, this picture is
radically modified when $G_{V}\neq 0$ as one can see in Fig. 2, where
$G_{V}=0.25G_{s}$ was used.
Figure 2: The same as in Fig. 1, but for $G_{V}=0.25G_{s}$.
The results displayed in Fig. 2 show that for $G_{V}=0.25G_{s}$ there is a global minimum of $\Omega_{\mbox{\tiny PNJL0}}$ at $\Phi\neq 0$. As an example, see that for $\mu=\Lambda=587.9$ MeV, Fig. 2d, this minimum is found at $\Phi\approx 0.9$ for $a_{3}=-0.1$. For the same $a_{3}$ value, $\Phi=0$ is the unique global minimum for $\mu=M_{\mbox{\tiny vac}}=400$ MeV. This means that there is an intermediate value of $\mu$ at which $\Omega_{\mbox{\tiny PNJL0}}$ has two minima, namely, one at $\Phi=0$ and the other at $\Phi\neq 0$. Physically, this means that there exists a certain $\mu$ value, for the $a_{3}=-0.1$ case, at which the transition from a quark confined phase ($\Phi=0$) to a deconfined one ($\Phi\neq 0$) takes place. For the PNJL0 model, these $\Phi\neq 0$ solutions are found only for $G_{V}\neq 0$, i.e., the repulsive vector channel plays an important role in the emergence of deconfinement effects in the zero temperature regime.
### 3.3 Confinement/deconfinement phase transition
In order to correctly identify the quark confined and deconfined
thermodynamical phases in the PNJL0 model, we compute the grand canonical
potential given by Eq. (22) with the self-consistent equations (25) and (27)
implemented, but now along with the condition $\partial\Omega_{\mbox{\tiny
PNJL0}}/\partial\Phi=0$. The result of this calculation, for $G_{V}=0.25G_{s}$
and $a_{3}=-0.1$, is presented in Fig. 3.
Figure 3: $\Omega_{\mbox{\tiny PNJL0}}$ as a function of the quark chemical
potential ($\mu$) for $G_{V}=0.25G_{s}$ and $a_{3}=-0.1$.
This figure displays the typical feature of systems that present a first order phase transition, namely, non-unique values of the thermodynamical potential that describes the system ($\Omega_{\mbox{\tiny PNJL0}}$) as a function of the intensive quantity ($\mu$). Thermodynamical stability callen requires that in the range of $500\mbox{ MeV}\leqslant\mu\lesssim 514\mbox{ MeV}$ the dependence of the grand canonical potential on $\mu$ is the one defined by the $DEF$ curve. The $E$ point indicates the chemical potential value at which a first order phase transition takes place, namely, a confinement/deconfinement one, with $\Phi$ the order parameter in this case, as we will discuss later on. We denote the chemical potential at this point by $\mu_{\mbox{\tiny conf}}$, with the value $\mu_{\mbox{\tiny conf}}=506.906$ MeV.
Another way to determine $\mu_{\mbox{\tiny conf}}$ is from the analysis of
$\Omega_{\mbox{\tiny PNJL0}}$ as a function of $\Phi$ without imposing the
condition $\partial\Omega_{\mbox{\tiny PNJL0}}/\partial\Phi=0$, for each fixed
value of $\mu$. The different grand potentials obtained for each $\mu$ are
depicted in Fig. 4.
Figure 4: $\Omega_{\mbox{\tiny PNJL0}}$ as a function of $\Phi$ for different
$\mu$ values. Curves constructed for $G_{V}=0.25G_{s}$ and $a_{3}=-0.1$.
The value of $\mu_{\mbox{\tiny conf}}$ is obtained when two minima of the
thermodynamical potential appear for distinct values of $\Phi$ with the same
$\Omega_{\mbox{\tiny PNJL0}}$, see points $p_{1}$ and $p_{2}$ in the
$\mu=\mu_{\mbox{\tiny conf}}=506.906$ MeV curve. The $\Phi$ values associated with these points delimit the confined quark phase (point $p_{1}$, $\Phi=0$) and the deconfined one (point $p_{2}$, $\Phi\neq 0$). For curves where
$\mu\neq\mu_{\mbox{\tiny conf}}$, there is only one global minimum in
$\Omega_{\mbox{\tiny PNJL0}}$. For such cases, the system is exclusively in
one of the two thermodynamic phases concerning the quark confinement.
In Fig. 4 we also notice that the $\mu=504$ MeV curve is in a confined phase, since the minimum is attained at $\Phi=0$, while for $\mu=510$ MeV only a $\Phi\neq 0$ value is possible, identifying the system as being in the deconfined phase. Only at $\mu=\mu_{\mbox{\tiny conf}}$ does the system undergo a first order phase transition. Such a procedure of searching for two global minima in the
thermodynamical potential was also used, for instance, in the analysis of
mean-field hadronic models delfino , as well as for the NJL model at finite
temperature yazaki . Both models present the same kind of first order phase
transition, but in different environments and with different order parameters.
In the case of the PNJL0 model at zero temperature, $\Phi$ is the order
parameter related to the confined/deconfined phase transition. Its dependence on $\mu$, obtained through $\partial\Omega_{\mbox{\tiny PNJL0}}/\partial\Phi=0$, is shown in Fig. 5.
Figure 5: $\Phi$ as a function of $\mu$ for the PNJL0 model with
$G_{V}=0.25G_{s}$ and $a_{3}=-0.1$.
The equilibrium $\Phi$ dependence on $\mu$ is the one defined by the full
line. The position of the jump in $\Phi$ is determined by
$\mu=\mu_{\mbox{\tiny conf}}$, found by the aforementioned method of searching
for two global minima in $\Omega_{\mbox{\tiny PNJL0}}$. The dashed line
corresponds to the eliminated branches of $\Omega_{\mbox{\tiny PNJL0}}$ in
Fig. 3.
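The equal-depth criterion illustrated in Fig. 4 can be sketched on a toy quartic potential (purely illustrative, not the full PNJL0 $\Omega$): for $\Omega(\Phi)=A\Phi^{2}-B\Phi^{3}+C\Phi^{4}$ with $\Omega(0)=0$, the nonzero minimum becomes degenerate with the $\Phi=0$ one exactly at $A=B^{2}/(4C)$, and a bisection in $A$ (playing the role of $\mu$) recovers this value:

```python
import numpy as np

B, C = 3.0, 2.0
Phi = np.linspace(0.05, 2.0, 4000)   # exclude Phi = 0, where Omega = 0

def omega(Phi, A):
    """Toy double-well grand potential with Omega(0) = 0."""
    return A * Phi**2 - B * Phi**3 + C * Phi**4

def depth(A):
    """Depth of the nontrivial minimum relative to Omega(0) = 0."""
    return omega(Phi, A).min()

# Bisect in A: the transition sits where the nonzero minimum
# is exactly degenerate with the Phi = 0 one (depth crosses zero)
lo, hi = 0.0, 5.0                    # depth(lo) < 0 < depth(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if depth(mid) < 0.0:
        lo = mid
    else:
        hi = mid
A_crit = 0.5 * (lo + hi)
assert abs(A_crit - B**2 / (4 * C)) < 1e-3   # analytic: A = B^2 / (4C)
```

The same logic, with the full $\Omega_{\mbox{\tiny PNJL0}}(\Phi;\mu)$ in place of the toy potential, underlies the determination of $\mu_{\mbox{\tiny conf}}$.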
### 3.4 Quarkyonic phase and effects of $a_{3}$ and $G_{V}$ on the PNJL0
model
Another thermodynamical phase structure is also observed at lower $\mu$
values, namely, $385\mbox{ MeV}\leqslant\mu\leqslant 415\mbox{ MeV}$, as
pointed out by the inset of Fig. 3. In that region, it is verified that the
correct $\Omega_{\mbox{\tiny PNJL0}}\times\mu$ curve must be the one described
by the $ABC$ line. The first order phase transition is given by the transition
of the system from a broken chiral symmetry region to a restored one (the
quark condensate is the order parameter in this case). By naming the chemical
potential at the $B$ point as $\mu_{\mbox{\tiny chiral}}$, we obtain the value
of $400.243$ MeV. Notice that as $\Phi=0$ in this region, the same
thermodynamical structure is also present in the NJL model. In this case, Eqs. (22) to (26) reduce to the NJL ones for $\Phi=0$ (confined phase of the
PNJL0 model). Strictly speaking, the PNJL0 model is exactly the NJL one until
$\mu=\mu_{\mbox{\tiny conf}}$, where nonzero $\Phi$ values occur, i.e., in our
approach the NJL thermodynamical phases can be seen as contained in the PNJL0
model.
A wider picture of the PNJL0 model, encompassing the two phase transition
regions, is shown in Fig. 6. The thermodynamical stable curve in this case is
the one described by the $ABCDEF$ line.
Figure 6: The same as in Fig. 3, but for a larger $\mu$ region.
In Fig. 7, we show the $\mu$ dependence of the chiral condensate for the PNJL0
model.
Figure 7: Chiral condensate, in units of its vacuum value, as a function of
$\mu$ for the PNJL0 model with $G_{V}=0.25G_{s}$ and $a_{3}=-0.1$.
For chemical potential values smaller than $\mu_{\mbox{\tiny chiral}}=400.243$
MeV, the model behaves as the NJL one, as already discussed. Exactly at
$\mu=\mu_{\mbox{\tiny conf}}=506.906$ MeV, the discontinuity in $\Phi$ also
affects $\rho_{s}$ due to the backreaction mechanism presented in the PNJL0
model. We remark in the inset of the Fig. 7 the discontinuity induced in
$\rho_{s}$ due to the one observed in $\Phi$ (Fig. 5). The stable curve for
$\rho_{s}$ is the one described by the full line.
Fig. 7 is also useful to identify another phase of the strongly interacting
matter, namely, the one defined in the region of $\mu_{\mbox{\tiny
chiral}}\leqslant\mu\leqslant\mu_{\mbox{\tiny conf}}$. In this range, quark
matter is chirally symmetric but still confined since the quark condensate is
nearly vanishing and the traced Polyakov loop is zero. Only at
$\mu\geqslant\mu_{\mbox{\tiny conf}}$, the deconfined quark phase is reached,
i.e., one has $\Phi\neq 0$. This specific $\mu$ region in the range of
$\mu_{\mbox{\tiny chiral}}\leqslant\mu\leqslant\mu_{\mbox{\tiny conf}}$ is
identified as the quarkyonic phase, in the notation of Refs. fukushima3 ;
abuki ; mcnpa09 ; buisseret ; mcnpa07 ; mcnpa08 , for instance. The emergence
of this phase is not possible in the usual NJL model since there is no
information regarding confinement effects as there is in the PNJL0 model.
We also investigate the effects of the variation of the $a_{3}$ and $G_{V}$
parameters in the PNJL0 model. In Fig. 8 we show $\Phi$ as a function of $\mu$
for different $a_{3}$ values chosen in order to produce $\mu_{\mbox{\tiny
conf}}=\mu_{\mbox{\tiny chiral}}$ and $\mu_{\mbox{\tiny conf}}=\Lambda$ for
the chemical potentials related to the two phase transitions (broken/restored
chiral symmetry phase transition and confinement/deconfinement one).
Figure 8: $\Phi\times\mu$ for the PNJL0 model with $G_{V}=0.25G_{s}$ and
different $a_{3}$ values.
Note that the effect of increasing $a_{3}$ is to shrink the quarkyonic phase until its complete elimination, in this case for $a_{3}\sim-0.026$. For this particular value of $a_{3}$, $\Omega_{\mbox{\tiny PNJL0}}\times\mu$ presents two crossing points exactly at the same $\mu$, namely,
$\mu=\mu_{\mbox{\tiny chiral}}=\mu_{\mbox{\tiny conf}}=400.243$ MeV, as we see
in Fig. 9.
Figure 9: $\Omega_{\mbox{\tiny PNJL0}}$ as a function of the quark chemical
potential ($\mu$) for $G_{V}=0.25G_{s}$ and $a_{3}=-0.026341$.
Finally, we study the impact of varying $G_{V}$ in the model. In Fig. 10
it is depicted how the traced Polyakov loop is affected by the strength of the
vector channel interaction.
Figure 10: $\Phi$ as a function of the quark chemical potential for
$a_{3}=-0.1$ and different $G_{V}$ values. We also remark that as $G_{V}$ increases above $\sim G_{s}$, the lines begin to move significantly to the right. The reader can verify this effect in Fig. 11.
It can be seen that the increase of $G_{V}$ also moves the entire $\Phi$ curve
to the direction of lower $\mu$ values. As a direct consequence, the
quarkyonic region begins to decrease in size (as $G_{V}$ increases) until $G_{V}\sim 1.18G_{s}$. At this point, the quantity defined as $\Delta\mu=\mu_{\mbox{\tiny conf}}-\mu_{\mbox{\tiny chiral}}$ vanishes.
Thereafter, $\mu_{\mbox{\tiny chiral}}$ becomes greater than $\mu_{\mbox{\tiny
conf}}$, indicating that chiral symmetry restoration takes place after
deconfinement. The change of $\mu_{\mbox{\tiny chiral}}$ as a function of
$G_{V}$ is an effect observed also in the original NJL model buballa .
Actually, the increase of $G_{V}$ moves the point of the broken/restored chiral symmetry phase transition to higher $\mu$ values, making it possible for the PNJL0 model to present $\mu_{\mbox{\tiny chiral}}>\mu_{\mbox{\tiny conf}}$. In order
to avoid this situation, we restrict $G_{V}$ to the maximum value that leads
to $\Delta\mu=0$, namely, $G_{V}\sim 1.18G_{s}$. (For $G_{V}$ values that eliminate the first order phase transition, we calculate $\mu_{\mbox{\tiny chiral}}$ as the chemical potential related to the peak of $|\partial\rho_{s}/\partial\mu|$.)
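The peak-of-derivative criterion just mentioned can be sketched on synthetic data (the crossover profile below is purely illustrative, not the PNJL0 solution):

```python
import numpy as np

# Hypothetical smooth crossover for the condensate (illustration only):
# rho_s rises from -1 to 0 around mu = 450 MeV over a width of ~20 MeV
mu = np.linspace(300.0, 600.0, 3001)
rho_s = -0.5 * (1.0 - np.tanh((mu - 450.0) / 20.0))

# Criterion: mu_chiral sits at the peak of |d rho_s / d mu|
mu_chiral = mu[np.argmax(np.abs(np.gradient(rho_s, mu)))]
assert abs(mu_chiral - 450.0) < 1.0
```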
Studies aiming to correctly limit the $G_{V}$ values were performed, for instance, in Refs. carignano ; kashiwa ; rapp where the range of $0.25G_{s}\leqslant G_{V}\leqslant 0.50G_{s}$ was found. In Refs. nosso2 ; bratovic , on the other hand, the broader range of $0.30G_{s}\leqslant G_{V}\leqslant 3.2G_{s}$ was used. Here, it is possible to adopt a criterion
in order to determine a range of $G_{V}$ based on the results shown for the
PNJL0 model, namely, we define $G_{V}$ inside an interval of
$G_{V}^{\mbox{\tiny min}}\leqslant G_{V}\leqslant G^{\mbox{\tiny max}}_{V}$,
where $G^{\mbox{\tiny min}}_{V}$ is the value that produces $\mu_{\mbox{\tiny
conf}}=\Lambda$, and $G_{V}^{\mbox{\tiny max}}$ is the value that leads to
$\mu_{\mbox{\tiny conf}}=\mu_{\mbox{\tiny chiral}}$ $(\Delta\mu=0)$, according
to the previous discussion. This generates a quarkyonic phase starting at $\mu=\mu_{\mbox{\tiny chiral}}$ and extending up to a typical energy scale of the system, such as $\Lambda$. Such an approach also avoids a
confinement/deconfinement phase transition taking place before the
broken/restored chiral symmetry one. In the specific case of $a_{3}=-0.1$,
this criterion leads to $G^{\mbox{\tiny min}}_{V}\sim 0.069G_{s}$ and
$G^{\mbox{\tiny max}}_{V}\sim 1.18G_{s}$.
In Fig. 11 we also show the evolution of $\Phi\times\mu$ curves for higher
$G_{V}$ values. Notice that the main effect in these cases is to move the $\Phi$ curve toward increasing $\mu$.
Figure 11: $\Phi$ as a function of $\mu$ for $G_{V}/G_{s}$ = 1, 1.5, 2 and
2.5.
As a last comment, we reinforce that the PNJL0 model studied in this work
(zero temperature regime) produces first order phase transitions for the
traced Polyakov loop, as a function of the quark chemical potential, for
different values of the free parameters $G_{V}$ and $a_{3}$, as observed in
Figs. 5, 8, 10, and 11. The same kind of phase transition is also found in
Refs. Schramm1 ; Schramm2 , where the Polyakov potential given in Eq. (18) was
implemented in a hybrid SU(3) chiral model that contains both hadrons and
quarks as degrees of freedom. Besides that study, in Ref. pnjl0outro another
version of the PNJL model at $T=0$ was proposed. In that model, a quark
density dependence is introduced in the $b_{2}(T)$ function of the Polyakov
potential given in Eq. (14). A signal of first order phase transition is also
verified in their $\Phi\times\mu$ curves, but with the discontinuous jump of the traced Polyakov loop coinciding with the hadron-quark phase transition.
## 4 Summary and concluding remarks
In this work we extend a previous study performed in Ref. mattos in which a
version of the Polyakov-Nambu-Jona-Lasinio model at zero temperature (PNJL0
model) was proposed. Here we explicitly show that it is possible to preserve
the confinement effects of the model even at $T=0$ by imposing a $\Phi$
dependence in the strengths of the scalar and vector interaction channels, see
Eqs. (20) and (21). This modification leads to a Polyakov potential given by
$\displaystyle\mathcal{U}_{\mbox{\tiny PNJL0}}(\rho_{s},\rho,\Phi)=G_{V}\Phi^{2}\rho^{2}-G_{s}\Phi^{2}\rho_{s}^{2}+a_{3}T_{0}^{4}\,\mbox{ln}(1-6\Phi^{2}+8\Phi^{3}-3\Phi^{4}),$ (28)
containing the backreaction of the quarks in the gluonic sector, but also the
influence of the latter on the former. The last term in
$\mathcal{U}_{\mbox{\tiny PNJL0}}$ ensures nonvanishing solutions for $\Phi$ and also limits the $\Phi$ values to at most $1$. We show that $\Omega_{\mbox{\tiny PNJL0}}\times\Phi$ presents $\Phi\neq 0$ solutions only for $G_{V}\neq 0$.
Therefore, the vector channel plays an important role in the PNJL0 model since
it ensures the possibility of observing the confinement effects represented by
nonvanishing traced Polyakov loop values. We show how these solutions generate first order phase transitions related to the confined/deconfined quark phases, since $\Phi$ presents a discontinuous jump as a function of the quark chemical potential ($\Phi$ is the order parameter). The signature of
this phase transition is identified in the $\Omega_{\mbox{\tiny
PNJL0}}\times\mu$ curve where a crossing point is observed, see Fig. 3.
Thermodynamical stability establishes that it is also possible to identify
such a transition by searching for the chemical potential that produces
two global minima, with the same $\Omega_{\mbox{\tiny PNJL0}}$ value, in the
$\Omega_{\mbox{\tiny PNJL0}}\times\Phi$ curve, see Fig. 4.
We also show that the first-order phase transition related to the
broken/restored chiral symmetry is still present in the PNJL0 model at the same
value of $\mu$ as in the original NJL model, as one can see in Fig. 6. The
first crossing point indicates this transition, with the quark condensate
being the order parameter. The region $\mu_{\mbox{\tiny
chiral}}\leqslant\mu\leqslant\mu_{\mbox{\tiny conf}}$ is identified as the
quarkyonic phase, namely, chirally symmetric ($\rho_{s}/\rho_{s(\mbox{\tiny
vac})}\sim 0$) but still presenting confined quarks ($\Phi=0$). The deconfined
phase only takes place at $\mu\geqslant\mu_{\mbox{\tiny conf}}$.
$\mu_{\mbox{\tiny chiral}}$ and $\mu_{\mbox{\tiny conf}}$ are, respectively,
the chemical potentials in which the broken/restored chiral symmetry and
confinement/deconfinement first order phase transitions occur.
The size of the quarkyonic phase in the PNJL0 model is directly governed by
the values of the $a_{3}$ and $G_{V}$ parameters. In Figs. 8 and 10 it is
shown that by increasing these quantities the quarkyonic phase shrinks. In the
case of the $G_{V}$ parameter, this trend changes beyond a
particular value of $G_{V}$. This feature suggests a way to define a range of
possible $G_{V}$ values, namely, those producing $\mu_{\mbox{\tiny
conf}}=\Lambda$ (the cutoff parameter is an energy scale of the model) and
$\mu_{\mbox{\tiny conf}}=\mu_{\mbox{\tiny chiral}}$. For the case in which
$a_{3}=-0.1$, this criterion leads to $G_{V}\sim 0.069G_{s}$ and $G_{V}\sim
1.18G_{s}$ for the minimum and maximum values of the vector channel
strength, respectively.
Finally, we remark on the importance of constructing QCD
effective/phenomenological models at zero temperature, since a direct
application is the analysis of the hadron-quark phase transition in compact
neutron stars (described at $T=0$). A recent and very important study
provides evidence for the existence of quark-matter cores in such objects nature . The
challenge of a detailed study involving relativistic hadronic models rmf ;
vdw1 ; vdw2 and the PNJL0 model described here and in Ref. mattos is left
for future work.
## ACKNOWLEDGMENTS
This work is part of the project INCT-FNA Proc. No. 464898/2014-5. It was
partially supported by Conselho Nacional de Desenvolvimento Científico e
Tecnológico (CNPq) under grants No. 310242/2017-7, No. 406958/2018-1 (O.L.)
and No. 308486/2015-3 (T.F.), and by Fundação de Amparo à Pesquisa do Estado
de São Paulo (FAPESP) under the thematic projects 2013/26258-4 and
2017/05660-0 (O.L., T.F.). O.A.M. also acknowledges the fellowships provided by CNPq,
INCT-FNA, and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior -
Brazil (CAPES - Finance Code 001).
## References
* (1) S. Weinberg, Phys. Rev. Lett. 31, 494 (1973).
* (2) H. Fritzsch, M. Gell-Mann, H. Leutwyler, Phy. Lett. B 47, 365 (1973).
* (3) K. Huang, Quarks, leptons and gauge fields, World Scientific, (1992).
* (4) J. B. Kogut, Rev. Mod. Phys. 51, 659 (1979).
* (5) J. B. Kogut, Rev. Mod. Phys. 55, 775 (1983).
* (6) H. J. Rothe, Lattice Gauge Theories: An Introduction (3rd ed., World Scientific, 2005).
* (7) S. Muroya, A. Nakamura, C. Nonaka, T. Takaishi, Prog. Theo. Phys. 110, 615 (2003).
* (8) C. D. Roberts, A. G. Williams, Prog. Part. Nucl. Phys. 33, 477 (1994).
* (9) R. Alkofer, L. von Smekal, Phys. Rept. 353, 281 (2001).
* (10) W. de Paula, T. Frederico, G. Salmè, M. Viviani, R. Pimentel, Eur. Phys. J. C 77, 764 (2017).
* (11) T. Frederico, G. Salmè, M. Viviani, Phys. Rev. D 85, 036009 (2012).
* (12) R. Pimentel, W. de Paula, Few-Body Syst. 57, 491 (2016).
* (13) M. A. Shifman, A. I. Vainshtein, V. I. Zakharov, Nucl. Phys. B 147, 385 (1979).
* (14) L. J. Reinders, H. Rubinstein, S. Yazaki, Phys. Rept. 127, 1 (1985).
* (15) G. ’t Hooft, Nucl. Phys. B 72, 461 (1974).
* (16) J. M. Maldacena, Int. J. Theor. Phys. 38, 1113 (1999).
* (17) Y. Nambu, G. Jona-Lasinio, Phys. Rev. 122, 345 (1961).
* (18) Y. Nambu, G. Jona-Lasinio, Phys. Rev. 124, 246 (1961).
* (19) M. Buballa, Phys. Rep. 407, 205 (2005).
* (20) U. Vogl and W. Weise, Prog. Part. Nucl. Phys. 27, 195 (1991).
* (21) S. P. Klevansky, Rev. Mod. Phys. 64, 649 (1992).
* (22) T. Hatsuda and T. Kunihiro, Phys. Rep. 247, 221 (1994).
* (23) S. S. Avancini, R. L. S. Farias, M. B. Pinto, W. R. Tavares, and V. S. Timóteo, Phys. Lett B 767, 247 (2017).
* (24) R. L. S. Farias, V. S. Timóteo, S. S. Avancini, M. B. Pinto, and G. Krein, Eur. Phys. J. A 53, 101 (2017).
* (25) R. L. S. Farias, K. P. Gomes, G. Krein, and M. B. Pinto, Phys. Rev. C 90, 025203 (2014).
* (26) K. Fukushima, Phys. Lett. B 591, 277 (2004).
* (27) K. Fukushima, Phys. Rev. D 77, 114028 (2008).
* (28) K. Fukushima and T. Hatsuda, Rep. Prog. Phys. 74, 014001 (2011).
* (29) K. Fukushima and C.Sasaki, Prog. Part. Nucl. Phys. 72, 99 (2013).
* (30) C. Ratti, M. A. Thaler, and W. Weise, Phys. Rev. D 73, 014019 (2006).
* (31) C. Ratti, S. Rößner, M.A. Thaler, and W. Weise, Eur. Phys. J. C 49, 213 (2007).
* (32) S. Rößner, C. Ratti, and W. Weise, Phys. Rev. D 75, 034007 (2007).
* (33) S. Rößner, T. Hell, C. Ratti, and W. Weise, Nucl. Phys. A 814, 118 (2008).
* (34) H. Hansen, W. M. Alberico, A. Beraudo, A. Molinari, M. Nardi, and C. Ratti, Phys. Rev. D 75, 065004 (2007).
* (35) P. Costa, M. C. Ruivo, C. A. de Sousa, H. Hansen, and W. M. Alberico, Phys. Rev. D 79, 116003 (2009).
* (36) G. A. Contrera, M. Orsaria, and N. N. Scoccola, Phys. Rev. D 82, 054026 (2010).
* (37) O. Lourenço, M. Dutra, A. Delfino, and M. Malheiro, Phys. Rev. D 84, 125034 (2011).
* (38) O. Lourenço, M. Dutra, T. Frederico, A. Delfino, and M. Malheiro, Phys. Rev. D 85, 097504 (2012).
* (39) M. Dutra, O. Lourenço, A. Delfino, T. Frederico, and M. Malheiro, Phys. Rev. D 88, 114013 (2013).
* (40) M. Ferreira, P. Costa, O. Lourenço, T. Frederico, and C. Providência, Phys. Rev. D 89, 116011 (2014).
* (41) A. Polyakov, Phys. Lett. B 72, 477 (1978).
* (42) L. Susskind, Phys. Rev. D 20, 2610 (1979).
* (43) B. Svetitsky, L. G. Yaffe, Nucl. Phys. B 210, 423 (1982).
* (44) B. Svetitsky, Phys. Rept. 132, 1 (1986).
* (45) O. A. Mattos, O Lourenço and T. Frederico, J. Phys. Conf. Ser. 1291, 012031 (2019).
* (46) S. Rößner, Diploma thesis, Field theoretical modelling of the QCD phase diagram, Technische Universität München (2006).
* (47) S. Rößner, PhD thesis, Phases of QCD, Technische Universität München (2009).
* (48) V. A. Dexheimer and S. Schramm, Phys. Rev. C 81, 045201 (2010).
* (49) V. A. Dexheimer and S. Schramm, Nucl. Phys. A 827, 579c (2009).
* (50) Y. Sakai, T. Sasaki, H. Kouno, M. Yahiro, Phys. Rev. D 82, 076003 (2010).
* (51) T. K. Herbst, J. M. Pawlowski, B.-J. Schaefer, Phys. Lett. B 696, 58 (2011).
* (52) B.-J. Schaefer, J. M. Pawlowski, J. Wambach, Phys. Rev. D 76, 074023 (2007).
* (53) H. B. Callen, Thermodynamics and an Introduction to Thermostatistics (2nd ed., John Wiley & Sons Inc., 1985).
* (54) A. Delfino, M. Jansen, and V. S. Timóteo, Phys. Rev. C 78, 034909 (2008).
* (55) M. Asakawa and K. Yazaki, Nucl. Phys. A 504, 668 (1989).
* (56) H. Abuki, R. Anglani, R.Gatto, G.Nardulli, M. Ruggieri, Phys. Rev. D 78, 034034 (2008).
* (57) L. McLerran, K. Redlich, C. Sasaki, Nucl. Phys. A 824, 86 (2009).
* (58) F. Buisseret, G. Lacroix, Phys. Rev. D 85, 016009 (2012).
* (59) L. McLerran, R. D. Pisarski, Nucl. Phys. A 796, 83 (2007).
* (60) Y. Hidaka, L. McLerran, R. D. Pisarski, Nucl. Phys. A 808, 117 (2008).
* (61) S. Carignano, D. Nickel, M. Buballa, Phys. Rev. D 82 054009 (2010).
* (62) K. Kashiwa, T. Hell, and W. Weise, Phys. Rev. D 84, 056010 (2011).
* (63) R. Rapp, T. Schäfer, E. Shuryak, and M. Velkovsky, Phys. Rev. Lett 81, 53 (1998).
* (64) N. Bratovic, T. Hatsuda, and W. Weise, Phys. Lett. B 719, 131 (2013).
* (65) O. Ivanytskyi , M. Ángeles Pérez-García , V. Sagun, and C. Albertus, Phys. Rev. D 100, 103020 (2019).
* (66) E. Annala, T. Gorda, A. Kurkela, J. Nättilä, and A. Vuorinen, Nature Phys. (2020), https://doi.org/10.1038/s41567-020-0914-9, arXiv:1903.09121.
* (67) O. Lourenço, M. Dutra, C. H. Lenzi, C. V. Flores, D. P. Menezes, Phys. Rev. C 99, 045202 (2019).
* (68) O. Lourenço, M. Dutra, C. H. Lenzi, M. Bhuyan, S. K. Biswal, and B. M. Santos, Astrophys. J. 882, 67 (2019).
* (69) M. Dutra, B. M. Santos, O. Lourenço, J. Phys. G 47, 035101 (2020).
# Automatic punctuation restoration with BERT models
Attila Nagy, Bence Bial, Judit Ács
Department of Automation and Applied Informatics, Budapest University of Technology and Economics
###### Abstract
We present an approach for automatic punctuation restoration with BERT models
for English and Hungarian. For English, we conduct our experiments on Ted
Talks, a commonly used benchmark for punctuation restoration, while for
Hungarian we evaluate our models on the Szeged Treebank dataset. Our best
models achieve a macro-averaged $F_{1}$-score of 79.8 in English and 82.2 in
Hungarian. Our code is publicly
available at https://github.com/attilanagy234/neural-punctuator.
## 1 Introduction
Automatic Speech Recognition (ASR) systems typically output unsegmented
transcripts without punctuation. Restoring punctuation is an important step
in processing transcribed speech. Tündik et al. (2018) showed that the absence
of punctuation in transcripts affects readability as much as a significant
word error rate does. Downstream tasks such as neural machine translation
(Vandeghinste et al., 2018), sentiment analysis (Cureg et al., 2019) and
information extraction (Makhoul et al., 2005) also benefit from having clausal
boundaries. In this paper we present models for automatic punctuation
restoration for English and Hungarian. Our work is based on a state-of-the-art
model proposed by Courtland et al. (2020), which uses pretrained
contextualized language models (Devlin et al., 2018).
Our contributions are twofold. First, we present the implementation of an
automatic punctuation model based on a state-of-the-art model (Courtland et
al., 2020) and evaluate it on an English benchmark dataset. Second, using the
same architecture we propose an automatic punctuator for Hungarian trained on
the Szeged Treebank (Csendes et al., 2005). To the best of our knowledge our
work is the first punctuation restoration attempt that uses BERT on Hungarian
data.
## 2 Related Work
Systems that are most efficient at restoring punctuation usually exploit both
prosodic and lexical features with hybrid models (Szaszák and Tündik, 2019;
Garg et al., 2018; Żelasko et al., 2018). Up until the appearance of BERT-like
models, lexical features were primarily processed by recurrent neural networks
(Vandeghinste et al., 2018; Tündik et al., 2017; Kim, 2019; Tilk and Alumäe,
2016; Salloum et al., 2017), while more recent approaches use the transformer
(Vaswani et al., 2017) architecture (Chen et al., 2020; Nguyen et al., 2019;
Cai and Wang, 2019). The current state-of-the-art method by Courtland et al.
(2020) is a pretrained BERT, which aggregates multiple predictions for the
same token, resulting in higher accuracy and significant parallelism.
## 3 Methodology
We train models for Hungarian and English. For English we rely on the widely
used IWSLT 2012 Ted Talks benchmark dataset (Federico et al., 2012).
Due to the lack of such datasets for Hungarian, we generate one from the Szeged
Treebank (Csendes et al., 2005). We preprocess the Szeged Treebank so that
it is structured similarly to the output of an ASR system, and then attempt
to reconstruct the original, punctuated gold-standard corpus with the
presented methods.
### 3.1 Problem formulation
We formulate the problem of punctuation restoration as a sequence labeling
task with four target classes: _EMPTY_, _COMMA_, _PERIOD_, and _QUESTION_.
We do not include other punctuation marks, as their frequency is very low in
both datasets. For this reason, we apply a conversion in cases where it is
semantically reasonable: we convert exclamation marks and semicolons to
periods, and colons and quotation marks to commas. We remove double and
intra-word hyphens; however, if a hyphen is surrounded by white spaces, we
convert it to a comma. Other punctuation marks are disregarded during our
experiments. As tokenizers occasionally split words into multiple tokens, we
apply masking to tokens that do not mark a word ending. These preprocessing
steps and the corresponding output labels are shown in Table 1.
Original | Tyranosaurus asked: kill me?
---|---
Preprocessed | tyranosaurus asked, kill me?
Tokenized | ty | ##rano | ##saurus | asked | kill | me
Output | - | - | EMP | COM | EMP | Q
Original | Not enough, – said the co-pilot – …
Preprocessed | not enough, said the co pilot,
Tokenized | not | enough | said | the | co | pilot
Output | EMP | COM | EMP | EMP | EMP | COM
Table 1: An example input sentence and the following processing steps in our
setup.
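The conversion and labeling rules above can be sketched in a few lines of Python. This is an illustrative reimplementation of the rules in Section 3.1, not the authors' released code; the function and label names are ours.

```python
import re

LABELS = {"": "EMPTY", ",": "COMMA", ".": "PERIOD", "?": "QUESTION"}

def preprocess(text):
    """Lowercase, apply the punctuation conversions of Section 3.1, and
    return (word, label) pairs, where the label is the punctuation mark
    (if any) that follows the word."""
    text = text.lower()
    # Exclamation marks and semicolons become periods; colons and
    # quotation marks become commas.
    text = text.replace("!", ".").replace(";", ".")
    text = text.replace(":", ",").replace('"', ",")
    # Hyphens/dashes surrounded by white space become commas; remaining
    # (double or intra-word) hyphens are dropped, splitting the word.
    text = re.sub(r"\s[-\u2013\u2014]+\s", ", ", text)
    text = text.replace("-", " ")
    pairs = []
    for token in text.split():
        word = token.rstrip(".,?")
        trailing = token[len(word):][:1]  # at most one mark per word
        if word:
            pairs.append((word, LABELS.get(trailing, "EMPTY")))
    return pairs

print(preprocess("Tyranosaurus asked: kill me?"))
# [('tyranosaurus', 'EMPTY'), ('asked', 'COMMA'), ('kill', 'EMPTY'), ('me', 'QUESTION')]
```

The example reproduces the word-level labels of Table 1; in the full pipeline these labels are then aligned with wordpiece tokens, and subwords that do not end a word are masked.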
### 3.2 Datasets
#### 3.2.1 IWSLT 2012 Ted Talks dataset
We use the IWSLT 2012 Ted Talks dataset (Federico et al., 2012) for English.
IWSLT is a common benchmark for automatic punctuation restoration. It contains
1066 unique transcripts of Ted talks, with a total of 2.46M words in the corpus. We
lowercase the data and convert consecutive spaces into single spaces. We
also remove spaces before commas. We use the original train, validation and
test sets from the IWSLT 2012 competition. The label distribution of
the IWSLT Ted Talk dataset is summarized in Table 2.
| Train | Validation | Test
---|---|---|---
PERIOD | 139,619 | 909 | 1,100
COMMA | 188,165 | 1,225 | 1,120
QUESTION | 10,215 | 71 | 46
EMPTY | 2,001,462 | 15,141 | 16,208
Table 2: Label distributions of the IWSLT Ted talk dataset.
#### 3.2.2 Szeged Treebank
We use the Szeged Treebank dataset (Csendes et al., 2005) for Hungarian. This
dataset is the largest gold standard treebank in Hungarian. It covers a wide
variety of domains such as fiction, news articles, and legal text. As these
subcorpora have very different distributions in terms of punctuation, we
merge them and shuffle the sentences. We then split the dataset into train,
validation and test sets. Shuffling introduces a bias in the prediction of periods,
as it is easier for the model to predict sentence boundaries by
recognizing the context change between adjacent sentences, but it also provides a
more balanced distribution of punctuation classes across the train, validation
and test sets. The label distribution is listed in Table 3.
| Train | Validation | Test
---|---|---|---
PERIOD | 81,168 | 9,218 | 3,370
COMMA | 120,027 | 13,781 | 4,885
QUESTION | 1,808 | 198 | 75
EMPTY | 885,451 | 101,637 | 36,095
Table 3: Overall data distributions of the Szeged Treebank dataset.
### 3.3 Architecture
Our model is illustrated in Figure 3. We base our model on pretrained BERT
models. BERT is a contextual language model with multiple transformer layers
and hundreds of millions of trainable parameters, trained on a massive English
corpus with the masked language modeling objective. Several variations of
pretrained weights have been released. We use BERT-base cased and uncased for
English, as well as Albert (Lan et al., 2019), a smaller version of BERT. BERT
also has a multilingual version, _mBERT_, that supports Hungarian along with
100 other languages. For Hungarian we use mBERT and the recently released Hungarian-only
BERT, _huBERT_ (Nemeskey, 2020). These models all apply
wordpiece tokenization with their own predefined WordPiece vocabulary and
generate a continuous representation for every wordpiece. Our model adds a
two-layer multilayer perceptron on top of these representations, with a hidden
dimension of 1568, ReLU activation and an output layer, followed by a softmax
layer that produces a distribution over the labels. We also apply dropout with
a probability of 0.2 before and after the first linear layer. Similarly to
Courtland et al. (2020), we apply a sliding window over the input data,
generate multiple predictions for each token, and then aggregate the
probabilities for each position by taking the label-wise mean, outputting
the most probable label. The process is illustrated in Figures 1 and 2.
Figure 1: The process of generating multiple predictions for a token. Although
BERT always receives sequences of 512 tokens, we sample consecutive sequences from
the corpora such that they overlap, thus resulting in multiple predictions for
the same token. The extent of the overlap, and therefore the number of
predictions for a token, depends on the offset between the windows. Note that
padding is necessary at the beginning to ensure that all tokens have the same
number of predictions.
Figure 2: The final prediction is computed by first aggregating all punctuation
probability distributions for each token by taking their class-wise averages
and then selecting the highest probability.
Figure 3: The complete architecture used for punctuation restoration.
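The overlap-and-average scheme of Figures 1 and 2 can be sketched with NumPy. This is a simplified illustration of the aggregation step only; the names and shapes are ours, not those of the released implementation.

```python
import numpy as np

def aggregate_predictions(window_probs, window_starts, n_tokens, n_classes=4):
    """Average the per-token class distributions produced by overlapping
    windows and return the most probable label for every token.

    window_probs : list of (window_len, n_classes) probability arrays
    window_starts: start index of each window in the token stream
    """
    sums = np.zeros((n_tokens, n_classes))
    counts = np.zeros(n_tokens)
    for probs, start in zip(window_probs, window_starts):
        end = start + probs.shape[0]
        sums[start:end] += probs   # class-wise accumulation
        counts[start:end] += 1     # how many windows cover each token
    mean = sums / counts[:, None]  # label-wise mean over covering windows
    return mean.argmax(axis=1)     # most probable label per token
```

With a window length of 512 and an offset of 8 between consecutive windows, each fully covered token receives 64 predictions, matching the 64 predictions/token used for the best English model.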
### 3.4 Training
We train all models with identical hyperparameters. We perform gradient
descent using the AdamW optimizer (Loshchilov and Hutter, 2017) with the
learning rate set to $3*10^{-5}$ for BERT and $10^{-4}$ for the classifier on
top. We apply gradient clipping at 1.5 and a learning rate warm-up of 300
steps using a linear scheduler. We use negative log-likelihood as the loss
function. The tokenizer modules often split single words into multiple subwords.
For this task we only need to predict punctuation after words (between white
spaces), so we mask the loss for every subword that does not end a word. It is
a common practice to intermittently freeze and unfreeze the weights of the
transformer model while training the fine-tuning linear layers at the top of
the architecture. We found that it is best to keep the transformer model
unfrozen from the very first epoch and to update its parameters along
with the linear layers. We trained the models for 12 epochs with a batch size
of 4 and applied early stopping based on the validation set. We used the
validation set to tune the sliding window step size, which determines the
number of predictions obtained for a single token. All experiments were
performed using a single Nvidia GTX 1070 GPU, with one epoch taking 10 minutes.
Our longest training run lasted 2 hours.
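The loss masking described above can be sketched as follows. In a framework such as PyTorch this is typically achieved with an ignore index in the NLL loss; the pure-NumPy function below is an illustrative equivalent, with names of our choosing.

```python
import numpy as np

def masked_nll(log_probs, labels, word_end_mask):
    """Mean negative log-likelihood over word-ending subwords only.

    log_probs    : (n_tokens, n_classes) log-probabilities
    labels       : (n_tokens,) gold label indices
    word_end_mask: (n_tokens,) 1 where the subword ends a word, else 0
    """
    picked = log_probs[np.arange(len(labels)), labels]  # log p of gold label
    mask = word_end_mask.astype(bool)
    return -picked[mask].mean()  # subwords inside a word contribute nothing
```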
## 4 Results
All models are evaluated using the macro $F_{1}$-score ($F$) over the 4 classes.
Similarly to Courtland et al., our work focuses on the performance on
punctuation marks, and as EMPTY labels constitute 85% of all labels, we report
the overall $F_{1}$-score without EMPTY. We evaluated both cased and uncased
variations of BERT and generally found that the uncased model is
better than its cased variant for this task. This was expected,
as we lowercased the entire corpus with the purpose of eliminating bias around
the prediction of periods. For all setups, we selected the best performing
models on the validation set by loss and by macro $F_{1}$-score and evaluated
them independently on the test set. On the Ted Talks dataset, our best
performing model was an uncased variation of BERT that achieved on-par
performance with the current state-of-the-art model (Courtland et al., 2020),
with a slightly worse macro $F_{1}$-score of 79.8 (0.8 absolute and 0.9975%
relative difference), using 10 epochs of training and 64 predictions/token. All
results on the Ted Talks dataset are summarized in Table 4.
| Comma | Period | Question | Overall
---|---|---|---|---
Models | P | R | F | P | R | F | P | R | F | P | R | F
BERT-base (Courtland et al., 2020) | 72.8 | 70.8 | 71.8 | 81.9 | 86.6 | 84.2 | 80.8 | 91.3 | 85.7 | 78.5 | 82.9 | 80.6
Albert-base (Courtland et al., 2020) | 69.4 | 69.3 | 69.4 | 80.9 | 84.5 | 82.7 | 76.7 | 71.7 | 74.2 | 75.7 | 75.2 | 75.4
BERT-base-uncased (by loss) | 59.0 | 80.2 | 68 | 83.0 | 83.6 | 83.3 | 87.8 | 83.7 | 85.7 | 76.6 | 82.5 | 79.0
BERT-base-uncased (by $F_{1}$-score) | 58.4 | 80.7 | 67.8 | 84.2 | 83.8 | 84.0 | 84.8 | 90.7 | 87.6 | 75.8 | 85.1 | 79.8
BERT-base-cased (by loss) | 57.3 | 73.9 | 64.5 | 75.9 | 87.9 | 81.4 | 77.1 | 84.1 | 80.4 | 70.1 | 81.9 | 75.5
BERT-base-cased (by $F_{1}$-score) | 59.1 | 78.5 | 67.5 | 79.6 | 81.6 | 80.6 | 76.9 | 88.9 | 82.5 | 71.9 | 83.0 | 76.8
Albert-base (by loss) | 55.3 | 74.8 | 63.6 | 76.8 | 87.9 | 82.0 | 70.6 | 83.7 | 76.6 | 67.6 | 82.1 | 74.1
Albert-base (by $F_{1}$-score) | 56.5 | 80.3 | 66.3 | 80.7 | 80.8 | 80.8 | 80.4 | 84.1 | 82.2 | 72.5 | 81.7 | 76.4
Table 4: Precision, recall and $F_{1}$-score values on the Ted Talk dataset.
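As a sanity check, the overall $F_{1}$ values in Table 4 are the unweighted (macro) averages of the three per-class $F_{1}$ values, and each per-class $F_{1}$ is the harmonic mean of the listed precision and recall; for example, for BERT-base-uncased (by $F_{1}$-score), $(67.8+84.0+87.6)/3=79.8$. A minimal check:

```python
def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

def macro(scores):
    """Unweighted mean over classes (EMPTY is excluded, as in the paper)."""
    return sum(scores) / len(scores)

# Per-class F1 of BERT-base-uncased (by F1-score) from Table 4.
per_class = [67.8, 84.0, 87.6]  # comma, period, question
assert round(f1(58.4, 80.7), 1) == 67.8    # comma F1 from its P and R
assert round(macro(per_class), 1) == 79.8  # matches the overall column
```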
Figure 4: Metrics on the validation set over epochs during training on the
IWSLT Ted Talk dataset. (a) Macro $F_{1}$-score. (b) Loss.
On the Szeged Treebank dataset, we evaluate the multilingual variants of BERT
and the recently released huBERT model. We find that huBERT performs
significantly better (82.2 macro $F_{1}$-score) than the best multilingual
model, with an absolute and relative difference of 12.2 and 14.84%, respectively,
on the macro $F_{1}$-score. We trained the best huBERT model for 3 epochs and used
8 predictions/token. All results on the Szeged Treebank dataset are summarized
in Table 5.
| Comma | Period | Question | Overall
---|---|---|---|---
Models | P | R | F | P | R | F | P | R | F | P | R | F
BERT-base-multilang-uncased (by loss) | 82.3 | 79.3 | 80.8 | 79.6 | 88.3 | 83.8 | 43.2 | 21.3 | 28.6 | 68.4 | 63.0 | 64.4
BERT-base-multilang-uncased (by $F_{1}$-score) | 82.9 | 79.4 | 81.1 | 80.1 | 88.4 | 84.0 | 51.4 | 24.0 | 32.7 | 71.5 | 63.9 | 66.0
BERT-base-multilang-cased (by loss) | 81.3 | 79.3 | 80.3 | 82.4 | 83.2 | 82.8 | 51.6 | 21.3 | 30.2 | 71.8 | 61.3 | 64.4
BERT-base-multilang-cased (by $F_{1}$-score) | 83.6 | 78.8 | 81.1 | 81.7 | 85.5 | 83.6 | 61.4 | 36.0 | 45.4 | 75.6 | 66.8 | 70.0
huBERT (by loss and $F_{1}$-score) | 84.4 | 87.3 | 85.8 | 89.0 | 93.1 | 91.0 | 73.5 | 66.7 | 69.9 | 82.3 | 82.4 | 82.2
Table 5: Precision, recall and $F_{1}$-score values on the Szeged Treebank
dataset.
Figure 5: Metrics on the validation set over epochs during training on the
Szeged Treebank dataset. (a) Macro $F_{1}$-score. (b) Loss.
We also examined the effect of using multiple predictions for a token. The
changes we see in macro $F_{1}$-score on the validation set with regard to the
number of predictions per token are shown in Figure 6. The best models were
evaluated on the test set and we found that having multiple predictions per
token increased the $F_{1}$-score by 5% in English and 2.4% in Hungarian.
Figure 6: Effect of the number of predictions per token on the overall
$F_{1}$-score, computed on the validation dataset. (a) BERT-base-uncased
(Ted Talks). (b) huBERT (Szeged Treebank).
## 5 Conclusion
We presented an automatic punctuation restoration model based on BERT for
English and Hungarian. For English we reimplemented a state-of-the-art model
and evaluated it on the IWSLT Ted Talks dataset. Our best model achieved
comparable results to the current state-of-the-art on the benchmark dataset. For
Hungarian we generated training data by converting the Szeged Treebank into an
ASR-like format and presented BERT-like models that solve the task of
punctuation restoration efficiently, with our best model, huBERT, achieving a
macro $F_{1}$-score of 82.2.
## References
* Cai and Wang (2019) Cai, Y., Wang, D.: Question mark prediction by bert. In: 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). pp. 363–367. IEEE (2019)
* Chen et al. (2020) Chen, Q., Chen, M., Li, B., Wang, W.: Controllable time-delay transformer for real-time punctuation prediction and disfluency detection. In: ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 8069–8073. IEEE (2020)
* Courtland et al. (2020) Courtland, M., Faulkner, A., McElvain, G.: Efficient automatic punctuation restoration using bidirectional transformers with robust inference. In: Proceedings of the 17th International Conference on Spoken Language Translation. pp. 272–279 (2020)
* Csendes et al. (2005) Csendes, D., Csirik, J., Gyimóthy, T., Kocsor, A.: The szeged treebank. In: International Conference on Text, Speech and Dialogue. pp. 123–131. Springer (2005)
* Cureg et al. (2019) Cureg, M.Q., De La Cruz, J.A.D., Solomon, J.C.A., Saharkhiz, A.T., Balan, A.K.D., Samonte, M.J.C.: Sentiment analysis on tweets with punctuations, emoticons, and negations. In: Proceedings of the 2019 2nd International Conference on Information Science and Systems. pp. 266–270 (2019)
* Devlin et al. (2018) Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
* Federico et al. (2012) Federico, M., Cettolo, M., Bentivogli, L., Michael, P., Sebastian, S.: Overview of the iwslt 2012 evaluation campaign. In: IWSLT-International Workshop on Spoken Language Translation. pp. 12–33 (2012)
* Garg et al. (2018) Garg, B., et al.: Analysis of punctuation prediction models for automated transcript generation in mooc videos. In: 2018 IEEE 6th International Conference on MOOCs, Innovation and Technology in Education (MITE). pp. 19–26. IEEE (2018)
* Kim (2019) Kim, S.: Deep recurrent neural networks with layer-wise multi-head attentions for punctuation restoration. In: ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 7280–7284. IEEE (2019)
* Lan et al. (2019) Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942 (2019)
* Loshchilov and Hutter (2017) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 (2017)
* Makhoul et al. (2005) Makhoul, J., Baron, A., Bulyko, I., Nguyen, L., Ramshaw, L., Stallard, D., Schwartz, R., Xiang, B.: The effects of speech recognition and punctuation on information extraction performance. In: Ninth European Conference on Speech Communication and Technology (2005)
* Nemeskey (2020) Nemeskey, D.M.: Natural Language Processing Methods for Language Modeling. Ph.D. thesis, Eötvös Loránd University (2020)
* Nguyen et al. (2019) Nguyen, B., Nguyen, V.B.H., Nguyen, H., Phuong, P.N., Nguyen, T.L., Do, Q.T., Mai, L.C.: Fast and accurate capitalization and punctuation for automatic speech recognition using transformer and chunk merging. In: 2019 22nd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA). pp. 1–5. IEEE (2019)
* Salloum et al. (2017) Salloum, W., Finley, G., Edwards, E., Miller, M., Suendermann-Oeft, D.: Deep learning for punctuation restoration in medical reports. In: BioNLP 2017. pp. 159–164 (2017)
* Szaszák and Tündik (2019) Szaszák, G., Tündik, M.Á.: Leveraging a character, word and prosody triplet for an asr error robust and agglutination friendly punctuation approach. In: INTERSPEECH. pp. 2988–2992 (2019)
* Tilk and Alumäe (2016) Tilk, O., Alumäe, T.: Bidirectional recurrent neural network with attention mechanism for punctuation restoration. In: Interspeech. pp. 3047–3051 (2016)
* Tündik et al. (2018) Tündik, M.A., Szaszák, G., Gosztolya, G., Beke, A.: User-centric evaluation of automatic punctuation in asr closed captioning (2018)
* Tündik et al. (2017) Tündik, M.Á., Tarján, B., Szaszák, G.: A bilingual comparison of maxent-and rnn-based punctuation restoration in speech transcripts. In: 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom). pp. 000121–000126. IEEE (2017)
* Vandeghinste et al. (2018) Vandeghinste, V., Verwimp, L., Pelemans, J., Wambacq, P.: A comparison of different punctuation prediction approaches in a translation context. Proceedings EAMT (2018)
* Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Advances in neural information processing systems. pp. 5998–6008 (2017)
* Żelasko et al. (2018) Żelasko, P., Szymański, P., Mizgajski, J., Szymczak, A., Carmiel, Y., Dehak, N.: Punctuation prediction model for conversational speech. arXiv preprint arXiv:1807.00543 (2018)
# On finite dimensional representations of finite W-superalgebras
Husileng Xiao, School of Mathematical Science, Harbin Engineering University,
Harbin, 15001, China.<EMAIL_ADDRESS>
###### Abstract.
Let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}+\mathfrak{g}_{\bar{1}}$ be a basic Lie
superalgebra, and let $\mathcal{W}_{0}$ (resp. $\mathcal{W}$) be the finite
W-algebra (resp. W-superalgebra) constructed from a fixed nilpotent element in
$\mathfrak{g}_{\bar{0}}$. Based on a relation between the finite W-algebra
$\mathcal{W}_{0}$ and the W-superalgebra $\mathcal{W}$ found recently by the
author and Shu, we study the finite dimensional representations of finite
W-superalgebras in this paper. We first formulate and prove a version of
Premet’s conjecture for the finite W-superalgebras arising from basic simple Lie
superalgebras. As in the W-algebra case, Premet’s conjecture comes very close
to giving a classification of the finite dimensional simple
$\mathcal{W}$-modules. In the case where $\mathfrak{g}$ is a basic Lie
superalgebra of type I, we prove that the set of simple $\mathcal{W}$-supermodules
is in bijection with that of simple $\mathcal{W}_{0}$-modules; by presenting a
triangular decomposition of the tensor product of $\mathcal{W}$ with a
Clifford algebra, we also give an algorithm to compute the character of finite
dimensional simple $\mathcal{W}$-supermodules with integral central character.
## 1\. introduction
The finite W-superalgebras are mathematically a super generalization of the
finite W-algebras. They are closely related to the supersymmetric field
theories in physics as well as Lie superalgebras. Generalizing the
groundbreaking work [Pr1] of Premet in non-super case, Wang and Zhao in [WZ]
gave a mathematical definition to finite W-superalgebras from the modular
representation theory of Lie superalgebras. In the present paper we study the
finite dimensional irreducible representations of finite W-superalgebras. Let
$\mathrm{Irr}^{\mathrm{fin}}(\mathcal{W})$ stand for the set consist of their
isomorphism classes. In [BBG] and [BG], the authors give a Yangian
presentation of the W-superalgebra corresponding to a principal nilpotent
orbit in the general Lie superalgebras. Relying on this explicit presentation,
they gave a description of $\mathrm{Irr}^{\mathrm{fin}}(\mathcal{W})$ and
further detailed information on their highest weight structure. A Amitsur-
Levitzki identity was proved for W-superalgebras associated with principal
nilpotent orbits for $Q(N)$ in [PS1]. Then the authors obtain that any
irreducible representation of these W-superalgebras is finite dimensional.
These results seem to indicate that the representation theory of the finite
W-superalgebras is quite different from that of finite W-algebras. Presenting
a set of generators of the W-superalgebras associated to the minimal nilpotent
orbits, Zeng and Shu constructed irreducible representations of them with
dimension 1 or 2, see [ZS]. However unlike in the case of finite W-algebras,
some fundamental problems in representation theory of general finite
W-superalgebras are still open. In [SX] the authors generalize the Losev’s
Poisson geometric approach to the super case and make a step to give a
classification of finite dimensional irreducible representation of general
finite W-superalgebras. In this article we make a progress to this problem, by
proving Premet’s conjecture for Lie superalgebras of basic types. In
particular, we classify the finite dimensional simple
$\mathcal{W}$-supermodules with integral central character and obtain an
algorithm to compute their character in the basic type i@ case.
We hope that the readers could feel from here that the difference between
representation theory of finite W-algebras and W-superalgebras probably not
exceeds that between representation theory of Lie algebras and Lie
superalgebras.
### 1.1. On Premet’s conjecture for finite W-superalgebras
Let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ be a basic Lie superalgebra over an algebraically closed field $\mathbb{K}$ with $\mathrm{Char}(\mathbb{K})=0$, and let $\mathcal{U}$ and $\mathcal{U}_{0}$ be the enveloping algebras of $\mathfrak{g}$ and $\mathfrak{g}_{\bar{0}}$ respectively. Denote by $(\bullet,\bullet)$ the Killing form on $\mathfrak{g}$. Let $e\in\mathfrak{g}_{\bar{0}}$ and let $\chi\in\mathfrak{g}_{\bar{0}}^{*}$ be the element corresponding to $e$ via the Killing form. Pick an $\mathfrak{sl}_{2}$-triple $\\{f,h,e\\}\subset\mathfrak{g}_{\bar{0}}$ and let $\mathfrak{g}=\oplus_{i}\mathfrak{g}(i)$ (resp. $\mathfrak{g}_{\bar{0}}=\oplus_{i}\mathfrak{g}(i)\cap\mathfrak{g}_{\bar{0}}$) be the corresponding good $\mathbb{Z}$-grading. Denote by $\mathcal{W}$ and $\mathcal{W}_{0}$ the W-algebras associated to the pairs $(\mathfrak{g},e)$ and $(\mathfrak{g}_{\bar{0}},e)$ respectively. Let $\tilde{\mathcal{W}}$ be the extended W-superalgebra $\mathcal{A}_{{\ddagger}}$ defined in §3 [SX] (in §6 [Lo15] it is denoted by $\mathcal{A}_{{\dagger}}$). It was shown in [SX] that the three kinds of W-algebras are related as follows: (1) there is an embedding $\mathcal{W}_{0}\hookrightarrow\tilde{\mathcal{W}}$, and the latter is generated over the former by $\dim(\mathfrak{g}_{\bar{1}})$ odd elements; (2) there is a decomposition $\tilde{\mathcal{W}}=\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{W}$ of associative algebras, where $\mathrm{Cl}(V_{\bar{1}})$ is the Clifford algebra over a vector space $V_{\bar{1}}$ with a non-degenerate symmetric bilinear form; see Theorem 2.3 for the details. Essentially, as we pointed out in [SX], this means that $\mathcal{W}_{0}$ is to $\mathcal{W}$ as $\mathcal{U}_{0}$ is to $\mathcal{U}$. The representation theories of $\mathcal{W}$ and $\tilde{\mathcal{W}}$ are equivalent, see Proposition 2.5. However, as we will see in the present work, a significant advantage of considering $\tilde{\mathcal{W}}$ instead of $\mathcal{W}$ is that it is easy to relate $\tilde{\mathcal{W}}$ to $\mathcal{W}_{0}$. This enables us to use results on $\mathcal{W}_{0}$.
For a given associative algebra $\mathcal{A}$, denote by $\mathfrak{id}(\mathcal{A})$ the set of two-sided ideals of $\mathcal{A}$ and by $\mathrm{Prim}^{\mathrm{fin}}(\mathcal{A})$ the set of primitive ideals of $\mathcal{A}$ with finite codimension in $\mathcal{A}$. It is well known that $\mathrm{Prim}^{\mathrm{fin}}(\mathcal{A})$ is in bijection with the set $\mathrm{Irr}^{\mathrm{fin}}(\mathcal{A})$ of isomorphism classes of finite dimensional irreducible $\mathcal{A}$-modules. In [Lo10] Losev constructed an ascending map $\bullet^{{\dagger}}:\mathfrak{id}(\mathcal{W}_{0})\longrightarrow\mathfrak{id}(\mathcal{U}_{0})$ and a descending map $\bullet_{{\dagger}}:\mathfrak{id}(\mathcal{U}_{0})\longrightarrow\mathfrak{id}(\mathcal{W}_{0})$. These two maps are crucial to his study of representations of $\mathcal{W}_{0}$. The ascending map $\bullet^{{\dagger}}$ sends $\mathrm{Prim}^{\mathrm{fin}}(\mathcal{W}_{0})$ to the set $\mathrm{Prim}_{\mathbb{O}}(\mathcal{U}_{0})$ of primitive ideals of $\mathcal{U}_{0}$ supported on the Zariski closure of the adjoint orbit $\mathbb{O}=G_{\bar{0}}\cdot e$. Denote by $Q=Z_{G_{\bar{0}}}\\{e,h,f\\}$ the stabilizer of the triple $\\{e,h,f\\}$ in $G_{\bar{0}}$ under the adjoint action. Let $C_{e}=Q/Q^{\circ}$, where $Q^{\circ}$ is the identity component of $Q$. Premet's conjecture, which was proved in [Lo10], says that for any $\mathcal{J}\in\mathrm{Prim}_{\mathbb{O}}(\mathcal{U}_{0})$ the set $\\{\mathcal{I}\mid\mathcal{I}\in\mathrm{Prim}^{\mathrm{fin}}(\mathcal{W}_{0}),\mathcal{I}^{{\dagger}}=\mathcal{J}\\}$ is a single $C_{e}$-orbit. This gives an almost complete classification of $\mathrm{Irr}^{\mathrm{fin}}(\mathcal{W}_{0})$.
In this paper we generalize the above fact to the super case. The super analogs of the maps $\bullet^{{\dagger}}$ and $\bullet_{{\dagger}}$ were established in [SX]. By abuse of notation, we also denote them by $\bullet^{{\dagger}}$ and $\bullet_{{\dagger}}$ from now on. Denote by $\mathrm{Prim}_{\mathbb{O}}(\mathcal{U})$ the set of primitive ideals of $\mathcal{U}$ supported on the Zariski closure of the adjoint orbit $\mathbb{O}=G_{\bar{0}}\cdot e$; see §2 for the precise meaning of the term ‘supported’ in the super context. In §2 we will construct an action of $Q$ on $\tilde{\mathcal{W}}$ with the property that $Q^{\circ}$ leaves every two-sided ideal of $\tilde{\mathcal{W}}$ stable, see Proposition 2.1. This provides an action of $C_{e}$ on $\mathfrak{id}(\tilde{\mathcal{W}})$.
We also consider the $\mathbb{Z}_{2}$-graded version of the above setting. For a superalgebra $\mathcal{A}=\mathcal{A}_{\bar{0}}+\mathcal{A}_{\bar{1}}$, by an $\mathcal{A}$-supermodule we mean a $\mathbb{Z}_{2}$-graded $\mathcal{A}$-module. An ideal $\mathcal{I}$ of $\mathcal{A}$ is said to be graded primitive if it is the annihilator of a simple object in the category of $\mathcal{A}$-supermodules. We denote by $\mathrm{gr}.\mathrm{Prim}(\mathcal{A})$ the set of graded primitive ideals of $\mathcal{A}$. For a notation $\bullet$ used in the ungraded case, we always use $\mathrm{gr}.\bullet$ in the $\mathbb{Z}_{2}$-graded case, defined in the same way as above.
Since the action of $Q$ on $\tilde{\mathcal{W}}$ is
$\mathbb{Z}_{2}$-homogeneous, we also have an action of $C_{e}$ on
$\mathrm{gr}.\mathfrak{id}(\tilde{\mathcal{W}})$. Our first main result is the following.
###### Theorem 1.1.
For any $\mathcal{J}\in\mathrm{Prim}_{\mathbb{O}}(\mathcal{U})$, the set
$\\{\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}\mid\mathcal{I}\in\mathrm{Prim}^{\mathrm{fin}}(\mathcal{W}),\quad\mathcal{I}^{{\dagger}}=\mathcal{J}\\}$
defined from the primitive ideals of $\mathcal{W}$ lying over $\mathcal{J}$,
is a single $C_{e}$-orbit. For any
$\mathcal{J}\in\mathrm{gr}.\mathrm{Prim}_{\mathbb{O}}(\mathcal{U})$, the analogous set defined from the graded primitive ideals of $\mathcal{W}$ lying over $\mathcal{J}$ is also a single $C_{e}$-orbit.
We also have maps
$\bullet^{\tilde{{\dagger}}}:\mathfrak{id}(\tilde{\mathcal{W}})\rightarrow\mathfrak{id}(\mathcal{U})$
and
$\bullet_{\tilde{{\dagger}}}:\mathfrak{id}(\mathcal{U})\rightarrow\mathfrak{id}(\tilde{\mathcal{W}})$
defined similarly to $\bullet^{{\dagger}}$ and $\bullet_{{\dagger}}$. Theorem 1.1 is equivalent to saying that the set
$\\{\mathcal{\tilde{I}}\mid\mathcal{\tilde{I}}\in\mathrm{Prim}^{\mathrm{fin}}(\tilde{\mathcal{W}}),\quad\mathcal{\tilde{I}}^{\tilde{{\dagger}}}=\mathcal{J}\\}$
of primitive ideals lying over $\mathcal{J}$ is a single $C_{e}$-orbit.
Our strategy for proving the theorem is to apply Theorem 4.1.1 [Lo11] to the Harish-Chandra bimodule $\mathcal{U}$ over $\mathcal{U}_{0}$, together with the relation among $\mathcal{W}$, $\mathcal{W}_{0}$ and $\tilde{\mathcal{W}}$ obtained in Theorem 3.11 [SX]. Our approach is highly inspired by §6 [Lo15].
We can recover $\mathcal{I}$ from $\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}$ by Proposition 2.4. It was proved in Theorem 4.8 [SX] that the map $\bullet^{\dagger}$ sends $\mathrm{Prim}^{\mathrm{fin}}(\mathcal{W})$ to $\mathrm{Prim}_{\mathbb{O}}(\mathcal{U})$. Thus Theorem 1.1 almost completely reduces the problem of classifying $\mathrm{Prim}^{\mathrm{fin}}(\mathcal{W})=\mathrm{Irr}^{\mathrm{fin}}(\mathcal{W})$ to that of classifying $\mathrm{Prim}(\mathcal{U})$. Provided $\mathrm{Prim}(\mathcal{U})$ is known and $C_{e}$ is trivial, Theorem 1.1 gives a description of $\mathrm{Irr}^{\mathrm{fin}}(\mathcal{W})$, see §2.6. For recent progress on primitive ideals of Lie superalgebras, see [CM] and [Mu97], for example.
We say that $M\in\mathrm{Irr}^{\mathrm{fin}}(\tilde{\mathcal{W}})$ or $M^{\prime}\in\mathrm{Irr}^{\mathrm{fin}}(\mathcal{W})$ lies over a primitive ideal $\mathcal{J}$ of $\mathcal{U}$ if its annihilator does. It is well known that for basic classical Lie superalgebras $\mathfrak{g}$, any primitive ideal of $\mathcal{U}$ is the annihilator $\widehat{J}(\lambda)$ of the simple module $\widehat{L}(\lambda)$ for some $\lambda\in\mathfrak{h}^{*}$. We say that a finite dimensional simple $\mathcal{\tilde{W}}$-module has integral central character if it lies over $\widehat{J}(\lambda)$ for some integral $\lambda\in\mathfrak{h}^{*}$. Let $\mathrm{Irr}_{\lambda}(\tilde{\mathcal{W}})$ stand for the set of isomorphism classes of $\tilde{\mathcal{W}}$-supermodules with central character $\lambda$. Define $\mathrm{Irr}_{\lambda}(\mathcal{W}_{0})$ similarly for $\mathcal{W}_{0}$. Premet's conjecture gives us a $C_{e}$-action on $\mathrm{Irr}_{\lambda}(\mathcal{W}_{0})$ and $\mathrm{Irr}_{\lambda}(\tilde{\mathcal{W}})$, see §2.5.
### 1.2. On finite dimensional representations of basic type I W-superalgebras
In the remaining part of this section, let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}+\mathfrak{g}_{\bar{1}}$ be a basic simple Lie superalgebra of type I. Namely, $\mathfrak{g}$ is one of the following: type (A): $\mathfrak{gl}(m|n),\mathfrak{sl}(m|n),\mathfrak{sl}(n|n)/\mathbb{C}I_{n|n}$; type (C): $\mathfrak{osp}(2|2n)$. We begin by discussing the finite dimensional simple $\mathcal{\tilde{W}}$-supermodules.
A classification of simple $\mathfrak{g}$-supermodules was obtained in [CM]. It is proved there that the set of simple $\mathfrak{g}$-supermodules is in one-to-one correspondence with that of simple $\mathfrak{g}_{\bar{0}}$-modules. For a simple $\mathfrak{g}_{\bar{0}}$-module $V$, we denote by $\widehat{V}$ the simple $\mathfrak{g}$-supermodule under this correspondence, which is the unique simple quotient of the Kac module $K(V)$. This result is fundamental to the present work. We will descend this result to the context of W-algebras by using Skryabin's equivalence. Namely, we prove that there is a one-to-one correspondence among $\mathrm{Irr}(\mathcal{W}_{0})$, $\mathrm{gr}.\mathrm{Irr}(\tilde{\mathcal{W}})$ and $\mathrm{gr}.\mathrm{Irr}(\mathcal{W})$. By abuse of notation, for a simple $\mathcal{W}_{0}$-module $N$, we also denote by $\widehat{N}$ the unique simple $\tilde{\mathcal{W}}$-supermodule under this correspondence. However, this classification of $\mathrm{Irr}(\tilde{\mathcal{W}})$ is not well organized; for example, it is difficult to see the behavior of the $C_{e}$-action under the correspondence. To remedy this, we give another classification of $\mathrm{Irr}(\tilde{\mathcal{W}})$. To that end, we present a triangular decomposition
$\mathcal{\tilde{W}}=\mathcal{\tilde{W}}_{+}^{\\#}\otimes_{\mathbb{K}}\mathcal{W}_{0}\otimes_{\mathbb{K}}\mathcal{\tilde{W}}_{-}^{\\#}$
of $\tilde{\mathcal{W}}$. This can be compared with the decomposition $\mathfrak{g}=\mathfrak{g}_{-1}+\mathfrak{g}_{0}+\mathfrak{g}_{1}$ of type I simple Lie superalgebras. A crucial point is that $\mathcal{W}_{0}$ is the even finite W-algebra associated with $(\mathfrak{g}_{\bar{0}},e)$. Using this decomposition, for any finite dimensional simple $\mathcal{W}_{0}$-module $N$, we define the ‘Verma’ module $\Delta^{K}_{\mathcal{\tilde{W}}}(N)$ and prove that it has a unique simple $\mathbb{Z}_{2}$-graded quotient $L^{K}_{\mathcal{\tilde{W}}}(N)$. We point out that a triangular decomposition $\mathcal{W}=\mathcal{W}_{+}^{\\#}\otimes_{\mathbb{K}}\mathcal{W}_{0}^{\prime}\otimes_{\mathbb{K}}\mathcal{W}_{-}^{\\#}$ for the usual finite W-superalgebra $\mathcal{W}$ is easy to obtain by the methods of the present paper. A triangular decomposition has already been obtained for the $\mathcal{W}$ arising from the general linear Lie superalgebras by using the super Yangian presentation; see [Pe] for general $e$ and [BBG] for a principal nilpotent element $e$. Compared with the one for $\mathcal{\tilde{W}}$, a disadvantage of the latter decomposition is that it is highly non-trivial to relate $\mathcal{W}_{0}^{\prime}$ and $\mathcal{W}_{0}$ for general $e$, although these two algebras coincide for principal nilpotent $e$.
Our main tool for computing the characters of simple $\mathcal{\tilde{W}}$-modules with integral central character is the generalized Soergel functor $\mathbb{V}$ for $\mathcal{W}_{0}$ constructed in [Lo15]. Let $P\subset G_{\bar{0}}$ (resp. $\mathfrak{p}=\mathrm{Lie}(P)$) be the suitable parabolic subgroup (resp. subalgebra) constructed from an $\mathfrak{sl}_{2}$-triple in [Lo15]. Denote by $\mathcal{O}^{P}$ the corresponding parabolic category $\mathcal{O}$ and by $\Lambda_{\mathfrak{p}}$ the set of integral $\lambda\in\mathfrak{h}^{*}$ such that the simple highest weight module $L(\lambda)$ lies in $\mathcal{O}^{P}$. Let $\mathbb{V}:\mathcal{O}^{P}\rightarrow\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)$ be the generalized Soergel functor for $\mathcal{W}_{0}$ defined in [Lo15]. The notations will be recalled in §4. Let $\lambda\in\mathfrak{h}^{*}$ be integral such that $L(\lambda)\in\mathcal{O}^{P}$ and $\mathbb{V}(L(\lambda))\neq 0$. By describing $\mathbb{V}(L(\lambda))$, Losev gave a character formula for the finite dimensional simple $\mathcal{W}_{0}$-modules with integral central characters. His character formula is based on the parabolic version of Kazhdan-Lusztig theory for $\mathcal{O}^{P}$. We describe the image of the simple $\mathfrak{g}$-supermodule $\widehat{L}(\lambda)$ under $\mathbb{V}$. We then use it to give an algorithm to compute the characters of the simple modules lying over $\widehat{J}(\lambda)$, based on the $\mathfrak{g}_{\bar{0}}$-rough structure of simple $\mathfrak{g}$-supermodules or the parabolic Kazhdan-Lusztig theory of Lie superalgebras.
To summarize, for $\mathfrak{g}$ a basic Lie superalgebra of type I, we will obtain the following results.
* (1)
We obtain a triangular decomposition for $\mathcal{\tilde{W}}$ and some
standard properties of the Verma modules defined by it. We prove that the map
$\mathrm{Irr}^{\mathrm{fin}}(\mathcal{W}_{0})\rightarrow\mathrm{gr}.\mathrm{Irr}^{\mathrm{fin}}(\tilde{\mathcal{W}});N\mapsto
L^{K}_{\mathcal{\tilde{W}}}(N)$ is bijective and $C_{e}$-equivariant, see
Proposition 4.1.
* (2)
For $\lambda\in\Lambda_{\mathfrak{p}}$, let
$\mathbb{V}(L(\lambda))=\oplus_{i\in I_{\lambda}}N_{i}$ be the description of
$\mathbb{V}(L(\lambda))$ obtained in [Lo15]. Here $I_{\lambda}$ is a finite set and each $N_{i}$ is a finite dimensional simple $\mathcal{W}_{0}$-module. Then we have
$\mathbb{V}(\widehat{L}(\lambda))=\oplus_{i\in I_{\lambda}}L^{K}_{\mathcal{\tilde{W}}}(N_{i}),$
see Theorem 4.4.
* (3)
For $\lambda\in\Lambda_{\mathfrak{p}}$, we will present an algorithm to compute the characters of the simple $\tilde{\mathcal{W}}$-supermodules (equivalently, $\mathcal{W}$-supermodules) lying over $\widehat{J}(\lambda)$,
see §4.4.
## 2\. Super Premet’s conjecture
We first recall the Poisson-geometric realization of the finite W-(super)algebras in the sense of Losev. Denote by $A_{0}$ (resp. $A$) the Poisson algebra $S[\mathfrak{g}_{\bar{0}}]$ (resp. the Poisson superalgebra $S[\mathfrak{g}]$) with the standard bracket $\\{,\\}$ given by $\\{x,y\\}=[x,y]$ for all $x,y\in\mathfrak{g}_{\bar{0}}$ (resp. $\mathfrak{g}$). Let $\hat{A}_{0}$ (resp. $\hat{A}$) be the completion of $A_{0}$ (resp. $A$) with respect to the point $\chi\in\mathfrak{g}_{\bar{0}}^{*}$ (resp. $\chi\in\mathfrak{g}^{*}$). Let $\mathcal{U}_{\hbar,0}^{\wedge}$ (resp. $\mathcal{U}_{\hbar}^{\wedge}$) be the formal quantization of $\hat{A}_{0}$ (resp. $\hat{A}$) given by $x\ast y-y\ast x=\hbar^{2}[x,y]$ for all $x,y\in\mathfrak{g}_{\bar{0}}$ (resp. $\mathfrak{g}$). Equip all the above algebras with the Kazhdan $\mathbb{K}^{*}$-action arising from the good $\mathbb{Z}$-grading on $\mathfrak{g}$, with $t\cdot\hbar=t\hbar$ for all $t\in\mathbb{K}^{*}$.
Denote by $\omega$ the even symplectic form on $[f,\mathfrak{g}]$ given by $\omega(x,y)=\chi([x,y])$. Let $V=V_{\bar{0}}\oplus V_{\bar{1}}$ be the superspace $[f,\mathfrak{g}]$ if $\dim(\mathfrak{g}(-1))$ is even. If $\dim(\mathfrak{g}(-1))$ is odd, let $V\subset[f,\mathfrak{g}]$ be a superspace with standard basis $v_{i}$, $i\in\\{\pm 1,\ldots,\pm(\dim([f,\mathfrak{g}])-1)/2\\}$, such that $\omega(v_{i},v_{j})=\delta_{i,-j}$. We choose such a $V$ in the present paper to match the definition of the W-superalgebra given in [WZ].
For a superspace $V$ with an even symplectic form, we denote by $\mathbf{A}_{\hbar}(V)$ the corresponding Weyl superalgebra; see Example 1.5 [SX] for the definition. In particular, if $V$ is purely odd, we denote the Weyl superalgebra $\mathbf{A}_{\hbar}(V)$ by $\mathrm{Cl}_{\hbar}(V)$ and call it a Clifford algebra.
It was obtained in §2.3 [Lo11] that there is a $Q\times\mathbb{K}^{*}$-equivariant isomorphism of quantum algebras
$\Phi_{0,\hbar}:\mathbf{A}_{\hbar}^{\wedge}(V_{\bar{0}})\otimes\mathcal{W}_{0,\hbar}^{\wedge}\longrightarrow\mathcal{U}_{0,\hbar}^{\wedge}.$
Moreover:
###### Proposition 2.1.
* (1)
We have a $Q\times\mathbb{K}^{*}$-equivariant isomorphism
$\tilde{\Phi}_{\hbar}:\mathbf{A}_{\hbar}^{\wedge}(V_{\bar{0}})\otimes\mathcal{\tilde{W}}_{\hbar}^{\wedge}\longrightarrow\mathcal{U}_{\hbar}^{\wedge}$
and a $\mathbb{K}^{*}$-equivariant isomorphism
$\Phi_{1,\hbar}:\mathrm{Cl}_{\hbar}(V_{\bar{1}})\otimes\mathcal{W}_{\hbar}^{\wedge}\longrightarrow\mathcal{\tilde{W}}_{\hbar}$
of quantum algebras. Together these give us a $\mathbb{K}^{*}$-equivariant isomorphism
$\Phi_{\hbar}:\mathbf{A}_{\hbar}^{\wedge}(V)\otimes\mathcal{W}_{\hbar}^{\wedge}\longrightarrow\mathcal{U}_{\hbar}^{\wedge}$
of quantum algebras. Here $\mathcal{\tilde{W}}_{\hbar}^{\wedge}$ is defined as the centralizer of $\tilde{\Phi}_{\hbar}(V_{\bar{0}})$ in $\mathcal{U}_{\hbar}^{\wedge}$, and $\mathcal{W}_{\hbar}^{\wedge}$ is defined similarly.
* (2)
There are isomorphisms
$(\mathcal{\tilde{W}}_{\hbar}^{\wedge})_{\mathbb{K}^{*}-{\mathrm{lf}}}/(\hbar-1)=\mathcal{\tilde{W}};\quad(\mathcal{W}_{0,\hbar}^{\wedge})_{\mathbb{K}^{*}-{\mathrm{lf}}}/(\hbar-1)=\mathcal{W}_{0}\quad\text{and}\quad(\mathcal{W}_{\hbar}^{\wedge})_{\mathbb{K}^{*}-{\mathrm{lf}}}/(\hbar-1)=\mathcal{W}$
of associative algebras. Here, for a vector space $V$ with a $\mathbb{K}^{*}$-action, we denote by $(V)_{\mathbb{K}^{*}-{\mathrm{lf}}}$ the sum of all finite dimensional $\mathbb{K}^{*}$-stable subspaces of $V$.
* (3)
There is an embedding
$\mathfrak{q}:=\mathrm{Lie}(Q)\hookrightarrow\mathcal{\tilde{W}}$ of Lie
algebras such that the adjoint action of $\mathfrak{q}$ coincides with the
differential of the $Q$-action.
###### Proof.
(1) Suppose that $V_{\bar{0}}$ has a basis $\\{v_{i}\\}_{1\leq|i|\leq l}$ with
$\omega(v_{i},v_{j})=\delta_{i,-j}$. The isomorphism $\Phi_{0,\hbar}$ gives us
a $Q$-equivariant embedding
$\tilde{\Phi}_{\hbar}:V_{\bar{0}}\hookrightarrow\mathcal{U}_{\hbar}^{\wedge}$
with
$[\tilde{\Phi}_{\hbar}(v_{i}),\tilde{\Phi}_{\hbar}(v_{j})]=\delta_{i,-j}\hbar$.
Now the isomorphism $\tilde{\Phi}_{\hbar}$ can be constructed as in the proof
of Theorem 1.6 [SX]. For the construction of isomorphism $\Phi_{1,\hbar}$, see
also Case 1 in the proof of Theorem 1.6 [SX]. The isomorphism $\Phi_{\hbar}$
can be constructed from the embedding
$\Phi_{\hbar}:V\hookrightarrow\mathcal{U}_{\hbar}^{\wedge}$ given by
$\Phi_{\hbar}|_{V_{\bar{0}}}=\tilde{\Phi}_{\hbar}$ and
$\Phi_{\hbar}|_{V_{\bar{1}}}=\Phi_{1,\hbar}$.
(2) The second isomorphism was proved in [Lo11]. The remaining statements follow by an argument similar to the proof of Theorem 3.8 [SX].
(3) View $\mathcal{U}$ as a Harish-Chandra $\mathcal{U}_{0}$-bimodule and use §2.5 [Lo11]. ∎
###### Remark 2.2.
In the proposition above we are not claiming that $\Phi_{\hbar}$ is
$Q$-equivariant, although this is probably true.
Proposition 2.1 gives us the following $Q\times\mathbb{K}^{*}$-equivariant version of Theorem 4.1 [SX].
###### Theorem 2.3.
* (1)
We have a $Q\times\mathbb{K}^{*}$-equivariant embedding
$\mathcal{W}_{0}\hookrightarrow\tilde{\mathcal{W}}$ of associative algebras.
The latter is generated over the former by $\dim(\mathfrak{g}_{\bar{1}})$ odd elements.
* (2)
Moreover we have an isomorphism
$\Phi_{1}:\tilde{\mathcal{W}}\longrightarrow\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{W}$
of algebras. Here $\mathrm{Cl}(V_{\bar{1}})$ is the Clifford algebra on the
vector space $V_{\bar{1}}$ with symmetric bilinear form $\chi([\cdot,\cdot])$.
###### Proof.
Here we can repeat the proof of Theorem 4.1 [SX]. ∎
Since it will be used frequently later, it is helpful to recall the construction of $\Phi_{1}$ in the following slightly more general setting.
###### Proposition 2.4.
For a two-sided ideal $\tilde{\mathcal{I}}$ of $\tilde{\mathcal{W}}$, we have $\tilde{\mathcal{I}}=\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}$. Here $\mathcal{I}$ is the two-sided ideal of $\mathcal{W}$ consisting of the elements of $\tilde{\mathcal{I}}$ anti-commuting with $\mathrm{Cl}(V_{\bar{1}})$.
###### Proof.
By Theorem 2.3 (2) there exist
$x_{1},\ldots,x_{\dim(V_{\bar{1}})}\in\tilde{\mathcal{W}}$ with
$x_{i}^{2}=1\text{ and }x_{i}x_{j}=-x_{j}x_{i}\text{ for all distinct
$i,j\in\\{1,\ldots,\dim(V_{\bar{1}})\\}$.}$
By a quantum analog of Lemma 2.2(2) [SX], we have that $\tilde{\mathcal{I}}$ is equal to $\mathrm{Cl}(\mathbb{K}\langle x_{1}\rangle)\otimes\tilde{\mathcal{I}}_{1}$ as rings. Here we denote by $\tilde{\mathcal{I}}_{1}$ the space of elements of $\tilde{\mathcal{I}}$ anti-commuting with $x_{1}$. Now the proposition follows by induction on $\dim(V_{\bar{1}})$. ∎
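The generator relations used in this proof have a standard concrete matrix realization. As a quick numerical sanity check (an illustration only, not part of the argument), the Pauli matrices realize two odd Clifford generators satisfying $x_{i}^{2}=1$ and $x_{i}x_{j}=-x_{j}x_{i}$:

```python
import numpy as np

# Pauli matrices sigma_x, sigma_y: a matrix realization of two
# Clifford generators x_1, x_2 with x_i^2 = 1, x_1 x_2 = -x_2 x_1.
x1 = np.array([[0, 1], [1, 0]], dtype=complex)     # sigma_x
x2 = np.array([[0, -1j], [1j, 0]], dtype=complex)  # sigma_y

I2 = np.eye(2)
assert np.allclose(x1 @ x1, I2)          # x_1^2 = 1
assert np.allclose(x2 @ x2, I2)          # x_2^2 = 1
assert np.allclose(x1 @ x2, -(x2 @ x1))  # anti-commutation
```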
### 2.1. Equivalence of $\mathcal{W}\mathrm{-Mod}$ and
$\mathcal{\tilde{W}}\mathrm{-Mod}$
Let $\mathfrak{u}_{\bar{1}}$ be a Lagrangian subspace of $V_{\bar{1}}$ and $\mathfrak{u}^{*}_{\bar{1}}$ its dual (given by the non-degenerate symmetric bilinear form $\omega$). Note that $V_{\bar{1}}=\mathfrak{u}_{\bar{1}}\oplus\mathfrak{u}^{*}_{\bar{1}}$. View the exterior algebra $\bigwedge(\mathfrak{u}_{\bar{1}}^{*})$ as a $\mathrm{Cl}(V_{\bar{1}})$-module by
$u\cdot x=ux\text{ and }v\cdot x=\omega(v,x)\text{ for all
$u,x\in\mathfrak{u}_{\bar{1}}^{*}$ and $v\in\mathfrak{u}_{\bar{1}}$ }.$
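For $\dim(\mathfrak{u}_{\bar{1}})=1$ the module $\bigwedge(\mathfrak{u}_{\bar{1}}^{*})$ is two-dimensional and the action above can be written out in matrices. The following sketch (a numerical illustration only, under the normalisation $\omega(x_{1},x_{1}^{*})=1$) checks that exterior multiplication by $x_{1}^{*}$ and contraction by $x_{1}$ satisfy the expected relations:

```python
import numpy as np

# Basis of the exterior algebra for dim(u) = 1: e0 = 1, e1 = x1*.
# x1* acts by exterior multiplication: 1 -> x1*, x1* -> 0.
wedge = np.array([[0, 0], [1, 0]], dtype=float)
# x1 acts by contraction with omega: x1* -> omega(x1, x1*) = 1, 1 -> 0.
contract = np.array([[0, 1], [0, 0]], dtype=float)

# Both generators square to zero and their anti-commutator is
# the identity, i.e. omega(x1, x1*) times the identity.
assert np.allclose(wedge @ wedge, np.zeros((2, 2)))
assert np.allclose(contract @ contract, np.zeros((2, 2)))
assert np.allclose(contract @ wedge + wedge @ contract, np.eye(2))
```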
The following proposition establishes an explicit relation between the
categories $\mathcal{W}$-Mod and $\mathcal{\tilde{W}}$-Mod. It corresponds to
Proposition 2.4 via the bijective map
$\mathrm{Irr}^{\mathrm{fin}}(\mathcal{\tilde{W}})\rightarrow\mathrm{Prim}^{\mathrm{fin}}(\mathcal{\tilde{W}})$.
###### Proposition 2.5.
For any $M\in\mathcal{\tilde{W}-}\mathrm{Mod}$, we have an isomorphism
$\bigwedge(\mathfrak{u}_{\bar{1}}^{*})\otimes M^{\prime}\rightarrow M;x\otimes
m\mapsto x\cdot m$
of $\mathcal{\tilde{W}}$-modules. Here $M^{\prime}$ is the subspace of $M$ annihilated by $\mathfrak{u}_{\bar{1}}$, which is naturally a $\mathcal{W}$-module; we view $\bigwedge(\mathfrak{u}_{\bar{1}}^{*})\otimes M^{\prime}$ as a $\mathcal{\tilde{W}}$-module via the isomorphism in Theorem 2.3. Hence the functor
functor
$\mathcal{\tilde{W}}\mathrm{-Mod}\rightarrow\mathcal{W}\mathrm{-Mod}:M\mapsto
M^{\prime}$ is an equivalence of categories with inverse
$N\mapsto\bigwedge(\mathfrak{u}_{\bar{1}}^{*})\otimes N$.
The proof is very similar to the proofs of Proposition 2.4 and Lemma 2.2(2) [SX]. We give it for the reader's convenience.
###### Proof.
Let $x_{1},\ldots,x_{\dim(\mathfrak{u}_{\bar{1}})}$ be a basis of $\mathfrak{u}_{\bar{1}}$ and $x_{1}^{*},\ldots,x_{\dim(\mathfrak{u}_{\bar{1}})}^{*}$ be the dual basis of $\mathfrak{u}_{\bar{1}}^{*}$ with $\omega(x_{i},x_{j}^{*})=\delta_{i,j}$. We
claim that there is an isomorphism
$\Psi_{1}:\mathrm{Cl}(\mathbb{C}\langle
x_{1},x_{1}^{*}\rangle)\otimes\mathcal{\tilde{W}}_{1}\rightarrow\mathcal{\tilde{W}}$
of algebras. Here $\mathcal{\tilde{W}}_{1}$ is the super-centralizer of $x_{1},x_{1}^{*}$ in $\mathcal{\tilde{W}}$, and the isomorphism is given by multiplication in $\mathcal{\tilde{W}}$. Note that for any $y\in\mathcal{\tilde{W}}$, we have
$\begin{array}[]{lll}&y=y-x_{1}[x_{1}^{*},y]-x_{1}^{*}[x_{1},y]\\\
&+x_{1}([x_{1}^{*},y]-x_{1}^{*}[x_{1},[x_{1}^{*},y]])+x_{1}x_{1}^{*}[x_{1},[x_{1}^{*},y]]\\\
&+x_{1}^{*}([x_{1},y]-x_{1}[x_{1}^{*},[x_{1},y]])+x_{1}x_{1}^{*}[x_{1}^{*},[x_{1},y]].\end{array}$
Therefore $\Psi_{1}$ is surjective. Suppose that
$w_{0}+x_{1}w_{1}+x_{1}^{*}w_{2}+x_{1}x_{1}^{*}w_{3}=0$
for some $w_{i}\in\mathcal{\tilde{W}}_{1},i=0,1,2,3.$ Applying the operator $[x_{1},[x_{1}^{*},\bullet]]$ to both sides we get $w_{3}=0$, and similarly we get $w_{i}=0$ for $i=0,1,2.$ So $\Psi_{1}$ is also injective. Thus the claim
$(\mathcal{\tilde{W}}_{1},\mathcal{\tilde{W}})$. Namely there is an
isomorphism
$\Psi_{1,M}:\bigwedge(x_{1})\otimes M^{\prime}_{1}\rightarrow M$
of $\mathcal{\tilde{W}}$-modules. Here the notations have meanings analogous to those in the proposition. Indeed, for any $m\in M$, we have
$m=m-x_{1}^{*}(x_{1}\cdot m)+x_{1}^{*}x_{1}\cdot m.$
Since $x_{1}\cdot m$ and $m-x_{1}^{*}(x_{1}\cdot m)\in M^{\prime}_{1}$,
$\Psi_{1,M}$ is surjective. Similarly as before, we can check that
$\Psi_{1,M}$ is injective. Now the first statement follows by repeating the
above procedure $\dim(\mathfrak{u}_{\bar{1}})$ times. The second statement is
a direct consequence of the first one. ∎
### 2.2. The maps $\bullet^{{\dagger}}$ and $\bullet_{{\dagger}}$
We first recall the construction of the maps $\bullet^{{\dagger}}$ and $\bullet_{{\dagger}}$ between $\mathfrak{id}(\mathcal{W})$ and $\mathfrak{id}(\mathcal{U})$ in [SX]. For $\mathcal{I}\in\mathfrak{id}(\mathcal{W})$, denote by $\mathrm{R}_{\hbar}(\mathcal{I})\subset\mathcal{W}_{\hbar}$ the Rees algebra associated with $\mathcal{I}$ and by $\mathrm{R}_{\hbar}(\mathcal{I})^{\wedge}\subset\mathcal{W}_{\hbar}^{\wedge}$ the completion of $\mathrm{R}_{\hbar}(\mathcal{I})$ at $0$. Let
$\mathbf{A}(\mathcal{I})^{\wedge}_{\hbar}=\mathbf{A}_{\hbar}(V)^{\wedge}\otimes\mathrm{R}_{\hbar}(\mathcal{I})^{\wedge}$
and set
$\mathcal{I}^{{\dagger}}=(\mathcal{U}_{\hbar}\cap\Phi_{\hbar}(\mathbf{A}(\mathcal{I})^{\wedge}_{\hbar}))/(\hbar-1)$.
For an ideal $\mathcal{J}\in\mathfrak{id}(\mathcal{U})$, denote by
$\bar{\mathcal{J}}_{\hbar}$ the closure of $\mathbf{R}_{\hbar}(\mathcal{J})$
in $\mathcal{U}^{\wedge_{\chi}}_{\hbar}$. Define $\mathcal{J}_{{\dagger}}$ to
be the unique (by Proposition 3.4(3) [SX]) ideal in $\mathcal{W}$ such that
$\mathbf{R}_{\hbar}(\mathcal{J}_{{\dagger}})=\Phi_{\hbar}^{-1}(\bar{\mathcal{J}}_{\hbar})\cap\mathbf{R}_{\hbar}(\mathcal{W}).$
A $\mathfrak{g}_{\bar{0}}$-bimodule $M$ is said to be a Harish-Chandra (HC) bimodule if $M$ is finitely generated and the adjoint action of $\mathfrak{g}_{\bar{0}}$ on $M$ is locally finite. For any two-sided ideal $\mathcal{J}\subset\mathcal{U}$ (resp. $\mathcal{I}\subset\tilde{\mathcal{W}}$), we denote by $\mathcal{J}_{\tilde{{\dagger}}}$ (resp. $\mathcal{I}^{\tilde{{\dagger}}}$) the image of $\mathcal{J}$ (resp. $\mathcal{I}$) under the functor $\bullet_{{\dagger}}$ (resp. $\bullet^{{\dagger}}$) in §3 [Lo11]. Here we view $\mathcal{J}$ and $\mathcal{I}$ as HC bimodules over $\mathfrak{g}_{\bar{0}}$ and $\mathcal{W}_{0}$ respectively. The following lemma follows directly from the above construction and Theorem 2.3.
###### Lemma 2.6.
We have that
$(\mathrm{Cl(V_{\bar{1}})}\otimes\mathcal{I})^{\tilde{{\dagger}}}=\mathcal{I}^{{\dagger}}$
and
$\mathcal{I}_{\tilde{{\dagger}}}=\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}_{{\dagger}}$.
### 2.3. Properties of $\bullet^{{\dagger}}$ and $\bullet_{{\dagger}}$ after
[SX]
For an associative algebra $\mathcal{A}$, we denote by $\mathrm{GK}\dim(\mathcal{A})$ the Gelfand-Kirillov dimension of $\mathcal{A}$ (for the definition, see [KL]). The associated variety
$\mathbf{V}(\mathcal{J})$ of a two sided ideal
$\mathcal{J}\in\mathfrak{id}(\mathcal{U})$, is defined to be the associated
variety $\mathbf{V}(\mathcal{J}_{0})$ of
$\mathcal{J}_{0}=\mathcal{J}\cap\mathcal{U}_{0}$. We say that $\mathcal{J}$ is
supported on $\mathbf{V}(\mathcal{J})$ in this case.
###### Lemma 2.7.
For any two-sided ideal $\mathcal{I}\subset\mathcal{U}$, we have
$\mathrm{GK}\dim(\mathcal{U}/\mathcal{I})=\mathrm{GK}\dim(\mathcal{U}_{0}/\mathcal{I}_{0})=\dim(\mathbf{V}(\mathcal{I})).$
###### Proof.
Note that we have the natural embedding
$\mathcal{U}_{0}/\mathcal{I}_{0}\hookrightarrow\mathcal{U}/\mathcal{I}$. The
first equality follows from the definition of Gelfand-Kirillov dimension (see the Definition on p. 14 [KL] and the remark following it) and the PBW theorems for $\mathcal{U}(\mathfrak{g}_{\bar{0}})$ and $\mathcal{U}$. The second equality follows from Corollary 5.4 [BK]. ∎
The following proposition and its proof are a super version of Theorem 1.2.2(vii) [Lo10] in a special case.
###### Proposition 2.8.
For any $\mathcal{J}\in\mathrm{Prim}_{\mathbb{O}}(\mathcal{U})$, the set
$\\{\mathcal{I}\in\mathfrak{id}(\mathcal{W})\mid\text{ $\mathcal{I}$ is prime, $\mathcal{I}^{{\dagger}}=\mathcal{J}$ }\\}$ is exactly the set of minimal prime ideals containing $\mathcal{J}_{{\dagger}}$.
###### Proof.
Suppose that $\mathcal{I}$ is a prime ideal of $\mathcal{W}$ with $\mathcal{I}^{{\dagger}}=\mathcal{J}$. Proposition 4.5 [SX] implies that $\mathcal{J}_{{\dagger}}\subset\mathcal{I}$. So $\mathcal{I}$ has finite codimension in $\mathcal{W}$. Hence we deduce that $\mathcal{I}$ is minimal by Corollary 3.6 [BK]. Now suppose that $\mathcal{I}\subset\mathcal{W}$ is a minimal prime ideal with $\mathcal{J}_{{\dagger}}\subset\mathcal{I}$. It follows from Proposition 4.6 [SX] that $\mathcal{J}_{\dagger}$ has finite codimension in $\mathcal{W}$. Thus $\tilde{\mathcal{I}}=\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}$ has finite codimension in $\tilde{\mathcal{W}}$. Hence $\tilde{\mathcal{I}}_{0}=\mathcal{W}_{0}\cap\tilde{\mathcal{I}}$ has finite codimension in $\mathcal{W}_{0}$. Since $\mathcal{I}^{{\dagger}}\cap\mathcal{U}_{0}=(\tilde{\mathcal{I}}_{0})^{\tilde{{\dagger}}}$, we obtain that $\mathcal{I}^{{\dagger}}$ is supported on $G_{\bar{0}}\cdot\chi$ by the proof of Theorem 1.2.2(vii) [Lo10]. Thus by Lemma 2.7 and Corollary 3.6 [BK], we have $\mathcal{I}^{{\dagger}}=\mathcal{J}$. ∎
Let $\sigma$ be the automorphism of superalgebra
$\mathcal{A}=\mathcal{A}_{\bar{0}}+\mathcal{A}_{\bar{1}}$ given by
$\sigma(x)=x_{0}-x_{1}$ for any $x=x_{0}+x_{1}$ in $\mathcal{A}$. An ideal of
$\mathcal{A}$ is $\mathbb{Z}_{2}$-graded if and only if it is invariant under
$\sigma$. We have the following relation between primitive and graded
primitive ideals of $\mathcal{A}$.
###### Lemma 2.9 (Lemma 7.6.3, [Mu12]).
For any graded primitive ideal $\mathcal{I}^{{}^{\prime}}$ of $\mathcal{A}$,
there exists a primitive ideal $\mathcal{I}\subset\mathcal{A}$ such that
$\mathcal{I}^{{}^{\prime}}=\mathcal{I}\cap\sigma(\mathcal{I})$.
### 2.4. Proof of the main result
Now we are ready to prove our first main result.
Proof of Theorem 1.1
We prove the theorem by a similar argument as in the proof of Conjecture 1.2.1
[Lo11]. Indeed, let $\mathcal{I}_{1},\ldots,\mathcal{I}_{l}$ be the minimal
prime ideal containing $\mathcal{J}_{{\dagger}}$, for a fixed
$\mathcal{J}\in\mathrm{Prim}_{\mathbb{O}}(\mathcal{U})$. Since
$\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}_{1}$ is stable under $Q^{\circ}$,
$\bigcap_{\gamma\in
C_{e}}\gamma(\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}_{1})$ is $Q$-stable.
Set $\mathcal{J}^{1}=(\bigcap_{\gamma\in
C_{e}}\gamma(\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}_{1}))^{\tilde{{\dagger}}}$,
then by Theorem 4.1.1 [Lo11] we have
$(\mathcal{J}^{1})_{\tilde{{\dagger}}}=\bigcap_{\gamma\in
C_{e}}\gamma(\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}_{1})$. Thus we have
$\mathcal{J}=(\mathcal{I}_{1})^{{\dagger}}\supset\mathcal{J}^{1}\supset\mathcal{J}$
(The first equality follows from Lemma 2.7 and Corollary 3.6 [BK]). Hence
$\mathcal{J}_{\tilde{{\dagger}}}=\bigcap_{\gamma\in
C_{e}}\gamma(\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}_{1})$. We obtain that
$\gamma(\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}_{1})=\mathrm{Cl}(V_{\bar{1}})\otimes\mathcal{I}_{\gamma(1)}$
for some $\gamma(1)\in\\{1,\ldots,l\\}$ by Proposition 3.1.10 [Di] and
Corollary 2.4. Thus we have $\mathcal{I}=\bigcap_{\gamma\in
C_{e}}\mathcal{I}_{\gamma(1)}$ by Proposition 3.1.10 [Di] and Lemma 2.6. Now
the proof is completed by Proposition 2.8.
In the $\mathbb{Z}_{2}$-graded case, note that the algebra automorphism given
by $g\in Q$ commutes with $\sigma$. Thus the second statement follows from the
first one and Lemma 2.9. $\hfill\Box$
### 2.5. Finite dimensional representations of $\mathcal{\tilde{W}}$
Now we point out the role of Theorem 1.1 in describing
$\mathrm{Irr}^{\mathrm{fin}}(\mathcal{\tilde{W}})$. As we recalled earlier the
map
$\mathrm{Irr}^{\mathrm{fin}}(\mathcal{\tilde{W}})\rightarrow\mathrm{Prim}^{\mathrm{fin}}(\mathcal{\tilde{W}});M\mapsto\mathrm{Ann}(M)$
is bijective. The inverse is given by the fact that, for
$\mathcal{I}\in\mathrm{Prim}^{\mathrm{fin}}(\mathcal{\tilde{W}})$, the finite
dimensional simple algebra $\mathcal{\tilde{W}}/\mathcal{I}$ is isomorphic to
$\mathrm{End}(M)$ for some finite dimensional vector space $M$. We have a
similar bijection in the $\mathbb{Z}_{2}$-graded case by Lemma 2.9.
Now let $M\in\mathrm{Irr}^{\mathrm{fin}}(\mathcal{\tilde{W}})$ and
$\mathcal{I}=\mathrm{Ann}(M)$. For $g\in C_{e}=Q/Q^{\circ}$ and a
representative $g^{\prime}$ of it in $Q$, we denote by ${}^{g}M$ the twist of
$M$ by the algebra automorphism $g^{\prime}$ of $\mathcal{\tilde{W}}$.
Obviously, ${}^{g}M$ has annihilator $g\cdot\mathcal{I}$. Thus Theorem 1.1 is
equivalent to saying that $\\{^{g}M|g\in
C_{e}\\}=\mathrm{Irr}^{\mathrm{fin}}(\mathcal{\tilde{W}})$.
### 2.6. In the special case: $C_{e}=1$
In the case where $\mathfrak{g}=\mathfrak{g}_{\bar{0}}+\mathfrak{g}_{\bar{1}}$ is
a basic Lie superalgebra of type I, Letzter established a bijection
$\nu:\mathrm{Prim}(\mathcal{U}_{0})\rightarrow\mathrm{Prim}(\mathcal{U})$. It
follows from the construction of $\nu$ that the restriction gives us a
bijection
$\nu_{\mathbb{O}}:\mathrm{Prim}_{\mathbb{O}}(\mathcal{U}_{0})\longrightarrow\mathrm{Prim}_{\mathbb{O}}(\mathcal{U}).$
So we can give a description of $\mathrm{Irr}^{\mathrm{fin}}(\mathcal{W})$ if
$C_{e}$ is trivial. For all nilpotent elements in type $A(m|n)$ or at least
for the regular nilpotent elements in type $C(n)$ Lie superalgebras, the
finite group $C_{e}$ is trivial. In the case of
$\mathfrak{g}=\mathfrak{osp}(1,2n)$, a description of
$\mathrm{Prim}(\mathcal{U})$ is given in Theorems A and B of [Mu97]. The poset
structure describing $\mathrm{Prim}(\mathcal{U})$ is exactly the same as that of
$\mathrm{Prim}(\mathcal{U}_{0})$. It is straightforward to check that
$\widehat{L}(\lambda)$ is supported on $\bar{\mathbb{O}}$ if and only if so is
$L(\lambda)$. Thus Theorem 1.1 gives us a description of
$\mathrm{Irr}^{\mathrm{fin}}(\mathcal{W})$ provided $C_{e}=1$.
## 3\. Graded finite dimensional representations
From now on, let $\mathfrak{g}$ be a basic Lie superalgebra of type I. The
most essential feature of type I Lie superalgebras is that they admit a
$\mathbb{Z}_{2}$-compatible $\mathbb{Z}$-grading
$\mathfrak{g}=\mathfrak{g}_{-1}\oplus\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}.$
Here by the term $\mathbb{Z}_{2}$-compatible we mean
$\mathfrak{g}_{-1}\oplus\mathfrak{g}_{1}=\mathfrak{g}_{\bar{1}}$ and
$\mathfrak{g}_{0}=\mathfrak{g}_{\bar{0}}$. The Kac functor $K(\bullet)$ from
the category of $\mathfrak{g}_{\bar{0}}$-modules to that of
$\mathfrak{g}$-supermodules is defined as follows. For a
$\mathfrak{g}_{\bar{0}}$-module $V$, view it as a
$\mathfrak{g}_{0}+\mathfrak{g}_{1}$-module with trivial $\mathfrak{g}_{1}$
action and define
$K(V)=\mathrm{ind}_{\mathfrak{g}_{0}+\mathfrak{g}_{1}}^{\mathfrak{g}}V$. The
main result of [CM] states that, for any simple
$\mathfrak{g}_{0}$-module $V$, the Kac module $K(V)$ has a unique simple
$\mathbb{Z}_{2}$-graded quotient $\widehat{V}$; and the map
$V\mapsto\widehat{V}$ is a bijection between the set of simple
$\mathfrak{g}_{0}$-modules and that of simple $\mathfrak{g}$-supermodules. It
is well known that the above map sends the highest weight simple
$\mathcal{U}_{0}$-module $L(\lambda)$ to the highest weight simple
$\mathcal{U}$-module $\widehat{L}(\lambda)$.
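A standard consequence of the super PBW theorem, worth recording here (our remark, not taken from [CM]): since $\mathfrak{g}_{-1}$ is purely odd, there is a vector-space isomorphism
$K(V)=\mathcal{U}(\mathfrak{g})\otimes_{\mathcal{U}(\mathfrak{g}_{0}+\mathfrak{g}_{1})}V\cong\Lambda(\mathfrak{g}_{-1})\otimes V,$
so the Kac module $K(V)$ is finite dimensional whenever $V$ is, of dimension $2^{\dim\mathfrak{g}_{-1}}\dim V$.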
Now we give a classification of simple $\tilde{\mathcal{W}}$-supermodules
(hence of $\mathcal{W}$-supermodules) via the Kac equivalence and Skryabin’s
equivalence.
Let $\mathfrak{m}_{\bar{0}}=\oplus_{i\leq-2}\mathfrak{g}_{\bar{0}}(i)+l$,
where $l$ is a Lagrangian of $\mathfrak{g}_{\bar{0}}(-1)$ with respect to the
bilinear form $\chi([\bullet,\bullet])$. Let $\mathfrak{m}_{\bar{0},\chi}$ stand
for the subspace of $\mathcal{U}_{0}$ consisting of $x-\chi(x)$ for all
$x\in\mathfrak{m}_{\bar{0}}$. A $\mathfrak{g}$-supermodule $M$ is said to be
Whittaker if the action of $\mathfrak{m}_{\bar{0},\chi}$ on it is locally
nilpotent. It is straightforward to check that
$\widetilde{\mathrm{Wh}}(M):=M^{\mathfrak{m}_{\bar{0},\chi}}$ is a
$\tilde{\mathcal{W}}$-supermodule. Let $\tilde{Q}_{\chi}$ be the left
$\mathcal{U}$-module $\mathcal{U}/\mathcal{U}\mathfrak{m}_{\bar{0},\chi}$. It
also has a right $\tilde{\mathcal{W}}$-supermodule structure. For any
$\tilde{\mathcal{W}}$-supermodule $N$,
$\tilde{Q}_{\chi}\otimes_{\tilde{\mathcal{W}}}N$ is a left
$\mathcal{U}$-supermodule. Let
$Q_{0,\chi}=\mathcal{U}_{0}/\mathcal{U}_{0}\mathfrak{m}_{\bar{0},\chi}$ be the
$(\mathcal{U}_{0},\mathcal{W}_{0})$-bimodule defined similarly. We have the
following Skryabin’s equivalence for $\tilde{\mathcal{W}}$.
###### Theorem 3.1.
The functors $\widetilde{\mathrm{Wh}}$ and
$\tilde{Q}_{\chi}\otimes_{\tilde{\mathcal{W}}}\bullet$ are mutually quasi-inverse
equivalences between the category of $\tilde{\mathcal{W}}$-supermodules and that of
Whittaker $\mathcal{U}$-supermodules. For any
$\tilde{\mathcal{W}}$-supermodule $N$, $Q_{0,\chi}\otimes_{\mathcal{W}_{0}}N$
also has a $\mathcal{U}$-supermodule structure, which is isomorphic to
$\tilde{Q}_{\chi}\otimes_{\tilde{\mathcal{W}}}N$.
It is worthwhile to point out that the second statement is very useful,
because it enables us to use results on $\mathcal{W}_{0}$ to study
representations of $\tilde{\mathcal{W}}$. The theorem may be proved by an
argument similar to the W-algebra case, see [Lo11], or the W-superalgebra
case, see [SX]. Here we provide a sketch of the proof.
###### Proof.
Let
$\mathbf{A}_{V_{\bar{0}}}(\tilde{\mathcal{W}})=\mathbf{A}(V_{\bar{0}})\otimes\tilde{\mathcal{W}}$.
We claim that there is an isomorphism
$\mathcal{U}^{\wedge}_{\mathfrak{m}_{\bar{0},\chi}}\rightarrow\mathbf{A}_{V_{\bar{0}}}(\tilde{\mathcal{W}})^{\wedge}_{\mathfrak{m}_{\bar{0}}}$
of topological algebras, where
$\mathcal{U}^{\wedge}_{\mathfrak{m}_{\bar{0},\chi}}$ (resp.
$\mathbf{A}_{V_{\bar{0}}}(\tilde{\mathcal{W}})^{\wedge}_{\mathfrak{m}_{\bar{0}}}$)
is the completion of $\mathcal{U}$ (resp.
$\mathbf{A}_{V_{\bar{0}}}(\tilde{\mathcal{W}})$) with respect to the nilpotent Lie
subalgebra $\mathfrak{m}_{\bar{0},\chi}\subset\mathcal{U}$ (resp. the commutative
subalgebra $\mathfrak{m}_{\bar{0}}$). This is an analog of Theorem 1.2.1 of
[Lo10] for $\mathcal{W}_{0}$, which states that
$(\mathcal{U}_{0})^{\wedge}_{\mathfrak{m}_{\bar{0},\chi}}$ is isomorphic to
$\mathbf{A}(V_{\bar{0}})\otimes\mathcal{W}_{0}$ as topological algebras. Our
claim may be proved by similar arguments.
View $Q_{0,\chi}$ as an $\mathbf{A}(V_{\bar{0}})\otimes\mathcal{W}_{0}$-module
via the second isomorphism mentioned above; then we have
$Q_{0,\chi}=\mathbb{K}[\mathfrak{m}_{\bar{0}}]\otimes\mathcal{W}_{0}$ as
$(\mathbf{A}(V_{\bar{0}})\otimes\mathcal{W}_{0},\mathcal{W}_{0})$-bimodules,
see p. 52 of [Lo10]. Similarly we have
$\tilde{Q}_{\chi}=\mathbb{K}[\mathfrak{m}_{\bar{0}}]\otimes\tilde{\mathcal{W}}$
as
$(\mathbf{A}_{V_{\bar{0}}}(\tilde{\mathcal{W}}),\tilde{\mathcal{W}})$-bimodules.
Therefore
$Q_{0,\chi}\otimes_{\mathcal{W}_{0}}N=\mathbb{K}[\mathfrak{m}_{\bar{0}}]\otimes\mathcal{W}_{0}\otimes_{\mathcal{W}_{0}}N=\mathbb{K}[\mathfrak{m}_{\bar{0}}]\otimes
N$
has an $\mathbf{A}(V_{\bar{0}})\otimes\tilde{\mathcal{W}}$-supermodule
structure. Hence it is a Whittaker $\mathcal{U}$-supermodule via the
homomorphism
$\mathcal{U}\hookrightarrow\mathcal{U}^{\wedge}_{\mathfrak{m}_{\bar{0},\chi}}\rightarrow\mathbf{A}_{V_{\bar{0}}}(\tilde{\mathcal{W}})^{\wedge}_{\mathfrak{m}_{\bar{0}}}$.
Repeating the proof of Theorem 4.1 [SX], the remaining statements follow.
∎
###### Theorem 3.2.
There is a bijection among $\mathrm{gr}.\mathrm{Irr}(\mathcal{W}_{0})$,
$\mathrm{gr}.\mathrm{Irr}(\tilde{\mathcal{W}})$ and
$\mathrm{gr}.\mathrm{Irr}(\mathcal{W})$. Any simple
$\tilde{\mathcal{W}}$-supermodule, or equivalently any simple
$\mathcal{W}$-supermodule, is $\mathbb{Z}$-gradable.
###### Proof.
Obviously, the Kac functor maps Whittaker $\mathfrak{g}_{\bar{0}}$-modules to
Whittaker $\mathfrak{g}$-supermodules. By Theorem 3.1 we have that the map
$N\mapsto\widehat{N}:=\widetilde{\mathrm{Wh}}(\widehat{Q_{0,\chi}\otimes_{\mathcal{W}_{0}}N})$
is a bijection between $\mathrm{Irr}(\mathcal{W}_{0})$ and
$\mathrm{gr}.\mathrm{Irr}(\tilde{\mathcal{W}})$. Since
$\mathfrak{m}_{\bar{0},\chi}\subset\mathcal{U}$ is $\mathbb{Z}$-homogeneous,
the second statement follows from the fact that any simple
$\mathfrak{g}$-supermodule is $\mathbb{Z}$-gradable, see the proof of Theorem
4.1 [CM]. ∎
## 4\. Character formula
### 4.1. Triangular decomposition for $\mathcal{\tilde{W}}$
Let $\mathcal{U}_{+}$ (resp. $\mathcal{U}_{-}$) be the universal enveloping
algebra of $\mathfrak{g}_{0}+\mathfrak{g}_{1}$ (resp.
$\mathfrak{g}_{0}+\mathfrak{g}_{-1}$). Define the completed algebras
$(\mathcal{U}_{+})_{\hbar}^{\wedge}$ and $(\mathcal{U}_{-})_{\hbar}^{\wedge}$
in the same way as $\mathcal{U}_{\hbar}^{\wedge}$. The restrictions of
$\tilde{\Phi}_{\hbar}$ to $(\mathcal{U}_{+})_{\hbar}^{\wedge}$ and
$(\mathcal{U}_{-})_{\hbar}^{\wedge}$ give us the following isomorphisms
$\tilde{\Phi}_{\hbar}^{+}:\mathbf{A}_{\hbar}^{\wedge}(V_{\bar{0}})\otimes\mathcal{\tilde{W}}_{+,\hbar}^{\wedge}\longrightarrow(\mathcal{U}_{+})_{\hbar}^{\wedge}\text{
and
}\tilde{\Phi}_{\hbar}^{-}:\mathbf{A}_{\hbar}^{\wedge}(V_{\bar{0}})\otimes\mathcal{\tilde{W}}_{-,\hbar}^{\wedge}\longrightarrow(\mathcal{U}_{-})_{\hbar}^{\wedge}$
of quantum algebras. Here $\mathcal{\tilde{W}}_{+,\hbar}^{\wedge}$ and
$\mathcal{\tilde{W}}_{-,\hbar}^{\wedge}$ are defined similarly as
$\mathcal{\tilde{W}}_{\hbar}^{\wedge}$ in Proposition 2.1. Define
$\mathcal{\tilde{W}}_{-}:=(\mathcal{\tilde{W}}_{-,\hbar}^{\wedge})_{\mathbb{K}^{*}-lf}/(\hbar-1)$
and
$\mathcal{\tilde{W}}_{+}=(\mathcal{\tilde{W}}_{+,\hbar}^{\wedge})_{\mathbb{K}^{*}-lf}/(\hbar-1)$.
They can be viewed as the W-superalgebras from
$(\mathfrak{g}_{-1}+\mathfrak{g}_{0},e)$ and
$(\mathfrak{g}_{0}+\mathfrak{g}_{1},e)$.
Equip $\mathcal{U}_{\hbar}$ with a $\mathbb{Z}$-grading such that the subspace
$\mathcal{U}$ has the natural grading from $\mathfrak{g}$ and $\hbar$ has
degree $0$. It follows from the construction that $\tilde{\Phi}_{\hbar}$
preserves the $\mathbb{Z}$-grading. Hence there is a $\mathbb{Z}$-grading
$\mathcal{\tilde{W}}=\oplus_{i\in\mathbb{Z}}\mathcal{\tilde{W}}_{i}$ inherited
from the one on $\mathcal{U}$. The algebras $\mathcal{\tilde{W}}_{-}$ and
$\mathcal{\tilde{W}}_{+}$ are $\mathbb{Z}$-graded subalgebras of
$\mathcal{\tilde{W}}$. It is immediate from the construction that
$\mathcal{\tilde{W}}_{+}=\mathcal{W}_{0}\otimes_{\mathbb{K}}\mathcal{\tilde{W}}_{+}^{\\#},\quad\mathcal{\tilde{W}}_{-}=\mathcal{W}_{0}\otimes_{\mathbb{K}}\mathcal{\tilde{W}}_{-}^{\\#}.$
Here $\mathcal{\tilde{W}}_{-}^{\\#}$ (resp. $\mathcal{\tilde{W}}_{+}^{\\#}$)
is the nilpotent subalgebra of $\mathcal{\tilde{W}}_{-}$ (resp.
$\mathcal{\tilde{W}}_{+}$) generated by elements of negative (resp.
positive) degree. We emphasize that $\mathcal{W}_{0}$ is the even finite
W-algebra from $(\mathfrak{g}_{\bar{0}},e)$.
###### Proposition 4.1.
* (1)
There exist $\mathbb{Z}$-homogeneous elements
$x^{-}_{1},\ldots,x^{-}_{k}\in\mathcal{\tilde{W}}_{-}^{\\#}$,
$x_{1},\ldots,x_{l}\in\mathcal{W}_{0}$ and
$x^{+}_{1},\ldots,x^{+}_{k}\in\mathcal{\tilde{W}}_{+}^{\\#}$ such that they
form a PBW basis of $\mathcal{\tilde{W}}$.
* (2)
There is a triangular decomposition
$\mathcal{\tilde{W}}=\mathcal{\tilde{W}}_{+}^{\\#}\otimes_{\mathbb{K}}\mathcal{W}_{0}\otimes_{\mathbb{K}}\mathcal{\tilde{W}}_{-}^{\\#}.$
(4.1)
* (3)
For any irreducible $\mathcal{W}_{0}$-module $N$, view it as a
$\mathcal{\tilde{W}}_{+}$-module by the natural quotient
$\mathcal{\tilde{W}}_{+}\twoheadrightarrow\mathcal{W}_{0}$. Then the ‘Verma’
module
$V^{K}_{\mathcal{\tilde{W}}}(N):=\mathcal{\tilde{W}}\otimes_{\mathcal{\tilde{W}}_{+}}N$
has a unique simple quotient $L^{K}_{\mathcal{\tilde{W}}}(N)$. The map
$\mathrm{Irr}^{\mathrm{fin}}(\mathcal{W}_{0})\rightarrow\mathrm{gr}.\mathrm{Irr}^{\mathrm{fin}}(\tilde{\mathcal{W}});N\mapsto
L^{K}_{\mathcal{\tilde{W}}}(N)$ is bijective and $C_{e}$-equivariant.
###### Proof.
Statement (1) follows from an argument similar to that for the existence of a
PBW basis for $\mathcal{W}_{0}$ in [Lo10] or for $\mathcal{W}$ in [SX]. Claim
(2) follows from (1).
Now we prove the last claim. Let $M$ be a $\mathbb{Z}_{2}$-graded simple
quotient of $V^{K}_{\mathcal{\tilde{W}}}(N)$ and $\pi$ be the quotient
homomorphism. By Theorem 3.2 we may assume $M$ has a $\mathbb{Z}$-grading with
top degree $0$. We claim that $\pi$ has to be a $\mathbb{Z}$-graded
homomorphism. Assume otherwise; then for a non-zero $x\in N$, we may write
$\pi(x)=\sum_{i=1}^{n}y_{i}$ for $\mathbb{Z}$-homogeneous $y_{i}\in M$,
$i=1,2,\ldots,n$, with $n>1$. Suppose $\mathrm{gr}(y_{1})=d<0$. Since
$\mathcal{\tilde{W}}_{+}^{\\#}\cdot y_{1}=0$, the submodule
$\mathcal{\tilde{W}}\cdot y_{1}$ has top degree $d$, so it is a proper sub-supermodule of the simple supermodule $M$. The claim follows from this
contradiction. Thus any maximal sub-supermodule of
$V^{K}_{\mathcal{\tilde{W}}}(N)$ is a $\mathbb{Z}$-graded submodule, so the sum
of all proper sub-supermodules is the unique maximal proper sub-supermodule. For any $g\in C_{e}$, it is clear that
${}^{g}L_{\mathcal{\tilde{W}}}^{K}(N)=L_{\mathcal{\tilde{W}}}^{K}(^{g}N)$.
Thus claim (3) follows by the standard arguments of highest weight
theory. ∎
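For orientation (a standard consequence of Proposition 4.1, not stated explicitly above): the triangular decomposition (4.1) and the PBW basis in (1) give a vector-space isomorphism
$V^{K}_{\mathcal{\tilde{W}}}(N)=\mathcal{\tilde{W}}\otimes_{\mathcal{\tilde{W}}_{+}}N\cong\mathcal{\tilde{W}}_{-}^{\\#}\otimes_{\mathbb{K}}N,$
so the ‘Verma’ module is a free $\mathcal{\tilde{W}}_{-}^{\\#}$-module of rank $\dim N$ when $N$ is finite dimensional.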
### 4.2. Recall: the generalized Soergel functor $\mathbb{V}$ in the even
theory
We recall the definition of the generalized Soergel functor $\mathbb{V}$ of [Lo15]
in this subsection. Choose a Levi subalgebra
$(\mathfrak{g}_{\bar{0}})_{0}\subset\mathfrak{g}_{\bar{0}}$, an
$\mathfrak{sl}_{2}$-triple $(e,h,f)\subset(\mathfrak{g}_{\bar{0}})_{0}$, an
integral element $\theta\in\mathfrak{z}((\mathfrak{g}_{\bar{0}})_{0})$, and a
parabolic subgroup $P$ as in §2.6.1 of [Lo15]. Let $\mathcal{O}^{P}_{\nu}$ be the
parabolic category $\mathcal{O}$ generated by finitely generated
$(P,\nu)$-equivariant $(\mathcal{U}_{0},P)$-modules for character $\nu$ of
$\mathfrak{p}$. Let $\mathfrak{t}=\mathfrak{z}(\mathfrak{g})$ and $T\subset Q$
be the torus with $\mathrm{Lie}(T)=\mathfrak{t}$. Let $R$ stand for the
centralizer of $T$ in $Q$.
View $\theta$ as an element of $\mathcal{W}_{0}$ by the embedding
$\mathfrak{q}\hookrightarrow\mathcal{W}_{0}$. Denote by
$\mathcal{W}_{0}=\oplus_{\alpha\in\mathbb{Z}}(\mathcal{W}_{0})_{\alpha}$ the
decomposition by eigenspaces of $\mathrm{ad}(\theta)$. Set
$(\mathcal{W}_{0})_{\geq 0}=\bigoplus_{\alpha\geq 0}(\mathcal{W}_{0})_{\alpha},\quad(\mathcal{W}_{0})_{>0}=\bigoplus_{\alpha>0}(\mathcal{W}_{0})_{\alpha},\quad(\mathcal{W}_{0})^{0}=(\mathcal{W}_{0})_{\geq 0}/\big((\mathcal{W}_{0})_{\geq 0}\cap\mathcal{W}_{0}(\mathcal{W}_{0})_{>0}\big).$
Denote by $\pi$ the quotient map
$(\mathcal{W}_{0})_{\geq 0}\rightarrow(\mathcal{W}_{0})^{0}$. Then
$(\mathcal{W}_{0})^{0}$ is isomorphic to the W-algebra arising from the pair
$((\mathfrak{g}_{\bar{0}})_{0},e)$. For a finite dimensional simple
$(\mathcal{W}_{0})^{0}$-module $N$, define the Verma module
$\Delta_{\mathcal{W}_{0}}^{\theta}(N):=\mathcal{W}_{0}\otimes_{(\mathcal{W}_{0})_{\geq
0}}N$. Then $\Delta_{\mathcal{W}_{0}}^{\theta}(N)$ has a unique irreducible
quotient $L_{\mathcal{W}_{0}}^{\theta}(N)$. Any finite dimensional irreducible
$\mathcal{W}_{0}$-module can be obtained in this way. For a character $\nu$ of
$R$, let $\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}$ be the
$(R,\nu)$-equivariant category $\mathcal{O}$ defined for $\mathcal{W}_{0}$.
Let $\mathfrak{u}:=\mathfrak{p}\cap[f,\mathfrak{g}_{\bar{0}}]$, which is a
Lagrangian subspace of $V_{\bar{0}}$. Choose an
$R\times\mathbb{K}^{*}$-equivariant embedding
$\iota:V_{\bar{0}}\hookrightarrow\mathcal{U}_{0,\hbar}^{\wedge}$ as in §4.1.2
of [Lo15]. We have an isomorphism
$\Phi_{0,\hbar}:\mathbf{A}_{\hbar}^{\wedge}(V_{\bar{0}})\otimes\mathcal{W}_{0,\hbar}^{\wedge}\longrightarrow\mathcal{U}_{0,\hbar}^{\wedge}$
(4.2)
of quantum algebras from $\iota$ and
$(\mathcal{W}_{0,\hbar}^{\wedge})_{\mathbb{K}^{*}-{\mathrm{lf}}}/(\hbar-1)=\mathcal{W}_{0}$.
The generalized Soergel functor
$\mathbb{V}:\mathcal{O}^{P}_{\nu}\longrightarrow\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}$
is defined in three different but equivalent ways in [Lo15]. We recall the
first one. For $M\in\mathcal{O}^{P}_{\nu}$, denote by
$M_{\hbar}^{\wedge_{\chi}}$ the completion of the Rees module $M_{\hbar}$ with
respect to the inverse image of the maximal ideal of $\chi$ under the natural
homomorphism $(\mathcal{U}_{0})_{\hbar}\rightarrow S[\mathfrak{g}_{\bar{0}}]$
given by $\hbar=0$. Let $M^{\prime}_{\hbar}\subset M_{\hbar}^{\wedge_{\chi}}$
be the annihilator of $\Phi_{0,\hbar}(\mathfrak{u})$. Then
$M^{\prime}_{\hbar}$ is
$\Phi_{0,\hbar}(\mathcal{W}_{0,\hbar}^{\wedge})$-stable, because
$\Phi_{0,\hbar}(\mathcal{W}_{0,\hbar}^{\wedge})$ commutes with
$\Phi_{0,\hbar}(\mathbf{A}_{\hbar}^{\wedge}(V_{\bar{0}}))\supset\Phi_{0,\hbar}(\mathfrak{u})$.
The generalized Soergel functor $\mathbb{V}$ is defined as follows:
$\mathbb{V}(M):=(M^{\prime}_{\hbar})_{\mathbb{K}^{*}-lf}/(\hbar-1).$
There is also a rational action of $R$ on $\mathbb{V}(M)$ by construction.
The image of the simple module $L(\lambda)$ in the parabolic category
$\mathcal{O}^{P}_{\nu}$ is described as follows:
$\mathbb{V}(L(\lambda))=\bigoplus_{i\in
I_{\lambda}}L_{\mathcal{W}_{0}}^{\theta}({N^{0}_{i}}).$ (4.3)
Here $L_{00}(\lambda)$ stands for the finite dimensional
$(\mathfrak{g}_{\bar{0}})_{0}$-module with highest weight $\lambda$. In (4.3),
$N^{0}_{i}$ for $i\in I_{\lambda}$ run over the finite dimensional simple
modules of $(\mathcal{W}_{0})^{0}$ lying over
$J_{0}(\lambda)=\mathrm{Ann}(L_{00}(\lambda))$. From now on, we denote
$L_{\mathcal{W}_{0}}^{\theta}({N^{0}_{i}})$ by $N_{i}$, $i\in I_{\lambda}$ for
simplicity.
### 4.3. Description of $\mathbb{V}(\widehat{L}(\lambda))$ for
$\lambda\in\Lambda_{\mathfrak{p}}$
Denote by $\mathcal{O}^{P}_{\nu}(\mathcal{U})$ the category of
$\mathfrak{g}$-supermodules lying in parabolic category
$\mathcal{O}^{P}_{\nu}$ for $\mathfrak{g}_{\bar{0}}$. Similarly, let
$\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}(\tilde{\mathcal{W}})$
be the category of $\tilde{\mathcal{W}}$-modules lying in
$\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}$. Let
$\mathrm{Wh}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}$ be the category of
$(R,\nu)$-equivariant Whittaker
modules defined in §3.2.3 of [Lo15]. This Whittaker category is similar to the
one considered in Theorem 3.1, but it is defined by a nilpotent Lie subalgebra
of $\mathfrak{g}_{\bar{0}}$ different from $\mathfrak{m}_{\bar{0}}$. Let
$\mathrm{Wh}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}(\mathcal{U})$ stand for the
category of $\mathfrak{g}$-supermodules lying in
$\mathrm{Wh}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}$.
There is a Skryabin’s equivalence
$\mathcal{K}:\mathrm{Wh}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}\rightarrow\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}$
with inverse $\mathcal{K}^{-1}$, see [Lo15] for the definition. It is clear
from the definition that $\mathcal{K}$ maps a $\mathcal{U}$-supermodule in
$\mathrm{Wh}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}$ to a
$\mathcal{\tilde{W}}$-supermodule in
$\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}$.
The following lemma is an analog of Theorem 3.1 and can be proved similarly.
###### Lemma 4.2.
The functor $\mathcal{K}^{-1}$ restricts to a functor from
$\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}(\tilde{\mathcal{W}})$
to
$\mathrm{Wh}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}(\mathcal{U})$.
The following result is crucial to describe the image of simple object under
$\mathbb{V}$.
###### Theorem 4.3.
The functor
$\mathbb{V}:\mathcal{O}^{P}_{\nu}\longrightarrow\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}$
sends simple $\mathcal{U}$-supermodules to simple objects in
$\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}(\tilde{\mathcal{W}})$.
###### Proof.
It follows directly from the construction that $\mathbb{V}$ restricts to a functor
from $\mathcal{O}^{P}_{\nu}(\mathcal{U})$ to
$\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}(\tilde{\mathcal{W}})$.
Let
$\mathbb{V}^{*}:\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}\rightarrow\mathcal{O}^{P}_{\nu}$
be the right adjoint functor of $\mathbb{V}$ defined in Proposition 4.4 of
[Lo15]. By Lemma 4.2, tracking the construction (precisely, the last paragraph
of p. 898 of [Lo15]) of $\mathbb{V}^{*}$, we see that $\mathbb{V}^{*}$ sends
$\tilde{\mathcal{W}}$-supermodules to $\mathcal{U}$-supermodules. Furthermore,
$\mathbb{V}^{*}$ restricts to a functor
$\mathcal{O}_{\theta}(\mathfrak{g}_{\bar{0}},e)^{R}_{\nu}(\mathcal{\tilde{W}})\rightarrow\mathcal{O}^{P}_{\nu}(\mathcal{U})$,
which is right adjoint to the restriction of $\mathbb{V}$. The theorem
follows. ∎
###### Theorem 4.4.
For $\lambda\in\Lambda_{\mathfrak{p}}$, recall that $N_{i}$, $i\in
I_{\lambda}$ stand for the simple $\mathcal{W}_{0}$-modules appearing in
(4.3). Then we have
$\mathbb{V}(\widehat{L}(\lambda))=\bigoplus_{N_{i}}L^{K}_{\tilde{\mathcal{W}}}(N_{i}).$
###### Proof.
Since $L(\lambda)\subset\widehat{L}(\lambda)$, we have
$\bigoplus_{i}N_{i}\subset\mathbb{V}(\widehat{L}(\lambda)).$
Note that the action of $\tilde{\mathcal{W}}_{-}^{\\#}$ on $N_{i}$ for $i\in
I_{\lambda}$ is trivial. Now the theorem follows from Proposition 4.1 (3) and
Theorem 4.3. ∎
The following result organizes
$\mathrm{gr}.\mathrm{Irr}^{\mathrm{fin}}_{\lambda}(\tilde{\mathcal{W}})$ into
a single $C_{e}$-orbit.
###### Corollary 4.5.
For $\lambda\in\Lambda_{\mathfrak{p}}$, the map
$\mathrm{Irr}_{\lambda}(\mathcal{W}_{0})\rightarrow\mathrm{gr}.\mathrm{Irr}_{\lambda}(\tilde{\mathcal{W}});N\mapsto
L_{\mathcal{\tilde{W}}}^{K}(N)$ is bijective and $C_{e}$-equivariant.
###### Proof.
For $N=N_{i}$ with $i\in I_{\lambda}$, it follows from Theorem 4.4 that
$L_{\mathcal{\tilde{W}}}^{K}(N)\in\mathrm{gr}.\mathrm{Irr}^{\mathrm{fin}}_{\lambda}(\tilde{\mathcal{W}})$.
Thus the corollary follows from §2.5 and Proposition 4.1 (3). ∎
### 4.4. Algorithm for character formulas
Now we present an algorithm to compute character formulas for basic type I
finite W-superalgebras. First we write
$\widehat{L}(\lambda)=\sum_{i\in
S_{\lambda}}c_{i\lambda}\Delta_{P}(\lambda_{i})$ (4.4)
in the Grothendieck group of the equivariant parabolic category
$\mathcal{O}^{P}_{\nu}$ for the Lie algebra $\mathfrak{g}_{\bar{0}}$. The
coefficients $c_{i\lambda}$ can be obtained from the
$\mathfrak{g}_{\bar{0}}$-rough structure of simple $\mathfrak{g}$-modules. We
may view $\widehat{L}(\lambda)$ as a $\mathfrak{g}_{\bar{0}}$-module and assume that
$\widehat{L}(\lambda)=\sum d_{\lambda\mu_{i}}L(\mu_{i})$
in the Grothendieck group of the category $\mathcal{O}$ for the Lie algebra
$\mathfrak{g}_{\bar{0}}$. Here the coefficients $d_{\lambda\mu_{i}}$ are the
multiplicities of $L(\mu_{i})$ in $\widehat{L}(\lambda)$. However, in general
the author does not know how to determine $d_{\lambda\mu_{i}}$. It
can be computed by the Kazhdan-Lusztig theory of Lie algebras in the case where
$\mathfrak{g}=\mathfrak{gl}(m|n)$ and $\lambda$ is typical, see [CM]. For
recent progress on the rough structures for type I Lie superalgebras, see
also [CCM]. The coefficients $c_{i\lambda}$ can also be determined by the super
version of parabolic Kazhdan-Lusztig theory.
It is obtained in Theorem 4.8 (iv) of [Lo15] that
$\mathrm{Ch}(\mathbb{V}(\Delta_{P}(\mu)))=\dim(L_{00}(\mu))e^{\mu-\rho}\prod_{i=1}^{k}(1-e^{\mu_{i}})^{-1}.$
(4.5)
Applying the generalized Soergel functor to $\widehat{L}(\lambda)$, by Theorem
4.8 of [Lo15] we have
$\mathrm{Ch}(\mathbb{V}(\widehat{L}(\lambda)))=\sum_{i\in
S_{\lambda}}c_{i\lambda}\dim(L_{00}(\lambda_{i}))e^{\lambda_{i}-\rho}\prod_{i=1}^{k}(1-e^{\mu_{i}})^{-1}.$
(4.6)
Here $\mu_{i}$, $i=1,2,\ldots,k$ are the weights of $\mathfrak{t}$ in
$(\mathfrak{g}_{\bar{0}})_{<0}\cap\mathfrak{z}_{\mathfrak{g}_{\bar{0}}}(e)$,
$\rho$ is half of the sum of all positive roots of $\mathfrak{g}_{\bar{0}}$.
It follows from Theorem 4.4 that $\mathbb{V}(\widehat{L}(\lambda))$ is a direct
sum of $|I_{\lambda}|$ simple $\mathcal{\tilde{W}}$-supermodules. Those
supermodules form a single orbit under the twist action given by
$Q_{0}/Q_{0}^{\circ}$, where $Q_{0}$ is the centralizer of the
$\mathfrak{sl}_{2}$-triple $\\{e,h,f\\}$ in $(G_{\bar{0}})_{0}$. The character
formula that we are considering is over the torus
$\mathfrak{t}=\mathfrak{z}((\mathfrak{g}_{\bar{0}})_{0})$. Therefore they have
the same character. Thus
$\mathrm{Ch}(L^{K}_{\tilde{\mathcal{W}}}(N_{i}))=|I_{\lambda}|^{-1}\sum_{i\in
I_{\lambda}}c_{i\lambda}\dim(L_{00}(\lambda_{i}))e^{\lambda_{i}-\rho}\prod_{i=1}^{k}(1-e^{\mu_{i}})^{-1}$
(4.7)
Now by §2.5 and Corollary 4.5, we obtain a character formula for all
$N\in\mathrm{gr}.\mathrm{Irr}_{\lambda}(\mathcal{\tilde{W}})$.
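To spell out the averaging step behind (4.7) (a restatement of the argument above, not an additional claim): Theorem 4.4 and the equality of the characters of the summands give
$\mathrm{Ch}(\mathbb{V}(\widehat{L}(\lambda)))=\sum_{i\in I_{\lambda}}\mathrm{Ch}(L^{K}_{\tilde{\mathcal{W}}}(N_{i}))=|I_{\lambda}|\,\mathrm{Ch}(L^{K}_{\tilde{\mathcal{W}}}(N_{i})),$
so dividing (4.6) by $|I_{\lambda}|$ yields (4.7).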
Note that we have embeddings
$\mathfrak{t}\hookrightarrow\mathcal{W}\hookrightarrow\mathcal{\tilde{W}}$
from the definitions. So Proposition 2.5 gives us a character formula
$\mathrm{Ch}((L^{K}_{\tilde{\mathcal{W}}}(N_{i}))^{\prime})=\mathrm{Ch}(L^{K}_{\tilde{\mathcal{W}}}(N_{i}))\prod_{i=1}^{l}(1+e^{\mu_{i}^{\prime}})^{-1}$
for the simple $\mathcal{W}$-module
$(L^{K}_{\tilde{\mathcal{W}}}(N_{i}))^{\prime}$ obtained from
$L^{K}_{\tilde{\mathcal{W}}}(N_{i})$ by Proposition 2.5. Here
$\mu_{i}^{\prime}$, $i=1,2,\ldots,l$, are the weights of the Lagrangian
$\mathfrak{u}_{\bar{1}}^{*}$.
## Acknowledgements
The author is partially supported by NSFC (grant No. 11801113) and RIMS, an
international joint usage/research center located in Kyoto University. This
work was motivated by communications with Arakawa, and a part of it was written
during the author’s visit to him at RIMS. The author is much indebted to him for
many helpful discussions. The author also thanks Bin Shu for helpful
communications and Yang Zeng for comments.
## References
* [BBG] J. Brown, J. Brundan and S. Goodwin, Principal W-algebras for $GL(m|n)$, Algebra Number Theory 7 (2013), 1849-1882.
* [BG] J. Brundan, S. M. Goodwin, Whittaker coinvariants for $GL(m|n)$, Adv. Math. 347 (2019), 273-339.
* [BK] W. Borho, H. Kraft, Über die Gelfand-Kirillov-Dimension, Math. Ann. 220 (1976), 1-24.
* [CM] K. Coulembier, I. Musson, The primitive spectrum for $\mathfrak{gl}(m|n)$, Tohoku Math. J. (2) 70 (2018), no. 2, 225-266.
* [CCM] C. Chen, K. Coulembier and V. Mazorchuk, Translated simple modules for Lie algebras and simple supermodules for Lie superalgebras, Math. Z. (2020), 1-27.
* [CMa] C. Chen, V. Mazorchuk, Simple supermodules over Lie superalgebras, Trans. Amer. Math. Soc. 374 (2021), 899-921.
* [Di] J. Dixmier, Enveloping algebras, North-Holland Mathematical Library, Vol. 14, 1977.
* [KL] G. R. Krause, T. H. Lenagan, Growth of algebras and Gelfand-Kirillov dimension, revised edition, Graduate Studies in Mathematics, vol. 22 (2000), American Mathematical Society, Providence.
* [Le] E. Letzter, A bijection of primitive spectra for classical Lie superalgebras of type I, J. London Math. Soc. 53 (1996), 39-49.
* [Lo10] I. Losev, Quantized symplectic actions and W-algebras, J. Amer. Math. Soc. 23 (2010), 35-59.
* [Lo11] I. Losev, Finite dimensional representations of W-algebras, Duke Math. J. 159 (2011), 99-143.
* [Lo15] I. Losev, Dimensions of irreducible modules over W-algebras and Goldie ranks, Invent. Math. 200 (2015), no. 3, 849-923.
* [Mu97] I. Musson, The enveloping algebra of the Lie superalgebra $\mathfrak{osp}(1|2r)$, Represent. Theory 1 (1997), 405-423.
* [Mu92] I. Musson, A classification of primitive ideals in the enveloping algebra of a classical simple Lie superalgebra, Adv. Math. 91 (1992), no. 2, 252-268.
* [Mu12] I. Musson, Lie Superalgebras and Enveloping Algebras, GSM 131 (2012), Amer. Math. Society.
* [Pe] Y-N. Peng, Finite W-superalgebras via super Yangians, Adv. Math. 337 (2021).
* [Pr1] A. Premet, Special transverse slices and their enveloping algebras, Adv. Math. 170 (2002), 1-55.
* [PS1] E. Poletaeva, V. Serganova, On Kostant’s theorem for the Lie superalgebra Q(n), Adv. Math. 300 (2016), 320-359.
* [WZ] W. Wang, L. Zhao, Representations of Lie superalgebras in prime characteristic I, Proc. London Math. Soc. 99 (2009), 145-167.
* [SX] B. Shu, H. Xiao, Super formal Darboux-Weinstein theorems and finite W-superalgebras, J. Algebra 550 (2020), 242-265.
* [ZS] Y. Zeng, B. Shu, Minimal $W$-superalgebras and the modular representations of basic Lie superalgebras, Publ. RIMS Kyoto Univ. 55 (2019), 123-188.
Object Detection and Pose Estimation from RGB and Depth Data for Real-time, Adaptive Robotic Grasping
Shuvo Kumar Paul^1, Muhammed Tawfiq Chowdhury^1, Mircea Nicolescu^1, Monica Nicolescu^1,
David Feil-Seifer^1
*This work has been supported in part by the Office of Naval Research award N00014-16-1-2312 and US Army Research Laboratory (ARO) award W911NF-20-2-0084.
^1Contact author: Shuvo Kumar Paul, Muhammed Tawfiq Chowdhury, Mircea Nicolescu, Monica Nicolescu, and David Feil-Seifer are affiliated with the Department of Computer Science and Engineering,
University of Nevada, Reno, 1664 North Virginia Street, Reno, Nevada 89557, USA
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
In recent times, object detection and pose estimation have gained significant attention in the context of robotic vision applications. Both the identification of objects of interest as well as the estimation of their pose remain important capabilities in order for robots to provide effective assistance for numerous robotic applications ranging from household tasks to industrial manipulation. This problem is particularly challenging because of the heterogeneity of objects having different and potentially complex shapes, and the difficulties arising due to background clutter and partial occlusions between objects. As the main contribution of this work, we propose a system that performs real-time object detection and pose estimation, for the purpose of dynamic robot grasping. The robot has been pre-trained to perform a small set of canonical grasps from a few fixed poses for each object. When presented with an unknown object in an arbitrary pose, the proposed approach allows the robot to detect the object identity and its actual pose, and then adapt a canonical grasp in order to be used with the new pose. For training, the system defines a canonical grasp by capturing the relative pose of an object with respect to the gripper attached to the robot’s wrist. During testing, once a new pose is detected, a canonical grasp for the object is identified and then dynamically adapted by adjusting the robot arm's joint angles, so that the gripper can grasp the object in its new pose. We conducted experiments using a humanoid PR2 robot and showed that the proposed framework can detect well-textured objects, and provide accurate pose estimation in the presence of tolerable amounts of out-of-plane rotation. The performance is also illustrated by the robot successfully grasping objects from a wide range of arbitrary poses.
pose estimation, robotics, robotic grasp, homography
§ INTRODUCTION
Current advances in robotics and autonomous systems have expanded the use of robots in a wide range of robotic tasks including assembly, advanced manufacturing, and human-robot or robot-robot collaboration. In order for robots to efficiently perform these tasks, they need to have the ability to adapt to the changing environment while interacting with their surroundings, and a key component of this interaction is the reliable grasping of arbitrary objects. Consequently, a recent trend in robotics research has focused on object detection and pose estimation for the purpose of dynamic robotic grasping.
However, identifying objects and recovering their poses are particularly challenging tasks as objects in the real world are extremely varied in shape and appearance. Moreover, cluttered scenes, occlusion between objects, and variance in lighting conditions make it even more difficult. Additionally, the system needs to be sufficiently fast to facilitate real-time robotic tasks. As a result, a generic solution that can address all these problems remains an open challenge.
While classification [1, 2, 3, 4, 5, 6], detection [7, 8, 9, 10, 11, 12], and segmentation [13, 14, 15] of objects from images have taken a significant step forward thanks to deep learning, the same has not yet happened for 3D localization and pose estimation. One primary reason has been the lack of labeled data, since pose annotations are not practical to produce manually. As a result, the recent research trend in the deep learning community for such applications has shifted towards synthetic datasets [16, 17, 18, 19, 20]. Several pose estimation methods leveraging deep learning techniques [21, 22, 23, 24, 25] use these synthetic datasets for training and have shown satisfactory accuracy.
Although synthetic data is a promising alternative capable of generating large amounts of labeled data, it requires photorealistic 3D models of the objects to mirror the real-world scenario. Generating synthetic data for each newly introduced object therefore demands significant effort from skilled 3D artists. Furthermore, training and running deep learning models is not feasible without high-end computing resources. As a result, real-time object detection and pose estimation on computationally moderate machines remains a challenging problem. To address these issues, we have devised a simpler pipeline that does not rely on high computing resources and focuses on planar objects, requiring only an RGB image and depth information to perform real-time object detection and pose estimation.
In this work, we present a feature-detector-descriptor based method for detection and a homography based pose estimation technique where, by utilizing the depth information, we estimate the pose of an object in terms of a 2D planar representation in 3D space. The robot is pre-trained to perform a set of canonical grasps; a canonical grasp describes how a robotic end-effector should be placed relative to an object in a fixed pose so that it can securely grasp it. Afterward, the robot is able to detect objects and estimates their pose in real-time, and then adapt the pre-trained canonical grasp to the new pose of the object of interest. We demonstrate that the proposed method can detect a well-textured planar object and estimate its accurate pose within a tolerable amount of out-of-plane rotation. We also conducted experiments with the humanoid PR2 robot to show the applicability of the framework where the robot grasped objects by adapting to a range of different poses.
§ RELATED WORK
Our work consists of three modules: object detection, planar pose estimation, and adaptive grasping. In the following sub-sections, we review several fields of research that are closely related to our work.
§.§ Object Detection
Object detection has been one of the fundamental challenges in the field of computer vision, and in that respect, the introduction of feature detectors and descriptors represents a great achievement. Over the past decades, many detectors, descriptors, and their numerous variants have been presented in the literature. These methods have been widely applied to numerous other vision tasks such as panorama stitching, tracking, and visual navigation.
One of the first feature detectors was proposed by Harris et al. [26] (widely known as the Harris corner detector). Later, Tomasi et al. [27] developed the KLT (Kanade-Lucas-Tomasi) tracker based on the Harris corner detector. Shi and Tomasi introduced a new detection metric, GFTT [28] (Good Features To Track), and argued that it offered superior performance. Hall et al. introduced the concept of saliency [29] in terms of the change in scale and evaluated the Harris method proposed in [30] and the Harris Laplacian corner detector [31], which combines a Harris detector with a Laplacian function.
Motivated by the need for a scale-invariant feature detector, in 2004 Lowe [32] published one of the most influential papers in computer vision, SIFT (Scale Invariant Feature Transform). SIFT is both a feature point detector and descriptor. H. Bay et al. [33] proposed SURF (Speeded Up Robust Features) in 2008; although the SURF detector uses a Haar wavelet approximation of the determinant of the Hessian matrix to speed up detection, both methods remain computationally expensive, with the SIFT detector relying on the difference of Gaussians (DoG) at different scales. Many variants of SIFT [34, 35, 36, 37] and SURF [38, 39, 40] were proposed, either targeting a different problem or reporting improvements in matching; however, the execution time remained a persistent problem for several vision applications.
To improve execution time, several other detectors such as FAST [41] and AGAST [42] have been introduced. Calonder et al. developed the BRIEF [43] (Binary Robust Independent Elementary Features) descriptor of binary strings, which has a fast execution time and is very useful for matching images. E. Rublee et al. presented ORB [44] (Oriented FAST and Rotated BRIEF), which combines a modified FAST (Features from Accelerated Segment Test) for feature detection with BRIEF for description. S. Leutenegger et al. designed BRISK [45] (Binary Robust Invariant Scalable Keypoints), which detects corners using AGAST and filters them using FAST. FREAK (Fast Retina Keypoint), introduced by Alahi et al. [46], generates retinal sampling patterns using a circular sampling grid and uses a binary descriptor formed by one-bit differences of Gaussians (DoG). Alcantarilla et al. introduced KAZE [47] features, which exploit a non-linear scale space built with non-linear diffusion filtering, and later extended it to AKAZE [48] by replacing the filtering with a more computationally efficient method called FED (Fast Explicit Diffusion) [49, 50].
In our work, we have selected four methods to investigate: SIFT, SURF, FAST+BRISK, AKAZE.
§.§ Planar Pose Estimation
Among the many techniques in the literature on pose estimation, we focus our review on those related to planar pose estimation. In recent years, planar pose estimation has become increasingly popular in many fields, such as robotics and augmented reality.
Simon et al. [51] proposed a pose estimation technique for planar structures using homography projection, computing the camera pose from consecutive images. Changhai et al. [52] presented a method to robustly estimate the 3D poses of planes by applying a weighted incremental normal estimation method that uses Bayesian inference. Donoser et al. [53] utilized the properties of Maximally Stable Extremal Regions (MSERs [54]) to construct a perspectively invariant frame on a closed contour to estimate the planar pose. In our approach, we apply a perspective transformation to approximate a set of corresponding points on the test image, estimate the basis vectors of the object surface, and use depth information to estimate the 3D pose by computing the normal to the planar object.
§.§ Adaptive Grasping
Designing an adaptive grasping system is challenging due to the complex nature of object shapes. Early work relied on analytical methods, in which the system would analyze the geometric structure of an object and try to predict suitable grasping points. Sahbani et al. [55] provide an in-depth review of existing analytical approaches to 3D object grasping. However, analytical approaches make force computation difficult and are not well suited for autonomous manipulation. Later, as the number of available 3D models increased, numerous data-driven methods were introduced that analyze grasps in a 3D model database and transfer them to the target object. Bohg et al. [56] reviewed data-driven grasping methods, dividing the approaches into three groups based on the familiarity of the object.
Kehoe et al. [57] selected a candidate grasp from a candidate grasp set based on a feasibility score determined by the grasp planner. The grasps were not very accurate in situations where the objects had stable horizontal poses and were close to the width of the robot's gripper. Huebner et al. [58] took a similar approach, performing grasp candidate simulation. They created a sequence of grasps by approximating the shapes of the objects and then computed a random grasp evaluation for each object model. In both works, a grasp is chosen from a list of candidate grasps.
The recent advances in deep learning have also made it possible to regress grasp configurations through deep convolutional networks. A number of deep learning-based methods were reviewed in [59], where the authors also discussed how each element of these methods enhances robotic grasp detection. [60] presented a system in which deep neural networks learn hierarchical features to detect and estimate the pose of an object, and the centers of the defined pose classes are then used to grasp the objects. Kroemer et al. [61] introduced an active learning approach in which the robot observes a few good grasps by demonstration and learns a value function for these grasps using Gaussian process regression. Aleotti et al. [62] proposed a grasping model capable of grasping objects by their parts, which learns new tasks from human demonstration with automatic 3D shape segmentation for object recognition and semantic modeling. [63] and [64] used supervised learning to predict grasp locations from RGB images. In [65], as an alternative to a trial-and-error exploration strategy, the authors proposed a Bayesian optimization technique to address the grasp optimization problem for unknown objects. These methods emphasize developing and using learned models to obtain accurate grasps.
In our work, we focus on pre-defining a suitable grasp relative to an object that can adapt to a new grasp based on the change of position and orientation of the object.
§ METHOD
The proposed method is divided into two parts. The first part outlines the process of simultaneous object detection and pose estimation of multiple objects and the second part describes the process of generating an adaptive grasp using the pre-trained canonical grasp and the object pose.
The following sections describe the architecture of the proposed framework (figure <ref>) in detail.
§.§ Object Detection and Pose Estimation
We present a planar pose estimation algorithm (algorithm <ref>) for adaptive grasping that consists of four phases: (i) feature extraction and matching, (ii) homography estimation and perspective transformation, (iii) directional vectors estimation on the object surface, (iv) planar pose estimation using the depth data. In the following sections, we will focus on the detailed description of the aforementioned steps.
§.§.§ Feature extraction and matching
Our object detection starts with extracting features from the images of the planar objects and then matching them with the features found in the images acquired from the camera. Image features are patterns in images that can be used to describe the image. A feature-detecting algorithm takes an image and returns the locations of these patterns; they can be edges, corners or interest points, blobs or regions of interest, ridges, etc. This feature information then needs to be transformed into a vector space using a feature descriptor, so that numerical operations can be executed on it. A feature descriptor encodes these patterns into a series of numerical values that can be used to match, compare, and differentiate one feature from another; for example, these feature vectors can be used to find similarities in different images, which leads to detecting objects in the image. In theory, this information would be invariant to image transformations. In our work, we have investigated the SIFT [32], SURF [33], AKAZE [48], and BRISK [45] descriptors. SIFT, SURF, and AKAZE are each both feature detectors and descriptors, while BRISK uses the FAST [41] algorithm for feature detection. These descriptors were selected after carefully reviewing the comparisons reported in the recent literature [66, 67, 68].
Once the features are extracted and transformed into vectors, we compare them to determine the presence of an object in the scene. For non-binary feature descriptors (SIFT, SURF) we find matches using the nearest neighbor algorithm. However, finding nearest-neighbor matches in high-dimensional data is computationally expensive, and as more objects are introduced it can affect the process of updating the pose in real time. To counter this issue to some extent, we used the FLANN [69] implementation of k-d tree nearest neighbor search, an approximation of the k-nearest neighbor algorithm that is optimized for high-dimensional features. For binary features (AKAZE, BRISK), we used the Hamming distance ratio method to find the matches. Finally, if we have more than ten matches, we presume the object is present in the scene.
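As an illustrative sketch (not the authors' exact implementation), the matching logic above can be expressed as a nearest-neighbour search with a ratio test for float descriptors and Hamming distance for binary ones; the toy descriptor values and the `ratio` threshold below are assumptions for illustration.

```python
# Sketch of descriptor matching: nearest-neighbour search with a ratio test
# (for SIFT/SURF-style float descriptors) and Hamming distance (for binary
# descriptors such as AKAZE/BRISK). Helper names are hypothetical.
import math

def l2(a, b):
    # Euclidean distance between two float descriptors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def hamming(a, b):
    # a, b: equal-length bit strings, e.g. "10110..."
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def ratio_test_matches(query, train, dist=l2, ratio=0.75):
    """Keep a query descriptor's best match only if it is clearly better
    than the second-best match (the ratio test)."""
    matches = []
    for qi, q in enumerate(query):
        d = sorted((dist(q, t), ti) for ti, t in enumerate(train))
        if len(d) >= 2 and d[0][0] < ratio * d[1][0]:
            matches.append((qi, d[0][1]))
    return matches

def object_present(matches, threshold=10):
    # Following the paper's rule of thumb: enough good matches => object present
    return len(matches) >= threshold
```

In a real pipeline, these descriptors would come from the chosen detector, and FLANN would replace the exhaustive search for speed.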
Algorithm: Planar Pose Estimation

Input: training images of planar objects, $\mathcal{I}$
$Detector \gets$ define feature detector
$Descriptor \gets$ define feature descriptor
// retrieve a feature descriptor set for each image in $\mathcal{I}$
for each image $i$ in $\mathcal{I}$:
    // $\mathcal{K}$ is the set of detected keypoints for image $i$
    $\mathcal{K} \gets \texttt{DetectKeypoints}(i, Detector)$
    // $\mathcal{D}[i]$ is the corresponding descriptor set for image $i$
    $\mathcal{D}[i] \gets \texttt{GetDescriptors}(\mathcal{K}, Descriptor)$
while the camera is on:
    $f \gets$ RGB image frame
    $PC \gets$ point cloud data
    // $K_F$ is the set of detected keypoints for image frame $f$
    $K_F \gets \texttt{DetectKeypoints}(f, Detector)$
    // $D_F$ is the corresponding descriptor set for frame $f$
    $D_F \gets \texttt{GetDescriptors}(K_F, Descriptor)$
    for each image $i$ in $\mathcal{I}$:
        $matches \gets \texttt{FindMatches}(\mathcal{D}[i], D_F)$
        // if there are at least 10 matches, the object described by image $i$ is in the scene
        if total number of $matches \geq 10$:
            // extract the matched keypoint pairs $(kp_{i}, kp_{f})$ from $matches$
            $kp_{i}, kp_{f} \gets \texttt{ExtractKeypoints}(matches)$
            $\mathbf{H} \gets \texttt{EstimateHomography}(kp_{i}, kp_{f})$
            $p_c, p_x, p_y \gets$ points on the planar object obtained using equation (\ref{eqn:axis})
            $p_c^{'}, p_x^{'}, p_y^{'} \gets$ corresponding projected points of $p_c, p_x, p_y$ on image frame $f$, estimated using equations (\ref{eqn:homography}) and (\ref{eqn:projection})
            // $\vec{c}$ denotes the origin of the object frame with respect to the base/world frame
            $\vec{c}, \vec{x}, \vec{y} \gets$ corresponding 3D locations of $p_c^{'}, p_x^{'}, p_y^{'}$ from point cloud $PC$
            // shift $\vec{x}, \vec{y}$ to the origin of the base/world frame
            $\vec{x} \gets \vec{x} - \vec{c}$
            $\vec{y} \gets \vec{y} - \vec{c}$
            // estimate the object frame as three orthonormal vectors $\hat{i}, \hat{j}, \hat{k}$
            $\hat{i}, \hat{j}, \hat{k} \gets$ from equation (\ref{eqn:unitv})
            // compute the rotation $\phi_i, \theta_i, \psi_i$ of the object frame $\hat{i}, \hat{j}, \hat{k}$ with respect to the base/world frame $\vec{X}, \vec{Y}, \vec{Z}$
            $\phi_i, \theta_i, \psi_i \gets$ from equation (\ref{eqn:eulerangles})
            // finally, publish the position and orientation of the object
§.§.§ Homography Estimation and Perspective Transformation
A homography is an invertible mapping of points and lines on the projective plane that describes a 2D planar projective transformation (figure <ref>) and can be estimated from a given pair of images. In simple terms, a homography is a matrix that maps a set of points in one image to the corresponding set of points in another image. We can use a homography matrix $\mathbf{H}$ to find the corresponding points using equations <ref> and <ref>, which define the relation of the projected point $(x^{'}, y^{'})$ (figure <ref>) on the rotated plane to the reference point $(x,y)$.
A 2D point $(x,y)$ in an image can be represented as a 3D vector $(x, y, 1)$; this is called the homogeneous representation of a point lying on the reference plane or image of the planar object. In equation (<ref>), $\mathbf{H}$ represents the homography matrix and $[x~y~1]^{T}$ is the homogeneous representation of the reference point $(x,y)$; the values of $a, b, c$ are then used to estimate the projected point $(x^{'},y^{'})$ in equation (<ref>).
\begin{align}
\left [ \begin{matrix} a \\ b \\ c \end{matrix} \right ] = \mathbf{H}\begin{bmatrix} x\\ y\\ 1\\ \end{bmatrix} = \begin{bmatrix} h_{11}&h_{12}&h_{13}\\ h_{21}&h_{22}&h_{23}\\ h_{31}&h_{32}&h_{33}\\ \end{bmatrix} \begin{bmatrix} x\\ y\\ 1\\ \end{bmatrix}
\label{eqn:homography}
\end{align}
\begin{equation}
\begin{aligned}
\left \lbrace \begin{aligned}
x^{'} = \frac{a}{c} \\
y^{'} = \frac{b}{c}
\end{aligned} \right .
\end{aligned}
\label{eqn:projection}
\end{equation}
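The mapping in the two equations above can be sketched in a few lines of Python; the example homography below (a pure translation) is an illustrative assumption.

```python
# Sketch of equations (homography)/(projection): map a point (x, y) through a
# 3x3 homography H by homogeneous multiplication, then divide by c.
def project_point(H, x, y):
    a = H[0][0] * x + H[0][1] * y + H[0][2]
    b = H[1][0] * x + H[1][1] * y + H[1][2]
    c = H[2][0] * x + H[2][1] * y + H[2][2]
    return a / c, b / c

# A pure translation by (2, 3) expressed as a homography:
H = [[1, 0, 2],
     [0, 1, 3],
     [0, 0, 1]]
# project_point(H, 1, 1) -> (3.0, 4.0)
```

For a genuinely projective $\mathbf{H}$ (non-zero $h_{31}, h_{32}$), the division by $c$ is what produces the perspective foreshortening.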
We estimate the homography using the matches found from the nearest neighbor search as input. Often these matches contain completely false correspondences, meaning they do not correspond to the same real-world feature at all, which is a problem for homography estimation. We therefore chose RANSAC [70] to robustly estimate the homography by considering only inlier matches; RANSAC estimates the underlying model parameters and detects outliers by generating candidate solutions through random sampling using a minimum number of observations.
While other techniques use as much data as possible to find the model parameters and then prune the outliers, RANSAC uses the smallest set of data points possible to estimate the model, making it faster and more efficient than conventional solutions.
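As a hypothetical sketch (not the authors' implementation), RANSAC-based homography fitting in the spirit of [70] can be written as follows: repeatedly fit $\mathbf{H}$ via the direct linear transform (DLT) to a minimal sample of four correspondences and keep the candidate with the most inliers. NumPy availability is assumed.

```python
# RANSAC homography sketch: minimal 4-point samples + DLT + inlier counting.
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform on >= 4 point pairs; returns 3x3 H up to scale."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)      # null-space vector, reshaped to H

def reproj_error(H, src, dst):
    p = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    p = p[:, :2] / p[:, 2:3]         # homogeneous division
    return np.linalg.norm(p - dst, axis=1)

def ransac_homography(src, dst, iters=200, thresh=3.0, seed=0):
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)   # minimal sample
        H = fit_homography(src[idx], dst[idx])
        inliers = reproj_error(H, src, dst) < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    # refit on all inliers of the best candidate for stability
    return fit_homography(src[best], dst[best]), best
```

In practice, OpenCV users would typically call `cv2.findHomography(src, dst, cv2.RANSAC)`, which implements the same idea.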
Object in different orientation from the camera
System architecture.
§.§.§ Finding directional vectors on the object
In order to find the pose of a planar object, we need to estimate the three orthonormal vectors on the planar object that describe the object coordinate frame and, consequently, the orientation of the object relative to the world coordinate system. We start by estimating the vectors on the planar object that form the basis of the plane, illustrated in figure <ref>. Then, we take the cross product of these two vectors to find the third directional vector, which is the normal to the object surface. Let us denote the world coordinate system as $XYZ$ and the object coordinate system as $xyz$. We define the axes of the orientation in relation to a body as:
$x \to \text{right}$
$y \to \text{up}$
$z \to \text{towards the camera}$
First, we retrieve the locations of the three points $p_c, p_x, p_y$ on the planar object from the reference image using equation (<ref>) and then locate the corresponding points $p_{c}^{'}, p_{x}^{'}, p_{y}^{'}$ on the image acquired from the Microsoft Kinect sensor. We estimate the locations of these points using the homography matrix $\mathbf{H}$, as shown in equations <ref> and <ref>. Then we find the corresponding 3D locations of $p_{c}^{'}, p_{x}^{'}, p_{y}^{'}$ from the point cloud data, also obtained from the Microsoft Kinect sensor. We denote them as vectors $\vec{c}$, $\vec{x}$, and $\vec{y}$. Here, $\vec{c}$ represents the translation vector from the object frame to the world frame and also the position of the object in the world frame. Next, we subtract $\vec{c}$ from $\vec{x}$ and $\vec{y}$, which gives us two vectors $\vec{x}$ and $\vec{y}$ centered at the origin of the world frame. We take the cross product of these two vectors $\vec{x}, \vec{y}$ to find the third axis $\vec{z}$. However, depending on the homography matrix, the estimated axes $\vec{x}$ and $\vec{y}$ might not be exactly orthogonal, so we take the cross product of $\vec{y}$ and $\vec{z}$ to recalculate the vector $\vec{x}$. Now that we have three orthogonal vectors, we compute the three unit vectors $\hat{i}$, $\hat{j}$, and $\hat{k}$ along the $\vec{x}$, $\vec{y}$, and $\vec{z}$ vectors respectively using equation <ref>. These three orthonormal vectors describe the object frame. The vectors were projected onto the image plane to give a visual confirmation of the method; figure <ref> shows the orthogonal axes projected onto the object plane.
\begin{equation}
\vcenter{\hbox{\begin{minipage}{5cm}
\centering
\includegraphics[width=4cm,height=4cm]{images/box_axis.png}
\captionof{figure}{Axis on the reference plane}
\end{minipage}}}
\begin{aligned}
\left \lbrace \begin{aligned}
p_c &= (w/2, h/2)
\\
p_x &= (w, h/2)
\\
p_y &= (w/2, 0)
\end{aligned} \right .
\end{aligned}
\label{eqn:axis}
\end{equation}
Computed third directional axis projected onto image plane
\begin{align}
% \hfill
\begin{split}
\hat{j} = \frac{\Vec{y}}{|\Vec{y}|} = [j_X \hspace{0.15cm} j_Y \hspace{0.15cm} j_Z]
\\
\hat{k} = \frac{\Vec{x} \times \Vec{y}}{|\Vec{x} \times \Vec{y}|} = [k_X \hspace{0.15cm} k_Y \hspace{0.15cm} k_Z]
\\
\hat{i} = \frac{\Vec{y} \times \Vec{z}}{|\Vec{y} \times \Vec{z}|} = [i_X \hspace{0.15cm} i_Y \hspace{0.15cm} i_Z]
\end{split}
\label{eqn:unitv}
\end{align}
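A minimal sketch of equation (\ref{eqn:unitv}): given the depth-derived vectors $\vec{x}$ and $\vec{y}$ (already shifted so they originate at $\vec{c}$), build the orthonormal object frame, re-orthogonalising $\hat{i}$. The numeric example vectors below are illustrative assumptions.

```python
# Build the object frame (i_hat, j_hat, k_hat) from the in-plane vectors x, y.
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def object_frame(x, y):
    j = normalize(y)              # j_hat along y
    k = normalize(cross(x, y))    # k_hat: normal to the plane (z = x cross y)
    i = normalize(cross(y, k))    # i_hat recomputed so the frame is orthonormal
    return i, j, k
```

Note that $\hat{i} = \hat{j} \times \hat{k}$, so the result is a right-handed frame even when the measured $\vec{x}$ and $\vec{y}$ are only approximately orthogonal.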
§.§.§ Planar pose computation
We compute the pose of the object in terms of the Euler angles. Euler angles are three angles that describe the orientation of a rigid body with respect to a fixed coordinate system. The rotation matrix $\mathbf{R}$ in equation (<ref>) rotates X axis to $\hat{i}$, Y axis to $\hat{j}$, and Z axis to $\hat{k}$.
\begin{align}
\mathbf{R} = \left [ \begin{matrix} i_X & j_X & k_X \\ i_Y & j_Y & k_Y \\ i_Z & j_Z & k_Z \end{matrix} \right ]
\label{eqn:rotR}
\end{align}
Euler angles are combinations of three axis rotations (equation <ref>), where $\phi$, $\theta$, and $\psi$ specify the intrinsic rotations around the X, Y, and Z axes, respectively. The combined rotation matrix is a product of three matrices, $\mathbf{R} = \mathbf{R}_z \mathbf{R}_y \mathbf{R}_x$ (equation <ref>), with the first intrinsic rotation rightmost and the last leftmost.
\begin{align}\medmath{
\left\lbrace \begin{aligned}
\mathbf{R}_x &= \colvec {1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi } \\
\mathbf{R}_y &= \colvec {\cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta } \\
\mathbf{R}_z &= \colvec {\cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 }
\end{aligned} \right .}
\label{eqn:euler-axis}
\end{align}
\begin{align}
\mathbf{R} =
\begin{bmatrix*}
c\theta c\psi
& s\phi s\theta c\psi - c\phi s\psi
& c\phi s\theta c\psi + s\phi s\psi
\\ c\theta s\psi
& s\phi s\theta s\psi + c\phi c\psi
& c\phi s\theta s\psi - s\phi c\psi
\\ -s\theta
& s\phi c\theta
& c\phi c\theta
\end{bmatrix*}
\label{eqn:rotcomb}
\end{align}
In equation <ref>, $c$ and $s$ represent $\cos$ and $\sin$, respectively.
Solving for $\phi, \theta$, and $\psi$ from (<ref>) and (<ref>), we get,
\begin{align}\medmath{
\left\lbrace \begin{aligned}
\phi &= \tan^{-1}\left(\frac{j_Z}{k_Z}\right) \\
\theta &= \tan^{-1}\left(\frac{-i_Z}{\sqrt{1-i_Z^2}}\right) = \sin^{-1}\left(-i_Z\right) \\
\psi &= \tan^{-1}\left(\frac{i_Y}{i_X}\right)
\end{aligned} \right .}
\label{eqn:eulerangles}
\end{align}
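Equations (\ref{eqn:rotcomb}) and (\ref{eqn:eulerangles}) can be sketched and cross-checked in a few lines; the round trip (angles to matrix and back) makes the convention explicit.

```python
# Sketch of the Euler-angle convention R = Rz @ Ry @ Rx and its inversion.
import math

def euler_from_matrix(R):
    # R[row][col]; columns are i_hat, j_hat, k_hat and rows are X, Y, Z,
    # so e.g. R[2][0] = i_Z, matching equation (eulerangles).
    phi   = math.atan2(R[2][1], R[2][2])   # tan^-1(j_Z / k_Z)
    theta = math.asin(-R[2][0])            # sin^-1(-i_Z)
    psi   = math.atan2(R[1][0], R[0][0])   # tan^-1(i_Y / i_X)
    return phi, theta, psi

def matrix_from_euler(phi, theta, psi):
    cf, sf = math.cos(phi), math.sin(phi)
    ct, st = math.cos(theta), math.sin(theta)
    cp, sp = math.cos(psi), math.sin(psi)
    # expanded product Rz Ry Rx, equation (rotcomb)
    return [
        [ct * cp, sf * st * cp - cf * sp, cf * st * cp + sf * sp],
        [ct * sp, sf * st * sp + cf * cp, cf * st * sp - sf * cp],
        [-st,     sf * ct,                cf * ct],
    ]
```

The `atan2` form avoids the quadrant ambiguity of a plain `tan^-1`; the convention degenerates (gimbal lock) when $\cos\theta = 0$.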
(a),(b),(c) are recovered poses from robot's camera and (d),(e),(f) are corresponding poses visualized in RViz
§.§ Training Grasps for Humanoid Robots
To ensure that the robot can grasp objects in an adaptive manner, we pre-train the robot to perform a set of canonical grasps. We place the object and the robot's gripper close to each other and record the relative pose. This essentially gives us the pose of the gripper with respect to the object. Figure <ref> illustrates the training process in which the robot's gripper and a cracker box have been placed in close proximity and the relative poses have been recorded for grasping the objects from the side.
\begin{equation}
\textbf{T}_{s}^{d}
= \begin{bmatrix} \textbf{R}_{s}^{d} & P_{s}^{d} \\ 0 & 1 \end{bmatrix}
=\begin{bmatrix} r_{11} & r_{12} & r_{13} & X_t \\
r_{21} & r_{22} & r_{23} & Y_t \\
r_{31} & r_{32} & r_{33} & Z_t \\
0 & 0 & 0 & 1 \end{bmatrix}
\label{eqn:transmat}
\end{equation}
Equation <ref> outlines the structure of a transformation matrix $\textbf{T}_{s}^{d}$ that describes the rotation and translation of frame $d$ with respect to frame $s$; $\textbf{R}_{s}^{d}$ represents the rotation matrix, similar to equation <ref>, and $P_{s}^{d}=[X_{t},Y_{t},Z_{t}]^{T}$ is the translation vector, i.e. the 3D location of the origin of frame $d$ in frame $s$.
During the training phase, we first formulate the transformation matrix $\textbf{T}_{b}^{o}$ using the rotation matrix and the object location. We take the inverse of $\textbf{T}_{b}^{o}$, which gives us the transformation matrix $\textbf{T}_{o}^{b}$. We then use equation <ref> to record the transformation $\mathbf{T}_{o}^{g}$ of the robot's wrist relative to the object.
Pre-training canonical grasp
\begin{equation} \label{eqn:graspmat}
T_{o}^{g} = T_{o}^{b} \times T_{b}^{g} \ \text{where} \ T_{o}^{b} = (T_{b}^{o})^{-1}
\end{equation}
In equation <ref>, $b$ refers to the robot's base, $o$ to the object, and $g$ to the wrist of the robot to which the gripper is attached. Once the matrix is recorded, in the testing phase we obtain a new pose of the object from the vision system and generate the final matrix using equation <ref>, which contains the new position and orientation of the robot's wrist in matrix form.
\begin{equation} \label{eqn:fingrasp}
T_{b}^{g} = T_{b}^{o} \times T_{o}^{g}
\end{equation}
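The train/test composition of equations (\ref{eqn:graspmat}) and (\ref{eqn:fingrasp}) can be sketched with homogeneous transforms; NumPy availability and the example poses are assumptions for illustration.

```python
# Sketch of canonical-grasp recording and adaptation via homogeneous transforms.
import numpy as np

def make_T(R, p):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and position."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def record_canonical_grasp(T_b_o, T_b_g):
    # training phase: T_o^g = (T_b^o)^-1 @ T_b^g
    return np.linalg.inv(T_b_o) @ T_b_g

def adapt_grasp(T_b_o_new, T_o_g):
    # testing phase: T_b^g = T_b^o @ T_o^g
    return T_b_o_new @ T_o_g
```

Because $\mathbf{T}_{o}^{g}$ is expressed in the object frame, the adapted gripper pose follows the object rigidly under any new rotation and translation.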
We then extract the rotational angles $\gamma$, $\beta$, $\alpha$ (roll, pitch, yaw) of the grasp pose from the matrix $\mathbf{T}_{b}^{g}$ using equation <ref>.
\begin{align}\medmath{
\left\lbrace \begin{aligned}
\gamma=tan^{-1}(r_{32}/r_{33}) \\
\beta=tan^{-1}\frac{-r_{31}}{\sqrt {{r_{32}}^2+{r_{33}}^2}}\\
\alpha=tan^{-1}(r_{21}/r_{11})
\end{aligned} \right .}
\label{eqn:grippereulerangles}
\end{align}
§ EVALUATION
The proposed object recognition and pose estimation algorithm was implemented on an Ubuntu 14.04 platform equipped with a 3.0 GHz Intel Core i5-7400 CPU and 8 GB of system memory. The RGB-D camera used in the experiments was a Microsoft Kinect sensor v1. We evaluated the proposed algorithm by comparing the object recognition accuracy, pose estimation accuracy, and execution time of four different feature descriptors. We also validated the effectiveness of our approach for adaptive grasping by conducting experiments with the PR2 robot.
§.§ Object detection and pose estimation
Without enough observable features, the system would fail to find good matches that are required for accurate homography estimation. Consequently, our object detection and pose estimation approach has a constraint on the out-of-plane rotation $\theta$, illustrated in figure <ref>. In other words, if the out-of-plane rotation of the object is more than $\theta$, the system would not be able to recognize the object. Fast execution is also a crucial aspect to facilitate multiple object detection and pose estimation for real-time applications. We experimented with four different descriptors on several planar objects and the comparative result is shown in table <ref>. The execution time was measured for the object detection and pose estimation step. AKAZE and BRISK had much lower processing time for detection and pose estimation, thus would have a better frame rate, but SIFT and SURF had larger out-of-plane rotational freedom.
Out of plane rotation
Comparison of feature descriptors
Descriptor   Maximum out-of-plane rotation   Execution time
SIFT         $48^{\circ}\pm2^{\circ}$        0.21 s
SURF         $37^{\circ}\pm2^{\circ}$        0.27 s
AKAZE        $18^{\circ}\pm1^{\circ}$        0.05 s
BRISK        $22^{\circ}\pm2^{\circ}$        0.06 s
We also computed the mean difference $\epsilon$ (equation <ref>) between the re-calculated $\vec{x}$ and the original $\vec{x}$ (denoted $\vec{x}^{'}$ in the equation) for increasing out-of-plane rotation of the planar objects, in order to assess the homography estimation. Ideally, the two estimated vectors $\vec{x}$ and $\vec{y}$, which describe the basis of the plane of the planar object, should be orthogonal to each other, but often they are not. The values of $\epsilon$ in figure <ref> therefore indicate the average error in homography estimation for different out-of-plane rotations. In figure <ref>, we can see that AKAZE has much higher $\epsilon$ values while the rest remain within a close range; this indicates that AKAZE produces a much larger error in estimating the homography than the other methods.
Out of plane rotation vs $\epsilon$
We chose SIFT and SURF to evaluate how the execution time for detection scales with an increasing number of objects. From table <ref>, which shows the mean processing time for object detection, we can see that SURF had a detection time around 50% higher than SIFT in all cases. This outcome, coupled with the previous results, prompted us to select SIFT for the subsequent experiments.
The system was capable of detecting multiple objects in real-time and at the same time could estimate their corresponding poses. Figure <ref> shows detected objects with estimated directional planar vectors. We can also observe that the system was robust to in-plane rotation and partial occlusion.
Execution time of SIFT and SURF for multiple object detection
Number of objects   SIFT detection time   SURF detection time
1                   0.06 s                0.09 s
2                   0.11 s                0.17 s
3                   0.17 s                0.26 s
4                   0.22 s                0.35 s
5                   0.28 s                0.45 s
6                   0.34 s                0.54 s
Multiple object detection with estimated planar vectors
We used RViz [71], a 3D visualizer for the Robot Operating System (ROS) [72], to validate the pose estimation. The calculated directional axes were projected onto the image and the estimated poses were visualized in RViz. As shown in figure <ref>, we qualitatively verified the accuracy of the detection and the estimated pose by comparing the two outputs, which render similar results. We conducted experiments with multiple objects and human-held objects as well. Figure <ref> illustrates the simultaneous detection and pose estimation of two different boxes and of an object held by a human, respectively.
(a) Pose estimation of multiple objects (b) Estimated pose of an object held by a human
\begin{equation}
\epsilon = \frac{1}{N}\sum_{i=1}^{N}||\vec{x_i}^{'}-\vec{x_i}||, \text{\fontsize{8}{8}\selectfont where N is the number of frames}
\label{eqn:epsilon}
\end{equation}
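Equation (\ref{eqn:epsilon}) can be sketched directly; the frame vectors in the usage below are illustrative assumptions.

```python
# Sketch of equation (epsilon): mean Euclidean distance between the
# re-orthogonalised x vector and the originally estimated one over N frames.
import math

def epsilon(x_orig, x_recalc):
    """x_orig, x_recalc: lists of 3-vectors, one pair per frame."""
    n = len(x_orig)
    return sum(math.dist(a, b) for a, b in zip(x_orig, x_recalc)) / n
```

For perfectly orthogonal estimates the two vectors coincide and $\epsilon = 0$; growing $\epsilon$ signals degrading homography quality.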
§.§ Adaptive grasping
We assessed our approach to adaptive grasping with two different aspects of robotic applications in mind: robotic tasks that require 1) interacting with a static environment, and 2) interacting with humans.
We first tested our system on static objects, where the object was attached to a tripod. Next, we set up experiments where the object was held by a human. We used a sticker book and a cartoon book and evaluated our system on a comprehensive set of poses. In almost all the experiments, the robot successfully grasped the object in a manner consistent with its training. Some poses were not reachable by the robot; for instance, when the object was pointing inward along the X-axis in the robot reference frame, it was not possible for the end-effector to make a top grasp. Figures <ref> and <ref> show the successful grasping of the robot for both types of experiments.
Robot grasping an object from a tripod. Left: initial position of the robot's gripper, middle: gripper adapting to the object's pose, right: grasping of the object.
Robot grasping an object held by a human. Left: initial position of the robot's gripper, middle: gripper adapting to the object's pose, right: grasping of the object.
§ CONCLUSION AND FUTURE WORK
This work presents an approach that enables humanoid robots to grasp objects using planar pose estimation based on RGB image and depth data. We examined the performance of four feature-detector-descriptors for object recognition and found SIFT to be the best solution. We used FLANN's K-d Tree Nearest Neighbor implementation, and Bruteforce Hamming to find the keypoint matches and employed RANSAC to estimate the homography. The homography matrix was used to approximate the three orthonormal directional vectors on the planar object using perspective transformation. The pose of the planar object was estimated from the three directional vectors. The system was able to detect multiple objects and estimate the pose of the objects in real-time. We also conducted experiments with the humanoid PR2 robot to show the practical applicability of the framework where the robot grasped objects by adapting to a range of different poses.
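The homography-to-directions step summarized above can be sketched with plain NumPy; the homography `H` and the template axis points below are made-up values (in the actual pipeline, `H` comes from RANSAC over the SIFT keypoint matches):

```python
import numpy as np

def project(H, pts):
    """Apply a 3x3 homography to 2-D points (perspective transform)."""
    pts_h = np.c_[pts, np.ones(len(pts))]   # to homogeneous coordinates
    out = (H @ pts_h.T).T
    return out[:, :2] / out[:, 2:3]         # divide out the projective scale

# Hypothetical homography mapping the planar template to the image plane.
H = np.array([[1.2,   0.1, 30.0],
              [0.0,   1.1, 40.0],
              [0.001, 0.0,  1.0]])

# Template-frame origin and unit points along the planar X and Y axes.
template_axes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
o, x, y = project(H, template_axes)

# In-plane direction vectors of the object as seen in the image; the
# third (normal) axis follows from their cross product once lifted to 3-D.
x_dir = (x - o) / np.linalg.norm(x - o)
y_dir = (y - o) / np.linalg.norm(y - o)
print(x_dir, y_dir)
```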
In the future, we plan to add GPU acceleration for the proposed algorithm, which would further improve the overall computational efficiency of the system. We would like to extend the algorithm to automatically prioritize certain objects and to limit the number of objects to detect based on different scheduled tasks. Finally, we would like to incorporate grasp-configuration transfer for familiar objects and to explore other feature matching techniques, e.g. multi-probe LSH and hierarchical k-means trees.
[1]
K. He et al.
Deep residual learning for image recognition.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pp. 770–778, 2016.
[2]
S. Liu and W. Deng.
Very deep convolutional neural network based image classification
using small training sample size.
In 2015 3rd IAPR Asian Conference on Pattern Recognition
(ACPR), pp. 730–734, 2015.
[3]
C. Szegedy et al.
Going deeper with convolutions.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pp. 1–9, 2015.
[4]
D. C. Ciresan et al.
Flexible, high performance convolutional neural networks for image classification.
In Twenty-Second International Joint Conference on Artificial
Intelligence, 2011.
[5]
P. Sermanet et al.
Overfeat: Integrated recognition, localization and detection using
convolutional networks. 2nd international conference on learning
representations, iclr 2014.
jan 2014.
2nd International Conference on Learning Representations, ICLR 2014 ;
Conference date: 14-04-2014 Through 16-04-2014.
[6]
K. He et al.
Spatial pyramid pooling in deep convolutional networks for visual recognition.
IEEE transactions on pattern analysis and machine intelligence,
37(9):1904–1916, 2015.
[7]
R. Girshick.
Fast R-CNN.
In Proceedings of the IEEE international conference on computer
vision, pp. 1440–1448, 2015.
[8]
S. Ren et al.
Faster R-CNN: towards real-time object detection with region
proposal networks.
In Advances in neural information processing systems, pp.
91–99, 2015.
[9]
W. Liu et al.
Ssd: Single shot multibox detector.
In European conference on computer vision, pp. 21–37.
Springer, 2016.
[10]
J. Redmon et al.
You only look once: Unified, real-time object detection.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pp. 779–788, 2016.
[11]
J. Redmon and A. Farhadi.
Yolo9000: better, faster, stronger.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pp. 7263–7271, 2017.
[12]
T.-Y. Lin et al.
Focal loss for dense object detection.
In Proceedings of the IEEE international conference on computer
vision, pp. 2980–2988, 2017.
[13]
V. Badrinarayanan et al.
Segnet: A deep convolutional encoder-decoder architecture for image segmentation.
IEEE transactions on pattern analysis and machine intelligence,
39(12):2481–2495, 2017.
[14]
K. He et al.
Mask r-cnn.
In Proceedings of the IEEE international conference on computer
vision, pp. 2961–2969, 2017.
[15]
O. Ronneberger et al.
U-net: Convolutional networks for biomedical image segmentation.
In International Conference on Medical image computing and
computer-assisted intervention, pp. 234–241. Springer, 2015.
[16]
D. J. Butler et al.
A naturalistic open source movie for optical flow evaluation.
In A. Fitzgibbon et al., editors, Computer Vision – ECCV 2012,
pp. 611–625, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg.
[17]
N. Mayer et al.
A large dataset to train convolutional networks for disparity,
optical flow, and scene flow estimation.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pp. 4040–4048, 2016.
[18]
W. Qiu and A. Yuille.
Unrealcv: Connecting computer vision to unreal engine.
In European Conference on Computer Vision, pp. 909–916.
Springer, 2016.
[19]
Y. Zhang et al.
Unrealstereo: A synthetic dataset for analyzing stereo vision.
arXiv preprint arXiv:1612.04647, 2016.
[20]
J. McCormac et al.
Scenenet rgb-d: Can 5m synthetic images beat generic imagenet
pre-training on indoor segmentation?
In The IEEE International Conference on Computer Vision (ICCV),
Oct 2017.
[21]
Y. Xiang et al.
Posecnn: A convolutional neural network for 6d object pose estimation
in cluttered scenes.
In Robotics: Science and Systems (RSS), 2018.
[22]
J. Tremblay et al.
Deep object pose estimation for semantic robotic grasping of
household objects.
In Conference on Robot Learning (CoRL), 2018.
[23]
E. Brachmann et al.
Learning 6d object pose estimation using 3d object coordinates.
In European conference on computer vision, pp. 536–551.
Springer, 2014.
[24]
C. Wang et al.
Densefusion: 6d object pose estimation by iterative dense fusion.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pp. 3343–3352, 2019.
[25]
Y. Hu et al.
Segmentation-driven 6d object pose estimation.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pp. 3385–3394, 2019.
[26]
C. G. Harris et al.
A combined corner and edge detector.
In Alvey vision conference, volume 15, pp. 10–5244. Citeseer, 1988.
[27]
C. Tomasi and T. Kanade.
Detection and tracking of point features.
School of Computer Science, Carnegie Mellon Univ. Pittsburgh, 1991.
[28]
J. Shi et al.
Good features to track.
In 1994 Proceedings of IEEE conference on computer vision and
pattern recognition, pp. 593–600. IEEE, 1994.
[29]
D. Hall et al.
Saliency of interest points under scale changes.
In BMVC, pp. 1–10, 2002.
[30]
T. Lindeberg.
Feature detection with automatic scale selection.
International journal of computer vision, 30(2):79–116, 1998.
[31]
K. Mikolajczyk and C. Schmid.
Indexing based on scale invariant interest points.
In Proceedings Eighth IEEE International Conference on Computer
Vision. ICCV 2001, volume 1, pp. 525–531. IEEE, 2001.
[32]
D. G. Lowe.
Distinctive image features from scale-invariant keypoints.
International Journal of Computer Vision, 2004.
[33]
H. Bay et al.
Surf: Speeded up robust features.
In A. Leonardis et al., editors, Computer Vision – ECCV 2006,
pp. 404–417, Berlin, Heidelberg, 2006. Springer Berlin Heidelberg.
[34]
Y. Ke and R. Sukthankar.
Pca-sift: A more distinctive representation for local image descriptors.
In Proceedings of the 2004 IEEE Computer Society Conference on
Computer Vision and Pattern Recognition, 2004. CVPR 2004., volume 2, pp.
II–II. IEEE, 2004.
[35]
S. K. Lodha and Y. Xiao.
Gsift: geometric scale invariant feature transform for terrain data.
In Vision Geometry XIV, volume 6066, pp. 60660L. International
Society for Optics and Photonics, 2006.
[36]
A. E. Abdel-Hakim and A. A. Farag.
Csift: A sift descriptor with color invariant characteristics.
In 2006 IEEE computer society conference on computer vision and
pattern recognition (CVPR'06), volume 2, pp. 1978–1983. Ieee, 2006.
[37]
J.-M. Morel and G. Yu.
Asift: A new framework for fully affine invariant image comparison.
SIAM journal on imaging sciences, 2(2):438–469, 2009.
[38]
P. F. Alcantarilla et al.
Gauge-surf descriptors.
Image and vision computing, 31(1):103–116, 2013.
[39]
T.-K. Kang et al.
Mdghm-surf: A robust local image descriptor based on modified
discrete gaussian–hermite moment.
Pattern Recognition, 48(3):670–684, 2015.
[40]
J. Fu et al.
C-surf: Colored speeded up robust features.
In International Conference on Trustworthy Computing and
Services, pp. 203–210. Springer, 2012.
[41]
E. Rosten and T. Drummond.
Machine learning for high-speed corner detection.
In European conference on computer vision, pp. 430–443.
Springer, 2006.
[42]
E. Mair et al.
Adaptive and generic corner detection based on the accelerated
segment test.
In European conference on Computer vision, pp. 183–196.
Springer, 2010.
[43]
M. Calonder et al.
Brief: Computing a local binary descriptor very fast.
IEEE transactions on pattern analysis and machine intelligence,
34(7):1281–1298, 2011.
[44]
E. Rublee et al.
Orb: An efficient alternative to sift or surf.
In 2011 International Conference on Computer Vision, pp.
2564–2571, Nov 2011.
[45]
S. Leutenegger et al.
Brisk: Binary robust invariant scalable keypoints.
In 2011 International conference on computer vision, pp.
2548–2555. Ieee, 2011.
[46]
R. Ortiz.
Freak: Fast retina keypoint.
In Proceedings of the 2012 IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), CVPR '12, pp. 510–517, Washington, DC, USA,
2012. IEEE Computer Society.
[47]
P. F. Alcantarilla et al.
Kaze features.
In A. Fitzgibbon et al., editors, Computer Vision – ECCV 2012,
pp. 214–227, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg.
[48]
P. F. Alcantarilla et al.
Fast explicit diffusion for accelerated features in nonlinear scale spaces.
In British Machine Vision Conf. (BMVC), 2013.
[49]
J. Weickert et al.
Cyclic schemes for pde-based image analysis.
International Journal of Computer Vision, 118(3):275–299, 2016.
[50]
S. Grewenig et al.
From box filtering to fast explicit diffusion.
In Joint Pattern Recognition Symposium, pp. 533–542. Springer, 2010.
[51]
G. Simon and M.-O. Berger.
Pose estimation for planar structures.
IEEE Computer Graphics and Applications, 22(6):46–53, Nov 2002.
[52]
Changhai Xu et al.
3d pose estimation for planes.
In 2009 IEEE 12th International Conference on Computer Vision
Workshops, ICCV Workshops, pp. 673–680, Sep. 2009.
[53]
M. Donoser et al.
Robust planar target tracking and pose estimation from a single
In 2011 10th IEEE International Symposium on Mixed and Augmented
Reality, pp. 9–15, Oct 2011.
[54]
D. Nistér and H. Stewénius.
Linear time maximally stable extremal regions.
In D. Forsyth et al., editors, Computer Vision – ECCV 2008,
pp. 183–196, Berlin, Heidelberg, 2008. Springer Berlin Heidelberg.
[55]
A. Sahbani et al.
An overview of 3d object grasp synthesis algorithms.
Robotics and Autonomous Systems, 60(3):326–336, 2012.
[56]
J. Bohg et al.
Data-driven grasp synthesis—a survey.
IEEE Transactions on Robotics, 30(2):289–309, 2013.
[57]
B. Kehoe et al.
Cloud-based robot grasping with the google object recognition engine.
In 2013 IEEE International Conference on Robotics and
Automation. IEEE, May 2013.
[58]
K. Huebner et al.
Minimum volume bounding box decomposition for shape approximation in
robot grasping.
In 2008 IEEE International Conference on Robotics and
Automation. IEEE, May 2008.
[59]
S. Caldera et al.
Review of deep learning methods in robotic grasp detection.
Multimodal Technologies and Interaction, 2(3):57, 2018.
[60]
J. Yu et al.
A vision-based robotic grasping system using deep learning for 3d
object recognition and pose estimation.
In 2013 IEEE International Conference on Robotics and
Biomimetics (ROBIO). IEEE, December 2013.
[61]
O. Kroemer et al.
Active learning using mean shift optimization for robot grasping.
In 2009 IEEE/RSJ International Conference on Intelligent
Robots and Systems. IEEE, October 2009.
[62]
J. Aleotti and S. Caselli.
Part-based robot grasp planning from human demonstration.
In 2011 IEEE International Conference on Robotics and
Automation. IEEE, May 2011.
[63]
A. Saxena et al.
Robotic grasping of novel objects using vision.
The International Journal of Robotics Research, 27(2):157–173,
February 2008.
[64]
L. Montesano and M. Lopes.
Active learning of visual descriptors for grasping using
non-parametric smoothed beta distributions.
Robotics and Autonomous Systems, 60(3):452–462, March 2012.
[65]
J. Nogueira et al.
Unscented bayesian optimization for safe robot grasping.
In 2016 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS). IEEE, October 2016.
[66]
O. Andersson and S. Reyna Marquez.
A comparison of object detection algorithms using unmanipulated
testing images: Comparing sift, kaze, akaze and orb, 2016.
[67]
E. Karami et al.
Image matching using sift, surf, brief and orb: performance
comparison for distorted images.
The 24th Annual Newfoundland Electrical and Computer Engineering
Conference, NECEC, 2015.
[68]
S. A. K. Tareen and Z. Saleem.
A comparative analysis of sift, surf, kaze, akaze, orb, and brisk.
In 2018 International conference on computing, mathematics and
engineering technologies (iCoMET), pp. 1–10. IEEE, 2018.
[69]
M. Muja and D. G. Lowe.
Fast approximate nearest neighbors with automatic algorithm configuration.
In International Conference on Computer Vision Theory and
Applications (VISSAPP'09), pp. 331–340. INSTICC Press, 2009.
[70]
M. A. Fischler and R. C. Bolles.
Random sample consensus: A paradigm for model fitting with
applications to image analysis and automated cartography.
Commun. ACM, 24(6):381–395, June 1981.
[71]
D. Gossow, C. Rockey, K. Okada, J. Kammerl, A. Pooley, R. Appeldoorn, and
R. Haschke. rviz: a 3D visualization tool for ROS.
[72]
Stanford Artificial Intelligence Laboratory et al.
Robot Operating System (ROS).
# A unified hoop conjecture for black holes and horizonless compact stars
Yan Peng1,2, Email<EMAIL_ADDRESS>
1 School of Mathematical Sciences, Qufu Normal University, Qufu, Shandong
273165, China
2 Center for Gravitation and Cosmology, College of Physical Science and
Technology, Yangzhou University, Yangzhou 225009, China
###### Abstract
We propose a unified version of the hoop conjecture valid for various black holes
and horizonless compact stars. The conjecture is expressed by the mass to
circumference ratio $4\pi M_{in}/C\leqslant 1$, where $C$ is the circumference
of the smallest ring that can engulf the object in all azimuthal directions
and $M_{in}$ is the mass contained within the engulfing sphere.
###### pacs:
11.25.Tq, 04.70.Bw, 74.20.-z
## I Introduction
The famous hoop conjecture, introduced almost five decades ago, asserts that the
existence of black hole horizons is characterized by the mass and
circumference relation $4\pi\mathcal{M}/C\geqslant 1$ hc1 ; hc2 . Here $C$ is
the circumference of the smallest ring that can engulf the black hole in all
azimuthal directions, and $\mathcal{M}$ is usually interpreted as the
asymptotically measured total ADM mass of the black hole hc3 -ahc9 .
For horizonless compact stars, the curved spacetimes should instead be
characterized by the relation $4\pi\mathcal{M}/C<1$ hc1 ; hc2 . However, if
$\mathcal{M}$ is again interpreted as the total ADM mass, the relation
$4\pi\mathcal{M}/C<1$ is violated in the background of horizonless charged
compact objects hc19 ; hc20 . In fact, Thorne did not provide an explicit
definition of the mass term $\mathcal{M}$ in the mass to circumference ratio
hc1 . It has been proved that the relation $4\pi\mathcal{M}/C<1$ does hold in
horizonless spacetimes when $\mathcal{M}$ is interpreted as the mass contained
within the engulfing sphere (not as the total ADM mass) hc21 ; hc22 .
It is natural to guess that a unified hoop conjecture might read
$4\pi\mathcal{M}/C\geqslant 1$ for black holes and $4\pi\mathcal{M}/C<1$ for
horizonless compact stars, where $\mathcal{M}$ is either the total ADM mass or
the mass contained in the engulfing sphere hc23 . For horizonless stars, the
calculations in hc21 imply that the term $\mathcal{M}$ must be the mass
contained in the sphere and cannot be the total ADM mass. In black hole
spacetimes, however, Hod found that the hoop conjecture holds if $\mathcal{M}$
is the ADM mass but not if it is the mass contained in the sphere hc23 . It
therefore seems that no single choice of $\mathcal{M}$ yields a hoop conjecture
valid for both black holes and compact stars.
In this paper, we propose another version of the unified hoop conjecture, which
holds for Schwarzschild black holes, neutral Kerr black holes,
Reissner-Nordström (RN) black holes, Kerr-Newman black holes and horizonless
charged compact stars. It may therefore express a general property of both
black holes and horizonless compact stars.
## II Test the unified hoop conjecture
In the background of horizonless charged compact stars, it has recently been
proved that the relation $4\pi\mathcal{M}/C<1$ is violated when $\mathcal{M}$
is interpreted as the total ADM mass, but holds if $\mathcal{M}=M_{in}$, where
$M_{in}$ is the mass contained within the engulfing sphere hc19 ; hc20 ; hc21 ;
hc22 . In addition, the numerical data in hc23 imply that
$4\pi M_{in}/C\leqslant 1$ may be a general property of black hole spacetimes.
Motivated by these facts, we propose the unified hoop conjecture
$\displaystyle 4\pi M_{in}/C\leqslant 1,$ (1)
where $C$ is the circumference of the smallest ring that can engulf the object
in all azimuthal directions and $M_{in}$ is the mass within the engulfing
sphere. The relation (1) is natural given the idea that the ring of the
engulfing sphere should be related to the mass within the sphere (not the mass
in the total spacetime).
In the following, we give examples supporting the bound (1), following the
studies in hc19 ; hc20 ; hc21 ; hc22 ; hc23 . A Kerr-Newman black hole
spacetime is described by the curved line element hc23
$\displaystyle ds^{2}$
$\displaystyle=-\frac{\Delta-a^{2}\sin^{2}\theta}{\rho^{2}}dt^{2}+\frac{\rho^{2}}{\Delta}dr^{2}-\frac{2a\sin^{2}\theta(2Mr-Q^{2})}{\rho^{2}}dtd\phi+\rho^{2}d\theta^{2}+\frac{(r^{2}+a^{2})^{2}-a^{2}\Delta\sin^{2}\theta}{\rho^{2}}\sin^{2}\theta d\phi^{2},$ (2)
where $M$ is the ADM mass, $Q$ is the electric charge, $J=Ma$ is the angular
momentum, $\Delta=r^{2}-2Mr+Q^{2}+a^{2}$ and
$\rho^{2}=r^{2}+a^{2}\cos^{2}\theta$. The black hole horizons are located at
$\displaystyle r_{\pm}=M\pm(M^{2}-Q^{2}+a^{2})^{1/2},$ (3)
which are determined by the roots of the metric function, $\Delta(r)=0$.
Setting $dt=dr=d\theta=0$ and $\theta=\frac{\pi}{2}$, we get the relation
$\displaystyle ds=\frac{r_{+}^{2}+a^{2}}{r_{+}}d\phi.$ (4)
Taking $\Delta\phi=2\pi$, we obtain the equatorial circumference
$\displaystyle C=2\pi\frac{r_{+}^{2}+a^{2}}{r_{+}}.$ (5)
For Schwarzschild black holes, $Q=0$ and $a=0$, and the mass to circumference
ratio is
$\displaystyle\frac{4\pi M_{in}}{C}=\frac{4\pi M}{C}=\frac{4\pi M}{2\pi
r_{+}}=\frac{4\pi M}{2\pi(2M)}=1.$ (6)
For RN black holes with $Q\neq 0$ and $a=0$, the ratio satisfies
$\displaystyle\frac{4\pi M_{in}}{C}=\frac{4\pi(M-\frac{Q^{2}}{2r_{+}})}{2\pi
r_{+}}=\frac{4Mr_{+}-2Q^{2}}{2r_{+}^{2}}=\frac{4M(M+\sqrt{M^{2}-Q^{2}})-2Q^{2}}{2(M+\sqrt{M^{2}-Q^{2}})^{2}}=1.$
(7)
Neutral Kerr black holes correspond to $Q=0$ and $a\neq 0$, which yields the
relation
$\displaystyle\frac{4\pi M_{in}}{C}=\frac{4\pi
M}{2\pi\frac{r_{+}^{2}+a^{2}}{r_{+}}}=\frac{2Mr_{+}}{r_{+}^{2}+a^{2}}=\frac{2M(M+\sqrt{M^{2}-a^{2}})}{(M+\sqrt{M^{2}-a^{2}})^{2}+a^{2}}=1.$
(8)
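Relations (6)-(8) can be checked numerically; the following is a small sanity-check script, with arbitrary sub-extremal parameter values:

```python
import numpy as np

def ratio(M, Q=0.0, a=0.0, M_in=None):
    """4*pi*M_in/C at the outer horizon; if M_in is not supplied,
    the ADM mass M is used."""
    r_plus = M + np.sqrt(M**2 - Q**2 - a**2)     # outer horizon radius
    C = 2 * np.pi * (r_plus**2 + a**2) / r_plus  # equatorial circumference
    return 4 * np.pi * (M if M_in is None else M_in) / C

M = 1.0
# Schwarzschild (Q = a = 0): the ratio is exactly 1.
assert abs(ratio(M) - 1.0) < 1e-12
# Reissner-Nordstrom: with M_in = M - Q^2/(2 r_+) the ratio is exactly 1.
Q = 0.6
r_plus = M + np.sqrt(M**2 - Q**2)
assert abs(ratio(M, Q=Q, M_in=M - Q**2 / (2 * r_plus)) - 1.0) < 1e-12
# Kerr (Q = 0, M_in = M): the ratio is again exactly 1, since r_+^2 + a^2 = 2 M r_+.
assert abs(ratio(M, a=0.7) - 1.0) < 1e-12
print("all ratios equal 1")
```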
In the background of Kerr-Newman black holes, both the charge and rotation
parameters are nonzero: $Q\neq 0$ and $a\neq 0$. The equatorial circumference
is $C=2\pi\frac{r_{+}^{2}+a^{2}}{r_{+}}$ and the mass contained within the
black hole horizon is
$M_{in}=M-\frac{Q^{2}}{4r_{+}}[1+\frac{r_{+}^{2}+a^{2}}{ar_{+}}\arctan(\frac{a}{r_{+}})]$
mass . In this Kerr-Newman case, the numerical data in hc23 imply an upper
bound on the ratio
$\displaystyle 4\pi M_{in}/C\leqslant 1.$ (9)
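The Kerr-Newman bound (9) can likewise be sampled numerically using the interior-mass formula quoted above; the parameter grid below is an arbitrary set of sub-extremal values:

```python
import numpy as np

def kn_ratio(M, Q, a):
    """4*pi*M_in/C for a Kerr-Newman black hole, with M_in the mass
    contained within the outer horizon (formula quoted in the text)."""
    r_p = M + np.sqrt(M**2 - Q**2 - a**2)        # outer horizon radius
    M_in = M - Q**2 / (4 * r_p) * (
        1 + (r_p**2 + a**2) / (a * r_p) * np.arctan(a / r_p))
    C = 2 * np.pi * (r_p**2 + a**2) / r_p        # equatorial circumference
    return 4 * np.pi * M_in / C

# Sample sub-extremal parameters (Q^2 + a^2 < M^2) and check the bound (9).
ratios = [kn_ratio(1.0, Q, a)
          for Q in (0.1, 0.3, 0.5, 0.7)
          for a in (0.1, 0.3, 0.5)
          if Q**2 + a**2 < 1.0]
assert all(r <= 1.0 for r in ratios)
print(max(ratios))
```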
Horizonless charged compact stars satisfy the inequality hc21 ; hc22
$\displaystyle 4\pi M_{in}/C<1.$ (10)
Based on relations (6)-(10), we propose the unified hoop conjecture (1), which
may be a general property of both black holes and horizonless compact stars.
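The limiting identities (6)-(8) and the bound (9) can be checked numerically. A minimal Python sketch, using the horizon formula (3), the circumference (5), and the $M_{in}$ expressions quoted above (the function name and the sample parameter values are illustrative):

```python
import math

def ratio(M, Q=0.0, a=0.0):
    """Compute 4*pi*M_in/C for a Kerr-Newman black hole."""
    rp = M + math.sqrt(M * M - Q * Q - a * a)   # outer horizon r_+, Eq. (3)
    C = 2 * math.pi * (rp * rp + a * a) / rp    # equatorial circumference, Eq. (5)
    if a == 0.0:
        M_in = M - Q * Q / (2 * rp)             # non-rotating (RN) internal mass
    else:
        # Kerr-Newman internal mass quoted in the text [mass]
        M_in = M - (Q * Q / (4 * rp)) * (1 + (rp * rp + a * a) / (a * rp)
                                         * math.atan(a / rp))
    return 4 * math.pi * M_in / C

# Schwarzschild, RN, and Kerr limits: the ratio is exactly 1, Eqs. (6)-(8)
for kwargs in ({}, {"Q": 1.0}, {"a": 1.5}):
    print(round(ratio(2.0, **kwargs), 12))  # 1.0 in each limiting case
# Kerr-Newman with both parameters nonzero: ratio stays below 1, Eq. (9)
print(ratio(2.0, Q=1.0, a=1.0) <= 1.0)  # True
```

The equalities hold algebraically because $r_{+}^{2}+a^{2}=2Mr_{+}$ when $Q=0$, and $2Mr_{+}-Q^{2}=r_{+}^{2}$ when $a=0$.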
## III Conclusions
We proposed a unified hoop conjecture, which is valid for various black holes
and horizonless compact stars. Our conjecture is that the mass and
circumference relation $4\pi M_{in}/C\leqslant 1$ holds, where C is the
circumference of the smallest ring that can engulf the object in all azimuthal
directions and $M_{in}$ is the gravitating mass within the engulfing sphere.
Our statement is in accordance with the idea that the ring of the engulfing
sphere should be related to the mass within the sphere (not the mass in the
total spacetime).
###### Acknowledgements.
This work was supported by the Shandong Provincial Natural Science Foundation
of China under Grant No. ZR2018QA008. This work was also supported by a grant
from Qufu Normal University of China under Grant No. xkjjc201906.
## References
* (1) K.S. Thorne, in Magic Without Magic: John Archibald Wheeler, ed. by J. Klauder (Freeman, San Francisco, 1972).
* (2) C.W. Misner, K.S. Thorne, J.A. Wheeler, Gravitation (Freeman, San Francisco, 1973).
* (3) I.H. Redmount, Phys. Rev. D 27, 699 (1983).
* (4) A.M. Abrahams, K.R. Heiderich, S.L. Shapiro, S.A. Teukolsky, Phys. Rev. D 46, 2452 (1992).
* (5) S. Hod, Phys. Lett. B 751, 241 (2015).
* (6) P. Bizon, E. Malec, and N. ó Murchadha, Trapped surfaces in spherical stars, Phys. Rev. Lett. 61, 1147 (1988).
* (7) P. Bizon, E. Malec, and N. ó Murchadha, Class. Quantum Grav. 6, 961 (1989).
* (8) D. Eardley, Gravitational collapse of vacuum gravitational field configurations, J. Math. Phys. 36,3004(1995).
* (9) J. Guven and N. ó Murchadha,Sufficient conditions for apparent horizons in spherically symmetric initial data, Phys. Rev. D 56,7658(1997).
* (10) J. Guven and N. ó Murchadha,Necessary conditions for apparent horizons and singularities in spherically symmetric initial data, Phys. Rev. D 56, 7666(1997).
* (11) E. Malec, Event horizons and apparent horizons in spherically symmetric geometries, Phys. Rev. D 49, 6475 (1994).
* (12) E. Malec and ó Murchadha,The Jang equation, apparent horizons, and the Penrose inequality, Class. Quantum Grav. 21,5777(2004).
* (13) T. Zannias, Phys. Rev. D 45, 2998 (1992).
* (14) T. Zannias, Phys. Rev. D 47, 1448 (1993).
* (15) E. Malec,Isoperimetric inequalities in the physics of black holes, Acta Phys. Pol. B 22, 829 (1991).
* (16) M. Khuri,The Hoop Conjecture in Spherically Symmetric Spacetimes, Phys. Rev. D 80, 124025 (2009).
* (17) H. Bray and M. Khuri, Asian J. Math. 15, 557 (2011).
* (18) R. Schoen and S.-T. Yau, Commun. Math. Phys. 90, 575(1983).
* (19) Takeshi Chiba, Takashi Nakamura, Ken-ichi Nakao, Misao Sasaki, Hoop conjecture for apparent horizon formation, Class. Quant. Grav. 11(1994)431-441.
* (20) Takeshi Chiba, Apparent horizon formation and hoop concept in nonaxisymmetric space, Phys. Rev. D 60(1999)044003.
* (21) Ken-ichi Nakao, Kouji Nakamura, Takashi Mishima, Hoop conjecture and cosmic censorship in the brane world, Phys. Lett. B 564(2003)143-148.
* (22) G.W. Gibbons,Birkhoff’s invariant and Thorne’s Hoop Conjecture, arXiv:0903.1580[gr-qc].
* (23) M. Cvetic, G.W. Gibbons, C.N. Pope,More about Birkhoff’s Invariant and Thorne’s Hoop Conjecture for Horizons, Class. Quant. Grav. 28(2011)195001.
* (24) John D. Barrow, G. W. Gibbons, Maximum Tension: with and without a cosmological constant, Mon. Not. Roy. Astron. Soc. 446(2014)3874-3877.
* (25) John D. Barrow, G.W. Gibbons, A maximum magnetic moment to angular momentum conjecture, Phys. Rev. D 95(2017)064040.
* (26) Edward Malec, Naqing Xie, Brown-York mass and the hoop conjecture in nonspherical massive systems, Phys. Rev. D 91(2015)no.8,081501.
* (27) Fabio Anzà, Goffredo Chirco, Fate of the Hoop Conjecture in Quantum Gravity, Phys. Rev. Lett. 119(2017)no.23,231301.
* (28) Shahar Hod, Bekenstein’s generalized second law of thermodynamics: The role of the hoop conjecture, Phys. Lett. B 751(2015)241-245.
* (29) Shahar Hod, The gravitational two-body system: The role of the Thorne hoop conjecture, Eur. Phys. J. Plus 134(2019)no.3,106.
* (30) J.P. de León, Gen. Relativ. Grav. 19, 289 (1987).
* (31) W.B. Bonnor, Phys. Lett. A 99, 424 (1983).
* (32) Shahar Hod, On the status of the hoop conjecture in charged curved spacetimes, Eur. Phys. J. C (2018)78:1013.
* (33) Yan Peng, Analytical studies on the hoop conjecture in charged curved spacetimes, Eur.Phys.J.C 79(2019)11,943.
* (34) Shahar Hod, Further evidence for the non-existence of a unified hoop conjecture, Eur.Phys.J.C 80(2020)10,982.
* (35) J.M. Aguirregabiria, A. Chamorro, K.S. Virbhadra, Gen. Relativ. Gravit. 28,1393(1996).
# Handling Non-ignorably Missing Features in Electronic Health Records Data
Using Importance-Weighted Autoencoders
David K. Lim
Department of Biostatistics, University of North Carolina at Chapel Hill
and
Naim U. Rashid
Department of Biostatistics, University of North Carolina at Chapel Hill
and
Junier B. Oliva
Department of Computer Science, University of North Carolina at Chapel Hill
and
Joseph G. Ibrahim
Department of Biostatistics, University of North Carolina at Chapel Hill
The authors gratefully acknowledge NIH for funding this research.
###### Abstract
Electronic Health Records (EHRs) are commonly used to investigate
relationships between patient health information and outcomes. Deep learning
methods are emerging as powerful tools to learn such relationships, given the
characteristic high dimension and large sample size of EHR datasets. The
Physionet 2012 Challenge involves an EHR dataset pertaining to 12,000 ICU
patients, where researchers investigated the relationships between clinical
measurements and in-hospital mortality. However, the prevalence and
complexity of missing data in the Physionet data present significant
challenges for the application of deep learning methods, such as Variational
Autoencoders (VAEs). Although a rich literature exists regarding the treatment
of missing data in traditional statistical models, it is unclear how this
extends to deep learning architectures. To address these issues, we propose a
novel method built on an extension of VAEs called the Importance-Weighted
Autoencoder (IWAE) to flexibly handle Missing Not At Random (MNAR) patterns in
the Physionet data.
Our proposed method models the missingness mechanism using an embedded neural
network, eliminating the need to specify the exact form of the missingness
mechanism a priori. We show that the use of our method leads to more realistic
imputed values relative to the state-of-the-art, as well as significant
differences in fitted downstream models for mortality.
Keywords: missing data, autoencoder, MNAR, EHR
## 1 Introduction
Electronic Health Records (EHRs) are a broad and popular class of data
pertaining to records of patient information, including diagnostic
measurements, laboratory results, and medical history (Zhang et al., 2020).
Analysis of such data can be crucial for uncovering patterns that may be
informative for patient care, such as associations between certain diagnostic
measurements and mortality (Courtright et al., 2019), and also represents an
inexpensive means to recruit patients for clinical research (Beesley &
Mukherjee, 2020). One such dataset comes from the Physionet Challenge (Silva et al., 2012),
which called on participants to develop a model that best predicts Intensive
Care Unit (ICU) mortality, based upon data pertaining to 12,000 patients
admitted to the Beth Israel Deaconess Medical Center. Such models would
allow for the early identification of ICU patients with an elevated risk of
mortality based upon common clinical measurements, and would therefore have
great utility for health practitioners.
The presence of high-dimensional and complex interactions among features in
EHR data has motivated the use of deep learning methods for learning such
relationships. For example, autoencoders (AE) are a popular class of deep
learning architectures that have shown great utility across a range of
applications in EHR data. Such applications include dimension reduction,
representational learning, and the generation of synthetic data mimicking real
EHR data, which may be unavailable due to patient confidentiality (Shickel et
al., 2018; Sadati et al., 2019; Gootjes-Dreesbach et al., 2019). The structure
of the AE consists of an encoder, which “encodes” the data into a lower-
dimensional space that captures the salient qualities of the data, and a
decoder, which reconstructs the original data from the lower-dimensional space
(Tschannen et al., 2018). This lower dimensional space, similar to PCA, may be
used to define structure in the underlying set of input data (Wang et al.,
2016). From this low dimensional space, the generator “reconstructs” the data,
which allows it to also generate synthetic data that reflects the original
data. The encoder and decoder are often represented by feed forward neural
networks, allowing for complex non-linear interactions to be captured in both
(LeCun et al., 2015). The Variational Autoencoder (VAE) is built upon the same
encoder-decoder structure, but imposes a probabilistic assumption on the data,
as well as the lower-dimensional (or “latent”) space (Doersch, 2016), and uses
variational inference to learn a tractable estimate of the posterior of the
latent variable conditional on the observed data (Kingma & Welling, 2013;
Rezende et al., 2014). In prior evaluations, VAEs have shown tremendous
performance in data generation and representation learning (van den Oord et
al., 2017), which explain their popularity in EHR data and many other
applications.
However, the Physionet data also contain a significant proportion of missing
entries across measurements. In general, the common and widespread presence of
missing data across patient records (Luo et al., 2018) presents a significant
barrier to the generalizability and applicability of deep learning methods
(Wells et al., 2013). EHR data also typically contain many features pertaining
to a large number of patients (Ross et al., 2014), and such features may show
complex interactions with each other as well as with clinical outcomes
(Vinzamuri & Reddy, 2013), compounding the impact of the missingness of these
features. The conventional statistical literature characterizes patterns of
missingness into one of three mechanisms: missing completely at random (MCAR),
missing at random (MAR), or missing not at random (MNAR) (Little & Rubin,
2002). Many methods submitted to the challenge utilized naïve approaches such as
zero or mean imputation (Citi & Barbieri, 2012; Johnson et al., 2014) to
replace missing values prior to model training, while others employed
imputation methods that can handle MAR missingness, but not MNAR (Fortuin et
al., 2019). A limited number of existing VAE methods claim to allow MAR
patterns of missingness among input features during training (Nazabal et al.,
2018; Mattei & Frellsen, 2019; Ivanov et al., 2019); however, a thorough
evaluation of performance under the MCAR and MAR settings is lacking in the
literature. In addition, missingness in EHR datasets is often not ignorable,
i.e. MNAR (Beaulieu-Jones & and, 2016; Venugopalan et al., 2019; Sharafoddini
et al., 2019; O’Shea, 2019), requiring principled and theoretically sound
methods of handling missing data prior to or during model training. To the
best of our knowledge, there are no published methods that can handle MNAR
patterns of missingness in the VAE setting, and this presents a significant
barrier to the application of such methods to EHR and other modern biomedical
datasets.
To address these issues, we introduce a novel architecture that treats missing
observations as latent variables using Importance-Weighted Autoencoders
(IWAEs), incorporating the modeling of the missingness mechanism directly into
the network structure and enabling our approach to handle MNAR missingness in
input features for the first time. In contrast to traditional likelihood-based
missing data methods, we do not need to pre-specify the set of features to
model the probability of missingness. We instead model this probability with a
feed forward neural network, facilitating the flexible handling of MNAR
patterns of missingness across a large set of available features. Through
simulations, we show that proper modelling of the missingness mechanism
increases the accuracy of missing data imputation. Lastly, we show that in the
Physionet 2012 Challenge EHR dataset, accounting for MNAR missingness results
in more realistic ranges of imputed diagnostic measurements relative to those
imputed using a MAR model, in addition to significant differences in
downstream fitted models to predict in-hospital mortality. In addition, we
believe that our work can be extended more generally to be a flexible
framework to handle multiple patterns of missingness in similar EHR and other
complex biomedical datasets.
## 2 Data
The Physionet 2012 Challenge dataset (Silva et al., 2012) is a publicly
available EHR dataset that consists of de-identified information on 12,000
adult Intensive Care Unit (ICU) patients admitted to the Beth Israel Deaconess
Medical Center between 2001 and 2008. Patients whose ICU stays were less than 48
hours long were excluded from the dataset. For each patient, 36 features were
measured, with one potential measurement hourly for 48 hours. Example features
include albumin levels (in g/dL), serum sodium (in mEq/L) and white blood cell
count (in cells/nL). For each patient, clinical outcomes such as the SAPS-I
score (Gall et al., 1984), SOFA score (Ferreira, 2001), length of stay,
survival, and in-hospital death were also recorded. The challenge called for
learning algorithms to accurately predict the outcome of interest (like in-
hospital death) based on such clinical data, facilitating clinical decision-
making regarding the need for early intervention in ICU patients (Silva et
al., 2012).
The challenge dataset was divided into training, validation, and test datasets
of equal sizes, and missingness was highly prevalent in all three datasets.
Table 1 shows the missingness percentage of all features in the dataset, and
the number of subjects with at least one non-missing entry recorded for each
respective feature. In this study, we pre-processed the data using the same
method as the top performing method of the Physionet 2012 Challenge, which
used a priori knowledge of medical data to remove nonsensical values and
condense the time-series data into summary statistics across the repeated
measurements (Johnson et al., 2012). For our analysis, we used the median or
last-observed measurements for each clinical measurement. In this way, an
entry is missing only if the diagnostic measure was not taken during any of
the 48 time points, as most measurements are not meant to be taken hourly.
After the pre-processing, the dataset consisted of 46 features, including 7
baseline measurements. Additional details of this pre-processing step can be
found in Section 1.2 of the Supplementary materials.
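A minimal sketch of this condensation step, assuming each patient's feature is stored as a list of 48 hourly slots with `None` marking hours when the measurement was not taken (the function name is hypothetical, not from the original pre-processing code):

```python
from statistics import median

def summarize(series):
    """Condense one patient's repeated measurements of one feature into
    (median, last observed); returns (None, None) if never measured."""
    vals = [v for v in series if v is not None]
    if not vals:                       # feature entirely missing for this patient
        return None, None
    return median(vals), vals[-1]

# 48 hourly slots, mostly unmeasured
hr = [None] * 48
hr[3], hr[20], hr[41] = 80.0, 90.0, 85.0
print(summarize(hr))  # (85.0, 85.0)
```

Under this scheme, a summarized entry is missing only when the feature was never measured during the 48-hour window, matching the description above.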
| Feature | Missingness (overall) | Patients with $\geq 1$ obs.
---|---|---|---
1 | ALP | 0.984 | 0.425
2 | ALT | 0.983 | 0.434
3 | AST | 0.983 | 0.434
4 | Albumin | 0.987 | 0.406
5 | BUN | 0.928 | 0.986
6 | Bilirubin | 0.983 | 0.435
7 | Cholesterol | 0.998 | 0.079
8 | Creatinine | 0.927 | 0.986
9 | DiasABP | 0.458 | 0.703
10 | FiO2 | 0.843 | 0.677
11 | GCS | 0.680 | 0.986
12 | Glucose | 0.932 | 0.976
13 | HCO3 | 0.929 | 0.984
14 | HCT | 0.905 | 0.985
15 | HR | 0.098 | 0.986
16 | K | 0.924 | 0.980
17 | Lactate | 0.959 | 0.548
18 | MAP | 0.461 | 0.701
19 | Mg | 0.929 | 0.977
20 | MechVent | 0.849 | 0.632
21 | NIDiasABP | 0.579 | 0.875
22 | NIMAP | 0.585 | 0.873
23 | NISysABP | 0.579 | 0.877
24 | Na | 0.929 | 0.983
25 | PaCO2 | 0.885 | 0.755
26 | PaO2 | 0.885 | 0.755
27 | Platelets | 0.926 | 0.984
28 | RespRate | 0.759 | 0.278
29 | SaO2 | 0.960 | 0.448
30 | SysABP | 0.458 | 0.703
31 | Temp | 0.629 | 0.986
32 | TroponinI | 0.998 | 0.047
33 | TroponinT | 0.989 | 0.220
34 | Urine | 0.307 | 0.975
35 | WBC | 0.933 | 0.983
36 | pH | 0.879 | 0.760
Table 1: Proportion of overall missingness, and proportion of patients with at
least 1 measurement for each feature in the Physionet 2012 Challenge EHR
dataset.
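The two columns of Table 1 can be computed from such hourly records roughly as follows (a sketch on toy data; the record layout and function name are assumptions, not the challenge's actual format):

```python
def missingness_summary(records):
    """records: one dict per patient mapping feature -> list of hourly values,
    with None marking hours when the feature was not measured. Returns, per
    feature, (overall missingness proportion, fraction of patients with at
    least one observation), mirroring the columns of Table 1."""
    out = {}
    for f in records[0]:
        total = observed = patients_with_obs = 0
        for pat in records:
            n = sum(v is not None for v in pat[f])
            total += len(pat[f])
            observed += n
            patients_with_obs += (n > 0)
        out[f] = (1 - observed / total, patients_with_obs / len(records))
    return out

recs = [{"HR": [60.0, None, None, None]}, {"HR": [None] * 4}]
print(missingness_summary(recs))  # {'HR': (0.875, 0.5)}
```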
As shown in previous work, VAEs are natural architectures to utilize in this
setting, as VAEs inherently learn a probabilistic model of the data, and
samples from this model can be used to impute missing values. Such imputed
values may be useful to perform downstream statistical analyses, such as
prediction of mortality (Sharafoddini et al., 2019). However, some studies
have suggested that such missingness in EHR data may be MNAR (Sharafoddini et
al., 2019; O’Shea, 2019), but existing VAE-based imputation methods can only
handle MCAR or MAR missingness, and are unable to handle the more difficult
MNAR mechanisms that may be present in this type of data. A model that
accounts for such non-ignorable, or MNAR, missingness may most accurately
impute missing values in the Physionet 2012 Challenge dataset, and allow for
accurate analysis of the patient data.
## 3 Methods
In this section, we first show the formulation of the VAE and its
generalization (IWAE) in Section 3.1. Then, in Section 3.2 we review the
different mechanisms of missingness in the context of the historical
statistical literature (Little & Rubin, 2002). We expand on the case of MNAR,
or non-ignorable missingness, and present the models that are typically
employed under this type of missingness. In Section 3.3, we introduce a novel
method using the IWAE architecture in the presence of MNAR missingness.
### 3.1 Variational Autoencoder
A variational autoencoder (VAE) is a type of deep learning architecture in
which the input data is autoencoded using a stochastic encoder-decoder pair to
optimize an objective function that encourages the encoded latent space to
follow an assumed fixed prior distribution (Ghosh et al., 2019). Let
$\mathbf{X}$ be the $n\times p$ data matrix with observation vectors
$\mathbf{x}_{i}$ for observations $i=1,\ldots,n$, with corresponding entries
$x_{ij}$ denoting values of the features $j=1,\ldots,p$ for each observation.
In a VAE, we assume observation vectors $\mathbf{x}_{i}$ are i.i.d. samples
generated from an unknown underlying process $p_{\psi}(\cdot|z)$ involving a
latent variable $\mathbf{Z}$ from some prior distribution $p(\mathbf{Z})$
(Kingma & Welling, 2019), where $\mathbf{Z}$ is an $n\times d$ matrix, such
that $\mathbf{Z}=\\{\mathbf{z}_{1},\cdots,\mathbf{z}_{n}\\}$ with
$\mathbf{z}_{i}$ latent vectors of length $d$ for each observation
$i=1,\ldots,n$.
Learning is done by maximizing an objective function via variational inference
(Blei et al., 2016). Ideally, we wish to maximize the marginal log-likelihood
of $\mathbf{X}$, defined as $\log p_{\psi}(\mathbf{X})=\log\int
p_{\psi}(\mathbf{X},\mathbf{Z})d\mathbf{Z}$, for some set of parameters
$\psi$. However, due to the integral involved, this quantity is usually
intractable. Thus, instead of directly maximizing the marginal log-likelihood,
we utilize variational inference, which aims to maximize the so-called
“Evidence Lower Bound” (ELBO). The standard ELBO gives a lower bound of the
marginal log-likelihood and has the form:
$\mathcal{L}(\theta,\psi)=\operatorname{\mathbb{E}}_{\mathbf{Z}\sim
q_{\theta}(\mathbf{Z}|\mathbf{X})}\log\left[\frac{p_{\psi}(\mathbf{X}|\mathbf{Z})p(\mathbf{Z})}{q_{\theta}(\mathbf{Z}|\mathbf{X})}\right],$
(1)
where $\mathcal{L}(\theta,\psi)$ is the ELBO such that $\log
p(\mathbf{X})\geq\mathcal{L}(\theta,\psi)$, $p(\mathbf{Z})$ is the prior
distribution of $\mathbf{Z}$, $p_{\psi}$ is a multivariate “generative model”
conditional on $\mathbf{Z}$, and $q_{\theta}$ is a multivariate “recognition
model” conditional on $\mathbf{X}$. Furthermore, we denote
$f_{\psi}(\mathbf{Z})$ and $g_{\theta}(\mathbf{X})$ as the decoder and encoder
neural networks of the VAE, and $(\theta,\psi)$ are the weights and biases of
these neural networks. The quantities $f_{\psi}(\mathbf{Z})$ and
$g_{\theta}(\mathbf{X})$, respectively, output the values of the
distributional parameters of $q_{\theta}(\mathbf{Z}|\mathbf{X})$, i.e. the
variational approximation of the true but intractable posterior
$p_{\psi}(\mathbf{Z}|\mathbf{X})$, and $p_{\psi}(\mathbf{X}|\mathbf{Z})$, i.e.
the generative distribution that describes the underlying distribution of the
data.
In variational inference, by constraining $q_{\theta}(\mathbf{Z}|\mathbf{X})$
to be from a class of simple distributions, or a “variational family”, we can
obtain the best candidate from within that class of distributions to
approximate the true intractable posterior. VAEs also incorporate amortized
variational inference (Gershman & Goodman, 2014), where the neural network
parameters are shared across observations in order to utilize stochastic
gradient descent in optimization (Kingma & Welling, 2019). In practice, both
$q_{\theta}(\mathbf{Z}|\mathbf{X})$ and $p(\mathbf{Z})$ are typically assumed
to have simple forms, such as independent multivariate Gaussians (Kingma &
Welling, 2013).
In training, samples are drawn from the approximate posterior learned by the
encoder, using the so-called “reparametrization trick” to allow the calculation
of gradients through the sampling step (Kingma & Welling, 2013). Then, using
these samples as input, the decoder outputs the parameters of
$p_{\psi}(\mathbf{X}|\mathbf{Z})$. This decoding process shows how the
original data $\mathbf{X}$ is constructed from the latent variable
$\mathbf{Z}$. In this way, the network learns a distribution for $\mathbf{Z}$,
which can be interpreted as a representation of $\mathbf{X}$ in a reduced
dimensional space. Also, synthetic data can be generated by sampling from the
learned generative model $p_{\psi}(\mathbf{X}|\mathbf{Z})$, which describes
the distribution from which the data originates.
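For a diagonal-Gaussian encoder, the reparametrization trick amounts to writing a sample as a deterministic function of the encoder outputs plus standard-normal noise. A minimal sketch (the function name is illustrative; a real VAE would do this on tensors inside an autograd framework):

```python
import math
import random

def reparam_sample(mu, log_var, rng):
    """z = mu + sigma * eps with eps ~ N(0, 1): the randomness is isolated
    in eps, so gradients can flow through mu and log_var during training."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

rng = random.Random(0)
# near-zero variance: the sample collapses onto the encoder mean
z = reparam_sample([3.0, -1.0], [-100.0, -100.0], rng)
print([round(v, 6) for v in z])  # [3.0, -1.0]
```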
#### 3.1.1 Importance-Weighted Autoencoder
The IWAE (Burda et al., 2015) is a generalization of the standard VAE. Whereas
a standard VAE draws one sample from $q_{\theta}(\mathbf{Z}|\mathbf{X})$, IWAE
draws multiple samples, which are used to create a tighter bound on the
marginal log-likelihood than the traditional ELBO (Cremer et al., 2017). The
IWAE bound, corresponding to the ELBO in (1), can be written as
$\mathcal{L}(\theta,\psi)=\operatorname{\mathbb{E}}_{\mathbf{z}_{1},\ldots,\mathbf{z}_{K}\sim
q_{\theta}(\mathbf{Z}|\mathbf{X})}\log\left[\frac{1}{K}\sum_{k=1}^{K}\frac{p_{\psi}(\mathbf{X}|\mathbf{z}_{k})p(\mathbf{z}_{k})}{q_{\theta}(\mathbf{z}_{k}|\mathbf{X})}\right],$
(2)
where $K$ is the number of samples from the posterior. Importantly, Burda et
al. (2015) showed that for any $K\geq 1$, $\log
p(\mathbf{X})\geq\mathcal{L}_{K+1}\geq\mathcal{L}_{K}$, such that
$\mathcal{L}_{K}\rightarrow\log p(\mathbf{X})$ as $K\rightarrow\infty$ if
$p_{\psi}(\mathbf{X},\mathbf{Z})/q_{\theta}(\mathbf{Z}|\mathbf{X})$ is
bounded. If $K=1$, (2) reduces to the form of the ELBO in Equation (1), where
the IWAE corresponds exactly to the standard VAE. For $K>1$, the lower bound
more closely approximates the true marginal log likelihood, but the
computational burden is increased due to the increased number of drawn
samples. In this way, the IWAE can be considered to be part of the VAE family,
and we refer to methods that use either VAEs or IWAEs broadly as “VAE
methods”. A visualization of the workflow for an IWAE (without missing data)
can be found in Section 2.1 of the supplementary materials.
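The tightening of the bound in $K$ can be illustrated on a toy model where the exact marginal is known: $p(z)=N(0,1)$, $p(x|z)=N(z,1)$, so $p(x)=N(0,2)$. Taking the proposal equal to the prior (a deliberately loose choice, so the $K=1$ bound has a visible gap), the importance weight reduces to $w_k=p(x|z_k)$. All names and values below are illustrative:

```python
import math
import random

def log_norm(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def iwae_bound(x, K, reps=20000, seed=1):
    """Monte Carlo estimate of the K-sample bound L_K for the toy model,
    averaging log(1/K * sum_k w_k) with log w_k computed stably."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        log_ws = [log_norm(x, rng.gauss(0.0, 1.0), 1.0) for _ in range(K)]
        m = max(log_ws)  # log-sum-exp for numerical stability
        total += m + math.log(sum(math.exp(w - m) for w in log_ws) / K)
    return total / reps

x = 1.5
true_logp = log_norm(x, 0.0, 2.0)       # exact marginal: p(x) = N(0, 2)
l1, l5 = iwae_bound(x, 1), iwae_bound(x, 5)
print(l1 < l5 < true_logp)  # True: the bounds tighten monotonically with K
```

Here $K=1$ recovers the standard ELBO, and the estimate approaches $\log p(x)$ as $K$ grows, at the cost of more posterior samples per gradient step.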
### 3.2 Missing Data
In this section, we introduce notation for missing data and review the
different mechanisms of missingness, as described in the statistical
literature. Let the data be factored such that
$\mathbf{X}=\\{\mathbf{X}^{o},\mathbf{X}^{m}\\}$, with $\mathbf{X}^{o}$
denoting the observed values and $\mathbf{X}^{m}$ denoting the missing values.
Also, for each observation vector $\mathbf{x}_{i}$, denote
$\mathbf{x}_{i}^{o}$ and $\mathbf{x}_{i}^{m}$ respectively to be the observed
and missing features of $\mathbf{x}_{i}$, and let $\mathbf{R}$ be a matrix of
the same dimensionality as $\mathbf{X}$, with entries $r_{ij}=I(x_{ij}$ is
observed$)$ for the $i^{th}$ observation and $j^{th}$ feature, where
$I(\cdot)$ denotes the indicator function. In this way, $\mathbf{R}$ is the
“mask” matrix pertaining to $\mathbf{X}$, such that
$\mathbf{x}_{i}^{o}=\\{x_{ij}:r_{ij}=1\\}$ and
$\mathbf{x}_{i}^{m}=\\{x_{ij}:r_{ij}=0\\}$ for all $i=1,\ldots,n$ and
$j=1,\ldots,p$.
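In code, the mask and the observed/missing split for one row of $\mathbf{X}$ might look like the following (a sketch; the names are illustrative):

```python
def mask_and_split(x):
    """Build r_ij = I(x_ij is observed) for one row, and split the row into
    its observed entries x^o and the indices of its missing entries x^m."""
    r = [int(v is not None) for v in x]
    x_obs = {j: v for j, v in enumerate(x) if r[j] == 1}
    miss_idx = [j for j in range(len(x)) if r[j] == 0]
    return r, x_obs, miss_idx

r, x_obs, miss = mask_and_split([4.2, None, 1.0, None])
print(r, miss)  # [1, 0, 1, 0] [1, 3]
```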
#### 3.2.1 Missingness Mechanisms
Missingness was classified into three major categories, or mechanisms, in the
seminal work by Little & Rubin (2002). These mechanisms are missing completely
at random (MCAR), missing at random (MAR), and missing not at random (MNAR),
and they satisfy the following relations:
* •
MCAR:
$p(\mathbf{r}_{i}|\mathbf{x}_{i},\mathbf{z}_{i},\boldsymbol{\phi})=p(\mathbf{r}_{i}|\boldsymbol{\phi})$
* •
MAR:
$p(\mathbf{r}_{i}|\mathbf{x}_{i},\mathbf{z}_{i},\boldsymbol{\phi})=p(\mathbf{r}_{i}|\mathbf{x}_{i}^{o},\boldsymbol{\phi})$
* •
MNAR:
$p(\mathbf{r}_{i}|\mathbf{x}_{i},\mathbf{z}_{i},\boldsymbol{\phi})=p(\mathbf{r}_{i}|\mathbf{x}_{i}^{o},\mathbf{x}_{i}^{m},\mathbf{z}_{i},\boldsymbol{\phi})$
Here, $\boldsymbol{\phi}$ denotes the collection of parameters for the model
of the missingness mask $\mathbf{r}_{i}=\\{r_{i1},\ldots,r_{ip}\\}$. MCAR
missingness is independent of both observed and missing values, and is
equivalent to randomly masking a fixed proportion of entries. MAR missingness
is dependent on the values of observed entries, but not on any unobserved
values, including the missing values themselves.
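The practical difference between the three mechanisms can be seen by masking one feature of simulated data under each rule. The masking probabilities below are illustrative, not from the paper; the point is that only MNAR masking biases the observed values of the masked feature:

```python
import random

def is_missing(x1, x2, mechanism, rng):
    """Illustrative masking rules for x2 (x1 is always observed)."""
    if mechanism == "MCAR":
        p = 0.3                        # ignores all values
    elif mechanism == "MAR":
        p = 0.8 if x1 > 0 else 0.1     # depends only on the observed x1
    else:                              # "MNAR": depends on x2 itself
        p = 0.8 if x2 > 0 else 0.1
    return rng.random() < p

rng = random.Random(0)
data = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(5000)]
for mech in ("MCAR", "MAR", "MNAR"):
    kept = [x2 for x1, x2 in data if not is_missing(x1, x2, mech, rng)]
    print(mech, round(sum(kept) / len(kept), 2))
```

Under MCAR and MAR the observed mean of $x_2$ stays near its true value of 0; under MNAR the large values of $x_2$ are preferentially masked, so the observed mean is biased downward, which is why the mechanism cannot be ignored.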
Importantly, for both MCAR and MAR, the missingness is considered “ignorable”
such that the missingness mechanism need not be explicitly modelled in these
cases (Rubin, 1976; Little & Rubin, 2002). In particular, one is often
interested in maximizing the quantity $p(\mathbf{X}^{o},\mathbf{R})$. Under
ignorable missingness, this quantity can be factored as
$p(\mathbf{X}^{o})p(\mathbf{R}|\mathbf{X}^{o})$, where $p(\mathbf{X}^{o})$ is
the marginal distribution of $\mathbf{X}^{o}$. Here,
$p(\mathbf{R}|\mathbf{X}^{o})$ need not be specified because inference on the
parameters of interest pertaining to $p(\mathbf{X}^{o})$ is independent of
$p(\mathbf{R}|\mathbf{X}^{o})$. Then, one aims to maximize the quantity
$\log p(\mathbf{X}^{o})=\log\iint
p(\mathbf{X}^{o},\mathbf{X}^{m},\mathbf{Z})d\mathbf{X}^{m}d\mathbf{Z}=\log\int
p(\mathbf{X}^{o},\mathbf{Z})d\mathbf{Z}.$ (3)
Because $\mathbf{X}^{m}$ can be easily integrated out of (3), this quantity
can be bounded below exactly as in Section 3.1, conditioning on just the
observed data $\mathbf{X}^{o}$, rather than the full data $\mathbf{X}$.
Existing methods typically take advantage of this simplification, and are
shown to perform well under ignorable missingness. Details for these methods
are given in Section 2.3 of the supplementary materials.
In contrast, MNAR missingness refers to the case where the missingness can be
dependent on any unobserved values, including the missing entries
$\mathbf{x}_{i}^{m}$. MNAR missingness can also be dependent on
$\mathbf{x}_{i}^{o}$ as well as latent values like $\mathbf{Z}$, and thus MNAR
represents the most general and difficult case of missingness in practice
(Scheffer, 2002). In this setting, the missingness is considered “non-
ignorable”, and typically requires a model of the missingness $\mathbf{r}_{i}$
for proper analysis (Stubbendick & Ibrahim, 2003; Ibrahim et al., 2005).
Variational inference has been used in the presence of MNAR missingness in the
past, and has been shown to perform well in the stochastic block model (Tabouy
et al., 2019), although it is unclear how to extend this to the VAE family
setting. Current methods of the VAE family are only able to handle MAR
missingness, and there is no method to properly deal with the more difficult
MNAR case. This is compounded by the fact that missingness in EHR datasets has
been posited to be non-ignorable (Beaulieu-Jones & and, 2016; Venugopalan et
al., 2019; Sharafoddini et al., 2019; O’Shea, 2019).
Under the MNAR (non-ignorable) missingness assumption, the marginal log-
likelihood is written as
$\log p(\mathbf{X}^{o},\mathbf{R})=\log\iint
p(\mathbf{X}^{o},\mathbf{X}^{m},\mathbf{Z},\mathbf{R})d\mathbf{X}^{m}d\mathbf{Z}.$
(4)
We see here that due to the dependence between $\mathbf{R}$ and
$\mathbf{X}^{m}$, it is not as straightforward to integrate out the variable
pertaining to the missing values $\mathbf{X}^{m}$ as in (3). A lower bound of
this quantity can be derived as before, and variational inference can be
performed. We factorize
$p(\mathbf{X}^{o},\mathbf{X}^{m},\mathbf{Z},\mathbf{R})$ using the selection
model factorization (Diggle & Kenward, 1994), which is written as
$p(\mathbf{X}^{o},\mathbf{X}^{m},\mathbf{Z},\mathbf{R})=p(\mathbf{X}^{o},\mathbf{X}^{m}|\mathbf{Z})p(\mathbf{Z})p(\mathbf{R}|\mathbf{X},\mathbf{Z})$.
Now, one needs only to specify a model for the missingness mask
$p(\mathbf{R}|\mathbf{X},\mathbf{Z},\boldsymbol{\phi})$. There are a number of
ways to specify such a model. For example, Diggle & Kenward (1994) propose a
binomial model for the missing data mechanism:
$p(\mathbf{R}|\mathbf{X},\boldsymbol{\phi})=\prod_{i=1}^{n}\prod_{j=1}^{p}\left[p(r_{ij}=1|\mathbf{x}_{i},\boldsymbol{\phi})\right]^{r_{ij}}\left[1-p(r_{ij}=1|\mathbf{x}_{i},\boldsymbol{\phi})\right]^{1-r_{ij}},$
where $p(r_{ij}=1|\mathbf{x}_{i},\boldsymbol{\phi})$ can be modeled
straightforwardly by a logistic regression model, such that
$\mathrm{logit}[p(r_{ij}=1|\mathbf{x}_{i},\boldsymbol{\phi})]=\phi_{0}+\mathbf{x}_{i}^{o}\boldsymbol{\phi}_{1}+\mathbf{x}_{i}^{m}\boldsymbol{\phi}_{2},$
where $\boldsymbol{\phi}_{1}=\\{\phi_{11},\ldots,\phi_{1,p_{obs}}\\}^{T}$ and
$\boldsymbol{\phi}_{2}=\\{\phi_{21},\ldots,\phi_{2,p_{miss}}\\}^{T}$ represent
the set of coefficients pertaining to the observed features and missing
features, respectively, with $p_{obs}$ and $p_{miss}$ denoting the total
number of completely-observed and partially-observed features in the data.
Note that this model assumes independence of $\mathbf{R}$ across features,
such that the missingness of each feature is independent of whether any other
feature has been observed, which may or may not be realistic in some settings
(Ibrahim et al., 2005). Also, we note that this missingness model does not
depend on the latent variable $\mathbf{Z}$, but such latent factors can also
be included as covariates.
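A minimal sketch of this logistic selection model (the parameter values are illustrative):

```python
import math

def p_observed(x_obs, x_mis, phi0, phi1, phi2):
    """Selection-model probability that an entry is observed:
    logit p(r=1 | x) = phi0 + x^o . phi1 + x^m . phi2.
    A nonzero phi2 makes the mechanism depend on the missing values (MNAR)."""
    t = (phi0
         + sum(a * b for a, b in zip(x_obs, phi1))
         + sum(a * b for a, b in zip(x_mis, phi2)))
    return 1.0 / (1.0 + math.exp(-t))          # sigmoid

p_mar  = p_observed([1.0], [2.0], phi0=0.0, phi1=[0.5], phi2=[0.0])
p_mnar = p_observed([1.0], [2.0], phi0=0.0, phi1=[0.5], phi2=[-1.0])
print(round(p_mar, 3), round(p_mnar, 3))  # 0.622 0.182
```

With $\boldsymbol{\phi}_{2}=0$ the probability ignores the missing value (the mechanism reduces to MAR); with $\boldsymbol{\phi}_{2}\neq 0$, larger missing values here make observation less likely. This is exactly the computation the mask decoder network described below performs when hidden layers are omitted.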
Given the high dimensionality of the missingness mask $\mathbf{R}$ and the
potential correlation between the mask and the features in EHR data, we take
advantage of the deep neural network structure to learn the parameters of the
missingness model in Section 3.3. By using an IWAE architecture, we are able
to model complex interactions between the features. Also, by appending an
additional feed-forward neural network to the decoder, we can jointly learn
the conditional distribution
$p_{\boldsymbol{\phi}}(r_{ij}=1|\mathbf{x}_{i},\mathbf{z}_{i},\boldsymbol{\phi})$
during training of the IWAE, with $\boldsymbol{\phi}$ now denoting the weights
and biases of this additional neural network, which we call the “mask decoder
network”. This provides a simple way to model the dependency of the
missingness on the data, while learning complex representations of the data
itself. Additionally, we can exactly model the logistic regression model for
the missingness mask by omitting hidden layers and using a Sigmoid activation
function in the mask decoder network. In this way, the weights associated with
the mask decoder network would exactly represent the coefficients of the
covariates of the logistic regression model, while the bias represents the
intercept. We go over this proposed scheme in more detail in Section 3.3,
where we formalize our deep learning architecture to handle MNAR missingness.
In traditional missing data literature with selection model factorization,
covariates for the logistic regression model need to be pre-specified based
upon prior information. Similarly, we can specify features to include as
predictors for our proposed neural network missingness model, such that these
features are fed in as input to the mask decoder neural network.
Alternatively, when such information is not available, we show that using all
features as covariates in the missingness model yields reasonably good
performance in most simulated cases, although performance increases when the
correct model is specified.
### 3.3 NIMIWAE: IWAE with Nonignorable Missingness
We now propose a method to perform inference using an IWAE in the presence of
missing data whose missingness is nonignorable. First, we specify a general
form of the lower bound, and we utilize the general IWAE framework to form a
tighter bound on the marginal log-likelihood than the VAE ELBO. Unlike a
traditional IWAE, we now have two latent variables $\mathbf{Z}$ and
$\mathbf{X}^{m}$. Like the traditional IWAE, we draw K samples of
$\mathbf{Z}$, but we additionally draw M samples of $\mathbf{X}^{m}$ for each
sample of $\mathbf{Z}$ in the same manner. The corresponding lower bound can
be derived as follows:
$\displaystyle\log p(\mathbf{X}^{o},\mathbf{R})$
$\displaystyle=\log\left[\iint
p_{\psi,\phi}(\mathbf{X}^{o},\mathbf{X}^{m},\mathbf{R},\mathbf{Z})\,d\mathbf{Z}\,d\mathbf{X}^{m}\right]$
$\displaystyle=\sum_{i=1}^{n}\log\operatorname{\mathbb{E}}_{(\mathbf{z}_{i},\mathbf{x}_{i}^{m})\sim
q_{\theta}(\mathbf{Z},\mathbf{X}^{m})}\left[\frac{p_{\psi,\phi}(\mathbf{x}_{i}^{o},\mathbf{x}_{i}^{m},\mathbf{r}_{i},\mathbf{z}_{i})}{q_{\theta}(\mathbf{z}_{i},\mathbf{x}_{i}^{m})}\right]$
$\displaystyle\geq\sum_{i=1}^{n}\operatorname{\mathbb{E}}_{(\mathbf{z}_{i1},\mathbf{x}_{i11}^{m}),\ldots,(\mathbf{z}_{iK},\mathbf{x}_{iKM}^{m})\sim
q_{\theta}(\mathbf{Z},\mathbf{X}^{m})}\left[\log{\frac{1}{KM}\sum_{k=1}^{K}\sum_{l=1}^{M}\frac{p_{\psi,\phi}(\mathbf{x}_{i}^{o},\mathbf{x}_{ikl}^{m},\mathbf{r}_{i},\mathbf{z}_{ik})}{q_{\theta}(\mathbf{z}_{ik},\mathbf{x}_{ikl}^{m})}}\right]$
(5)
As explained in Section 3.2.1, we use the selection model factorization of the
generative model, such that
$p_{\psi,\phi}(\mathbf{X}^{o},\mathbf{X}^{m},\mathbf{R},\mathbf{Z})=p_{\psi}(\mathbf{X}^{o},\mathbf{X}^{m}|\mathbf{Z})p(\mathbf{Z})p_{\phi}(\mathbf{R}|\mathbf{X}^{o},\mathbf{X}^{m},\mathbf{Z}).$
Here, $\psi$ denotes the weights and biases of the encoder and decoder neural
networks, and $\phi$ denotes the weights and biases of the mask decoder
network that learns the parameters of the missingness model. This is analogous
to the standard logistic regression model used in the traditional statistical
literature, as given in Section 3.2.1, and is exactly equivalent to the
logistic regression model when there are no hidden layers and a Sigmoid
activation function is applied to this neural network. With this neural
network, more complicated mechanisms of missingness can be explored by
incorporating hidden layers and associated activation functions. For the
purposes of this study, we leave this as an extension of our method, and focus
on the simpler logistic regression model of missingness in our analyses.
Additionally, in MNAR, we must specify a form for the variational
approximation $q_{\theta}(\mathbf{Z},\mathbf{X}^{m})$ for the joint posterior,
where we treat both $\mathbf{Z}$ and the missing data $\mathbf{X}^{m}$ as
latent variables. This is in contrast to the MAR case, where one only
specifies the variational posterior of the latent variable
$q_{\theta}(\mathbf{Z})$, and this posterior can be conditioned on just the
observed values, such that $q_{\theta}(\mathbf{Z}|\mathbf{X}^{o})$. In the
MNAR case, we can factor the joint posterior of $(\mathbf{Z},\mathbf{X}^{m})$
as
$q_{\theta}(\mathbf{Z},\mathbf{X}^{m})=q_{\theta_{1}}(\mathbf{Z}|\mathbf{X}^{o},\mathbf{R})q_{\theta_{2}}(\mathbf{X}^{m}|\mathbf{Z},\mathbf{X}^{o},\mathbf{R}).$
Applying the factorizations of the generative model and the joint posterior to
(5), we obtain the final form of our lower bound, which we call the
NonIgnorably Missing Importance-Weighted AutoEncoder bound, or “NIMIWAE
bound”:
$\displaystyle\sum_{i=1}^{n}$
$\displaystyle\operatorname{\mathbb{E}}_{\{\mathbf{Z},\mathbf{X}^{m}\}\sim
q_{\theta}(\mathbf{Z},\mathbf{X}^{m})}$
$\displaystyle\left[\log{\frac{1}{KM}\sum_{k=1}^{K}\sum_{l=1}^{M}\frac{p_{\psi}(\mathbf{x}_{i}^{o},\mathbf{x}_{ikl}^{m}|\mathbf{z}_{ik})p(\mathbf{z}_{ik})p_{\phi}(\mathbf{r}_{i}|\mathbf{x}_{i}^{o},\mathbf{x}_{ikl}^{m},\mathbf{z}_{ik})}{q_{\theta_{1}}(\mathbf{z}_{ik}|\mathbf{x}_{i}^{o},\mathbf{r}_{i})q_{\theta_{2}}(\mathbf{x}_{ikl}^{m}|\mathbf{z}_{ik},\mathbf{x}_{i}^{o},\mathbf{r}_{i})}}\right],$
where samples of $\{\mathbf{Z},\mathbf{X}^{m}\}$ are drawn via ancestral
sampling (Bishop, 2006), and the entire lower bound is optimized using the
Adam optimizer (Kingma & Ba, 2014). The NIMIWAE bound properly accounts for
MNAR through the mask decoder network specified for the missingness mechanism
$\mathbf{r}$.
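The double average inside the logarithm is typically evaluated in log space for numerical stability. Below is a sketch of the per-observation bound estimate, assuming the $K\times M$ log-weights (log numerator minus log denominator in the bound above) have already been computed; the function name is ours, not the authors':

```python
import numpy as np

def nimiwae_bound(log_w):
    """Monte Carlo estimate of the NIMIWAE bound for one observation.

    log_w: array of shape (K, M) holding
        log p(x^o, x^m_kl, r, z_k) - log q(z_k, x^m_kl)
    Returns log( (1/(K*M)) * sum_{k,l} exp(log_w[k, l]) ),
    computed with the usual max-shift (log-sum-exp) trick.
    """
    K, M = log_w.shape
    m = log_w.max()
    return m + np.log(np.exp(log_w - m).sum()) - np.log(K * M)

# If every importance weight equals 0.5, the bound is exactly log(0.5).
log_w = np.log(np.full((3, 2), 0.5))
assert np.isclose(nimiwae_bound(log_w), np.log(0.5))
```

Summing the per-observation estimates over $i$ gives the full objective that is maximized with Adam.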
An illustration of this proposed network is given in Figure 1, and the details
of the NIMIWAE algorithm are outlined in Section 2.2 of the supplementary
materials.
Figure 1: Architecture of proposed NIMIWAE method. Dark colored nodes
($X^{o},R,X^{m}=0$) represent deterministic values, lightly colored nodes
($Z,X^{m^{\prime}}$) represent learned distributional parameters, and outlined
(in red) nodes represent sampled values. Orange cells correspond to latent
variables $\mathbf{Z}$ and $\mathbf{X}^{m}$. $\mathbf{Z}$ is sampled $K$ times
and $\mathbf{X}^{m}$ is sampled $M$ times from the respective posteriors.
Below is the lower bound ($LB$), which is optimized via stochastic gradient
descent.
## 4 Simulations
### 4.1 Simulated Data
Because the true values of missing entries in the Physionet data are unknown,
we utilize statistical simulation to first evaluate the imputation performance
of our proposed NIMIWAE method under the assumption of MCAR, MAR, and MNAR
missingness. We also evaluate the performance of state-of-the-art missing data
methods in machine learning that claim to handle up to MAR patterns of
missingness: HIVAE (Nazabal et al., 2018), VAEAC (Ivanov et al., 2019), MIWAE
(Mattei & Frellsen, 2019), MissForest (Stekhoven & Buhlmann, 2011), as well as
a naïve mean imputation method. For all simulations and real datasets, we
divided the full data into training, validation, and test datasets with ratio
6:2:2. Each model was trained on the training set, and hyperparameters were
validated using the validation set. The test set is held out in the training
and hyperparameter tuning, and final imputation performance was evaluated on
just the test set, ensuring that there is no overfitting in the model. For
MissForest and mean imputation, we impute only the test dataset, for
consistency across all methods. For NIMIWAE, we utilize all features in the
missingness model unless otherwise specified.
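The 6:2:2 split described above can be sketched as a generic shuffled index split (this is a generic sketch, not the authors' code):

```python
import numpy as np

def split_622(n, seed=0):
    """Shuffle row indices and split them 60/20/20 into train/val/test."""
    idx = np.random.default_rng(seed).permutation(n)
    n_tr, n_va = int(0.6 * n), int(0.2 * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

train, val, test = split_622(10)
assert len(train) == 6 and len(val) == 2 and len(test) == 2
```

The test set is never seen during training or hyperparameter tuning, so the final evaluation is on fully held-out rows.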
The simulated data $\mathbf{X}$ is an $n\times p$ matrix with $n=100,000$
observations and $p=100$ features, and the latent dimensionality was $d=2$.
This data was obtained by using a linear transformation of the latent values:
$\mathbf{X}=\mathbf{Z}\mathbf{W}+\mathbf{B}$, where $\mathbf{W}$ and
$\mathbf{B}$ were both drawn from $N(0,1)$ and are matrices of dimensions
$d\times p$ and $n\times p$, respectively, and each row of $\mathbf{Z}$ was
drawn from $N_{d}(\mathbf{0},\mathbf{I}_{d})$. We then simulated the missingness mask
$\mathbf{R}$ on half of the simulated features by using a logistic regression
model, such that
$\text{logit}[p(r_{ij_{m}}=1|\mathbf{X},\boldsymbol{\phi})]=\phi_{0}+\boldsymbol{\phi}_{1}\mathbf{X}^{o}+\boldsymbol{\phi}_{2}\mathbf{X}^{m}$,
where $j_{m}=1,\ldots,p_{miss}$ indexes the features with missingness and
$j_{o}=1,\ldots,p_{obs}$ indexes the fully observed features,
$\boldsymbol{\phi}_{2}=\{\phi_{21},\ldots,\phi_{2,p_{miss}}\}$ are the
coefficients pertaining to the missing features, and
$\boldsymbol{\phi}_{1}=\{\phi_{11},\ldots,\phi_{1,p_{obs}}\}$ are the
coefficients pertaining to the observed features, where $p_{obs}$ and
$p_{miss}$ are the total number of features that are observed and missing,
respectively, with $p_{obs}=p_{miss}=p/2$. We draw nonzero values of
$\boldsymbol{\phi}_{1}$ and $\boldsymbol{\phi}_{2}$ from the log-normal
distribution with mean $\mu=5$ and $\log$ standard deviation $\sigma=0.2$.
Then, each missingness mechanism was simulated as follows: 1) MCAR
missingness: $\boldsymbol{\phi}_{1}=\boldsymbol{\phi}_{2}=\mathbf{0}$; 2) MAR
missingness: $\boldsymbol{\phi}_{2}=\mathbf{0}$ and exactly one element of
$\boldsymbol{\phi}_{1}$ is nonzero; and 3) MNAR missingness:
$\boldsymbol{\phi}_{1}=\mathbf{0}$ and exactly one element of
$\boldsymbol{\phi}_{2}$ is nonzero. For MAR and MNAR, we use just one feature as a
covariate for the missingness model for each missing feature. Specifically,
for MAR, we pair each missing feature $j_{m}=1,\ldots,p_{miss}$ with a unique
observed feature $j_{o}=1,\ldots,p_{obs}$, which contains no missingness, and
use the paired observed feature as the covariate. For MNAR, we use each
missing feature $j_{m}$ itself as the only covariate in its respective
missingness model. In this way, for each MAR or MNAR feature, the missingness
depends on just one covariate.
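The generative and missingness models described above can be sketched as follows (dimensions much smaller than the paper's $n=100{,}000$, $p=100$, and coefficient values chosen for illustration; whether $r=1$ encodes observed or missing is a labeling convention):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, d = 1000, 10, 2
p_miss = p // 2                    # first half of features can be missing

# Linear generative model: X = Z W + B
Z = rng.standard_normal((n, d))
W = rng.standard_normal((d, p))
B = rng.standard_normal((n, p))
X = Z @ W + B

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def simulate_mask(X, mechanism, phi0=0.0, phi=3.0):
    """R[i, j] = 1 means feature j is observed for row i."""
    R = np.ones_like(X)
    for j in range(p_miss):
        if mechanism == "MCAR":    # missingness independent of the data
            logit = np.full(n, phi0)
        elif mechanism == "MAR":   # depends on a paired, fully observed feature
            logit = phi0 + phi * X[:, p_miss + j]
        elif mechanism == "MNAR":  # depends on the feature's own value
            logit = phi0 + phi * X[:, j]
        R[:, j] = rng.binomial(1, sigmoid(logit))
    return R

R = simulate_mask(X, "MNAR")
assert R[:, p_miss:].all()              # second half fully observed
assert 0 < R[:, :p_miss].mean() < 1     # first half partially missing
```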
We measured imputation performance by calculating the average L1 distance
between true and imputed masked values. Results were averaged over 5 simulated
datasets per simulation condition, spanning various missingness mechanisms and
percent missingness. Letting $\hat{\mathbf{X}}^{m}$ denote the imputed masked
values of the true $\mathbf{X}^{m}$ values of the missing entries, the average
L1 distance is simply
$\frac{\lVert\hat{\mathbf{X}}^{m}-\mathbf{X}^{m}\rVert_{1}}{N_{miss}},$
where $N_{miss}$ is the total number of missing entries in the dataset.
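A direct implementation of this metric, restricted to the masked entries (variable names are ours):

```python
import numpy as np

def avg_l1(x_true, x_imputed, mask_missing):
    """Average L1 distance over missing entries only.

    mask_missing: boolean array, True where the entry was masked/missing.
    """
    n_miss = mask_missing.sum()
    return np.abs(x_imputed - x_true)[mask_missing].sum() / n_miss

x_true = np.array([[1.0, 2.0], [3.0, 4.0]])
x_imp  = np.array([[1.5, 2.0], [3.0, 2.0]])
miss   = np.array([[True, False], [False, True]])
# Errors on the two masked entries are 0.5 and 2.0, so the average is 1.25.
assert avg_l1(x_true, x_imp, miss) == 1.25
```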
Figure 2: Average L1 distance between true and imputed values for missing
entries, stratified by simulated mechanism of missingness. Again, NIMIWAE
outperforms all methods in imputing missing values that were simulated to be
MNAR.
Figure 2 shows the average L1 distance yielded by each method, stratified by
the simulated mechanism of missingness. We note that for each of these
simulations, we utilize all features in NIMIWAE’s mask decoder network,
although only one feature is involved under the true missingness model.
Despite this, NIMIWAE consistently yields substantially better performance
than other methods under MNAR missingness, while yielding an average L1
that is comparable to other MAR methods in the MCAR and MAR cases. This
demonstrates that the neural network is able to learn the correct underlying
missingness model under MNAR despite overspecification. This can be
exceptionally useful for real EHR data, where the true covariates of the
missingness model under MNAR may not be known a priori.
One may also be interested in the performance of these missing data methods in
lower-dimensional data. Thus, we simulated datasets of smaller dimensions
($p=8$), and similarly imputed each dataset. Results of this analysis can be
found in Section 2.4 of the supplementary materials. Details and values of the
hyperparameters that were searched for all datasets, including these simulated
datasets, can be found in Section 2.5 of the supplementary materials.
### 4.2 UCI Datasets
Next, we performed imputation of simulated missing entries on several real
datasets from the UCI Machine Learning repository (Dua & Graff, 2017) to
evaluate NIMIWAE’s performance in other real-life datasets with potentially
complex feature interactions like in the Physionet data. We simulated MCAR,
MAR, and MNAR missingness on these datasets using the same logistic regression
model as in the prior simulation, with a fixed 25% of the overall entries
missing. MissForest could not be run on two of the UCI datasets
(hepmass, power) due to memory issues. Table 2 shows imputation performance of
each method on each dataset, with masks simulated based on MCAR, MAR, and MNAR
missingness.
Dataset | | HIVAE | Mean | MissForest | MIWAE | VAEAC | NIMIWAE | IMIWAE
---|---|---|---|---|---|---|---|---
banknote | MCAR | 1.50 | 2.00 | 1.04 | 1.46 | 1.31 | 1.45 | 1.31
| MAR | 2.14 | 2.23 | 1.96 | 2.89 | 1.76 | 5.02 | 2.41
| MNAR | 3.25 | 3.90 | 3.37 | 3.21 | 3.45 | 1.75 | 3.33
concrete | MCAR | 44.88 | 50.30 | 34.79 | 40.59 | 32.15 | 40.54 | 35.47
| MAR | 63.20 | 58.64 | 61.56 | 60.39 | 58.36 | 63.64 | 57.60
| MNAR | 60.77 | 103.50 | 90.13 | 70.57 | 68.03 | 55.20 | 66.24
hepmass | MCAR | 0.75 | 0.82 | NA | 0.81 | 0.67 | 0.83 | 0.81
| MAR | 0.76 | 0.84 | NA | 0.86 | 0.70 | 1.38 | 0.85
| MNAR | 1.41 | 1.54 | NA | 1.37 | 1.36 | 0.76 | 1.37
power | MCAR | 0.53 | 0.72 | NA | 0.68 | 0.52 | 0.90 | 0.84
| MAR | 0.64 | 0.81 | NA | 0.82 | 0.60 | 0.84 | 0.82
| MNAR | 1.05 | 1.08 | NA | 1.07 | 1.09 | 1.00 | 1.02
red | MCAR | 1.07 | 1.41 | 0.89 | 1.04 | 0.96 | 1.29 | 1.09
| MAR | 1.31 | 1.69 | 1.21 | 1.32 | 1.31 | 1.52 | 1.46
| MNAR | 2.00 | 3.02 | 2.10 | 1.73 | 2.33 | 1.00 | 1.68
white | MCAR | 2.26 | 2.62 | 1.93 | 2.56 | 1.97 | 2.19 | 2.19
| MAR | 2.17 | 2.63 | 1.96 | 2.34 | 1.95 | 2.41 | 2.19
| MNAR | 4.10 | 4.92 | 3.97 | 4.49 | 4.09 | 3.10 | 4.67
Table 2: Average L1 distance between true and imputed missing values in
various datasets, under different mechanisms of simulated missingness.
Proportion of missing entries was fixed at 50%. For NIMIWAE, we additionally
ran the “Ignorable” model with the mask decoder network omitted (IMIWAE). We
see that NIMIWAE consistently performs relatively well in imputing MNAR
missingness, while performance of the “Ignorable” IMIWAE model is comparable
to other methods under MCAR and MAR.
We found that NIMIWAE consistently performs well when the missingness is
simulated to be MNAR. In practice, explicitly modeling the missingness
mechanism may be problematic in smaller datasets, as a model that is too large tends to
significantly hinder performance for MNAR methods when the true mechanism is
MAR or MCAR (Ibrahim et al., 2005). As a result, we see that for some
datasets, NIMIWAE yields poorer imputation performance (high average L1)
compared to other methods when the missingness is ignorable. We found that
when the mask decoder network was omitted, the imputation performance by
NIMIWAE generally improved under MCAR and MAR missingness, and was similar to
the performance of state-of-the-art methods. We denote this NIMIWAE framework
without the mask decoder network “Ignorably-Missing IWAE” (IMIWAE). This
highlights the importance of the assumption of ignorability of the
missingness, especially in datasets of smaller size. Typically in the
statistical literature, an a priori assumption is made on whether the
missingness in the data is ignorable or non-ignorable. Generally, NIMIWAE
allows excellent performance without such an assumption, but IMIWAE may be
more appropriate for small datasets where ignorable missingness is suspected.
## 5 Physionet 2012 Challenge Dataset
Finally, we analyzed the Physionet 2012 Challenge data using each of the
compared methods. Namely, we perform a qualitative analysis of each imputed
dataset, highlighting differences between results from assuming non-ignorable
(NIMIWAE) and ignorable (IMIWAE) missingness (Ibrahim et al., 2005; Ibrahim &
Molenberghs, 2009). This is because, in contrast to our simulations, the true
values of the missing entries are not available to directly assess imputation
performance, and it may be infeasible to simulate additional missing entries
by masking observed values in real data due to a high rate of inherent
missingness in the data. Additionally, the missingness mechanism itself is
generally not “testable” by the observed data in practice (Ibrahim et al.,
1999). Similar to the simulations, we evaluate the imputation results from
state-of-the-art methods as well.
Blood pressure has often been used as a measure of risk for patients, with
associations found between mortality and abnormal diastolic blood pressure
(DBP), systolic blood pressure (SBP), and mean arterial pressure (MAP) in
adults admitted to the ICU (Maheshwari et al., 2018; Burstein et al., 2020). In the Physionet
dataset, we found that the rates of missingness for invasive DBP, SBP, and MAP
measurements were higher for patients who survived the ICU stay than for those
who did not. One possible explanation for this is that these tests may not be
performed as routinely for a patient that is not in critical condition, thus
yielding a higher rate of missingness if the patient is in better
cardiovascular health. This may suggest MNAR missingness, where we may expect
the missing blood pressure measurements to be closer to “normal” range
relative to the observed values. The accepted “normal” or healthy values of
SBP and DBP are under 120 and 80, respectively (Robinson, 1939), while the
“normal” range of MAP is between 65 to 100 (Jakkula et al., 2018).
Figure 3: (Top 3): Boxplots of values imputed by IMIWAE (IM), NIMIWAE (NIM),
MIWAE, HIVAE, VAEAC, and MissForest (MF) for median DBP (left),
MAP (center), and SBP (right). Mean imputed values are given with the
horizontal red line. (Middle): Table of missingness rates for median DBP, MAP,
and SBP, stratified by whether each patient survived the ICU stay. (Bottom 3):
Scatterplots of NIMIWAE imputed values vs. IMIWAE imputed values, showing the
shift in values when using a non-ignorably missing model vs. an ignorably
missing model. The $y=x$ line is given in red.
We imputed missing values for blood pressure measurements using each of the
compared methods, and Figure 3 shows the distribution of the imputed values
from these methods, as well as the rate of missingness for each measurement
with respect to the mortality status of each patient. We found that the rates
of missingness for these measurements were on average about 2% higher for
patients who survived the ICU stay than those who did not survive. Also, we
found that the imputed DBP and SBP values were significantly lower for NIMIWAE and
MissForest than for all other methods, suggesting better health of the patients
who had missingness in these measurements. Furthermore, the MAP values imputed
by NIMIWAE were more consistent with the normal or "healthy" range than those
from other methods, with 99.6% of imputed measurements falling within the
"normal" range. This
is in line with the missingness pattern, which showed a lower rate of
mortality for patients with missingness in these blood pressure measurements.
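A summary like the 99.6% figure above can be computed by counting imputed values that fall inside the clinically "normal" interval (toy values shown below, not Physionet output):

```python
import numpy as np

def frac_in_range(values, lo=65.0, hi=100.0):
    """Fraction of imputed MAP values inside the 'normal' range [lo, hi]."""
    values = np.asarray(values, dtype=float)
    return np.mean((values >= lo) & (values <= hi))

imputed_map = [70.0, 85.0, 99.0, 110.0]   # toy imputed values
assert frac_in_range(imputed_map) == 0.75
```

The SBP and DBP checks are one-sided (under 120 and under 80, respectively), so the same helper applies with the lower bound dropped.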
Table 3: Table of coefficient estimates, standard errors (SE), Z statistics,
p-values, and 95% confidence intervals of select covariates from a logistic
regression model fit with datasets with missing entries imputed by each
method. Here, IMIWAE is the ignorable version of NIMIWAE.
Based on the imputed datasets from these methods, we also fit a logistic
regression model with mortality as the response, with the features of this
dataset as the covariates. We report details of the covariates with the top 5
largest effects on mortality when using the imputed dataset from the non-
ignorably missing NIMIWAE model in Table 3. We found some significant
differences in the results of the logistic regression based on the different
imputed datasets. For example, we found that for “MechVentLast8Hour”, the Z
statistic was significantly higher with NIMIWAE than with any other model,
suggesting that the NIMIWAE-imputed dataset found this feature to be more
significant in modelling the mortality than other imputed datasets. Another
observation we made was that whereas imputed values from the ignorable version
of our model (IMIWAE) found “FiO2_last” and “pH_last” to be insignificant to
mortality, NIMIWAE found these features to be highly significant, with
Z-statistic values of 4.83 and 3.41, respectively. Clinical studies have shown
that irregular levels of FiO2 and pH may be significant predictors of
mortality in hospital patients (Samanta et al., 2018; von Vopelius-Feldt et
al., 2020), and the dataset imputed by the non-ignorable model better captures
these effects.
## 6 Discussion
In this paper, we introduce NIMIWAE, one of the first methods in the VAE/IWAE
class to handle up to MNAR patterns of missingness, to address the
complex patterns of missingness observed in the Physionet EHR data. Using
statistical simulations, we show that NIMIWAE performs well in imputing
missing features under MNAR, and has reasonable performance under the MCAR or
MAR settings. Performance in imputing MCAR and MAR missingness can be further
improved in NIMIWAE by using the ignorable version of this model (IMIWAE),
where we omit the mask decoder network. We also found that the results of the
analysis on the Physionet data are highly dependent on the choice of
missingness model, which specifies the assumption of the missingness
mechanism. However, NIMIWAE is able to impute missing values well in
simulations regardless of the underlying missingness mechanism, flexibly
modelling the mechanism using a deeply-learned neural network. We have also
shown that this performance can be further improved if a priori knowledge is
used to narrow down the missingness model covariates, while the neural network
structure allows this model to capture complex, nonlinear dependencies between
missingness across features. The NIMIWAE-imputed dataset resulted in more
realistic imputed values with respect to what we may expect in the Physionet
Challenge patients, since NIMIWAE takes into account possible non-ignorable
missingness in the data. Additionally, the IWAE architecture learns a lower-
dimensional representation of the data, which can be used for further tasks,
such as patient subgroup identification or visualization of data.
Learning algorithms that can be applied to EHR data like the Physionet
Challenge dataset can be valuable tools that clinicians can use to aid
decisions in hospital settings and understand patterns within these health
records. For example, properly handling missingness in EHRs when imputing the
missing entries can improve the performance of prediction algorithms that can
assess risk of death or other outcomes of interest, like disease. Informative
missingness is a common problem in analyzing EHR data, and accounting for such
missingness can be helpful in obtaining accurate, unbiased estimates of the
true missing values. We note that although we have used our NIMIWAE method
primarily to analyze the Physionet 2012 Challenge dataset, it can more
generally be applied to settings where one wishes to train a VAE when
missingness is present among input features.
SUPPLEMENTARY MATERIAL
Supplementary Materials:
Descriptions of real datasets used, directions on how to obtain datasets, and
link to code for reproducibility. (pdf)
R-package for NIMIWAE routine:
R-package NIMIWAE containing code to perform the methods described
in the article. The package can be downloaded from
https://github.com/DavidKLim/NIMIWAE (website)
Paper repository for reproducibility:
GitHub repository to replicate all analyses, figures, and tables from this
paper: https://github.com/DavidKLim/NIMIWAE_Paper (website)
## Acknowledgments
This research was supported by NIH grant
5T32CA106209-13.
## References
* Beaulieu-Jones & Moore (2016) Beaulieu-Jones, B. K. & Moore, J. H. (2016), Missing Data Imputation in the Electronic Health Record Using Deeply Learned Autoencoders, in ‘Biocomputing 2017’, WORLD SCIENTIFIC.
* Beesley & Mukherjee (2020) Beesley, L. J. & Mukherjee, B. (2020), ‘Statistical inference for association studies using electronic health records: handling both selection bias and outcome misclassification’, Biometrics .
https://onlinelibrary.wiley.com/doi/abs/10.1111/biom.13400
* Bishop (2006) Bishop, C. M. (2006), Pattern Recognition and Machine Learning, Springer-Verlag New York Inc.
* Blei et al. (2016) Blei, D. M., Kucukelbir, A. & McAuliffe, J. D. (2016), ‘Variational Inference: A Review for Statisticians’, arXiv e-prints p. arXiv:1601.00670.
* Burda et al. (2015) Burda, Y., Grosse, R. & Salakhutdinov, R. (2015), ‘Importance Weighted Autoencoders’, arXiv e-prints p. arXiv:1509.00519.
* Burstein et al. (2020) Burstein, B., Tabi, M., Barsness, G. W., Bell, M. R., Kashani, K. & Jentzer, J. C. (2020), ‘Association between mean arterial pressure during the first 24 hours and hospital mortality in patients with cardiogenic shock’, Critical Care 24(1).
* Citi & Barbieri (2012) Citi, L. & Barbieri, R. (2012), Physionet 2012 challenge: Predicting mortality of icu patients using a cascaded svm-glm paradigm, in ‘2012 Computing in Cardiology’, pp. 257–260.
* Courtright et al. (2019) Courtright, K. R., Chivers, C., Becker, M., Regli, S. H., Pepper, L. C., Draugelis, M. E. & O’Connor, N. R. (2019), ‘Electronic health record mortality prediction model for targeted palliative care among hospitalized medical patients: a pilot quasi-experimental study’, Journal of General Internal Medicine 34(9), 1841–1847.
* Cremer et al. (2017) Cremer, C., Morris, Q. & Duvenaud, D. (2017), ‘Reinterpreting Importance-Weighted Autoencoders’, arXiv e-prints p. arXiv:1704.02916.
* Diggle & Kenward (1994) Diggle, P. & Kenward, M. G. (1994), ‘Informative drop-out in longitudinal data analysis’, Applied Statistics 43(1), 49.
* Doersch (2016) Doersch, C. (2016), ‘Tutorial on Variational Autoencoders’, arXiv e-prints p. arXiv:1606.05908.
* Dua & Graff (2017) Dua, D. & Graff, C. (2017), ‘UCI machine learning repository’.
http://archive.ics.uci.edu/ml
* Ferreira (2001) Ferreira, F. L. (2001), ‘Serial evaluation of the SOFA score to predict outcome in critically ill patients’, JAMA 286(14), 1754.
* Fortuin et al. (2019) Fortuin, V., Baranchuk, D., Rätsch, G. & Mandt, S. (2019), ‘Gp-vae: Deep probabilistic time series imputation’.
* Gall et al. (1984) Gall, J.-R. L., Loirat, P., Alperovitch, A., Glaser, P., Granthil, C., Mathieu, D., Mercier, P., Thomas, R. & Villers, D. (1984), ‘A simplified acute physiology score for ICU patients’, Critical Care Medicine 12(11), 975–977.
* Gershman & Goodman (2014) Gershman, S. J. & Goodman, N. D. (2014), Amortized inference in probabilistic reasoning, in ‘CogSci’.
* Ghosh et al. (2019) Ghosh, P., Sajjadi, M. S. M., Vergari, A., Black, M. & Schölkopf, B. (2019), ‘From Variational to Deterministic Autoencoders’, arXiv e-prints p. arXiv:1903.12436.
* Gootjes-Dreesbach et al. (2019) Gootjes-Dreesbach, L., Sood, M., Sahay, A., Hofmann-Apitius, M. & Fröhlich, H. (2019), ‘Variational autoencoder modular bayesian networks (VAMBN) for simulation of heterogeneous clinical study data’.
* Ibrahim et al. (2005) Ibrahim, J. G., Chen, M.-H., Lipsitz, S. R. & Herring, A. H. (2005), ‘Missing-data methods for generalized linear models’, Journal of the American Statistical Association 100(469), 332–346.
* Ibrahim et al. (1999) Ibrahim, J. G., Lipsitz, S. R. & Chen, M.-H. (1999), ‘Missing covariates in generalized linear models when the missing data mechanism is non-ignorable’, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 61(1), 173–190.
* Ibrahim & Molenberghs (2009) Ibrahim, J. G. & Molenberghs, G. (2009), ‘Missing data methods in longitudinal studies: a review’, TEST 18(1), 1–43.
* Ivanov et al. (2019) Ivanov, O., Figurnov, M. & Vetrov, D. (2019), Variational autoencoder with arbitrary conditioning, in ‘International Conference on Learning Representations’.
https://openreview.net/forum?id=SyxtJh0qYm
* Jakkula et al. (2018) Jakkula, P., Pettilä, V., Skrifvars, M. B., Hästbacka, J., Loisa, P., Tiainen, M., Wilkman, E., Toppila, J., Koskue, T., Bendel, S., Birkelund, T., Laru-Sompa, R., Valkonen, M. & Reinikainen, M. (2018), ‘Targeting low-normal or high-normal mean arterial pressure after cardiac arrest and resuscitation: a randomised pilot trial’, Intensive Care Medicine 44(12), 2091–2101.
* Johnson et al. (2014) Johnson, A. E., Kramer, A. A. & Clifford, G. D. (2014), Data preprocessing and mortality prediction: The physionet/cinc 2012 challenge revisited, in ‘Computing in Cardiology 2014’, pp. 157–160.
* Johnson et al. (2012) Johnson, A. E. W., Dunkley, N., Mayaud, L., Tsanas, A., Kramer, A. A. & Clifford, G. D. (2012), Patient specific predictions in the intensive care unit using a bayesian ensemble, in ‘2012 Computing in Cardiology’, pp. 249–252.
* Kingma & Ba (2014) Kingma, D. P. & Ba, J. (2014), ‘Adam: A Method for Stochastic Optimization’, arXiv e-prints p. arXiv:1412.6980.
* Kingma & Welling (2013) Kingma, D. P. & Welling, M. (2013), ‘Auto-Encoding Variational Bayes’, arXiv e-prints p. arXiv:1312.6114.
* Kingma & Welling (2019) Kingma, D. P. & Welling, M. (2019), ‘An Introduction to Variational Autoencoders’, arXiv e-prints p. arXiv:1906.02691.
* LeCun et al. (2015) LeCun, Y., Bengio, Y. & Hinton, G. (2015), ‘Deep learning’, Nature 521(7553), 436–444.
* Little & Rubin (2002) Little, R. J. A. & Rubin, D. B. (2002), Statistical Analysis with Missing Data, John Wiley & Sons, Inc.
* Luo et al. (2018) Luo, Y., Cai, X., Zhang, Y., Xu, J. & Yuan, X. (2018), Multivariate time series imputation with generative adversarial networks, in S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi & R. Garnett, eds, ‘Advances in Neural Information Processing Systems 31’, Curran Associates, Inc., pp. 1596–1607.
http://papers.nips.cc/paper/7432-multivariate-time-series-imputation-with-generative-adversarial-networks.pdf
* Maheshwari et al. (2018) Maheshwari, K., Nathanson, B. H., Munson, S. H., Khangulov, V., Stevens, M., Badani, H., Khanna, A. K. & Sessler, D. I. (2018), ‘The relationship between ICU hypotension and in-hospital mortality and morbidity in septic patients’, Intensive Care Medicine 44(6), 857–867.
* Mattei & Frellsen (2019) Mattei, P.-A. & Frellsen, J. (2019), MIWAE: Deep generative modelling and imputation of incomplete data sets, in K. Chaudhuri & R. Salakhutdinov, eds, ‘Proceedings of the 36th International Conference on Machine Learning’, Vol. 97 of Proceedings of Machine Learning Research, PMLR, Long Beach, California, USA, pp. 4413–4423.
http://proceedings.mlr.press/v97/mattei19a.html
* Nazabal et al. (2018) Nazabal, A., Olmos, P. M., Ghahramani, Z. & Valera, I. (2018), ‘Handling Incomplete Heterogeneous Data using VAEs’, arXiv e-prints p. arXiv:1807.03653.
# Variable Selection in Regression-based Estimation of Dynamic Treatment
Regimes
Zeyu Bian1, Erica E. M. Moodie1, Susan M. Shortreed2,3 and Sahir Bhatnagar1
1Department of Epidemiology, Biostatistics and Occupational Health,
McGill University
2Kaiser Permanente Washington Health Research Institute
3Department of Biostatistics, University of Washington
Dynamic treatment regimes (DTRs) consist of a sequence of decision rules, one
per stage of intervention, that find effective treatments for individual
patients according to patient information history. DTRs can be estimated from
models which include the interaction between treatment and a small number of
covariates which are often chosen a priori. However, with increasingly large
and complex data being collected, it is difficult to know which prognostic
factors might be relevant in the treatment rule. Therefore, a more data-driven
approach of selecting these covariates might improve the estimated decision
rules and simplify models to make them easier to interpret. We propose a
variable selection method for DTR estimation using penalized dynamic weighted
least squares. Our method has the strong heredity property, that is, an
interaction term can be included in the model only if the corresponding main
terms have also been selected. Through simulations, we show our method has
both the double robustness property and the oracle property, and the newly
proposed methods compare favorably with other variable selection approaches.
Key Words: Adaptive treatment strategies; Double robustness; LASSO; Precision
medicine; Q-learning.
## 1 Introduction
Dynamic treatment regimes (DTRs) (Chakraborty and Moodie, 2013), also called
adaptive treatment strategies, involve a sequence of decision rules, one per
stage of intervention, that find effective treatments for individual patients
according to patient information history. Statistical methods can be used to
identify optimal DTRs, constructing a sequence of treatment rules tailored
over time to an individual’s information that can maximize the expected patient
outcome.
DTRs can be estimated from models which include interactions between treatment
and covariates which are often chosen a priori. However, with many covariates
and a complex disease process for which competing treatment choices have
heterogeneous effects, it is difficult to know which prognostic factors might
be considered relevant in the treatment rule. A more data-driven approach of
selecting these covariates might improve the estimated decision rules and
simplify models to improve interpretability.
Most existing work in the DTR literature focuses on estimation; variable
selection with the objective of optimizing treatment decisions has been
considered only occasionally. Gunter, Zhu and Murphy (Gunter et al., 2011)
proposed S-Scores, a ranking method for variable selection in DTRs. Based on
this approach, Fan et al. (2016) developed the sequential advantage selection
(SAS) approach, which considers variables already in the model when deciding
whether to include a new variable, judged by the additional improvement in the
criterion provided by that variable. Lu et al. (2013) adopted the adaptive
LASSO (Zou, 2006) in the context of A-learning (Murphy, 2003), which does not
require correct specification of the outcome model. Shi et al. (2018) proposed
a penalized A-learning that uses the Dantzig selector (Candes and Tao, 2007)
to directly penalize the estimating equations of A-learning; it has the double
robustness property: the estimators are consistent
if either one of two nuisance models is correct. See Supplementary Table 1 for
a comparison of these approaches.
The topic of variable selection in a general (not DTR) context has seen
extensive growth, such as LASSO (Tibshirani, 1996) and SCAD (Fan and Li,
2001). Gunter, Zhu and Murphy (Gunter et al., 2011) noted that most variable
selection approaches focus on prediction performance, and thus may not perform
well in the context of DTRs because these techniques can underestimate the
importance of variables that have small predictive ability but that play a
significant role in decision making.
In this article, we follow the DTR estimation approach of dynamic weighted
ordinary least squares regression (dWOLS) introduced by Wallace and Moodie
(2015), an approach which requires only some minor pre-computation and the
implementation of standard weighted regression. While having similarities to
both Q-learning (Watkins, 1989) and G-estimation (Robins, 2004), it provides
simplicity and intuitiveness similar to the former and benefits from the
double robustness of the latter, although it is suitable only for linear
decision rules. By adding
two penalty terms in the dWOLS model, we perform the estimation and variable
selection for DTRs simultaneously.
The rest of this article is organized as follows. In Section 2, we introduce
the proposed penalized dWOLS (pdWOLS) approach, followed by algorithmic
details. Three simulation studies are given in Section 3. Finally, we apply
our method to the Sequenced Treatment Alternatives to Relieve Depression
(STAR*D) trial data in Section 4.
## 2 Methodology
### 2.1 Introductory Concepts and Notation
To estimate DTRs, we assume that the following assumptions hold throughout
this paper:
1\. Stable unit treatment value assumption (SUTVA) (Rubin, 1980): a patient’s
potential outcome is not affected by other patients’ treatment assignments.
2\. Ignorability: ignorability or no unmeasured confounding (Robins, 1997)
specifies that for any possible treatment regimes, the stage $k$ treatment is
independent of future potential covariates or outcome conditional on current
patient history.
3\. We also assume no interference, no measurement error, and all the
individuals have complete follow-up.
We adopt the setup in (Wallace and Moodie, 2015). For a $K$-stage DTR, the
following (standard) notation is used, with lower case being used for observed
variables and uppercase for their random counterparts:
* •
$y$: patient outcome (continuous) which is measured at one point in time.
The goal of DTRs is to make treatment decisions that can optimize (typically,
maximize) the outcome.
* •
$a_{k}$: $k$th binary treatment decision: for instance, $a=1$ for new
treatment, $a=0$ for standard care.
* •
$\textbf{\emph{x}}_{k}$: patient information available prior to $k$th
treatment decision.
* •
$\boldsymbol{h}_{k}$: covariate matrix containing patient history prior to the
$k$th treatment decision; the history can include previous treatments
$a_{1},\ldots,a_{k-1}$.
* •
$\overline{\boldsymbol{a}}_{k}=(a_{1},a_{2},\ldots,a_{k})$: vector of first
$k$ treatment decisions.
* •
$\underline{\boldsymbol{a}}_{k}=(a_{k+1},a_{k+2},\ldots,a_{K})$: vector of
treatment decisions from stage $k+1$ onward.
* •
Blip function (or contrast function): defined as the difference in expected
potential outcome between patients who received treatment $a_{k}$ at stage $k$
and patients who received a reference treatment denoted, say $a_{k}=0$, with
the same history (same covariates and treatments) and assuming they receive
optimal treatment after $k$th stage:
$\gamma_{k}\left(\boldsymbol{h}_{k},a_{k}\right)=\mathbb{E}\left[Y^{\overline{\boldsymbol{a}}_{k},\underline{\boldsymbol{a}}_{k+1}^{opt}}-Y^{\overline{\boldsymbol{a}}_{k-1},a_{k}=0,\underline{\boldsymbol{a}}_{k+1}^{opt}}|\boldsymbol{H}_{k}=\boldsymbol{h}_{k}\right].$
* •
Regret function (or advantage function) (Murphy, 2003) is the expected loss
resulting from giving treatment $a_{k}$ at stage $k$ instead of the optimal
treatment $a_{k}^{opt}$, assuming optimal treatment is received after $k$-th
stage:
$\mu_{k}\left(\boldsymbol{h}_{k},a_{k}\right)=\mathbb{E}\left[Y^{\overline{\boldsymbol{a}}_{k-1},\underline{\boldsymbol{a}}_{k}^{opt}}-Y^{\overline{\boldsymbol{a}}_{k},\underline{\boldsymbol{a}}_{k+1}^{opt}}|\boldsymbol{H}_{k}=\boldsymbol{h}_{k}\right].$
There is a direct correspondence between the blip and regret functions:
$\mu_{k}\left(\boldsymbol{h}_{k},a_{k}\right)=\gamma_{k}\left(\boldsymbol{h}_{k},a_{k}^{opt}\right)-\gamma_{k}\left(\boldsymbol{h}_{k},a_{k}\right)$.
This can be leveraged to simplify some expressions in later sections. Finally,
we decompose the expected mean outcome into two components:
$\displaystyle\mathbb{E}\left[Y^{a}|\boldsymbol{H}=\boldsymbol{h};\boldsymbol{\beta},\boldsymbol{\psi}\right]=\underbrace{f\left(\boldsymbol{h}_{0};\boldsymbol{\beta}\right)}_{\textrm{treatment-free model}}+\sum_{k=1}^{K}\underbrace{\gamma_{k}\left(\boldsymbol{h}_{k},a_{k};\boldsymbol{\psi}_{k}\right)}_{\textrm{blip function}}$ (1)
where $\boldsymbol{h}_{0}$ are baseline covariates, and the function $f$,
being free of any terms relating to the active treatment ($a_{k}=1$), is
irrelevant for making decisions about optimal treatment selection.
### 2.2 dWOLS
dWOLS (Wallace and Moodie, 2015) uses a sequential regression approach similar
to Q-learning to estimate the blip parameter $\boldsymbol{\psi}_{k}$ in
Equation (1), but achieves double robustness through weighting by a
function of the propensity score. The dWOLS weights must satisfy
$\pi(\boldsymbol{x})w(1,\boldsymbol{x})=(1-\pi(\boldsymbol{x}))w(0,\boldsymbol{x})$,
where $\pi(\boldsymbol{x})$ is the propensity score (Rosenbaum and Rubin,
1983) and $w(a,\boldsymbol{x})$ is the weight for a subject with treatment $a$
and covariates $\boldsymbol{x}$. Wallace and Moodie (2015)
suggested using ‘absolute value’ weights of the form
$w(a,\textbf{\emph{x}})=|a-\mathbb{E}[A|\boldsymbol{X}=\boldsymbol{x}]|$, as
these offered better efficiency than other alternatives considered, while
yielding consistent estimators of blip parameters if either the treatment or
treatment-free model is correctly specified.
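The ‘absolute value’ weights and the balancing condition they must satisfy can be sketched in a few lines. The logistic propensity model, the design matrix, and the function name below are illustrative assumptions, not part of the paper's specification; the sketch assumes the propensity coefficients have already been fitted.

```python
import numpy as np

def absolute_value_weights(a, x_design, coef):
    """dWOLS 'absolute value' weights w(a, x) = |a - E[A | X = x]|.

    a        : (n,) binary treatment indicator
    x_design : (n, p) propensity-model design matrix (with intercept column)
    coef     : (p,) logistic-regression coefficients (assumed already fitted)
    """
    propensity = 1.0 / (1.0 + np.exp(-(x_design @ coef)))  # P(A=1 | X=x)
    return np.abs(a - propensity)
```

With these weights, $\pi(\boldsymbol{x})w(1,\boldsymbol{x})=\pi(1-\pi)=(1-\pi(\boldsymbol{x}))w(0,\boldsymbol{x})$, so the balancing condition holds by construction.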
### 2.3 Penalized dWOLS
We first introduce our approach in a one-stage setting. Consider the usual
setup for linear regression, letting
$\displaystyle\textbf{\emph{Y}}=\beta_{0}\boldsymbol{I}+\psi_{0}\boldsymbol{A}+\sum_{j=1}^{p}\textbf{\emph{X}}_{j}\beta_{j}+\sum_{j=1}^{p}\psi_{j}(\boldsymbol{A}\circ\textbf{\emph{X}}_{j})+\boldsymbol{\varepsilon},$
(2)
where $\textbf{\emph{Y}}\in\mathbb{R}^{n}$ is a continuous response measured
on $n$ individuals; $\textbf{\emph{X}}_{j}\in\mathbb{R}^{n\times 1}$ is the
$j$-th covariate; $\textbf{\emph{X}}_{i}\in\mathbb{R}^{1\times p}$ contains
the covariates of the $i$-th individual; $\beta_{j}\in\mathbb{R}$ are the
corresponding parameters for the main effects of covariates;
$\psi_{j}\in\mathbb{R}$ are the blip parameters for $j=1,\ldots,p$;
$\boldsymbol{A}$ is the binary treatment indicator; "$\circ$" is the Hadamard
product for element-wise vector multiplication; and $\boldsymbol{\varepsilon}$
is an error term. This model is a simplification of (Bhatnagar et al., 2020),
which considers an additive interaction regression model. To eliminate the
intercept $\beta_{0}$ from Equation (2), throughout this section, we center
the response variable and each input variable in a weighted way, e.g., using
$\textbf{\emph{Y}}-\sum_{i=1}^{n}w_{i}Y_{i}/n$ instead of $\textbf{\emph{Y}}$
as the outcome, so that the weighted mean of the observed outcome is removed
and hence the intercept term is eliminated. Our method allows for the case
when the number of predictors
$p$ is greater than the number of observations $n$.
For a continuous response we use the weighted squared-error loss:
$\mathcal{L}(\textbf{\emph{Y}};\boldsymbol{\theta})=\frac{1}{2n}\left\lVert\sqrt{\textbf{\emph{W}}}\left(\textbf{\emph{Y}}-\psi_{0}\boldsymbol{A}-\sum_{j=1}^{p}\textbf{\emph{X}}_{j}\beta_{j}-\sum_{j=1}^{p}\psi_{j}(\boldsymbol{A}\circ\textbf{\emph{X}}_{j})\right)\right\rVert^{2}_{2}$
where
$\boldsymbol{\theta}=(\beta_{1},\ldots,\beta_{p},\psi_{0},\ldots,\psi_{p})$,
and
$\textbf{\emph{W}}=diag\left\\{w_{1}(a,\textbf{\emph{x}}),w_{2}(a,\textbf{\emph{x}}),\dots,w_{n}(a,\textbf{\emph{x}})\right\\}$
is a _known_ $n\times n$ diagonal matrix where $w_{i}(a,\textbf{\emph{x}})$ is
the ‘absolute value’ weight for the $i$-th individual. Similar to LASSO, we
consider the following objective function that includes the $\ell_{1}$ penalty
for variable selection:
$Q\left(\boldsymbol{\theta}\right)=\mathcal{L}(\textbf{\emph{Y}};\boldsymbol{\theta})+\lambda(1-\alpha)(\left\lVert\boldsymbol{\beta}\right\rVert_{1}+|\psi_{0}|)+\lambda\alpha\left\lVert\boldsymbol{\psi}\right\rVert_{1}$
(3)
where $\boldsymbol{\beta}=(\beta_{1},...,\beta_{p})$ and
$\boldsymbol{\psi}=(\psi_{1},...,\psi_{p})$, $\lambda>0$ and $\alpha\in(0,1)$
are tuning parameters, and the solution is given by
$\widehat{\boldsymbol{\theta}}=\operatorname*{arg\,min}_{\boldsymbol{\theta}}Q(\boldsymbol{\theta})$.
The parameter $\alpha$ controls the relative penalties for the main effects
and the interaction effects. An issue with Equation (3) is that since no
constraint is placed on the structure of the model, it is possible that an
estimated interaction term is nonzero while the corresponding main effects are
zero.
Our work is built on the strong heredity assumption, a constraint that is
often used in practice when estimating interaction effects. Under the strong
heredity assumption an interaction term can be estimated to be non-zero if and
only if its corresponding main effects are estimated to be non-zero, whereas a
non-zero main effect does not necessarily imply a non-zero interaction term.
In DTR analysis it is most common that there are more confounders than there
are potential tailoring variables. Following (Choi et al., 2010), we introduce
a new set of parameters $\boldsymbol{\tau}=(\tau_{1},\tau_{2},...\tau_{p})$
and reparametrize the coefficients for the interaction terms $\psi_{j}$ as a
function of $\tau_{j}$ and the main effect parameters $\beta_{j}$ and
$\psi_{0}$: $\psi_{j}=\psi_{0}\tau_{j}\beta_{j}$. In this way, strong heredity
can be met and we consider the following model:
$\mathcal{L}^{*}(\textbf{\emph{Y}};\boldsymbol{\theta})=\frac{1}{2n}\left\lVert\sqrt{\textbf{\emph{W}}}\left(\textbf{\emph{Y}}-\psi_{0}\boldsymbol{A}-\sum_{j=1}^{p}\textbf{\emph{X}}_{j}\beta_{j}-\sum_{j=1}^{p}\underbrace{\psi_{0}\tau_{j}\beta_{j}}_{\textrm{$\psi_{j}$}}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)\right\rVert_{2}^{2}$
where the objective function now is expressed as:
$Q\left(\boldsymbol{\theta}\right)=\mathcal{L}^{*}(\textbf{\emph{Y}};\boldsymbol{\theta})+\lambda(1-\alpha)(\left\lVert\boldsymbol{\beta}\right\rVert_{1}+|\psi_{0}|)+\lambda\alpha\left\lVert\boldsymbol{\tau}\right\rVert_{1}.$
(4)
### 2.4 Algorithm Details
In this section we describe a blockwise coordinate descent algorithm (Friedman
et al., 2007) for fitting the weighted least-squares version of the model in
Equation (4). "Blockwise" means we break down the optimization problem into
sub-problems, i.e., we fix the interaction terms $\boldsymbol{\tau}$ and solve
for the main effects $\psi_{0}$ and $\boldsymbol{\beta}$ and vice versa.
Following (Zou and Hastie, 2005) and (Hastie et al., 2010), we fix the value
for the tuning parameter $\alpha$ and minimize the objective function over a
decreasing sequence of $\lambda$ values
$(\lambda_{max}>\ldots>\lambda_{min})$.
Denote the $n$-dimensional residual column vector
$\boldsymbol{R}=\textbf{\emph{Y}}-\hat{\textbf{\emph{Y}}}$. The subgradient
equations are given by
$\displaystyle\frac{\partial Q}{\partial\psi_{0}}=-\frac{1}{n}\left(\boldsymbol{A}+\sum_{j=1}^{p}\tau_{j}\beta_{j}\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)^{\top}\textbf{\emph{W}}\boldsymbol{R}+\lambda(1-\alpha)s_{1}=0$ (5)
$\displaystyle\frac{\partial Q}{\partial\beta_{j}}=-\frac{1}{n}\left(\textbf{\emph{X}}_{j}+\tau_{j}\psi_{0}\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)^{\top}\textbf{\emph{W}}\boldsymbol{R}+\lambda(1-\alpha)s_{2}=0$ (6)
$\displaystyle\frac{\partial Q}{\partial\tau_{j}}=-\frac{1}{n}\left(\psi_{0}\beta_{j}\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)^{\top}\textbf{\emph{W}}\boldsymbol{R}+\lambda\alpha s_{3}=0$ (7)
where $s_{1}$, $s_{2}$ and $s_{3}$ are subgradients of the $\ell_{1}$-norm,
i.e., $s_{1}=\textrm{sign}\left(\psi_{0}\right)$ if $\psi_{0}\neq 0$ and
$s_{1}\in[-1,1]$ if $\psi_{0}=0$, and similarly for $s_{2}$ and $s_{3}$.
Define the partial residuals, without the $j$th predictor for $j=1,\ldots,p$,
as
$\boldsymbol{R}_{(-j)}=\textbf{\emph{Y}}-\sum_{\ell\neq j}\textbf{\emph{X}}_{\ell}\beta_{\ell}-\psi_{0}\boldsymbol{A}-\sum_{\ell\neq j}\tau_{\ell}\psi_{0}\beta_{\ell}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{\ell}\right),$
and the partial residual without A as
$\boldsymbol{R}_{(-A)}=\textbf{\emph{Y}}-\sum_{j=1}^{p}\textbf{\emph{X}}_{j}\beta_{j}$
and the partial residual without the $j$th interaction for $j=1,\ldots,p$, as
$\boldsymbol{R}_{(-jA)}=\textbf{\emph{Y}}-\sum_{j=1}^{p}\textbf{\emph{X}}_{j}\beta_{j}-\psi_{0}\boldsymbol{A}-\sum_{\ell\neq j}\tau_{\ell}\psi_{0}\beta_{\ell}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{\ell}\right).$
From the subgradient Equations (5)–(7) we see that
$\hat{\psi}_{0}=\frac{S\left(\left(\boldsymbol{A}+\sum_{j=1}^{p}\tau_{j}\beta_{j}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)^{\top}\textbf{\emph{W}}\boldsymbol{R}_{(-A)},n\cdot\lambda(1-\alpha)\right)}{\left(\boldsymbol{A}+\sum_{j=1}^{p}\tau_{j}\beta_{j}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)^{\top}\textbf{\emph{W}}\left(\boldsymbol{A}+\sum_{j=1}^{p}\tau_{j}\beta_{j}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)}$
$\hat{\beta}_{j}=\frac{S\left(\left(\textbf{\emph{X}}_{j}+\tau_{j}\psi_{0}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)^{\top}\textbf{\emph{W}}\boldsymbol{R}_{-j},n\cdot\lambda(1-\alpha)\right)}{\left(\textbf{\emph{X}}_{j}+\tau_{j}\psi_{0}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)^{\top}\textbf{\emph{W}}\left(\textbf{\emph{X}}_{j}+\tau_{j}\psi_{0}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)}$
$\hat{\tau}_{j}=\frac{S\left(\left(\psi_{0}\beta_{j}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)^{\top}\textbf{\emph{W}}\boldsymbol{R}_{(-jA)},n\cdot\lambda\alpha\right)}{\left(\psi_{0}\beta_{j}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)^{\top}\textbf{\emph{W}}\left(\psi_{0}\beta_{j}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)}$
where $S(x,u)$ is the soft-thresholding operator defined as
$S(x,u)=\textrm{sign}(x)(\left\lvert x\right\rvert-u)_{+}$ ($x_{+}$ denotes
the maximum of $x$ and $0$).
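The soft-thresholding operator is simple enough to state directly in code; this is a minimal sketch, with the function name our own.

```python
import numpy as np

def soft_threshold(x, u):
    """Soft-thresholding operator S(x, u) = sign(x) * (|x| - u)_+,
    applied element-wise to arrays or to scalars."""
    return np.sign(x) * np.maximum(np.abs(x) - u, 0.0)
```

It shrinks every argument toward zero by $u$ and sets arguments smaller than $u$ in magnitude exactly to zero, which is what produces sparse solutions in the coordinate updates above.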
The strong heredity assumption means that finding the $\lambda$ which shrinks
all coefficients to 0 reduces to finding the smallest $\lambda$ such that all
main effect coefficients are shrunk to 0. From the subgradient Equation (5),
we see that $\psi_{0}=0$ is a solution if
$\left\lvert\frac{1}{n}\left(\boldsymbol{A}+\sum_{j=1}^{p}\tau_{j}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\beta_{j}\right)^{\top}\boldsymbol{R}_{(-A)}\right\rvert\leq\lambda(1-\alpha).$
From the subgradient Equation (6), we see that $\beta_{j}=0$ is a solution if
$\left\lvert\frac{1}{n}\left(\textbf{\emph{X}}_{j}+\tau_{j}\psi_{0}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)^{\top}\boldsymbol{R}_{(-j)}\right\rvert\leq\lambda(1-\alpha).$
From the subgradient Equation (7), we see that $\tau_{j}=0$ is a solution if
$\left\lvert\frac{1}{n}\left(\psi_{0}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\beta_{j}\right)^{\top}\boldsymbol{R}_{(-jA)}\right\rvert\leq\lambda\alpha.$
Thus the strong heredity assumption implies that the parameter vector
$(\psi_{0},\beta_{1},\ldots,\beta_{p},\tau_{1},\ldots,\tau_{p})$ will be
entirely equal to $\boldsymbol{0}$ if
$(\psi_{0},\beta_{1},\ldots,\beta_{p})=\boldsymbol{0}$. Therefore, the
smallest value of $\lambda$ for which the entire parameter vector is
$\boldsymbol{0}$ is:
$n(1-\alpha)\lambda_{max}=\max\left\\{\left\lvert(\boldsymbol{A}+\sum_{j=1}^{p}\tau_{j}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\beta_{j})^{\top}\boldsymbol{R}_{(-A)}\right\rvert,\right.\left.\left\lvert\left(\textbf{\emph{X}}_{j}+\tau_{j}\psi_{0}\left(\boldsymbol{A}\circ\textbf{\emph{X}}_{j}\right)\right)^{\top}\boldsymbol{R}_{(-j)}\right\rvert\right\\}$
which reduces to
$\lambda_{max}=\frac{1}{n(1-\alpha)}\max\left\\{\left\lvert\boldsymbol{A}^{\top}\boldsymbol{R}_{(-A)}\right\rvert,\max_{j}\left\lvert\left(\textbf{\emph{X}}_{j}\right)^{\top}\boldsymbol{R}_{(-j)}\right\rvert\right\\}.$
The computational algorithm to fit all the parameters in a sequence of loops
is further detailed in the Supplementary Material (Algorithm 1).
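The blockwise structure and the closed-form coordinate updates of Section 2.4 can be sketched as follows. This is an illustrative implementation under our own simplifications (fixed iteration budget, a zero-denominator guard for the $\tau$ update, all names hypothetical), not the paper's Algorithm 1.

```python
import numpy as np

def soft(x, u):
    """Scalar soft-thresholding S(x, u) = sign(x) * (|x| - u)_+."""
    return np.sign(x) * max(abs(x) - u, 0.0)

def pdwols_fit(y, X, a, w, lam, alpha=0.5, n_iter=200, tol=1e-8):
    """Blockwise coordinate descent for the reparametrized objective (4),
    where psi_j = psi0 * tau_j * beta_j enforces strong heredity.

    y : (n,) centred outcome      X : (n, p) centred covariates
    a : (n,) binary treatment     w : (n,) dWOLS weights
    """
    n, p = X.shape
    AX = a[:, None] * X                        # interaction columns A o X_j
    beta, tau, psi0 = np.zeros(p), np.zeros(p), 0.0
    for _ in range(n_iter):
        old = np.concatenate(([psi0], beta, tau))
        # Block 1: tau fixed, update psi0 and beta.
        v = a + AX @ (tau * beta)              # d(fit)/d(psi0)
        r = y - X @ beta                       # partial residual R_(-A)
        psi0 = soft(v @ (w * r), n * lam * (1 - alpha)) / (v @ (w * v))
        for j in range(p):
            fit = X @ beta + psi0 * a + AX @ (psi0 * tau * beta)
            v = X[:, j] + tau[j] * psi0 * AX[:, j]
            r = y - fit + beta[j] * v          # partial residual R_(-j)
            beta[j] = soft(v @ (w * r), n * lam * (1 - alpha)) / (v @ (w * v))
        # Block 2: psi0 and beta fixed, update tau.
        for j in range(p):
            u = psi0 * beta[j] * AX[:, j]
            denom = u @ (w * u)
            if denom == 0.0:                   # heredity: main effect is zero
                tau[j] = 0.0
                continue
            fit = X @ beta + psi0 * a + AX @ (psi0 * tau * beta)
            r = y - fit + tau[j] * u           # partial residual R_(-jA)
            tau[j] = soft(u @ (w * r), n * lam * alpha) / denom
        new = np.concatenate(([psi0], beta, tau))
        if np.max(np.abs(new - old)) < tol:
            break
    return psi0, beta, psi0 * tau * beta       # psi_j = psi0 * tau_j * beta_j
```

Because $\psi_{j}$ is recovered as $\psi_{0}\tau_{j}\beta_{j}$, a zero main effect forces the corresponding interaction to zero, which is exactly the strong heredity constraint.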
### 2.5 Multiple Intervals Estimation
Knowing how to estimate the blip parameters in a one-stage setting, we now
describe how the pdWOLS approach works in a $K$-stages setting. Starting from
the last stage, the estimation procedure is applied to the $K$-th stage
observed outcome $\textbf{\emph{y}}_{K}$, treatment $\boldsymbol{a}_{K}$ and
covariates $\textbf{\emph{x}}_{K}$. The estimated blip parameters are obtained
by maximizing the objective function in Equation (4), and the estimated rules
$\hat{a}_{K}^{opt}=I(\hat{\psi}_{0K}+\textbf{\emph{x}}_{K}\boldsymbol{\hat{\psi}_{K}}>0)$,
where $I$ is the indicator function. The $(K-1)$-th stage outcome is based
on "optimal responses", that is, the estimation procedure is applied to the
pseudo-outcome
$\tilde{\textbf{\emph{y}}}_{K-1}=\textbf{\emph{y}}_{K}+\mu_{K}(\textbf{\emph{x}}_{K},a_{K};\boldsymbol{\hat{\psi}_{K}})$,
treatment $\boldsymbol{a}_{K-1}$ and covariates $\textbf{\emph{x}}_{K-1}$,
where
$\mu_{K}(\textbf{\emph{x}}_{K},a_{K};\boldsymbol{\hat{\psi}_{K}})=\gamma_{K}(\textbf{\emph{x}}_{K},\hat{a}_{K}^{opt};\boldsymbol{\hat{\psi}_{K}})-\gamma_{K}(\textbf{\emph{x}}_{K},a_{K};\boldsymbol{\hat{\psi}_{K}})$
is the regret function at stage $K$. The pseudo-outcome
$\tilde{\textbf{\emph{y}}}_{K-1}$ is optimal since the regret is added to the
observed outcome $\textbf{\emph{y}}_{K}$. The same procedure continues,
recursively working backwards, until stage $1$ estimation, such that the blip
parameters across all the stages are obtained and all treatment decisions can
be made.
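One step of this backward recursion can be sketched as follows, assuming the linear blip $\gamma(\textbf{\emph{x}},a)=a(\psi_{0}+\textbf{\emph{x}}\boldsymbol{\psi})$ used in this paper; the function name and the already-fitted blip parameters are illustrative assumptions.

```python
import numpy as np

def pseudo_outcome(y, a, x, psi0_hat, psi_hat):
    """One step of the backward recursion: add the estimated stage-K regret
    to the observed outcome.  Assumes a linear blip
    gamma(x, a) = a * (psi0 + x @ psi) with fitted parameters psi0_hat, psi_hat.
    """
    blip = psi0_hat + x @ psi_hat              # gamma under a=1 minus a=0
    a_opt = (blip > 0).astype(float)           # estimated optimal rule
    regret = (a_opt - a) * blip                # gamma(x, a_opt) - gamma(x, a)
    return y + regret
```

Since $\hat{a}^{opt}=I(\text{blip}>0)$, the regret term is always non-negative, so the pseudo-outcome is never smaller than the observed outcome, and it equals the observed outcome exactly for patients who received the estimated optimal treatment.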
### 2.6 Asymptotic Properties of the pdWOLS estimator
We now show that when the number of predictors $p$ is fixed and the sample
size $n$ approaches infinity, the pdWOLS estimator has both the double
robustness and oracle properties (Fan and Li, 2001) under several assumptions.
Following adaptive LASSO principles (Zou, 2006), we add adaptive weights (or a
penalty factor) to the objective function (4) to obtain
$\mathcal{L}^{*}(\textbf{\emph{Y}};\boldsymbol{\theta})+\lambda(1-\alpha)(w_{0}|\psi_{0}|+\sum_{j=1}^{p}w_{j}|\beta_{j}|)+\lambda\alpha\sum_{j=1}^{p}\tilde{w}_{j}|\tau_{j}|$
(8)
where $w_{0}$, $w_{j}$ and $\tilde{w}_{j}$ are known adaptive weights, i.e.,
the coefficients are not forced to be equally penalized in the $\ell_{1}$
penalty. If the adaptive weight is zero, then the corresponding coefficient is
unpenalized. As $n$ goes to infinity, the weights corresponding to unimportant
variables go to infinity, which puts a large penalty on those variables, and
the weights corresponding to important variables converge to a finite
constant. Thus, small coefficients are removed, i.e., coefficients are shrunk
to 0, and large coefficients are estimated unbiasedly (asymptotically) (Zou,
2006). For instance, we can choose
$w_{j}=\left\lvert\hat{\beta}_{j}^{wls}\right\rvert^{-1}$ and
$\tilde{w}_{j}=\left\lvert\frac{\hat{\beta}_{j}^{wls}\hat{\psi}_{0}^{wls}}{\hat{\psi}_{j}^{wls}}\right\rvert$
for penalty factors where $\hat{\beta}_{j}^{wls}$ and $\hat{\psi}_{j}^{wls}$
are unpenalized weighted least square estimates of the pdWOLS model. Without
loss of generality, we can rewrite Equation (8) as
$\mathcal{L}^{*}(\textbf{\emph{Y}};\boldsymbol{\theta})+\lambda_{0}|\psi_{0}|+\sum_{j=1}^{p}\lambda_{j}^{\beta}|\beta_{j}|+\sum_{j=1}^{p}\lambda_{j}^{\tau}|\tau_{j}|.$
where $\lambda_{0}=\lambda(1-\alpha)w_{0}$,
$\lambda_{j}^{\beta}=\lambda(1-\alpha)w_{j}$ and
$\lambda_{j}^{\tau}=\lambda\alpha\tilde{w}_{j}$.
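The suggested penalty factors can be computed directly from the unpenalized weighted least-squares fit; the small `eps` guard against division by an exact zero is our own assumption, not part of the paper.

```python
import numpy as np

def adaptive_weights(beta_wls, psi0_wls, psi_wls, eps=1e-8):
    """Adaptive penalty factors from Section 2.6:
    w_j = |beta_j|^(-1)  and  tilde_w_j = |beta_j * psi0 / psi_j|,
    computed from unpenalized weighted least-squares estimates.
    The eps guard (an assumption) avoids dividing by an exact zero.
    """
    w = 1.0 / (np.abs(beta_wls) + eps)
    w_tilde = np.abs(beta_wls * psi0_wls) / (np.abs(psi_wls) + eps)
    return w, w_tilde
```

Coefficients whose unpenalized estimates are small receive large penalty factors and are shrunk away, while large coefficients are penalized only lightly, which is the mechanism behind the oracle property.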
We assume that the true model follows the strong heredity assumption described
above, and regularity conditions for the asymptotic distribution of the data
hold, those regularity conditions are detailed in A(1)-A(3) in the
Supplemental Material. We describe the asymptotic properties of pdWOLS in the
following theorems; the proofs are given in the Supplemental Material.
Let $\boldsymbol{\beta}^{*}$ and $\boldsymbol{\psi}^{*}$ denote the underlying
true parameters. Recall that
$\boldsymbol{\theta}^{*}=(\beta_{1}^{*},\ldots,\beta_{p}^{*},\psi_{0}^{*},\psi_{1}^{*},\ldots,\psi_{p}^{*})$. Define $B_{1}$ as the indices of non-zero
components for main effects, and $B_{2}$ as the indices of non-zero components
for interaction terms such that
$B_{1}=\\{j:\beta_{j}^{*}\neq 0\\}\cup\\{p+1:\psi_{0}^{*}\neq
0\\},B_{2}=\\{j+p+1:\psi_{j}^{*}\neq 0\\},B=B_{1}\cup B_{2}.$
Let $na_{n}$ be the maximum value of the tuning parameters
$(\lambda_{0},\boldsymbol{\lambda^{\beta}},\boldsymbol{\lambda^{\tau}})$ such
that the corresponding coefficients are non-zero, and $nb_{n}$ be the minimum
value of the tuning parameters such that the corresponding coefficients are
zero; note that for $\lambda^{\tau}$ we only consider the indices $m$ such
that $\beta_{m}^{*}\neq 0$ and $\psi_{m}^{*}=0$ (i.e., $m\in B_{1}$):
$a_{n}=\frac{1}{n}\max\\{\lambda_{0},\lambda_{j}^{\beta},\lambda_{m}^{\tau}:\psi_{0}^{*}\neq 0,j\in B_{1},m+p+1\in B_{2}\\}$
$b_{n}=\frac{1}{n}\min\\{\lambda_{0},\lambda_{j}^{\beta},\lambda_{m}^{\tau}:\psi_{0}^{*}=0,j\in B_{1}^{c},m+p+1\in B_{2}^{c}\text{ such that }\beta_{m}^{*}\neq 0\\}.$
###### Theorem 1.
Correct Sparsity (Variable Selection Consistency): Assume that $a_{n}=o(1)$
and $\sqrt{n}b_{n}\to\infty$, then there exists a local minimizer
$\hat{\boldsymbol{\theta}}_{n}$ of Equation (8) such that
$\left\lVert\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}^{*}\right\rVert=O_{p}(n^{-\frac{1}{2}}+a_{n})$.
Moreover, we have
$P(\hat{\boldsymbol{\theta}}_{B^{c}}=0)\to 1.$
###### Theorem 2.
Asymptotic Normality: Assume that $\sqrt{n}a_{n}\to 0$ and
$\sqrt{n}b_{n}\to\infty$, then
$\sqrt{n}(\hat{\boldsymbol{\theta}}_{B}-\boldsymbol{\theta}_{B}^{*})\to_{d}N(0,\boldsymbol{I}^{-1}(\boldsymbol{\theta}_{B}^{*}))$
where $\boldsymbol{I}(\boldsymbol{\theta}_{B}^{*})$ is the Fisher information
matrix of $\boldsymbol{\theta}_{B}^{*}$ assuming that
$\boldsymbol{\theta}_{B^{c}}^{*}$ are known to be zero.
###### Remark.
In practice, when $p>n$ and the weighted least squares estimates are not
available, we could use ridge regression to obtain well defined adaptive
weights.
###### Theorem 3.
Double Robustness: Assume that the blip function is correctly specified, SUTVA
and ignorability described in Section 2.1 hold, the resulting blip parameter
estimators of pdWOLS are doubly-robust: the estimators are consistent if
either the treatment model or the treatment-free model is correct.
Note that correct specification of the blip model permits over-specification;
that is, the true blip model may be contained within the analyst-specified
model.
## 3 Simulation Studies
In this section, we first illustrate the double robustness of pdWOLS and
compare its performance to competitor approaches through a number of
simulations; then we implement the proposed method in a high-dimensional
setting where $p>n$; lastly, we present a two-stage setting. The tuning
parameter $\alpha$ was set to 0.5 for all the simulations, and $\lambda$ was
selected using four-fold cross-validation.
### 3.1 Competing Methods
We compare the variable selection results, error rate and out-of-sample value
(i.e., the expected outcome) under the estimated rules of pdWOLS with
Q-learning combined with LASSO (Blatt et al., 2004) and penalized A-Learning
(PAL) (Shi et al., 2018). Q-learning is a sequential regression approach to
DTR estimation; relying only on outcome models (i.e., the sum of the blip and
the treatment-free models), it is not doubly robust. PAL first estimates the
treatment-free and propensity score models, then uses the Dantzig selector
(Candes and Tao, 2007) to penalize the estimating equations of A-learning:
$\hat{\boldsymbol{\psi}}=\operatorname*{arg\,min}_{\boldsymbol{\psi}}\>\left\lVert\boldsymbol{\psi}\right\rVert_{1}$
subject to $\left\lVert
diag(\boldsymbol{A}-\boldsymbol{\hat{\pi}})(\textbf{\emph{Y}}-f(\textbf{\emph{x}};\hat{\boldsymbol{\beta}})-\gamma(\textbf{\emph{x}},a;\boldsymbol{\psi}))\right\rVert_{\infty}\leq
n\lambda_{pal}$ where $\lambda_{pal}$ is the tuning parameter,
$\boldsymbol{\hat{\pi}}$ is the estimated propensity score, and
$f(\textbf{\emph{x}};\hat{\boldsymbol{\beta}})$ and
$\gamma(\textbf{\emph{x}},a;\boldsymbol{\psi})$ are the treatment-free model
and blip functions, respectively. One can estimate $\boldsymbol{\pi}$ and
$\boldsymbol{\beta}$ in the posited propensity score and treatment-free models
using penalized regressions (Shi et al., 2018); however, if the data are only
moderately high-dimensional, there is often little to be gained (and perhaps
something to be lost) from propensity score models that are fit through
data-adaptive methods (Alam et al., 2019).
LASSO was implemented using the R package glmnet (Hastie et al., 2010), with
$\lambda_{LASSO}$ selected via four-fold cross-validation. PAL was implemented
using the R package ITRSelect (Shi et al., 2018), with the tuning parameter
$\lambda_{pal}$ selected via BIC. The main effect of treatment $A$ is not
penalized any of the three methods. The default approach to fit the treatment-
free model in ITRSelect is SCAD; to make the comparison between the three
approaches more fair, we use LASSO to fit the treatment-free model for PAL. We
also present unpenalized estimates of the blip parameters from a two-step
approach: that is, after variable selection, the blip parameters are
re-calculated by solving the unpenalized (weighted) least squares via
Q-learning, dWOLS and A-learning with the selected variables. We refer to
these three unpenalized estimation procedures as refitted.
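To make the two-step "refitted" idea concrete, the following Python sketch selects variables with a penalized fit and then re-estimates the retained coefficients by unpenalized least squares. (The paper's own implementation uses the R packages glmnet and ITRSelect; the ISTA-style LASSO solver, the toy data, and the penalty value here are our illustrative assumptions, with the tuning parameter fixed rather than cross-validated.)

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimal ISTA solver for (1/2n)||y - Xb||^2 + lam*||b||_1 (illustrative)."""
    n, p = X.shape
    b = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = b - step * grad
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # soft-threshold
    return b

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -2.0] + [0.0] * (p - 2))
y = X @ beta_true + rng.normal(size=n)

# Step 1: penalized fit selects the variables (shrinks coefficients toward 0)
b_lasso = lasso_ista(X, y, lam=0.2)
selected = np.flatnonzero(b_lasso != 0.0)

# Step 2: unpenalized least-squares refit on the selected set only
b_refit = np.zeros(p)
b_refit[selected], *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
```

The refitted coefficients are free of the shrinkage bias of the penalized fit, which mirrors the bias reduction the paper reports for the refitted estimators.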
### 3.2 Experiments Examining Double Robustness Property
We begin with a simple one-stage example, with the following data generation
procedure: Step 1: Generate 10 covariates
($\textbf{\emph{X}}_{1}-\textbf{\emph{X}}_{10}$), where $\textbf{\emph{X}}$ is
multivariate normal with zero mean, unit variance and correlation
$Corr(\textbf{\emph{X}}_{j},\textbf{\emph{X}}_{k})=0.25^{|j-k|}$ for
$j,k=1,2,\dots,10$.
Step 2: Generate treatment according to the model:
$P(A=1|X_{1},X_{2})=\cfrac{\exp(1+x_{1}+x_{2})}{1+\exp(1+x_{1}+x_{2})}.$
Step 3: Set the blip function, and hence the optimal treatment strategy, to
depend only on $X_{1}$:
$\gamma(x,a;\boldsymbol{\psi})=a(\psi_{0}+\psi_{1}x_{1})$ for
$\psi_{0}=1,\psi_{1}=-1.5$.
Step 4: Set the treatment-free model to
$f(\textbf{\emph{x}};\boldsymbol{\beta})=0.5-0.6e^{x_{1}}-2x_{1}-2x_{2}$.
Step 5: Generate the outcome $Y\sim
N(f(\textbf{\emph{x}};\boldsymbol{\beta})+\gamma(\textbf{\emph{x}},a;\boldsymbol{\psi}),1).$
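Steps 1-5 above can be sketched directly in Python (the paper's simulations used R; the seed and sample size below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 10

# Step 1: correlated covariates, Corr(X_j, X_k) = 0.25^|j-k|
cov = 0.25 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

# Step 2: treatment from a logistic model in x1, x2
logits = 1 + X[:, 0] + X[:, 1]
A = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Steps 3-4: blip gamma(x, a) = a(1 - 1.5 x1) and treatment-free model
gamma = A * (1.0 - 1.5 * X[:, 0])
f = 0.5 - 0.6 * np.exp(X[:, 0]) - 2 * X[:, 0] - 2 * X[:, 1]

# Step 5: outcome Y ~ N(f + gamma, 1)
Y = rng.normal(f + gamma, 1.0)

# The optimal rule treats exactly when the blip under a = 1 is positive
a_opt = (1.0 - 1.5 * X[:, 0] > 0).astype(int)
```

Note that the optimal rule depends only on $x_{1}$, as the blip function specifies.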
We apply estimation and variable selection approaches with a variety of sample
sizes (100, 500 and 2000) in four scenarios, where neither, one, or both of
the treatment and treatment-free models are correctly specified. Specifically,
the scenarios are:
$\bullet$ Scenario 1 (neither treatment nor treatment-free is correct):
Regress Y on ($\boldsymbol{1},\textbf{\emph{X}},\boldsymbol{A},\boldsymbol{A}\textbf{\emph{X}}$), and set all observational weights to 1
(similar to assuming a null propensity score model). As this scenario fails to
meet the assumptions of correct model specification required by all methods,
consistency is not assured for any approach.
$\bullet$ Scenario 2 (treatment correct, treatment-free incorrect): Regress Y
on
($\boldsymbol{1},\textbf{\emph{X}},\boldsymbol{A},\boldsymbol{A}\textbf{\emph{X}}$),
but fit a correctly specified propensity score model whose parameters are
estimated via logistic regression.
$\bullet$ Scenario 3 (treatment incorrect, treatment-free correct): Regress Y
on
($\boldsymbol{1},e^{\textbf{\emph{X}}_{1}},\textbf{\emph{X}},\boldsymbol{A},\boldsymbol{A}e^{\textbf{\emph{X}}_{1}},\boldsymbol{A}\textbf{\emph{X}}$), so
that the treatment-free model is correctly specified but - as in scenario 1 -
set all observational weights to 1.
$\bullet$ Scenario 4 (both treatment and treatment-free are correct): Regress
Y on
($\boldsymbol{1},e^{\textbf{\emph{X}}_{1}},\textbf{\emph{X}},\boldsymbol{A},\boldsymbol{A}e^{\textbf{\emph{X}}_{1}},\boldsymbol{A}\textbf{\emph{X}}$), and
estimate the parameters of a correctly specified propensity score via logistic
regression.
Note that, since Q-learning does not incorporate any propensity score
adjustments, scenarios 1 and 2 yield identical estimates, as do scenarios 3
and 4. All three methods estimate the same treatment-free models and the same
blip functions in the four scenarios. To save space, we present only the
simulation performance of Scenarios 2-4 with sample sizes 100 and 500; the
full simulations are available in the Supplemental Material.
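The contrast between scenarios with correct and null propensity models can be sketched as follows. The balancing weights $w=|A-E(A|X)|$ below follow the form the paper uses in its STAR*D analysis; the Newton-Raphson logistic fit and the toy data are our illustrative assumptions.

```python
import numpy as np

def fit_logistic(Z, A, n_iter=25):
    """Newton-Raphson logistic regression (design Z includes an intercept)."""
    b = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        pi = 1 / (1 + np.exp(-Z @ b))
        W = pi * (1 - pi)
        H = Z.T @ (Z * W[:, None]) + 1e-8 * np.eye(Z.shape[1])  # ridge for stability
        b += np.linalg.solve(H, Z.T @ (A - pi))
    return b

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 2))
pi_true = 1 / (1 + np.exp(-(1 + X[:, 0] + X[:, 1])))
A = rng.binomial(1, pi_true)

# Scenarios 2/4: correctly specified propensity model fit by logistic regression
Z = np.column_stack([np.ones(n), X])
pi_hat = 1 / (1 + np.exp(-Z @ fit_logistic(Z, A)))
w_correct = np.abs(A - pi_hat)

# Scenarios 1/3: "null" propensity model, all observational weights set to 1
w_null = np.ones(n)
```

With the null model, the weighted regression reduces to ordinary least squares, so consistency then rests entirely on the treatment-free model.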
Across all scenarios where at least one nuisance model is correctly specified,
refitted estimators perform better than their penalized counterparts in terms
of bias (see Figure 1 in the Supplementary Web Material). When at least one of
the treatment or treatment-free models is correctly specified, the blip
parameter estimators are consistent for refitted pdWOLS. When the treatment-
free model is correct (Scenarios 3 and 4), the refitted Q-learning (LASSO)
estimators are consistent, as expected. Surprisingly, PAL failed when the
treatment model is incorrect (Scenario 3). This result was not anticipated
since PAL is a double robust method, although previous simulations have not
considered its performance in terms of parameter estimates (Shi et al., 2018).
The variable selection results for optimal treatment decisions are presented in
Table 1. In Scenarios 2-4, the important tailoring variable was correctly
selected by both pdWOLS and Q-learning (LASSO). PAL failed in scenario 3.
However, the false positive rates of pdWOLS and Q-learning (LASSO) were higher
than that of PAL in all scenarios: for example, in Scenario 3, both LASSO and
pdWOLS falsely selected the variable $Ae^{X_{1}}$ $72\%$ of the time.
Table 1 also summarizes the error rates (i.e.,
$\frac{1}{n}\sum_{i=1}^{n}I(a_{i}^{opt}\neq\hat{a}_{i})$) of the estimated
optimal treatment regimes for treatment decision making and value functions.
The average value function and the error rates were computed over a testing set
of size 10,000 (i.e., a dataset generated according to the process described
above in all respects except that treatment was allocated according to the
estimated rule). Both the error rate and the value of pdWOLS and Q-learning
with LASSO are very close; pdWOLS outperforms other methods in Scenario 2,
while Q-learning with LASSO has the best performance in Scenarios 3 and 4. The
performances of the refitted versions of pdWOLS and Q-learning are similar.
Without refitting, PAL performed uniformly worse than the other two methods
across all scenarios; refitting, however, substantially improved PAL's
performance.
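A sketch of how the error rate and value are computed on such a test set (the estimated blip parameters $\hat{\boldsymbol{\psi}}=(0.9,-1.4)$ are a hypothetical example near the truth $\boldsymbol{\psi}=(1,-1.5)$, not values from the paper; the mean-zero $-2x_{2}$ and noise terms of the generating model are omitted from the value calculation):

```python
import numpy as np

rng = np.random.default_rng(3)
n_test = 10_000
x1 = rng.normal(size=n_test)

# True optimal rule: treat when the blip 1 - 1.5*x1 is positive
a_opt = (1.0 - 1.5 * x1 > 0).astype(int)

# Hypothetical estimated blip parameters (assumed, for illustration only)
psi_hat = np.array([0.9, -1.4])
a_hat = (psi_hat[0] + psi_hat[1] * x1 > 0).astype(int)

# Error rate: fraction of test subjects assigned the wrong treatment
error_rate = np.mean(a_opt != a_hat)

# Value: mean outcome when treatment follows the estimated rule
treatment_free = 0.5 - 0.6 * np.exp(x1) - 2 * x1
value = np.mean(treatment_free + a_hat * (1.0 - 1.5 * x1))
```

Because the two rules disagree only for $x_{1}$ between the thresholds $0.9/1.4$ and $2/3$, the error rate is small even though the parameter estimates are biased, which illustrates why modest estimation bias need not translate into poor decisions.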
Table 1: Variable selection rate (%) of the blip parameters, error rate (ER,
%) and value function over a testing set of size 10,000 under the estimated
decision rules using pdWOLS, Q-learning with LASSO (QL), PAL and their
refitted versions ($n=500$, $400$ simulations). The main effect of treatment
is not penalized (and hence is always selected).
| Scenario 2 | Scenario 3 | Scenario 4
---|---|---|---
| pdWOLS | QL | PAL | pdWOLS | QL | PAL | pdWOLS | QL | PAL
$Ae^{X_{1}}$ | - | - | - | 72 | 14 | 72 | 42 | 14 | 0
$AX_{1}^{*}$ | 100 | 100 | 99 | 100 | 100 | 33 | 100 | 100 | 100
$AX_{2}$ | 53 | 51 | 2 | 73 | 44 | 3 | 52 | 44 | 1
$AX_{3}$ | 2 | 26 | 2 | 6 | 23 | 0 | 2 | 23 | 1
$AX_{4}$ | 4 | 28 | 2 | 5 | 24 | 1 | 3 | 24 | 1
$AX_{5}$ | 4 | 29 | 4 | 4 | 26 | 2 | 2 | 26 | 2
$AX_{6}$ | 2 | 26 | 2 | 4 | 21 | 1 | 1 | 21 | 1
$AX_{7}$ | 3 | 25 | 2 | 5 | 22 | 1 | 2 | 22 | 0
$AX_{8}$ | 3 | 27 | 3 | 6 | 23 | 1 | 2 | 23 | 0
$AX_{9}$ | 2 | 27 | 3 | 6 | 24 | 1 | 2 | 24 | 1
$AX_{10}$ | 2 | 28 | 2 | 5 | 22 | 1 | 1 | 22 | 1
ER | 3.9 | 9.8 | 22.9 | 5.5 | 3.4 | 12.0 | 4.2 | 3.4 | 23.4
ER (Refitted) | 4.5 | 8.5 | 4.9 | 3.4 | 3.6 | 8.7 | 3.6 | 3.6 | 3.8
Value | 0.6 | 0.6 | 0.5 | 0.6 | 0.7 | 0.6 | 0.6 | 0.7 | 0.5
Value (Refitted) | 0.6 | 0.6 | 0.6 | 0.6 | 0.7 | 0.6 | 0.7 | 0.7 | 0.6
* Term with a non-zero coefficient in the data-generating model
* Note that $Ae^{X_{1}}$ was not included in the blip model for Scenario 2
### 3.3 Simulations Evaluating Performance in a High-dimensional Setting
We now increase the dimension to $p=400$, set $n=200$, and examine the
performance of the new procedure in this high-dimensional setting. The
data generation procedure is the same as in Section 3.2, except that we now
set $P(A=1)$ to 0.5 for everyone such that no confounding is present. The blip
function is $\gamma(\textbf{\emph{x}},a;\boldsymbol{\psi})=a(1-1.5x_{1})$
where $\psi_{0}=1,\psi_{1}=-1.5$ and the treatment-free model is
$f(\textbf{\emph{x}};\boldsymbol{\beta})=0.5-0.6e^{x_{1}}-2x_{1}-2x_{2}$. We
regress Y on
$(\boldsymbol{1},\textbf{\emph{X}},\boldsymbol{A},\boldsymbol{A}\textbf{\emph{X}})$
where the treatment-free model is misspecified.
Figure 1 summarizes the estimates of blip parameters using the three methods
in the high dimensional setting. As before, for all methods, the refitted
estimators improved on their penalized counterparts. For
$\psi_{0}$, Q-learning with LASSO and its refitted estimator have the smallest
bias; as for $\psi_{1}$, pdWOLS and its refitted version have the smallest
bias.
Figure 1: Estimates of blip parameters using pdWOLS, Q-learning (LASSO), PAL
and their refitted versions with sample size 200 (400 simulations) in a high
dimensional ($p=400$) setting. The true value is represented by the dotted
line.
Table 2 shows the variable selection results (FN and FP, where FN is the false
negative rate at which a method wrongly removes a truly important variable,
and FP is the false positive rate at which it wrongly includes an
unimportant variable), error rate and value under the estimated rules of the
three methods. The average value function and the error rates were computed
over a testing set of size 10,000. Q-learning with LASSO achieves a zero false
negative rate, while pdWOLS and refitted pdWOLS have the lowest false positive
rate, the lowest error rate and the highest value, which indicates the
favorable performance of the newly proposed method. However, unlike before,
even though the refitted PAL estimator has smaller bias than the PAL
estimator, refitting did not improve PAL's value or error rate, showing that
smaller bias in the estimation of blip parameters does not necessarily
translate into a better-performing estimated regime.
Table 2: False negative (FN, %) rate and false positive (FP, %) rate of variable selection results of the blip parameters, error rate (ER, %) and value using pdWOLS, Q-learning with LASSO (QL), PAL and their refitted versions with sample size 200 (400 simulations) in a high dimensional ($p=400$) setting. The main effect of treatment is not penalized (and hence is always selected).
 | FN | FP | ER | Value
---|---|---|---|---
pdWOLS | 0.3 | 0.2 | 12.8 | 0.7
QL (LASSO) | 0.0 | 1.4 | 11.3 | 0.7
PAL | 2.6 | 0.4 | 24.6 | 0.6
RpdWOLS | 0.3 | 0.2 | 9.9 | 0.7
RQL (LASSO) | 0.0 | 1.4 | 16.8 | 0.6
RPAL | 2.6 | 0.4 | 25.1 | 0.5
### 3.4 Simulations Evaluating Performance in Multi-stage Setting
In this subsection, we demonstrate the performance of the proposed pdWOLS
approach when treatment decisions are made at multiple stages. Following
previous literature, we consider two data generation procedures: the one in
which the true treatment-free models have no analytical form (misspecified
treatment-free models) is presented here as Setting 1, while another setting
(Setting 2, where the treatment-free models can be computed analytically) is
available in the Supplemental Material.
We follow the data generation procedure in (Wallace and Moodie, 2015) with a
sample size of 1000:
Step 1: Generate 10 covariates at stage 1 ($X_{1}-X_{10}$) where $X_{i}\sim
N(0,1)$.
Step 2: Generate treatment at stage $k$ according to
$P(A_{k}=1|X_{1k},X_{2k})=\frac{\exp(x_{1k}-x_{2k})}{1+\exp(x_{1k}-x_{2k})}$
for $k=1,2$.
Step 3: Generate covariates at stage 2 such that $X_{12}\sim
N(0.5A_{1}+0.8X_{11},1)$ and $X_{j2}\sim N(0.8X_{j1},1)$ for $j=2,3,...10$.
Step 4: Set the blip functions to be
$\gamma_{1}(x_{1},a_{1};\boldsymbol{\psi}_{1})=a_{1}(0.8-2x_{11})$ and
$\gamma_{2}(x_{2},a_{2};\boldsymbol{\psi}_{2})=a_{2}(1-1.5x_{12})$, so that
$\psi_{01}=0.8$, $\psi_{11}=-2$, $\psi_{02}=1$, and $\psi_{12}=-1.5$.
Step 5: Generate the outcome under optimal treatment according to
$y^{opt}=0.5+2x_{11}+2x_{12}$ (i.e. so as to depend only on covariates x at
stage 1). The observed outcome is generated such that $Y\sim
N(y^{opt}-\mu_{1}-\mu_{2},1)$ where $\mu_{1}$ and $\mu_{2}$ are regret
function at stages 1 and 2, defined through the blip functions in step 4.
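The two-stage generating process of Steps 1-5 can be sketched in Python as follows (illustrative translation; the paper's simulations used R):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# Stage-1 covariates and treatment
X1 = rng.normal(size=(n, 10))
p1 = 1 / (1 + np.exp(-(X1[:, 0] - X1[:, 1])))
A1 = rng.binomial(1, p1)

# Stage-2 covariates depend on the stage-1 treatment and covariates
X2 = np.empty_like(X1)
X2[:, 0] = rng.normal(0.5 * A1 + 0.8 * X1[:, 0], 1.0)
for j in range(1, 10):
    X2[:, j] = rng.normal(0.8 * X1[:, j], 1.0)
p2 = 1 / (1 + np.exp(-(X2[:, 0] - X2[:, 1])))
A2 = rng.binomial(1, p2)

# Blips, optimal rules, and regrets mu_k = gamma_k(a_k^opt) - gamma_k(a_k)
blip1 = lambda a: a * (0.8 - 2.0 * X1[:, 0])
blip2 = lambda a: a * (1.0 - 1.5 * X2[:, 0])
a1_opt = (0.8 - 2.0 * X1[:, 0] > 0).astype(int)
a2_opt = (1.0 - 1.5 * X2[:, 0] > 0).astype(int)
mu1 = blip1(a1_opt) - blip1(A1)
mu2 = blip2(a2_opt) - blip2(A2)

# Outcome: optimal outcome minus the regrets, plus unit-variance noise
y_opt = 0.5 + 2 * X1[:, 0] + 2 * X2[:, 0]
Y = rng.normal(y_opt - mu1 - mu2, 1.0)
```

The regrets are non-negative by construction, so on average the observed outcome lies below the outcome under optimal treatment at both stages.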
Recall that a backward recursive approach can be used to make the treatment
decision. Starting from the last stage, the estimation procedure is applied to
the observed outcome y. The estimated blip parameters and the estimated rules
$\hat{a}_{2}^{opt}$ are obtained. Estimation then proceeds to stage 1, where
again the estimation procedure is applied to a pseudo-outcome, which represents
the expected outcome had the observed stage 2 treatment been replaced by the
optimal stage 2 treatment. In pdWOLS, the pseudo-outcome is
$\tilde{y}_{1}=y+\gamma_{2}(\textbf{\emph{x}}_{2},\hat{a}_{2}^{opt};\hat{\boldsymbol{\psi}}_{2})-\gamma_{2}(\textbf{\emph{x}}_{2},a_{2};\hat{\boldsymbol{\psi}}_{2})$,
whereas for Q-learning with LASSO, it is
$\tilde{y}_{1}^{Q}=f(\textbf{\emph{x}}_{2};\hat{\boldsymbol{\beta}}_{2})+\gamma_{2}(\textbf{\emph{x}}_{2},\hat{a}_{2}^{opt};\hat{\boldsymbol{\psi}}_{2})$.
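Given fitted stage-2 quantities, the two pseudo-outcomes can be computed as below. The fitted parameter values, the toy covariates, and the toy outcomes are hypothetical placeholders, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
x2 = rng.normal(size=n)            # stage-2 tailoring covariate (toy)
a2 = rng.binomial(1, 0.5, size=n)  # observed stage-2 treatment (toy)
y = rng.normal(size=n)             # observed outcome (toy values)

psi2_hat = np.array([1.0, -1.5])   # assumed fitted stage-2 blip parameters
beta2_hat = np.array([0.5, 2.0])   # assumed fitted stage-2 treatment-free model

def blip2(a):
    return a * (psi2_hat[0] + psi2_hat[1] * x2)

a2_opt = (psi2_hat[0] + psi2_hat[1] * x2 > 0).astype(int)

# pdWOLS: observed y plus the regret of the observed stage-2 treatment
y1_pdwols = y + blip2(a2_opt) - blip2(a2)

# Q-learning: predicted outcome under the optimal stage-2 treatment
f2_hat = beta2_hat[0] + beta2_hat[1] * x2
y1_q = f2_hat + blip2(a2_opt)
```

Note that the pdWOLS pseudo-outcome equals the observed outcome whenever the observed stage-2 treatment was already optimal, whereas the Q-learning pseudo-outcome replaces the observed outcome entirely with a model prediction, which is why it inherits any misspecification of the stage-2 treatment-free model.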
In this setting, the treatment-free model in the second stage of estimation
aims to represent $y^{opt}-\mu_{1}-a_{2}^{opt}(1-1.5x_{12})$, which depends on
$a_{2}^{opt}$, itself a function of the second stage parameters
$\boldsymbol{\psi}_{2}$ and covariate $x_{2}$. The treatment-free model in
this setting therefore cannot be computed analytically; we nevertheless
assumed that the treatment-free models were linear in the covariates measured
at their respective stages. Thus, in these simulations, the treatment-free
models are always misspecified while, for those methods relying on a
propensity score, the treatment models are fit at each stage using correctly
specified logistic regression models with all covariates measured at that
stage.
Figure 2 summarizes the estimates of blip parameters using the three methods
in two-stage Setting 1. As expected, pdWOLS and PAL work when at least one of
the treatment or treatment-free models is correctly specified (in this case,
the treatment model is correctly specified), while Q-learning with LASSO fails,
since the treatment-free models at both stages are misspecified. For pdWOLS and
PAL, refitted estimators are nearly unbiased, and they perform better than
their penalized counterparts. At stage 1, the PAL estimators have large bias;
however, after the refitting procedure, the bias decreased to almost zero,
which indicates the excellent variable selection performance of the Dantzig
selector but the need for an additional step for accurate estimation.
Unlike PAL, pdWOLS can have small bias even without the refitting procedure.
Figure 2: Estimates of blip parameters using pdWOLS, Q-learning with LASSO
(QL), PAL and their refitted versions with sample size 1000 (400 simulations)
in two-stage Setting 1. The true value is represented by the dotted line.
Table 3 presents the variable selection results for optimal treatment
decisions. The important tailoring variable was selected by all the methods at
both stages. At stage 2, the false positive rate of pdWOLS is much smaller
than that of the other two methods: for instance, the selection frequencies of
$AX_{2}-AX_{10}$ are all less than $5\%$, as at stage 1. Note that at stage 1,
because the pseudo-outcomes differ between the refitted versions and their
penalized counterparts, the variable selection results differ as well.
Table 3: Variable selection rate (%) of the blip parameters using pdWOLS,
Q-learning with LASSO (QL), PAL and their refitted versions with sample size
1000 (400 simulations) in two-stage Setting 1. The main effect of treatment is
not penalized (and hence is always selected).
| Stage 1 | Stage 2
---|---|---
| pdWOLS | QL | PAL | RpdWOLS | RQL | RPAL | pdWOLS | QL | PAL
$AX_{1}$ * | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
$AX_{2}$ | 49 | 32 | 1 | 45 | 33 | 2 | 22 | 44 | 33
$AX_{3}$ | 4 | 28 | 2 | 2 | 34 | 2 | 2 | 41 | 38
$AX_{4}$ | 3 | 30 | 0 | 2 | 34 | 2 | 3 | 45 | 37
$AX_{5}$ | 3 | 25 | 1 | 2 | 29 | 2 | 2 | 40 | 40
$AX_{6}$ | 4 | 25 | 1 | 2 | 29 | 1 | 3 | 40 | 40
$AX_{7}$ | 4 | 27 | 0 | 2 | 33 | 2 | 3 | 40 | 38
$AX_{8}$ | 4 | 28 | 0 | 2 | 30 | 1 | 2 | 42 | 38
$AX_{9}$ | 4 | 26 | 1 | 2 | 32 | 4 | 2 | 41 | 36
$AX_{10}$ | 3 | 29 | 2 | 2 | 32 | 2 | 2 | 44 | 38
* Term with a non-zero coefficient in the data-generating model
Table 4 summarizes the error rates of the estimated optimal treatment regimes
for treatment decision making and value functions. The average value function
and the error rates were computed over a testing set of size 10,000. As before,
refitted methods have lower error rate and higher value functions than their
penalized counterparts. pdWOLS outperforms other methods at both stages with
respect to the error rate and value function; refitted PAL improved the
performance of PAL significantly as expected.
Table 4: Error rate (%) and value function using pdWOLS, Q-learning with LASSO (QL), PAL and their refitted versions with sample size 1000 (400 simulations) in two-stage Setting 1. Stage-wise error rates and the total error rate (TER, %) across both stages are shown.
 | TER | ER (Stage 1) | ER (Stage 2) | Value
---|---|---|---|---
pdWOLS | 9.2 | 2.2 | 7.2 | 0.4
QL (LASSO) | 52.8 | 50.4 | 4.6 | -0.6
PAL | 22.4 | 14.5 | 9.8 | 0.4
RpdWOLS | 6.5 | 2.0 | 4.6 | 0.5
RQL (LASSO) | 58.0 | 54.8 | 6.1 | -0.7
RPAL | 11.3 | 2.1 | 9.5 | 0.4
## 4 Application to STAR*D Study
In this section, we apply pdWOLS to data from the STAR*D (Fava et al., 2003)
study, a multistage randomized trial that aimed to determine effective
treatments for patients with major depressive disorder, where severity was
measured using the Quick Inventory of Depressive Symptomatology (QIDS) score
(Rush et al., 2003). The study was divided into four levels (one of which had
two sub-levels); patients had different treatments at each level and would
exit the study upon achieving remission. Additional details are provided
in the Supplemental Materials.
Our analysis follows (Wallace et al., 2019) and (Chakraborty et al., 2013): a
two-stage study based on use of a selective serotonin reuptake inhibitor
(SSRI) with binary treatment and negative QIDS score as the outcome. Three
tailoring variables were considered: (1) the QIDS score measured at the
beginning of each level (denoted by $q_{k}$ at stage $k$); (2) change in QIDS
score divided by the time in the previous level (QIDS slope, denoted by
$s_{k}$ at stage $k$); and (3) patient preference measured prior to receiving
treatment which is a binary variable (denoted by $p_{k}$ at stage $k$). In
addition, we generate $d$ independent and identically distributed noise
variables at each stage: noise variables at stage 1 were generated using
$X_{j1}\sim N(0,1)$ and at stage 2 by $X_{j2}\sim
N(\log\left\lvert X_{j1}\right\rvert,1)$ for $j=1,2,\dots,d$. We consider three
scenarios for the analysis where $d=5,10,20$ respectively.
Logistic regression was used to estimate the treatment model adjusting for
patient preference only, following the trial design, and weights
$w=\left\lvert A-E(A|X)\right\rvert$ were used in the analysis. As in (Wallace
et al., 2019), the treatment-free models are linear in $(q_{1},s_{1},p_{1})$
at stage 1 and $(a_{1},q_{2},s_{2},p_{2})$ at stage 2. Linear blip models with
covariates $(q_{1},s_{1},p_{1})$ at stage 1, and $(a_{1},q_{2},s_{2},p_{2})$
at stage 2 were considered. As noted in (Wallace et al., 2019), $a_{1}$ and
$p_{2}$ were not included in their blip models to avoid multicollinearity;
this is not necessary in pdWOLS and hence our model specifications differ.
As in our simulations, the main effect of the treatment was not penalized. In
all three scenarios and at both stages, pdWOLS returned the intercept-only
blip model: it suggested that the optimal treatments are $A_{2}=0$ (treat
without SSRI) at stage 2 and $A_{1}=1$ (treat with SSRI) at stage 1 for all
patients. As for PAL, the results vary across scenarios:
when $d=5$, it selected $a_{2},q_{2},s_{2},p_{2}$ at stage $2$ and
$a_{1},q_{1},s_{1},p_{1}$ at stage $1$; when $d=10$, it selected $a_{2}$ at
stage $2$ and $a_{1},q_{1},s_{1},p_{1}$ at stage $1$; when $d=20$, it selected
$a_{2}$ at stage $2$ and $a_{1},p_{1}$ at stage $1$. pdWOLS selected zero
noise variables at both stages in all scenarios, while the false positive
rates of PAL at stage $2$ and $1$ are $100\%$, $40\%$ ($d=5$), $10\%$, $50\%$
($d=10$), and $10\%$, $45\%$ ($d=20$) respectively. By comparison,
(Chakraborty et al., 2013) and (Wallace et al., 2019) found that no stage 2
blip covariates were statistically significant which is consistent with our
results; while at stage 1, only the treatment preference was significant.
## 5 Discussion
In this article, we extended the dWOLS (Wallace and Moodie, 2015) approach to
a penalized estimation framework for variable selection and estimating the
optimal treatment regimes simultaneously. The new method automatically
enforces the strong heredity constraint. The proposed method inherits the
desired double robustness property from dWOLS: the estimators and the variable
selection of the blip parameters are consistent if either the treatment model
or the treatment-free model is correct, assuming that the blip function is
correctly specified. Our simulations indicated that pdWOLS compares favorably
with other variable selection approaches in the context of DTRs.
The standard errors for the estimated blip parameters can be obtained
directly: a sandwich formula for computing the covariance of the estimates of
the non-zero components can be derived using local quadratic approximation
(Fan and Li, 2001). In addition, the local quadratic approximation sandwich
formula is consistent (Fan and Peng, 2004). How to derive the standard errors
for the estimated blip parameters under the use of refitted pdWOLS is an
interesting topic that needs further investigation. Post selection inference
should also be addressed: inferential methods that can compensate for the fact
that the model was picked in a data-dependent way should be developed.
It is of note that the proposed method is, fundamentally, one based on
prediction: it will select any variables that can improve the predictive
ability. As such, in finite samples, pdWOLS may underestimate the importance
of variables that have small predictive ability but that play a significant role
in DTRs, e.g., some variables may not predict the outcome very well, but they
need to be included in order to guarantee ignorability. Moreover, the
application of predictive methods directly to causal models may result in
inflated variances and self-inflicted bias (Hernán and Robins, 2020). The
importance of the distinction between DTRs (causal inference) and prediction
must be kept in mind. Variable selection in causal inference is a tough
problem: on the one hand, we want to adjust enough covariates in the analysis
to achieve ignorability; on the other hand, adjustment for some other
"irrelevant" variables would induce bias and losses of statistical efficiency
(Rotnitzky et al., 2010; De Luna et al., 2011). Hence, a thoughtful selection
of confounders is needed, using expert knowledge to guide variable selection
is encouraged. Other discussions about confounder selection can be found in
(Shortreed and Ertefaie, 2017; Robins and Greenland, 1986; Schneeweiss et al.,
2009). For pdWOLS, if we are worried about confounding and our focus is on
building simple rules, we may want to do minimal selection on main effects but
lots of selection on interaction effects (blip parameters), which can be
implemented by setting small adaptive weights $w_{j}$ for the main effects (or
by setting a large $\alpha$). How to choose the tuning parameters $\lambda$ and
$\alpha$ in a DTRs framework is an open but intriguing problem. Cross
validation, Bayesian information criteria (BIC) (Schwarz, 1978) and its
several modified versions (Chen and Chen, 2008; Gao and Song, 2010) are widely
used in penalized likelihood when the goal is prediction. It would be of
interest for future work to further investigate how to select the tuning
parameter.
## References
* Alam et al. (2019) Alam, S., Moodie, E. E., and Stephens, D. A. (2019). Should a propensity score model be super? The utility of ensemble procedures for causal adjustment. Statistics in Medicine, 38(9):1690–1702.
* Bhatnagar et al. (2020) Bhatnagar, S. R., Lu, T., Lovato, A., Olds, D. L., Kobor, M. S., Meaney, M. J., O’Donnell, K., Yang, Y., and Greenwood, C. M. (2020). A sparse additive model for high-dimensional interactions with an exposure variable. BioRxiv, page 445304.
* Blatt et al. (2004) Blatt, D., Murphy, S. A., and Zhu, J. (2004). A-learning for approximate planning. Ann Arbor, 1001:48109–2122.
* Candes and Tao (2007) Candes, E. and Tao, T. (2007). The Dantzig selector: Statistical estimation when $p$ is much larger than $n$. The Annals of Statistics, 35(6):2313–2351.
* Chakraborty et al. (2013) Chakraborty, B., Laber, E. B., and Zhao, Y. (2013). Inference for optimal dynamic treatment regimes using an adaptive $m$-out-of-$n$ bootstrap scheme. Biometrics, 69(3):714–723.
* Chakraborty and Moodie (2013) Chakraborty, B. and Moodie, E. E. (2013). Statistical methods for dynamic treatment regimes. Springer.
* Chen and Chen (2008) Chen, J. and Chen, Z. (2008). Extended Bayesian information criteria for model selection with large model spaces. Biometrika, 95(3):759–771.
* Choi et al. (2010) Choi, N. H., Li, W., and Zhu, J. (2010). Variable selection with the strong heredity constraint and its oracle property. Journal of the American Statistical Association, 105(489):354–364.
* De Luna et al. (2011) De Luna, X., Waernbaum, I., and Richardson, T. S. (2011). Covariate selection for the nonparametric estimation of an average treatment effect. Biometrika, 98(4):861–875.
* Fan et al. (2016) Fan, A., Lu, W., and Song, R. (2016). Sequential advantage selection for optimal treatment regime. The Annals of Applied Statistics, 10(1):32.
* Fan and Li (2001) Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360.
* Fan and Peng (2004) Fan, J. and Peng, H. (2004). Nonconcave penalized likelihood with a diverging number of parameters. The Annals of Statistics, 32(3):928–961.
* Fava et al. (2003) Fava, M., Rush, A. J., Trivedi, M. H., Nierenberg, A. A., Thase, M. E., Sackeim, H. A., Quitkin, F. M., Wisniewski, S., Lavori, P. W., Rosenbaum, J. F., and Kupfer, D. (2003). Background and rationale for the sequenced treatment alternatives to relieve depression (STAR* D) study. Psychiatric Clinics of North America, 26(6):457–494.
* Friedman et al. (2007) Friedman, J., Hastie, T., Höfling, H., Tibshirani, R., et al. (2007). Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2):302–332.
* Gao and Song (2010) Gao, X. and Song, P. X.-K. (2010). Composite likelihood Bayesian information criteria for model selection in high-dimensional data. Journal of the American Statistical Association, 105(492):1531–1540.
* Gunter et al. (2011) Gunter, L., Zhu, J., and Murphy, S. (2011). Variable selection for qualitative interactions. Statistical Methodology, 8(1):42–55.
* Hastie et al. (2010) Hastie, T., Tibshirani, R., and Friedman, J. (2010). Regularized paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1–22.
* Hernán and Robins (2020) Hernán, M. A. and Robins, J. M. (2020). Causal Inference: What If. Chapman & Hall/CRC. Taylor & Francis.
* Lu et al. (2013) Lu, W., Zhang, H. H., and Zeng, D. (2013). Variable selection for optimal treatment decision. Statistical Methods in Medical Research, 22(5):493–504.
* Murphy (2003) Murphy, S. A. (2003). Optimal dynamic treatment regimes. Journal of the Royal Statistical Society: Series B (Methodological), 65(2):331–355.
* Robins (1997) Robins, J. M. (1997). Causal inference from complex longitudinal data. In Berkane, M., editor, Latent Variable Modeling and Applications to Causality: Lecture Notes in Statistics, pages 69–117. Springer.
* Robins (2004) Robins, J. M. (2004). Optimal structural nested models for optimal sequential decisions. In Lin, D. Y. and Heagerty, P., editors, Proceedings of the Second Seattle Symposium in Biostatistics, pages 189–326. Springer.
* Robins and Greenland (1986) Robins, J. M. and Greenland, S. (1986). The role of model selection in causal inference from nonexperimental data. American Journal of Epidemiology, 123(3):392–402.
* Rosenbaum and Rubin (1983) Rosenbaum, P. R. and Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55.
* Rotnitzky et al. (2010) Rotnitzky, A., Li, L., and Li, X. (2010). A note on overadjustment in inverse probability weighted estimation. Biometrika, 97(4):997–1001.
* Rubin (1980) Rubin, D. (1980). Discussion of "Randomization analysis of experimental data in the Fisher randomization test" by D. Basu. Journal of the American Statistical Association, 75(371):591–593.
* Rush et al. (2003) Rush, A. J., Trivedi, M. H., Ibrahim, H. M., Carmody, T. J., Arnow, B., Klein, D. N., Markowitz, J. C., Ninan, P. T., Kornstein, S., Manber, R., et al. (2003). The 16-item quick inventory of depressive symptomatology (QIDS), clinician rating (QIDS-C), and self-report (QIDS-SR): a psychometric evaluation in patients with chronic major depression. Biological Psychiatry, 54(5):573–583.
* Schneeweiss et al. (2009) Schneeweiss, S., Rassen, J. A., Glynn, R. J., Avorn, J., Mogun, H., and Brookhart, M. A. (2009). High-dimensional propensity score adjustment in studies of treatment effects using health care claims data. Epidemiology, 20(4):512.
* Schwarz (1978) Schwarz, G. E. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2):461–464.
* Shi et al. (2018) Shi, C., Fan, A., Song, R., and Lu, W. (2018). High-dimensional A-learning for optimal dynamic treatment regimes. The Annals of Statistics, 46(3):925.
* Shortreed and Ertefaie (2017) Shortreed, S. M. and Ertefaie, A. (2017). Outcome-adaptive lasso: Variable selection for causal inference. Biometrics, 73(4):1111–1122.
* Tibshirani (1996) Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288.
* Wallace and Moodie (2015) Wallace, M. P. and Moodie, E. E. (2015). Doubly-robust dynamic treatment regimen estimation via weighted least squares. Biometrics, 71(3):636–644.
* Wallace et al. (2019) Wallace, M. P., Moodie, E. E., and Stephens, D. A. (2019). Model selection for G-estimation of dynamic treatment regimes. Biometrics, 75(4):1205–1215.
* Watkins (1989) Watkins, C. J. C. H. (1989). Learning from delayed rewards. PhD thesis, University of Cambridge.
* Zou (2006) Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476):1418–1429.
* Zou and Hastie (2005) Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Methodological), 67(2):301–320.
# Dynamic Longest Increasing Subsequence and the Erdös-Szekeres Partitioning Problem
(The results of this manuscript appeared in preliminary versions [24] (STOC'20) and [25].)
Michael Mitzenmacher, Saeed Seddighin
Toyota Technological Institute at Chicago
###### Abstract
In this paper, we provide new approximation algorithms for dynamic variations
of the longest increasing subsequence (LIS) problem, and the complementary
distance to monotonicity (DTM) problem. In this setting, operations of the
following form arrive sequentially: (i) add an element, (ii) remove an
element, or (iii) substitute an element for another. At every point in time,
the algorithm has an approximation to the longest increasing subsequence (or
distance to monotonicity). We present a $(1+\epsilon)$-approximation algorithm
for DTM with polylogarithmic worst-case update time and a constant factor
approximation algorithm for LIS with worst-case update time
$\tilde{O}(n^{\epsilon})$ for any constant $\epsilon>0$.
Our dynamic algorithm for LIS leads to an almost optimal algorithm for the
Erdös-Szekeres partitioning problem. The Erdös-Szekeres partitioning problem
was introduced by Erdös and Szekeres in 1935 and was known to be solvable in
time $O(n^{1.5}\log n)$; subsequent work improved the runtime to $O(n^{1.5})$
only in 1998. Our dynamic LIS algorithm leads to a solution for the Erdös-Szekeres
partitioning problem with runtime $\tilde{O}_{\epsilon}(n^{1+\epsilon})$ for
any constant $\epsilon>0$.
## 1 Introduction
Longest increasing subsequence (LIS) is one of the oldest problems in computer
science. Given an array $a=\langle a_{1},a_{2},\ldots,a_{n}\rangle$ of size
$n$, LIS is defined as the largest subsequence of elements whose values are
strictly increasing in the order of their indices. Distance to monotonicity
(DTM) is the dual problem. For DTM, we wish to remove the smallest number of
elements such that the remaining subsequence is increasing. LIS and DTM are
special cases of the celebrated edit distance and longest common subsequence
problems when the input strings are permutations.
The classic patience sorting solution for LIS utilizes dynamic programming and
binary search to solve LIS exactly in time $O(n\log n)$. (In what follows,
when we refer to a solution, we typically refer to the size of the LIS, but
also the corresponding increasing subsequence can be found in time
proportional to its size.) Matching lower bounds ($\Omega(n\log n)$) are known
for comparison-based algorithms [13] and solutions based on algebraic decision
trees [31]. For approximation algorithms, for any $\epsilon>0$, a
multiplicative $O(n^{\epsilon})$-approximate solution can be determined in
truly sublinear time via random sampling (for an $O(n^{\epsilon})$
approximation, one can sample $O(n^{1-\epsilon})$ elements from
the array and report the LIS of those samples). Surprisingly, not much is
known that improves upon this algorithm generally, although when
$n/\mathsf{LIS}(a)$ is subpolynomial we can obtain better approximation
guarantees for LIS [32, 33].
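The patience sorting routine mentioned above can be sketched in a few lines of Python; the function name `lis_length` is illustrative, not from the paper:

```python
from bisect import bisect_left

def lis_length(a):
    """Length of the longest strictly increasing subsequence in O(n log n).

    tails[k] holds the smallest possible tail value of an increasing
    subsequence of length k + 1 seen so far; tails stays sorted, so each
    element is placed by binary search."""
    tails = []
    for x in a:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence found so far
        else:
            tails[i] = x      # x gives a smaller tail for subsequences of length i + 1
    return len(tails)
```

For example, `lis_length([7, 2, 4, 1, 9, 6, 3, 5, 8])` (the array used in the figures below) returns 4.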
From a complexity point of view, unconditional lower bounds apply to LIS. For
instance, any algorithm that obtains an $f(n)$-approximate solution for LIS
has to make at least $n/(f(n)+1)$ value queries (a value query provides an
index $i$ as input and asks for the value of $a_{i}$) to the elements of $a$ to
distinguish the case that $a$ is decreasing from the case that $a$ has an
increasing subsequence of length at least $f(n)+1$. Thus a subpolynomial
approximation algorithm for LIS in truly sublinear time is not possible in
general. Even if we are guaranteed that the solution size is
$\Theta(n^{1-\epsilon})$ (a setting for which the known complexity lower
bounds do not apply), we are not aware of any subpolynomial approximate
solution for LIS. Very recently, Rubinstein et al. [32] obtain
$O(n^{3\epsilon})$ approximation in time $\tilde{O}(n^{0.5+7\epsilon})$ in
this case. Also, these lower bounds do not hold for stronger computational
models such as quantum computation, but we do not have better general quantum
approximation algorithms.
In this work, we focus on approximation algorithms in the dynamic setting,
where at each step, the array can be updated by inserting, deleting, or
modifying an element. The goal is to maintain an approximation of the correct
value at each step. Dynamic settings for many problems have been studied, e.g.
[19, 27, 16, 4, 3, 7, 23, 5, 28]. In general, in dynamic settings the goal is
to develop an algorithm where the solution can be updated efficiently given
incremental changes to the input. In the context of graph algorithms [27, 28,
23, 4, 3, 5], such changes are usually modeled by edge addition or edge
deletion. For string problems, changes are typically modeled with character
insertion and character deletion [16, 7], as we consider here.
We provide novel approximation algorithms for LIS and DTM in the dynamic
setting. For LIS, for any $0<\epsilon<1$, we give a dynamic algorithm with
worst-case update time $\tilde{O}(n^{\epsilon})$ and approximation factor
$O_{\epsilon}(1)$; that is, for constant $\epsilon$, the approximation factor
is a constant that depends on $\epsilon$. For DTM, we present an algorithm
with approximation factor $1+\epsilon$ for any constant $\epsilon$ and worst-
case update time $O(\log^{2}n)$, where the order notation hides factors that
can depend on $\epsilon$. We primarily treat $\epsilon$ as constant since the
exponent of the $\log$ factors suppressed by the $\tilde{O}$ notation may
depend on $1/\epsilon$. Here, $n$ denotes the size of the array at the time
the operation arrives. Thus the update time does not depend on the number of
operations arrived prior to the new operation.
Problem | Approximation factor | Update time
---|---|---
LIS | $1+\epsilon$ | $\tilde{O}(\sqrt{n})$
LIS | $O((1/\epsilon)^{O(1/\epsilon)})$ | $\tilde{O}(n^{\epsilon})$
$\mathsf{LIS}^{+}$ | $O(\log n)$ | $O(\log^{3}n)$
DTM | $1+\epsilon$ | $O(\log^{2}n)$
Table 1: The results of this paper are summarized in this table.
$\mathsf{LIS}^{+}$ is a special case of LIS where only element insertion is
allowed.
### 1.1 Erdös-Szekeres partitioning problem
Our dynamic algorithm has an interesting application to a long-standing
mathematical problem, namely the Erdös-Szekeres partitioning problem. It is
well-known that any sequence of size $n$ can be decomposed into $O(\sqrt{n})$
monotone subsequences. The proof follows from a simple fact: Any sequence of
length $n$ contains either an increasing subsequence of length $\sqrt{n}$ or a
non-increasing subsequence of length $\sqrt{n}$. Thus, one can iteratively
find the maximum increasing and the maximum non-increasing subsequences of a
sequence and take the larger one as one of the solution partitions. Next, by
removing the partition from the original sequence and repeating this procedure
with the remainder of the elements we obtain a decomposition into at most
$O(\sqrt{n})$ partitions. The computational challenge, also known as the
Erdös-Szekeres partitioning problem, is to do this in an efficient way. The
above algorithm can be implemented in time $O(n^{1.5}\log n)$ if we use
patience sorting in every iteration. Bar-Yehuda and Fogel [34] improved the
runtime to $O(n^{1.5})$ by designing an algorithm that, after a
preprocessing step, solves LIS in time $O(n+k^{2})$ when the solution size is
bounded by $k$. Since any comparison-based algorithm takes time at least
$\tilde{\Omega}(n)$, the gap for the Erdös-Szekeres partitioning problem has been
$\tilde{\Omega}(\sqrt{n})$ for quite a long time and the question was raised
in a number of works as an important open problem [30, 18].
We prove that via our dynamic LIS algorithm, the Erdös-Szekeres partitioning
problem can be solved in time $\tilde{O}_{\epsilon}(n^{1+\epsilon})$ for any
constant $\epsilon>0$. We assume that our algorithm performs as stated in
Table 1.
###### Theorem 1.
For any constant $\epsilon>0$, one can in time
$\tilde{O}_{\epsilon}(n^{1+\epsilon})$ partition any sequence of length $n$ of
distinct integer numbers into $O_{\epsilon}(\sqrt{n})$ monotone (increasing or
decreasing) subsequences.
###### Proof.
The proof follows directly from our algorithm for dynamic LIS. In our dynamic
setting, we start with an empty array $a$ and at every point in time we are
allowed to (i) add an element, or (ii) remove an element, or (iii) substitute
an element for another. The algorithm is able to update the sequence and
estimate the size of the LIS in time $\tilde{O}_{\epsilon}(|a|^{\epsilon})$
where $|a|$ is the size of the array at the time the operation is performed.
Moreover, the approximation factor of our algorithm is constant as long as
$\epsilon$ is constant. More precisely, our algorithm estimates the size of
the longest increasing subsequence within a multiplicative factor of at most
$(1/\epsilon)^{O(1/\epsilon)}$. It follows from our algorithm that by spending
additional time proportional to the reported estimation, our algorithm is able
to also find an increasing subsequence with size equal to the reported length.
Given a sequence of length $n$ with distinct numbers, we use the dynamic
algorithm for LIS to decompose it into $O_{\epsilon}(\sqrt{n})$ monotone
subsequences in time $\tilde{O}_{\epsilon}(n^{1+\epsilon})$. To do so, we
initialize two instances of our dynamic algorithm that keep an approximation
to the longest increasing subsequence and the longest decreasing subsequence
of the array. More precisely, in the first instance, we insert all elements of
the array exactly the same way they appear in our sequence and in the second
instance we insert the elements in the reverse order. Thus the dynamic
algorithm for the second instance always maintains an approximation to the
longest decreasing subsequence of our array.
In every iteration, we estimate the size of the longest increasing and longest
decreasing subsequences of the array via the dynamic LIS algorithm. We then
choose the maximum one and ask the algorithm to give us the sequence
corresponding to the solution reported. Finally, we remove the elements from
both instances of the dynamic algorithm and repeat the same procedure for the
remainder of the elements.
The total runtime of our algorithm is $\tilde{O}_{\epsilon}(n^{1+\epsilon})$:
we insert $n$ elements into each of the instances and then remove $n$
elements, which amounts to $2n$ operations per instance, each taking time
$\tilde{O}_{\epsilon}(n^{\epsilon})$. Moreover, because at every point in
time the maximum estimate we receive from each of the dynamic algorithms is at
least a constant fraction of the actual longest increasing subsequence, we
repeat this procedure at most $O_{\epsilon}(\sqrt{n})$ times. Therefore, we
decompose the sequence into $O_{\epsilon}(\sqrt{n})$ monotone subsequences. ∎
###### Remark 1.
The constant factor hidden in the $O$ notation for the number of partitions is
optimal in neither the algorithm of Theorem 1 nor the previous algorithm of
[34] nor the simple greedy algorithm that runs patience sorting in every step.
### 1.2 Subsequent Work
Since our dynamic algorithm has constant approximation factor, in order to
make sure the number of partitions remains $O(\sqrt{n})$, one needs to set
$\epsilon$ to constant and therefore the gap between our runtime of
$\tilde{O}(n^{1+\epsilon})$ and the lower bound of $\Omega(n)$ remains
polynomial. Two independent subsequent works further tightened the gap. Kociumaka
and Seddighin [22] improve the gap to subpolynomial by presenting a dynamic
algorithm with approximation factor $1-o(1)$ and update time $O(n^{o(1)})$.
Gawrychowski and Janczewski [15] further tighten the gap to polylogarithmic by
obtaining a similar algorithm with polylogarithmic update time (with
polynomial dependence on $1/\epsilon$). The work of Kociumaka and Seddighin
[22] also gives the first exact algorithm for dynamic LIS with sublinear
update time. Their algorithm is able to update the solution in time
$\tilde{O}(n^{2/3})$ after each operation and gives a correct solution with
probability $1-n^{-5}$.
In another subsequent work, Mitzenmacher and Seddighin [26] use the grid-
packing technique given here to obtain an improved sublinear time algorithm
for approximating LIS. Their algorithm is able to obtain an approximation of
LIS in truly sublinear time within a factor of $\Omega(\lambda^{\epsilon})$
where $\epsilon>0$ is an arbitrarily small constant factor and $\lambda$ is
the ratio of the solution size and the input size.
### 1.3 Related Work
LIS has received significant attention in the areas of property testing [11,
10, 12, 1], streaming [17, 14], and massively parallel computation (MPC) [20],
as well as in the standard algorithmic setting [13, 31, 32, 33]. Several
questions remain open about approximation algorithms for LIS. Although a
linear lower bound on the runtime is trivial when the solution size is $O(1)$,
neither convincing lower bounds nor upper bounds are known for approximating
LIS within subpolynomial multiplicative factors if the solution size is larger
($\omega(1)$) in general. For a special case when $n/\textsf{LIS}(a)$ is
subpolynomial, we can approximate the solution size within a subpolynomial
factor in sublinear time [32, 33]. In particular, Saks and Seshadhri [33]
present a $(1+\epsilon)$ approximation algorithm for LIS in sublinear time if
the ratio of $n$ over the solution size is sublogarithmic. The only prior non-
trivial dynamic algorithm for LIS that we are aware of is the work of Chen et
al. [7], where the authors present an exact dynamic algorithm for LIS with
worst-case update time $O(r+\log n)$ when the solution size is bounded by $r$.
The update time for this algorithm can grow up to $\Omega(n)$ if the solution
size is $\Omega(n)$.
When the available memory is sublinear (as it is in the streaming and the MPC
models), patience sorting can be used to compute a solution for smaller
fragments of the input. Previous work shows that these local solutions can be
cleverly merged to obtain $1+\epsilon$ approximate solutions in the streaming
[17] and the MPC models [20]. In contrast, our technique for approximating LIS
is not based on patience sorting. We show it also has an application to a
streaming variant of LIS, and we expect it will have additional applications
in the future.
Distance to monotonicity (a.k.a. Ulam distance) is also a very well-studied
problem [29, 2, 6, 33]. While LIS has resisted a multiplicative approximation
algorithm, DTM can be approximated within a multiplicative factor $1+\epsilon$
in time $\tilde{O}(n/d+\sqrt{n})$ when the solution size is lower bounded by
$d$ [29]. Streaming [17] and MPC [6] algorithms for DTM have also appeared.
### 1.4 Preliminaries
We consider the two problems, LIS and DTM. The input to both problems is an array
$a$ of arbitrary length. For LIS, the goal is to find the length of the
largest subsequence of elements whose values increase according to
their indices. For DTM, the goal is to determine the smallest number of
elements to remove such that the remaining subsequence is increasing. Obviously,
$\mathsf{DTM}(a)=|a|-\mathsf{LIS}(a)$. However, an approximate solution for
one problem does not imply an approximate solution for the other (much like
maximum matching and vertex cover). We assume for simplicity and without loss
of generality that all the numbers are distinct, although one can easily
modify our algorithm to handle repeated numbers.
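The identity $\mathsf{DTM}(a)=|a|-\mathsf{LIS}(a)$ can be illustrated directly; this sketch computes LIS with a simple quadratic DP for clarity rather than patience sorting, and the name `dtm` is illustrative:

```python
def dtm(a):
    """Distance to monotonicity via the identity DTM(a) = |a| - LIS(a).

    best[i] is the length of the longest strictly increasing subsequence
    ending at index i; the LIS length is the maximum over all i."""
    n = len(a)
    best = [1] * n
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:
                best[i] = max(best[i], best[j] + 1)
    return n - max(best, default=0)
```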
Our results here are for the dynamic setting. Initially, the input array is
empty ($|a|=0$). At each step, an element is either inserted at an arbitrary
position of the array or removed from an arbitrary position of the array.
(Element substitution can also be implemented with the previous two
operations, so we consider only insertions and removals.) We also study a
special case of LIS where all operations add elements to the array. We call
this problem $\mathsf{LIS}^{+}$.
We more formally define the array operations. Each insertion operation is of
the form “insert $(i,x)$” where $i$ is an integer between $1$ and the length
of the current array plus one. $i$ specifies the position of element $x$.
After this operation, all the elements whose previous index was at least $i$
will be shifted to the right. Similarly, an operation “delete $(i)$” removes
the $i$’th element of the array and element $i+1$ will replace its position.
Likewise, all the elements whose previous index was at least $i$ will be
shifted to the left.
For simplicity, in our algorithms we assume that at any point random access to
the elements is provided. That is, in every step, one can access the value of
the $i$’th element of the array as a value query. This brings an $O(\log n)$
overhead to the runtime since one needs a data structure that supports
element addition, element removal, and access to the $i$’th element. Any
balanced binary tree (e.g. red-black tree) suffices for that purpose [9]. We
can also recover the position of each element of the array in logarithmic time
with a balanced binary tree.
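As a stand-in for the balanced-tree structure described above, the operation semantics can be captured with a plain Python list. Each operation here is $O(n)$; a balanced tree augmented with subtree sizes brings this down to $O(\log n)$, as the text assumes. The class name is illustrative:

```python
class DynamicArray:
    """Illustrates the "insert (i, x)", "delete (i)", and value-query
    semantics from the text; positions are 1-indexed as in the paper."""

    def __init__(self):
        self.a = []

    def insert(self, i, x):
        # x becomes the i-th element; elements at positions >= i shift right
        self.a.insert(i - 1, x)

    def delete(self, i):
        # remove the i-th element; elements at positions > i shift left
        self.a.pop(i - 1)

    def value(self, i):
        # value query: the current i-th element
        return self.a[i - 1]
```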
## 2 Summary of the Results and Techniques
Our main result is a dynamic algorithm for LIS with worst-case update time
$\tilde{O}(n^{\epsilon})$ and approximation factor
$O((1/\epsilon)^{O(1/\epsilon)})$. Our algorithm is based on a novel technique
which we call grid packing. In this section, we give a high-level summary of
grid packing and our overall approach; full proofs are given in Sections 4 and
5.
### 2.1 Block-based Algorithms
In our work, to simplify our proofs, we utilize the notion of what we call a
block-based algorithm. Very roughly speaking, a block-based algorithm starts
with an array $a$ of length $n$. It can use $f(n)$ preprocessing time, after
which it is responsible for a block of $g(n)$ operations, where each operation
has worst-case update time $h(n)$. We show in Section 3 via a simple reduction
that such a block-based algorithm can be used to obtain a dynamic algorithm
with worst-case update time $\max\{f(n)/g(n),h(n)\}$.
A motivating example shows how the notion of block-based algorithms simplifies the
analysis. Chen et al. [7] show that when the LIS of an array is upper bounded by
$r$, a dynamic algorithm can maintain the exact solution for LIS with worst-
case update time $\tilde{O}(r)$. We show that this exact algorithm yields a
dynamic $(1+\epsilon)$-approximation algorithm with worst-case update time
$\tilde{O}(\sqrt{n})$.
We first provide the intuition and explain the complications. At any point in
time, if the solution value is below $2\sqrt{n}/\epsilon$, then the runtime
guarantee is met by using the algorithm of Chen et al. [7]. Otherwise, we can
compute an exact solution, and then use the same value for up to $\sqrt{n}$
steps to maintain a valid approximation. We then spend time $\tilde{O}(n)$ for
$\sqrt{n}$ operations, leading to amortized update time of
$\tilde{O}(\sqrt{n})$. Deamortizing this approach to bound the worst-case
update time seems cumbersome. The issue is that it is not clear when we should
switch between the two algorithms. For example, if we define a threshold
$\tau$ and switch between the algorithms when the solution size crosses the
threshold $\tau$, we may go back and forth across the threshold. We could
consider multiple thresholds, but at this point we appear to be complicating
the analysis beyond what should be necessary.
Working with the framework of a block-based algorithm conveniently remedies
the problem. Assuming we start with an array $a$ of length $n$, we allow a
preprocessing time of $f(n)=O(n\log n)$ for the algorithm to compute the LIS.
We set $g(n)=\sqrt{n}$. If the LIS value $r$ is above $2\sqrt{n}/\epsilon$,
for the next $\sqrt{n}$ steps, we report $r-i$ in the $i$’th step and can be
sure that our solution is within a small range from the optimal one.
Otherwise, we use the algorithm of Chen et al. [7] with worst-case update time
$O(\sqrt{n})$ for the next $\sqrt{n}$ steps. Using the reduction, this turns
to an algorithm with worst-case update time $\tilde{O}(\sqrt{n})$ for LIS with
approximation factor $1+\epsilon$.
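The block-based scheme above can be sketched as follows. This is a toy illustration under stated simplifications: the exact dynamic algorithm of Chen et al. [7] for the small-LIS branch is replaced here by naive recomputation, so only the stale-value reporting trick is faithful to the text, and all names are illustrative:

```python
import math
from bisect import bisect_left

def lis_length(a):
    """Exact LIS length via patience sorting, O(n log n)."""
    tails = []
    for x in a:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

class BlockLIS:
    """Block-based (1+eps)-approximation sketch: recompute the exact LIS at
    each block boundary; if it is large, report r - i at the i-th step of the
    block (each operation changes the LIS by at most 1, so this stays within
    a 1+eps factor). Otherwise, fall back to an exact method."""

    def __init__(self, a, eps=0.5):
        self.a = list(a)
        self.eps = eps
        self._start_block()

    def _start_block(self):
        self.r = lis_length(self.a)                     # exact LIS at the boundary
        self.block_len = max(1, math.isqrt(max(1, len(self.a))))
        self.ops = 0
        self.stale = self.r >= 2 * self.block_len / self.eps

    def _after_op(self):
        self.ops += 1
        if self.ops >= self.block_len:
            self._start_block()

    def insert(self, i, x):
        self.a.insert(i - 1, x)
        self._after_op()

    def delete(self, i):
        self.a.pop(i - 1)
        self._after_op()

    def query(self):
        if self.stale:
            return self.r - self.ops                    # stale value, within 1+eps
        return lis_length(self.a)  # stand-in for the exact dynamic algorithm of [7]
```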
Corollary 4, [restated informally]. For any constant $\epsilon>0$, there
exists a dynamic algorithm for LIS with worst-case update time
$\tilde{O}(\sqrt{n})$ and approximation factor $1+\epsilon$.
### 2.2 Grid Packing and Applications
As mentioned earlier, our algorithm for LIS is based on a technique that we
call grid packing. Grid packing is defined on a table of $m\times m$ cells;
the only parameter of the problem is $m$. The problem can be thought of as a
game between us and an adversary. We introduce a number of segments on the
table. Each segment covers a consecutive set of cells in either a row or in a
column. A segment $A$ precedes a segment $B$ if every cell of $A$ is strictly
higher than every cell of $B$ and also every cell of $A$ is strictly to the
right of every cell of $B$. Two segments are non-conflicting, if one of them
precedes the other one. Otherwise, we call them conflicting. The segments we
introduce can overlap and there is no restriction on the number of segments or
the length of each segment. However, we would like to minimize the maximum
number of segments that cover a cell.
Figure 1: Segments are shown on the grid. The pair (black, orange) is
conflicting since the yellow cell (covered by the black segment) is on the
same row as the blue cell (covered by the orange segment). The following pairs
are non-conflicting: (green, black), (green, orange), (green, blue), (red,
orange), (red, blue), (black, blue).
After we introduce the segments, an adversary puts a non-negative number on
each cell of the grid. We emphasize that our segments do not depend on these
numbers, as the numbers are given after we provide our segments. The score of
a subset of cells in the table is the sum of the values in the cells, and the
overall score of the table is the maximum score of a path of length $2m-1$
from the bottom-left corner to the top-right corner. (In such a path, each
move is either up or to the right.)
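The score of the table defined above is computed by a standard monotone-path dynamic program; a sketch with row 0 as the bottom row (names are illustrative):

```python
def table_score(grid):
    """Maximum sum of cell values over a path of length 2m - 1 from the
    bottom-left to the top-right corner, moving only up or to the right.
    grid[r][c] holds the value of the cell in row r (row 0 at the bottom)
    and column c."""
    m = len(grid)
    best = [[0] * m for _ in range(m)]
    for r in range(m):
        for c in range(m):
            prev = 0
            if r > 0:
                prev = max(prev, best[r - 1][c])   # arrive from below
            if c > 0:
                prev = max(prev, best[r][c - 1])   # arrive from the left
            best[r][c] = grid[r][c] + prev
    return best[m - 1][m - 1]
```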
The score that we obtain using our segments is the maximum sum of scores of a
non-conflicting set of segments. One can easily verify that the score of the
table is a clear upper bound on the score we obtain using any subset of non-
conflicting segments. We would like to introduce the segments in such a way
that the ratio of the score of the table over our score is always bounded by
constant, no matter how the adversary puts the numbers on the table. For
fixed $\alpha\geq 1$ and $\beta\geq 1$, we call a solution
$(\alpha,\beta)$-approximate if at most $\alpha$ segments cover each cell and
it guarantees us a $1/\beta$ fraction of the score of the table for any
assignment of numbers to the table cells. We show in Section 4 that grid
packing admits an $(O(m^{\kappa}\log m),O(1/\kappa))$-approximate solution for
any $0<\kappa<1$.
Before explaining the idea behind this result, we would like to make a
connection between grid packing and LIS. Let us consider an array $a$ of
length $n$. We assume for the sake of this example that all the numbers of the
array are distinct and are in range $[1,n]$. In other words, $a$ is a
permutation of numbers in $[n]$. We map the array to a set of points on the 2D
plane by putting a point at $(i,a_{i})$ for every position $i$ of the array.
Figure 2: An array $\langle 7,2,4,1,9,6,3,5,8\rangle$ is mapped to the 2D
plane.
Now, divide the plane into an $m\times m$ grid, and fix a longest increasing
subsequence. The number on each cell of the grid would be equal to the
contribution of the elements in that grid cell to the fixed longest increasing
subsequence. (We emphasize that the number is not the longest increasing
subsequence inside the cell, but the contribution to the fixed longest
increasing subsequence only.) It follows that the score of the grid is exactly
equal to the size of the longest increasing subsequence. Let us assume that
the score of each segment is available. To approximate the score of the grid
(which equals the size of the LIS) we find the largest score we can obtain
using non-conflicting segments by dynamic programming. The last observation
which gives us speedup for LIS is the following: instead of using the score of
each segment (which we are not aware of), we use the size of the LIS for each
segment as an approximate value for its score. LIS of each segment can be
computed in time $\tilde{O}(n/m)$ since at most $n/m$ elements appear in every
row or every column of the grid. This quantity is clearly an upper bound on
the score of each segment but can be used to construct a global solution for
the entire array (see Section 5 for more details). In our dynamic algorithm,
every time a change is made, we only need to update the approximate score
(LIS) of the corresponding segments.
Figure 3: An array $\langle 7,2,4,1,9,6,3,5,8\rangle$ is mapped to the 2D
plane. An LIS is shown by green points. The plane is divided into a $3\times
3$ grid. The number on each cell is equal to the contribution of that cell
to the LIS. The score of the grid is equal to the LIS of the array. The score
of the grid is achieved by the path colored in green.
Our solution for grid packing is based on a combinatorial construction. The
first observation is that any path of length $2m-1$ from the bottom left to
the top right of the grid can be decomposed into several disjoint parts such
that each part is either completely in a row or completely in a column, and
further column-parts or row-parts are non-conflicting using the previous
terminology.
Figure 4: Each path from bottom-left to top-right is divided into disjoint
row-intervals or column-intervals. Row intervals are shown with orange and
column intervals are shown with blue. All row intervals are non-conflicting
and all column-intervals are also non-conflicting.
Grid packing therefore reduces to the 1-dimensional variant of grid packing,
array packing, as follows. For the array packing problem, an array of length
$m$ is given as input (with no numbers on it). Our goal is to define segments
(this time all horizontal), while keeping the maximum number of segments
covering each cell small. After we fix our solution, the adversary puts non-
negative numbers on the array cells. For any fixed interval $[x,y]$ we would
like to have a segment completely in that interval whose score is at least a
fraction of the score of that interval. More precisely, a solution for array
packing is $(\alpha,\beta)$-approximate if it covers each cell at most
$\alpha$ times and the score of any interval over the maximum score of a
segment inside it is bounded by $\beta$. Similar to grid packing, we are not
aware of the numbers when giving a solution, and the adversary is aware of our
solution before deciding which numbers to put and which interval to choose for
the comparison.
An $(\alpha,\beta)$-approximate solution for array packing yields a
$(2\alpha,2\beta)$-approximate solution for grid packing as follows. We treat
each row and each column of the grid as an array and make a separate solution
for the corresponding array packing instance. After the adversary puts the
numbers on the grid, any path from bottom left to the top right can be divided
into disjoint column intervals or row intervals, one of which provides us a
$2$ approximate solution for the score of the grid. Finally, the guarantee of
array packing enables us to prove that the above solution is
$(2\alpha,2\beta)$-approximate for grid packing.
Thus, all that remains is to provide a solution for array packing. To begin,
for each cell of the array, there should be one segment covering only that
cell. Otherwise, there is no way to compete with an adversary that puts $1$ on
that cell and 0 on the other cells and uses that cell for the chosen interval.
Thus, $m$ segments of length $1$ for the $m$ cells of the array is an
inevitable part of any solution. A first idea to extend this construction is
to put segments of length $2$ on every other cell of the array, giving $m/2$
segments covering all of the array cells, and continuing further, for any
$1\leq i\leq\log m$, we use $m/2^{i}$ segments of length $2^{i}$ to cover all
cells of the array. While with this construction at most $\log m$ segments
cover each cell, the best guarantee that we can expect for such a solution in
terms of score is a $1/\Omega(\log m)$ fraction of the score of each interval.
That is, such a solution is only $(O(\log m),O(\log m))$-approximate.
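The basic construction above (for each level $i$, segments of length $2^{i}$ tiling the array) can be written down directly; the half-open index convention is an illustrative choice:

```python
def dyadic_segments(m):
    """Basic array-packing family: for each length 2^i <= m, one segment
    starting at every multiple of 2^i. Each cell is covered by at most
    floor(log2(m)) + 1 segments, matching the O(log m) coverage bound
    discussed in the text."""
    segs = []
    length = 1
    while length <= m:
        for start in range(0, m, length):
            segs.append((start, min(start + length, m)))  # half-open [start, end)
        length *= 2
    return segs
```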
To improve the approximation factor to a constant, we make $m^{\kappa}$ copies
of each set of segments. Roughly speaking, for segments of length $2^{i}$ we
make $m^{\kappa}$ copies by right shifting the segments by $2^{i}/m^{\kappa}$
cells each time (see Section 4 for more details about this construction and
edge cases such as when $2^{i}<m^{\kappa}$). This clearly adds a
multiplicative overhead of $m^{\kappa}$ for the number of segments covering
each cell. However, as we show in Section 4 it improves the second parameter
of the approximation guarantee down to $O(1/\kappa)$ from $O(\log m)$.
Theorem 6, [restated informally]. For any $0<\kappa<1$, the grid packing
problem on an $m\times m$ grid admits an
$(\tilde{O}(m^{\kappa}),O(1/\kappa))$-approximate solution.
Grid packing is a very strong tool for approximating LIS. For example,
consider an array $a$ of length $n$ for which we wish to design a block-based
dynamic algorithm for LIS. We fix a constant $0<\kappa<1$ and set
$f(n)=\tilde{O}(n^{1+\kappa})$. Let $m=n^{1/3}$ be the size of the grid we
construct for this array. The horizontal thresholds are set in a way that
separate the elements into $m$ different pieces each containing roughly $n/m$
elements. That is, the first threshold is the value of the $n/m$’th element of
the array after sorting the numbers and so on. The vertical thresholds are set
to divide the elements into $m$ different parts each containing $n/m$
elements. That is, the first part contains the first $n/m$ elements of the
array and so on. This way, each element of the array corresponds to one unique
cell of the grid.
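The threshold construction above amounts to assigning each element a (row, column) cell from its value rank and its index; a sketch assuming distinct values and $m$ dividing $n$, with illustrative names:

```python
def grid_cells(a, m):
    """Maps each element of a to its cell in the m x m grid described above:
    columns split the indices into m equal parts, rows split the value ranks
    into m equal parts, so every row and every column holds roughly n / m
    elements. Returns a list of (row, col) pairs, one per element."""
    n = len(a)
    rank = {v: r for r, v in enumerate(sorted(a))}  # value -> rank, distinct values
    return [(rank[v] * m // n, i * m // n) for i, v in enumerate(a)]
```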
The most important property of this division is that every row or every column
of the grid contains at most $n/m$ elements. This property is asymptotically
maintained for the next $g(n)=n^{2/3}$ operations for which the block-based
algorithm is responsible. Obviously, this guarantee also holds for the
segments. We solve the problem in the following way: first we make a solution
for grid packing of size $m\times m$. For each segment, we make a separate
instance of the dynamic LIS problem that solves the problem for the elements
covered by that segment. Initially, we use the naive algorithm that computes
LIS from scratch every time an operation arrives. However, since each segment
corresponds to at most $O(n/m)$ elements, the worst-case update time for each
segment is $\tilde{O}(n/m)$. Moreover, each cell is covered by at most
$\tilde{O}(m^{\kappa})$ segments which means each operation modifies at most
$\tilde{O}(m^{\kappa})$ segments. Thus, the total update time is
$\tilde{O}(m^{\kappa}n/m)=\tilde{O}(n^{2/3+\kappa})$. In order to approximate
the size of the LIS, every time we run a DP on the segments to find a set of
non-conflicting segments whose total size of LIS is maximized. Notice that the
LIS of each segment is available in time $O(1)$ (since after each update we
store the size of the solution for each segment), and DP takes time
$\tilde{O}(m^{2+\kappa})$ which is basically the total number of segments we
have. This is obviously a lower bound on the actual solution size since any
partial solution for a non-conflicting set of segments can be combined to
obtain a global solution for the union of the elements in all segments.
Moreover, the size of the LIS for each segment is definitely an upper bound on
the contribution of that segment to the optimal solution. Thus, the solution
of the DP is at least an $\Omega(\kappa)$ fraction of the size of the LIS for
the entire array.
One thing to keep in mind is that updating the grid requires a more careful
analysis. Since the column divisions are based on the indices of the elements,
when we add or remove some elements, some columns may grow wider or thinner.
While the grid illustration may make it seem challenging to manage such update
operations, the actual implementation is straightforward. We define $m-1$
thresholds initially set to multiples of $n/m$. Every time an element is added
or removed, in addition to updating the binary tree data structure tracking
the location of each element, we also update the thresholds separating the
grid into subgrids.
Since the preprocessing time is $\tilde{O}(n^{1+\kappa})$ and we run the
algorithm for $n/m=O(n^{2/3})$ steps and the worst-case update time for each
operation is $\tilde{O}(n^{2/3+\kappa})$, this block-based algorithm can be
turned into a dynamic algorithm for LIS with approximation factor
$O(1/\kappa)$ and worst-case update time $\tilde{O}(n^{2/3+\kappa})$. While
this is worse than the solution given in Corollary 4 both in terms of the
approximation factor and worst-case update time, this solution can be extended
to improve the update time down to $\tilde{O}(n^{\epsilon})$ for any constant
$\epsilon>0$. All it takes to improve the update time is to replace the naive
LIS algorithm of each segment by the more clever algorithm we explained above.
While this comes at the expense of a larger approximation factor, the
worst-case update time improves. We show in Section 5 that by setting
$\kappa=\Omega(\epsilon)$ and recursing on this algorithm $O(1/\epsilon)$
times we obtain a dynamic algorithm for LIS with worst case update time
$\tilde{O}(n^{\epsilon})$ and approximation factor
$O((1/\epsilon)^{O(1/\epsilon)})$.
Theorem 9 (restated informally). For any constant $\epsilon>0$, there exists
an algorithm for dynamic LIS whose worst-case update time is
$\tilde{O}(n^{\epsilon})$ and whose approximation factor is
$O((1/\epsilon)^{O(1/\epsilon)})$.
It follows from Theorem 9 that after reporting the estimated value of the
solution, we can also determine the corresponding sequence in time
proportional to its size. More precisely, after using DP to construct a global
solution based on partial solutions of the segments, we can find out which
segments contribute to such a solution and recursively recover the
corresponding increasing subsequences of the relevant segments. To this end,
in addition to the DP table which we use for constructing a global solution,
we also store which segments contribute to such a solution. This way, the
runtime required to determine the corresponding increasing subsequence is
proportional to the size of the solution.
###### Remark 2.
After reporting a solution of size $x$ by our dynamic LIS algorithm, our
algorithm is able to report an increasing subsequence of length $x$ in time
$O_{\epsilon}(x)$.
For the special case of $\mathsf{LIS}^{+}$ where only insertion operations are
supported, we improve the approximation factor down to $O(1/\epsilon)$.
Moreover, if one favors the update time over the approximation factor, we show
that the update time can be reduced to polylogarithmic if we allow the
approximation factor to be $O(\log n)$.
#### 2.2.1 Another Example: Advisory Help
To illustrate the effectiveness of the grid packing technique, we bring yet
another example, this time in the context of streaming algorithms. It has been
shown that LIS can be approximated within a factor of $1+\epsilon$ in the
streaming model with memory $O(\sqrt{n})$ [17]. Moreover, matching lower
bounds are also provided by Gál and Gopalan [14]. They show that it is
impossible to beat the $\Omega(\sqrt{n})$ barrier with any deterministic
algorithm that runs in a constant number of rounds and obtains a constant
factor approximation. We show that this can be improved with a randomized
algorithm that reads the input in a particular order. This notion is called
advisory help and has been previously studied [8] to provide graph algorithms
in the streaming model.
In such a setting, we design a streaming algorithm but we ask the adversary to
give us the input in a particular order. To avoid losing information, elements
come in the form $(i,a_{i})$ which specifies both the position and the value
of each element. We show that in three rounds we can obtain an $O(1/\kappa)$
approximation with memory $\tilde{O}(n^{2/5+\kappa})$. Roughly speaking, in
the first round we sample $m=n^{1/5}$ elements from the array and we set
horizontal lines of the grid based on their values. Vertical lines just evenly
divide the elements based on their indices into portions of size $n/m$.
Figure 5: The order in which the elements of the array are given to us. The
figure on the left shows the order of elements in the second round, and the
figure on the right shows the order of the elements in the third round.
In the second round, we ask the adversary to give us the elements of the array
but in the row order (shown in Figure 5). In this round, we compute the
solution for horizontal segments. Each segment contains at most
$\tilde{O}(n/m)=\tilde{O}(n^{4/5})$ elements with high probability and
therefore using the algorithm of [17] we can approximate its LIS within factor
$1+\epsilon$ with memory $\tilde{O}(n^{2/5})$. Since each cell is covered by
at most $\tilde{O}(m^{\kappa})$ segments, at each step we solve the problem
for at most $\tilde{O}(m^{\kappa})$ segments simultaneously which adds an
overhead of $\tilde{O}(m^{\kappa})=\tilde{O}(n^{\kappa})$ to the memory of the
algorithm. Thus the overall memory is bounded by $\tilde{O}(n^{2/5+\kappa})$.
The third round solves the problem for vertical segments similar to the
horizontal ones. The only difference is that this time we ask the adversary to
give us the elements in the column order. Once all the solutions for all
segments are available, we run a DP with memory
$\tilde{O}(m^{2+\kappa})=\tilde{O}(n^{2/5+\kappa})$ to approximate the final
solution size. Notice that this solution is not refuted by the impossibility
result of [17] since it both uses randomization and extra help from the
adversary. In order for the adversary to provide us the array elements in this
particular order, she may need to sort the numbers based on their values.
However, sorting does not overly simplify the problem. Computing the LIS of an
array is equally hard if the elements are given in the sorted order!
### 2.3 Distance to Monotonicity
Additionally, we present a dynamic algorithm for DTM. Distance to monotonicity
seems to be more tractable than LIS, since previous work obtains much more
efficient algorithms for DTM than for LIS [2, 17, 29]. In particular, there are
several known techniques for approximating DTM within a constant factor. As an
example, one can model the problem with a graph containing $n$ vertices each
corresponding to an element. There is an edge between two vertices, if the
corresponding elements are not increasing. While in this interpretation, LIS
is equivalent to the largest independent set of the graph, DTM translates to
vertex cover which can be approximated within a factor of $2$ by maintaining a
maximal matching. As part of our algorithm, we show that such a maximal
matching can be maintained with worst-case update time $O(\log^{2}n)$ which
yields a dynamic algorithm for DTM with approximation factor $2$ and
worst-case update time $O(\log^{2}n)$.
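The maximal-matching view can be made concrete in the static setting. The sketch below is our own illustration of the $2$-approximation; the paper's contribution is maintaining such a matching dynamically in $O(\log^{2}n)$ worst-case time, which this one-pass static version does not attempt:

```python
def dtm_2_approx(a):
    """2-approximate distance to monotonicity via a greedy maximal matching.

    Scan left to right, keeping a stack of unmatched indices whose values
    are strictly increasing.  When the new element conflicts with the
    stack top (a[i] >= a[j] for i < j), match the pair and mark both for
    removal.  Matched pairs are vertex-disjoint inversions, so the
    matching size is at most DTM; removing all matched elements leaves
    the strictly increasing stack, so the returned value is a valid
    removal count between DTM and 2*DTM.
    """
    removed = [False] * len(a)
    stack = []  # unmatched indices, values strictly increasing
    for j in range(len(a)):
        if stack and a[stack[-1]] >= a[j]:
            removed[stack.pop()] = removed[j] = True
        else:
            stack.append(j)
    return sum(removed)
```

The unmatched elements form a strictly increasing sequence, so the matching is indeed maximal: no conflict edge survives among them.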
However, we further strengthen this result by improving the approximation
factor down to $1+\epsilon$ while keeping the update time intact. The heart of
our improvement is based on an exact algorithm for computing DTM when an
approximate solution is available. We show in Section 6 that given random
access to the elements of an array $a$ of size $n$ and a constant approximate
solution of size $k$ for the array, one can compute an exact solution for DTM
in time $O(k\log n)$. Just knowing the size of the approximate solution does
not suffice here; our algorithm requires random access to the elements of the
approximate solution as well.
Lemma 11 (restated informally). Let $a$ be an array of length $n$ and $S$ be
a set of $k$ elements whose removal from $a$ makes $a$ increasing. One can
compute the distance to monotonicity of $a$ in time $O(k\log n)$.
The above algorithm, combined with the 2-approximate solution, yields a
$(1+\epsilon)$-approximation block-based algorithm for DTM. Starting from an
array $a$ and provided access to a $2$-approximate solution, we set
$f(a)=\tilde{O}(k)$ where $k$ is the size of the approximate solution. Here,
$f,g,$ and $h$ do not depend on $n$ since the runtimes depend on the solution
size. See Section 3 for more information. Moreover, $g(a)=k\epsilon/2$ and
$h(a)=O(1)$. After computing an exact solution via the algorithm of Lemma 11
in the preprocessing phase, we keep reporting $d+i$ as an estimate for DTM for
the $i$’th operation where $d$ is the solution for the initial array. The
block-based algorithm then can be used to obtain a dynamic algorithm with
worst-case update time $O(\log^{2}n)$.
Theorem 14 (restated informally). For any constant $\epsilon>0$, there exists
an algorithm for dynamic DTM whose worst-case update time is $O(\log^{2}n)$
and whose approximation factor is $1+\epsilon$.
Although our method is simple, it has a nice implication for classic
algorithms. We show that using Lemma 11, we can approximate DTM in time $O(n)$
within an approximation factor $1+\epsilon$ (a log factor is shaved from the
runtime by incurring a factor of $1+\epsilon$ in the approximation guarantee). This
result is tight in two ways: i) Any constant factor approximation algorithm
for DTM has to make at least $\Omega(n)$ value queries to the elements of the
array even to distinguish whether the solution is $0$ or $1$. ii) Any exact
solution which is comparison-based or based on algebraic decision trees has a
runtime of at least $\Omega(n\log n)$ [13, 31].
To achieve this, we first compute a 2-approximate solution for DTM in time
$O(n)$ (see Section C for more details). If the size of the solution is
smaller than $\sqrt{n}$, we use the algorithm of Lemma 11 to obtain an exact
solution in linear time. Otherwise, we use the algorithm of [29] to obtain a
$1+\epsilon$ approximate solution in time $O(n)$ (the runtime of the
algorithm given in [29] is $\tilde{O}(n/d+\sqrt{n})$ when the solution size is
lower bounded by $d$).
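The case split above can be sketched as a dispatcher. All three helpers below are hypothetical stand-ins for routines the text references (`two_approx_dtm` for the linear-time $2$-approximation, `exact_dtm_from_cover` for the algorithm of Lemma 11, and `sublinear_dtm` for the algorithm of [29]); only the threshold logic is shown:

```python
import math

def approx_dtm(a, eps, two_approx_dtm, exact_dtm_from_cover, sublinear_dtm):
    """Dispatch between the exact and sublinear routines (our sketch).

    two_approx_dtm(a)          -> removal set S with |S| <= 2*DTM(a), O(n)
    exact_dtm_from_cover(a, S) -> exact DTM in O(|S| log n)   (Lemma 11)
    sublinear_dtm(a, eps)      -> (1+eps)-approximate DTM     ([29])
    """
    S = two_approx_dtm(a)
    if len(S) < math.isqrt(len(a)):
        # Small cover: the exact algorithm runs in O(sqrt(n) log n) = o(n).
        return exact_dtm_from_cover(a, S)
    # Large cover: the solution size is Omega(sqrt(n)), so the algorithm
    # of [29] runs in O(n) and loses only a (1+eps) factor.
    return sublinear_dtm(a, eps)
```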
## 3 From Amortized Update Time to Worst Case Update Time
The goal of this section is to show a reduction that simplifies the problem
with respect to worst-case time constraints. Ultimately, in our algorithms, we
prove that the update time of each operation is bounded in the worst case.
However, it is more convenient to allow for larger update times in some cases,
while keeping a bounded amortized update time. In this section, we prove that
if our algorithms adhere to a certain structure, then worst-case update time
reduces to amortized update time. Later in the section, we present a
motivating example to show how the reduction enables us to simplify the
proofs. In our example, we seek to design a $1+\epsilon$ approximation
algorithm for dynamic LIS with worst-case update time $\tilde{O}(\sqrt{n})$.
In our framework, we start with an array $a$ of size $n$ and our algorithm is
allowed to make a preprocessing of time $f(a)$. For the next $g(a)$ steps, the
processing time of each operation is bounded by $h(a)$ in the worst case.
After $g(a)$ steps, our algorithm is no longer responsible for the operations
and terminates. We refer to such an algorithm as block-based. Note that $f(a)$,
$g(a)$, and $h(a)$ are not necessarily determined based only on the size $n$
of the array $a$. For example, in the algorithm of Section 6, the values of
these functions are proportional to the solution size for $a$, not $n$.
However, when these functions are only dependent on $n$, we may drop $a$ in
the notation and use $n$ instead. Functions $f,g,$ and $h$ should meet one
important property: after applying $g(a)$ arbitrary operations to an array $a$
and obtaining a new array $a^{\prime}$, $f(a^{\prime}),g(a^{\prime})$ and
$h(a^{\prime})$ should not change asymptotically. More specifically, although
the reduction holds when the values are within any constant factor, we assume
$1/2\leq f(a)/f(a^{\prime}),g(a)/g(a^{\prime}),h(a)/h(a^{\prime})\leq 2$. We
call this property relativity. We also assume without loss of generality that
$f,g,$ and $h$ are always lower bounded by a constant (say $20$) and when the
array size is constant, so are the values for $f(a),g(a),$ and $h(a)$.
We show in the following that a block-based algorithm $\mathcal{A}$ for LIS or
DTM with identifiers $\langle f,g,h\rangle$ can be used as a black box to
obtain a dynamic algorithm $\mathcal{A^{\prime}}$ with worst-case update time
$O(\max\\{h(a),f(a)/g(a)\\})$. The approximation factor of the algorithm is
preserved in this reduction. To show a use case of this technique, we provide
a simple analysis of a $(1+\epsilon)$-approximate dynamic algorithm for LIS
with worst-case update time $\tilde{O}(\sqrt{n})$.
###### Lemma 2.
Let $\mathcal{A}$ be a block-based algorithm with preprocessing time $f(a)$
that approximates dynamic LIS or dynamic DTM for up to $g(a)$ many steps with
worst-case update time $h(a)$. If $\langle f,g,h\rangle$ satisfies relativity
then there exists a dynamic algorithm $\mathcal{A}^{\prime}$ for the same
problem whose worst-case update time is bounded by
$O(\max\\{h(a),f(a)/g(a)\\})$ and whose approximation factor is the same as
$\mathcal{A}$.
###### Proof.
Figure 6: The reduction is shown in this figure.
Figure 6 gives a pictorial depiction of the proof idea. We construct an
algorithm $\mathcal{A^{\prime}}$ in the following way: $\mathcal{A}^{\prime}$
uses algorithm $\mathcal{A}$ repeatedly. To distinguish between multiple
instances of $\mathcal{A}$, we add subscripts; the first time we use algorithm
$\mathcal{A}$ we call it $\mathcal{B}_{1}$. Every $\mathcal{B}_{i}$ is
basically a copy of the block-based algorithm $\mathcal{A}$ which is modified
slightly to execute the preprocessing part in multiple steps. We begin by
using our block-based algorithm $\mathcal{B}_{1}$ at step 1. At this point we
call the initial array (which is empty) $a^{(1)}$. Since the size of the array
is constant, so is the preprocessing time and therefore we can ignore it when
bounding the time complexity. For $g(a^{(1)})$ many steps, we use algorithm
$\mathcal{B}_{1}$ to preserve an approximate solution and from then on, we use
a separate algorithm for the rest of the operations, namely $\mathcal{B}_{2}$.
The construction of $\mathcal{B}_{2}$ is given below:
When $\mathcal{B}_{1}$ has gone $9/10$ of the way and is only responsible for
$g(a^{(1)})/10$ more operations, we initiate algorithm $\mathcal{B}_{2}$. Let
$a^{(2)}$ be the array at this point. $\mathcal{B}_{2}$ needs to run the
preprocessing step which requires $f(a^{(2)})$ many operations. This may not
be possible in a single step, therefore, we break the computation into
$g(a^{(1)})/20$ pieces and execute each piece in the next $g(a^{(1)})/20$
steps. In the following $g(a^{(1)})/20$ steps, the operations that arrived
after the construction of $\mathcal{B}_{2}$ started are processed: two
operations in each step.
operations and updates the solution size. When we reach $g(a^{(1)})/10$ many
steps after the construction of $\mathcal{B}_{2}$, algorithm $\mathcal{B}_{2}$
has already finished the preprocessing and all the operations that have
arrived so far are applied to it. This is exactly the time that
$\mathcal{B}_{1}$ terminates, and from then on, we use algorithm
$\mathcal{B}_{2}$ to process each operation.
Similarly, $\mathcal{B}_{3}$ is constructed when $\mathcal{B}_{2}$ has applied
$9g(a^{(2)})/10$ operations. This construction goes on as long as operations
arrive.
The correctness of $\mathcal{A}^{\prime}$ follows from that of $\mathcal{A}$.
Therefore, any approximation factor that $\mathcal{A}$ guarantees for us also
carries over to $\mathcal{A}^{\prime}$. For the update time, the construction
guarantees that at every time, at most two instances of algorithm
$\mathcal{A}$ are active. At every step, in each algorithm, we either perform
an operation or we are initializing the algorithm in which case the update
time is bounded by $O(\max\\{f(a)/g(a),h(a)\\})$.
One thing to keep in mind in the above argument is that, because of relativity,
the values of the functions $f,g,$ and $h$ remain asymptotically the same during
two consecutive runs of algorithm $\mathcal{A}$. Thus, $g(a)$ is within a
constant factor for two consecutive runs of $\mathcal{A}$ and therefore
$f(a^{(i)})/g(a^{(i-1)})$ is asymptotically the same as
$f(a^{(i)})/g(a^{(i)})$. ∎
We emphasize that there is a constant-factor overhead in the update time of
the reduction which is hidden in the $O$ notation.
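As a toy illustration of the bookkeeping in Lemma 2, the simulation below is entirely our own: instances have a fixed budget $g$ and instantaneous preprocessing (a snapshot) rather than preprocessing spread over $g/20$ chunks, but the schedule matches the proof: a new instance is created after $9g/10$ steps, buffered operations are replayed two per step, and the old instance retires at step $g$:

```python
class NaiveLIS:
    """Toy block-based instance: stores the array, recomputes LIS on query."""
    def __init__(self, a):
        self.a = list(a)

    def apply(self, op):
        kind, pos, val = op
        if kind == "insert":
            self.a.insert(pos, val)
        else:
            self.a.pop(pos)

    def query(self):
        # Quadratic LIS, good enough for a toy instance.
        best = []
        for k, x in enumerate(self.a):
            best.append(1 + max((best[i] for i in range(k)
                                 if self.a[i] < x), default=0))
        return max(best, default=0)


def run_reduction(ops, g):
    """Answer every operation, handing over between instances as in Lemma 2."""
    cur, used, nxt, pending, answers = NaiveLIS([]), 0, None, [], []
    for op in ops:
        cur.apply(op)
        used += 1
        if nxt is None and used == (9 * g) // 10:
            nxt = NaiveLIS(cur.a)   # snapshot plays the role of preprocessing
        elif nxt is not None:
            pending.append(op)
            for _ in range(2):      # replay two buffered operations per step
                if pending:
                    nxt.apply(pending.pop(0))
        answers.append(cur.query())
        if used == g:               # the old instance retires; hand over
            cur, nxt, used = nxt, None, 0
    return answers
```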
To illustrate the effectiveness of our reduction, we bring a motivating
example to show how it simplifies the design of dynamic algorithms for LIS.
### 3.1 Warm Up: Block-based Algorithm for LIS
The reduction of Section 3 gives us a very convenient framework to design
dynamic algorithms for LIS and DTM. Here we bring a simple block-based
algorithm for dynamic LIS with approximation factor $1+\epsilon$ that results
in an algorithm with the same approximation factor and worst-case update time
$\tilde{O}(\sqrt{n})$.
Since the functions $f,g$, and $h$ depend only on the size of the array in
this case, we write them in terms of $n$.
###### Lemma 3.
For any $\epsilon>0$, there exists a block-based algorithm for LIS with
approximation factor $1+\epsilon$, preprocessing time $f(n)=\tilde{O}(n)$ and
update time $h(n)=\tilde{O}(\sqrt{n})$ that maintains an approximate solution
to LIS for up to $g(n)=\sqrt{n}$ many operations.
###### Proof.
In the preprocessing phase, we first compute the longest increasing
subsequence in time $\tilde{O}(n)$. Let the solution size be $r$. Based on the
value of $r$, we consider two different strategies: If $r$ is already at least
$2\sqrt{n}/\epsilon$, then for the next $\sqrt{n}$ steps we do not make any
changes to the array and output $r-i$ after the $i$'th operation. Since each
operation decreases the solution size by at most $1$, our solution is always
valid and has approximation factor $1+\epsilon$ throughout this process.
Otherwise, we use the algorithm of Chen et al. [7] to update the solution in
every step. The setup cost for the algorithm of Chen et al. [7] is
$\tilde{O}(n)$ which can be executed in the preprocessing step (since this is
not explicitly mentioned in [7], we bring more details about their algorithm
in Appendix D). Moreover, the solution size is initially upper bounded by
$2\sqrt{n}/\epsilon$ and it can grow to at most $2\sqrt{n}/\epsilon+\sqrt{n}$
after $\sqrt{n}$ operations. Therefore, the update time remains
$\tilde{O}(\sqrt{n})$ in the worst case. ∎
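The first strategy in the proof above can be stated in a few lines. The sketch below is our own, using standard patience sorting for the initial LIS computation:

```python
from bisect import bisect_left

def lis_length(a):
    """Patience sorting: tails[k] is the smallest possible tail value of
    an increasing subsequence of length k+1.  O(n log n) total."""
    tails = []
    for x in a:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def lazy_estimates(a, num_ops):
    """Report r - i after the i-th operation without touching the array.

    Each insertion or deletion changes the true LIS length by at most 1,
    so r - i is always a valid lower bound; when r >= 2*sqrt(n)/eps and
    num_ops <= sqrt(n), it stays within a (1+eps) factor of the truth."""
    r = lis_length(a)
    return [r - i for i in range(1, num_ops + 1)]
```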
Based on Lemma 2, a dynamic algorithm for LIS can be implemented with
worst-case update time $\tilde{O}(\sqrt{n})$ and approximation factor $1+\epsilon$.
###### Theorem 4.
[a corollary of Lemmas 2 and 3] For any constant $\epsilon>0$, there exists a
dynamic algorithm for LIS with approximation factor $1+\epsilon$ and update
time $\tilde{O}(\sqrt{n})$ in the worst case.
## 4 Grid Packing
This section is dedicated to a combinatorial problem which we call grid
packing. Definitions are given in Section 2 but for the sake of completeness
we restate them here. In this problem, we have a table of size $m\times m$.
Our goal is to introduce a number of segments on the table. Each segment
either covers a consecutive set of cells in a row or in a column. A segment
$A$ precedes a segment $B$ if every cell of $A$ is strictly higher than every
cell of $B$ and also every cell of $A$ is strictly to the right of every cell
of $B$. Two segments are non-conflicting, if one of them precedes the other
one. Otherwise, we call them conflicting. The segments we introduce can
overlap and there is no restriction on the number of segments or the length of
each segment. However, we would like to minimize the maximum number of
segments that cover each cell.
Figure 7: Segments are shown on the grid. The pair (black, orange) is
conflicting since the yellow cell (covered by the black segment) is on the
same row as the blue cell (covered by the orange segment). The following pairs
are non-conflicting: (green, black), (green, orange), (green, blue), (red,
orange), (red, blue), (black, blue).
After we choose the segments, an adversary puts a non-negative number on each
cell of the grid. The score of a subset of cells of the table would be the sum
of their values and the overall score of the table is the maximum score of a
path of length $2m-1$ from the bottom left corner to the top right corner. In
such a path, we always either move up or to the right.
The score of a segment is the sum of the numbers on the cells it covers. Our
score is the maximum total score over a non-conflicting set of segments. The
score of the table is an upper bound on the score of any set of
non-conflicting segments. We would like to choose segments so that the ratio of
the score of the table and our score is bounded by a constant, no matter how
the adversary puts the numbers on the table. More precisely, we call a
solution $(\alpha,\beta)$-approximate, if at most $\alpha$ segments cover each
cell and it guarantees a $1/\beta$ fraction of the score of the table for us
for any assignment of numbers to the table cells.
Figure 8: After we introduce the segments (left figure), the adversary puts
the numbers on the table (middle figure). In this case, the score of the table
is equal to $12$ (via the path depicted in the right figure), and our score is
equal to $9$, obtained from the two non-conflicting segments green and blue.
In this section, we prove the following theorem: For any grid $m\times m$ and
any $0<\kappa<1$, there exists a grid packing with guarantee
$(O(m^{\kappa}\log m),O(1/\kappa))$. That is, each cell is covered by at most
$O(m^{\kappa}\log m)$ segments and the ratio of the table’s score over our
score is bounded by $O(1/\kappa)$ in the worst case. This solution is
constructive and our proof also gives us the segments.
### 4.1 Array Packing
We first consider a useful sub-problem, array packing. Array packing is a
one-dimensional variant of grid packing, where we have an array of size $m$ and we
choose segments of consecutive cells in this array. Again segments can overlap
and there is no constraint on the number or size of the segments; after we fix
the segments an adversary puts non-negative numbers on the array cells; and
the score of a subset of the array cells would be the sum of their values.
Here we call a solution $(\alpha,\beta)$-approximate if no more than $\alpha$
segments cover each cell and for any interval $[x,y]$ of the array, there
exists a segment, lying completely in this interval, whose score is at least
a $1/\beta$ fraction of the overall score of the interval.
Figure 9: A $(3,5)$-approximate solution is shown for an array packing problem
of size $10$. Blue segments cover three consecutive cells, red segments cover
two consecutive cells and each black segment covers a single cell. Each cell
is covered by at most $3$ segments. Also, one can prove that no matter how the
adversary puts the numbers on the array, for any interval, there is a segment
completely in that interval whose score is at least $1/5$ of the score of that
interval.
Here we show that when the array size is $m$, for any $0<\kappa<1$, there
exists a solution for array packing with approximation guarantee
$(O(m^{\kappa}\log m),O(1/\kappa))$. We then use this in a solution for grid
packing.
We construct two sets of segments in the following way: let $d$ be the largest
power of 2 such that $d^{2}\leq m^{\kappa}$. In the first set, for any $1\leq
i\leq d$ and any $1\leq j\leq m-i+1$ we introduce a segment starting from cell
$j$ and ending at cell $i+j-1$.
Figure 10: In the first set, from each cell we introduce $d$ segments with
lengths $1,2,\ldots,d$.
In the second set, for any integer $0\leq i\leq\log m$ and any $j$ divisible
by $2^{i}$ such that $j+d2^{i}-1\leq m$, we introduce a segment that spans the
interval $[j,j+d2^{i}-1]$.
Figure 11: In the second set, for each $i$, there is a segment of length
$d2^{i}$ starting from every cell whose index is divisible by $2^{i}$. The
distance between consecutive segments is $2^{i}$.
###### Lemma 5.
For any $m\geq 1$ and any $0<\kappa<1$ there exists a solution for the array
packing problem of size $m$ with approximation guarantee
$(O(m^{\kappa}\log m),O(1/\kappa)).$
###### Proof.
The solution is the union of the two sets of segments described above. The key
to showing it gives the required approximation is the following: For each
interval $[x,y]$ we can cover the entire interval with $O(1/\kappa)$ segments
that completely lie in the interval $[x,y]$. Therefore, no matter how the
adversary puts the numbers on the cells, the summation of the scores for such
segments is at least as much as the score of interval $[x,y]$ and thus one of
those segments has an $\Omega(\kappa)$ fraction of the score of interval
$[x,y]$. To show this, we cover the cells of the intervals in the following
way.
If $y-x<d$, then we can cover the entire interval with a single segment (of
the first type) and the proof is trivial. Otherwise, we only use the segments
of the second type in our covering. Let $q$ be the size of the largest segment
that completely lies in the interval $[x,y]$.
We start with an empty set $S_{\ell}$ and a pointer $p_{\ell}$ initially equal
to $x$. Each time we find a segment that starts in the range $[x,p_{\ell}]$
and ends in the range $[x,y]$. If many such segments exist, we choose the one
with the right-most ending point and in case of a tie, we break the tie
arbitrarily. We add the new segment to $S_{\ell}$ and continue the process by
increasing $p_{\ell}$ to the cell right after the new segment ends. We
continue this process so long as $p_{\ell}-x<q/d$.
We repeat the same process but this time starting from $p_{r}=y$ and
proceeding backwards. Initially we set $S_{r}$ to be an empty set. Every time,
we find a segment that ends in range $[p_{r},y]$ and starts in range $[x,y]$.
Similarly, when multiple options are available, we choose the one whose
starting point is the smallest. We add the new segment to $S_{r}$ and update
$p_{r}$ to the cell immediately before the new segment starts. We terminate
this process when $y-p_{r}\geq q/d$.
Obviously segments in $S_{\ell}\cup S_{r}$ cover both intervals
$[x,p_{\ell}-1]$ and $[p_{r}+1,y]$. If $p_{r}<p_{\ell}$ then the entire
interval $[x,y]$ is covered. Otherwise, we can add two more segments of size
$q$ to fill the gap. More precisely, we add the left-most and right-most
segments of length $q$ that lie completely in the interval $[x,y]$. The
analysis is based on the following property of our construction: the distance
between consecutive segments of length $q$ is bounded by $q/d$. Thus, after
putting such segments on the interval $[x,y]$, each cell is covered except the
ones whose distance to one of the end points is smaller than $q/d$. However,
segments of sets $S_{\ell}$ and $S_{r}$ cover those cells.
We show that $|S_{\ell}|,|S_{r}|\leq O(1/\kappa)$. To this end, we prove that
every time we add a new segment to $S_{\ell}$, the size of the new segment is
at least $d$ times larger than the size of the previous segment we added to
$S_{\ell}$. The reason is the following: let $l$ be the length of the segment
we add to $S_{\ell}$ at some point. This means that after adding this segment
to $S_{\ell}$ we have $p_{\ell}\geq x+l$. Note that $l$ is a power of $2$
since we only use segments of the second type. Moreover, since the range
$[x,p_{\ell}]$ has length at least $l$, one cell in it is divisible by $l$,
which means that it is the starting point of another segment
of length $dl$. Thus the next segment that we add to $S_{\ell}$ has a length
at least $dl$ which is $d$ times larger than the current one. Therefore, the
size $|S_{\ell}|$ is bounded by $\log_{d}m=O(1/\kappa)$. The same also
holds for $S_{r}$.
Figure 12: Green parts are covered by segments in $S_{\ell}$ and blue parts
are covered by segments in $S_{r}$. Two segments of length $q$ cover the rest
of the cells.
All that remains is to show that each cell of the array is covered by at most
$O(m^{\kappa}\log m)$ different segments. In the first set, the length of each
segment is bounded by $d$ and therefore there are at most $d^{2}$ different
combinations for the starting and ending cells of the intervals that cover a
particular cell. Since $d^{2}\leq m^{\kappa}$, this guarantee is met by the
first set of segments. For the second set, notice that there are at most
$O(\log m)$ distinct segment sizes. Moreover, each cell is covered by at most
$d$ segments of a particular size. Thus, each cell is covered by at most
$O(d\log m)$ different segments of the second set which is bounded by
$O(m^{\kappa}\log m)$.
∎
### 4.2 An $(O(m^{\kappa}\log m),O(1/\kappa))$-approximate Solution for Grid
Packing
We provide a reduction from grid packing to array packing. The intuition
behind the reduction is given below: Let us fix a path from the bottom-left to
the top-right of the array. We can divide the cells of the path into some
disjoint vertical and horizontal intervals as shown in Figure 13. In this
decomposition, all the row-intervals are non-conflicting and all the
column-intervals are also non-conflicting (row-intervals and column-intervals may be
conflicting).
Figure 13: Each path from bottom-left to top-right is divided into disjoint
row-intervals or column-intervals. Row intervals are shown with orange and
column intervals are shown with blue. All row intervals are non-conflicting
and all column-intervals are also non-conflicting.
For any path and any combination of numbers on the cells of the path, either
the sum of the numbers on the row-intervals or the sum of numbers on the
column intervals is at least a $1/2$ fraction of the total sum of the numbers
on the path. Based on this, we can reduce grid packing to array packing in the
following way.
###### Theorem 6.
For any $0<\kappa<1$, the grid packing problem on an $m\times m$ grid admits
an $(O(m^{\kappa}\log m),O(1/\kappa))$-approximate solution.
###### Proof.
We treat every row and every column of the grid as an array of length $m$ and
construct an $(O(m^{\kappa}\log m),O(1/\kappa))$-approximate solution of the
array packing problem for that row or column (Lemma 5). With this
construction, every cell is covered by at most $O(m^{\kappa}\log m)$
horizontal segments and at most $O(m^{\kappa}\log m)$ vertical segments, so
the total number of segments covering each cell is bounded by
$O(m^{\kappa}\log m)$.
After the adversary puts the numbers on the cells of the grid, the score of
the grid equals the largest score of a path of length $2m-1$ that starts
from the bottom left corner and moves to the top right corner. As mentioned
previously, we can divide the cells of such a path into non-conflicting row
intervals and non-conflicting column intervals. The score of either the row
intervals or column intervals is at least half the score of this path. Let it
be the row intervals without loss of generality. By Lemma 5, there is one
segment in each row that approximates the score of the corresponding interval
within a factor $O(1/\kappa)$ and fits completely within that interval. Since
all the row-intervals are non-conflicting, these segments are non-conflicting,
and the score of those segments is at least an $O(1/\kappa)$ fraction of the
score of the row intervals, which is at least $1/2$ of the score of the grid.
This completes the proof. ∎
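The halving step in this proof is a simple pigeonhole argument: the row-intervals and the column-intervals partition the cells of the path, so one of the two families carries at least half of the path's total. A minimal sketch of the decomposition (the `'R'`/`'U'` move encoding is our assumed convention for illustration):

```python
def row_col_sums(moves, values):
    """Split a monotone path into row cells and column cells.

    moves: string over {'R', 'U'} (right/up), one move per step.
    values: numbers on the cells; len(values) == len(moves) + 1.
    The first cell takes the direction of the first move; every later
    cell takes the direction of the move that entered it, so the cells
    of each row (resp. column) form one contiguous interval.
    """
    dirs = [moves[0]] + list(moves)
    row_sum = sum(v for d, v in zip(dirs, values) if d == 'R')
    col_sum = sum(v for d, v in zip(dirs, values) if d == 'U')
    return row_sum, col_sum

# a path on a 4x4 grid: 2*4 - 1 = 7 cells, 6 moves
moves, values = "RUURRU", [1, 2, 3, 4, 5, 6, 7]
r, c = row_col_sums(moves, values)
assert r + c == sum(values)          # the intervals partition the path
assert max(r, c) >= sum(values) / 2  # one family carries half the total
```

Since a monotone path visits each row (and each column) in one contiguous stretch, the resulting row-intervals lie in distinct rows and are therefore non-conflicting, and likewise for the column-intervals.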
## 5 Constant Factor Approximation for LIS with Update Time $\tilde{O}(n^{\epsilon})$
In this section, we show that for any constant $\epsilon>0$, there is a
dynamic algorithm for LIS that loses only a constant factor in the
approximation (where the constant depends on $\epsilon$) and guarantees an
update time bounded by $\tilde{O}(n^{\epsilon})$. Our algorithm is based on
the grid packing technique explained in Section 4.
We begin by explaining a simple algorithm for dynamic LIS where the
approximation factor is constant and the update time is close to
$\tilde{O}(n^{2/3})$. This result is weaker than what we give in Section 3
both in terms of update time and approximation factor, but we show this new
technique can be extended to obtain the result for any $\epsilon>0$ above.
We remind the reader that an exact algorithm with worst-case update time
$\tilde{O}(n)$ is trivial; we compute the LIS from scratch after each
operation arrives. We call this algorithm $\mathcal{A^{\prime}}_{0}$ (it will
become clear later in the section why we use such an unconventional notation).
From $\mathcal{A^{\prime}}_{0}$, we make a block-based algorithm
$\mathcal{A}_{1}$ and then turn it into an algorithm
$\mathcal{A^{\prime}}_{1}$ with worst-case update time
$\tilde{O}(n^{2/3+\kappa})$ for a small enough $\kappa>0$.
In our block-based algorithm, we begin with an array $a$ of length $n$. Map
the elements of $a$ onto the 2D plane by creating a point $(i,a_{i})$ for
every element of the array. Recall that we assume the elements are distinct
but there is no bound on their values. Define $m=n^{1/3}$ and construct a grid
in the following way: draw $m-1$ horizontal lines that divide the plane into
$m$ parts of as equal size as possible with respect to the number of points
included in each part. If $n$ is not divisible by $m$, some parts may have one
more point than others. Similarly, we draw $m-1$ vertical lines that separate
the plane into $m$ parts, each having either $\lfloor n/m\rfloor$ or
$\lceil n/m\rceil$ points. This gives an $m$ by $m$ grid, with each grid cell
corresponding to a “rectangle” in the 2D plane.
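Concretely, the cell of each element can be computed from its index and its value rank; a small sketch of this bucketing (the function name is ours):

```python
def build_grid(a, m):
    """Map each element (i, a[i]) to a cell of an m-by-m grid.

    Columns partition the indices into m near-equal chunks and rows
    partition the value ranks, so every row and every column of the
    grid holds either floor(n/m) or ceil(n/m) points.
    """
    n = len(a)
    rank = {v: r for r, v in enumerate(sorted(a))}  # elements are distinct
    cells = {}
    for i, v in enumerate(a):
        cell = (rank[v] * m // n, i * m // n)  # (row, column)
        cells.setdefault(cell, []).append(v)
    return cells

cells = build_grid([7, 2, 4, 1, 9, 6, 3, 5, 8], m=3)
assert cells[(0, 0)] == [2]  # smallest-rank third, first index third
assert cells[(2, 1)] == [9]  # largest-rank third, middle index third
```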
Figure 14: An example of adding an element to the array: inserting $3$ into the middle chunk of $(7,2,4\ |\ 1,9,6\ |\ 3,5,8)$ gives $(7,2,4\ |\ 1,9,3,6\ |\ 3,5,8)$.
Our block-based algorithm works in the following way: fix a $0<\kappa<1$ and
construct a grid packing solution with approximation $(O(m^{\kappa}\log
m),O(1/\kappa))$ for the $m$ by $m$ grid. Each element of the array lies in
exactly one cell of the grid and corresponds to all segments that cover that
cell. Next, for each segment in the solution of grid packing, we construct a
separate instance of the LIS problem that maintains a solution for the LIS of
the corresponding elements.
Each time an operation arrives, we update the solution for the corresponding
segments. Let us be more specific about this. Initially, $m-1$ vertical lines
divide the array into chunks of size roughly $n/m$. As operations arrive, the
elements are shifted to the left or to the right (their indices are updated).
Each vertical line can be thought of as a separator between two consecutive
elements that is also shifted to the left or to the right when elements are
added or removed. Thus, although the vertical lines move, each element which
is inserted or deleted lies between two thresholds and corresponds to a unique
column of the grid. The corresponding row is uniquely determined by the
horizontal lines (those lines remain unchanged). Thus, every element insertion
or deletion affects only one cell of the grid which is covered by a bounded
number of segments. When the size of the LIS is desired, we run a dynamic
program to find a non-conflicting set of segments whose total solution size is
maximized. We show that this gives us an approximation factor of
$O(1/\kappa)$ for the LIS of the array. Our algorithm is responsible for up to
$g(n)=n^{2/3}$ many operations.
Figure 15: An example of removing an element from the array: removing $9$ from $\langle 7,2,4\ |\ 1,9,6\ |\ 3,5,8\rangle$ gives $\langle 7,2,4\ |\ 1,6\ |\ 3,5,8\rangle$.
###### Lemma 7.
Let $0<\kappa<1$ be an arbitrarily small constant used for the solution of
grid packing. $\mathcal{A}_{1}$ is a block-based algorithm for dynamic LIS
with approximation factor $O(1/\kappa)$ whose preprocessing time is
$\tilde{O}(n^{1+\kappa})$ and whose update time is
$\tilde{O}(n^{2/3+\kappa})$. Moreover, $\mathcal{A}_{1}$ runs for $n^{2/3}$
many steps.
###### Proof.
We first prove that the preprocessing time of $\mathcal{A}_{1}$ is
$\tilde{O}(n^{1+\kappa})$. It takes time $\tilde{O}(n)$ to sort the numbers
and draw the grid lines. Also, the runtime for constructing the solution for
grid packing is $\tilde{O}(m^{2+\kappa})$, which is smaller than
$\tilde{O}(n)$. Every cell of the grid is covered by at most
$\tilde{O}(m^{\kappa})$ different segments. We construct a separate LIS
instance for every segment. Since each grid cell appears in at most
$\tilde{O}(m^{\kappa})$ segments, every element of the array is
included in at most $\tilde{O}(m^{\kappa})$ LIS instances. Thus, the total
size of all the instances combined is bounded by $\tilde{O}(nm^{\kappa})$ and
thus finding an LIS for each segment takes time $\tilde{O}(nm^{\kappa})$ in
total. Since $m=n^{1/3}$ the preprocessing time is bounded by
$\tilde{O}(n^{1+\kappa/3})=\tilde{O}(n^{1+\kappa})$.
The total number of points in every column, or every row of the grid is
bounded by $\lceil n/m\rceil=O(n^{2/3})$. Thus, the size of the LIS instance
for each segment is also bounded by $O(n^{2/3})$. Since we run the algorithm
for at most $O(n^{2/3})$ many steps, the sizes of the LIS instances remain
bounded by $O(n^{2/3})$ even if we add more numbers to them in the next
$O(n^{2/3})$ many operations.
When a new operation arrives, this only affects one cell of the grid which we
can find using its position and its value. We update all the corresponding
segments that cover that cell. Their count is bounded by
$\tilde{O}(m^{\kappa})\leq\tilde{O}(n^{\kappa})$. Moreover, each one can be
updated in time $\tilde{O}(n^{2/3})$ since the problem size for each segment is
bounded by $O(n^{2/3})$. Every time the size of the LIS is desired, we run a
DP in time $\tilde{O}(m^{2+\kappa})$ and find a solution that can be
constructed using non-conflicting segments. The size of the LIS for each
segment is available in time $O(1)$. Thus, the runtime for approximating the
LIS is bounded by $\tilde{O}(m^{2+\kappa})\leq\tilde{O}(n^{2/3+\kappa})$.
Therefore, the update time $h(n)$ is bounded by $\tilde{O}(n^{2/3+\kappa})$
for every operation.
For the dynamic program, we define an $m\times m$ table $D$ such that
$D[i][j]$ denotes the best total solution size over non-conflicting segments
lying within the first $i$ rows and the first $j$ columns of the grid. Obviously
$D[i][j]\geq\max\{D[i-1][j],D[i][j-1]\}$. Thus, when computing the value for
$D[i][j]$, we start by assigning $\max\{D[i-1][j],D[i][j-1]\}$ to it. Next,
for any segment that ends at cell $(i,j)$, we update $D[i][j]$ as
$D[i][j]=\max\{D[i][j],D[i^{\prime}-1][j^{\prime}-1]+w\}$
where $(i^{\prime},j^{\prime})$ are the coordinates of the bottom-left corner of
the segment and $w$ is the length of its LIS. The total runtime of this
algorithm is asymptotically equal to the number of segments on the grid.
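A sketch of this table-filling DP; the segment encoding (1-indexed corner coordinates plus a precomputed LIS length $w$) is our assumed convention:

```python
from collections import defaultdict

def best_nonconflicting(m, segments):
    """D[i][j]: best total over non-conflicting segments within the
    first i rows and j columns. segments: tuples (i1, j1, i2, j2, w)
    with (i1, j1) the bottom-left and (i2, j2) the top-right cell
    (1-indexed), and w the LIS length maintained for that segment."""
    ends_at = defaultdict(list)
    for i1, j1, i2, j2, w in segments:
        ends_at[(i2, j2)].append((i1, j1, w))
    D = [[0] * (m + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            D[i][j] = max(D[i - 1][j], D[i][j - 1])
            for i1, j1, w in ends_at[(i, j)]:
                # chain this segment after anything strictly below-left of it
                D[i][j] = max(D[i][j], D[i1 - 1][j1 - 1] + w)
    return D[m][m]

# a row segment (w=3), two cells (w=2, w=1) and a column segment (w=4)
segs = [(1, 1, 1, 2, 3), (2, 2, 2, 2, 2), (1, 1, 1, 1, 1), (1, 2, 2, 2, 4)]
assert best_nonconflicting(2, segs) == 4
```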
Finally, we show that this gives us an $O(1/\kappa)$-approximate solution for
the LIS of the entire array. For any point in time, fix an arbitrary longest
increasing subsequence of the array at that time. Assume that the numbers the
adversary puts on the cells of the grid are the contributions of the cells to
this fixed longest increasing subsequence. This way, the score of the grid is
exactly equal to the size of the longest increasing subsequence. In addition,
the score of every segment is a lower bound on the LIS of the elements of
that segment. Thus, by the guarantee of the grid packing solution, the
solution we obtain by concatenating the solutions of non-conflicting segments
is an $O(1/\kappa)$-approximation of the score of the grid, which is the size
of the LIS. Finally, the validity of our solution follows from the fact that
all the segments are non-conflicting, so we can combine their partial
solutions into a valid increasing subsequence.
Keep in mind that when we add elements to or remove elements from the array,
the indices of the numbers change. Thus, we need to also update the
coordinates of the vertical lines. This can be done similarly to the way we
update the indices of the array elements. Therefore, this does not add
computational difficulty as all we need to know for each new operation is
which column of the grid this operation applies to and which row of the grid
is affected by the new operation. Also, for edge cases (when the leftmost or
rightmost element of a column is removed or added), we have a choice of which
column to assign the new point to. In any case, the solution we find preserves
the approximation.
∎
One thing to note about the algorithm of Lemma 7 is that by setting $\kappa$
arbitrarily close to $0$, we can obtain an update time close to
$\tilde{O}(n^{2/3})$. By Lemma 2, algorithm $\mathcal{A}_{1}$ can be turned
into an algorithm $\mathcal{A^{\prime}}_{1}$ with the same approximation
factor but worst-case update time $\tilde{O}(n^{2/3+\kappa})$. Although
$\mathcal{A^{\prime}}_{1}$ has an approximation factor of $O(1/\kappa)$, its
update time is smaller than that of $\mathcal{A^{\prime}}_{0}$ by a factor of
$\tilde{O}(n^{1/3-\kappa})$. Thus, our approach to improving the overall update
time is to replace $\mathcal{A^{\prime}}_{0}$ by $\mathcal{A^{\prime}}_{1}$ to
obtain a faster (but worse in terms of approximation factor) algorithm.
###### Lemma 8 (as a corollary of Lemmas 7 and 2).
For any $0<\kappa<1$, there exists a dynamic algorithm for LIS with worst-case
update time $\tilde{O}(n^{2/3+\kappa})$ and approximation factor
$O(1/\kappa)$.
It is not hard to see that one can recurse on the above idea to improve the
update time. This comes, however, at the expense of a larger approximation
factor. We prove in Theorem 9 that, similar to what we did for Lemma 7, one
can devise an algorithm with worst-case update time $\tilde{O}(n^{\epsilon})$
with approximation factor $O((1/\epsilon)^{O(1/\epsilon)})$ for any
$\epsilon>0$.
###### Theorem 9.
For any constant $\epsilon>0$, there exists an algorithm for dynamic LIS whose
worst-case update time is $\tilde{O}(n^{\epsilon})$ and whose approximation
factor is $O((1/\epsilon)^{O(1/\epsilon)})$.
###### Proof.
| Dynamic algorithm | worst-case update time | approximation factor |
|---|---|---|
| $\mathcal{A^{\prime}}_{0}$ | $\tilde{O}(n)$ | $1$ |
| $\mathcal{A^{\prime}}_{1}$ | $\tilde{O}(n^{2/3+\kappa})$ | $O(1/\kappa)$ |
| $\mathcal{A^{\prime}}_{2}$ | $\tilde{O}(n^{1/2+\kappa})$ | $O(1/\kappa^{2})$ |
| $\mathcal{A^{\prime}}_{3}$ | $\tilde{O}(n^{2/5+\kappa})$ | $O(1/\kappa^{3})$ |
| ⋮ | ⋮ | ⋮ |
| $\mathcal{A^{\prime}}_{k}$ | $\tilde{O}(n^{\frac{2}{k+2}+\kappa})$ | $O(1/\kappa)^{k}$ |

Table 2: Guarantees of the dynamic solutions.

| block-based algorithm | $m$ | $f(n)$ | $g(n)$ | $h(n)$ | approximation factor |
|---|---|---|---|---|---|
| $\mathcal{A}_{1}$ | $n^{1/3}$ | $\tilde{O}(n^{1+\kappa})$ | $n^{2/3}$ | $\tilde{O}(n^{2/3+\kappa})$ | $O(1/\kappa)$ |
| $\mathcal{A}_{2}$ | $n^{1/4}$ | $\tilde{O}(n^{1+\kappa})$ | $n^{3/4}$ | $\tilde{O}(n^{1/2+\kappa})$ | $O(1/\kappa^{2})$ |
| $\mathcal{A}_{3}$ | $n^{1/5}$ | $\tilde{O}(n^{1+\kappa})$ | $n^{4/5}$ | $\tilde{O}(n^{2/5+\kappa})$ | $O(1/\kappa^{3})$ |
| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
| $\mathcal{A}_{k}$ | $n^{\frac{1}{k+2}}$ | $\tilde{O}(n^{1+\kappa})$ | $n^{\frac{k+1}{k+2}}$ | $\tilde{O}(n^{\frac{2}{k+2}+\kappa})$ | $O(1/\kappa)^{k}$ |

Table 3: Guarantees of the block-based solutions. We assume that the length of
the initial array $a$ for the block-based algorithm is $n$.
The proof is by induction. Let us fix a constant $0<\kappa<1$ that is used for
all recursions. For any $k\geq 1$, our aim is to design a dynamic algorithm
for LIS with approximation factor $O((1/\kappa)^{k})$ and worst-case update
time $\tilde{O}(n^{\frac{2}{k+2}+\kappa})$. We call such an algorithm
$\mathcal{A}^{\prime}_{k}$. The base case is for $k=0$ for which we already
know a solution $\mathcal{A}^{\prime}_{0}$. We also strengthen our hypothesis:
If instead of starting from an empty array, we start with an array of length
$n$, our algorithm needs a preprocessing time of $\tilde{O}(n^{1+\kappa})$.
Let us assume that for $k\geq 1$, $\mathcal{A}^{\prime}_{k-1}$ with desirable
guarantees is available and the goal is to design $\mathcal{A}^{\prime}_{k}$.
To this end, we first make a block-based algorithm $\mathcal{A}_{k}$ with the
following properties: For an initial array $a$ of length $n$, we set
$f(n)=\tilde{O}(n^{1+\kappa})$, $g(n)=n^{\frac{k+1}{k+2}}$, and
$h(n)=\tilde{O}(n^{\frac{2}{k+2}+\kappa})$. Similar to what we did in Lemma
7, we map every element of the initial array $a$ onto the 2D plane by putting
a point $(i,a_{i})$ for every element. We set $m=n^{\frac{1}{k+2}}$ and divide
the plane into an $m\times m$ grid, such that in each row and in each column
there are at most $\lceil n/m\rceil$ points. Moreover, we construct a grid
packing solution for the $m\times m$ grid with approximation guarantee
$(\tilde{O}(m^{\kappa}),O(1/\kappa))$. Next, for each segment we initiate an
LIS instance that will be solved with algorithm $\mathcal{A}^{\prime}_{k-1}$.
The preprocessing time consists of two parts: (i) constructing the solution to
grid packing in time $\tilde{O}(m^{2+\kappa})$ and (ii) constructing an
initial solution for each segment. Since the problem size for each segment is
bounded by $n/m$, every element appears in at most $\tilde{O}(m^{\kappa})$
segments, and the initialization time for $\mathcal{A}^{\prime}_{k-1}$ is
$\tilde{O}(n^{1+\kappa})$, this can be bounded by
$\tilde{O}(m^{\kappa}n(n/m)^{\kappa})=\tilde{O}(n^{1+\kappa})$. Thus, the
total preprocessing time is bounded by
$\tilde{O}(m^{2+\kappa}+n^{1+\kappa})=\tilde{O}(n^{1+\kappa}).$
By the construction of the grid, the size of the problem for each segment is
bounded by $O(n/m)=O(n^{\frac{k+1}{k+2}})$. Also, since
$g(n)=O(n^{\frac{k+1}{k+2}})$, the size of the problem instance corresponding
to each segment remains in this range throughout the $g(n)$ many steps. Each
time an operation arrives, we update the solution for $\tilde{O}(m^{\kappa})$
many segments each in time $\tilde{O}((n/m)^{\frac{2}{k+1}+\kappa})$ (recall
that we use $\mathcal{A}^{\prime}_{k-1}$ for the solution of the segments).
Thus, the update time is bounded by
$\displaystyle\tilde{O}(m^{\kappa}(n/m)^{\frac{2}{k+1}+\kappa})$
$\displaystyle=\tilde{O}(n^{\kappa}(n/m)^{\frac{2}{k+1}})$
$\displaystyle=\tilde{O}(n^{\kappa}(n^{\frac{k+1}{k+2}})^{\frac{2}{k+1}})$
$\displaystyle=\tilde{O}(n^{\kappa}n^{\frac{2}{k+2}})$
$\displaystyle=\tilde{O}(n^{\frac{2}{k+2}+\kappa}).$
Moreover, in order to find an approximate solution for LIS using non-
conflicting segments, we need to run a DP in time
$\tilde{O}(m^{2+\kappa})=\tilde{O}(n^{\frac{2+\kappa}{k+2}})=\tilde{O}(n^{\frac{2}{k+2}+\kappa})$.
Thus, the overall worst-case update time is equal to
$h(n)=\tilde{O}(n^{\frac{2}{k+2}+\kappa})$.
Finally, the approximation factor increases by a multiplicative factor of
$O(1/\kappa)$ in each level of recursion. Thus, the approximation factor of
$\mathcal{A}_{k}$ is bounded by $O((1/\kappa)^{k})$. Using Lemma 2, we can
then turn the block-based algorithm $\mathcal{A}_{k}$ into an algorithm for LIS
with worst-case update time $\tilde{O}(n^{\frac{2}{k+2}+\kappa})$. Finally,
since we are constructing $\mathcal{A}^{\prime}_{k}$ from a block-based
algorithm with preprocessing time $\tilde{O}(n^{1+\kappa})$, if we start from
an array $a$ of length $n$, we need a preprocessing time of
$\tilde{O}(n^{1+\kappa})$, which is the other condition of the hypothesis.
Now, for a fixed $\epsilon>0$, we set $\kappa=\epsilon/2$ and $k=\lceil
4/\epsilon\rceil$. Thus, the worst-case update time of the algorithm would be
bounded by
$\displaystyle\tilde{O}(n^{\frac{2}{k+2}+\kappa})$
$\displaystyle\leq\tilde{O}(n^{\frac{2}{k}+\kappa})$
$\displaystyle=\tilde{O}(n^{\frac{2}{k}+\epsilon/2})$
$\displaystyle\leq\tilde{O}(n^{\epsilon/2+\epsilon/2})$
$\displaystyle=\tilde{O}(n^{\epsilon})$
which is desired. Also, the approximation factor would be bounded by
$O((1/\epsilon)^{O(1/\epsilon)})$. ∎
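The parameter choice in this proof can be sanity-checked numerically; a small sketch of the bookkeeping (the function name is ours):

```python
import math

def guarantees(eps):
    """Exponent of the update time and approximation factor for the
    choice kappa = eps/2, k = ceil(4/eps) made in the proof."""
    kappa = eps / 2
    k = math.ceil(4 / eps)
    exponent = 2 / (k + 2) + kappa   # update time is O~(n^exponent)
    approx = (1 / kappa) ** k        # approximation factor O((1/kappa)^k)
    return exponent, approx

for eps in (0.5, 0.2, 0.1):
    exponent, _ = guarantees(eps)
    assert exponent <= eps  # within the promised O~(n^eps) update time
```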
## 6 $1+\epsilon$ Approximation for DTM
In this section, we give a dynamic algorithm for DTM with approximation factor
$1+\epsilon$ and update time $O(\log^{2}n)$. If access to the elements of the
array is provided in time $O(1)$, the update time improves to $O(\log n)$.
However, since we use a balanced binary tree for the elements, there is an
additional $O(\log n)$ overhead for the update time. We explain the algorithm
in four steps. In the first step, we present a simple algorithm that obtains
an approximation factor of $2$ with the same update time. Next, in Step 2, we
show how a $2$-approximate solution can be used to obtain an exact solution in
time $O(k^{2}\log n)$ when the solution size is bounded by $k$. In the third
step, we improve the runtime of the same algorithm to $O(k\log n)$. Finally,
we show in the last step that such an algorithm along with the $2$-approximate
solution in time $O(\log^{2}n)$ gives us a $1+\epsilon$ approximate solution
with update time $O(\log^{2}n)$.
### 6.1 A $2$-approximate Solution for DTM
Similar to previous work [17], we call a pair $(a_{i},a_{j})$ an inversion, if
$i<j$ but $a_{i}>a_{j}$. The heart of the analysis is that a maximal set of
disjoint inversion pairs is a 2-approximate solution for DTM. We first
give a formal proof of this claim and then show how such a maximal set can
be maintained with $O(\log^{2}n)$ update time.
###### Observation 1.
Let $a=\langle a_{1},a_{2},\ldots,a_{n}\rangle$ be an array of length $n$ and
$S=\{(a_{\alpha_{1}},a_{\beta_{1}}),(a_{\alpha_{2}},a_{\beta_{2}}),\ldots\}$
be a maximal set of disjoint inversions of $a$. Then we have
$|S|\leq\mathsf{DTM}(a)\leq 2|S|.$
###### Proof.
Since every pair $(a_{\alpha_{i}},a_{\beta_{i}})$ is an inversion, any
solution for DTM must remove at least one element from each pair, which
implies $\mathsf{DTM}(a)\geq|S|$. On the other hand, if we remove all the
$2|S|$ elements of $S$ from $a$, the remaining subsequence is increasing:
since $S$ is maximal, there is no inversion among the remaining elements. This
implies that $\mathsf{DTM}(a)\leq 2|S|$. ∎
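For intuition, a maximal set of disjoint inversions can be built offline by a single greedy pass (the dynamic maintenance is the subject of Lemma 10 below); a sketch:

```python
import bisect

def maximal_disjoint_inversions(a):
    """Greedy scan: keep the currently unmatched elements as an
    increasing list; a new element smaller than the last unmatched
    one closes an inversion pair with it."""
    unmatched, pairs = [], []
    for x in a:
        if unmatched and x < unmatched[-1]:
            pairs.append((unmatched.pop(), x))
        else:
            unmatched.append(x)
    # unmatched stays increasing, so no disjoint inversion remains: maximal
    return pairs

def lis_length(a):
    """Patience sorting, O(n log n)."""
    tails = []
    for x in a:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

a = [7, 2, 4, 1, 9, 6, 3, 5, 8]
s = len(maximal_disjoint_inversions(a))
dtm = len(a) - lis_length(a)     # DTM(a) = n - LIS(a)
assert s <= dtm <= 2 * s         # the bound of Observation 1
```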
Based on Observation 1, our 2-approximate algorithm maintains a maximal set of
disjoint inversions with update time $O(\log^{2}n)$.
###### Lemma 10.
There exists a $2$-approximate solution for $\mathsf{DTM}$ with update time
$O(\log^{2}n)$.
###### Proof.
Our algorithm maintains a maximal collection of disjoint inversion pairs,
namely $S$. In addition to this, both the elements used in this collection and
the elements not used in this collection are stored in balanced trees (a
red-black tree is one possible implementation) that allow for search,
insertion, and deletion in logarithmic time. We refer to the tree containing
the elements of $S$ by $T_{S}$ and the tree containing the other elements by
$T_{N}$.
Whenever a new element $a_{i}$ is inserted into the array, we first check if
it makes an inversion with the elements of $T_{N}$. Notice that these elements
are increasing in the order of their indices since there is no inversion
between them. Thus, in order to verify whether $a_{i}$ makes an inversion with
any element of $T_{N}$, we just need to compare it to the largest $a_{j}\in
T_{N}$ that is smaller than $a_{i}$ and the smallest $a_{j}\in T_{N}$ that is
larger than $a_{i}$. Both of these operations can be done in time
$O(\log^{2}n)$ (the exponent of $\log$ is 2 since it takes time $O(\log n)$
to get the value of the $i$’th element of the array). If an inversion
is detected, we add it to $S$ and update $T_{N}$ and $T_{S}$ accordingly.
Otherwise, we add $a_{i}$ to $T_{N}$.
Removing an element is also straightforward. If the element belongs to $T_{N}$,
then no action is required other than updating $T_{N}$. Otherwise, after
removing $a_{i}$, we have to be careful about the element that previously made
an inversion with $a_{i}$. That can be handled in time $O(\log^{2}n)$
similarly to adding new elements: we check if it makes an inversion with the
elements of $T_{N}$ and update both $T_{N}$ and $T_{S}$ accordingly. All these
operations can be done in time $O(\log^{2}n)$. ∎
### 6.2 From $2$-approximate Solution to an Exact Solution
We show that a $2$-approximate solution for DTM can be used to obtain an exact
solution. In fact, this idea carries over to any constant approximate solution
but for simplicity we state it only for $2$-approximate solutions. Let us
denote the size of the $2$-approximate solution by $k$. This way, we know that
the optimal solution is in range $[k/2,k]$.
We first construct a graph $G$ in the following way. Every element of the
array becomes one vertex of $G$ and we put an edge between two vertices if
their corresponding elements in $a$ make an inversion. This way, finding
distance to monotonicity of array $a$ is equivalent to finding the smallest
vertex cover of $G$.
Let set $S$ be all the elements that are removed from the array in our
$2$-approximate solution (thus we have $|S|=k$). We refer to the vertices of
$G$ corresponding to set $S$ by $v_{1}$, $v_{2}$, $\ldots$, $v_{k}$. The key
observation is that every edge of the graph is incident to at least one vertex
$v_{i}$ otherwise $\cup_{i\in[k]}\\{v_{i}\\}$ would not be a valid vertex
cover.
We call a vertex of the graph low-degree if its degree is upper bounded by $k$
and high-degree otherwise. Based on this, we divide the vertices corresponding
to set $S$ into two disjoint sets $L$ and $H$ containing the low-degree and
high-degree vertices. All vertices of $H$ have to be included in the optimal
solution otherwise all their neighbors should be included and their number is
more than $k$. Thus, we can include those vertices in our solution and remove
them from the graph (this includes their incident edges too).
For each remaining edge of the graph, one endpoint is in $L$. Moreover, the
degrees of the vertices in $L$ are bounded by $k$. Thus, the total number of
remaining edges in the graph is bounded by $k^{2}$. Therefore, apart from at
most $2k^{2}$ vertices, all the other vertices are isolated and definitely do
not contribute to the vertex cover. Thus, we need to solve the problem for
$O(k^{2})$ many elements. This is equivalent to finding DTM for $O(k^{2})$
many vertices which can be solved in time $O(k^{2}\log n)$. There is an
additional $O(\log n)$ overhead involved if access to each element requires
time $O(\log n)$.
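The reduction can be illustrated end to end. The sketch below builds the inversion graph naively in quadratic time (the paper replaces this with binary searches to get $O(k^{2}\log n)$), and the function names are ours:

```python
import bisect

def lis_length(a):
    """Patience sorting, O(n log n)."""
    tails = []
    for x in a:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def dtm_via_kernel(a, cover):
    """cover: indices whose removal leaves `a` increasing, |cover| = k.
    Returns DTM(a) = minimum vertex cover of the inversion graph."""
    n, k = len(a), len(cover)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if a[i] > a[j]]
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    # vertices of degree > k are forced into any cover of size <= k;
    # they all lie in `cover`, since any other vertex has degree <= k
    H = {v for v in cover if deg[v] > k}
    rest = [(i, j) for i, j in edges if i not in H and j not in H]
    # only the at most 2k^2 endpoints of surviving edges can matter
    active = sorted({v for e in rest for v in e})
    sub = [a[v] for v in active]
    return len(H) + (len(sub) - lis_length(sub))

a = [7, 2, 4, 1, 9, 6, 3, 5, 8]
cover = [0, 1, 2, 4, 5]           # removing these leaves 1, 3, 5, 8
assert dtm_via_kernel(a, cover) == len(a) - lis_length(a)
```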
###### Lemma 11.
Let $a$ be an array of length $n$ and $S$ be a set of $k$ elements whose
removal from $a$ makes $a$ increasing. Provided oracle access to the elements
of $a$, one can compute the distance to monotonicity of $a$ in time
$O(k^{2}\log n)$.
###### Proof.
The correctness of the algorithm is already discussed. Here we just show a
bound on the runtime. Since set $S$ is given, we just need to compute the
degree of each vertex. There are $k$ elements in set $S$ so detecting the
edges between them can be done in time $O(k^{2})$. Moreover, for every vertex
$v_{i}$ corresponding to the elements of $S$, detecting its edges to the rest
of the elements can also be done in time $O(\log n)$ since a binary search
suffices for that purpose. Therefore, the total runtime is $O(k^{2}\log n)$. ∎
### 6.3 Exact Solution in Quasi-linear Time
We show that the runtime of the algorithm of Section 6.2 can be improved to
quasi-linear. Let us for simplicity divide the elements of the array into two
sets $S$ and $N$. Set $S$ corresponds to the elements of the approximate
solution (whose removal makes the array increasing) and set $N$ contains the
rest of the elements. Obviously, set $N$ is increasing otherwise $S$ would not
be a valid solution to distance to monotonicity. The key to our improvement is
the following observation:
###### Observation 2.
Let $a_{i}<a_{j}<a_{k}$ be three elements of set $N$ and $a_{l}$ be an element
of set $S$. If both $(a_{i},a_{l})$ and $(a_{k},a_{l})$ are inversions, then
$(a_{j},a_{l})$ is also an inversion.
###### Proof.
Notice that since we have $a_{i}<a_{j}<a_{k}$ and all three are in $N$, then
we can infer that $i<j<k$. On the other hand, since both $(a_{i},a_{l})$ and
$(a_{k},a_{l})$ are inversions, then either both $l<i$ and $a_{l}>a_{k}$ hold
or both $l>k$ and $a_{l}<a_{i}$ hold. In either case, $(a_{j},a_{l})$ makes an
inversion. ∎
Observation 2 shows that any element of $S$ makes an inversion with an
interval of the elements in $N$. This implies an important consequence: Label
each element $a_{i}\in N$ with a set of elements $I(a_{i})\subseteq S$ such
that for each element $a_{j}\in I(a_{i})$, pair $(a_{i},a_{j})$ makes an
inversion. Based on Observation 2 we can prove that the total number of
distinct labels in $N$ is bounded by $2|S|+1$.
###### Lemma 12.
For each element $a_{i}\in N$, define its label by $I(a_{i})\subseteq S$ where
$I(a_{i})$ contains all elements of $S$ that make an inversion with $a_{i}$.
Then we have:
$|\bigcup_{a_{i}\in N}\{I(a_{i})\}|\leq 2|S|+1.$
###### Proof.
For each element $a_{j}\in S$ that makes an inversion with an element of $N$
define two thresholds $\alpha$ and $\beta$ where $\alpha$ is the smallest
element of $N$ that makes an inversion with $a_{j}$ and $\beta$ is the
smallest element of $N$ larger than $\alpha$ that does not make an inversion
with $a_{j}$. Due to Observation 2, an element of $N$ makes an inversion with
$a_{j}$ if and only if its value is at least $\alpha$ and smaller than
$\beta$. The total number of thresholds for all elements of $S$ is at most
$2|S|$. Moreover, if two elements of $N$ are not separated by any threshold,
then their labels are the same. Thus, the total number of distinct labels is
bounded by $2|S|+1$. ∎
Figure 16: Inversions between the elements of $S$ and $N$ are shown in
this figure. Elements of $N$ with equal labels are colored similarly.
Lemma 12 is important from an algorithmic point of view because of the
following: if two elements have the same label (and thus the same set of
conflicting elements in $S$), either they both contribute to the optimal
solution (removed from the array) or they both remain in the array. Thus, one
can merge these elements into a single larger element of the array that
represents both of them. More generally, for each label, if we merge all the
elements attributed to that label into a single element (with a larger
weight), the size of the optimal solution remains unchanged. Since the total
number of labels is bounded by $2|S|+1$, this transformation leaves us with
$2|S|+1$ elements and we only need to solve the problem for them. This can be
done in time $O(k\log n)$ as shown in Lemma 13. We note that in the above, we
assume random access to the elements of the array is provided in time $O(1)$.
###### Lemma 13.
Given query access to an array $a$ of length $n$ and a $2$-approximate
solution for distance to monotonicity of $a$ with size $k$, one can find an
exact solution for distance to monotonicity in time $O(k\log n)$.
###### Proof.
The algorithm and its correctness are outlined above. Here we just explain the
runtime. Since the elements of $N$ are stored in a balanced tree data
structure, for each element of $S$ we can find in time $O(\log n)$ which
interval of the elements of $N$ makes an inversion with it. This takes a total
runtime of $O(k\log n)$. Next, we sort all the intervals and merge elements of
$N$ whose labels are the same. We refer to these merged elements as super
elements. The value of a super element can be equal to the value of an
arbitrary element used in its construction.
After constructing the super elements in time $O(k\log n)$ the size of the
problem reduces to $O(k)$. However, patience sorting does not solve this
problem since super elements have weights. In other words, removing a super
element incurs a cost equal to the number of elements used to make it.
Nonetheless, it is known [21] that even if the elements are weighted, for an
array of size $k$, one can solve both LIS and DTM in time $O(k\log k)$. ∎
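The weighted variant can be sketched with a Fenwick (binary indexed) tree over value ranks, which is one standard way to obtain an $O(k\log k)$ bound; the code below is an illustrative implementation, not the one from [21]:

```python
def weighted_lis(values, weights):
    """Maximum total weight of an increasing subsequence, O(k log k).

    A Fenwick tree over value ranks stores, for each prefix of ranks,
    the best weight of an increasing subsequence ending in that prefix.
    """
    k = len(values)
    rank = {v: r + 1 for r, v in enumerate(sorted(values))}
    tree = [0] * (k + 1)  # prefix-maximum Fenwick tree

    def update(i, x):
        while i <= k:
            tree[i] = max(tree[i], x)
            i += i & -i

    def query(i):  # max over ranks 1..i
        best = 0
        while i > 0:
            best = max(best, tree[i])
            i -= i & -i
        return best

    best = 0
    for v, w in zip(values, weights):
        cur = query(rank[v] - 1) + w  # extend the best chain below v
        update(rank[v], cur)
        best = max(best, cur)
    return best

# unit weights recover the ordinary LIS; the weighted DTM of the super
# elements is then the total weight minus the weighted LIS
assert weighted_lis([7, 2, 4, 1, 9, 6, 3, 5, 8], [1] * 9) == 4
assert weighted_lis([3, 1, 2], [5, 1, 1]) == 5
```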
### 6.4 $1+\epsilon$ Approximation for DTM with Update Time $O(\log^{2}n)$
The last step is to obtain a $1+\epsilon$ approximate solution using the above
techniques. In parallel, we always run the algorithm of Section 6.1 to
maintain a 2-approximate solution. In order to obtain a $1+\epsilon$
approximation algorithm, we use the framework of Section 3. To design our
block-based algorithm, we start with an array of length $n$. We set the
preprocessing time to $O(k\log^{2}n)$, where $k$ is the size of the
2-approximate solution (which we know from the parallel algorithm that we
run). In the preprocessing phase, we find an exact solution (say
$k^{\prime}\geq k/2$) in time $O(k\log^{2}n)$ (the additional $O(\log n)$
factor is due to the data structure used for random access to the elements of
the array) for the array and report it as the value of distance to
monotonicity. We set $g(a)=\epsilon k/2$ and for the next $\epsilon k/2$
steps, we report $d+i$ for the $i$’th operation, where $d$ is the solution for
the initial array. This is clearly an upper bound on the size of the solution
as well as a $1+\epsilon$ approximation of it. Thus, the worst-case update
time is $O(1)$.
By Lemma 3, our block-based algorithm can be turned into a dynamic algorithm
with worst-case update time $O(\log^{2}n)$.
###### Theorem 14.
For any $\epsilon>0$, there exists a dynamic algorithm for distance to
monotonicity with approximation factor $1+\epsilon$ and worst-case update time
$O(\log^{2}n)$.
###### Proof.
For an array $a$, our block-based algorithm has preprocessing time
$f(a)=O(k\log^{2}n)$ where $k=\mathsf{DTM}(a)$. Also, $g(a)=\epsilon k/2$ and
$h(a)=O(1)$. Thus, by Lemma 2, the worst-case update time of the equivalent
dynamic algorithm is $O(\log^{2}n)$. Notice that since after $g(a)$ steps the
size of $\mathsf{DTM}(a)$ changes only by a small factor, all functions $f,g,$
and $h$ remain asymptotically the same, which implies relativity. ∎
## References
* [1] N. Ailon, B. Chazelle, S. Comandur, and D. Liu. Estimating the distance to a monotone function. In RANDOM 2004.
* [2] A. Andoni and H. L. Nguyen. Near-optimal sublinear time algorithms for Ulam distance. In SODA 2010.
* [3] S. Assadi, K. Onak, B. Schieber, and S. Solomon. Fully dynamic maximal independent set with sublinear in n update time. In SODA 2019.
* [4] S. Assadi, K. Onak, B. Schieber, and S. Solomon. Fully dynamic maximal independent set with sublinear update time. In STOC 2018.
* [5] S. Behnezhad, M. Derakhshan, M. Hajiaghayi, C. Stein, and M. Sudan. Fully dynamic maximal independent set with polylogarithmic update time. In FOCS 2019.
* [6] M. Boroujeni and S. Seddighin. Improved MPC algorithms for edit distance and ulam distance. In SPAA 2019.
* [7] A. Chen, T. Chu, and N. Pinsker. The dynamic longest increasing subsequence problem. arXiv preprint arXiv:1309.7724, 2013.
* [8] G. Cormode, M. Mitzenmacher, and J. Thaler. Streaming graph computations with a helpful advisor. In ESA 2010.
* [9] S. Dasgupta, C. H. Papadimitriou, and U. V. Vazirani. Algorithms. McGraw-Hill Higher Education, 2008.
* [10] Y. Dodis, O. Goldreich, E. Lehman, S. Raskhodnikova, D. Ron, and A. Samorodnitsky. Improved testing algorithms for monotonicity. In RANDOM-APPROX 1999.
* [11] F. Ergün, S. Kannan, R. Kumar, R. Rubinfeld, and M. Viswanathan. Spot-checkers. In STOC 1998.
* [12] E. Fischer. The art of uninformed decisions. Bulletin of the EATCS, 75:97, 2001.
* [13] M. L. Fredman. On computing the length of longest increasing subsequences. Discrete Mathematics, 11(1):29–35, 1975.
* [14] A. Gál and P. Gopalan. Lower bounds on streaming algorithms for approximating the length of the longest increasing subsequence. In FOCS 2007.
* [15] P. Gawrychowski and W. Janczewski. Fully dynamic approximation of LIS in polylogarithmic time, 2020.
* [16] P. Gawrychowski, A. Karczmarz, T. Kociumaka, J. Łacki, and P. Sankowski. Optimal dynamic strings. In SODA 2018.
* [17] P. Gopalan, T. S. Jayram, R. Krauthgamer, and R. Kumar. Estimating the sortedness of a data stream. In SODA 2007.
* [18] A. Grønlund and S. Pettie. Threesomes, degenerates, and love triangles. In FOCS, 2014.
* [19] M. Henzinger, S. Krinninger, D. Nanongkai, and T. Saranurak. Unifying and strengthening hardness for dynamic problems via the online matrix-vector multiplication conjecture. In STOC 2015.
* [20] S. Im, B. Moseley, and X. Sun. Efficient massively parallel methods for dynamic programming. In STOC 2017.
* [21] G. Jacobson and K.-P. Vo. Heaviest increasing/common subsequence problems. In Annual Symposium on Combinatorial Pattern Matching, pages 52–66. Springer, 1992.
* [22] T. Kociumaka and S. Seddighin. Improved dynamic algorithms for longest increasing subsequence, 2020.
* [23] J. Lacki, J. Ocwieja, M. Pilipczuk, P. Sankowski, and A. Zych. The power of dynamic distance oracles: Efficient dynamic algorithms for the steiner tree. In STOC 2015.
* [24] M. Mitzenmacher and S. Seddighin. Dynamic algorithms for LIS and distance to monotonicity. In STOC, 2020.
* [25] M. Mitzenmacher and S. Seddighin. Erdős–Szekeres partitioning problem. arXiv preprint arXiv:2011.10870, 2020.
* [26] M. Mitzenmacher and S. Seddighin. Improved sublinear time algorithms for longest increasing subsequence. In SODA, 2021.
* [27] D. Nanongkai and T. Saranurak. Dynamic spanning forest with worst-case update time: adaptive, las vegas, and $O(n^{1/2-\epsilon})$-time. In STOC 2017.
* [28] D. Nanongkai, T. Saranurak, and C. Wulff-Nilsen. Dynamic minimum spanning forest with subpolynomial worst-case update time. In FOCS 2017.
* [29] T. Naumovitz, M. E. Saks, and C. Seshadhri. Accurate and nearly optimal sublinear approximations to ulam distance. In SODA 2017.
* [30] S. Pettie. On the shortest path and minimum spanning tree problems. PhD thesis, 2003.
* [31] P. Ramanan. Tight $\Omega(n\lg n)$ lower bound for finding a longest increasing subsequence. International Journal of Computer Mathematics, 65(3-4):161–164, 1997.
* [32] A. Rubinstein, S. Seddighin, Z. Song, and X. Sun. Approximation algorithms for LCS and LIS with truly improved running times. In FOCS, 2019.
* [33] M. Saks and C. Seshadhri. Estimating the longest increasing sequence in polylogarithmic time. In FOCS, 2010.
* [34] R. B. Yehuda and S. Fogel. Partitioning a sequence into few monotone subsequences. Acta Informatica, 35(5):421–440, 1998.
## Appendix A Streaming Algorithm for LIS
We outlined an improved streaming algorithm for LIS in Section 2. Here we give
a proof of its correctness and a bound on its memory. In this setting, we
assume that the input is available to us in any desired order; we call this
setting streaming with advisory help. Previous work solves the problem with
memory $O(\sqrt{n})$ within a factor of $1+\epsilon$ in a single round [17].
###### Observation 3.
For any $0<\kappa<1$, there exists a streaming algorithm that uses advisory
help and approximates the LIS of an array of length $n$ with memory
$\tilde{O}(n^{2/5+\kappa})$ within a factor $O(1/\kappa)$ in three rounds.
This algorithm is randomized and gives an approximate solution with
probability at least $1-1/n$.
###### Proof.
Let $m=n^{1/5}$ be the size of the grid. We use an $(O(m^{\kappa}\log
m),O(1/\kappa))$ approximate solution of grid packing to cover the grid cells
with segments. Similar to what we did before, we think of each element $i$ of
the array as a point $(i,a_{i})$ of the 2D plane. As mentioned earlier, in the
first round, we sample $m-1$ elements from the array. We use these elements to
draw the horizontal lines of the grid. It follows from standard Chernoff bound
that since the elements are chosen uniformly at random, the number of elements
in every row of the grid is bounded by $10n/m\log n$ with probability at least
$1-1/n$. Also, we draw the vertical lines evenly so that they divide the
elements into chunks of size $n/m$.
In the next two rounds, we ask for the elements of the array in the row-order
and column-order. More precisely, in the second round, we first ask for the
elements falling in the first row of the grid (in the column order). Next, we
ask for the elements of the second row and so on. For each row, we approximate
the value of the LIS for each segment. Since the total number of elements in
every row is bounded by $10n/m\log n$, we only need memory
$\tilde{O}(\sqrt{n/m\log n})=\tilde{O}(n^{2/5})$ to approximate the value of
LIS for each segment. However, we need this much memory for multiple segments.
This adds an overhead of $\tilde{O}(m^{\kappa})$ to the memory since each grid
cell may be covered by at most $\tilde{O}(m^{\kappa})$ segments. Similarly, in
the third round, we ask for the elements in the column-order.
Finally, we use a DP to find a subset of non-conflicting segments with the
largest total sum of LIS. This can be done with memory
$\tilde{O}(m^{2+\kappa})=\tilde{O}(n^{2/5+\kappa})$ as the number of segments
is bounded by $\tilde{O}(m^{2+\kappa})$.
The correctness of the algorithm is similar to the one given for dynamic LIS.
We fix an arbitrary LIS of the array and assume that the adversary puts the
contribution of each cell of the grid to the fixed LIS. The LIS of each
segment is a clear upper bound on the score of that segment on the grid,
however, if we use those values instead of their scores, we still obtain a
valid solution. Finally, since our solution for the grid packing problem is
$(O(m^{\kappa}\log m),O(1/\kappa))$ approximate, then the score we obtain
using non-conflicting segments is at least an $\Omega(\kappa)$ fraction of the
score of the grid which is equal to the size of the LIS. ∎
## Appendix B Improved Algorithm for $\mathsf{LIS}^{+}$
We provide the intuition behind the algorithm and then the formal proof. For
simplicity, we assume here that we have random access to all elements of the
array in time $O(1)$. Our first goal is to design a block-based algorithm for
an array $a$ of length $n$. We set $f(n)=O(n\log n)$ and in the preprocessing
phase we compute the LIS of $a$. Let this value be $x$. Since we only add
elements in the $\mathsf{LIS}^{+}$ problem, from here on, $x$ is a lower bound
for the solution value. For the next $g(n)=\sqrt{n}$ operations, every new
element is added to a separate set, and after each operation, the LIS of the
separate set is computed in time $O(\sqrt{n}\log n)$. At an arbitrary step,
let this value be $y$. The key observation is that the overall LIS of the
array lies in the range $[\max\\{x,y\\},x+y]$, and since $x+y\leq
2\max\\{x,y\\}$, reporting $\max\\{x,y\\}$ gives a 2-approximate solution. This block-based
algorithm for $\mathsf{LIS}^{+}$ with $f(n)=O(n\log n)$, $g(n)=\sqrt{n}$, and
$h(n)=O(\sqrt{n}\log n)$ yields a dynamic algorithm with worst-case update
time $\tilde{O}(\sqrt{n})$.
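The block-based scheme above can be sketched in a few lines. The following Python fragment is illustrative only and uses our own naming; the side buffer stands in for the "separate set" of new elements, and the $O(\sqrt{n}\log n)$ amortization over operations is not reproduced:

```python
import bisect

def lis_length(arr):
    """LIS length in O(n log n) via patience sorting."""
    tails = []
    for x in arr:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

class BlockLISPlus:
    """2-approximate LIS under insertions: x = LIS of the initial array,
    y = LIS of the buffer of newly inserted elements; report max(x, y).
    The true LIS lies in [max(x, y), x + y] and x + y <= 2 max(x, y)."""
    def __init__(self, arr):
        self.x = lis_length(arr)
        self.buffer = []

    def insert(self, value):
        self.buffer.append(value)
        return max(self.x, lis_length(self.buffer))
```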
We recurse to improve the runtime down to $O(n^{\epsilon})$ for any
$\epsilon>0$. Instead of using the naive algorithm for the $g(n)$ operations
after the initialization, we can use the more advanced algorithm explained
above (its update time is a factor $\tilde{\Omega}(\sqrt{n})$ better than computing the
LIS from scratch every time). As in Section 5, in each level
of recursion, we use the previous algorithm for the operations after
initialization. The advantage of this approach in this setting over Section 5
is that the approximation factor of the algorithm depends linearly on
$1/\epsilon$ (and not exponentially). To see this, assume that at some point,
$x$ is the solution for the initial array and $y$ is an $\alpha$-approximation
for the solution of the second set of operations. The optimal solution is
upper bounded by $x+\alpha y$ and lower bounded by $\max\\{x,y\\}$. Therefore,
by reporting $\max\\{x,y\\}$ we can be sure that our approximation factor is
bounded by $\alpha+1$. In other words, if we recurse on this algorithm
$\alpha$ times, then the approximation factor is bounded by $\alpha+1$. This
results in an algorithm with worst-case update time $\tilde{O}(n^{\epsilon})$
and approximation factor $O(1/\epsilon)$.
###### Observation 4.
For any constant $\epsilon>0$, there exists an algorithm for dynamic
$\mathsf{LIS}^{+}$ whose worst-case update time is $\tilde{O}(n^{\epsilon})$
and whose approximation factor is $O(1/\epsilon)$.
If one could show the statement of Observation 4 for any (possibly sub-
constant) $\epsilon>0$, then by setting $\epsilon=1/\log n$, we could obtain a
dynamic algorithm for $\mathsf{LIS}^{+}$ with polylogarithmic update time and
logarithmic approximation factor. However, since there is a constant factor
overhead in every recursion, Observation 4 only works when $\epsilon$ is
constant. This overhead is incurred in the reduction from block-based
algorithms to dynamic algorithms. In the following, we provide a variation of the
same algorithm that does not use this reduction and achieves polylogarithmic
update time with a logarithmic approximation factor.
Figure 17: The buckets are shown for the first 10 operations. Buckets that are
disconnected and are held together with dashed rectangles are not finalized
yet. Connected pieces show finalized buckets.
###### Theorem 15.
There exists an algorithm for dynamic $\mathsf{LIS}^{+}$ whose worst-case
update time is $O(\log^{3}n)$ and whose approximation factor is $O(\log n)$.
###### Proof.
In our algorithm, we put the elements in buckets and for every bucket, we
compute the value of the LIS. As more buckets are made, we combine them to
construct larger buckets. The sizes of the buckets are always powers of 2.
In the beginning, the array is empty and there are no buckets. When the first
element is inserted, we construct the first bucket that contains only that
element. After the construction of each bucket, we compute the LIS of that
bucket over the next steps. More precisely, when a bucket of size $k$ is
constructed, we divide the task of computing the LIS of that bucket into $k$
pieces and execute these pieces in the next $k$ operations. Thus, when a
bucket of size $1$ is constructed, its LIS is computed immediately. We say a
bucket is finalized, when our algorithm has already computed its LIS. In our
algorithm, we only merge finalized buckets to make larger ones and thus, we
maintain the property that at each point in time, each element appears in
exactly one finalized bucket.
Every time a new element is inserted, we make a bucket containing that element
alone. However, when there are two finalized buckets of the same size (say
$k$), we merge them to obtain a bucket of size $2k$. After this, it takes
$2k-1$ more steps to finalize the new bucket but once the new bucket is
finalized, we remove the two smaller buckets. This way, each element appears
in exactly one finalized bucket at a time throughout the process.
At any point in time, we approximate the LIS of the array by the maximum
solution for any of the finalized buckets. We prove that with this
construction, there are at most $O(\log n)$ finalized buckets at every point
in time and moreover, the number of buckets that are not finalized is also
bounded by $O(\log n)$.
This immediately implies that the approximation factor of our algorithm is
bounded by $O(\log n)$. The reason is that each element is always included in
exactly one finalized bucket at a time and therefore the total sum of LIS’s
for all finalized buckets is an upper bound on the size of the solution.
Moreover, the maximum solution size for each bucket is a lower bound on the
LIS of the entire array. Since the number of finalized buckets is bounded by
$O(\log n)$ this implies that the approximation factor of our algorithm is
bounded by $O(\log n)$.
We also bound the runtime by $O(\log^{3}n)$. In our algorithm, at every point
in time, there are $O(\log n)$ different buckets that are not finalized yet.
The total runtime needed to compute the LIS of a bucket of size $k$ is
$O(k\log k)$ which is divided over $k$ steps. Thus, each bucket which is not
finalized yet requires time at most $O(\log n)$ for each step. Thus the
overall runtime is $O(\log^{2}n)$ for each step. One thing to keep in mind is
that we use a balanced tree data structure to access the elements of the array
which adds an overhead of $O(\log n)$. Therefore, the worst-case update time
is $O(\log^{3}n)$.
It follows from the construction of the buckets that at each step, there is at
most one bucket of each size which is not finalized. The reason is that after
a bucket of size $k$ is made, it takes $k$ more steps to make another bucket
of the same size. However, before the new bucket is made, the first one will
be finalized. Since the sizes of the buckets are powers of $2$, this implies
that there are at most $\lfloor\log n\rfloor$ such buckets. Moreover, this also
shows that the number of finalized buckets of each size is bounded by $3$,
since otherwise two buckets of a larger size would exist that are not finalized yet. ∎
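The bucket bookkeeping in the proof above behaves like a binary counter. A simplified Python sketch (ours, not from the paper) that computes each bucket's LIS eagerly, rather than spreading the work over the next $k$ operations:

```python
import bisect

def lis_length(arr):
    """LIS length in O(n log n) via patience sorting."""
    tails = []
    for x in arr:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

class BucketLIS:
    """O(log n)-approximate LIS under appends via power-of-two buckets.
    Equal-sized buckets are merged as in a binary counter; the reported
    answer is the maximum LIS over all buckets."""
    def __init__(self):
        self.buckets = []   # list of (elements, lis); sizes are powers of 2

    def insert(self, value):
        new = ([value], 1)
        # merge while the last bucket has the same size as the new one
        while self.buckets and len(self.buckets[-1][0]) == len(new[0]):
            elems, _ = self.buckets.pop()
            merged = elems + new[0]
            new = (merged, lis_length(merged))
        self.buckets.append(new)
        return max(l for _, l in self.buckets)
```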
## Appendix C Sequential Algorithm for DTM
We present a simple comparison-based algorithm for DTM with approximation
factor $2$ that runs in time $O(n)$. Using Lemma 12, the approximation factor
improves to $1+\epsilon$ in the following way: If the solution size is bounded
by $\sqrt{n}$, then Lemma 12 gives an exact solution in time
$\tilde{O}(\sqrt{n})$. Otherwise, one can find a $1+\epsilon$ approximate
solution in time $\tilde{O}(\sqrt{n})$ using the solution of [29]. Thus, the
main bottleneck of the runtime is for the computation of an approximate
solution, which we show can be done in time $O(n)$.
By Observation 1, a 2-approximate solution can be obtained by computing a
maximal set of inversion pairs. Our algorithm finds such a set in linear time.
We begin with an empty stack. We iterate over the elements of the array and each
time we compare the new element to the element at the top of the stack (if
any). If this pair makes an inversion, we put this pair in a set $S$ and
remove the last element of the stack. Otherwise, we put the new element on top
of the stack and continue on. Obviously, the runtime is $O(n)$, since each
element $a_{i}$ is processed in time $O(1)$. The correctness of the algorithm
follows from the fact that the numbers on the stack are always non-decreasing and
therefore there is no inversion between them. Thus, the set $S$ is a maximal set
of inversion pairs.
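A minimal Python sketch of this stack scan (names are ours):

```python
def maximal_inversion_pairs(arr):
    """Maximal set of disjoint inversion pairs in O(n).

    Stack invariant: values on the stack are non-decreasing, so no
    inversion survives among them, which makes the returned set maximal.
    Then |S| <= DTM(arr) <= 2|S|, yielding a 2-approximation.
    """
    stack, pairs = [], []                   # stack holds indices
    for i, x in enumerate(arr):
        if stack and arr[stack[-1]] > x:    # inversion with the stack top
            pairs.append((stack.pop(), i))  # pair both, drop both
        else:
            stack.append(i)
    return pairs
```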
###### Observation 5.
For any constant $\epsilon>0$, DTM can be approximated within a factor
$1+\epsilon$ in time $O(n)$.
We remark that in the above observation, factors that depend on $1/\epsilon$
are hidden in the $O$ notation.
## Appendix D The Algorithm of Chen et al. [7]
For formal proofs, we refer the reader to [7]. Chen et al. [7] propose the
following algorithm to maintain a solution for dynamic LIS.
For each element $i$ of the array, define $l(i)$ to be the size of the longest
increasing subsequence ending at element $a_{i}$ of the array. Chen et al. [7]
refer to this quantity as the level of element $i$. Notice that $l(i)$ can be
computed in time $\tilde{O}(n)$ for all elements of the array using the
patience sorting algorithm.
Define $L_{k}$ to be the set of elements whose levels are equal to $k$. The
algorithm of Chen et al. [7] maintains a balanced binary tree for each $L_{k}$
that contains the corresponding elements. One key observation is that for each
$k$, all the elements of $L_{k}$ are decreasing, otherwise their levels would
not be the same.
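For illustration, the levels $l(i)$ and the sets $L_{k}$ can be computed with patience sorting in a few lines of Python; this is a static sketch of the quantities involved, not the balanced-tree data structure of [7]:

```python
import bisect
from collections import defaultdict

def levels(arr):
    """l(i): length of the longest strictly increasing subsequence ending
    at a[i], computed via patience sorting in O(n log n)."""
    tails, lvl = [], []
    for x in arr:
        k = bisect.bisect_left(tails, x)   # pile index where x lands
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
        lvl.append(k + 1)
    return lvl

def level_sets(arr):
    """Group the array values by level: L_k = elements with l(i) == k."""
    Lk = defaultdict(list)
    for x, l in zip(arr, levels(arr)):
        Lk[l].append(x)
    return dict(Lk)
```

On the array of Figure 18 (before inserting $3$), this reproduces the level sets shown there, e.g. $L_{6}=\langle 9,8\rangle$.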
When a new element is added to the array, $L_{k}$’s may change. More
precisely, after an element addition, the levels of some elements may change
(but only by 1). Similarly, element removal may change the levels of the
elements of the array but again the change is bounded by $1$. Chen et al. [7],
show that after an insertion, for each $L_{k}$, the levels of only one
interval of the elements may increase. In other words, for each $L_{k}$, there
are two numbers $\alpha$ and $\beta$ such that all the elements whose values
are within $[\alpha,\beta]$ increase their levels and the rest remain in
$L_{k}$.
Thus, they use a special balanced tree structure that allows for interval
deletion and interval addition in logarithmic time. Therefore, all that
remains is to detect which interval of each $L_{k}$ changes after each
operation. They show that this can be computed in time $O(\log n)$ for all
$L_{k}$’s via binary search. Since the number of different levels is equal to
the size of the LIS, their update time depends on the size of the solution.
Figure 18: This example shows how adding element $3$ to the array $\langle 1,5,2,4,6,7,9,10,8\rangle$ changes the levels of the elements. Before the insertion, the level sets are $L_{1}=\langle 1\rangle$, $L_{2}=\langle 5,2\rangle$, $L_{3}=\langle 4\rangle$, $L_{4}=\langle 6\rangle$, $L_{5}=\langle 7\rangle$, $L_{6}=\langle 9,8\rangle$, and $L_{7}=\langle 10\rangle$. Upward arrows show that the level of the corresponding element increases after we add $3$ to the array.
When $n$ elements are given, their runtime for constructing the data structure
is $\tilde{O}(n)$ since patience sorting gives us all the levels in time
$\tilde{O}(n)$ and the balanced trees can be constructed in time
$\tilde{O}(n)$ for all $L_{k}$.
# Integral smoothed particle hydrodynamics with an improved partition of unit
and a better track of contact discontinuities
Domingo García-Senz<EMAIL_ADDRESS>Departament de Física, Universitat
Politècnica de Catalunya, Avinguda Eduard Maristany 16, E-08019 Barcelona
(Spain) Institut d’Estudis Espacials de Catalunya, Gran Capità 2-4, E-08034
Barcelona, Spain Rubén M. Cabezón<EMAIL_ADDRESS>Center for
Scientific Computing - sciCORE. Universität Basel
Klingelbergstrasse 61, 4056 Basel, Switzerland Jose A. Escartín
<EMAIL_ADDRESS>Euclid Science Data Center - Max Planck Institute for
Extraterrestrial Physics
Gießenbachstraße 1, 85748 Garching b. München, Germany
###### Abstract
The correct evaluation of gradients is at the cornerstone of the smoothed
particle hydrodynamics (SPH) technique. Using an integral approach to estimate
gradients has proven to enhance accuracy substantially. Such an approach retains
the Lagrangian structure of the SPH equations and is fully conservative. In this
paper we study, among other things, the connection between the choice of the
volume elements (VEs), which enters in the SPH summations, and the accuracy in
the gradient estimation within the integral approach scheme (ISPH). A new kind
of VE is proposed which improves the partition of unit and is fully
compatible with the Lagrangian formulation of SPH, including the grad-h
corrections. Using analytic considerations, simple static toy models in 1D,
and a few full 3D test cases, we show that any improvement in the partition of
unit also leads to a better calculation of gradients when the integral
approach is used jointly. Additionally, we propose a simple-to-implement
variant of the ISPH scheme which is better suited to handling sharp density
contrasts.
Keywords: Numerical hydrodynamics, smoothed particle hydrodynamics.
## 1 Introduction
Smoothed particle hydrodynamics (SPH) is a firmly established numerical
technique, able to successfully cope with many cutting-edge problems in
physics, astrophysics, and engineering. This technique has undergone a
sustained enhancement since its original formulation [1, 2] and it is still
evolving at a good pace [e.g. 3, 4, 5]. A landmark in that evolution concerns
the estimation of derivatives and gradients, which can be done by many
different approaches.
The standard way of calculating gradients by directly taking the analytic
derivative of the interpolating kernel function leads to E0-errors, even in
the presence of constant pressure fields in non-uniform particle distributions
[6, 7]. Another proposal adapts the moving least-squares (MLS) technique to
SPH [8] to ensure an exact interpolation of linear functions. Adding
renormalization corrections to both the kernel and the gradient of the kernel
has proved to enhance the accuracy in the calculation of gradients and to
reduce the tensile instability [9]. A variational principle jointly with a
kernel renormalization was used [10] to reduce the tensile instability and
simulate fluid systems with free surfaces. Nevertheless, these MLS and
renormalization techniques, in general, do not guarantee the perfect
conservation of the whole set of physical laws governing the motion of the
fluid, which are at the foundations of the SPH technique. A recent proposal
was the Conservative Reproducing Kernel SPH method (CRKSPH) of Frontiere et al.
[11], which enforces perfect linear interpolation and, at the same time,
retains the linear momentum and energy conservation properties.
An alternative way of estimating gradients was devised by García-Senz et al.
[12]. In their proposal, gradients are calculated from an integral expression,
so that there is no need to explicitly calculate the analytic derivative of
the kernel function. They also proved that such integral approach can be
completely compatible with the Lagrangian formulation of the SPH equations,
leading to the Integral Smoothed Particle Hydrodynamics (ISPH) scheme (named
IAD0 in the seminal paper by [12]). It was shown that the ISPH formulation has
the same conservation properties as the standard, Lagrangian-born SPH [13]. In
particular, a remarkable feature of ISPH is that it reduces the E0-error in
the derivatives [12, 5, 14].
In this work we dig further into the conditions that the ISPH scheme should
meet in order to improve the calculation of gradients, which can become exact
for linear functions. We found out that these conditions are connected with
another basic SPH requirement: the correct partition of unit volume. Using
one-dimensional numerical experiments we make clear the link between these two
basic properties: any enhancement in the partition of unit leads to a better
gradient estimation within the ISPH framework. The results of these 1D toy
models are confirmed by detailed 3D hydrodynamic simulations of explosions,
collisions and instabilities.
Additionally, a new kind of volume elements (VEs), leading to a better
partition of unit is proposed and discussed. The ISPH equations of density,
movement and energy are consequently re-formulated within the Lagrangian
framework, so that they become fully compatible with the particular choice of
the VEs. The resulting scheme is not only fully conservative, but it also
enhances the density estimation and the gradient of linear functions with
practically no computational overhead.
In Section 2, we review the main features of the ISPH scheme. We discuss the
choice of the generalized volume elements used to compute the summations in
Section 3. The link between the performance of the ISPH calculation of
gradients and the adequate choice of the VEs is highlighted in Section 4.
Section 5 presents the resulting ISPH equations, while in Section 6, we apply
our code to several standard tests calculated in three dimensions, and analyze
the results under the light of the VE choice. A summary of our findings and
prospects for future work are given in the conclusions section.
## 2 The ISPH formulation
The classical way of evaluating gradients in SPH takes the multidimensional
derivative of a function $f$ as,
${\bf\nabla}f=\int_{V}f({\bf r^{\prime}})\leavevmode\nobreak\ \nabla W(|{\bf
r^{\prime}}-{\bf r}|,h)\leavevmode\nobreak\ dr^{\prime 3}\,,$ (1)
where $W(|{\bf r^{\prime}}-{\bf r}|,h)$ is commonly a Dirac $\delta$-like
function, named the interpolating kernel, which is continuous and differentiable. In the
classical SPH formulation, the gradient of the function is estimated by: (1)
approximating the integral of Eq. (1) by summations, (2) taking the analytic
derivative of the kernel, and (3) assuming that the volume element $dr^{\prime
3}$ is adequately represented by $m_{b}/\rho_{b}$. (Admittedly, this is not the
most widely used implementation of the derivative in SPH; the most common
version is a variation involving differences of the derived function.
Nevertheless, it is formally the same as Eq. (2).) Namely,
$\left<{\bf\nabla}f\right>_{a}=\sum_{b}\frac{m_{b}}{\rho_{b}}f_{b}\nabla
W(|{\bf r_{b}}-{\bf r_{a}}|,h_{a})\,.$ (2)
Alternatively, in ISPH the gradient is calculated from an integral approach
(IA), which does not require the explicit analytic derivative of
$W{\bf({r-r^{\prime},h})}$. A vector integral $I({\bf r})$ is defined as [12]:
$I({\bf{r}})\equiv\int_{V}\left[f({\bf{r^{\prime}}})-f({\bf{r}})\right]({\bf{r^{\prime}}}-{\bf{r}})W(|{\bf
r^{\prime}}-{\bf r}|,h)\,dr^{\prime 3}\,,$ (3)
where $W(|{\bf r^{\prime}}-{\bf r}|,h)$ is a spherically-symmetric, normalized
interpolating kernel and $h$ is the smoothing length. The integral
$I({\bf{r}})$ can be used to find the gradient of a function $f({\bf{r}})$ in
a similar way that the Laplace operator is usually approached from another
integral expression in standard SPH [15, 16]. The IA interpretation of SPH
follows from approximating Eq. (3) with summations, along with approximating
the function $f({\bf{r}})$ by a Taylor expansion around the evaluated point,
$f({\bf{r_{b}}})-f({\bf{r_{a}}})=\bf\nabla
f_{a}\cdot({\bf{r_{b}}}-{\bf{r_{a}}})\,,$ (4)
where $a$ and $b$ refer to neighbouring particles with masses $m_{a}$ and
$m_{b}$, respectively. The RHS in the integral expression in Eq. (3) becomes,
$\begin{split}I({\bf{r_{a}}})=\left[\sum_{b}\frac{m_{b}}{\rho_{b}}f({\bf
r_{b}})({\bf{r_{b}}}-{\bf{r_{a}}})W(|{\bf r_{b}}-{\bf
r_{a}}|,h_{a})\right]-\left[f({\bf r_{a}})\left<{\bf\Delta
r}\right>_{a}\right]\,,\end{split}$ (5)
where,
$\left<{\bf\Delta
r}\right>_{a}=\sum_{b}\frac{m_{b}}{\rho_{b}}({\bf{r_{b}}}-{\bf{r_{a}}})W(|{\bf
r_{b}}-{\bf r_{a}}|,h_{a})\,.$ (6)
Putting Eqs. (4) and (5) into Eq. (3) allows us to obtain the gradient
${\bf\nabla f_{a}}$,
$\left[\begin{array}[]{c}\partial f/\partial x_{1}\\\ \partial f/\partial
x_{2}\\\ \partial f/\partial x_{3}\\\
\end{array}\right]_{a}=\left[\begin{array}[]{ccc}\tau_{11}&\tau_{12}&\tau_{13}\\\
\tau_{21}&\tau_{22}&\tau_{23}\\\
\tau_{31}&\tau_{32}&\tau_{33}\end{array}\right]^{-1}\left[\begin{array}[]{c}I_{1}\\\
I_{2}\\\ I_{3}\\\ \end{array}\right]\,,$ (7)
where,
$\tau_{ij,a}=\sum_{b}\frac{m_{b}}{\rho_{b}}(x_{i,b}-x_{i,a})(x_{j,b}-x_{j,a})W_{ab}(h_{a})\,;i,j=1,3\,.$
(8)
From now on, $W_{ab}(h_{a})\equiv W(|{\bf r_{b}}-{\bf r_{a}}|,h_{a})$ for the
sake of clarity.
It was shown in García-Senz et al. [12] that Eq. (7) leads to perfect linear
interpolation. Unfortunately, the price to pay is losing full conservation of
linear and angular momentum. The exact conservation of the SPH Euler equations
and perfect linear interpolation can only be retrieved simultaneously when
$\left<{\bf\Delta r}\right>\rightarrow 0$. We will refer as ISPH to the
conservative scheme which simply neglects the term $f({\bf
r_{a}})\left<{\bf\Delta r}\right>_{a}$ in Eq. (5). This is justified because
$\left<{\bf\Delta r}\right>$ is in fact the integral of an odd function over a
symmetric domain, which is zero. (Interestingly, this argument has been used
throughout the history of SPH to justify that it is a second-order method,
which is in general true, but not numerically ensured unless a scheme like the
one presented here is used.)
On the other hand, the complete integral approach, which takes into account
the $f({\bf r_{a}})\left<{\bf\Delta r}\right>$ term in the RHS of Eq. (5),
leads to perfect linear interpolation but is not fully conservative. We refer
to it as non-conservative ISPH (ncISPH, hereafter). As commented above, both
schemes, ISPH and ncISPH, converge to the same outcome when $\left<{\bf\Delta
r}\right>\simeq 0$. Having both a perfect partition of unit and $\left<{\bf\Delta
r}\right>=\sum_{b}V_{b}({\bf{r_{b}}}-{\bf{r_{a}}})W(|{\bf r_{b}}-{\bf
r_{a}}|,h_{a})=0$ has long been identified as the main constraint
to ensure complete linear consistency in SPH [17].
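As a concrete illustration of Eqs. (5)-(8), the following NumPy sketch computes the gradient at one particle. The Gaussian kernel and all names are our own choices for the example, not part of the scheme's specification; the `conservative` flag switches between ISPH (drops the $f({\bf r_a})\left<{\bf\Delta r}\right>_a$ term) and ncISPH (keeps it, i.e. works with the differences $f_b-f_a$):

```python
import numpy as np

def isph_gradient(pos, f, vol, a, h, kernel, conservative=True):
    """Gradient of f at particle a via the integral approach, Eqs. (5)-(8).

    pos: (N, dim) positions; f: (N,) function samples; vol: (N,) volume
    elements V_b = m_b / rho_b; kernel(r, h): spherically symmetric W.
    conservative=True drops the f(r_a)<Delta r>_a term (ISPH);
    conservative=False keeps it, i.e. uses f_b - f_a (ncISPH).
    """
    dr = pos - pos[a]                               # r_b - r_a
    r = np.linalg.norm(dr, axis=1)
    w = vol * kernel(r, h)                          # V_b W_ab(h_a)
    tau = np.einsum('b,bi,bj->ij', w, dr, dr)       # Eq. (8)
    fb = f if conservative else (f - f[a])
    I = np.einsum('b,bi->i', w * fb, dr)            # Eq. (5)
    return np.linalg.solve(tau, I)                  # Eq. (7)
```

For a linear field the ncISPH variant recovers the exact gradient for any particle distribution, since $I_{i}=\tau_{ij}(\nabla f)_{j}$ then holds identically, whatever the kernel or volume elements.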
## 3 The choice of the volume elements
In SPH, volume integrals are usually approached by finite summations assuming
that the volume element is correctly represented by $V_{a}=m_{a}/\rho_{a}$,
where $m_{a}$ is the mass of the particle and $\rho_{a}$ its density. For
example, an estimate of the density writes,
$\rho_{a}=\sum_{b=1}^{n_{b}}\leavevmode\nobreak\ V_{b}\leavevmode\nobreak\
\rho_{b}\leavevmode\nobreak\
W_{ab}(h_{a})=\sum_{b=1}^{n_{b}}m_{b}W_{ab}(h_{a})\,.$ (9)
Such a ’natural’ choice of the VEs works well provided the density does not
change too much within the kernel range, which is of the order of the smoothing length.
Nevertheless, the density may be appreciably miscalculated in the presence of
shocks and density discontinuities. In these cases, the partition of unit
condition is not fully satisfied,
$\sum_{b=1}^{n_{b}}\leavevmode\nobreak\
\frac{m_{b}}{\rho_{b}}W_{ab}(h_{a})\neq 1$ (10)
The errors in the normalization constraint introduce uncontrolled errors into
the remaining SPH equations. To reduce the normalization errors, the most
obvious recipe is to renormalize the kernel
itself,
$\rho_{a}=\sum_{b}m_{b}\left(\frac{W_{ab}(h_{a})}{\sum_{c}\frac{m_{c}}{\rho^{0}_{c}}W_{ac}(h_{a})}\right)$
(11)
where $\rho^{0}_{c}=\sum_{b}m_{b}W_{cb}(h_{c})$ is the standard density. A
more clever, albeit more complex, variation of that scheme was developed by
Colagrossi and Landrini [18] within the MLS approach, which exactly reproduces
the linear variation of a density field (for a review of these topics see
Gomez-Gesteira et al. [19]). Nevertheless, none of those SPH schemes are
totally compatible with the Lagrangian formulation of SPH. The Lagrangian
formulation of the equations of movement has the advantage that the grad-h
terms can be consistently incorporated into the scheme, so that complete
preservation of mass, linear and angular momentum, energy, and entropy is
guaranteed. Most MLS applications belong to the realm of computational fluid
dynamics (CFD), which usually works with incompressible or weakly compressible
fluids. Having an almost constant density field for the entire simulation
implies that the smoothing length also remains constant and the grad-h
corrections are negligible (except at the boundaries, which are usually
handled with special techniques). For that reason the MLS methods are mostly
used in CFD simulations, but in the case of astrophysical scenarios, where
density contrasts of orders of magnitude are not rare, the Lagrangian approach
with a self-consistent treatment of the grad-h terms is preferable.
Nevertheless, due to its properties, we are convinced that the methods
presented in this paper can be also of use for CFD simulations.
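As a concrete illustration, the kernel renormalization of Eq. (11) can be sketched in a few lines. This is a minimal 1D sketch with a Gaussian kernel and a uniform particle lattice, both of which are our own illustrative choices rather than the setup used in the paper:

```python
import numpy as np

def gaussian_kernel(dx, h):
    # 1D Gaussian kernel C/h * exp(-(dx/h)^2), with C = 1/sqrt(pi) so the
    # kernel integrates to one over the real line.
    return np.exp(-(dx / h) ** 2) / (h * np.sqrt(np.pi))

def shepard_density(x, m, h):
    """Renormalized density of Eq. (11): divide the standard summation by
    the partition-of-unit sum of Eq. (10)."""
    W = gaussian_kernel(x[:, None] - x[None, :], h)
    rho0 = (m[None, :] * W).sum(axis=1)           # standard density rho0_c
    norm = ((m / rho0)[None, :] * W).sum(axis=1)  # sum_c (m_c/rho0_c) W_ac
    return rho0 / norm
```

Dividing by the partition-of-unit sum pulls the density back toward its true value wherever the raw summation under-counts neighbors, e.g. near an open boundary.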
### 3.1 The generalized volume elements
Other options for the volume elements $V_{a}$ may be of interest to address
specific problems. The code SPHYNX (https://astro.physik.unibas.ch/sphynx)
from Cabezón et al. [20] makes use of the concept of generalized volume
elements [21, 22]. First a scalar estimator $X_{a}$ is defined so that the
particle volume is,
$V_{a}=\frac{X_{a}}{\sum_{b}X_{b}W_{ab}(h_{a})}\,.$ (12)
The density of the particle is then calculated as $\rho_{a}=m_{a}/V_{a}$.
Current choices for the estimator found in the literature are
$X_{a}=1,m_{a},P_{a}^{k}$, where $P$ is the pressure and $k\leq 1$. There is,
however, a particular choice which, according to [20], provides a better
partition of the unit. Namely,
$X_{a}=\left(\frac{m_{a}}{\rho_{a}}\right)^{p}\,.$ (13)
Setting $p=0$ produces the standard volume element for particles with
identical mass, whereas $0<p\leq 1$ gradually improves the kernel
normalization.
There are two ways to implement the estimator in Eq. (13):
* 1.
Making use of the density calculated in the previous time-step,
$\rho_{a}^{n-1}$, to estimate the volume elements and density at the current
iteration $n$:
$\rho_{a}^{n}=\frac{m_{a}\sum_{b}X_{b}^{n-1}W_{ab}(h_{a})}{X_{a}^{n-1}}\qquad\mathrm{with}\qquad X_{a}^{n-1}=\left(\frac{m_{a}}{\rho_{a}^{n-1}}\right)^{p}\,.$ (14)
In the simple, static, toy models discussed below convergence is achieved
after a few iterations. Once the estimator $X_{a}$ has converged, there is
always an enhancement of the partition of unit, which is almost perfect when
the exponent $p\rightarrow 1$. We refer hereafter to this explicit procedure
as driven by $X_{1,a}$ (Eq. 14).
* 2.
Making use of the density calculated in the standard way in the current
time-step, $\rho^{0}_{a}=\sum_{b}m_{b}W_{ab}(h_{a})$, to estimate $X_{a}$. That is:
$X_{a}=\left(\frac{m_{a}}{\rho^{0}_{a}}\right)^{p}\,.$ (15)
The density is then calculated as $\rho_{a}^{n}=m_{a}/V_{a}^{n}$, with $V_{a}^{n}$
given by Eq. (12). We refer hereafter to this implicit procedure as driven by
$X_{2,a}$ (Eq. 15).
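The two procedures can be sketched as follows. This is a minimal 1D illustration; the Gaussian kernel is our own simplifying assumption (the paper uses sinc-family kernels and 3D particle sets):

```python
import numpy as np

def W(dx, h):
    # Illustrative 1D Gaussian kernel, normalized to unit integral.
    return np.exp(-(dx / h) ** 2) / (h * np.sqrt(np.pi))

def density_from_X(x, m, X, h):
    # Eq. (12): V_a = X_a / sum_b X_b W_ab(h_a), then rho_a = m_a / V_a.
    Wab = W(x[:, None] - x[None, :], h)
    return m * (X[None, :] * Wab).sum(axis=1) / X

def rho_X1(x, m, h, p=0.8, n_iter=10):
    # Explicit procedure, Eq. (14): X is built from the density of the
    # previous iteration and the update is repeated until convergence.
    rho = (m[None, :] * W(x[:, None] - x[None, :], h)).sum(axis=1)
    for _ in range(n_iter):
        rho = density_from_X(x, m, (m / rho) ** p, h)
    return rho

def rho_X2(x, m, h, p=1.0):
    # Implicit procedure, Eq. (15): X is built once from the standard
    # density rho0 of the current time-step.
    rho0 = (m[None, :] * W(x[:, None] - x[None, :], h)).sum(axis=1)
    return density_from_X(x, m, (m / rho0) ** p, h)
```

On a uniform lattice both routes reduce to the standard summation in the interior; they differ only where the estimator $X$ varies from particle to particle.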
In theory, the explicit option $X_{1,a}$ should lead to a better partition of
unit and better interpolations than $X_{2,a}$, although the former is less
numerically robust and not totally compatible with the Lagrangian formulation
of the SPH equations. In practice, both options lead to a very similar partition
of unit (see Sect. 4.1). (In the case of $X_{1}$, taking $p=1$ is too
sensitive to particle noise and is not recommended [20]; therefore
$p=0.7$-$0.8$ are the recommended values for $X_{1}$, which yield results very similar
to those achieved with $X_{2}$ and $p=1$.) Additionally, the estimator
$X_{2,a}$ allows us to build a Lagrangian-consistent scheme (Appendix A), which
incorporates the grad-h terms, provided that the exponent $p$ in Eq. (15) is
chosen equal to one ($p=1$), allowing us to eliminate a parameter.
In the following sections we show that reducing the error in the kernel
normalization ($E_{1}$, hereafter) of particle $a$,
$E_{1}=[\sum_{b}V_{b}W_{ab}(h_{a})-1]\,,$ (16)
usually improves the requirement,
$E_{2}\cdot h_{a}=\left|\left<\mathbf{\Delta r}\right>\right|_{a}=\left|\sum_{b}V_{b}\,(\mathbf{r}_{b}-\mathbf{r}_{a})\,W_{ab}(h_{a})\right|\simeq 0\,,$ (17)
where $E_{2}$ is the normalized error module $\left|\left<\mathbf{\Delta r}\right>\right|_{a}/h_{a}$
of particle $a$. The error reduction is the consequence of using the
estimators $X_{1,2}$ to evaluate the volume elements.
Any reduction in both errors ($E_{1},E_{2}$) will potentially improve the
dynamic evolution of the physical system.
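For reference, the two error measures can be evaluated directly from a particle set. The sketch below is a 1D version with a Gaussian kernel, our choice for compactness; any SPH kernel works the same way:

```python
import numpy as np

def kernel(dx, h):
    # Illustrative 1D Gaussian kernel with unit integral.
    return np.exp(-(dx / h) ** 2) / (h * np.sqrt(np.pi))

def sph_errors(x, V, h):
    # E1 (Eq. 16): deviation of the partition of unit from one.
    # E2 (Eq. 17): normalized module of <Delta r>_a = sum_b V_b (r_b - r_a) W_ab.
    Wab = kernel(x[:, None] - x[None, :], h)
    E1 = (V[None, :] * Wab).sum(axis=1) - 1.0
    E2 = np.abs((V[None, :] * (x[None, :] - x[:, None]) * Wab).sum(axis=1)) / h
    return E1, E2
```

On a uniform lattice both errors essentially vanish in the interior, while $E_{1}$ becomes large at an open boundary, where half of the kernel support is empty.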
## 4 Estimating the errors $E_{1}$ and $E_{2}$ for different particle distributions
A good control of the errors $E_{1}$ and $E_{2}$ is of the utmost importance to the
SPH technique. This is because the quality of both the interpolated
function $\left<f(\bf{r})\right>$,
$\left<f({\bf r})\right>\simeq f({\bf r})\sum_{b}V_{b}W_{ab}(h_{a})+{\bf\nabla f}\cdot\sum_{b}V_{b}{\bf(r_{b}-r_{a})}W_{ab}(h_{a})\,,$ (18)
and its gradient $\left<{\bf\nabla}f({\bf r})\right>$ (Eq. 7) is very
sensitive to these errors [5]. Nevertheless, both errors are correlated, as
can be inferred from the following argument in one dimension. We first
take the kernel normalization condition as a function of the spatial
coordinate,
$G(x_{a})=\sum_{b}V_{b}W_{ab}(h_{a})\,.$ (19)
Using the Gaussian kernel $W_{ab}(h_{a})=\frac{C}{h_{a}}\exp\left[-\left(\frac{x_{b}-x_{a}}{h_{a}}\right)^{2}\right]$, the standard SPH derivative of $G(x)$
writes,
$\left(\frac{dG}{dx}\right)_{a}=\frac{2}{h_{a}^{2}}\sum_{b}V_{b}(x_{b}-x_{a})W_{ab}(h_{a})\,,$ (20)
thus,
$\sum_{b}V_{b}(x_{a}-x_{b})W_{ab}=-\frac{h_{a}^{2}}{2}\left(\frac{dG}{dx}\right)_{a}\,.$
(21)
We note that the magnitude of the LHS of Eq. (21) is in fact $E_{2}\cdot h_{a}$,
suggesting that having a good partition of unit ($G\simeq 1$, i.e. $E_{1}\to 0$)
makes $dG/dx\simeq 0$ and, as a consequence, the $E_{2}$ error is suppressed.
An independent proof of the link between $E_{1}$ and $E_{2}$, obtained with an
exponential kernel [23], was given in Appendix A of Cabezón et al. [20].
It should be recognized, however, that the proof above is only indicative
because, unlike the interpolators widely used in practical calculations, the
Gaussian and the exponential kernels are functions without compact support.
Moreover, it could happen that, even when $G(x)$ is close to one, it shows
fluctuations around the particles, so that its derivative may differ
significantly from zero.
In this regard, additional insight into the impact of the VE choice on the
errors $E_{1}$ and $E_{2}$ can be obtained by studying a handful of static
particle distributions and using compact-supported kernels. These errors are
expected to be large close to discontinuities, which in SPH are usually spread
over a few times the smoothing-length distance ($h$). We have chosen three
representative discontinuities which often appear in practical calculations: a
Gaussian (model A), an inverted-Gaussian (model B), and a wall (model C).
These are given by the following mathematical expressions:
$\rho(x)=\rho_{0}+\Delta\rho\,e^{-\left(\frac{x-x_{0}}{\delta}\right)^{2}},\qquad(A)$ (22)
$\rho(x)=\rho_{0}-\Delta\rho\,e^{-\left(\frac{x-x_{0}}{\delta}\right)^{2}},\qquad(B)$ (23)
$\rho(x)=\rho_{0}+\Delta\rho\,\tanh\left(\frac{x-x_{0}}{\delta}\right),\qquad(C)$ (24)
where the values of the parameters $\rho_{0}$, $\Delta\rho$, and $\delta$ are
specified in Table 1.
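The three profiles, with the parameters of Table 1, can be written down directly. We place the discontinuity at $x_{0}=0.5$, which is our own assumption since Table 1 does not list $x_{0}$, and write profile C in the $\tanh$ form equivalent to Eq. (24):

```python
import numpy as np

# Density profiles of Eqs. (22)-(24) with the parameters of Table 1.
def profile_A(x, x0=0.5):        # Gaussian
    return 1.0 + 1.0 * np.exp(-((x - x0) / 0.040) ** 2)

def profile_B(x, x0=0.5):        # inverted Gaussian
    return 2.0 - 1.0 * np.exp(-((x - x0) / 0.008) ** 2)

def profile_C(x, x0=0.5):        # wall: Eq. (24) is a tanh in disguise
    return 1.5 + 0.5 * np.tanh((x - x0) / 0.020)
```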
We arranged these profiles into a 1D distribution of $N=100$ particles with
reflective boundary conditions. We used an interpolating $\mathrm{sinc}$ kernel with
exponent $n=5$ [24], which has a shape similar to the $M_{6}$ spline. The
profiles of the errors $E_{1}$ and $E_{2}$ at each point for the
estimators $X_{1},X_{2}$ are depicted in Fig. 1. As can be seen, $X_{1}$ leads
to a clear improvement in the kernel normalization condition as $p$ increases.
For $p\simeq 1$ the $E_{1}$ error becomes negligible, hence it is not shown.
Interestingly, the estimator $X_{2}$ with $p=1$ (black crosses) leads to
$E_{1},E_{2}$ errors similar to those of $X_{1}$ with $p=0.8$ in all profiles. The
normalized error $E_{2}=\left|\left<\Delta x\right>\right|/h$ follows a similar
trend, suggesting that a good partition of unit is beneficial not only
for approximating the density, but also for calculating the gradient of any
magnitude of interest with ISPH.
Table 1: Values of the different parameters in profiles A, B, and C, which mimic different types of sharp density gradients. Profile D is the same as C but with different parameters.
Profile | $\rho_{0}$ | $\Delta\rho$ | $\delta$ | h
---|---|---|---|---
A | 1.0 | 1.0 | 0.040 | 0.0230
B | 2.0 | 1.0 | 0.008 | 0.0051
C | 1.5 | 0.5 | 0.020 | 0.0230
D | 10 | 11 | 0.040 | 0.0230
Figure 1: Results of the SPH evaluation of the test profiles A, B, and C using
different VEs. Top row: density profiles of models A (Gaussian), B (Inverted-
Gaussian), and C (Wall), of Table 1, calculated with $X_{1}$ using $p=0.0$
(green) and $p=0.8$ (blue) in Eq. (13). Crosses ($\times$) in black are for
$X_{2}$ (Eq. 15, with $p=1$) and points in red are the analytic values.
Central row: $E_{1}$ (partition of unit) error for the same three profiles.
Bottom row: Same as central row but for error $E_{2}={\bf|\left<\Delta
x\right>|}/h$. Black lines are for the choice $X_{2}$ with $p=1$.
### 4.1 Impact of VE in estimating gradients
We can use the simple sharp profiles above to gain insight into the
relationship between $E_{1}$, $E_{2}$, and the accuracy of the first
derivative. To do that, we assume that the density of the test particle
distribution follows profile A, Eq. (22), so that it totally determines the
VEs through Eq. (13). Let us also assume that we wish to obtain the SPH
derivative of a generic wall-like function $f$ given by profile D in Table 1,
Eq. (24) (this test would mimic, for example, a thermal wave passing
through a star). Such a derivative, $\frac{df}{dx}$, is sensitive to the choice
of the estimator $X$ used to compute the VEs. We can thus compare the analytic and
the numerical values of $\frac{df}{dx}$ and carry out an $L_{1}$ analysis of
the results.
Figure 2: Top-left: Profiles of a Gaussian-like density function (red) and a
wall-like function ($f(x)$, in yellow). The solid lines in green and blue were
obtained with $X_{1}$ and $p=0.0$ and $0.9$, respectively, while
the black points are for $X_{2}$ with $p=1$. Top-right: Gradient of the function
$f(x)$ estimated with the mentioned values of $p$ for $X_{1}$ and $X_{2}$.
Bottom-left: Kernel normalization and its convergence to 1 as the VEs improve.
Bottom-right: Evaluation of $\left<\Delta x\right>/h$ and its convergence to 0
as the VEs improve.
Figure 2 presents a summary of the numerical experiments. The top-left panel
shows the density profiles: analytic (red) and the estimations using $X_{1}$
with $p=0.0$ (green) and $p=0.8$ (blue), as well as $X_{2}$ with $p=1$ (black
points), while the analytic function $f$ is depicted in yellow. The density
profile is in general well reproduced, but only the cases $X_{1}$ with $p=0.8$
and $X_{2}$ get close to the analytic value at the maximum at $x=0.5$ while
providing a good match at the tails of the distribution, which results in a
better description of the system even in low-resolution regions. The top-right
panel shows the gradient of $f$. Although the value around the maximum of
$\frac{df}{dx}$ is similar for $p=0.0$ and $p=0.8$, the latter option fits
the derivative better around the coordinate $x=0.6$, and is in turn similar
to that obtained using the estimator $X_{2}$. The bottom panels in Fig. 2
depict the profiles of the kernel normalization (left) and $\left|\left<\Delta x\right>\right|/h$
(right) for different VEs. Again, using either $X_{1}$ with a
high value of $p$ or $X_{2}$ with $p=1$ leads to a better estimation of both
magnitudes. Note that for $X_{1}$ and $p\geq 0.9$ the partition of unit is
almost perfect, as is the $\left<\Delta x\right>\simeq 0$ condition.
Figure 3: $L_{1}$ calculation of the errors in the numerical experiments shown
in Fig. 2. Left: averaged value of $L_{1}$ for the partition of unit (red
lines) and $<\Delta x>/h$ (green lines). Solid lines are for the estimator
$X_{1}$ and lines with points for $X_{2}$. Right: averaged value of $L_{1}$,
obtained with expression (25), for the derivative of the wall function
calculated with $X_{1}$ (solid red line) and $X_{2}$ (red line with points).
The black line is the calculation with the non-conservative ISPH which is
independent of the particular value of the exponent $p$. Note that when
$p\simeq 1$ both schemes (ISPH and ncISPH) converge, albeit faster for
estimator $X_{1}$ than for $X_{2}$.
The left panel of Fig. 3 shows the averaged $L_{1}$ values for $E_{1}$
and $E_{2}$, calculated in the interval $0.3\leq x\leq 0.7$, for the test
presented above. The $L_{1}$ values for the partition of the unit and $\Delta x/h$
decrease as the exponent $p$ increases, as expected. Nevertheless, the
quantitative details again depend on the type of estimator, $X_{1}$ or
$X_{2}$. As the figure shows, there is a factor of ten reduction of the $L_{1}$
errors for $X_{1}$ and $p\simeq 0.9$. The errors become negligible when
$p\simeq 1$. Although the option $X_{2}$ shows a lower convergence rate, it is
still good, decreasing the errors by almost a factor of ten when $p\simeq 1$.
The right panel of Fig. 3 shows the averaged $L_{1}$ error of
$\frac{df}{dx}$ normalized to $\left<\left(\frac{df}{dx}\right)_{analytic}\right>$,
$L_{1}=\frac{1}{N\left<\left(\frac{df}{dx}\right)_{analytic}\right>}\sum_{b=1}^{N}\left|\left(\left(\frac{df}{dx}\right)_{analytic}-\left(\frac{df}{dx}\right)_{sph}\right)_{b}\right|$ (25)
and how both schemes (ISPH and ncISPH) converge depending on the flavor used
for the estimator $X$ ($X_{1}$ solid lines, $X_{2}$ lines with points). We note
that, because the ncISPH scheme (Eq. 5) makes exact linear interpolations, its
results (black line) are not sensitive to the adopted value of $p$. On the
other hand, the results of the conservative scheme, ISPH (red lines), show a
clear dependence on the choice of the volume elements, as expected. The
profile of $L_{1}(p)$ decreases linearly, achieving the same accuracy as the
ncISPH scheme when $p>0.9$ for $X_{1}$, or close to it for $X_{2}$.
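The normalized $L_{1}$ error of Eq. (25) is straightforward to compute; a minimal sketch:

```python
import numpy as np

def l1_derivative_error(dfdx_analytic, dfdx_sph):
    # Eq. (25): mean absolute deviation of the SPH derivative from the
    # analytic one, normalized by the mean analytic derivative.
    dfdx_analytic = np.asarray(dfdx_analytic, dtype=float)
    dfdx_sph = np.asarray(dfdx_sph, dtype=float)
    return np.abs(dfdx_analytic - dfdx_sph).mean() / dfdx_analytic.mean()
```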
Even though the test cases presented in this section were calculated in 1D
with ordered particle settings, the results unambiguously support the idea
that a better partition of unit improves both the estimation of the density and
the calculation of gradients in ISPH. Hereafter, we focus on the volume
elements calculated with the estimator $X_{a}=X_{2,a}$ (Eq. 15) with $p=1$
because, unlike $X_{1,a}$, it fits perfectly with the Lagrangian formulation
of the SPH equations and also allows us to eliminate the parameter $p$.
Additionally, the grad-h terms can be easily incorporated into the scheme with
this choice of VEs. The resulting ISPH equations are described in Sect. 5.
### 4.2 The quest for fully implicit VEs
In principle, the best procedure to calculate the volume elements would be to
obtain them directly from the inversion of the kernel normalization matrix.
Unlike the indirect methods described in the previous section, a fully
implicit implementation of the VEs has the advantage of always leading to a
perfect partition of unit. Previous attempts to build implicit SPH schemes
[25, 26] made use of advanced techniques to efficiently invert large sparse
matrices, such as the PARDISO library (the widely used Intel MKL
PARDISO library function is based on a legacy version of this project:
https://www.pardiso-project.org/). This type of library could also be used to
implicitly find the volume elements $V_{b}$ by solving large linear systems
with $N\times N$ equations and unknowns,
$\sum_{b=1}^{n_{b}}V_{b}\,W_{ab}(h_{a})=1\,.$ (26)
Unfortunately, finding the VEs with such a direct approach produces, in general,
non-physical results. We have calculated, in a fully implicit manner, the VEs
of the Gaussian function (22) by solving the linear system above and found
that the volume elements strongly oscillate around the explicit solution given
by Eq. (14) with $p=1$ (see Fig. 4). Even worse, at many points the volume
elements become negative, which is non-physical. Therefore, we have discarded
the fully implicit route to find the VEs in this work.
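The mechanics of the fully implicit route can be sketched with a dense solver. This is a minimal 1D sketch with a Gaussian kernel, our own illustrative setup; production codes would use sparse solvers such as PARDISO, and, as discussed above, the resulting $V_{b}$ can oscillate and turn negative for non-uniform density fields:

```python
import numpy as np

def implicit_volume_elements(x, h):
    # Solve the N x N linear system of Eq. (26), sum_b V_b W_ab = 1, so that
    # the partition of unit is satisfied exactly at every particle.
    Wab = np.exp(-((x[:, None] - x[None, :]) / h) ** 2) / (h * np.sqrt(np.pi))
    return np.linalg.solve(Wab, np.ones_like(x))
```

By construction the solution satisfies the partition of unit to machine precision; the trouble reported in the text is not the solve itself but the physical meaning of the resulting volume elements.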
Figure 4: Implicit versus explicit estimation of the volume elements in the
numerical experiments with the Gaussian curve (Eq.22).
## 5 ISPH equations
Here we summarize the set of ISPH equations used to compute the 3D tests in
the following section. We provide the details of the procedure used to build
these equations in Appendix A.
* 1.
Density equation:
$\rho_{a}=\frac{m_{a}}{V_{a}}\,$ (27)
with $V_{a}=X_{a}/k_{a}$ and $k_{a}=\sum_{b}X_{b}W_{ab}(h_{a})$.
* 2.
Momentum equation:
$\ddot{x}_{i,a}=\begin{cases}-\frac{X_{a}}{m_{a}}\sum_{b}m_{b}\left[\frac{X_{b}P_{a}}{\Omega_{a}k_{a}^{2-\sigma}k_{b}^{\sigma}}\mathcal{A}_{i,ab}(h_{a})+\frac{X_{b}P_{b}}{\Omega_{b}k_{b}^{2-\sigma}k_{a}^{\sigma}}\mathcal{A}_{i,ab}(h_{b})\right];&(X_{a,b}=m_{a,b})\\ -\sum_{b}m_{b}\left[\frac{X_{a}^{2-\sigma}X_{b}^{\sigma}P_{a}}{\Omega_{a}m_{a}^{2}\,k_{a}}\mathcal{A}_{i,ab}(h_{a})+\frac{X_{b}^{2-\sigma}X_{a}^{\sigma}P_{b}}{\Omega_{b}m_{b}^{2}\,k_{b}}\mathcal{A}_{i,ab}(h_{b})\right];&(X_{a,b}=\frac{m_{a,b}}{\rho_{a,b}^{0}})\end{cases}$ (28)
with $\rho^{0}_{a}=\sum_{b}m_{b}W_{ab}(h_{a})$, and $\Omega_{a}$ given in
Appendix A.1. We have introduced a parameter $0\leq\sigma\leq 1$ which allows one
to choose between the pure Lagrangian scheme developed in Appendix A ($\sigma=0$) and
a progressive deviation from it, which totally suppresses the tensile
instability when $\sigma\to 1$ (see Section 5.1 for more details). Moreover,
$\mathcal{A}_{i,ab}(h_{a,b})=\sum^{d}_{j=1}c_{ij,a}(h_{a})(x_{j,b}-x_{j,a})W_{ab}(h_{a,b}),$
(29)
where $c_{ij,a}$ are the coefficients of the inverse matrix in the IA given by Eq.
(7), and $d$ is the number of dimensions. Any expression of standard SPH can
indeed be made compatible with the IA by taking the kernel derivative as [27]:
$\frac{\partial W_{ab}(h_{a})}{\partial
x_{i,a}}=\mathcal{A}_{i,ab}(h_{a});\quad i=1,d$ (30)
* 3.
Energy equation:
$\dot{u}_{a}=\begin{cases}\frac{X_{a}P_{a}}{m_{a}\Omega_{a}k_{a}^{2-\sigma}}\sum_{b}\sum^{d}_{i=1}\frac{X_{b}}{k_{b}^{\sigma}}\left[(v_{i,a}-v_{i,b})\mathcal{A}_{i,ab}(h_{a})\right];&(X_{a,b}=m_{a,b})\\ \frac{X_{a}^{2-\sigma}P_{a}}{m_{a}^{2}\Omega_{a}k_{a}}\sum_{b}\sum^{d}_{i=1}m_{b}X_{b}^{\sigma}\left[(v_{i,a}-v_{i,b})\mathcal{A}_{i,ab}(h_{a})\right];&(X_{a}=\frac{m_{a}}{\rho_{a}^{0}})\end{cases}$ (31)
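To make the role of $\mathcal{A}_{i,ab}$ concrete, the sketch below evaluates an IA gradient in 2D. We take the coefficients $c_{ij,a}$ as the inverse of the moment matrix $\tau_{ij}=\sum_{b}V_{b}\,\Delta x_{i}\Delta x_{j}\,W_{ab}$, which is our reading of Eq. (7) (the standard IAD construction); the Gaussian kernel and the particle setup are illustrative:

```python
import numpy as np

def iad_gradient(pos, V, f, a, h):
    """Gradient of field f at particle a using A_{i,ab} of Eq. (29)."""
    dx = pos - pos[a]                               # separations x_b - x_a, shape (N, d)
    Wab = np.exp(-(dx ** 2).sum(axis=1) / h ** 2) / (np.pi * h ** 2)  # 2D Gaussian kernel
    # Moment matrix tau_ij and its inverse c_ij (assumed form of Eq. 7).
    tau = np.einsum('b,bi,bj,b->ij', V, dx, dx, Wab)
    c = np.linalg.inv(tau)
    A = (c @ dx.T) * Wab                            # A_{i,ab} = sum_j c_ij dx_j W_ab, shape (d, N)
    return A @ (V * (f - f[a]))                     # <grad f>_i = sum_b V_b (f_b - f_a) A_{i,ab}
```

A useful property of this operator is that it reproduces the gradient of any linear field exactly, even on disordered particle distributions, which is why the IA is so insensitive to particle noise.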
The necessary artificial viscosity (AV) terms are added to the right-hand side
of the equations above as in [20], but explicitly including a quadratic term in the
signal velocity:
$v_{ab}^{sig}=\bar{\alpha}_{ab}\bar{c}_{ab,s}-\beta w_{ab}$ (32)
where $w_{ab}=\bf{v_{ab}}\cdot\bf{\hat{r}_{ab}}$ [28, 29]. The parameter
$\beta$ is kept constant with a default value $\beta=2$. The AV coefficient
$\alpha$ is controlled with the switches scheme described in Read et al. [30]
so that $\alpha\simeq 1$ in strong shocks but it decays to a minimum value of
$\alpha\simeq 0.05$ away from them. According to Monaghan [28], the AV
contribution to the energy equation should include a heat-conduction term
which smooths the pressure in wall-shock conditions. The precise form of such
heat-conduction term is:
$\left(\frac{du_{a}}{dt}\right)_{cond}^{AV}=\sum_{b}\sum_{i=1}^{d}m_{b}\alpha_{u}\frac{v_{ab,cond}^{sig}(u_{a}-u_{b})}{\bar{\rho}_{ab}}\frac{r_{i,ab}}{|r_{ab}|}A_{i,ab}$
(33)
which in the SPHYNX code is implemented as:
$\left(\frac{du_{a}}{dt}\right)_{cond}^{AV}=\sum_{b}\sum_{i=1}^{d}\frac{1}{2}\,\alpha_{u}\,v_{ab,cond}^{sig}(u_{a}-u_{b})\left\{V_{a}\,\frac{m_{b}}{m_{a}}\frac{r_{i,ab}}{|r_{ab}|}A_{i,ab}(h_{a})+V_{b}\,\frac{r_{i,ab}}{|r_{ab}|}A_{i,ab}(h_{b})\right\}$ (34)
The signal velocity $v^{sig}_{cond}$ used in the tests below is [31]:
$v_{ab,cond}^{sig}=\sqrt{\frac{|P_{a}-P_{b}|}{\bar{\rho}_{ab}}}$ (35)
The results of the tests below suggest that adding a small amount of
conductive heat through Eq. (34) is beneficial, because it contributes to reducing
the tensile instability and to smoothing the numerical noise. Nonetheless, the
value of the constant $\alpha_{u}$ should not be high; otherwise the density
peak in strong shocks may be underestimated (see Section 6.4). We have chosen
$\alpha_{u}=0.1$, which is low and in agreement with the choice of other
authors, e.g. [32]. The equation of state (EOS) is that of an ideal gas with
$\gamma=5/3$.
### 5.1 Obtaining the value of $\sigma$
When using the SPH equations deduced from the Euler-Lagrange formulation, a
drawback appears wherever $\nabla\rho$ becomes large, as, for example, around
contact discontinuities. In these cases, the incorrect estimation of gradients
may lead to numerical artifacts, the tensile instability being one of the most
harmful and common. Several solutions have been postulated to cope with this
problem, all of them requiring some departure from the exact Lagrangian
formulation. For example, Ritchie and Thomas [33] proposed estimating the
density by averaging over the internal energies, so that the ensuing density
field is smooth. A similar approach was described by Saitoh and Makino [22],
who suggested redefining the volume elements so that they depend on pressure
rather than on density. Another solution was proposed by Read et al. [30],
where a typical element within the summations of the momentum and energy
equations is changed from the standard $[P_{a}/\rho_{a}^{2}]$ to
$[P_{a}/(\rho_{a}\rho_{b})]$. Although not totally Lagrangian-compatible, such a
simple change is mathematically consistent with the standard derivation of the
SPH equations [34, 35] and totally suppresses the tensile instability. All
these SPH variants have in common that the main magnitudes in the momentum
equation are somehow 'crossed' for particles $a$ and $b$ (e.g.
$u_{b}/\rho_{a}$ in [33] or $P_{a}/(\rho_{a}\rho_{b})$ in [30]).
Here we propose a procedure similar to that of [30], but allowing a self-adaptive
crossing of indexes in the original Lagrangian equations derived in Appendix A. The
resulting expressions, Eqs. (28) and (31), are steered by a parameter $\sigma$
with $0\leq\sigma\leq 1$. The value $\sigma=0$ leads to the fully
Lagrangian SPH equations, whereas $\sigma=1$ gives a fully crossed expression.
We make use of a ramp function ($R$) to automatically decide the instantaneous
value of $\sigma$ as a function of the density contrast between any pair of
particles.
$\sigma_{ab}=\begin{cases}0;&At_{ab}\leq At_{min}\\\
R(At_{ab}-At_{min});&At_{min}\leq At_{ab}\leq At_{max}\\\ 1;&At_{ab}\geq
At_{max}\end{cases}$ (36)
where the ramp function is $R(x)=\frac{x}{At_{max}-At_{min}}$ and
$At_{ab}=\left|\frac{\rho_{a}-\rho_{b}}{\rho_{a}+\rho_{b}}\right|$ is the
Atwood number. Expression (36) leads to $\sigma\simeq 1$ only in those regions
hosting large density gradients, while the fully Lagrangian scheme is preserved
wherever $\sigma=0$, which is the case for the majority of the particles of the
system. We empirically found that $At_{min}=0.1$, $At_{max}=0.2$ produce
satisfactory results in all the numerical experiments described in this
manuscript.
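The switch of Eq. (36) is a clamped linear ramp on the pairwise Atwood number; a minimal sketch:

```python
def sigma_pair(rho_a, rho_b, at_min=0.1, at_max=0.2):
    # Eq. (36): blend between the fully Lagrangian scheme (sigma = 0) and
    # the fully crossed scheme (sigma = 1) from the pairwise Atwood number.
    at = abs(rho_a - rho_b) / (rho_a + rho_b)
    if at <= at_min:
        return 0.0
    if at >= at_max:
        return 1.0
    return (at - at_min) / (at_max - at_min)
```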
## 6 Multi-dimensional simulations
In this section we analyze the results of four well-known tests that require
the solution of the full system of Euler equations. The first test deals with
the hydrostatic equilibrium of a two-phase fluid. It aims to analyze the
ability of our Lagrangian scheme to handle sharp density
discontinuities. As we will see, the best results are obtained with
$\sigma\simeq 1$ in Eq. (28), owing to the complete suppression of the tensile
instability around the contact discontinuity. Then, we apply this scheme to
the study of the interaction of a supersonic wind with an isothermal, high-density
spherical blob (the wind-blob test), to the growth of the Kelvin-Helmholtz (KH)
instability, and to the simulation of the evolution of a point-like
explosion (the Sedov test). The simulations were carried out using the ISPH
code SPHYNX and the outcomes have been compared to well-known solutions. We
put special emphasis on the comparison among models with different choices of
$\sigma$ and volume elements, namely the standard choice $X_{a}=m_{a}$ and the
improved VEs with $X_{a}=m_{a}/\rho^{0}_{a}$. (Actually, any constant magnitude
is a suitable choice for $X_{a}$, but taking the mass of the particle allows
for fine tuning of the density, if needed, by slightly modifying the mass of
the particles. In nearly isobaric systems with shallow pressure gradients, the
choice $X_{a}=P_{a}^{k}$, $k\leq 1$, could also be appropriate.)
### 6.1 The isobaric two-fluid test
The simulation of the hydrostatic evolution of a two-phase system with very
different densities and internal energies is far from trivial with SPH
codes. We consider a system of two fluids separated by a contact discontinuity
but in pressure equilibrium:
$\rho=\begin{cases}4;&0.25\leq x,y,z\leq 0.75,\\ 1;&\mathrm{otherwise}.\end{cases}$ (37)
The system is isobaric with $P=2.5$ and we use $N=110^{3}$ equal-mass
particles. If the density around the contact discontinuity is not adequately
smoothed, the two-phase system evolves non-physically when the full Lagrangian
scheme (Eqs. 28, 31, with $\sigma=0$) is applied. The reason is that the error
in $\nabla\rho$ becomes too large at the contact discontinuity, inducing the
tensile instability. This is just the kind of simulation where the use of the
magnitude $\sigma$ (see Subsec. 5.1), defined by Eq. (36), becomes really
helpful.
Figure 5: Slices showing the density colormap of models $H_{1},H_{2},H_{3},H_{4}$ in Table 2 (rows) at times t=0.03, 0.55, 1.05 and 1.5 (columns), respectively.
Figure 6: Colormap slice showing the averaged $\sigma$-parameter (Eq. 36) in model $H_{8}$ at $t=0.55$. Appreciable values of $\sigma$ are only attained at the fluid interphase.
Table 2: $L_{1}$ values for errors $E_{1}$ and $E_{2}$ at $t=1.5$ in the hydrostatic square test. The meaning of the columns is: $\sigma$ is the magnitude defined in Eq. (36), $\alpha_{u}$ controls the amount of heat transport in the AV (Eq. 33), $X$ is the estimator connected to the VE choice, and $L_{1}(E_{1})$, $L_{1}(E_{2})$, $L_{1}(d)$ are the averaged $L_{1}$ values of the partition of unit, the $|\Delta r|/h$ condition, and the deviations of the SPH particles from their initial positions, respectively.
Model | $\sigma$ | $\alpha_{u}$ | $X$ | $L_{1}(E_{1})$ | $L_{1}(E_{2})$ | $L_{1}(d)$
---|---|---|---|---|---|---
$H_{1}$ | $0.0$ | $0.1$ | m | $5.0\times 10^{-3}$ | $3.5\times 10^{-3}$ | $4.5\times 10^{-2}$
$H_{2}$ | $0.0$ | $1.0$ | m | $4.2\times 10^{-3}$ | $3.3\times 10^{-3}$ | $3.5\times 10^{-2}$
$H_{3}$ | $1.0$ | $0.1$ | m | $5.6\times 10^{-3}$ | $3.7\times 10^{-3}$ | $1.4\times 10^{-2}$
$H_{4}$ | $[0,1]$ | $0.1$ | m | $6.7\times 10^{-3}$ | $4.1\times 10^{-3}$ | $1.9\times 10^{-2}$
$H_{5}$ | $0.0$ | $0.1$ | $m/\rho_{0}$ | $3.2\times 10^{-3}$ | $3.8\times 10^{-3}$ | $4.7\times 10^{-2}$
$H_{6}$ | $0.0$ | $1.0$ | $m/\rho_{0}$ | $3.3\times 10^{-3}$ | $3.7\times 10^{-3}$ | $3.5\times 10^{-2}$
$H_{7}$ | $1.0$ | $0.1$ | $m/\rho_{0}$ | $3.0\times 10^{-3}$ | $3.6\times 10^{-3}$ | $1.3\times 10^{-2}$
$H_{8}$ | $[0,1]$ | 0.1 | $m/\rho_{0}$ | $3.3\times 10^{-3}$ | $3.8\times 10^{-3}$ | $1.7\times 10^{-2}$
The results of applying Eqs. (28) and (31) to the hydrostatic square test are
summarized in Table 2 and in Fig. 5. The density colormap slices depicted
in Fig. 5 show that the behavior of the square is primarily controlled by the
value of $\sigma$. In the case of the full Lagrangian formulation (models
$H_{1},H_{2}$), calculated with $\sigma=0$, the system completely loses its
shape after a few tenths of a second. Increasing the
conductivity in the AV (model $H_{2}$) slightly delays the deformation
although, in the end, it is not able to prevent it. Model $H_{3}$, calculated
with $\sigma=1$, leads to the best results, which is in agreement with previous
calculations [30]. It is worth noting that model $H_{4}$, using the self-adaptive
scheme with $0\leq\sigma\leq 1$, is also able to preserve the shape of
the square (although not as well as model $H_{3}$), while remaining Lagrangian-compatible
over the vast majority of the domain. As shown in Fig. 6, the
algorithm (Eq. 36) which self-adapts $\sigma$ as a function of the local
density contrast works splendidly, increasing the value of $\sigma$ only
where it is needed.
To investigate the dependence of the errors $E_{1,2}$ (Eqs. 16 and 17) on the
VE estimator and $\sigma$, at each integration step we calculated the
following $L_{1}$ values in a spherical shell around the contact
discontinuity, $0.1\leq r\leq 0.4$, where $r$ is the distance to the center of
mass:
$L_{1}(E_{1,2})=\left<|E_{1,2}|\right>\,,$ (38)
where the brackets stand for the arithmetic average of the errors in the
shell. The additional $L_{1}(d)$ of Table 2 is defined as the average error of
the absolute displacement of the SPH particles located in the shell with
respect to their initial positions:
$L_{1}(d)=\frac{1}{N}\sum_{b=1}^{N}\sqrt{(x_{b}(t)-x_{b}(0))^{2}+(y_{b}(t)-y_{b}(0))^{2}+(z_{b}(t)-z_{b}(0))^{2}}$ (39)
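The displacement metric of Eq. (39) is simply the mean Euclidean distance travelled by the shell particles; a minimal sketch:

```python
import numpy as np

def l1_displacement(pos_t, pos_0):
    # Eq. (39): average absolute displacement of the particles with respect
    # to their initial positions; both arrays have shape (N, 3).
    return np.linalg.norm(np.asarray(pos_t) - np.asarray(pos_0), axis=1).mean()
```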
The full Lagrangian models ($H_{1}$, $H_{2}$, $H_{5}$, $H_{6}$), calculated with
$\sigma=0$, give the worst results, with $L_{1}(d)>3\times 10^{-2}$ at $t=1.5$.
The minimum value of $L_{1}(d)$ corresponds to model $H_{7}$ ($L_{1}(d)\simeq 10^{-2}$),
calculated with $\sigma=1$ and improved VEs. However, model $H_{8}$, calculated
with the self-adaptive $\sigma$ and improved VEs, also displays a similarly
good behavior, while keeping full Lagrangian consistency ($\sigma\simeq 0$) in
a large fraction of the domain (see Fig. 6).
### 6.2 The Kelvin-Helmholtz instability
The correct simulation of the evolution of the contact layer between fluids
with different densities is of capital importance for the adequate growth of
the Kelvin-Helmholtz (KH) instability. García-Senz et al. [12] proved that the
use of ISPH improves the evaluation of gradients overall (and in the contact
layer in particular), preventing the appearance of the tensile instabilities
that otherwise suppress the growth of the KH instability. Because SPHYNX uses ISPH
by default, all KH simulations presented here show no signs of tensile
instability and have growth rates close to the reference calculation of [36]
with the code PENCIL. Nevertheless, using a volume element that better
fulfills conditions (16) and (17) should further improve the accuracy of
ISPH and, as a consequence, yield a better KH growth rate.
We ran this test in a thin three-dimensional layer of size $[1\times 1\times
0.0625]$ with $4.2\times 10^{6}$ equal-mass particles, distributed in a ratio
2:1 between the high- and low-density regions. For the initial setting we have
three stratified layers, the central layer being the high-density one. Each
region was generated from a random particle distribution relaxed to a glass-like
configuration. The equation of state, initial velocities, and initial
pressure are the same as those in Sect. 5.4.1 of [20].
We used two different VEs (the standard $X_{a}=m_{a}$ and the enhanced version
proposed here $X_{a}=m_{a}/\rho^{0}_{a}$) and, for each case, we performed two
simulations: one fixing $\sigma=0$, hence ensuring fully Lagrangian
compatibility, and another one allowing $\sigma$ to vary according to Eq. 36.
All simulations are summarized in Table 3.
Table 3: List of simulated models in the Kelvin-Helmholtz test. The columns present the following: $\sigma$ is the magnitude defined in Eq. (36), $X$ is the estimator connected to the VE choice, $\alpha_{min}$ is the minimum value of the AV parameter, $\rho_{1}/\rho_{2}$ is the density ratio, and $L_{1}(E_{1})$ and $L_{1}(E_{2})$ are the averaged $L_{1}$ values of the partition of unit and the $|\Delta r|/h$ condition, respectively, at $t=2$.
Model | $\sigma$ | $X$ | $\alpha_{min}$ | $\rho_{1}/\rho_{2}$ | $L_{1}(E_{1})$ | $L_{1}(E_{2})$
---|---|---|---|---|---|---
$KH_{1}$ | $0$ | m | 0.05 | 2 | $1.0\times 10^{-2}$ | $4.4\times 10^{-3}$
$KH_{2}$ | $[0-1]$ | m | 0.05 | 2 | $9.9\times 10^{-3}$ | $4.3\times 10^{-3}$
$KH_{3}$ | $0$ | $m/\rho_{0}$ | 0.05 | 2 | $2.8\times 10^{-3}$ | $3.7\times 10^{-3}$
$KH_{4}$ | $[0-1]$ | $m/\rho_{0}$ | 0.05 | 2 | $2.9\times 10^{-3}$ | $3.8\times 10^{-3}$
$KH_{5}$ | $[0-1]$ | m | 0.5 | 2 | $1.0\times 10^{-2}$ | $3.5\times 10^{-3}$
$KH_{6}$ | $[0-1]$ | $m/\rho_{0}$ | 0.5 | 2 | $2.2\times 10^{-3}$ | $2.6\times 10^{-3}$
$KH_{7}$ | $[0-1]$ | m | 0.05 | 8 | $1.8\times 10^{-2}$ | $7.7\times 10^{-3}$
$KH_{8}$ | $[0-1]$ | $m/\rho_{0}$ | 0.05 | 8 | $5.8\times 10^{-3}$ | $6.1\times 10^{-3}$
In Fig. 7 we show the particle distributions for each simulated model
(each snapshot corresponds to one model, from $KH_{1}$ to $KH_{4}$) at $t=2$.
The color represents density. As can be seen, there are very few
differences among the simulated models. In all cases, the KH billows are able
to develop, with small differences among simulations, mostly at the extremes
of the billows.
Figure 7: Particle distributions of models $KH_{1}$ to $KH_{4}$ at $t=2$. The
density [1:2] is color-coded.
We can also examine the amplitude growth of the $v_{y}$ mode and
compare it with a reference evolution taken from the PENCIL code [36]. Figure
8 (left) shows this evolution; clearly, there is very little difference
between the two VE implementations, in agreement with the results of
[20].
Figure 8: Amplitude growth of the $v_{y}$ field in the KH instability test for
all calculated models. Solid lines correspond to $\alpha_{min}=0.05$, while
dashed lines are from the models with increased AV ($\alpha_{min}=0.5$). Those
with a density jump of two are compared with the reference PENCIL simulation
in the left panel. The right panel shows two simulations with a density jump
of eight for both choices of the VEs.

Figure 9: $L_{1}$ evolution of errors $E_{1}$ and $E_{2}$ for the KH models. Solid lines correspond to $X=m$, while dashed lines are those of $X=m/\rho_{0}$.

Figure 10: Colormap of $\sigma$ (Eq. 36) in model $KH_{4}$ at $t=2$.
Figure 9 presents the time evolution of the averaged $L_{1}$ value for errors
$E_{1}$ and $E_{2}$. When averaging the $L_{1}$ values we restricted ourselves
to particles with $\sigma\geq 0.025$. This condition selects the
particles located at the interphase (see Fig. 10), which is the region
where the error originates and where the accurate evaluation of the VEs is
most critical. It is clear that whenever the improved VEs are used (dashed
lines in Fig. 9), these errors decrease, independently of the scenario being
simulated. The last two columns of Table 3 give the numerical value of $L_{1}$
for all simulated tests at $t=2$, where we can see that the choice
$X=m/\rho_{0}$ reduces both the $L_{1}(E_{1})$ and $L_{1}(E_{2})$ errors.
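The masked error averaging just described can be sketched in a few lines. The snippet below is a minimal 1D illustration, not the paper's 3D code: it builds the per-particle errors $E_{1}$ and $E_{2}$ for a uniform periodic particle set with a Gaussian kernel, then averages them only over particles passing the $\sigma\geq 0.025$ cut; the `sigma` array here is a stand-in.

```python
import numpy as np

def gaussian_kernel(r, h):
    # 1D Gaussian kernel, normalised so that sum(V_b * W_ab) ~ 1 on a uniform set
    return np.exp(-(r / h) ** 2) / (np.sqrt(np.pi) * h)

# Uniform periodic 1D particle set (a stand-in for the 3D glass configuration)
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
m = np.full(n, 1.0 / n)          # equal masses
h = 2.0 * (x[1] - x[0])          # smoothing length ~ 2 particle spacings

dx = x[:, None] - x[None, :]     # pairwise separations x_a - x_b
dx -= np.round(dx)               # minimum-image convention on [0, 1)
W = gaussian_kernel(dx, h)

rho0 = W @ m                     # standard SPH density rho^0_a
V = m / rho0                     # volume elements (standard choice X = m)

E1 = np.abs(W @ V - 1.0)                 # partition-of-unit error, Eq. (16)
E2 = np.abs((W * (-dx)) @ V) / h         # |<Delta r>|/h condition, Eq. (17)

# Average L1 only over "interphase" particles, mimicking the sigma >= 0.025 cut;
# sigma is a placeholder here (all particles pass for this uniform set)
sigma = np.full(n, 0.05)
mask = sigma >= 0.025
L1_E1 = np.mean(E1[mask])
L1_E2 = np.mean(E2[mask])
print(L1_E1, L1_E2)
```

For this uniform set both averaged errors sit at round-off level; nontrivial values arise near density jumps, as in Table 3.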
From these results we conclude that, although the choice of VEs decreases the
errors $E_{1}$ and $E_{2}$, it has little influence on the growth of the
KH instability when the density contrast is moderate. The most important
element here is the IA implementation used to calculate gradients, as shown
in [20]. The additional improvement of conditions (16) and (17) obtained with
the improved VEs is subdominant. The dominant source of error at this level is the
artificial viscosity. To test this we performed two additional simulations,
$KH_{5}$ and $KH_{6}$, with an artificially increased $\alpha_{min}$ in our AV
switches. This $\alpha_{min}$ controls the amount of dissipation when the AV
switch is off; it is therefore the minimum dissipation present in the whole
system. Our standard value is $\alpha_{min}=0.05$, while for $KH_{5}$ and
$KH_{6}$ we used $\alpha_{min}=0.5$. For reference, traditional AV
without switches employs a global value $\alpha_{min}\simeq 1-1.5$. The
amplitude growth of the $v_{y}$ field is also shown in the left panel of Fig.
8 (dashed lines). It is clear that the AV has a major effect on the
development of the instability. Still, the VEs have a negligible impact when
the IA formalism is used.
Our final test explores whether the VEs can have an influence when the
density contrast is higher. To do so, we decreased the density of the
low-density region by a factor of 4, simply by generating a new relaxed model
with 4 times fewer particles, and repeated simulations $KH_{2}$ and $KH_{4}$
with this new density jump of a factor of 8 (simulations $KH_{7}$ and $KH_{8}$).
We present the evolution of the amplitude growth of the $v_{y}$ field in the
right panel of Fig. 8. In this case, the growth is more irregular and there is
a clear difference between the VEs in the linear regime. The improved VEs grow
faster than the standard VEs in the initial stages of the development
of the KH instability, although both converge later, in the non-linear phase. The
IA formalism still does a good job evaluating gradients even with a density
jump of a factor of 8. Nevertheless, using the improved VEs at this density
contrast has a noticeable effect.
In summary, if there is a mild density contrast, the IA implementation is good
enough to make the choice of VEs subdominant. The most relevant parameter in
these situations is keeping the dissipation at its lowest possible value, so
that random noise is still dissipated but the shear between the different
fluid layers is not affected. In this respect, improved handling of the AV would
be welcome, such as that presented in [37], based on the instantaneous
numerical entropy generation rate. If the density contrast is larger, the
IA formalism can be further improved by using a VE that better fulfills
conditions (16) and (17).
### 6.3 The wind-cloud collision
The wind-cloud collision scenario, also called the "blob" test [6], is a
challenging test for SPH, involving many pieces of physics such as strong
shocks and mixing due to the KH instability, amidst a two-phase medium with a
large density contrast. In this test a spherical cloud of cold gas, initially
at rest, is swept by a low-density stream of gas (the wind) moving
supersonically. As a consequence, the cloud deforms and, after a while, is
fragmented and mixed with the background owing to the combined effect of
ablation and hydrodynamic instabilities. This scenario has been extensively
studied in recent years [6, 30, 22, 21, 11], where the tensile
instability appearing at the wind-blob contact was identified as the main
difficulty to overcome.
We want to check the ability of our numerical scheme to handle this test. The
initial conditions were those described in [20], except that the calculation
is now carried out in 3D with approximately $N_{w}=11.23\times 10^{6}$ and
$N_{c}=1.23\times 10^{5}$ particles for the wind and the cloud, respectively.
The initial particle distribution for both was that of a stable glass-like
configuration. The box has a size {$0.25\times 0.25\times 1$} with periodic
boundary conditions. The high-density cloud is initially located at
$(0.125,0.125,0.125)$ with a radius $R=1/40$ and a density $\rho_{c}=10$, ten
times that of the surrounding wind. The internal energies of the wind and
the cloud are $u_{w}=3/2$ and $u_{c}=3/20$, respectively, so that both regions
are in pressure equilibrium. The wind has an initial velocity $(2.7,0,0)$.
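As a quick consistency check of these initial conditions, the stated internal energies put the two phases in pressure equilibrium for an ideal gas with $\gamma=5/3$ (the $\gamma$ value is an assumption here; the text only quotes the $\rho$ and $u$ values):

```python
# Ideal-gas check of the blob initial conditions: P = (gamma - 1) * rho * u,
# assuming gamma = 5/3 (not stated explicitly in the text above).
gamma = 5.0 / 3.0
rho_w, u_w = 1.0, 3.0 / 2.0    # wind
rho_c, u_c = 10.0, 3.0 / 20.0  # cloud: ten times denser, ten times colder
P_w = (gamma - 1.0) * rho_w * u_w
P_c = (gamma - 1.0) * rho_c * u_c
print(P_w, P_c)  # both approximately 1.0
```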
Table 4: Simulated models in the wind-cloud test. The columns present the following: $\sigma$ is the magnitude defined in Eq. 36, $X$ is the estimator connected to the VE choice, and $L_{1}(E_{1})$ and $L_{1}(E_{2})$ are the averaged $L_{1}$ values of the partition of unit and the $|\Delta r|/h$ condition, respectively, at $t/t_{KH}=2$.

Model | $\sigma$ | X | $L_{1}(E_{1})$ | $L_{1}(E_{2})$
---|---|---|---|---
$W_{1}$ | $[0-1]$ | $m$ | $1.9\times 10^{-2}$ | $1.2\times 10^{-2}$
$W_{2}$ | $[0-1]$ | $m/\rho_{0}$ | $7.0\times 10^{-3}$ | $9.3\times 10^{-3}$
In Table 4 we present the $L_{1}$ values of errors $E_{1}$ and $E_{2}$ for the
simulated models. Both errors are clearly lower when the improved VEs are
used. This trend holds during the whole evolution of the simulation, as
can be seen in Fig. 11. As in the KH test, we used the
parameter $\sigma$ to track the $L_{1}$ error generated by the particles at
the interphase. In Fig. 12 we show a series of snapshots where the color
represents the value of $\sigma$, demonstrating the capability of our algorithm
to track density jumps.
Figure 11: $L_{1}$ values for $E_{1}$ (solid lines) and $E_{2}$ (dashed lines)
in the wind-bubble tests. Using the improved VEs reduces both $L_{1}$ values
for the whole simulation.

Figure 12: Particle distribution of the wind-blob test for model $W_{2}$ at $t/t_{KH}=0$, 1, 2, and 3. The parameter $\sigma$ is color-coded.
The density color-map at $t/t_{KH}=1.5$ is shown in Fig. 13 for both choices
of VEs.
Figure 13: Particle distribution of the wind-blob test for models $W_{1}$ and
$W_{2}$ at $t/t_{KH}=1.5$. Density is color-coded.
In order to characterize the destruction of the high-density cloud, we tracked
the percentage of surviving cloud mass (criteria: $\rho\geq 0.64\rho_{c}$ and
$u\leq 0.9u_{w}$) as a function of $t/t_{KH}$ in Fig. 14, where $t_{KH}$ is the
characteristic growth time of the KH instability as defined in [6]. From this
result we can see that the choice of the VE is less relevant than using a
crossed formulation of the equations (i.e. variable $\sigma$ or $\sigma=1$).
Indeed, the cloud is mostly destroyed in all simulations, and this is mainly
due to the integral formulation of the SPH equations combined with the
addition of the heat transfer term in the AV equation. Nevertheless, the
surviving fraction of the cloud decreases faster and to lower values provided
we do not use $\sigma=0$, independently of the VE choice. The slight delay in
the evolution of the surviving fraction with $X=m/\rho_{0}$ simply reflects
the fact that the ensuing VEs better track the density peak at the forward
shock moving through the cloud. This density enhancement results in the slight
delay in crossing the threshold condition $\rho\geq 0.64\rho_{c}$.
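The survival diagnostic can be sketched as follows; the particle arrays below are toy stand-ins, and only the threshold logic ($\rho\geq 0.64\rho_{c}$, $u\leq 0.9u_{w}$) comes from the text:

```python
import numpy as np

# Surviving-cloud-mass diagnostic: a particle still counts as "cloud" if
# rho >= 0.64 * rho_c AND u <= 0.9 * u_w.
rho_c, u_w = 10.0, 1.5

def surviving_fraction(m, rho, u, m_cloud_initial):
    """Percentage of the initial cloud mass still satisfying both criteria."""
    alive = (rho >= 0.64 * rho_c) & (u <= 0.9 * u_w)
    return 100.0 * m[alive].sum() / m_cloud_initial

# Toy snapshot: half of the cloud particles already diluted below the threshold
m = np.ones(100)
rho = np.where(np.arange(100) < 50, 10.0, 2.0)
u = np.full(100, 0.15)
print(surviving_fraction(m, rho, u, m.sum()))  # -> 50.0
```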
Figure 14: Percentage of surviving cloud as a function of $t/t_{KH}$ for the
wind-cloud test.
### 6.4 The Sedov explosion
We simulated this test in a three-dimensional square box of side $L=1$ with a
total number of particles $N=100^{3}$ settled in a glass-like configuration.
The mass of all particles was the same, resulting in a homogeneous
initial density profile with $\rho=1\pm 0.005$. The explosion was initiated at
the center of the box by depositing an amount of energy $\Delta U=1$. That
energy was spread following a Gaussian profile with characteristic width
$\delta=0.1$.
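A sketch of this energy deposition step, under the assumption that the Gaussian weights are normalised so that the injected energy sums exactly to $\Delta U$ (the text states the profile shape and width but not the normalisation details):

```python
import numpy as np

# Spread Delta U = 1 over particles with a Gaussian profile of width
# delta = 0.1, normalising so the total injected energy equals Delta U.
rng = np.random.default_rng(0)
pos = rng.random((10_000, 3))          # stand-in for the glass configuration
center = np.array([0.5, 0.5, 0.5])
delta, dU = 0.1, 1.0

r2 = np.sum((pos - center) ** 2, axis=1)
w = np.exp(-r2 / delta**2)             # Gaussian weight per particle
du = dU * w / w.sum()                  # per-particle internal-energy increment
print(du.sum())                        # ~1.0 by construction
```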
Table 5: The Sedov test: density peak and $L_{1}$ errors at $t=0.09$.

Model | $\sigma$ | $\alpha_{u}$ | $X$ | $\rho_{max}$ | $L_{1}(E_{1})$ | $L_{1}(E_{2})$
---|---|---|---|---|---|---
$S_{1}$ | $0.0$ | $0.1$ | m | $3.34$ | $3.10\times 10^{-2}$ | $2.20\times 10^{-2}$
$S_{2}$ | $0.0$ | $0.5$ | m | $3.16$ | $2.85\times 10^{-2}$ | $1.96\times 10^{-2}$
$S_{3}$ | $1.0$ | $0.1$ | m | $3.33$ | $3.05\times 10^{-2}$ | $2.20\times 10^{-2}$
$S_{4}$ | $[0,1]$ | $0.1$ | m | $3.35$ | $3.10\times 10^{-2}$ | $2.20\times 10^{-2}$
$S_{5}$ | $0.0$ | $0.1$ | $m/\rho_{0}$ | $3.71$ | $1.53\times 10^{-2}$ | $1.72\times 10^{-2}$
$S_{6}$ | $0.0$ | $0.5$ | $m/\rho_{0}$ | $3.47$ | $1.33\times 10^{-2}$ | $1.52\times 10^{-2}$
$S_{7}$ | $1.0$ | $0.1$ | $m/\rho_{0}$ | $3.70$ | $1.45\times 10^{-2}$ | $1.67\times 10^{-2}$
$S_{8}$ | $[0,1]$ | $0.1$ | $m/\rho_{0}$ | $3.71$ | $1.45\times 10^{-2}$ | $1.63\times 10^{-2}$
Figure 15: The 3D Sedov explosion for models $S_{4}$ and $S_{8}$ in Table 5.
Top row: density profiles at $t=0.09$ calculated with $X_{a}=m_{a}$ (model
$S_{4}$, standard VEs) and $X_{a}=m_{a}/\rho_{a}^{0}$ (model $S_{8}$, improved
VEs). The green line is the analytical solution. Note that the spherical
symmetry is well preserved in both cases and that the density peak is very
well reproduced with the improved VEs. Bottom row: same but for the partition
of unit. All particles are represented in the plots.

Figure 16: Colormap slice showing the value of the averaged $\sigma$-parameter defined in Eq. 36, for model $S_{8}$ in Table 5 at $t=0.09$.

Figure 17: The Sedov explosion. Evolution of the averaged $L_{1}$ error (Eq. 38) in the shocked region for models $S_{4}$ and $S_{8}$ in Table 5. Both the partition of unit and the normalized $<\Delta r>/h$ condition are shown, for the two choices of the estimator considered in this work: $X_{a}=m_{a}$ (red and green lines, respectively) and $X_{a}=\frac{m_{a}}{\rho^{0}_{a}}$ (blue and violet lines, respectively). Also shown is the evolution of the maximum density for the two volume estimators (lines with points; the light-blue color is for model $S_{8}$) during the explosion.
To investigate the dependence of the errors $E_{1,2}$ (Eqs. 16 and 17) on the
choice of the VE estimator and on $\sigma$, we calculated the $L_{1}$ errors
(Eq. 38) in the shocked region, defined as the volume with specific internal
energy $u(t)\leq 0.1$, as in Sec. 6.1.
The outcome of the simulations is summarized in Table 5 and Figs. 15, 16, and
17. The best results were obtained in the models which incorporate the
improved VEs. Unlike in the hydrostatic square test, the particular value
adopted for the magnitude $\sigma$ plays a secondary role in the Sedov test.
Figure 15 shows the density profiles at $t=0.09$ for models $S_{4}$ and
$S_{8}$. In both cases the self-adaptive algorithm to estimate $\sigma$ was
active. The best match with the analytic profile (green line) was obtained
with $X_{a}=m_{a}/\rho^{0}_{a}$, not only in the peak values, which were much
better reproduced, but also in the width of the shock front. The relative
errors of the maximum density with respect to the analytical value at $t=0.09$
are $\simeq 16\%$ and $\simeq 7\%$ for the standard ($S_{4}$) and improved VEs
($S_{8}$), respectively. Figure 16 shows the distribution of the averaged
$\sigma$ in a thin slice through the system for model $S_{8}$. As can be
seen, the value of $\sigma$ self-adapts so that it approaches 1 across the
shock front and almost vanishes far from it.
The profile of the normalization function $\sum_{b}V_{b}W_{ab}(h_{a})$ at
$t=0.09$ is also shown in Fig. 15. As can be seen, the choice
$X_{a}=m_{a}/\rho^{0}_{a}$ substantially improves the partition of unit with
respect to the standard choice $X_{a}=m_{a}$. This assessment is quantitatively
confirmed by the temporal evolution of the error estimators
$L_{1}(E_{1},E_{2})$ shown in Fig. 17. The enhancement is especially good in
the case of the partition of unit, and less pronounced in the $<\Delta
r>$ condition. Therefore, we conclude that combining ISPH with the VEs
obtained with $X_{a}=m_{a}/\rho^{0}_{a}$ reduces both errors and improves the
simulations.
Finally, the impact of increasing the conductive term in the AV was also
analyzed. Raising the parameter $\alpha_{u}$ from our default choice,
$\alpha_{u}=0.1$, to $\alpha_{u}=0.5$ substantially reduces the density peak
from $\rho_{max}(t=0.09)=3.34$ in model $S_{1}$ to $\rho_{max}(t=0.09)=3.16$
in model $S_{2}$.
## 7 Conclusion
In this work we propose a simple scheme to improve the partition of unit in
SPH, with special emphasis on the connection between the partition of unit
and the accuracy of gradient estimates. By combining analytical
reasoning with simple 1D toy models and full 3D simulations, we have shown
that improving the constraint $\sum_{b}V_{b}W_{ab}(h_{a})=1$ automatically
leads to an enhancement of the condition $<\Delta r>=\sum_{b}V_{b}({\bf
r_{b}}-{\bf{r_{a}}})W_{ab}(h_{a})=0$. When gradients are approximated with an
integral expression (the ISPH scheme, Eq. 7), the fulfillment of the $<\Delta
r>$ constraint is a sufficient condition to perform exact linear
interpolations. Thus, provided that the volume elements $V_{a}$ are correctly
chosen, the ISPH scheme makes complete conservation of mass, momentum, energy,
and entropy compatible with a much better estimation of the gradient of
linear functions.
Our main aim, and one of the novelties of this work, is to improve the
partition of unit without leaving the Lagrangian formulation of ISPH, making
such enhancement fully compatible with the inclusion of grad-h terms in the
momentum and energy equations. To do that, the volume elements $V_{a}$ have
been redefined so that $V_{a}=X_{a}/k_{a}$ with $X_{a}=m_{a}/\rho^{0}_{a}$,
$k_{a}=\sum_{b}X_{b}W_{ab}(h_{a})$, and
$\rho^{0}_{a}=\sum_{b}m_{b}W_{ab}(h_{a})$. This particular choice of $V_{a}$
results in the new set of ISPH equations described in Appendix A. These ISPH
equations are not only Lagrangian-compatible, displaying perfect conservation
properties, but also manifestly improve the partition of unit and the estimation
of gradients. The computational cost of including these improved VEs, mainly due
to the implicit search of the coupled $h_{a},\rho_{a}$ magnitudes with a
Newton-Raphson algorithm, is subdominant.
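The redefined volume elements amount to two extra summations per particle. The sketch below is a 1D toy (Gaussian kernel, equal-mass particles with a smooth 2:1 density transition), not the paper's 3D implementation; it compares the partition of unit obtained with the standard and improved VEs:

```python
import numpy as np

def W(r, h):
    # 1D Gaussian kernel
    return np.exp(-(r / h) ** 2) / (np.sqrt(np.pi) * h)

# Equal-mass particles whose spacing (hence density) changes smoothly by 2:1
n = 400
spacing = 1.0 + 0.5 * (1.0 + np.tanh((np.arange(n) - n / 2) / 4.0))
x = np.cumsum(spacing)
m = np.ones(n)
h = 2.0 * spacing                    # per-particle h ~ 2 local spacings

dx = x[:, None] - x[None, :]
Wa = W(dx, h[:, None])               # gather form: kernels evaluated with h_a

rho0 = Wa @ m                        # rho^0_a = sum_b m_b W_ab(h_a)
V_std = m / rho0                     # standard VEs (X = m)
p_std = Wa @ V_std                   # partition of unit, standard

X = m / rho0                         # improved estimator X = m / rho^0
k = Wa @ X                           # k_a = sum_b X_b W_ab(h_a)
V_imp = X / k                        # improved VEs V_a = X_a / k_a
p_imp = Wa @ V_imp                   # partition of unit, improved

core = slice(20, n - 20)             # skip open-boundary truncation
E1_std = np.abs(p_std[core] - 1.0).mean()
E1_imp = np.abs(p_imp[core] - 1.0).mean()
print(E1_std, E1_imp)
```

In this toy setup the improved VEs reduce the mean partition-of-unit error in the transition region, mirroring the behaviour reported in the 3D tests above.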
The behavior of the new ISPH scheme has been checked with a number of standard
3D tests which involve large density jumps and contact discontinuities. The
second novelty of the present work has to do with the correct handling of
these abrupt density jumps. The numerical handling of fluids with large
density contrasts, embedded in nearly isobaric media, usually requires
abandoning the Lagrangian formulation of the SPH equations in favor of other
schemes. We propose here a self-adaptive scheme, steered by a single parameter
$0\leq\sigma\leq 1$ (see Eqs. 28, 31, and 36), which selectively chooses the
optimal integration scheme. When $\sigma\simeq 0$ the Lagrangian-compatible
scheme is recovered, while $\sigma\simeq 1$ is more adequate to handle large
gradients, but not fully Lagrangian-compatible.
The results of the hydrodynamic tests unambiguously support the conclusions of
the analytical arguments and simple toy models described in Section 4: the
proposed volume elements improve both the partition of unit and the
$<\Delta r>$ condition in all studied cases. Nevertheless, the level of
enhancement in the hydrodynamic simulations is lower than that promised by the
toy-model experiments, especially regarding the $<\Delta r>$
constraint. This degradation is attributable to the larger particle disorder
in the real three-dimensional simulations, but the results with the
improved VEs are still valuable. On another note, the novel self-adaptive $\sigma$
scheme works very well, providing much better results in the hydrostatic and
wind-cloud tests than the $\sigma=0$ calculation. The results of the Kelvin-
Helmholtz test suggest that the artificial viscosity algorithm plays a central
role there, with the VEs and the $\sigma$ choice being subdominant. Finally, the
Sedov test clearly shows the best performance of the proposed VEs, which
reproduce the correct density jump across the shock even in three-dimensional
calculations with a moderate number of particles.
As an immediate prospect, we plan to use our improved ISPH code to numerically
reproduce isothermal, subsonic and supersonic turbulence. Implementing the
ISPH scheme to handle magnetohydrodynamic effects is also under way.
## Acknowledgment
This work has been supported by the MINECO Spanish project AYA2017-86274-P and
the Generalitat of Catalonia SGR-661/2017 (D.G.), by the Swiss Platform for
Advanced Scientific Computing (PASC) project SPH-EXA: Optimizing Smooth
Particle Hydrodynamics for Exascale Computing (R.C. and D.G.). The authors
acknowledge the support of sciCORE (http://scicore.unibas.ch/) scientific
computing core facility at University of Basel, where part of these
calculations were performed.
## Appendix A The SPH and ISPH formalisms with generalized VE
According to the Euler-Lagrange formulation of SPH ([3] and references
therein), the equations of motion are written as,
$m_{a}\ddot{\mathbf{r}}_{a}=-\sum_{b}m_{b}\frac{P_{b}}{\rho_{b}^{2}}\frac{\partial\rho_{b}}{\partial{\mathbf{r}_{a}}}=\sum_{b}P_{b}\frac{\partial
V_{b}}{{\partial\mathbf{r}}_{a}}$ (40)
where $V_{b}$ is the characteristic volume occupied by the particle, and
$\rho_{b}=m_{b}/V_{b}$. The derivative on the RHS embodies the effect of
$h$-gradients. The spatial part of the derivative can be performed in the
standard way, $\nabla W_{ab}$, or, better, with the integral approach given by
Eq. (30).
We first calculate the value of $\partial V_{b}/\partial{\mathbf{r}}_{a}$ in
(40),
$\frac{\partial
V_{b}}{{\partial\mathbf{r}}_{a}}=\nabla_{a}V_{b}+\frac{\partial
V_{b}}{\partial h_{b}}\frac{\partial h_{b}}{\partial{\mathbf{r}}_{a}}$ (41)
The grad-h part of the equation above can be estimated by differentiating the
constraint $h_{b}^{3}V_{b}^{-1}=C$ with respect to ${\mathbf{r}}_{a}$ which,
after some algebra, gives:
$\frac{\partial V_{b}}{\partial h_{b}}\leavevmode\nobreak\ \frac{\partial
h_{b}}{\partial{\mathbf{r}}_{a}}\left[1-\frac{3V_{b}}{h_{b}}\left(\frac{\partial
V_{b}}{\partial h_{b}}\right)^{-1}\right]=-\nabla_{a}V_{b}$ (42)
which, combined with expression (41), gives
$\frac{\partial
V_{b}}{{\partial\mathbf{r}}_{a}}=\nabla_{a}V_{b}\left[1-\frac{h_{b}}{3V_{b}}\frac{\partial
V_{b}}{\partial h_{b}}\right]^{-1}$ (43)
where $\nabla_{a}V_{b}$ refers to the spatial gradient whereas the grad-h
effects are included in the term in brackets.
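The algebra leading from Eqs. (41) and (42) to Eq. (43) can be checked symbolically; in the snippet below, `g` stands for the product $(\partial V_{b}/\partial h_{b})(\partial h_{b}/\partial{\mathbf{r}}_{a})$ and the other symbols are placeholders for the corresponding quantities:

```python
import sympy as sp

# Verify that combining Eqs. (41) and (42) yields the grad-h factor of Eq. (43):
# dV/dr = grad_V / (1 - (h / (3 V)) dV/dh).
gradV, dVdh, g, h, V = sp.symbols('gradV dVdh g h V')

# Eq. (42): g * [1 - (3 V / h) (dV/dh)^(-1)] = -grad_V
eq42 = sp.Eq(g * (1 - 3 * V / (h * dVdh)), -gradV)
g_sol = sp.solve(eq42, g)[0]

dVdr = gradV + g_sol                        # Eq. (41): dV/dr = grad_V + g
target = gradV / (1 - h * dVdh / (3 * V))   # Eq. (43)
assert sp.simplify(dVdr - target) == 0
print("Eq. (43) verified")
```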
To estimate $\nabla_{a}V_{b}$, the precise form of the volume elements has to
be known. A general form for these elements is [21]:
$V_{b}=\frac{X_{b}}{k_{b}}\qquad\mathrm{with}\qquad
k_{b}=\sum_{c}X_{c}W_{bc}(h_{b})$ (44)
where $X_{b}$ is a scalar estimator. Here we have considered two different
estimator families, leading to slightly different expressions of the movement
and energy equations:
a) Constant $X_{b}$, as for example $X_{b}=m_{b}$ or $X_{b}=1$, both
reproducing the standard volume elements $V_{b}=m_{b}/\rho_{b}$. Another choice
is $X_{b}=P^{k}_{b}$, where $P$ is the pressure and $k\leq 1$. In this case,
the estimator is strictly constant only in isobaric systems.
$\nabla_{a}V_{b}\xrightarrow[b=a]{}\nabla_{a}V_{a}=\nabla_{a}\left(\frac{X_{a}}{k_{a}}\right)=-\frac{X_{a}}{k_{a}^{2}}\nabla_{a}k_{a}\\\
=-\frac{X_{a}}{k_{a}^{2}}\sum_{b}X_{b}\nabla_{a}W_{ab}(h_{a})$ (45)
$\nabla_{a}V_{b}\xrightarrow[b\neq
a]{}\nabla_{a}V_{b}=\nabla_{a}\left(\frac{X_{b}}{k_{b}}\right)=-\frac{X_{b}}{k_{b}^{2}}\nabla_{a}k_{b}=-\frac{X_{b}}{k_{b}^{2}}\leavevmode\nobreak\
X_{a}\nabla_{a}W_{ab}(h_{b})$ (46)
Combining expressions (43), (45), and (46) with equation (40), and making use
of Eq. (30) to carry out the IA approach of the kernel gradient, the
$i$-component of the acceleration of particle $a$ is finally obtained:
$\ddot{x}_{i,a}=-\frac{X_{a}}{m_{a}}\sum_{b}\left[\frac{X_{b}P_{a}}{\Omega_{a}k_{a}^{2}}\mathcal{A}_{i,ab}(h_{a})+\frac{X_{b}P_{b}}{\Omega_{b}k_{b}^{2}}\mathcal{A}_{i,ab}(h_{b})\right]$
(47)
with,
$\Omega_{a}=\left[1-\frac{h_{a}}{3V_{a}}\frac{\partial V_{a}}{\partial
h_{a}}\right]=\left[1+\frac{h_{a}}{3\rho_{a}}\frac{\partial\rho_{a}}{\partial
h_{a}}\right]$ (48)
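The equality of the two bracketed forms of $\Omega_{a}$ in Eq. (48) follows from $\rho_{a}=m_{a}/V_{a}$; a short symbolic check:

```python
import sympy as sp

# Check the two forms of Omega_a in Eq. (48) agree when rho = m / V(h):
# 1 - (h/(3V)) dV/dh  ==  1 + (h/(3 rho)) drho/dh
h, m = sp.symbols('h m', positive=True)
V = sp.Function('V')(h)
rho = m / V
lhs = 1 - h / (3 * V) * sp.diff(V, h)
rhs = 1 + h / (3 * rho) * sp.diff(rho, h)
assert sp.simplify(lhs - rhs) == 0
print("Eq. (48) forms agree")
```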
b) $X_{b}=m_{b}/\rho_{b}^{0}$, where $\rho^{0}_{b}=\sum_{c}m_{c}W_{bc}(h_{b})$
is the standard SPH density. Because of the normalization condition, this
choice leads to constant $k_{b}=\sum_{c}X_{c}W_{bc}(h_{b})$. Now we have,
$\begin{split}\nabla_{a}V_{b}\xrightarrow[b=a]{}\nabla_{a}V_{a}=\nabla_{a}\left(\frac{X_{a}}{k_{a}}\right)=\frac{\nabla_{a}X_{a}}{k_{a}}&=-\frac{m_{a}}{(\rho^{0}_{a})^{2}k_{a}}\nabla_{a}\rho^{0}_{a}\\\
&=-\frac{m_{a}}{(\rho^{0}_{a})^{2}k_{a}}\sum_{b}m_{b}\nabla_{a}W_{ab}(h_{a})\end{split}$
(49)
$\begin{split}\nabla_{a}V_{b}\xrightarrow[b\neq
a]{}\nabla_{a}V_{b}=\nabla_{a}\left(\frac{X_{b}}{k_{b}}\right)=\frac{\nabla_{a}X_{b}}{k_{b}}&=-\frac{m_{b}}{(\rho^{0}_{b})^{2}k_{b}}\nabla_{a}\rho^{0}_{b}\\\
&=-\frac{m_{b}m_{a}}{(\rho^{0}_{b})^{2}k_{b}}\nabla_{a}W_{ab}(h_{b})\end{split}$
(50)
Combining expressions (43), (49), and (50) with equation (40), and making use
of Eq. (30) to compute the IA approach of the kernel gradient, the
$i$-component of the acceleration of particle $a$ is obtained,
$\ddot{x}_{i,a}=-\sum_{b}m_{b}\left[\frac{X_{a}^{2}P_{a}}{\Omega_{a}m_{a}^{2}\leavevmode\nobreak\
k_{a}}\mathcal{A}_{i,ab}(h_{a})+\frac{X_{b}^{2}P_{b}}{\Omega_{b}m_{b}^{2}\leavevmode\nobreak\
k_{b}}\mathcal{A}_{i,ab}(h_{b})\right]$ (51)
### A.1 The computation of $\Omega_{a}$
According to expression (48), to estimate $\Omega_{a}$ it is necessary to know
the derivative of the density with respect to the smoothing length,
$(\partial\rho_{a}/\partial h_{a})$. The result depends on the choice of the
estimator $X_{a}$ used to compute the density.
* 1.
For a constant $X_{a}$, for example (but not necessarily) the standard
choice $X_{a}=m_{a}$:
$\frac{\partial\rho_{a}}{\partial
h_{a}}=\frac{m_{a}}{X_{a}}\sum_{b}X_{b}\frac{\partial W_{ab}(h_{a})}{\partial
h_{a}}$ (52)
* 2.
The choice $X_{a}=(m_{a}/\rho^{0}_{a})$ requires a bit more algebra:
$\frac{\partial\rho_{a}}{\partial
h_{a}}=\frac{m_{a}}{X_{a}}\left(\frac{\partial k_{a}}{\partial
h_{a}}\right)-\frac{m_{a}k_{a}}{X_{a}^{2}}\left(\frac{\partial X_{a}}{\partial
h_{a}}\right)$ (53)
with:
$\begin{split}\frac{\partial k_{a}}{\partial
h_{a}}&=\sum_{b}\frac{\partial}{\partial
h_{a}}\left(\frac{m_{b}}{\rho_{b}^{0}}\right)W_{ab}(h_{a})+\sum_{b}X_{b}\frac{\partial
W_{ab}(h_{a})}{\partial h_{a}}\\\
&=-\left(\frac{m_{a}}{(\rho^{0}_{a})^{2}}\right)W_{aa}(h_{a})\frac{\partial\rho^{0}_{a}(h_{a})}{\partial{h_{a}}}+\sum_{b}X_{b}\frac{\partial
W_{ab}(h_{a})}{\partial h_{a}}\end{split}$ (54)
and,
$\frac{\partial X_{a}}{\partial h_{a}}=\frac{\partial}{\partial
h_{a}}\left(\frac{m_{a}}{\rho^{0}_{a}}\right)=-\left(\frac{m_{a}}{(\rho^{0}_{a})^{2}}\right)\frac{\partial\rho^{0}_{a}(h_{a})}{\partial{h_{a}}}=-\left(\frac{m_{a}}{(\rho^{0}_{a})^{2}}\right)\sum_{b}m_{b}\frac{\partial
W_{ab}(h_{a})}{\partial h_{a}}$ (55)
Inserting expressions (54) and (55) into (53) and manipulating,
$\frac{\partial\rho_{a}}{\partial
h_{a}}=\left[\frac{\rho_{a}}{\rho^{0}_{a}}-X_{a}W_{aa}(h_{a})\right]\sum_{b}m_{b}\frac{\partial
W_{ab}(h_{a})}{\partial h_{a}}+\frac{m_{a}}{X_{a}}\sum_{b}X_{b}\frac{\partial
W_{ab}(h_{a})}{\partial h_{a}}$ (56)
## References
* Lucy [1977] L. B. Lucy, A numerical approach to the testing of the fission hypothesis, AJ 82 (1977) 1013–1024. doi:10.1086/112164.
* Gingold and Monaghan [1977] R. A. Gingold, J. J. Monaghan, Smoothed particle hydrodynamics - Theory and application to non-spherical stars, MNRAS 181 (1977) 375–389.
* Springel [2010] V. Springel, Smoothed Particle Hydrodynamics in Astrophysics, ARA&A 48 (2010) 391–430. doi:10.1146/annurev-astro-081309-130914. arXiv:1109.2219.
* Monaghan [2012] J. J. Monaghan, Smoothed Particle Hydrodynamics and Its Diverse Applications, Annual Review of Fluid Mechanics 44 (2012) 323–346. doi:10.1146/annurev-fluid-120710-101220.
* Rosswog [2015] S. Rosswog, Boosting the accuracy of SPH techniques: Newtonian and special-relativistic tests, MNRAS 448 (2015) 3628–3664. doi:10.1093/mnras/stv225. arXiv:1405.6034.
* Agertz et al. [2007] O. Agertz, B. Moore, J. Stadel, D. Potter, F. Miniati, J. Read, L. Mayer, A. Gawryszczak, A. Kravtsov, Å. Nordlund, F. Pearce, V. Quilis, D. Rudd, V. Springel, J. Stone, E. Tasker, R. Teyssier, J. Wadsley, R. Walder, Fundamental differences between SPH and grid methods, MNRAS 380 (2007) 963–978. doi:10.1111/j.1365-2966.2007.12183.x. arXiv:astro-ph/0610051.
* Zhu et al. [2015] Q. Zhu, L. Hernquist, Y. Li, Numerical Convergence In Smoothed Particle Hydrodynamics, ApJ 800 (2015) 6. doi:10.1088/0004-637X/800/1/6. arXiv:1410.4222.
* Dilts [1999] G. A. Dilts, Moving-least-squares-particle hydrodynamics - I. Consistency and stability, International Journal for Numerical Methods in Engineering 44 (1999) 1115–1155. doi:10.1002/(SICI)1097-0207(19990320)44:8<1115::AID-NME547>3.0.CO;2-L.
* Bonet [1999] J. Bonet, Variational and momentum preservation aspects of Smooth Particle Hydrodynamic formulations, Computer Methods in Applied Mechanics and Engineering 180 (1999) 97–115. doi:10.1016/S0045-7825(99)00051-1.
* Oger et al. [2007] G. Oger, M. Doring, B. Alessandrini, P. Ferrant, An improved SPH method: Towards higher order convergence, Journal of Computational Physics 225 (2007) 1472–1492. doi:10.1016/j.jcp.2007.01.039.
* Frontiere et al. [2017] N. Frontiere, C. D. Raskin, J. M. Owen, CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme, Journal of Computational Physics 332 (2017) 160–209. doi:10.1016/j.jcp.2016.12.004. arXiv:1605.00725.
* García-Senz et al. [2012] D. García-Senz, R. M. Cabezón, J. A. Escartín, Improving smoothed particle hydrodynamics with an integral approach to calculating gradients, A&A 538 (2012) A9. doi:10.1051/0004-6361/201117939. arXiv:1111.3261.
* Springel and Hernquist [2002] V. Springel, L. Hernquist, Cosmological smoothed particle hydrodynamics simulations: the entropy equation, MNRAS 333 (2002) 649–664. doi:10.1046/j.1365-8711.2002.05445.x. arXiv:astro-ph/0111016.
* Valdarnini [2016] R. Valdarnini, Improved Performances in Subsonic Flows of an SPH Scheme with Gradients Estimated using an Integral Approach, ArXiv e-prints (2016). arXiv:1608.08361.
* Brookshaw [1985] L. Brookshaw, A method of calculating radiative heat diffusion in particle simulations, Proceedings of the Astronomical Society of Australia 6 (1985) 207–210.
* Monaghan [2005] J. J. Monaghan, Smoothed particle hydrodynamics, Reports on Progress in Physics 68 (2005) 1703–1759. doi:10.1088/0034-4885/68/8/R01.
* Liu and Liu [2006] M. Liu, G. Liu, Restoring particle consistency in smoothed particle hydrodynamics, Applied Numerical Mathematics 56 (2006) 19–36. URL: http://www.sciencedirect.com/science/article/pii/S0168927405000565. doi:10.1016/j.apnum.2005.02.012.
* Colagrossi and Landrini [2003] A. Colagrossi, M. Landrini, Numerical simulation of interfacial flows by smoothed particle hydrodynamics, Journal of Computational Physics 191 (2003) 448–475. doi:10.1016/S0021-9991(03)00324-3.
* Gomez-Gesteira et al. [2010] M. Gomez-Gesteira, B. Rogers, R. Dalrymple, A. J. Crespo, State-of-the-art of classical SPH for free-surface flows, Journal of Hydraulic Research 48 (2010) 6–27. doi:10.1080/00221686.2010.9641242.
* Cabezón et al. [2017] R. M. Cabezón, D. García-Senz, J. Figueira, SPHYNX: an accurate density-based SPH method for astrophysical applications, A&A 606 (2017) A78. doi:10.1051/0004-6361/201630208. arXiv:1607.01698.
* Hopkins [2013] P. F. Hopkins, A general class of Lagrangian smoothed particle hydrodynamics methods and implications for fluid mixing problems, MNRAS 428 (2013) 2840–2856. doi:10.1093/mnras/sts210. arXiv:1206.5006.
* Saitoh and Makino [2013] T. R. Saitoh, J. Makino, A Density-independent Formulation of Smoothed Particle Hydrodynamics, ApJ 768 (2013) 44. doi:10.1088/0004-637X/768/1/44. arXiv:1202.4277.
* Fulk and Quinn [1996] D. A. Fulk, D. W. Quinn, An Analysis of 1-D Smoothed Particle Hydrodynamics Kernels, Journal of Computational Physics 126 (1996) 165–180. doi:10.1006/jcph.1996.0128.
* Cabezón et al. [2008] R. M. Cabezón, D. García-Senz, A. Relaño, A one-parameter family of interpolating kernels for smoothed particle hydrodynamics studies, Journal of Computational Physics 227 (2008) 8523–8540. doi:10.1016/j.jcp.2008.06.014. arXiv:0809.2755.
* Knapp [2000] C. E. Knapp, An implicit Smooth Particle Hydrodynamic code, Ph.D. thesis, Los Alamos National Laboratory. https://www.osti.gov/biblio/754046-implicit-smooth-particle-hydrodynamic-code, 2000.
* Escartín [2016] J. Escartín, ISFAA: Implicit SPH for astrophysical applications, Ph.D. thesis, Polytechnical University of Catalonia. http://hdl.handle.net/10803/384002, 2016.
* Cabezón et al. [2012] R. M. Cabezón, D. García-Senz, J. A. Escartín, Testing the concept of integral approach to derivatives within the smoothed particle hydrodynamics technique in astrophysical scenarios, A&A 545 (2012) A112. doi:10.1051/0004-6361/201219821. arXiv:1207.5412.
* Monaghan [1997] J. J. Monaghan, SPH and Riemann Solvers, Journal of Computational Physics 136 (1997) 298–307. doi:10.1006/jcph.1997.5732.
* Price et al. [2018] D. J. Price, J. Wurster, T. S. Tricco, C. Nixon, S. Toupin, A. Pettitt, C. Chan, D. Mentiplay, G. Laibe, S. Glover, C. Dobbs, R. Nealon, D. Liptai, H. Worpel, C. Bonnerot, G. Dipierro, G. Ballabio, E. Ragusa, C. Federrath, R. Iaconi, T. Reichardt, D. Forgan, M. Hutchison, T. Constantino, B. Ayliffe, K. Hirsh, G. Lodato, Phantom: A Smoothed Particle Hydrodynamics and Magnetohydrodynamics Code for Astrophysics, PASA 35 (2018) e031. doi:10.1017/pasa.2018.25. arXiv:1702.03930.
* Read et al. [2010] J. I. Read, T. Hayfield, O. Agertz, Resolving mixing in smoothed particle hydrodynamics, MNRAS 405 (2010) 1513–1530. doi:10.1111/j.1365-2966.2010.16577.x. arXiv:0906.0774.
* Price [2008] D. J. Price, Modelling discontinuities and Kelvin Helmholtz instabilities in SPH, Journal of Computational Physics 227 (2008) 10040–10057. doi:10.1016/j.jcp.2008.08.011. arXiv:0709.2772.
* Tricco [2019] T. S. Tricco, The Kelvin-Helmholtz instability and smoothed particle hydrodynamics, MNRAS 488 (2019) 5210–5224. doi:10.1093/mnras/stz2042. arXiv:1907.03935.
* Ritchie and Thomas [2001] B. W. Ritchie, P. A. Thomas, Multiphase smoothed-particle hydrodynamics, MNRAS 323 (2001) 743–756. doi:10.1046/j.1365-8711.2001.04268.x. arXiv:astro-ph/0005357.
* Monaghan [1992] J. J. Monaghan, Smoothed particle hydrodynamics, ARA&A 30 (1992) 543–574. doi:10.1146/annurev.aa.30.090192.002551.
* Price [2005] D. Price, Smoothed Particle Hydrodynamics, Ph.D. thesis, -, 2005.
* McNally et al. [2012] C. P. McNally, W. Lyra, J.-C. Passy, A Well-posed Kelvin-Helmholtz Instability Test and Comparison, ApJS 201 (2012) 18\. doi:10.1088/0067-0049/201/2/18. arXiv:1111.1764.
* Rosswog [2020] S. Rosswog, A Simple, Entropy-based Dissipation Trigger for SPH, ApJ 898 (2020) 60\. doi:10.3847/1538-4357/ab9a2e. arXiv:1912.01095.
|
# Convolution Properties of Orlicz Spaces on hypergroups
Ali Reza Bagheri Salec, Vishvesh Kumar and Seyyed Mohammad Tabatabaie
Department of Mathematics, University of Qom, Qom, Iran.<EMAIL_ADDRESS>
Department of Mathematics: Analysis, Logic and Discrete Mathematics, Ghent University, Belgium.<EMAIL_ADDRESS>
Department of Mathematics, University of Qom, Qom, Iran.<EMAIL_ADDRESS>
Dedicated to Prof. Kenneth A. Ross on his 85th birthday
###### Abstract.
In this paper, for a locally compact commutative hypergroup $K$ and a pair
$(\Phi_{1},\Phi_{2})$ of Young functions satisfying the sequence condition, we
give a necessary condition, in terms of aperiodic elements of the center of
$K$, for the convolution $f\ast g$ to exist a.e., where $f$ and $g$ are
arbitrary elements of the Orlicz spaces $L^{\Phi_{1}}(K)$ and $L^{\Phi_{2}}(K)$,
respectively. As an application, we present some equivalent conditions for
compactness of a compactly generated locally compact abelian group. Moreover,
we also characterize compact convolution operators from $L^{1}_{w}(K)$ into
$L^{\Phi}_{w}(K)$.
###### Key words and phrases:
locally compact group, locally compact hypergroup, Orlicz space, convolution,
Young function, weight function, compact operator.
###### 2010 Mathematics Subject Classification:
46E30, 43A62, 43A15.
∗Corresponding author
## 1\. Introduction
It is the well-known $L^{p}$-conjecture that if $1<p<\infty$ and $G$ is a
locally compact group, then the Lebesgue space $L^{p}(G)$ is a Banach algebra
under the convolution product if and only if $G$ is compact. This conjecture
was settled by Saeki [29] by quite elementary means, much more elementary than
some of the proofs of earlier partial results [24, 25]. In [1] it is mentioned
that for each $p>2$, if $f\ast g$ exists a.e. for all $f,g\in L^{p}(G)$, then
$G$ is compact, and so automatically $f\ast g\in L^{p}(G)$; see also [23]. In
[34], under some conditions, a version of this fact has been given for
hypergroups. This topic has also been studied on Orlicz spaces. H. Hudzik,
A. Kamińska and J. Musielak [11, Theorem 2] gave some equivalent conditions for
an Orlicz space $L^{\Phi}(G)$ to be a convolution Banach algebra:
###### Theorem 1.1.
If $G$ is a locally compact abelian group and $\Phi$ is a Young function
satisfying $\Delta_{2}$-condition, then the following conditions are
equivalent:
1. (1)
$L^{\Phi}(G)$ is a Banach algebra under convolution;
2. (2)
$L^{\Phi}(G)\subseteq L^{1}(G)$;
3. (3)
$\lim_{x\rightarrow 0^{+}}\frac{\Phi(x)}{x}>0$ or $G$ is compact.
Recently, A. Osançlıol and S. Öztop in [21] studied the weighted Orlicz
algebras on locally compact groups (see also [33]). They proved that, even for
a non-compact group $G$, if $L^{\Phi}_{w}(G)\subseteq L^{1}_{w}(G)$ for a
weight $w$, then $L^{\Phi}_{w}(G)$ is a convolution Banach algebra. In [22],
these results were extended to the hypergroup case (see [17] for unweighted
case). In [34], for a compactly generated abelian group $G,$ it is proved that
if $\Phi$ is a Young function with $\Delta_{2}$-condition and satisfying a
sequence condition, then $L^{\Phi}(G)$ is a convolution Banach algebra if and
only if $f\ast g$ exists a.e. for all $f,g\in L^{\Phi}(G)$.
A main motivation for writing this paper is the above background and the
following result from [23, Corollary 1.4] about Lebesgue spaces on locally
compact groups.
###### Theorem 1.2.
Let $G$ be a locally compact abelian group and $1<p,q<\infty$. Then,
$L^{p}(G)\ast L^{q}(G)\subseteq L^{p}(G)$ if and only if $G$ is compact.
In Section 3, we intend to give a version of this result for Orlicz spaces on
locally compact hypergroups; Orlicz spaces are a huge generalization of the
usual Lebesgue spaces, and also, hypergroups are an extension of locally
compact groups. Indeed, for a pair $(\Phi_{1},\Phi_{2})$ of Young functions
satisfying the sequence condition (3.1) (see Definition 3.1), we show that if
for each $f\in L^{\Phi_{1}}(K)$ and $g\in L^{\Phi_{2}}(K)$, $f\ast g$ exists
almost everywhere, then there is no aperiodic element in $Z(K)$ with respect
to the action $Z(K)\curvearrowright(K,\lambda)$, where $K$ is a locally
compact commutative hypergroup and $Z(K)$ is the center of $K$. As an
application, among other results, it is proved that a compactly generated
abelian group $G$ is compact if and only if for each pair
$(\Phi_{1},\Phi_{2})$ of Young functions satisfying the sequence condition
(3.1) and for each $f\in L^{\Phi_{1}}(G)$ and $g\in L^{\Phi_{2}}(G)$, $f\ast
g$ exists a.e. We note that if we consider the Lebesgue spaces $L^{p_{1}}(G)$
and $L^{p_{2}}(G)$, where $p_{1},p_{2}>2$, the novel conclusion Corollary 3.7
is obtained for Lebesgue spaces as well, because the Young functions
$\Phi_{p_{i}}$, $i=1,2$, defined by $\Phi_{p_{i}}(x):=|x|^{p_{i}}$ satisfy the
sequence condition (3.1).
In Section 4, we fix a function $g\in C_{c}(K)$ and study the
convolution operator $f\mapsto f\ast g$ from $L^{1}_{w}(K)$ into
$L^{\Phi}_{w}(K)$, where $K$ is a locally compact hypergroup and $w$ is a
weight function on $K$. We show that this operator is compact if and only if
the function $x\mapsto\frac{1}{w(x)}\|{\rm L}_{x}g\|_{\Phi,w}$ on $K$ vanishes
at infinity. This conclusion is an Orlicz space version of the main result of
[9]. It is also a generalization of one result in [33] on locally compact
hypergroups.
In the next section, we recall some definitions and notation concerning
locally compact hypergroups and Orlicz spaces, and also state some facts about
aperiodic elements of a group action on a measure space which are used in the
proof of our main result.
## 2\. Preliminaries
### 2.1. Locally Compact Hypergroups
Locally compact hypergroups were introduced in the papers [7, 12, 30] in the
1970’s. The main references for us on this structure are the paper [12] (in
which hypergroups are called _convos_) and the monograph [2]. Let $K$ be a
locally compact Hausdorff space. We denote the space of all bounded (complex-
valued) Radon measures on $K$ by $\mathcal{M}(K)$ and all those in
$\mathcal{M}(K)$ which are non-negative by $\mathcal{M}^{+}(K)$. The support
of each measure $\mu\in\mathcal{M}(K)$ is denoted by ${\rm supp}\mu$, and for
each $x\in K$, $\delta_{x}$ denotes the Dirac measure at $x$. The space $K$ is
called a _locally compact hypergroup_ (or simply a _hypergroup_) if there are
a _convolution_
$*:\mathcal{M}(K)\times\mathcal{M}(K)\rightarrow\mathcal{M}(K)$, an
_involution_ $x\mapsto x^{-}$ on $K$, and an element $e$ (called the
_identity_ element) such that the following conditions hold:
1. (1)
$(\mathcal{M}(K),+,\ast)$ is a complex Banach algebra;
2. (2)
for all nonnegative measures $\mu,\nu\in\mathcal{M}(K)$, $\mu*\nu$ is also a
nonnegative measure in $\mathcal{M}(K)$ and the mapping
$(\mu,\nu)\mapsto\mu*\nu$ from $\mathcal{M}^{+}(K)\times\mathcal{M}^{+}(K)$ to
$\mathcal{M}^{+}(K)$ is continuous, where $\mathcal{M}^{+}(K)$ is equipped
with the cone topology;
3. (3)
for all $x,y\in K$, $\delta_{x}*\delta_{y}$ is a probability measure with
compact support;
4. (4)
the mapping $(x,y)\mapsto\text{supp}(\delta_{x}*\delta_{y})$ from $K\times K$
into the space $\mathfrak{C}(K)$ of all non-empty compact subsets of $K$ is
continuous, where $\mathfrak{C}(K)$ is equipped with Michael topology whose
subbasis is the family of all
$\mathfrak{C}_{U,V}:=\\{A\in\mathfrak{C}(K):A\cap U\neq\varnothing\text{ and
}A\subseteq V\\}$, where $U$ and $V$ are open subsets of $K$;
5. (5)
for each $x\in K$, $\delta_{e}*\delta_{x}=\delta_{x}=\delta_{x}*\delta_{e}$;
6. (6)
the mapping $x\mapsto x^{-}$ on $K$ is a homeomorphism, and for each $x,y\in
K$ we have $(x^{-})^{-}=x$ and
$(\delta_{x}*\delta_{y})^{-}=\delta_{y^{-}}*\delta_{x^{-}}$, where for each
$\mu\in\mathcal{M}(K)$ and Borel set $E\subseteq K$,
$\mu^{-}(E):=\mu\left(\\{x^{-}:\,x\in E\\}\right)$. Also, $e\in{\rm
supp}(\delta_{x}*\delta_{y})$ if and only if $x=y^{-}$.
A hypergroup $K$ is called _commutative_ if for each $x,y\in K$,
$\delta_{x}\ast\delta_{y}=\delta_{y}\ast\delta_{x}$. Any locally compact group
is a hypergroup. See the book and papers [2, 8, 35, 14, 15, 16] for more
examples including double coset spaces, polynomial hypergroups and orbit
hypergroups and their applications.
Throughout, we assume that $(K,\ast,\cdot^{-},e)$ is a locally compact
hypergroup with a left-invariant measure, i.e. a non-negative Radon measure
$\lambda$ on $K$ such that for each $x\in K$, $\delta_{x}\ast\lambda=\lambda$.
It is known that any commutative hypergroup, compact hypergroup, discrete
hypergroup and amenable hypergroup admits such a measure [2].
For each element $x\in K$ and Borel subsets $E,F$ of $K$ we denote
$x\ast F:=\bigcup_{y\in F}{\rm supp}(\delta_{x}\ast\delta_{y}),\quad E\ast
F:=\bigcup_{t\in E}\left(t\ast F\right).$
The _convolution_ of any two measurable functions $f,g:K\rightarrow\mathbb{C}$
is defined by
$(f\ast g)(x):=\int_{K}f(y)g(y^{-}\ast x)\,d\lambda(y),\quad(x\in K),$
when this integral exists, where
$g(y^{-}\ast x):=\int_{K}g(t)\,d(\delta_{y^{-}}\ast\delta_{x})(t).$
If $\mu\in\mathcal{M}(K)$ and $f$ is a Borel measurable function on $K$, the
convolution $\mu*f$ is defined by:
$(\mu*f)(x)=\int_{K}f(y^{-}*x)\ d\mu(y),\quad(x\in K).$
In particular, $(\delta_{z^{-}}*f)(x)=f_{z}(x)=({\rm L}_{z}f)(x)$ for $z\in
K.$
### 2.2. Group Action on a Measure space
###### Definition 2.1.
Let $G$ be a locally compact group, $X$ be a locally compact Hausdorff space,
and $\mu$ be a non-negative Radon measure on $X$. We say that a continuous
function
$G\times X\longrightarrow X,\quad(s,x)\mapsto sx,\quad(s\in G,x\in X)$
is an _action_ of $G$ on the measure space $(X,\mu)$ (and write
$G\curvearrowright(X,\mu)$) if
1. (i)
for each $x\in X$, $ex=x$, where $e$ is the identity element of $G$;
2. (ii)
for each $s,t\in G$ and $x\in X$, $s(tx)=(st)x$;
3. (iii)
the measure $\mu$ is $G$-invariant, i.e., for each $s\in G$ and any Borel
subset $E$ of $X$, $sE:=\\{sx:\,x\in E\\}$ is also a Borel subset of $X$ and
$\mu(sE)=\mu(E)$.
Assume that $H$ is a closed subgroup of a locally compact group $G$ with
modular functions $\Delta_{H}:H\rightarrow(0,\infty)$ and
$\Delta_{G}:G\rightarrow(0,\infty)$. Denote the restriction of $\Delta_{G}$ on
$H$ by $\Delta_{G}|_{H}.$ If $\Delta_{H}=\Delta_{G}|_{H}$ then there exists a
$G$-invariant Radon measure $\mu$ on $G/H$, and $G$ naturally acts on the
quotient space $(G/H,\mu)$. For a detailed study of this topic we refer to the
book [13, Chapter IV].
###### Definition 2.2.
An element $a$ of a locally compact group $G$ is called _compact_ if the
closed subgroup $G(a)$ generated by $a$ is compact.
For some details about compact elements of $G$ see [10]. In the literature,
non-compact elements of a group $G$ are also called _aperiodic elements_ (for
example see [3, 4]). Trivially, in any discrete group, an element is aperiodic
if and only if it has infinite order (i.e. it is not a torsion element).
Recently, these elements have been used to study linear dynamical properties
of weighted translation operators on locally compact groups; see [5, 4]. It is
known that any element, except the identity, of the non-discrete additive
group $\mathbb{R}^{n}$, the Heisenberg group, and the affine group is
aperiodic. By [3, Lemma 2.1], if $G$ is a second countable group, an element
$a\in G$ is aperiodic if and only if for each compact subset $E$ of $G$, there
is an integer $N>0$ such that for each $n\geq N$, $E\cap a^{n}E=\varnothing$.
This equivalence motivates the following definition, which has recently been
used to present a necessary and sufficient condition for a weighted
translation, generated by a group action, to be disjoint topologically
transitive.
###### Definition 2.3.
An element $a\in G$ is called _aperiodic_ with respect to a given action
$G\curvearrowright(X,\mu)$ if for each compact subset $E\subseteq X$, there
exists an integer $N>0$ such that for each $n\geq N$, $E\cap
a^{n}E=\varnothing$.
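For intuition, aperiodicity is easy to visualize in the discrete group $(\mathbb{Z},+)$ acting on itself by translation: $a=1$ is aperiodic, since a compact (here, finite) set $E$ and its translate $n+E$ become disjoint once $n$ exceeds the diameter of $E$. The following sketch (with an arbitrary example set $E$, not from the paper) illustrates this:

```python
# Aperiodicity of a = 1 in (Z, +) acting on itself by translation:
# a finite ("compact") set E and its translate n*a + E are disjoint
# for all sufficiently large n. Illustrative sketch only.

def eventually_disjoint(a, E, N, horizon=50):
    """Check E and (n*a + E) are disjoint for all N <= n <= N + horizon."""
    E = set(E)
    return all(E.isdisjoint({n * a + x for x in E})
               for n in range(N, N + horizon + 1))

E = set(range(-10, 11))        # an arbitrary finite subset of Z
N = max(E) - min(E) + 1        # beyond the diameter of E
print(eventually_disjoint(1, E, N))        # True: a = 1 is aperiodic
print(E.isdisjoint({5 + x for x in E}))    # False: small shifts still meet E
```

By contrast, a torsion element of a discrete group returns $E$ to itself infinitely often, so it can never satisfy Definition 2.3.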
Thanks to [3, Lemma 2.1], the aperiodic elements of a second countable group
$G$ are the same as the aperiodic elements of $G$ with respect to the natural
action of $G$ on itself. In this paper, we will apply this concept to the
action of the center of a hypergroup on the whole hypergroup. The
_center_ $Z(K)$ of a hypergroup $K$ is defined as the set of all $x\in K$ such
that for each $y\in K$, $\text{supp}(\delta_{x}\ast\delta_{y})$ is a
singleton. In other words, for each $x\in Z(K)$ and $y\in K$, there is an
element $\alpha(x,y)\in K$ such that
$\delta_{x}\ast\delta_{y}=\delta_{\alpha(x,y)}$. The center $Z(K)$ is the
maximal subgroup of $K$, and naturally acts on $(K,\lambda)$ by the mapping
$(x,y)\mapsto\alpha(x,y)$ [12, Section 10.4]. In the sequel, we denote $x\ast
y:=\alpha(x,y)$ for all $x\in Z(K)$ and $y\in K$. Also, for each $x\in Z(K)$
and $n\in\mathbb{N}$ we put $x^{n}:=x\ast\ldots\ast x$ ($n$ times), and
$x^{-n}:=(x^{-})^{n}$. For more details and examples about center of
hypergroups see [28]. We use this action in the main result of the paper. One
can easily see that for each element $x\in Z(K)$ and Borel subset $E$ of $K$,
$\lambda(x\ast E)=\lambda(E)$ (2.1)
while this equality does not hold for arbitrary elements of $K$; see [12].
### 2.3. Orlicz Spaces
In this subsection, we recall some basic definitions and notation about Orlicz
spaces; see the monographs [26, 27] on this topic. A convex even mapping
$\Phi:\mathbb{R}\rightarrow[0,\infty)$ is called a _Young function_ if
$\Phi(0)=0$ and $\lim_{x\rightarrow\infty}\Phi(x)=\infty$. The _complementary_
of a Young function $\Phi$ is defined by
$\Psi(x):=\sup\\{y|x|-\Phi(y):\,y\geq 0\\},\quad(x\in\mathbb{R}).$
In the sequel, $\Psi$ is always the complementary of a given Young function
$\Phi$. The set of all Borel measurable functions $f:K\rightarrow\mathbb{C}$
such that for some $\alpha>0$,
$\int_{K}\Phi(\alpha|f(x)|)\,d\lambda(x)<\infty,$
is denoted by $L^{\Phi}(K)$. We assume that two elements of $L^{\Phi}(K)$ are
the same if they are equal $\lambda$-a.e. For each $f\in L^{\Phi}(K)$ we
define
$\|f\|_{\Phi}:=\sup\left\\{\int_{K}|fv|\,d\lambda:\,\int_{K}\Psi(|v(x)|)\,d\lambda(x)\leq
1\right\\}.$
The pair $(L^{\Phi}(K),\|\cdot\|_{\Phi})$ is called an _Orlicz space_. Since
$\lambda$ is a regular measure on $K$, by [26, Chapter III, Proposition 11],
$(L^{\Phi}(K),\|\cdot\|_{\Phi})$ is a Banach space. A Young function $\Phi$ is
said to be _$\Delta_{2}$ -regular_ and we write $\Phi\in\Delta_{2}$, if there
are constants $k>0$ and $t_{0}\geq 0$ such that $\Phi(2t)\leq k\Phi(t)$ for
each $t\geq t_{0}$. At times, we also say that $\Phi$ satisfies
$\Delta_{2}$-condition if $\Phi\in\Delta_{2}.$ If $\Phi$ is
$\Delta_{2}$-regular, then the space $C_{c}(K)$ is dense in $L^{\Phi}(K)$.
Orlicz spaces are a natural and widely applicable generalization of Lebesgue
spaces. In fact, for each $1<p<\infty$, the function $\Phi_{p}$ defined by
$\Phi_{p}(x):=|x|^{p}$ for all $x\in\mathbb{R}$ is a Young function, and the
Orlicz space $L^{\Phi_{p}}(K)$ is the same as the usual Lebesgue space
$L^{p}(K,\lambda)$. Orlicz spaces have been studied for several decades; see
[17, 18, 19, 31, 20] for some interesting recent developments related to Orlicz
spaces on locally compact hypergroups.
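As a quick numerical sanity check (a sketch, not part of the paper), the complementary function $\Psi(x)=\sup\\{y|x|-\Phi(y):\,y\geq 0\\}$ can be approximated by maximizing over a grid. For $\Phi(y)=y^{2}$ the supremum is attained at $y=|x|/2$, giving the closed form $\Psi(x)=x^{2}/4$:

```python
# Approximating the complementary Young function
#   Psi(x) = sup_{y >= 0} ( y|x| - Phi(y) )
# on a grid, and checking the closed form Psi(x) = x^2/4 when Phi(y) = y^2.
# Grid bounds and step size are ad hoc choices for this sketch.

def complementary(phi, x, y_max=10.0, steps=20_000):
    h = y_max / steps
    return max(abs(x) * (k * h) - phi(k * h) for k in range(steps + 1))

phi = lambda y: y * y                  # the Young function Phi_2
for x in (0.5, 1.0, 2.0, 3.0):
    assert abs(complementary(phi, x) - x * x / 4) < 1e-3
print("Psi(x) = x^2/4 confirmed on the grid")
```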
## 3\. Convolution of Two Orlicz Spaces
In this section, we study the convolution properties of two different Orlicz
spaces on locally compact hypergroups in terms of aperiodic elements of their
centers. We will also derive interesting results for locally compact groups.
Before stating the main result of the section, we need to introduce a class of
Young functions.
###### Definition 3.1.
Let $\Phi_{1}$ and $\Phi_{2}$ be Young functions. We say that the pair
$(\Phi_{1},\Phi_{2})$ satisfies the _sequence condition_ if there are two
sequences $(\alpha_{n})$ and $(\beta_{n})$ of nonnegative numbers such that
$\sum_{n=1}^{\infty}\Phi_{1}(\alpha_{n})<\infty,\quad\sum_{n=1}^{\infty}\Phi_{2}(\beta_{n})<\infty\quad\text{and}\quad\sum_{n=1}^{\infty}\alpha_{n}\beta_{n}=\infty.$
(3.1)
###### Example 3.2.
For each $p\geq 1$ and $\gamma\geq 0$ define the function $\Phi_{p,\gamma}$ by
$\Phi_{p,\gamma}(x):=|x|^{p}\left(\ln(1+|x|)\right)^{\gamma}$ for all
$x\in\mathbb{R}$. We denote the set of all $(p,\gamma)$ such that $p+\gamma>2$
and $\Phi_{p,\gamma}$ is a Young function by $\Omega$. Then, for each
$(p_{1},\gamma_{1}),(p_{2},\gamma_{2})\in\Omega$, setting
$\alpha_{n}=\beta_{n}:=\frac{1}{\sqrt{n}}$, one can see that
$(\Phi_{p_{1},\gamma_{1}},\Phi_{p_{2},\gamma_{2}})$ satisfies the sequence
condition (3.1).
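A numerical sketch (not from the paper) of the simplest instance of this example: for $\Phi_{p}(x)=|x|^{p}$ with $p>2$ and $\alpha_{n}=\beta_{n}=1/\sqrt{n}$, the series $\sum\Phi_{p}(\alpha_{n})=\sum n^{-p/2}$ converges since $p/2>1$, while $\sum\alpha_{n}\beta_{n}=\sum 1/n$ is the divergent harmonic series:

```python
# Partial sums illustrating the sequence condition (3.1) for
# Phi_p(x) = |x|^p, p > 2, with alpha_n = beta_n = 1/sqrt(n).

p = 3.0                                       # any p > 2 works here
cuts = (10**3, 10**5)

sum_phi = [sum(n ** (-p / 2) for n in range(1, N + 1)) for N in cuts]
sum_ab = [sum(1.0 / n for n in range(1, N + 1)) for N in cuts]

print(sum_phi)    # nearly constant: the series converges (to zeta(p/2))
print(sum_ab)     # keeps growing like log(N): the series diverges
```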
Now, we can present one of the main results of this paper.
###### Theorem 3.3.
Let $K$ be a locally compact commutative hypergroup (so $K$ admits an invariant
measure $\lambda$). Suppose that $\Phi_{1}$ and $\Phi_{2}$ are Young functions
such that the pair $(\Phi_{1},\Phi_{2})$ satisfies the sequence condition
(3.1). If for each $f\in L^{\Phi_{1}}(K)$ and $g\in L^{\Phi_{2}}(K)$, $(f\ast
g)(x)$ exists for almost every $x\in K$, then the set of aperiodic elements of
$Z(K)$ with respect to the action $Z(K)\curvearrowright(K,\lambda)$ is empty.
###### Proof.
Suppose that the pair $(\Phi_{1},\Phi_{2})$ of Young functions satisfies the
sequence condition, and $f\ast g$ exists a.e. for all $f\in L^{\Phi_{1}}(K)$
and $g\in L^{\Phi_{2}}(K)$. Suppose, for a contradiction, that there exists an
aperiodic element $a$ in $Z(K)$ with respect to the action
$Z(K)\curvearrowright(K,\lambda)$. Fix a compact symmetric neighborhood $U$ of
$e$ in $K$. Then, by Definition 2.3, there exists an integer $N>0$ such that
for each $n\geq N$,
$U\cap\left(a^{n}\ast U\right)=\varnothing.$ (3.2)
Note that (3.2) also implies that $U\cap\left(a^{-n}\ast
U\right)=\varnothing.$
Thanks to [12, Lemma 3.2D], there is a compact symmetric neighborhood $V$ of
$e$ in $K$ such that $V\ast V\subseteq U$. So, by (3.2), for each distinct
$m,n\geq N$ we have
$\left(a^{-mN}\ast V\right)\cap\left(a^{-nN}\ast
V\right)=\varnothing\,\,\text{and}\,\,\left(a^{mN}\ast V\ast
V\right)\cap\left(a^{nN}\ast V\ast V\right)=\varnothing.$ (3.3)
Indeed, to prove this, let us consider an element $t$ which is in $a^{-mN}\ast
V$ and $a^{-nN}\ast V$. Then, there exist $u,v\in V$ such that
$t\in\\{a^{-mN}\\}\ast\\{u\\}$ and $t\in\\{a^{-nN}\\}\ast\\{v\\}$. Now, as
$a\in Z(K),$ we have
$u\in\\{a^{mN}\\}\ast\\{t\\}\subseteq\\{a^{mN}\\}\ast\\{a^{-nN}\\}\ast\\{v\\}\subseteq
a^{(m-n)N}\ast U,$
contradicting (3.2) as $u\in U$. Therefore, $\left(a^{-mN}\ast
V\right)\cap\left(a^{-nN}\ast V\right)=\varnothing$. Similarly, one can see
that $\left(a^{mN}\ast V\ast V\right)\cap\left(a^{nN}\ast V\ast
V\right)=\varnothing$.
Since $(\Phi_{1},\Phi_{2})$ satisfies the sequence condition, there are two
sequences $(\alpha_{n})$ and $(\beta_{n})$ of nonnegative numbers such that
the inequalities in (3.1) hold. So, there is an integer $N^{\prime}>0$ such
that
$\sum_{n=N^{\prime}}^{\infty}\Phi_{1}(\alpha_{n})<\frac{1}{\lambda(V)}\quad\text{and}\quad\sum_{n=N^{\prime}}^{\infty}\Phi_{2}(\beta_{n})<\frac{1}{\lambda(V\ast
V)}.$
Define
$f:=\sum_{n=N^{\prime}}^{\infty}\alpha_{n}\chi_{a^{-nN}\ast V},$
and
$g:=\sum_{n=N^{\prime}}^{\infty}\beta_{n}\chi_{a^{nN}\ast V\ast V},$
where $\chi_{E}$ denotes the characteristic function of $E\subseteq K$. Hence,
because of (3.3) and (2.1) and applying the Monotone Convergence Theorem we
have
$\displaystyle\int_{K}\Phi_{1}(f(x))\,d\lambda(x)$
$\displaystyle=\int_{\bigcup_{n=N^{\prime}}^{\infty}a^{-nN}\ast
V}\Phi_{1}(f(x))\,d\lambda(x)$
$\displaystyle=\sum_{n=N^{\prime}}^{\infty}\int_{a^{-nN}\ast
V}\Phi_{1}(f(x))\,d\lambda(x)$
$\displaystyle=\sum_{n=N^{\prime}}^{\infty}\int_{a^{-nN}\ast
V}\Phi_{1}(\alpha_{n})\,d\lambda(x)$
$\displaystyle=\lambda(V)\sum_{n=N^{\prime}}^{\infty}\Phi_{1}(\alpha_{n})<1,$
where we have used $\Phi_{1}(0)=0$ in the first equality. In particular, $f\in
L^{\Phi_{1}}(K)$. Similarly,
$\displaystyle\int_{K}\Phi_{2}(g(x))\,d\lambda(x)$
$\displaystyle=\lambda(V\ast
V)\sum_{n=N^{\prime}}^{\infty}\Phi_{2}(\beta_{n})<1,$
and this implies that $g\in L^{\Phi_{2}}(K)$. On the other hand, for each
$x\in V,$ using the fact that $\lambda(V)>0$ [2, Theorem 1.3.12], we have
$\displaystyle(f\ast g)(x)$ $\displaystyle=\int_{K}f(y)g(y^{-}\ast
x)\,d\lambda(y)$
$\displaystyle=\sum_{n=N^{\prime}}^{\infty}\alpha_{n}\int_{a^{-nN}\ast
V}g(y^{-}\ast x)\,d\lambda(y)$
$\displaystyle=\sum_{n=N^{\prime}}^{\infty}\alpha_{n}\int_{a^{-nN}\ast
V}\beta_{n}\,d\lambda(y)$
$\displaystyle=\lambda(V)\sum_{n=N^{\prime}}^{\infty}\alpha_{n}\beta_{n}=\infty,$
contradicting the hypothesis that $f\ast g$ exists a.e. ∎
If $K$ is a locally compact group, then the action of $Z(K)$ on $K$ is the same
as the natural action of $K$ on itself, because in this case we have $Z(K)=K$.
So, the following result holds.
###### Corollary 3.4.
If a locally compact abelian group $G$ has an aperiodic element, then for each
pair $(\Phi_{1},\Phi_{2})$ of Young functions satisfying the sequence
condition (3.1), there are $f\in L^{\Phi_{1}}(G)$ and $g\in L^{\Phi_{2}}(G)$
such that
$\lambda\left(\\{x\in G:(f\ast g)(x)\text{ does not exist}\\}\right)>0.$
###### Corollary 3.5.
Let $G$ be a compactly generated locally compact abelian group. Then, the
following are equivalent:
1. (1)
$G$ is compact.
2. (2)
There is a pair $(\Phi_{1},\Phi_{2})$ of Young functions satisfying the
sequence condition such that for each $f\in L^{\Phi_{1}}(G)$ and $g\in
L^{\Phi_{2}}(G)$, $f\ast g$ exists a.e.
3. (3)
For each pair $(\Phi_{1},\Phi_{2})$ of Young functions satisfying the sequence
condition and for each $f\in L^{\Phi_{1}}(G)$ and $g\in L^{\Phi_{2}}(G)$,
$f\ast g$ exists a.e.
###### Proof.
It is enough to prove $(2)\Rightarrow(1)$. Let $\Phi_{1}$ and $\Phi_{2}$ be
Young functions such that $(\Phi_{1},\Phi_{2})$ satisfies the sequence
condition. Since $G$ is a compactly generated abelian group, thanks to [10,
9.26(b)], the set of compact elements of $G$ is a compact subgroup of $G$. So,
if $G$ is not compact, it has an aperiodic element, and this contradicts
Theorem 3.3. ∎
###### Example 3.6.
The additive discrete group $\mathbb{Z}$ is a non-compact finitely generated
abelian group. So, by Corollary 3.5, for each pair $(\Phi_{1},\Phi_{2})$ of
Young functions satisfying the sequence condition (3.1), there are $f\in
l^{\Phi_{1}}(\mathbb{Z})$ and $g\in l^{\Phi_{2}}(\mathbb{Z})$ such that
$(f\ast g)(n)=\infty$ for infinitely many $n$ in $\mathbb{Z}$.
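The phenomenon in Example 3.6 can be made concrete for the Lebesgue-type Young functions (a numerical sketch, not the construction from the proof of Theorem 3.3): take $f(n)=1/\sqrt{n}$ for $n\geq 1$ and $g(n)=1/\sqrt{-n}$ for $n\leq-1$, both zero elsewhere. Both sequences lie in $l^{p}(\mathbb{Z})$ for every $p>2$, yet $(f\ast g)(0)=\sum_{k\geq 1}f(k)g(-k)=\sum_{k\geq 1}1/k=\infty$:

```python
# Two sequences in l^p(Z) for all p > 2 whose convolution diverges at 0.
import math

def f(n): return 1 / math.sqrt(n) if n >= 1 else 0.0
def g(n): return 1 / math.sqrt(-n) if n <= -1 else 0.0

# The l^3 norm of f (p = 3 > 2) is finite: sum n^(-3/2) converges.
norm3_f = sum(f(n) ** 3 for n in range(1, 10**5)) ** (1 / 3)

# Partial sums of (f * g)(0) = sum_k f(k) g(-k) = sum_k 1/k grow without bound.
partial = [sum(f(k) * g(-k) for k in range(1, N)) for N in (10**2, 10**4, 10**6)]
print(norm3_f)     # bounded
print(partial)     # growing like log(N)
```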
Compare the following conclusion with [1, Theorem 1.1] and with Theorem 4.4 of
T.S. Quek and L.Y.H. Yap [23].
###### Corollary 3.7.
Let $G$ be a compactly generated locally compact abelian group and
$2<p,q<\infty$. Then, $G$ is compact if and only if $f\ast g$ exists a.e. for
all $f\in L^{p}(G)$ and $g\in L^{q}(G)$.
## 4\. Compact Convolution Operators
In the sequel we assume that $K$ is a locally compact hypergroup, and $w$ is a
weight on $K,$ that is, a positive continuous function on $K$ such that for
each $x,y\in K$, $w(x\ast y)\leq w(x)\,w(y)$. Here, $\mathcal{M}_{w}(K)$
denotes the set of all measures $\mu\in\mathcal{M}(K)$ with
$w\mu\in\mathcal{M}(K)$. For each $\mu\in\mathcal{M}_{w}(K)$ we set
$\|\mu\|_{w}:=\|w\mu\|$. In a similar way, we can also define $L^{1}_{w}(K)$
and $L^{\Phi}_{w}(K).$
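For example (a standard instance, not taken from the paper), the polynomial weight $w(n)=(1+|n|)^{s}$ on the group $(\mathbb{Z},+)$, where $x\ast y$ reduces to $x+y$, is submultiplicative because $1+|x+y|\leq(1+|x|)(1+|y|)$; a quick brute-force check:

```python
# Submultiplicativity w(x + y) <= w(x) w(y) of the polynomial weight
# w(n) = (1 + |n|)^s on (Z, +). The exponent s = 2 is an arbitrary choice.

s = 2
w = lambda n: (1 + abs(n)) ** s

ok = all(w(x + y) <= w(x) * w(y)
         for x in range(-50, 51) for y in range(-50, 51))
print(ok)   # True: w is a weight on Z
```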
The goal of this section is to give an equivalent condition for a
convolution operator from $L^{1}_{w}(K)$ into the weighted Orlicz space
$L^{\Phi}_{w}(K)$ to be a compact operator. For this, we need the next
theorem.
###### Theorem 4.1.
Let $K$ be a locally compact hypergroup and $g\in C_{c}(K)$. Suppose that the
bounded linear operators $T_{g}:L^{1}_{w}(K)\rightarrow L^{\Phi}_{w}(K)$ and
$\tilde{T}_{g}:\mathcal{M}_{w}(K)\rightarrow L^{\Phi}_{w}(K)$ are defined by
$T_{g}(f):=f\ast g,\quad(f\in L^{1}_{w}(K))$
and
$\tilde{T}_{g}(\mu):=\mu\ast g,\quad(\mu\in\mathcal{M}_{w}(K)).$ (4.1)
Then, $T_{g}$ is compact if and only if $\tilde{T}_{g}$ is compact.
###### Proof.
First suppose that $T_{g}$ is a compact operator. By [22, Theorem 4.1], there
is a bounded left approximate identity $\\{e_{\alpha}\\}_{\alpha\in I}$ in
$L^{1}_{w}(K)$ such that for each $h\in C_{c}(K)$, $e_{\alpha}\ast
h\rightarrow h$ in $L^{\Phi}_{w}(K)$. For each $\mu\in\mathcal{M}_{w}(K)$ we
have
$\left\|\tilde{T}_{g}(\mu)-T_{g}(\mu\ast
e_{\alpha})\right\|_{\Phi,w}=\|\mu\ast g-\mu\ast(e_{\alpha}\ast
g)\|_{\Phi,w}\leq\|\mu\|_{w}\,\|g-(e_{\alpha}\ast g)\|_{\Phi,w}.$
Then, we have
$\left\\{\tilde{T}_{g}(\mu):\,\|\mu\|_{w}\leq
1\right\\}\subseteq\overline{\left\\{T_{g}(\mu\ast e_{\alpha}):\,\alpha\in
I,\mu\in\mathcal{M}_{w}(K),\|\mu\|_{w}\leq 1\right\\}}^{\|\cdot\|_{\Phi,w}}.$
(4.2)
Now, from boundedness of the set
$\\{\mu\ast e_{\alpha}:\,\alpha\in I,\mu\in\mathcal{M}_{w}(K),\|\mu\|_{w}\leq
1\\}$
in $L^{1}_{w}(K)$, one can see that
$\left\\{\tilde{T}_{g}(\mu):\,\|\mu\|_{w}\leq 1\right\\}$ is compact, and so
the proof of this direction is complete. The proof of the converse is easy. ∎
The following result is an Orlicz-space version of the main result of [9,
Theorem 2].
###### Theorem 4.2.
Let $K$ be a locally compact hypergroup. Let $(\Phi,\Psi)$ be a Young pair
with $\Psi\in\Delta_{2}$, and $g\in C_{c}(K)$. Define the operator
$T_{g}:L^{1}_{w}(K)\rightarrow L^{\Phi}_{w}(K)$ by
$T_{g}(f):=f\ast g,\quad(f\in L^{1}_{w}(K)).$
Then, $T_{g}$ is compact if and only if the function $F_{g}$ defined by
$F_{g}:K\rightarrow\mathbb{R},\quad F_{g}(x):=\frac{1}{w(x)}\|{\rm
L}_{x}g\|_{\Phi,w}$ (4.3)
for all $x\in K$, vanishes at infinity.
###### Proof.
Suppose first that $T_{g}$ is a compact operator but that $F_{g}$ does not
vanish at infinity. Then, there is a number $\varepsilon>0$ such that for each
compact set $F\subseteq K$, there exists an element $x_{F}\in K\setminus F$
such that
$\left\|\tilde{T}_{g}\left(\frac{1}{w(x_{F})}\delta_{x_{F}}\right)\right\|_{\Phi,w}=\frac{1}{w(x_{F})}\left\|L_{x_{F}}g\right\|_{\Phi,w}>\varepsilon,$
(4.4)
where $\tilde{T}_{g}$ is the operator defined by (4.1). By Theorem 4.1, the
operator $\tilde{T}_{g}$ is also compact. Then, by boundedness of the set
$\left\\{\frac{1}{w(x_{F})}\delta_{x_{F}}:\,F\subseteq K\text{ is
compact}\right\\}$
in $\mathcal{M}_{w}(K)$, there exists a subnet $\\{x_{F_{i}}\\}$ of
$\\{x_{F}\\}$ and a function $h\in L^{\Phi}_{w}(K)$ such that
$\lim_{i}\tilde{T}_{g}\left(\frac{1}{w(x_{F_{i}})}\delta_{x_{F_{i}}}\right)=h$
(4.5)
in $L^{\Phi}_{w}(K)$. By (4.4), we have $\|h\|_{\Phi,w}\geq\varepsilon$. So,
since
$\|h\|_{\Phi,w}=\sup\left\\{|\langle h,f\rangle|:\,f\in
L^{\Psi}_{w^{-1}}(K),\|f\|_{\Psi,w^{-1}}=1\right\\},$
there is a function $\eta\in L^{\Psi}_{w^{-1}}(K)$ with
$\|\eta\|_{\Psi,w^{-1}}=1$ such that $|\langle
h,\eta\rangle|>\frac{\varepsilon}{2}$.
Since $C_{c}(K)$ is dense in $L^{\Psi}_{w^{-1}}(K)$ (note that
$\Psi\in\Delta_{2}$), there is a function $\psi\in C_{c}(K)$ such that
$\|\psi\|_{\Psi,w^{-1}}<\frac{3}{2}$ and
$|\langle h,\psi\rangle|>\frac{\varepsilon}{2}.$
So, thanks to (4.5), there exists an index $i_{0}$ such that for each index
$i$, if $F_{i_{0}}\subseteq F_{i}$, then
$\left|\left\langle\tilde{T}_{g}\left(\frac{1}{w(x_{F_{i}})}\delta_{x_{F_{i}}}\right),\psi\right\rangle\right|>\frac{\varepsilon}{2}.$
(4.6)
Put $A_{0}:={\rm supp}(\psi)$ and $A_{1}:={\rm supp}(g)$. For some index $i$
we have
$F_{i_{0}}\cup(A_{0}\ast A_{1}^{-})\subseteq F_{i},$
so that $x_{F_{i}}\notin A_{0}\ast A_{1}^{-}$, which forces $\psi$ to vanish on
$x_{F_{i}}\ast A_{1}$; hence
$\left\langle\tilde{T}_{g}\left(\frac{1}{w(x_{F_{i}})}\delta_{x_{F_{i}}}\right),\psi\right\rangle=\frac{1}{w(x_{F_{i}})}\,\int_{A_{1}}g(t)\psi(x_{F_{i}}t)\,dt=0,$
contradicting (4.6).
Conversely, let us assume that $0\neq g\in C_{c}(K)$ and $F_{g}\in C_{0}(K).$
Then the mappings
$Q_{1}:L^{\Psi}(K)\rightarrow L^{\Psi}_{w^{-1}}(K),\,\,\,Q_{1}(f)=fw,\,\,\,(f\in
L^{\Psi}(K))$
and
$Q_{2}:C_{0}^{w}(K)\rightarrow C_{0}(K),\,\,\,Q_{2}(f)=\frac{f}{w},\,\,\,(f\in
C_{0}^{w}(K))$
are isometric isomorphisms. Also, note that the operator $\tilde{T}_{g}$ is
the adjoint of the operator
$Q_{3}:L^{\Psi}_{w^{-1}}(K)\rightarrow C_{0}^{w}(K),\,\,\,Q_{3}(f)=\langle
g,L_{x^{-}}f\rangle,\,\,\,(f\in L^{\Psi}_{w^{-1}}(K)).$
Now, as an application of Schauder’s Theorem [6, Chapter IV] and having
Theorem 4.1 in mind, it is enough to show that the operator
$Q_{g}:L^{\Psi}(K)\rightarrow C_{0}(K),\,\,\,Q_{g}=Q_{2}Q_{3}Q_{1}$
is compact.
To show this, let $\\{f_{n}\\}$ be a bounded sequence in $L^{\Psi}(K).$ For each
$n\in\mathbb{N},$ let
$G_{n}:=\overline{\Big{\\{}x\in K:|F_{g}(x)|\geq\frac{1}{n}\Big{\\}}}.$
Then, for each $n,$ we have $G_{n}\subseteq G_{n+1}$, and since $F_{g}$ vanishes
at infinity, each $G_{n}$ is a compact subset of $K$. Also, for each
$n\in\mathbb{N}$ and $x\in K\backslash G_{n}$,
$\displaystyle|Q_{g}(f_{n})(x)|$ $\displaystyle=\frac{1}{w(x)}|\langle
g,L_{x^{-}}(wf_{n})\rangle|=\frac{1}{w(x)}|\langle wL_{x}g,f_{n}\rangle|$
$\displaystyle\leq\frac{2}{w(x)}\|wL_{x}g\|_{\Phi}\|f_{n}\|_{\Psi}=2|F_{g}(x)|\,\|f_{n}\|_{\Psi}\leq\frac{2}{n}\sup_{m}\|f_{m}\|_{\Psi},$
where the first inequality is Hölder's inequality for Orlicz spaces.
Now, similar to the proof of the second part of [33, Theorem 3] (see also [9]),
by the diagonal method there is a subsequence of $\\{Q_{g}(f_{n})\\}$ which
converges in $C_{0}(K)$, and this completes the proof.
∎
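To see what the criterion of Theorem 4.2 says in the simplest setting (an illustrative sketch, not from the paper): on the discrete group $\mathbb{Z}$ with trivial weight $w\equiv 1$ and $\Phi(x)=x^{2}$ (so $\|\cdot\|_{\Phi,w}$ is the $l^{2}$ norm up to normalization), ${\rm L}_{x}g$ is just a shift of $g$, so $F_{g}(x)=\|{\rm L}_{x}g\|_{2}$ is constant in $x$. Hence $F_{g}$ vanishes at infinity only for $g=0$, and no nonzero $T_{g}$ is compact on the non-compact group $\mathbb{Z}$:

```python
# On Z with w = 1 and the l^2 norm, ||L_x g||_2 is independent of x,
# so F_g cannot vanish at infinity unless g = 0. Sketch with an
# arbitrary finitely supported g.
import math

g = {0: 1.0, 1: -2.0, 2: 0.5}          # arbitrary nonzero g in C_c(Z)

def translate(x, h):                    # (L_x h)(n) = h(x + n)
    return {m - x: v for m, v in h.items()}

def l2_norm(h):
    return math.sqrt(sum(v * v for v in h.values()))

norms = [l2_norm(translate(x, g)) for x in range(-100, 101)]
print(max(norms) - min(norms) < 1e-12)   # True: F_g is constant, not in C_0(Z)
```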
Acknowledgments. The authors would like to thank Prof. Kenneth A. Ross and
Prof. Ajit Iqbal Singh for their suggestions and comments. Vishvesh Kumar is
supported by FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial
Differential Equations.
## References
* [1] F. Abtahi, R. Nasr Isfahani and A. Rejali, _On the $L_{p}$-conjecture for locally compact groups_, Arch. Math. (Basel), 89 (2007) 237–242.
* [2] W.R. Bloom and H. Heyer, Harmonic Analysis of Probability Measures on Hypergroups, De Gruyter, Berlin, 1995.
* [3] C.-C. Chen and C.-H. Chu, _Hypercyclic weighted translations on groups_ , Proc. Amer. Math. Soc., 139 (2011) 2839–2846.
* [4] C.-C. Chen, _Chaotic weighted translations on groups_ , Arch. Math., 97 (2011) 61–68.
* [5] K.-Y. Chen, On aperiodicity and hypercyclic weighted translation operators, J. Math. Anal. Appl., 462 (2018) 1669–1678.
* [6] J. B. Conway, A course in Functional analysis, Springer-Verlag, New York, 1985.
* [7] C.F. Dunkl, The measure algebra of a locally compact hypergroup, Trans. Amer. Math. Soc., 179 (1973), 331–348.
* [8] C.F. Dunkl and D.E. Ramirez, _A family of countable compact $P_{*}$-hypergroups_ , Trans. Amer. Math. Soc., 202 (1975), 339–356.
* [9] F. Ghahramani and A.R. Medghalchi, _Compact multipliers on weighted hypergroup algebras_ , Math. Proc. Camb. Phil. Soc., 98 (1985), 493–500.
* [10] E. Hewitt and K.A. Ross, Abstract harmonic analysis, Vol. I: Structure of topological groups, integration theory, group representations, 2nd edition (1st edition 1963), Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 115, Springer-Verlag, Berlin-New York, 1979.
* [11] H. Hudzik, A. Kamińska and J. Musielak, _On some Banach algebras given by a modular_ , in: Alfred Haar Memorial Conference, Budapest, Colloquia Mathematica Societatis János Bolyai (North Holland, Amsterdam), 49 (1987) 445–463.
* [12] R. I. Jewett, Spaces with an abstract convolution of measures, Adv. Math., 18 (1975), 1–101.
* [13] D. Kerr and H. Li, _Ergodic Theory; Independence and Dichotomies_ , Springer International Publishing AG, 2016.
* [14] V. Kumar, K. A. Ross, A. I. Singh, _Hypergroup deformations of semigroups_ , Semigroup Forum 99(1) (2019) 169-195.
* [15] V. Kumar, K. A. Ross, A. I. Singh, _An addendum to “Hypergroup deformations of semigroups”_ , Semigroup Forum 99(1) (2019) 196-197.
* [16] V. Kumar, K. A. Ross, A. I. Singh, _Ramsey theory for hypergroups_ , Semigroup Forum, 100(2) (2020) 482-504.
* [17] V. Kumar, R. Sarma and N. Sharvan Kumar, _Orlicz spaces on hypergroups_ , Publ. Math. Debrecen, 94 (2019), 31–47.
* [18] V. Kumar, R. Sarma, _The Hausdorff-Young inequality for Orlicz spaces on compact hypergroups_ , Colloquium Mathematicum 160 (2020) 41-51.
* [19] V. Kumar, _Orlicz spaces and amenability of hypergroups_ , Bull. Iran. Math. Soc. (2019). https://doi.org/10.1007/s41980-019-00310-7
* [20] V. Kumar and S.M. Tabatabaie, _Hypercyclic Sequences of weighted translations on hypergroups_ arXiv preprint arXiv:2003.10036, (2020)
* [21] A. Osançlıol and S. Öztop, _Weighted Orlicz algebras on locally compact groups_ , J. Aust. Math. Soc., 99 (2015), 399–414.
* [22] S. Öztop and S.M. Tabatabaie, _Weighted Orlicz algebras on hypergroups_ , Filomat 43(9) (2020), 2991–3002.
* [23] T.S. Quek and L.Y.H. Yap, _Sharpness of Young’s inequality for convolution_ , Math. Scand., 53 (1983), 221–237.
* [24] M. Rajagopalan, $L^{p}$-conjecture for locally compact groups-I, Trans. Amer. Math. Soc., 125 (1966), 216-222.
* [25] M. Rajagopalan and W Zelazko, $L^{p}$-conjecture for solvable locally compact groups, J. Indian Math. Soc., 29 (1965), 87-93.
* [26] M.M. Rao and Z.D. Ren, Theory of Orlicz Spaces, Marcel Dekker, New York, 1991.
* [27] M.M. Rao and Z.D. Ren, Applications of Orlicz Spaces, Marcel Dekker, New York, 2002.
* [28] K.A. Ross, Centers of hypergroups, Trans. Amer. Math. Soc. 243 (1978), 251–269.
* [29] S, Saeki, The $L^{p}$-conjecture and Young’s inequality, Illinois Journal of Mathematics, 34(3) (1990) 614–627.
* [30] R. Spector, Apercu de la theorie des hypergroups, In: Analyse Harmonique sur les Groups de Lie, 643–673, Lec. Notes Math. Ser., 497, Springer, 1975.
* [31] S. M. Tabatabaie and A. R. Bagheri Salec, Convolution of Two Weighted Orlicz Spaces on Hypergroups, Revista Colombiana de Matemáticas, 54 (2020) 117–128.
* [32] S. M. Tabatabaie, A. R. Bagheri Salec and M. Zare Sanjari, A note on Orlicz algebras, Oper. Matrices, 14(1) (2020) 139–144.
* [33] S. M. Tabatabaie, A. R. Bagheri Salec and M. Zare Sanjari, _Remarks on weighted Orlicz spaces on locally compact groups_ , Math. Inequal. Appl. 23(3) (2020) 1015–1025.
* [34] S.M. Tabatabaie and F. Haghighifar, _$L^{p}$ -Conjecture on hypergroups_, Sahand Communications in Mathematical Analysis, 12 (2018), 121–130.
* [35] M. Voit, _Factorization of probability measures on symmetric hypergroups_ , J. Aust. Math. Soc. (Ser. A), 50 (1991), 417–467.
|
# The real part of the complementary error function
Yossi Lonke
It is shown that the real part of the complementary error function
${\rm erfc}\,(z)=\frac{2}{\sqrt{\pi}}\int_{z}^{\infty}e^{-u^{2}}\,du$
satisfies ${\rm Re}\,({\rm erfc}\,(z))\geq 1$ in the set $S=\\{z:|{\rm
Arg}\,z|\geq 3\pi/4\\}$. This improves a previous result asserting that ${\rm
erfc}\,(z)$ has no zeros in the set $S$.
Keywords: Error function, Complementary error function
Yossi Lonke
<EMAIL_ADDRESS>
Independent Research
13 Basel Street, Tel-Aviv, Israel
ORCID iD: 0000-0001-7493-5085
In [1], it was shown that the complementary error function, namely,
${\rm erfc}\,(z)=\frac{2}{\sqrt{\pi}}\int_{z}^{\infty}e^{-u^{2}}\,du$ (1)
does not vanish in the set
$S=\\{z\in\mathbb{C}:|{\rm Arg}\,z|\geq\frac{3\pi}{4}\\}$
(ibid. Theorem 1). Here ${\rm Arg}\,z$ denotes the principal argument in the
complex plane, i.e. the angle between the complex number $z$ and the positive
real axis, taking values in the interval $(-\pi,\pi]$. The proof presented in
[1] is essentially a real-analysis argument. In particular, no use was made of
the fact that the complementary error function is analytic. By combining a
result from [2] with some elementary complex analysis, we obtain a somewhat
simpler proof than the one in [1] of the following stronger result.
###### Proposition 1.
For every $z\in S$ one has
${\rm Re}\,({\rm erfc}\,(z))\geq 1$ (2)
Equality holds if and only if $z=0$.
###### Proof.
It clearly suffices to show that ${\rm erf}\,(z)=1-{\rm erfc}\,(z)$ has a non-
negative real part in $S$, where
${\rm erf}\,(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z}e^{-u^{2}}\,du$ (3)
Since ${\rm erf}\,(-z)=-{\rm erf}\,(z)$, this is equivalent to showing that
${\rm Re}\,({\rm erf}\,(z))\geq 0$ in the set
$T=-S=\\{z\in\mathbb{C}:|{\rm Arg}\,z|\leq\pi/4\\}$
The following estimate for the complementary error function was derived in
[2], (ibid. (10))
$|{\rm
erfc}\,(x+iy)|<\frac{e^{y^{2}-x^{2}}}{x\sqrt{\pi}}\sqrt{1+y^{2}/x^{2}}\quad(x,y>0)$
(4)
Since ${\rm erfc}\,(\bar{z})=\overline{{\rm erfc}\,(z)}$ for all
$z\in\mathbb{C}$, the estimate (4) holds for $y<0$ as well. If ${z=x+iy\in
T}$, then $y^{2}\leq x^{2}$, and if also $x>\sqrt{2/\pi}$ then (4) implies
that ${|1-{\rm erf}\,(z)|<1}$, and in particular ${\rm Re}\,({\rm
erf}\,(z))>0$.
Consider therefore a complex number $z=x+iy\in T$ such that
${x\leq\sqrt{2/\pi}}$, and we may further assume that ${y\geq 0}$, because
${\rm erf}\,(\bar{z}),{\rm erf}\,(z)$ are complex conjugates having the same
real part. Put $a^{2}=x^{2}-y^{2}$. Since ${\rm erf}\,z$ is an entire
function, the integral defining ${\rm erf}\,z$, namely (3), does not depend on
any particular choice of a continuous path with which we choose to connect the
origin to $z$. Choose a path composed of a segment connecting the origin to
the point $(a,0)$, followed by portion of the curve $x^{2}-y^{2}=a^{2}$,
connecting the point $(a,0)$ to the point $(x,y)$. A parametrization
$\gamma:[0,x]\to T$ is given by
$\gamma(t)=\left\\{\begin{aligned} &t,&0\leq t\leq a,\\\
&t+i\sqrt{t^{2}-a^{2}},&a\leq t\leq x\\\ \end{aligned}\right.$ (5)
Evaluating the integral (3) along the first part of $\gamma(t)$ gives
$\frac{2}{\sqrt{\pi}}\int_{0}^{a}e^{-u^{2}}\,du={\rm erf}\,(a)$ (6)
Along the second part of the path, the corresponding integral is
$\frac{2}{\sqrt{\pi}}\int_{a}^{x}e^{-(t+i\sqrt{t^{2}-a^{2}})^{2}}(1+i\frac{t}{\sqrt{t^{2}-a^{2}}})\,dt$
The real part of this integral is
$\frac{2e^{-a^{2}}}{\sqrt{\pi}}\int_{a}^{x}(\cos 2t\sqrt{t^{2}-a^{2}}+\sin
2t\sqrt{t^{2}-a^{2}}\frac{t}{\sqrt{t^{2}-a^{2}}})\,dt$ (7)
By hypothesis, $x\leq\sqrt{2/\pi}$. Hence
$0\leq 2t\sqrt{t^{2}-a^{2}}\leq 4/\pi<\pi/2\qquad(a\leq t\leq x)$
Since both sine and cosine are non-negative in $[0,\pi/2]$, the integral in
(7) is non-negative. Since ${\rm Re}\,({\rm erf}\,(z))$ is the sum of (6) and
(7), this completes the proof that ${\rm Re}\,({\rm erf}\,(z))\geq 0$ for all
$z\in T$.
In case of equality in (2) for some $z=x+iy\in S$, we switch to ${-z\in T}$,
and note that the considerations above imply that $|x|\leq\sqrt{2/\pi}$. As a
result, ${\rm Re}\,({\rm erf}\,(z))=0$ implies that both integrals (6) and
(7) (the latter with $x$ replaced by $-x=|x|$) must vanish, which forces
$x=y=0$. This completes the proof of the proposition. ∎
Remark. Note that if $x=y$ then $a=0$. In that case the path $\gamma$ reduces
to the line segment connecting the origin to the point $(x,x)$, and the
integral (7) reduces to an integral that was analysed in [1] by using other
methods. The proof above, however, remains unchanged.
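As a numerical sanity check (not a substitute for the proof), Proposition 1 can be tested at random sample points of $S$. The sketch below assumes NumPy and SciPy are available and relies on `scipy.special.erfc` accepting complex arguments:

```python
import numpy as np
from scipy.special import erfc  # accepts complex arguments

rng = np.random.default_rng(42)
n = 2000
# Sample z with |Arg z| >= 3*pi/4, i.e. z in the sector S.
r = rng.uniform(0.0, 10.0, n)
theta = rng.uniform(3 * np.pi / 4, np.pi, n) * rng.choice([-1.0, 1.0], n)
z = r * np.exp(1j * theta)

# Proposition 1: Re(erfc(z)) >= 1 throughout S (equality only at z = 0).
assert np.all(erfc(z).real >= 1.0 - 1e-12)
```

The small tolerance only guards against floating-point rounding near the sector boundary; the inequality itself is strict away from the origin.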
## References
* [1] Árpád, E., Laforgia, A.: The zeros of the complementary error function. Numer. Algor. 49, 153–157 (2008). https://doi.org/10.1007/s11075-008-9186-7
* [2] Strand, O.E.: A method for the computation of the error function of a complex variable, Mathematics of Computation, 19, 127–129 (1965)
11institutetext: Ben-Gurion University of the Negev
11email<EMAIL_ADDRESS>22institutetext: Shamoon
College of Engineering
22email<EMAIL_ADDRESS>
# Text line extraction using fully convolutional network and energy
minimization
Berat Kurar Barakat 11 0000-0002-7240-7286 Ahmad Droby 11 Reem Alaasam 11
Boraq Madi 11 Irina Rabaev 22 Jihad El-Sana 11
###### Abstract
Text lines are important parts of handwritten document images and are easier
for further applications to analyse. Despite recent progress in text line
detection, text line extraction from a handwritten document remains an
unsolved task. This paper proposes to use a fully convolutional network for
text line detection and energy minimization for text line extraction. Detected
text lines are represented by blob lines that strike through the text lines.
These blob lines assist an energy function for text line extraction. The
detection stage can locate arbitrarily oriented text lines. Furthermore, the
extraction stage is capable of finding out the pixels of text lines with
various heights and interline proximity independent of their orientations.
Besides, it can finely split the touching and overlapping text lines without
an orientation assumption. We evaluate the proposed method on VML-AHTE, VML-
MOC, and Diva-HisDB datasets. The VML-AHTE dataset contains overlapping,
touching and close text lines with rich diacritics. The VML-MOC dataset is
very challenging by its multiply oriented and skewed text lines. The Diva-
HisDB dataset exhibits distinct text line heights and touching text lines. The
results demonstrate the effectiveness of the method despite various types of
challenges, yet using the same parameters in all the experiments.
###### Keywords:
Historical documents analysis Text line segmentationText line detectionText
line extractionHandwritten document
## 1 Introduction
Segmentation in computer vision is the task of dividing an image into parts
that are easier to analyse. Text lines of a handwritten document image are
widely used for word segmentation, text recognition and spotting, manuscripts
alignment and writer recognition. Text lines need to be provided to these
applications either by their locations or by complete set of their pixels. The
task of identifying the location of each text line is called detection,
whereas the task of determining the pixels of each text line is called
extraction. Much research in recent years has focused on text line
detection [24, 3, 14]. However, detection defines the text lines loosely by
baselines or main body blobs. On the other hand, extraction is a harder task
which defines text lines precisely by pixel labels or bounding polygons.
The challenges in text line extraction arise due to variations in text line
heights and orientations, presence of overlapping and touching text lines, and
diacritical marks within close interline proximity. It has been generally
demonstrated that deep learning based methods are effective at detecting text
lines with various orientations [30, 22, 25, 14]. However, only a few of the
recent studies [8, 30] have addressed the problem of extraction given the
detection, yet with the assumption of horizontal text lines.
This paper proposes a text line extraction method (FCN+EM) which uses Fully
Convolutional Network (FCN) to detect text lines in the form of blob lines
(Figure 1(b)), followed by an Energy Minimization (EM) function assisted by
these blob lines to extract the text lines (Figure 1(c)). FCN is capable of
handling curved and arbitrarily oriented text lines. However, extraction is
problematic due to Sayre's paradox [27], which states that exact boundaries
of handwritten text can be defined only after its recognition and handwritten
text can be recognized only after extraction of its boundaries. Nevertheless,
humans are good at understanding boundaries of text lines written in a
language they do not know. Therefore, we consider EM framework to formulate
the text line extraction in compliance with the human visual perception, with
the aid of the Gestalt proximity principle for grouping [17]. The proposed EM
formulation for text line extraction is free of an orientation assumption and
can be used with touching and overlapping text lines with disjoint strokes and
close interline proximity (Figure 1(a)).
Figure 1: Given a handwritten document image (a), FCN learns to detect blob
lines that strike through text lines (b). EM with the assistance of detected
blob lines extracts the pixel labels of text lines which are in turn enclosed
by bounding polygons (c).
The proposed extraction method (FCN+EM) is evaluated on Visual Media Lab
Arabic Handwritten Text line Extraction (VML-AHTE) dataset, Multiply Oriented
and Curved (VML-MOC) dataset [5], and DIVA Historical Manuscript Database
(DIVA-HisDB) [28]. VML-AHTE dataset is characterized by touching and
overlapping text lines with close proximity and rich diacritics. VML-MOC
dataset contains arbitrarily oriented and curved text lines. DIVA-HisDB
dataset exhibits varying text line heights and touching text lines.
The rest of the paper is organized as follows. Related work is discussed in
Section 2, and the datasets are described in Section 3. Later, the method is
presented in Section 4. The experimental evaluation and the results are then
provided in Section 5. Finally, Section 6 draws conclusions and outlines
future work.
## 2 Related work
A text line is a set of image elements, such as pixels or connected
components. Text line components in a document image can be represented using
basic geometric primitives such as points, lines, polylines, polygons or
blobs. Text line representation is given as an input to other document image
processing algorithms, and must therefore be complete and correct.
There are two main approaches to represent text lines: text line detection and
text line extraction. Text line detection detects the lines, polylines or
blobs that represent the locations of spatially aligned set of text line
elements. Detected line or polyline is called a baseline [24, 14] if it joins
the lower part of the character main bodies, and a separator path [26, 8] if
it follows the space between two consecutive text lines. Detected blobs [3]
that cover the character main bodies in a text line are called text line
blobs.
Text line extraction determines the constituting pixels or the polygons around
the spatially aligned text line elements. Pixel labeling assigns the same
label to all the pixels of a text line [26, 9, 30]. Bounding polygon is used
to enclose all the elements of a text line together with its neighbourhood
background pixels [11, 15]. Most of the extraction methods assume horizontally
parallel text lines with constant heights, whereas some methods [5, 2] are
more generic.
Recent deep learning methods estimate $x$-height of text lines using FCN and
apply Line Adjacency Graphs (LAG) to post-process FCN output to split touching
lines [20, 21]. Renton et al. [24, 25] also use FCN to predict $x$-height of
text lines. Kurar et al. [3] applied FCN for challenging manuscript images
with multi-skewed, multi-directed and curved handwritten text lines. However,
these methods either perform only text line detection, or their extraction
phase is not appropriate for unstructured text lines because it assumes
horizontal text lines of constant height. The proposed method is designed for
complex layouts in both the detection and extraction phases.
ICDAR 2009 [12] and ICDAR 2013 [29] datasets are commonly used for evaluating
text line extraction methods and ICDAR 2017 [10] dataset is used for
evaluating text line detection methods. DIVA-HisDB dataset [28] is used for
both types of evaluations: detection and extraction. Therefore, we select to
use DIVA-HisDB [28] as it provides ground truth for detection and extraction.
However, this dataset alone is not representative enough of all segmentation
problems to evaluate a generic method. Hence, we also evaluated the proposed
method on publicly available VML-MOC [5] dataset that contains multiply
oriented and curved text lines with heterogeneous heights, and on VML-AHTE
dataset that contains crowded diacritics.
## 3 Datasets
We evaluated the proposed method on three publicly available handwritten
datasets. We suppose that these datasets demonstrate the generality of our
method: the VML-AHTE dataset contains lines with crowded diacritics, the
VML-MOC dataset contains multiply oriented and curved lines, and the
DIVA-HisDB dataset contains multiple consecutively touching lines. In this
section we present
these datasets.
### 3.1 VML-AHTE
VML-AHTE dataset is a collection of 30 binary document images selected from
several manuscripts (Fig. 2). It is a newly published dataset and available
online for download at https://www.cs.bgu.ac.il/~berat/data/ahte_dataset.
The dataset is split into 20 train pages and 10 test pages. Its ground truth
is provided in three formats: bounding polygons in PAGE xml [23] format, color
pixel labels and DIVA pixel labels [28].
Figure 2: Some samples of challenges in VML-AHTE dataset.
### 3.2 Diva-HisDB
DIVA-HisDB dataset [28] contains 150 pages from 3 medieval manuscripts: CB55,
CSG18 and CSG863 (Fig. 3). Each book has 20 train pages and 10 test pages.
Among them, CB55 is characterized by a vast number of touching characters.
Ground truth is provided in three formats: baselines and bounding polygons in
PAGE xml [23] format and DIVA pixel labels [28].
Figure 3: Diva-HisDB dataset contains 3 manuscripts: CB55, CSG18 and CSG863.
Notice the touching characters among multiple consecutive text lines in CB55.
### 3.3 VML-MOC
VML-MOC dataset [5] is a multiply oriented and curved handwritten text lines
dataset that is publicly
available at https://www.cs.bgu.ac.il/~berat/data/moc_dataset. These text lines
are side notes added by various scholars over the years on the page margins,
in arbitrary orientations and curvy forms due to space constraints (Fig. 4).
The dataset contains 30 binary document images and is divided into 20 train pages
and 10 test pages. The ground truth is provided in three formats: bounding
polygons in PAGE xml [23] format, color pixel labels and DIVA pixel labels
[28].
Figure 4: VML-MOC dataset purely contains binarized side notes with arbitrary
orientations and curvy forms.
## 4 Method
We present a method (FCN+EM) for text line detection together with extraction,
and show its effectiveness on handwritten document images. In the first phase,
the method uses an FCN to densely predict the pixels of the blob lines that
strike through the text lines (Figure 1(b)). In the second phase, we use an EM
framework to extract the pixel labels of text lines with the assistance of
detected blob lines (Figure 1(c)). In the rest of this section we give a
detailed description of FCN, EM and how they are used for text line detection
and text line extraction.
### 4.1 Text line detection using FCN
Fully Convolutional Network (FCN) is an end-to-end semantic segmentation
algorithm that extracts the features and learns the classifier function
simultaneously. FCN inputs the original images and their pixel level
annotations for learning the hypothesis function that can predict whether a
pixel belongs to a text line label or not. A crucial decision have to be made
about the representation of text line detection. Text line detection labels
can be represented as baselines or blob lines.
We use blob line labeling that connects the characters in the same line while
disregarding diacritics and touching components among the text lines. Blob
line labeling for VML-AHTE and DIVA-HisDB datasets is automatically generated
using the skeletons of bounding polygons provided by their ground truth
(Figure 5(d)). Blob line labeling for VML-MOC dataset is manually drawn using
a sharp rectangular brush with a diameter of 12 pixels (Figure 5(b)).
Figure 5: Sample patches from document images of VML-MOC (a) and VML-AHTE (c).
Blob line labeling for VML-AHTE and DIVA-HisDB is generated automatically (d).
Blob line labeling for VML-MOC is manually drawn using a paint brush with a
diameter of 12 pixels (b).
#### 4.1.1 FCN architecture
The FCN architecture (Figure 6) we used is based on the FCN8 proposed for
semantic segmentation [19]. Particularly FCN8 architecture was selected
because it has been successful in page layout analysis of handwritten
documents [4]. It consists of an encoder and a decoder. The encoder
downsamples the input image so that the filters see coarser information with a
larger receptive field. The decoder then adds the final encoder layer to lower
layers that carry finer information, and upsamples the combined layer
back to the input size. The default input size is $224\times 224$ pixels, which does not
cover more than 2 to 3 text lines. To include more context, we changed the
input size to $350\times 350$ pixels. We also changed the number of output
channels to 2, which is the number of classes: blob line or not.
Figure 6: The FCN architecture used for text line detection. Vertical lines
show the convolutional layers. Grids show the relative coarseness of the
pooling and prediction layers. FCN8 upsamples 4 times the final layer,
upsamples twice the pool4 layer, and combine them with pool3 layer. Finally,
FCN8 upsamples the combination to the input size.
#### 4.1.2 FCN training
For training, we randomly crop $50,000$ patches of size $350\times 350$ from
inverted binary images of the documents and their corresponding labels from
the blob line label images (Figure 5(b)). We adopted this patch size due to
memory limitations. Using full pages for training and prediction is not
feasible on non-specialized systems without resizing the pages to a more
manageable size, and resizing results in loss of detail, which usually
reduces the accuracy of segmentation results.
The FCN was trained by a batch size of 12, using Stochastic Gradient Descent
(SGD) with momentum equal to $0.9$ and a learning rate of $0.001$. The
encoder part of FCN was initialized with its publicly available pre-trained
weights.
#### 4.1.3 FCN testing
During the testing, a sliding window of size $350\times 350$ was used for
prediction, but only the inner window of size $250\times 250$ was considered
to eliminate edge effects. The page was padded with black pixels on its right
and bottom sides when its size was not an integer multiple of the sliding
window size, in addition to padding on all 4 sides so that only the central
part of each sliding window needs to be considered.
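The padding and inner-window logic can be sketched as follows; this is an illustrative NumPy implementation, not the authors' code, and `predict_patch` stands in for the trained FCN applied to a single window:

```python
import numpy as np

WIN, INNER = 350, 250
MARGIN = (WIN - INNER) // 2  # 50 px discarded on each side of a window

def predict_page(page, predict_patch):
    """Tile a page with 350x350 windows at stride 250, keeping only the
    central 250x250 of each prediction to avoid edge effects."""
    h, w = page.shape
    # Pad all 4 sides by the margin, then right/bottom up to a multiple of INNER.
    ph = int(np.ceil(h / INNER)) * INNER + 2 * MARGIN
    pw = int(np.ceil(w / INNER)) * INNER + 2 * MARGIN
    padded = np.zeros((ph, pw), dtype=page.dtype)
    padded[MARGIN:MARGIN + h, MARGIN:MARGIN + w] = page

    out = np.zeros((ph, pw), dtype=np.float32)
    for y in range(0, ph - WIN + 1, INNER):
        for x in range(0, pw - WIN + 1, INNER):
            pred = predict_patch(padded[y:y + WIN, x:x + WIN])
            out[y + MARGIN:y + WIN - MARGIN,
                x + MARGIN:x + WIN - MARGIN] = pred[MARGIN:-MARGIN,
                                                    MARGIN:-MARGIN]
    return out[MARGIN:MARGIN + h, MARGIN:MARGIN + w]
```

With an identity `predict_patch`, the reassembled output reproduces the input page exactly, which is a convenient unit test for the tiling arithmetic.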
### 4.2 Text line extraction using EM
We adapt the energy minimization (EM) framework [6] that uses graph cuts to
approximate the minima of arbitrary functions. These functions can be
formulated in terms of image elements such pixels or connected components. In
this section we formulate a general function for text line extraction using
text line detection. Then, we adapt this general function to be used with
connected components for text line extraction.
#### 4.2.1 EM formulation
Let $\mathcal{L}$ be the set of binary blob lines, and $\mathcal{E}$ be the
set of elements in the binary document image. Energy minimization finds a
labeling $f$ that assigns each element $e\in\mathcal{E}$ to a label
$l_{e}\in\mathcal{L}$, where energy function $\textbf{E}(f)$ has the minimum.
$\textbf{E}(f)=\sum_{e\in{\mathcal{E}}}D(e,\ell_{e})+\sum_{\\{e,e^{\prime}\\}\in\mathcal{N}}d(e,e^{\prime})\cdot\delta(\ell_{e}\neq\ell_{e^{\prime}})$
(1)
The term $D$ is the data cost, $d$ is the smoothness cost, and $\delta$ is an
indicator function. Data cost is the cost of assigning element $e$ to label
$l_{e}$. $D(e,\ell_{e})$ is defined to be the Euclidean distance between the
centroid of the element $e$ and the nearest neighbour pixel in blob line
$l_{e}$ for the centroid of the element $e$. Smoothness cost is the cost of
assigning neighbouring elements to different labels. Let $\mathcal{N}$ be the
set of nearest element pairs. Then $\forall\\{e,e^{\prime}\\}\in\mathcal{N}$,
$d(e,e^{\prime})=\exp({-\beta\cdot d_{e}(e,e^{\prime})})$ (2)
where $d_{e}(e,e^{\prime})$ is the Euclidean distance between the centroids of
the elements $e$ and $e^{\prime}$, and $\beta$ is defined as
$\beta=(2\left<d_{e}(e,e^{\prime})\right>)^{-1}$ (3)
$\left<\cdot\right>$ denotes expectation over all pairs of neighbouring
elements [7] in a document page image. $\delta(\ell_{e}\neq\ell_{e^{\prime}})$
is equal to $1$ if the condition inside the parentheses holds and $0$
otherwise.
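The two cost terms of Eqs. (1)–(3) can be sketched directly in NumPy. The snippet below is illustrative only: component centroids and blob-line pixel coordinates are assumed to be given as arrays, and a full implementation would pass these costs to a graph-cut solver such as α-expansion [6]:

```python
import numpy as np

def data_cost(centroids, blob_pixels):
    """D(e, l_e): Euclidean distance from each component centroid to the
    nearest pixel of each blob line; returns (n_components, n_labels)."""
    D = np.empty((len(centroids), len(blob_pixels)))
    for j, pix in enumerate(blob_pixels):
        diff = centroids[:, None, :] - pix[None, :, :]
        D[:, j] = np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)
    return D

def smoothness_cost(centroids, pairs):
    """d(e, e') = exp(-beta * ||c_e - c_e'||) over neighbouring pairs,
    with beta = 1 / (2 <d_e>) as in Eq. (3)."""
    dists = np.array([np.linalg.norm(centroids[i] - centroids[j])
                      for i, j in pairs])
    beta = 1.0 / (2.0 * dists.mean())
    return np.exp(-beta * dists)
```
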
#### 4.2.2 EM adaptation to connected components
The presented method extracts text lines using results of the text line
detection procedure by FCN. Extraction level representation labels each pixel
of the text lines in a document image. The major difficulty in pixel labeling
lies in the computational cost: a typical document image in the experimented
datasets contains around $14,000,000$ pixels. For this reason, we adapt the
energy function (Eq. 1) to be used with connected components for extraction of
text lines.
Data cost of the adapted function measures how appropriate a label is for the
component $e$, given the blob lines $\mathcal{L}$. Using the data cost
alone would amount to clustering the components with their nearest neighbour
blob line. However, simple nearest neighbour clustering fails to
correctly label the free components that are disconnected from the blob lines
(Fig. 7).
Figure 7: Segmented samples that show the necessity of smoothness cost for
text line extraction. Samples in the first row are true and achieved with
smoothness cost. Samples in the second row are false and caused by the lack of
a smoothness cost. Notice that smoothness cost pulls the diacritics to the
components they belong to, in spite of their proximity to the wrong blob line.
A free component tends to lie closer to the components of the line it belongs
to, but can be a nearest neighbour of a blob line that it does not belong to.
This is because the proximity grouping strength decays exponentially with
Euclidean distance [18]. This phenomenon is formulated using the smoothness
cost (Eq. 2). Semantically this means that closer components have a higher
probability to have the same label than distant components. Hence, the
competition between data cost and smoothness cost dictates free components to
be labeled spatially coherent with their neighbouring components.
## 5 Experiments
We experiment with three datasets that are different in terms of the text line
segmentation challenges they contain. VML-AHTE dataset exhibits crowded
diacritics and cramped text lines, whereas DIVA-HisDB dataset contains
consecutively touching text lines. In contrast to both, VML-MOC
exhibits challenges caused by arbitrarily skewed and curved text lines. The
performance is measured using the line segmentation evaluation metrics of
ICDAR 2013 [13] and ICDAR 2017 [1].
### 5.1 ICDAR 2013 line segmentation evaluation metrics
ICDAR 2013 metrics calculate recognition accuracy ($RA$), detection rate
($DR$) and F-measure ($FM$) values. Given a set of image points $I$, let
$R_{i}$ be the set of points inside the $i^{th}$ result region, $G_{j}$ be the
set of points inside the $j^{th}$ ground truth region, and $T(p)$ is a
function that counts the points inside the set $p$, then the $MatchScore(i,j)$
is calculated by Equation 4
$MatchScore(i,j)=\frac{T(G_{j}\cap R_{i})}{T(G_{j}\cup R_{i})}$ (4)
The evaluator considers a region pair $(i,j)$ as a one-to-one match if the
$MatchScore(i,j)$ is equal to or above the threshold, which we set to $90$. Let
$N_{1}$ and $N_{2}$ be the number of ground truth and output elements,
respectively, and let $M$ be the number of one-to-one matches. The evaluator
calculates the $DR$, $RA$ and $FM$ as follows:
$DR=\frac{M}{N_{1}}$ (5) $RA=\frac{M}{N_{2}}$ (6) $FM=\frac{2\times DR\times
RA}{DR+RA}$ (7)
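A minimal sketch of these metrics, representing each region as a set of image points; this simplified matcher just counts threshold-passing pairs, which coincides with one-to-one matching when every region overlaps at most one counterpart above the threshold (the official evaluator enforces one-to-one matching explicitly):

```python
def match_score(gt, res):
    """MatchScore(i, j) = |G_j ∩ R_i| / |G_j ∪ R_i| over point sets (Eq. 4)."""
    return len(gt & res) / len(gt | res)

def evaluate(gt_regions, result_regions, threshold=0.90):
    """ICDAR 2013-style DR, RA, FM from matches above the threshold."""
    matches = sum(
        1 for g in gt_regions for r in result_regions
        if match_score(g, r) >= threshold
    )
    dr = matches / len(gt_regions)           # detection rate (Eq. 5)
    ra = matches / len(result_regions)       # recognition accuracy (Eq. 6)
    fm = 2 * dr * ra / (dr + ra) if dr + ra else 0.0  # F-measure (Eq. 7)
    return dr, ra, fm
```
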
### 5.2 ICDAR 2017 line segmentation evaluation metrics
ICDAR 2017 metrics are based on the Intersection over Union (IU). IU scores
for each possible pair of Ground Truth (GT) polygons and Prediction (P)
polygons are computed as follows:
$IU=\frac{IP}{UP}$ (8)
IP denotes the number of intersecting foreground pixels among the pair of
polygons. UP denotes the number of foreground pixels in the union of foreground
pixels of the pair of polygons. The pairs with maximum IU score are selected
as the matching pairs of GT polygons and P polygons. Then, pixel IU and line
IU are calculated among these matching pairs. For each matching pair, line TP,
line FP and line FN are given by: Line TP is the number of foreground pixels
that are correctly predicted in the matching pair. Line FP is the number of
foreground pixels that are falsely predicted in the matching pair. Line FN is
the number of false negative foreground pixels in the matching pair.
Accordingly pixel IU is:
$\text{Pixel }IU=\frac{TP}{TP+FP+FN}$ (9)
where TP is the global sum of line TPs, FP is the global sum of line FPs, and
FN is the global sum of line FNs.
Line IU is measured at line level. For each matching pair, line precision and
line recall are:
$\text{Line precision}=\frac{\text{line }TP}{\text{line }TP+\text{line }FP}$
(10) $\text{Line recall}=\frac{\text{line }TP}{\text{line }TP+\text{line }FN}$
(11)
Accordingly, line IU is:
$\text{Line }IU=\frac{\text{CL}}{\text{CL+ML+EL}}$ (12)
where CL is the number of correct lines, ML is the number of missed lines, and
EL is the number of extra lines.
For each matching pair: A line is correct if both, the line precision and the
line recall are above the threshold value. A line is missed if the line recall
is below the threshold value. A line is extra if the line precision is below
the threshold value.
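These definitions can be sketched as follows, with each matching pair given as two pixel sets; the threshold of 0.75 is assumed here for illustration, since the text does not state the value:

```python
def pixel_iu(pairs):
    """Global Pixel IU = TP / (TP + FP + FN), summing line-level counts
    over all matching (GT, prediction) pairs of pixel sets (Eq. 9)."""
    tp = fp = fn = 0
    for gt, pred in pairs:
        tp += len(gt & pred)   # correctly predicted foreground pixels
        fp += len(pred - gt)   # falsely predicted foreground pixels
        fn += len(gt - pred)   # missed foreground pixels
    return tp / (tp + fp + fn)

def line_iu(pairs, threshold=0.75):
    """Line IU = CL / (CL + ML + EL) from per-pair precision/recall (Eq. 12)."""
    correct = missed = extra = 0
    for gt, pred in pairs:
        tp = len(gt & pred)
        precision = tp / len(pred) if pred else 0.0  # Eq. 10
        recall = tp / len(gt) if gt else 0.0         # Eq. 11
        if precision >= threshold and recall >= threshold:
            correct += 1
        if recall < threshold:
            missed += 1
        if precision < threshold:
            extra += 1
    return correct / (correct + missed + extra)
```
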
### 5.3 Results on VML-AHTE dataset
Since VML-AHTE and VML-MOC are recently published datasets, we also ran two
other supervised methods on them. The first is a holistic method that extracts
text lines in one phase and is based on instance segmentation using MRCNN
[16]. The second runs the EM framework using the blob line labels from the
ground truth; we refer to it as Human+EM. On the VML-AHTE dataset,
FCN+EM outperforms all the other methods in terms of all the metrics except
Line IU. It can successfully split the touching text lines and assign the
disjoint strokes to the correct text lines.
Table 1: Results on VML-AHTE dataset

| Method | Line IU | Pixel IU | DR | RA | FM |
|---|---|---|---|---|---|
| FCN+EM | 94.52 | 90.01 | 95.55 | 92.8 | 94.3 |
| MRCNN | 93.08 | 86.97 | 84.43 | 58.89 | 68.77 |
| Human+EM | 97.83 | 89.61 | 88.14 | 87.78 | 87.96 |
Figure 8: Example of generated curved lines: (a) shows the original straight
lines section, (b) is the result of warping (a) by 90 degrees at the middle to
generate the curved lines, and (c) is the mirrored image of (b) in the
vertical direction.
### 5.4 Results on VML-MOC dataset
The VML-MOC dataset contains both types, straight text lines and curved text
lines. The number of straight text lines is substantially greater than the
number of curved text lines. This imbalance causes the FCN to overfit on the
straight text lines, which in turn leads to fragmented blob lines when
predicting over the curved text lines. Therefore, to compensate for this
imbalance, we generated images containing artificially curved text lines. We
selected the document image parts with straight lines and warped these images
by $90$ degrees at their middle. Furthermore, each one of those warped lines
was mirrored in the
horizontal and vertical directions resulting in curved lines in four
directions. Figure 8 illustrates this procedure. The FCN+EM that is trained
with augmented curved text lines (FCN+EM+Aug) outperforms the FCN+EM that is
trained only with the training set (Table 2). But FCN+EM+Aug still
underperforms a learning free algorithm [5].
Table 2: Results on VML-MOC dataset

| Method | Line IU | Pixel IU | DR | RA | FM |
|---|---|---|---|---|---|
| FCN+EM | 25.04 | 48.71 | 26.45 | 17.73 | 31.09 |
| FCN+EM+Aug | 35.12 | 60.97 | 84.43 | 58.89 | 68.77 |
| [5] | 60.99 | 80.96 | - | - | - |
| Human+EM | 96.62 | 99.01 | 90.41 | 91.74 | 91.03 |
### 5.5 Results on DIVA-HisDB dataset
We compare our results with the results of Task-3 of the ICDAR 2017
competition on layout analysis for medieval manuscripts [28]. Task-3 considers
only the main text lines, not the interlinear glosses. We removed these
glosses prior to all our experiments using the ground truth; it should be
noted that the Task-3 participants removed them using their own algorithms.
Table 3 presents a comparison of our method with the participants of the ICDAR
2017 competition on layout analysis for challenging medieval manuscripts for
text line extraction. FCN+EM obtains a perfect Line IU score on the books
CSG863 and CB55, and its Pixel IU is on par with the best performing method in
the competition.
Table 3: Comparison with the Task-3 results of the ICDAR2017 competition on layout analysis for challenging medieval manuscripts [28]. LIU and PIU denote Line IU and Pixel IU.

Method | CB55 LIU | CB55 PIU | CSG18 LIU | CSG18 PIU | CSG863 LIU | CSG863 PIU
---|---|---|---|---|---|---
FCN+EM | 100 | 97.64 | 97.65 | 97.79 | 100 | 97.18
System-2 | 84.29 | 80.23 | 69.57 | 75.31 | 90.64 | 93.68
System-6 | 5.67 | 30.53 | 39.17 | 54.52 | 25.96 | 46.09
System-8 | 99.33 | 93.75 | 94.90 | 94.47 | 96.75 | 90.81
System-9+4.1 | 98.04 | 96.67 | 96.91 | 96.93 | 98.62 | 97.54
### 5.6 Discussion
An observable pattern in the results is that Line IU and Pixel IU values move
in parallel, while RA values fluctuate relative to DR values. Such
counter-intuitive behaviour of a metric is undesirable for the
interpretability of the results. On the other hand, the ICDAR 2017 evaluator
cannot handle cases where a text line consists of multiple polygons. Such
cases arise in the MRCNN results: MRCNN segments a text line instance
correctly but represents it as multiple polygons with the same label.
Evaluating MRCNN results in their raw form therefore yields unfairly low
values (Figure 9), because the ICDAR 2017 evaluator calculates an IU score for
each possible pair of ground truth and prediction polygons and then selects
the pairs with maximum IU score as the matching pairs. Consequently, a text
line represented by multiple polygons is evaluated only through its largest
polygon.
Figure 9: MRCNN method correctly predicts text line pixels but its results are
not fairly evaluated due to disconnected polygons.
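A small hypothetical example (our own, not the competition code) reproduces this effect: matching by maximum IU scores only the largest of several same-label fragments, while merging the fragments before matching recovers the full line.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# One ground-truth line, predicted correctly but as two disconnected fragments.
gt = np.zeros((10, 100), dtype=bool)
gt[4:6, :] = True
frag_a = np.zeros_like(gt); frag_a[4:6, :55] = True   # larger fragment
frag_b = np.zeros_like(gt); frag_b[4:6, 55:] = True   # smaller fragment

# Max-IU matching considers only the best single polygon ...
best_single = max(iou(gt, frag_a), iou(gt, frag_b))   # 0.55
# ... whereas merging same-label polygons before matching is fair.
merged = iou(gt, np.logical_or(frag_a, frag_b))       # 1.0
```

Here the raw evaluation credits the line with an IU of 0.55 despite a pixel-perfect prediction.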
## 6 Conclusion
This paper presents a supervised text line segmentation method, FCN+EM. The
FCN detects blob lines that strike through the text lines, and the EM extracts
the pixels of the text lines with the guidance of the detected blob lines.
FCN+EM makes no assumption about text line orientation or height. The
algorithm is very effective in detecting cramped, crowded and touching text
lines. It achieves superior performance on the VML-AHTE and DIVA-HisDB
datasets and comparable results on the VML-MOC dataset.
## Acknowledgment
The authors would like to thank Gunes Cevik for annotating the ground truth.
This work has been partially supported by the Frankel Center for Computer
Science.
## References
* [1] Alberti, M., Bouillon, M., Ingold, R., Liwicki, M.: Open evaluation tool for layout analysis of document images. In: 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR). vol. 4, pp. 43–47. IEEE (2017)
* [2] Aldavert, D., Rusiñol, M.: Manuscript text line detection and segmentation using second-order derivatives. In: 2018 13th IAPR International Workshop on Document Analysis Systems (DAS). pp. 293–298. IEEE (2018)
* [3] Barakat, B., Droby, A., Kassis, M., El-Sana, J.: Text line segmentation for challenging handwritten document images using fully convolutional network. In: 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR). pp. 374–379. IEEE (2018)
* [4] Barakat, B., El-Sana, J.: Binarization free layout analysis for arabic historical documents using fully convolutional networks. In: Arabic Script Analysis and Recognition (ASAR), 2nd International Workshop. pp. 26–30. IEEE (2018)
* [5] Barakat, B.K., Cohen, R., El-Sana, J., Rabaev, I.: VML-MOC: Segmenting a multiply oriented and curved handwritten text line dataset. In: 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW). vol. 6, pp. 13–18. IEEE (2019)
* [6] Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Transactions on pattern analysis and machine intelligence 23(11), 1222–1239 (2001)
* [7] Boykov, Y.Y., Jolly, M.P.: Interactive graph cuts for optimal boundary & region segmentation of objects in nd images. In: Proceedings eighth IEEE international conference on computer vision. ICCV 2001. vol. 1, pp. 105–112. IEEE (2001)
* [8] Campos, V.B., Gómez, V.R., Rossi, A.H.T., Ruiz, E.V.: Text line extraction based on distance map features and dynamic programming. In: 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR). pp. 357–362. IEEE (2018)
* [9] Cohen, R., Dinstein, I., El-Sana, J., Kedem, K.: Using scale-space anisotropic smoothing for text line extraction in historical documents. In: International Conference Image Analysis and Recognition. pp. 349–358. Springer (2014)
* [10] Diem, M., Kleber, F., Fiel, S., Grüning, T., Gatos, B.: cbad: ICDAR2017 competition on baseline detection. In: 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR). vol. 1, pp. 1355–1360. IEEE (2017)
* [11] Fischer, A., Baechler, M., Garz, A., Liwicki, M., Ingold, R.: A combined system for text line extraction and handwriting recognition in historical documents. In: 2014 11th IAPR International Workshop on Document Analysis Systems. pp. 71–75. IEEE (2014)
* [12] Gatos, B., Stamatopoulos, N., Louloudis, G.: ICDAR2009 handwriting segmentation contest. International Journal on Document Analysis and Recognition (IJDAR) 14(1), 25–33 (2011)
* [13] Gatos, B., Stamatopoulos, N., Louloudis, G.: ICFHR 2010 handwriting segmentation contest. In: 2010 12th International Conference on Frontiers in Handwriting Recognition. pp. 737–742. IEEE (2010)
* [14] Grüning, T., Leifert, G., Strauß, T., Michael, J., Labahn, R.: A two-stage method for text line detection in historical documents. International Journal on Document Analysis and Recognition (IJDAR) 22(3), 285–302 (2019)
* [15] Gruuening, T., Leifert, G., Strauss, T., Labahn, R.: A robust and binarization-free approach for text line detection in historical documents. In: 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR). vol. 1, pp. 236–241. IEEE (2017)
* [16] He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp. 2961–2969 (2017)
* [17] Koffka, K.: Principles of Gestalt psychology, vol. 44. Routledge (2013)
* [18] Kubovy, M., Van Den Berg, M.: The whole is equal to the sum of its parts: A probabilistic model of grouping by proximity and similarity in regular patterns. Psychological review 115(1), 131 (2008)
* [19] Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3431–3440 (2015)
* [20] Moysset, B., Kermorvant, C., Wolf, C., Louradour, J.: Paragraph text segmentation into lines with recurrent neural networks. In: 2015 13th International Conference on Document Analysis and Recognition (ICDAR). pp. 456–460. IEEE (2015)
* [21] Moysset, B., Louradour, J., Kermorvant, C., Wolf, C.: Learning text-line localization with shared and local regression neural networks. In: 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR). pp. 1–6. IEEE (2016)
* [22] Oliveira, S.A., Seguin, B., Kaplan, F.: dhsegment: A generic deep-learning approach for document segmentation. In: 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR). pp. 7–12. IEEE (2018)
* [23] Pletschacher, S., Antonacopoulos, A.: The page (page analysis and ground-truth elements) format framework. In: 2010 20th International Conference on Pattern Recognition. pp. 257–260. IEEE (2010)
* [24] Renton, G., Chatelain, C., Adam, S., Kermorvant, C., Paquet, T.: Handwritten text line segmentation using fully convolutional network. In: 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR). vol. 5, pp. 5–9. IEEE (2017)
* [25] Renton, G., Soullard, Y., Chatelain, C., Adam, S., Kermorvant, C., Paquet, T.: Fully convolutional network with dilated convolutions for handwritten text line segmentation. International Journal on Document Analysis and Recognition (IJDAR) 21(3), 177–186 (2018)
* [26] Saabni, R., Asi, A., El-Sana, J.: Text line extraction for historical document images. Pattern Recognition Letters 35, 23–33 (2014)
* [27] Sayre, K.M.: Machine recognition of handwritten words: A project report. Pattern recognition 5(3), 213–228 (1973)
* [28] Simistira, F., Bouillon, M., Seuret, M., Würsch, M., Alberti, M., Ingold, R., Liwicki, M.: ICDAR2017 competition on layout analysis for challenging medieval manuscripts. In: 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR). vol. 1, pp. 1361–1370. IEEE (2017)
* [29] Stamatopoulos, N., Gatos, B., Louloudis, G., Pal, U., Alaei, A.: ICDAR 2013 handwriting segmentation contest. In: 2013 12th International Conference on Document Analysis and Recognition. pp. 1402–1406. IEEE (2013)
* [30] Vo, Q.N., Lee, G.: Dense prediction for text line segmentation in handwritten document images. In: 2016 IEEE International Conference on Image Processing (ICIP). pp. 3264–3268. IEEE (2016)
# Stability bounds of a delay visco-elastic rheological model with substrate
friction
Malik A. Dawi 1 Jose J. Muñoz1,2∗
1Laboratori de Càlcul Numèric (LaCàN)
Universitat Politècnica de Catalunya, Barcelona, Spain
2Dept. of Mathematics, Universitat Politècnica de Catalunya, Barcelona, Spain
Centre Internacional de Mètodes Numèrics en Enginyeria (CIMNE), Barcelona,
Spain.
<EMAIL_ADDRESS>
###### Abstract
Cells and tissues exhibit oscillatory deformations during remodelling,
migration or embryogenesis. Although it has been shown that these oscillations
correlate with cell biochemical signalling, the role of these oscillations in
triggering drastic cell reorganisation events or instabilities, and the
coupling of this oscillatory response with the measured visco-elastic
properties, remain unclear.
We here present a rheological model that incorporates elastic, viscous and
frictional components, and that is able to generate an oscillatory response
through a delayed adaptive process of the rest-length. We analyse its
stability properties as a function of the model parameters and deduce
analytical bounds of the stable domain. While increasing values of the delay
and remodelling rate render the model unstable, we also show that increasing
friction with the substrate destabilises the oscillatory response.
Furthermore, we numerically verify that the extension of the model to
non-linear strain measures is able to generate sustained oscillations that
alternate between stable and unstable regions.
Keywords: Oscillations, delay differential equations, visco-elasticity,
friction, stability, rheology, cells.
## 1 Introduction
Oscillatory cell deformations are ubiquitous and have been quantified _in
vitro_ [18, 20] and _in vivo_ , for instance in the segmentation clock of mice
[27] or during _Drosophila_ dorsal closure [23]. These oscillations have
been associated with biochemical dynamics [13], signalling delays [19] or
myosin concentration fluctuations [7]. We here present a rheological model
that explicitly incorporates the delay between the cell length adaptation and
the current stretch.
Time delay has been included in numerous models in biology, with applications
in biochemical negative feedback [15], cell growth and division [1, 11], or
cell maturation [10], but it is less common in biomechanics. In our case we
introduce this delay in an evolution law of the cell or tissue rest-length.
Models with varying rest-length have been applied to stress relaxation
[14], morphogenesis [5], cortical mechanics [8], or endocytosis [4]. They have
the advantage of including a measurable quantity, the rest-length [26], while
also furnishing the observed visco-elastic response. We here adapt these
models and include the delayed response in conjunction with frictional or
adhesive forces from the environment or substrate.
Our visco-elastic model mimics the standard linear solid, but is expressed in
terms of delayed rest-length changes, which provide the oscillatory character
of the deformation. The stability of such a system has been described in [17],
and in [3] for the planar frictionless dynamics of monolayers. We here extend
this analysis to a frictional substrate and deduce the stability conditions as
a function of the viscous, stiffness and friction parameters.
The stability analysis is usually carried out through the inspection of the
characteristic equation [2, 22], or semi-discretisation methods [12, 25]. We
resort to the former method, and by analysing the associated Lambert function
[21, 6], we deduce strict and simple bounds of the stability region. We
compare our analysis with some numerical solutions of the Delay Differential
Equations (DDEs).
The article is organised as follows. We describe the visco-elastic model in
Section 2 together with the delay evolution law of the rest-length. In Section
3 the stability of a linear model is analysed, and some bounds as a function
of the model parameters are given. A non-linear extension is presented in
Section 4, which is solved numerically and is analysed with the help of the
results obtained in the linearised model. Our findings are finally discussed
in the Conclusions section.
## 2 Visco-elastic model with delay
We consider a material rheology that mimics the standard linear solid: a
purely elastic stress $\sigma^{e}$ in parallel with a visco-elastic stress
$\sigma^{v}$. Figure 1 shows the two branches schematically. We assume a one-
dimensional domain $D=\left[0,l(t)\right]$, with $l(t)$ a time-dependent
apparent (measurable) length of the domain.
The total stress $\sigma$ in $D$ is given by the sum of elastic and
viscoelastic contributions,
$\displaystyle\sigma=\sigma^{e}+\sigma^{v},$
where each stress component is given by
$\sigma^{e}=k_{1}\varepsilon(l(t),l_{0})$ and
$\sigma^{v}=k_{2}\varepsilon^{e}(l(t),L(t))$, with $k_{1}$ and $k_{2}$ the
associated stiffness parameters. The strain measures $\varepsilon(l(t),l_{0})$
and $\varepsilon^{e}(l(t),L(t))$ will be detailed in the next sections for the
linear and non-linear models. For now we note that they depend, in addition
to $l(t)$, on the initial length $l_{0}=l(0)$ and the rest-length $L(t)$ of
the visco-elastic branch. This rest-length can be interpreted as an internal
variable, whose evolution mimics the viscous response of Maxwell models [16].
Figure 1: Schematic view of 1-dimensional model, illustrating both elastic and
visco-elastic branches with dissipative friction.
More specifically, $L(t)$ changes according to the following evolution law
$\dot{L}(t)=\gamma(l(t-\tau)-L(t-\tau)),t>0.$ (1)
Henceforth we denote by a superimposed dot the time derivatives, i.e.
$\dot{(\bullet)}=d(\bullet)/dt$. Parameter $\gamma>0$ is the _remodelling
rate_ , which measures the rate at which the cell adapts its length to the
difference $l(t-\tau)-L(t-\tau)$. We have introduced the delay parameter
$\tau\geq 0$ which aims at mimicking the measured time-lag between the
chemical signalling and the internal mechanical remodelling in the cell, as
measured in different systems such as Drosophila dorsal closure [7] or in
wound healing [28], and which in these cases is in the order of a few minutes.
We also include in our model a viscous friction $\sigma_{\eta}$ with the
external substrate or environment, and given by an external force
$\sigma_{\eta}(t)=-\eta\dot{l}(t)$, with $\eta\geq 0$ a viscous coefficient
(see Figure 1). In total, the balance law,
$\sigma_{\eta}=\sigma^{e}+\sigma^{v}$ reads in our case
$-\eta\dot{l}(t)=k_{1}\varepsilon(l(t),l_{0})+k_{2}\varepsilon^{e}(l(t),L(t)),\ t>0,$ (2)
which should be solved together with the evolution law in (1). Due to the
presence of the delay $\tau$, initial conditions must be specified for
$t\in[-\tau,0]$. For simplicity, we assume constant values
$\displaystyle l(t)=l_{0},\ t\in[-\tau,0],$ (3) $\displaystyle L(t)=L_{0},\
t\in[-\tau,0],$ (4)
with $l_{0}$ and $L_{0}$ given constants. In the next sections we will analyse
the stability and oscillatory regime of the system of Delay Differential
Equations (DDE) for linear and non-linear definitions of the strain measures
$\varepsilon$ and $\varepsilon^{e}$.
## 3 Stability analysis of linear model
### 3.1 Characteristic equations and analytical bounds
In order to ease the stability analysis, we assume here linear definitions of
the strain measures:
$\displaystyle\begin{aligned} \varepsilon(l(t),l_{0})&=l(t)-l_{0},\\\
\varepsilon^{e}(l(t),L(t))&=l(t)-L(t).\end{aligned}$
Inserting these expressions into the balance equation (2), the set of DDEs
takes the following form:
$\displaystyle-\eta\dot{l}(t)$
$\displaystyle=k_{1}\left(l(t)-l_{0}\right)+k_{2}\left(l(t)-L(t)\right),$
$\displaystyle t>0$ (5) $\displaystyle\dot{L}(t)$
$\displaystyle=\gamma(l(t-\tau)-L(t-\tau)),$ $\displaystyle t>0$ (6)
with the initial conditions in (3). The coupled system of DDE can be written
in a compact form as
$\dot{\mathcal{L}}(t)+\mathbf{A}\mathcal{L}(t)+\mathbf{B}\mathcal{L}(t-\tau)+\mathbf{c}=\mathbf{0},t>0,$
(7)
with
$\mathcal{L}(t)=\left\\{\begin{array}[]{c}l(t)\\\ L(t)\end{array}\right\\}\ ;\
\mathbf{A}=\left[\begin{array}[]{cc}\frac{k_{1}+k_{2}}{\eta}&-\frac{k_{2}}{\eta}\\\
0&0\end{array}\right]\ ;\ \mathbf{B}=\left[\begin{array}[]{cc}0&0\\\
-\gamma&\gamma\end{array}\right]\ ;\
\mathbf{c}=\left\\{\begin{array}[]{c}\frac{k_{1}l_{0}}{\eta}\\\
0\end{array}\right\\}.$
Generally, the solution of the coupled system of DDE in (7) is characterized
qualitatively (e.g. asymptotic, synchronous, oscillatory) by the exponents or
the roots of the characteristic function [9, 22]. In order to obtain this
characteristic function, one might search for a solution in the form,
$\mathcal{L}(t)=\sum_{i}e^{m_{i}t}\mathcal{L}_{i}+\mathcal{L}_{0},$ (8)
where $\mathcal{L}_{0}$ and $\mathcal{L}_{i}$ are constant vectors that depend
on the chosen initial values, and $m_{i}\in\mathbb{C}$ are the characteristic
exponents. Clearly if all the exponent have negative real parts, i.e.
$Re(m_{i})<0$, the solution is asymptotically stable with time. Substituting
Eq. (8) into Eq. (7) gives for each term in the summation
$\left(m_{i}\mathbf{I}+\mathbf{A}+\mathbf{B}e^{-m_{i}\tau}\right)\mathcal{L}_{i}=\mathbf{0}.$
We remark that the above linear transformation must hold regardless of the
initial conditions, that is to say, the determinant must always vanish. This
allows us to express the characteristic function of the system as the
determinant of the above matrix, which gives
$f(m):=m^{2}+\gamma me^{-m\tau}+\frac{k_{1}+k_{2}}{\eta}m+\frac{\gamma
k_{1}}{\eta}e^{-m\tau}=0.$ (9)
We decompose the characteristic function to real and imaginary parts by
substituting $m=\alpha+i\beta$ and then separating each part, leading to the
following non-linear system of equations,
$\displaystyle\text{Re}\;f(m)$
$\displaystyle=\alpha^{2}-\beta^{2}+\frac{k_{1}+k_{2}}{\eta}\alpha+\gamma
e^{-\alpha\tau}\left(\left(\alpha+\frac{k_{1}}{\eta}\right)\cos(\beta\tau)+\beta\sin(\beta\tau)\right),$
(10) $\displaystyle\text{Im}\;f(m)$
$\displaystyle=2\alpha\beta+\frac{k_{1}+k_{2}}{\eta}\beta+\gamma
e^{-\alpha\tau}\left(\beta\cos(\beta\tau)-\left(\alpha+\frac{k_{1}}{\eta}\right)\sin(\beta\tau)\right).$
The stability regions in the parameter space are delimited by the borders
where the number of unstable exponents changes, that is, where at least one
characteristic exponent crosses the imaginary axis from left to right. In such
a case Eq. (10) has at least one solution with positive $\alpha$. Here, we
have constructed the phase diagram by solving the system in Eq. (10)
numerically while monitoring the values of $\alpha$ (see Fig. 2). If there is
at least one root with a positive $\alpha$, the solution is considered
unstable.
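The decomposition in Eq. (10) can be sanity-checked by evaluating the characteristic function directly in complex arithmetic (a sketch of our own; the parameter values used in the test are arbitrary):

```python
import cmath
import math

def char_f(m, k1, k2, eta, gamma, tau):
    """Characteristic function f(m) of Eq. (9), for complex m."""
    return (m * m + gamma * m * cmath.exp(-m * tau)
            + (k1 + k2) / eta * m + gamma * k1 / eta * cmath.exp(-m * tau))

def re_im(alpha, beta, k1, k2, eta, gamma, tau):
    """Real and imaginary parts of f(alpha + i*beta), Eq. (10)."""
    e = gamma * math.exp(-alpha * tau)
    a = alpha + k1 / eta
    re = (alpha**2 - beta**2 + (k1 + k2) / eta * alpha
          + e * (a * math.cos(beta * tau) + beta * math.sin(beta * tau)))
    im = (2 * alpha * beta + (k1 + k2) / eta * beta
          + e * (beta * math.cos(beta * tau) - a * math.sin(beta * tau)))
    return re, im
```

Scanning `re_im` for simultaneous zeros with positive `alpha` reproduces the instability test used for the phase diagrams.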
Figure 2: Phase diagrams for different pairs of material parameters. (a) Plane
$(k_{1},k_{2})$, (b) plane $(k_{1},\eta)$, (c) plane $(k_{2},\eta)$ and (d)
plane $(\tau,\gamma)$. The curves show stability borders for different values
of the off-plane parameters. Continuous lines are obtained with the numerical
solution of Eq. (10). Dashed lines represent the sufficient stability
condition in Eq. (11). The regions labelled as stable are those with negative
values of $\alpha$, and those labelled as unstable are the regions with at
least a single positive $\alpha$.
With the aim of furnishing a practical bound for detecting stable solutions,
we also give the following result:
###### Proposition 1.
The solution of the system of delay differential equations in Eq. (7) with
initial conditions in Eq. (3) is stable as long as,
$k_{1}+k_{2}-\gamma\eta-k_{1}\gamma\tau>0.$ (11)
Proof. Condition (11) is derived by resorting to the results in [24] and
analysing the so-called D-curves defined as,
$\displaystyle R(\omega):=\text{Re}\;f(i\omega)$
$\displaystyle=-\omega^{2}+\gamma\left(\frac{k_{1}}{\eta}\cos(\omega\tau)+\omega\sin(\omega\tau)\right)$
(12) $\displaystyle S(\omega):=\text{Im}\;f(i\omega)$
$\displaystyle=\frac{k_{1}+k_{2}}{\eta}\omega+\gamma\left(\omega\cos(\omega\tau)-\frac{k_{1}}{\eta}\sin(\omega\tau)\right)$
(13)
with $\omega\in[0,+\infty)$. The functions $R(\omega)$ and $S(\omega)$ provide
infinite parametric curves that mark the region with constant number of
unstable characteristic exponents. In particular, we resort to Theorem 2.19 in
[24], which indicates that the zeros of Eq. (9) have no positive real parts if
and only if,
$S(\rho_{k})\neq 0\quad k=1,..,r,$ (14)
and
$\sum_{k=1}^{r}(-1)^{k}S(\rho_{k})=-1,$ (15)
where $\rho_{1}\geq...\geq\rho_{r}\geq 0$ are the non-negative roots of
$R(\omega)$, with $r$ being an odd number. Moreover, we introduce a polynomial
$S^{-}(\omega)$ which defines a lower bound for the function $S(\omega)$ such
that,
$0<S^{-}(\omega)\leq S(\omega)\quad\quad\text{for}\quad\omega\in(0,+\infty).$
(16)
In case that $S(\omega)$ satisfies the stability conditions in Eq. (14) and
(15), $S^{-}(\omega)$ will also satisfy them by construction. An adequate
choice for the polynomial $S^{-}(\omega)$ can be obtained by exploiting the
following inequalities,
$\cos(\omega\tau)\geq-1,\quad-\sin(\omega\tau)\geq-\omega\tau\quad\quad\text{for}\quad\omega\in(0,+\infty)$
which lead to,
$S^{-}(\omega)=\Big{(}\frac{k_{1}+k_{2}}{\eta}-\gamma-\frac{k_{1}}{\eta}\gamma\tau\Big{)}\omega.$
Since $\omega>0$, the condition in Eq. (16) is satisfied as long as
$k_{1}+k_{2}-\gamma\eta-k_{1}\gamma\tau>0.\quad\quad\qed$
We point out that the main benefit of Proposition 1 is that it holds in the
whole space of system parameters, offering the opportunity to cross-check
stability against relative variations of the system parameters. In the phase
diagrams in the parametric space, condition (11) is indicated by the dashed
lines in Fig. 2. As can be observed, it indicates stability regions that are
smaller than those obtained by solving Eq. (10) numerically. These plots
emphasise the fact that although the bound in Eq. (11) does not provide a
necessary condition, it provides a useful sufficient stability condition.
We also remark two salient conclusions from the expression of the bound, which
are confirmed in the phase diagrams: increasing values of $\gamma\tau$ have a
destabilising effect on the lengths $l(t)$ and $L(t)$, as previously
encountered in other models [17], while decreasing values of $\eta$ may render
the oscillations stable. The latter is an unexpected result, since increasing
viscosity generally has a stabilising or damping effect in mechanics. It can
be explained by the retardation or delay that viscosity introduces in the
stress response, similar to an increase of $\tau$.
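The bound of Proposition 1 is cheap to evaluate; the sketch below (our own code, with illustrative parameter values) also shows how increasing the friction $\eta$ eventually violates the sufficient condition:

```python
def sufficient_stable(k1, k2, gamma, eta, tau):
    """Sufficient (but not necessary) stability bound of Eq. (11):
    k1 + k2 - gamma*eta - k1*gamma*tau > 0."""
    return k1 + k2 - gamma * eta - k1 * gamma * tau > 0

# Illustrative values: raising the substrate friction eta violates the bound.
assert sufficient_stable(1.0, 5.0, 0.2, 1.0, 2.0)        # 5.4 > 0
assert not sufficient_stable(1.0, 5.0, 0.2, 30.0, 2.0)   # -0.4 < 0
```

Because the condition is only sufficient, parameter sets that fail it (such as those of Fig. 3a) may still be stable according to the numerical solution of Eq. (10).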
### 3.2 Numerical simulations
In order to verify the obtained stability limits, we have performed some
numerical tests considering the one-dimensional model presented in Fig. 1. The
test mimics a previous compression state, given by the following initial
conditions,
$\displaystyle l(t)=L(t)=1,$ $\displaystyle-\tau<t\leq 0$ (17) $\displaystyle l(-\tau)=0.9,\ L(-\tau)=1.$ (18)
In order to compare our results with previous values in the literature and
with more general boundary conditions, we will also test different prescribed
values of $l(t)$ and additional external forces. Indeed, in the presence of a
constant external force $f$, the equilibrium equation in (2) reads,
$\displaystyle-\eta\dot{l}(t)+f$
$\displaystyle=k_{1}\left(l(t)-l_{0}\right)+k_{2}\left(l(t)-L(t)\right)$ (19)
$\displaystyle\dot{L}(t)$
$\displaystyle=\gamma\left(l(t-\tau)-L(t-\tau)\right)$ (20)
#### 3.2.1 Unloaded free conditions
A backward Euler implicit time discretisation of the equations in (19) yields
the following set of equations, which are computed sequentially,
$\begin{split}L_{n+1}&=L_{n}+\Delta t\,\gamma(l_{n-d}-L_{n-d}),\qquad d=\tau/\Delta t,\\\ l_{n+1}&=\frac{1}{\eta/\Delta t+k_{1}+k_{2}}\left(\frac{\eta}{\Delta t}l_{n}+f_{n+1}+k_{1}l_{0}+k_{2}L_{n+1}\right)\end{split}$ (21)
We here consider the case $f_{n}=0$, $n=0,1,2,\ldots,200/\Delta t$, with
$\Delta t=0.01$, which is found to be sufficiently accurate when compared with
smaller values. The resulting evolution of $l_{n}$ and $L_{n}$ is consistent
with the stability analysis of the previous section. The presence of the delay
$\tau>0$ produces oscillatory solutions for $l$ and $L$, as can be seen in
Fig. 3. The stability of these oscillations depends on the model parameters,
as indicated in the stability diagrams in Fig. 2. The first case in Fig. 3a
corresponds to stable oscillations, with parameters inside the stability
domain, while the second case in Fig. 3b yields unstable oscillations, with
parameters that exceed the stability limits.
(a) Model parameters: $k_{1}=2$, $k_{2}=3$, $\eta=8$, $\gamma=0.5$, $\tau=6$
(b) Model parameters: $k_{1}=3$, $k_{2}=2$, $\eta=8$, $\gamma=0.5$, $\tau=6$
Figure 3: Time evolution of current length and rest-length for free unloaded
conditions. (a) Parameters belonging to the stable domain. (b) Choice of
parameters that lie outside of the stable domain.
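A minimal NumPy sketch of the scheme in Eq. (21) follows; this is our own code, not the authors' (the array layout and function name are our choices). The history on $[-\tau,0]$ is constant except for the compression at $t=-\tau$, as in Eqs. (17)-(18):

```python
import numpy as np

def simulate(k1, k2, eta, gamma, tau, f=0.0, dt=0.01, T=200.0,
             l0=1.0, L0=1.0, l_init=0.9):
    """Backward-Euler integration of Eq. (21) with a constant history
    on [-tau, 0], perturbed only at t = -tau."""
    d = int(round(tau / dt))        # delay expressed in time steps
    n = int(round(T / dt))
    l = np.full(n + d + 1, l0)      # indices 0..d store the history
    L = np.full(n + d + 1, L0)
    l[0] = l_init                   # l(-tau) = 0.9
    for i in range(d, n + d):
        L[i + 1] = L[i] + dt * gamma * (l[i - d] - L[i - d])
        l[i + 1] = ((eta / dt) * l[i] + f + k1 * l0 + k2 * L[i + 1]) \
                   / (eta / dt + k1 + k2)
    return l[d:], L[d:]             # samples on [0, T]

# Parameters of Fig. 3a (inside the stability domain).
l_stable, L_stable = simulate(2.0, 3.0, 8.0, 0.5, 6.0)
```

Rerunning with the parameters of Fig. 3b reproduces the growing oscillations of the unstable case.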
#### 3.2.2 Prescribed deformation
We here choose a constant value of the apparent length $l(t)$, with an initial
discontinuity:
$\displaystyle L(t)=L_{0}=1,$ $\displaystyle\ -\tau\leq t\leq 0,$
$\displaystyle l(-\tau)=0.9,l=l_{0}=1,$ $\displaystyle\ -\tau<t.$
In this case, $\dot{l}(t)=0,\ t>0$, so the first differential equation gives a
reaction force term equal to $k_{2}(l_{0}-L(t))$, while the DDE reads
$\displaystyle\dot{L}$ $\displaystyle=\gamma(l_{0}-L(t-\tau)).$
This DDE (or equivalent forms) has been extensively studied [22, 17], and is
known to yield oscillatory values of rest-length $L(t)$ whenever
$\gamma\tau>\frac{1}{e}$, and unstable oscillations whenever
$\gamma\tau>\frac{\pi}{2}$. This has been confirmed by the numerical
simulations in Fig. 4.
(a) Model parameters: $\gamma=0.35$, $\tau=4$
(b) Model parameters: $\gamma=0.35$, $\tau=5$
Figure 4: The evolution of the rest-length with fixed values for the apparent
length $l(t)$. The stability is in this case identical to the friction-less
models [17]: (a) Oscillatory solution when $\tau\gamma>\frac{1}{e}$, (b)
unstable solution arise whenever $\tau\gamma>\frac{\pi}{2}$.
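These two thresholds can be verified with a short numerical sketch (our own code; for simplicity we perturb the constant history itself rather than reproducing the paper's exact initial data):

```python
import numpy as np

def relax(gamma, tau, T=100.0, dt=0.01, L_init=0.9, l0=1.0):
    """Explicit-Euler integration of L'(t) = gamma*(l0 - L(t - tau))
    with constant history L(t) = L_init on [-tau, 0]."""
    d = int(round(tau / dt))
    n = int(round(T / dt))
    L = np.full(n + d + 1, L_init)
    for i in range(d, n + d):
        L[i + 1] = L[i] + dt * gamma * (l0 - L[i - d])
    return L[d:]

stable  = relax(0.5, 0.4)   # gamma*tau = 0.2 < 1/e : monotone decay to l0
growing = relax(0.5, 5.0)   # gamma*tau = 2.5 > pi/2 : growing oscillations
```

Intermediate values $1/e<\gamma\tau<\pi/2$ yield decaying oscillations, matching Fig. 4a.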
#### 3.2.3 Prescribed forces
We now impose an external force $f=0.2$. Since this value only affects the
vector $\mathbf{c}$ in Eq. (7), the stability is consequently unaffected by
the value of $f$. The plots in Fig. 5 confirm this fact. These
plots show the apparent length as a function of time, while the rest-length is
shown as the contourplot on the varying domain $x\in[0,l(t)]$.
(a) Model parameters: $k_{1}=1$, $k_{2}=1$, $\eta=1$, $\gamma=0.5$, $\tau=6$
(b) Model parameters: $k_{1}=1$, $k_{2}=1$, $\eta=3$, $\gamma=0.6$, $\tau=6$
Figure 5: The evolution of the current length and the rest-length (color map)
with prescribed compression forces $f$ ($f(x=0)=0.2,\quad f(x=1)=-0.2$). (a)
the solution inside the stability domain. (b) the time evolution as the
stability limit is exceeded.
## 4 Extension to a non-linear strain-based model
We now use a non-dimensional definition of the strains
$\displaystyle\varepsilon(l(t),l_{0})$
$\displaystyle=\frac{l(t)-l_{0}}{l_{0}},$
$\displaystyle\varepsilon^{e}(l(t),L(t))$
$\displaystyle=\frac{l(t)-L(t)}{L(t)}.$
While these are more common, non-dimensional strain measures, the resulting
expressions, when inserted into the equilibrium equation in (2), yield a set
of non-linear DDEs:
$\displaystyle-\eta\dot{l}(t)$
$\displaystyle=k_{1}\left(\frac{l(t)-l_{0}}{l_{0}}\right)+k_{2}\left(\frac{l(t)-L(t)}{L(t)}\right),$
(22) $\displaystyle\dot{L}(t)$
$\displaystyle=\gamma\left(l(t-\tau)-L(t-\tau)\right).$ (23)
We aim at studying the oscillatory character and stability of these equations.
However, due to their non-linearity, we cannot directly apply the methodology
previously presented. We instead analyse the linearised form of equation (22)
at time $t_{0}$. By setting $\delta l(t)=l(t)-l(t_{0})$ and $\delta
L(t)=L(t)-L(t_{0})$, the linear terms read,
$\displaystyle-\eta\delta\dot{l}(t)=$ $\displaystyle\frac{k_{1}}{l_{0}}\delta l(t)+\frac{k_{2}}{L(t_{0})}\delta l(t)-\frac{k_{2}l(t_{0})}{L(t_{0})^{2}}\delta L(t).$ (24)
It then follows that, by defining the modified stiffness parameters,
$\displaystyle\hat{k}_{1}$ $\displaystyle=\frac{k_{1}}{l_{0}}+\frac{k_{2}}{L(t_{0})}\left(1-\frac{l(t_{0})}{L(t_{0})}\right),$ (25) $\displaystyle\hat{k}_{2}$ $\displaystyle=\frac{k_{2}l(t_{0})}{L(t_{0})^{2}},$ (26)
equation (24) is equivalent to the linear terms in the equilibrium equation in
(5), but replacing $(k_{1},k_{2})$ by $(\hat{k}_{1},\hat{k}_{2})$ and in terms
of $\delta l(t)$ and $\delta L(t)$ instead of $l(t)$ and $L(t)$. This allows
us to understand some of the numerical solutions obtained for the non-linear
case.
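The modified stiffness mapping can be sketched as follows (our own code; the first term of $\hat{k}_{1}$ follows the coefficient of $\delta l(t)$ in the linearisation (24), so that $\hat{k}_{1}+\hat{k}_{2}=k_{1}/l_{0}+k_{2}/L(t_{0})$):

```python
def modified_stiffness(k1, k2, l, L, l0=1.0):
    """Modified stiffness parameters at a state (l, L) = (l(t0), L(t0)).

    The first term of k1_hat follows the coefficient of delta-l(t)
    in the linearisation, Eq. (24)."""
    k1_hat = k1 / l0 + (k2 / L) * (1.0 - l / L)
    k2_hat = k2 * l / L**2
    return k1_hat, k2_hat

# At the undeformed state the modified parameters reduce to (k1, k2) for l0 = 1.
print(modified_stiffness(1.0, 1.0, 1.0, 1.0))  # -> (1.0, 1.0)
```

Evaluating this map along a simulated trajectory $(l(t_{0}),L(t_{0}))$ traces the paths plotted in the $(k_{1},k_{2})$ phase diagrams of Figs. 6-8.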
Figure 6a shows the time evolution of $l(t)$ and $L(t)$, whose oscillations
are sustained, that is, their asymptotic amplitude neither grows nor decays.
We plot in the parametric space of $k_{1}$ and $k_{2}$ the modified parameters
$\hat{k}_{1}$ and $\hat{k}_{2}$ for each time $t_{0}$, as shown in Fig. 6b. It
can be observed that although the initial values are located in the unstable
region, they oscillate between the unstable and stable regions, reaching a
limit cycle that alternates between the two domains.
(a) Time evolution of current length and rest-length
(b) The evolution of $\hat{k}_{1}$ and $\hat{k}_{2}$.
Figure 6: Numerical solution with sustained oscillations of the non-linear
model. Parameters: $k_{1}=1$, $k_{2}=1$, $\eta=3$, $\gamma=0.6$, $\tau=5$
We have also tested other parameter settings, with an initial location of
($\hat{k}_{1},\hat{k}_{2}$) in the parametric space farther from the stability
boundary (see Fig. 7). In this case, the system exhibits oscillations that
reach the singular value $L(t)=0$ for some $t>0$, which renders the DDEs in
(22) ill-posed. Instead, when using values that are farther inside the
stability region, as it is the case in Fig. 8, the oscillations stabilise
before reaching this singular value. Although we are not able to furnish
bounds for non-linear stability, we can explain the presence of stable,
sustained, or unstable (or singular) oscillations according to the distance of
the initial value of $(\hat{k}_{1},\hat{k}_{2})$ to the stability boundary of
the linear case.
(a) Time evolution of current length and rest-length
(b) The evolution of $\hat{k}_{1}$ and $\hat{k}_{2}$
Figure 7: Numerical solution with unstable oscillations on the non-linear
model. Parameters: $k_{1}=2$, $k_{2}=1$, $\eta=3$, $\gamma=0.6$, $\tau=5$
(a) Time evolution of current length and rest-length
(b) The evolution of $\hat{k}_{1}$ and $\hat{k}_{2}$
Figure 8: Numerical solution with stable oscillations of the non-linear model.
Parameter $k_{1}=1$, $k_{2}=2$, $\eta=3$, $\gamma=0.6$, $\tau=5$
## 5 Conclusions
Motivated by the presence of delays and the visco-elastic response of tissues, we have presented a rheological model that includes elastic and viscous contributions, and also exhibits oscillatory behaviour.
We have analysed the stability of the model when using a linear strain measure, as a function of the model parameters. We have recovered previous results, which show that increasing values of the delay $\tau$ and the remodelling rate $\gamma$ (a quantity that is inversely proportional to tissue viscosity) render the oscillations unstable. Remarkably, increasing values of the viscous friction of the domain with respect to the external boundary also destabilise the system.
By studying the characteristic function of the DDE we have provided sufficient conditions for stability and bounds on the stability region. This analysis has also allowed us to explain the presence of sustained oscillations in a non-linear version of the model. These persistent oscillations in the tissue deformations are frequently observed [18, 20], and in our model are due to the transition between stable and unstable domains.
We note that although visco-elastic models based on rest-length changes are increasingly common [4, 5, 14], their stability in the presence of delayed response has not been studied. Here we provide such an analysis, which may also help to explain the observed sudden deformations in embryo development and morphogenesis.
## Acknowledgements
JJM and MD have been financially supported by the Spanish Ministry of Science,
Innovation and Universities (MICINN) with grant DPI2016-74929-R and by the
local government _Generalitat de Catalunya_ with grant 2017 SGR 1278.
## References
* [1] T. Alarcón, Ph. Getto, and Y. Nakata. Stability analysis of a renewal equation for cell population dynamics with quiescence. SIAM J. Appl. Math., 74(4):1266–1297, 2014.
* [2] F. M. Asl and A. G. Ulsoy. Analysis of a system of linear delay differential equations. J. Dyn. Sys. Meas. Contr., 125:215–223, 2003.
* [3] C. Borja, E. Moral, and J.J. Muñoz. Viscoelasticity and Collective Cell Migration: An interdisciplinary perspective across levels of organization, chapter 5: Effects of time delays and viscoelastic parameters in oscillatory response cell monolayers. Elsevier, 2020. In press.
* [4] K.E. Cavanaugh, M.F. Staddon, E. Munro, S. Banerjee, and M.L. Gardel. RhoA mediates epithelial cell shape changes via mechanosensitive endocytosis. Dev. Cell, 52(2):152–166, 2020.
* [5] R. Clément, C. Collinet, B. Dehapiot, T. Lecuit, and P.F. Lenne. Viscoelastic dissipation stabilizes cell shape changes during tissue morphogenesis. Current Biol., 27(20):3132–3142, 2017.
* [6] R.M. Corless, G.H. Gonnet, D.E.G. Hare, D.J. Jeffrey, and D.E. Knuth. On the Lambert W function. Adv. Comp. Math., 5:329–359, 1996.
* [7] K. Dierkes, A. Sumi, J. Solon, and G. Salbreux. Spontaneous Oscillations of Elastic Contractile Materials with Turnover. Phys. Rev. Letters, 113:148102, 2014.
* [8] K. Doubrovinski, M. Swan, O. Polyakov, and E.F. Wieschaus. Measurement of cortical elasticity in drosophila melanogaster embryos using ferrofluids. Proc. Natl. Acad. Sci. USA, 114(5):1051–1056, 2017.
* [9] T. Erneux. Applied Delay Differential Equations, volume 3 of Surveys and Tutorials in the Applied Mathematical Sciences. Springer, New York, 2009.
* [10] P. Getto, M. Gyllenberg, Y. Nakata, and F. Scarabel. Stability analysis of a state-dependent delay differential equation for cell maturation: analytical and numerical methods. J. Math. Biol., 79:281–328, 2019.
* [11] M. Gyllenberg and H. J. A. M. Heijmans. An abstract delay-differential equation modelling size dependent cell growth and division. SIAM J. Math. Anal., 18(1):74–88, 1987.
* [12] T. Insperger and G. Stépán. Semi-discretization method for delayed systems. Int. J. Num. Meth. Engng., 55(5):503–518, 2002.
* [13] K. Kaouri, P. K. Maini, P. A. Skourides, N. Christodoulou, and S. J. Chapman. A simple mechanochemical model for calcium signalling in embryonic epithelial cells. J. Math. Biol., 78:2059–2092, 2019.
* [14] N. Khalilgharibi, J. Fouchard, N. Asadipour, R. Barrientos, M. Duda, A. Bonfanti, A. Yonis, A. Harris, P. Mosaffa, Y. Fujita, A. Kabla, Y. Mao, B. Baum, J.J. Muñoz, M. Miodownik, and G. Charras. Stress relaxation in epithelial monolayers is controlled by the actomyosin cortex. Nature Phys., 15:839–847, 2019.
* [15] A. Lapytsko and J. Schaber. The role of time delay in adaptive cellular negative feedback systems. J. Theor. Biol., 308:64–73, 2016.
* [16] J.J. Muñoz and S. Albo. Physiology-based model of cell viscoelasticity. Phys. Rev. E, 88(1):012708, 2013.
* [17] J.J. Muñoz, M. Dingle, and M. Wenzel. Mechanical oscillations in biological tissues as a result of delayed rest-length changes. Phys. Rev. E, 98(1):052409, 2018.
* [18] V. Petrolli, M.L. Goff, M. Tadrous, K. Martens, C. Allier, O. Mandula, L. Hervé, S. Henkes, R. Sknepnek, T. Boudou, G. Cappello, and M. Balland. Confinement-induced transition between wave-like collective cell migration modes. Phys. Rev. Letters, 122(16):168101, 2019.
* [19] G. Petrungaro, L. Morelli, and K. Uriu. Information flow in the presence of cell mixing and signalling delays during embryonic development. Sem. Cell Dev. Biol., 93:23–35, 2019.
* [20] G. Peyret, R. Mueller, J. d’Alessandro, S. Begnaud, P. Marcq, R.M. Mège, J.M. Yeomans, A. Doostmohammadi, and B. Ladoux. Sustained oscillations of epithelial cell sheets. Bioph. J., 117(3):454–478, 2019.
* [21] H. Shinozaki and T. Mori. Robust stability analysis of linear time-delay systems by Lambert W function: Some extreme point results. Automat., 42(1):1791–1799, 2006.
* [22] H. Smith. An Introduction to Delay Differential Equations with Applications to the Life Sciences. Texts in Applied Mathematics. Springer, New York, USA, 2011.
  * [23] J. Solon, A. Kaya-Copur, and D. Brunner. Pulsed forces timed by a ratchet-like mechanism drive directed tissue movement during dorsal closure. Cell, 137(7):1331–1342, 2009.
* [24] G. Stépán. Retarded dynamical systems: stability and characteristic functions, volume 210 of Pitman Res. Notes Math. Longman Scientific & Technical, Essex, UK, 1989.
* [25] H.T. Sykora, D. Bachrathy, and G. Stépán. Stochastic semi-discretization for linear stochastic delay differential equations. Int. J. Num. Meth. Engng., 119(9):879–898, 2019.
  * [26] T.P.J. Wyatt, J. Fouchard, A. Lisica, N. Khalilgharibi, B. Baum, P. Recho, A.J. Kabla, and G.T. Charras. Actomyosin controls planarity and folding of epithelia in response to compression. Nature Mater., 19:109–117, 2020. https://doi.org/10.1038/s41563-019-0461-x.
* [27] K. Yoshioka-Kobayashi, M. Matsumiya, Y. Niino, A. Isomura, H. Kori, A. Miyawaki, and R. Kageyama. Coupling delay controls synchronized oscillation in the segmentation clock. Nature, 580(7801):119–123, 2020.
* [28] T. Zulueta-Coarasa and R. Fernandez-Gonzalez. Dynamic force patterns promote collective cell movements during embryonic wound repair. Nature Phys., 14:750–758, 2018.
# Two years of pulsar observations with the Ultra-Wideband Receiver on the
Parkes radio telescope
Simon Johnston1, C. Sobey2, S. Dai1, M. Keith3, M. Kerr4, R. N. Manchester1,
L. S. Oswald5, A. Parthasarathy6, R. M. Shannon7,8, P. Weltevrede3
1CSIRO Astronomy and Space Science, Australia Telescope National Facility, PO
Box 76, Epping NSW 1710, Australia
2CSIRO Astronomy and Space Science, PO Box 1130, Bentley, WA 6102, Australia
3Jodrell Bank Centre for Astrophysics, The University of Manchester, Alan
Turing Building, Manchester M13 9PL, UK
4Space Science Division, Naval Research Laboratory, Washington, DC 20375, USA
5Department of Astrophysics, University of Oxford, Denys Wilkinson Building,
Keble Road, Oxford OX1 3RH, UK
6Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn,
Germany
7Centre for Astrophysics and Supercomputing, Swinburne University of
Technology, PO Box 218, Hawthorn, VIC 3122, Australia
8OzGrav: The ARC Centre of Excellence for Gravitational-wave Discovery,
Hawthorn VIC 3122, Australia E-mail<EMAIL_ADDRESS>
###### Abstract
The major programme for observing young, non-recycled pulsars with the Parkes
telescope has transitioned from a narrow-band system to an ultra-wideband
system capable of observing between 704 and 4032 MHz. We report here on the
initial two years of observations with this receiver. Results include
dispersion measure (DM) and Faraday rotation measure (RM) variability with
time, determined with higher precision than hitherto, flux density
measurements and the discovery of several nulling and mode changing pulsars.
PSR J1703–4851 is shown to be one of a small subclass of pulsars that has a
weak and a strong mode which alternate rapidly in time. PSR J1114–6100 has the
fourth highest |RM| of any known pulsar despite its location far from the
Galactic Centre. PSR J1825–1446 shows variations in both DM and RM likely due
to its motion behind a foreground supernova remnant.
###### keywords:
pulsars: general
Publication year: 2020.
## 1 Introduction
Results from the long-term monitoring of millisecond pulsars have the
potential to do fundamental science, such as tests of theories of gravity
(Kramer et al., 2006b; Archibald et al., 2018), determining the equation of
state of matter (Antoniadis et al., 2013), and the detection of gravitational
waves (Shannon et al., 2015; Aggarwal et al., 2019). Such experiments have
limitations based on our incomplete knowledge of the pulsar emission
mechanism, the pulsar spin-down and on the Galactic magneto-ionised medium
through which the radio waves propagate. Therefore, observations over decades
of the bulk of the population of radio pulsars are warranted as they allow
science to be gleaned both from the behaviour of the pulsar itself and from
the properties of the interstellar medium (ISM). As examples of the former,
carried out with the Parkes telescope, Yu et al. (2013) compiled glitch
statistics, Parthasarathy et al. (2019, 2020) investigated the timing noise
properties of pulsars over more than a decade and Johnston & Kerr (2018)
investigated the polarization properties of 600 pulsars. Meanwhile, changes in
pulse profiles with time (Lyne et al., 2010) and the implications thereof were
discussed in Brook et al. (2016) and Kerr et al. (2016). The properties of the
interstellar medium were explored through long-term changes in the dispersion
measure (Petroff et al., 2013), extreme scattering events (Kerr et al., 2018)
and variations in flux density (Kumamoto et al., 2020) for a large sample of
pulsars.
The Parkes radio telescope has been observing and timing pulsars over the past
30 years at a variety of observing frequencies and employing signal processing
techniques with increasingly better capabilities. In the 1990s and early
2000s, analogue filterbanks operating at an observing frequency of 1400 MHz
were used with timing programs described in e.g. Johnston et al. (1995) and
Wang et al. (2000). Subsequently, digital filterbanks with better frequency
resolution and full polarization capability were routinely employed. In 2007,
a major timing programme of young, energetic pulsars was set up (Weltevrede et
al., 2010a) to provide ephemerides for NASA’s Fermi Gamma-ray Space Telescope
mission (Smith et al., 2008). The programme used a central observing frequency
of 1369 MHz and a bandwidth of 256 MHz to monitor a sample of some 150 pulsars
on a monthly basis with twice-yearly observations at 3.1 GHz and 0.7 GHz. The
programme achieved its goal of increasing the sample of $\gamma$-ray pulsars
more than ten-fold (Weltevrede et al., 2010b; Abdo et al., 2013; Smith et al.,
2019). In late 2018, the Ultra-Wideband receiver (UWL) was installed on the
Parkes telescope. The UWL allows observing over the entire band between 704
and 4032 MHz. The FPGA-based digital filterbanks were replaced by a GPU-based
software system known as Medusa which enables fully flexible backend
configurations to be used. A comprehensive description of the capabilities of
the UWL and Medusa can be found in Hobbs et al. (2020).
Figure 1: Polarization profiles for PSR J1701–3726 at eight different frequencies across the UWL band. The black line denotes total intensity; the red and blue lines show the linear and circular polarization, respectively. The position angle of the linear polarization as a function of pulse phase is also shown.
In this paper we present results from the first 24 months of young pulsar
observations with the UWL, mainly concentrating on aspects of the time-
variability of the sample. A polarization study of the brightest pulsars is
presented in a companion paper (Sobey et al., 2021). Section 2 describes the
observations. In Section 3 we look at profile variations with time, section 4
looks at flux density variability, with sections 5 and 6 examining dispersion
measure (DM) and rotation measure (RM) changes respectively.
## 2 Observations and data reduction
The initial impetus for this observation programme was the launch of the Fermi
satellite in 2008. The programme started observing some 150 pulsars with high
spin-down energy loss rates, $\dot{E}$, derived from the lists given in Smith
et al. (2008). In 2014 the programme changed emphasis. Observations of some of
the weaker, high $\dot{E}$ pulsars were discontinued as it was evident that
they were not $\gamma$-ray emitters. A substantial number of bright, lower
$\dot{E}$ pulsars were added when it was realised that these too could be
$\gamma$-ray pulsars (Smith et al., 2019). Since 2014 therefore, 276 pulsars
are observed in each monthly session in a single block of duration 21 hours.
The list of pulsars is given in Table 5.
Observations with the UWL commenced in 2018 November and have occurred roughly
monthly thereafter for a total of 23 sessions up to 2020 October. The band
between 704 and 4032 MHz is subdivided into 3328 frequency channels each with
a channel bandwidth of 1 MHz. Data for each channel are coherently dedispersed
at the dispersion measure (DM) of the pulsar and folded at the topocentric
spin period to form a pulse profile with 1024 bins across the pulsar period.
Data are integrated over 30 seconds and written to disk. The typical
observation length for each pulsar is 180 s with a handful of weaker pulsars
observed for up to 480 s. Every 60 minutes, observations are made of a pulsed,
square-wave, calibration signal to allow for polarization calibration. Flux
calibration is carried out via observations of the source Hydra A. A complete
end-to-end description of the system can be found in Hobbs et al. (2020).
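Coherent dedispersion per channel matters here because the cold-plasma dispersion law spreads a pulse strongly across so wide a band. A minimal sketch of the standard delay formula (with an illustrative DM, not a value quoted in this paper) shows both the delay across the full 704–4032 MHz band and the residual smearing inside a single 1 MHz channel:

```python
# Dispersive delay relative to infinite frequency (cold-plasma dispersion law)
K_DM = 4.148808e3  # s MHz^2 / (pc cm^-3), standard dispersion constant

def dispersion_delay(dm, f_mhz):
    """Delay in seconds for a given DM (pc cm^-3) at frequency f_mhz (MHz)."""
    return K_DM * dm / f_mhz**2

dm = 100.0                # illustrative DM, not from the paper
lo, hi = 704.0, 4032.0    # UWL band edges in MHz
smear = dispersion_delay(dm, lo) - dispersion_delay(dm, hi)
print(f"Delay across {lo:.0f}-{hi:.0f} MHz at DM={dm:g}: {smear:.3f} s")

# Residual smearing within one 1 MHz channel at the bottom of the band,
# which is what the per-channel coherent dedispersion removes:
chan = dispersion_delay(dm, lo) - dispersion_delay(dm, lo + 1.0)
print(f"Smearing in one 1 MHz channel at {lo:.0f} MHz: {chan*1e3:.2f} ms")
```

Even at this modest DM the band-edge delay approaches a second, while the within-channel smear is a few milliseconds, comparable to many pulse features, hence the coherent treatment.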
The data are all processed using psrchive (Hotan et al., 2004) and data
reduction proceeds as follows. First, the calibration observations are
examined and radio frequency interference (RFI) signals are removed. The
pulsar observation is then calibrated. The square-wave calibration signal
provides polarization calibration through correction of the gains and phases.
The observation of the flux calibrator converts the digitiser units to mJy.
Finally the technique described in van Straten (2013) is used to account for
the instrumental leakage terms. A two-step process is then used to remove RFI
from the pulsar data. First the data are summed in time and the RFI flagged in
the frequency channels using a median-filter technique and then the individual
time steps are examined and bad time integrations are flagged before once
again the data are summed in time.
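The first step of that two-step excision (flagging frequency channels against a running median of the time-summed data) can be pictured with a toy version; this is an illustrative stand-in with a hypothetical `flag_channels` helper, not psrchive's actual algorithm:

```python
import numpy as np

def flag_channels(bandpass, window=21, threshold=5.0):
    """Flag channels whose time-summed power deviates from a running median
    by more than `threshold` robust (MAD-based) sigmas.
    Illustrative stand-in for psrchive's RFI zapping, not its actual code."""
    pad = window // 2
    padded = np.pad(bandpass, pad, mode="edge")
    med = np.array([np.median(padded[i:i + window])
                    for i in range(bandpass.size)])
    resid = bandpass - med
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    return np.abs(resid) > threshold * sigma

rng = np.random.default_rng(0)
bp = rng.normal(1.0, 0.01, 3328)   # 3328 channels, as in the UWL setup
bp[500] += 1.0                     # inject a narrow-band RFI spike
mask = flag_channels(bp)
print("flagged channels:", np.flatnonzero(mask))
```

The median filter keeps the broad bandpass shape while isolating narrow-band outliers; the second step in the text applies an analogous test along the time axis.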
As an example of the output of the data processing, the polarization profiles
of PSR J1701–3726 at eight different frequencies across the band of the UWL
are shown in Figure 1.
## 3 Pulse profile variability
Figure 2: The flux density of PSR J1701–3726 in each 30 s integration. The integrations dominated by nulling are clearly seen.
Figure 3: The two modes of PSR J1703–4851. Left panel shows the bright mode
with a dominant central component. The right panel shows the weak mode. The
central observing frequency is 1400 MHz with a total bandwidth of 256 MHz.
For an individual pulsar, each rotation provides a snap-shot pulse profile and
these can differ significantly from pulse to pulse. Once sufficient pulses are
averaged, a characteristic pulse profile emerges which (at least to zeroth
order) appears to be stable over decades. It is this stability which makes
high-precision timing experiments possible. However, even in the early days of
pulsar research, a phenomenon loosely known as ‘mode changing’ was observed in
some pulsars (Backer, 1970b). In these cases, a given pulsar appears to have
two or more different, stable pulse profiles. Often, one of the modes is the
off or nulling state (e.g. Backer 1970a; Biggs 1992), while in some cases a
pulsar may have a ‘bright’ mode and a ‘quiet’ mode (e.g. Hermsen et al. 2013)
and in yet other cases have modes of different widths (Lyne et al., 2010;
Brook et al., 2016). The timescale for switching between modes can be as short
as one rotation period (Sobey et al., 2015) and as long as years (Kramer et
al., 2006a), but what sets the timescale has not yet been determined.
Furthermore, without sufficient sensitivity to single pulses, it is unclear
whether all pulsars exhibit mode-changing or whether this is restricted to a
particular class of pulsar. If we ignore the effects of mode-changing, then
pulsar profiles tend to stabilise to within a few percent after several
hundred rotations, with a small minority taking much longer to reach a stable
profile.
In this particular set of observations we are limited by the fact that we
produce profiles every 30 s and so short-term mode-changing is difficult to
detect. We also only observe once per month, making it difficult to ascertain
the timescale for mode changing. In spite of these caveats, we can measure the
flux density for every 30 s integration and inspect the time series for
variability. We describe below those pulsars which show nulling and/or other
time variable behaviour.
PSR J0729–1836: If sufficient rotations are obtained over a long integration
time, the pulse profile has a strong leading component followed by a plateau
before the trailing drop-off. There is little evolution with frequency. The
profiles obtained in individual 180 s observations (350 rotations) can look
very different due to the strong and independent modulation of the leading and
trailing components.
PSR J0820–4114: This pulsar has a low $\dot{E}$ of $5.4\times 10^{30}$ erg s$^{-1}$ and a very wide profile which spans more than 100° of longitude; it therefore appears to be an old, almost aligned rotator. It often appears to null for about 30 s and remain on for the rest of the 3 min observation, so its nulling fraction is less than 15%, which is low compared to other aligned rotators.
PSR J1048–5832: The profile of this pulsar has at least four components (Sobey
et al., 2021) and even though we average together 1450 rotations per 3-min
observation, the profile can look very different from one observation to the
next. We surmise that each component has very different amplitude statistics
with occasional bright pulses so that different components dominate depending
on the observation length. Single pulse studies are warranted.
PSR J1049–5833: This is a long-period (2.2 s) pulsar with a simple, Gaussian-like profile. Its nulls typically last for one third of the observation.
PSR J1114–6100: This pulsar shows short-duration nulls, with a null fraction
less than 20%.
PSR J1428–5530: The profile of the pulsar has two components which are blended
and the profile does not stabilise within the 180 s observation either in
total intensity or in the fraction of linear polarization. The pulsar
sometimes nulls within the 30 s sub-integration time and has a low nulling
fraction overall.
PSR J1646–6831: This pulsar has a spin period of 1.8 s and a low $\dot{E}$ of $1.2\times 10^{31}$ erg s$^{-1}$. The profile appears to show a classic double-conal
structure although the sweep of PA is highly distorted. The circular
polarization exceeds the linear polarization fraction in the centre of the
profile. The pulsar has short-duration nulls of which several can be seen per
observation.
PSR J1701–3726: This is a long period (2.5 s) pulsar with a complex profile
which varies with frequency (see Figure 1). It nulls within the 30 s
integration and is in the null state approximately 25% of the time. As a
result of the nulling, the profile takes a long time to stabilise. Figure 2
shows the flux density in each 30 s integration where the nulling is clearly
seen.
PSR J1703–4851: This pulsar was flagged in Kumamoto et al. (2020) as showing
unusual behaviour in its flux density variations which were clearly intrinsic
to the pulsar rather than a propagation effect. In our sample, it also stands
out as an exception. The pulsar shows two distinct modes. The left panel of
Figure 3 shows the profile of the pulsar in the ‘bright’ mode with a strong
central component and weak flanking components. The linear polarization is low
and the circular polarization changes sign under the main component. In the
‘weak’ mode (right panel of Figure 3) the central component is reduced in flux
density by about a factor of 10 compared to the bright mode and the flanking
components stay roughly constant. Of the 14 observations we have made of this
pulsar, five are in the bright mode and nine in the weak mode. In one
observation, made on 2019 July 6, the pulsar changes from the bright mode to
the weak mode in the middle of the observation indicating a fast switching
time and a relatively short duration for each mode. This pulsar merits closer attention; a single pulse analysis would prove beneficial.
PSR J1709–1640: The low DM of this pulsar means that it exhibits diffractive
scintillation, but the timescale of the scintillation is longer than the
observation duration. The pulsar has short duration nulls, but is occasionally
in the null state during the entire 3 min observation. We estimate the nulling
fraction to be 10%. Both Naidu et al. (2018) and Wang et al. (2020) obtain
similar results, but also see occasional nulls with a duration of several
hours.
PSR J1727–2739: This pulsar nulls with a typical duration of 30 s or longer,
and remains on for typically twice this duration. The overall nulling fraction
is therefore about 35%.
PSR J1733–3716: This pulsar has a peculiar profile with two separated components, each with its steep edge facing the middle of the profile. The leading component is brighter and modulates strongly; both components switch off simultaneously, and the nulls are quasi-periodic and of short duration. A single pulse study of this pulsar is given in Hu et al. (2020).
PSR J1745–3040: This pulsar also has a peculiar profile which consists of a
small leading component almost disjoint from a complex trailing component.
Although nulling is not evident, various parts of the profile seem to switch
off at various times and the flux in the leading component is often anti-
correlated with the flux of the trailing components.
PSRs J0034–0721, J1825–0935 and J1830–1059 are also part of our sample. These
pulsars have a significant literature and their details will not be repeated
here (see e.g. Ilie et al. 2020; Gil et al. 1994; Stairs et al. 2019
respectively).
## 4 Flux densities and spectra
Figure 4: Top panel: Flux density versus observing frequency on a log-log
scale for PSR J1157–6224 over 20 epochs. The spectral index is $-2.6\pm 0.05$. Bottom panel: Flux density at 1.4 GHz as a function of epoch.
The UWL allows us to measure the flux density and the spectrum of a pulsar
simultaneously over a wide bandwidth and to repeat this on a monthly basis.
Flux densities are measured by supplying a template to the psrchive routine
psrflux. Figure 4 shows the example of PSR J1157–6224, measured over 20
epochs. The spectral index is $-2.6\pm 0.05$, fluctuations in the flux density
are likely the result of refractive scintillation (Kumamoto et al., 2020) in
this high DM (325 cm-3 pc) pulsar.
A comprehensive survey of pulsar flux densities at a wide range of frequencies
was carried out by Jankowski et al. (2018). In that paper they identified five
different spectral types (i) simple power-law (ii) broken power-law (iii) log-
parabolic (iv) power-law with high-frequency cutoff and (v) power-law with
low-frequency turn-over. Many of the pulsars in our sample were classified in
that paper and the results will not be repeated here. However, our sample
contains 44 pulsars not in the Jankowski et al. (2018) paper. For these
pulsars, we summed together all the observations to smooth over the effects of
diffractive and refractive scintillation. We then measured the flux density of
each pulsar at 1400 MHz and examined their spectra. The spectrum for both a
simple power-law and a log-parabolic can be described by
${\rm log_{10}}S_{\nu}=a[{\rm log_{10}}(\nu/\nu_{0})]^{2}+b[{\rm
log_{10}}(\nu/\nu_{0})]+c$ (1)
where $S_{\nu}$ is the flux density at a frequency $\nu$ and $\nu_{0}$ is 1400
MHz. For the power-law case, $a=0$ and $b$ yields the spectral index. For the
log parabolic case, $a$ is the curvature and $b$ gives the local value of the
spectral index at $\nu_{0}$. Table 1 gives the results.
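Since Eq. (1) is simply a quadratic in $\log_{10}(\nu/\nu_{0})$, both spectral classes can be recovered with a single polynomial fit; a minimal sketch on synthetic power-law data (the `fit_spectrum` helper is hypothetical, not part of the paper's pipeline):

```python
import numpy as np

def fit_spectrum(freqs_mhz, flux_mjy, nu0=1400.0):
    """Fit Eq. (1): log10(S) = a*x^2 + b*x + c with x = log10(nu/nu0).
    Returns (a, b, c); a ~ 0 indicates a simple power-law with index b,
    otherwise a is the curvature and b the local index at nu0."""
    x = np.log10(np.asarray(freqs_mhz) / nu0)
    y = np.log10(np.asarray(flux_mjy))
    a, b, c = np.polyfit(x, y, 2)
    return a, b, c

# Synthetic power-law pulsar: S = 10 mJy * (nu / 1400 MHz)^-1.8
freqs = np.linspace(704.0, 4032.0, 50)
flux = 10.0 * (freqs / 1400.0) ** -1.8
a, b, c = fit_spectrum(freqs, flux)
print(f"curvature a = {a:+.3f}, spectral index b = {b:+.3f}")
```

For noiseless power-law input the curvature term vanishes and the fitted $b$ recovers the injected spectral index, mirroring how the PL and LP classes in Table 1 are distinguished.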
Of the 44 pulsars, only two show evidence for a low-frequency turnover in the
spectrum. PSRs J0631+1036 and J1751–3323 have spectra which peak near 1 GHz,
but the limited data below this frequency do not allow us to fully quantify
the spectrum. We therefore simply quote the spectral index for the power-law
spectrum above 1 GHz in Table 1. Of the rest, 7 are best fit with a log-
parabolic spectrum and 35 with a power-law. The fraction of pulsars in the
different classes is in line with the results found by Jankowski et al.
(2018).
Table 1: Flux density and spectral information for 44 pulsars. Columns 2 and 3 show the flux density at 1400 MHz ($S_{\rm 1400}$) and the associated uncertainty, $\sigma_{S}$. Column 8 denotes whether the fit is a power-law (PL) or log-parabolic (LP). In the former case, $a$ denotes the spectral index. For the latter case, $a$ is the curvature parameter and $b$ the spectral index, with $\sigma_{a}$ and $\sigma_{b}$ their respective uncertainties. PL-T denotes a low-frequency turnover; see text for details.
Jname | $S_{\rm 1400}$ | $\sigma_{S}$ | $a$ | $\sigma_{a}$ | $b$ | $\sigma_{b}$ | Type
---|---|---|---|---|---|---|---
| (mJy) | (mJy) | | | | |
J0525+1115 | 1.94 | 0.02 | –2.27 | 0.08 | . | . | PL
J0631+1036 | 1.11 | 0.01 | –0.33 | 0.03 | . | . | PL-T
J0738–4042 | 112.58 | 0.54 | –1.02 | 0.10 | –1.38 | 0.03 | LP
J0842–4851 | 1.07 | 0.01 | –1.78 | 0.06 | . | . | PL
J1012–5857 | 1.91 | 0.01 | –1.26 | 0.08 | –1.24 | 0.02 | LP
J1017–5621 | 2.20 | 0.01 | –0.93 | 0.16 | –2.13 | 0.05 | LP
J1046–5813 | 1.37 | 0.01 | –2.19 | 0.10 | . | . | PL
J1110–5637 | 3.29 | 0.02 | –0.85 | 0.11 | –1.23 | 0.03 | LP
J1114–6100 | 5.36 | 0.02 | –0.7 | 0.1 | –0.53 | 0.03 | LP
J1210–5559 | 1.27 | 0.01 | –2.17 | 0.03 | . | . | PL
J1225–6408 | 1.26 | 0.01 | –2.12 | 0.12 | . | . | PL
J1306–6617 | 4.91 | 0.02 | –1.6 | 0.2 | –1.25 | 0.05 | LP
J1418–3921 | 0.95 | 0.01 | –2.52 | 0.04 | . | . | PL
J1428–5530 | 7.67 | 0.01 | –2.3 | 0.1 | . | . | PL
J1430–6623 | 13.60 | 0.02 | –1.8 | 0.1 | . | . | PL
J1544–5308 | 5.82 | 0.01 | –1.71 | 0.06 | . | . | PL
J1555–3134 | 4.24 | 0.03 | –0.60 | 0.15 | . | . | PL
J1557–4258 | 3.14 | 0.01 | –2.56 | 0.06 | . | . | PL
J1559–4438 | 41.06 | 0.05 | –2.3 | 0.1 | . | . | PL
J1602–5100 | 8.23 | 0.03 | –2.08 | 0.03 | . | . | PL
J1603–5657 | 0.93 | 0.01 | –2.06 | 0.04 | . | . | PL
J1623–4256 | 2.60 | 0.02 | –2.35 | 0.05 | . | . | PL
J1633–4453 | 2.76 | 0.01 | –2.2 | 0.1 | . | . | PL
J1645–0317 | 25.76 | 0.03 | –2.6 | 0.2 | . | . | PL
J1649–3805 | 1.75 | 0.02 | –1.75 | 0.08 | . | . | PL
J1651–5255 | 2.72 | 0.02 | –2.14 | 0.06 | . | . | PL
J1652–2404 | 1.44 | 0.01 | –2.06 | 0.05 | . | . | PL
J1705–1906 | 5.66 | 0.03 | –1.54 | 0.04 | . | . | PL
J1716–4005 | 1.79 | 0.02 | –1.52 | 0.06 | . | . | PL
J1720–2933 | 1.69 | 0.01 | –2.05 | 0.07 | . | . | PL
J1722–3632 | 2.90 | 0.02 | –0.7 | 0.3 | –1.1 | 0.1 | LP
J1733–2228 | 4.21 | 0.02 | –2.7 | 0.1 | . | . | PL
J1735–0724 | 2.76 | 0.01 | –2.28 | 0.02 | . | . | PL
J1739–3131 | 7.04 | 0.04 | –2.20 | 0.03 | . | . | PL
J1750–3157 | 1.40 | 0.02 | –1.68 | 0.04 | . | . | PL
J1751–3323 | 1.67 | 0.02 | –0.92 | 0.03 | . | . | PL-T
J1816–2650 | 2.38 | 0.02 | –2.70 | 0.02 | . | . | PL
J1817–3837 | 2.09 | 0.01 | –1.82 | 0.06 | . | . | PL
J1820–0427 | 10.07 | 0.02 | –2.45 | 0.01 | . | . | PL
J1825–0935 | 11.86 | 0.07 | –1.5 | 0.1 | . | . | PL
J1834–0426 | 19.69 | 0.08 | –1.60 | 0.02 | . | . | PL
J1841–0345 | 2.07 | 0.07 | –1.0 | 0.1 | . | . | PL
J1845–0434 | 2.92 | 0.02 | –0.9 | 0.1 | . | . | PL
J2155–3118 | 0.66 | 0.02 | –3.0 | 0.1 | . | . | PL
## 5 Dispersion measure and variability
Figure 5: Dispersion measure versus MJD for PSR J0908–4913. A straight-line fit to the data yields a slope of $0.026\pm 0.002$ cm$^{-3}$ pc yr$^{-1}$.
In this section, we concentrate on changes in DM over time, a quantity which
is relatively simple to measure compared to the absolute DM value. Absolute
DMs are difficult to measure because of profile variations with frequency and
our lack of understanding of magnetospheric processes (see e.g. Oswald et al.
2020). For millisecond pulsars, high precision on DM variations can be obtained (Pennucci, 2019). You et al. (2007) and Keith et al. (2013) took
advantage of this to examine the DM structure functions for a sample of 20
millisecond pulsars. The previous large-scale study of DM variability in slow
pulsars was carried out by Petroff et al. (2013) who analysed 168 pulsars over
an almost 6-year period. They were able to detect significant variability in
only four objects and placed upper limits on the rest. The work of Petroff et
al. (2013) involved a mixture of narrow-band observations and non-simultaneous
observations over several bands and had significant overlap with the pulsars
under consideration here.
In order to measure a DM per epoch, we average over the observation duration,
and reduce the number of frequency channels to eight across the band. We then
use psrchive to produce arrival times (ToAs) via the routine pat and fit the
DM to these ToAs using tempo2. We discard observations which are strongly
affected by RFI. We then fit a straight line to the DM as a function of time. Currently 56 of the pulsars in our sample have an upper limit on DM changes of 0.01 cm$^{-3}$ pc yr$^{-1}$, already an improvement on Petroff et al. (2013), which bodes well for the future of this project as the time baseline increases. We find 11 pulsars with slopes more than 5$\sigma$ deviant from
zero as listed in Table 2. Three of these objects (PSRs J0835–4510, J0908–4913
and J1833–0827) also had significant values of $\Delta$DM in Petroff et al.
(2013) but these were of opposite sign compared to our observations. This
demonstrates that a simple slope is a crude tool to measure DM variations and
that either a higher-order polynomial or a structure function analysis is
required (e.g. Donner et al. 2020).
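The straight-line fit and its significance test can be sketched as a weighted least-squares problem; the epochs and DM values below are synthetic, loosely mimicking the monthly cadence, not data from the paper:

```python
import numpy as np

def dm_slope(mjd, dm, dm_err):
    """Weighted straight-line fit DM(t) = DM0 + slope*(t - t0).
    Returns the slope in cm^-3 pc per year and its 1-sigma uncertainty."""
    t = (np.asarray(mjd) - np.mean(mjd)) / 365.25  # years, centred
    w = 1.0 / np.asarray(dm_err) ** 2
    # Standard weighted least squares for y = m*t + c
    S, Sx, Sy = w.sum(), (w * t).sum(), (w * dm).sum()
    Sxx, Sxy = (w * t * t).sum(), (w * t * dm).sum()
    denom = S * Sxx - Sx**2
    slope = (S * Sxy - Sx * Sy) / denom
    slope_err = np.sqrt(S / denom)
    return slope, slope_err

# Synthetic epochs mimicking ~monthly sessions over two years
rng = np.random.default_rng(1)
mjd = 58420 + 30.0 * np.arange(23)
dm = 180.5 + 0.026 * (mjd - mjd[0]) / 365.25 + rng.normal(0, 0.002, 23)
m, me = dm_slope(mjd, dm, np.full(23, 0.002))
print(f"slope = {m:+.4f} +/- {me:.4f} cm^-3 pc/yr ({abs(m)/me:.1f} sigma)")
```

A pulsar would enter Table 2 when $|{\rm slope}|/\sigma_{\rm slope} > 5$; as the text notes, a single slope is a crude summary, and a structure-function analysis would capture richer DM variability.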
Table 2: Pulsars with significant values of $\Delta$DM. The uncertainty in the slope is given in brackets and refers to the last digit(s).
Jname | DM | $\Delta$DM
---|---|---
| (cm$^{-3}$ pc) | (cm$^{-3}$ pc yr$^{-1}$)
J0835–4510 | 67.8 | –0.024(2)
J0908–4913 | 180.5 | +0.026(2)
J1048–5832 | 128.7 | –0.054(4)
J1105–6107 | 271.4 | +0.063(4)
J1602–5100 | 170.8 | –0.019(3)
J1611–5209 | 127.3 | –0.020(3)
J1825–1446 | 352.7 | +0.169(20)
J1826–1334 | 231.5 | +0.084(11)
J1832–0827 | 301.1 | +0.045(6)
J1833–0827 | 410.4 | +0.284(23)
J1835–1106 | 132.6 | +0.027(4)
## 6 Rotation measure and variability
We note that, as with DM, values of rotation measure (RM) for a given pulsar
depend on the method used to obtain them, and can typically vary by several
units. In addition, an incorrect DM can also lead to an erroneous RM (Ilie et
al., 2019). In order to determine the RM we employ two different algorithms as
implemented within the psrchive routine rmfit. The first method uses a ‘brute-
force’ approach in which trial RMs are employed and the amount of linear
polarization is computed for each trial value. The algorithm returns the RM at
which the linear polarization is maximised over the profile as a whole. In the
second method, rotation measures are computed using the algorithm outlined in
Noutsos et al. (2008). In brief, a single position angle (PA) is computed per
frequency channel after collapsing the pulse profile to a single phase bin.
The RM is then computed through a quadratic fit to the PA
as a function of frequency and the statistical error determined through Monte
Carlo trials.
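The 'brute-force' approach can be sketched in a few lines: for each trial RM, de-rotate the channelised Stokes $Q+iU$ by the Faraday angle $2\,{\rm RM}\,\lambda^2$ and keep the trial that maximises the band-summed linear polarization. This is a schematic of the idea only, not the rmfit implementation; the synthetic band and the injected RM are illustrative.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def brute_force_rm(freqs_hz, Q, U, trial_rms):
    """Return the trial RM (rad m^-2) that maximises the band-summed
    linear polarisation after de-rotating Q + iU by 2*RM*lambda^2."""
    lam2 = (C / freqs_hz) ** 2
    L = Q + 1j * U
    return max(trial_rms,
               key=lambda rm: np.abs(np.sum(L * np.exp(-2j * rm * lam2))))

# Synthetic channels across a wide band with an injected RM of +120 rad m^-2.
freqs = np.linspace(0.7e9, 4.0e9, 256)
rm_true = 120.0
angle = 2.0 * rm_true * (C / freqs) ** 2
Q, U = np.cos(angle), np.sin(angle)
rm = brute_force_rm(freqs, Q, U, np.arange(-1000.0, 1000.0, 1.0))
print(rm)
```

At the correct trial RM the channels add coherently, so the band-summed polarization peaks sharply; scattering or profile-dependent PA structure degrades this coherence, which is why the two methods can disagree, as discussed below.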
There are nine pulsars in our sample which have had RMs measured for the first
time. These are listed in Table 3. PSR J1643–4505, once the high RM is taken
into account, joins the ranks of pulsars with high $\dot{E}$ and very high
linear polarization fraction. In contrast, the linear polarization fraction
for the high $\dot{E}$ pulsar PSR J1055–6028 remains low even after the RM
correction. We note the extremely high value of RM uncovered for PSR
J1114–6100, described in more detail in the subsection below. We have found a
substantial number of pulsars for which our value of RM is more than 10 rad
m-2 away from the published values as given in v1.63 of the ATNF pulsar
catalogue (Manchester et al.,
2005)111http://www.atnf.csiro.au/research/pulsar/psrcat/ . These pulsars are
listed in Table 4. The published value for PSR J1413–6141 was seriously in
error; applying the correct value reveals that this pulsar has only a moderate
linear polarization fraction. For PSR J0857–4424, the published value from 20
years ago of $-75$ rad m-2 is erroneous; we measure +162 rad m-2, a difference
of more than 200 units.
We observe that there are sometimes significant differences between the RMs
returned by the two methods as can be seen in Figure 7. Karastergiou (2009)
pointed out that the effects of interstellar scattering can lead to erroneous
measurements of RM, and that the method of Noutsos et al. (2008) was best to
use in the case of scattered profiles. In addition, Ilie et al. (2019) showed
the presence of (apparent) RM variations across the pulse profile which meant
that different RMs can be obtained depending on the method used. The most
extreme example of this is PSR J1600–5044 for which we obtain an RM of 90 rad
m-2 and 135 rad m-2 using the two methods. At low frequencies the pulsar is
heavily scattered, at high frequencies the position angle of the linear
polarization has a very steep swing across the pulse profile. This is the
scenario pointed out by Karastergiou (2009) as being the most detrimental to
measuring correct values of RM.
Table 3: Pulsars without previous RM measurements. The uncertainty is given in brackets and refers to the last digit.
Jname | RM
---|---
| (rad m-2)
J0820–3826 | +122(6)
J1055–6028 | +343(2)
J1114–6100 | –6729(2)
J1515–5720 | +41(7)
J1637–4642 | –43.2(5)
J1643–4505 | –858(1)
J1650–4921 | –208(1)
J1716–4005 | –526(3)
J1843–0702 | +188(3)
Table 4: Pulsars with derived RMs which differ by more than 10 rad m-2 from
the published values, RMcat. The uncertainties, $\sigma$ and $\sigma_{\rm
cat}$ are also listed.
Jname | RM | $\sigma$ | RMcat | $\sigma_{\rm cat}$ | ref
---|---|---|---|---|---
| (rad m-2) | (rad m-2) | (rad m-2) | (rad m-2) |
J0855–4644 | +225 | 3 | +249 | 22 | (3)
J0857–4424 | +165 | 4 | –75 | 20 | (5)
J0901–4624 | +267 | 2 | +289 | 22 | (3)
J0954–5430 | +86 | 2 | +65 | 10 | (3)
J1003–4747 | +52 | 2 | +18 | 4 | (3)
J1012–5857 | +39 | 4 | +74 | 6 | (3)
J1015–5719 | +110 | 2 | +96 | 2 | (6)
J1016–5857 | –520 | 2 | –540 | 3 | (6)
J1019–5749 | –334 | 3 | –366 | 10 | (3)
J1034–3224 | –43 | 2 | –8 | 1 | (5)
J1038–5831 | –31 | 1 | –15 | 10 | (8)
J1043–6116 | +189 | 2 | +257 | 23 | (3)
J1049–5833 | +344 | 4 | +359 | 11 | (3)
J1146–6030 | +7 | 2 | –5 | 4 | (8)
J1305–6203 | –478 | 2 | –436 | 15 | (3)
J1327–6222 | –327 | 2 | –306 | 8 | (3)
J1341–6220 | –947 | 1 | –921 | 3 | (6)
J1356–5521 | +89 | 2 | +101 | 4 | (8)
J1410–6132 | +2270 | 5 | +2400 | 30 | (9)
J1412–6145 | –52 | 2 | –130 | 13 | (6)
J1413–6141 | –354 | 4 | –35 | 10 | (3)
J1424–5822 | –643 | 3 | –625 | 19 | (3)
J1534–5334 | +25 | 1 | –46 | 17 | (3)
J1536–5433 | –138 | 3 | –155 | 13 | (3)
J1543–5459 | +167 | 1 | +28 | 23 | (3)
J1544–5308 | –42 | 1 | –29 | 7 | (5)
J1548–5607 | +22 | 1 | +37 | 10 | (3)
J1600–5044 | +134 | 1 | +119 | 10 | (11)
J1602–5100 | +84 | 1 | +71.5 | 1.1 | (11)
J1604–4909 | +7 | 2 | +34 | 1 | (7)
J1611–5209 | –101 | 1 | –79 | 5 | (8)
J1638–4417 | +139 | 3 | +160 | 25 | (12)
J1638–4608 | +363 | 3 | +335 | 12 | (3)
J1640–4715 | –422 | 2 | –411 | 12 | (8)
J1648–4611 | –650 | 3 | –682 | 26 | (3)
J1715–3903 | +228 | 1 | +250 | 15 | (3)
J1719–4006 | –204 | 2 | –218 | 17 | (8)
J1720–2933 | +10 | 4 | +21 | 5 | (2)
J1722–3632 | –332 | 2 | –307 | 8 | (3)
J1731–4744 | –446 | 2 | –429.1 | 0.5 | (11)
J1738–3211 | –18 | 2 | +7 | 9 | (3)
J1739–2903 | –301 | 2 | –236 | 18 | (3)
J1739–3023 | –120 | 1 | –74 | 18 | (3)
J1740–3015 | –155.3 | 0.4 | –168.0 | 0.7 | (8)
J1750–3157 | +71 | 1 | +111 | 8 | (3)
J1757–2421 | –27 | 1 | +16 | 5 | (8)
J1801–2304 | –1124 | 2 | –1156 | 19 | (1)
J1806–2125 | +724 | 5 | +796 | 15 | (4)
J1822–4209 | +54 | 10 | –13 | 9 | (3)
J1832–0827 | +12 | 1 | +39 | 7 | (10)
J1833–0827 | –284 | 3 | –470 | 7 | (10)
J1848–0123 | +514 | 3 | +580 | 30 | (11)
J1853–0004 | +627 | 4 | +648.7 | 4.7 | (4)
References: (1) Crawford et al. (2001), (2) Hamilton & Lyne (1987), (3) Han et
al. (2006), (4) Han et al. (2018), (5) Han et al. (1999), (6) Johnston &
Weisberg (2006), (7) Johnston et al. (2007), (8) Noutsos et al. (2008), (9)
O’Brien et al. (2008), (10) Rand & Lyne (1994), (11) Taylor et al. (1993),
(12) Weltevrede & Johnston (2008)
### 6.1 PSR J1114–6100
Figure 6: Radio continuum image at 843 MHz from the Molonglo Galactic Plane
Survey (Green et al., 1999) of a region near PSR J1114–6100. Pulsars and HII
regions are marked as is the SNR G291.0–0.1. The wedge to the right of the
image shows the flux density levels in Jy.
We measured a value of $-6729\pm 2$ rad m-2 for the RM of PSR J1114–6100. This
is the fourth highest value of |RM| for any known pulsar, beaten only by the
Galactic centre magnetar PSR J1745–2900 (Shannon & Johnston, 2013), and two
other pulsars near the Galactic centre, PSRs J1746–2849 and J1746–2856
(Schnitzeler et al., 2016). The DM of the pulsar is 677 cm-3 pc, and so the
value of the magnetic field strength parallel to the line of sight, given by
1.2 RM/DM, is $-12.0~{}\mu$G, the third highest for any pulsar.
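The field estimate quoted above follows directly from the $\langle B_\parallel\rangle = 1.2\,{\rm RM/DM}$ rule of thumb used in the text (the commonly quoted coefficient is 1.232; with 1.2 the result is $\approx-11.9~\mu$G, in line with the $-12.0~\mu$G above). A minimal check:

```python
def b_parallel_uG(rm_rad_m2, dm_cm3pc):
    """Mean line-of-sight magnetic field <B_par> in microgauss from
    RM (rad m^-2) and DM (cm^-3 pc), using the 1.2 coefficient from the text."""
    return 1.2 * rm_rad_m2 / dm_cm3pc

print(round(b_parallel_uG(-6729.0, 677.0), 1))  # PSR J1114-6100
```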
Figure 6 shows the location of PSR J1114–6100 and other pulsars overlaid with
a radio continuum image taken from the Molonglo Galactic Plane Survey (Green
et al., 1999). Of the pulsars close on the sky to PSR J1114–6100, most have
positive values of RM, including a value of $+853$ rad m-2 for PSR J1119–6127.
PSRs J1112–6103 and J1115–6052 both have RMs close to $+250$ rad m-2. The
prominent supernova remnant SNR G291.0–0.1 (Reynoso et al., 2006) lies close
on the sky to the pulsar and there are numerous HII regions in the vicinity
(Caswell & Haynes, 1987). The largest and brightest HII region located at (l,b
= 291.614,–0.525) has a flux density in excess of 200 Jy; its recombination
lines have a positive velocity indicating a distance of some 8 kpc. This HII
region (also known as NGC 3603) is seen from radio to gamma-rays (Moffat et
al., 2002; Saha et al., 2020), and is the subject of a significant body of
literature; a recent study of its OB stars places its distance at 7 kpc (Drew
et al., 2019). To the south-west lies the only HII region (291.284,–0.713)
with negative velocities at a distance of some 4 kpc. In the optical, the HII
regions are clearly delineated and do not extend as far as PSR J1114–6100.
H$\alpha$ images show some excess diffuse structure near the pulsar. The
pulsar shows a relatively low amount of scatter-broadening given the large
value of DM.
The cause of the extremely high RM of PSR J1114–6100 is unclear. Its distance,
derived from the DM (Yao et al., 2017), is roughly consistent with the distance
to NGC 3603, but the HII region appears to be too distant (in angle) to
contribute to the DM of the pulsar. It could be that the pulsar lies behind some
highly magnetised filament located perhaps in the near part of the Carina
spiral arm. In any case, it remains odd that the pulsar’s RM is an order of
magnitude greater than its neighbouring pulsars, and pulsars in other, more
complex regions of the Galactic plane.
### 6.2 Time-variable RMs
Figure 7: Rotation measure versus epoch for PSR J1048–5832. Top curve shows
the RM using the ‘brute-force’ method, bottom curve shows the RM using the
Noutsos et al. method.
There are 10 pulsars in our sample for which the statistical error in the
measured RM is smaller than 1.0 rad m-2 per epoch. These are the brightest
and/or the most highly polarized pulsars, PSRs J0738–4042, J0742–2822,
J0835–4510, J0908–4913, J1048–5832, J1359–6038, J1644–4559, J1709–4429,
J1740–3015 and J1745–3040. In no case is there evidence for any change in RM.
As an example, Figure 7 shows RM versus epoch for PSR J1048–5832. The error
bars per epoch reflect statistical errors and do not take into account the
ionospheric RM. However, there does appear to be one pulsar which does show RM
changes, PSR J1825–1446, a pulsar which we noted earlier also showed DM
changes. This pulsar is discussed further below. The pulsar with the second
largest $|$RM$|$ in our sample is PSR J1410–6132, which has a value of +2400
rad m-2 according to O’Brien et al. (2008) but for which we measured +2279 rad
m-2, a substantial change. Our data show a decline in RM from 2290 rad m-2 to
2270 rad m-2 over the course of 2 years although the significance of the
change is only 2-$\sigma$. However, it could be the case that the RM has
decreased substantially since 2008 and further monitoring is required.
### 6.3 PSR J1825–1446
This pulsar is the only one of our sample which shows significant changes in
both RM and DM, shown in Figure 8. The RM increases almost linearly by about 8
rad m-2 over two years (much larger than expected from the ionosphere) while
the DM increases by about 0.3 cm-3 pc. This implies a change in the magnetic
field along the line of sight of 0.2 $\mu$G in two years. An independent
distance to this pulsar has not been determined but the distance inferred from
the DM is $\sim$5 kpc. The proper motion is high, as measured by Moldón et al.
(2012) and later refined by Dexter et al. (2017) and hence the pulsar has a
high transverse velocity of 750 km s-1. The pulsar also appears in projection
to be located within the shell of the supernova remnant G16.8–1.1 although
Moldón et al. (2012) argue that the pulsar’s age and velocity make a true
association unlikely. Nevertheless, our line of sight to the (background)
pulsar intercepts the SNR and it seems likely that the RM and DM changes we
see are the result of the pulsar passing behind a magnetised filament in the
SNR. The scattering time-scale of the pulsar is 21 ms at 1 GHz (Oswald et al.,
2021). This is an order of magnitude higher than expected from the
extrapolation of the relationship of Krishnakumar et al. (2015) to 1 GHz, but
is not particularly discrepant compared to other pulsars with comparable DMs
(Oswald et al., 2021).
---
Figure 8: Dispersion measure (top panel) and rotation measure (bottom panel)
as a function of epoch for PSR J1825–1446. A straight line fit to the DM
versus time yields a slope of $0.15\pm 0.02$ cm-3 pc yr-1, that to the RM
versus time yields $2.6\pm 0.3$ rad m-2 yr-1.
## 7 Summary
The Ultra-Wideband receiver on the Parkes telescope has been operational for
two years and will remain the workhorse for pulsar observations through this
decade. A complementary programme of slow pulsar observations on the MeerKAT
telescope is also underway (Bailes et al., 2020; Johnston et al., 2020) and
will be key for single-pulse analysis. Repeated monitoring of a large sample
of pulsars coupled with the wide instantaneous observing bandwidth is ideal
for studying the time-evolution of the pulsars themselves and the interstellar
medium through which their radio emission propagates. The early results
presented here are extremely encouraging for the future of this project. We
have reported on a number of new nulling pulsars, DM and RM variability, and a
pulsar with an extremely high value of RM.
## Acknowledgements
The Parkes radio telescope is part of the Australia Telescope National
Facility which is funded by the Australian Government for operation as a
National Facility managed by CSIRO. RMS acknowledges support through
Australian Research Council Future Fellowship FT190100155. Work at NRL is
supported by NASA. We thank the referee Scott Ransom for his heartening
report.
## Data Availability
Pulsar data taken for the P574 project is made available through the CSIRO’s
Data Access Portal (https://data.csiro.au) after an 18 month proprietary
period.
## References
* Abdo et al. (2013) Abdo A. A., et al., 2013, ApJSS, 208, 17
* Aggarwal et al. (2019) Aggarwal K., et al., 2019, ApJ, 880, 116
* Antoniadis et al. (2013) Antoniadis J., et al., 2013, Science, 340, 448
* Archibald et al. (2018) Archibald A. M., et al., 2018, Nature, 559, 73
* Backer (1970a) Backer D. C., 1970a, Nature, 228, 42
* Backer (1970b) Backer D. C., 1970b, Nature, 228, 1297
* Bailes et al. (2020) Bailes M., et al., 2020, PASA, 37, e028
* Biggs (1992) Biggs J. D., 1992, ApJ, 394, 574
* Brook et al. (2016) Brook P. R., Karastergiou A., Johnston S., Kerr M., Shannon R. M., Roberts S. J., 2016, MNRAS, 456, 1374
* Caswell & Haynes (1987) Caswell J. L., Haynes R. F., 1987, A&A, 171, 261
* Crawford et al. (2001) Crawford F., Manchester R. N., Kaspi V. M., 2001, AJ, 122, 2001
* Dexter et al. (2017) Dexter J., et al., 2017, MNRAS, 471, 3563
* Donner et al. (2020) Donner J. Y., et al., 2020, A&A, submitted
* Drew et al. (2019) Drew J. E., Monguió M., Wright N. J., 2019, MNRAS, 486, 1034
* Gil et al. (1994) Gil J. A., et al., 1994, A&A, 282, 45
* Green et al. (1999) Green A. J., Cram L. E., Large M. I., Ye T., 1999, ApJSS, 122, 207
* Hamilton & Lyne (1987) Hamilton P. A., Lyne A. G., 1987, MNRAS, 224, 1073
* Han et al. (1999) Han J. L., Manchester R. N., Qiao G. J., 1999, MNRAS, 306, 371
* Han et al. (2006) Han J. L., Manchester R. N., Lyne A. G., Qiao G. J., van Straten W., 2006, ApJ, 642, 868
* Han et al. (2018) Han J. L., Manchester R. N., van Straten W., Demorest P., 2018, ApJS, 234, 11
* Hermsen et al. (2013) Hermsen W., et al., 2013, Science, 339, 436
* Hobbs et al. (2020) Hobbs G., et al., 2020, PASA, 37, e012
* Hotan et al. (2004) Hotan A. W., van Straten W., Manchester R. N., 2004, PASA, 21, 302
* Hu et al. (2020) Hu Y., Li L., Yuan J. P., Dang S. J., Wang S. Q., Wang Z. J., Yuen R., 2020, ApSS, 365, 143
* Ilie et al. (2019) Ilie C. D., Johnston S., Weltevrede P., 2019, MNRAS, 483, 2778
* Ilie et al. (2020) Ilie C. D., Weltevrede P., Johnston S., Chen T., 2020, MNRAS, 491, 3385
* Jankowski et al. (2018) Jankowski F., van Straten W., Keane E. F., Bailes M., Barr E. D., Johnston S., Kerr M., 2018, MNRAS, 473, 4436
* Johnston & Kerr (2018) Johnston S., Kerr M., 2018, MNRAS, 474, 4629
* Johnston & Weisberg (2006) Johnston S., Weisberg J. M., 2006, MNRAS, 368, 1856
* Johnston et al. (1995) Johnston S., Manchester R. N., Lyne A. G., Kaspi V. M., D’Amico N., 1995, A&A, 293, 795
* Johnston et al. (2007) Johnston S., Kramer M., Karastergiou A., Hobbs G., Ord S., Wallman J., 2007, MNRAS, 381, 1625
* Johnston et al. (2020) Johnston S., et al., 2020, MNRAS, 493, 3608
* Karastergiou (2009) Karastergiou A., 2009, MNRAS, 392, L60
* Keith et al. (2013) Keith M. J., et al., 2013, MNRAS, 429, 2161
* Kerr et al. (2016) Kerr M., Hobbs G., Johnston S., Shannon R. M., 2016, MNRAS, 455, 1845
* Kerr et al. (2018) Kerr M., Coles W. A., Ward C. A., Johnston S., Tuntsov A. V., Shannon R. M., 2018, MNRAS, 474, 4637
* Kramer et al. (2006a) Kramer M., Lyne A. G., O’Brien J. T., Jordan C. A., Lorimer D. R., 2006a, Science, 312, 549
* Kramer et al. (2006b) Kramer M., et al., 2006b, Science, 314, 97
* Krishnakumar et al. (2015) Krishnakumar M. A., Mitra D., Naidu A., Joshi B. C., Manoharan P. K., 2015, ApJ, 804, 23
* Kumamoto et al. (2020) Kumamoto H., et al., 2020, MNRAS, in press
* Lyne et al. (2010) Lyne A., Hobbs G., Kramer M., Stairs I., Stappers B., 2010, Science, 329, 408
* Manchester et al. (2005) Manchester R. N., Hobbs G. B., Teoh A., Hobbs M., 2005, AJ, 129, 1993
* Moffat et al. (2002) Moffat A. F. J., et al., 2002, ApJ, 573, 191
* Moldón et al. (2012) Moldón J., Ribó M., Paredes J. M., Brisken W., Dhawan V., Kramer M., Lyne A. G., Stappers B. W., 2012, A&A, 543, A26
* Naidu et al. (2018) Naidu A., Joshi B. C., Manoharan P. K., Krishnakumar M. A., 2018, MNRAS, 475, 2375
* Noutsos et al. (2008) Noutsos A., Johnston S., Kramer M., Karastergiou A., 2008, MNRAS, 386, 1881
* O’Brien et al. (2008) O’Brien J. T., et al., 2008, MNRAS, 388, L1
* Oswald et al. (2020) Oswald L., Karastergiou A., Johnston S., 2020, MNRAS, 496, 1418
* Oswald et al. (2021) Oswald L., et al., 2021, MNRAS, submitted
* Parthasarathy et al. (2019) Parthasarathy A., et al., 2019, MNRAS, 489, 3810
* Parthasarathy et al. (2020) Parthasarathy A., et al., 2020, MNRAS, 494, 2012
* Pennucci (2019) Pennucci T. T., 2019, ApJ, 871, 34
* Petroff et al. (2013) Petroff E., Keith M. J., Johnston S., van Straten W., Shannon R. M., 2013, MNRAS, 435, 1610
* Rand & Lyne (1994) Rand R. J., Lyne A. G., 1994, MNRAS, 268, 497
* Reynoso et al. (2006) Reynoso E. M., Johnston S., Green A. J., Koribalski B. S., 2006, MNRAS, 369, 416
* Saha et al. (2020) Saha L., Domínguez A., Tibaldo L., Marchesi S., Ajello M., Lemoine-Goumard M., López M., 2020, ApJ, 897, 131
* Schnitzeler et al. (2016) Schnitzeler D. H. F. M., Eatough R. P., Ferrière K., Kramer M., Lee K. J., Noutsos A., Shannon R. M., 2016, MNRAS, 459, 3005
* Shannon & Johnston (2013) Shannon R. M., Johnston S., 2013, MNRAS, 435, L29
* Shannon et al. (2015) Shannon R. M., et al., 2015, Science, 349, 1522
* Smith et al. (2008) Smith D. A., et al., 2008, A&A, 492, 923
* Smith et al. (2019) Smith D. A., et al., 2019, ApJ, 871, 78
* Sobey et al. (2015) Sobey C., et al., 2015, MNRAS, 451, 2493
* Sobey et al. (2021) Sobey C., et al., 2021, MNRAS, submitted
* Stairs et al. (2019) Stairs I. H., et al., 2019, MNRAS, 485, 3230
* Taylor et al. (1993) Taylor J. H., Manchester R. N., Lyne A. G., 1993, ApJSS, 88, 529
* Wang et al. (2000) Wang N., Manchester R. N., Pace R. T., Bailes M., Kaspi V. M., Stappers B. W., Lyne A. G., 2000, MNRAS, 317, 843
* Wang et al. (2020) Wang P. F., et al., 2020, A&A, in press
* Weltevrede & Johnston (2008) Weltevrede P., Johnston S., 2008, MNRAS, 391, 1210
* Weltevrede et al. (2010a) Weltevrede P., et al., 2010a, PASA, 27, 64
* Weltevrede et al. (2010b) Weltevrede P., et al., 2010b, ApJ, 708, 1426
* Yao et al. (2017) Yao J. M., Manchester R. N., Wang N., 2017, ApJ, 835, 29
* You et al. (2007) You X. P., et al., 2007, MNRAS, 378, 493
* Yu et al. (2013) Yu M., et al., 2013, MNRAS, 429, 688
* van Straten (2013) van Straten W., 2013, ApJSS, 204, 13
## Appendix A Pulsar list
Table 5: List of the 276 pulsars monitored with the UWL on a monthly basis. PSR | | | | | | | | |
---|---|---|---|---|---|---|---|---|---
J0034–0721 | J0108–1431 | J0134–2937 | J0151–0635 | J0152–1637 | J0206–4028 | J0255–5304 | J0304+1932 | J0401–7608 | J0448–2749
J0452–1759 | J0525+1115 | J0536–7543 | J0543+2329 | J0601–0527 | J0614+2229 | J0624–0424 | J0627+0706 | J0630–2834 | J0631+1036
J0659+1414 | J0729–1448 | J0729–1836 | J0738–4042 | J0742–2822 | J0745–5353 | J0758–1528 | J0809–4753 | J0820–1350 | J0820–3826
J0820–4114 | J0834–4159 | J0835–4510 | J0837+0610 | J0837–4135 | J0842–4851 | J0855–4644 | J0857–4424 | J0901–4624 | J0904–7459
J0905–5127 | J0907–5157 | J0908–1739 | J0908–4913 | J0924–5814 | J0940–5428 | J0942–5552 | J0954–5430 | J0959–4809 | J1001–5507
J1003–4747 | J1012–5857 | J1015–5719 | J1016–5819 | J1016–5857 | J1017–5621 | J1019–5749 | J1028–5819 | J1034–3224 | J1038–5831
J1043–6116 | J1046–5813 | J1047–6709 | J1048–5832 | J1049–5833 | J1055–6028 | J1056–6258 | J1057–5226 | J1105–6107 | J1110–5637
J1112–6103 | J1114–6100 | J1115–6052 | J1119–6127 | J1123–6259 | J1136–5525 | J1146–6030 | J1156–5707 | J1157–6224 | J1210–5559
J1224–6407 | J1225–6408 | J1243–6423 | J1253–5820 | J1301–6305 | J1302–6350 | J1305–6203 | J1306–6617 | J1317–6302 | J1319–6056
J1320–5359 | J1326–5859 | J1326–6408 | J1326–6700 | J1327–6222 | J1327–6301 | J1328–4357 | J1338–6204 | J1340–6456 | J1341–6220
J1349–6130 | J1352–6803 | J1356–5521 | J1357–62 | J1357–6429 | J1359–6038 | J1401–6357 | J1410–6132 | J1412–6145 | J1413–6141
J1418–3921 | J1420–6048 | J1424–5822 | J1428–5530 | J1430–6623 | J1435–5954 | J1452–6036 | J1453–6413 | J1456–6843 | J1509–5850
J1512–5759 | J1513–5908 | J1515–5720 | J1522–5829 | J1524–5625 | J1524–5706 | J1530–5327 | J1531–5610 | J1534–5334 | J1534–5405
J1535–4114 | J1536–5433 | J1539–5626 | J1541–5535 | J1543–5459 | J1544–5308 | J1548–5607 | J1549–4848 | J1555–3134 | J1557–4258
J1559–4438 | J1600–5044 | J1600–5751 | J1602–5100 | J1603–5657 | J1604–4909 | J1605–5257 | J1611–5209 | J1613–4714 | J1614–5048
J1623–4256 | J1626–4537 | J1630–4733 | J1632–4621 | J1633–4453 | J1633–5015 | J1637–4553 | J1637–4642 | J1638–4417 | J1638–4608
J1638–4725 | J1640–4715 | J1643–4505 | J1644–4559 | J1645–0317 | J1646–4346 | J1646–6831 | J1648–3256 | J1648–4611 | J1649–3805
J1649–4653 | J1650–4502 | J1650–4921 | J1651–4246 | J1651–5222 | J1651–5255 | J1652–2404 | J1653–3838 | J1653–4249 | J1700–3312
J1701–3726 | J1701–4533 | J1702–4128 | J1702–4310 | J1703–3241 | J1703–4851 | J1705–1906 | J1705–3423 | J1705–3950 | J1707–4053
J1707–4729 | J1708–3426 | J1709–1640 | J1709–4429 | J1715–3903 | J1715–4034 | J1716–4005 | J1717–3425 | J1718–3825 | J1719–4006
J1720–2933 | J1721–3532 | J1722–3207 | J1722–3632 | J1722–3712 | J1723–3659 | J1727–2739 | J1730–3350 | J1731–4744 | J1733–2228
J1733–3716 | J1735–0724 | J1737–3137 | J1738–3211 | J1739–2903 | J1739–3023 | J1739–3131 | J1740–3015 | J1741–2733 | J1741–3016
J1741–3927 | J1743–3150 | J1745–3040 | J1749–3002 | J1750–3157 | J1751–3323 | J1751–4657 | J1752–2806 | J1757–2421 | J1801–2304
J1801–2451 | J1803–2137 | J1806–2125 | J1807–0847 | J1809–1917 | J1816–2650 | J1817–3618 | J1817–3837 | J1820–0427 | J1822–2256
J1822–4209 | J1823–3106 | J1824–1945 | J1825–0935 | J1825–1446 | J1826–1334 | J1828–1101 | J1829–1751 | J1830–1059 | J1832–0827
J1833–0827 | J1834–0426 | J1835–0643 | J1835–1106 | J1837–0559 | J1841–0345 | J1841–0425 | J1842–0905 | J1843–0702 | J1844–0538
J1845–0434 | J1845–0743 | J1847–0402 | J1848–0123 | J1852–0635 | J1852–2610 | J1853–0004 | J1900–2600 | J1910+0358 | J1913–0440
J1932+2220 | J1941–2602 | J2048–1616 | J2155–3118 | J2330–2005 | J2346–0609 | | | |
One-generated nilpotent assosymmetric algebras
Ivan Kaygorodov & Farukh Mashurov
E-mail addresses:
Ivan Kaygorodov<EMAIL_ADDRESS>
Farukh Mashurov<EMAIL_ADDRESS>
Abstract: We give the classification of $5$\- and $6$-dimensional complex one-
generated nilpotent assosymmetric algebras.
Keywords: assosymmetric algebras, nilpotent algebras, algebraic
classification, central extension.
MSC2010: 17A30, 17D25.
## Introduction
Algebraic classification (up to isomorphism) of algebras of small dimension
from a certain variety defined by a family of polynomial identities is a
classic problem in the theory of non-associative algebras. There are many
results related to algebraic classification of small dimensional algebras in
varieties of Jordan, Lie, Leibniz, Zinbiel and other algebras. Another
interesting approach of studying algebras of a fixed dimension is to study
them from a geometric point of view (that is, to study degenerations and
deformations of these algebras). The results in which the complete information
about degenerations of a certain variety is obtained are generally referred to
as the geometric classification of the algebras of these variety. There are
many results related to geometric classification of Jordan, Lie, Leibniz,
Zinbiel and other algebras [1, 5, 24, 35]. Another interesting direction is a
study of one-generated objects. The description of one-generated
finite groups is well known: there is only one such group of order $n$ (the cyclic group). In the case
of algebras, there are some similar results, such as the description of
$n$-dimensional one-generated nilpotent associative [14], noncommutative
Jordan [25], and Leibniz and Zinbiel algebras [38]. It was proven that there is
only one $n$-dimensional one-generated nilpotent algebra in each of these varieties.
On the other hand, as we can see in the varieties of Novikov [27],
assosymmetric [23], bicommutative [33], commutative [17], and terminal [30]
algebras, there is more than one $4$-dimensional one-generated nilpotent
algebra from these varieties. One-generated nilpotent Novikov algebras in
dimensions 5 and 6 were studied in [8], one-generated nilpotent terminal
algebras in dimension 5 were studied in [31]. In the present paper, we give
the algebraic classification of $5$\- and $6$-dimensional complex one-
generated nilpotent assosymmetric algebras; this variety first appeared in the paper
by Kleinfeld in 1957 [37].
The variety of assosymmetric algebras is defined by the following right- and
left-symmetry identities of the associator:
$\begin{array}[]{rclllrcl}(x,y,z)&=&(x,z,y),&&(x,y,z)&=&(y,x,z),\end{array}$
where $(x,y,z)=(xy)z-x(yz).$ It contains commutative associative and
associative algebras as subvarieties. Kleinfeld proved that an assosymmetric
ring of characteristic different from 2 and 3 with no nonzero ideal $I$ such
that $I^{2}=0$ is associative [37]. The free basis elements of assosymmetric
algebras were described in [22]. The algebraic and geometric classification of
$4$-dimensional complex nilpotent assosymmetric algebras was given in [23].
Also, assosymmetric algebras were studied in [2, 3, 15, 39, 36, 16].
The key step in our method for algebraically classifying assosymmetric
nilpotent algebras is the calculation of central extensions of smaller
algebras. It comes as no surprise that the central extensions of Lie and non-
Lie algebras have been exhaustively studied for years. It is interesting both
to describe them and to use them to classify different varieties of algebras
[32, 7, 34, 9, 20, 28, 40]. Skjelbred and Sund first devised a method
for classifying nilpotent Lie algebras employing central extensions [40].
Using this method, all the non-Lie central extensions of all $4$-dimensional
Malcev algebras were described afterwards [20], and also all the
anticommutative central extensions of the $3$-dimensional anticommutative
algebras [4], and all the central extensions of the $2$-dimensional algebras
[6]. Moreover, the method is especially indicated for the classification of
nilpotent algebras and it was used to describe all the $4$-dimensional
nilpotent associative algebras [13], all the $4$-dimensional nilpotent Novikov
algebras [27], all the $4$-dimensional nilpotent bicommutative algebras [33],
all the $5$-dimensional nilpotent Jordan algebras [19], all the
$5$-dimensional nilpotent restricted Lie algebras [11], all the
$6$-dimensional nilpotent Lie algebras [10, 12], all the $6$-dimensional
nilpotent Malcev algebras [21], all the $6$-dimensional nilpotent Tortkara
algebras [18] and some others.
## 1\. The algebraic classification of nilpotent assosymmetric algebras
### 1.1. Method of classification of nilpotent algebras
The objective of this section is to give an analogue of the Skjelbred-Sund
method for classifying nilpotent assosymmetric algebras. As other analogues of
this method were carefully explained in, for example, [20, 6], we will give
only some important definitions, and refer the interested reader to the
previous sources. We will also employ their notations.
Let $({\bf A},\cdot)$ be an assosymmetric algebra over $\mathbb{C}$ and ${\bf
V}$ a vector space over ${\mathbb{C}}$. We define the $\mathbb{C}$-linear
space ${\rm Z^{2}}\left(\bf A,{\bf V}\right)$ as the set of all bilinear maps
$\theta\colon{\bf A}\times{\bf A}\longrightarrow{{\bf V}}$ such that
$\theta(xy,z)-\theta(x,yz)=\theta(xz,y)-\theta(x,zy),$
$\theta(xy,z)-\theta(x,yz)=\theta(yx,z)-\theta(y,xz).$
These maps will be called cocycles. Consider a linear map $f$ from $\bf A$ to
${\bf V}$, and set $\delta f\colon{\bf A}\times{\bf A}\longrightarrow{{\bf
V}}$ with $\delta f(x,y)=f(xy)$. Then, $\delta f$ is a cocycle, and we define
${\rm B^{2}}\left({\bf A},{{\bf V}}\right)=\left\\{\theta=\delta f\ :f\in{\rm
Hom}\left({\bf A},{{\bf V}}\right)\right\\}$, a linear subspace of ${\rm
Z^{2}}\left({\bf A},{{\bf V}}\right)$; its elements are called coboundaries.
The second cohomology space ${\rm H^{2}}\left({\bf A},{{\bf V}}\right)$ is
defined to be the quotient space ${\rm Z^{2}}\left({\bf A},{{\bf
V}}\right)\big{/}{\rm B^{2}}\left({\bf A},{{\bf V}}\right)$.
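For the small algebras treated here, ${\rm Z^{2}}\left({\bf A},\mathbb{C}\right)$ can be computed mechanically from the structure constants: impose the two cocycle identities on every basis triple and take the nullspace of the resulting linear system in the entries $\theta(e_i,e_j)$. The sketch below is our own illustration (the helper name and the toy nilpotent algebra $e_1e_1=e_2$ are not from the paper):

```python
import numpy as np
from itertools import product

def cocycle_space_dim(c):
    """Dimension of Z^2(A, C) for an algebra with structure constants
    c[i, j, k]: e_i e_j = sum_k c[i, j, k] e_k.  Scalar-valued cocycles
    theta are n x n matrices, flattened row-major."""
    n = c.shape[0]
    rows = []
    for i, j, k in product(range(n), repeat=3):
        r1 = np.zeros((n, n)); r2 = np.zeros((n, n))
        # theta(e_i e_j, e_k) - theta(e_i, e_j e_k), common to both identities
        for m in range(n):
            r1[m, k] += c[i, j, m]; r1[i, m] -= c[j, k, m]
            r2[m, k] += c[i, j, m]; r2[i, m] -= c[j, k, m]
        # ... = theta(e_i e_k, e_j) - theta(e_i, e_k e_j)   (right symmetry)
        for m in range(n):
            r1[m, j] -= c[i, k, m]; r1[i, m] += c[k, j, m]
        # ... = theta(e_j e_i, e_k) - theta(e_j, e_i e_k)   (left symmetry)
        for m in range(n):
            r2[m, k] -= c[j, i, m]; r2[j, m] += c[i, k, m]
        rows += [r1.ravel(), r2.ravel()]
    M = np.array(rows)
    return n * n - np.linalg.matrix_rank(M)

# Toy 2-dimensional nilpotent algebra: e1 e1 = e2, all other products zero.
c = np.zeros((2, 2, 2)); c[0, 0, 1] = 1.0
print(cocycle_space_dim(c))
```

For this toy algebra the only forced relation is $\theta(e_2,e_2)=0$, so $\dim{\rm Z^{2}}=3$; since $\delta f$ has only the $(e_1,e_1)$ entry $f(e_2)$ nonzero, $\dim{\rm B^{2}}=1$ and hence $\dim{\rm H^{2}}=2$.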
Let ${\rm Aut}({\bf A})$ be the automorphism group of the assosymmetric
algebra ${\bf A}$ and let $\phi\in{\rm Aut}({\bf A})$. Every
$\theta\in{\rm{\rm Z^{2}}}\left({\bf A},{{\bf V}}\right)$ defines
$\phi\theta(x,y)=\theta\left(\phi\left(x\right),\phi\left(y\right)\right)$,
with $\phi\theta\in{\rm{\rm Z^{2}}}\left({\bf A},{{\bf V}}\right)$. It is
easily checked that ${\rm Aut}({\bf A})$ acts on ${\rm{\rm Z^{2}}}\left({\bf
A},{{\bf V}}\right)$, and that ${\rm B^{2}}\left({\bf A},{{\bf V}}\right)$ is
invariant under the action of ${\rm Aut}({\bf A}).$ So, we have that ${\rm
Aut}({\bf A})$ acts on ${\rm H^{2}}\left({\bf A},{{\bf V}}\right)$.
Let $\bf A$ be an assosymmetric algebra of dimension $m<n$ over $\mathbb{C}$,
${{\bf V}}$ a $\mathbb{C}$-vector space of dimension $n-m$ and $\theta$ a
cocycle, and consider the direct sum ${\bf A}_{\theta}={\bf A}\oplus{{\bf V}}$
with the bilinear product “ $\left[-,-\right]_{{\bf A}_{\theta}}$” defined by
$\left[x+x^{\prime},y+y^{\prime}\right]_{{\bf A}_{\theta}}=xy+\theta(x,y)$ for
all $x,y\in{\bf A},x^{\prime},y^{\prime}\in{{\bf V}}$. It is straightforward
that ${\bf A_{\theta}}$ is an assosymmetric algebra if and only if
$\theta\in{\rm Z}^{2}({\bf A},{{\bf V}})$; it is called an $(n-m)$-dimensional
central extension of ${\bf A}$ by ${{\bf V}}$.
We also call the set ${\rm Ann}(\theta)=\left\\{x\in{\bf A}:\theta\left(x,{\bf
A}\right)+\theta\left({\bf A},x\right)=0\right\\}$ the annihilator of
$\theta$. We recall that the annihilator of an algebra ${\bf A}$ is defined as
the ideal ${\rm Ann}({\bf A})=\left\\{x\in{\bf A}:x{\bf A}+{\bf
A}x=0\right\\}$. Observe that ${\rm Ann}\left({\bf
A}_{\theta}\right)=\big{(}{\rm Ann}(\theta)\cap{\rm Ann}({\bf
A})\big{)}\oplus{{\bf V}}$.
###### Definition 1.
Let ${\bf A}$ be an algebra and $I$ be a subspace of ${\rm Ann}({\bf A})$. If
${\bf A}={\bf A}_{0}\oplus I$ then $I$ is called an annihilator component of
${\bf A}$. A central extension of an algebra $\bf A$ without annihilator
component is called a non-split central extension.
The following result is fundamental for the classification method.
###### Lemma 2.
Let ${\bf A}$ be an $n$-dimensional assosymmetric algebra such that $\dim\
{\rm Ann}({\bf A})=m\neq 0$. Then there exists, up to isomorphism, a unique
$(n-m)$-dimensional assosymmetric algebra ${\bf A}^{\prime}$ and a bilinear
map $\theta\in{\rm Z}^{2}({\bf A},{{\bf V}})$ with ${\rm Ann}({\bf A})\cap{\rm
Ann}(\theta)=0$, where ${\bf V}$ is a vector space of dimension $m$, such that
${\bf A}\cong{{\bf A}^{\prime}}_{\theta}$ and ${\bf A}/{\rm Ann}({\bf
A})\cong{\bf A}^{\prime}$.
For the proof, we refer the reader to [20, Lemma 5].
Now, we seek a condition on the cocycles under which two central
extensions are isomorphic. Let us fix a basis $e_{1},\ldots,e_{s}$ of ${{\bf
V}}$, and $\theta\in{\rm Z^{2}}\left({\bf A},{{\bf V}}\right)$. Then $\theta$
can be uniquely written as
$\theta\left(x,y\right)=\displaystyle\sum_{i=1}^{s}\theta_{i}\left(x,y\right)e_{i}$,
where $\theta_{i}\in{\rm Z^{2}}\left({\bf A},\mathbb{C}\right)$. It holds that
$\theta\in{\rm B^{2}}\left({\bf A},{{\bf V}}\right)$ if and only if all
$\theta_{i}\in{\rm B^{2}}\left({\bf A},\mathbb{C}\right)$, and it also holds
that ${\rm Ann}(\theta)={\rm Ann}(\theta_{1})\cap{\rm
Ann}(\theta_{2})\cap\ldots\cap{\rm Ann}(\theta_{s})$. Furthermore, if ${\rm
Ann}(\theta)\cap{\rm Ann}\left({\bf A}\right)=0$, then ${\bf A}_{\theta}$ has
an annihilator component if and only if
$\left[\theta_{1}\right],\left[\theta_{2}\right],\ldots,\left[\theta_{s}\right]$
are linearly dependent in ${\rm H^{2}}\left({\bf A},\mathbb{C}\right)$ (see
[20, Lemma 13]).
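The identity ${\rm Ann}(\theta)={\rm Ann}(\theta_{1})\cap\ldots\cap{\rm Ann}(\theta_{s})$ can be checked numerically: representing each scalar cocycle $\theta_{i}$ by its matrix $C_{i}$ (with $(C_{i})_{lm}=\theta_{i}(e_{l},e_{m})$), the annihilator ${\rm Ann}(\theta)$ is the common null space of all the $C_{i}$ and $C_{i}^{T}$. A minimal sketch, with illustrative cocycle matrices of our own choosing (not taken from [23] or [26]):

```python
import numpy as np

def ann_dim(mats):
    """Dimension of {x : C x = 0 and C^T x = 0 for every matrix C},
    i.e. the annihilator of the cocycle theta = sum_i theta_i e_i."""
    stacked = np.vstack([m for C in mats for m in (C, C.T)])
    n = mats[0].shape[0]
    # null space dimension = n - rank
    return n - np.linalg.matrix_rank(stacked)

# Two illustrative scalar cocycles on a 3-dimensional algebra:
# theta_1 = Delta_12 (matrix unit E_12) and theta_2 = Delta_21.
E12 = np.zeros((3, 3)); E12[0, 1] = 1
E21 = np.zeros((3, 3)); E21[1, 0] = 1

# Ann(theta_1) = Ann(theta_2) = <e_3>, and their intersection is again
# 1-dimensional, matching Ann(theta) for theta = theta_1 e_1 + theta_2 e_2.
print(ann_dim([E12]), ann_dim([E21]), ann_dim([E12, E21]))  # 1 1 1
```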
Recall that, given a finite-dimensional vector space ${{\bf V}}$ over
$\mathbb{C}$, the Grassmannian ${\rm G}_{k}\left({{\bf V}}\right)$ is the set
of all $k$-dimensional linear subspaces of ${{\bf V}}$. Let ${\rm
G}_{s}\left({\rm H^{2}}\left({\bf A},\mathbb{C}\right)\right)$ be the
Grassmannian of subspaces of dimension $s$ in ${\rm H^{2}}\left({\bf
A},\mathbb{C}\right)$. For ${\rm
W}=\left\langle\left[\theta_{1}\right],\left[\theta_{2}\right],\dots,\left[\theta_{s}\right]\right\rangle\in{\rm
G}_{s}\left({\rm H^{2}}\left({\bf A},\mathbb{C}\right)\right)$ and
$\phi\in{\rm Aut}({\bf A})$, define $\phi{\rm
W}=\left\langle\left[\phi\theta_{1}\right],\left[\phi\theta_{2}\right],\dots,\left[\phi\theta_{s}\right]\right\rangle$.
It holds that $\phi{\rm W}\in{\rm G}_{s}\left({\rm H^{2}}\left({\bf
A},\mathbb{C}\right)\right)$, and this induces an action of ${\rm Aut}({\bf
A})$ on ${\rm G}_{s}\left({\rm H^{2}}\left({\bf A},\mathbb{C}\right)\right)$.
We denote the orbit of ${\rm W}\in{\rm G}_{s}\left({\rm H^{2}}\left({\bf
A},\mathbb{C}\right)\right)$ under this action by ${\rm Orb}({\rm W})$. Let
${\rm
W}_{1}=\left\langle\left[\theta_{1}\right],\left[\theta_{2}\right],\dots,\left[\theta_{s}\right]\right\rangle,{\rm
W}_{2}=\left\langle\left[\vartheta_{1}\right],\left[\vartheta_{2}\right],\dots,\left[\vartheta_{s}\right]\right\rangle\in{\rm
G}_{s}\left({\rm H^{2}}\left({\bf A},\mathbb{C}\right)\right).$
Similarly to [20, Lemma 15], if ${\rm W}_{1}={\rm W}_{2}$, then it holds that
$\bigcap\limits_{i=1}^{s}{\rm Ann}(\theta_{i})\cap{\rm Ann}\left({\bf
A}\right)=\bigcap\limits_{i=1}^{s}{\rm Ann}(\vartheta_{i})\cap{\rm Ann}({\bf
A}),$
and therefore the set
${\rm T}_{s}({\bf A})=\left\\{{\rm
W}=\left\langle\left[\theta_{1}\right],\left[\theta_{2}\right],\dots,\left[\theta_{s}\right]\right\rangle\in{\rm
G}_{s}\left({\rm H^{2}}\left({\bf
A},\mathbb{C}\right)\right):\bigcap\limits_{i=1}^{s}{\rm
Ann}(\theta_{i})\cap{\rm Ann}({\bf A})=0\right\\}$
is well defined, and it is also stable under the action of ${\rm Aut}({\bf
A})$ (see [20, Lemma 16]).
Now, let ${{\bf V}}$ be an $s$-dimensional linear space and let us denote by
${\rm E}\left({\bf A},{{\bf V}}\right)$ the set of all non-split
$s$-dimensional central extensions of ${\bf A}$ by ${{\bf V}}$. We can write
${\rm E}\left({\bf A},{{\bf V}}\right)=\left\\{{\bf
A}_{\theta}:\theta\left(x,y\right)=\sum_{i=1}^{s}\theta_{i}\left(x,y\right)e_{i}\
\ \text{and}\ \
\left\langle\left[\theta_{1}\right],\left[\theta_{2}\right],\dots,\left[\theta_{s}\right]\right\rangle\in{\rm
T}_{s}({\bf A})\right\\}.$
Finally, we are prepared to state our main result, which can be proved as in
[20, Lemma 17].
###### Lemma 3.
Let ${\bf A}_{\theta},{\bf A}_{\vartheta}\in{\rm E}\left({\bf A},{{\bf
V}}\right)$. Suppose that
$\theta\left(x,y\right)=\displaystyle\sum_{i=1}^{s}\theta_{i}\left(x,y\right)e_{i}$
and
$\vartheta\left(x,y\right)=\displaystyle\sum_{i=1}^{s}\vartheta_{i}\left(x,y\right)e_{i}$.
Then the assosymmetric algebras ${\bf A}_{\theta}$ and ${\bf A}_{\vartheta}$
are isomorphic if and only if
${\rm
Orb}\left\langle\left[\theta_{1}\right],\left[\theta_{2}\right],\dots,\left[\theta_{s}\right]\right\rangle={\rm
Orb}\left\langle\left[\vartheta_{1}\right],\left[\vartheta_{2}\right],\dots,\left[\vartheta_{s}\right]\right\rangle.$
Hence, there exists a bijective correspondence between the set of ${\rm Aut}({\bf
A})$-orbits on ${\rm T}_{s}\left({\bf A}\right)$ and the set of isomorphism
classes of algebras in ${\rm E}\left({\bf A},{{\bf V}}\right)$. Consequently, we have a
procedure that allows us, given an assosymmetric algebra ${\bf A}^{\prime}$ of
dimension $n-s$, to construct all non-split central extensions of ${\bf
A}^{\prime}$.
Procedure
Let ${\bf A}^{\prime}$ be an assosymmetric algebra of dimension $n-s$.
1. (1)
Determine ${\rm H^{2}}({\bf A}^{\prime},\mathbb{C})$, ${\rm Ann}({\bf
A}^{\prime})$ and ${\rm Aut}({\bf A}^{\prime})$.
2. (2)
Determine the set of ${\rm Aut}({\bf A}^{\prime})$-orbits on ${\rm T}_{s}({\bf
A}^{\prime})$.
3. (3)
For each orbit, construct the assosymmetric algebra associated with a
representative of it.
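Step (3) of the procedure is purely mechanical: the multiplication of ${\bf A}_{\theta}={\bf A}^{\prime}\oplus{\bf V}$ is the multiplication of ${\bf A}^{\prime}$ corrected by the cocycle values in ${\bf V}$. The toy sketch below (our own illustration, not the code of [26]) rebuilds ${\mathcal{A}}^{3}_{02}(\alpha)$ as a $1$-dimensional central extension of ${\mathcal{A}}^{2}_{01}$ by $\theta=\Delta_{12}+\alpha\Delta_{21}$:

```python
import numpy as np

def central_extension(mult, thetas):
    """Structure constants of A_theta = A + V.

    mult[i][j] : vector of length n, the product e_i e_j in A
    thetas[k]  : n x n matrix of the scalar cocycle theta_k
    Returns an (n+s) x (n+s) table of product vectors of length n+s."""
    n, s = len(mult), len(thetas)
    ext = [[np.zeros(n + s) for _ in range(n + s)] for _ in range(n + s)]
    for i in range(n):
        for j in range(n):
            ext[i][j][:n] = mult[i][j]          # product taken in A
            for k, th in enumerate(thetas):
                ext[i][j][n + k] = th[i, j]     # cocycle part landing in V
    return ext  # products involving V vanish, since V is central

# A^2_01 : e1 e1 = e2
mult = [[np.array([0., 1.]), np.zeros(2)], [np.zeros(2), np.zeros(2)]]
alpha = 2.0
theta = np.array([[0., 1.], [alpha, 0.]])       # Delta_12 + alpha * Delta_21

A = central_extension(mult, [theta])
# Recovers A^3_02(alpha): e1 e1 = e2, e1 e2 = e3, e2 e1 = alpha e3
print(A[0][0], A[0][1], A[1][0])                # e2, e3, alpha*e3
```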
### 1.2. Notations
Let ${\bf A}$ be an assosymmetric algebra with a basis
$e_{1},e_{2},\dots,e_{n}$. By $\Delta_{ij}$ we denote the
bilinear form $\Delta_{ij}\colon{\bf A}\times{\bf
A}\longrightarrow\mathbb{C}$ with
$\Delta_{ij}\left(e_{l},e_{m}\right)=\delta_{il}\delta_{jm}$. The set
$\left\\{\Delta_{ij}:1\leq i,j\leq n\right\\}$ is a basis of the linear space
of bilinear forms on ${\bf A}$, so every $\theta\in{\rm Z^{2}}\left({\bf
A},\mathbb{C}\right)$ can be uniquely written as $\theta=\displaystyle\sum_{1\leq i,j\leq
n}c_{ij}\Delta_{{i}{j}}$, where $c_{ij}\in\mathbb{C}$. Let us fix the
following notation:
$\begin{array}[]{lll}{\mathcal{A}}^{i}_{j}&\mbox{---}&j\mbox{th
}i\mbox{-dimensional one-generated nilpotent assosymmetric algebra.}\\\
\end{array}$
### 1.3. The algebraic classification of low dimensional one-generated
nilpotent assosymmetric algebras
The following table (obtained from [23]) describes all $2$-, $3$- and
$4$-dimensional one-generated nilpotent assosymmetric algebras:
${\mathcal{A}}^{2}_{01}$ | $:$ | $e_{1}e_{1}=e_{2}$ | | | | |
---|---|---|---|---|---|---|---
${\mathcal{A}}^{3}_{01}$ | $:$ | $e_{1}e_{1}=e_{3}$ | $e_{2}e_{1}=e_{3}$ | | | |
${\mathcal{A}}^{3}_{02}(\alpha)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{2}e_{1}=\alpha e_{3}$ | | |
${\mathcal{A}}^{4}_{01}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{2}e_{1}=e_{3}$ | | |
${\mathcal{A}}^{4}_{02}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{4}$ | $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=-e_{4}$ | $e_{3}e_{1}=-2e_{4}$
${\mathcal{A}}^{4}_{03}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{3}=e_{4}$ | $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=-e_{4}$ | $e_{3}e_{1}=-2e_{4}$ |
${\mathcal{A}}^{4}_{04}(\alpha)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=(2-\alpha)e_{4}$ | $e_{2}e_{1}=\alpha e_{3}$ | $e_{2}e_{2}=(\alpha^{2}-\alpha+1)e_{4}$ | $e_{3}e_{1}=(2\alpha-1)e_{4}$
${\mathcal{A}}^{4}_{05}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=-2e_{4}$ | $e_{2}e_{1}=e_{4}$ | $e_{2}e_{2}=-e_{4}$ | $e_{3}e_{1}=e_{4}$
${\mathcal{A}}^{4}_{06}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=-3e_{4}$ | $e_{2}e_{1}=-e_{3}+e_{4}$ | $e_{2}e_{2}=-3e_{4}$ | $e_{3}e_{1}=3e_{4}$
###### Remark 4.
Note that a non-split central extension of a split algebra cannot be a
one-generated algebra. Hence, we will consider central extensions only of
non-split one-generated nilpotent algebras.
## 2\. Classification of $5$-dimensional one-generated nilpotent
assosymmetric algebras
### 2.1. $2$-dimensional central extensions of $3$-dimensional one-generated
algebras
The second cohomology spaces of the algebras
${\mathcal{A}}^{3}_{01}$ and ${\mathcal{A}}^{3}_{02}(\alpha)$ are given in [23].
Therefore, the two-dimensional central extensions of these algebras give the
following algebras:
${\mathcal{A}}^{5}_{01}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{5}$ |
---|---|---|---|---|---
| | $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=-e_{5}$ | $e_{3}e_{1}=-2e_{5}$ |
${\mathcal{A}}^{5}_{02}(\alpha)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=(\alpha-2)e_{5}$ |
| | $e_{2}e_{1}=\alpha e_{3}+e_{4}$ | $e_{2}e_{2}=(\alpha-\alpha^{2}-1)e_{5}$ | $e_{3}e_{1}=(1-2\alpha)e_{5}$ |
### 2.2. Cohomology spaces of $4$-dimensional one-generated assosymmetric
algebras
In the present table we collect all useful information about the ${\rm
Z}^{2},{\rm B}^{2}$ and ${\rm H}^{2}$ spaces of all $4$-dimensional
one-generated algebras, computed via the code in [26].
${\rm Z^{2}}\left({\mathcal{A}}^{4}_{01}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{41},\Delta_{14}-\Delta_{31}-\Delta_{41},\Delta_{22}+2\Delta_{31}+\Delta_{41}\right\rangle$
---|---|---
${\rm B^{2}}\left({\mathcal{A}}^{4}_{01}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12},\Delta_{21}\right\rangle$
${\rm H^{2}}\left({\mathcal{A}}^{4}_{01}\right)$ | $=$ | $\left\langle[\Delta_{13}]+[\Delta_{41}],[\Delta_{14}]-[\Delta_{31}]-[\Delta_{41}],[\Delta_{22}]+2[\Delta_{31}]+[\Delta_{41}]\right\rangle$
${\rm Z^{2}}\left({\mathcal{A}}^{4}_{02}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}-\Delta_{22}-2\Delta_{31}\right\rangle$
${\rm B^{2}}\left({\mathcal{A}}^{4}_{02}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{21},\Delta_{12}+\Delta_{13}-\Delta_{22}-2\Delta_{31}\right\rangle$
${\rm H^{2}}\left({\mathcal{A}}^{4}_{02}\right)$ | $=$ | $\left\langle[\Delta_{12}]\right\rangle$
${\rm Z^{2}}\left({\mathcal{A}}^{4}_{03}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}-\Delta_{22}-2\Delta_{31}\right\rangle$
${\rm B^{2}}\left({\mathcal{A}}^{4}_{03}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{21},\Delta_{13}-\Delta_{22}-2\Delta_{31}\right\rangle$
${\rm H^{2}}\left({\mathcal{A}}^{4}_{03}\right)$ | $=$ | $\left\langle[\Delta_{12}]\right\rangle$
${\rm Z^{2}}\left({\mathcal{A}}^{4}_{04}(\alpha)_{\alpha\neq 1}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12},\Delta_{21},(2-\alpha)\Delta_{13}+(\alpha^{2}-\alpha+1)\Delta_{22}+(2\alpha-1)\Delta_{31}\right\rangle$
${\rm B^{2}}\left({\mathcal{A}}^{4}_{04}(\alpha)_{\alpha\neq 1}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12}+\alpha\Delta_{21},(2-\alpha)\Delta_{13}+(\alpha^{2}-\alpha+1)\Delta_{22}+(2\alpha-1)\Delta_{31}\right\rangle$
${\rm H^{2}}\left({\mathcal{A}}^{4}_{04}(\alpha)_{\alpha\neq 1}\right)$ | $=$ | $\left\langle[\Delta_{12}]\right\rangle$
${\rm Z^{2}}\left({\mathcal{A}}^{4}_{04}(1)\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{22}+\Delta_{31},\Delta_{14}+\Delta_{23}+\Delta_{32}+\Delta_{41}\right\rangle$
${\rm B^{2}}\left({\mathcal{A}}^{4}_{04}(1)\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12}+\Delta_{21},\Delta_{13}+\Delta_{22}+\Delta_{31}\right\rangle$
${\rm H^{2}}\left({\mathcal{A}}^{4}_{04}(1)\right)$ | $=$ | $\left\langle[\Delta_{21}],[\Delta_{14}]+[\Delta_{23}]+[\Delta_{32}]+[\Delta_{41}]\right\rangle$
${\rm Z^{2}}\left({\mathcal{A}}^{4}_{05}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12},\Delta_{21},2\Delta_{13}+\Delta_{22}-\Delta_{31}\right\rangle$
${\rm B^{2}}\left({\mathcal{A}}^{4}_{05}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12},-2\Delta_{13}+\Delta_{21}-\Delta_{22}+\Delta_{31}\right\rangle$
${\rm H^{2}}\left({\mathcal{A}}^{4}_{05}\right)$ | $=$ | $\left\langle[\Delta_{21}]\right\rangle$
${\rm Z^{2}}\left({\mathcal{A}}^{4}_{06}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{22}-\Delta_{31}\right\rangle$
${\rm B^{2}}\left({\mathcal{A}}^{4}_{06}\right)$ | $=$ | $\left\langle\Delta_{11},\Delta_{12}-\Delta_{21},-3\Delta_{13}+\Delta_{21}-3\Delta_{22}+3\Delta_{31}\right\rangle$
${\rm H^{2}}\left({\mathcal{A}}^{4}_{06}\right)$ | $=$ | $\left\langle[\Delta_{13}]+[\Delta_{22}]-[\Delta_{31}]\right\rangle$
###### Remark 5.
Extensions of the algebras ${\mathcal{A}}^{4}_{02},$ ${\mathcal{A}}^{4}_{03},$
${\mathcal{A}}^{4}_{04}(\alpha)_{\alpha\neq 1},$ ${\mathcal{A}}^{4}_{05}$ and
${\mathcal{A}}^{4}_{06}$ give algebras with $2$-dimensional annihilator. Hence,
in the following subsections we study the central extensions of the remaining
algebras.
### 2.3. Central extensions of ${\mathcal{A}}^{4}_{01}$
Let us use the following notations:
$\nabla_{1}=[\Delta_{13}]+[\Delta_{41}],\
\nabla_{2}=[\Delta_{14}]-[\Delta_{31}]-[\Delta_{41}],\
\nabla_{3}=[\Delta_{22}]+2[\Delta_{31}]+[\Delta_{41}].$
The automorphism group of ${\mathcal{A}}^{4}_{01}$ consists of invertible
matrices of the form
$\phi=\left(\begin{array}[]{cccc}x&0&0&0\\\ y&x^{2}&0&0\\\ z&xy&x^{3}&0\\\
t&xy&0&x^{3}\end{array}\right).$
Since
$\phi^{T}\left(\begin{array}[]{cccc}0&0&\alpha_{1}&\alpha_{2}\\\
0&\alpha_{3}&0&0\\\ -\alpha_{2}+2\alpha_{3}&0&0&0\\\
\alpha_{1}-\alpha_{2}+\alpha_{3}&0&0&0\\\
\end{array}\right)\phi=\left(\begin{array}[]{cccc}\alpha^{*}&\alpha^{**}&\alpha^{*}_{1}&\alpha^{*}_{2}\\\
\alpha^{***}&\alpha^{*}_{3}&0&0\\\ -\alpha^{*}_{2}+2\alpha^{*}_{3}&0&0&0\\\
\alpha^{*}_{1}-\alpha^{*}_{2}+\alpha^{*}_{3}&0&0&0\\\ \end{array}\right),$
we have that the action of ${\rm Aut}({\mathcal{A}}^{4}_{01})$ on the subspace
$\langle\sum\limits_{i=1}^{3}\alpha_{i}\nabla_{i}\rangle$ is given by
$\langle\sum\limits_{i=1}^{3}\alpha^{*}_{i}\nabla_{i}\rangle,$ where
$\begin{array}[]{rclrclrcl}\alpha^{*}_{1}&=&x^{4}\alpha_{1},&\alpha^{*}_{2}&=&x^{4}\alpha_{2},&\alpha^{*}_{3}&=&x^{4}\alpha_{3}.\end{array}$
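Identities like $\alpha^{*}_{i}=x^{4}\alpha_{i}$ can be double-checked numerically: for an invertible $\phi$ of the stated form, the congruence $\phi^{T}M\phi$ of the cocycle matrix reproduces the pattern of $M$ with each $\alpha_{i}$ rescaled by $x^{4}$ (the starred entries absorb coboundary contributions and may be discarded). A sketch with one concrete parameter choice of our own:

```python
import numpy as np

x, y, z, t = 2.0, 3.0, 5.0, 7.0
a1, a2, a3 = 1.0, 2.0, 3.0

# Cocycle matrix of a1*nabla1 + a2*nabla2 + a3*nabla3 (coboundary part zero)
M = np.array([[0,         0,  a1, a2],
              [0,        a3,   0,  0],
              [-a2 + 2*a3, 0,   0,  0],
              [a1 - a2 + a3, 0, 0,  0]])

# Automorphism of A^4_01 in the stated shape
phi = np.array([[x, 0,    0,    0],
                [y, x**2, 0,    0],
                [z, x*y,  x**3, 0],
                [t, x*y,  0,    x**3]])

N = phi.T @ M @ phi
# The new coefficients sit in the same positions as a1, a2, a3 in M,
# each rescaled by x^4 = 16:
print(N[0, 2], N[0, 3], N[1, 1])  # prints 16.0 32.0 48.0
```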
#### 2.3.1. $1$-dimensional central extensions
We have the following new cases:
1. (1)
If $\alpha_{1}\neq 0,\alpha_{2}=0,\alpha_{3}=0,$ then choosing
$x=\frac{1}{\sqrt[4]{\alpha_{1}}},$ we have the representative
$\langle\nabla_{1}\rangle;$
2. (2)
If $\alpha_{2}\neq 0,\alpha_{3}=0,$ then choosing
$x=\frac{1}{\sqrt[4]{\alpha_{2}}}$ and $\alpha=\frac{\alpha_{1}}{\alpha_{2}},$ we
have the representative $\langle\alpha\nabla_{1}+\nabla_{2}\rangle;$
3. (3)
If $\alpha_{3}\neq 0,$ then choosing
$x=\frac{1}{\sqrt[4]{\alpha_{3}}},\alpha=\frac{\alpha_{1}}{\alpha_{3}},\beta=\frac{\alpha_{2}}{\alpha_{3}},$
we have the representative
$\langle\alpha\nabla_{1}+\beta\nabla_{2}+\nabla_{3}\rangle.$
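Because all three weights equal $x^{4}$, the orbit of a nonzero triple $(\alpha_{1},\alpha_{2},\alpha_{3})$ is its projective class, and the three cases simply normalise the last nonzero coefficient to $1$. A small sketch of this normalisation (our own illustration):

```python
def representative(a):
    """Canonical representative of <a1*n1 + a2*n2 + a3*n3> under the
    action a_i -> x^4 a_i: all weights are equal, so the orbit is the
    projective class of (a1, a2, a3); normalise the last nonzero entry."""
    for k in range(len(a) - 1, -1, -1):
        if a[k] != 0:
            return tuple(ai / a[k] for ai in a)
    raise ValueError("the zero cocycle gives no central extension")

print(representative((5, 0, 0)))   # case (1): <n1>
print(representative((3, 6, 0)))   # case (2): <(1/2) n1 + n2>
print(representative((2, 4, 8)))   # case (3): <(1/4) n1 + (1/2) n2 + n3>
```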
From here, we have the following new $5$-dimensional one-generated nilpotent
assosymmetric algebras constructed from ${\mathcal{A}}^{4}_{01}:$
${\mathcal{A}}^{5}_{03}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{2}e_{1}=e_{3}$ | $e_{1}e_{3}=e_{5}$ | $e_{4}e_{1}=e_{5}$
---|---|---|---|---|---|---
${\mathcal{A}}^{5}_{04}(\alpha)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{2}e_{1}=e_{3}$ | $e_{1}e_{3}=\alpha e_{5}$ |
| | $e_{1}e_{4}=e_{5}$ | $e_{3}e_{1}=-e_{5}$ | $e_{4}e_{1}=(\alpha-1)e_{5}$ | |
${\mathcal{A}}^{5}_{05}(\alpha,\beta)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{2}e_{1}=e_{3}$ | $e_{1}e_{3}=\alpha e_{5}$ |
| | $e_{1}e_{4}=\beta e_{5}$ | $e_{2}e_{2}=e_{5}$ | $e_{3}e_{1}=(2-\beta)e_{5}$ | $e_{4}e_{1}=(\alpha-\beta+1)e_{5}$
#### 2.3.2. $2$-dimensional central extensions
Consider the vector space generated by the following two cocycles
$\begin{array}[]{rcl}\theta_{1}&=&\alpha_{1}\nabla_{1}+\alpha_{2}\nabla_{2}+\alpha_{3}\nabla_{3},\\\
\theta_{2}&=&\beta_{1}\nabla_{1}+\beta_{2}\nabla_{2}.\end{array}$
Here we have the following cases:
1. (1)
If $\alpha_{3}=0,$ then we have the representative
$\langle\nabla_{1},\nabla_{2}\rangle;$
2. (2)
If $\alpha_{3}\neq 0,\beta_{1}\neq 0,\beta_{2}=0,$ then we have the
representative $\langle\nabla_{1},\alpha\nabla_{2}+\nabla_{3}\rangle;$
3. (3)
If $\alpha_{3}\neq 0,\beta_{2}\neq 0,$ then we have the representative
$\langle\alpha\nabla_{1}+\nabla_{2},\beta\nabla_{1}+\nabla_{3}\rangle.$
We have the following new $6$-dimensional one-generated nilpotent
assosymmetric algebras constructed from ${\mathcal{A}}^{4}_{01}:$
${\mathcal{A}}^{6}_{01}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{5}$ | $e_{1}e_{4}=e_{6}$
---|---|---|---|---|---
| | $e_{2}e_{1}=e_{3}$ | $e_{3}e_{1}=-e_{6}$ | $e_{4}e_{1}=e_{5}-e_{6}$ |
${\mathcal{A}}^{6}_{02}(\alpha)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{5}$ | $e_{1}e_{4}=\alpha e_{6}$
| | $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=e_{6}$ | $e_{3}e_{1}=(2-\alpha)e_{6}$ | $e_{4}e_{1}=e_{5}-(\alpha-1)e_{6}$
${\mathcal{A}}^{6}_{03}(\alpha,\beta)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=\alpha e_{5}+\beta e_{6}$ | $e_{1}e_{4}=e_{5}$
| | $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=e_{6}$ | $e_{3}e_{1}=-e_{5}+2e_{6}$ | $e_{4}e_{1}=(\alpha-1)e_{5}+(\beta+1)e_{6}$
### 2.4. Central extensions of ${\mathcal{A}}^{4}_{04}(1)$
Let us use the following notations:
$\nabla_{1}=[\Delta_{21}],\
\nabla_{2}=[\Delta_{14}]+[\Delta_{23}]+[\Delta_{32}]+[\Delta_{41}].$
The automorphism group of ${\mathcal{A}}^{4}_{04}(1)$ consists of invertible
matrices of the form
$\phi=\left(\begin{array}[]{cccc}x&0&0&0\\\ y&x^{2}&0&0\\\ z&2xy&x^{3}&0\\\
t&2xz+y^{2}&3yx^{2}&x^{4}\end{array}\right).$
Since
$\phi^{T}\left(\begin{array}[]{cccc}0&0&0&\alpha_{2}\\\
\alpha_{1}&0&\alpha_{2}&0\\\ 0&\alpha_{2}&0&0\\\ \alpha_{2}&0&0&0\\\
\end{array}\right)\phi=\left(\begin{array}[]{cccc}\alpha^{***}&\alpha^{*}&\alpha^{**}&\alpha^{*}_{2}\\\
\alpha^{*}+\alpha^{*}_{1}&\alpha^{**}&\alpha^{*}_{2}&0\\\
\alpha^{**}&\alpha^{*}_{2}&0&0\\\ \alpha^{*}_{2}&0&0&0\\\ \end{array}\right),$
we have that the action of ${\rm Aut}({\mathcal{A}}^{4}_{04}(1))$ on the
subspace $\langle\sum\limits_{i=1}^{2}\alpha_{i}\nabla_{i}\rangle$ is given by
$\langle\sum\limits_{i=1}^{2}\alpha^{*}_{i}\nabla_{i}\rangle,$ where
$\begin{array}[]{rclrcl}\alpha^{*}_{1}&=&x^{3}\alpha_{1},&\alpha^{*}_{2}&=&x^{5}\alpha_{2}.\end{array}$
#### 2.4.1. $1$-dimensional central extensions
Note that if $\alpha_{2}=0$ then we obtain algebras with 2-dimensional
annihilator. Therefore, we have two representatives $\langle\nabla_{2}\rangle$
and $\langle\nabla_{1}+\nabla_{2}\rangle$ depending on whether $\alpha_{1}=0$
or not.
We have the following new $5$-dimensional nilpotent assosymmetric algebras
constructed from ${\mathcal{A}}^{4}_{04}(1):$
${\mathcal{A}}^{5}_{06}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$ | $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=e_{4}$
---|---|---|---|---|---|---
| | $e_{3}e_{1}=e_{4}$ | $e_{1}e_{4}=e_{5}$ | $e_{2}e_{3}=e_{5}$ | $e_{3}e_{2}=e_{5}$ | $e_{4}e_{1}=e_{5}$
${\mathcal{A}}^{5}_{07}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$ | $e_{2}e_{1}=e_{3}+e_{5}$ | $e_{2}e_{2}=e_{4}$
| | $e_{3}e_{1}=e_{4}$ | $e_{1}e_{4}=e_{5}$ | $e_{2}e_{3}=e_{5}$ | $e_{3}e_{2}=e_{5}$ | $e_{4}e_{1}=e_{5}$
#### 2.4.2. $2$-dimensional central extensions
We have only one new $6$-dimensional nilpotent assosymmetric algebra
constructed from ${\mathcal{A}}^{4}_{04}(1):$
${\mathcal{A}}^{6}_{04}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$ | $e_{1}e_{4}=e_{5}$ | $e_{2}e_{1}=e_{3}+e_{6}$
---|---|---|---|---|---|---
| | $e_{2}e_{2}=e_{4}$ | $e_{2}e_{3}=e_{5}$ | $e_{3}e_{1}=e_{4}$ | $e_{3}e_{2}=e_{5}$ | $e_{4}e_{1}=e_{5}$
### 2.5. Classification theorem
Summarizing the results of the previous sections, we obtain the following theorem.
###### Theorem A.
Let $\mathcal{A}$ be a $5$-dimensional complex one-generated nilpotent
assosymmetric algebra. Then $\mathcal{A}$ is isomorphic to one of the algebras
in the following list:
${\mathcal{A}}^{5}_{01}$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{5}$ | |
---|---|---|---|---|---
| $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=-e_{5}$ | $e_{3}e_{1}=-2e_{5}$ | |
${\mathcal{A}}^{5}_{02}(\alpha)$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=(\alpha-2)e_{5}$ | $e_{2}e_{1}=\alpha e_{3}+e_{4}$
| $e_{2}e_{2}=(\alpha-\alpha^{2}-1)e_{5}$ | $e_{3}e_{1}=(1-2\alpha)e_{5}$ | |
${\mathcal{A}}^{5}_{03}$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{5}$ | $e_{2}e_{1}=e_{3}$ | $e_{4}e_{1}=e_{5}$
${\mathcal{A}}^{5}_{04}(\alpha)$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=\alpha e_{5}$ | $e_{1}e_{4}=e_{5}$ |
| $e_{2}e_{1}=e_{3}$ | $e_{3}e_{1}=-e_{5}$ | $e_{4}e_{1}=(\alpha-1)e_{5}$ | |
${\mathcal{A}}^{5}_{05}(\alpha,\beta)$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=\alpha e_{5}$ | $e_{1}e_{4}=\beta e_{5}$ |
| $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=e_{5}$ | $e_{3}e_{1}=(2-\beta)e_{5}$ | $e_{4}e_{1}=(\alpha-\beta+1)e_{5}$
${\mathcal{A}}^{5}_{06}$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$ | $e_{1}e_{4}=e_{5}$ | $e_{2}e_{1}=e_{3}$
| $e_{2}e_{2}=e_{4}$ | $e_{2}e_{3}=e_{5}$ | $e_{3}e_{1}=e_{4}$ | $e_{3}e_{2}=e_{5}$ | $e_{4}e_{1}=e_{5}$
${\mathcal{A}}^{5}_{07}$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$ | $e_{1}e_{4}=e_{5}$ | $e_{2}e_{1}=e_{3}+e_{5}$
| $e_{2}e_{2}=e_{4}$ | $e_{2}e_{3}=e_{5}$ | $e_{3}e_{1}=e_{4}$ | $e_{3}e_{2}=e_{5}$ | $e_{4}e_{1}=e_{5}$
## 3\. Classification of 6-dimensional one-generated nilpotent assosymmetric
algebras
### 3.1. Cohomology spaces of $5$-dimensional one-generated assosymmetric
algebras
The multiplication tables of all $5$-dimensional one-generated nilpotent
assosymmetric algebras are given in Theorem A (see the previous section).
All necessary information about the coboundaries, cocycles and second cohomology
spaces of these algebras was calculated by the code in [26] and is given in
the following table.
Table B. The list of cohomology spaces of 5-dimensional one-generated
assosymmetric algebras
---
${\rm Z^{2}}({\mathcal{A}}^{5}_{01})$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{41},\Delta_{22}+2\Delta_{31}+\Delta_{41},\Delta_{14}-\Delta_{31}-\Delta_{41}\end{array}\Big{\rangle}$
${\rm B^{2}}({\mathcal{A}}^{5}_{01})$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}-\Delta_{22}-2\Delta_{31}\end{array}\Big{\rangle}$
${\rm H^{2}}({\mathcal{A}}^{5}_{01})$ | $=$ | $\Big{\langle}\begin{array}[]{l}[\Delta_{13}]+[\Delta_{41}],[\Delta_{14}]-[\Delta_{31}]+[\Delta_{41}]\end{array}\Big{\rangle}$
${\rm Z^{2}}({\mathcal{A}}^{5}_{02}(\alpha\neq 1))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+(1-\alpha)\Delta_{22}+(1-2\alpha)\Delta_{41},\\\ \Delta_{14}-\Delta_{22}-2\Delta_{41},\Delta_{22}+\Delta_{31}+(2-\alpha)\Delta_{41}\end{array}\Big{\rangle}$
${\rm B^{2}}({\mathcal{A}}^{5}_{02}(\alpha\neq 1))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},(\alpha-2)\Delta_{13}+(\alpha-\alpha^{2}-1)\Delta_{22}+(1-2\alpha)\Delta_{31}\end{array}\Big{\rangle}$
${\rm H^{2}}({\mathcal{A}}^{5}_{02}(\alpha\neq 1))$ | $=$ | $\Big{\langle}\begin{array}[]{l}[\Delta_{14}]-[\Delta_{22}]-2[\Delta_{41}],[\Delta_{22}]+[\Delta_{31}]+(2-\alpha)[\Delta_{41}]\end{array}\Big{\rangle}$
${\rm Z^{2}}({\mathcal{A}}^{5}_{02}(1))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}-\Delta_{41},\Delta_{22}+\Delta_{31}+\Delta_{41},\Delta_{14}+\Delta_{31}-\Delta_{41},\Delta_{15}-\Delta_{23}-\Delta_{32}+\Delta_{51}\end{array}\Big{\rangle}$
${\rm B^{2}}({\mathcal{A}}^{5}_{02}(1))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{22}+\Delta_{31}\end{array}\Big{\rangle}$
${\rm H^{2}}({\mathcal{A}}^{5}_{02}(1))$ | $=$ | $\Big{\langle}\begin{array}[]{l}[\Delta_{13}]-[\Delta_{41}],[\Delta_{14}]+[\Delta_{31}]-[\Delta_{41}],[\Delta_{15}]-[\Delta_{23}]-[\Delta_{32}]+[\Delta_{51}]\end{array}\Big{\rangle}$
${\rm Z^{2}}({\mathcal{A}}^{5}_{03})$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{41},\Delta_{14}-\Delta_{31}-\Delta_{41},\Delta_{22}+2\Delta_{31}+\Delta_{41}\end{array}\Big{\rangle}$
${\rm B^{2}}({\mathcal{A}}^{5}_{03})$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{41}\end{array}\Big{\rangle}$
${\rm H^{2}}({\mathcal{A}}^{5}_{03})$ | $=$ | $\Big{\langle}\begin{array}[]{l}[\Delta_{14}]-[\Delta_{31}]-[\Delta_{41}],[\Delta_{22}]+2[\Delta_{31}]+[\Delta_{41}]\end{array}\Big{\rangle}$
${\rm Z^{2}}({\mathcal{A}}^{5}_{04}(\alpha))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{14}-\Delta_{31},\Delta_{14}-\Delta_{31}-\Delta_{41},\Delta_{14}+\Delta_{22}+\Delta_{31}\end{array}\Big{\rangle}$
${\rm B^{2}}({\mathcal{A}}^{5}_{04}(\alpha))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\alpha\Delta_{13}+\Delta_{14}-\Delta_{31}+(\alpha-1)\Delta_{41}\end{array}\Big{\rangle}$
${\rm H^{2}}({\mathcal{A}}^{5}_{04}(\alpha))$ | $=$ | $\Big{\langle}\begin{array}[]{l}[\Delta_{13}]+[\Delta_{41}],\alpha[\Delta_{13}]+2[\Delta_{14}]+[\Delta_{22}]+(\alpha-1)[\Delta_{41}]\end{array}\Big{\rangle}$
${\rm Z^{2}}({\mathcal{A}}^{5}_{05}(\alpha,\beta))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{14}-\Delta_{31},\Delta_{14}-\Delta_{31}-\Delta_{41},\Delta_{14}+\Delta_{22}+\Delta_{31}\end{array}\Big{\rangle}$
${\rm B^{2}}({\mathcal{A}}^{5}_{05}(\alpha,\beta))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\alpha\Delta_{13}+\beta\Delta_{14}+\Delta_{22}+(2-\beta)\Delta_{31}+(\alpha-\beta+1)\Delta_{41}\end{array}\Big{\rangle}$
${\rm H^{2}}({\mathcal{A}}^{5}_{05}(\alpha,\beta))$ | $=$ | $\Big{\langle}\begin{array}[]{l}[\Delta_{13}]+[\Delta_{14}]-[\Delta_{31}],[\Delta_{14}]-[\Delta_{31}]-[\Delta_{41}]\end{array}\Big{\rangle}$
$\alpha\neq\frac{1}{2}(\beta\pm\sqrt{-2+6\beta-3\beta^{2}})$
${\rm Z^{2}}({\mathcal{A}}^{5}_{05}(\alpha,\beta))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{14}-\Delta_{31}-\Delta_{41},\Delta_{13}+\Delta_{41},(2\beta-1)\Delta_{15}+\\\ +(2\beta(\alpha-1)+1)\Delta_{23}+(\alpha+2\beta^{2}-3\beta+1)\Delta_{24}+(-2\alpha\beta+3\alpha+2\beta^{2}-3\beta+1)\Delta_{32}+\\\ +(-2\alpha\beta+2\alpha+2\beta-1)\Delta_{42}+(2\alpha-2\beta+1)\Delta_{51},\Delta_{22}+2\Delta_{31}+\Delta_{41}\end{array}\Big{\rangle}$
${\rm B^{2}}({\mathcal{A}}^{5}_{05}(\alpha,\beta))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\alpha\Delta_{13}+\beta\Delta_{14}+\Delta_{22}+(2-\beta)\Delta_{31}+(\alpha-\beta+1)\Delta_{41}\end{array}\Big{\rangle}$
${\rm H^{2}}({\mathcal{A}}^{5}_{05}(\alpha,\beta))$ | $=$ | $\Big{\langle}\begin{array}[]{l}[\Delta_{14}]-[\Delta_{31}]-[\Delta_{41}],[\Delta_{13}]+[\Delta_{41}],(2\beta-1)[\Delta_{15}]+(2\beta(\alpha-1)+1)[\Delta_{23}]+(\alpha+2\beta^{2}-3\beta+1)[\Delta_{24}]\\\ +(-2\alpha\beta+3\alpha+2\beta^{2}-3\beta+1)[\Delta_{32}]+(-2\alpha\beta+2\alpha+2\beta-1)[\Delta_{42}]+(2\alpha-2\beta+1)[\Delta_{51}]\end{array}\Big{\rangle}$
$\alpha=\frac{1}{2}(\beta\pm\sqrt{-2+6\beta-3\beta^{2}})$ and
$(\alpha,\beta)\neq(0,\frac{1}{2})$
${\rm Z^{2}}({\mathcal{A}}^{5}_{05}(0,\frac{1}{2}))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{41},\Delta_{14}-\Delta_{31}-\Delta_{41},\\\ 2\Delta_{15}-3\Delta_{23}-2\Delta_{24}-3\Delta_{32}+\Delta_{42}-4\Delta_{51},\Delta_{22}+2\Delta_{31}+\Delta_{41}\end{array}\Big{\rangle}$
${\rm B^{2}}({\mathcal{A}}^{5}_{05}(0,\frac{1}{2}))$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{14}+2\Delta_{22}+3\Delta_{31}+\Delta_{41}\end{array}\Big{\rangle}$
${\rm H^{2}}({\mathcal{A}}^{5}_{05}(0,\frac{1}{2}))$ | $=$ | $\Big{\langle}\begin{array}[]{l}[\Delta_{14}]-[\Delta_{31}]-[\Delta_{41}],[\Delta_{13}]+[\Delta_{41}],2[\Delta_{15}]-3[\Delta_{23}]-2[\Delta_{24}]-3[\Delta_{32}]+[\Delta_{42}]-4[\Delta_{51}]\end{array}\Big{\rangle}$
${\rm Z^{2}}({\mathcal{A}}^{5}_{06})$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{22}+\Delta_{31},\Delta_{14}+\Delta_{23}+\Delta_{32}+\Delta_{41},\\\ \Delta_{15}+\Delta_{24}+\Delta_{33}+\Delta_{42}+\Delta_{51}\end{array}\Big{\rangle}$
${\rm B^{2}}({\mathcal{A}}^{5}_{06})$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12}+\Delta_{21},\Delta_{13}+\Delta_{22}+\Delta_{31},\Delta_{14}+\Delta_{23}+\Delta_{32}+\Delta_{41}\end{array}\Big{\rangle}$
${\rm H^{2}}({\mathcal{A}}^{5}_{06})$ | $=$ | $\Big{\langle}\begin{array}[]{l}[\Delta_{21}],[\Delta_{15}]+[\Delta_{24}]+[\Delta_{33}]+[\Delta_{42}]+[\Delta_{51}]\end{array}\Big{\rangle}$
${\rm Z^{2}}({\mathcal{A}}^{5}_{07})$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12},\Delta_{21},\Delta_{13}+\Delta_{22}+\Delta_{31},\Delta_{14}+\Delta_{23}+\Delta_{32}+\Delta_{41},\\\ \Delta_{15}+2\Delta_{22}+\Delta_{24}+3\Delta_{31}+\Delta_{33}+\Delta_{42}+\Delta_{51}\end{array}\Big{\rangle}$
${\rm B^{2}}({\mathcal{A}}^{5}_{07})$ | $=$ | $\Big{\langle}\begin{array}[]{l}\Delta_{11},\Delta_{12}+\Delta_{21},\Delta_{13}+\Delta_{22}+\Delta_{31},\Delta_{14}+\Delta_{21}+\Delta_{23}+\Delta_{32}+\Delta_{41}\end{array}\Big{\rangle}$
${\rm H^{2}}({\mathcal{A}}^{5}_{07})$ | $=$ | $\Big{\langle}\begin{array}[]{l}[\Delta_{21}],[\Delta_{15}]+2[\Delta_{22}]+[\Delta_{24}]+3[\Delta_{31}]+[\Delta_{33}]+[\Delta_{42}]+[\Delta_{51}]\end{array}\Big{\rangle}$
###### Remark 6.
Extensions of the algebras ${\mathcal{A}}^{5}_{01},$
${\mathcal{A}}^{5}_{02}(\alpha)_{\alpha\neq 1},$ ${\mathcal{A}}^{5}_{03},$
${\mathcal{A}}^{5}_{04}(\alpha)$ and
${\mathcal{A}}^{5}_{05}(\alpha,\beta)_{\alpha\neq\frac{1}{2}(\beta\pm\sqrt{-2+6\beta-3\beta^{2}})}$
give algebras with $2$-dimensional annihilator. Hence, in the following
subsections we study the central extensions of the remaining algebras.
### 3.2. Central extensions of ${\mathcal{A}}^{5}_{02}(1)$
Let us use the following notations:
$\nabla_{1}=[\Delta_{13}]-[\Delta_{41}],\
\nabla_{2}=[\Delta_{14}]+[\Delta_{31}]-[\Delta_{41}],\nabla_{3}=[\Delta_{15}]-[\Delta_{23}]-[\Delta_{32}]+[\Delta_{51}].$
The automorphism group of ${\mathcal{A}}^{5}_{02}(1)$ consists of invertible
matrices of the form
$\phi=\left(\begin{array}[]{ccccc}x&0&0&0&0\\\ y&x^{2}&0&0&0\\\
z&2xy&x^{3}&0&0\\\ t&xy&0&x^{3}&0\\\
w&-y^{2}-2xz&-3x^{2}y&0&x^{4}\end{array}\right).$
Since
$\phi^{T}\left(\begin{array}[]{ccccc}0&0&\alpha_{1}&\alpha_{2}&\alpha_{3}\\\
0&0&-\alpha_{3}&0&0\\\ \alpha_{2}&-\alpha_{3}&0&0&0\\\
-\alpha_{1}-\alpha_{2}&0&0&0&0\\\ \alpha_{3}&0&0&0&0\\\
\end{array}\right)\phi=\left(\begin{array}[]{ccccc}\alpha^{****}&\alpha^{***}&\alpha^{*}_{1}+\alpha^{*}&\alpha^{*}_{2}&\alpha^{*}_{3}\\\
\alpha^{**}&\alpha^{*}&-\alpha^{*}_{3}&0&0\\\
\alpha^{*}_{2}+\alpha^{*}&-\alpha^{*}_{3}&0&0&0\\\
-\alpha^{*}_{1}-\alpha^{*}_{2}&0&0&0&0\\\ \alpha^{*}_{3}&0&0&0&0\\\
\end{array}\right)$
we have that the action of ${\rm Aut}({\mathcal{A}}^{5}_{02}(1))$ on the subspace
$\langle\sum\limits_{i=1}^{3}\alpha_{i}\nabla_{i}\rangle$ is given by
$\langle\sum\limits_{i=1}^{3}\alpha^{*}_{i}\nabla_{i}\rangle,$ where
$\begin{array}[]{rclrclrcl}\alpha^{*}_{1}&=&x^{4}\alpha_{1},&\alpha^{*}_{2}&=&x^{4}\alpha_{2},&\alpha^{*}_{3}&=&x^{5}\alpha_{3}.\par\end{array}$
We have the following cases:
1. (1)
If $\alpha_{2}\neq 0,$ then choosing $x=\frac{\alpha_{2}}{\alpha_{3}}$ we have
the representative $\langle\alpha\nabla_{1}+\nabla_{2}+\nabla_{3}\rangle;$
2. (2)
If $\alpha_{2}=0,$ we have two representatives $\langle\nabla_{3}\rangle$ and
$\langle\nabla_{1}+\nabla_{3}\rangle$ depending on whether $\alpha_{1}=0$ or
not.
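Unlike in Section 2.3, the weights here are mixed ($x^{4},x^{4},x^{5}$), so in case (1) the choice $x=\frac{\alpha_{2}}{\alpha_{3}}$ equalises the $\nabla_{2}$ and $\nabla_{3}$ coefficients, and dividing by their common value yields $\langle\alpha\nabla_{1}+\nabla_{2}+\nabla_{3}\rangle$. A numeric sketch (our own illustration; here $\alpha=\frac{\alpha_{1}}{\alpha_{2}}$):

```python
# Weights of the Section 3.2 action: a1, a2 scale by x^4 and a3 by x^5
def act(x, a):
    return (x**4 * a[0], x**4 * a[1], x**5 * a[2])

a1, a2, a3 = 6.0, 3.0, 2.0          # a2 != 0 and a3 != 0: case (1)
x = a2 / a3                          # equalises the nabla2 and nabla3 parts
b = act(x, (a1, a2, a3))
rep = tuple(c / b[1] for c in b)     # rescale the spanning cocycle
print(rep)                           # (alpha, 1, 1) with alpha = a1/a2
```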
Consequently, we have the following algebras from ${\mathcal{A}}^{5}_{02}(1):$
${\mathcal{A}}^{6}_{05}(\alpha)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=-e_{5}+\alpha e_{6}$ | $e_{1}e_{4}=e_{6}$ | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}+e_{4}$ |
---|---|---|---|---|---|---|---|---
| | $e_{2}e_{2}=-e_{5}$ | $e_{2}e_{3}=-e_{6}$ | $e_{3}e_{1}=-e_{5}+e_{6}$ | $e_{3}e_{2}=-e_{6}$ | $e_{4}e_{1}=-(\alpha+1)e_{6}$ | $e_{5}e_{1}=e_{6}$ |
${\mathcal{A}}^{6}_{06}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=-e_{5}+e_{6}$ | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}+e_{4}$ | $e_{2}e_{2}=-e_{5}$ |
| | $e_{2}e_{3}=-e_{6}$ | $e_{3}e_{1}=-e_{5}$ | $e_{3}e_{2}=-e_{6}$ | $e_{4}e_{1}=-e_{6}$ | $e_{5}e_{1}=e_{6}$ | |
${\mathcal{A}}^{6}_{07}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=-e_{5}$ | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}+e_{4}$ | $e_{2}e_{2}=-e_{5}$ |
| | $e_{2}e_{3}=-e_{6}$ | $e_{3}e_{1}=-e_{5}$ | $e_{3}e_{2}=-e_{6}$ | $e_{5}e_{1}=e_{6}$ | | |
### 3.3. Central extensions of ${\mathcal{A}}^{5}_{05}(\alpha,\beta)$
Here we will consider the special cases for
$\alpha=\frac{1}{2}(\beta\pm\sqrt{-2+6\beta-3\beta^{2}}).$
The automorphism group of ${\mathcal{A}}^{5}_{05}(\alpha,\beta)$ consists of
invertible matrices of the form
$\phi=\left(\begin{array}[]{ccccc}x&0&0&0&0\\\ \frac{y}{x}&x^{2}&0&0&0\\\
z&y&x^{3}&0&0\\\ t&y&0&x^{3}&0\\\
v&\frac{x^{3}((2-\beta+\alpha)z+(1+\alpha)t)+y^{2}}{x^{2}}&(\alpha-2\beta+4)xy&(\alpha+\beta+1)xy&x^{4}\\\
\end{array}\right).$
Let us use the following notations:
$\nabla_{1}=[\Delta_{14}]-[\Delta_{31}]-[\Delta_{41}],\nabla_{2}=[\Delta_{13}]+[\Delta_{41}],$
$\nabla_{3}=(2\beta-1)[\Delta_{15}]+(2\alpha\beta-2\beta+1)[\Delta_{23}]+(\alpha+2\beta^{2}-3\beta+1)[\Delta_{24}]+$
$(3\alpha-2\alpha\beta+2\beta^{2}-3\beta+1)[\Delta_{32}]+(2\alpha-2\alpha\beta+2\beta-1)[\Delta_{42}]+(2\alpha-2\beta+1)[\Delta_{51}].$
So,
$\phi^{T}\left(\begin{array}[]{ccccc}0&0&\alpha_{2}&\alpha_{1}&(2\beta-1)\alpha_{3}\\\
0&0&(2\alpha\beta-2\beta+1)\alpha_{3}&(\alpha+2\beta^{2}-3\beta+1)\alpha_{3}&0\\\
-\alpha_{1}&(3\alpha-2\alpha\beta+2\beta^{2}-3\beta+1)\alpha_{3}&0&0&0\\\
-\alpha_{1}+\alpha_{2}&(2\alpha-2\alpha\beta+2\beta-1)\alpha_{3}&0&0&0\\\
(2\alpha-2\beta+1)\alpha_{3}&0&0&0&0\\\ \end{array}\right)\phi=$
$=\left(\begin{array}[]{ccccc}\alpha^{****}&\alpha^{***}&\alpha\alpha^{*}+\alpha^{*}_{2}&\beta\alpha^{*}+\alpha^{*}_{1}&(2\beta-1)\alpha^{*}_{3}\\\
\alpha^{**}&\alpha^{*}&(2\alpha\beta-2\beta+1)\alpha^{*}_{3}&(\alpha+2\beta^{2}-3\beta+1)\alpha^{*}_{3}&0\\\
(2-\beta)\alpha^{*}-\alpha^{*}_{1}&(3\alpha-2\alpha\beta+2\beta^{2}-3\beta+1)\alpha^{*}_{3}&0&0&0\\\
(1-\beta+\alpha)\alpha^{*}-\alpha^{*}_{1}+\alpha^{*}_{2}&(2\alpha-2\alpha\beta+2\beta-1)\alpha^{*}_{3}&0&0&0\\\
(2\alpha-2\beta+1)\alpha^{*}_{3}&0&0&0&0\\\ \end{array}\right)$
we have that the action of ${\rm Aut}({\mathcal{A}}^{5}_{05}(\alpha,\beta))$
on the subspace $\langle\sum\limits_{i=1}^{3}\alpha_{i}\nabla_{i}\rangle$ is
given by $\langle\sum\limits_{i=1}^{3}\alpha^{*}_{i}\nabla_{i}\rangle,$ where
$\begin{array}[]{lclcl}\alpha^{*}_{1}&=&x^{4}\alpha_{1}-\beta(\beta-2)(4\beta-2\alpha-2)\alpha_{3}x^{2}y,\\\
\alpha_{2}^{*}&=&x^{4}\alpha_{2}-\left(\beta(\beta-2)(2\beta-1)+\alpha(2\beta^{2}-4\beta+3)\right)\alpha_{3}x^{2}y,\\\
\alpha_{3}^{*}&=&x^{5}\alpha_{3}.\\\ \end{array}$
We are interested only in the case $\alpha_{3}\neq 0.$ We obtain the
following cases:
1. (1)
For $\beta(\beta-2)(2\beta-1)+\alpha\left(2\beta^{2}-4\beta+3\right)\neq 0$ :
1. (a)
If
$2\beta(\beta-2)(2\beta-\alpha-1)\alpha_{2}=\alpha_{1}\left(\beta(\beta-2)(2\beta-1)+\alpha(2\beta^{2}-4\beta+3)\right),$
then by choosing $x=\frac{1}{\sqrt[5]{\alpha_{3}}}$ and
$y=\frac{\alpha_{2}x^{2}}{\beta(\beta-2)(2\beta-1)+\alpha\left(2\beta^{2}-4\beta+3\right)},$
we have the representative $\langle\nabla_{3}\rangle;$
2. (b)
If
$2\beta(\beta-2)(2\beta-\alpha-1)\alpha_{2}\neq\alpha_{1}\left(\beta(\beta-2)(2\beta-1)+\alpha(2\beta^{2}-4\beta+3)\right),$
then by choosing
$x=\frac{\alpha_{1}\left(\alpha(2\beta^{2}-4\beta+3)+\beta(\beta-2)(2\beta-1)\right)+2\alpha_{2}\beta(\beta-2)(\alpha-2\beta+1)}{\beta(\beta-2)(2\beta-1)+\alpha(2\beta^{2}-4\beta+3)}$
and
$y=\frac{\alpha_{2}x^{2}}{\beta(\beta-2)(2\beta-1)+\alpha\left(2\beta^{2}-4\beta+3\right)},$
we have the representative $\langle\nabla_{1}+\nabla_{3}\rangle.$
From the above cases we have new parametric algebras:
${\mathcal{A}}^{6}_{i}(\beta)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=\alpha e_{5}$ |
---|---|---|---|---|---
| | $e_{1}e_{4}=\beta e_{5}$ | $e_{1}e_{5}=(2\beta-1)e_{6}$ | $e_{2}e_{1}=e_{3}$ |
| | $e_{2}e_{2}=e_{5}$ | $e_{2}e_{3}=(2\alpha\beta-2\beta+1)e_{6}$ | $e_{2}e_{4}=(\alpha+2\beta^{2}-3\beta+1)e_{6}$ |
| | $e_{3}e_{1}=(2-\beta)e_{5}$ | $e_{3}e_{2}=(3\alpha-2\alpha\beta+2\beta^{2}-3\beta+1)e_{6}$ | $e_{4}e_{1}=(\alpha-\beta+1)e_{5}$ |
| | $e_{4}e_{2}=(2\alpha-2\alpha\beta+2\beta-1)e_{6}$ | $e_{5}e_{1}=(2\alpha-2\beta+1)e_{6}$ | |
${\mathcal{A}}^{6}_{i+1}(\beta)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=\alpha e_{5}$ |
| | $e_{1}e_{4}=\beta e_{5}+e_{6}$ | $e_{1}e_{5}=(2\beta-1)e_{6}$ | $e_{2}e_{1}=e_{3}$ |
| | $e_{2}e_{2}=e_{5}$ | $e_{2}e_{3}=(2\alpha\beta-2\beta+1)e_{6}$ | $e_{2}e_{4}=(\alpha+2\beta^{2}-3\beta+1)e_{6}$ |
| | $e_{3}e_{1}=(2-\beta)e_{5}-e_{6}$ | $e_{3}e_{2}=(3\alpha-2\alpha\beta+2\beta^{2}-3\beta+1)e_{6}$ | $e_{4}e_{1}=(\alpha-\beta+1)e_{5}-e_{6}$ |
| | $e_{4}e_{2}=(2\alpha-2\alpha\beta+2\beta-1)e_{6}$ | $e_{5}e_{1}=(2\alpha-2\beta+1)e_{6}$ | |
where $i=08$ for $\alpha=\frac{1}{2}(\beta+\sqrt{-2+6\beta-3\beta^{2}})$ with
${\beta\not\in\\{1,\frac{3}{2}\\}}$ and where $i=10$ for
$\alpha=\frac{1}{2}(\beta-\sqrt{-2+6\beta-3\beta^{2}})$ with
${\beta\neq\frac{1}{2}}.$
2. (2)
For $\alpha=\frac{1}{2}(\beta+\sqrt{-2+6\beta-3\beta^{2}}),$ the condition
$\beta=1$ gives $\alpha=1,$ that is, ${\mathcal{A}}^{5}_{05}(1,1)$. The second
cohomology of this algebra is spanned by the elements:
$\nabla_{1}=[\Delta_{14}]-[\Delta_{31}]-[\Delta_{41}],\
\nabla_{2}=[\Delta_{13}]+[\Delta_{41}],\
\nabla_{3}=[\Delta_{15}]+[\Delta_{23}]+[\Delta_{24}]+[\Delta_{32}]+[\Delta_{42}]+[\Delta_{51}].$
Since
$\phi^{T}\left(\begin{array}[]{ccccc}0&0&\alpha_{2}&\alpha_{1}&\alpha_{3}\\\
0&0&\alpha_{3}&\alpha_{3}&0\\\ -\alpha_{1}&\alpha_{3}&0&0&0\\\
\alpha_{2}-\alpha_{1}&\alpha_{3}&0&0&0\\\ \alpha_{3}&0&0&0&0\\\
\end{array}\right)\phi=\left(\begin{array}[]{ccccc}\alpha^{****}&\alpha^{**}&\alpha^{*}+\alpha^{*}_{2}&\alpha^{*}+\alpha^{*}_{1}&\alpha^{*}_{3}\\\
\alpha^{***}&\alpha^{*}&\alpha^{*}_{3}&\alpha^{*}_{3}&0\\\
\alpha^{*}-\alpha^{*}_{1}&\alpha^{*}_{3}&0&0&0\\\
\alpha^{*}-\alpha^{*}_{1}+\alpha^{*}_{2}&\alpha^{*}_{3}&0&0&0\\\
\alpha^{*}_{3}&0&0&0&0\\\ \end{array}\right)$
we have that the action of ${\rm Aut}({\mathcal{A}}^{5}_{05}(1,1))$ on the
subspace $\langle\sum\limits_{i=1}^{3}\alpha_{i}\nabla_{i}\rangle$ is given by
$\langle\sum\limits_{i=1}^{3}\alpha^{*}_{i}\nabla_{i}\rangle,$ where
$\begin{array}[]{ccc}\alpha_{1}^{*}=x^{4}\alpha_{1},&\alpha_{2}^{*}=x^{4}\alpha_{2},&\alpha_{3}^{*}=\alpha_{3}x^{5}.\\\
\end{array}$
We are interested only in the case $\alpha_{3}\neq 0;$ then we have the
following cases:
1. (a)
If $\alpha_{2}\neq 0,$ then for $x=\frac{\alpha_{2}}{\alpha_{3}}$ and
$\alpha=\frac{\alpha_{1}}{\alpha_{2}},$ we have the representative
$\langle\alpha\nabla_{1}+\nabla_{2}+\nabla_{3}\rangle.$
2. (b)
If $\alpha_{2}=0,$ then we have two further cases:
1. (i)
If $\alpha_{1}\neq 0,$ then $x=\frac{\alpha_{1}}{\alpha_{3}},$ and we have the
representative $\langle\nabla_{1}+\nabla_{3}\rangle;$
2. (ii)
If $\alpha_{1}=0,$ then $x=\frac{1}{\sqrt[5]{\alpha_{3}}},$ and we have the
representative $\langle\nabla_{3}\rangle;$
Consequently, we have the following algebras from
${\mathcal{A}}^{5}_{05}(1,1):$ ${\mathcal{A}}^{6}_{08}(1),$
${\mathcal{A}}^{6}_{09}(1)$ and
${\mathcal{A}}^{6}_{12}(\alpha)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{5}+e_{6}$ | $e_{1}e_{4}=e_{5}+\alpha e_{6}$ | $e_{1}e_{5}=e_{6}$
---|---|---|---|---|---|---
| | $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=e_{5}$ | $e_{2}e_{3}=e_{6}$ | $e_{2}e_{4}=e_{6}$ | $e_{3}e_{1}=e_{5}-\alpha e_{6}$
| | $e_{3}e_{2}=e_{6}$ | $e_{4}e_{1}=e_{5}+(1-\alpha)e_{6}$ | $e_{4}e_{2}=e_{6}$ | $e_{5}e_{1}=e_{6}$ |
3. (3)
For $\alpha=\frac{1}{2}(\beta+\sqrt{-2+6\beta-3\beta^{2}}),$ the condition
$\beta=\frac{3}{2}$ gives $\alpha=1,$ that is,
${\mathcal{A}}^{5}_{05}(1,\frac{3}{2}).$ The second cohomology space of
${\mathcal{A}}^{5}_{05}(1,\frac{3}{2})$ is spanned by the elements:
$\nabla_{1}=[\Delta_{14}]-[\Delta_{31}]-[\Delta_{41}],\
\nabla_{2}=[\Delta_{13}]+[\Delta_{41}],\
\nabla_{3}=2[\Delta_{15}]+[\Delta_{23}]+2[\Delta_{24}]+[\Delta_{32}]+[\Delta_{42}].$
Since
$\phi^{T}\left(\begin{array}[]{ccccc}0&0&\alpha_{2}&\alpha_{1}&2\alpha_{3}\\\
0&0&\alpha_{3}&2\alpha_{3}&0\\\ -\alpha_{1}&\alpha_{3}&0&0&0\\\
\alpha_{2}-\alpha_{1}&\alpha_{3}&0&0&0\\\ 0&0&0&0&0\\\
\end{array}\right)\phi=\left(\begin{array}[]{ccccc}\alpha^{****}&\alpha^{***}&\alpha^{*}_{2}+\alpha^{*}&\alpha^{*}_{1}+3\alpha^{*}&2\alpha^{*}_{3}\\\
\alpha^{**}&2\alpha^{*}&\alpha^{*}_{3}&2\alpha^{*}_{3}&0\\\
\alpha^{*}-\alpha^{*}_{1}&\alpha^{*}_{3}&0&0&0\\\
\alpha^{*}_{2}-\alpha^{*}_{1}+\alpha^{*}&\alpha^{*}_{3}&0&0&0\\\ 0&0&0&0&0\\\
\end{array}\right)$
we have that the action of ${\rm Aut}({\mathcal{A}}^{5}_{05}(1,\frac{3}{2}))$
on the subspace $\langle\sum\limits_{i=1}^{3}\alpha_{i}\nabla_{i}\rangle$ is
given by $\langle\sum\limits_{i=1}^{3}\alpha^{*}_{i}\nabla_{i}\rangle,$ where
$\begin{array}[]{rclrclrcl}\alpha^{*}_{1}&=&x^{4}\alpha_{1}+\frac{3}{2}x^{3}y\alpha_{3},&\alpha^{*}_{2}&=&x^{4}\alpha_{2},&\alpha^{*}_{3}&=&x^{5}\alpha_{3}.\\\
\end{array}$
Since $\alpha_{3}\neq 0,$ choosing
$y=-\frac{2x\alpha_{1}}{3\alpha_{3}},$ we have the representatives
$\langle\nabla_{3}\rangle$ and $\langle\nabla_{2}+\nabla_{3}\rangle,$
depending on whether $\alpha_{2}=0$ or not.
We have the following new $6$-dimensional algebras constructed from
${\mathcal{A}}^{5}_{05}(1,\frac{3}{2}):$ ${\mathcal{A}}^{6}_{08}(\frac{3}{2})$
and
${\mathcal{A}}^{6}_{13}:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{5}+e_{6}$ | $e_{1}e_{4}=\frac{3}{2}e_{5}$ | $e_{1}e_{5}=2e_{6}$
---|---|---|---|---|---
| $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=e_{5}$ | $e_{2}e_{3}=e_{6}$ | $e_{2}e_{4}=2e_{6}$ | $e_{3}e_{1}=\frac{1}{2}e_{5}$
| $e_{3}e_{2}=e_{6}$ | $e_{4}e_{1}=e_{6}$ | $e_{4}e_{2}=e_{6}$ | |
### 3.4. Central extensions of ${\mathcal{A}}^{5}_{05}(0,\frac{1}{2})$
For $\alpha=\frac{1}{2}(\beta-\sqrt{-2+6\beta-3\beta^{2}}),$ the condition
$\beta=\frac{1}{2}$ gives $\alpha=0,$ that is,
${\mathcal{A}}^{5}_{05}(0,\frac{1}{2})$. The second cohomology space of
${\mathcal{A}}^{5}_{05}(0,\frac{1}{2})$ is spanned by the elements:
$\nabla_{1}=[\Delta_{14}]-[\Delta_{31}]-[\Delta_{41}],\nabla_{2}=[\Delta_{13}]+[\Delta_{41}],\nabla_{3}=2[\Delta_{15}]-3[\Delta_{23}]-2[\Delta_{24}]-3[\Delta_{32}]+[\Delta_{42}]-4[\Delta_{51}].$
Since
$\phi^{T}\left(\begin{array}[]{ccccc}0&0&\alpha_{2}&\alpha_{1}&2\alpha_{3}\\\
0&0&-3\alpha_{3}&-2\alpha_{3}&0\\\ -\alpha_{1}&-3\alpha_{3}&0&0&0\\\
\alpha_{2}-\alpha_{1}&\alpha_{3}&0&0&0\\\ -4\alpha_{3}&0&0&0&0\\\
\end{array}\right)\phi=\left(\begin{array}[]{ccccc}\alpha^{****}&\alpha^{**}&\alpha^{*}_{2}&\alpha^{*}_{1}+\alpha^{*}&2\alpha^{*}_{3}\\\
\alpha^{***}&2\alpha^{*}&-3\alpha^{*}_{3}&-2\alpha^{*}_{3}&0\\\
-\alpha^{*}_{1}+3\alpha^{*}&-3\alpha^{*}_{3}&0&0&0\\\
\alpha^{*}_{2}-\alpha^{*}_{1}+\alpha^{*}&\alpha^{*}_{3}&0&0&0\\\
-4\alpha^{*}_{3}&0&0&0&0\\\ \end{array}\right)$
we have that the action of ${\rm Aut}({\mathcal{A}}^{5}_{05}(0,\frac{1}{2}))$
on the subspace $\langle\sum\limits_{i=1}^{3}\alpha_{i}\nabla_{i}\rangle$ is
given by $\langle\sum\limits_{i=1}^{3}\alpha^{*}_{i}\nabla_{i}\rangle,$ where
$\begin{array}[]{rclrclrcl}\alpha^{*}_{1}&=&x^{4}\alpha_{1}+3x^{3}y\alpha_{3},&\
\alpha^{*}_{2}&=&x^{4}\alpha_{2}+\frac{9}{2}x^{3}y\alpha_{3},&\
\alpha^{*}_{3}&=&x^{5}\alpha_{3}.\\\ \end{array}$
We are interested only in the case $\alpha_{3}\neq 0;$ then we have the following cases:
1. (1)
If $3\alpha_{1}-2\alpha_{2}=0,$ then for $x=\frac{1}{\sqrt[5]{\alpha_{3}}}$ and
$y=-\frac{x\alpha_{1}}{3\alpha_{3}},$ we have the representative
$\langle\nabla_{3}\rangle;$
2. (2)
If $3\alpha_{1}-2\alpha_{2}\neq 0,$ then for
$x=\frac{-3\alpha_{1}+2\alpha_{2}}{{2\alpha_{3}}}$ and $y=-\frac{x\alpha_{1}}{3\alpha_{3}},$
we have the representative $\langle\nabla_{2}+\nabla_{3}\rangle.$
We have the following new $6$-dimensional algebras constructed from
${\mathcal{A}}^{5}_{05}(0,\frac{1}{2}):$
${\mathcal{A}}^{6}_{14}:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{4}=\frac{1}{2}e_{5}$ | $e_{1}e_{5}=2e_{6}$ | $e_{2}e_{1}=e_{3}$ |
---|---|---|---|---|---|---
| $e_{2}e_{2}=e_{5}$ | $e_{2}e_{3}=-3e_{6}$ | $e_{2}e_{4}=-2e_{6}$ | $e_{3}e_{1}=\frac{3}{2}e_{5}$ | $e_{3}e_{2}=-3e_{6}$ |
| $e_{4}e_{1}=\frac{1}{2}e_{5}$ | $e_{4}e_{2}=e_{6}$ | $e_{5}e_{1}=-4e_{6}$ | | |
${\mathcal{A}}^{6}_{15}:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{6}$ | $e_{1}e_{4}=\frac{1}{2}e_{5}$ | $e_{1}e_{5}=2e_{6}$ |
| $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=e_{5}$ | $e_{2}e_{3}=-3e_{6}$ | $e_{2}e_{4}=-2e_{6}$ | $e_{3}e_{1}=\frac{3}{2}e_{5}$ |
| $e_{3}e_{2}=-3e_{6}$ | $e_{4}e_{1}=\frac{1}{2}e_{5}+e_{6}$ | $e_{4}e_{2}=e_{6}$ | $e_{5}e_{1}=-4e_{6}$ | |
### 3.5. Central extensions of ${\mathcal{A}}^{5}_{06}$
Let us use the following notations:
$\nabla_{1}=[\Delta_{21}],\nabla_{2}=[\Delta_{15}]+[\Delta_{24}]+[\Delta_{33}]+[\Delta_{42}]+[\Delta_{51}].$
The automorphism group of ${\mathcal{A}}^{5}_{06}$ consists of invertible
matrices of the form
$\phi=\left(\begin{array}[]{ccccc}x&0&0&0&0\\\ y&x^{2}&0&0&0\\\
z&2xy&x^{3}&0&0\\\ v&2xz+y^{2}&3x^{2}y&x^{4}&0\\\
w&2xv+2yz&3x^{2}z+3xy^{2}&4x^{3}y&x^{5}\end{array}\right).$
Since
$\phi^{T}\left(\begin{array}[]{ccccc}0&0&0&0&\alpha_{2}\\\
\alpha_{1}&0&0&\alpha_{2}&0\\\ 0&0&\alpha_{2}&0&0\\\ 0&\alpha_{2}&0&0&0\\\
\alpha_{2}&0&0&0&0\\\
\end{array}\right)\phi=\left(\begin{array}[]{ccccc}\alpha^{****}&\alpha^{*}&\alpha^{**}&\alpha^{***}&\alpha^{*}_{2}\\\
\alpha^{*}_{1}+\alpha^{*}&\alpha^{**}&\alpha^{***}&\alpha^{*}_{2}&0\\\
\alpha^{**}&\alpha^{***}&\alpha^{*}_{2}&0&0\\\
\alpha^{***}&\alpha^{*}_{2}&0&0&0\\\ \alpha^{*}_{2}&0&0&0&0\\\
\end{array}\right),$
we have that the action of ${\rm Aut}({\mathcal{A}}^{5}_{06})$ on the subspace
$\langle\sum\limits_{i=1}^{2}\alpha_{i}\nabla_{i}\rangle$ is given by
$\langle\sum\limits_{i=1}^{2}\alpha^{*}_{i}\nabla_{i}\rangle,$ where
$\begin{array}[]{rclrcl}\alpha^{*}_{1}&=&x^{3}\alpha_{1},&\alpha^{*}_{2}&=&x^{6}\alpha_{2}.\\\
\end{array}$
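The displayed congruence can be verified directly. In the sympy sketch below (a check we add; reading $\alpha^{*}_{2}$ off the $(5,1)$ entry and $\alpha^{*}_{1}$ off the antisymmetric part of the $(2,1)$ entry of $\phi^{T}M\phi$ is our interpretation of the display), the stated weights $\alpha^{*}_{1}=x^{3}\alpha_{1}$ and $\alpha^{*}_{2}=x^{6}\alpha_{2}$ are reproduced:

```python
import sympy as sp

x, y, z, v, w, a1, a2 = sp.symbols('x y z v w alpha1 alpha2')

# Automorphism of A^5_06, copied from the text.
phi = sp.Matrix([
    [x, 0,             0,                   0,        0],
    [y, x**2,          0,                   0,        0],
    [z, 2*x*y,         x**3,                0,        0],
    [v, 2*x*z + y**2,  3*x**2*y,            x**4,     0],
    [w, 2*x*v + 2*y*z, 3*x**2*z + 3*x*y**2, 4*x**3*y, x**5],
])

# Bilinear form of the cocycle alpha1*nabla_1 + alpha2*nabla_2.
M = sp.Matrix([
    [0,  0,  0,  0,  a2],
    [a1, 0,  0,  a2, 0],
    [0,  0,  a2, 0,  0],
    [0,  a2, 0,  0,  0],
    [a2, 0,  0,  0,  0],
])

R = sp.expand(phi.T * M * phi)

assert sp.simplify(R[4, 0] - x**6 * a2) == 0            # alpha2* = x^6 alpha2
assert sp.simplify(R[1, 0] - R[0, 1] - x**3 * a1) == 0  # alpha1* = x^3 alpha1
```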
We suppose that $\alpha_{2}\neq 0$; otherwise the obtained algebra has a
$2$-dimensional annihilator. Therefore, we consider the following
cases:
1. (1)
If $\alpha_{1}=0,$ then for $x=\frac{1}{\sqrt[6]{\alpha_{2}}}$ we have the
representative $\langle\nabla_{2}\rangle;$
2. (2)
If $\alpha_{1}\neq 0,$ then for $x=\sqrt[3]{\frac{\alpha_{1}}{\alpha_{2}}}$ we
have the representative $\langle\nabla_{1}+\nabla_{2}\rangle.$
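The two choices of $x$ above can be checked against the weights $\alpha^{*}_{1}=x^{3}\alpha_{1}$, $\alpha^{*}_{2}=x^{6}\alpha_{2}$; a small sympy sketch (our verification, not part of the original argument):

```python
import sympy as sp

a1, a2 = sp.symbols('alpha1 alpha2', positive=True)

def act(x, b1, b2):
    """Apply the Aut(A^5_06) action to the coefficient pair (alpha1, alpha2)."""
    return sp.simplify(x**3 * b1), sp.simplify(x**6 * b2)

# Case (1): alpha1 = 0, x = alpha2^(-1/6) normalises alpha2* to 1.
s1, s2 = act(a2**sp.Rational(-1, 6), 0, a2)
assert (s1, s2) == (0, 1)

# Case (2): alpha1 != 0, x = (alpha1/alpha2)^(1/3) equalises both coefficients.
t1, t2 = act((a1 / a2)**sp.Rational(1, 3), a1, a2)
assert sp.simplify(t1 - t2) == 0
```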
Hence, we have the following new algebras:
${\mathcal{A}}^{6}_{16}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$ | $e_{1}e_{4}=e_{5}$ | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=e_{4}$ | $e_{2}e_{3}=e_{5}$
---|---|---|---|---|---|---|---|---|---
| | $e_{2}e_{4}=e_{6}$ | $e_{3}e_{1}=e_{4}$ | $e_{3}e_{2}=e_{5}$ | $e_{3}e_{3}=e_{6}$ | $e_{4}e_{1}=e_{5}$ | $e_{4}e_{2}=e_{6}$ | $e_{5}e_{1}=e_{6}$ |
${\mathcal{A}}^{6}_{17}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$ | $e_{1}e_{4}=e_{5}$ | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}+e_{6}$ | $e_{2}e_{2}=e_{4}$ | $e_{2}e_{3}=e_{5}$
| | $e_{2}e_{4}=e_{6}$ | $e_{3}e_{1}=e_{4}$ | $e_{3}e_{2}=e_{5}$ | $e_{3}e_{3}=e_{6}$ | $e_{4}e_{1}=e_{5}$ | $e_{4}e_{2}=e_{6}$ | $e_{5}e_{1}=e_{6}$ |
### 3.6. Central extensions of ${\mathcal{A}}^{5}_{07}$
Let us use the following notations:
$\nabla_{1}=[\Delta_{21}],\
\nabla_{2}=[\Delta_{15}]+2[\Delta_{22}]+[\Delta_{24}]+3[\Delta_{31}]+[\Delta_{33}]+[\Delta_{42}]+[\Delta_{51}].$
The automorphism group of ${\mathcal{A}}^{5}_{07}$ consists of invertible
matrices of the form
$\phi_{i}=\left(\begin{array}[]{ccccc}(-1)^{k}&0&0&0&0\\\ x&1&0&0&0\\\
y&(-1)^{k}2x&(-1)^{k}&0&0\\\ z&x^{2}+(-1)^{k}2y&3x&1&0\\\
t&2xy+(-1)^{k}(x+2z)&(-1)^{k}3x^{2}+3y&(-1)^{k}4x&(-1)^{k}\end{array}\right),$
where $k\in\\{1,2\\}.$ Since
$\phi_{i}^{T}\left(\begin{array}[]{ccccc}0&0&0&0&\alpha_{2}\\\
\alpha_{1}&2\alpha_{2}&0&\alpha_{2}&0\\\ 3\alpha_{2}&0&\alpha_{2}&0&0\\\
0&\alpha_{2}&0&0&0\\\ \alpha_{2}&0&0&0&0\\\
\end{array}\right)\phi_{i}=\left(\begin{array}[]{ccccc}\alpha^{****}&\alpha^{*}&\alpha^{**}&\alpha^{***}&\alpha^{*}_{2}\\\
\alpha^{*}_{1}+\alpha^{*}&2\alpha^{*}_{2}+\alpha^{**}&\alpha^{***}&\alpha^{*}_{2}&0\\\
3\alpha^{*}_{2}&\alpha^{***}&\alpha^{*}_{2}&0&0\\\
\alpha^{***}&\alpha^{*}_{2}&0&0&0\\\ \alpha^{*}_{2}&0&0&0&0\\\
\end{array}\right),$
we have that the action of ${\rm Aut}({\mathcal{A}}^{5}_{07})$ on the subspace
$\langle\sum\limits_{i=1}^{2}\alpha_{i}\nabla_{i}\rangle$ is given by
$\langle\sum\limits_{i=1}^{2}\alpha^{*}_{i}\nabla_{i}\rangle,$ where
$\begin{array}[]{rclrcl}\alpha^{*}_{1}&=&(-1)^{k}\alpha_{1}-6x\alpha_{2},&\alpha^{*}_{2}&=&\alpha_{2}.\\\
\end{array}$
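Since $\alpha^{*}_{2}=\alpha_{2}$ is invariant, a single choice of $x$ removes the $\nabla_{1}$ component whenever $\alpha_{2}\neq 0$. The explicit choice is not written out in the text; $x=s\,\frac{\alpha_{1}}{6\alpha_{2}}$, with $s=\pm 1$ the sign appearing in the action, is our (easily checked) candidate, verified below with sympy:

```python
import sympy as sp

a1, a2, s = sp.symbols('alpha1 alpha2 s')  # s = +/-1, the sign in the action

# Candidate choice of x (our assumption, not stated in the text).
x = s * a1 / (6 * a2)

# alpha1* = s*alpha1 - 6*x*alpha2 vanishes, while alpha2* = alpha2 is fixed,
# leaving the single non-trivial representative <nabla_2>.
alpha1_star = s * a1 - 6 * x * a2
assert sp.simplify(alpha1_star) == 0
```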
We have only one non-trivial orbit, with the representative
$\langle\nabla_{2}\rangle,$ and we obtain
${\mathcal{A}}^{6}_{18}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$ | $e_{1}e_{4}=e_{5}$ | $e_{1}e_{5}=e_{6}$
---|---|---|---|---|---|---
| | $e_{2}e_{1}=e_{3}+e_{5}$ | $e_{2}e_{2}=e_{4}+2e_{6}$ | $e_{2}e_{3}=e_{5}$ | $e_{2}e_{4}=e_{6}$ | $e_{3}e_{1}=e_{4}+3e_{6}$
| | $e_{3}e_{2}=e_{5}$ | $e_{3}e_{3}=e_{6}$ | $e_{4}e_{1}=e_{5}$ | $e_{4}e_{2}=e_{6}$ | $e_{5}e_{1}=e_{6}$
### 3.7. Classification theorem
Summarizing the results of the present section, we have the following theorem.
###### Theorem B.
Let $\mathcal{A}$ be a $6$-dimensional complex one-generated nilpotent
assosymmetric algebra. Then $\mathcal{A}$ is isomorphic to an algebra from the
following list.
${\mathcal{A}}^{6}_{01}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{5}$
---|---|---|---|---
| | $e_{1}e_{4}=e_{6}$ | $e_{2}e_{1}=e_{3}$ | $e_{3}e_{1}=-e_{6}$
| | $e_{4}e_{1}=e_{5}-e_{6}$ | |
${\mathcal{A}}^{6}_{02}(\alpha)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{5}$
| | $e_{1}e_{4}=\alpha e_{6}$ | $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=e_{6}$
| | $e_{3}e_{1}=(2-\alpha)e_{6}$ | $e_{4}e_{1}=e_{5}-(\alpha-1)e_{6}$ |
${\mathcal{A}}^{6}_{03}(\alpha,\beta)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=\alpha e_{5}+\beta e_{6}$
| | $e_{1}e_{4}=e_{5}$ | $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=e_{6}$
| | $e_{3}e_{1}=-e_{5}+2e_{6}$ | $e_{4}e_{1}=(\alpha-1)e_{5}+(\beta+1)e_{6}$ |
${\mathcal{A}}^{6}_{04}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$
| | $e_{1}e_{4}=e_{5}$ | $e_{2}e_{1}=e_{3}+e_{6}$ | $e_{2}e_{2}=e_{4}$
| | $e_{2}e_{3}=e_{5}$ | $e_{3}e_{1}=e_{4}$ | $e_{3}e_{2}=e_{5}$
| | $e_{4}e_{1}=e_{5}$ | |
${\mathcal{A}}^{6}_{05}(\alpha)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=-e_{5}+\alpha e_{6}$
| | $e_{1}e_{4}=e_{6}$ | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}+e_{4}$
| | $e_{2}e_{2}=-e_{5}$ | $e_{2}e_{3}=-e_{6}$ | $e_{3}e_{1}=-e_{5}+e_{6}$
| | $e_{3}e_{2}=-e_{6}$ | $e_{4}e_{1}=-(\alpha+1)e_{6}$ | $e_{5}e_{1}=e_{6}$
${\mathcal{A}}^{6}_{06}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=-e_{5}+e_{6}$
| | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}+e_{4}$ | $e_{2}e_{2}=-e_{5}$
| | $e_{2}e_{3}=-e_{6}$ | $e_{3}e_{1}=-e_{5}$ | $e_{3}e_{2}=-e_{6}$
| | $e_{4}e_{1}=-e_{6}$ | $e_{5}e_{1}=e_{6}$ |
${\mathcal{A}}^{6}_{07}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=-e_{5}$
| | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}+e_{4}$ | $e_{2}e_{2}=-e_{5}$
| | $e_{2}e_{3}=-e_{6}$ | $e_{3}e_{1}=-e_{5}$ | $e_{3}e_{2}=-e_{6}$
| | $e_{5}e_{1}=e_{6}$ | |
$i=08$ for $\alpha=\frac{1}{2}(\beta+\sqrt{-2+6\beta-3\beta^{2}})$ and $i=10$
for $\alpha=\frac{1}{2}(\beta-\sqrt{-2+6\beta-3\beta^{2}})$ with
$\beta\neq\frac{1}{2}$
${\mathcal{A}}^{6}_{i}(\beta)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=\alpha e_{5}$
| | $e_{1}e_{4}=\beta e_{5}$ | $e_{1}e_{5}=(2\beta-1)e_{6}$ | $e_{2}e_{1}=e_{3}$
| | $e_{2}e_{2}=e_{5}$ | $e_{2}e_{3}=(2\alpha\beta-2\beta+1)e_{6}$ | $e_{2}e_{4}=(\alpha+2\beta^{2}-3\beta+1)e_{6}$
| | $e_{3}e_{1}=(2-\beta)e_{5}$ | $e_{3}e_{2}=(3\alpha-2\alpha\beta+2\beta^{2}-3\beta+1)e_{6}$ | $e_{4}e_{1}=(\alpha-\beta+1)e_{5}$
| | $e_{4}e_{2}=(2\alpha-2\alpha\beta+2\beta-1)e_{6}$ | $e_{5}e_{1}=(2\alpha-2\beta+1)e_{6}$ |
$i=09$ for $\alpha=\frac{1}{2}(\beta+\sqrt{-2+6\beta-3\beta^{2}})$ with
$\beta\neq\frac{3}{2}$ and $i=11$ for
$\alpha=\frac{1}{2}(\beta-\sqrt{-2+6\beta-3\beta^{2}})$ with
$\beta\neq\frac{1}{2}$
${\mathcal{A}}^{6}_{i+1}(\beta)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=\alpha e_{5}$
| | $e_{1}e_{4}=\beta e_{5}+e_{6}$ | $e_{1}e_{5}=(2\beta-1)e_{6}$ | $e_{2}e_{1}=e_{3}$
| | $e_{2}e_{2}=e_{5}$ | $e_{2}e_{3}=(2\alpha\beta-2\beta+1)e_{6}$ | $e_{2}e_{4}=(\alpha+2\beta^{2}-3\beta+1)e_{6}$
| | $e_{3}e_{1}=(2-\beta)e_{5}-e_{6}$ | $e_{3}e_{2}=(3\alpha-2\alpha\beta+2\beta^{2}-3\beta+1)e_{6}$ | $e_{4}e_{1}=(\alpha-\beta+1)e_{5}-e_{6}$
| | $e_{4}e_{2}=(2\alpha-2\alpha\beta+2\beta-1)e_{6}$ | $e_{5}e_{1}=(2\alpha-2\beta+1)e_{6}$ |
${\mathcal{A}}^{6}_{12}(\alpha)$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{5}+e_{6}$
| | $e_{1}e_{4}=e_{5}+2\alpha e_{6}$ | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}$
| | $e_{2}e_{2}=e_{5}+\alpha e_{6}$ | $e_{2}e_{3}=e_{6}$ | $e_{2}e_{4}=e_{6}$
| | $e_{3}e_{1}=e_{5}$ | $e_{3}e_{2}=e_{6}$ | $e_{4}e_{1}=e_{5}+(1-\alpha)e_{6}$
| | $e_{4}e_{2}=e_{6}$ | $e_{5}e_{1}=e_{6}$ |
${\mathcal{A}}^{6}_{13}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{5}+e_{6}$
| | $e_{1}e_{4}=\frac{3}{2}e_{5}$ | $e_{1}e_{5}=2e_{6}$ | $e_{2}e_{1}=e_{3}$
| | $e_{2}e_{2}=e_{5}$ | $e_{2}e_{3}=e_{6}$ | $e_{2}e_{4}=2e_{6}$
| | $e_{3}e_{1}=\frac{1}{2}e_{5}$ | $e_{3}e_{2}=e_{6}$ | $e_{4}e_{1}=e_{6}$
| | $e_{4}e_{2}=e_{6}$ | |
${\mathcal{A}}^{6}_{14}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{4}=\frac{1}{2}e_{5}$
| | $e_{1}e_{5}=2e_{6}$ | $e_{2}e_{1}=e_{3}$ | $e_{2}e_{2}=e_{5}$
| | $e_{2}e_{3}=-3e_{6}$ | $e_{2}e_{4}=-2e_{6}$ | $e_{3}e_{1}=\frac{3}{2}e_{5}$
| | $e_{3}e_{2}=-3e_{6}$ | $e_{4}e_{1}=\frac{1}{2}e_{5}$ | $e_{4}e_{2}=e_{6}$
| | $e_{5}e_{1}=-4e_{6}$ | |
${\mathcal{A}}^{6}_{15}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{4}$ | $e_{1}e_{3}=e_{6}$
| | $e_{1}e_{4}=\frac{1}{2}e_{5}$ | $e_{1}e_{5}=2e_{6}$ | $e_{2}e_{1}=e_{3}$
| | $e_{2}e_{2}=e_{5}$ | $e_{2}e_{3}=-3e_{6}$ | $e_{2}e_{4}=-2e_{6}$
| | $e_{3}e_{1}=\frac{3}{2}e_{5}$ | $e_{3}e_{2}=-3e_{6}$ | $e_{4}e_{1}=\frac{1}{2}e_{5}+e_{6}$
| | $e_{4}e_{2}=e_{6}$ | $e_{5}e_{1}=-4e_{6}$ |
${\mathcal{A}}^{6}_{16}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$
| | $e_{1}e_{4}=e_{5}$ | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}$
| | $e_{2}e_{2}=e_{4}$ | $e_{2}e_{3}=e_{5}$ | $e_{2}e_{4}=e_{6}$
| | $e_{3}e_{1}=e_{4}$ | $e_{3}e_{2}=e_{5}$ | $e_{3}e_{3}=e_{6}$
| | $e_{4}e_{1}=e_{5}$ | $e_{4}e_{2}=e_{6}$ | $e_{5}e_{1}=e_{6}$
${\mathcal{A}}^{6}_{17}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$
| | $e_{1}e_{4}=e_{5}$ | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}+e_{6}$
| | $e_{2}e_{2}=e_{4}$ | $e_{2}e_{3}=e_{5}$ | $e_{2}e_{4}=e_{6}$
| | $e_{3}e_{1}=e_{4}$ | $e_{3}e_{2}=e_{5}$ | $e_{3}e_{3}=e_{6}$
| | $e_{4}e_{1}=e_{5}$ | $e_{4}e_{2}=e_{6}$ | $e_{5}e_{1}=e_{6}$
${\mathcal{A}}^{6}_{18}$ | $:$ | $e_{1}e_{1}=e_{2}$ | $e_{1}e_{2}=e_{3}$ | $e_{1}e_{3}=e_{4}$
| | $e_{1}e_{4}=e_{5}$ | $e_{1}e_{5}=e_{6}$ | $e_{2}e_{1}=e_{3}+e_{5}$
| | $e_{2}e_{2}=e_{4}+2e_{6}$ | $e_{2}e_{3}=e_{5}$ | $e_{2}e_{4}=e_{6}$
| | $e_{3}e_{1}=e_{4}+3e_{6}$ | $e_{3}e_{2}=e_{5}$ | $e_{3}e_{3}=e_{6}$
| | $e_{4}e_{1}=e_{5}$ | $e_{4}e_{2}=e_{6}$ | $e_{5}e_{1}=e_{6}$
Note:
${\mathcal{A}}^{6}_{08}(\frac{3}{2})\cong{\mathcal{A}}^{6}_{09}(\frac{3}{2}).$
Present address: LPNHE, Sorbonne Université, Université Paris Diderot, CNRS/IN2P3, Paris, France.
Corresponding author (s-ito@okayama-u.ac.jp); present address: Faculty of Science, Okayama University, Okayama, 700-8530, Japan.
Present address: Experimental Physics Department, CERN, Genève 23, CH-1211, Switzerland.
Present address: Department of Physics, University of Victoria, Victoria BC V8P 5C2, Canada.
PIENU Collaboration
# Search for three body pion decays ${\pi}^{+}{\to}l^{+}{\nu}X$
A. Aguilar-Arevalo Instituto de Ciencias Nucleares, Universidad Nacional
Autónoma de México, CDMX 04510, México M. Aoki Department of Physics,
Graduate School of Science, Osaka University, Toyonaka, Osaka, 560-0043, Japan
M. Blecher Virginia Tech., Blacksburg, Virginia 24061, USA D.I. Britton
SUPA - School of Physics and Astronomy, University of Glasgow, Glasgow,
G12-8QQ, United Kingdom D. vom Bruch Department of Physics and Astronomy,
University of British Columbia, Vancouver, British Columbia V6T 1Z1, Canada
D.A. Bryman Department of Physics and Astronomy, University of British
Columbia, Vancouver, British Columbia V6T 1Z1, Canada TRIUMF, 4004 Wesbrook
Mall, Vancouver, British Columbia V6T 2A3, Canada S. Chen Department of
Engineering Physics, Tsinghua University, Beijing, 100084, China J. Comfort
Physics Department, Arizona State University, Tempe, AZ 85287, USA S. Cuen-
Rochin TRIUMF, 4004 Wesbrook Mall, Vancouver, British Columbia V6T 2A3,
Canada Universidad Autónoma de Sinaloa, Culiacán, México L. Doria TRIUMF,
4004 Wesbrook Mall, Vancouver, British Columbia V6T 2A3, Canada PRISMA+
Cluster of Excellence and Institut für Kernphysik, Johannes Gutenberg-
Universität Mainz, Johann-Joachim-Becher-Weg 45, D 55128 Mainz, Germany P.
Gumplinger TRIUMF, 4004 Wesbrook Mall, Vancouver, British Columbia V6T 2A3,
Canada A. Hussein TRIUMF, 4004 Wesbrook Mall, Vancouver, British Columbia
V6T 2A3, Canada University of Northern British Columbia, Prince George,
British Columbia V2N 4Z9, Canada Y. Igarashi KEK, 1-1 Oho, Tsukuba-shi,
Ibaraki, 300-3256, Japan S. Ito Physics Department, Osaka University,
Toyonaka, Osaka, 560-0043, Japan S. Kettell Brookhaven National Laboratory,
Upton, NY, 11973-5000, USA L. Kurchaninov TRIUMF, 4004 Wesbrook Mall,
Vancouver, British Columbia V6T 2A3, Canada L.S. Littenberg Brookhaven
National Laboratory, Upton, NY, 11973-5000, USA C. Malbrunot Department of
Physics and Astronomy, University of British Columbia, Vancouver, British
Columbia V6T 1Z1, Canada R.E. Mischke TRIUMF, 4004 Wesbrook Mall, Vancouver,
British Columbia V6T 2A3, Canada T. Numao TRIUMF, 4004 Wesbrook Mall,
Vancouver, British Columbia V6T 2A3, Canada D. Protopopescu SUPA - School of
Physics and Astronomy, University of Glasgow, Glasgow, G12-8QQ, United Kingdom
A. Sher TRIUMF, 4004 Wesbrook Mall, Vancouver, British Columbia V6T 2A3,
Canada T. Sullivan Department of Physics and Astronomy, University of
British Columbia, Vancouver, British Columbia V6T 1Z1, Canada D. Vavilov
TRIUMF, 4004 Wesbrook Mall, Vancouver, British Columbia V6T 2A3, Canada
###### Abstract
The three body pion decays ${\pi}^{+}{\rightarrow}l^{+}{\nu}X~{}(l=e,{\mu})$,
where $X$ is a weakly interacting neutral boson, were searched for using the
full data set from the PIENU experiment. An improved limit on
${\Gamma}({\pi}^{+}{\to}e^{+}{\nu}X)/{\Gamma}({\pi}^{+}{\to}{\mu}^{+}{\nu}_{\mu})$
in the mass range $0<m_{X}<120$ MeV/$c^{2}$ and a first result for
${\Gamma}({\pi}^{+}{\to}{\mu}^{+}{\nu}X)/{\Gamma}({\pi}^{+}{\to}{\mu}^{+}{\nu}_{\mu})$
in the region $0<m_{X}<33.9$ MeV/$c^{2}$ were obtained. The Majoron-neutrino
coupling model was also constrained using the current experimental result of
the ${\pi}^{+}{\to}e^{+}{\nu}_{e}({\gamma})$ branching ratio.
## I Introduction
The existence of massive or massless weakly interacting neutral particles
($X$) has been suggested to augment the standard model with motivations that
include providing dark matter candidates DarkMatter , explaining baryogenesis
Baryogenesis , revealing the origin of neutrino masses SK , and finding
solutions to the strong $CP$ problem StrongCP1 ; StrongCP2 involving the
axion familon ; axion1 ; axion2 ; axion3 ; axion4 . Pion and kaon decays are
potential sources of $X$ particles as discussed by Altmannshofer, Gori, and
Robinson ALP who investigated a model with axionlike particles involved in
pion decay ${\pi}^{+}{\to}e^{+}{\nu}X$. Batell et al. DM studied a model of
thermal dark matter emitted in three body meson decay
${\pi}^{+}(K^{+}){\to}l^{+}{\chi}{\phi}$ where $\chi$ and $\phi$ are assumed
to be sterile neutrinos. Light vector bosons emitted in
${\pi}^{+}(K^{+}){\to}l^{+}{\nu}X$ decay have been discussed by Dror Dror .
A Nambu-Goldstone boson, the “Majoron” proposed by Gelmini and Roncadelli
majoron1 , is also a candidate of interest. It arises in gauge models that
have a spontaneous breaking of the baryon and lepton numbers ($B-L$) global
symmetry majoron1 ; majoron2 . In the Majoron models, neutrino masses arise
from the vacuum expectation value of a weak isotriplet scalar Higgs boson.
Barger, Keung, and Pakvasa extended the Majoron model to the decay processes
of pions and kaons ${\pi}^{+}(K^{+}){\to}l^{+}{\nu}X$ via Majoron-neutrino
couplings majoron3 . Other related processes and models have been discussed in
Refs. ref1 ; ref2 ; ref3 ; ref4 .
Three body pion decays ${\pi}^{+}{\to}l^{+}{\nu}X$ can be investigated using
the decay lepton energy spectra in pion decays. Figure 1 shows the total and
kinetic energy spectra of ${\pi}^{+}{\to}e^{+}{\nu}X$ and
${\pi}^{+}{\to}{\mu}^{+}{\nu}X$ decays assuming the decay products of $X$ are
invisible or have very long lifetimes allowing undetected escape. The signal
shapes were obtained from Eq. (12) in Ref. DM . A previous search for the
decay ${\pi}^{+}{\rightarrow}e^{+}{\nu}X$ was performed by Picciotto et al.
Picciotto as a byproduct of the branching ratio measurement
$R^{\pi}={\Gamma}[{\pi}^{+}{\to}e^{+}{\nu_{e}}({\gamma})]/{\Gamma}[{\pi}^{+}{\rightarrow}{\mu}^{+}{\nu_{\mu}}({\gamma})]$,
where ($\gamma$) indicates the inclusion of radiative decays, using stopped
pions in an active target Britton . The upper limit on the branching ratio was
found to be
$R^{{\pi}e{\nu}X}={\Gamma}({\pi}^{+}{\to}e^{+}{\nu}X)/{\Gamma}({\pi}^{+}{\to}{\mu}^{+}{\nu_{\mu}}){\lesssim}4{\times}10^{-6}$
in the mass range $m_{X}$ from 0 to 125 MeV/$c^{2}$. The sensitivity was
limited by statistics, and the remaining background originated from pion decay-in-flight (${\pi}$DIF) events. For ${\pi}^{+}{\to}{\mu}^{+}{\nu}X$ decay, no
comparable studies have been performed.
In the present work, the decays ${\pi}^{+}{\to}e^{+}{\nu}X$ and
${\pi}^{+}{\to}{\mu}^{+}{\nu}X$ were sought using the full data set of the
PIENU experiment PIENU corresponding to two orders of magnitude larger
statistics than the previous experiment Picciotto . The analyses were based on
the searches for heavy neutrinos ${\nu}_{H}$ in ${\pi}^{+}{\to}e^{+}{\nu_{H}}$
decay PIENU2 and ${\pi}^{+}{\to}{\mu}^{+}{\nu_{H}}$ decay PIENU3 , and the
decays ${\pi}^{+}{\to}e^{+}{\nu}_{e}{\nu}\bar{\nu}$ and
${\pi}^{+}{\to}{\mu}^{+}{\nu}_{\mu}{\nu}\bar{\nu}$ PIENU4 .
Figure 1: Total energy spectra of ${\pi}^{+}{\to}e^{+}{\nu}X$ and kinetic
energy spectra of ${\pi}^{+}{\to}{\mu}^{+}{\nu}X$ decays. (a)
${\pi}^{+}{\to}e^{+}{\nu}X$ decay with mass $m_{X}$ of 0 MeV/$c^{2}$ (solid
black), 40 MeV/$c^{2}$ (dotted red), and 80 MeV/$c^{2}$ (dashed blue). (b)
${\pi}^{+}{\to}{\mu}^{+}{\nu}X$ decay with mass $m_{X}$ of 5 MeV/$c^{2}$
(solid black), 15 MeV/$c^{2}$ (dotted red), and 25 MeV/$c^{2}$ (dashed blue).
## II Experiment
Figure 2: Schematic of the PIENU detector NIMA .
The PIENU detector NIMA shown schematically in Fig. 2 was designed to measure
the pion branching ratio
$R^{\pi}={\Gamma}[{\pi}^{+}{\to}e^{+}{\nu_{e}}({\gamma})]/{\Gamma}[{\pi}^{+}{\rightarrow}{\mu}^{+}{\nu_{\mu}}({\gamma})]$.
The decay positron in ${\pi}^{+}{\rightarrow}e^{+}{\nu_{e}}$ decay has total
energy $E_{e}=69.8$ MeV. For ${\pi}^{+}{\to}{\mu}^{+}{\nu_{\mu}}$ decay
followed by ${\mu}^{+}{\to}e^{+}{\nu_{e}}\bar{\nu_{\mu}}$ decay
(${\pi}^{+}{\to}{\mu}^{+}{\rightarrow}e^{+}$ decay chain), the decay muon has
kinetic energy $T_{\mu}=4.1$ MeV and a range in plastic scintillator of about
1 mm; the total energy of the positron in the subsequent muon decay
${\mu}^{+}{\to}e^{+}{\nu}_{e}\bar{\nu}_{\mu}$ ranges from $E_{e}=0.5$ to 52.8
MeV.
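The quoted energies follow directly from two-body decay kinematics. As a quick check (using standard PDG masses, which are not taken from the text, and a helper function of our own naming):

```python
# Standard PDG masses in MeV/c^2 (not taken from the text)
M_PI = 139.57039    # charged pion
M_MU = 105.6583745  # muon
M_E = 0.51099895    # electron/positron

def two_body_energy(m_parent, m_daughter):
    """Total energy of the massive daughter in a two-body decay
    parent -> daughter + (massless) neutrino, in the parent rest frame."""
    return (m_parent**2 + m_daughter**2) / (2.0 * m_parent)

E_e = two_body_energy(M_PI, M_E)           # positron energy in pi -> e nu
T_mu = two_body_energy(M_PI, M_MU) - M_MU  # muon kinetic energy in pi -> mu nu
E_e_max = two_body_energy(M_MU, M_E)       # Michel endpoint in mu -> e nu nu
mX_max = M_PI - M_MU                       # kinematic endpoint for m_X in pi -> mu nu X

# Matches the 69.8, 4.1, 52.8, and 33.9 MeV values quoted in the text
print(f"E_e = {E_e:.1f} MeV, T_mu = {T_mu:.1f} MeV, "
      f"endpoint = {E_e_max:.1f} MeV, m_X max = {mX_max:.1f} MeV/c^2")
```

The same endpoint calculation explains the $0<m_{X}<33.9$ MeV/$c^{2}$ search range for ${\pi}^{+}{\to}{\mu}^{+}{\nu}X$ decay.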
A pion beam with momentum of $75{\pm}1$ MeV/$c$ provided by the TRIUMF M13
beam line M13 was tracked by two multiwire proportional chambers (WC1 and
WC2) and two sets of silicon strip detectors (S1 and S2). Following WC2, the
beam was degraded by two thin plastic scintillators (B1 and B2) to measure
time and energy loss for particle identification. After S2, pions stopped and
decayed at rest in the center of an 8 mm thick plastic scintillator target
(B3). The pion stopping rate in B3 was $5{\times}10^{4}$ ${\pi}^{+}/$s.
Positrons from pion or muon decay were detected by another silicon strip
detector (S3) and a multiwire proportional chamber (WC3) located downstream of
B3 to reconstruct tracks and define the acceptance. Two thin plastic
scintillators (T1 and T2) were used to measure the positron time, and its
energy was measured by a 48 cm (dia.) $\times$ 48 cm (length) single crystal
NaI(T$\ell$) calorimeter surrounded by 97 pure CsI crystals to detect shower
leakage. The energy resolution of the calorimeter for positrons was 2.2%
(FWHM) at 70 MeV.
The pion and positron signals were defined by a coincidence of B1, B2, and B3,
and a coincidence of T1 and T2, respectively. A coincidence of the pion and
positron signals within a time window of $-$300 ns to 540 ns with respect to
the pion signal was the basis of the main trigger condition. This was
prescaled by a factor of 16 to form an unbiased trigger (Prescaled trigger).
${\pi}^{+}{\rightarrow}e^{+}{\nu_{e}}$ event collection was enhanced by an
early time trigger selecting all events occurring between 6 and 46 ns after
the arrival of the pion (Early trigger). The typical trigger rate including
calibration triggers was about 600 s$^{-1}$.
To extract the energy and time information, plastic scintillators, silicon
strip detectors and CsI crystals, and the NaI(T$\ell$) crystal were read out
by 500 MHz, 60 MHz, and 30 MHz digitizers, respectively. The wire chambers and
trigger signals were read by multi-hit time-to-digital converters with
0.625 ns resolution NIMA .
## III ${\pi}^{+}{\rightarrow}e^{+}{\nu}X$ decay
### III.1 Event selection
Figure 3: First and third panels from the top: the $E_{e}$ spectra of
${\pi}^{+}{\to}e^{+}{\nu_{e}}$ decay after
${\pi}^{+}{\rightarrow}{\mu}^{+}{\rightarrow}e^{+}$ suppression cuts for
datasets 1 (a) and 2 (c). The black crosses with the statistical uncertainties
show the data. Background components illustrated by the dashed and dotted
green line, dotted blue line, dashed gray line, and solid red line represent
${\pi}^{+}{\rightarrow}{\mu}^{+}{\rightarrow}e^{+}$ decays, low energy
${\pi}^{+}{\rightarrow}e^{+}{\nu_{e}}$ tail, $\mu$DIF events, and the sum of
those three components, respectively (see text). Second and fourth panels from
the top: the residual plots shown by the black circles with statistical error
bars and hypothetical signals (solid red lines) with a mass of $m_{X}=80$
MeV/$c^{2}$ and a branching ratio $R^{{\pi}e{\nu}X}=2.0{\times}10^{-6}$ from
datasets 1 (b) and 2 (d) (the branching ratio obtained by the fit at this mass
was $R^{{\pi}e{\nu}X}=(-7.1{\pm}7.1){\times}10^{-8})$.
The decay ${\pi}^{+}{\rightarrow}e^{+}{\nu}X$ was searched for by fitting the
${\pi}^{+}{\rightarrow}e^{+}{\nu_{e}}$ energy spectra after
${\pi}^{+}{\rightarrow}{\mu}^{+}{\rightarrow}e^{+}$ background suppression.
The cuts used for the pion selection, the rejection of the extra activity in
scintillators, and the suppression of ${\pi}^{+}{\to}{\mu}^{+}{\to}e^{+}$
backgrounds were the same as for the analysis of
${\pi}^{+}{\to}e^{+}{\nu}_{e}{\nu}\bar{\nu}$ decay PIENU4 . Pions were
identified using the energy loss information in B1 and B2. Events with extra
activity in B1, B2, T1 or T2 were rejected. Since the calibration system for
the CsI crystals was not available before November 1, 2010, the data were
divided into two sets (dataset 1, before, and dataset 2, after November 1,
2010). A 15% solid angle cut was used for dataset 2, and a tighter cut (10%) was applied to dataset 1 to minimize the effects of electromagnetic
shower leakage.
The ${\pi}^{+}{\rightarrow}{\mu}^{+}{\rightarrow}e^{+}$ backgrounds were
suppressed using decay time, energy in the target, and tracking information
provided by WC1, WC2, S1, and S2 PIENU3 ; PIENU4 . Events were first selected
by the Early trigger and a decay time cut $t=7-35$ ns after the pion stop was
applied. The energy loss information in B3 was used because
${\pi}^{+}{\rightarrow}{\mu}^{+}{\rightarrow}e^{+}$ backgrounds deposit larger
energy in B3 than ${\pi}^{+}{\to}e^{+}{\nu}_{e}$ decays due to the presence of
the decay muon ($T_{\mu}=4.1$ MeV). After the timing selection and the energy
cut in B3, the beam pion tracking cut, which used the angle between WC1, 2 and
S1, 2 track segments, was applied to reject events with a larger angle than
most ${\pi}^{+}{\to}e^{+}{\nu}_{e}$ events (mostly, $\pi$DIF events before B3)
NIMA . Figure 3 shows the decay positron energy spectra of
${\pi}^{+}{\rightarrow}e^{+}{\nu_{e}}$ decays after
${\pi}^{+}{\rightarrow}{\mu}^{+}{\rightarrow}e^{+}$ background suppression
cuts ((a) dataset 1 and (c) dataset 2). The bumps in the positron energy
spectra at about 58 MeV are due to photo-nuclear reactions in the NaI(T$\ell$)
PN . The total number of ${\pi}^{+}{\rightarrow}e^{+}{\nu_{e}}$ events was
$1.3{\times}10^{6}$ ($5{\times}10^{5}$ in dataset 1 and $8{\times}10^{5}$ in
dataset 2).
### III.2 Energy spectrum fit
The energy spectrum was fitted with a combination of background terms and a
shape to represent the signal. The background component due to the remaining
${\pi}^{+}{\rightarrow}{\mu}^{+}{\rightarrow}e^{+}$ events was obtained from
the data by requiring a late time region $t>200$ ns. The shape of the low
energy ${\pi}^{+}{\rightarrow}e^{+}{\nu_{e}}$ tail was obtained by Monte Carlo
(MC) simulation geant4 including the detector response which was measured
using a mono-energetic positron beam NIMA ; PN . Because the solid angle cut
was reduced and the CsI was not used for dataset 1, the shapes of the low
energy ${\pi}^{+}{\to}e^{+}{\nu}_{e}$ tails are slightly different for the two
datasets. Another background came from the decays-in-flight of muons ($\mu$DIF) following ${\pi}^{+}{\rightarrow}{\mu}^{+}{\nu_{\mu}}$ decays in B3; these events have a time distribution similar to that of ${\pi}^{+}{\rightarrow}e^{+}{\nu_{e}}$ decay. The shape of the $\mu$DIF event spectrum was obtained by MC simulation.
The signal shapes as shown in Fig. 1 (a) were produced with mass range $m_{X}$
from 0 to 120 MeV/$c^{2}$ in 5 MeV/$c^{2}$ steps by MC simulation including
the detector response. These shapes were normalized to 1 and used for the fit
to search for the signals. To combine the two data sets, simultaneous fitting
with a common branching ratio as a free parameter was performed. The fit in
the range of $E_{e}=5-56$ MeV without any signal resulted in
${\chi}^{2}/$d.o.f.=1.04 (d.o.f.=402). The addition of the signals did not
change the fit result.
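Schematically, once the background and signal template shapes are fixed, such a spectrum fit reduces to a linear least-squares problem for the template amplitudes. The sketch below is a toy illustration with entirely synthetic shapes and data (not the experiment's templates or likelihood):

```python
import numpy as np

rng = np.random.default_rng(1)
e = np.linspace(5, 56, 50)  # positron energy bins in MeV

def norm(y):
    """Normalize a template shape to unit sum, as done for the signal shapes."""
    return y / y.sum()

# Synthetic stand-ins for the pi->mu->e, low-energy-tail, and signal templates
bkg1 = norm(np.exp(-0.5 * ((e - 35.0) / 10.0) ** 2))
bkg2 = norm(np.exp(-e / 15.0))
sig = norm(np.exp(-0.5 * ((e - 48.0) / 2.0) ** 2))

truth = 1.0e5 * bkg1 + 2.0e4 * bkg2  # toy "data" with no injected signal
data = rng.poisson(truth)

# With fixed template shapes, fitting the amplitudes is linear least squares
T = np.column_stack([bkg1, bkg2, sig])
amps, *_ = np.linalg.lstsq(T, data, rcond=None)
print(amps)  # the fitted signal amplitude amps[2] should be consistent with zero
```

In the actual analysis the fit is a simultaneous one over both datasets with the branching ratio as the common free parameter, but the linear-template structure is the same.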
### III.3 Results
Figure 4: Results of the 90% C.L. upper limit branching ratio
$R^{{\pi}e{\nu}X}$. Dashed black line: previous TRIUMF results Picciotto .
Solid red line with filled circles: results from this work.
Figure 3 (b) and (d) show the residual plots without any signal in datasets 1
and 2; hypothetical signals assuming $m_{X}=80$ MeV/$c^{2}$ with the branching
ratio $R^{{\pi}e{\nu}X}=2.0{\times}10^{-6}$ are also shown. No significant
excess above the statistical uncertainty was observed. For example, the
branching ratio with $m_{X}=0$ MeV/$c^{2}$ obtained by the fit was
$R^{{\pi}e{\nu}X}=(0.3{\pm}3.2){\times}10^{-7}$. Figure 4 shows the 90%
confidence level (C.L.) upper limits for the branching ratio
${\pi}^{+}{\rightarrow}e^{+}{\nu}X$ in the mass region from 0 to 120
MeV/$c^{2}$ calculated using the Feldman and Cousins (FC) approach FC . Since
the signal shape at a mass of 55 MeV/$c^{2}$ is similar to the
${\pi}^{+}{\to}{\mu}^{+}{\to}e^{+}$ energy spectrum, the sensitivity was worse
than for other masses due to the strong correlation;
$R^{{\pi}e{\nu}X}=(-0.3{\pm}10.0){\times}10^{-7}$. The statistical uncertainty
dominates because the systematic uncertainties and the acceptance effects are
approximately canceled out by taking the ratio of the number of signal events
obtained by the fit to the number of pion decays. The acceptance effect due to
the cuts was examined by generating positrons in B3 isotropically with an
energy range of $E_{e}=0-70$ MeV using the MC simulation and the systematic
uncertainty was estimated to be $<$5%. Compared to the previous TRIUMF
experiment Picciotto , the limits were improved by an order of magnitude.
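The 90% C.L. limits above use the Feldman-Cousins ordering. For a unit-Gaussian measurement with a physical boundary at zero, which is a simplification of the actual fit likelihood, the construction can be sketched numerically; the function name and grid parameters below are illustrative choices of ours, not from the paper:

```python
import numpy as np

def fc_upper_limit(x0, cl=0.90, mu_max=6.0, dmu=0.01, dx=0.005):
    """Feldman-Cousins upper limit (in units of sigma) for a unit-Gaussian
    measurement x0 of a mean mu constrained to be non-negative."""
    x = np.arange(-8.0, 14.0, dx)
    upper = 0.0
    for mu in np.arange(0.0, mu_max, dmu):
        pdf = np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)
        mu_best = np.maximum(x, 0.0)  # best physically allowed fit for each x
        # likelihood-ratio ordering principle of Feldman and Cousins
        r = np.exp(-0.5 * (x - mu) ** 2 + 0.5 * (x - mu_best) ** 2)
        order = np.argsort(-r)
        cum = np.cumsum(pdf[order]) * dx
        accepted = x[order[: np.searchsorted(cum, cl) + 1]]
        if accepted.min() <= x0 <= accepted.max():  # acceptance is an interval
            upper = mu
    return upper

print(round(fc_upper_limit(0.0), 2))  # ~1.64, the tabulated FC value for x0 = 0
```

For example, for a measured branching ratio of $(0.3{\pm}3.2){\times}10^{-7}$ one would evaluate `fc_upper_limit(0.3/3.2)` and scale the result by $3.2{\times}10^{-7}$.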
## IV ${\pi}^{+}{\rightarrow}{\mu}^{+}{\nu}X$ decay
The decay ${\pi}^{+}{\rightarrow}{\mu}^{+}{\nu}X$ can be sought by a
measurement of the muon kinetic energy in ${\pi}^{+}{\to}{\mu}^{+}{\nu}$ decay
(followed by ${\mu}^{+}{\to}e^{+}{\nu}_{e}\bar{\nu}_{\mu}$ decay) in the
target (B3). In the ${\pi}^{+}{\to}{\mu}^{+}{\to}e^{+}$ decay chain, three
hits are expected in B3: the first signal is from the beam pion, the second is
from the decay muon, and the third is from the decay positron. Thus, the second of the three pulses in B3 corresponds to the muon kinetic energy deposit. However,
the pulse detection logic could not efficiently identify pulses below 1.2 MeV
PIENU2 . Therefore, the search was divided into two muon energy regions, above
and below 1.2 MeV. The number of Prescaled trigger events used for the
analysis was $4{\times}10^{9}$. The analysis strategy and event selection cuts
were based on the massive neutrino PIENU3 and three neutrino decay PIENU4
searches, briefly described in the following sections.
### IV.1 Analysis of the region above 1.2 MeV
Figure 5: (a) The $T_{\mu}$ spectra of ${\pi}^{+}{\to}{\mu}^{+}{\to}e^{+}$
decay. The black crosses with the statistical uncertainties show the data. The
dotted green line, dashed blue line, and solid red line represent a Gaussian
distribution centered at 4.1 MeV,
${\pi}^{+}{\rightarrow}{\mu}^{+}{\nu}_{\mu}{\gamma}$ decay, and the sum of
those two functions, respectively. (b) Residual plots shown by the black
circles with statistical error bars in the range $T_{\mu}=1.3$ to 3.4 MeV. The
solid red line represents a hypothetical signal with mass of $m_{X}=15$
MeV/$c^{2}$ and the branching ratio $R^{{\pi}{\mu}{\nu}X}=6.0{\times}10^{-5}$;
the branching ratio obtained by the fit was
$R^{{\pi}{\mu}{\nu}X}=(-3.6{\pm}5.1){\times}10^{-6}$.
As described in Sec. III.1, pions were identified using B1 and B2 and events
with extra hits in B1, B2, T1, or T2 were rejected. A solid angle acceptance
of about 20% for the decay positron was used. To ensure the selected events
were from ${\pi}^{+}{\rightarrow}{\mu}^{+}{\rightarrow}e^{+}$ decays, a late
positron decay time $t>200$ ns after the pion stop and the positron energy in
the NaI(T$\ell$) calorimeter $E_{e}<55$ MeV were required. Then, the events
with three clearly separated pulses in the target (B3) were selected and the
second pulse information was extracted and assigned to the decay muon PIENU2 .
The muon kinetic energy ($T_{\mu}$) spectrum after the event selection cuts is
shown in Fig. 5 (a). As described above, the drop below 1.2 MeV was due to the
inefficiency of the pulse detection logic PIENU2 . The main background below
3.4 MeV was due to the radiative pion decay
${\pi}^{+}{\rightarrow}{\mu}^{+}{\nu_{\mu}}{\gamma}$ (branching fraction
$2{\times}10^{-4}$ pimunug ). The total number of
${\pi}^{+}{\to}{\mu}^{+}{\to}e^{+}$ events available was 9.1${\times}10^{6}$.
The decay ${\pi}^{+}{\rightarrow}{\mu}^{+}{\nu}X$ was searched for by fitting
the $T_{\mu}$ energy spectrum of ${\pi}^{+}{\to}{\mu}^{+}{\to}e^{+}$ decays.
The fit was performed using a Gaussian peak centered at 4.1 MeV (energy
resolution ${\sigma}=0.16$ MeV), the
${\pi}^{+}{\rightarrow}{\mu}^{+}{\nu}_{\mu}{\gamma}$ decay spectrum obtained
by MC simulation geant4 , and the normalized signal spectra including the
energy resolution in B3. The signal spectra as shown in Fig. 1 (b) were
generated with the mass range $0<m_{X}<26$ MeV/$c^{2}$ with 1 MeV/$c^{2}$
steps using MC including detector resolution. The fit for $T_{\mu}$ from 1.3
to 4.2 MeV without any ${\pi}^{+}{\rightarrow}{\mu}^{+}{\nu}X$ signal
introduced gave ${\chi}^{2}/$d.o.f.=1.27 (d.o.f.=53) and the residuals of the
fit for the signal sensitive region are shown in Fig. 5 (b). The addition of
signal components did not change the fit result.
No significant signal beyond the statistical uncertainty was observed. For
example, the branching ratios for the signals with mass $m_{X}=0$ MeV/$c^{2}$
and 26 MeV/$c^{2}$ obtained by the fit were
$R^{{\pi}{\mu}{\nu}X}={\Gamma}({\pi}^{+}{\to}{\mu}^{+}{\nu}X)/{\Gamma}({\pi}^{+}{\to}{\mu}^{+}{\nu}_{\mu})=(-2.1{\pm}1.3){\times}10^{-4}$
and $(-4.8{\pm}8.8){\times}10^{-6}$, respectively. Systematic uncertainties
and acceptance effects were approximately canceled by taking the ratio of
amplitudes for the signal and ${\pi}^{+}{\to}{\mu}^{+}{\nu}_{\mu}$ decays. The
systematic uncertainties and acceptance effects due to the cuts were examined
by generating decay muons in the target with several kinetic energies in the
range $T_{\mu}=0-4.1$ MeV using MC simulation, and the systematic uncertainty
was estimated to be $<$5%. The black circles in Fig. 6 show the result of the
90% C.L. upper limit branching ratio $R^{{\pi}{\mu}{\nu}X}$ in this energy
region calculated using the FC method.
Figure 6: Summary of the 90% C.L. upper limit branching ratio
$R^{{\pi}{\mu}{\nu}X}$ in this work. The black circles show the result of the
search in the energy region $T_{\mu}>1.2$ MeV (see text in Sec. IV.1) and the
red squares represent the analysis result in the region $T_{\mu}<1.2$ MeV (see
text in Sec. IV.2).
### IV.2 Analysis of the region below 1.2 MeV
Figure 7: (a) The total energy in the target due to the pion and muon after
subtracting 17 MeV. The black crosses with statistical uncertainties show the
data. The dotted green line, dashed blue line, and solid red line represent
the main peak at 4.1 MeV, quadratic background due to ${\pi}$DIF events, and
the sum of those two functions, respectively. (b) Residual plots shown by the
black circles with the statistical error bars in the signal region
$T_{\mu}=-1.8$ to 1.8 MeV. The solid red line represents a hypothetical signal
with mass of $m_{X}=33.9$ MeV/$c^{2}$ and the branching ratio
$R^{{\pi}{\mu}{\nu}X}=3.0{\times}10^{-5}$.
For $T_{\mu}<1.2$ MeV, the selection of pions, rejection of extra activity in
scintillators, the solid angle cut for the decay positron, and the positron
energy cut in the NaI(T$\ell$) calorimeter were all the same as in the
analysis in the energy region $T_{\mu}>1.2$ MeV. To minimize ${\pi}$DIF
events, the same tracking cut by WC1, WC2, S1, and S2 used in Sec. III.1 was
also applied. After these basic cuts, the energies observed in B3 in a wide
time window (700 ns) including pion and positron energies were obtained. To
cleanly subtract the positron contribution from the integrated energy, events
with late positron decay $t>300$ ns were selected and the isolated positron
energy was subtracted. After that, the contribution of the averaged pion
kinetic energy ($\sim$17 MeV) was subtracted from the total energy (due to the
pion and the muon). Figure 7 (a) shows the total energy (corresponding to
$T_{\mu}$) after subtracting 17 MeV. The background at $T_{\mu}<1$ MeV was
mainly due to remaining ${\pi}$DIF events. The number of
${\pi}^{+}{\to}{\mu}^{+}{\to}e^{+}$ events available for the analysis is
$1.3{\times}10^{8}$.
There are two background components: the 4.1 MeV peak and the ${\pi}$DIF events. A
quadratic function was used for the ${\pi}$DIF events. To search for
${\pi}^{+}{\to}{\mu}^{+}{\nu}X$ decay, the width of the signal shape was
scaled using that at the 4.1 MeV peak. Figure 7 (b) shows the residual plots
in the signal region from -1.8 to 1.8 MeV without any signal shape and a
hypothetical signal shape assuming a mass of $m_{X}=33.9$ MeV/$c^{2}$ with the
branching ratio $R^{{\pi}{\mu}{\nu}X}=3.0{\times}10^{-5}$. The branching ratio
obtained by the fit was $(1.0{\pm}2.0){\times}10^{-6}$. The fit was performed from -4.0 to 4.1 MeV; restricted to the signal region of -4.0 to 2.0 MeV, it gave ${\chi}^{2}/$d.o.f.=1.03 (d.o.f.=115). There is a small deviation above 2 MeV due to a slight mismatch in the kinetic energy distribution of the beam pion.
The signals of ${\pi}^{+}{\to}{\mu}^{+}{\nu}X$ decay were searched for in the
mass range of $m_{X}=26$ to 33.9 MeV/$c^{2}$, but no significant excess beyond
the statistical uncertainty was observed. The red squares in Fig. 6 represent
the result of the 90% C.L. upper limit branching ratio $R^{{\pi}{\mu}{\nu}X}$
in this energy region calculated using the FC approach.
## V Constraints on the Majoron model
The Majoron model can be constrained using the experimental value of the pion
branching ratio $R^{\pi}$. The predicted branching ratio including the
massless Majoron $X_{0}$ and a light neutral Higgs $H^{\prime}$ ($\lesssim$1
MeV/$c^{2}$) can be written as
$\frac{{\Gamma}({\pi}{\to}eL^{0})/{\Gamma}({\pi}{\to}{\mu}L^{0})}{{\Gamma}({\pi}{\to}e{\nu}_{e})/{\Gamma}({\pi}{\to}{\mu}{\nu}_{\mu})}=1+157.5g^{2}$
(1)
where $L^{0}$ is the final state ${\nu}$, ${\nu}X_{0}$, and ${\nu}H^{\prime}$,
and $g$ is the Majoron-neutrino coupling constant majoron3 . The upper limit
of the ratio $R^{\pi}_{\rm exp}/R^{\pi}_{\rm SM}$ at 90% C.L. using the
current averaged experimental value $R^{\pi}_{\rm
exp}=(1.2327{\pm}0.0023){\times}10^{-4}$ PDG is
$\frac{R^{\pi}_{\rm exp}}{R^{\pi}_{\rm SM}}<1.0014.$ (2)
Using this limit, the 90% C.L. upper limit of the coupling constant can be
found to be
$g^{2}<9{\times}10^{-6},$ (3)
an improvement by a factor of three over the previous experiment Britton .
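The arithmetic behind Eqs. (1)-(3) is a one-line inversion, restated here for clarity:

```python
# Restating Eqs. (1)-(3): R_exp/R_SM = 1 + 157.5 g^2 < 1.0014
ratio_limit = 1.0014                    # Eq. (2), 90% C.L. upper limit
g2_limit = (ratio_limit - 1.0) / 157.5  # invert Eq. (1)
print(f"g^2 < {g2_limit:.1e}")          # ~8.9e-06, quoted as 9e-6 in Eq. (3)
```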
## VI Conclusion
No evidence of the three body pion decays ${\pi}^{+}{\to}e^{+}{\nu}X$ or
${\pi}^{+}{\to}{\mu}^{+}{\nu}X$ was found and new upper limits were set. The
limits on the branching ratio ${\pi}^{+}{\to}e^{+}{\nu}X$ were improved by an
order of magnitude over the previous experiment. For
${\pi}^{+}{\to}{\mu}^{+}{\nu}X$ decay, the limits obtained are the first
available results. The Majoron model was also constrained using the pion
branching ratio $R^{\pi}$.
###### Acknowledgements.
This work was supported by the Natural Sciences and Engineering Research
Council of Canada (NSERC, No. SAPPJ-2017-00033), and by the Research Fund for
the Doctoral Program of Higher Education of China, by CONACYT doctoral
fellowship from Mexico, and by JSPS KAKENHI Grant No. 18540274, No. 21340059,
No. 24224006, and No. 19K03888 in Japan. We are grateful to Brookhaven
National Laboratory for the loan of the crystals, and to the TRIUMF
operations, detector, electronics and DAQ groups for their engineering and
technical support.
## References
* (1) G. Bertone, D. Hooper, and J. Silk, Phys. Rep. 405, 279 (2005).
* (2) A.D. Dolgov, arXiv:hep-ph/9707419; V.A. Rubakov and M.E. Shaposhnikov, Phys. Usp. 39, 461 (1996).
* (3) Y. Fukuda et al., Phys. Rev. Lett. 81, 1562 (1998).
* (4) R.D. Peccei and H.R. Quinn, Phys. Rev. Lett. 38, 1440 (1977).
* (5) R.D. Peccei and H.R. Quinn, Phys. Rev. D 16, 1791 (1977).
* (6) F. Wilczek, Phys. Rev. Lett. 49, 1549 (1982); see also A. Davidson and K. C. Wali, Phys. Rev. Lett. 48, 11 (1982).
* (7) J. Jaeckel and A. Ringwald, Annu. Rev. Nucl. Part. Sci. 60, 405 (2010).
* (8) P. Agrawal and K. Howe, J. High Energy Phys. 12 (2018) 029.
* (9) D.S. M. Alves and N. Weiner, J. High Energy Phys. 07 (2018) 092.
* (10) K.S. Jeong, T.H. Jung, and C.S. Shin, Phys. Rev. D 101, 035009 (2020).
* (11) W. Altmannshofer, S. Gori, and D.J. Robinson, Phys. Rev. D 101, 075002 (2020).
* (12) B. Batell, T. Han, D. McKeen, and B.S.E. Haghi, Phys. Rev. D 97, 075016 (2018).
* (13) J.A. Dror, Phys. Rev. D 101 095013 (2020).
* (14) G.B. Gelmini and M. Roncadelli, Phys. Lett. B 99, 411 (1981); see also G.B. Gelmini, S. Nussinov, and M. Roncadelli, Nucl. Phys. B209 (1982) 157-173.
* (15) Y. Chikashige, R. N. Mohapatra, and R. D. Peccei, Phys. Lett. 98B, 265 (1981).
* (16) V. Barger, W.Y. Keung, and S. Pakvasa, Phys. Rev. D 25, 907 (1982).
* (17) A. Masiero, J.W.F. Valle, Phys. Lett. B251, 273-278 (1990).
* (18) A.P. Lessa and O.L.G. Peres, Phys. Rev. D 75, 094001 (2007).
* (19) M. Hirsch, A. Vicente, J. Meyer, and W. Porod, Phys. Rev. D 79, 055023 (2009).
* (20) X. Garcia i Tormo, D. Bryman, A. Czarnecki, and M. Dowling, Phys. Rev. D 84, 113010 (2011).
* (21) C.E. Picciotto et al., Phys. Rev. D 37, 1131 (1988).
* (22) D.I. Britton et al., Phys. Rev. Lett. 68, 3000 (1992) and Phys. Rev. D 49, 28 (1994).
* (23) A. Aguilar-Arevalo et al., Phys. Rev. Lett. 115, 071801 (2015).
* (24) M. Aoki et al., Phys. Rev. D 84, 052002 (2011) and A. Aguilar-Arevalo et al., Phys. Rev. D 97, 072012 (2018).
* (25) A. Aguilar-Arevalo et al., Phys. Lett. B 798, 134980 (2019).
* (26) A. Aguilar-Arevalo et al., Phys. Rev. D 102, 012001 (2020).
* (27) A. Aguilar-Arevalo et al., Nucl. Instrum. Methods Phys. Res., Sect. A 791, 38 (2015).
* (28) A. Aguilar-Arevalo et al., Nucl. Instrum. Methods Phys. Res., Sect. A 609, 102 (2009).
* (29) G. Bressi, G. Carugno, S. Cerdonio, E. Conti, A.T. Meneguzzo, and D. Zanello, Nucl. Phys. B 513, 555 (1998).
* (30) A. Aguilar-Arevalo et al., Nucl. Instrum. Methods Phys. Res., Sect. A 621, 188 (2010).
* (31) S. Agostinelli et al. (GEANT4 Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. A 506, 250 (2003); http://geant4.cern.ch.
* (32) G.J. Feldman and R.D. Cousins, Phys. Rev. D 57, 3873 (1998).
* (33) M. Tanabashi et al. (Particle Data Group), Phys. Rev. D 98, 030001 (2018).
# On a polynomial congruence for Eulerian polynomials
Ira M. Gessel∗ Department of Mathematics
Brandeis University
Waltham, MA 02453<EMAIL_ADDRESS>
(Date: January 18, 2021)
Supported by a grant from the Simons Foundation (#427060, Ira Gessel).
Yoshinaga [2, Proposition 5.5] proved, using arrangements of hyperplanes, the
polynomial congruence for Eulerian polynomials
$A_{n}(t^{m})\equiv\left(\frac{1+t+\cdots+t^{m-1}}{m}\right)^{n+1}A_{n}(t)\mod(t-1)^{n+1}.$
(1)
Here the Eulerian polynomials $A_{n}(t)$ may be defined by the generating
function
$\sum_{n=0}^{\infty}\frac{A_{n}(t)}{(1-t)^{n+1}}\frac{x^{n}}{n!}=\frac{1}{1-te^{x}}.$
(2)
A simpler proof, using roots of unity, was given by Iijima et al. [1]. We give
here a very simple proof based on the generating function (2).
Since $1+t+\cdots+t^{m-1}=(1-t^{m})/(1-t)$, the congruence (1) is equivalent
to the statement that the denominator of
$m^{n+1}\frac{A_{n}(t^{m})}{(1-t^{m})^{n+1}}-\frac{A_{n}(t)}{(1-t)^{n+1}}$ (3)
is not divisible by $t-1$.
By (2) the rational function (3) is the coefficient of $x^{n}/n!$ in
$\frac{m}{1-t^{m}e^{mx}}-\frac{1}{1-te^{x}}=\frac{m}{1-t^{m}e^{mx}}-\frac{1+te^{x}+t^{2}e^{2x}+\cdots+t^{m-1}e^{(m-1)x}}{1-t^{m}e^{mx}}.$
Thus it suffices to show that the denominator of the coefficient of $x^{n}/n!$
in
$\frac{1-t^{j}e^{jx}}{1-t^{m}e^{mx}}=\frac{1+te^{x}+\cdots+t^{j-1}e^{(j-1)x}}{1+te^{x}+\cdots+t^{m-1}e^{(m-1)x}}$
(4)
is not divisible by $t-1$. We have
$1+te^{x}+\cdots+t^{m-1}e^{(m-1)x}=1+t+\cdots+t^{m-1}+xP(t,x)$ where $P(t,x)$
is a power series in $x$ with coefficients that are polynomials in $t$. It
follows that the denominator of the coefficient of $x^{n}/n!$ in (4) is a
constant times a power of $1+t+\cdots+t^{m-1}$ and is thus not divisible by
$t-1$.
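For small $n$ and $m$ the congruence can also be verified directly with a computer algebra system. The sketch below uses the convention of the generating function (2), in which $A_1(t)=t$; the function names are ours:

```python
import sympy as sp

t, x = sp.symbols('t x')

def eulerian(n):
    """A_n(t) extracted from the generating function (2), 1/(1 - t e^x)."""
    c = (1 / (1 - t * sp.exp(x))).series(x, 0, n + 1).removeO().coeff(x, n)
    return sp.expand(sp.cancel(sp.factorial(n) * (1 - t) ** (n + 1) * c))

def congruence_holds(n, m):
    """Check (1): A_n(t^m) == ((1+t+...+t^{m-1})/m)^{n+1} A_n(t) mod (t-1)^{n+1},
    after clearing the m^{n+1} denominator."""
    A = eulerian(n)
    S = sum(t ** i for i in range(m))
    diff = sp.expand(m ** (n + 1) * A.subs(t, t ** m) - S ** (n + 1) * A)
    return sp.rem(diff, (t - 1) ** (n + 1), t) == 0

print(eulerian(3))  # A_3(t) = t + 4 t^2 + t^3 in this convention
print(all(congruence_holds(n, m) for n in (1, 2, 3) for m in (2, 3)))
```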
## References
* [1] Kazuki Iijima, Kyouhei Sasaki, Yuuki Takahashi, and Masahiko Yoshinaga, _Eulerian polynomials and polynomial congruences_ , Contrib. Discrete Math. 14 (2019), no. 1, 46–54.
* [2] Masahiko Yoshinaga, _Worpitzky partitions for root systems and characteristic quasi-polynomials_ , Tohoku Math. J. (2) 70 (2018), no. 1, 39–63.
# Galaxy Image Translation with Semi-supervised Noise-reconstructed Generative
Adversarial Networks
††thanks: This project has received funding from the European Union’s Horizon
2020 research and innovation programme under the Marie Skłodowska-Curie grant
agreement No713750. Also, it has been carried out with the financial support
of the Regional Council of Provence-Alpes-Côte d’Azur and with the financial
support of the A*MIDEX (n° ANR-11-IDEX-0001-02), funded by the Investissements
d’Avenir project funded by the French Government, managed by the French
National Research Agency (ANR).
1st Qiufan Lin Aix Marseille Univ., CNRS/IN2P3, CPPM
Marseille, France
<EMAIL_ADDRESS>2nd Dominique Fouchez Aix Marseille Univ., CNRS/IN2P3,
CPPM
Marseille, France
<EMAIL_ADDRESS>3rd Jérôme Pasquet UMR TETIS, Univ. Montpellier
AgroParisTech, Cirad, CNRS, Irstea
Montpellier, France
<EMAIL_ADDRESS>
###### Abstract
Image-to-image translation with Deep Learning neural networks, particularly
with Generative Adversarial Networks (GANs), is one of the most powerful
methods for simulating astronomical images. However, current work is limited
to utilizing paired images with supervised translation, and there has been little discussion of reconstructing the noise background that encodes instrumental and observational effects. These limitations might be detrimental to subsequent
scientific applications in astrophysics. Therefore, we aim to develop methods
for using unpaired images and preserving noise characteristics in image
translation. In this work, we propose a two-way image translation model using
GANs that exploits both paired and unpaired images in a semi-supervised
manner, and introduce a noise emulating module that is able to learn and
reconstruct noise characterized by high-frequency features. By experimenting
on multi-band galaxy images from the Sloan Digital Sky Survey (SDSS) and the
Canada France Hawaii Telescope Legacy Survey (CFHT), we show that our method
recovers global and local properties effectively and outperforms benchmark
image translation models. To the best of our knowledge, this work is the first
attempt to apply semi-supervised methods and noise reconstruction techniques
in astrophysical studies.
###### Index Terms:
Semi-supervised learning, Image processing and analysis, Deep learning
## I Introduction
Simulating realistic astronomical images is an important but hard task in
astrophysics. In addition to real observational data, astronomers utilize
simulated images in a variety of scientific studies, ranging from analyzing
single celestial objects (e.g., transients, stars, galaxies, etc.) to probing
the evolution of the universe (e.g., weak gravitational lensing, large-scale
structures, supernova cosmology, etc.). Unlike traditional non-Deep-Learning
simulation methods, generative models built on Generative Adversarial Networks
(GANs) have shown promise in producing high-fidelity images efficiently,
without imposing theoretical modeling assumptions (e.g., [1], [2], [3]).
While generative models using random seeds as input tend to “invent” new
content, image-to-image translation generates new images that retain the
content learned from a source domain. This approach has its merits when we
attempt to augment data for real tasks in a target sky survey based on other
existing surveys that differ in instrumental and observational effects. In this
regard, our work specifically focuses on making image simulations via
image-to-image translation.
A straightforward way of making image translation would be to use image pairs
from two surveys that contain the same objects, each serving as the “ground
truth” for the other in a supervised manner. However, due to limited
overlapping sky coverage, we usually lack such paired data to train
large-scale neural networks, while there are always sufficient unpaired data
to use. Moreover, models trained with paired data alone might not be able to
generalize well over the distribution of the unpaired data. As current image
translation work in astrophysics is limited to using paired data, we aim to
develop an unsupervised or semi-supervised translation method for exploiting
unpaired data.
Another major part in our work is noise reconstruction in image translation.
One image can be decomposed into useful signal and noise. We define the useful
signal (or the non-noise part) as the true signal from the object convolved
with the Point Spread Function (PSF), predominantly governed by atmospheric
blurring for ground-based telescopes and usually characterized as low-
frequency features. Background noise is the other important component of an
image. Because of its existence and variations in nature, even the same
celestial objects may have dissimilar appearances on images from different
surveys. Noise can come from diverse sources, including shot noise from
the object itself and the sky background under the object, thermal radiation
inside the detector, or random loss and gain of electrons during the CCD read-
out process. As noise encodes systematic effects of a survey, models developed
upon training data with particular noise characteristics might fail on
previously unseen data with different noise properties. In addition, noise is
an unavoidable element in the simulation of realistic data for a survey.
Despite its importance, noise is usually overlooked as it is hard to learn and
preserve due to its high-frequency nature. We are cautious that noise would
bias signal recovery and jeopardize subsequent analyses if not treated
properly. We note that CycleGAN-like image translation implementations are
already capable of reconstructing non-noise patterns, but they lack
ingredients to generate noise. There have been studies discussing noise
modeling for natural image processing (e.g., [4], [5]), but they are
restricted to pre-defined and artificially generated noise patterns. Unlike
other studies that focus on removing noise from images based on noise modeling
or simulations, we attempt to preserve noise information from real images and
reconstruct noise in image translation.
The contributions of this work are as follows.
1. We develop a two-way one-to-one mapping model for galaxy image translation
using Generative Adversarial Networks (GANs); the code is available at
https://github.com/QiufanLin/ImageTranslation. Our semi-supervised training
scheme makes use of not only the unpaired data representative of the
distribution of the full dataset, but also the paired data that ensures
precise calibration of the target domain.
2. We reconstruct noise by introducing noise emulating modules and
discriminators that concentrate on high-frequency features. Though developed
for astronomical imaging analysis, this technique might also be beneficial for
image processing tasks in other fields.
## II Related Work
### II-A Generative Adversarial Models in Astrophysics
Recently, GAN networks [6] have become increasingly popular in
astrophysical studies. The vanilla GAN sets up a minimax game between a
Generator and a Discriminator, taking a random noise seed to produce fake
data, such as images [7], that is expected to be indistinguishable from the
target domain. This has been applied in generative image simulation models
such as [1] and [2]. [1] simulated weak lensing convergence maps with
high statistical confidence. [2] developed a chained method to produce galaxy
images with high resolution based on StackGAN [8], by first producing low-
resolution images from random seeds and then upsampling low-resolution images.
The GAN method has also been implemented in studies relevant to image
translation. For example, [9] used an image translation method to capture the
underlying noise field from a noisy weak lensing convergence map. Similarly,
[10] trained a model to recover clean images from degraded images with bad
astronomical seeing and high noise. [11] proposed a branched GAN network to
deblend overlapping galaxies. We note that all of these studies trained the
networks with simulated data, from which the ground truth information for
supervised translation is accessible. [3] made image translation using real
data from two radio surveys with different resolutions and brightness
sensitivities, yet they only exploit paired image cutouts.
### II-B Two-way Adversarial Networks
Two-way GAN networks in general hold two minimax games in parallel, having two
Generators and two Discriminators, which are built to make connections across
two domains. Though not specifically referred to as two-way translation, the
idea of connecting domains has existed in early work such as ALI [12] and
BiGAN [13], where an inverse mapping is made to recover the random seed
inputted for generating fake data. Similarly, InfoGAN [14] is trained to
maximize the mutual information between generated data and a second seed used
as a conditional code [15]. CoGAN [16] produces images from a joint
distribution learned from different domains.
The concept of two-way translation became clear when CycleGAN [17], as an
extension of pix2pix [18], applied a cycle-consistency loss to ensure one-to-
one mapping between two domains. It was proposed to make use of unpaired data
for training. Similar ideas can also be found in other work. StarGAN [19]
makes multi-domain translation controlled by domain-specified labels.
Translation between each pair of domains is constrained by a cycle-consistency
loss. DualGAN [20] adopts the same cycle-consistency constraint, except that
the adversarial loss follows Wasserstein GAN [21]. DiscoGAN [22] explores a
few forms of cycle-consistency loss. XGAN [23] applies semantic consistency to
embedded features. While CycleGAN achieves approximately deterministic
mappings between domains, Augmented CycleGAN [24] extends the idea of CycleGAN
by introducing stochastic many-to-many mappings. TraVeLGAN [25] applies a
siamese network to preserve high-level intra-domain semantics to get rid of
the commonly-used cycle-consistency loss.
To the best of our knowledge, unsupervised and semi-supervised methods had not
been investigated in astrophysics research prior to our work, though they have
been heavily explored in computer science. In addition, previous work
predominantly focuses on de-noising techniques, whereas there has been little
discussion of noise reconstruction.
Figure 1: Framework of our two-step training scheme. Step One: (i) update the
Autoencoders $A^{X}$, $A^{Y}$ with the original images $x$, $y$ from the two
domains $X$, $Y$, respectively; (ii) adversarially update the Noise Emulators
$NE^{X}$, $NE^{Y}$ and the Discriminators $D^{X}$, $D^{Y}$ while keeping
$A^{X}$, $A^{Y}$ (thus the generated non-noise images $\widetilde{x}$,
$\widetilde{y}$) fixed and taking Gaussian random seeds $z_{1}$, $z_{2}$ as
inputs to $NE^{X}$, $NE^{Y}$ to produce noise. Step Two: update the Generators
$G^{X\rightarrow Y}$, $G^{Y\rightarrow X}$, using noise produced by $NE^{X}$,
$NE^{Y}$.
Figure 2: The network architectures in our model. “Conv”, “AvgPool” and “FC”
refer to Convolutional layers, Average Pooling layers and Fully-Connected
layers, respectively. “GlobalPool” refers to Global Pooling layers, divided
into Average Pooling, Maximum Pooling and Minimum Pooling. “Interpolate” and
“SymPadding” refer to the Nearest Neighbor Interpolation and the Symmetric
Padding, used as upsampling operations. The reversed downsampling operation is
“Cropping”. “ACM” refers to the Attention Complementary Modules implemented by
[26]. The labels “k”, “n” and “s” refer to the size of the convolutional
filter, the number of output channels and the size of the output feature map
or image. The output of the 2D Fourier Transform is separated into real and
imaginary parts in two channels. We adopt Leaky ReLUs with a slope of 0.2 unless
otherwise noted. (a) Architecture of the Autoencoders. (b) Architecture of the
Generator translating SDSS images to non-noise CFHT images. (c) Architecture
of the Generator translating CFHT images to non-noise SDSS images. The size of
each output feature map or image is specified. (d) Architecture of the
Discriminators for each passband. (e) Architecture of the Noise Emulators for
each passband. $z_{1}$ and $z_{2}$ denote the injected Gaussian random seeds.
“$\bigotimes$” denotes element-wise product. (f) The 7$\times$7 zero-bias
symmetric filter applied in the Noise Emulators. (g) The high-pass filter
applied in the Discriminators.
## III Our Model
Our two-way image-to-image translation model connects two domains $X$ and $Y$
via two mapping functions, $F^{X\rightarrow Y}$ and $F^{Y\rightarrow X}$. Each
mapping function is composed of two components — a Generator that decouples
noise from the input image and generates the non-noise part of the target
image, and a Noise Emulator that reconstructs a noise background associated
with the target domain. The output target image is the sum of these two parts
($F^{X\rightarrow Y}=G^{X\rightarrow Y}+NE^{Y}$, $F^{Y\rightarrow
X}=G^{Y\rightarrow X}+NE^{X}$). The amplitude of the noise background changes
from image to image, whose distribution is what we want to learn and sample
via adversarial training with two Discriminators.
If we directly train the Generators and the Noise Emulators with two
Discriminators like other two-way translation methods, we would encounter
difficulties in regenerating well-behaved noise. Noise has to be learned
together with the non-noise part of an image, as they cannot be split a
priori. For translation from a high-resolution domain to a low-resolution
domain, the Generator is fed with high-resolution information that is
redundant for generating low-resolution images, and the gradients propagated
from the Discriminator would force the Generator to produce high-frequency
fluctuations that mimic the behavior of noise but actually hinder the training
of the Noise Emulator. On the other hand, noise reconstruction is unlikely to
succeed for translation from a low-resolution domain to a high-resolution
domain, since the Discriminator would stick with detailed yet non-noise
features that can never be regenerated by the Generator with low-resolution
images as input.
Therefore, we speculate that noise reconstruction would only be achievable
through translation within the same domain, i.e., via the use of Autoencoders.
Moreover, training them separately from the Discriminators averts noise-like
patterns that are harmful to the Noise Emulators. Adopting two independent
Autoencoders as auxiliary components is a trade-off between the two
aforementioned challenging situations, which enables the Noise Emulators to be
properly optimized.
### III-A Training Scheme and Objective Functions
With these considerations in mind, we propose a two-step training scheme (Figure 1).
In Step One, the Noise Emulators $NE^{X}$, $NE^{Y}$ are first trained with two
Discriminators $D^{X}$, $D^{Y}$ and two Autoencoders $A^{X}$, $A^{Y}$. The
noise generated by the Noise Emulators is added to the non-noise part
generated by the Autoencoders and fed as input into the Discriminators. For
each iteration of the training, we update the Autoencoders independently from
the other two components, then update the Discriminators and the Noise
Emulators via adversarial training while keeping the Autoencoders fixed, i.e.,
we minimize the auto and adversarial losses alternately.
$$\mathcal{L}_{auto}(A^{X},A^{Y})=\mathbb{E}_{x\sim X}[\|A^{X}(x)-x\|_{2}]+\mathbb{E}_{y\sim Y}[\|A^{Y}(y)-y\|_{2}] \quad (1)$$

$$\mathcal{L}_{adv}(D^{X},D^{Y})=\mathbb{E}_{x\sim X}\Big[-\sum_{p=1}^{N_{p}}\big(\log D^{X}_{p}(x)+\log(1-D^{X}_{p}(F^{X}(x)))\big)\Big]+\mathbb{E}_{y\sim Y}\Big[-\sum_{p=1}^{N_{p}}\big(\log D^{Y}_{p}(y)+\log(1-D^{Y}_{p}(F^{Y}(y)))\big)\Big] \quad (2)$$

$$\mathcal{L}_{adv}(NE^{X},NE^{Y})=\mathbb{E}_{x\sim X}\Big[-\sum_{p=1}^{N_{p}}\log D^{X}_{p}(F^{X}(x))\Big]+\mathbb{E}_{y\sim Y}\Big[-\sum_{p=1}^{N_{p}}\log D^{Y}_{p}(F^{Y}(y))\Big] \quad (3)$$
$X$ and $Y$ denote two domains. $D^{X}_{p}$ and $D^{Y}_{p}$ denote the Sub-
Discriminators for the passband $p$ (discussed in Section III-E), running over
the total number of passbands $N_{p}$. $F^{X}=A^{X}+NE^{X}$ and
$F^{Y}=A^{Y}+NE^{Y}$ denote the self-mapping functions combining the
Autoencoders $A^{X}$, $A^{Y}$ and the Noise Emulators $NE^{X}$, $NE^{Y}$.
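As an illustration (not the training code), the one-domain terms of the adversarial losses in Eqs. (2) and (3) can be sketched in NumPy, with the Sub-Discriminator outputs given as per-passband probabilities:

```python
import numpy as np

def adversarial_losses(d_real, d_fake, eps=1e-12):
    """One-domain terms of Eqs. (2)-(3). d_real and d_fake hold the outputs
    D_p(.) of the per-passband Sub-Discriminators on a real image and on a
    self-mapped image F(x) = A(x) + NE(x), one probability per passband."""
    d_real = np.clip(d_real, eps, 1.0 - eps)   # guard the logarithms
    d_fake = np.clip(d_fake, eps, 1.0 - eps)
    loss_disc = -np.sum(np.log(d_real) + np.log(1.0 - d_fake))  # Eq. (2) term
    loss_ne = -np.sum(np.log(d_fake))                           # Eq. (3) term
    return loss_disc, loss_ne

# Five passbands (ugriz); a confident Discriminator scores real high, fake low.
loss_disc, loss_ne = adversarial_losses(np.full(5, 0.9), np.full(5, 0.1))
```

Minimizing the two terms alternately realizes the adversarial game: the Discriminators drive `loss_disc` down while the Noise Emulators drive `loss_ne` down.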
In Step Two, both the Discriminators and the Autoencoders are discarded. We
add the noise reconstructed from the trained Noise Emulators to the non-noise
output of the Generators $G^{X\rightarrow Y}$, $G^{Y\rightarrow X}$ and train
the Generators in the form of a cycle. The identity and cycle-consistency
losses are minimized with paired and unpaired data respectively.
$$\mathcal{L}_{id}(G^{X\rightarrow Y},G^{Y\rightarrow X})=\mathbb{E}_{(x,y)\sim(X,Y)_{pair}}[\|G^{X\rightarrow Y}(x)-y\|_{2}]+\mathbb{E}_{(x,y)\sim(X,Y)_{pair}}[\|G^{Y\rightarrow X}(y)-x\|_{2}] \quad (4)$$

$$\mathcal{L}_{cyc}(G^{X\rightarrow Y},G^{Y\rightarrow X})=\mathbb{E}_{x\sim X_{unpair}}[\|G^{Y\rightarrow X}(F^{X\rightarrow Y}(x))-x\|_{2}]+\mathbb{E}_{y\sim Y_{unpair}}[\|G^{X\rightarrow Y}(F^{Y\rightarrow X}(y))-y\|_{2}] \quad (5)$$
$F^{X\rightarrow Y}=G^{X\rightarrow Y}+NE^{Y}$ and $F^{Y\rightarrow
X}=G^{Y\rightarrow X}+NE^{X}$ denote the cross-domain mappings combining the
Generators $G^{X\rightarrow Y}$, $G^{Y\rightarrow X}$ and the Noise Emulators
$NE^{X}$, $NE^{Y}$.
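The content losses of Eqs. (4) and (5) can be sketched in NumPy, with the expectations replaced by sample means and the mappings passed in as callables:

```python
import numpy as np

def identity_loss(g_xy, g_yx, pairs):
    """Eq. (4): L2 distance between translated paired images and their
    ground-truth counterparts, averaged over the paired sample."""
    return np.mean([np.linalg.norm(g_xy(x) - y) + np.linalg.norm(g_yx(y) - x)
                    for x, y in pairs])

def cycle_loss(f_xy, g_yx, f_yx, g_xy, xs, ys):
    """Eq. (5): translate with noise (the full mapping F), translate back
    without noise (the Generator G), and compare with the unpaired input."""
    term_x = np.mean([np.linalg.norm(g_yx(f_xy(x)) - x) for x in xs])
    term_y = np.mean([np.linalg.norm(g_xy(f_yx(y)) - y) for y in ys])
    return term_x + term_y

# With identity mappings and matching pairs, both losses vanish.
g_id = lambda a: a
pairs = [(np.ones((4, 4)), np.ones((4, 4)))]
id0 = identity_loss(g_id, g_id, pairs)
cyc0 = cycle_loss(g_id, g_id, g_id, g_id, [np.ones((4, 4))], [np.zeros((4, 4))])
```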
In addition to the loss functions discussed above, one could add extra losses
or auxiliary networks so as to put an additional constraint on the information
critical to subsequent tasks (e.g., galaxy type classification). We leave this
investigation to future analysis, as our goal in this work is only to recover
broad galaxy shapes by the content losses (Eq. 4 and Eq. 5).
### III-B Autoencoders
The architecture of the Autoencoders is shown in Figure 2(a). We observe in
our experiments that applying three Average Pooling layers is sufficient to
smooth out the typical medium-level noise in the images used in our study. In order to
avoid producing grid patterns, we upsample images using the Nearest Neighbor
Interpolation rather than the Deconvolutional layers or the Pixel Shuffle
units ([27, 28]).
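A minimal NumPy sketch of the two resampling operations discussed above (2x2 average pooling for downsampling, and nearest-neighbor interpolation for upsampling, which avoids the grid patterns that deconvolution can introduce):

```python
import numpy as np

def avg_pool2(img):
    """2x2 Average Pooling, the downsampling used in the Autoencoders."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def nn_upsample2(img):
    """2x Nearest Neighbor Interpolation, used instead of deconvolution or
    Pixel Shuffle units to avoid producing grid patterns."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

x = np.arange(16.0).reshape(4, 4)
down = avg_pool2(x)       # shape (2, 2)
up = nn_upsample2(down)   # back to shape (4, 4)
```

Nearest-neighbor upsampling simply copies each pixel into a 2x2 block, so no weight layout can imprint a periodic checkerboard on the output.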
### III-C Generators
The Generators (Figures 2(b) and 2(c)) are used for cross-domain translation,
and can be regarded as variants of the Autoencoders. We use Symmetric
Padding/Cropping as additional upsampling/downsampling operations to match the
image size of the target domain.
### III-D Noise Emulators
The Noise Emulators are built upon the following observations and assumptions:
(1) Noise behaves as fluctuations discretized on pixels. There may be
correlations among adjacent pixels. (2) The noise pattern is translationally
and rotationally invariant over the relatively small spatial scale of the
images used in our work. (3) While independent of signal, noise is subject to
the properties of a survey and its amplitude may vary due to varying
observational or instrumental conditions. (4) While signal is shared among
passbands, we assume that the noise properties associated with one passband
from a survey are independent of the other passbands as well as the other
survey. Despite this assumption, we are cautious that noise amplitudes among
different passbands might be correlated due to certain observation strategies.
However, as the noise amplitude for each passband is to be randomly sampled,
we do not take this correlation into account.
The architecture of the Noise Emulators is shown in Figure 2(e). Each channel
is independent but shares the same architecture, corresponding to a passband.
To generate noise, a random number $z_{1}$ is sampled from the standard
Gaussian distribution, which is transformed into a scalar that controls the
noise amplitude; a 2D Gaussian random seed $z_{2}$ is sampled as a noise map;
finally, the noise map is multiplied with the amplitude and convolved with a
7$\times$7 zero-bias symmetric filter (i.e., with shared weights at opposite
positions w.r.t the center, Figure 2(f)) that introduces short-scale
correlations. Once trained, we can use the Noise Emulators to regenerate noise
by sampling $z_{1}$ and $z_{2}$ for each passband.
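The sampling procedure can be sketched in NumPy as follows; the learned amplitude transform is replaced here by a hypothetical log-normal mapping, and the filter weights are random rather than trained:

```python
import numpy as np

def symmetric_filter(rng, size=7):
    """Zero-bias filter with shared weights at opposite positions w.r.t. the
    center (Figure 2(f)): each weight is tied to its point reflection."""
    w = rng.standard_normal((size, size))
    return 0.5 * (w + w[::-1, ::-1])

def emulate_noise(rng, shape, kernel):
    """Per-passband noise sample: z1 sets a random amplitude (a hypothetical
    log-normal mapping stands in for the learned transform), z2 is a white
    Gaussian map, and the symmetric filter adds short-scale correlations."""
    amplitude = np.exp(0.5 * rng.standard_normal())  # stand-in for the z1 branch
    z2 = rng.standard_normal(shape)
    pad = kernel.shape[0] // 2
    padded = np.pad(z2, pad)
    out = np.zeros(shape)
    for i in range(kernel.shape[0]):   # 'same'-size correlation with the kernel
        for j in range(kernel.shape[1]):
            out += kernel[i, j] * padded[i:i + shape[0], j:j + shape[1]]
    return amplitude * out

rng = np.random.default_rng(1)
k = symmetric_filter(rng)
noise = emulate_noise(rng, (64, 64), k)
```

Because the filter is point-symmetric, correlation and convolution coincide, and the induced pixel correlations have no preferred direction.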
### III-E Discriminators
Since noise is treated independently among different passbands, we take
individual channels as Sub-Discriminators, each corresponding to a passband
(Figure 2(d)). For each channel, we first apply a high-pass filter (Figure
2(g)) to the input, so that noise is more emphasized relative to low-frequency
features. The information flow is then split into two branches, one of which
passes through a 2D Fourier Transform module. The output real and imaginary
parts of the Fourier Transform are concatenated. The ability to discriminate
high-frequency features is improved by combining the pixel space and the
Fourier space. Furthermore, the feature maps are enhanced by the Attention
Complementary Modules (ACMs) proposed by [26]. After concatenating the global
average, maximum and minimum of each feature map in each branch, the Sub-
Discriminator outputs a probability indicating how likely the input in the
passband is real.
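The front end of each Sub-Discriminator can be sketched in NumPy; a 3x3 Laplacian kernel is an assumed stand-in for the high-pass filter of Figure 2(g), whose exact coefficients are not reproduced here:

```python
import numpy as np

# Assumed stand-in for the high-pass filter of Figure 2(g).
HIGH_PASS = np.array([[ 0., -1.,  0.],
                      [-1.,  4., -1.],
                      [ 0., -1.,  0.]])

def sub_discriminator_front_end(img):
    """High-pass filter the input to emphasize noise over low-frequency
    features, then split a 2D Fourier Transform of the filtered image into
    real and imaginary channels alongside the pixel-space branch."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")  # edge padding keeps a flat image flat
    hp = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            hp += HIGH_PASS[i, j] * padded[i:i + h, j:j + w]
    spec = np.fft.fft2(hp)
    return np.stack([hp, spec.real, spec.imag])

channels = sub_discriminator_front_end(np.full((8, 8), 5.0))
```

A constant image carries no high-frequency content, so all three channels come out zero; only fluctuation patterns survive the front end.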
Figure 3: Results of image translation obtained from our model. The upper
panel shows reconstructed and original image pairs of two galaxies in $rz$
bands. Negative flux is due to background subtraction in image pre-processing.
The lower panel shows a few more examples in RGB format created by the method
introduced in [29]. Note that the noise amplitude for each reconstructed image
is randomly sampled from the corresponding Noise Emulator, thus it does not
necessarily match the noise amplitude of the original counterpart.
Figure 4: The $z$-band image pairs of a galaxy obtained from variant cases.
(a) One-way translation using our networks similar to pix2pix [18] (“Adapted
pix2pix”). (b) Two-way translation using our networks similar to CycleGAN [17]
(“Adapted CycleGAN”). (c) Adapted CycleGAN with the $pseudo$-identity loss
(PID). (d) Adapted CycleGAN with the identity loss (ID). (e) Adapted CycleGAN
with the identity loss (ID) and the Pixel Shuffle units (PS). (f) Two-way
translation using our model except the Autoencoders. (g) Two-way translation
using CycleGAN implemented in [17]. (h) Two-way translation using Augmented
CycleGAN implemented in [24]. More details can be found in the text.
## IV Experiments
### IV-A Data
The two domains $X$, $Y$ in our experiments are galaxy images from two surveys
— the Sloan Digital Sky Survey (SDSS) and the Canada France Hawaii Telescope
Legacy Survey (CFHT), respectively. We take the SDSS dataset retrieved by [30]
from SDSS Data Release 12 [31], consisting of 659,821 cutout images of size
64$\times$64 pixels over five photometric passbands ($ugriz$). Each image
contains a galaxy at the center. The CFHT cutout sample is created from the
CFHT wide field observations W1, W2, W3 and W4 [32], consisting of 130,093
cutout images with $ugriz$ passbands as well. Due to a different resolution,
we choose the CFHT cutout size to be 136$\times$136 pixels so that both SDSS
and CFHT cutout images span over the same angular scale. As will be used in
our experiments, we also create a sample of CFHT images of size 64$\times$64
pixels, matching the SDSS images, by regridding the original CFHT images with Bilinear
Interpolation.
We use the sky coordinates (RA and Dec) to cross-match images in the two
samples and identify 5,057 that contain the same galaxies and have aligned
peak positions (i.e., paired images). Of these images, we take 2,557
as the paired training sample and 2,500 as the test sample. The remaining
654,764 SDSS images and 125,036 CFHT images are used as the unpaired training
sample.
### IV-B Experiment Details
We consider a few variants of our proposed model as an ablation analysis and look
for substitutes for the identity loss (Eq. 4) and the Noise Emulators. To
regulate the intermediate output, one way is to use the following
$pseudo$-identity loss.
$$\mathcal{L}_{pseudo\text{-}id}(G^{X\rightarrow Y},G^{Y\rightarrow X})=\mathbb{E}_{x\sim X}[\|G^{X\rightarrow Y}(x)-x\|_{2}]+\mathbb{E}_{y\sim Y}[\|G^{Y\rightarrow X}(y)-y\|_{2}] \quad (6)$$
This is different from Eq. 4 in our work. It acts as a restriction on cross-
domain variation with unpaired data rather than a precise identity mapping
with paired data. Regarding noise reconstruction, an alternative would be to
make use of the Pixel Shuffle units for upsampling, the key element to
generating super-resolution images ([27, 28]). Moreover, CycleGAN [17] and
Augmented CycleGAN [24] are examples of the benchmark translation methods
using unpaired images. We are also interested in checking their applicability
in our study.
The variant cases we analyze are summarized below:
(a) Ad.pix2pix: One-way translation using our networks with the identity loss but not the Autoencoders or the Noise Emulators, similar to pix2pix [18] (denoted with “Adapted pix2pix”).
(b) Ad.CycleGAN: Two-way translation using our networks with the cycle-consistency loss but not the Autoencoders or the Noise Emulators, similar to CycleGAN [17] (denoted with “Adapted CycleGAN”).
(c) Ad.CycleGAN+PID: Same as Case (b), except adding the $pseudo$-identity loss (PID).
(d) Ad.CycleGAN+ID: Same as Case (b), except adding the identity loss (ID).
(e) Ad.CycleGAN+ID+PS: Same as Case (d), except using the Pixel Shuffle units (PS) in the Generators.
(f) Ours–Auto: Same as our model, except trained in one step without the Autoencoders.
(g) CycleGAN: Two-way translation with only the cycle-consistency loss, using the 6-residual-block CycleGAN architecture as presented in [17].
(h) AugCGAN: The semi-supervised two-way translation setting of Augmented CycleGAN as presented in [24], based on the CycleGAN architecture with the identity loss, having random seeds injected to the Generators to enable stochastic mappings.
For all of these cases, the training is completed in one step, in which the
Autoencoders are not used, and the images produced by the Generators are
inputted to the Discriminators. In Case (f), only the content losses are used
to update the Generators (i.e., the adversarial loss is applied to the Noise
Emulators rather than the Generators, similar to the training step one for our
model); while in the remaining cases without the Noise Emulators, the content
losses are multiplied by 1,000 and added to the adversarial loss to update the
Generators.
We run 60,000 update iterations for each of these cases and the two steps for
our model. In an iteration, we randomly select a mini-batch of 24 unpaired
SDSS images and 24 unpaired CFHT images, as well as 24 image pairs (i.e., 96
images in total), with random flipping and rotation by 90 deg steps. The
regridded 64$\times$64 CFHT images are used in Cases (g) and (h). The learning
rate starts with $10^{-4}$ and is reduced by a factor of 5 every 20,000
iterations. We adopt the default implementation of Adam Optimizer [33], and
perform gradient clipping at a ratio of 5 to the gradient norm. Using an
Intel(R) Core(TM) i9-7920X CPU and a Titan V GPU, roughly 30 hours is required
to complete the two training steps (120,000 iterations in total).
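The step-decay learning-rate schedule described above can be written directly:

```python
def learning_rate(iteration, base=1e-4, factor=5, step=20_000):
    """Learning rate used in training: start at 1e-4 and divide by 5
    every 20,000 iterations."""
    return base / factor ** (iteration // step)

# The rate in each of the three 20,000-iteration phases of a training step:
rates = [learning_rate(i) for i in (0, 20_000, 40_000)]
```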
### IV-C Results
The images reconstructed by our translation model show good quality in visual
comparison with the paired real images (Figure 3). We are able to regenerate a
well-behaved noise background that significantly improves the fidelity of
reconstructed images. Moreover, our paired training data is sparse yet
sufficient to convey the knowledge of the “ground truth” identity mapping and
calibrate the broad galaxy shapes with good precision. The major deficiency of
our method would be the loss of non-noise structures at small spatial scales
(e.g., small spots or faint spiral arms) for the reconstructed CFHT images. We
note that this is partly due to the compromise of using the Nearest Neighbor
Interpolation, and also a common challenge for image translation, as it is
difficult for the networks to “extrapolate” such detailed information that
does not exist on the low-resolution SDSS images.
In contrast to our model, there are limitations in the variant cases (Figure
4). In Case (a), while the one-way translation using paired images is capable
of conveying identity information, paired data alone cannot represent the full
sample distribution dominated by unpaired data. In Cases (b) and (g), although
the cycle consistency is guaranteed, the intermediate output from the half-
cycle has large variation due to the absence of paired data. The
$pseudo$-identity loss in Case (c) helps constrain the intermediate output
given the similarity between two domains, whereas a gap still exists from the
“ground truth” paired image counterparts. This gap cannot be corrected unless
the true identity information from paired data is exploited. In Cases
(a)–(e) and (g), as no randomness has been introduced to regenerate noise, there
exist regular fluctuations mimicking the behavior of noise. In Case (h), the
injected seeds might help maintain some level of stochasticity, but fail to
ensure the shape reconstruction or recover the correct spatial noise patterns.
Figure 5: Global shape reconstruction evaluated on spatial flux distributions.
Results with the $r$-band images are shown for our model and Cases (a)–(h)
(defined in the text). Both axes display the summed absolute pixel-wise fluxes
in the logarithmic scale. The black dotted and dashed lines indicate the flux
differences corresponding to the optimal global reconstruction with and
without noise reconstruction, respectively (i.e., $\sqrt{2}\times\sigma$ and
$\sigma$, where $\sigma$ is the median summed noise amplitude estimated using
the Noise Emulators).
Figure 6: Local fluctuation patterns displayed in the Fourier space. Each
panel shows the stacked Fourier amplitude map for the original $r$-band images
and those with our model and Cases (a)–(h) (defined in the text). The CFHT
images in Cases (g) and (h) have a modified image resolution due to
regridding. The centers correspond to the highest frequency.
### IV-D Evaluation
We define metrics to evaluate the reconstruction of global galaxy shapes and
local fluctuation patterns. The evaluation is made on 2,500 image pairs from
the test sample and the 5,000 corresponding reconstructed images. Since we
generate images with noise of random amplitude, metrics such as SSIM or PSNR
are incapable of evaluating image quality in our work. Quantitative evaluation
has to be made using metrics specific to real tasks in astrophysics.
#### IV-D1 Global Shape Reconstruction
To evaluate the global shape reconstruction of an image, we treat each of its
passbands as a 2D spatial flux distribution on the grid, and sum up the
absolute pixel-wise differences between the image and its original
counterpart. Figure 5 shows the $r$-band flux difference as a function of the
original flux for all the cases. The other passbands exhibit similar trends.
The groups of dots produced by all the methods with the identity loss
(including our model and Cases (a), (d), (e), (f), (h)) are on average lower
than the remaining groups, suggesting a better constraint. As a result of
noise, the flux difference cannot reach zero even using a perfect model to
recover global shapes. Compared to our model, Case (f) appears to have smaller
flux discrepancy for low-flux images, because the images reconstructed by this
method tend to have smaller noise-like fluctuations that reduce pixel-wise
differences. More importantly, most groups with the identity loss remain
nearly flat over different flux scales, whereas the other groups, due to a
lack of identity information, are strongly biased towards having larger flux
gaps with increasing flux.
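The per-passband metric can be sketched as follows; both quantities correspond to the axes of Figure 5:

```python
import numpy as np

def flux_difference(reconstructed, original):
    """Global-shape metric: summed absolute pixel-wise flux difference
    between a reconstructed passband image and its original counterpart."""
    return np.sum(np.abs(reconstructed - original))

def summed_flux(original):
    """Summed absolute flux of the original image, against which the
    difference is plotted."""
    return np.sum(np.abs(original))

# Tiny worked example (negative fluxes can occur after background subtraction):
orig = np.array([[1.0, -2.0], [0.5, 0.0]])
recon = np.array([[1.5, -2.0], [0.0, 0.0]])
```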
#### IV-D2 Local Fluctuation Patterns
The behavior of noise is characterized as local fluctuation patterns, which
can be captured by high-pass filters and displayed in the Fourier space. For
evaluation, we convolve the images with the filter shown in Figure 2(g) and
apply a 2D Fourier Transform, same as the operations we perform in the
Discriminators. Since the images from the same domain have similar Fourier
modes, we stack all the 2D Fourier amplitude maps for the original images and
the reconstructed images in each case, respectively. In Figure 6, we present
the Fourier amplitude maps for the $r$-band images. We do not make such
comparison for the CFHT images in Cases (g) and (h), since the regridded
images used in those cases may have distinct noise behavior from the non-
regridded images. The Fourier map reconstructed by our model resembles the
original one, while those obtained by other methods are distinctive. Notably,
there are dramatically high peaks located at the center for Cases (e), (g) and
(h) in which the Deconvolutional layers or the Pixel Shuffle units are used.
These methods overly concentrate on a few high-frequency Fourier modes and
thus produce the regular ripples shown in Figure 4. Although failed regeneration
of noise does not necessarily imply an inability to learn noise properties, we
suggest that only by adding random seeds can learned noise properties be
converted into realistic noise. Finally, Case (f) is comparable to our
model for SDSS images, but fails for CFHT images, implying that the Noise
Emulators cannot be properly trained due to the negative effect from the
Generators, as illustrated in Section III.
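The stacked amplitude maps of Figure 6 can be computed as follows in NumPy; high-pass filtering is assumed to have been applied beforehand, and note that in the unshifted FFT layout the array center holds the highest frequencies, matching the figure:

```python
import numpy as np

def stacked_fourier_amplitude(images):
    """Average the 2D Fourier amplitude maps over a sample of (high-pass
    filtered) images. In the unshifted layout, element [0, 0] is the DC mode
    and the array center corresponds to the highest frequencies."""
    return np.mean([np.abs(np.fft.fft2(im)) for im in images], axis=0)

rng = np.random.default_rng(2)
white = [rng.standard_normal((16, 16)) for _ in range(50)]
flat = [np.full((16, 16), 3.0)]
amp_white = stacked_fourier_amplitude(white)
amp_flat = stacked_fourier_amplitude(flat)
```

Stacking suppresses the per-image randomness of the Fourier phases and leaves the average spectral signature of the noise, which is what distinguishes the variant cases in Figure 6.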
## V Conclusion
We develop a semi-supervised noise-reconstructed GAN approach to conduct
image-to-image translation between two sky surveys. As demonstrated by our
experiments on galaxy images, we are able to learn and reconstruct
high-frequency noise using noise emulating modules. We emphasize that this is a
noise reconstruction approach rather than a de-noising method as those
developed by other work. We also show that a small amount of paired data can
greatly alleviate the difficulty in recovering galaxy shapes from the
intermediate output inside a translation cycle, suggesting the necessity of
paired data even when unpaired data is plentifully available.
This work is the first step towards investigating two-way noise-reconstructed
image translation methods in astrophysical studies, which would become a
promising image simulation approach complementary to traditional methods
(e.g., Markov Chain Monte Carlo). The noise reconstruction techniques might
also be applicable in areas other than astrophysics (e.g., sonar imaging
[34]). As high-fidelity data is a stringent demand in real astrophysical
applications, future work will need to focus on not only reconstructing high-
quality images but also recovering correct physical properties of the targeted
source. We caution that merely minimizing the MSE (Eq. 4 and Eq. 5) might be
insufficient to preserve salient information relevant to real tasks of various
kinds (e.g., [35]). Therefore, it would be interesting to extend this work in
the context of certain astrophysical applications.
## References
* [1] M. Mustafa, D. Bard, W. Bhimji, Z. Lukić, R. Al-Rfou, and J. M. Kratochvil, “CosmoGAN: creating high-fidelity weak lensing convergence maps using Generative Adversarial Networks,” _Computational Astrophysics and Cosmology_ , vol. 6, no. 1, p. 1, May 2019.
* [2] L. Fussell and B. Moews, “Forging new worlds: high-resolution synthetic galaxies with chained generative adversarial networks,” _Monthly Notices of the Royal Astronomical Society (MNRAS)_ , vol. 485, no. 3, pp. 3203–3214, Mar 2019.
* [3] N. Glaser, O. I. Wong, K. Schawinski, and C. Zhang, “RadioGAN – translations between different radio surveys with generative adversarial networks,” _Monthly Notices of the Royal Astronomical Society (MNRAS)_ , vol. 487, no. 3, pp. 4190–4207, Jun 2019.
* [4] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising,” _IEEE Transactions on Image Processing_ , vol. 26, no. 7, pp. 3142–3155, Jul 2017.
* [5] J. Chen, J. Chen, H. Chao, and M. Yang, “Image blind denoising with generative adversarial network based noise modeling,” in _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2018, pp. 3155–3164.
* [6] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in _Advances in Neural Information Processing Systems 27_ , 2014, pp. 2672–2680.
* [7] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” _arXiv preprint arXiv:1511.06434_ , 2015.
* [8] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. Metaxas, “StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks,” in _2017 IEEE International Conference on Computer Vision (ICCV)_ , Oct 2017, pp. 5908–5916.
* [9] M. Shirasaki, N. Yoshida, and S. Ikeda, “Denoising weak lensing mass maps with deep learning,” _Physical Review D_ , vol. 100, no. 4, Aug 2019.
* [10] K. Schawinski, C. Zhang, H. Zhang, L. Fowler, and G. K. Santhanam, “Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit,” _Monthly Notices of the Royal Astronomical Society: Letters_ , vol. 467, no. 1, pp. L110–L114, May 2017.
* [11] D. M. Reiman and B. E. Göhre, “Deblending galaxy superpositions with branched generative adversarial networks,” _Monthly Notices of the Royal Astronomical Society (MNRAS)_ , vol. 485, no. 2, pp. 2617–2627, Feb 2019.
* [12] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville, “Adversarially Learned Inference,” _arXiv preprint arXiv:1606.00704_ , 2016.
* [13] J. Donahue, P. Krähenbühl, and T. Darrell, “Adversarial Feature Learning,” _arXiv preprint arXiv:1605.09782_ , 2016.
* [14] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, “InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets,” in _Advances in Neural Information Processing Systems 29_ , 2016, pp. 2172–2180.
* [15] M. Mirza and S. Osindero, “Conditional Generative Adversarial Nets,” _arXiv preprint arXiv:1411.1784_ , 2014.
* [16] M. Liu and O. Tuzel, “Coupled Generative Adversarial Networks,” _arXiv preprint arXiv:1606.07536_ , 2016.
* [17] J. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” _arXiv preprint arXiv:1703.10593_ , 2017.
* [18] P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , July 2017, pp. 5967–5976.
* [19] Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim, and J. Choo, “StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation,” in _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , June 2018, pp. 8789–8797.
* [20] Z. Yi, H. Zhang, P. Tan, and M. Gong, “DualGAN: Unsupervised dual learning for image-to-image translation,” in _2017 IEEE International Conference on Computer Vision (ICCV)_ , Oct 2017, pp. 2868–2876.
* [21] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” _arXiv preprint arXiv:1701.07875_ , 2017.
* [22] T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim, “Learning to discover cross-domain relations with generative adversarial networks,” _arXiv preprint arXiv:1703.05192_ , 2017.
* [23] A. Royer, K. Bousmalis, S. Gouws, F. Bertsch, I. Mosseri, F. Cole, and K. Murphy, “XGAN: Unsupervised image-to-image translation for many-to-many mappings,” _arXiv preprint arXiv:1711.05139_ , 2018.
* [24] A. Almahairi, S. Rajeshwar, A. Sordoni, P. Bachman, and A. Courville, “Augmented CycleGAN: Learning many-to-many mappings from unpaired data,” in _Proceedings of the 35th International Conference on Machine Learning_ , ser. Proceedings of Machine Learning Research, vol. 80. Stockholmsmässan, Stockholm Sweden: PMLR, Jul 2018, pp. 195–204.
* [25] M. Amodio and S. Krishnaswamy, “TraVeLGAN: Image-to-image Translation by Transformation Vector Learning,” _arXiv preprint arXiv:1902.09631_ , 2019.
* [26] X. Hu, K. Yang, L. Fei, and K. Wang, “ACNet: Attention based network to exploit complementary features for RGBD semantic segmentation,” _arXiv preprint arXiv:1905.10089_ , 2019.
* [27] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2016, pp. 1874–1883.
* [28] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , July 2017, pp. 105–114.
* [29] R. Lupton, M. Blanton, G. Fekete, D. Hogg, W. O’Mullane, A. Szalay, and N. Wherry, “Preparing red‐green‐blue images from CCD data,” _Publications of the Astronomical Society of the Pacific_ , vol. 116, no. 816, pp. 133–137, Feb 2004.
* [30] J. Pasquet, E. Bertin, M. Treyer, S. Arnouts, and D. Fouchez, “Photometric redshifts from sdss images using a convolutional neural network,” _Astronomy and Astrophysics_ , vol. 621, p. A26, Dec 2018.
* [31] The SDSS collaboration, “The eleventh and twelfth data releases of the Sloan Digital Sky Survey: Final data from SDSS-III,” _The Astrophysical Journal Supplement Series_ , vol. 219, no. 1, p. 12, Jul 2015.
* [32] S. D. J. Gwyn, “The Canada-France-Hawaii Telescope Legacy Survey: Stacked Images and Catalogs,” _The Astronomical Journal_ , vol. 143, no. 2, p. 38, Feb 2012.
* [33] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _arXiv preprint arXiv:1412.6980_ , 2014.
* [34] R. F. Louise, L. Christer, and G. Andreas, “Deep learning based technique for enhanced sonar imaging,” in _Underwater Acoustics Conference & Exhibition series 2019, Crete-Greece_, 2019.
* [35] J. P. Cohen, M. Luck, and S. Honari, “Distribution matching losses can hallucinate features in medical image translation,” _arXiv preprint arXiv:1703.05192_ , 2018.
# The General Graph Matching Game:
Approximate Core
Vijay V. Vazirani (supported in part by NSF grant CCF-1815901), University of
California, Irvine
###### Abstract
The classic paper of Shapley and Shubik [SS71] characterized the core of the
assignment game using ideas from matching theory and LP-duality theory and
their highly non-trivial interplay. Whereas the core of the assignment game is
always non-empty, that of the general graph matching game can be empty.
This paper salvages the situation by giving an imputation in the
$2/3$-approximate core for the latter. This bound is best possible, since it
is the integrality gap of the natural underlying LP. Our profit allocation
method goes further: the multiplier on the profit of an agent lies in the
interval $[{2\over 3},1]$, depending on how severely constrained the agent is.
The core is a key solution concept in cooperative game theory. It contains all
ways of distributing the total worth of a game among agents in such a way that
no sub-coalition has incentive to secede from the grand coalition. Our
imputation, in the $2/3$-approximate core, implies that a sub-coalition will
gain at most a $3/2$ factor by seceding, and less in typical cases.
## 1 Introduction
The matching game forms one of the cornerstones of cooperative game theory.
This game can also be viewed as a matching market in which utilities of the
agents are stated in monetary terms and side payments are allowed, i.e., it is
a transferable utility (TU) market. A key solution concept in this theory is
that of the core, which captures all possible ways of distributing the total
worth of a game among individual agents in such a way that the grand coalition
remains intact, i.e., a sub-coalition will not be able to generate more
profits by itself and therefore has no incentive to secede from the grand
coalition. For an extensive coverage of these notions, see the book by Moulin
[Mou14].
When restricted to bipartite graphs, the matching game is called the
assignment game. The classic paper of Shapley and Shubik [SS71] characterized
profit-sharing methods that lie in the core of such games by using ideas from
matching theory and LP-duality theory and their highly non-trivial interplay;
in particular, the core is always non-empty.
On the other hand, for games defined over general graphs, the core is not
guaranteed to be non-empty; see Section 2 for an easy proof. The purpose of this
paper is to salvage the situation to the extent possible by giving a notion of
approximate core for such games. The approximation factor we achieve is $2/3$.
This is best possible, since it is the integrality gap of the underlying LP;
this follows easily from an old result of Balinski [Bal65] characterizing the
vertices of the polytope defined by the constraints of this LP. It turns out
that the constraints of the dual of this LP must be respected by any
profit-sharing mechanism.
An interesting feature of our profit-sharing mechanism is that it restricts
only the most severely constrained agents to a multiplier of $2/3$, and the
less severely constrained an agent is, the better is her multiplier, all the
way to 1; bipartite graphs belong to the last category. One way of stating the
improved factor is: if the underlying graph has no odd cycles of length less
than $2k+1$, then our factor is ${{2k}\over{2k+1}}$.
The following setting, taken from [EK01] and [BKP12], vividly captures the
underlying issues. Suppose a tennis club has a set $V$ of players who can play
in an upcoming doubles tournament. Let $G=(V,E)$ be a graph whose vertices are
the players and an edge $(i,j)$ represents the fact that players $i$ and $j$
are compatible doubles partners. Let $w$ be an edge-weight function for $G$,
where $w_{ij}$ represents the expected earnings if $i$ and $j$ do partner in
the tournament. Then the total worth of agents in $V$ is the weight of a
maximum weight matching in $G$. Assume that the club picks such a matching $M$
for the tournament. The question is how to distribute the total profit among
the agents — strong players, weak players and unmatched players — so that no
subset of players feel they will be better off seceding and forming their own
tennis club.
## 2 Definitions and Preliminary Facts
###### Definition 1.
The general graph matching game consists of an undirected graph $G=(V,E)$ and
an edge-weight function $w$. The vertices $i\in V$ are the agents and an edge
$(i,j)$ represents the fact that agents $i$ and $j$ are eligible for an
activity, for concreteness, let us say trading. If $(i,j)\in E$, $w_{ij}$
represents the profit generated if $i$ and $j$ trade. The worth of a coalition
$S\subseteq V$ is defined to be the maximum profit that can be generated by
trades made within $S$ and is denoted by $p(S)$. Formally, $p(S)$ is defined
to be the weight of a maximum weight matching in the graph $G$ restricted to
vertices in $S$ only. The characteristic function of the matching game is
defined to be $p:2^{V}\rightarrow\mathcal{R}_{+}$.
Among the possible coalitions, the most important one is of course $V$, the
grand coalition.
###### Definition 2.
An imputation gives a way of dividing the worth of the game, $p(V)$, among the
agents. Formally, it is a function $v:{V}\rightarrow\mathcal{R}_{+}$ such that
$\sum_{i\in V}{v(i)}=p(V)$. An imputation $v$ is said to be in the core of the
matching game if for any coalition $S\subseteq V$, there is no way of dividing
$p(S)$ among the agents in $S$ in such a way that all agents are at least as
well off and at least one agent is strictly better off than in the imputation
$v$.
We next describe the characterization of the core of the assignment game given
by Shapley and Shubik [SS71]. In this game, agents are of two types, buyers
$B$ and sellers $R$. Let $G=(B,R,E)$ be a bipartite graph over agent sets $B$
and $R$, and having edges $E$ whose weights are given by $w$. The worth of
this game, $w(B\cup R)$, is the weight of a maximum weight matching in $G$;
the linear program below gives the LP-relaxation of the problem of finding such
a matching. In this program, variable $x_{ij}$ indicates the extent to which
edge $(i,j)$ is picked in the solution. Matching theory tells us that this LP
always has an integral optimal solution; the latter is a maximum weight
matching in $G$.
$$\max\ \sum_{(i,j)\in E} w_{ij}\,x_{ij}$$
$$\text{subject to}\quad \sum_{(i,j)\in E} x_{ij}\leq 1\ \ \forall i\in B,\qquad \sum_{(i,j)\in E} x_{ij}\leq 1\ \ \forall j\in R,\qquad x_{ij}\geq 0\ \ \forall (i,j)\in E.$$
Taking $v_{i}$ and $u_{j}$ to be the dual variables for the first and second
sets of constraints above, we obtain the dual LP:
$$\min\ \sum_{i\in B} v_{i}+\sum_{j\in R} u_{j}$$
$$\text{subject to}\quad v_{i}+u_{j}\geq w_{ij}\ \ \forall (i,j)\in E,\qquad v_{i}\geq 0\ \ \forall i\in B,\qquad u_{j}\geq 0\ \ \forall j\in R.$$
The definition of an imputation, Definition 2, needs to be modified in an
obvious way to distinguish the profit shares of buyers and sellers; we will
denote an imputation by $(v,u)$.
###### Theorem 3.
(Shapley and Shubik [SS71]) The imputation $(v,u)$ is in the core of the
assignment game if and only if it is an optimal solution to the dual LP above.
For general graphs, the LP relaxation of the maximum weight matching problem
is an enhancement of that for bipartite graphs via odd set constraints, as
given below. The latter constraints are exponential in number.
$$\max\ \sum_{(i,j)\in E} w_{ij}\,x_{ij}$$
$$\text{subject to}\quad \sum_{(i,j)\in E} x_{ij}\leq 1\ \ \forall i\in V,\qquad \sum_{(i,j)\in S} x_{ij}\leq \frac{|S|-1}{2}\ \ \forall S\subseteq V,\ |S|\ \text{odd},\qquad x_{ij}\geq 0\ \ \forall (i,j)\in E.$$
The dual of this LP has, in addition to variables corresponding to vertices,
$v_{i}$, exponentially many more variables corresponding to odd sets, $z_{S}$,
as given below. As a result, the entire worth of the game does not reside on
vertices only; it also resides on odd sets.
$$\min\ \sum_{i\in V} v_{i}+\sum_{S\subseteq V,\ |S|\ \text{odd}} z_{S}$$
$$\text{subject to}\quad v_{i}+v_{j}+\sum_{S\ni i,j} z_{S}\geq w_{ij}\ \ \forall (i,j)\in E,\qquad v_{i}\geq 0\ \ \forall i\in V,\qquad z_{S}\geq 0\ \ \forall S\subseteq V,\ |S|\ \text{odd}.$$
There is no natural way of dividing $z_{S}$ among the vertices in $S$ to
restore core properties. The situation is more serious than that: it turns out
that in general, the core of a non-bipartite game may be empty.
A (folklore) proof of the last fact goes as follows: Consider the graph
$K_{3}$, i.e., a clique on three vertices, $i,j,k$, with a weight of 1 on each
edge. Any maximum matching in $K_{3}$ has only one edge, and therefore the
worth of this game is 1. Suppose there is an imputation $v$ which lies in the
core. Consider all three two-agent coalitions. Then, we must have:
$v(i)+v(j)\geq 1,\ \ \ \ v(j)+v(k)\geq 1\ \ \ \ \mbox{and}\ \ \ \
v(i)+v(k)\geq 1.$
This implies $v(i)+v(j)+v(k)\geq 3/2$ which exceeds the worth of the game,
giving a contradiction.
Observe however, that if we distribute the worth of this game as follows, we
get a $2/3$-approximate core allocation: $v(i)=v(j)=v(k)=1/3$, since each edge
is covered to the extent of $2/3$ of its weight. In Section 3 we show that
such an approximate core allocation can always be obtained for the general
graph matching game.
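The $K_{3}$ calculation above is easy to verify mechanically. The following pure-Python sketch is our own illustrative check (the brute-force `worth` helper and the graph encoding are not from the paper): it confirms that the uniform allocation $1/3$ gives every coalition at least $2/3$ of its own worth.

```python
from itertools import combinations

def worth(S, edges):
    """p(S): weight of a maximum weight matching inside S (brute force)."""
    E = [e for e in edges if e[0] in S and e[1] in S]
    best = 0.0
    for r in range(len(E) + 1):
        for M in combinations(E, r):
            ends = [x for (u, v, w) in M for x in (u, v)]
            if len(ends) == len(set(ends)):        # M is a matching
                best = max(best, sum(w for (_, _, w) in M))
    return best

# K3 with unit edge weights and the allocation v(i) = v(j) = v(k) = 1/3.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
V = (0, 1, 2)
alloc = {i: 1.0 / 3 for i in V}

assert abs(sum(alloc.values()) - worth(set(V), edges)) < 1e-9  # total = p(V) = 1
# Worst ratio of a coalition's allocation to its own worth:
alpha = min(sum(alloc[i] for i in S) / worth(set(S), edges)
            for r in (2, 3) for S in combinations(V, r))
print(alpha)   # 0.666..., exactly the 2/3-approximate guarantee
```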
###### Remark 4.
In the assignment game, monetary transfers are required only between a buyer-
seller pair who are involved in a trade. In contrast, observe that in the
approximate core allocation given above for $K_{3}$, transfers are even made
from agents who make trade to agents who don’t, thereby exploiting the TU
market’s capabilities more completely.
###### Definition 5.
Let $p:2^{V}\rightarrow\mathcal{R}_{+}$ be the characteristic function of a
game and let $1\geq\alpha>0$. An imputation $v:V\rightarrow\mathcal{R}_{+}$ is
said to be in the $\alpha$-approximate core of the game if:
1. The total profit allocated by $v$ is at most the worth of the game, i.e., $\sum_{i\in V}{v(i)}\leq p(V).$
2. The total profit accrued by agents in a sub-coalition $S\subseteq V$ is at least an $\alpha$ fraction of the profit which $S$ can generate by itself, i.e., $\forall S\subseteq V:\ \sum_{i\in S}{v_{i}}\geq\alpha\cdot p(S).$
If imputation $v$ is in the $\alpha$-approximate core of a game, then the
ratio of the total profit of any sub-coalition on seceding from the grand
coalition to its profit while in the grand coalition is bounded by a factor of
at most ${1\over{\alpha}}$.
In Section 3 we will need the following notion.
###### Definition 6.
Consider a linear programming relaxation for a maximization problem. For an
instance $I$ of the problem, let $\mbox{\rm OPT}(I)$ denote the weight of an
optimal solution to $I$ and let $\mbox{\rm OPT}_{f}(I)$ denote the weight of
an optimal solution to the LP-relaxation of $I$. Then, the integrality gap of
this LP-relaxation is defined to be:
$\inf_{I}{{\mbox{\rm OPT}(I)}\over{\mbox{\rm OPT}_{f}(I)}}.$
## 3 A $2/3$-Approximate Core for the Matching Game
We will work with the following LP-relaxation of the maximum weight matching
problem. This relaxation always has an integral optimal solution when $G$ is
bipartite, but not for general graphs; in the latter case, its optimal
solution is a maximum weight fractional matching in $G$.
$$\max\ \sum_{(i,j)\in E} w_{ij}\,x_{ij}$$
$$\text{subject to}\quad \sum_{(i,j)\in E} x_{ij}\leq 1\ \ \forall i\in V,\qquad x_{ij}\geq 0\ \ \forall (i,j)\in E.$$
Taking $v_{i}$ to be the dual variables for the degree constraints, we obtain
the dual LP below. Any feasible solution to this LP is called a cover of $G$,
since for each edge $(i,j)$, $v_{i}$ and $v_{j}$ cover edge $(i,j)$ in the
sense that $v_{i}+v_{j}\geq w_{ij}$. An optimal solution to this LP is a
minimum cover. We will say that $v_{i}$ is the profit of vertex $i$.
$$\min\ \sum_{i\in V} v_{i}$$
$$\text{subject to}\quad v_{i}+v_{j}\geq w_{ij}\ \ \forall (i,j)\in E,\qquad v_{i}\geq 0\ \ \forall i\in V.$$
By the LP Duality Theorem, the weight of a maximum weight fractional matching
equals the total profit of a minimum cover. If for a graph $G$ the
LP-relaxation has an integral optimal solution, then it is easy to see that an
optimal dual solution gives a way of allocating the total worth that lies in
the core. Deng et al. [DIN97] prove that the core of this game is non-empty if
and only if the LP-relaxation has an integral optimal solution.
We will say that a solution $x$ to the LP-relaxation is half-integral if for
each edge $(i,j)$, $x_{ij}$ is $0$, $1/2$ or $1$. Balinski [Bal65] showed that
the vertices of the polytope defined by its constraints are half-integral; see
Theorem 14 below. As a consequence, any optimal solution to the LP-relaxation
is half-integral. Biró et al. [BKP12] gave an efficient algorithm for finding
an optimal half-integral matching by using an idea of Nemhauser and Trotter
[NT75] of doubling edges, hence obtaining an efficient algorithm for
determining whether the core of the game is non-empty.
Our mechanism starts by using the doubling idea of [NT75]. Transform $G=(V,E)$
with edge-weights $w$ to graph $G^{\prime}=(V^{\prime},E^{\prime})$ and edge
weights $w^{\prime}$ as follows. Corresponding to each $i\in V$, $V^{\prime}$
has vertices $i^{\prime}$ and $i^{\prime\prime}$, and corresponding to each
edge $(i,j)\in E$, $E^{\prime}$ has edges $(i^{\prime},j^{\prime\prime})$ and
$(i^{\prime\prime},j^{\prime})$ each having a weight of $w_{ij}/2$.
Since each cycle of length $k$ in $G$ is transformed to a cycle of length $2k$
in $G^{\prime}$, the latter graph is bipartite. A maximum weight matching and
a minimum cover for $G^{\prime}$ can be computed in polynomial time [LP86],
say $x^{\prime}$ and $v^{\prime}$, respectively. Next, let
$x_{ij}\ =\ {1\over
2}\cdot(x_{i^{\prime},j^{\prime\prime}}+x_{i^{\prime\prime},j^{\prime}})\ \ \
\ \mbox{and}\ \ \ \ v_{i}=(v_{i^{\prime}}+v_{i^{\prime\prime}}).$
Clearly, $x$ is an optimal half-integral matching and $v$ is a cover in $G$.
It is easy to see that the weight of $x$ equals the value of $v$, thereby
implying that $v$ is an optimal cover.
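The doubling construction is simple enough to sketch in a few lines of pure Python. Here we apply it to $K_{3}$ with unit weights, a toy instance of our own choosing; a brute-force matching routine stands in for the polynomial-time bipartite algorithm of [LP86].

```python
from itertools import combinations

def max_weight_matching(edges):
    """Brute-force maximum weight matching; edges are (u, v, w) triples."""
    best, best_M = 0.0, ()
    for r in range(len(edges) + 1):
        for M in combinations(edges, r):
            ends = [x for (u, v, w) in M for x in (u, v)]
            wt = sum(w for (_, _, w) in M)
            if len(ends) == len(set(ends)) and wt > best:
                best, best_M = wt, M
    return best, best_M

# G = K3 with unit weights; double it into the bipartite graph G'.
G_edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
doubled = []
for (i, j, w) in G_edges:
    doubled.append((("p", i), ("pp", j), w / 2))   # edge (i', j'')
    doubled.append((("pp", i), ("p", j), w / 2))   # edge (i'', j')

wt, M = max_weight_matching(doubled)   # G' is a 6-cycle; optimum weight 3/2

# Fold back: x_ij = (x_{i'j''} + x_{i''j'}) / 2, so each picked copy adds 1/2.
x = {(i, j): 0.0 for (i, j, w) in G_edges}
for ((_, u), (_, v), w) in M:
    x[(min(u, v), max(u, v))] += 0.5

print(wt, x)   # weight 1.5, with x = 1/2 on every edge: a half-integral optimum
```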
Edges that are set to half in $x$ form connected components which are either
paths or cycles. For any such path, consider the two matchings obtained by
picking alternate edges. The half-integral solution for this path is a convex
combination of these two integral matchings. Therefore both these matchings
must be of equal weight, since otherwise we can obtain a heavier matching.
Pick any of them. Similarly, if a cycle is of even length, pick alternate
edges and match them. This transforms $x$ to a maximum weight half-integral
matching in which all edges that are set to half form disjoint odd cycles.
Henceforth we will assume that $x$ satisfies this property.
Let $C$ be a half-integral odd cycle in $x$ of length $2k+1$, with consecutive
vertices $i_{1},\ldots i_{2k+1}$. Let
$w_{C}=w_{i_{1},i_{2}}+w_{i_{2},i_{3}}+\ldots+w_{i_{2k+1},i_{1}}$ and
$v_{C}=v_{i_{1}}+\ldots v_{i_{2k+1}}$. On removing any one vertex, say
$i_{j}$, with its two edges from $C$, we are left with a path of length
$2k-1$. Let $M_{j}$ be the matching consisting of the $k$ alternate edges of
this path and let $w(M_{j})$ be the weight of this matching.
###### Lemma 7.
Odd cycle $C$ satisfies:
1. $w_{C}=2\cdot v_{C}$;
2. $C$ has a unique cover: $v_{i_{j}}=v_{C}-w(M_{j})$, for $1\leq j\leq 2k+1$.
###### Proof.
1). We will use the fact that $x$ and $v$ are optimal solutions to the primal
and dual LPs above, respectively. By the primal complementary slackness condition, for $1\leq
j\leq 2k+1$, $w_{i_{j},i_{j+1}}=v_{i_{j}}+v_{i_{j+1}}$, where addition in the
subindices is done modulo $2k+1$; this follows from the fact that
$x_{i_{j},i_{j+1}}>0$. Adding over all vertices of $C$ we get $w_{C}=2\cdot
v_{C}$.
2). By the equalities established in the first part, we get that for $1\leq
j\leq 2k+1$, $v_{C}=v_{i_{j}}+w(M_{j})$. Rearranging terms gives the lemma. ∎
Let $M^{\prime}$ be heaviest matching among $M_{j}$, for $1\leq j\leq 2k+1$.
###### Lemma 8.
$w(M^{\prime})\geq{{2k}\over{2k+1}}\cdot v_{C}$
###### Proof.
Adding the equality established in the second part of Lemma 7 for all $2k+1$
values of $j$ we get:
$\sum_{j=1}^{2k+1}{w(M_{j})}\ =\ (2k)\cdot v_{C}$
Since $M^{\prime}$ is the heaviest of the $2k+1$ matchings in the summation,
the lemma follows. ∎
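Both parts of Lemma 7 and the bound of Lemma 8 can be checked numerically on a toy odd cycle; the 5-cycle and its edge weights below are our own illustrative choices, not from the paper.

```python
# A weighted 5-cycle (k = 2): edge i joins vertices i and i+1 (mod 5).
w = [3.0, 4.0, 5.0, 4.0, 3.0]
n = len(w)                          # n = 2k + 1
k = (n - 1) // 2

# The unique cover on the odd cycle: v_i + v_{i+1} = w_i by complementary
# slackness, which pins down v_0 via the alternating sum of the weights.
v = [sum((-1) ** i * w[i] for i in range(n)) / 2]
for i in range(n - 1):
    v.append(w[i] - v[i])
vC = sum(v)
assert abs(2 * vC - sum(w)) < 1e-9          # Lemma 7, part 1: w_C = 2 v_C

def wM(j):
    """Weight of M_j: drop vertex j, take the k alternate edges of the path."""
    return sum(w[(j + 1 + t) % n] for t in range(0, n - 2, 2))

for j in range(n):                          # Lemma 7, part 2
    assert abs(wM(j) - (vC - v[j])) < 1e-9

best = max(wM(j) for j in range(n))         # w(M')
assert best >= (2 * k) / (2 * k + 1) * vC - 1e-9    # Lemma 8: 8 >= (4/5) * 9.5
print(best)   # 8.0
```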
Modify the half-integral matching $x$ to obtain an integral matching $T$ in
$G$ as follows. First pick all edges $(i,j)$ such that $x_{ij}=1$ in $T$.
Next, for each odd cycle $C$, find the heaviest matching $M^{\prime}$ as
described above and pick all its edges.
###### Definition 9.
Let $1>\alpha>0$. A function $c:V\rightarrow\mathcal{R}_{+}$ is said to be an
${\alpha}$-approximate cover for $G$ if
$\forall(i,j)\in E:\ \ c_{i}+c_{j}\geq\alpha\cdot w_{ij}$
Define function $f:V\rightarrow[{2\over 3},1]$ as follows: $\forall i\in V$:
$f(i)=\begin{cases*}{{2k}\over{2k+1}}&if $i$ is in a half-integral cycle of
length $2k+1$.\\\ 1&if $i$ is not in a half-integral cycle.\end{cases*}$
Next, modify cover $v$ to obtain an approximate cover $c$ as follows: $\forall
i\in V:\ c_{i}=f(i)\cdot v_{i}$.
###### Lemma 10.
$c$ is a ${2\over 3}$-approximate cover for $G$.
###### Proof.
Consider edge $(i,j)\in E$. Then
$c_{i}+c_{j}\ =\ f(i)\cdot v_{i}+f(j)\ \cdot v_{j}\ \geq{2\over
3}\cdot(v_{i}+v_{j})\ \geq\ {2\over 3}\cdot w_{ij},$
where the first inequality follows from the fact that $\forall i\in V,\
f(i)\geq{2\over 3}$ and the second follows from the fact that $v$ is a cover
for $G$. ∎
The mechanism for obtaining imputation $c$ is summarized as Mechanism 11.
###### Mechanism 11.
(${2/3}$-Approximate Core Imputation)
1. Compute $x$ and $v$, optimal solutions to the primal and dual LPs above, where $x$ is half-integral.
2. Modify $x$ so that all half-integral edges form odd cycles.
3. $\forall i\in V$, compute: $f(i)=\begin{cases*}{{2k}\over{2k+1}}&if $i$ is in a half-integral cycle of length $2k+1$.\\\ 1&otherwise.\end{cases*}$
4. $\forall i\in V$: $c_{i}\leftarrow f(i)\cdot v_{i}$. Output $c$.
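A minimal sketch of the mechanism's output on a toy instance of our own choosing: a weighted 5-cycle ($k=2$, so $f(i)=4/5$ everywhere) whose unique optimal cover is hard-coded below, with a brute-force check of both conditions of Definition 5.

```python
from itertools import combinations

# A weighted 5-cycle: x is half-integral on the whole cycle, so f(i) = 4/5.
w = {(0, 1): 3.0, (1, 2): 4.0, (2, 3): 5.0, (3, 4): 4.0, (4, 0): 3.0}
v = {0: 1.5, 1: 1.5, 2: 2.5, 3: 2.5, 4: 1.5}   # the unique optimal cover
c = {i: (4 / 5) * v[i] for i in v}             # steps 3 and 4 of the mechanism

def p(S):
    """Worth of coalition S: brute-force maximum weight matching inside S."""
    E = [e for e in w if e[0] in S and e[1] in S]
    best = 0.0
    for r in range(len(E) + 1):
        for M in combinations(E, r):
            ends = [x for e in M for x in e]
            if len(ends) == len(set(ends)):
                best = max(best, sum(w[e] for e in M))
    return best

assert sum(c.values()) <= p(set(v)) + 1e-9          # condition 1 of Def. 5
for r in range(1, 6):                               # condition 2, alpha = 2/3
    for S in combinations(v, r):
        assert sum(c[i] for i in S) >= (2 / 3) * p(set(S)) - 1e-9
print(sum(c.values()), p(set(v)))   # roughly 7.6 allocated out of worth 8.0
```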
###### Theorem 12.
The imputation $c$ is in the ${2\over 3}$-approximate core of the general
graph matching game.
###### Proof.
We need to show that $c$ satisfies the two conditions given in Definition 5,
for $\alpha={2\over 3}$.
1). By Lemma 8, the weight of the matched edges picked in $T$ from a
half-integral odd cycle $C$ of length $2k+1$ is $\geq {{2k}\over{2k+1}}\cdot v_{C}\ =\
\sum_{i\in C}{c(i)}$. Next remove all half-integral odd cycles from $G$ to
obtain $G^{\prime}$. Let $x^{\prime}$ and $v^{\prime}$ be the projections of
$x$ and $v$ to $G^{\prime}$.
By the first part of Lemma 7, the total decrease in weight in going from $x$
to $x^{\prime}$ equals the total decrease in value in going from $v$ to
$v^{\prime}$. Therefore, the weight of $x^{\prime}$ equals the total value of
$v^{\prime}$. Finally, observe that in $G^{\prime}$, $T$ picks an edge $(i,j)$
if and only if $x^{\prime}_{ij}=1$ and $\forall i\in G^{\prime},\
c_{i}=v^{\prime}_{i}$.
Adding the weight of the matching and the value of the imputation $c$ over
$G^{\prime}$ and all half-integral odd cycles we get $w(T)\geq\sum_{i\in
V}{c_{i}}$.
2). Consider a coalition $S\subseteq V$. Then $p(S)$ is the weight of a
maximum weight matching in $G$ restricted to $S$. Assume this matching is
$(i_{1},j_{1}),\ldots(i_{k},j_{k})$, where $i_{1},\ldots i_{k}$ and
$j_{1},\ldots j_{k}\in S$. Then $p(S)=(w_{i_{1}j_{1}}+\ldots+w_{i_{k}j_{k}})$.
By Lemma 10,
$c_{i_{l}}+c_{j_{l}}\ \geq\ {2\over 3}\cdot w_{i_{l},j_{l}},\ \ \mbox{for}\
1\leq l\leq k.$
Adding all $k$ terms we get:
$\sum_{i\in S}{c_{i}}\ \geq\ {2\over 3}\cdot p(S).$
∎
Observe that for the purpose of Lemma 10, we could have defined $f$ simply as
$\forall i\in V,\ f(i)={2\over 3}$. However in general, this would have left a
good fraction of the worth of the game unallocated. The definition of $f$
given above improves the allocation for agents who are in large odd cycles and
those who are not in odd cycles with respect to matching $x$. As a result, the
gain of a typical sub-coalition on seceding will be less than a factor of
${3\over 2}$, giving it less incentive to secede. One way of formally stating
an improved factor is given in Proposition 13; its proof is obvious from that
of Theorem 12.
###### Proposition 13.
Assume that the underlying graph $G$ has no odd cycles of length less than
$2k+1$. Then imputation $c$ is in the ${{2k}\over{2k+1}}$-approximate core of
the matching game for $G$.
An easy corollary of Balinski’s result, Theorem 14, is that the integrality
gap of the LP-relaxation above is precisely ${2\over 3}$. For completeness, we have
provided a proof in Corollary 15. As a consequence of this fact, improving the
approximation factor of an imputation for the matching game is not possible.
###### Theorem 14.
(Balinski [Bal65]) The vertices of the polytope defined by the constraints of
the LP-relaxation above are half-integral.
###### Corollary 15.
The integrality gap of the LP-relaxation above is ${2\over 3}$.
###### Proof.
From the proof of the first part of Theorem 12 we get:
$w(T)\ \geq\ \sum_{i\in V}{c_{i}}\ \geq\ {2\over 3}\cdot\sum_{i\in V}{v_{i}}\
=\ {2\over 3}\cdot w(x).$
Therefore for any instance $I=(G,w)$,
${{\mbox{\rm OPT}(I)}\over{\mbox{\rm OPT}_{f}(I)}}\geq{2\over 3}.$
This places a lower bound of ${2\over 3}$ on the integrality gap of the
LP-relaxation.
To place an upper bound of ${2\over 3}$, consider the following infinite
family of graphs. For each $n$, the graph $G_{n}$ has $6n$ vertices
$i_{l},j_{l},k_{l}$, for $1\leq l\leq 2n$, and $6n$ edges
$(i_{l},j_{l}),(j_{l},k_{l}),(i_{l},k_{l})$, for $1\leq l\leq 2n$ all of
weight 1. Clearly, $\mbox{\rm OPT}(G_{n})=2n$ and $\mbox{\rm
OPT}_{f}(G_{n})=3n$. In case a connected graph is desired, add a clique on the
$2n$ vertices $i_{l}$, for $1\leq l\leq 2n$, with the weight of each edge
being $\epsilon$, where $\epsilon$ tends to zero. ∎
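Both bounds in this proof can be checked on a single unit-weight triangle, the building block of the family $G_{n}$, searching only over half-integral points as justified by Theorem 14. The enumeration helper below is a pure-Python sketch of our own.

```python
from itertools import product

# One unit-weight triangle; x ranges over edges (0,1), (1,2), (0,2).
def lp_opt(values):
    """Best total x subject to degree constraints, x restricted to `values`."""
    best = 0.0
    for x in product(values, repeat=3):
        degrees = (x[0] + x[2], x[0] + x[1], x[1] + x[2])
        if all(d <= 1 + 1e-12 for d in degrees):
            best = max(best, sum(x))
    return best

opt = lp_opt((0, 1))          # integral optimum: one matched edge
opt_f = lp_opt((0, 0.5, 1))   # fractional optimum: x = 1/2 on every edge
print(opt, opt_f)   # 1 1.5, so the ratio is 2/3 on this instance
```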
## 4 Discussion
In the $2/3$-approximate core imputation, observe that in an odd cycle of
length $2k+1$, $k$ pairs of agents are matched and one agent is left
unmatched. As a consequence, monetary transfers may be needed from all $2k$
matched agents to the unmatched agent. What happens if monetary transfers to
an agent are allowed from only a limited number of other agents? If so, what
is the best approximation factor possible? See also Remark 4.
For the assignment game, Shapley and Shubik are able to characterize
“antipodal” points in the core, i.e., imputations which are maximally distant.
An analogous understanding of the ${2\over 3}$-approximate core of the general
graph matching game will be desirable.
## 5 Acknowledgements
I wish to thank Federico Echenique and Thorben Trobst for valuable
discussions.
## References
* [Bal65] Michel Louis Balinski. Integer programming: methods, uses, computations. Management science, 12(3):253–313, 1965.
* [BKP12] Péter Biró, Walter Kern, and Daniël Paulusma. Computing solutions for matching games. International journal of game theory, 41(1):75–90, 2012.
* [DIN97] Xiaotie Deng, Toshihide Ibaraki, and Hiroshi Nagamochi. Algorithms and complexity in combinatorial optimization games. In Proc. 8th ACM Symp. on Discrete Algorithms, 1997.
* [EK01] Kimmo Eriksson and Johan Karlander. Stable outcomes of the roommate game with transferable utility. International Journal of Game Theory, 29(4):555–569, 2001.
* [LP86] L. Lovász and M.D. Plummer. Matching Theory. North-Holland, Amsterdam–New York, 1986.
* [Mou14] Hervé Moulin. Cooperative microeconomics: a game-theoretic introduction, volume 313. Princeton University Press, 2014.
* [NT75] George L Nemhauser and Leslie Earl Trotter. Vertex packings: structural properties and algorithms. Mathematical Programming, 8(1):232–248, 1975.
* [SS71] Lloyd S Shapley and Martin Shubik. The assignment game I: The core. International Journal of game theory, 1(1):111–130, 1971.
# Up, down, two-sided Lorenz attractor, collisions, merging and switching
Diego Barros (partially supported by CAPES), Christian Bonatti, and Maria José
Pacifico (partially supported by FAPERJ, CNPq, and the Brazilian-French
Network in Mathematics)
###### Abstract
We present a slightly modified version of the well-known "geometric Lorenz
attractor". It consists of a $C^{1}$ open set ${\mathcal{O}}$ of vector fields
in ${\mathbb{R}}^{3}$ having an attracting region ${\mathcal{U}}$ containing:
* a unique singular saddle point $\sigma$;
* a unique attractor $\Lambda$ containing the singular point;
* a maximal invariant set with at most $2$ chain recurrence classes, which are $\Lambda$ and (at most) one hyperbolic horseshoe.
The horseshoe and the singular attractor have a collision along the union of
$2$ co-dimension $1$ submanifolds which divide ${\mathcal{O}}$ in $3$ regions.
By crossing this collision locus, the attractor and the horseshoe may merge in
a two-sided Lorenz attractor, or they may exchange their nature: the Lorenz
attractor expel the singular point $\sigma$ and becomes a horseshoe and the
horseshoe absorbs $\sigma$ becoming a Lorenz attractor.
## 1 Introduction
Lorenz presented in [13] an example of a parameterized quadratic polynomial
system of differential equations as a very simplified model for the convection
of a thermal fluid, motivated by an attempt to understand long-term weather
forecasting. Numerical simulations for an open neighborhood of the chosen
parameters suggested that almost all points in phase space tend to a strange
attractor, called the Lorenz attractor. However, Lorenz's equations proved to
be very resistant to rigorous mathematical analysis, from both the conceptual
(the existence of an equilibrium point accumulated by regular orbits prevents
the attractor from being hyperbolic) and the numerical (solutions slow down as
they pass near the equilibrium, which means unbounded return times and, thus,
unbounded integration errors) points of view. A very successful approach was
proposed by Guckenheimer and Williams [8]. They constructed a geometric Lorenz
attractor that reproduces the behavior observed by Lorenz.
As an abstract object, the geometric Lorenz attractor is the inverse limit of
a semiflow on a branched $2$-manifold (with boundary). The flow has a single
saddle-like rest point on the boundary of the surface, and orbits leaving a
neighborhood of this saddle follow either of two arms which return to (and are
glued together along) an interval of branch points transverse to the stable
manifold of the saddle. From the geometrical point of view, geometric Lorenz
attractors are flows on $3$-dimensional space that contain a rest point
accumulated by regular trajectories. Moreover, for almost every pair of nearby
initial conditions, the corresponding solutions move apart from each other
exponentially fast as they converge to the attractor; that is, the attractor
is sensitive to initial conditions. This unpredictability is a characteristic
of chaos.
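The sensitivity just described can be seen in a few lines of code. The classical parameter values $(s,r,b)=(10,28,8/3)$, the starting points, and the fixed-step RK4 integrator below are standard illustrative choices, not taken from this paper:

```python
# Sensitivity to initial conditions in the classical Lorenz system
# dx/dt = s(y - x), dy/dt = x(r - z) - y, dz/dt = xy - bz,
# integrated with a fixed-step RK4 scheme (illustrative sketch only).

def lorenz(v, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = v
    return (s * (y - x), x * (r - z) - y, x * y - b * z)

def rk4_step(f, v, h):
    k1 = f(v)
    k2 = f(tuple(v[i] + 0.5 * h * k1[i] for i in range(3)))
    k3 = f(tuple(v[i] + 0.5 * h * k2[i] for i in range(3)))
    k4 = f(tuple(v[i] + h * k3[i] for i in range(3)))
    return tuple(v[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

def separation(v0, w0, steps=2000, h=0.01):
    # integrate two orbits side by side up to time steps * h = 20
    v, w = v0, w0
    for _ in range(steps):
        v, w = rk4_step(lorenz, v, h), rk4_step(lorenz, w, h)
    return sum((v[i] - w[i]) ** 2 for i in range(3)) ** 0.5

d0 = 1e-8
d = separation((1.0, 1.0, 1.0), (1.0 + d0, 1.0, 1.0))
# d grows by many orders of magnitude relative to d0
```

Two orbits starting $10^{-8}$ apart end up macroscopically separated, while both remain on the bounded attractor.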
In the 90's a breakthrough was obtained by Morales, Pacifico and Pujals [18],
following the very original ideas developed by Mañé in the proof of the
$C^{1}$ stability conjecture: they provided a characterization of robustly
transitive attractors for 3-dimensional flows, of which the Lorenz attractor
is the most significant example: they are partially hyperbolic invariant sets
with volume-expanding central direction. Moreover, robustly transitive
attractors without equilibria were proved to be hyperbolic. Thus these results
extend the classical hyperbolic theory to flows with isolated equilibria, and
this characterization placed this class of attractors within the realm of a
weak form of hyperbolicity.
After this seminal work, significant advances in this theory, from the
topological as well as the ergodic point of view, were obtained through the
work of many authors; see [3] and references therein. Despite all this
progress, we are still far from a topological classification of singular
hyperbolic attracting sets in dimension $3$, and there is also a huge gap in
the understanding of unfoldings of parameterized families of singular
hyperbolic attractors, in any dimension.
The classical Lorenz attractor has a unique hyperbolic saddle singularity
$\sigma$, whose strong stable manifold $W^{ss}(\sigma)$ cuts the stable
manifold $W^{cs}(\sigma)$ into two _stable separatrices_, called the upper and
lower separatrices. One of these separatrices is _disjoint_ from the
attractor. These properties are shared by the geometric Lorenz model.
This paper was motivated by a question of A. da Luz to the second author. She
wanted to know if a singular hyperbolic attractor in dimension $3$, containing
a unique Lorenz-like singular saddle $\sigma$, could robustly intersect both
stable separatrices of $W^{cs}(\sigma)\setminus W^{ss}(\sigma)$. There are
many such examples which can be built easily, but one of the simplest examples
displays a very intriguing behavior under perturbation, and that is what we
want to present in this paper. This work aims to build an open set of vector
fields whose attractors intersect the upper, the lower, or both stable
separatrices.
We describe a $C^{1}$-open set ${\mathcal{O}}_{1}$ of vector fields on a
closed $3$-manifold having a common attracting region $U$ which contains a
unique saddle singularity $\sigma$ of Lorenz type, see Definition 2.3. As in
the classical geometric Lorenz attractor, each flow $X\in{\mathcal{O}}_{1}$
has a global cross section $\Sigma$, which is a topological annulus. The
intersection of the stable manifold of $\sigma$ with the cross section splits
$\Sigma$ into two connected components, $\Sigma^{1}$ and $\Sigma^{2}$, and the
intersections of the upper and lower stable components of $W^{cs}(\sigma)$
with $\Sigma$ contain segments $\gamma^{s}_{+}$ and $\gamma^{s}_{-}$,
respectively, transverse to the boundary $\partial\Sigma$ and connecting the
two boundary components of the annulus $\Sigma$. We denote the upper (lower)
stable component of $W^{cs}(\sigma)$ by $W^{s}_{+}(\sigma)$
($W^{s}_{-}(\sigma)$), respectively. The first results give the topological
nature of the classes of flows in $\mathcal{O}_{1}$ whose attractors intersect
just one or both components of the stable separatrices of $\sigma$, proving
the existence of three disjoint non-empty sets in $\mathcal{O}_{1}$:
$\mathcal{L}^{-}$, whose attractors intersect just the lower component of
$W^{cs}(\sigma)\setminus W^{ss}(\sigma)$; $\mathcal{L}^{+}$, whose attractors
intersect just the upper component of $W^{cs}(\sigma)\setminus
W^{ss}(\sigma)$; and $\mathcal{L}^{-,+}$, whose attractors intersect both
components of $W^{cs}(\sigma)\setminus W^{ss}(\sigma)$. We call the attractors
in $\mathcal{L}^{-}$ down-Lorenz, the attractors in $\mathcal{L}^{+}$
up-Lorenz, and the attractors in $\mathcal{L}^{-,+}$ two-sided Lorenz.
###### Theorem A.
Any vector field $X\in{\mathcal{L}}^{+}$ admits exactly $2$ chain recurrence
classes: one is an up-Lorenz attractor, and the other is a hyperbolic basic
set, topologically equivalent to the suspension of a fake horseshoe. The
symmetric statement holds in ${\mathcal{L}}^{-}$, interchanging up and down.
Here, recall that a fake horseshoe map consists of a sequence of operations on
the unit square, quite similar to the usual horseshoe; the only difference is
the way the resulting folded rectangle is fitted back onto the square: the
bottom of the folded rectangle fits back like the top of the starting square.
See Section 2.2 for a precise definition.
The topological nature of the attractors for flows in $\mathcal{L}^{+,-}$ is
described in the next result.
###### Theorem B.
For any $X\in{\mathcal{L}}^{+,-}$ the maximal invariant set in $U$ is a
transitive singular hyperbolic attractor meeting both stable separatrices of
$\sigma$, that is, it is a two-sided Lorenz attractor.
From our construction, we get that every
$X\in\mathcal{L}^{-,+}\cup\mathcal{L}^{+}\cup\mathcal{L^{-}}$ admits a robust
attractor (that is, there exists an open neighborhood $\mathcal{U}$ of $X$ in
the space of vector fields such that every $Y\in\mathcal{U}$ has an
attractor). Besides that, although the complement of
$\mathcal{L}^{-,+}\cup\mathcal{L}^{+}\cup\mathcal{L^{-}}$ in $\mathcal{O}_{1}$
is nonempty, it has empty interior. Thus, the next step is to understand why
an attractor stops intersecting just one component of
$W^{cs}(\sigma)\setminus W^{ss}(\sigma)$ and suddenly starts to intersect both
(and vice versa).
We will see that this phenomenon occurs due to the appearance of certain types
of homoclinic and heteroclinic connections, caused by the displacement of the
intersection points $W^{u}(\sigma)\cap W^{cs}(\sigma)$ from one stable
separatrix of $W^{cs}(\sigma)\setminus W^{ss}(\sigma)$ to the other. Recall
that the unstable manifold of $\sigma$ is one dimensional and
$W^{u}(\sigma)\setminus\\{\sigma\\}$ consists of two separatrices
$W^{u}_{i}(\sigma),\,1\leq i\leq 2.$ A homoclinic loop arises when the
intersection point $q_{i}$ of the unstable separatrix $W^{u}_{i}(\sigma)$ of
$\sigma$ with $\Sigma$ belongs to $\gamma^{s}_{+}\cup\gamma^{s}_{-}$. Consider
${\mathcal{H}}^{i}\subset{\mathcal{O}}_{1}$, $i=1,2$, the hypersurface
corresponding to the vector fields $X$ for which the point $q_{i}$ (the first
intersection with $\Sigma$ of the unstable separatrix $W^{u}_{i}(\sigma)$)
belongs to $\gamma^{s}_{+}\cup\gamma^{s}_{-}$. In other words, $X$ belongs to
${\mathcal{H}}^{i}$ if $\sigma$ admits a homoclinic loop for the unstable
separatrix $W^{u}_{i}(\sigma)$ such that the homoclinic loop cuts $\Sigma$ at
a unique point $q_{i}$. We split each ${\mathcal{H}}^{i}$ in two,
${\mathcal{H}}^{i}={\mathcal{H}}^{i}_{+}\cup{\mathcal{H}}^{i}_{-}$, according
to $\\{q_{i}\in\gamma^{s}_{+}\\}$ and $\\{q_{i}\in\gamma^{s}_{-}\\}$
respectively. We then prove the following result, which characterizes the
topological nature of the attractors for flows in the hypersurfaces
${\mathcal{H}}^{i}_{+}$ and ${\mathcal{H}}^{i}_{-}$.
###### Theorem C.
For any $X\in{\mathcal{O}}_{1}$ in one of the hypersurfaces
${\mathcal{H}}^{1}$ or ${\mathcal{H}}^{2}$, the maximal invariant in $U$ is a
transitive singular attractor meeting both stable separatrices
$W^{s}_{\pm}(\sigma)$.
The statement of the theorem above does not announce a two-sided Lorenz
attractor, because the hypersurface ${\mathcal{H}}^{i},i=1,2,$ is not an open
subset, so the robustness of the transitivity is not ensured. However,
Theorem D establishes that $X\in{\mathcal{H}}^{i}$ exhibits a two-sided Lorenz
attractor except for $X$ in a codimension $2$ submanifold.
There is still the case of a double homoclinic loop, which occurs when both
intersection points of the unstable manifold of $\sigma$ with the cross
section fall in the same segment of the intersection of the upper (lower)
stable component of $W^{cs}(\sigma)$ with the cross section. Let
${\mathcal{H}}^{1,2}_{+}$ and ${\mathcal{H}}^{1,2}_{-}$ be the codimension $2$
submanifolds included in ${\mathcal{H}}^{1}\cup{\mathcal{H}}^{2}$ consisting
of vector fields $X$ such that $q_{1},q_{2}\in\gamma^{s}_{+}$ or
$q_{1},q_{2}\in\gamma^{s}_{-}$, respectively, see Figure 16. Thus both
unstable separatrices of $\sigma$ are homoclinic connections and are included
in the same stable separatrix of $\sigma$. The next result ensures that, for
$X\in{\mathcal{H}}^{i}$ outside ${\mathcal{H}}^{1,2}_{+}$ and
${\mathcal{H}}^{1,2}_{-}$, the transitivity of the attractor is robust, so
that the attractor is a two-sided Lorenz attractor.
###### Theorem D.
If
$X\in{\mathcal{H}}^{i}\setminus({\mathcal{H}}^{1,2}_{+}\cup{\mathcal{H}}^{1,2}_{-})$,
there is a neighborhood ${\mathcal{U}}(X)$ of $X$ such that the maximal
invariant set for every $Y\in{\mathcal{U}}(X)$ is a two-sided Lorenz
attractor.
The appearance of a heteroclinic connection is again related to the position
of the intersection points $q_{1},\,q_{2}$ of $W^{u}_{i}(\sigma)$ with the
cross section, falling on the stable manifolds of fixed points of the first
return map.
Recall that $\Sigma$ is split into two connected components $\Sigma^{1}$ and
$\Sigma^{2}$, determined by the intersections of $W^{cs}(\sigma)$ with
$\Sigma$. Assuming that the first return map has an expansion rate greater
than the golden ratio $\varphi$, when $q_{i}$ falls into $\Sigma^{j}$ with
$i\neq j$, the Poincaré map has a fixed point $p_{j}\in\Sigma^{j}$. Typically,
a heteroclinic cycle arises when $q_{i}$ belongs to the stable leaf
$W^{s}_{j}$ through $p_{j}$. We denote by ${\mathcal{O}}_{\varphi}$ the subset
of flows in ${\mathcal{O}}_{1}$ for which the first return map has an
expansion rate bigger than $\varphi$.
Let ${\mathcal{H}}{\mathcal{E}}_{i}\subset{\mathcal{O}}_{\varphi}$ be the
subset of flows corresponding to the vector fields $X$ for which $q_{i}\in
W^{s}_{j}$. In other words, for $X\in{\mathcal{H}}{\mathcal{E}}_{i}$ the
unstable separatrix of $\sigma$ corresponding to $q_{i}$ is a heteroclinic
connection with $p_{j}$. The subsets ${\mathcal{H}}{\mathcal{E}}_{i}$ are
codimension $1$ submanifolds of ${\mathcal{O}}_{\varphi}$.
The case when both $q_{1}$ and $q_{2}$ belong to $W^{s}_{1}\cup W^{s}_{2}$
corresponds to
$X\in{\mathcal{H}}{\mathcal{E}}_{1}\cap{\mathcal{H}}{\mathcal{E}}_{2}$, which
is a codimension $2$ submanifold, and we prove the following result:
###### Theorem E.
For any
$X\in{\mathcal{H}}{\mathcal{E}}_{1}\cap{\mathcal{H}}{\mathcal{E}}_{2}$, there
is a unique chain recurrence class, which is not transitive. Both $\Sigma_{-}$
and $\Sigma_{+}$ are invariant by $P$. The maximal invariant set of the
restriction of $P$ to $\Sigma_{i}$ is transitive: every unstable segment in
$\Sigma_{i}$ has iterates which cut every segment in $\Sigma_{i}$. The open
set $U$ splits into $2$ regions, each containing a _full Lorenz attractor_
(that is, a Lorenz attractor crossing all stable leaves over each of the
regions into which $U$ splits), and intersecting along
$W^{s}(p_{1})\cup W^{s}(p_{2})\cup W^{u}(\sigma)$. See Figure 17.
The goal now is to describe the drastic changes in the topological dynamics
appearing when a family
$X_{\mu},\,\mu=(\mu_{1},\mu_{2})\in[-1,1]^{2},$ crosses the boundary of the
regions determined above. For flows in ${\mathcal{O}}_{\varphi}$ the first
return map $P$ has two fixed points $p_{1}$ and $p_{2}$, and the union of the
stable manifolds $W^{s}_{j}$ splits $\Sigma$ into two components $\Sigma^{+}$
and $\Sigma^{-}$, with $\gamma^{+}\subset\Sigma^{+}$ and
$\gamma^{-}\subset\Sigma^{-}$, and we can refine the analysis of the
topological behavior of these flows based on the position of $q_{1}$ and
$q_{2}$ in $\Sigma^{\pm}$. This introduces a new region, denoted by
$\widetilde{{\mathcal{O}}_{\varphi}}^{+,+}$ and explicitly defined in Section
4.5.
The behavior of the topological dynamics in all these regions is established
in the next theorem.
###### Theorem F.
With the previous notations, we obtain the following:
* •
the vector field $X_{\mu}$ belongs to ${\mathcal{L}}^{+}$ iff $\mu$ belongs to
the quadrant $\mu_{1}>0,\mu_{2}>0$. In other words, the up full Lorenz
attractor of $X_{0,0}$ becomes a true (robust) Lorenz attractor, and the down
full Lorenz attractor of $X_{0,0}$ becomes a fake horseshoe.
* •
the vector field $X_{\mu}$ belongs to ${\mathcal{L}}^{-}$ iff $\mu$ belongs to
the quadrant $\mu_{1}<0,\mu_{2}<0$. In other words, the down full Lorenz
attractor of $X_{0,0}$ becomes a true (robust) Lorenz attractor, and the up
full Lorenz attractor of $X_{0,0}$ becomes a fake horseshoe.
* •
the vector field $X_{\mu}$ belongs to
$\widetilde{{\mathcal{O}}_{\varphi}}^{+,+}$ iff $\mu$ belongs to the quadrant
$\mu_{1}<0,\mu_{2}>0$ or to the quadrant $\mu_{1}>0,\mu_{2}<0$. In other
words, the two full Lorenz attractors of $X_{0,0}$ merge into a two-sided
Lorenz attractor.
Consider a $1$-parameter family $X_{\mu_{1},\alpha\mu_{1}}$ for some
$\alpha>0$. Then
* •
for $\mu_{1}<0$, the vector field admits a down Lorenz attractor and an up
fake horseshoe.
* •
when $\mu_{1}=0$, the fake horseshoe becomes a full Lorenz attractor and,
simultaneously, collides with the down Lorenz attractor, which also becomes a
full Lorenz attractor.
* •
for $\mu_{1}>0$, the up full Lorenz attractor becomes a true robust Lorenz
attractor while the down full Lorenz attractor becomes a fake horseshoe.
Figure 1 below describes the main features established in Theorem F.
Figure 1: (a) Local bifurcation at
${\mathcal{H}}{\mathcal{E}}_{1}\cap{\mathcal{H}}{\mathcal{E}}_{2}$ and (b)
Global bifurcation
Next, we study in more depth the topological nature of a flow in
$\mathcal{O}_{1}$. As for the classical geometric model of the Lorenz
attractor, we will see that to any vector field in ${\mathcal{O}}_{1}$ we can
associate combinatorial data, called the itinerary of the discontinuities.
These itineraries induce, in some sense, a semi-conjugacy of $(\Sigma,P)$ with
$({\mathbb{X}},\mathfrak{S})$, where $\mathfrak{S}$ is the shift map on
${\mathbb{X}}$, and ${\mathbb{X}}$ is an appropriate alphabet chosen to
represent the crossing points of the unstable manifold of the rest point with
the cross section. Strictly speaking, $P$ is defined on
$\Sigma\setminus(\gamma^{s}_{+}\cup\gamma^{s}_{-})$, which is not invariant
under $P$. Thus we get conjugacies on the restriction to $\Sigma\setminus
W^{s}(\sigma)$. But, since the Poincaré map preserves a stable foliation and
the itineraries for the Poincaré map are functions of the stable leaf, they
pass to the quotient on the leaf space, inducing itineraries for the
one-dimensional quotient map. We use this symbolic analysis to show that each
neighborhood of a flow in $\mathcal{O}_{1}$, in the $C^{1}$ topology, contains
flows whose attractors exhibit all of an uncountable family of topological
types. As in the case of the classical geometric Lorenz attractor, we prove
that this phenomenon is stable of codimension two, in the sense that the
topological types exhibited by various versions of this example can be
distinguished by two parameters, and in fact a two-parameter family of
examples can be constructed which is $C^{1}$ stable among such families.
###### Theorem G.
The restrictions of $X$ and $Y\in\mathcal{O}_{\varphi}$ to their maximal
invariant sets are topologically equivalent by a homeomorphism close to the
identity if, and only if, $X$ and $Y$ have the same itineraries.
This paper is organized as follows. Section 2 gives the concepts, definitions
and results proved elsewhere that will be needed in the text. In Section 3 we
construct the open set of flows ${\mathcal{O}}_{1}$ that will be analyzed in
the remainder of the text. Section 4 gives the background and auxiliary
results for the proofs of the main results. In Section 5 we prove Theorems A,
B, C, D and E. In Section 6 we prove Theorems F and G.
## 2 Preliminaries
In this section we recall some definitions and results proved elsewhere that
will be used in this text.
### 2.1 Basic topological notions
Let $\mathfrak{X}^{r}(M)$, $\,r\geq 1,$ be the space of vector fields defined
on a compact boundaryless manifold $M$, with the $C^{r}$ topology. For
$X\in\mathfrak{X}^{r}(M)$, we denote by $X^{t}$ the flow of $X$. If $U\subset
M$, $\overline{U}$ denotes the closure of $U$, $\operatorname{int}U$ the
interior of $U$ and $\partial U$ the boundary of $U$.
An invariant compact set $\Lambda$ of $X$ is _transitive_ if there is
$p\in\Lambda$ so that its positive orbit is dense in $\Lambda$ (this notion is
called _topological ergodicity_ by some authors).
Recall that a sequence $\\{p_{i}\\}$ is an $\varepsilon$-pseudo-orbit for $X$
if there is $t_{i}>1$ such that $d(p_{i+1},X^{t_{i}}(p_{i}))<\varepsilon$ for
all $i$. A point $p$ is _chain recurrent_ if for each $\varepsilon>0$ there is
an $\varepsilon$-pseudo-orbit $p=p_{0},p_{1},\dots,p_{n}=p$.
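A hedged discrete-time analogue (for a map rather than a flow, with each $t_{i}$ a single iteration; the doubling map and the rounding tolerance are illustrative choices, not from this paper): an orbit computed with rounding error is an $\varepsilon$-pseudo-orbit.

```python
# Discrete-time analogue of an epsilon-pseudo-orbit: iterate the doubling
# map f(x) = 2x mod 1, but round each result to 4 decimal places, so every
# step commits an error well below eps = 1e-3.

def f(x):
    return (2.0 * x) % 1.0

eps = 1e-3
p = [0.1234]
for _ in range(50):
    p.append(round(f(p[-1]), 4))  # rounding introduces the pseudo-orbit error

# every jump d(p_{i+1}, f(p_i)) stays below eps
errors = [abs(p[i + 1] - f(p[i])) for i in range(len(p) - 1)]
max_error = max(errors)
```

Every step of the rounded sequence lands within $\varepsilon$ of the true image, so it qualifies as an $\varepsilon$-pseudo-orbit even though it may shadow no single true orbit.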
The chain recurrent set ${\mathcal{R}}(X)$ is the set of all chain recurrent
points. It is a compact invariant set.
An invariant compact set $\Lambda$ is _chain recurrent_ or _chain transitive_
if for every $\varepsilon>0$ it admits an $\varepsilon$-pseudo-orbit
$\\{x_{i}\\}$, $x_{i}\in\Lambda$, dense in $\Lambda$.
Two points $p,q$ are chain equivalent if for any $\varepsilon>0$ there are
$\varepsilon$-pseudo-orbits from $p$ to $q$ and from $q$ to $p$. Conley theory
[7] proves that chain equivalence is an equivalence relation whose
equivalence classes are called the _chain recurrence classes_. They define a
partition of ${\mathcal{R}}(X)$ into invariant pairwise disjoint compact sets.
Given a region $U$, the maximal invariant set in $U$ is the set $\Lambda_{U}$
of points $p$ whose whole orbit is contained in $U$, that is:
$\Lambda_{U}=\bigcap_{t\in{\mathbb{R}}}X^{t}(U).$
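A hedged discrete-time illustration (the slope-$3$ tent map is an illustrative choice, not one of the maps of this paper): points of $U=[0,1]$ whose forward orbit under $T(x)=3\min(x,1-x)$ stays in $U$ form the middle-thirds Cantor set, and a finite-horizon membership test sketches $\Lambda_{U}$.

```python
# Approximate the maximal invariant set of the slope-3 tent map
# T(x) = 3x for x <= 1/2, 3(1 - x) for x > 1/2, inside U = [0, 1]:
# keep only the points whose first n iterates stay in U.

def tent(x):
    return 3.0 * x if x <= 0.5 else 3.0 * (1.0 - x)

def stays(x, n=20):
    # finite-horizon test for membership in the maximal invariant set
    for _ in range(n):
        if x < 0.0 or x > 1.0:
            return False
        x = tent(x)
    return 0.0 <= x <= 1.0

# 1/4 = 0.020202... in base 3 lies in the middle-thirds Cantor set; 0.4 does not.
survives_quarter = stays(0.25)
escapes = stays(0.4)
```

The point $1/4$ cycles between $1/4$ and $3/4$ forever, while $0.4$ is mapped above $1$ after one step and is discarded.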
A region $U$ is _attracting_ for a vector field $X$ on $M$ if its boundary is
a codimension $1$ submanifold transverse to $X$ and $X$ points into $U$ along
the boundary.
###### Definition 2.1.
* •
A compact invariant set $\Lambda_{X}$ of $X^{t}$ is _attracting_ if there
exists an open neighborhood $U\supset\Lambda_{X}$ which is an attracting
region and such that $\Lambda_{X}$ is the maximal invariant set in $U$, that
is:
$\Lambda_{X}=\underset{t>0}{\bigcap}X^{t}(U).$
* •
We say that $\Lambda_{X}$ is _an attractor_ if it is attracting and chain
transitive (in particular, $\Lambda_{X}$ is a chain recurrence class).
* •
An invariant compact set $K$ is a _quasi-attractor_ if it is a chain
recurrence class and admits a basis of neighborhoods which are attracting
regions: there is a decreasing sequence $U_{i}\subset U_{i-1}$ of attracting
regions so that $K=\bigcap_{i}U_{i}$.
Notice that the existence of an attractor is not ensured a priori, but Conley
proved the existence of quasi-attractors in any attracting region (for a
vector field on a compact manifold).
###### Definition 2.2.
Two vector fields $X$ and $Y$ defined on $M$ are topologically equivalent, if
there exists a homeomorphism $h:M\to M$ satisfying:
1. 1.
$h(\mathcal{O}_{X}(p))=\mathcal{O}_{Y}(h(p))$.
2. 2.
for all $p\in M$ and $\varepsilon>0$, there exists $\delta>0$ such that for
$t\in(0,\delta)$ there is $s\in(0,\varepsilon)$ satisfying
$h(X^{t}(p))=Y^{s}(h(p))$.
### 2.2 Fake horseshoe
The fake horseshoe map consists of a sequence of operations on the unit
square. First, stretch in the $y$ direction by more than a factor of two, then
compress in the $x$ direction by more than a factor of two. Finally, fold the
resulting rectangle and fit it back onto the square, overlapping at the top
and bottom, and not quite reaching the ends to the left and right (and with a
gap in the middle), as illustrated in Figure 2. The only difference from the
usual horseshoe is the way the resulting folded rectangle is fitted back onto
the square: the bottom of the folded rectangle fits back like the top of the
starting square.
Figure 2: The usual horseshoe and a fake horseshoe
### 2.3 Singular points
A point $\sigma$ is singular if $X$ vanishes at $\sigma$. The set of singular
points of $X$ is denoted by $Zero(X)$. A point $\sigma\in Zero(X)$ is
hyperbolic if the real parts of the eigenvalues of $DX(\sigma)$ do not
vanish.
###### Definition 2.3.
A singularity $\sigma$ of $X$ is Lorenz-like if the eigenvalues
$\lambda_{i}\in\mathbb{R},\,i\in\\{ss,s,u\\},$ of $DX(\sigma)$ satisfy
$\lambda_{ss}<\lambda_{s}<0<-\lambda_{s}<\lambda_{u}<-\lambda_{ss}$.
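As a sanity check (a sketch, not part of the construction): the classical Lorenz system with parameters $\sigma=10$, $\rho=28$, $\beta=8/3$ has a singularity at the origin whose linearization has eigenvalues $-\beta$ and $\bigl(-(\sigma+1)\pm\sqrt{(\sigma+1)^{2}+4\sigma(\rho-1)}\bigr)/2$, and the standard Lorenz-like chain of inequalities can be verified numerically.

```python
# Eigenvalues of the classical Lorenz system linearized at the origin,
# with sigma = 10, rho = 28, beta = 8/3. The Jacobian is block diagonal:
# a 2x2 block in (x, y) and the eigenvalue -beta in z.
import math

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
disc = math.sqrt((sigma + 1) ** 2 + 4 * sigma * (rho - 1))
lam_u = (-(sigma + 1) + disc) / 2      # ~ 11.83, the expanding eigenvalue
lam_other = (-(sigma + 1) - disc) / 2  # ~ -22.83
lam_ss, lam_s = sorted([lam_other, -beta])  # strong / weak contraction

# Lorenz-like chain: lam_ss < lam_s < 0 < -lam_s < lam_u < -lam_ss
is_lorenz_like = lam_ss < lam_s < 0 < -lam_s < lam_u < -lam_ss
```

Here $\lambda_{ss}\approx-22.83<\lambda_{s}=-8/3<0<8/3<\lambda_{u}\approx 11.83<22.83$, so the origin is Lorenz-like.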
###### Definition 2.4.
In addition, $\sigma$ is non-resonant if for all $N>2$ and any choice of
nonnegative integers $m_{1},m_{2},m_{3}$ with
$2\leq\overset{3}{\underset{j=1}{\sum}}m_{j}<N$ we have
$\overset{3}{\underset{j=1}{\sum}}m_{j}\lambda_{j}\neq\lambda_{i}$ for each
$i$.
The Hartman–Grobman theorem asserts that, in a small neighborhood of a
hyperbolic singular point, a vector field is locally topologically equivalent
to its linear part. Sternberg then provided conditions guaranteeing that this
local equivalence is in fact of class $C^{2}$:
###### Theorem 2.5.
Let $X$ be a vector field and let $n\in\mathbb{Z}^{+}$ be given. Then there
exists an integer $N=N(n)\geq 2$ such that: if $A$ is a real non-singular
$d\times d$ matrix with eigenvalues $\gamma_{1},\ldots,\gamma_{d}$ satisfying
$\overset{d}{\underset{j=1}{\sum}}m_{j}\gamma_{j}\neq\gamma_{i}$
for any choice of nonnegative integers $m_{1},m_{2},\ldots,m_{d}$ with
$2\leq\overset{d}{\underset{j=1}{\sum}}m_{j}<N$, and if $X(v)=Av+\psi(v)$
where $\psi$ is of class $C^{N}$ for small $\|v\|$ with $\psi(0)=0$ and
$\partial_{v}\psi(0)=0$, then there exists a $C^{n}$ diffeomorphism from a
neighborhood of $v=0$ to a neighborhood of $\xi=0$ that defines a conjugacy
between $X$ and $A$.
Furthermore, the linearizing diffeomorphism depends continuously, in the
$C^{2}$ topology, on the vector field $X$; see for instance [20, Corollary in
the Appendix], which provides a stronger statement.
### 2.4 The shift map $\mathfrak{S}$
We consider ${\mathbb{X}}=\\{A_{0},A_{1},B_{0},B_{1}\\}^{\mathbb{N}}$ the
space of positive infinite words in the alphabet
$\\{A_{0},A_{1},B_{0},B_{1}\\}$. We endow the alphabet
$\\{A_{0},A_{1},B_{0},B_{1}\\}$ with the total order
$A_{0}<A_{1}<B_{0}<B_{1}.$
We endow ${\mathbb{X}}$ with the corresponding lexicographic order, that we
denote by $\prec$ (and $\preccurlyeq$ for the non-strict order).
The shift map $\mathfrak{S}:{\mathbb{X}}\to{\mathbb{X}}$ is defined as
$(T_{j})_{j\geq 0}\in{\mathbb{X}}\mapsto\mathfrak{S}(T_{j})_{j\geq
0}=(T_{j+1})_{j\geq 0}.$
We also define the star map (denoted by $\star$) on ${\mathbb{X}}$ as
follows: given a sequence $w=(w_{0},w_{1},\cdots)\in{\mathbb{X}}$ and a letter
$L\in\\{A_{0},A_{1},B_{0},B_{1}\\}$
$L\star w\stackrel{{\scriptstyle\scriptscriptstyle\rm
def}}{{=}}(L,w_{0},w_{1},\cdots).$
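A minimal sketch of these operations, with the assumption that infinite words are represented here by finite tuples of their first letters:

```python
# Shift and star maps on (finite prefixes of) words over the alphabet
# A0 < A1 < B0 < B1, together with the induced lexicographic order.
RANK = {"A0": 0, "A1": 1, "B0": 2, "B1": 3}

def shift(w):
    # S((T_0, T_1, ...)) = (T_1, T_2, ...)
    return w[1:]

def star(letter, w):
    # L * (w_0, w_1, ...) = (L, w_0, w_1, ...)
    return (letter,) + w

def lex_less(u, v):
    # lexicographic order induced by A0 < A1 < B0 < B1
    return [RANK[a] for a in u] < [RANK[b] for b in v]

w = ("B0", "A1", "A0")
prepended = star("A1", w)   # ("A1", "B0", "A1", "A0")
back = shift(prepended)     # shifting undoes prepending: equals w
less = lex_less(("A0", "B1"), ("A1", "A0"))
```

Note that $\mathfrak{S}(L\star w)=w$ for any letter $L$, which the sketch confirms on a sample word.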
### 2.5 Hyperbolic notions
###### Definition 2.6.
Let $X$ be a vector field of a manifold $M$. A compact invariant set
$\Lambda\subset M\setminus Zero(X)$ is hyperbolic if the tangent bundle
$TM|_{\Lambda}$ over $\Lambda$ admits a splitting
$TM|_{\Lambda}=E^{s}\oplus{\mathbb{R}}\cdot X\oplus E^{u}$
so that
* •
the bundles $E^{s}$ and $E^{u}$ are continuous and invariant under the natural
action of the derivative of the flow.
* •
there is $T>0$ so that for any point $p\in\Lambda$ and for any unit vectors
$u\in E^{s}(p)$ and $v\in E^{u}(p)$ one has
$\|DX^{T}(u)\|<\frac{1}{2}\,\,\mbox{ and }\,\,\|DX^{T}(v)\|>2$
(one says that $E^{s}$ is _uniformly contracted_ and $E^{u}$ is _uniformly
expanded_).
A hyperbolic invariant compact set $K$ is called a _hyperbolic basic set_ if
it is transitive and admits a neighborhood on which it is the maximal
invariant set.
If $K$ is a hyperbolic set, we denote $E^{cs}=E^{s}\oplus{\mathbb{R}}X$ and
$E^{cu}=E^{u}\oplus{\mathbb{R}}X$, and these bundles are called,
respectively, the weak stable and weak unstable bundles.
###### Definition 2.7.
A compact invariant set $\Lambda\subset M$ for $X^{t}$ is _partially
hyperbolic_ if there exists a continuous and $DX^{t}$-invariant splitting
$T_{\Lambda}M=E^{s}\oplus E^{cu}$ and constants $\lambda\in]0,1[$, $K>0$ such
that for all $x\in\Lambda$ and $t\geq 0$ the following inequalities hold:
1. (a)
$||DX^{t}|_{E^{s}_{x}}||\cdot\|DX^{-t}|_{E^{cu}_{X_{t}(x)}}\|\leq
K\lambda^{t}\quad\mbox{(domination)};$
2. (b)
$\|DX^{t}|_{E^{s}_{x}}\|\leq K\lambda^{t}\quad\mbox{(uniform contraction). }$
In addition, if $E^{cu}_{\Lambda}$ is volume expanding, that is,
$|\det(DX^{t}|_{E^{cu}_{x}})|>Ke^{\lambda t}$ for all $x\in\Lambda$ and $t\geq
0$, $\Lambda$ is, by definition, a singular hyperbolic set.
Note that any hyperbolic set is also partially hyperbolic.
Notice that an invariant compact set $\Lambda_{X}$ disjoint from $Zero(X)$ is
singular hyperbolic if and only if it is hyperbolic. In dimension $3$, if
$\Lambda_{X}$ is singular hyperbolic, then every singular point in
$\Lambda_{X}$ is a Lorenz-like singularity, see [18].
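The three conditions can be checked by hand for a toy linear flow (an illustrative assumption, not one of the flows constructed in this paper): take $DX^{t}=\mathrm{diag}(e^{-5t},e^{t},e^{2t})$ with $E^{s}=\langle e_{1}\rangle$ and $E^{cu}=\langle e_{2},e_{3}\rangle$.

```python
# Check domination, uniform contraction and volume expansion for the
# linear flow DX^t = diag(e^{-5t}, e^{t}, e^{2t}), with E^s spanned by e1
# and E^cu spanned by e2, e3 (toy example; constants K = 1, lambda = 1/2).
import math

lam, K = 0.5, 1.0

def conditions(t):
    s_norm = math.exp(-5 * t)         # ||DX^t | E^s||
    cu_inv_norm = math.exp(-t)        # ||DX^{-t} | E^cu||: inverse of weakest rate
    cu_det = math.exp(3 * t)          # det(DX^t | E^cu) = e^{t} * e^{2t}
    domination = s_norm * cu_inv_norm <= K * lam ** t        # (a)
    contraction = s_norm <= K * lam ** t                     # (b)
    volume_expansion = cu_det > K * math.exp(lam * t)        # singular hyperbolicity
    return domination and contraction and volume_expansion

all_ok = all(conditions(t) for t in (0.5, 1.0, 2.0, 5.0))
```

All three inequalities hold for every positive time, so this linear model is singular hyperbolic in the sense above.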
### 2.6 Invariant manifold and foliations
Let $\Lambda$ be a compact invariant set for the flow $X^{t}$ and
$p\in\Lambda$. The stable set $W^{s}_{X}(p)$ and unstable set $W^{u}_{X}(p)$
at $p$ are defined by
$\displaystyle W^{s}_{X}(p)$ $\displaystyle=$ $\displaystyle\\{q\in
M\,:\mbox{dist}(X^{t}(q),X^{t}(p))\underset{t\to+\infty}{\longrightarrow}0\\}$
$\displaystyle W^{u}_{X}(p)$ $\displaystyle=$ $\displaystyle\\{q\in
M\,:\mbox{dist}(X^{t}(q),X^{t}(p))\underset{t\to-\infty}{\longrightarrow}0\\}.$
If $\mathcal{O}=\mathcal{O}_{X}(p)\subset\Lambda$ denotes the orbit of $p\in
M$, the stable set of the orbit of $p$ is
$W^{s}_{X}(\mathcal{O})=\underset{t\in\mathbb{R}}{\bigcup}W^{s}(X^{t}(p))$.
Analogously, the unstable set of the orbit of $p$ is
$W^{u}_{X}(\mathcal{O})=\underset{t\in\mathbb{R}}{\bigcup}W^{u}(X^{t}(p))$.
If $\Lambda$ is a hyperbolic set and $p$ is a point in an orbit
${\mathcal{O}}$ in $\Lambda$, then $W^{s}(p)$ and $W^{s}({\mathcal{O}})$ are
manifolds of the same regularity as $X$, and are tangent (at $p$) to the
stable bundle $E^{s}(p)$ and $E^{cs}(p)$, respectively.
If $\Lambda$ is a partially hyperbolic set the stable sets of the points in
$\Lambda$ are submanifolds of the same regularity as $X$, tangent to $E^{s}$
and varying continuously with the point.
Assume now that $U$ is an attracting region and that $\Lambda=\Lambda_{U}$,
the maximal invariant set in $U$, is a partially hyperbolic attracting
invariant compact set. Let $E^{s}$ denote the stable bundle defined over
$\Lambda$. Then the bundle $E^{s}$ always extends in a unique way to $U$ as an
invariant bundle (still denoted by $E^{s}$) tangent to a topological
foliation. Recently, Araújo and Melbourne [5] provided bunching conditions
ensuring the smoothness of the stable foliation, as stated below:
###### Theorem 2.8.
Let $M$ be a differentiable Riemannian manifold of dimension 3, $X$ a $C^{r}$
vector field, $U$ an attracting region and $\Lambda\subset M$ the maximal
invariant set in $U$, assumed to be a partially hyperbolic attracting set. Let
$\\{W^{s}_{x}\\}$ denote the stable foliation in $U$.
Let $q\in[0,r]$ and suppose that there exists $t>0$ such that
$\|DX^{t}|_{E^{s}(x)}\|\cdot\|DX^{-t}|_{E^{cu}(X^{t}(x))}\|\cdot\|DX^{t}|_{E_{x}^{cu}}\|^{q}<1\quad\mbox{
for all $x\in\Lambda$}.$ (1)
Then the foliation $\\{W^{s}_{x}\\}$ is of class $C^{q}$.
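As an illustrative sketch (with made-up exponential rates, not those of the flows in this paper), suppose $\|DX^{t}|_{E^{s}}\|=e^{-10t}$, $\|DX^{-t}|_{E^{cu}}\|=e^{t}$ and $\|DX^{t}|_{E^{cu}}\|=e^{2t}$. Condition (1) becomes $e^{(-10+1+2q)t}<1$, that is $q<4.5$, so the stable foliation would be of class $C^{q}$ for any $q<\min(4.5,r)$.

```python
# Evaluate the bunching condition (1) for toy exponential rates:
# ||DX^t|E^s|| = e^{-10 t}, ||DX^{-t}|E^cu|| = e^{t}, ||DX^t|E^cu|| = e^{2 t}.
# Condition (1): e^{-10 t} * e^{t} * (e^{2 t})^q < 1  <=>  -10 + 1 + 2 q < 0.
import math

def bunching_holds(q, t=1.0):
    return math.exp(-10 * t) * math.exp(t) * math.exp(2 * t) ** q < 1.0

ok_q4 = bunching_holds(4.0)    # exponent -10 + 1 + 8 = -1 < 0, holds
bad_q5 = bunching_holds(5.0)   # exponent -10 + 1 + 10 = 1 > 0, fails
```

The threshold $q=4.5$ is exactly where the total exponent $-10+1+2q$ changes sign.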
### 2.7 Cone fields
Let $f:M\to M$ be a $C^{1}$ diffeomorphism and $T_{x}M=F^{u}(x)\oplus
F^{s}(x)$ a continuous splitting. We define the stable and unstable cone
fields of size $\gamma<1$ as
$\displaystyle C^{s}_{\gamma}(x)$ $\displaystyle=$
$\displaystyle\\{(v_{1},v_{2})\in F^{u}(x)\oplus
F^{s}(x)\,:\|v_{1}\|\leq\gamma\|v_{2}\|\\},$ $\displaystyle C^{u}_{\gamma}(x)$
$\displaystyle=$ $\displaystyle\\{(v_{1},v_{2})\in F^{u}(x)\oplus
F^{s}(x)\,:\|v_{2}\|\leq\gamma\|v_{1}\|\\}.$
We say that $C^{s}_{\gamma}$ (respectively $C^{u}_{\gamma}$) is strictly
invariant by $Df^{-1}$ (respectively by $Df$) if there is $\alpha<1$ such that
$Df^{-1}(C^{s}_{\gamma}(f(x)))\subset C^{s}_{\alpha\gamma}(x)$ (respectively
$Df(C^{u}_{\gamma}(x))\subset C^{u}_{\alpha\gamma}(f(x))$).
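A hedged sketch for the linear map $Df=\mathrm{diag}(2,1/2)$, with $F^{u}$ the first coordinate and $F^{s}$ the second (an illustrative choice): a vector $(v_{1},v_{2})$ with $\|v_{2}\|\leq\gamma\|v_{1}\|$ is sent to $(2v_{1},v_{2}/2)$, whose slope shrinks by the factor $\alpha=1/4$.

```python
# Strict invariance of the unstable cone C^u_gamma under Df = diag(2, 1/2):
# the slope |v2| / |v1| of every cone vector is multiplied by 1/4.

def Df(v):
    return (2.0 * v[0], 0.5 * v[1])

gamma, alpha = 0.5, 0.25
# sample vectors on the boundary and in the interior of C^u_gamma
samples = [(1.0, 0.5), (1.0, -0.5), (2.0, 0.3), (-1.0, 0.1)]
invariant = all(
    abs(w[1]) <= alpha * gamma * abs(w[0])   # image lies in C^u_{alpha * gamma}
    for w in (Df(v) for v in samples)
)
```

Since the unstable direction is doubled and the stable one is halved, the cone of size $\gamma$ is mapped strictly inside the cone of size $\gamma/4$.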
## 3 An open set of singular hyperbolic flows
Let $M$ be a compact $3$-manifold.
We consider an open set ${\mathcal{O}}_{1}$ of vector fields $X$ on $M$ with
the properties described in the next two sections.
### 3.1 Topological conditions
1. 1.
$X$ has a Lorenz-like singularity $\sigma=\sigma(X)$, varying continuously
with $X$ and satisfying the Sternberg non-resonance conditions. We denote by
$W^{s}_{+}(\sigma)$ and $W^{s}_{-}(\sigma)$ the connected components of
$W^{s}(\sigma)\setminus W^{ss}(\sigma)$.
2. 2.
the vector field $X$ is transverse to $3$ embedded, disjoint annuli denoted by
$\Sigma$ , $D^{2}$ and $D^{1}$.
3. 3.
for any point $p\in D^{i}$ there is $t(p)$ depending smoothly on $p$ so that
$X^{t(p)}(p)\in\Sigma$ and $X^{s}(p)\notin\Sigma\cup D^{1}\cup D^{2}$ for
$s\in]0,t(p)[$. In other words, $X^{t(p)}(p)$ is the first return of the orbit
of $p$ on $\Sigma\cup D^{1}\cup D^{2}$, and we denote it $R(p)=X^{t(p)}(p)$.
Note that $R$ is a diffeomorphism from $D^{1}\cup D^{2}$ to its image in
$\Sigma$. In particular, $R(D^{1})$ and $R(D^{2})$ are annuli. We require that
each of them is _an essential annulus_ in $\Sigma$ (that is, not homotopic to
a point).
Figure 3: The starting flow
4. 4.
$\Sigma\cap W^{s}_{+}(\sigma)$ and $\Sigma\cap W^{s}_{-}(\sigma)$ contain each
a segment, $\gamma^{s}_{+}$ and $\gamma^{s}_{-}$, respectively, transverse to
the boundary $\partial\Sigma$ and connecting the two boundary components of
the annulus $\Sigma$.
The positive orbits of points in $\gamma^{s}_{+}$ and $\gamma^{s}_{-}$ go
directly to the singular point and are disjoint from $\Sigma\cup D^{1}\cup
D^{2}$.
Then $\Sigma\setminus(\gamma^{s}_{+}\cup\gamma^{s}_{-})$ consists of two
components $\Sigma^{1}$ and $\Sigma^{2}.$ See Figure 3.
5.
For any $p\in\Sigma^{1}\cup\Sigma^{2}$ there is $t(p)$ depending smoothly on
$p$ so that $X^{t(p)}(p)\in D^{1}\cup D^{2}$ and $X^{s}(p)\notin\Sigma\cup
D^{1}\cup D^{2}$ for $s\in]0,t(p)[$. Thus, $X^{t(p)}(p)$ is the first return
of the orbit of $p$ on $\Sigma\cup D^{1}\cup D^{2}$, and we denote it
$S(p)=X^{t(p)}(p)$.
Using item 3) one gets that $P=R\circ
S\colon\Sigma\setminus(\gamma^{s}_{+}\cup\gamma^{s}_{-})\to\Sigma$ is the
first return map of $X$ on $\Sigma$.
Figure 4: (a) The cross-sections $\Sigma\cup D^{1}\cup D^{2}$, (b) The image
$P(\Sigma^{1}\cup\Sigma^{2})$.
6.
Let $W^{u}_{1}(\sigma)$ and $W^{u}_{2}(\sigma)$ denote the unstable
separatrices of $\sigma$, that is, the connected components of
$W^{u}(\sigma)\setminus\sigma$.
Then the interior of $D^{i}$, $i=1,2$, contains the first intersection point
$\widetilde{q}_{i}$ of $W^{u}_{i}(\sigma)$ with $\Sigma\cup D^{1}\cup D^{2}$.
Furthermore, $S(\Sigma^{i})\cup\\{\widetilde{q}_{i}\\}$ is an annulus which is
pinched at $\widetilde{q}_{i}$, and this pinched annulus is essential in
$D^{i}$ (see Figure 4(a)): by pinched we mean that, in a neighborhood of
$\widetilde{q}_{i}$, the set $S(\Sigma^{i})\cup\\{\widetilde{q}_{i}\\}$
consists of two cuspidal triangles, with cusps at $\widetilde{q}_{i}$, tangent
at this point to the same line but oriented in opposite directions.
As a consequence of item 5), the closure of $P(\Sigma^{1}\cup\Sigma^{2})$
consists of $2$ parallel essential annuli in $\Sigma$, pinched at
$q_{i}=R(\widetilde{q}_{i})$. See Figure 4(b).
7.
As a consequence of the above items, there is an attracting region $U$
containing $\Sigma$, $D^{1}$, $D^{2}$ and $\sigma$, so that every regular orbit
in $U$ crosses $\Sigma$. The maximal invariant set $\Lambda$ in $U$ is an
attracting invariant compact set (see Figure 5).
Figure 5: Attraction region for the flow.
### 3.2 Singular hyperbolic conditions
8.
The maximal invariant set $\Lambda$ in $U$ is singular hyperbolic with bundles
$E^{s}$ and $E^{cu}$. This is equivalent to requiring that the first return map
$P$ is hyperbolic. We will require more.
9.
There is a cone field ${\mathcal{C}}^{u}$ on the annulus
$\Sigma\simeq\SS^{1}\times[-1,1]$, transverse to the fibers
$\\{\theta\\}\times[-1,1]$ and strictly invariant by the derivative $DP$ of
$P$, so that the vectors in ${\mathcal{C}}^{u}$ are uniformly expanded by
a factor $\lambda>1$.
The cone ${\mathcal{C}}^{u}(p)$ has two components: vectors cutting the fiber
in the positive or in the negative orientation. We require that $DP$ preserves
this transverse orientation.
10.
There exists $t>0$ such that
$\|DX^{t}|_{E^{s}(x)}\|\cdot\|DX^{-t}|_{E^{cu}(X^{t}(x))}\|\cdot\|DX^{t}|_{E_{x}^{cu}}\|<1\quad\mbox{
for all $x\in\Lambda$}.$ (2)
As a consequence of condition (2) and Theorem 2.8, the stable foliation, well
defined in $U$, is of class $C^{1}$.
This foliation is not tangent to $\Sigma$. However, there is a $2$-dimensional
_center-stable foliation_ well defined on $U\setminus\sigma$, obtained by
considering the $X^{t}$-orbits of the stable foliation: this foliation is
$C^{1}$ too. This center-stable foliation cuts the annulus $\Sigma$
transversely, along a $C^{1}$ one-dimensional foliation ${\mathcal{F}}^{s}$,
which is the stable foliation of the return map $P$. The segments
$\gamma^{s}_{+}$ and $\gamma^{s}_{-}$ are leaves of ${\mathcal{F}}^{s}$.
The foliation ${\mathcal{F}}^{s}$ is transverse to the unstable cone field
${\mathcal{C}}^{u}$. The leaves of the foliation ${\mathcal{F}}^{s}$ are
segments crossing $\Sigma$ (connecting the two boundary components of
$\Sigma$).
## 4 Topological dynamics: the attractor and the chain recurrence classes
### 4.1 Quasi-attractor, two-sided, up and down Lorenz attractors
###### Definition 4.1.
* We say that $X\in{\mathcal{O}}_{1}$ exhibits a _two-sided (geometric model of)
Lorenz attractor_ if, for any $Y$ in a neighborhood of $X$, the maximal
invariant set $\Lambda_{Y}$ in the attracting region $U$ is transitive and has
a non-trivial intersection with both stable separatrices $W^{s}_{+}(\sigma,Y)$
and $W^{s}_{-}(\sigma,Y)$.
* We say that $X$ exhibits an _up-Lorenz attractor_ if it admits a geometric
model of Lorenz attractor (in the usual meaning) disjoint from the component
$W^{s}_{-}$.
* We say that $X$ exhibits a _down-Lorenz attractor_ if it admits a geometric
model of Lorenz attractor disjoint from the component $W^{s}_{+}$.
The aim of this section is to show that, under a certain condition on the
expansion in the unstable direction, the vector fields $X$ in an open and
dense subset exhibit either a two-sided, an up- or a down-Lorenz attractor,
and all $3$ cases occur.
### 4.2 Quasi-attractor
We start by noticing that there is no ambiguity about what the attractor can
be:
###### Proposition 4.2.
For any $X\in{\mathcal{O}}_{1}$, the chain recurrence class of the singularity
$\sigma$ is the unique quasi-attractor in the attracting region $U$.
As $U$ is an attracting region, Conley theory implies that $U$ contains at
least one quasi-attractor. A quasi-attractor admits, by definition, arbitrarily
small invariant neighborhoods. Now Proposition 4.2 is a direct consequence of
the next lemma:
###### Lemma 4.3.
The stable manifold of $\sigma$ is dense in $U$.
The proof is identical to the proof of the similar statement for the
classical geometric model of the Lorenz attractor, and is very simple. We
include it here for completeness.
###### Proof.
As any orbit cuts the cross-section $\Sigma$, we just need to prove that, for
any open set $O\subset\Sigma$, there is $n>0$ so that
$P^{n}(O)\cap(\gamma^{s}_{+}\cup\gamma^{s}_{-})\neq\emptyset$.
We consider a segment $S\subset O$ tangent to the unstable cone
${\mathcal{C}}^{u}$. By item 9) there is $\lambda>1$ so that the vectors in
${\mathcal{C}}^{u}$ are expanded by a factor larger than $\lambda$. Thus:
* either $S$ cuts $\gamma^{s}_{+}\cup\gamma^{s}_{-}$ and we are done,
* or $P(S)$ is a segment tangent to ${\mathcal{C}}^{u}$ whose length is at
least $\lambda$ times the length of $S$.
Iterating the process, one gets that either there is $n$ so that $P^{n}(S)$
cuts $\gamma^{s}_{+}\cup\gamma^{s}_{-}$, or $P^{n}(S)$ is a segment tangent to
${\mathcal{C}}^{u}$ whose length tends to infinity. As $\gamma^{s}_{+}$ and
$\gamma^{s}_{-}$ are leaves of the foliation ${\mathcal{F}}^{s}$, which is a
fibration by segments on the annulus $\Sigma$, there is a bound for the length
of any segment tangent to the unstable cone which does not cut every leaf,
ending the proof.
∎
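The mechanism of this proof can be illustrated on a toy one-dimensional model. The sketch below is purely illustrative and not part of the argument: the map `lorenz_map`, the slope `lam` and the discontinuity `c` are our own stand-ins for $P$, $\lambda$ and the leaves $\gamma^{s}_{\pm}$.

```python
# Toy 1D analogue of the proof of Lemma 4.3 (illustrative sketch only):
# a piecewise-affine expanding map on [0, 1] with slope lam in ]1, 2[ and a
# discontinuity at c = 1/2 playing the role of the stable leaves gamma^s_+/-.
# An interval avoiding c is mapped to an interval lam times longer, so after
# finitely many iterates every interval is cut by the discontinuity.

def lorenz_map(x, lam=1.8, c=0.5):
    # left branch maps [0, c) onto [1 - lam*c, 1); right branch maps [c, 1] onto [0, lam*(1 - c)]
    return lam * x + 1 - lam * c if x < c else lam * (x - c)

def first_cut(a, b, lam=1.8, c=0.5, max_iter=100):
    """Iterate the interval [a, b]; return the first n with c in the n-th image."""
    for n in range(max_iter):
        if a <= c <= b:
            return n
        # the image of an interval avoiding c is again an interval, expanded by lam
        a, b = lorenz_map(a, lam, c), lorenz_map(b, lam, c)
    return None

n = first_cut(0.1, 0.12)
print(n)  # a small finite n: the segment grows until it meets the discontinuity
```

Any subinterval avoiding the discontinuity is expanded by the factor `lam` and must fit in one of the two half-intervals, so its images cannot avoid the cut forever; this length argument is exactly the core of the proof above.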
### 4.3 Iterating unstable segments and transitive properties
We fix a vector field $X\in{\mathcal{O}}_{1}$; $U$ is the attracting region in
the definition, $\Sigma$ is the global cross-section, and $P$ is the first
return map.
First notice that the closures $\bar{P}^{n}(\Sigma)$, $n\geq 0$, form a nested
family of compact subsets of $\Sigma$. Let $\Lambda_{P}$ denote their
intersection:
$\Lambda_{P}=\bigcap_{n\geq 0}\bar{P}^{n}(\Sigma).$
###### Lemma 4.4.
The compact set $\Lambda_{P}$ is the intersection of the maximal invariant set
$\Lambda_{X}$ with $\Sigma$. Furthermore, $\Lambda_{X}$ is the union of
$\sigma$ with the saturation of $\Lambda_{P}$ by the flow.
###### Proof.
Any orbit of $\Lambda_{X}$, but $\sigma$, cuts $\Sigma$. Any orbit not in
$W^{u}(\sigma)$ cuts $\Sigma$ for an infinite sequence of negative times
(tending to $-\infty$): indeed the $\alpha$-limit set of such an orbit is not
reduced to $\sigma$, and therefore cuts $\Sigma$.
Thus every orbit of $\Lambda_{X}$ not in $W^{u}(\sigma)$ cuts $\Sigma$ in
$\bigcap_{n\geq 0}P^{n}(\Sigma)$.
In $\Lambda_{P}$ we consider the closures $\bar{P}^{n}(\Sigma)$; this amounts
to adding $\Sigma\cap W^{u}(\sigma)$ to $\bigcap_{n\geq 0}P^{n}(\Sigma)$. The
proof follows from these observations.
∎
Note that for every $n>0$, the closure $\bar{P}^{n}(\Sigma)$ consists of the
union of $2^{n}$ pinched essential annuli; each annulus admits a finite set of
pinched (cuspidal) points, its boundary is tangent to the unstable cone, and
the intersection of two annuli is contained in this finite set of pinched
points. Finally, the thickness of these annuli decreases exponentially with
$n$.
For any nested sequence of such pinched annuli in $\bar{P}^{n}(\Sigma)$,
$n>0$, the intersection is an essential circle. In particular, the topological
dimension of $\Lambda_{P}$ is $1$. Moreover, the boundary of any annulus is
tangent to the unstable cone, and to the image by $DP^{n}$ of this unstable
cone. The intersection of the iterates of the unstable cones converges to a
well defined continuous, invariant line field $E^{u}$ on $\Lambda_{P}$. Each
of the circles is tangent to $E^{u}$.
###### Lemma 4.5.
Let $X$ be a vector field in ${\mathcal{O}}_{1}$ and $P$ the first return map
on the cross section $\Sigma$. If, for every segment $S\subset\Sigma$
transverse to the stable foliation ${\mathcal{F}}^{s}$, the union of stable
leaves through the iterates $P^{n}(S)$, $n\geq 0$, covers an open and dense
subset of $\Sigma$, then the maximal invariant set $\Lambda_{X}$ is
transitive.
One easily checks that it is enough to prove the following:
###### Lemma 4.6.
Under the same hypothesis, given any non-empty open subsets $U\cap\Lambda_{P}$,
$V\cap\Lambda_{P}$ (with $U$ and $V$ open sets of $\Sigma$), there is $n>0$ so that
$P^{n}(V\cap\Lambda_{P})\cap(U\cap\Lambda_{P})\neq\emptyset.$
###### Proof.
The open subset $U\cap\Lambda_{P}$ contains a segment $S_{U}$ tangent to the
unstable direction. There is $\varepsilon>0$ so that the segments of stable
leaves through $S_{U}$ (shrinking $S_{U}$ if necessary) are contained in $U$.
Let $W^{s}_{\varepsilon}(S_{U})$ be the union of these segments.
Note that for any $n>0$, $P^{-n}$ is defined on $S_{U}$ up to a finite set
(the first $n$ iterates of the unstable manifold of $\sigma$). Now
$P^{-n}(W^{s}_{\varepsilon}(S_{U}))$ contains the saturation by
${\mathcal{F}}^{s}$ of $P^{-n}(S_{U})$, which contains open segments in
$\Lambda_{P}$.
Consider now a segment $S_{V}$ contained in $V\cap\Lambda_{P}$. By assumption
there is $m>0$ so that $P^{m}(S_{V})$ intersects stable leaves in
$P^{-n}(W^{s}_{\varepsilon}(S_{U}))$. In other words, $P^{m+n}(S_{V})$ cuts
$W^{s}_{\varepsilon}(S_{U})$ at points which belong to $\Lambda_{P}$
(because $S_{V}$ is contained in $\Lambda_{P}$, which is positively
invariant by $P$). Thus these intersection points belong to
$U\cap\Lambda_{P}$, concluding the proof. ∎
We now present some tools ensuring that the orbit of an _unstable segment_
(i.e. a segment in $\Sigma$ tangent to the unstable cone) cuts almost every
stable leaf.
###### Lemma 4.7.
Let $S$ be an unstable segment joining a cuspidal point $q_{i}$ in
$\bar{P}(\Sigma)$ to a point $q_{\pm}$ in $\gamma^{s}_{\pm}$. Assume that
$q_{i}$ belongs to $\Sigma_{i}$.
Then there is $n>0$ so that the union of the iterates $P^{i}(S)$,
$i\in\\{0,\dots,n\\}$, cuts every stable leaf but a finite number.
###### Proof.
The unstable cone is oriented, inducing an orientation on every unstable
segment. Assume for instance that $S$ starts (for this orientation) at
$q_{1}\in\Sigma_{1}$ and ends at $q_{-}\in\gamma^{s}_{-}$: in particular,
$S\subset\Sigma_{1}$. Now, $P(S)$ is an unstable segment ending at $q_{1}$ and
of length $\ell(P(S))\geq\lambda\ell(S)$.
So $S_{1}=P(S)\cup S$ is an unstable segment of length at least twice
$\ell(S)$ and ending at $q_{-}$.
We define by induction a finite sequence $S_{n}$ as follows: if $S_{n-1}$ is
contained in $\Sigma_{1}$, then $S_{n}=P(S_{n-1})\cup S$. Otherwise, the
sequence ends and $S_{n}$ is not defined. Thus for every $n$, $S_{n}$ is an
unstable segment ending at $q_{-}$ and of length at least $n$ times the length
of $S$. In particular, this sequence ends at some $n_{0}$, and $S_{n_{0}}$ is
not contained in $\Sigma_{1}$: it cuts $\gamma^{s}_{+}$.
Now $P(S_{n_{0}})$ cuts every stable leaf but the one through $q_{1}$. ∎
###### Remark 4.8.
In Lemma 4.7 the unique stable leaf which may not intersect the iterates
$P^{n}(S)$, $n\geq 0$, is the leaf through $q_{1}$. Furthermore, if $q_{1}$ is
distinct from $q_{2}$, then further iterates of $S$ cut the leaf through
$q_{1}$, so that every leaf intersects some iterate of $S$.
###### Lemma 4.9.
Assume now that $\Sigma_{i}$ contains a fixed point $p_{i}$ of $P$. Let $S$ be
an unstable segment whose interior cuts the stable leaf through $p_{i}$. Then
there is $n_{0}$ so that for any $n\geq n_{0}$, the iterate $P^{n}(S)$ cuts
every stable leaf but the one through $q_{i}$.
###### Proof.
By the inclination lemma (also known as the $\lambda$-lemma), the positive
iterates $P^{n}(S)$ accumulate on the unstable manifold $W^{u}(p_{i})$. Some of
these iterates cross $\Sigma_{i}$ completely (crossing both
$\gamma^{s}_{+}$ and $\gamma^{s}_{-}$). Then the next iterate crosses the
whole essential pinched annulus $P(\Sigma_{i})$, ending the proof. ∎
### 4.4 Fixing the expansion rate larger than the golden ratio
For any flow $X\in{\mathcal{O}}_{1}$, the return map $P$ is defined on the
cross-section $\Sigma$ minus the two stable leaves $\gamma^{s}_{+}$ and
$\gamma^{s}_{-}$. So we get two rectangles, and both of them are sent into
$\Sigma$ as pinched essential annuli. Therefore the expansion rate
$\lambda$ of $P$ in the unstable cone cannot be required to be uniformly
larger than $2$. However, we will see that we can require a uniform expansion
rate arbitrarily close to $2$. Figure 6 displays the main features of the
return map $P$.
Figure 6:
The proof of the main topological properties consists of iterating unstable
segments (tangent to the unstable cone) and estimating the lengths of the
components of these iterates. In this section we choose a rate $\lambda$
which will allow us to estimate these lengths.
Let us start with a very elementary observation. Consider a segment
$[a,c]\subset{\mathbb{R}}$ and pick $b\in]a,c[$. Then one of the lengths
$\lambda^{2}\ell([a,b])$ or $\lambda^{2}\ell([b,c])$ is strictly larger than
$\ell([a,c])$ if $\lambda>\sqrt{2}$. That is, the choice of $\lambda>\sqrt{2}$
ensures the growth of a segment split into two components after two
iterations, provided those components are not split again. This is the
traditional expansion rate guaranteeing the transitivity of a Lorenz
attractor. We note that there are examples of one-dimensional Lorenz maps with
expansion rate smaller than $\sqrt{2}$ that are not transitive; note that such
a map has an isolated periodic orbit. See Figure 7.
Figure 7: The map on the left is transitive, but not robustly so; the map on
the right is not transitive.
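This observation can be checked numerically; the following sketch (ours, with an arbitrary sampling grid and function name) verifies that just above the rate $\sqrt{2}$ the longer of the two expanded pieces always exceeds the original length, the worst case being the midpoint split:

```python
import math

# Numerical check of the elementary observation: split [a, c] at b and expand
# each piece by lam**2; for lam > sqrt(2) the longer expanded piece is strictly
# longer than the original segment.  Working with the relative length
# alpha = l([b, c]) / l([a, c]), the claim becomes
# max(lam**2 * (1 - alpha), lam**2 * alpha) > 1 for all alpha in ]0, 1[.

def longer_piece_after_two_iterates(lam, alpha):
    return max(lam**2 * (1 - alpha), lam**2 * alpha)

lam = math.sqrt(2) + 1e-6                      # rate just above sqrt(2)
worst = min(longer_piece_after_two_iterates(lam, k / 1000) for k in range(1, 1000))
print(worst > 1.0)  # True: the worst case is the midpoint split alpha = 1/2
```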
Consider now the maximum of the lengths $\lambda\ell([a,b])$ and
$\lambda^{2}\ell([b,c])$. Notice that the golden ratio satisfies
$\varphi=\frac{1+\sqrt{5}}{2}\in]1,2[$.
###### Lemma 4.10.
For any $a<b<c$ and any $\lambda\geq\varphi$, one gets
$\max\\{\lambda\ell([a,b]),\lambda^{2}\ell([b,c])\\}\geq\frac{\lambda}{\varphi}\ell([a,c]).$
###### Proof.
Taking $\alpha=\frac{\ell([b,c])}{\ell([a,c])}$, we only have to show
$\max\\{(1-\alpha),\lambda\alpha\\}\geq\frac{1}{\varphi}.$
If $\alpha\geq\frac{\varphi-1}{\varphi}$, we have
$\lambda\alpha\geq\lambda\left(\frac{\varphi-1}{\varphi}\right)\geq\varphi\left(\frac{\varphi-1}{\varphi}\right)=\frac{\varphi^{2}-\varphi}{\varphi}=\frac{1}{\varphi},$ since $\varphi^{2}-\varphi=1$.
On the other hand, if $\alpha<\frac{\varphi-1}{\varphi}$, we get
$1-\alpha>1-\left(\frac{\varphi-1}{\varphi}\right)=\frac{1}{\varphi}$. ∎
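The inequality of Lemma 4.10 can also be verified numerically in its normalised form $\max\\{1-\alpha,\varphi\alpha\\}\geq\frac{1}{\varphi}$; the sketch below (ours, with an arbitrary grid) checks the bound and the equality case $\alpha=\frac{\varphi-1}{\varphi}$, where the two expressions of the proof coincide:

```python
import math

# Numerical check of Lemma 4.10 in its normalised form: with the golden ratio
# phi = (1 + sqrt(5)) / 2, one has max(1 - alpha, phi * alpha) >= 1 / phi for
# every alpha in [0, 1], with equality at alpha = (phi - 1) / phi.

phi = (1 + math.sqrt(5)) / 2

def bound(alpha, lam=phi):
    return max(1 - alpha, lam * alpha)

worst = min(bound(k / 10000) for k in range(10001))
critical = (phi - 1) / phi

print(worst >= 1 / phi - 1e-12)                # True: the bound of the lemma holds
print(abs(bound(critical) - 1 / phi) < 1e-12)  # True: equality at the critical alpha
```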
We consider now the open subset
${\mathcal{O}}_{\varphi}\subset{\mathcal{O}}_{1}$ consisting of the flows $X$
such that the expansion rate $\lambda$ of the return map in the unstable cone
is larger than $\varphi$. See Section , which exhibits explicit vector fields
in ${\mathcal{O}}_{\varphi}$, showing that ${\mathcal{O}}_{\varphi}$ is not
empty.
### 4.5 Cutting ${\mathcal{O}}_{\varphi}$ in regions
Consider ${\mathcal{H}}^{i}\subset{\mathcal{O}}_{1}$, $i=1,2$, the subset
corresponding to the vector fields $X$ for which the point $q_{i}$ (the first
intersection with $\Sigma$ of the unstable separatrix $W^{u}_{i}(\sigma)$)
belongs to $\gamma^{s}_{+}\cup\gamma^{s}_{-}$. In other words, $X$ belongs to
${\mathcal{H}}^{i}$ if $\sigma$ admits a homoclinic loop for the unstable
separatrix $W^{u}_{i}(\sigma)$ such that the homoclinic loop cuts $\Sigma$ at a
unique point $q_{i}$. We split each ${\mathcal{H}}^{i}$ in two,
${\mathcal{H}}^{i}={\mathcal{H}}^{i}_{+}\cup{\mathcal{H}}^{i}_{-}$, according
to $\\{q_{i}\in\gamma^{s}_{+}\\}$ and $\\{q_{i}\in\gamma^{s}_{-}\\}$
respectively. Figure 8 displays the features of a vector field
$X\in{\mathcal{H}}^{1}$.
It is well known that a homoclinic connection corresponds to a codimension $1$
phenomenon, as expressed below:
###### Lemma 4.11.
The subsets ${\mathcal{H}}^{i}$ are codimension $1$ submanifolds of
${\mathcal{O}}_{1}$.
Figure 8: (a) $X\in{\mathcal{H}}^{1}_{+}$ and (b) $X\in{\mathcal{H}}^{1}_{-}$.
The submanifolds ${\mathcal{H}}^{1}$ and ${\mathcal{H}}^{2}$ cut
${\mathcal{O}}_{\varphi}$ into $4$ regions
${\mathcal{O}}_{\varphi}^{\omega_{1},\omega_{2}}$, $\omega_{i}\in\\{+,-\\}$,
so that $\omega_{i}=-$ if and only if $q_{i}\in\Sigma_{i}$. Then:
###### Lemma 4.12.
Let $X$ be a vector field in ${\mathcal{O}}_{1}$ out of ${\mathcal{H}}^{1}$
and ${\mathcal{H}}^{2}$. Then the first return map $P$ admits a fixed point in
$\Sigma_{1}$ (resp. $\Sigma_{2}$) if and only if $X$ belongs to
${\mathcal{O}}_{\varphi}^{+,-}\cup{\mathcal{O}}_{\varphi}^{+,+}$ (resp.
${\mathcal{O}}_{\varphi}^{-,+}\cup{\mathcal{O}}_{\varphi}^{+,+}$).
###### Proof.
If $q_{1}\in\Sigma_{2}$, this means that the cuspidal point of the pinched
annulus $P(\Sigma_{1})$ belongs to $\Sigma_{2}$. Thus $P(\Sigma_{1})$ crosses
$\Sigma_{1}$ in a (hyperbolic) Markov way, leading to a unique fixed point in
$\Sigma_{1}$, see Figure 9(a). If $q_{1}\in\Sigma_{1}$, then $\Sigma_{1}\cap
P(\Sigma_{1})$ has $2$ connected components, which are cuspidal triangles
$T_{1}^{+}$ (bounded by $\gamma^{s}_{+}$) and $T_{1}^{-}$ (bounded by
$\gamma^{s}_{-}$). Assume for instance that there is a fixed point $p$ in
$T_{1}^{-}$ (the other case is equivalent). Then the region $R$ bounded by
$\gamma^{s}_{+}$ and $W^{s}(p)$ is invariant by $P$, see Figure 9. But
$W^{u}(p)$ contains a segment joining $p$ to $\gamma^{s}_{+}$. This segment is
expanded by $P$, contradicting the invariance of $R$. This ends the proof.
Figure 9:
$X\in{\mathcal{O}}_{1}\setminus({\mathcal{H}}^{1}\cup{\mathcal{H}}^{2})$
∎
Consider now the region ${\mathcal{O}}_{\varphi}^{+,+}$: every vector field
$X\in{\mathcal{O}}_{\varphi}^{+,+}$ has exactly $1$ fixed point $p_{1}$ in
$\Sigma_{1}$ and $1$ fixed point $p_{2}$ in $\Sigma_{2}$. Let $W^{s}_{i}$ be
the stable leaf through $p_{i}$.
Note that $q_{1}$ and $p_{2}$ are both in $\Sigma_{2}$, and in the same way
$q_{2}$ and $p_{1}$ are both in $\Sigma_{1}$. This remark allows, _a priori_ ,
that $q_{1}$ and $p_{2}$ belong to the same stable leaf, or that $q_{2}$ and
$p_{1}$ belong to the same stable leaf. This corresponds to our next splitting
of the region ${\mathcal{O}}_{\varphi}^{+,+}$.
Let us denote by ${\mathcal{H}}{\mathcal{E}}_{i}$ the subset of
${\mathcal{O}}_{\varphi}^{+,+}$ corresponding to the vector fields $X$ for
which $q_{i}\in W^{s}_{j}$, $j\neq i$. In other words, for
$X\in{\mathcal{H}}{\mathcal{E}}_{i}$ the unstable separatrix of $\sigma$
corresponding to $q_{i}$ is a heteroclinic connection with $p_{j}$, see Figure
10. As in Lemma 4.11 one has:
Figure 10: (a) $X\in{\mathcal{H}}{\mathcal{E}}^{1}$ and (b)
$X\in{\mathcal{H}}{\mathcal{E}}^{2}$
###### Lemma 4.13.
The subsets ${\mathcal{H}}{\mathcal{E}}_{i}$ are codimension $1$ submanifolds
of ${\mathcal{O}}_{1}$.
Consider
$X\in{\mathcal{O}}_{\varphi}^{+,+}\setminus({\mathcal{H}}{\mathcal{E}}_{1}\cup{\mathcal{H}}{\mathcal{E}}_{2})$.
Then $\Sigma\setminus(W^{s}_{1}\cup W^{s}_{2})$ has exactly $2$ connected
components: one, denoted by $\Sigma_{+}$, contains $\gamma^{s}_{+}$, and the
other, denoted by $\Sigma_{-}$, contains $\gamma^{s}_{-}$.
We denote by ${\mathcal{L}}^{+}$ the open subset of
${\mathcal{O}}_{\varphi}^{+,+}$ where both points $q_{1},q_{2}$ belong to
$\Sigma_{+}$, and by ${\mathcal{L}}^{-}$ the open subset where both points
$q_{1},q_{2}$ belong to $\Sigma_{-}$. We denote by
$\widetilde{{\mathcal{O}}_{\varphi}}^{+,+}$ the open subset where $q_{1}$ and
$q_{2}$ belong to different components $\Sigma_{\pm}$ and $\Sigma_{\mp}$.
We will denote by ${\mathcal{L}}^{+,-}$ the union
${\mathcal{L}}^{+,-}={\mathcal{O}}_{\varphi}^{-,-}\cup{\mathcal{O}}_{\varphi}^{+,-}\cup{\mathcal{O}}_{\varphi}^{-,+}\cup\widetilde{{\mathcal{O}}_{\varphi}}^{+,+}\cup\left(({\mathcal{H}}^{1}\cup{\mathcal{H}}^{2})\setminus\left(({\mathcal{H}}^{1}_{+}\cap{\mathcal{H}}^{2}_{+})\cup({\mathcal{H}}^{1}_{-}\cap{\mathcal{H}}^{2}_{-})\right)\right).$
Recall that ${\mathcal{H}}^{i}$ corresponds to a homoclinic loop and
${\mathcal{H}}^{i}_{\pm}$ distinguishes the up or down stable separatrix of
$\sigma$ involved in that loop.
## 5 The topological dynamics in the different regions of
${\mathcal{O}}_{\varphi}$
### 5.1 Up and down Lorenz attractor: the regions ${\mathcal{L}}^{+}$ and
${\mathcal{L}}^{-}$
The aim of this section is to prove
###### Theorem 5.1 (A).
With the notation of Section 4.5, any vector field $X\in{\mathcal{L}}^{+}$
admits exactly $2$ chain recurrence classes: one is an up-Lorenz attractor, and
the other is a hyperbolic basic set, topologically equivalent to the
suspension of a fake horseshoe.
The symmetric statement holds in ${\mathcal{L}}^{-}$, interchanging up and
down.
###### Proof.
The component $\Sigma_{+}$ is a rectangle. The stable leaf $\gamma^{s}_{+}$
cuts $\Sigma_{+}$ into two components. One is $\Sigma_{1}\cap\Sigma_{+}$ and is
bounded by $W^{s}_{1}=W^{s}(p_{1})$, and the other is
$\Sigma_{2}\cap\Sigma_{+}$ and is bounded by $W^{s}_{2}$. Note that $q_{1}$
belongs to $\Sigma_{2}\cap\Sigma_{+}$ and $q_{2}$ belongs to
$\Sigma_{1}\cap\Sigma_{+}$.
Consider a stable leaf $L_{1}\subset\Sigma_{1}\cap\Sigma_{+}$ separating
$W^{s}_{1}$ from $q_{2}$, and a stable leaf $L_{2}\subset\Sigma_{2}\cap\Sigma_{+}$
separating $W^{s}_{2}$ from $q_{1}$, see Figure 11.
Then $L_{1}\cup L_{2}$ cuts $\Sigma_{+}$ into $3$ components: one is bounded by
$W^{s}_{1}$, another by $W^{s}_{2}$, and the third, denoted by $R_{L}$, is a
rectangle bounded by both $L_{1}$ and $L_{2}$ and containing $q_{1}$, $q_{2}$
and $\gamma^{s}_{+}$. The leaf $\gamma^{s}_{+}$ cuts $R_{L}$ into two
components, $R_{L,1}\subset\Sigma_{1}$ and $R_{L,2}\subset\Sigma_{2}$.
Consider the restriction of $P$ to $R_{L}\setminus\gamma^{s}_{+}$. The images
of $R_{L,1}$ and $R_{L,2}$ are cuspidal triangles with cusps at $q_{1}$ and
$q_{2}$, and are contained in $R_{L}$. Recall that $P$ is hyperbolic. Thus the
restriction of $P$ to $R_{L}$ satisfies all the properties of the return map
in the geometric model of the Lorenz attractor with an expansion rate larger
than $\varphi>\sqrt{2}$. The rectangle $R_{L}$ is an attracting region for $P$.
One deduces that $U$ contains a sub-region which is attracting, and in which
$X$ is a geometric model of the Lorenz attractor.
Figure 11: The leaves $L_{1}$ and $L_{2}$, and the region $\Sigma_{+}$.
Consider now the component $R_{H}$ of $\Sigma\setminus(L_{1}\cup L_{2})$
disjoint from $R_{L}$ and containing $\gamma^{s}_{-}$. Consider the
restriction of $P$ to $R_{H}\setminus\gamma^{s}_{-}$; the image of each of its
components crosses $R_{H}$ in a Markovian way. Thus the maximal
invariant set in $R_{H}$ is far from the discontinuity, and is conjugated to
the _fake horseshoe_, as the map $P$ preserves the orientation of the unstable
cone field. This ends the proof; the down case is similar. ∎
### 5.2 Two-sided Lorenz attractor in ${\mathcal{O}}_{\varphi}^{-,-}$
###### Proposition 5.2.
For any $X\in{\mathcal{O}}_{\varphi}^{-,-}$, the maximal invariant set in $U$
is transitive and consists of a two-sided Lorenz attractor.
###### Proof.
According to Lemma 4.5, to prove the transitivity of the maximal invariant
set it is enough to prove that the iterates of any unstable segment $S$ in
$\Sigma$ cut all the stable leaves, with the possible exception of a set with
empty interior, see Figure 12.
Figure 12: $q_{1}\in\Sigma_{1}$ and $q_{2}\in\Sigma_{2}$
So consider an unstable segment $S\subset\Sigma$, and consider the set of
lengths of the connected components of all positive iterates $P^{n}(S)$. If
this set of lengths is not bounded, then some component cuts every stable leaf
and we are done.
Otherwise, given any $\delta>1$, up to replacing $S$ by a segment in one of its
iterates, one may assume that any connected component $S^{\prime}$ of any
iterate $P^{n}(S)$ has length bounded by $\delta\ell(S)$. We now fix
$\delta\in]1,\frac{\lambda^{2}}{2}[$.
If $S$ is disjoint from $\gamma^{s}_{+}\cup\gamma^{s}_{-}$ then
$\ell(P(S))>\lambda\ell(S)>\delta\ell(S)$, contradicting the choice of $S$. So
$S$ cuts $\gamma^{s}_{+}$ or $\gamma^{s}_{-}$. If it cuts both, then it
crosses $\Sigma_{1}$ or $\Sigma_{2}$ completely, so that $P(S)$ cuts all the
stable leaves but $1$, and we are done.
So we may assume that $S$ cuts exactly one of $\gamma^{s}_{+}$ or
$\gamma^{s}_{-}$, say $\gamma^{s}_{+}$. Let $S_{1}$ be a component of
$S\setminus\gamma^{s}_{+}$ with length larger than or equal to
$\frac{1}{2}\ell(S)$. The length of $P(S_{1})$ is at least
$\frac{\lambda}{2}\ell(S)$.
If $P(S_{1})$ is disjoint from $\gamma^{s}_{+}\cup\gamma^{s}_{-}$ then
$\ell(P^{2}(S_{1}))\geq\lambda\ell(P(S_{1}))\geq\frac{\lambda^{2}}{2}\ell(S)>\delta\ell(S),$
contradicting the choice of $S$. So $P(S_{1})$ cuts $\gamma^{s}_{+}$ or
$\gamma^{s}_{-}$. Once again, if it cuts both of them we are done, so one may
assume that $P(S_{1})$ cuts exactly one of $\gamma^{s}_{+}$ or
$\gamma^{s}_{-}$. Note that one of the end points of $P(S_{1})$ is the cuspidal
point $q_{i}\in\Sigma_{i}$, as $X\in{\mathcal{O}}_{\varphi}^{-,-}$. Thus Lemma
4.7 applies and concludes the proof of the transitivity of the maximal
invariant set $\Lambda_{X}$.
Note that the compact set $\Lambda_{P}$ consists of a union of essential
circles in $\Sigma$ and therefore always cuts $\gamma^{s}_{+}$ and
$\gamma^{s}_{-}$, so that the maximal invariant set $\Lambda_{X}$ intersects
both stable separatrices $W^{s}_{+}(\sigma)$ and $W^{s}_{-}(\sigma)$
non-trivially. Thus $\Lambda_{X}$ is a two-sided Lorenz attractor,
ending the proof. See Figure 12.
∎
### 5.3 Two-sided Lorenz attractor in ${\mathcal{O}}^{+,-}_{\varphi}$ and
${\mathcal{O}}^{-,+}_{\varphi}$
###### Proposition 5.3.
For any $X\in{\mathcal{O}}_{\varphi}^{+,-}\cup{\mathcal{O}}_{\varphi}^{-,+}$
the maximal invariant set in $U$ is transitive and consist in a two-sided
Lorenz attractor.
The proof of the proposition for $X\in{\mathcal{O}}_{\varphi}^{-,+}$ is
identical to the proof for $X\in{\mathcal{O}}_{\varphi}^{+,-}$, interchanging
the components $\Sigma_{1}$ and $\Sigma_{2}$, so we write the proof only
for $X\in{\mathcal{O}}_{\varphi}^{+,-}$. See Figure 13.
###### Proof.
Recall that the expansion rate of $X$ is $\lambda>\varphi$.
Figure 13: $q_{1}\in\Sigma_{+}$ for $X\in{\mathcal{O}}^{+,-}$
As in Proposition 5.2, it is enough to prove that the iterates of any unstable
segment $S$ in $\Sigma$ cut all the stable leaves, with the possible exception
of a set with empty interior. See Figure 13.
So consider an unstable segment $S\subset\Sigma$, and consider the set of
lengths of the connected components of all positive iterates $P^{n}(S)$. If
this set of lengths is not bounded, then some component cuts every stable leaf
and we are done.
Otherwise, given any $\delta>1$, up to replacing $S$ by a segment in one of its
iterates, one may assume that any connected component $S^{\prime}$ of any
iterate $P^{n}(S)$ has length bounded by $\delta\ell(S)$. We now fix
$\delta\in]1,\frac{\lambda}{\varphi}[$.
If $S$ is disjoint from $\gamma^{s}_{+}\cup\gamma^{s}_{-}$ then
$\ell(P(S))>\lambda\ell(S)>\delta\ell(S)$, contradicting the choice of $S$. So
$S$ cuts $\gamma^{s}_{+}$ or $\gamma^{s}_{-}$. If it cuts both, then it
crosses $\Sigma_{1}$ or $\Sigma_{2}$ completely, so that $P(S)$ cuts all the
stable leaves but $1$, and we are done.
So we may assume that $S$ cuts exactly one of $\gamma^{s}_{+}$ or
$\gamma^{s}_{-}$, say $\gamma^{s}_{-}$. Thus $S$ is cut by
$\gamma^{s}_{-}$ into two components $S_{i}=S\cap\Sigma_{i}$.
Consider $P(S_{2})$. If
$P(S_{2})\cap(\gamma^{s}_{-}\cup\gamma^{s}_{+})\neq\emptyset$, then Lemma 4.7
applies (because $q_{2}\in\Sigma_{2}$ by our assumption
$X\in{\mathcal{O}}^{+,-}_{\varphi}$). Thus the iterates of $S_{2}$ cut every
stable leaf but a finite number of them, and we are done.
Thus we may assume that $P(S_{2})$ is disjoint from
$\gamma^{s}_{-}\cup\gamma^{s}_{+}$, so that $P^{2}(S_{2})$ is an unstable
segment of length at least $\lambda^{2}\ell(S_{2})$. On the other hand,
$\ell(P(S_{1}))\geq\lambda\ell(S_{1})$. Now Lemma 4.10 implies that
$\max\\{\ell(P(S_{1})),\ell(P^{2}(S_{2}))\\}\geq\frac{\lambda}{\varphi}\ell(S)>\delta\ell(S),$
contradicting the choice of the segment $S$ and finishing the proof.
∎
### 5.4 Two-sided Lorenz attractor in
$\widetilde{{\mathcal{O}}_{\varphi}}^{+,+}$
###### Proposition 5.4.
For any $X\in\widetilde{{\mathcal{O}}_{\varphi}}^{+,+}$, the maximal invariant
set in $U$ is transitive and consists of a two-sided Lorenz attractor.
Vector fields $X$ in $\widetilde{{\mathcal{O}}_{\varphi}}^{+,+}$ are
characterized by the fact that the points $q_{1}$ and $q_{2}$ belong to
different components $\Sigma_{+}$ (containing $\gamma^{s}_{+}$) and
$\Sigma_{-}$ (containing $\gamma^{s}_{-}$) of $\Sigma\setminus(W^{s}_{1}\cup
W^{s}_{2})$ where $W^{s}_{i}=W^{s}(p_{i})$ and $p_{i}$ is the fixed point of
$P$ in $\Sigma_{i}$.
Thus $\widetilde{{\mathcal{O}}_{\varphi}}^{+,+}$ is the union of two disjoint
open subsets, defined by $q_{1}\in\Sigma_{+}$ or $q_{1}\in\Sigma_{-}$.
The proof of the proposition is symmetric in these two open sets. We provide
here the proof for the case $q_{1}\in\Sigma_{+}$. See Figure 14.
Figure 14: $q_{1}\in\Sigma_{+}$ for
$X\in\widetilde{{\mathcal{O}}_{\varphi}}^{+,+}$
###### Proof.
Most of the proof is identical to the proofs of Propositions 5.2 and 5.3, and
allows us to consider a segment $S$ so that any component of any iterate
$P^{n}(S)$ has length bounded by $\delta\ell(S)$, with
$1<\delta<\frac{\lambda}{\varphi}$, where $\lambda$ is the expansion rate of
$X$. Furthermore, $S$ cuts exactly one of the stable leaves $\gamma^{s}_{+}$ or
$\gamma^{s}_{-}$.
Let us assume that $S$ cuts $\gamma^{s}_{-}$, and denote
$S_{i}=S\cap\Sigma_{i}$. If one of $P(S_{1})$ or $P(S_{2})$ is disjoint from
$\gamma^{s}_{+}\cup\gamma^{s}_{-}$, then one concludes, in the same way as in
the proof of Proposition 5.3 using Lemma 4.10, that one of the iterates
$P(S_{1})$, $P^{2}(S_{1})$, $P(S_{2})$ or $P^{2}(S_{2})$ contains a segment of
length larger than $\frac{\lambda}{\varphi}\ell(S)$, contradicting the choice
of $S$. Thus one may assume that both $P(S_{1})$ and $P(S_{2})$ cut
$\gamma^{s}_{+}\cup\gamma^{s}_{-}$.
For the orientation of ${\mathcal{C}}^{u}$, $S_{1}$ has its end point on
$\gamma^{s}_{-}$. Thus $P(S_{1})$ ends at $q_{1}\in\Sigma_{+}$. As seen above,
$P(S_{1})$ cuts $\gamma^{s}_{+}\cup\gamma^{s}_{-}$, and its orientation implies
that it indeed cuts $\gamma^{s}_{-}$. In particular it goes out of
$\Sigma_{+}$. One deduces that $P(S_{1})$ cuts $W^{s}_{2}=W^{s}(p_{2})$
transversely, where $p_{2}$ is the fixed point in $\Sigma_{2}$
(recall that $X\in{\mathcal{O}}_{\varphi}^{+,+}$). Thus Lemma 4.9 ensures that
the iterates of $P(S_{1})$ cross all stable leaves but a finite number of
them, concluding this case.
The case where $S$ cuts $\gamma^{s}_{+}$ is similar, just replacing
$S_{1}=S\cap\Sigma_{1}$ and $W^{s}(p_{2})$ by $S_{2}=S\cap\Sigma_{2}$ and
$W^{s}(p_{1})$. This concludes the proof.
∎
### 5.5 Two-sided Lorenz attractor in the hypersurfaces ${\mathcal{H}}^{i}$ of
homoclinic loops
###### Proposition 5.5.
For any $X\in{\mathcal{O}}_{1}$ in one of the hypersurfaces
${\mathcal{H}}^{1}$ or ${\mathcal{H}}^{2}$, the maximal invariant set in $U$ is
a transitive singular attractor meeting both stable separatrices
$W^{s}_{\pm}(\sigma)$.
###### Proof.
We assume that one of the points $q_{1},q_{2}$, say $q_{1}$, belongs to one
of the stable leaves $\gamma^{s}_{+}$ or $\gamma^{s}_{-}$, say
$q_{1}\in\gamma^{s}_{+}$ (all the cases admit an identical proof, _mutatis
mutandis_). See Figure 15.
Figure 15: $X\in{\mathcal{H}}^{1}\cup{\mathcal{H}}^{2}$
In a similar way to the proofs of Propositions 5.2, 5.3 and 5.4, the proof
consists in considering an unstable segment $S$ which does not admit any
segment $\tilde{S}$ of length larger than $\delta\ell(S)$,
$1<\delta<\frac{\lambda}{\varphi}$, so that $\tilde{S}$, except for a finite
subset, is contained in the union of the iterates $P^{n}(S)$, $n\geq 0$. One
needs to prove that the iterates of such a segment $S$ cut all stable leaves
but a finite number of them.
Again, the choice of $S$ implies that $S$ cuts $\gamma^{s}_{+}$ or
$\gamma^{s}_{-}$, and if it cuts both then $P(S)$ already cuts all stable
leaves but one. So we assume that $S$ cuts only one of these leaves.
Assume first that $S$ cuts $\gamma^{s}_{+}$. Consider $S_{1}=S\cap\Sigma_{1}$.
It is a segment starting at a point in $\gamma^{s}_{+}$ and contained in
$\Sigma_{1}$. Then $P(S_{1})$ is a segment of length larger than
$\lambda\ell(S)$ and starting at $q_{1}\in\gamma^{s}_{+}$. If $P(S_{1})$ is
not included in $\Sigma_{1}$ then it cuts both $\gamma^{s}_{+}$ and
$\gamma^{s}_{-}$ and $P^{2}(S_{1})$ cuts all leaves but one. If
$P(S_{1})\subset\Sigma_{1}$ then $P^{2}(S_{1})$ is a segment of length larger
than $\lambda^{2}\ell(S)$ and starting at $q_{1}$.
Iterating the process, one gets that one of the iterates $P^{k}(S_{1})$
crosses $\Sigma_{1}$ completely, so that $P^{k+1}(S_{1})$ cuts all stable
leaves but one, and we are done.
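The growth argument above is purely arithmetic: as long as $P^{k}(S_{1})$ stays in $\Sigma_{1}$, its length is multiplied by at least $\lambda$ at each step, so some iterate must cross $\Sigma_{1}$ completely. A minimal numerical sketch, with illustrative values for $\lambda$, $\ell(S)$ and the width of $\Sigma_{1}$ (none of which are prescribed by the text):

```python
# Illustration (not part of the proof): each iterate multiplies the length of
# the unstable segment by at least lam, so some iterate must cross Sigma_1.
# All numerical values below are illustrative assumptions.

lam = 1.7      # expansion rate lambda, assumed > phi = (1 + 5**0.5) / 2
ell = 0.01     # initial length ell(S) of the unstable segment
width = 1.0    # normalized width of Sigma_1

k = 0
length = ell
while length < width:   # once length >= width, P^k(S_1) crosses Sigma_1
    length *= lam
    k += 1
print(k, length)
```

With these values the crossing happens after finitely many steps, exactly as the proof requires; only the number of steps depends on the chosen constants.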
Assume now that $S$ cuts $\gamma^{s}_{-}$. Consider $S_{i}=S\cap\Sigma_{i}$.
Then $P(S_{1})$ is a segment ending at $q_{1}\in\gamma^{s}_{+}$, and
$P(S_{1})$ either crosses completely $\Sigma_{2}$ (and we are done) or is
contained in $\Sigma_{2}$. Now $P^{2}(S_{1})$ is a segment ending at $q_{2}$.
On the other hand $P(S_{2})$ is a segment starting at $q_{2}$. Now
$P^{2}(S_{1})\cup\\{q_{2}\\}\cup P(S_{2})$ is a segment of length at least
$\lambda\ell(S)$. This contradicts the choice of $S$, ending the proof. ∎
The statement of Proposition 5.5 does not announce a two-sided Lorenz
attractor, because the ${\mathcal{H}}^{i}$ are not open subsets, and thus the
robustness of the transitivity is not ensured. However, we will see below that
$X\in{\mathcal{H}}^{i}$ indeed exhibits a two-sided Lorenz attractor, except
for $X$ in a codimension $2$ submanifold.
Let ${\mathcal{H}}^{1,2}_{+}$ and ${\mathcal{H}}^{1,2}_{-}$ be the codimension
$2$ submanifolds included in ${\mathcal{H}}^{1}\cup{\mathcal{H}}^{2}$
consisting of vector fields $X$ so that $q_{1},q_{2}\in\gamma^{s}_{+}$ or
$q_{1},q_{2}\in\gamma^{s}_{-}$, respectively; see Figure 16. Thus both unstable
separatrices of $\sigma$ are homoclinic connections and are included in the
same stable separatrix of $\sigma$.
Figure 16: $X\in{\mathcal{H}}^{1,2}_{+}\cup{\mathcal{H}}^{1,2}_{-}$
The next lemma ensures that, for $X\in{\mathcal{H}}^{i}$ out of
${\mathcal{H}}^{1,2}_{+}$ and ${\mathcal{H}}^{1,2}_{-}$, the transitivity of
the attractor is robust, so that the attractor is a two-sided Lorenz attractor.
###### Lemma 5.6.
If
$X\in{\mathcal{H}}^{i}\setminus({\mathcal{H}}^{1,2}_{+}\cup{\mathcal{H}}^{1,2}_{-})$,
a neighborhood of $X$ is contained in
${\mathcal{O}}^{-,-}_{\varphi}\cup{\mathcal{O}}^{+,-}_{\varphi}\cup{\mathcal{O}}^{-,+}_{\varphi}\cup\widetilde{{\mathcal{O}}^{+,+}_{\varphi}}\cup{\mathcal{H}}^{1}\cup{\mathcal{H}}^{2}$.
###### Proof.
The proof consists in unfolding the homoclinic loops and checking that all the
possibilities lead to one of the sets listed. ∎
### 5.6 Two-sided Lorenz attractor in ${\mathcal{L}}^{+,-}$
Note that Propositions 5.2, 5.3, 5.4 and 5.5 and Lemma 5.6 together prove
###### Proposition 5.7.
For any $X\in{\mathcal{L}}^{+,-}$ the maximal invariant set in $U$ is a
transitive singular hyperbolic attractor meeting both stable separatrices of
$\sigma$; that is, it is a two-sided Lorenz attractor.
The region ${\mathcal{L}}^{+,-}$ has been defined as a union of many regions,
and the propositions listed above prove the conclusion of Proposition 5.7 in
each of these regions. Finally, some of these regions are not open, so Lemma
5.6 checks that the vector fields in these non-open regions admit a
neighborhood contained in the union of the other regions.
### 5.7 The collisions of the Lorenz attractor and a fake horseshoe: vector
fields in ${\mathcal{H}}{\mathcal{E}}_{i}$
In this section we consider vector fields $X$ in
${\mathcal{O}}^{+,+}_{\varphi}$, that is, such that the return map $P$ has $2$
fixed points $p_{i}\in\Sigma_{i}$, $i=1,2$, and a heteroclinic connection
between $\sigma$ and one of the points $p_{i}$; more precisely, $q_{i}$ belongs
to the stable leaf $W^{s}_{j}$ through $p_{j}$; note that $j\neq i$ because
$q_{i}$ is not in $\Sigma_{i}$ when $\Sigma_{i}$ contains a fixed point.
The case when both $q_{1}$ and $q_{2}$ belong to $W^{s}_{1}\cup W^{s}_{2}$
corresponds to
$X\in{\mathcal{H}}{\mathcal{E}}_{1}\cap{\mathcal{H}}{\mathcal{E}}_{2}$ which
is a codimension $2$ submanifold.
###### Proposition 5.8.
For any
$X\in{\mathcal{H}}{\mathcal{E}}_{1}\cap{\mathcal{H}}{\mathcal{E}}_{2}$, there
is a unique chain recurrence class, which is not transitive. Both $\Sigma_{-}$
and $\Sigma_{+}$ are invariant under $P$. The maximal invariant set of the
restriction of $P$ to $\Sigma_{i}$ is transitive: every unstable segment in
$\Sigma_{i}$ has iterates which cut every stable leaf in $\Sigma_{i}$.
Figure 17:
$X\in{\mathcal{H}}{\mathcal{E}}_{1}\cap{\mathcal{H}}{\mathcal{E}}_{2}$
The open set $U$ splits into $2$ regions, each containing a _full Lorenz
attractor_ (that is, a Lorenz attractor crossing all the stable leaves over
the corresponding region of $U$), and intersecting along $W^{s}(p_{1})\cup
W^{s}(p_{2})\cup W^{u}(\sigma)$. See Figure 17.
###### Proof.
The first return map is illustrated in Figure… The study of the first return
map in each rectangle $\overline{\Sigma}_{i}$ is classical for an expansion
rate larger than $\sqrt{2}<\varphi$. ∎
The submanifold
${\mathcal{H}}{\mathcal{E}}_{1}\cap{\mathcal{H}}{\mathcal{E}}_{2}$ cuts
${\mathcal{H}}{\mathcal{E}}_{1}$ in two (relative) open subsets, as $q_{2}$
belongs either to $\Sigma_{+}$ or $\Sigma_{-}$ (the connected components of
$\Sigma\setminus(W^{s}_{1}\cup W^{s}_{2})$).
Consider
$X\in{\mathcal{H}}{\mathcal{E}}_{1}\setminus({\mathcal{H}}{\mathcal{E}}_{1}\cap{\mathcal{H}}{\mathcal{E}}_{2})$,
and assume $q_{2}\in\Sigma_{+}$. Then the stable leaf through $q_{2}$ cuts
$\Sigma_{+}$ in two components: one, denoted by $\Sigma^{+}_{1}$, bounded by
the stable leaf $W^{s}_{1}$ through $p_{1}$, and the other, denoted by
$\Sigma^{+}_{2}$, bounded by $W^{s}_{2}$ (which contains $q_{1}$ by definition
of ${\mathcal{H}}{\mathcal{E}}_{1}$); notice that $\Sigma^{+}_{2}$ contains
the stable leaf $\gamma^{s}_{+}$, because $q_{2}\in\Sigma_{1}$.
###### Proposition 5.9.
Consider
$X\in{\mathcal{H}}{\mathcal{E}}_{1}\setminus({\mathcal{H}}{\mathcal{E}}_{1}\cap{\mathcal{H}}{\mathcal{E}}_{2})$,
and assume $q_{2}\in\Sigma_{+}$. Then:
* •
$X$ has a unique chain recurrence class in $U$, but this class is not
transitive and consists of two (singular) homoclinic classes $K_{-},K_{+}$
containing $\sigma$.
* •
The iterates by $P$ of any unstable segment in $\Sigma$ cut any stable leaf in
$\Sigma^{+}_{2}$ but finitely many of them. Furthermore, the return map $P$
restricted to the rectangle $\overline{\Sigma^{+}_{2}}$ is a classical Lorenz
map (for the geometrical model of the Lorenz attractor) for the parameter
corresponding to one homoclinic connection. See Figure 18.
Figure 18:
* •
The homoclinic class $K_{-}$ is a _singular fake horseshoe_ which intersects
$\Sigma_{-}$ and is disjoint from $\Sigma_{+}$.
* •
The intersection $K_{-}\cap K_{+}\cap\Sigma$ is contained in $W^{s}_{2}$ and
consists of $p_{2}$ and the orbit of $q_{1}$.
###### Proof.
We just refer the reader to the (very classical) pictures illustrating the
restriction of the return map $P$ to $\Sigma^{+}_{2}$ and to
$\Sigma\setminus\Sigma^{+}_{2}$. See Figure 18. ∎
A similar result holds for
$X\in\mathcal{HE}_{2}\setminus(\mathcal{HE}_{1}\cap\mathcal{HE}_{2})$ and
$q_{1}\in\Sigma_{-}$.
### 5.8 Fat Lorenz attractor: vector fields in ${\mathcal{H}}^{1,2}_{+}$ and
${\mathcal{H}}^{1,2}_{-}$
###### Proposition 5.10.
If $X\in{\mathcal{H}}^{1,2}_{+}$ then the iterates of every unstable segment
in $\Sigma\setminus\gamma^{s}_{+}$ cut every stable leaf except
$\gamma^{s}_{+}$. The maximal invariant set in $U$ is a homoclinic class which
is a singular attractor, that we call the fat Lorenz attractor.
A similar result holds for $X\in\mathcal{H}_{-}^{1,2}$, interchanging
$\gamma_{+}^{s}$ and $\gamma_{-}^{s}$.
###### Proof.
We just refer the reader to the pictures illustrating the return map $P$,
Figures 15 and 16. ∎
## 6 Collisions and collapses
The aim of this section is to describe the drastic changes of behaviour
appearing in the topological dynamics when a family $X_{\mu}$ crosses the
boundary of the regions we described in Section 5.
### 6.1 Collisions
Let us consider a $1$ parameter family
$X_{\mu}\in{\mathcal{O}}^{+,+}_{\varphi}$, $\mu\in[-1,1]$ with the following
properties:
* •
The family crosses the hypersurface ${\mathcal{H}}{\mathcal{E}}^{1}_{+}$
transversely at $\mu=0$: this means that $X_{0}$
exhibits a heteroclinic connection $q_{1}\in W^{s}_{2}$ and
$q_{2}\in\Sigma_{+}$.
* •
for $\mu<0$ the vector field $X_{\mu}$ belongs to ${\mathcal{L}}^{+}$: it has
an up Lorenz attractor and $\Sigma_{-}$ contains a fake horseshoe for $P$.
* •
for $\mu>0$ the vector field $X_{\mu}$ has a two-sided Lorenz attractor.
In other words, $X_{\mu}$ is a generic unfolding of the heteroclinic
connection $q_{1}\in W^{s}_{2}$. The cuspidal point $q_{1,\mu}$ moves with
the parameter $\mu$ and crosses $W^{s}_{2,\mu}$ (the stable leaf
through $p_{2,\mu}$) transversely at $\mu=0$.
For $\mu<0$, the up Lorenz attractor intersects exactly the stable
leaves in $\Sigma^{+}$ between $q_{2,\mu}$ and $q_{1,\mu}$, and the horseshoe in
$\Sigma_{-}$ is bounded by the stable leaves $W^{s}_{1,\mu}$, $W^{s}_{2,\mu}$
and contains the periodic points $p_{1,\mu}$ and $p_{2,\mu}$.
When $\mu$ tends to $0$, the point $q_{1,\mu}$ tends to a point $q_{1,0}$ in
$W^{s}_{2}$, and this point $q_{1,0}$ is also the limit of the intersection of
$W^{s}_{2}$ with one of the rectangles (the one containing $p_{1}$) of the
Markov partition of the fake horseshoe. This corresponds to a Cantor set in
$W^{s}_{2}$ of points of the fake horseshoe for $X_{\mu}$, $\mu<0$, whose
diameter tends to $0$ as $\mu\to 0$: the whole Cantor set tends to $q_{1,0}$.
For the parameter $0$, the Lorenz attractor no longer admits an attracting
neighborhood, and is no longer robust. The fake horseshoe becomes singular,
and intersects the Lorenz attractor along $\sigma$ and the orbits of $p_{2,0}$
and of $q_{1,0}$.
When $\mu>0$ the Lorenz attractor and the (singular) fake horseshoe merge into
a two-sided Lorenz attractor.
Notice that if all the vector fields $X_{\mu}$ are assumed to be of class
$C^{2}$, then for every parameter $X_{\mu}$ admits a unique SRB measure
$\nu_{\mu}$ (see [2]). For $\mu\leq 0$ the support of $\nu_{\mu}$ is the
Lorenz attractor, and in particular does not intersect $\Sigma_{-}$. For
$\mu>0$ the support of $\nu_{\mu}$ intersects every stable leaf in $\Sigma$.
Thus the support of $\nu_{\mu}$ changes drastically at the collision point.
###### Question 1.
Is the map $\mu\mapsto\nu_{\mu}$ continuous (for the weak topology) at
$\mu=0$?
### 6.2 Collapse of the horseshoe: parameter families crossing
${\mathcal{H}}^{1,2}_{+}$ or ${\mathcal{H}}^{1,2}_{-}$
Recall that ${\mathcal{H}}^{1,2}_{+}$ is the codimension $2$ submanifold
corresponding to the double homoclinic connection
$q_{1},q_{2}\in\gamma^{s}_{+}$. We consider here a $2$-parameter family
$X_{\mu}$, $\mu=(\mu_{1},\mu_{2})\in[-1,1]^{2}$ which is a generic unfolding
of this double homoclinic connection: in other words, the family cuts
transversely ${\mathcal{H}}^{1,2}_{+}$ at $\mu=(0,0)$. We assume furthermore
that, for any fixed $\mu_{2}$, the $1$-parameter family $X_{\mu_{1},\mu_{2}}$
is transverse to ${\mathcal{H}}^{1}$ at $\mu_{1}=0$ and, reciprocally, for any
fixed $\mu_{1}$, the $1$-parameter family $X_{\mu_{1},\mu_{2}}$
is transverse to ${\mathcal{H}}^{2}$ at $\mu_{2}=0$. More precisely one may
assume that
$X_{\mu_{1},\mu_{2}}\in{\mathcal{O}}_{\varphi}^{\omega_{1},\omega_{2}}$ where
$\omega_{i}\in\\{-,+\\}$ is the sign of $\mu_{i}$.
When one unfolds this double homoclinic connection, if $q_{1,\mu}$ enters
$\Sigma_{2}$ (that is, $\mu_{1}>0$), then a fixed point
$p_{1,\mu}\in\Sigma_{1}$ is created and tends to $q_{1,0}\in\gamma^{s}_{+}$ as
$\mu$ tends to $0$. In the same way, if $\mu_{2}>0$ then $q_{2}$ enters
$\Sigma_{1}$ and the fixed point $p_{2,\mu}$ is created in $\Sigma_{2}$ and
tends to $q_{2,0}\in\gamma^{s}_{+}$. The stable leaves $W^{s}_{1,\mu}$ and
$W^{s}_{2,\mu}$ bound the small strip $\Sigma^{+}$ in $\Sigma$ containing
$\gamma^{s}_{+}$ (tending to the segment $\gamma^{s}_{+}$ as $\mu\to 0$), and
a large strip $\Sigma^{-}$.
The vector field $X_{\mu}$ in ${\mathcal{O}}^{+,+}_{\varphi}$ belongs to
${\mathcal{L}}^{+}\cup{\mathcal{L}}^{-}$ if and only if both $q_{1}$ and
$q_{2}$ belong to the same component $\Sigma^{+}$ or $\Sigma^{-}$.
###### Lemma 6.1.
For small $\mu$, $\\{q_{1},q_{2}\\}$ is not contained in $\Sigma^{+}$. In
other words the closure of ${\mathcal{L}}^{+}$ is disjoint from
${\mathcal{H}}^{1,2}_{+}$.
Furthermore, the set $\\{\mu|\\{q_{1,\mu},q_{2,\mu}\\}\subset\Sigma_{-}\\}$ is
an open subset containing $(0,0)$ in its closure. More precisely, for any
$\alpha>0$ there is $\mu_{0}$ so that for any $0<\mu<\mu_{0}$ the vector field
$X_{\mu,\alpha\mu}$ belongs to ${\mathcal{L}}^{-}$.
###### Proof.
Assume, arguing by contradiction, that $\\{q_{1},q_{2}\\}\subset\Sigma^{+}$.
Thus $X\in{\mathcal{L}}^{+}$. Then $\Sigma^{+}$ is invariant under the action
of $P$. However, as $\Sigma^{+}$ is contained in an arbitrarily small
neighborhood of $\gamma^{s}_{+}$, the rate of expansion of $P$ in the unstable
cone is arbitrarily large, in particular $\gg 2$. So $\Sigma^{+}$ cannot be
invariant, proving the first claim.
The proof of the second point is as follows: consider one half line
$X_{\mu,\alpha\mu}$ and consider $\mu>0$ very small. Then the point $q_{1}$
(resp. $q_{2}$) belongs to $\Sigma_{2}$ (resp. $\Sigma_{1}$) and its distance
to $\gamma^{s}_{+}$ is $\mu$ (resp. $\alpha\mu$).
Consider any point $p$ in $\Sigma_{1}$ at a small distance
$>\frac{1}{2}\alpha\mu$ from $\gamma^{s}_{+}$. Consider an unstable segment
$\gamma_{p}$ realizing this distance, so that $\gamma_{p}$ starts at
$\gamma^{s}_{+}$ and ends at $p$. Then $P(\gamma_{p})$ is an unstable segment
starting at $q_{1}$ and ending at $P(p)$. The length $\ell(P(\gamma_{p}))$ is
arbitrarily larger than $\ell(\gamma_{p})$ (say, larger than
$100(1+\alpha)\mu$ as $\mu$ tends to $0$), as the expansion
rate close to $\gamma^{s}_{+}$ tends to infinity.
One deduces that the point $P(p)$ belongs to $\Sigma_{1}$ and is at a distance
larger than $99(1+\alpha)\mu$ from $\gamma^{s}_{+}$. One deduces that $P(p)$
(and thus $p$) does not belong to the stable leaf through the fixed point
$p_{1}\in\Sigma_{1}$. In particular $q_{2}$ belongs to $\Sigma_{-}$. One
proves in the same way that, for $\mu>0$ small, $q_{1}\in\Sigma_{-}$, and thus
the vector field belongs to ${\mathcal{L}}^{-}$, ending the proof. Figure 1
displays a local bifurcation diagram in this case. ∎
The lemma implies that, when one considers a segment in the parameter plane
crossing $(0,0)$ and entering ${\mathcal{O}}^{+,+}_{\varphi}$ transversely
to both ${\mathcal{H}}^{1}$ and ${\mathcal{H}}^{2}$, one creates a down
Lorenz attractor, which cuts every stable leaf in $\Sigma^{-}$. When the
parameter tends to $(0,0)$, $\Sigma^{-}$ tends to $\Sigma$ and the Lorenz
attractor tends to what we called the fat Lorenz attractor. The horseshoe,
corresponding to $\Sigma^{+}$, collapses into the double homoclinic connection.
### 6.3 Switching from up to down: family crossing
${\mathcal{H}}{\mathcal{E}}_{1}\cap{\mathcal{H}}{\mathcal{E}}_{2}$
In this section we consider a two parameter family
$X_{\mu}\in{\mathcal{O}}_{\varphi}^{+,+}$, $\mu=(\mu_{1},\mu_{2})$ crossing
transversely
${\mathcal{H}}{\mathcal{E}}_{1}\cap{\mathcal{H}}{\mathcal{E}}_{2}$ at $\mu=0$.
In other words $X_{(0,0)}$ exhibits two heteroclinic connections $q_{1}\in
W^{s}_{2}$ and $q_{2}\in W^{s}_{1}$ (recall that $W^{s}_{i}$ is the stable
leaf through the fixed point $p_{i}$). We have seen that this behavior implies
that $X_{(0,0)}$ has two _full Lorenz attractors_ which intersect along
$W^{s}(p_{1})\cup W^{s}(p_{2})\cup W^{u}(\sigma)$. See Figure 17.
Up to reparametrizing the family, one may assume that
$\begin{array}[]{c}\left(\mu_{1}=0\Leftrightarrow q_{1}\in
W^{s}_{2}\right)\mbox{ and }\left(\mu_{2}=0\Leftrightarrow q_{2}\in
W^{s}_{1}\right)\\\ \left(\mu_{1}>0\Leftrightarrow
q_{1}\in\Sigma^{+}\right)\mbox{ and }\left(\mu_{2}>0\Leftrightarrow
q_{2}\in\Sigma^{+}\right)\end{array}$
Figure 19: Local bifurcation: (a) $(\mu_{1},\mu_{2}),\mu_{i}>0$, (b)
$(\mu_{1},\mu_{2}),\mu_{i}=0$, (c) $(\mu_{1},\mu_{2}),\mu_{i}<0$.
With these notations, note that Theorem F is just a reformulation of the
results in the previous sections.
## 7 Parameter families $X_{\mu}\in{\mathcal{O}}_{\varphi}$, with parameters
in the torus
### 7.1 ${\mathbb{T}}^{2}$-parameter families
###### Definition 7.1.
Let
$\pi\colon{\mathbb{R}}^{2}\to{\mathbb{T}}^{2}={\mathbb{R}}^{2}/{\mathbb{Z}}^{2}$
be the canonical projection. Let $V\subset{\mathbb{R}}^{2}$ be an open subset
so that the projection $\pi(V)$ is the whole torus ${\mathbb{T}}^{2}$.
We say that a family $\\{X_{\mu}\in{\mathcal{O}}_{1}\\}_{\mu\in V}$ is a
$C^{r}$ family, $r\geq 0$, of vector fields in ${\mathcal{O}}_{1}$ parametrized
by ${\mathbb{T}}^{2}$ if
* •
the map $\mu\mapsto X_{\mu}$ is continuous for the $C^{1}$-topology.
* •
the map $(p,\mu)\mapsto X_{\mu}(p)$ is of class $C^{r}$.
* •
for any $\mu,\mu^{\prime}$ so that $\mu^{\prime}-\mu\in{\mathbb{Z}}^{2}$ one
has: the return maps $P,P^{\prime}$ of $X_{\mu},X_{\mu^{\prime}}$ on the
transverse cross section $\Sigma$ coincide: $P=P^{\prime}$. In particular,
$X_{\mu}$ and $X_{\mu^{\prime}}$ are smoothly topologically equivalent in
restriction to the attracting region $U$, by an equivalence whose restriction
to the cross section $\Sigma$ is the identity map.
In short, we say that $X_{\mu}$ is a ${\mathbb{T}}^{2}$-parameter family.
### 7.2 Essential families
The aim of this section is to define the notion of essential
${\mathbb{T}}^{2}$-families. Roughly speaking, we do not want the
family $X_{\mu}$ to be contained in a small neighborhood of a given vector
field $X_{0}$. We want that, when $\mu$ follows a simple closed path in
${\mathbb{T}}^{2}$ that is not homotopic to a point, the images $P(\Sigma_{1})$
or $P(\Sigma_{2})$ make a turn in $\Sigma$ in an essential way.
Let us first present a non-intrinsic definition of this phenomenon; we
will then see an intrinsic definition, showing that the notion does not
depend on the choices.
Consider a ${\mathbb{T}}^{2}$-parameter family $\\{X_{\mu}\\}$ and let
$\gamma^{s}_{+,\mu}$ and $\gamma^{s}_{-,\mu}$ be the associated stable leaves
corresponding to the discontinuities of the first return map. The leaves
$\gamma^{s}_{\pm,\mu}$ vary continuously with $\mu$. This allows us to choose a
parametrization of $\Sigma$, depending on $\mu$, so that
$\gamma^{s}_{+,\mu}$ is the segment $\\{0\\}\times[-1,1]$ in the annulus
$\Sigma={\mathbb{R}}/{\mathbb{Z}}\times[-1,1]$.
Consider now the points $q_{1,\mu}$ and $q_{2,\mu}$ (first intersection points
of the unstable separatrices of $\sigma_{\mu}$ with $\Sigma$). This defines
two continuous maps
$q_{1}\colon{\mathbb{T}}^{2}\to{\mathbb{R}}/{\mathbb{Z}}\times[-1,1]\mbox{ and
}q_{2}\colon{\mathbb{T}}^{2}\to{\mathbb{R}}/{\mathbb{Z}}\times[-1,1]$.
Then, composing $q_{1}$ and $q_{2}$ with the projection
$\psi\colon\SS^{1}\times[-1,1]\to\SS^{1}$, one gets two continuous maps
$\psi\circ q_{i}\colon{\mathbb{T}}^{2}\to\SS^{1}$, $i=1,2$.
Let us denote
$Q=\left(\psi\circ q_{1},\psi\circ
q_{2}\right)\colon{\mathbb{T}}^{2}\to{\mathbb{T}}^{2}.$
###### Definition 7.2.
With the notation above, we say that the family $X_{\mu}$ is essential if the
topological degree of $Q$ is $1$ (in other words, if $Q$ is homotopic to an
orientation preserving homeomorphism).
Let us now verify that this notion does not depend on the choice of the
parametrization of $\Sigma$.
Consider the hypersurface
$\Gamma^{s}_{+}\subset\Sigma\times{\mathbb{T}}^{2}$ defined by
$\Gamma^{s}_{+}=\bigcup_{\mu}\gamma^{s}_{+,\mu}\times\\{\mu\\}$. It is
homeomorphic to a $3$-torus in $\Sigma\times{\mathbb{T}}^{2}$.
To every closed path $c\colon\SS^{1}\to{\mathbb{T}}^{2}$ let us associate the
closed paths $q_{i,c}\colon\SS^{1}\to\Sigma\times{\mathbb{T}}^{2}$,
$i\in\\{1,2\\}$ so that
$q_{i,c}(\theta)=\left(q_{i,c(\theta)},c(\theta)\right)$ for
$\theta\in\SS^{1}$.
Notice that $q_{i,c}$ depends continuously on the closed path $c$. In
particular, its algebraic intersection number with the hypersurface
$\Gamma^{s}_{+}$ only depends on the homology class $[c]$. We denote it
$[q_{i,c}]\cdot\Gamma^{s}_{+}$.
One gets a map
$[c]\mapsto\left([q_{1,c}]\cdot\Gamma^{s}_{+},[q_{2,c}]\cdot\Gamma^{s}_{+}\right)\in{\mathbb{Z}}^{2}$
defined on $H_{1}({\mathbb{T}}^{2},{\mathbb{Z}})\simeq{\mathbb{Z}}^{2}$ with
values in ${\mathbb{Z}}^{2}$, and this map is linear, given by a $2$ by
$2$ matrix with ${\mathbb{Z}}$-entries. The topological degree in the first
presentation of the notion is just the determinant of this linear map: the
family is essential if and only if the determinant is $1$.
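The determinant criterion can be checked numerically for a concrete family: compute the winding number of $\psi\circ q_{i}$ along the two generating loops of ${\mathbb{T}}^{2}$ and take the determinant of the resulting integer matrix. The sketch below uses a synthetic family in which $\psi\circ q_{1}$ and $\psi\circ q_{2}$ are simply the two coordinates of $\mu$; this choice is an assumption made only for illustration, since for an actual family the maps $q_{i}$ come from the flow.

```python
def winding_number(path):
    """Total winding number of a closed path of circle points in [0, 1),
    obtained by summing increments wrapped into (-1/2, 1/2]."""
    total = 0.0
    for a, b in zip(path, path[1:] + path[:1]):
        d = (b - a) % 1.0
        if d > 0.5:
            d -= 1.0
        total += d
    return round(total)

# Synthetic (illustrative) family: psi(q1(mu)) = mu1 and psi(q2(mu)) = mu2.
psi_q1 = lambda m1, m2: m1 % 1.0
psi_q2 = lambda m1, m2: m2 % 1.0

n = 200
loop1 = [(k / n, 0.0) for k in range(n)]   # generator mu1 of H_1(T^2)
loop2 = [(0.0, k / n) for k in range(n)]   # generator mu2 of H_1(T^2)

M = [[winding_number([psi_q1(*m) for m in loop1]),
      winding_number([psi_q2(*m) for m in loop1])],
     [winding_number([psi_q1(*m) for m in loop2]),
      winding_number([psi_q2(*m) for m in loop2])]]

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(M, det)   # the family is essential exactly when det == 1
```

For this synthetic family the matrix is the identity, so the determinant is $1$ and the family is essential; any family homotopic to it gives the same matrix, which is what makes the criterion well defined.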
### 7.3 Building ${\mathbb{T}}^{2}$-families
Consider a vector field $X\in{\mathcal{O}}_{1}$ on a $3$-manifold $M$. Thus,
by definition, it is transverse to the annuli $\Sigma$, $D_{1}$ and $D_{2}$.
Furthermore, the first return map from $D_{1}\cup D_{2}$ to $\Sigma\cup
D_{1}\cup D_{2}$ is a smooth map $R\colon D_{1}\cup D_{2}\to\Sigma$, mapping
$D_{1}$ and $D_{2}$ onto two disjoint essential annuli in the interior of
$\Sigma$.
Consider annuli $\tilde{D}_{i}$, $i=1,2$, each containing $D_{i}$ in its
interior, so that $R$ extends to a diffeomorphism
$\tilde{R}\colon\tilde{D}_{1}\cup\tilde{D}_{2}\to\Sigma$ which is the first
return map from $\tilde{D}_{i}$ to
$\Sigma\cup\tilde{D}_{1}\cup\tilde{D}_{2}$.
We denote by $\Delta_{i}$ the union of the $X$-orbit segments joining points
$p\in\tilde{D}_{i}$ to $\tilde{R}(p)$. Thus $(\Delta_{i},X)$ is smoothly
orbitally equivalent to $(\tilde{D}_{i}\times[0,1],\frac{\partial}{\partial
t})$.
The next lemma expresses that any continuous deformation of the return map $R$
can be realized by a continuous family of vector fields in $M$.
###### Lemma 7.3.
Consider a continuous family $R_{\mu}\colon D_{1}\cup
D_{2}\to\tilde{R}(\tilde{D}_{1}\cup\tilde{D}_{2})$, $\mu\in V$, where
$V\subset{\mathbb{R}}^{2}$ is an open disk containing $0$. One assumes
$R_{0}=R$.
Then there is a family of vector fields $X_{\mu}$ with the following
properties:
* •
$X_{0}=X$
* •
for any $\mu$, $X_{\mu}$ satisfies all the topological properties (items 1 to
7) of the definition of the set ${\mathcal{O}}_{1}$.
* •
for any $\mu$, $X_{\mu}$ coincides with $X$ out of
$\Delta_{1}\cup\Delta_{2}$.
* •
for any $\mu$, the restriction of $X_{\mu}$ to $\Delta_{i}$ is transverse to
the fibers $\tilde{D}_{i}\times\\{t\\}$.
* •
the return map of $X_{\mu}$ from $D_{1}\cup D_{2}$ to $\Sigma$ is $R_{\mu}$.
###### Proof.
One extends $R_{\mu}$ to $\tilde{D}_{i}$ as a diffeomorphism $\tilde{R}_{\mu}$
so that all $\tilde{R}_{\mu}$ coincide with $\tilde{R}$ in a neighborhood of
the boundary $\partial\tilde{D}_{i}$.
Then we replace the restriction of $X$ to $\Delta_{i}$ by a vector field which
coincides with $X$ in a neighborhood of $\partial\Delta_{i}$ and whose
entrance-exit map is $\tilde{R}_{\mu}$. ∎
Assume now that the projection of $V$ on
${\mathbb{T}}^{2}={\mathbb{R}}^{2}/{\mathbb{Z}}^{2}$
covers the whole torus ${\mathbb{T}}^{2}$, and assume that the family
$R_{\mu}$ is ${\mathbb{Z}}^{2}$-periodic in the following sense:
* •
for any $\mu_{1},\mu_{2}\in V$ so that $\mu_{2}-\mu_{1}\in{\mathbb{Z}}^{2}$,
one has $R_{\mu_{1}}=R_{\mu_{2}}$.
Then the family $X_{\mu}$ defined in Lemma 7.3 is a
${\mathbb{T}}^{2}$-parameter family of vector fields having $U$ as an
attracting region.
### 7.4 An essential ${\mathbb{T}}^{2}$-family of vector fields in
${\mathcal{O}}_{\varphi}$ obtained by rotating the return maps $R|_{D_{i}}$
Consider $\Sigma\simeq\SS^{1}\times[-1,1]$ with the coordinates $\theta,t$ and
consider the constant cone field ${\mathcal{C}}$ defined by
${\mathcal{C}}(p)=\\{u=\alpha\frac{\partial}{\partial\theta}+\beta\frac{\partial}{\partial t}\in
T_{p}\Sigma\mbox{ so that }|\alpha|\geq|\beta|\\}$
We denote by ${\mathcal{R}}_{\alpha}\colon\Sigma\to\Sigma$ the rotation of
angle $\alpha\in\SS^{1}$, that is, $(\theta,t)\mapsto(\theta+\alpha,t)$.
Consider a vector field $X\in{\mathcal{O}}_{\varphi}$ with the following extra
properties:
* •
The constant cone field ${\mathcal{C}}$ is the unstable cone field
${\mathcal{C}}^{u}$
* •
The images $R(D_{i})$ are product annuli $\SS^{1}\times
I_{i}\subset\Sigma=\SS^{1}\times[-1,1]$.
Now consider the ${\mathbb{Z}}^{2}$-periodic family of maps
$R_{\alpha,\beta}\colon D_{1}\cup D_{2}\to\Sigma$,
$(\alpha,\beta)\in{\mathbb{T}}^{2}={\mathbb{R}}^{2}/{\mathbb{Z}}^{2}$, defined by
$R_{\alpha,\beta}={\mathcal{R}}_{\alpha}\circ R\mbox{ on }D_{1},\mbox{ and
}R_{\alpha,\beta}={\mathcal{R}}_{\beta}\circ R\mbox{ on }D_{2}.$
According to Lemma 7.3, one can realize the periodic family $R_{\mu}$ by a
${\mathbb{T}}^{2}$-parameter family of vector fields $X_{\mu}$, and one easily
checks:
###### Lemma 7.4.
The family $X_{\mu}$ is a ${\mathbb{T}}^{2}$-parameter essential family in
${\mathcal{O}}_{\varphi}$.
## 8 Reduction to a $1$-dimensional dynamics
### 8.1 Action of the return map on the space of stable leaves
Any $C^{1}$ vector field $X\in{\mathcal{O}}_{1}$ is singular hyperbolic in the
attracting region $U$, with a continuous strong stable direction. There is a
well defined _stable foliation_ (also called _strong stable foliation_)
tangent to the stable distribution, with leaves having the same regularity as
$X$. It admits therefore a well defined $2$-dimensional _(weak) stable
foliation_ (also called _center-stable foliation_) out of the strong stable
manifold of the singularity. The leaves of the weak stable foliation are the
orbits under the flow of the leaves of the strong stable foliation: out of the
strong stable manifold of $\sigma$, the vector field is not tangent to the
stable direction, so that these orbits are $2$-dimensional. Along the strong
stable manifold of $\sigma$ the vector field is tangent to the $1$-dimensional
strong stable leaf, so the foliation is singular.
The strong stable foliation is not tangent to $\Sigma$. However, the $2$
dimensional center-stable foliation cuts the annulus $\Sigma$ transversely,
along a one-dimensional foliation ${\mathcal{F}}^{s}$, which is the stable
foliation of the return map $P$. The segments $\gamma^{s}_{+}$ and
$\gamma^{s}_{-}$ are leaves of ${\mathcal{F}}^{s}$.
The foliation ${\mathcal{F}}^{s}$ is transverse to the unstable cone field
${\mathcal{C}}^{u}$. The leaves of the foliation ${\mathcal{F}}^{s}$ are
segments crossing $\Sigma$ (connecting the two boundary components of
$\Sigma$). The leaf space $\Sigma/{\mathcal{F}}^{s}$ is a (topological)
circle $\SS^{1}_{X}$. The leaves $\gamma^{s}_{+}$ and $\gamma^{s}_{-}$ induce
points $c_{+}$ and $c_{-}$, respectively, on $\SS^{1}_{X}$.
Note that the flow $X^{t}$ preserves the center-stable foliation, and thus the
first return map $P$ preserves the foliation ${\mathcal{F}}^{s}$.
As a consequence $P$ passes to the quotient as a map $f=f_{X}$ defined from
$\SS^{1}_{X}\setminus\\{c_{+},c_{-}\\}$ to $\SS^{1}_{X}$.
As $P(\Sigma_{i})$ is an essential pinched annulus in $\Sigma$, we deduce that
$f_{X}$, restricted to each interval of
$\SS^{1}_{X}\setminus\\{c_{+},c_{-}\\}$, is a diffeomorphism onto a punctured
circle.
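For concreteness, the combinatorial structure of $f_{X}$ can be mimicked by a piecewise-affine model: a circle map with two discontinuities, each branch an expanding injection onto the circle minus one point. Everything below (the cut points $c_{+}=0$ and $c_{-}=1/2$, the slope $2$ and the offsets) is an illustrative assumption, not the actual quotient map of the paper.

```python
C_PLUS, C_MINUS = 0.0, 0.5   # model positions of c_plus and c_minus on R/Z

def f(theta):
    """Piecewise-affine model of f_X: each branch has slope 2 and sends its
    open interval of length 1/2 onto the circle minus one point."""
    theta %= 1.0
    if theta in (C_PLUS, C_MINUS):
        raise ValueError("f is not defined at the discontinuities c_+, c_-")
    if theta < C_MINUS:                            # interval from Sigma_1
        return (2.0 * theta + 0.1) % 1.0           # image = circle minus {0.1}
    return (2.0 * (theta - C_MINUS) + 0.6) % 1.0   # image = circle minus {0.6}
```

Each branch is injective and expanding, matching the qualitative description of $f_{X}$; the excluded image points play the role of the punctures.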
### 8.2 Increasing the regularity of the foliation
In a recent work [5], Araújo and Melbourne adapt to our setting a condition
from [11] ensuring the smoothness of the strong stable foliation of $X$:
* •
there exists $t>0$ such that
$\|DX^{t}|_{E^{s}(x)}\|\cdot\|DX^{-t}|_{E^{cu}(X^{t}(x))}\|\cdot\|DX^{t}|_{E_{x}^{cu}}\|<1\quad\mbox{
for all $x\in\Lambda$}.$ (3)
As a consequence of condition (3), the weak stable foliation of $X$, and
also the stable foliation ${\mathcal{F}}^{s}$ of the first return map $P$, are
of class $C^{1}$. Therefore the circle $\SS^{1}_{X}$ is endowed with a natural
$C^{1}$-structure, and $f_{X}$ is of class $C^{1}$.
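Condition (3) is a domination (bunching) inequality between the contraction along $E^{s}$ and the rates along $E^{cu}$; for constant exponential rates it reduces to a one-line check. A sketch with illustrative rates, which are assumptions chosen only to show the arithmetic and are not taken from the paper:

```python
import math

# Constant-rate model of the three factors in condition (3):
#   ||DX^t|E^s||      = exp(-s t)   (contraction s > 0 along E^s),
#   ||DX^{-t}|E^cu||  = exp(-c t)   (expansion c > 0 along E^cu),
#   ||DX^t|E^cu||     = exp(C t)    (maximal growth C along E^cu).
def bunching(s, c, C, t=1.0):
    """Left-hand side of condition (3) for constant rates; the condition
    holds when the returned value is < 1, i.e. when s + c > C."""
    return math.exp(-s * t) * math.exp(-c * t) * math.exp(C * t)

print(bunching(2.0, 1.0, 1.5))   # s + c = 3 > C = 1.5: condition satisfied
```

In this model the inequality is exactly $s+c>C$: the stable contraction must dominate the worst ratio of rates inside $E^{cu}$, which is the standard shape of bunching conditions for $C^{1}$ regularity of invariant foliations.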
## 9 Symbolic dynamics and topological classification
As for the classical geometric model of the Lorenz attractor, we will see in
this section that to any vector field in ${\mathcal{O}}_{1}$ one can associate
combinatorial data, called the itinerary of the discontinuities. Furthermore,
these itineraries provide a topological classification of the vector fields in
the attracting region $U$.
In this section we fix a vector field $X\in{\mathcal{O}}_{1}$, and its return
map $P\colon\Sigma\to\Sigma$.
### 9.1 Itineraries for the return map $P$ on $\Sigma$
Recall that $\Sigma$ is endowed with two specific stable leaves
$\gamma^{s}_{+}$ and $\gamma^{s}_{-}$ which split $\Sigma$ into $\Sigma_{1}$
and $\Sigma_{2}$.
Note that $\gamma^{s}_{+}$ cuts both pinched annuli $P(\Sigma_{i})$ along one
stable leaf, except in the case where $q_{i}\in\gamma^{s}_{+}$.
Thus $P^{-1}(\gamma^{s}_{+})$ cuts $\Sigma_{i}$ into two components (one of
them being empty if $q_{i}\in\gamma^{s}_{+}$):
* •
we denote by $A_{0}$ and $A_{1}$ the two components of $\Sigma_{1}\setminus
P^{-1}(\gamma^{s}_{+})$, where $A_{0}$ is the one starting at $\gamma^{s}_{+}$
(for the positive orientation of the circle $\SS^{1}$ of
$\Sigma=\SS^{1}\times[-1,1]$). If $q_{1}\in\gamma^{s}_{+}$ then
$A_{1}=\emptyset$.
* •
we denote by $B_{0}$ and $B_{1}$ the two components of $\Sigma_{2}\setminus
P^{-1}(\gamma^{s}_{+})$, where $B_{0}$ is the one starting at $\gamma^{s}_{-}$.
If $q_{2}\in\gamma^{s}_{+}$ then $B_{1}=\emptyset$.
We consider ${\mathbb{X}}=\\{A_{0},A_{1},B_{0},B_{1}\\}^{\mathbb{N}}$, the
space of one-sided infinite words in the alphabet
$\\{A_{0},A_{1},B_{0},B_{1}\\}$.
A finite word of length $k$ is an element
$\\{\omega_{0},\dots,\omega_{k-1}\\}$ of $\\{A_{0},A_{1},B_{0},B_{1}\\}^{k}$.
Given $\omega=\\{\omega_{i}\\}_{i\in{\mathbb{N}}}\in{\mathbb{X}}$ we denote by
$[\omega]_{k}$ the initial word $\\{\omega_{0},\dots,\omega_{k-1}\\}$ of
length $k$ of $\omega$.
###### Remark 9.1.
For any $k>0$ and any $\varepsilon\in\\{+,-\\}$,
$P^{-k}(\gamma^{s}_{\varepsilon})$ consists of at most $2^{k}$ stable leaves.
###### Definition 9.2.
For every
$p\in\Sigma\setminus\bigcup_{j=0}^{k}P^{-j}(\gamma^{s}_{+}\cup\gamma^{s}_{-})$
we denote by $[\omega(p)]_{k}=\\{\omega_{0}(p),\dots,\omega_{k-1}(p)\\}$ the
word defined by $P^{i}(p)\in\omega_{i}(p)$ (recall that $\omega_{i}(p)$ is one
of the four regions $A_{0},A_{1},B_{0},B_{1}$).
$[\omega(p)]_{k}$ is called the _$k$-itinerary_ of $p$.
Figure 20 displays the choice of the alphabet above for the $1$-dimensional
dynamics.
Figure 20: The alphabet chosen.
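The $k$-itinerary of Definition 9.2 is directly computable for a model of the $1$-dimensional dynamics. The sketch below uses a piecewise-affine circle map with cuts at $0$ and $1/2$ (illustrative values, not the actual map of the paper); the regions $A_{0},A_{1},B_{0},B_{1}$ are the components cut out by the preimages of the cut point, as in the alphabet above.

```python
# k-itineraries for a piecewise-affine model of the 1-dimensional dynamics.
# Cut points, slopes and offsets are illustrative assumptions.

C_PLUS, C_MINUS = 0.0, 0.5

def f(theta):
    theta %= 1.0
    if theta < C_MINUS:                            # branch over Sigma_1
        return (2.0 * theta + 0.1) % 1.0
    return (2.0 * (theta - C_MINUS) + 0.6) % 1.0   # branch over Sigma_2

def region(theta):
    """A_0, A_1 partition Sigma_1 and B_0, B_1 partition Sigma_2, the split
    points being the preimages of c_plus = 0 on each branch."""
    theta %= 1.0
    if theta < C_MINUS:
        return "A0" if theta < 0.45 else "A1"  # 0.45 solves 2*t + 0.1 = 1
    return "B0" if theta < 0.7 else "B1"       # 0.7 solves 2*(t - 1/2) + 0.6 = 1

def itinerary(theta, k):
    """The k-itinerary: the regions visited by the first k orbit points."""
    word = []
    for _ in range(k):
        word.append(region(theta))
        theta = f(theta)
    return word

print(itinerary(0.31, 5))
```

On such a model one can also check the shift property of the itineraries: the itinerary of $f(\theta)$ is the itinerary of $\theta$ with the first letter removed.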
###### Lemma 9.3.
Consider any point $p\in\Sigma$, and $S\colon[-1,1]\to\Sigma$ a positively
oriented unstable segment centered at $p$ (i.e., $S(0)=p$). Then for any
$k\geq 0$ the itinerary $[\omega(S(t))]_{k}$ is well defined and constant for
$t>0$ (resp. $t<0$) small enough. This itinerary is independent of the choice
of $S$. We denote them
$[\omega_{-}(p)]_{k}\quad\mbox{and}\quad[\omega_{+}(p)]_{k}.$
If $p_{1}$ and $p_{2}$ belong to the same stable leaf, then
$[\omega_{\pm}(p_{1})]_{k}=[\omega_{\pm}(p_{2})]_{k}.$
In other words, the itinerary depends only on the stable leaf, and not on the
point in the leaf.
For any $p\in\Sigma$ one denotes by $\omega_{-}(p)$ and $\omega_{+}(p)$ the
infinite words whose first segments of length $k$ are, respectively,
$[\omega_{-}(p)]_{k}$ and $[\omega_{+}(p)]_{k}$. They are called the _down-_
and _up -itinerary_ of $p$. The itinerary of $p$ is the pair of sequences
$\omega(p)=(\omega_{-}(p),\omega_{+}(p)).$
###### Remark 9.4.
If $p\notin W^{s}(\sigma)$ (that is, $P^{k}(p)\notin\gamma^{s}_{\pm}$ for all
$k>0$), then $\omega_{-}(p)=\omega_{+}(p)$, and its initial segment of length $k$
is the _$k$-itinerary_ $[\omega(p)]_{k}$ of $p$. This shows that our terminology
and notations are consistent.
The next remark says that the itineraries $\omega_{+}$ and $\omega_{-}$ induce, in
some sense, a semi-conjugacy of $(\Sigma,P)$ with
$({\mathbb{X}},\mathfrak{S})$, where $\mathfrak{S}$ is the shift on
${\mathbb{X}}$. Strictly speaking, $P$ is defined on
$\Sigma\setminus(\gamma^{s}_{+}\cup\gamma^{s}_{-})$, which is not invariant
under $P$. Thus $\omega_{\pm}$ are semi-conjugacies in restriction to
$\Sigma\setminus W^{s}(\sigma)$.
As $P$ is discontinuous along $\gamma^{s}_{+}$ and $\gamma^{s}_{-}$, we also
describe the itineraries of these points.
###### Remark 9.5.
* •
For any $x\in\Sigma\setminus(\gamma^{s}_{+}\cup\gamma^{s}_{-})$ (that is,
$P(x)$ is defined), then
$\omega_{-}(P(x))=\mathfrak{S}(\omega_{-}(x))\quad\mbox{and}\quad\omega_{+}(P(x))=\mathfrak{S}(\omega_{+}(x))$
* •
For $x\in\gamma^{s}_{+}$ one has:
$\omega_{+}(x)=A_{0}\star\omega_{+}(q_{1})$
and
$\begin{array}[]{l}\omega_{-}(x)=B_{1}\star\omega_{-}(q_{2}),\mbox{ if
}q_{2}\notin\gamma^{s}_{+}\\\ \omega_{-}(x)=B_{0}\dots B_{0}\dots,\mbox{ if
}q_{2}\in\gamma^{s}_{+}\quad(\mbox{and then }B_{1}=\emptyset)\end{array}$
* •
For $x\in\gamma^{s}_{-}$ one has:
$\omega_{+}(x)=B_{0}\star\omega_{+}(q_{2})$
and
$\begin{array}[]{l}\omega_{-}(x)=A_{1}\star\omega_{-}(q_{1}),\mbox{ if
}q_{1}\notin\gamma^{s}_{+}\\\ \omega_{-}(x)=A_{0}\star\omega_{-}(q_{1})\mbox{
if }q_{1}\in\gamma^{s}_{+}\quad(\mbox{and then }A_{1}=\emptyset)\end{array}$
### 9.2 Itineraries for the $1$-dimensional dynamics
According to Lemma 9.3, the itineraries $\omega_{-}$ and $\omega_{+}$ are
functions of the stable leaf, so they pass to the quotient on the leaf
space $\SS^{1}_{X}$. We still denote by $\omega_{-}$ and $\omega_{+}$ the
quotient maps
$\omega_{\pm}\colon\SS^{1}_{X}\to{\mathbb{X}}.$
### 9.3 Order and topology
We endow the alphabet $\\{A_{0},A_{1},B_{0},B_{1}\\}$ with the total order
$A_{0}<A_{1}<B_{0}<B_{1},$
which corresponds to the order in which an unstable segment starting at
$\gamma^{s}_{+}$ crosses the corresponding regions in $\Sigma$.
We endow ${\mathbb{X}}$ with the corresponding lexicographic order, that we
denote by $\prec$ (and $\preccurlyeq$ for the non-strict order).
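In code, this lexicographic comparison is simply the comparison of rank sequences; the encoding below is a small illustrative sketch:

```python
# Lexicographic order on (finite prefixes of) itineraries induced by the
# total order A0 < A1 < B0 < B1 on the alphabet.
RANK = {"A0": 0, "A1": 1, "B0": 2, "B1": 3}

def precedes(w1, w2):
    """Strict lexicographic order (the relation written "prec" in the text)."""
    return [RANK[a] for a in w1] < [RANK[a] for a in w2]
```

Python's list comparison is already lexicographic, so comparing rank lists reproduces $\prec$: for example `precedes(["A0", "B1"], ["A1", "A0"])` is `True`, since $A_{0}<A_{1}$ decides the comparison at the first letter.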
###### Proposition 9.6.
Let $S\colon[0,1]\to\Sigma$ be an unstable segment, positively oriented, whose
interior is contained in $\Sigma\setminus\gamma^{s}_{+}$. Then:
* •
for any $t\in(0,1)$ one has $\omega_{-}(S(t))\preccurlyeq\omega_{+}(S(t)),$
* •
for any $t_{1},t_{2}\in[0,1]$ so that $t_{1}<t_{2}$ one has
$\omega_{+}(S(t_{1}))\prec\omega_{-}(S(t_{2})).$
This proposition has a straightforward translation for the itineraries
associated to the $1$-dimensional dynamics $f$.
###### Proof.
For the first item, we have already seen that if $S(t)$ does not belong to
$W^{s}(\sigma)$ then $\omega_{-}(S(t))=\omega_{+}(S(t))$, and there is nothing
to prove.
Consider $t_{1}<t_{2}$ so that $S(t_{i})\notin W^{s}(\sigma)$. As $\omega_{+}$
and $\omega_{-}$ coincide on $S(t_{i})$ we just note
$\omega^{i}=\omega_{+}(S(t_{i}))=\omega_{-}(S(t_{i}))$. If the first letter is
not the same, that is $S(t_{1})$ and $S(t_{2})$ are not in the same region
$A_{0},A_{1},B_{0},B_{1}$, then $(\omega^{1})_{0}<(\omega^{2})_{0}$, by the
choice of the order on our alphabet, and so $\omega^{1}\prec\omega^{2}$.
Assume now that $(\omega^{1})_{j}=(\omega^{2})_{j}$ for $j=0,\dots,k-1$ but
$(\omega^{1})_{k}\neq(\omega^{2})_{k}$.
###### Claim 1.
With the hypotheses above
* •
$P^{j}$, $0\leq j\leq k$, is defined on the unstable segment $S([t_{1},t_{2}])$
* •
$P^{j}(S([t_{1},t_{2}]))$ is contained in one of the regions
$A_{0},A_{1},B_{0},B_{1}$ for $0\leq j<k$,
* •
$P^{k}(S(t_{1}))$ and $P^{k}(S(t_{2}))$ are not in the same region
###### Proof.
The third item just says $(\omega^{1})_{k}\neq(\omega^{2})_{k}$ which is our
hypothesis.
The proofs of the first two items go together and by induction. As $S(t_{1})$
and $S(t_{2})$ belong to the same region, as the interior of $S$ is
disjoint from $\gamma^{s}_{+}$, and as $S$ is an unstable segment (transverse to
the fibration by stable leaves), the segment $S([t_{1},t_{2}])$ is
contained in one of the regions of the alphabet. As $P^{0}=id$ is clearly
defined on $S([t_{1},t_{2}])$, we have proved both items for $j=0$.
We assume now that both items have been proved for $0,\dots,j-1$ and let us
prove them for $j$. Thus $P^{j-1}(S([t_{1},t_{2}]))$ is contained in one of
the regions. Thus $P$ is well defined on this segment, meaning that $P^{j}$ is
well defined on $S([t_{1},t_{2}])$, proving the first item.
If $j\neq k$, then $(\omega^{1})_{j}=(\omega^{2})_{j}$: in other words, the end
points of the unstable segment $P^{j}(S([t_{1},t_{2}]))$ belong to the same
region. If the whole segment is contained in that region, we are done.
Otherwise, $P^{j}(S([t_{1},t_{2}]))$ crosses $\gamma^{s}_{+}$. This means that
$P^{j-1}(S([t_{1},t_{2}]))$ crosses $P^{-1}(\gamma^{s}_{+})$, and this (by
definition of the regions $A_{0},A_{1},B_{0},B_{1}$) contradicts the fact
that $P^{j-1}(S([t_{1},t_{2}]))$ is contained in one of these regions. This
ends the proof of the claim. ∎
###### Claim 2.
With the hypotheses above, $(\omega^{1})_{k}<(\omega^{2})_{k}$.
###### Proof.
According to Claim 1, $P^{k-1}(S([t_{1},t_{2}]))$ is an unstable segment
contained in one of the regions, and we have seen in the proof that this
implies that $P^{k}(S([t_{1},t_{2}]))$ is an unstable segment which does not
cross $\gamma^{s}_{+}$. As already seen, the choice of the order on
$A_{0},A_{1},B_{0},B_{1}$ implies that either $P^{k}(S([t_{1},t_{2}]))$ is
contained in one region (which contradicts
$(\omega^{1})_{k}\neq(\omega^{2})_{k}$) or
$(\omega^{1})_{k}<(\omega^{2})_{k}$, ending the proof of the claim. ∎
The claims above show that, for any $t_{1}<t_{2}$ so that $S(t_{i})\notin
W^{s}(\sigma)$ one has
$\omega^{1}\preccurlyeq\omega^{2}.$
###### Claim 3.
Given $t_{1}<t_{2}$ so that $S(t_{i})\notin W^{s}(\sigma)$, then
$\omega^{1}\neq\omega^{2}$.
###### Proof.
As in Claim 1, if $\omega^{1}=\omega^{2}$ then $P^{j}$ is well defined on
$S([t_{1},t_{2}])$ for any $j\geq 0$ and $P^{j}(S([t_{1},t_{2}]))$ is contained
in one of the regions $A_{0},A_{1},B_{0},B_{1}$. The length of these iterates
grows exponentially, and this forbids (for large iterates) these
segments from being contained in one region, ending the proof of the claim. ∎
Consider now any $t_{1}<t_{2}$. We need to prove
$\omega^{1}_{+}=\omega_{+}(S(t_{1}))\prec\omega_{-}(S(t_{2}))=\omega^{2}_{-}.$
By definition of $\omega_{+}$, there is a decreasing sequence $t_{1,n}<t_{2}$
tending to $t_{1}$ and so that:
* •
$S(t_{1,n})\notin W^{s}(\sigma)$
* •
$[\omega^{1}_{+}]_{n}=[\omega^{1,n}]_{n}$ (where $\omega^{1,n}$ is the
itinerary of $S(t_{1,n})$).
Note that the sequence $\omega^{1,n}$ is strictly decreasing for $\prec$ and
tends to $\omega^{1}_{+}$. In other words,
$\omega^{1}_{+}=\inf_{n}\omega^{1,n}.$
In the same way we fix an increasing sequence $t_{2,n}>t_{1,0}$ tending to
$t_{2}$ and so that
* •
$S(t_{2,n})\notin W^{s}(\sigma)$
* •
$[\omega^{2}_{-}]_{n}=[\omega^{2,n}]_{n}$ (where $\omega^{2,n}$ is the
itinerary of $S(t_{2,n})$).
Then
$\omega^{2}_{-}=\sup_{n}\omega^{2,n}.$
As $\omega^{1,n}\prec\omega^{2,n}$, we conclude
$\omega^{1}_{+}\prec\omega^{2}_{-}$
proving the second item of the proposition.
To end the proof of the proposition it remains to show that
$\omega_{-}(S(t))\preccurlyeq\omega_{+}(S(t))$ for any $t\in(0,1)$.
For that we consider sequences
$t_{-,n}<t_{-,n+1}<\dots<t<\dots<t_{+,n+1}<t_{+,n}$ tending to $t$ as
$n\to+\infty$ and so that
* •
$S(t_{\pm,n})\notin W^{s}(\sigma)$
* •
$[\omega_{-}(S(t))]_{n}=[\omega^{-,n}]_{n}$ (where $\omega^{-,n}$ is the
itinerary of $S(t_{-,n})$)
* •
$[\omega_{+}(S(t))]_{n}=[\omega^{+,n}]_{n}$ (where $\omega^{+,n}$ is the
itinerary of $S(t_{+,n})$)
We know that $\omega^{-,n}\prec\omega^{+,n}$ (the itinerary is strictly
increasing at points outside $W^{s}(\sigma)$). So for every $n$ one has
$[\omega_{-}(S(t))]_{n}\preccurlyeq[\omega_{+}(S(t))]_{n}$, ending the proof.
∎
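The monotonicity statements of Proposition 9.6 can be sanity-checked numerically on a toy Lorenz-like interval map (the map, the cut points, and the regions below are illustrative assumptions, not the paper's return map): along a segment whose sampled points avoid the cut lines of all iterates, $k$-itineraries are non-decreasing for the lexicographic order.

```python
RANK = {"A0": 0, "A1": 1, "B0": 2, "B1": 3}

def f(x):
    # toy expanding Lorenz-like map on [-1, 1], discontinuous at 0
    return 2 * x + 1 if x < 0 else 2 * x - 1

def region(x):
    if x < -0.5:
        return "A0"
    if x < 0.0:
        return "A1"
    if x < 0.5:
        return "B0"
    return "B1"

def itinerary_ranks(p, k):
    # k-itinerary encoded as ranks, so Python list comparison is lexicographic
    word = []
    for _ in range(k):
        word.append(RANK[region(p)])
        p = f(p)
    return word

# Sample points of an "unstable segment" (here: an interval of the line);
# these samples never hit a cut line within the first 8 iterates.
ts = [-0.9, -0.7, -0.4, -0.2, 0.1, 0.3, 0.7, 0.9]
words = [itinerary_ranks(t, 8) for t in ts]
assert all(w1 <= w2 for w1, w2 in zip(words, words[1:]))
```

The final assertion is the numerical counterpart of the second item of the proposition for these sample points.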
### 9.4 Admissible itineraries
Given a vector field $X\in{\mathcal{O}}_{1}$, we associate four itineraries:
$\omega^{+}_{+}=\omega_{+}(\gamma^{s}_{+})$,
$\omega^{+}_{-}=\omega_{-}(\gamma^{s}_{+})$,
$\omega^{-}_{+}=\omega_{+}(\gamma^{s}_{-})$ and
$\omega^{-}_{-}=\omega_{-}(\gamma^{s}_{-})$.
###### Definition 9.7.
We say that $\omega\in{\mathbb{X}}$ is admissible for the vector field $X$
(or, shortly, $X$-admissible) if it satisfies the following inequalities:
* •
$\omega^{+}_{+}\preccurlyeq\mathfrak{S}^{n}(\omega)\preccurlyeq\omega^{+}_{-}$ for every $n\in{\mathbb{N}}$.
* •
If $(\omega)_{i}\in\\{A_{0},A_{1}\\}$ then
$\mathfrak{S}^{i}(\omega)\preccurlyeq\omega^{-}_{-}$.
* •
If $(\omega)_{i}\in\\{B_{0},B_{1}\\}$ then
$\omega^{-}_{+}\preccurlyeq\mathfrak{S}^{i}(\omega)$.
We denote by ${\mathcal{A}}_{X}\subset{\mathbb{X}}$ the set of $X$-admissible
itineraries.
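On finite prefixes, the three admissibility conditions can be checked mechanically. The sketch below is an illustrative finite-word approximation (comparisons on truncations only approximate the conditions on infinite words):

```python
RANK = {"A0": 0, "A1": 1, "B0": 2, "B1": 3}

def rk(w):
    # encode a word as a list of ranks; list comparison is lexicographic
    return [RANK[a] for a in w]

def admissible_prefix(w, wpp, wpm, wmp, wmm):
    """Check the conditions of Definition 9.7 on a finite word w.
    wpp, wpm, wmp, wmm are (prefixes of) omega^+_+, omega^+_-,
    omega^-_+, omega^-_- respectively, of length at least len(w)."""
    for i in range(len(w)):
        shifted = rk(w[i:])  # prefix of the shifted word S^i(omega)
        m = len(shifted)
        if not rk(wpp[:m]) <= shifted <= rk(wpm[:m]):
            return False
        if w[i] in ("A0", "A1") and not shifted <= rk(wmm[:m]):
            return False
        if w[i] in ("B0", "B1") and not rk(wmp[:m]) <= shifted:
            return False
    return True
```

For instance, with periodic boundary itineraries $\omega^{+}_{+}=A_{0}B_{0}A_{0}B_{0}\ldots$, $\omega^{+}_{-}=B_{0}B_{0}\ldots$, $\omega^{-}_{+}=B_{0}A_{0}B_{0}A_{0}\ldots$, $\omega^{-}_{-}=A_{1}A_{1}\ldots$, the word $A_{0}B_{0}A_{0}B_{0}$ passes the check while $B_{1}A_{0}A_{0}A_{0}$ does not.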
###### Remark 9.8.
The subset ${\mathcal{A}}_{X}$ is a $\mathfrak{S}$-invariant compact set.
We note that if $X$ does not exhibit homoclinic loops then $\underset{x\to
c^{-}_{-}}{\lim}f^{i}_{X}(x)=\underset{x\to c^{+}_{+}}{\lim}f^{i}_{X}(x)$ and
$\underset{x\to c^{-}_{+}}{\lim}f^{i}_{X}(x)=\underset{x\to
c^{+}_{-}}{\lim}f^{i}_{X}(x)$, for all $i\in\mathbb{N}$, and therefore
$\begin{array}[]{c}\mathfrak{S}(\omega^{+}_{+})=\mathfrak{S}(\omega^{-}_{-})\\\
\mathfrak{S}(\omega^{+}_{-})=\mathfrak{S}(\omega^{-}_{+})\end{array}$ (4)
Remembering that $(\omega^{+}_{+})_{0}=A_{0}$ and $(\omega^{-}_{+})_{0}=B_{0}$,
one gets that these four itineraries are determined by $\omega^{+}_{-}$ and
$\omega^{-}_{-}$.
If $X$ exhibits a homoclinic loop, the equalities (4) are no longer true. For
instance, when $W^{u}_{+}(\sigma)\cap
W^{s}_{+}(\sigma)\neq\emptyset$ and $W^{u}_{-}(\sigma)\cap
W^{s}(\sigma)=\emptyset$, the itinerary $\omega_{+}^{+}$ is periodic while $\omega_{-}^{-}$ is
not. See Figure 21. However, since $\underset{x\to
c^{-}_{-}}{\lim}f_{X}(x)=\underset{x\to c^{+}_{+}}{\lim}f_{X}(x)$ and
$\underset{x\to c^{-}_{+}}{\lim}f_{X}(x)=\underset{x\to
c^{+}_{-}}{\lim}f_{X}(x)$, the set ${\mathcal{A}}_{X}$ is still determined by
$\\{\omega_{+}^{+},\omega_{-}^{+}\\}$ or
$\\{\omega_{-}^{-},\omega_{+}^{-}\\}$.
Figure 21: (a) $\mathcal{H}_{-}^{1}\cap\mathcal{H}^{2}_{+}$, (b)
$\mathcal{H}_{-}^{1}\cap\mathcal{H}_{-}^{2}$.
Itineraries of $\mathcal{H}_{-}^{1}\cap\mathcal{H}_{-}^{2}$: $\omega_{+}^{+}=A_{0}B_{0}A_{0}B_{0}\ldots$, $\omega_{-}^{-}=A_{1}A_{1}A_{1}A_{1}\ldots$, $\omega_{-}^{+}=B_{0}A_{0}B_{0}A_{0}\ldots$, $\omega_{+}^{-}=B_{0}B_{0}B_{0}B_{0}\ldots$
Itineraries of $\mathcal{H}_{-}^{1}\cap\mathcal{H}_{+}^{2}$: $\omega_{+}^{+}=A_{0}B_{0}B_{0}B_{0}\ldots$, $\omega_{-}^{-}=A_{1}A_{1}A_{1}A_{1}\ldots$, $\omega_{-}^{+}=B_{1}A_{1}A_{1}A_{1}\ldots$, $\omega_{+}^{-}=B_{0}B_{0}B_{0}B_{0}\ldots$
###### Lemma 9.9.
If $p\in\Sigma$ then $\omega_{-}(p)$ and $\omega_{+}(p)$ are $X$-admissible.
###### Proof.
Consider an unstable segment $S\colon[0,1]\to\Sigma$ whose interior is
disjoint from $\gamma^{s}_{+}$ and so that $S(0),S(1)\in\gamma^{s}_{+}$ and
$p\in S([0,1])$. Then Proposition 9.6 applies and implies that, if
$p\notin\gamma^{s}_{+}$ then
$\omega^{+}_{+}\preccurlyeq\omega_{-}(p)\preccurlyeq\omega_{+}(p)\preccurlyeq\omega^{+}_{-}$.
In particular,
$\begin{array}[]{c}\omega^{+}_{-}=\max\\{\omega_{-}(p),\omega_{+}(p),p\in\Sigma\\}\\\
\omega^{+}_{+}=\min\\{\omega_{-}(p),\omega_{+}(p),p\in\Sigma\\}\end{array}$
This shows that $\omega_{-}(p)$ and $\omega_{+}(p)$ satisfy the
first item of the definition of $X$-admissibility.
The other two items correspond to several cases whose proofs are very similar;
let us just present one of these cases.
Let $p=S(t)$ so that $(\omega_{+}(p))_{0}\in\\{A_{0},A_{1}\\}$. Then there is
a decreasing sequence $t_{n}\to t$ so that $S(t_{n})\notin W^{s}(\sigma)$ and
$[\omega_{+}(p)]_{n}=[\omega^{n}]_{n}$
where $\omega^{n}$ denotes $\omega_{-}(S(t_{n}))=\omega_{+}(S(t_{n}))$.
Then $\omega_{+}(p)=\inf\omega^{n}$. On the other hand, $S(t_{n})$ is a point
outside $W^{s}(\sigma)$ contained in $A_{0}\cup A_{1}$, and thus in
$\Sigma_{1}$. Proposition 9.6 implies that $\omega^{n}\prec\omega^{-}_{-}$,
finishing this case, and the proof. ∎
### 9.5 Realizing $X$-admissible itineraries
The aim of this section is to prove
###### Proposition 9.10.
Given any $\omega\in{\mathcal{A}}_{X}$ there is $p\in\Sigma$ so that
$\omega\in\\{\omega_{+}(p),\omega_{-}(p)\\}.$
###### Remark 9.11.
Proposition 9.6 implies that any two points satisfying the conclusion of
Proposition 9.10 belong to the same stable leaf.
Proposition 9.10 is a direct consequence of Lemma 9.12 below:
###### Lemma 9.12.
Given any $\omega\in{\mathcal{A}}_{X}$, the set $\Omega_{n}$ of points $p$ in
$\Sigma$ so that
$[\omega]_{n}\in\\{[\omega_{+}(p)]_{n},[\omega_{-}(p)]_{n}\\}$
is a non-empty compact subset of $\Sigma$.
Assuming that Lemma 9.12 is true, the sequence $\Omega_{n}$ is a nested
sequence of non-empty compact sets, and any $p\in\bigcap\Omega_{n}$ satisfies
$\omega\in\\{\omega_{+}(p),\omega_{-}(p)\\}$. Note that, indeed, this
intersection is exactly the stable leaf through $p$, according to Proposition
9.6.
It remains to prove Lemma 9.12.
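The nested-interval mechanism behind Lemma 9.12 can be illustrated on a toy Lorenz-like map (again an illustrative assumption, not the paper's return map): the points whose $k$-itinerary matches a given admissible word form a non-empty interval, and a matching point can be located by a simple search.

```python
def f(x):
    # toy expanding Lorenz-like map on [-1, 1], discontinuous at 0
    return 2 * x + 1 if x < 0 else 2 * x - 1

def region(x):
    if x < -0.5:
        return "A0"
    if x < 0.0:
        return "A1"
    if x < 0.5:
        return "B0"
    return "B1"

def itinerary(p, k):
    word = []
    for _ in range(k):
        word.append(region(p))
        p = f(p)
    return word

def realize(word, samples=200001):
    """Return some p in [-1, 1] whose k-itinerary equals `word`,
    or None if no sampled point matches (e.g. the word is not admissible)."""
    k = len(word)
    for i in range(samples):
        p = -1 + 2 * i / (samples - 1)
        if itinerary(p, k) == word:
            return p
    return None
```

For instance, `realize(["A1", "B0", "A0", "A1"])` finds a point (any point of the corresponding interval works), while `realize(["A0", "B0"])` returns `None`: no point of the $A_{0}$ region can enter $B_{0}$ in one step under this toy map.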
### 9.6 Proof of Lemma 9.12
For any itinerary $\omega\in{\mathcal{A}}_{X}$ we denote by
$\Omega_{n}(\omega)$ the set of points $q\in\Sigma$ so that
$[\omega]_{n}\in\\{[\omega_{-}(q)]_{n},[\omega_{+}(q)]_{n}\\}$.
###### Lemma 9.13.
Let $\omega\in{\mathcal{A}}_{X}$ be such that there is some $p\in\Sigma$ for which
$\omega\in\\{\omega_{-}(p),\omega_{+}(p)\\}$.
Fix $n\in{\mathbb{N}}$ and denote by $\Omega_{n}(\omega)$ the set of points
$q\in\Sigma$ so that $[\omega]_{n}\in\\{[\omega_{-}(q)]_{n},[\omega_{+}(q)]_{n}\\}$.
Then $\Omega_{n}(\omega)$ is the closure of a connected component of
$\Sigma\setminus\Gamma_{n}$, where
$\Gamma_{n}\stackrel{{\scriptstyle\scriptscriptstyle\rm
def}}{{=}}\bigcup_{i=0}^{n-1}P^{-i}(\gamma^{s}_{+}\cup\gamma^{s}_{-}\cup P^{-1}(\gamma^{s}_{+})).$
###### Proof.
First notice that $\Gamma_{n}$ consists of the union of finitely many stable
leaves.
Consider an unstable segment $S\colon[0,1]\to\Sigma$ whose interior is
disjoint from $\Gamma_{n}$, and having its end points on $\Gamma_{n}$. Let
$\Omega_{n}$ be the closure of the connected component of
$\Sigma\setminus\Gamma_{n}$ containing $S((0,1))$. Then for any $0\leq i<n$,
$P^{i}(S((0,1)))$ is well defined and disjoint from $\gamma^{s}_{\pm}$ and from
$P^{-1}(\gamma^{s}_{+})$. In other words, $P^{i}(S((0,1)))$ is contained in one
of the regions defining the alphabet. Thus $[\omega_{-}(S(t))]_{n}$ does not
depend on $t\in(0,1)$ and is equal to $[\omega_{+}(S(0))]_{n}$ and
$[\omega_{-}(S(1))]_{n}$.
One deduces that $[\omega_{-}]_{n}$ and $[\omega_{+}]_{n}$ are equal and
constant on the interior of $\Omega_{n}$, that $[\omega_{-}]_{n}$ takes the
same value on one of the boundary stable leaves, and $[\omega_{+}]_{n}$ on the
other boundary stable leaf.
This proves that $\Omega_{n}(\omega)$ is a union of such closures of connected
components $\Omega_{n}$.
Fix $\Omega_{n}\subset\Omega_{n}(\omega)$ and let $q\notin\Omega_{n}$. If $q$
is not in the same region $\\{A_{0},A_{1},B_{0},B_{1}\\}$ as $\Omega_{n}$ then
$[\omega_{\pm}(q)]_{n}$ is not $[\omega]_{n}$. Otherwise, there is an unstable
segment (still denoted by $S$) in this region (hence disjoint from
$\gamma^{s}_{+}\cup\gamma^{s}_{-}\cup P^{-1}(\gamma^{s}_{+})$), joining $q$ to
a point $p$ in the interior of $\Omega_{n}$.
The interior $\operatorname{int}(S)$ of the segment $S$ crosses the boundary of
$\Omega_{n}$, that is, crosses $\Gamma_{n}$. Let $i$ be the smallest integer so
that $\operatorname{int}(S)\cap P^{-i}(\gamma^{s}_{+}\cup\gamma^{s}_{-}\cup
P^{-1}(\gamma^{s}_{+}))\neq\emptyset$.
Then $P^{i-1}(S)$ is contained in the closure of one of the regions
$\\{A_{0},A_{1},B_{0},B_{1}\\}$ but $P^{i}(S)$ is not. This implies that the two
end points of $P^{i}(S)$ are not in the same region
$\\{A_{0},A_{1},B_{0},B_{1}\\}$. This implies that $[\omega_{+}(q)]_{i}$ and
$[\omega_{-}(q)]_{i}$ are different from $[\omega]_{i}$, proving that
$q\notin\Omega_{n}(\omega)$, ending the proof. ∎
###### Proof of Lemma 9.12.
The proof goes by induction.
We want to prove that, for every $n\geq 0$ and every
$\omega\in{\mathcal{A}}_{X}$, $\Omega_{n}(\omega)$ is the closure of one
connected component of $\Sigma\setminus\Gamma_{n}$.
Let us check this is true for $n=0$. Each itinerary of length $0$ is one
letter of our alphabet, which corresponds to a connected component of
$\Sigma\setminus\Gamma_{0}$, and its closure is the one we announced.
We now prove it also holds for $n=1$. Assume for instance that
$(\omega)_{0}=A_{0}$. Thus $\Omega_{0}(\omega)=\bar{A}_{0}$, and
$P(\Omega_{0}(\omega))$ is a cuspidal triangle starting at $q_{1}$ and ending
at $\gamma^{s}_{+}$.
Now, by definition of ${\mathcal{A}}_{X}$ one has
$\omega_{+}(\gamma_{+}^{s})\preccurlyeq\omega\preccurlyeq\omega_{-}(\gamma^{s}_{-}).$
As the first letter of $\omega$ is the same as the first letter of
$\omega_{+}(\gamma^{s}_{+})$, one gets
$\mathfrak{S}(\omega_{+}(\gamma^{s}_{+}))=\omega_{+}(q_{1})\preccurlyeq\mathfrak{S}(\omega).$
In particular $(\omega)_{1}=(\mathfrak{S}(\omega))_{0}$ either is strictly
bigger than or is equal to $(\omega_{+}(q_{1}))_{0}$. In both cases
$P(\Omega_{0}(\omega))$ intersects $\Omega_{0}(\mathfrak{S}(\omega))$, proving
that $\Omega_{1}(\omega)$ is not empty. Now Lemma 9.13 asserts that it is the
closure of a connected component of $\Sigma\setminus\Gamma_{1}$, proving the
induction hypothesis in that case. The cases $(\omega)_{0}=A_{1},B_{0},B_{1}$
are very similar.
We assume now that the induction hypothesis has been proved for $i=0,\dots,n$.
Consider $\omega\in{\mathcal{A}}_{X}$. By the induction hypothesis,
$\Omega_{n}(\omega)$ is the closure of one connected component of
$\Sigma\setminus\Gamma_{n}$.
We split the proof in cases.
Case 1: $\gamma^{s}_{+}$ and $\gamma^{s}_{-}$ are not contained in the compact
set $\Omega_{n}(\omega)$.
Then $P$ is defined on $\Omega_{n}(\omega)$ and the boundary $\partial
P(\Omega_{n}(\omega))$ is contained in $\Gamma_{n-1}$.
This implies that $P(\Omega_{n}(\omega))$ crosses every stable leaf in the
closure of a connected component of $\Sigma\setminus\Gamma_{n-1}$, which has
to be $\Omega_{n-1}(\mathfrak{S}(\omega))$ (by Lemma 9.13).
By the induction hypothesis $\Omega_{n}(\mathfrak{S}(\omega))$ is the closure
of a connected component of $\Sigma\setminus\Gamma_{n}$, and is contained in
$\Omega_{n-1}(\mathfrak{S}(\omega))$.
This implies that $P(\Omega_{n}(\omega))$ intersects
$\Omega_{n-1}(\mathfrak{S}(\omega))$. Thus $\Omega_{n+1}(\omega)$ is not
empty, and therefore is a connected component of $\Sigma\setminus\Gamma_{n+1}$
by Lemma 9.13.
Case 2: We now assume that one of the boundary components of
$\Omega_{n}(\omega)$ is $\gamma^{s}_{+}$ and the other is not
$\gamma^{s}_{-}$. Reversing the orientation if necessary, we assume that the
positively oriented unstable segments starting at $\gamma^{s}_{+}$ enter
$\Omega_{n}(\omega)$.
Then $P(\Omega_{n}(\omega))$ is a cuspidal triangle starting at $q_{1}$ and
ending on a stable leaf in $\Gamma_{n-1}$.
Now $\Omega_{n-1}(\mathfrak{S}(\omega))$ is (induction hypothesis) the closure
of a connected component in $\Sigma\setminus\Gamma_{n-1}$, which contains
$P(\Omega_{n}(\omega))$ and thus contains $q_{1}$.
Now $\Omega_{n}(\mathfrak{S}(\omega))$ is (induction hypothesis) the closure
of a connected component in $\Sigma\setminus\Gamma_{n}$.
###### Claim 4.
$P(\Omega_{n}(\omega))\cap\Omega_{n}(\mathfrak{S}(\omega))\neq\emptyset$
###### Proof.
Note that the first letter of $\omega$ is $A_{0}$. As
$\omega\in{\mathcal{A}}_{X}$ one has
$\omega_{+}(\gamma^{s}_{+})\preccurlyeq\omega$. As their first letters are
equal, this implies
$\mathfrak{S}(\omega_{+}(\gamma^{s}_{+}))=\omega_{+}(q_{1})\preccurlyeq\mathfrak{S}(\omega).$
Consider a positively oriented unstable segment $S\colon[0,1]\to\Sigma$
crossing every stable leaf in $\Omega_{n-1}(\mathfrak{S}(\omega))$ and
containing $q_{1}=S(t_{0})$. Then $S$ is crossing
$\Omega_{n}(\mathfrak{S}(\omega))$ at point $S(t)$.
Recall that $\omega_{+}(q_{1})\preccurlyeq\mathfrak{S}(\omega)$.
Recall that the function $[\omega_{+}(S(t))]_{n}$ is non-decreasing in $t$, so
that the interval $S^{-1}(\Omega_{n}(\mathfrak{S}(\omega)))\subset[0,1]$ does not lie below
$S^{-1}(\Omega_{n}(\omega_{+}(q_{1})))$.
As a consequence $S^{-1}(\Omega_{n}(\mathfrak{S}(\omega)))\subset[0,1]$ either
coincides with $S^{-1}(\Omega_{n}(\omega_{+}(q_{1})))$ or lies strictly to the right
of $t_{0}$.
In both cases $P(\Omega_{n}(\omega))$ intersects
$\Omega_{n}(\mathfrak{S}(\omega))$, concluding.
∎
This implies that $\Omega_{n+1}(\omega)$ is not empty, and Lemma 9.13
concludes that case.
Case 3: We now assume that one of the boundary components of
$\Omega_{n}(\omega)$ is $\gamma^{s}_{-}$ and the other is not
$\gamma^{s}_{+}$.
Reversing the orientation if necessary, we assume that the positively oriented
unstable segments starting at $\gamma^{s}_{-}$ enter $\Omega_{n}(\omega)$.
Then $P(\Omega_{n}(\omega))$ is a cuspidal triangle starting at $q_{2}$ and
ending on a stable leaf in $\Gamma_{n-1}$.
Note that the first letter of $\omega$ is $B_{0}$. As
$\omega\in{\mathcal{A}}_{X}$ one has
$\omega_{+}(\gamma^{s}_{-})\preccurlyeq\omega$. As their first letters are
equal, this implies
$\mathfrak{S}(\omega_{+}(\gamma^{s}_{-}))=\omega_{+}(q_{2})\preccurlyeq\mathfrak{S}(\omega).$
The proof follows now in a similar way to Case 2.
Case 4: Finally, we assume that the two boundary components of
$\Omega_{n}(\omega)$ are $\gamma^{s}_{-}$ and $\gamma^{s}_{+}$. This implies
that $\Omega_{n}(\omega)$ is the closure of $\Sigma_{1}$ or of $\Sigma_{2}$,
and thus $P(\Omega_{n}(\omega))$ crosses $\Omega_{n}(\mathfrak{S}(\omega))$, concluding.
Now the proof of Lemma 9.12 (and thus of Proposition 9.10) is complete. ∎
### 9.7 Itineraries, conjugacy, and topological equivalence
###### Theorem 9.14.
Consider $X,Y\in{\mathcal{O}}_{1}$ and let $f_{X},f_{Y}$ be the corresponding
$1$-dimensional dynamics. Assume that the itineraries satisfy
$\omega_{-}(\gamma^{s}_{-},X)=\omega_{-}(\gamma^{s}_{-},Y)$ and
$\omega_{-}(\gamma^{s}_{+},X)=\omega_{-}(\gamma^{s}_{+},Y)$. Then there is an
orientation-preserving map of $\SS^{1}$ which is a conjugation between $f_{X}$
and $f_{Y}$.
###### Proof.
We have seen that the sets ${\mathcal{A}}_{X}$ and ${\mathcal{A}}_{Y}$ of
admissible itineraries for $X$ and $Y$ are completely determined by
$\omega_{-}(\gamma^{s}_{\pm},X)$ and $\omega_{-}(\gamma^{s}_{\pm},Y)$,
respectively. Thus
${\mathcal{A}}_{X}={\mathcal{A}}_{Y}\stackrel{{\scriptstyle\scriptscriptstyle\rm
def}}{{=}}{\mathcal{A}}.$
Now for any $\omega\in{\mathcal{A}}$, Proposition 9.10 and Remark 9.11 imply
that there are unique points $x_{\omega}\in\SS^{1}_{X}$ and
$y_{\omega}\in\SS^{1}_{Y}$ so that
$\omega\in\\{\omega_{-}(x_{\omega},f_{X}),\omega_{+}(x_{\omega},f_{X})\\}$ and
$\omega\in\\{\omega_{-}(y_{\omega},f_{Y}),\omega_{+}(y_{\omega},f_{Y})\\}$.
We define $h(x_{\omega})=y_{\omega}$. This defines a bijection from
$\SS^{1}_{X}$ to $\SS^{1}_{Y}$, which sends $\gamma^{s}_{i,X}$ on
$\gamma^{s}_{i,Y}$.
The punctured circle is an interval endowed with an order (from the positive
orientation of the unstable segments), and Proposition 9.6 implies that
$\omega\mapsto x_{\omega}$ and $\omega\mapsto y_{\omega}$ are increasing. This
implies that $h\colon x_{\omega}\mapsto y_{\omega}$ is an increasing bijection
from $\SS^{1}_{X}\setminus\\{\gamma^{s}_{+,X}\\}$ onto
$\SS^{1}_{Y}\setminus\\{\gamma^{s}_{+,Y}\\}$. An increasing bijection between
two intervals is a homeomorphism, so that $h$ is a homeomorphism.
The fact that $h$ is a conjugacy now comes from the fact that
$x_{\mathfrak{S}(\omega)}=f_{X}(x_{\omega})$ for $x_{\omega}\notin\\{\gamma^{s}_{\pm,X}\\}$. ∎
Recall that the discontinuities are fixed points for the conjugacy $h$
constructed above.
We finish by proving Theorem G, which establishes that the restrictions of
$X,\,Y\in\mathcal{O}_{1}$ to their maximal invariant sets are topologically
equivalent by a conjugacy close to the identity if, and only if, $X$ and $Y$ have
the same itineraries.
###### Proof.
(of Theorem G) ($\Rightarrow$) A conjugation $\mathbb{H}$ between
$X|_{\Lambda_{X}}$ and $Y|_{\Lambda_{Y}}$ induces a topological conjugation
$h:\SS^{1}\to\SS^{1}$ between $f_{X}:\SS^{1}\to\SS^{1}$ and
$f_{Y}:\SS^{1}\to\SS^{1}$. Let
$\varepsilon=\frac{1}{2}\min\\{d(D_{1},\Sigma),d(D_{2},\Sigma)\\}$ (see
Section 3.1 for the definitions of $D_{1}$ and $D_{2}$). If
$d(\mathbb{H},id)<\varepsilon$, $h$ is orientation preserving and
$h(c_{i,X})=c_{i,Y}$ for $i\in\\{-,+\\}$, and therefore $X$ and $Y$ have the
same itineraries.
($\Leftarrow$) Fix $p\in\Sigma\cap\Lambda_{X}$. Because $X$ and $Y$ have
the same itineraries, Lemma 9.12 together with Theorem 9.14 ensure the
existence of $q\in\Sigma\cap\Lambda_{Y}$ such that for all $n\in\mathbb{N}$,
points in $\gamma^{s}_{X}(P_{X}^{-n}(p))$ and $\gamma^{s}_{Y}(P_{Y}^{-n}(q))$
have the same itineraries. Thus
$C_{n}=\\{P_{Y}^{n}(\gamma^{s}_{Y}(P_{Y}^{-n}(q)))\\}_{n\in\mathbb{N}}$ is a
sequence of compact sets in $\gamma^{s}_{Y}(q)$ converging to a single point,
and we define $H(p)=\underset{n\in\mathbb{N}}{\cap}C_{n}$. Lemma 9.9 implies
that $H$ is onto. For $p_{1},p_{2}\in\Sigma$ such that
$\gamma^{s}_{X}(p_{1})\neq\gamma^{s}_{X}(p_{2})$, it is easy to see that
$H(p_{1})\neq H(p_{2})$. For $p_{1}$ and $p_{2}$ in the same leaf, there
exists $n_{1}\in\mathbb{N}$ for which $P^{-n_{1}}(p_{1})$ and
$P^{-n_{1}}(p_{2})$ belong to different connected components of
$\Sigma\setminus\\{\gamma^{s}_{-,X},\gamma^{s}_{+,X}\\}$, which implies that
$\gamma^{s}_{X}(P_{X}^{-n_{1}}(p_{1}))\cap\gamma^{s}_{X}(P_{X}^{-n_{1}}(p_{2}))=\emptyset$
and hence the corresponding compact sets $C_{n}(p_{1})$ and $C_{n}(p_{2})$ are
disjoint for all $n>n_{1}$, proving that $H$ is injective. The existence of
unstable cone fields around $\Sigma$ and the continuity of $h$ and $h^{-1}$
give the continuity of $H$ and $H^{-1}$. To finish, for
$p\in\Sigma\cap\Lambda_{X}$, consider curves $\alpha\subset\mathcal{O}(p)$ and
$\beta\subset\mathcal{O}(H(p))$, parametrized by arc length, joining $p$ to
$D^{1}\cup D^{2}$ and $H(p)$ to $D^{1}\cup D^{2}$ respectively, in such a way
that $\alpha(t_{1}),\beta(t_{2})\notin D^{1}\cup D^{2}$ for all
$0<t_{1}<\ell(\alpha)$ and $0<t_{2}<\ell(\beta)$. Letting $\rho$ be the ratio
of the length of $\beta$ to the length of $\alpha$, we define
$\mathbb{H}(\alpha(t))=\beta(\rho t)$. Extend this map to segments of
trajectories leaving $D^{1}\cup D^{2}$ and returning to $\Sigma$ in the same
way as before. Then $\mathbb{H}$ defines a topological equivalence. Note that
$\mathbb{H}(D_{i})=D_{i}$ and $\mathbb{H}(\Sigma)=\Sigma$, which implies that
$d(\mathbb{H},id)<\varepsilon$, and we have the result.
∎
# Local Dimensions of Self-similar Measures Satisfying the Finite Neighbour
Condition
Kathryn E. Hare and Alex Rutar
Dept. of Pure Mathematics, University of Waterloo, Waterloo, Canada N2L 3G1<EMAIL_ADDRESS>
Mathematical Institute, North Haugh, St Andrews, Fife KY16 9SS, Scotland ar339@st-andrews.ac.uk
###### Abstract.
We study sets of local dimensions for self-similar measures in $\mathbb{R}$
satisfying the finite neighbour condition, which is formally stronger than the
weak separation condition but satisfied in all known examples. Under a mild
technical assumption, we establish that the set of attainable local dimensions
is a finite union of (possibly singleton) compact intervals. The number of
intervals is bounded above by the number of non-trivial maximal strongly
connected components of a finite directed graph construction depending only on
the governing iterated function system. We also explain how our results allow
computations of the sets of local dimensions in many explicit cases. This
contextualizes and generalizes a vast amount of prior work on sets of local
dimensions for self-similar measures satisfying the weak separation condition.
###### Key words and phrases:
iterated function system, self-similar, local dimension, multifractal
analysis, weak separation condition
###### 2020 Mathematics Subject Classification:
28A80
KEH was supported by NSERC Grant 2016-03719. AR was supported by this grant as
well as EPSRC Grant EP/V520123/1.
This paper is in final form and no version of it will be submitted for
publication elsewhere.
###### Contents
1. 1 Introduction
1. 1.1 Organization of the paper
2. 1.2 Some questions
3. 1.3 Notation
4. 1.4 Acknowledgements
2. 2 Graph-directed matrix product systems
1. 2.1 Basic definitions
2. 2.2 Irreducible matrix product systems
3. 2.3 Attainable Lyapunov exponents
4. 2.4 Density of periodic paths
3. 3 Iterated function systems and their matrix product systems
1. 3.1 The transition graph and the finite neighbour condition
2. 3.2 Transition matrices
3. 3.3 Maximal loop classes and irreducibility
4. 4 Sets of local dimensions of self-similar measures
1. 4.1 Basic results about local dimensions and periodic points
2. 4.2 Sets of local dimensions for simple and irreducible loop classes
5. 5 Examples of IFS satisfying the finite neighbour condition
1. 5.1 Bernoulli convolutions
2. 5.2 Testud measures
3. 5.3 Other examples
1. 5.3.1 Cantor-like measures
2. 5.3.2 An example of Lau and Wang
3. 5.3.3 A non-equicontractive finite type example
4. 5.3.4 An example with the set of lower local dimensions not equal to the set of upper local dimensions
5. 5.3.5 A Pisot reciprocal Bernoulli convolution with a non-simple non-essential loop class
## 1\. Introduction
A natural question when studying Borel probability measures on the real line,
in particular those which are not absolutely continuous with respect to
Lebesgue measure, is to quantify the singularity of the measure. The Hausdorff
dimension of the measure provides one coarse measurement. A more fine-grained
approach is through the local dimensions of the measure $\mu$ at points $x$ in
its support, namely, the quantities
$\dim_{\operatorname{loc}}\mu(x)=\lim_{r\to 0}\frac{\log\mu(B(x,r))}{\log r}.$
In this paper, we are interested in determining properties of the set of
attainable local dimensions for a given measure.
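As a quick computational illustration (ours, not part of the paper): for the uniform middle-third Cantor measure, the ball $B(0,3^{-n})$ meets exactly one level-$n$ cylinder, each of mass $2^{-n}$, so the ratio in the definition is constant in $n$ and the local dimension at $0$ is $\log 2/\log 3$.

```python
import math

# Local dimension of the uniform middle-third Cantor measure at x = 0:
# mu(B(0, 3**-n)) = 2**-n, since the ball meets exactly one level-n
# cylinder, each of mass 2**-n.  (Illustrative example, not from the paper.)
def log_ratio(n):
    r = 3.0 ** (-n)
    mass = 2.0 ** (-n)
    return math.log(mass) / math.log(r)

# The ratio is independent of n, so the limit defining the local
# dimension equals log 2 / log 3, roughly 0.6309.
print(log_ratio(10), log_ratio(100))
```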
Our focus is on the invariant measures associated with an iterated function
system (IFS) of similarities on $\mathbb{R}$, also known as self-similar
measures. These measures are simple to describe (see Eq. 3.1 for the
definition), yet exhibit rich and complex behaviour. Historically, such
measures have been of great interest.
Investigation of the sets of local dimensions of self-similar measures is
related to multifractal analysis, in which one studies dimensional properties
of the level sets of the local dimension function. A heuristic relationship,
known as the multifractal formalism [13], implies (when it is satisfied) that
the set of local dimensions is a closed interval. The multifractal formalism
holds if the IFS satisfies the classical open set condition (OSC) and there
are simple formulas for the endpoints of the interval of attainable local
dimensions [1, 20]. But when the OSC fails to hold, the situation is much more
complicated and less is known.
In [15], Hu and Lau discovered that when $\mu$ is the 3-fold convolution of
the classical middle-third Cantor measure, the set of local dimensions of
$\mu$ consists of a closed interval along with an isolated point.
Generalizations of this example were studied in [11, 22], for example, while
Testud [23] gave an example of a Cantor-like measure, but with some of the
similarities in the IFS having negative contraction factors, whose set of
local dimensions was the union of two disjoint (non-trivial) intervals.
Another much studied family of self-similar measures which fail the OSC are
the Bernoulli convolutions. These are the measures associated with the IFS
$\\{\rho x,\rho x+1-\rho\\}$ where $1/2<\rho<1$. (See [24] for more background
on Bernoulli convolutions.) It was shown by Feng [4] that when $\rho$ is the
reciprocal of a simple Pisot number, such as the Golden mean, the set of local
dimensions of the corresponding uniform Bernoulli convolution is, again, a
closed interval. However, all biased Bernoulli convolutions (regardless of the
choice of $\rho$) and unbiased Bernoulli convolutions with contraction ratio
greater than the reciprocal of the Golden mean have an isolated point in their
set of local dimensions [8]. We refer the reader to Section 5 for more
discussion on these important examples.
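A Bernoulli convolution can be sampled by random iteration ("chaos game") on the two maps of the IFS; the following sketch is ours, with parameter choices and function names invented for illustration.

```python
import random

# Random-iteration sampling for the Bernoulli convolution with the
# IFS {rho*x, rho*x + 1 - rho}.  (Illustrative sketch only.)
def sample_point(rho, n_steps=500, seed=0):
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n_steps):
        if rng.random() < 0.5:          # unbiased: each map w.p. 1/2
            x = rho * x
        else:
            x = rho * x + 1 - rho
    return x

# With 1/2 < rho < 1 both maps send [0, 1] into itself and their
# images cover [0, 1], so the attractor is [0, 1] and samples stay there.
rho = (5 ** 0.5 - 1) / 2                # reciprocal of the Golden mean
pts = [sample_point(rho, seed=s) for s in range(200)]
print(min(pts), max(pts))
```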
Convolutions of the middle-third Cantor measure and the Bernoulli convolutions
with contraction factor the reciprocal of a Pisot number are all examples of
self-similar measures associated with IFSs that satisfy the weak separation
condition (WSC) [16]. This separation condition is similar to the open set
condition but allows exact overlaps [25]. For such measures, the second author
recently established the existence of a directed transition graph that encodes
the local behaviour of the measure, and related the multifractal analysis of
the measure with connectivity properties of the graph [21]. One corollary of
this earlier work is that when the transition graph is strongly connected, the
set of attainable local dimensions of the measure is a closed interval.
In this paper, we significantly extend this local dimension result beyond the
strongly connected case to obtain a more thorough understanding of sets of
attainable local dimensions. We specialize slightly to the case where the
transition graph is finite, which we call the finite neighbour condition. This
separation condition is closely related to the generalized finite type
condition defined by Lau and Ngai [17]. The finite neighbour condition is
equivalent to the weak separation condition when the support of the measure is
an interval [12]. Our main contribution is to establish under the finite
neighbour condition, and a weak technical assumption, that the set of local
dimensions is a finite union of (possibly singleton) intervals. Moreover, the
number of intervals is bounded above by the number of non-trivial maximal
strongly connected components of the transition graph.
Our research generalizes and contextualizes the prior analysis of sets of
local dimensions for overlapping iterated function systems satisfying the weak
separation condition. We should emphasize that, in contrast with much of the
earlier work on this problem, we do not require the IFS to have similarities
with commensurable contraction factors. Moreover, we are not aware of any
examples of self-similar measures in $\mathbb{R}$, satisfying the weak
separation condition, to which our results do not apply.
### 1.1. Organization of the paper
The main content of the paper is separated into two conceptual components:
analysis of a graph-theoretic symbolic case, and specialization to self-
similar measures.
First, in Section 2, we introduce a general weighted matrix product system.
This symbolic formalism can be thought of as a weighted generalization of the
matrix-valued functions on shift space studied by past authors [2, 5, 6].
Under an irreducibility hypothesis similar to [5], and using modified versions
of the techniques contained therein, we establish in Theorem 2.10 that the
corresponding sets of Lyapunov exponents form a closed interval. We also
establish in Proposition 2.13 the density of Lyapunov exponents at special
types of paths for which local dimension computations are particularly
straightforward; this is useful in the computation of sets of local dimensions
for specific examples.
In Section 3, we review the details of the transition graph construction from
[21] with a particular focus on self-similar measures $\mu$ on $\mathbb{R}$
that satisfy the finite neighbour condition (see Definition 3.6). This
construction establishes the existence of a finite directed graph such that
infinite paths in the graph correspond (almost) injectively to points in the
support of $\mu$. In fact, this directed graph construction is our motivation
for studying the general matrix product systems. The $\mu$-measure of a rich
set of intervals (generating the topology on the support of $\mu$) is
determined by products of non-negative matrices. The weights in the matrix
product system allow us to handle non-equicontractive IFS.
Then, in Section 4, we apply the results from the symbolic case to the study
of the sets of local dimensions for these measures. The relevant transition
graph can be decomposed into finitely many non-trivial strongly connected
components, which we refer to as maximal loop classes. Any infinite path in
the graph is eventually in exactly one maximal loop class, so maximal loop
classes correspond to particular subsets of the support of $\mu$. Under a
technical assumption, namely that each maximal loop class satisfies either a
simplicity or irreducibility hypothesis (see Definition 3.10), we relate the
local dimensions at points corresponding to a maximal loop class to the
Lyapunov exponents of the associated matrix product system. This allows us to
establish in Corollary 4.8 that the set of local dimensions at points
corresponding to such a maximal loop class forms a closed interval.
Consequently, in Corollary 4.11 we deduce that the set of attainable local
dimensions of the measure is a finite union of intervals, some of which could
be degenerate, with the number of intervals bounded above by the number of
maximal loop classes. The same results hold for upper local dimensions as
well.
Lastly, in Section 5, we illustrate these ideas with examples, including those
mentioned above.
### 1.2. Some questions
1. (1)
We do not know if every self-similar measure in $\mathbb{R}$ that satisfies
the weak separation condition also satisfies our formally stronger finite
neighbour condition, or if every measure satisfying the finite neighbour
condition satisfies the required technical assumption. If not, it would be of
interest to extend the analysis.
2. (2)
Our results establish that the sets of local dimensions and sets of upper
local dimensions coincide. However, the set of lower local dimensions can be
different, as seen in Remark 4.9. In that example, the set of lower local
dimensions is still, however, a finite union of intervals corresponding to
maximal loop classes. It is of interest to determine if similar results hold
for sets of lower local dimensions.
### 1.3. Notation
The reals $\operatorname{{\mathbb{R}}}$ are a metric space with the usual
Euclidean metric, and $\operatorname{{\mathbb{N}}}$ is the set of natural
numbers beginning at 1. The set $B(x,r)$ is a closed ball centred at $x$ with
radius $r$. Given a set $E\subseteq\operatorname{{\mathbb{R}}}$, we write
$\operatorname{diam}(E)=\sup\\{|x-y|:x,y\in E\\}$.
Given a set $X$, we write $\\#X$ to denote the cardinality of $X$. Given two
real-valued functions $f(z),g(z)$ defined on some index set $Z$, we write
$f\succcurlyeq g$ (resp. $f\preccurlyeq g$) if there exists some $c>0$ such
that $f(z)\geq cg(z)$ (resp. $f(z)\leq cg(z)$) for each $z\in Z$. We say
$f\asymp g$ if $f\succcurlyeq g$ and $f\preccurlyeq g$.
If $M$ is a square matrix, we denote by $\operatorname{sp}M$ the spectral
radius of $M$. All matrices in this document are non-negative.
### 1.4. Acknowledgements
The authors would like to thank K. G. Hare for many helpful conversations.
## 2\. Graph-directed matrix product systems
### 2.1. Basic definitions
Let $G$ be a finite directed graph with vertex set $V(G)$ and edge set $E(G)$.
We will assume that $G$ is _strongly connected_ , which means that there is a
directed path connecting any two vertices. Each vertex $v$ has a _dimension_
$d(v)\in\operatorname{{\mathbb{N}}}$, and to each edge $e=(v_{1},v_{2})$ we
associate a non-negative $d(v_{1})\times d(v_{2})$ _transition matrix_ $T(e)$
and a _weight_ $W(e)\in(0,1)$. We let
$d_{\max}=\max_{v\in G}d(v).$
Let $\Sigma$ denote the set of all infinite paths $(e_{i})_{i=1}^{\infty}$ in
$G$ and let $\Sigma^{*}$ denote the set of all finite paths in $G$. A path is
a _cycle_ if it begins and ends at the same vertex. The _length_ of a finite
path is the number of edges it contains. We say a path $\eta\in\Sigma^{*}$ is
a _prefix_ of a (finite or infinite) path $\gamma$ if
$\gamma=\eta\gamma^{\prime}$ for some path $\gamma^{\prime}$. Given
$\gamma=(e_{n})_{n=1}^{\infty}\in\Sigma$, we write
$\gamma|n=(e_{1},\ldots,e_{n})\in\Sigma^{*}$ to denote the unique prefix of
length $n$.
Given $\eta=(e_{1},\ldots,e_{n})\in\Sigma^{*}$, we write
$W(\eta)=W(e_{1})\cdots W(e_{n})$
and if $\eta\in\Sigma^{*}$ has length at least 1,
$\eta^{-}=(e_{1},\ldots,e_{n-1})$. For convenience, let
$\displaystyle W_{\min}$ $\displaystyle=\min\\{W(e):e\in E(G)\\}>0,$
$\displaystyle W_{\max}$ $\displaystyle=\max\\{W(e):e\in E(G)\\}<1.$
If $\eta$ is the empty path, we say $W(\eta)=1$. Similarly, we write
$T(\eta)=T(e_{1})\cdots T(e_{n})\text{ for
}\eta=(e_{1},\ldots,e_{n})\in\Sigma^{*}.$
We equip $\Sigma$ with the topology induced by the metric
$d(\gamma,\xi)=\inf\\{W(\eta):\eta\text{ a prefix of }\gamma\text{ and
}\xi\\}.$
With this topology, $\Sigma$ is a compact totally disconnected metric space.
We refer to this data as a _graph-directed matrix product system_ or, in
short, a _matrix product system_. Typically, we will denote this by
$\mathcal{G}$.
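To fix ideas, here is a minimal computational sketch of this data; the graph, weights, and matrices below are invented for illustration (one vertex with $d(v)=2$ and two self-loop edges), with $W(\eta)$ and $T(\eta)$ computed as edge-wise products.

```python
import numpy as np

# A toy matrix product system: one vertex with d(v) = 2 and two
# self-loop edges "a", "b".  Weights and matrices are invented.
edges = {
    "a": {"W": 0.5,  "T": np.array([[1.0, 1.0], [0.0, 1.0]])},
    "b": {"W": 0.25, "T": np.array([[1.0, 0.0], [1.0, 1.0]])},
}

def W(path):
    """W(eta) = product of the edge weights along eta."""
    w = 1.0
    for e in path:
        w *= edges[e]["W"]
    return w

def T(path):
    """T(eta) = product of the transition matrices along eta."""
    M = np.eye(2)
    for e in path:
        M = M @ edges[e]["T"]
    return M

eta = ["a", "b", "a"]
print(W(eta))   # 0.5 * 0.25 * 0.5 = 0.0625
print(T(eta))
```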
###### Definition 2.1.
Given an infinite path $\gamma=(e_{j})_{j=1}^{\infty}\in\Sigma$, we define the
_lower Lyapunov exponent_ by
$\underline{\lambda}(\mathcal{G},\gamma)=\liminf_{n\to\infty}\frac{\log\left\lVert
T(e_{1})\cdots T(e_{n})\right\rVert}{\log W(e_{1})\cdots W(e_{n})}.$
The _upper Lyapunov exponent_ is defined similarly; when the values coincide,
we call this value the _Lyapunov exponent_ of the path $\gamma$, and denote it
by $\lambda(\mathcal{G},\gamma)$. Typically, we omit writing $\mathcal{G}$
when it is clear from the context.
For any $t>0$, denote
$\Sigma_{t}=\\{\eta\in\Sigma^{*}:W(\eta)<t\leq W(\eta^{-}),\left\lVert
T(\eta)\right\rVert>0\\},$
which is the set of paths with non-zero transition matrix and weight
approximately $t$.
### 2.2. Irreducible matrix product systems
It is clear that the geometric properties of the metric space $\Sigma$ are
determined completely from the edge weights. However, in order to say
meaningful things about products of matrices and Lyapunov exponents, we
require a stronger form of irreducibility than the graph $G$ being strongly
connected.
###### Definition 2.2.
We say that the matrix product system is _irreducible_ if there exists a
finite family of paths $\mathcal{H}\subset\Sigma^{*}$ such that for any
vertices $v_{1},v_{2}$, $1\leq i\leq d(v_{1})$, and $1\leq j\leq d(v_{2})$,
there exists a path $\gamma\in\mathcal{H}$ from vertex $v_{1}$ to $v_{2}$ such
that $T(\gamma)_{i,j}>0$.
###### Remark 2.3.
Equivalently, for each $1\leq i,j\leq m=\\#V(G)$, define $M_{i,j}=T(e)$ if
there is an edge $e$ from vertex $v_{i}$ to $v_{j}$, and let $M_{i,j}=0$
otherwise. The matrix product system is irreducible if and only if the block
matrix
$M=\begin{pmatrix}M_{1,1}&\cdots&M_{1,m}\\\ \vdots&\ddots&\vdots\\\
M_{m,1}&\cdots&M_{m,m}\end{pmatrix}$
is irreducible, i.e. there exists some integer $r>0$ such that $\sum_{k=1}^{r}M^{k}$
is a strictly positive matrix.
Of course, irreducible systems are necessarily strongly connected.
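The block-matrix criterion of Remark 2.3 is easy to test mechanically; the sketch below is ours, on invented data, using the standard fact that for an $n\times n$ non-negative matrix it suffices to take $r=n$.

```python
import numpy as np

# Irreducibility test from Remark 2.3: M is irreducible iff
# sum_{k=1}^{r} M^k is strictly positive for some r; for an n x n
# non-negative matrix, r = n suffices (every pair of indices is
# joined by a path of length at most n).
def is_irreducible(M):
    n = M.shape[0]
    S = np.zeros_like(M)
    P = np.eye(n)
    for _ in range(n):
        P = P @ M
        S = S + P
    return bool((S > 0).all())

# A 2-cycle (irreducible) versus an upper-triangular matrix
# (reducible: no path from the second index back to the first).
cycle = np.array([[0.0, 1.0], [1.0, 0.0]])
triangular = np.array([[1.0, 1.0], [0.0, 1.0]])
print(is_irreducible(cycle), is_irreducible(triangular))
```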
###### Remark 2.4.
Our irreducibility criterion is very similar to the one assumed by Feng [5].
However, since our weights depend on the edge rather than the source vertex,
we find it more natural to speak of infinite paths in a graph rather than
words in a sequence space. One may equivalently think of the graph as a
subshift of finite type determined by a weighted adjacency matrix.
For the remainder of this section, unless otherwise stated, our matrix product
system is irreducible.
Irreducibility is essential for obtaining the following estimates, which we
will use frequently.
###### Lemma 2.5.
There are constants $A,B>0$ such that for any paths
$\eta_{1},\eta_{2}\in\Sigma^{*}$, there exists some $\gamma\in\mathcal{H}$
such that $\eta_{1}\gamma\eta_{2}$ is a path and
$A\left\lVert T(\eta_{1})\right\rVert\left\lVert
T(\eta_{2})\right\rVert\leq\left\lVert
T(\eta_{1}\gamma\eta_{2})\right\rVert\leq B\left\lVert
T(\eta_{1})\right\rVert\left\lVert T(\eta_{2})\right\rVert$.
###### Proof.
By the irreducibility assumption, for any $v_{1},v_{2}\in V(G)$, $1\leq i\leq
d(v_{1})$, and $1\leq j\leq d(v_{2})$, there exists a path
$\gamma=\gamma(v_{1},v_{2},i,j)\in\mathcal{H}$ from $v_{1}$ to $v_{2}$ such
that $T(\gamma)_{i,j}>0$. Let
$\displaystyle C$
$\displaystyle=\min\\{T(\gamma(v_{1},v_{2},i,j))_{i,j}:v_{1},v_{2}\in
V(G),1\leq i\leq d(v_{1}),1\leq j\leq d(v_{2})\\}.$
If $\eta_{1},\eta_{2}$ are arbitrary paths, by the pigeonhole principle,
there exists some $k,i,j,\ell$ such that
$d_{\max}^{2}T(\eta_{1})_{k,i}\geq\left\lVert T(\eta_{1})\right\rVert$ and
$d_{\max}^{2}T(\eta_{2})_{j,\ell}\geq\left\lVert T(\eta_{2})\right\rVert$.
Assume $\eta_{1}$ ends at vertex $v_{1}$, $\eta_{2}$ begins at vertex $v_{2}$,
and take $\gamma=\gamma(v_{1},v_{2},i,j)\in\mathcal{H}$. Then
$\eta_{1}\gamma\eta_{2}$ is a path and
$\left\lVert T(\eta_{1}\gamma\eta_{2})\right\rVert\geq
T(\eta_{1})_{k,i}T(\gamma)_{i,j}T(\eta_{2})_{j,\ell}\geq\frac{C}{d_{\max}^{4}}\left\lVert
T(\eta_{1})\right\rVert\left\lVert T(\eta_{2})\right\rVert.$
The lower bound follows by taking $A=C/d_{\max}^{4}$.
To obtain the upper bound, we simply note that
$\left\lVert T(\eta_{1}\gamma\eta_{2})\right\rVert\leq\left\lVert
T(\eta_{1})\right\rVert\left\lVert T(\gamma)\right\rVert\left\lVert
T(\eta_{2})\right\rVert$
and it suffices to take $B=\max\\{\left\lVert
T(\gamma)\right\rVert:\gamma\in\mathcal{H}\\}$. ∎
In the following lemma, we do not formally need the irreducibility hypothesis:
it suffices to know that if $\eta$ is any path in $\mathcal{G}$, then
$T(\eta)$ is not the zero matrix.
###### Lemma 2.6.
There are constants $A,r>0$ such that for any $t_{1},t_{2}\in(0,1)$,
$\eta_{1}\in\Sigma_{t_{1}}$, and $\eta_{2}\in\Sigma_{t_{2}}$, there are paths
$\phi$ and $\psi$ such that $\eta_{1}\phi\eta_{2}\psi\in\Sigma_{t_{1}t_{2}r}$
and
$\left\lVert T(\eta_{1}\phi\eta_{2}\psi)\right\rVert\leq A\left\lVert
T(\eta_{1})\right\rVert\left\lVert T(\eta_{2})\right\rVert.$
###### Proof.
Take $r=\min\\{W(\eta):\eta\in\mathcal{H}\\}$. By the irreducibility
hypothesis, there exists some $\phi\in\mathcal{H}$ such that $\left\lVert
T(\eta_{1}\phi\eta_{2})\right\rVert>0$.
Moreover, for any path $\eta$ with $\left\lVert T(\eta)\right\rVert>0$, there
exists an edge $e$ such that $\eta e$ is a path and $\left\lVert T(\eta
e)\right\rVert>0$. Since $t_{1}t_{2}W_{\min}^{-2}\geq
W(\eta_{1}\phi\eta_{2})\geq t_{1}t_{2}r$, repeatedly applying this
observation, there exists $\psi$ such that
$\eta_{1}\phi\eta_{2}\psi\in\Sigma_{t_{1}t_{2}r}$. Note that $W(\psi)\geq
rW_{\min}^{2}$. Thus $\left\lVert T(\eta_{1}\phi\eta_{2}\psi)\right\rVert\leq
A\left\lVert T(\eta_{1})\right\rVert\left\lVert T(\eta_{2})\right\rVert$ where
$A=\max\\{\left\lVert T(\eta)\right\rVert:\eta\in\Sigma^{*},W(\eta)\geq
rW_{\min}^{2}\\}\cdot\max\\{\left\lVert
T(\eta)\right\rVert:\eta\in\mathcal{H}\\}$
as required. ∎
###### Lemma 2.7.
There are constants $A,B>0$ such that for any path $\eta\in\Sigma^{*}$, there
exists some $\phi\in\mathcal{H}$ such that $\eta\phi$ is a cycle and
$A\left\lVert T(\eta)\right\rVert\leq\operatorname{sp}T(\eta\phi)\leq
B\left\lVert T(\eta)\right\rVert.$
###### Proof.
Let $C$ be the minimal strictly positive coefficient of any $T(\phi)$ for
$\phi\in\mathcal{H}$. Suppose $T(\eta)_{i,\ell}$ is the maximal coordinate of
$T(\eta)$. Choose $\phi\in\mathcal{H}$ such that $\eta\phi$ is a cycle and
$T(\phi)_{\ell,i}\geq C>0$. Then
$\displaystyle\operatorname{Tr}T(\eta\phi)$ $\displaystyle\geq
T(\eta\phi)_{i,i}\geq T(\eta)_{i,\ell}T(\phi)_{\ell,i}\geq\left\lVert
T(\eta)\right\rVert\frac{C}{d_{\max}^{2}}.$
Since the trace of a matrix is the sum of its eigenvalues,
$\operatorname{Tr}(T(\eta\phi))\leq d_{\max}\operatorname{sp}T(\eta\phi).$
Thus with $A=C/d_{\max}^{3}$, we have $\operatorname{sp}(T(\eta\phi))\geq
A\left\lVert T(\eta)\right\rVert$.
Conversely, we have
$\operatorname{sp}(T(\eta\phi))\leq\left\lVert
T(\eta\phi)\right\rVert\leq\left\lVert T(\eta)\right\rVert\left\lVert
T(\phi)\right\rVert\leq B\left\lVert T(\eta)\right\rVert$
where $B=\max\\{\left\lVert T(\gamma)\right\rVert:\gamma\in\mathcal{H}\\}$. ∎
### 2.3. Attainable Lyapunov exponents
The main goal of this subsection is to determine the possible values of
Lyapunov exponents of paths in the matrix product system.
We begin with notation. Put
(2.1) $\displaystyle\alpha_{\min}(\mathcal{G})=\alpha_{\min}$
$\displaystyle:=\lim_{t\to 0}\min_{\eta\in\Sigma_{t}}\frac{\log\left\lVert
T(\eta)\right\rVert}{\log t},$
$\displaystyle\alpha_{\max}(\mathcal{G})=\alpha_{\max}$
$\displaystyle:=\lim_{t\to 0}\max_{\eta\in\Sigma_{t}}\frac{\log\left\lVert
T(\eta)\right\rVert}{\log t}.$
We will first show that $\alpha_{\min}$ and $\alpha_{\max}$ are well defined
and take real values. This will use the following standard submultiplicativity
result, which is a slightly modified version of, for example, [14, Thm.
7.6.1].
###### Lemma 2.8.
Let $f:(0,1)\to\operatorname{{\mathbb{R}}}$ be measurable and suppose there
exists $c>0$ and $r>0$ such that $f(t_{1}t_{2}r)\leq c\cdot f(t_{1})f(t_{2})$
for all $t_{1},t_{2}\in(0,1)$. Then
$\lim_{t\to 0}\frac{\log f(t)}{\log t}=\inf_{t>0}\frac{\log f(t)}{\log t}.$
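A toy instance of Lemma 2.8 (our example, not from the paper): $f(t)=Ct^{\alpha}$ satisfies the hypothesis with $c=r^{\alpha}/C$, and $\log f(t)/\log t=\alpha+\log C/\log t\to\alpha$ as $t\to 0$.

```python
import math

# Lemma 2.8 for the explicit choice f(t) = C * t**alpha (invented
# example): f(t1*t2*r) = c * f(t1) * f(t2) with c = r**alpha / C,
# and log f(t) / log t -> alpha as t -> 0.
C, alpha, r = 3.0, 0.7, 0.5

def f(t):
    return C * t ** alpha

c = r ** alpha / C
# Submultiplicativity holds (here with equality) on a small grid.
for t1, t2 in [(0.1, 0.2), (0.01, 0.3), (0.5, 0.05)]:
    assert f(t1 * t2 * r) <= c * f(t1) * f(t2) * (1 + 1e-12)

ratio = math.log(f(1e-12)) / math.log(1e-12)
print(ratio)   # near alpha = 0.7, from below since log C / log t < 0
```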
Note the similarity of the following lemma with [5, Lem. 2.3].
###### Lemma 2.9.
The limits defining $\alpha_{\min}$ and $\alpha_{\max}$ exist and take real
values.
###### Proof.
We will first prove that the limit
$\lim_{t\to 0}\min_{\eta\in\Sigma_{t}}\frac{\log\left\lVert
T(\eta)\right\rVert}{\log t}=\lim_{t\to
0}\frac{\log\max_{\eta\in\Sigma_{t}}\left\lVert T(\eta)\right\rVert}{\log t}$
exists. Set $f(t)=\max_{\eta\in\Sigma_{t}}\left\lVert T(\eta)\right\rVert$.
Let $t_{1},t_{2}>0$ be arbitrary. If $\eta\in\Sigma_{t_{1}t_{2}W_{\min}}$, we
may write $\eta=\eta_{1}\eta_{2}\gamma$ where $\eta_{1}\in\Sigma_{t_{1}}$ and
$\eta_{2}\in\Sigma_{t_{2}}$ and $W(\gamma)\geq W_{\min}^{2}$. In particular,
with $c=\max\\{\left\lVert T(\psi)\right\rVert:W(\psi)\geq W_{\min}^{2}\\}$,
we have $\left\lVert T(\eta)\right\rVert\leq c\left\lVert
T(\eta_{1})\right\rVert\left\lVert T(\eta_{2})\right\rVert$ and therefore
$f(t_{1}t_{2}W_{\min})\leq c\cdot f(t_{1})f(t_{2})$. Applying Lemma 2.8 with
$c$ as above and $r=W_{\min}$, we have our desired result.
We now show that $\lim_{t\to 0}\max_{\eta\in\Sigma_{t}}\frac{\log\left\lVert
T(\eta)\right\rVert}{\log t}$ exists. Set $g(t)=\min_{\eta\in\Sigma_{t}}\left\lVert
T(\eta)\right\rVert$. Let $t_{1},t_{2}>0$ and let $\eta_{1}\in\Sigma_{t_{1}}$
and $\eta_{2}\in\Sigma_{t_{2}}$ be arbitrary. Note that $\eta_{1}\eta_{2}$
need not be a path, and even if it were, it need not hold that
$\eta_{1}\eta_{2}\in\Sigma_{t_{1}t_{2}}$. By Lemma 2.6, there exists some
$c,r>0$ (not depending on $\eta_{1}$ and $\eta_{2}$) and paths $\phi$ and
$\psi$ such that $\eta_{1}\phi\eta_{2}\psi$ is an admissible path in
$\Sigma_{rt_{1}t_{2}}$ and
(2.2) $g(rt_{1}t_{2})\leq c\left\lVert T(\eta_{1})\right\rVert\left\lVert
T(\eta_{2})\right\rVert.$
Now taking the minimum over all $\eta_{1}\in\Sigma_{t_{1}}$ and
$\eta_{2}\in\Sigma_{t_{2}}$ yields $g(rt_{1}t_{2})\leq cg(t_{1})g(t_{2})$.
Thus $g$ satisfies Lemma 2.8.
To see that $\alpha_{\min},\alpha_{\max}\in\operatorname{{\mathbb{R}}}$, let
$a$ be the smallest strictly positive entry in any $T(e)$ for $e\in E(G)$. Let
$b=\min\\{\left\lVert T(e)\right\rVert:e\in E(G)\\}$. Then if
$\eta\in\Sigma_{t}$ is any path of length $n$, we have that
$\frac{\log b}{\log W_{\min}}\leq\frac{\log\left\lVert
T(\eta)\right\rVert}{\log t}\leq\frac{n\log a}{(n-1)\log W_{\max}}$
so that $\alpha_{\min},\alpha_{\max}$ are real-valued. ∎
Of course, if $\eta\in\Sigma_{t}$, then $W(\eta)\asymp t$. Consequently,
$\alpha_{\min}=\lim_{t\to 0}\min_{\eta\in\Sigma_{t}}\frac{\log\left\lVert
T(\eta)\right\rVert}{\log W(\eta)}\text{ and }\alpha_{\max}=\lim_{t\to
0}\max_{\eta\in\Sigma_{t}}\frac{\log\left\lVert T(\eta)\right\rVert}{\log
W(\eta)}.$
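In a concrete system these quantities can be approximated by enumerating $\Sigma_t$ for small $t$; the sketch below is ours, on an invented single-vertex system with two self-loop edges.

```python
import numpy as np

# Approximate alpha_min and alpha_max by enumerating Sigma_t for a
# toy system (one vertex, two self-loop edges; data invented).
edges = [
    (0.5,  np.array([[0.5, 0.25], [0.0, 0.5]])),
    (0.25, np.array([[0.25, 0.0], [0.25, 0.25]])),
]

def alpha_bounds(t):
    """Return (min, max) of log||T(eta)|| / log t over eta in Sigma_t,
    i.e. over paths with W(eta) < t <= W(eta^-) and T(eta) non-zero."""
    ratios = []
    stack = [(1.0, np.eye(2))]       # (weight, matrix) of partial paths
    while stack:
        w, M = stack.pop()
        for we, Te in edges:
            w2, M2 = w * we, M @ Te
            if np.abs(M2).max() == 0:
                continue             # require ||T(eta)|| > 0
            if w2 < t:               # parent weight was >= t, so eta is in Sigma_t
                ratios.append(np.log(np.linalg.norm(M2)) / np.log(t))
            else:
                stack.append((w2, M2))
    return min(ratios), max(ratios)

lo, hi = alpha_bounds(1e-4)
print(lo, hi)
```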
We are now ready to prove the following result about the set of attainable
Lyapunov exponents. We remind the reader that the Lyapunov exponent of the
path $\gamma$, $\lambda(\gamma)$, was defined in Definition 2.1.
Our proof follows [5, Lem. 2.3 and Prop. 3.2].
###### Theorem 2.10.
Let $\mathcal{G}$ be a matrix product system satisfying the irreducibility
hypothesis.
1. (1)
If $\gamma\in\Sigma$ is any path, then
$\underline{\lambda}(\gamma),\overline{\lambda}(\gamma)\in[\alpha_{\min},\alpha_{\max}]$.
2. (2)
For any $\alpha\in[\alpha_{\min},\alpha_{\max}]$ and $\xi\in\Sigma^{*}$ with
$T(\xi)$ non-zero, there exists some $\gamma=(e_{m})_{m=1}^{\infty}\in\Sigma$
and a sequence $(m_{j})_{j=1}^{\infty}$ such that $\lambda(\gamma)=\alpha$,
$\lim_{j\to\infty}m_{j+1}/m_{j}=1$, and for each
$j\in\operatorname{{\mathbb{N}}}$, $\xi$ is a prefix of
$(e_{m_{j}},e_{m_{j}+1},\ldots)$.
###### Proof.
For (1), if $\gamma\in\Sigma$ is arbitrary, then
$\underline{\lambda}(\gamma)=\lim_{k\to\infty}\frac{\log\left\lVert
T(\gamma|{n_{k}})\right\rVert}{\log W(\gamma|{n_{k}})}$
for some subsequence $n_{k}$. But if $\gamma|n_{k}\in\Sigma_{t_{n_{k}}}$, then
$W(\gamma|{n_{k}})\asymp t_{n_{k}}$ and
$\alpha_{\min}\leq\lim_{k\to\infty}\frac{\log\left\lVert
T(\gamma|{n_{k}})\right\rVert}{\log t_{n_{k}}}\leq\alpha_{\max}$
from the existence of the limits defining $\alpha_{\min}$ and $\alpha_{\max}$.
The upper Lyapunov exponent result is identical, giving (1).
Now for (2), given $\alpha\in[\alpha_{\min},\alpha_{\max}]$, let $s\in[0,1]$
be such that $\alpha=s\alpha_{\min}+(1-s)\alpha_{\max}$. For each
$n\in\operatorname{{\mathbb{N}}}$, choose
$\phi_{n},\psi_{n}\in\Sigma_{2^{-n}}$ with the property that
$\frac{\log\left\lVert T(\phi_{n})\right\rVert}{\log
W(\phi_{n})}=u_{n}\rightarrow\alpha_{\min}\text{ and }\frac{\log\left\lVert
T(\psi_{n})\right\rVert}{\log W(\psi_{n})}=v_{n}\rightarrow\alpha_{\max}.$
Let $\\{A_{n}\\}_{n=1}^{\infty}$, $\\{B_{n}\\}_{n=1}^{\infty}$ be sequences of
natural numbers given by
$\displaystyle A_{n}$ $\displaystyle=[sn]$ $\displaystyle B_{n}$
$\displaystyle=[(1-s)n],$
where $[x]$ denotes the integer part of $x$. Then define a sequence by
$\underbrace{\phi_{1},\ldots,\phi_{1}}_{A_{1}},\underbrace{\psi_{1},\ldots,\psi_{1}}_{B_{1}},\ldots,\underbrace{\phi_{n},\ldots,\phi_{n}}_{A_{n}},\underbrace{\psi_{n},\ldots,\psi_{n}}_{B_{n}},\ldots$
and relabel it $\\{\eta_{n}\\}_{n=1}^{\infty}$, i.e. $\eta_{1}=\phi_{1}$,
$\eta_{A_{1}}=\phi_{1}$, $\eta_{A_{1}+1}=\psi_{1}$, etc.
Now since $\xi\in\Sigma^{*}$ has $T(\xi)$ non-zero, by repeatedly applying
Lemma 2.5, there are constants $C_{1},C_{2}>0$ such that for each
$n\in\operatorname{{\mathbb{N}}}$ there are paths
$\nu_{n}^{(1)},\nu_{n}^{(2)}$ in $\mathcal{H}$ such that with
$\nu_{n}:=\nu_{n}^{(1)}\xi\nu_{n}^{(2)}$,
$\gamma:=(\eta_{1},\nu_{1},\eta_{2},\nu_{2},\ldots)$
is an infinite path and
(2.3) $C_{1}^{m}\prod_{i=1}^{m}\left\lVert
T(\eta_{i})\right\rVert\leq\left\lVert
T(\eta_{1}\nu_{1}\ldots\eta_{m}\nu_{m})\right\rVert\leq
C_{2}^{m}\prod_{i=1}^{m}\left\lVert T(\eta_{i})\right\rVert.$
Since $\mathcal{H}$ is a finite set, there also exists $D_{1},D_{2}>0$ such
that
(2.4) $D_{1}^{m}\prod_{i=1}^{m}W(\eta_{i})\leq
W(\eta_{1}\nu_{1}\ldots\eta_{m}\nu_{m})\leq
D_{2}^{m}\prod_{i=1}^{m}W(\eta_{i}).$
For notation, let $\\{n_{k}\\}_{k=1}^{\infty}$ be the indices such that
$n_{k}$ is the index of the edge preceding the first edge of $\phi_{k+1}$ in
repetition $A_{k+1}$.
Let $(m_{j})_{j=1}^{\infty}$ be the sequence of indices such that $\xi$ is a
prefix and fix some $j\in\operatorname{{\mathbb{N}}}$. For any
$\ell\in\operatorname{{\mathbb{N}}}$, since $\phi_{\ell}$ and $\psi_{\ell}$
are in $\Sigma_{2^{-\ell}}$, there exist $a,b>0$ such that
$a\ell\leq|\phi_{\ell}|\leq b\ell$ where $|\phi_{\ell}|$ is the number of
edges in $\phi_{\ell}$. Moreover, $(\gamma_{m_{j}},\gamma_{m_{j}+1},\ldots)$
has prefix $\xi\zeta_{1}\phi_{\ell}\zeta_{2}\xi$ or
$\xi\zeta_{1}\psi_{\ell}\zeta_{2}\xi$ where $\ell$ is chosen suitably and
$\zeta_{1},\zeta_{2}\in\mathcal{H}$ have bounded length. Thus there exists
some $M>0$ such that $m_{j+1}-m_{j}\leq M+b\ell$. On the other hand, it always
holds that $m_{j}\geq\sum_{i=1}^{\ell-1}ai$. It follows that
$\lim_{j\to\infty}m_{j+1}/m_{j}=1$, as claimed.
We now prove that $\overline{\lambda}(\gamma)\leq\alpha$; the lower bound
$\underline{\lambda}(\gamma)\geq\alpha$ will follow by a similar argument. To
this end, let $n$ be a large number of edges and let $k$ be maximal such that
$n\geq n_{k}$ (that is, $k$ is the maximal number of completed blocks
$A_{i},B_{i}$ which occur before edge $n$). There exist constants
$C_{3},C_{4}>0$ such that
$C_{3}^{n_{k+1}-n_{k}}\left\lVert
T(\gamma_{1}\ldots\gamma_{n_{k+1}})\right\rVert\leq\left\lVert
T(\gamma|n)\right\rVert\leq C_{4}^{n_{k+1}-n_{k}}\left\lVert
T(\gamma_{1}\ldots\gamma_{n_{k}})\right\rVert$
and
$W(\gamma_{1}\ldots\gamma_{n_{k+1}})\leq W(\gamma|n)\leq
W(\gamma_{1}\ldots\gamma_{n_{k}}).$
Since the number of edges contained in $\gamma|n$ is at least
$\sum_{i=1}^{k}(A_{i}+B_{i})$ and at most $\sum_{i=1}^{k+1}(A_{i}+B_{i})$, we
deduce from Eq. 2.3 and Eq. 2.4 that
$\displaystyle\frac{\log\left\lVert T(\gamma|n)\right\rVert}{\log
W(\gamma|n)}$ $\displaystyle\qquad\leq\frac{(n_{k+1}-n_{k})\log
C_{3}+\sum_{i=1}^{k+1}\bigl{(}(A_{i}+B_{i})\log C_{1}+A_{i}\log\left\lVert
T(\phi_{i})\right\rVert+B_{i}\log\left\lVert
T(\psi_{i})\right\rVert\bigr{)}}{\sum_{i=1}^{k}\bigl{(}(A_{i}+B_{i})\log
D_{2}+A_{i}\log W(\phi_{i})+B_{i}\log W(\psi_{i})\bigr{)}}.$
Since each $\phi_{i},\psi_{i}\in\Sigma_{2^{-i}}$, we have
$W(\phi_{i})\asymp W(\psi_{i})\asymp 2^{-i}$. Recall, also, that
$A_{i},B_{i}\asymp i$. Therefore
$\Bigl{\lvert}\frac{\sum_{i=1}^{k+1}(A_{i}+B_{i})\log
C_{1}}{\sum_{i=1}^{k}(A_{i}\log W(\phi_{i})+B_{i}\log
W(\psi_{i}))}\Bigr{\rvert}\preccurlyeq\frac{\sum_{i=1}^{k+1}i}{\sum_{i=1}^{k}i^{2}}\preccurlyeq\frac{1}{k}\rightarrow
0$
and a similar statement holds with the numerator replaced by
$\sum_{i=1}^{k}(A_{i}+B_{i})\log D_{2}$. Moreover, since $(n_{k+1}-n_{k})\log
C_{3}\asymp(k+1)(A_{k+1}+B_{k+1})$, we also have
$\Bigl{\lvert}\frac{(n_{k+1}-n_{k})\log C_{3}}{\sum_{i=1}^{k}(A_{i}\log
W(\phi_{i})+B_{i}\log W(\psi_{i}))}\Bigr{\rvert}\rightarrow 0.$
We thus have that
$\displaystyle\limsup_{n}\frac{\log\left\lVert T(\gamma|n)\right\rVert}{\log
W(\gamma|n)}$
$\displaystyle\leq\limsup_{k}\frac{\sum_{i=1}^{k+1}(A_{i}\log\left\lVert
T(\phi_{i})\right\rVert+B_{i}\log\left\lVert
T(\psi_{i})\right\rVert)}{\sum_{i=1}^{k}(A_{i}\log W(\phi_{i})+B_{i}\log
W(\psi_{i}))}$ $\displaystyle=\limsup_{k}\frac{\sum_{i=1}^{k+1}(A_{i}u_{i}\log
W(\phi_{i})+B_{i}v_{i}\log W(\psi_{i}))}{\sum_{i=1}^{k}(A_{i}\log
W(\phi_{i})+B_{i}\log W(\psi_{i}))}$
$\displaystyle=\limsup_{k}\frac{\sum_{i=1}^{k+1}(iA_{i}u_{i}+iB_{i}v_{i})}{\sum_{i=1}^{k}(iA_{i}+iB_{i})}$
$\displaystyle=\limsup_{k}\frac{\sum_{i=1}^{k+1}i^{2}(su_{i}+(1-s)v_{i})}{\sum_{i=1}^{k}i^{2}}.$
Fix $\epsilon>0$. Since $\lim_{i\to\infty}(su_{i}+(1-s)v_{i})=\alpha$, for
large enough $N$, $su_{i}+(1-s)v_{i}\leq\alpha+\epsilon$ for all $i\geq N$.
Thus
$\limsup_{n}\frac{\log\left\lVert T(\gamma|n)\right\rVert}{\log
W(\gamma|n)}\leq\limsup_{k}\frac{\sum_{i=N}^{k+1}i^{2}(\alpha+\epsilon)}{\sum_{i=1}^{k}i^{2}}\leq\alpha+\epsilon.$
Similar reasoning shows that
$\liminf_{n}\frac{\log\left\lVert T(\gamma|n)\right\rVert}{\log
W(\gamma|n)}\geq\alpha-\epsilon.$
As $\epsilon>0$ was arbitrary, it follows that $\lambda(\gamma)=\alpha$, as
claimed. ∎
The following result now follows directly from Theorem 2.10.
###### Corollary 2.11.
Let $\mathcal{G}$ be an irreducible matrix product system. Then the set of
attainable Lyapunov exponents is the compact interval
$[\alpha_{\min},\alpha_{\max}]$.
### 2.4. Density of periodic paths
An interesting class of paths consists of the so-called _periodic paths_,
which are the paths in $\Sigma$ of the form
$\gamma=(\theta,\theta,\ldots)$
where $\theta$ is a cycle. We denote them by $\mathcal{P}$. We refer to
$\theta$ as a _period_ of the path.
The Lyapunov exponent of a periodic path always exists and has a simple
formula.
###### Proposition 2.12.
Let $\gamma$ be a periodic path with period $\theta$. Then the Lyapunov
exponent of $\gamma$ exists and is given by
$\lambda(\gamma)=\frac{\log\operatorname{sp}T(\theta)}{\log W(\theta)}.$
###### Proof.
Assume that $\theta=(\theta_{1},...,\theta_{k})$. For any positive integer $n$
and $j=1,\ldots,k$,
$\left\lVert
T(\theta^{n}\theta_{1}\cdots\theta_{j})\right\rVert\leq\left\lVert
T(\theta^{n})\right\rVert\left\lVert
T(\theta_{1}\cdots\theta_{j})\right\rVert$
and
$\left\lVert T(\theta^{n+1})\right\rVert\leq\left\lVert
T(\theta^{n}\theta_{1}\cdots\theta_{j})\right\rVert\left\lVert
T(\theta_{j+1}\cdots\theta_{k})\right\rVert.$
Consequently, there are constants $A,B>0$, depending only on $\theta$, such that
$A\left\lVert T(\theta^{n+1})\right\rVert\leq\left\lVert
T(\theta^{n}\theta_{1}\cdots\theta_{j})\right\rVert\leq B\left\lVert
T(\theta^{n})\right\rVert.$
The result follows directly from the fact that
$\lim_{n\to\infty}\frac{\log\left\lVert
T(\theta^{n})\right\rVert}{n}=\log\operatorname{sp}(T(\theta)).$
∎
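Proposition 2.12 reduces the Lyapunov exponent of a periodic path to a single spectral-radius computation. The following minimal numerical sketch illustrates the formula; the transition matrices and edge weights below are hypothetical placeholders, not taken from any IFS in this paper.

```python
import numpy as np

def lyapunov_periodic(matrices, weights):
    """lambda(gamma) = log sp(T(theta)) / log W(theta) for a periodic path
    with period theta: `matrices` are the edge transition matrices
    T(e_1), ..., T(e_k) and `weights` the edge weights W(e_1), ..., W(e_k)."""
    T = matrices[0]
    for M in matrices[1:]:
        T = T @ M                           # T(theta) = T(e_1) ... T(e_k)
    sp = max(abs(np.linalg.eigvals(T)))     # spectral radius sp(T(theta))
    W = float(np.prod(weights))             # W(theta) in (0, 1)
    return float(np.log(sp) / np.log(W))

# Toy two-edge cycle (illustrative non-negative matrices only).
theta_T = [np.array([[0.5, 0.0], [0.25, 0.25]]),
           np.array([[0.5, 0.5], [0.0, 0.25]])]
theta_W = [0.5, 0.5]
lam = lyapunov_periodic(theta_T, theta_W)
```

Since both $\log\operatorname{sp}T(\theta)$ and $\log W(\theta)$ are negative here, the exponent comes out positive, as expected for a local dimension.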
###### Proposition 2.13.
The set $\\{\lambda(\gamma):\gamma\in\mathcal{P}\\}$ is dense in
$[\alpha_{\min},\alpha_{\max}]$.
###### Proof.
It suffices to show that if $\gamma\in\Sigma$ is an arbitrary path such that
$\lambda(\gamma)$ exists, there exists a sequence of periodic paths
$\\{y_{n}\\}_{n=1}^{\infty}$ such that
$\lim_{n\to\infty}\lambda(y_{n})=\lambda(\gamma)$.
By Lemma 2.7, there are constants $A,B>0$ such that for any
$k\in\operatorname{{\mathbb{N}}}$, there is a path $\eta_{k}\in\mathcal{H}$
such that $(\gamma|k)\eta_{k}=:\theta_{k}$ is a cycle and
$A\left\lVert T(\gamma|k)\right\rVert\leq\operatorname{sp}T(\theta_{k})\leq
B\left\lVert T(\gamma|k)\right\rVert.$
Let $\overline{\theta}_{k}=(\theta_{k},\theta_{k},\ldots)$. This is a periodic
path with period $\theta_{k}$, so that
$\lambda(\overline{\theta_{k}})=\frac{\log\operatorname{sp}(T(\theta_{k}))}{\log
W(\theta_{k})}$
by Proposition 2.12. Also, $W(\gamma|k)\asymp W(\theta_{k})$. Hence
$\displaystyle\limsup_{k\to\infty}\lambda(\overline{\theta_{k}})$
$\displaystyle\leq\limsup_{k\to\infty}\frac{\log A\left\lVert
T(\gamma|k)\right\rVert}{\log W(\gamma|k)}=\lambda(\gamma)$
and the lower bound follows identically. Thus
$\lim_{k\to\infty}\lambda(\overline{\theta_{k}})=\lambda(\gamma)$ and we have
density, as claimed. ∎
## 3\. Iterated function systems and their matrix product systems
We now turn to studying iterated function systems of similarities. In this
section, we will describe how the dynamics of associated self-similar sets and
measures can be encoded with a matrix product system.
### 3.1. The transition graph and the finite neighbour condition
We begin with notation and terminology. By an iterated function system (IFS),
$\\{S_{i}\\}_{i=1}^{m},$ we mean a finite set of similarities
(3.1) $S_{i}(x)=r_{i}x+d_{i}:\mathbb{R}\rightarrow\mathbb{R}\text{ for each
}i=1,...,m$
with $0<\left|r_{i}\right|<1$ and $m\geq 1$. We say that the IFS is
_equicontractive_ if $r_{1}=\cdots=r_{m}>0$.
Each IFS generates a unique non-empty, compact set $K$ satisfying
$K=\bigcup_{i=1}^{m}S_{i}(K),$
known as the associated _self-similar set_. We will assume $K$ is not a
singleton. By translating the $d_{i}$ as necessary, without loss of generality
we may assume that the convex hull of $K$ is $[0,1]$.
Given probabilities $\mathbf{p}=(p_{i})_{i=1}^{m}$ where $p_{i}>0$ and
$\sum_{i=1}^{m}p_{i}=1$, there exists a unique Borel probability measure
$\mu_{\bm{p}}$ satisfying
(3.2) $\mu_{\bm{p}}(E)=\sum_{i=1}^{m}p_{i}\mu_{\bm{p}}(S_{i}^{-1}(E))$
for any Borel set $E\subseteq K$. This non-atomic measure $\mu_{\bm{p}}$ is
known as the associated _self-similar measure_ and its support is the
self-similar set $K$.
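Equation (3.2) can be simulated directly: composing randomly chosen maps drives any starting point toward a sample of $\mu_{\bm{p}}$ (the standard chaos-game construction, which is not discussed in this paper but illustrates the measure). The middle-third Cantor IFS below is an assumed concrete example.

```python
import random

def chaos_game_sample(maps, probs, n_iter=60, seed=0):
    """Return a point approximately distributed according to the
    self-similar measure mu_p, by iterating x -> S_i(x), where the map
    S_i(x) = r*x + d is chosen with probability p_i at each step."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n_iter):
        r, d = rng.choices(maps, weights=probs)[0]
        x = r * x + d
    return x

# Middle-third Cantor IFS: S_1(x) = x/3, S_2(x) = x/3 + 2/3, p = (1/2, 1/2).
maps = [(1/3, 0.0), (1/3, 2/3)]
pts = [chaos_game_sample(maps, [0.5, 0.5], seed=s) for s in range(200)]
# Every sample lands in [0, 1/3] or [2/3, 1]: the central gap carries no mass.
```

Since the last map applied sends $[0,1]$ into $[0,1/3]\cup[2/3,1]$, every sample avoids the open middle third exactly, reflecting $\mu_{\bm{p}}((1/3,2/3))=0$.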
Given $\sigma=(\sigma_{1},\ldots,\sigma_{j})\in\\{1,...,m\\}^{j}$, we denote
$\sigma^{-}=(\sigma_{1},\ldots,\sigma_{j-1})\text{,
}S_{\sigma}=S_{\sigma_{1}}\circ\cdots\circ S_{\sigma_{j}}\text{ and
}r_{\sigma}=r_{\sigma_{1}}\cdots r_{\sigma_{j}}.$
For $t>0,$ put
$\Lambda_{t}=\\{\sigma:|r_{\sigma}|<t\leq|r_{\sigma^{-}}|\\}.$
The elements of $\Lambda_{t}$ are called the _words of generation $t$_. We
remark that in the literature it is more common to see this defined by the
rule $|r_{\sigma}|\leq t<|r_{\sigma^{-}}|$, but this essentially equivalent
choice is more convenient for our purposes.
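The words of generation $t$ can be enumerated mechanically from the contraction ratios alone. A minimal sketch, using a hypothetical two-map IFS with ratios $1/2$ and $1/4$ (any IFS with $0<|r_i|<1$ and $0<t\leq 1$ would do):

```python
def generation_words(ratios, t):
    """Enumerate Lambda_t = {sigma : |r_sigma| < t <= |r_sigma^-|},
    representing each word sigma as a tuple of map indices.
    Invariant: every word on the stack has |r_sigma| >= t, so each
    appended child automatically satisfies t <= |r_sigma^-|."""
    words, stack = [], [((), 1.0)]
    while stack:
        sigma, r = stack.pop()
        for i, ri in enumerate(ratios):
            child, rc = sigma + (i,), r * ri
            if abs(rc) < t:
                words.append(child)     # child just crossed below t
            else:
                stack.append((child, rc))
    return words

# Ratios 1/2 and 1/4 at generation t = 1/4:
words = generation_words([0.5, 0.25], 0.25)
# Lambda_{1/4} = {(0,1), (1,0), (1,1), (0,0,0), (0,0,1)} here.
```

Note that, unlike a fixed word length, the words of a given generation can have different lengths, which is precisely why this scale-based definition is needed outside the equicontractive case.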
The notions of net intervals and neighbour sets were first introduced in [3]
and extended in [12, 21]. We summarize the key ideas here.
Let $h_{1},\ldots,h_{s(t)}$ be the collection of distinct elements of the set
$\\{S_{\sigma}(0),S_{\sigma}(1):\sigma\in\Lambda_{t}\\}$ listed in strictly
ascending order and set
$\mathcal{F}_{t}=\\{[h_{j},h_{j+1}]:1\leq j<s(t)\text{ and
}(h_{j},h_{j+1})\cap K\neq\emptyset\\}.$
The elements of $\mathcal{F}_{t}$ are called the _net intervals of generation_
$t$. Note that $[0,1]$ is the (unique) net interval of any generation $t>1$
and denote by
$\mathcal{F}=\bigcup_{t>0}\mathcal{F}_{t}$
the set of all net intervals.
Given a net interval $\Delta$, we denote by $T_{\Delta}$ the unique similarity
$T_{\Delta}(x)=rx+a$ with $r>0$ such that
$T_{\Delta}([0,1])=\Delta.$
Of course, here $r=\operatorname{diam}(\Delta)$ and $a$ is the left endpoint
of $\Delta$.
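Net intervals can likewise be computed directly. The sketch below enumerates $\Lambda_{t}$ while tracking the composed similarity $S_{\sigma}(x)=r_{\sigma}x+d_{\sigma}$, and tests the condition $(h_{j},h_{j+1})\cap K\neq\emptyset$ by intersecting with the images $S_{\sigma}([0,1])$; this proxy is an assumption on our part, though it happens to be exact for the middle-third Cantor example used below.

```python
def net_intervals(maps, t):
    """Net intervals of generation t for an IFS of maps S_i(x) = r*x + d
    on [0,1].  Endpoints are S_sigma(0), S_sigma(1) over sigma in Lambda_t;
    an interval [h_j, h_{j+1}] is kept when its interior meets some image
    S_sigma([0,1]) (used here as a stand-in for meeting K)."""
    images, stack = [], [(1.0, 0.0)]
    while stack:
        r, d = stack.pop()
        for ri, di in maps:
            rc, dc = r * ri, r * di + d        # S_sigma o S_i as (r, d)
            if abs(rc) < t:
                images.append(tuple(sorted((dc, rc + dc))))
            else:
                stack.append((rc, dc))
    pts = sorted({e for iv in images for e in iv})
    return [(a, b) for a, b in zip(pts, pts[1:])
            if any(b > lo and a < hi for lo, hi in images)]

# Middle-third Cantor IFS at t = 1/3: four net intervals of generation 1/3,
# with the three gap intervals filtered out.
cantor = [(1/3, 0.0), (1/3, 2/3)]
nets = net_intervals(cantor, 1/3)
```

For the Cantor example the images $S_{\sigma}([0,1])$ coincide with the convex hulls of the pieces $S_{\sigma}(K)$, so the gap test is exact; for overlapping IFS this proxy may keep intervals whose interior misses $K$.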
###### Definition 3.1.
We will say that a similarity $f(x)=Rx+a$ is a _neighbour_ of
$\Delta\in\mathcal{F}_{t}$ if there exists some $\sigma\in\Lambda_{t}$ such
that $S_{\sigma}(K)\cap\Delta^{\circ}\neq\emptyset$ and
$f=T_{\Delta}^{-1}\circ S_{\sigma}$. In this case, we also say that
$S_{\sigma}$ _generates_ the neighbour $f$.
The _neighbour set_ of $\Delta$ is the maximal set
$\mathcal{V}_{t}(\Delta)=\\{f_{1},\ldots,f_{k}\\}$
where each $f_{i}=T_{\Delta}^{-1}\circ S_{\sigma_{i}}$ is a distinct neighbour
of $\Delta$. We denote by
$R_{\max}(\Delta):=\max\\{|R|:x\mapsto Rx+a\in\mathcal{V}_{t}(\Delta)\\}$
the maximum contraction factor of any neighbour of $\Delta$.
When the generation is implicit, we will often write $\mathcal{V}(\Delta)$.
Since $K=\bigcup_{\sigma\in\Lambda_{t}}S_{\sigma}(K)$, every net interval
$\Delta$ has a non-empty neighbour set.
###### Remark 3.2.
As explained in [21, Rem. 2.2], for an equicontractive IFS $\\{\lambda
x+d_{i}\\}_{i\in\mathcal{I}}$ with $0<\lambda<1$, our notion of neighbour set
is closely related to Feng’s neighbour and characteristic vector construction
[3]. Instead of normalizing by some global factor of the form $\lambda^{n}$,
we normalize locally with respect to $\operatorname{diam}(\Delta)$. In this
case, the words of generation $\lambda^{n-1}$ are the words of length $n$ and
the net intervals of generation $n$ (in Feng’s notation) have diameter
comparable to $\lambda^{n}$.
This is important since, outside the equicontractive case, there is no uniform
notion of an integer-valued generation.
Illustrative examples of this construction are given in Section 5.
Assume $\Delta\in\mathcal{F}_{t}$ has neighbour set $\\{f_{1},\ldots,f_{k}\\}$
and for each $i$, let $S_{\sigma_{i}}$ generate the neighbour $f_{i}$. The
_transition generation_ of $\Delta$, denoted $\operatorname{tg}(\Delta)$, is
given by
$\operatorname{tg}(\Delta)=\max\\{|r_{\sigma_{i}}|:1\leq i\leq k\\}.$
It is straightforward to verify that
$\operatorname{tg}(\Delta)=R_{\max}(\Delta)\operatorname{diam}(\Delta)$. The
_children_ of (_parent_) $\Delta$ are the net intervals of generation
$\operatorname{tg}(\Delta)$ contained in $\Delta$. We remark that if there is
only one child, $\Delta_{1}$, then
$\mathcal{V}(\Delta)\neq\mathcal{V}(\Delta_{1})$. Given $\Delta=[a,b],$ with
child $\Delta_{1}=[a_{1},b_{1}]$, we define the position index
$q(\Delta,\Delta_{1})=(a_{1}-a)/\operatorname{diam}\Delta$. The position index
will enable us to distinguish children with the same neighbour set.
The children of a net interval are locally determined by the neighbour set of
the net interval in the following sense.
###### Theorem 3.3 ([21], Thm. 2.8).
Let $\\{S_{i}\\}_{i=1}^{m}$ be an arbitrary IFS. Then for any
$\Delta\in\mathcal{F}_{t}$ with children $(\Delta_{1},\ldots,\Delta_{n})$ in
$\mathcal{F}_{\operatorname{tg}(\Delta)}$, the index $n$, neighbour sets
$\mathcal{V}(\Delta_{i})$, position indices $q(\Delta,\Delta_{i})$, and ratios
$\operatorname{tg}(\Delta_{i})/\operatorname{tg}(\Delta)$ depend only on
$\mathcal{V}(\Delta)$.
Thus much of the important information about the IFS is captured in the
behaviour of the neighbour sets. This motivates the construction of the
directed transition graph, $\mathcal{G}(\\{S_{i}\\}_{i=1}^{m})$, defined as
follows. The vertex set of $\mathcal{G}$, denoted $V(\mathcal{G})$, is the set
of distinct neighbour sets,
$V(\mathcal{G})=\\{\mathcal{V}(\Delta):\Delta\in\mathcal{F}\\}$. For each
parent/child pair of net intervals, $\Delta\in\mathcal{F}_{t}$ and
$\Delta_{i}\in\mathcal{F}_{\operatorname{tg}(\Delta)}$, we introduce an edge
$e=(\mathcal{V}_{t}(\Delta),\mathcal{V}_{\operatorname{tg}(\Delta)}(\Delta_{i}),q(\Delta,\Delta_{i}))$.
Here $\mathcal{V}_{t}(\Delta)$ is the source vertex and
$\mathcal{V}_{\operatorname{tg}(\Delta)}(\Delta_{i})$ is the target vertex. We
write $E(\mathcal{G})$ for the set of all edges. By Theorem 3.3, this
construction is well-defined since it depends only on the neighbour set of
$\Delta$.
An _(admissible) path_ $\eta$ in $\mathcal{G}$ is a sequence of edges
$\eta=(e_{1},\ldots,e_{n})$ in $\mathcal{G}$ where the target of $e_{i}$ is
the source of $e_{i+1}$. A path in $\mathcal{G}$ is a _cycle_ if it begins and
ends at the same vertex. We denote by $\Sigma_{0}$ the set of infinite paths
beginning at the root vertex $\mathcal{V}([0,1])$, and $\Sigma_{0}^{*}$ the
set of finite paths beginning at $\mathcal{V}([0,1])$.
Nested sequences of net intervals are in correspondence with finite paths in
$\Sigma_{0}^{*}$. Given $\Delta\in\mathcal{F}_{t}$, consider the sequence
$(\Delta_{0},\ldots,\Delta_{n})$ where $\Delta_{0}=[0,1]$,
$\Delta_{n}=\Delta$, and each $\Delta_{i}$ is a child of $\Delta_{i-1}$. By
the _symbolic representation_ of $\Delta$, we mean the finite path
$\eta=(e_{1},\ldots,e_{n})$ in $\mathcal{G}$ where
$e_{i}=\bigl{(}\mathcal{V}(\Delta_{i-1}),\mathcal{V}(\Delta_{i}),q(\Delta_{i-1},\Delta_{i})\bigr{)}\text{
for each }i=1,...,n.$
Conversely, if $\eta=(e_{1},\ldots,e_{n})$ is any finite path, we say that
$\eta$ is _realized_ by $(\Delta_{i})_{i=0}^{n}$ if each $\Delta_{i}$ is a
child of $\Delta_{i-1}$ and each
$e_{i}=(\mathcal{V}(\Delta_{i-1}),\mathcal{V}(\Delta_{i}),q(\Delta_{i-1},\Delta_{i}))$.
We denote the symbolic representation of $\Delta$ by $[\Delta]$.
###### Definition 3.4.
Given some $x\in K$, we say that an infinite path $\gamma\in\Sigma_{0}$ is a
_symbolic representation_ of $x$ if
$\\{x\\}=\bigcap_{i=1}^{\infty}\Delta_{i}$
where for each $n$, $[\Delta_{n}]$ is the symbolic representation of the
length $n$ prefix of $\gamma$, denoted by $\gamma|n$. We say that $x$ is an
_interior point_ of $K$ if $x$ has a unique symbolic representation.
If $x$ is not an interior point, then $x$ must be an endpoint of two distinct
net intervals at any sufficiently small scale.
###### Definition 3.5.
Let $\mathcal{G}$ be the transition graph of an IFS. We define the _edge
weight_ , $W:E(\mathcal{G})\to(0,1)$ by the rule that if edge $e$ has source
$\mathcal{V}(\Delta_{1})$ and target $\mathcal{V}(\Delta_{2})$, then
$W(e)=\operatorname{tg}(\Delta_{2})/\operatorname{tg}(\Delta_{1})$.
This function is well-defined by Theorem 3.3. We extend $W$ to finite paths by
putting $W(\eta)=W(e_{1})\cdots W(e_{n})$ when $\eta=(e_{1},\ldots,e_{n})$.
An important observation is that if $\Delta\in\mathcal{F}_{t}$ is any net
interval with symbolic representation $\eta$, then $W(\eta)\asymp t$, with
constants of comparability not depending on $\Delta$. While the above choice
of the weight for an edge is not unique with this property, a straightforward
argument shows that any such function must agree with $W$ on any cycle.
###### Definition 3.6.
We say that the IFS $\\{S_{i}\\}_{i=1}^{m}$ satisfies the _finite neighbour
condition_ if its transition graph is a finite graph.
Equivalently, there are only finitely many neighbours. We also say that the
associated self-similar measure satisfies the finite neighbour condition, even
though this condition does not depend on the choice of probabilities.
The finite neighbour condition was introduced in [12] and explored in more
detail in [21, Sec. 5]. In [12] it was shown that the finite neighbour
condition is equivalent to the generalized finite type condition holding with
respect to the invariant open set $(0,1)$ (see [17] for the original
definition of GFT) and hence satisfies the weak separation condition [17]. In
particular, all IFS that satisfy the open set condition or the finite type
condition with respect to $(0,1)$ (see [3] for the definition of finite type)
satisfy the finite neighbour condition. For simplicity, throughout the
remainder of this document, whenever we say that an IFS satisfies the finite
type condition, we always mean with respect to $(0,1)$.
This includes examples such as the iterated function systems
$\\{\rho x,\rho x+(1-\rho)\\}$
where $\rho$ is the reciprocal of a Pisot number, for which the associated
self-similar measures are the much studied Bernoulli convolutions (cf. [4],
[24] and the many references cited therein), as well as the overlapping
Cantor-like IFS
$\\{x/d+i(d-1)/md:i=0,1,...,m\\}$
where $d\geq 3$ is a natural number (see [11, 22]). For example, in the case
of the Bernoulli convolution with $\rho$ the reciprocal of the Golden mean,
there are six neighbour sets. These are listed in Section 5.1 and the
transition graph is given in Fig. 1.
A non-equicontractive example is given by the IFS $\\{\rho
x,rx+\rho(1-r),rx+1-r\\}$ where $\rho,r>0$ satisfy $\rho+2r-\rho r\leq 1$.
This was introduced in [18], where it was shown to satisfy the weak
separation condition. In fact,
this IFS satisfies the finite neighbour condition (see [17] or [21, Sec.
5.3]). Note that it does not satisfy the open set condition (due to the
existence of exact overlaps) and does not necessarily have commensurable
contraction factors, so it cannot be of finite type. See Fig. 3 for its
transition graph and Section 5.3.2 for more details about its structure. Other
examples of IFS satisfying the finite neighbour condition can also be found in
Section 5.
In [12, Thm. 4.4] it was proven, under the assumption that the self-similar
set is an interval, that the finite neighbour condition is equivalent to the
weak separation condition. It is unknown if the two properties coincide for
IFS in $\mathbb{R}$. Further details on these various separation conditions
for IFS can be found in [12].
### 3.2. Transition matrices
We now show how one can encode the measure of net intervals through the so-
called transition matrices.
Fix a total order on the set of all neighbours
$\\{f:f\in\mathcal{V}(\Delta),\Delta\in\mathcal{F}\\}$. Let $e\in
E(\mathcal{G})$ be an edge, say
$e=(\mathcal{V}(\Delta_{1}),\mathcal{V}(\Delta_{2}),q(\Delta_{1},\Delta_{2}))$.
Assume the neighbour sets are given by
$\mathcal{V}(\Delta_{1})=\\{f_{1},\ldots,f_{k}\\}$ and
$\mathcal{V}(\Delta_{2})=\\{g_{1},\ldots,g_{n}\\}$ where $f_{1}<\cdots<f_{k}$
and $g_{1}<\cdots<g_{n}$. We define the _transition matrix_ $T(e)$ as the non-
negative $k\times n$ matrix given by
(3.3) $T(e)_{i,j}=p_{\ell}$
if there exist an index $\ell\in\\{1,...,m\\}$ and a word $\sigma$ such that
$f_{i}$ is generated by $\sigma$ and $g_{j}$ is generated by $\sigma\ell$;
otherwise, set
$T(e)_{i,j}=0$. Note that this definition is slightly different than the
original definition; see [21, Sec. 5.2] for more detail concerning this.
It is clear from Theorem 3.3 that this definition depends only on the edge
$e$. If $\eta=(e_{1},\ldots,e_{n})$ is a path, we define
$T(\eta)=T(e_{1})\cdots T(e_{n}).$
We refer to these matrices as transition matrices, as well.
Recall that if $\sigma^{\prime}$ generates any neighbour of $\Delta_{2}$, then
necessarily $\sigma^{\prime}=\sigma\ell$ for some $\sigma$ which generates a
neighbour of $\Delta_{1}$; thus, every column of $T(e)$ has a positive entry.
More generally, if $\eta$ is a path, then $T(\eta)$ has a positive entry in
every column. However, it may not hold that each row of $T(\eta)$ has a
positive entry.
We continue to use the notation $\left\lVert T\right\rVert=\sum_{i,j}T_{ij}$
for the matrix $1$-norm of a non-negative matrix $T$.
The following relationship between the measure of net intervals and transition
matrices is known.
###### Proposition 3.7 ([21], Cor. 5.5).
Suppose $\Delta$ is a net interval with symbolic representation $\eta$. Then
$\mu_{\bm{p}}(\Delta)\asymp\left\lVert T(\eta)\right\rVert,$
with constants of comparability not depending on the choice of $\Delta$.
Thus the transition matrices encode the distribution of $\mu_{\bm{p}}$ on net
intervals.
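For non-negative matrices the entrywise norm $\left\lVert T\right\rVert=\sum_{i,j}T_{ij}$ is submultiplicative, which is what makes estimates such as $\left\lVert T(\eta_{1}\eta_{2})\right\rVert\leq\left\lVert T(\eta_{1})\right\rVert\left\lVert T(\eta_{2})\right\rVert$ available throughout. A quick numerical check, with hypothetical transition matrices standing in for real ones:

```python
import numpy as np

def path_norm(matrices):
    """||T(eta)|| = sum of the entries of T(e_1)...T(e_n), the norm used
    for non-negative transition matrices."""
    T = matrices[0]
    for M in matrices[1:]:
        T = T @ M
    return float(T.sum())

# Hypothetical non-negative transition matrices for two sub-paths.
eta1 = [np.array([[0.5, 0.0], [0.25, 0.25]])]
eta2 = [np.array([[0.5, 0.5], [0.0, 0.25]])]
lhs = path_norm(eta1 + eta2)             # ||T(eta1 eta2)||
rhs = path_norm(eta1) * path_norm(eta2)  # ||T(eta1)|| * ||T(eta2)||
# Submultiplicativity: lhs <= rhs holds for any non-negative matrices,
# since sum(AB) only collects the diagonal terms of sum(A)*sum(B).
```

Together with the lower bound of Lemma 3.8 below, this is what turns products of transition matrices into two-sided estimates for $\mu_{\bm{p}}$ on net intervals.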
We conclude this subsection by mentioning the following straightforward
property of transition matrices.
###### Lemma 3.8.
Fix $n\in\operatorname{{\mathbb{N}}}$, let $\gamma$ be a finite path of length
greater than $n$, and let $\gamma^{\prime}$ be the path obtained from $\gamma$
by deleting its first $n$ edges. We have $\left\lVert
T(\gamma^{\prime})\right\rVert\asymp\left\lVert T(\gamma)\right\rVert$ with
constants of comparability depending only on $n$.
###### Proof.
Write $\gamma=\eta\gamma^{\prime}$ where $\eta$ is a path of length $n$. Since
every transition matrix has a non-zero entry in each column, a straightforward
calculation shows that there exists some constant $a=a(\eta)$ such that
$\left\lVert T(\eta\gamma^{\prime})\right\rVert\geq a(\eta)\left\lVert
T(\gamma^{\prime})\right\rVert$. On the other hand, $\left\lVert
T(\eta\gamma^{\prime})\right\rVert\leq\left\lVert
T(\eta)\right\rVert\left\lVert T(\gamma^{\prime})\right\rVert$. But there are
only finitely many paths $\eta$ of length $n$, giving the result. ∎
### 3.3. Maximal loop classes and irreducibility
From this point on we will assume that $\mathcal{G}$ is the matrix product
system corresponding to an IFS that satisfies the finite neighbour condition.
Let $\mathcal{L}$ be an induced subgraph of $\mathcal{G}$ (i.e. $\mathcal{L}$
is the graph consisting of the vertices $V(\mathcal{L})$ and any edge $e\in
E(\mathcal{G})$ such that $e$ connects two vertices in $\mathcal{L}$). Of
course, the induced subgraph naturally inherits a matrix product system from
the full graph.
###### Definition 3.9.
We say that the subgraph $\mathcal{L}$ is a _loop class_ if for any vertices
$v_{1},v_{2}\in V(\mathcal{L})$, there is a non-empty directed path connecting
$v_{1}$ and $v_{2}$, and we call $\mathcal{L}$ _maximal_ if it is maximal with
this property.
Two maximal loop classes necessarily have disjoint vertex sets, but not all
vertices need to belong to a maximal loop class. However, given any symbolic
representation $\gamma=(\gamma_{i})_{i=0}^{\infty}$, there is a unique maximal
loop class $\mathcal{L}$ in which $\gamma$ is _eventually_, meaning there
exists some $N$ such that $\gamma^{\prime}:=(\gamma_{i})_{i=N}^{\infty}$ is an
element of $\Sigma(\mathcal{L})$.
We will let
$K_{\mathcal{L}}=\\{x\in K:x\text{ has a symbolic representation that is
eventually in }\mathcal{L}\\}.$
Every element of $K$ belongs to at least one set $K_{\mathcal{L}}$ for a
maximal loop class $\mathcal{L}$, and at most two such sets.
Abusing notation slightly, given $\gamma$ which is eventually in
$\mathcal{L}$, we write
$\lambda(\mathcal{L},\gamma)=\lim_{n\to\infty}\frac{\log\left\lVert
T(\gamma|n)\right\rVert}{\log W(\gamma|n)}.$
By Lemma 3.8, for $k\geq N$ we have $\left\lVert
T(\gamma|k)\right\rVert\asymp\left\lVert T(\gamma^{\prime}|k-N)\right\rVert$
where $\gamma^{\prime}$ is as above. Since, also, $W(\gamma|k)\asymp
W(\gamma^{\prime}|k-N)$, we have
$\lambda(\mathcal{L},\gamma)=\lambda(\mathcal{L},\gamma^{\prime})$ (and
similarly for upper and lower Lyapunov exponents), where $\gamma^{\prime}$ is
a path in $\mathcal{L}$, justifying our notation.
We are primarily interested in three types of maximal loop classes.
###### Definition 3.10.
1. (1)
We say that a maximal loop class is _irreducible_ if the corresponding matrix
product system is irreducible.
2. (2)
We say that a maximal loop class is _simple_ if all cycles share the same edge
set.
3. (3)
We say that a maximal loop class is an _essential class_ if any vertex
reachable from the maximal loop class by a directed path is also in the
maximal loop class.
For example, the IFS $\\{\rho x,\rho x+1-\rho\\}$ where $\rho$ is the
reciprocal of the Golden ratio has an essential class with three elements, and
two other singleton maximal loop classes. These loop classes are all
irreducible; for more details, see Section 5.1. Other examples are also given
in Section 5.
Note that irreducibility is a statement about the IFS and does not depend on
the choice of (non-zero) probabilities.
Any IFS satisfying the weak separation condition (such as those satisfying the
finite neighbour condition) has a unique essential class by [21, Prop. 3.3].
In fact, the finite neighbour condition can be characterized by the property
that the associated transition graph has a finite essential class [21, Thm.
5.3].
Moreover, the essential class is always irreducible; this is essentially shown
in [21, Lem. 3.9] (or [5, Lem. 6.4] in the equicontractive case), but we
include a self-contained proof here as an illustrative example:
###### Proposition 3.11.
Let $\mathcal{G}$ be the transition graph of an IFS satisfying the finite
neighbour condition with essential class $\mathcal{G}_{\operatorname{ess}}$.
Then $\mathcal{G}_{\operatorname{ess}}$ is irreducible.
###### Proof.
It suffices to show that for any
$v_{1},v_{2}\in\mathcal{G}_{\operatorname{ess}}$, $1\leq i\leq d(v_{1})$,
$1\leq j\leq d(v_{2})$, there exists a path $\eta$ from $v_{1}$ to $v_{2}$
such that $T(\eta)_{i,j}>0$. Let $\Delta\in\mathcal{F}$ be some net interval
with $\mathcal{V}(\Delta)=v_{1}$, let $\Delta_{0}\in\mathcal{F}_{t_{0}}$ be a
net interval with neighbour set in the essential class such that
$R_{\max}(\Delta_{0})$ is maximal and $\\#\mathcal{V}(\Delta_{0})$ maximal
among such net intervals. Let $\phi$ be a path from $\mathcal{V}(\Delta_{0})$
to $v_{2}$. Let $\xi$ generate neighbour $i$ of $\Delta$ and let
$\sigma\in\mathcal{I}^{*}$ have prefix $\xi$, $r_{\sigma}>0$ and
$S_{\sigma}([0,1])\subseteq\Delta$; such a $\sigma$ must necessarily exist
since $\Delta^{\circ}\cap S_{\xi}(K)\neq\emptyset$. Write
$\mathcal{V}(\Delta_{0})=\\{f_{1},\ldots,f_{k}\\}$ with $f_{1}<\cdots<f_{k}$
where each neighbour $f_{j}$ is generated by some word
$\omega_{j}\in\Lambda_{t_{0}}$. We have
$\Delta_{0}=[S_{\tau_{1}}(z_{1}),S_{\tau_{2}}(z_{2})]$ where
$\tau_{1},\tau_{2}\in\Lambda_{t_{0}}$ and $z_{1},z_{2}\in\\{0,1\\}$.
We now show that $S_{\sigma}(\Delta_{0})$ is indeed a net interval. Note that
the words $\sigma\tau_{j}$ and $\sigma\omega_{j}$ are in
$\Lambda_{r_{\sigma}t_{0}}$ by direct computation. Suppose for contradiction
$S_{\sigma}(\Delta_{0})$ is not a net interval. Without loss of generality,
let $\omega_{1}$ have
$|r_{\omega_{1}}|=R_{\max}(\Delta_{0})\operatorname{diam}(\Delta_{0})$. Since
$S_{\omega_{1}}(K)\cap\Delta_{0}^{\circ}\neq\emptyset$, we have
$S_{\sigma\omega_{1}}(K)\cap S_{\sigma}(\Delta_{0})^{\circ}\neq\emptyset$ so
there exists some net interval $\Delta^{\prime}\subseteq
S_{\sigma}(\Delta_{0})$ where the inclusion is proper and
$S_{\sigma\omega_{1}}(K)\cap(\Delta^{\prime})^{\circ}\neq\emptyset$. But then
$R_{\max}(\Delta^{\prime})>R_{\max}(\Delta_{0})$, contradicting the choice of
$\Delta_{0}$. Thus $\Delta_{1}:=S_{\sigma}(\Delta_{0})$ is indeed a net
interval.
Moreover, $\Delta_{1}$ has neighbours generated by the maps $S_{\sigma}\circ
S_{\omega_{i}}=S_{\sigma\omega_{i}}$, and since $r_{\sigma}>0$,
$T_{\Delta_{1}}^{-1}=T_{\Delta_{0}}^{-1}\circ S_{\sigma}^{-1}$ and thus
$\mathcal{V}(S_{\sigma}(\Delta_{0}))\supseteq\mathcal{V}(\Delta_{0})$.
Equality then follows by the maximality of $\\#\mathcal{V}(\Delta_{0})$.
As $\xi$ is a prefix of $\sigma$, write $\sigma=\xi\tau$ for some
$\tau\in\mathcal{I}^{*}$. Let $\Delta$ have symbolic representation
$\phi_{0}$. Since $\Delta_{1}\subseteq\Delta$, there exists $\eta_{1}$ such
that $\Delta_{1}$ has symbolic representation $\phi_{0}\eta_{1}$. Since each
neighbour of $\Delta_{1}$ is generated by a word
$\sigma\omega_{j}=\xi\tau\omega_{j}$, by definition of the transition matrix
and choice of $\xi$, row $i$ of the matrix $T(\eta_{1})$ is strictly positive.
But then $\eta:=\eta_{1}\phi$ is an admissible path from $v_{1}$ to $v_{2}$,
and since every column of a transition matrix has a positive entry, row $i$ of
the matrix $T(\eta)$ is strictly positive as well. ∎
###### Remark 3.12.
In fact, as we argued in the above proof, the essential class satisfies a
somewhat stronger form of irreducibility: for any
$v_{1},v_{2}\in\mathcal{G}_{\operatorname{ess}}$ and $1\leq j\leq d(v_{2})$,
there exists a path $\eta$ from $v_{1}$ to $v_{2}$ such that row $j$ of
$T(\eta)$ is strictly positive. This property is closely related to the key
feature of the quasi-product structure under the weak separation condition
demonstrated by Feng and Lau [7]. Moreover, under somewhat stronger hypotheses
(satisfied, for example, when the attractor $K$ is an interval), the path
$\eta$ can be chosen such that $T(\eta)$ is a positive matrix (see [11] for a
proof in the equicontractive case, but the general case follows similarly).
## 4\. Sets of local dimensions of self-similar measures
We continue to use the notation of the previous section. In particular, we
assume that $\mathcal{G}$ is the matrix product system associated with an IFS
that satisfies the finite neighbour condition.
### 4.1. Basic results about local dimensions and periodic points
The following notion is a well-studied way of quantifying the singularity of
the measure $\mu$ with respect to Lebesgue measure at a point
$x\in\operatorname{supp}\mu=K$.
###### Definition 4.1.
Let $x\in K$ be arbitrary. Then the _lower local dimension of $\mu$ at $x$_ is
given by
$\underline{\dim}_{\operatorname{loc}}\mu(x)=\liminf_{t\to
0}\frac{\log\mu(B(x,t))}{\log t}$
and the _upper local dimension_ is given similarly with the limit inferior
replaced by the limit superior. When the values of the upper and lower local
dimension agree, we call the shared value the _local dimension of $\mu$ at
$x$_.
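As a concrete instance of Definition 4.1 (an assumed example, not one from this paper): for the uniform measure on the middle-third Cantor set, $\mu(B(0,3^{-n}))=2^{-n}$, so the local dimension at $0$ is $\log 2/\log 3$. A short sketch evaluating the defining quotient along a sequence of radii:

```python
import math

def local_dim_along(mu_ball, radii):
    """Evaluate log mu(B(x,t)) / log t along a decreasing sequence of
    radii; the min/max over the sequence approximate the lower and upper
    local dimensions of Definition 4.1."""
    vals = [math.log(mu_ball(t)) / math.log(t) for t in radii]
    return min(vals), max(vals)

# Uniform Cantor measure at x = 0: mu(B(0, t)) = 2^(log t / log 3),
# which is exactly 2^-n when t = 3^-n.
mu_ball = lambda t: 2.0 ** (math.log(t) / math.log(3.0))
lo, hi = local_dim_along(mu_ball, [3.0 ** -n for n in range(1, 20)])
# lo == hi == log 2 / log 3 up to floating point.
```

Here the quotient is constant along the chosen radii, so upper and lower local dimensions coincide; for general self-similar measures with overlaps the two can genuinely differ, which is the phenomenon studied in this section.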
Intuitively, the multifractal analysis of self-similar sets satisfying the
finite neighbour condition is related to the multifractal analysis of the
corresponding matrix product system. However, the exact relationship is
somewhat more complicated to establish: while the Lyapunov exponent of a path
$\gamma$ depends only on the single sequence of edges determining $\gamma$,
the local dimension of $\mu_{\bm{p}}$ at a point $x\in K$ can also depend on
net intervals which are adjacent to net intervals containing $x$. This happens
when $x$ is the shared boundary point of two distinct net intervals, but it
can also happen when $x$ is approximated very well by boundary points (so that
balls $B(x,r)$ overlap significantly with neighbouring net intervals, for many
values of $r$).
A point $x\in K$ is said to be _periodic_ if it has a symbolic representation
that is eventually a periodic path. For such points, this issue with overlaps
is easy to resolve. A boundary point of a net interval is a periodic point and
all elements of a simple loop class are periodic points. Indeed, for each
simple loop class there is a cycle $\theta$ such that all elements in the loop
class have a symbolic representation of the form $\gamma_{0}\overline{\theta}$
where $\overline{\theta}$ is the infinite periodic path with cycle $\theta$.
If $x$ has two distinct symbolic representations, then $x$ is necessarily the
endpoint of a net interval, so the finite neighbour condition ensures that
both symbolic representations are eventually periodic paths. Note that a periodic point can
be an interior point (in the sense of Definition 3.4), but every non-periodic
point is interior.
By [21, Prop. 3.15] (see also [11, Prop. 2.7]), we have the following simple
formula for the local dimension of a periodic point:
###### Proposition 4.2 ([21], Prop. 3.15).
Suppose $x$ is an interior, periodic point with unique symbolic representation
$\gamma$ which is eventually in the loop class $\mathcal{L}$. Let $\theta$ be
any period of $\gamma$ and let $\overline{\theta}\in\Sigma(\mathcal{L})$
denote the path formed by repeating $\theta$ infinitely. Then the local
dimension exists at $x$ and is given by
$\dim_{\operatorname{loc}}\mu_{\bm{p}}(x)=\frac{\log\operatorname{sp}(T(\theta))}{\log
W(\theta)}=\lambda(\mathcal{L},\overline{\theta}).$
Otherwise, $x$ has two distinct symbolic representations with periods
$\theta_{1}$ and $\theta_{2}$ and
$\dim_{\operatorname{loc}}\mu_{\bm{p}}(x)=\min\left\\{\frac{\log\operatorname{sp}(T(\theta_{1}))}{\log
W(\theta_{1})},\frac{\log\operatorname{sp}(T(\theta_{2}))}{\log
W(\theta_{2})}\right\\}.$
###### Corollary 4.3.
If $x\in K_{\mathcal{L}}$ is a periodic point, then
$\dim_{\operatorname{loc}}\mu_{\bm{p}}(x)$ belongs to
$\bigcup_{i=1}^{m}\\{\lambda(\mathcal{L}_{i},\gamma):\gamma\in\Sigma(\mathcal{L}_{i})\\}$
where $\mathcal{L}_{1},\ldots,\mathcal{L}_{m}$ is a complete list of the
maximal loop classes in $\mathcal{G}$.
More generally, when the Lyapunov exponent exists or the local dimension
exists, we can relate the two notions.
###### Proposition 4.4.
Suppose $x\in K$ is an interior point with exactly one symbolic representation
$\gamma$ that is eventually in the maximal loop class $\mathcal{L}$.
1. (1)
Then
$\alpha_{\min}(\mathcal{L})\leq\underline{\lambda}(\mathcal{L},\gamma)\leq\overline{\dim}_{\operatorname{loc}}\mu_{\bm{p}}(x)\leq\overline{\lambda}(\mathcal{L},\gamma)\leq\alpha_{\max}(\mathcal{L}).$
2. (2)
If $\lambda(\mathcal{L},\gamma)$ exists, then
$\overline{\dim}_{\operatorname{loc}}\mu_{\bm{p}}(x)=\lambda(\mathcal{L},\gamma)$.
3. (3)
If $\dim_{\operatorname{loc}}\mu_{\bm{p}}(x)$ exists, then
$\underline{\lambda}(\mathcal{L},\gamma)=\dim_{\operatorname{loc}}\mu_{\bm{p}}(x)$.
###### Proof.
By assumption, $x$ belongs to $K_{\mathcal{L}}$ and $\mathcal{L}$ is unique
with this property.
1. (1)
We have already seen that
$\overline{\lambda}(\mathcal{L},\gamma)\leq\alpha_{\max}(\mathcal{L})$ and
$\underline{\lambda}(\mathcal{L},\gamma)\geq\alpha_{\min}(\mathcal{L})$ in
Theorem 2.10.
For any $t>0$, we have $B(x,t)\supseteq\Delta(\gamma|t)$ and therefore
$\displaystyle\overline{\dim}_{\operatorname{loc}}\mu_{\bm{p}}(x)=\limsup_{t\to
0}\frac{\log\mu_{\bm{p}}(B(x,t))}{\log t}$ $\displaystyle\leq\limsup_{t\to
0}\frac{\log\left\lVert T(\gamma|t)\right\rVert}{\log
t}=\overline{\lambda}(\mathcal{L},\gamma).$
If $x$ is not a boundary point of some net interval, since there are only
finitely many neighbour sets, there exists some $\rho>0$ and a monotonically
increasing sequence $(n_{k})_{k=1}^{\infty}$ with $n_{1}>N$ such that for each
$k\in\operatorname{{\mathbb{N}}}$ we have $B(x,\rho
W(\gamma|n_{k}))\subseteq\Delta(\gamma|n_{k})$. Since $\rho
W(\gamma|n_{k})\asymp W(\gamma|n_{k})$ we have
$\displaystyle\overline{\dim}_{\operatorname{loc}}\mu_{\bm{p}}(x)=\limsup_{t\to
0}\frac{\log\mu_{\bm{p}}(B(x,t))}{\log t}$
$\displaystyle\geq\limsup_{k\to\infty}\frac{\log\mu_{\bm{p}}(B(x,\rho
W(\gamma|n_{k})))}{\log\rho W(\gamma|n_{k})}$
$\displaystyle\geq\limsup_{k\to\infty}\frac{\log\left\lVert
T(\gamma|n_{k})\right\rVert}{\log W(\gamma|n_{k})}$
$\displaystyle\geq\underline{\lambda}(\mathcal{L},\gamma),$
as required.
Otherwise, $x$ is a boundary point with a unique symbolic representation. In
this case, for suitable $\rho>0$ and large $n$, $B(x,\rho W(\gamma|n))\cap
K_{\mathcal{L}}\subseteq\Delta(\gamma|n)$, so we can argue similarly.
2. (2)
This is immediate from (1), since in that case
$\underline{\lambda}(\mathcal{L},\gamma)=\overline{\lambda}(\mathcal{L},\gamma)$.
3. (3)
The same argument as in (1) shows that
$\underline{\dim}_{\operatorname{loc}}\mu_{\bm{p}}(x)\leq\underline{\lambda}(\mathcal{L},\gamma)$,
from which the result follows.
∎
### 4.2. Sets of local dimensions for simple and irreducible loop classes
We begin by noting that periodic points are dense in the set of local
dimensions.
###### Proposition 4.5.
Let $\mathcal{L}$ be an irreducible maximal loop class. Then the set of local
dimensions at interior periodic points is dense in
$[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$.
###### Proof.
This follows by slightly modifying the proof of Proposition 2.13 by choosing
the paths $\eta_{k}$ such that they are also interior paths. Thus the
corresponding point in $K_{\mathcal{L}}$ is interior periodic and has local
dimension equal to the symbolic local dimension by Proposition 4.4. Then the
result follows from Corollary 2.11, which states that
$\\{\underline{\lambda}(\mathcal{L},\gamma):\gamma\text{ eventually in
}\mathcal{L}\\}=[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$. ∎
Our next result establishes a converse to Proposition 4.4.
###### Theorem 4.6.
Let $\mathcal{L}$ be an irreducible, maximal loop class that is not simple.
Then
$[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]\subseteq\\{\dim_{\operatorname{loc}}\mu_{\bm{p}}(x):x\in
K_{\mathcal{L}},x\text{ interior}\\}.$
###### Proof.
Since $\mathcal{L}$ is not simple, there exists a path
$\xi\in\Sigma^{*}(\mathcal{L})$ such that if $\xi$ is realized by
$(\Delta_{i})_{i=0}^{m}$, then $\Delta_{m}\subseteq\Delta_{0}^{\circ}$. Let
$\alpha\in[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$ be
arbitrary. By Theorem 2.10, there exist some
$\gamma=(e_{n})_{n=1}^{\infty}\in\Sigma(\mathcal{L})$ and a sequence
$(n_{j})_{j=1}^{\infty}$ with $\lim_{j}n_{j+1}/n_{j}=1$ such that
$\lambda(\gamma)=\alpha$ and for each $j$, $\xi$ is a prefix of
$(e_{n_{j}},e_{n_{j}+1},\ldots)$. Let $\zeta_{0}$ be such that
$\zeta_{0}\gamma$ is a path in $\mathcal{G}$ beginning at the root vertex
$\mathcal{V}([0,1])$. By the choice of $\xi$, there exists a unique interior
point $x$ with symbolic representation $\zeta_{0}\gamma$. We will show that
$\dim_{\operatorname{loc}}\mu_{\bm{p}}(x)=\alpha$.
We first note that $B(x,t)\supseteq\Delta_{t}(x)$ where $\Delta_{t}(x)$ is the
unique net interval in generation $t$ containing $x$. Thus if
$\zeta_{0}e_{1}\ldots e_{n}$ is the symbolic representation of
$\Delta_{t}(x)$, then
$\mu_{\bm{p}}(B(x,t))\geq\mu_{\bm{p}}(\Delta_{t}(x))\asymp\left\lVert
T(\zeta_{0}e_{1}\ldots e_{n})\right\rVert\asymp\left\lVert T(e_{1}\ldots
e_{n})\right\rVert$
by Lemma 3.8. Hence for some constant $C>0$ we have
$\frac{\log\mu_{\bm{p}}(B(x,t))}{\log t}\leq\frac{\log C\left\lVert
T(e_{1}\ldots e_{n})\right\rVert}{\log t}.$
Since $\log t\asymp\log W(e_{1}\ldots e_{n})$, it follows that
$\overline{\dim}_{\operatorname{loc}}\mu_{\bm{p}}(x)\leq\alpha.$
To obtain the other inequality, we use the special properties of the path
$\gamma$. For each $k\in\operatorname{{\mathbb{N}}}$, let $\Delta^{(k)}$ be
the net interval with symbolic representation $\zeta_{0}e_{1}\ldots
e_{n_{k}-1}$. Let $\rho>0$ be such that for each
$k\in\operatorname{{\mathbb{N}}}$,
$B(x,\rho\operatorname{diam}(\Delta^{(k)}))\subseteq\Delta^{(k)}.$
Given $t>0$ sufficiently small, let $k$ be such that
$\rho\operatorname{diam}(\Delta^{(k+1)})\leq
t<\rho\operatorname{diam}(\Delta^{(k)})$, so that
$B(x,t)\subseteq
B(x,\rho\operatorname{diam}(\Delta^{(k)}))\subseteq\Delta^{(k)}.$
This ensures that $\mu_{\bm{p}}(B(x,t))\leq\mu_{\bm{p}}(\Delta^{(k)})$. We
also have
$\displaystyle t\geq\rho\operatorname{diam}(\Delta^{(k+1)})$
$\displaystyle\asymp W(\zeta_{0}e_{1}\ldots
e_{n_{k+1}-1})=W(\zeta_{0}e_{1}\ldots e_{n_{k}-1})W(e_{n_{k}}\ldots
e_{n_{k+1}-1})$ $\displaystyle\geq
W_{\min}^{n_{k+1}-n_{k}}W(\zeta_{0}e_{1}\ldots e_{n_{k}-1})\asymp
W_{\min}^{n_{k+1}-n_{k}}\operatorname{diam}(\Delta^{(k)}).$
Combining these observations, we see that there exist positive constants
$C_{1},C_{2}$ such that
$\displaystyle\frac{\log\mu_{\bm{p}}(B(x,t))}{\log t}$
$\displaystyle\geq\frac{\log\mu_{\bm{p}}(\Delta^{(k)})}{\log t}\geq\frac{\log
C_{1}+\log\left\lVert T(e_{1}\ldots
e_{n_{k}-1})\right\rVert}{\log\rho+\log\operatorname{diam}(\Delta^{(k+1)})}$
$\displaystyle\geq\frac{\log C_{1}+\log\left\lVert T(e_{1}\ldots
e_{n_{k}-1})\right\rVert}{(n_{k+1}-n_{k})\log
C_{2}+\log\operatorname{diam}(\Delta^{(k)})}.$
But $\log\operatorname{diam}(\Delta^{(k)})\asymp-n_{k}$ and
$\lim_{k}(n_{k+1}-n_{k})/n_{k}=0$ by choice of $\gamma$, so that
$\displaystyle\underline{\dim}_{loc}\mu_{\bm{p}}(x)$
$\displaystyle\geq\liminf_{k\to\infty}\frac{\log\left\lVert T(e_{1}\ldots
e_{n_{k}-1})\right\rVert}{\log\operatorname{diam}(\Delta^{(k)})}$
$\displaystyle\geq\liminf_{n\to\infty}\frac{\log\left\lVert T(e_{1}\ldots
e_{n})\right\rVert}{\log W(e_{1}\ldots e_{n})}=\alpha.$
We have thus shown that $\dim_{\operatorname{loc}}\mu_{\bm{p}}(x)=\alpha$, as
required. ∎
###### Remark 4.7.
If $\mathcal{L}$ is a simple loop class with interior points, it is clear that
the conclusions of the theorem also hold, with
$\alpha_{\min}(\mathcal{L})=\alpha_{\max}(\mathcal{L})$.
The preceding theorem gives us strong information about the set of attainable
local dimensions:
###### Corollary 4.8.
Let $\mathcal{L}$ be an irreducible, maximal loop class that is not simple.
Then
$\displaystyle[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$
$\displaystyle=\\{\dim_{\operatorname{loc}}\mu_{\bm{p}}(x):x\in
K_{\mathcal{L}},x\text{ interior}\\}$
$\displaystyle=\\{\overline{\dim}_{\operatorname{loc}}\mu_{\bm{p}}(x):x\in
K_{\mathcal{L}},x\text{ interior}\\}.$
###### Proof.
This follows by combining Theorem 4.6 and Proposition 4.4, noting that the set
of local dimensions is contained in the set of upper local dimensions. ∎
###### Remark 4.9.
When $\mathcal{L}$ is the essential class, this result was shown in [21].
Moreover, in that case, $[\alpha_{\min},\alpha_{\max}]$ is also the set of
lower local dimensions.
However, outside the essential class, the same statement need not hold for the
lower local dimension in place of the upper local dimension; the set of lower
local dimensions can be strictly larger. Consider the example from Section
5.3.4; this example and the result here are treated in [11, Sec. 6]. In that
example, with our notation, if $\mathcal{L}$ is the irreducible maximal loop
class not equal to the essential class, then
$[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]=\left[\frac{\log
7}{\log 4},\frac{\log 14}{\log 4}\right],$
while
$\\{\underline{\dim}_{loc}\mu_{\bm{p}}(x):x\in K_{\mathcal{L}}\text{
interior}\\}=\left[\frac{1}{2},\frac{\log 14}{\log 4}\right].$
Here $1/2$ is also the local dimension of a boundary point (that is not
interior).
It would be interesting to know if the set of lower local dimensions at
interior points is always an interval.
If $\mathcal{L}$ is a loop class that contains interior points, then the set
of local dimensions at these interior points is given by the interval
$[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$. If $\mathcal{L}$
does not contain any interior points, then $\mathcal{L}$ is a simple loop
class. In this situation, it may hold that every $x\in K_{\mathcal{L}}$ has
two symbolic representations, and the local dimension is always given by the
symbolic representation of the adjacent path which is not eventually in
$\mathcal{L}$. This motivates the following definition:
###### Definition 4.10.
We say that a loop class $\mathcal{L}$ is _non-degenerate_ if $\mathcal{L}$ is
not simple, or if $\mathcal{L}$ is simple with period $\theta$ and there
exists some $x\in K_{\mathcal{L}}$ such that
$\dim_{\operatorname{loc}}\mu_{\bm{p}}(x)=\lambda(\overline{\theta}).$
We say that $\mathcal{L}$ is _degenerate_ otherwise.
We emphasize that, unlike simplicity or irreducibility, degeneracy depends on
the choice of probabilities. For an example of this phenomenon, see Section
5.3.3.
The point is that if $\mathcal{L}$ is a degenerate loop class, then the local
dimension at any point $x\in K_{\mathcal{L}}$ is given by the Lyapunov
exponent of a path not in $\mathcal{L}$. We now have the following corollary,
which holds under the assumptions that all maximal loop classes are either
irreducible or simple.
###### Corollary 4.11.
Let $\\{S_{i}\\}_{i\in\mathcal{I}}$ be an IFS satisfying the finite neighbour
condition with maximal loop classes $\\{\mathcal{L}_{i}\\}_{i=1}^{\ell}$.
Suppose each $\mathcal{L}_{i}$ is either irreducible or simple. Let
$\\{\mathcal{L}_{i}\\}_{i=1}^{\ell^{\prime}}$ denote the non-degenerate
maximal loop classes. Then
$\\{\overline{\dim}_{\operatorname{loc}}\mu_{\bm{p}}(x):x\in
K\\}=\\{\dim_{\operatorname{loc}}\mu_{\bm{p}}(x):x\in
K\\}=\bigcup_{i=1}^{\ell^{\prime}}[\alpha_{\min}(\mathcal{L}_{i}),\alpha_{\max}(\mathcal{L}_{i})].$
###### Proof.
If $x$ is a periodic point, then
$\dim_{\operatorname{loc}}\mu_{\bm{p}}(x)=\lambda(\mathcal{L},\gamma)$ for
some path $\gamma$ in some non-degenerate loop class $\mathcal{L}$ according
to Proposition 4.2. Otherwise, $x$ must be an interior point of some
$K_{\mathcal{L}}$ where $\mathcal{L}$ is not simple. By Corollary 4.8,
$\overline{\dim}_{\operatorname{loc}}\mu_{\bm{p}}(x)=\alpha$ for some
$\alpha\in[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$.
On the other hand, if $\mathcal{L}$ is a non-simple loop class, then Theorem
4.6 shows that each
$\alpha\in[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$ is attained
as a local dimension. If $\mathcal{L}$ is a simple loop class, then
$\alpha_{\min}(\mathcal{L})=\alpha_{\max}(\mathcal{L})$ and this value is
attained as a local dimension precisely when $\mathcal{L}$ is non-degenerate.
∎
###### Remark 4.12.
The authors do not know if this result continues to hold without the
irreducibility assumption. However, we are not aware of any examples in
$\operatorname{{\mathbb{R}}}$ satisfying the weak separation condition which
do not satisfy the hypotheses for Corollary 4.11.
## 5\. Examples of IFS satisfying the finite neighbour condition
Throughout this section, for $\mathcal{L}$ a maximal loop class and
$\mu_{\bm{p}}$ a self similar measure, we will write
$\displaystyle\mathcal{D}(\mathcal{L})$
$\displaystyle=\\{\lambda(x):x\in\Sigma(\mathcal{L})\\}$
$\displaystyle\mathcal{D}(\mu_{\bm{p}})$
$\displaystyle=\\{\dim_{\operatorname{loc}}\mu_{\bm{p}}(x):x\in\operatorname{supp}\mu_{\bm{p}}\\}.$
### 5.1. Bernoulli convolutions
One much studied example of an equicontractive IFS of finite type is the IFS
with two contractions,
(5.1) $\\{\rho x,\rho x+1-\rho\\},$
with $\rho$ the reciprocal of the Golden mean. Feng [4] (see also [10, 11])
computed the neighbour sets (or characteristic vectors in his terminology)
with respect to the original net interval construction.
In our slightly modified setting, there are six neighbour sets. These are:
$\displaystyle v_{1}$ $\displaystyle=\\{x\\}$ $\displaystyle v_{2}$
$\displaystyle=\\{x\cdot(1+\rho)\\}$ $\displaystyle v_{3}$
$\displaystyle=\\{x\cdot(1+\rho)-\rho\\}$ $\displaystyle v_{4}$
$\displaystyle=\\{x\cdot(2+\rho),x\cdot(2+\rho)-(1+\rho)\\}$ $\displaystyle v_{5}$
$\displaystyle=\\{x\cdot(3+2\rho)-(1+\rho)\\}$ $\displaystyle v_{6}$
$\displaystyle=\\{x\cdot(1+\rho),x\cdot(1+\rho)-\rho\\}$
The weight function is given by $W(e)=\rho$ for all edges $e$. The essential
class has $V(\mathcal{G}_{\operatorname{ess}})=\\{v_{4},v_{5},v_{6}\\}$ and
there are two other maximal loop classes, $\mathcal{L}_{1}$ and
$\mathcal{L}_{2}$, which are the simple loops with vertex sets
$V(\mathcal{L}_{1})=\\{v_{2}\\}$ and $V(\mathcal{L}_{2})=\\{v_{3}\\}$. Both
simple loop classes are non-degenerate since 0 and 1 are interior points. We
have $K_{\operatorname{ess}}=(0,1)$, $K_{\mathcal{L}_{1}}=\\{0\\}$, and
$K_{\mathcal{L}_{2}}=\\{1\\}$. See Fig. 1 for the transition graph as well as
the associated transition matrices.
Edge | Weight | Transition Matrix
---|---|---
$e_{1}$ | $\rho$ | $\begin{pmatrix}p\end{pmatrix}$
$e_{2}$ | $\rho$ | $\begin{pmatrix}1-p\end{pmatrix}$
$e_{3}$ | $\rho$ | $\begin{pmatrix}1-p&p\end{pmatrix}$
$e_{4}$ | $\rho$ | $\begin{pmatrix}p\end{pmatrix}$
$e_{5}$ | $\rho$ | $\begin{pmatrix}1-p&p\end{pmatrix}$
$e_{6}$ | $\rho$ | $\begin{pmatrix}1-p\end{pmatrix}$
$e_{7}$ | $\rho$ | $\begin{pmatrix}1-p&p\end{pmatrix}$
$e_{8}$ | $\rho$ | $\begin{pmatrix}1-p&p\end{pmatrix}$
$e_{9}$ | $\rho$ | $\begin{pmatrix}p\\\ 1-p\end{pmatrix}$
$e_{10}$ | $\rho$ | $\begin{pmatrix}p&0\\\ 1-p&p\end{pmatrix}$
$e_{11}$ | $\rho$ | $\begin{pmatrix}1-p&p\\\ 0&1-p\end{pmatrix}$
$e_{12}$ | $\rho$ | $\begin{pmatrix}p&0\\\ 0&1-p\end{pmatrix}$
Figure 1. Transition graph for the Golden mean Bernoulli convolution
Since the essential class is always irreducible and the non-essential maximal
loop classes are simple, the set of local dimensions is a union of a possibly
non-singleton interval along with at most two isolated points. The
corresponding sets of Lyapunov exponents are
$\mathcal{D}(\mathcal{L}_{1})=\left\\{\frac{\log p}{\log\rho}\right\\}\text{
and
}\mathcal{D}(\mathcal{L}_{2})=\left\\{\frac{\log(1-p)}{\log\rho}\right\\}.$
These are also the local dimensions at $0$ and $1$ respectively since $0$ and
$1$ are interior points, so we have
$\mathcal{D}(\mathcal{L}_{1}),\mathcal{D}(\mathcal{L}_{2})\subseteq\mathcal{D}(\mu_{\bm{p}})$.
Now, for $p\leq 1/2$, note that the transition matrix of the cycle
$(e_{11},e_{12})$ has spectral radius $(1-p)^{2}$ and weight $\rho^{2}$. The
Lyapunov exponent corresponding to this path is $\frac{\log(1-p)}{\log\rho}$
so that
$\mathcal{D}(\mathcal{L}_{2})\subseteq\mathcal{D}(\mathcal{G}_{\operatorname{ess}})$.
Similarly if $p\geq 1/2$, then the cycle $(e_{10},e_{12})$ has corresponding
Lyapunov exponent $\frac{\log p}{\log\rho}$ and
$\mathcal{D}(\mathcal{L}_{1})\subseteq\mathcal{D}(\mathcal{G}_{\operatorname{ess}})$.
In particular, when $p=1/2$, $\mathcal{D}(\mu_{\bm{p}})$ is a closed
interval, and when $p\neq 1/2$, $\mathcal{D}(\mu_{\bm{p}})$ is a closed
interval along with at most one additional point.
When $p\neq 1/2$, we know in general, by a short argument in [8], that
$\mathcal{D}(\mu_{\bm{p}})$ must contain an isolated point corresponding to
either $x=0$ or $x=1$, so $\mathcal{D}(\mu_{\bm{p}})$ is precisely a closed
interval along with an isolated point.
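The spectral-radius claims above are straightforward to verify numerically. The following sketch (assuming NumPy is available; the specific values of $p$ are arbitrary choices on either side of $1/2$) checks both cycles against the transition matrices in Fig. 1:

```python
import numpy as np

def spectral_radius(M):
    # Largest eigenvalue modulus of a square matrix.
    return max(abs(np.linalg.eigvals(M)))

def essential_matrices(p):
    # Transition matrices for the essential-class edges (from Fig. 1).
    T10 = np.array([[p, 0.0], [1 - p, p]])
    T11 = np.array([[1 - p, p], [0.0, 1 - p]])
    T12 = np.array([[p, 0.0], [0.0, 1 - p]])
    return T10, T11, T12

rho = (np.sqrt(5) - 1) / 2  # reciprocal of the Golden mean

# For p <= 1/2 the cycle (e11, e12) has spectral radius (1-p)^2 and
# weight rho^2, so its Lyapunov exponent is log(1-p)/log(rho).
p = 0.3
T10, T11, T12 = essential_matrices(p)
sp = spectral_radius(T11 @ T12)
assert np.isclose(sp, (1 - p) ** 2)
assert np.isclose(np.log(sp) / np.log(rho ** 2), np.log(1 - p) / np.log(rho))

# For p >= 1/2 the cycle (e10, e12) has spectral radius p^2, giving
# Lyapunov exponent log(p)/log(rho).
p = 0.7
T10, T11, T12 = essential_matrices(p)
assert np.isclose(spectral_radius(T10 @ T12), p ** 2)
```

The assertions confirm that, on the corresponding side of $p=1/2$, the Lyapunov exponent of the boundary loop class is already attained by a cycle inside the essential class.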
### 5.2. Testud measures
Consider the IFS given by the maps
$\displaystyle S_{1}(x)$ $\displaystyle=\frac{x}{4}$ $\displaystyle S_{2}(x)$
$\displaystyle=\frac{x}{4}+\frac{1}{4}$ $\displaystyle S_{3}(x)$
$\displaystyle=\frac{x}{4}+\frac{1}{2}$ $\displaystyle S_{4}(x)$
$\displaystyle=\frac{x}{4}+\frac{3}{4}$ $\displaystyle S_{5}(x)$
$\displaystyle=-\frac{x}{4}+\frac{1}{4}$ $\displaystyle S_{6}(x)$
$\displaystyle=-\frac{x}{4}+\frac{1}{2}$
This example is treated in [23, Sec. 6.2]. For each $i$, we have
$S_{i}([0,1])=[j/4,(j+1)/4]$ for some $j\in\\{0,1,2,3\\}$. There are two
neighbour sets,
$\displaystyle v_{1}$ $\displaystyle=\\{x\\}$ $\displaystyle v_{2}$
$\displaystyle=\\{-x+1,x\\}.$
The transition graph is given in Fig. 2, and there is the essential class
$\mathcal{G}_{\operatorname{ess}}$ with vertex set $\\{v_{2}\\}$ and a non-
simple irreducible maximal loop class $\mathcal{L}$ with vertex set
$\\{v_{1}\\}$.
Every cycle in $\mathcal{L}$ is a concatenation of the edges $e_{1}$ and
$e_{2}$; since the corresponding transition matrices are $1\times 1$, we have
$\displaystyle\mathcal{D}(\mathcal{L})$
$\displaystyle=\Bigl{\\{}\frac{\log(p_{3}^{n}p_{4}^{m})}{-\log 4^{n+m}}:n\geq
0,m\geq 0,n+m\geq 1\Bigr{\\}}$
$\displaystyle=\Bigl{[}\frac{\log\max\\{p_{3},p_{4}\\}}{-\log
4},\frac{\log\min\\{p_{3},p_{4}\\}}{-\log 4}\Bigr{]}.$
Similarly, the cycles in $\mathcal{G}_{\operatorname{ess}}$ are arbitrary
concatenations of edges in $\\{e_{5},e_{6},e_{7},e_{8}\\}$. Now, under the
assumption that $p_{1}=p_{4}+p_{5}$ and $p_{2}=p_{3}+p_{6}$, if
$\eta=(e_{i_{1}},e_{i_{2}},\ldots,e_{i_{k}})$ is any path in the essential
class with $n$ edges in $\\{e_{5},e_{8}\\}$ and $m$ edges in
$\\{e_{6},e_{7}\\}$, one may show that
$\operatorname{sp}T(\eta)=p_{1}^{n}p_{2}^{m}$. Thus
$\displaystyle\mathcal{D}(\mathcal{G}_{\operatorname{ess}})$
$\displaystyle=\Bigl{\\{}\frac{\log(p_{1}^{n}p_{2}^{m})}{-\log 4^{n+m}}:n\geq
0,m\geq 0,n+m\geq 1\Bigr{\\}}$
$\displaystyle=\Bigl{[}\frac{\log\max\\{p_{1},p_{2}\\}}{-\log
4},\frac{\log\min\\{p_{1},p_{2}\\}}{-\log 4}\Bigr{]}.$
We therefore have
$\displaystyle\mathcal{D}(\mu_{\bm{p}})=\Bigl{[}\frac{\log\max\\{p_{1},p_{2}\\}}{-\log
4},\frac{\log\min\\{p_{1},p_{2}\\}}{-\log
4}\Bigr{]}\cup\Bigl{[}\frac{\log\max\\{p_{3},p_{4}\\}}{-\log
4},\frac{\log\min\\{p_{3},p_{4}\\}}{-\log 4}\Bigr{]}.$
In particular, if $p_{1}<p_{2}<p_{3}<p_{4}$, then $\mathcal{D}(\mu_{\bm{p}})$
is a disjoint union of two non-trivial closed intervals.
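The identity $\operatorname{sp}T(\eta)=p_{1}^{n}p_{2}^{m}$ can be checked numerically for a sample path. A minimal sketch (assuming NumPy; the probability values below are an arbitrary choice satisfying $p_{1}=p_{4}+p_{5}$ and $p_{2}=p_{3}+p_{6}$, with matrices taken from Fig. 2):

```python
import numpy as np
from functools import reduce

# Probabilities chosen so that p1 = p4 + p5 and p2 = p3 + p6.
p1, p2, p3, p4, p5, p6 = 0.25, 0.25, 0.1, 0.15, 0.1, 0.15

# Transition matrices for the essential-class edges (from Fig. 2).
T = {
    5: np.array([[p4, 0.0], [p5, p1]]),
    6: np.array([[p3, 0.0], [p6, p2]]),
    7: np.array([[p2, p6], [0.0, p3]]),
    8: np.array([[p1, p5], [0.0, p4]]),
}

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# A sample path with n = 2 edges in {e5, e8} and m = 2 edges in {e6, e7}.
path = [5, 6, 7, 8]
M = reduce(np.matmul, [T[i] for i in path])
assert np.isclose(spectral_radius(M), p1 ** 2 * p2 ** 2)
```

Since every edge has weight $1/4$, the Lyapunov exponent of such a path is $\log(p_{1}^{n}p_{2}^{m})/(-\log 4^{n+m})$, as in the formula above.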
Note that the other examples treated in [23] can be analysed similarly.
Edge | Weight | Transition Matrix
---|---|---
$e_{1}$ | $1/4$ | $\begin{pmatrix}p_{3}\end{pmatrix}$
$e_{2}$ | $1/4$ | $\begin{pmatrix}p_{4}\end{pmatrix}$
$e_{3}$ | $1/4$ | $\begin{pmatrix}p_{5}&p_{1}\end{pmatrix}$
$e_{4}$ | $1/4$ | $\begin{pmatrix}p_{6}&p_{2}\end{pmatrix}$
$e_{5}$ | $1/4$ | $\begin{pmatrix}p_{4}&0\\\ p_{5}&p_{1}\end{pmatrix}$
$e_{6}$ | $1/4$ | $\begin{pmatrix}p_{3}&0\\\ p_{6}&p_{2}\end{pmatrix}$
$e_{7}$ | $1/4$ | $\begin{pmatrix}p_{2}&p_{6}\\\ 0&p_{3}\end{pmatrix}$
$e_{8}$ | $1/4$ | $\begin{pmatrix}p_{1}&p_{5}\\\ 0&p_{4}\end{pmatrix}$
Figure 2. Transition graph for the Testud IFS
### 5.3. Other examples
#### 5.3.1. Cantor-like measures
Consider the family of IFS given by
$\bigl{\\{}S_{j}(x)=\frac{x}{r}+\frac{j}{mr}(r-1):0\leq j\leq m\bigr{\\}}$
for integers $m,r$ satisfying $2\leq r\leq m$. This family includes a rescaled
version of the three-fold convolution of the middle-third Cantor measure,
which was the earliest example of a self-similar measure known to exhibit
isolated points in the set of local dimensions [15]. The transition graph
consists of an essential class along with two simple maximal loop classes
$\mathcal{L}_{1}$ and $\mathcal{L}_{2}$, where $K_{\mathcal{L}_{1}}=\\{0\\}$,
$K_{\mathcal{L}_{2}}=\\{1\\}$, and
$K_{\mathcal{G}_{\operatorname{ess}}}=(0,1)$. The loops $\mathcal{L}_{1}$ and
$\mathcal{L}_{2}$ consist of single edges with $1\times 1$ transition
matrices, and
$\displaystyle\mathcal{D}(\mathcal{L}_{1})$ $\displaystyle=\frac{\log
p_{0}}{-\log r}$ $\displaystyle\mathcal{D}(\mathcal{L}_{2})$
$\displaystyle=\frac{\log p_{m}}{-\log r}.$
For appropriately chosen probabilities, these singletons contribute the
isolated points in the set of local dimensions for the self-similar measure
$\mu_{\bm{p}}$ and the essential class contributes a closed interval of
dimensions. See [11] for more details.
#### 5.3.2. An example of Lau and Wang
By nature of the definition, an IFS of finite type must have logarithmically
commensurable contraction factors. Here is an example of an IFS satisfying the
finite neighbour condition which does not have commensurable contraction
factors.
The IFS $\\{\rho x,rx+\rho(1-r),rx+1-r\\}$ where $\rho+2r-\rho r\leq 1$ was
seen to satisfy the WSC in [18], but it is not of finite type when $\rho$ and
$r$ are non-commensurable. For simplicity, we consider the case $\rho=1/3$ and
$r=1/4$; for a more general treatment, this family was studied in [21, Sec.
5.2].
There are 5 neighbour sets given by
$\displaystyle v_{1}$ $\displaystyle=\\{x\\}$ $\displaystyle v_{2}$
$\displaystyle=\\{4x/3\\}$ $\displaystyle v_{3}$
$\displaystyle=\\{3x/2-1/2\\}$ $\displaystyle v_{4}$
$\displaystyle=\\{3x,4x-3\\}$ $\displaystyle v_{5}$
$\displaystyle=\\{x,3x\\}.$
The transition graph and transition matrices are given in Fig. 3. One can see
that there is only one maximal loop class, which is the essential class; thus,
the set of local dimensions is a closed interval. For more details on the
computations of the set of attainable local dimensions, we refer the reader to
[21].
Edge | Weight | Transition Matrix
---|---|---
$e_{1}$ | $1/4$ | $\begin{pmatrix}p_{3}\end{pmatrix}$
$e_{2}$ | $1/3$ | $\begin{pmatrix}p_{1}\end{pmatrix}$
$e_{3}$ | $1/4$ | $\begin{pmatrix}p_{2}\end{pmatrix}$
$e_{4}$ | $1/3$ | $\begin{pmatrix}p_{2}&p_{1}\end{pmatrix}$
$e_{5}$ | $1/3$ | $\begin{pmatrix}p_{1}\end{pmatrix}$
$e_{6}$ | $1/4$ | $\begin{pmatrix}p_{2}\end{pmatrix}$
$e_{7}$ | $1/3$ | $\begin{pmatrix}p_{2}&p_{1}\end{pmatrix}$
$e_{8}$ | $1/4$ | $\begin{pmatrix}p_{3}\end{pmatrix}$
$e_{9}$ | $1/4$ | $\begin{pmatrix}p_{2}\end{pmatrix}$
$e_{10}$ | $1/3$ | $\begin{pmatrix}1\\\ p_{1}\end{pmatrix}$
$e_{11}$ | $1/3$ | $\begin{pmatrix}0&1\\\ p_{2}&p_{1}\end{pmatrix}$
$e_{12}$ | $3/4$ | $\begin{pmatrix}0&1\\\ p_{3}&0\end{pmatrix}$
Figure 3. Transition graph for the example of Lau and Wang
#### 5.3.3. A non-equicontractive finite type example
Here is an example which satisfies the finite type condition without equal
contraction ratios.
Take $\rho=(\sqrt{5}-1)/2$, the reciprocal of the Golden mean. Consider the
IFS given by the maps
$\displaystyle S_{1}(x)$ $\displaystyle=\rho x$ $\displaystyle S_{2}(x)$
$\displaystyle=\rho^{2}x+\rho-\rho^{2}$ $\displaystyle S_{3}(x)$
$\displaystyle=\rho^{2}x+(1-\rho^{2})$
with probabilities $(p_{i})_{i=1}^{3}$. This IFS has 7 neighbour sets given by
$\displaystyle v_{1}$ $\displaystyle=\\{x\\}$ $\displaystyle v_{2}$
$\displaystyle=\\{(2+\rho)x\\}$ $\displaystyle v_{3}$
$\displaystyle=\\{x,(1+\rho)x-(1+\rho)\\}$ $\displaystyle v_{4}$
$\displaystyle=\\{(2+\rho)x,(3+2\rho)x-(1+\rho)\\}$ $\displaystyle v_{5}$
$\displaystyle=\\{(1+\rho)x-\rho,(2+\rho)x,(2+\rho)x-(1+\rho)\\}$
$\displaystyle v_{6}$ $\displaystyle=\\{(1+\rho)x,(2+\rho)x,(2+\rho)x-1\\}$
and
$v_{7}=\\{(2+\rho)x,(2+\rho)x-(1+\rho),(3+2\rho)x-2(1+\rho),(3+2\rho)x-(1+\rho)\\}.$
There are three simple non-essential maximal loop classes, with vertex sets
$V(\mathcal{L}_{1})=\\{v_{1}\\}$, $V(\mathcal{L}_{2})=\\{v_{2}\\}$, and
$V(\mathcal{L}_{3})=\\{v_{3}\\}$. The essential class has vertex set
$V(\mathcal{G}_{\operatorname{ess}})=\\{v_{4},v_{5},v_{6},v_{7}\\}$. The
transition graph and transition matrices are given in Fig. 4.
A direct computation shows that
$\mathcal{D}(\mathcal{L}_{1})=\mathcal{D}(\mathcal{L}_{3})=\frac{\log
p_{3}}{2\log\rho}$ and $\mathcal{D}(\mathcal{L}_{2})=\frac{\log
p_{1}}{\log\rho}$. Thus $\mathcal{D}(\mu_{\bm{p}})$ consists of a possibly
non-singleton interval along with at most two isolated points. Both
$K_{\mathcal{L}_{1}}$ and $K_{\mathcal{L}_{2}}$ contain interior points, so
they are non-degenerate. However, every point $x\in K_{\mathcal{L}_{3}}$
has two symbolic representations of the form
$(\underbrace{e_{1},\ldots,e_{1}}_{n},e_{3},e_{6},e_{6},\ldots)\text{ and
}(\underbrace{e_{1},\ldots,e_{1}}_{n},e_{1},e_{2},e_{4},e_{4},\ldots)$
for some $n\geq 0$. Thus for any $x\in K_{\mathcal{L}_{3}}$, we have
$\dim_{\operatorname{loc}}\mu_{\bm{p}}(x)=\min\Bigl{\\{}\frac{\log
p_{1}}{\log\rho},\frac{\log p_{3}}{2\log\rho}\Bigr{\\}}$
and when the minimum is not attained at $\frac{\log p_{3}}{2\log\rho}$,
$\mathcal{L}_{3}$ is a degenerate loop class. However, this does not impact
the set of possible local dimensions.
Suppose in particular that the probabilities satisfy $p_{1}^{2}>p_{2}$ and
$p_{3}>p_{2}$. Then the cycle $(e_{10},e_{11})$ in the essential class has
$\operatorname{sp}T(e_{10},e_{11})=p_{1}^{2}$ and $W(e_{10},e_{11})=\rho^{2}$,
so
$\mathcal{D}(\mathcal{L}_{2})\subseteq\mathcal{D}(\mathcal{G}_{\operatorname{ess}}).$
Similarly, the cycle $(e_{12},e_{13})$ in the essential class has
$\operatorname{sp}T(e_{12},e_{13})=p_{3}$ and $W(e_{12},e_{13})=\rho^{2}$, so
$\mathcal{D}(\mathcal{L}_{1})=\mathcal{D}(\mathcal{L}_{3})\subseteq\mathcal{D}(\mathcal{G}_{\operatorname{ess}}).$
Therefore
$\mathcal{D}(\mu_{\bm{p}})=\mathcal{D}(\mathcal{G}_{\operatorname{ess}})$ is a
closed interval for such probabilities.
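Both cycle computations can be verified numerically. A minimal sketch (assuming NumPy; the probabilities are an arbitrary choice satisfying $p_{1}^{2}>p_{2}$ and $p_{3}>p_{2}$, with matrices taken from Fig. 4):

```python
import numpy as np

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# Probabilities with p1^2 > p2 and p3 > p2.
p1, p2, p3 = 0.5, 0.2, 0.3

# Transition matrices for the relevant essential-class edges (from Fig. 4).
T10 = np.array([[0, 1, 0], [0, p1, 0], [p2, 0, p1]])
T11 = np.array([[0, 0, 1], [0, p1, 0], [p3, 0, 0]])
T12 = np.array([[0, 0, 1, 0], [p2, 0, 0, p1], [0, p3, 0, 0]])
T13 = np.array([[0, 1, 0], [0, 0, 1], [p3, 0, 0], [p2, 0, p1]])

# Cycle (e10, e11): spectral radius p1^2, weight rho^2.
assert np.isclose(spectral_radius(T10 @ T11), p1 ** 2)
# Cycle (e12, e13): spectral radius p3, weight rho^2.
assert np.isclose(spectral_radius(T12 @ T13), p3)
```

Since both cycles have weight $\rho^{2}$, their Lyapunov exponents are $\log p_{1}/\log\rho$ and $\log p_{3}/(2\log\rho)$ respectively, matching $\mathcal{D}(\mathcal{L}_{2})$ and $\mathcal{D}(\mathcal{L}_{1})$.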
Edge | Weight | Transition Matrix
---|---|---
$e_{1}$ | $\rho^{2}$ | $\begin{pmatrix}p_{3}\end{pmatrix}$
$e_{2}$ | $\rho$ | $\begin{pmatrix}p_{1}\end{pmatrix}$
$e_{3}$ | $\rho$ | $\begin{pmatrix}p_{2}&p_{1}\end{pmatrix}$
$e_{4}$ | $\rho$ | $\begin{pmatrix}p_{1}\end{pmatrix}$
$e_{5}$ | $\rho$ | $\begin{pmatrix}p_{2}&p_{1}\end{pmatrix}$
$e_{6}$ | $\rho$ | $\begin{pmatrix}0&1\\\ p_{3}&0\end{pmatrix}$
$e_{7}$ | $\rho$ | $\begin{pmatrix}0&1&0\\\ p_{2}&0&p_{1}\end{pmatrix}$
$e_{8}$ | $\rho$ | $\begin{pmatrix}0&1&0\\\ p_{2}&0&p_{1}\end{pmatrix}$
$e_{9}$ | $\rho$ | $\begin{pmatrix}0&1\\\ p_{2}&p_{1}\\\ p_{3}&0\end{pmatrix}$
$e_{10}$ | $\rho$ | $\begin{pmatrix}0&1&0\\\ 0&p_{1}&0\\\ p_{2}&0&p_{1}\end{pmatrix}$
$e_{11}$ | $\rho$ | $\begin{pmatrix}0&0&1\\\ 0&p_{1}&0\\\ p_{3}&0&0\end{pmatrix}$
$e_{12}$ | $\rho$ | $\begin{pmatrix}0&0&1&0\\\ p_{2}&0&0&p_{1}\\\ 0&p_{3}&0&0\end{pmatrix}$
$e_{13}$ | $\rho$ | $\begin{pmatrix}0&1&0\\\ 0&0&1\\\ p_{3}&0&0\\\ p_{2}&0&p_{1}\end{pmatrix}$
Figure 4. Transition graph for the non-equicontractive finite type example
#### 5.3.4. An example with the set of lower local dimensions not equal to
the set of upper local dimensions
The IFS with $S_{i}(x)=x/4+d_{i}/12$ for $d_{i}=i$ when $i=0,1,\ldots,5$,
$d_{6}=8$, $d_{7}=9$, is known to be of finite type [11, 19] and satisfies the
finite neighbour condition. The essential class is a single vertex with four
outgoing edges, and there are two additional loop classes: a simple loop class
with one vertex, along with a non-simple irreducible loop class with three
vertices.
This example is notable since the set of lower local dimensions in the non-
essential irreducible loop class need not coincide with the set of upper local
dimensions (see Remark 4.9).
#### 5.3.5. A Pisot reciprocal Bernoulli convolution with a non-simple non-
essential loop class
Another interesting example is the Bernoulli convolution with parameter
$\rho$, where $\rho$ is the reciprocal of the Pisot root of $x^{3}-x^{2}-1$.
This finite type IFS has five maximal loop classes: the essential class with
46 elements, another irreducible loop class with 23 elements, and three
additional simple loop classes. For more details on this IFS, we refer the
reader to [9].
## References
* [1] Robert Cawley and R. Daniel Mauldin, _Multifractal decompositions of Moran fractals_ , Adv. Math. 92 (1992), no. 2, 196–236.
* [2] De-Jun Feng, _Lyapunov exponents for products of matrices and multifractal analysis. Part I: Positive matrices_ , Isr. J. Math. 138 (2003), no. 1, 353–376.
* [3] by same author, _Smoothness of the $L^{q}$-spectrum of self-similar measures with overlaps_, J. Lond. Math. Soc. 68 (2003), no. 01, 102–118.
* [4] by same author, _The limited Rademacher functions and Bernoulli convolutions associated with Pisot numbers_ , Adv. Math. 195 (2005), no. 1, 24–101.
* [5] by same author, _Lyapunov exponents for products of matrices and multifractal analysis. Part II: General matrices_ , Isr. J. Math. 170 (2009), no. 1, 355–394.
* [6] De-Jun Feng and Ka-Sing Lau, _The Pressure Function for Products of Non-negative Matrices_ , Math. Res. Lett. 9 (2002), no. 3, 363–378.
* [7] by same author, _Multifractal formalism for self-similar measures with weak separation condition_ , J. Math. Pures Appl. 92 (2009), no. 4, 407–428.
* [8] Kathryn E. Hare and Kevin G. Hare, _Local Dimensions of Overlapping Self-Similar Measures_ , Real Anal. Exchange 44 (2019), no. 2, 247.
* [9] Kathryn E. Hare, Kevin G. Hare, and Kevin R. Matthews, _Local dimensions of measures of finite type: Appendix_ , arXiv:1504.00510 [math] (2015).
* [10] by same author, _Local dimensions of measures of finite type_ , J. Fractal Geom. 3 (2016), no. 4, 331–376.
* [11] Kathryn E. Hare, Kevin G. Hare, and Michael K.S. Ng, _Local dimensions of measures of finite type II: Measures without full support and with non-regular probabilities_ , Can. J. Math. 70 (2018), no. 4, 824–867.
* [12] Kathryn E. Hare, Kevin G. Hare, and Alex Rutar, _When the Weak Separation Condition implies the Generalized Finite Type Condition_ , Proc. Amer. Math. Soc. (to appear).
* [13] H. George E. Hentschel and Itamar Procaccia, _The infinite number of generalized dimensions of fractals and strange attractors_ , Physica D: Nonlinear Phenomena 8 (1983), no. 3, 435–444.
* [14] Einar Hille and Ralph S. Phillips, _Functional analysis and semi-groups_ , American Mathematical Society, Providence, R.I., 1957.
* [15] Tian-You Hu and Ka-Sing Lau, _Multifractal Structure of Convolution of the Cantor Measure_ , Adv. Appl. Math. 27 (2001), no. 1, 1–16.
* [16] Ka-Sing Lau and Sze-Man Ngai, _Multifractal Measures and a Weak Separation Condition_ , Adv. Math. 141 (1999), no. 1, 45–96.
* [17] by same author, _A generalized finite type condition for iterated function systems_ , Adv. Math. 208 (2007), no. 2, 647–671.
* [18] Ka-Sing Lau and Xiang-Yang Wang, _Iterated function systems with a weak separation condition_ , Studia Math. 161 (2004), no. 3, 249–268.
* [19] Sze-Man Ngai and Yang Wang, _Hausdorff dimension of self-similar sets with overlaps_ , J. Lond. Math. Soc. 63 (2001), no. 3, 655–672.
* [20] Norbert Patzschke, _Self-Conformal Multifractal Measures_ , Adv. Appl. Math. 19 (1997), no. 4, 486–513.
* [21] Alex Rutar, _Geometric and Combinatorial Properties of Self-similar Multifractal Measures_ , arXiv:2008.00197 [math] (2020).
* [22] Pablo Shmerkin, _A Modified Multifractal Formalism for a Class of Self-similar Measures with Overlap_ , Asian J. Math. 9 (2005), no. 3, 323–348.
* [23] Benoît Testud, _Phase transitions for the multifractal analysis of self-similar measures_ , Nonlinearity 19 (2006), no. 5, 1201–1217.
* [24] Péter P. Varjú, _Recent progress on Bernoulli convolutions_ , Apollo Cambridge Repository (2018).
* [25] Martin P.W. Zerner, _Weak Separation Properties for Self-Similar Sets_ , Proc. Amer. Math. Soc. 124 (1996), no. 11, 3529–3539.
# Clustering Future Scenarios Based on Predicted Range Maps
Matthew Davidow, Cory Merow, Judy Che-Castaldo, Toryn L. J. Schafer,
Marie-Christine Düker, Derek Corcoran, David S. Matteson
(November 2020)
###### Summary
1. 1.
Predictions of biodiversity trajectories under climate change are crucial for
acting effectively to maintain the diversity of species. In many
ecological applications, future predictions are made under various global
warming scenarios as described by a range of different climate models. The
outputs of these various predictions call for a reliable interpretation.
2. 2.
We propose an interpretable and flexible two-step methodology to measure the
similarity between predicted species range maps and cluster the future
scenario predictions utilizing a spectral clustering technique.
3. 3.
We find that clustering based on ecological impact (predicted species range
maps) is mainly driven by the amount of warming. We contrast this with
clustering based only on predicted climate features, which is driven mainly by
climate models.
4. 4.
The differences between these clusterings illustrate that it is crucial to
incorporate ecological information to understand the relevant differences
between climate models. The findings of this work can be used to better
synthesize forecasts of biodiversity loss under the wide spectrum of results
that emerge when considering potential future biodiversity loss.
Key-words: biodiversity; clustering; similarity measures; future scenarios;
animal species; climate change.
## 1 Introduction
Predicting ecological responses to a rapidly changing climate is essential to
enact effective conservation policies (Parry et al., 2007; Hannah et al.,
2013, 2020). Future range maps of various species can be predicted based on
predicted future patterns of climate (Burrows et al., 2014; Jones and Cheung,
2015; Molinos et al., 2016). However, predicting future patterns of climate is
a challenging task due to the many sources of uncertainty, and a plethora of
climate predictions are possible that are consistent with various unknown
factors (e.g. differing human policy responses, climate models). We are
interested in the analysis of comparing the outputs of these various
predictions and deducing common patterns among predictions.
We make use of climate predictions from the Coupled Model Intercomparison
Project 6 (CMIP6). This project provides multiple climate predictions which
vary by the underlying global climate model (GCM) and what representative
concentration pathway (RCP) is used. An RCP is a greenhouse gas concentration
trajectory (Eyring et al., 2016). Four such trajectories are included in
CMIP6, varying in the quantity of greenhouse gas emissions to capture the
uncertainty of future emissions. We refer to the RCP trajectory of least
emission as “optimistic”, and refer to the most pessimistic trajectory as the
“extreme” scenario. In this work we refer to a _scenario_ as a (GCM, RCP)
pair; these scenarios represent uncertainty both in the evolution of the climate and in future greenhouse gas emissions.
In order to interpret the differences among climate predictions, we propose a
methodology to cluster the scenarios. We create such a clustering both from
the climate features, and from predicted range maps for 1101 mammalian
species. These predicted range maps are based on the predicted climate
features, the details of these range maps are discussed in Section 2. These
clusterings reveal the important differences between climate models, such as
whether the global climate model or the RCP differences are the most salient
features. The clustering will also group the scenarios into interpretable
collections, such as an “optimistic” collection of scenarios of lesser
ecological impact, and an “extreme” collection of scenarios of greater
ecological impact.
Clustering the scenarios based on these predicted range maps is a difficult
task due to the discrete and high dimensional nature of the range maps.
Although Principal Component Analysis (PCA) is a common tool to analyze data,
it has two significant drawbacks in this setting. One is that PCA implicitly
measures distance between range maps in Euclidean space, whereas we present a
flexible alternative, allowing for any similarity or distance between range
maps. Secondly, it is not clear how to incorporate information from multiple
species into a PCA-based approach.
Currently, a popular metric of change in species richness due to climate
change is climate velocity (Burrows et al., 2014; Jones and Cheung, 2015;
Molinos et al., 2016). However, these climate velocities rely heavily on
climate information while ignoring ecological data. Such ecological
information, such as species’ exposure to climate conditions not found in
their current niches, are important factors to predict the species future
ranges (Trisos et al., 2020). We incorporate such ecological data by
clustering based on predicted range maps based on models of historical ranges.
We will contrast this ecologically driven clustering with a climate driven
clustering to emphasise the importance of incorporating ecological
information. This climate driven clustering will use only predicted climate
features taken from CMIP6, such as annual mean temperature and annual mean
precipitation. By comparing the climate driven clustering and the ecological
one obtained from the range map similarities, we demonstrate the need for
incorporating ecological information. Clustering based only on climate
features is a similar but distinct task to the ecological based clustering;
the climate features are continuous whereas the ecological range maps are
binary. Previously these scenarios have been clustered using the climate features by averaging them over global regions (Giorgi and Francisco, 2000; Cannon, 2015). However, spatially averaging the climate
features this way loses significant information about the spatial variability
of the features.
We propose a flexible two step approach to cluster both predicted species
range maps and predicted climate features. The first step is to measure the
pairwise similarity between the prediction maps. The second step is to use the
pairwise similarities for spectral clustering, whose implementation is
discussed in Section 2. This two-step procedure is highly flexible: any similarity measure between range maps can be used.
We propose to measure the similarity between range maps as the cosine
similarity of the range maps. This choice allows for considerable flexibility;
the modeller can weight absences and presences separately, and give certain
sets of cells higher importance. In addition the cosine similarity is
interpretable and comparable across species; it is always in the range
$[-1,1]$. This interpretability allows for a simple method to combine
information across species, as will be discussed in Section 2.2.
## 2 Materials and Methods
We first describe prior modelling work to predict the future range maps given
climate information. Then we discuss our methodology for clustering the
scenarios: first based on predicted range maps, and second based on predicted climate features. These clusterings reveal interpretable relationships between the scenarios, and the contrast between the two demonstrates the importance of incorporating ecological information.
### 2.1 Data: Predicted Range Map Modelling
A set of 9 GCMs is used from CMIP6 (Eyring et al., 2016; Stouffer et al., 2017), which span four different levels of RCPs. Five climate features are chosen from the set of 19 commonly used in WorldClim (Fick and Hijmans, 2017), selected to minimize correlations among them. These five climate features are
annual mean temperature, temperature seasonality (standard deviation of
temperature), annual precipitation, precipitation seasonality (standard
deviation), and precipitation in the driest quarter of a year.
The five chosen climate features are used to fit a Poisson point process model
to explain the present day range maps (Merow et al., 2013; Elith et al.,
2011). The present-day occurrences of 1101 mammals are obtained from Miller (2020), whose range maps contain at least 10 unique presence cells on a 10 km grid.
This Poisson point process model was used to predict future spatial occurrence
based on these five predicted climate values. Binary maps are obtained from
these abundance maps by thresholding based on the $5$th percentile of
predicted values at training presences. This approach is used to make
predictions on all 1101 mammals using the 34 different sets of predicted
climate features. We make use of these predicted range maps to measure the
similarities between the 34 different climate scenarios.
### 2.2 Range Map Clustering Methodology
We present a novel methodology to cluster scenarios based on the binary
presence maps such as those shown in Figure 1(b). We now discuss the procedure
for achieving this scenario clustering. This clustering illuminates the
important similarities between scenarios, for instance if GCM or RCP is the
main differentiating feature among scenarios, and additionally gives insight
into the variation among scenarios.
#### 2.2.1 Range Map Notation
For notational simplicity we focus the presentation of the methodology on a
single species. Furthermore, we suppose our maps are on an $n_{r}$ by $n_{c}$
grid represented as $(r,c):r=1,\ldots,n_{r},c=1,\ldots,n_{c}$. We denote
$B_{s}$ as the binary presence map of this species according to scenario $s$,
to be more precise $B_{s}(r,c)=1$ if the cell at $(r,c)$ represents a
presence, and $B_{s}(r,c)=0$ otherwise. We recognize it is important to
consider how these range maps differ from the present day as will be discussed
further. For this reason we denote by $P$ the present day map. Similarly as
for $B(r,c)$, we write $P(r,c)=1$ if the cell at $(r,c)$ is presently
occupied, and $0$ otherwise. We let the binary map $A$ (same size as $P$ and
each $B_{s}$) denote the background set, where valid absences may occur. For
instance, the set $A$ can take the value 1 only when there is land, or
alternatively only on the same continent or regional areas as $P$. We have
chosen to take $A$ as the union of the original map and all scenarios, thus
$A(r,c)=1$ if $P(r,c)=1$, or there is a scenario $s$ such that $B_{s}(r,c)=1$.
#### 2.2.2 Cosine Similarity
In order to interpret the differences between range map predictions, a
similarity or distance measure is called for between pairs of range maps. The
quantification of the similarity or distance between range maps allows us to
cluster the scenarios, yielding an interpretable result.
There exists a plethora of alternative measures to quantify the similarity or
distance between these presence maps (Visser and De Nijs, 2006; Wilson, 2011;
Hagen, 2002; Hill et al., 2013; Gritti et al., 2013). The Hellinger distance
and Kullback-Leibler divergence rely on a probabilistic interpretation, and
thus it is difficult to incorporate absences into these measures (Wilson,
2011). The Kappa statistic (Hagen, 2002) can be used to measure similarities
between categorical maps, however it is not as clear how to weight important
cells, such as novel absences. In addition, the Kappa statistic does not
directly incorporate the relative frequencies of absences and presences. For
instance if two maps both predict all presences except a single absence cell,
these two maps will have a negative Kappa statistic if their one absence cell
differs. However, for our purposes we would like to consider such a pair of
maps very similar. The Wasserstein distance (or Earth Mover’s Distance) (Peyré
et al., 2019; Kranstauber et al., 2017) is an attractive alternative which
captures the idea of movement of a species. However, the Wasserstein distance
does not model the disappearance of regions (as opposed to the shift/movement
of regions), and computing the distance via optimal discrete transport (a
linear program of quadratic size in the number of present cells) proved too
slow or even infeasible (no solution found to the linear program) in all but
the smallest maps (the maps with the smallest number of presence cells).
We chose the cosine similarity function, which has several attractive
features. It is flexible; the modeller can weight absences and presences
separately, and the modeller has the flexibility to give certain sets of cells
higher importance. The cosine similarity is interpretable and comparable
across species as we describe below.
We compute the pairwise similarity between range maps as the cosine similarity
of their weighted range maps. The motivation and details for weighting the
range maps is discussed in Section 2.2.3. In short, we construct a weighted
map $W_{s}$ based on the binary range map $B_{s}$ and present day map, $P$.
For a pair of scenarios $s,s^{\prime}$ with corresponding binary range maps
$B_{s},B_{s^{\prime}}$ and present day map $P$, we measure their similarity as
the cosine similarity of their weighted maps,
$\text{CS}(W_{s},W_{s^{\prime}})$. We note that any map on an $n_{r}$ by $n_{c}$ grid can be interpreted as a vector of length $n_{r}\times n_{c}$.
Then the cosine similarity between two vectors $W_{s},W_{s^{\prime}}$ can be
written as
$\text{CS}(W_{s},W_{s^{\prime}})=\frac{W_{s}\cdot
W_{s^{\prime}}}{\|W_{s}\|_{2}\|W_{s^{\prime}}\|_{2}}=\frac{\sum\limits_{r=1}^{n_{r}}\sum\limits_{c=1}^{n_{c}}W_{s}(r,c)W_{s^{\prime}}(r,c)}{\|W_{s}\|_{2}\|W_{s^{\prime}}\|_{2}},\hskip
5.69046pt\text{ with }\hskip
5.69046pt\|W_{s}\|_{2}=\Big{(}\sum\limits_{r=1}^{n_{r}}\sum\limits_{c=1}^{n_{c}}W_{s}(r,c)^{2}\Big{)}^{1/2}.$
(1)
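Computationally, eq. (1) amounts to flattening the two weighted maps and taking a normalised dot product. A minimal sketch in Python (the function name is illustrative, not from the paper's code):

```python
import numpy as np

def cosine_similarity_maps(w_s, w_sp):
    """Cosine similarity between two weighted maps, as in eq. (1).

    Each map is an (n_r, n_c) array; flattening it into a vector of
    length n_r * n_c leaves the value unchanged.
    """
    v, vp = w_s.ravel(), w_sp.ravel()
    return float(v @ vp / (np.linalg.norm(v) * np.linalg.norm(vp)))
```

By construction the value lies in $[-1,1]$: identical maps give $1$, and maps with weights of opposite sign everywhere give $-1$.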
#### 2.2.3 Weighted Map
Weighting the cells of the binary maps is required to quantitatively measure
the similarity between range maps. For each scenario $s$, we denote the
corresponding weighted map $W_{s}$. The value of $W_{s}(r,c)$ is 0 if
$A(r,c)=0$, but when $A(r,c)=1$ we choose one of four values for $W_{s}(r,c)$
depending on whether the cell represents a presence/absence for the present
day, and a presence/absence for the scenario. These four choices are shown in
Table 1.
| $P(r,c)=1$ | $P(r,c)=0$
---|---|---
$B(r,c)=1$ | $p_{\text{keep}}{}$ | $p_{\text{new}}{}$
$B(r,c)=0$ | $a_{\text{new}}{}$ | $a_{\text{keep}}{}$
Table 1: Cell Weighting Values for $W(r,c)$ given $A(r,c)=1$.
The choice of weightings in Table 1 is dependent on the use of cosine
similarity. For instance using cosine similarity but representing presences
with the value 1 and absences with 0 is inappropriate because any two maps
will have positive similarity, and this choice will also have the property
that absences and presences are counted differently in the denominator of the
cosine similarity.
These four cases (reading row by row) represent unchanged (kept)
presences, new presences, new absences, and unchanged absences respectively,
where “keep” and “new” are with respect to the present day distribution, $P$.
For example, a cell corresponding to $p_{\text{keep}}{}$ is a cell that is
present in $P$, and is kept present according to scenario $B_{s}$. Presences
are given positive weights, absences negative weights, and we choose
$|a_{\text{new}}{}|>|a_{\text{keep}}{}|$, to emphasize those cells whose
ecological suitability for this species is vanishing. These “lost” cells
corresponding to novel absences are particularly important; they represent
cells where the climate is changing so drastically that the species cannot
continue to live there. In addition these cells should be weighted higher
because we are more confident about their prediction; by definition they exist
within the training data’s presences. By contrast the cells corresponding to
$p_{\text{new}}{}$ represent regions where the species are predicted to move
to, however such predictions are more uncertain as the movement of a species
is complex and not directly taken into account by the Poisson point process
model. Thus we also make the choice $|p_{\text{keep}}{}|>|p_{\text{new}}{}|$.
We make the choice
$|a_{\text{new}}{}|=|p_{\text{keep}}{}|=1,|a_{\text{keep}}{}|=|p_{\text{new}}{}|=0.5$
to represent the fact we are more confident about all cells in the region of
present day occurrences (those cells corresponding to $P(r,c)=1$). However, we
emphasize the flexibility of our model, alternative choices can be made
depending on the modeler’s goals. A visualization of this weighting scheme is
shown in Figure 1.
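The weighting scheme of Table 1 can be sketched in NumPy as follows, using the weight values chosen above ($|a_{\text{new}}|=|p_{\text{keep}}|=1$, $|a_{\text{keep}}|=|p_{\text{new}}|=0.5$; the function name is illustrative):

```python
import numpy as np

# Weights from Table 1: presences positive, absences negative,
# with |a_new| = |p_keep| = 1 and |a_keep| = |p_new| = 0.5.
P_KEEP, P_NEW = 1.0, 0.5
A_NEW, A_KEEP = -1.0, -0.5

def weighted_map(B, P, A):
    """Build the weighted map W_s from scenario map B, present-day
    map P, and background set A (all binary (n_r, n_c) arrays)."""
    W = np.zeros_like(B, dtype=float)
    W[(B == 1) & (P == 1)] = P_KEEP   # kept presences
    W[(B == 1) & (P == 0)] = P_NEW    # new presences
    W[(B == 0) & (P == 1)] = A_NEW    # new ("lost") absences
    W[(B == 0) & (P == 0)] = A_KEEP   # kept absences
    W[A == 0] = 0.0                   # outside the background set
    return W
```

Alternative weight choices plug in directly by changing the four constants.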
(a) Present Day Range Map
(b) Scenario Range Map
(c) Overlap of Present Day and Scenario
Figure 1: Illustration for weighted presences and absences. Presences are
shown in blue, absences in red/pink. The overlap with the present day in shown
in darker regions which represent more significant cells, which are dark blue
kept presences, and dark red new absences, which represent “lost” cells. A
strength of our proposed methodology is the flexibility to weight these cells
differently
The computation and resultant cosine similarity using these weighted maps are
highly interpretable; when two scenarios agree on the presence or absence of a
cell, this cell has a positive contribution to the cosine similarity, whereas
the cell has a negative contribution when the two scenarios disagree. The
resulting similarity is always in the range $[-1,1]$ (for any choice of
weightings), and takes the value $1$ when the two maps are identical and $-1$
if the two maps completely disagree on presences, as long as
$|a_{\text{new}}{}|=|p_{\text{keep}}{}|,|a_{\text{keep}}{}|=|p_{\text{new}}{}|$,
which was argued for previously.
#### 2.2.4 Spectral Clustering
For all species we compute the pairwise scenario similarity matrix by
computing the cosine similarity, eq. (1) on each pair of scenarios. That is
for each species $m$, we construct the $n_{s}$-by-$n_{s}$ matrix
$S^{m}(s,s^{\prime})=CS(W^{m}_{s},W^{m}_{s^{\prime}})$, where $n_{s}$ is the
number of scenarios and $W^{m}_{s}$ is the weighting map for species $m$ on
scenario $s$. Spectral clustering is well suited to cluster the scenarios
based on this similarity measure (Von Luxburg, 2007).
The properties of spectral clustering are understood from a graph theory
perspective. The similarity matrix $S^{m}$ can be thought of as an undirected
graph whose nodes are the scenarios and edge weights between a pair of
scenarios $s$ and $s^{\prime}$ given by $S^{m}(s,s^{\prime})$. Spectral
clustering has best performance on sparse graphs, thus for the dense
similarity matrix $S^{m}$, the first step is to sparsify it by means of taking
the $k$-nearest neighbor graph, that is retaining an edge from $s$ to
$s^{\prime}$ only if $s^{\prime}$ is within the top $k$ neighbors of $s$ (i.e.
it is within the top $k$ nodes of maximal similarity to $s$). However this
would lead to a directed graph as this definition of nearest neighbor is not
symmetric. Thus we retain the undirected edge from $s$ to $s^{\prime}$ if
either $s^{\prime}$ is within the top $k$ neighbors of $s$, or vice-versa. The
retained edges are still weighted by the similarity of their endpoints. We
denote by $E$ this matrix of retained weights.
The main tool of spectral clustering is the graph Laplacian $L=D-E$, where $D$
is a diagonal matrix of node degree, $D_{ii}=\sum_{j=1}^{n_{s}}E(i,j)$. We use
the random-walk normalized graph Laplacian, $L_{rw}=D^{-1}L$ as suggested in
Von Luxburg (2007). Both $L$ and $L_{rw}$ have several useful properties: they are positive semi-definite, and the multiplicity of their zero eigenvalue equals the number of connected components of the graph. For most real-world graphs
including the sparsified cosine similarity matrix $E$, the graph is fully
connected and thus the number of connected components of this graph is one.
When this is the case the eigenvectors corresponding to the smallest non-zero
eigenvalues can be used as an embedding. This can be thought of from a
perturbation perspective: if the graph consisted of truly separated clusters of connected components, then these eigenvectors would have the same span as the cluster-membership indicator vectors. In real-world graphs, however, there are a few “noisy” edges between clusters, so these first few eigenvectors are instead nearly piecewise constant on those indicators.
Drawing from this insight, the space associated with the eigenvectors of
$L_{rw}$, which we denote as columns of a matrix $U$, is used as “spectral
embedding”. We use the second and third eigenvectors of $U$ (corresponding to
the second and third smallest eigenvalue) as the spectral embedding , that is
scenario $s$ is represented by $(U_{s,2},U_{s,3})$. We choose a simple
clustering algorithm, single-linkage clustering, to cluster in this embedded
space as it performs well according to the Davies-Bouldin criterion, a common
clustering criterion for how well separated clusters are (Davies and Bouldin,
1979).
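The pipeline above (symmetric $k$-nearest-neighbour sparsification, random-walk Laplacian, second and third eigenvectors, single-linkage clustering) can be sketched as follows. This assumes nonnegative similarities and maximal diagonal entries, and the function names are illustrative, not from the paper's code:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def spectral_embed(S, k):
    """Embed scenarios from a dense similarity matrix S.

    Sparsify S to a symmetric k-nearest-neighbour graph E, form the
    random-walk Laplacian L_rw = D^{-1}(D - E) = I - D^{-1}E, and
    return the eigenvectors of the 2nd and 3rd smallest eigenvalues.
    """
    n = S.shape[0]
    E = np.zeros_like(S, dtype=float)
    for i in range(n):
        # top-k neighbours; assumes S[i, i] is maximal, so [0] is self
        nbrs = np.argsort(S[i])[::-1][1:k + 1]
        E[i, nbrs] = S[i, nbrs]
    E = np.maximum(E, E.T)            # keep an edge if either endpoint keeps it
    d = E.sum(axis=1)
    L_rw = np.eye(n) - E / d[:, None]
    vals, vecs = np.linalg.eig(L_rw)
    order = np.argsort(vals.real)
    return vecs[:, order[1:3]].real   # 2nd and 3rd eigenvectors

def cluster_scenarios(S, k, n_clusters):
    """Single-linkage clustering of scenarios in the spectral embedding."""
    U = spectral_embed(S, k)
    Z = linkage(U, method="single")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```

For a similarity matrix with two clearly separated blocks, the sign pattern of the second eigenvector already separates the blocks, which is the perturbation argument above in action.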
This clustering can be performed for a single species using only $S^{m}$; for example, this is done for four different species in Figure 6. These results will be discussed in the next section. One way to combine information across animals is to average the similarity matrices over all $n_{m}=1101$ animals, as in $S=(1/n_{m})\sum_{m=1}^{n_{m}}S^{m}$. The result of spectral clustering using this averaged similarity matrix, $S$, is shown in Figure 5(b).
### 2.3 Global Spatial Diversity Loss: Data Summary Visualization
One can get an overall sense of the changes predicted in the range maps from
Figure 2, which shows how the diversity of mammals is spread spatially
throughout the globe. We see from both Figures 2(c) and 2(d) that significant
diversity losses are predicted around the equator in South America and Africa,
whereas there is some diversity increases further up north, consistent with
previous findings (Chen et al., 2011).
(a) Present Day Mammal Variety
(b) Predicted Mammal Variety
(c) Net Change
(d) Fraction Change
Figure 2: Visualization of the mammal richness in the dataset over space.
White cells correspond to locations with no recorded presences. Our findings
are consistent with Chen et al. (2011) which finds that species are moving
poleward and towards higher elevations, there is loss around the equator and
some increase in diversity towards the northern pole.
#### 2.3.1 Frequency Analysis
One way to qualitatively measure the performance of the clustering is to look
at the spatial overlap of presences for each scenario. That is we define the
frequency map $F^{m}$ based on the scenario maps:
$F^{m}(r,c):=\sum_{s=1}^{n_{s}}B^{m}_{s}(r,c)$. For example, the overlap of all 34 scenarios for the African cheetah is shown in Figure 8(a).
#### 2.3.2 Principal Component Analysis
Principal component analysis (PCA) is a standard tool for understanding the main directions of variation in data. We use PCA to visualize the correlations of presence
cells. This can be performed for a single species by considering each scenario
as an observation, with $n_{r}\times n_{c}$ binary features corresponding to
the vectorized (flattened) binary range map.
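One possible sketch of this PCA on vectorized range maps, via the SVD of the centred scenario-by-cell matrix (names and shapes are illustrative assumptions):

```python
import numpy as np

def range_map_pca(B_stack):
    """First principal component of vectorized binary range maps.

    B_stack: (n_s, n_r, n_c) array of one species' scenario maps.
    Each scenario is one observation with n_r * n_c binary features.
    """
    X = B_stack.reshape(B_stack.shape[0], -1).astype(float)
    Xc = X - X.mean(axis=0)                  # centre the features
    # SVD of the centred data yields the principal directions
    _, svals, Vt = np.linalg.svd(Xc, full_matrices=False)
    pc1 = Vt[0].reshape(B_stack.shape[1:])   # loading map, back to grid shape
    scores = Xc @ Vt[0]                      # scenario coordinates on PC1
    return pc1, scores, svals
```

The loading map `pc1` can then be plotted on the grid, as in Figure 8(b), to show which cells co-vary across scenarios.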
#### 2.3.3 Climate Based Scenario Clustering
We discuss the process of clustering using only the five climate features that
were used to predict the range maps. This clustering when contrasted with the
ecologically driven clustering demonstrates the importance of incorporating
ecological information. Each of these five globally distributed features is
predicted across the 34 scenarios. In order to directly compare to our
ecologically based clustering, we utilize the climate features to perform a
clustering in a similar fashion to the ecologically based clustering. However,
for continuous data, the cosine similarity is not appropriate, as two maps that are scaled versions of each other would be considered very similar. For instance, if one map predicted temperatures two degrees warmer everywhere than another, the cosine similarity between these two maps would be very high, which is undesirable since they represent significantly different predictions. Instead we use the $L_{2}$ distance
between maps, which will effectively use both the difference between the means
of maps, and differences in the spatial variation. To incorporate all five
climate features, each feature is normalized before applying the $L_{2}$
distance. We denote $T_{s}^{f}(r,c)$ the value of feature $f$ according to
scenario $s$ at location $(r,c)$. The feature scaled $L_{2}$ distance between
a pair of scenarios $s$ and $s^{\prime}$ using all five features is given by:
$H(s,s^{\prime})=\sum_{f=1}^{5}\sum_{r=1}^{n_{r}}\sum_{c=1}^{n_{c}}[(T_{s}^{f}(r,c)-T_{s^{\prime}}^{f}(r,c))/\sigma_{f}]^{2}.$
where $\sigma_{f}$ is the standard deviation of feature $f$ measured across all locations and scenarios, computed using the mean $\mu_{f}$ of that feature over all locations and scenarios,
$\sigma_{f}^{2}=(n_{s}\cdot n_{r}\cdot n_{c})^{-1}\sum_{s=1}^{n_{s}}\sum_{r=1}^{n_{r}}\sum_{c=1}^{n_{c}}[T_{s}^{f}(r,c)-\mu_{f}]^{2}\hskip 5.69046pt\text{ with }\hskip 5.69046pt\mu_{f}=(n_{s}\cdot n_{r}\cdot n_{c})^{-1}\sum_{s=1}^{n_{s}}\sum_{r=1}^{n_{r}}\sum_{c=1}^{n_{c}}T_{s}^{f}(r,c).$
(2)
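As a sketch, the feature-scaled distance can be computed with NumPy as follows (array shapes and names are illustrative assumptions, not from the paper's code):

```python
import numpy as np

def feature_scaled_distance(T_s, T_sp, sigma):
    """Feature-scaled squared L2 distance H(s, s') between two scenarios.

    T_s, T_sp: arrays of shape (n_f, n_r, n_c) holding each scenario's
    climate feature maps; sigma: length-n_f standard deviations of each
    feature computed across all locations and scenarios.
    """
    diff = (T_s - T_sp) / sigma[:, None, None]
    return float((diff ** 2).sum())

# sigma from a stack T of shape (n_s, n_f, n_r, n_c) of all scenarios:
#   sigma = T.std(axis=(0, 2, 3))
```

The result is symmetric in the two scenarios and zero only when their feature maps agree everywhere.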
Spectral clustering requires a similarity matrix (rather than a dissimilarity or distance matrix), so we transform the pairwise $L_{2}$ distances into similarities with a monotonically decreasing function, such as $x\rightarrow 1/x$. An important step in spectral clustering is to sparsify the similarity matrix by keeping only the largest similarities (discussed above), so any monotonically decreasing function will produce the same spectral embedding and clusters (Von Luxburg, 2007). Thus we use $1/H(s,s^{\prime})$ as the similarity matrix for spectral clustering, whose result is shown in Figure 5(a).
(a) Annual Mean Temperature
(b) Temperature Seasonality
(c) Annual Precipitation
(d) Precipitation Seasonality
Figure 3: Spectral clustering of scenarios using individual climate features (one of the five is not shown due to space). The annual mean temperature clustering is most similar to the ecological clustering of Figure 5(b), grouping mainly by RCP. The other features cluster mainly by GCM, which explains why the climate driven clustering of Figure 5(a) is driven mainly by GCM: most of the features are. The ecological clustering is important for discerning which of these features are most ecologically relevant; these plots show that annual mean temperature contains the most ecologically relevant differences between climate models.
### 2.4 Variation of Scenarios
We summarize some overall patterns of the projected range maps by counting
both presences that differ from the present day range map, and absences that
differ from the present day range maps (cells corresponding to
$a_{\text{new}}{}$ and $p_{\text{new}}{}$ respectively). The fraction of each
of these cell types is averaged over species for each scenario. The resulting
average of fraction of $a_{\text{new}}{}$ and $p_{\text{new}}{}$ type cells is
shown in Figure 4, which shows that these novel absences and novel presences
are correlated.
## 3 Results and Discussion
In this section we discuss the scenario clusterings, with emphasis on the
difference between the climate based and ecological based clustering. We then
demonstrate that we have detected meaningful clusters, with range maps within
clusters similar to each other, but different than range maps in other
clusters.
### 3.1 Range Shifts
By comparing the fraction of cells corresponding to $a_{\text{new}}{}$ and
$p_{\text{new}}{}$ we get a sense of the changes of the range maps compared to
the present day range map. This is shown in Figure 4, which demonstrates these
range maps tend to predict range shifts, as opposed to range expansions or
contractions. We conclude this because the number of new presences and new
absences grow together, which would occur as range shifts, instead of say
absences growing as presences shrink, which would be indicative of an overall
range decrease.
On average there are significantly more novel absences (average 38% (28% s.d.)
of current range) than novel presences (average 25% (26% s.d.)), which implies
that the species ranges are shrinking (average 13% net loss, (31% s.d.)) due
to the changing climate.
Figure 4: Predicted range maps tend to be shifted; that is, lost and new cells grow together, with losses larger than gains, as the scenarios lie above the 45-degree dashed black line.
### 3.2 Clustering Plots
The spectral clustering results using the species-specific similarity matrices are shown in Figure 6 for four species, illustrating the main types of patterns
observed. We see an interesting mix of RCP and GCM dependence. It appears that
RCP is a major driving factor that separates these clusters; in most of these
clusterings the far left (“optimistic”) cluster contains mainly scenarios with
low RCP, and the far right (“extreme”) cluster only scenarios with high RCP.
However, the clustering using the slender treeshrew (_Tupaia gracilis_) is
driven mainly by GCM. This clustering mainly by GCM was found in many species
(30%), although the most common trend is RCP dependence (70% of species). The
variability among animals is further evidence supporting the importance of the
climate ecology relationship: the most important difference between climate
models (GCM or RCP) varies depending on the individual species’ response to
the climate.
The spectral clustering results from using the similarity matrix averaged over all 1101 mammals are shown in Figure 5(b). We see in the bottom cluster of
Figure 5(b) an interesting mix of varying RCP and GCM. This suggests that RCP
alone does not account for the variation, the GCM is also important. However,
there are still variabilities among the GCMs. For example, the “ca” climate
model appears twice in the far right cluster.
(a) Climate Feature Based
(b) Predicted Range Map Based
Figure 5: In the climate-based clustering shown on the left (Panel A), the clusters are mainly determined by GCM. In the range-map-based clustering shown on the right (Panel B), the clusters are mainly driven by RCP. However, there are still some relationships among the GCMs; for instance, the red ’ca’ GCM predicts more extreme outcomes, and at each level of RCP the green, black, and yellow (cc, ce, ip) climate models are grouped together. The significant difference between the two clusterings demonstrates the importance of incorporating ecological information.
Instead of averaging over all species, we also performed clustering for only
the species most at risk, defined by those species whose fraction of area lost
is among the highest $10\%$. This loss in area can be used to approximate a
loss in population abundance using the techniques in He (2012); Che-Castaldo
and Neel (2016). Spectral clustering using the average of the similarity matrices of this subset of at-risk animals is shown in Figure 7.
This clustering puts a stronger emphasis on RCP, that is, the animals most at
risk are more sensitive to RCP.
(a) _Acinonyx_ jubatus (cheetah)
(b) _Zapus princeps_
(c) _Procyon lotor_
(d) _Tupaia gracilis_
Figure 6: Spectral clustering of scenarios using individual species. We see grouping mainly by RCP for the first three, but a starkly different, mainly GCM-driven clustering for the treeshrew. This variety was found among animals: most species-driven clusterings are strongly connected with RCP, but some are more connected with GCM. These differences between species further demonstrate that ecological information is important for interpreting the differences between scenarios. Figure 7: Scenario embedding and clustering using only the animals in the riskiest quantile. Clustering based only on these most at-risk species puts an even higher emphasis on RCP than the clustering in Figure 5(b). This further demonstrates the importance of accounting for ecological information: different subsets of ecological populations emphasize RCP even more strongly.
### 3.3 Cluster Quality Analysis
One way to qualitatively measure the performance of the clustering is to look
at the spatial overlap of presences for each scenario. For example the overlap
of all 34 scenarios is shown in Figure 8(a). We see that the cluster
associated with the smallest RCP scenarios (“optimistic” scenarios) accounts
for most of the presences in the discrepant regions, whereas the cluster
associated with the highest RCP (“extreme” scenarios) accounts for many of the
absences. This demonstrates that we have discovered meaningful clusters; there
is agreement within clusters but disagreements across clusters.
We visualize the correlations of presence cells over scenarios using principal
component analysis. This can be performed for a single species by considering
each scenario as an observation, with $n_{r}\times n_{c}$ binary features
corresponding to the vectorized (flattened) binary range map. Performing PCA
in this way is shown for the cheetah in Figure 8(b). The mix of positive and
negative coordinates illustrate the fact that the most extreme scenarios do
not only “lose” certain cells, but also predict novel presences.
(a) All Scenarios
(b) First PC
(c) Optimistic Scenarios
(d) Extreme Scenarios
Figure 8: Sum of presences over scenarios. A meaningful clustering should have strong similarities within clusters and differences across clusters. The circled regions show that we have indeed discovered meaningful clusters; there is agreement within clusters in these regions, but differences across clusters.
### 3.4 Individual Climate Feature Clustering
We see that the climate-based clustering is significantly different from the
ecological one: it is driven mainly by GCM, rather than by RCP as in
the ecology-based clustering. This is evidence for the importance of
considering the specific climate niche occupied by a species in relation to
how those conditions are projected to change. Although these same five
features are used to predict the ecological range maps, these predicted range
maps paint a different picture of the scenario clustering because of how the
animals are influenced by the climate features. In fact, we can get a sense of
feature importance by clustering based on individual climate features, shown
in Figure 3. We see that clustering using only the annual temperature creates
the most similar clustering to the ecologically driven one, suggesting that
annual temperature is the most important feature of these five.
## 4 Conclusion
We have proposed a novel methodology to cluster scenarios based on ecological
range maps. The presented approach is interpretable, flexible, and fast. We
have demonstrated different patterns of clustering depending on which subsets
of species are included. For instance, the animals most at risk group many of
the higher RCP scenarios together. The differences between the climate based
clustering and ecological based clustering highlights the importance of
considering ecological response; the interaction of climate and ecology is
essential to understand the ecologocially most important differences between
future scenario predictions. An interesting direction to explore further is to
uncover subsets of animals that respond differently from others. For instance,
it may be the case that rodents tend to fare worse under the “bc” climate
model, compared to other mammals. A similar area of future research is to
determine why some species, like the slender treeshrew, cluster more by GCM
instead of following the more common pattern of clustering by RCP. Are these
species less sensitive to
temperature changes? Another extension is to consider how to combine
information across animals in a more holistic manner. For instance, Dong et
al. (2013) presents a methodology to cluster according to many graphs, which
could be applied in our use case to the scenario graphs from each species.
## References
* Parry et al. (2007) Martin L Parry, Osvaldo Canziani, Jean Palutikof, Paul Van der Linden, and Clair Hanson. _Climate change 2007-impacts, adaptation and vulnerability: Working group II contribution to the fourth assessment report of the IPCC_ , volume 4. Cambridge University Press, 2007.
* Hannah et al. (2013) Lee Hannah, Makihiko Ikegami, David G Hole, Changwan Seo, Stuart HM Butchart, A Townsend Peterson, and Patrick R Roehrdanz. Global climate change adaptation priorities for biodiversity and food security. _PLoS one_ , 8(8):e72590, 2013.
* Hannah et al. (2020) Lee Hannah, Patrick R Roehrdanz, Pablo A Marquet, Brian J Enquist, Guy Midgley, Wendy Foden, Jon C Lovett, Richard T Corlett, Derek Corcoran, Stuart HM Butchart, et al. 30% land conservation and climate action reduces tropical extinction risk by more than 50%. _Ecography_ , 2020.
* Burrows et al. (2014) Michael T Burrows, David S Schoeman, Anthony J Richardson, Jorge Garcia Molinos, Ary Hoffmann, Lauren B Buckley, Pippa J Moore, Christopher J Brown, John F Bruno, Carlos M Duarte, et al. Geographical limits to species-range shifts are suggested by climate velocity. _Nature_ , 507(7493):492–495, 2014.
* Jones and Cheung (2015) Miranda C Jones and William WL Cheung. Multi-model ensemble projections of climate change effects on global marine biodiversity. _ICES Journal of Marine Science_ , 72(3):741–752, 2015.
* Molinos et al. (2016) Jorge García Molinos, Benjamin S Halpern, David S Schoeman, Christopher J Brown, Wolfgang Kiessling, Pippa J Moore, John M Pandolfi, Elvira S Poloczanska, Anthony J Richardson, and Michael T Burrows. Climate velocity and the future global redistribution of marine biodiversity. _Nature Climate Change_ , 6(1):83–88, 2016.
* Eyring et al. (2016) Veronika Eyring, Sandrine Bony, Gerald A Meehl, Catherine A Senior, Bjorn Stevens, Ronald J Stouffer, and Karl E Taylor. Overview of the coupled model intercomparison project phase 6 (CMIP6) experimental design and organization. _Geoscientific Model Development_ , 9(5):1937–1958, 2016.
* Trisos et al. (2020) Christopher H Trisos, Cory Merow, and Alex L Pigot. The projected timing of abrupt ecological disruption from climate change. _Nature_ , 580(7804):496–501, 2020.
* Giorgi and Francisco (2000) Filippo Giorgi and Raquel Francisco. Uncertainties in regional climate change prediction: a regional analysis of ensemble simulations with the hadcm2 coupled aogcm. _Climate Dynamics_ , 16(2-3):169–182, 2000.
  * Cannon (2015) Alex J Cannon. Selecting GCM scenarios that span the range of changes in a multimodel ensemble: application to CMIP5 climate extremes indices. _Journal of Climate_ , 28(3):1260–1267, 2015.
* Stouffer et al. (2017) Ronald J Stouffer, Veronika Eyring, Gerald A Meehl, Sandrine Bony, Cath Senior, Bjorn Stevens, and KE Taylor. CMIP5 scientific gaps and recommendations for CMIP6. _Bulletin of the American Meteorological Society_ , 98(1):95–105, 2017.
* Fick and Hijmans (2017) Stephen E Fick and Robert J Hijmans. Worldclim 2: new 1-km spatial resolution climate surfaces for global land areas. _International journal of climatology_ , 37(12):4302–4315, 2017.
* Merow et al. (2013) Cory Merow, Matthew J Smith, and John A Silander Jr. A practical guide to maxent for modeling species’ distributions: what it does, and why inputs and settings matter. _Ecography_ , 36(10):1058–1069, 2013.
  * Elith et al. (2011) Jane Elith, Steven J Phillips, Trevor Hastie, Miroslav Dudík, Yung En Chee, and Colin J Yates. A statistical explanation of maxent for ecologists. _Diversity and distributions_ , 17(1):43–57, 2011.
* Miller (2020) Joe Miller. Gbif home page, 2020. URL https://www.gbif.org.
* Visser and De Nijs (2006) Hans Visser and T De Nijs. The map comparison kit. _Environmental Modelling & Software_, 21(3):346–358, 2006.
* Wilson (2011) Peter D Wilson. Distance-based methods for the analysis of maps produced by species distribution models. _Methods in Ecology and Evolution_ , 2(6):623–633, 2011.
* Hagen (2002) Alex Hagen. Multi-method assessment of map similarity. In _Proceedings of the 5th AGILE Conference on Geographic Information Science_ , pages 171–182. Universitat de les Illes Balears Palma, Spain, 2002.
* Hill et al. (2013) Mark O Hill, Colin A Harrower, and Christopher D Preston. Spherical k-means clustering is good for interpreting multivariate species occurrence data. _Methods in Ecology and Evolution_ , 4(6):542–551, 2013.
* Gritti et al. (2013) Emmanuel S Gritti, Anne Duputie, Francois Massol, and Isabelle Chuine. Estimating consensus and associated uncertainty between inherently different species distribution models. _Methods in Ecology and Evolution_ , 4(5):442–452, 2013.
* Peyré et al. (2019) Gabriel Peyré, Marco Cuturi, et al. Computational optimal transport: With applications to data science. _Foundations and Trends® in Machine Learning_ , 11(5-6):355–607, 2019.
* Kranstauber et al. (2017) Bart Kranstauber, Marco Smolla, and Kamran Safi. Similarity in spatial utilization distributions measured by the earth mover’s distance. _Methods in Ecology and Evolution_ , 8(2):155–160, 2017.
  * Von Luxburg (2007) Ulrike Von Luxburg. A tutorial on spectral clustering. _Statistics and computing_ , 17(4):395–416, 2007.
* Davies and Bouldin (1979) David L Davies and Donald W Bouldin. A cluster separation measure. _IEEE transactions on pattern analysis and machine intelligence_ , 2(2):224–227, 1979.
* Chen et al. (2011) I-Ching Chen, Jane K Hill, Ralf Ohlemüller, David B Roy, and Chris D Thomas. Rapid range shifts of species associated with high levels of climate warming. _Science_ , 333(6045):1024–1026, 2011.
* He (2012) Fangliang He. Area-based assessment of extinction risk. _Ecology_ , 93(5):974–980, 2012.
* Che-Castaldo and Neel (2016) Judy P Che-Castaldo and Maile C Neel. Species-level persistence probabilities for recovery and conservation status assessment. _Conservation Biology_ , 30(6):1297–1306, 2016\.
* Dong et al. (2013) Xiaowen Dong, Pascal Frossard, Pierre Vandergheynst, and Nikolai Nefedov. Clustering on multi-layer graphs via subspace analysis on Grassmann manifolds. _IEEE Transactions on signal processing_ , 62(4):905–918, 2013.
# Correction to the photometric colors of Gaia Early Data Release 3
Zexi Niu National Astronomical Observatories, Chinese Academy of Sciences
20A Datun Road, Chaoyang District, Beijing, China Haibo Yuan Department of
Astronomy, Beijing Normal University
19th Xinjiekouwai Street, Haidian District, Beijing, China Jifeng Liu
National Astronomical Observatories, Chinese Academy of Sciences
20A Datun Road, Chaoyang District, Beijing, China
###### Abstract
In this work, we use the spectroscopy-based stellar color regression (SCR)
method with $\sim$ 0.7 million common stars between LAMOST DR7 and Gaia EDR3
to acquire color corrections in $G-G_{\rm RP}$ and $G_{\rm BP}-G_{\rm RP}$. A
sub-mmag precision is achieved. Our results demonstrate that improvements in
the calibration process of the EDR3 have removed the color term in $G_{\rm
BP}-G_{\rm RP}$ and eliminated the discontinuity caused by the changes of
instrument configurations to a great extent. However, modest systematic trends
with $G$ magnitude are still detected. The corresponding color correction
terms as a function of $G$ are provided for $9.5<G<17.5$ mag and compared with
other determinations. We conclude that the corrections given in this work are
particularly suited for cases where color-color investigations are
required, while for color-magnitude investigations other corrections may be
better due to systematics associated with reddening. Possible applications of our results
are discussed.
Astronomy data analysis, Fundamental parameters of stars, Stellar photometry
††journal: ApJL
## 1 Introduction
Very recently, Gaia Collaboration published the Early Data Release 3 (EDR3)
(Gaia Collaboration et al., 2020) based on the first 34 months of its nominal
mission (Gaia Collaboration et al., 2016), providing $G$ band photometry for
1.8 billion sources brighter than $G=21$ mag and 1.5 billion sources with $G_{BP}$ and
$G_{RP}$ photometry, with a uniform calibration at the mmag level. In the Gaia
DR2 era, several works have detected the magnitude dependent systematic errors
up to 10 mmag or higher (Maíz Apellániz & Weiler, 2018; Weiler, 2018;
Casagrande & VandenBerg, 2018; Niu et al., 2021) and the discontinuities
caused by the changes of instrument configurations (Evans et al., 2018; Niu et
al., 2021). Among them, using about 0.5 million well selected common stars
between the LAMOST DR5 (Luo et al., 2015; Zhao et al., 2012) and Gaia DR2, Niu
et al. (2021) (hereafter Paper I) applied the stellar color regression (SCR)
method (Yuan et al., 2015) to calibrate the $G-G_{\rm RP}$ and $G_{\rm
BP}-G_{\rm RP}$ colors. With an unprecedented precision of about 1 mmag,
systematic trends with $G$ magnitude are revealed in great detail for both
colors, reflecting changes in instrument configurations. Color-dependent
trends are also found for the $G_{\rm BP}-G_{\rm RP}$ for stars brighter than
$G\sim$ 11.5 mag.
From DR2 (Gaia Collaboration et al., 2018) to EDR3, a number of important
improvements have been implemented to further reduce its photometric (random
and systematic) errors, including the fitting of $G$ fluxes, the processing of
BP and RP spectra, and the calibration processes (Riello et al., 2020;
Fabricius et al., 2020). The median uncertainties of $G$ magnitudes are
reduced by almost a factor of two, reaching 0.2 mmag at $10<G<14$, 0.8 mmag at
$G\sim$ 17, and 2.6 mmag at $G\sim$ 19. In order to take full advantage of
its exquisite photometric quality, in this work, we follow the same routine of
Paper I to validate and correct possible magnitude/color dependent systematics
in EDR3.
The paper is organized as follows. We briefly describe our data and method in
Section 2, with differences from Paper I stressed. The result is presented and
discussed in Section 3. We summarize in Section 4.
## 2 Data and Method
We use the same method and follow the same steps as in Paper I, which are
briefly summarized below. Details of criteria used to select samples and the
SCR method can be found in Paper I.
We combine the newest Gaia EDR3 with the LAMOST DR7 (Luo et al., 2015) and
apply the same constraints as in Paper I, e.g., E(B$-$V) $<$ 0.05 mag
according to the Schlegel, Finkbeiner, & Davis (1998, hereafter SFD) dust
reddening map, galactic latitude $|b|>20$ deg, and vertical distance to the
galactic disk $|Z|>0.2$ kpc. In Figure 3 of Paper I, we clarified that a
signal-to-noise ratio in the $g$ band ($S/N_{\rm g}$) of 20 is sufficient for
this work, because there are no systematic effects in the LAMOST stellar
parameters with $S/N_{\rm g}$ down to a very low $S/N_{\rm g}$ of about 15.
We therefore adopt a lower cut of $S/N_{\rm g}>20$ to generate a larger sample. At
$S/N_{\rm g}$ $>$ 20, the agreement of the LAMOST parameters with APOGEE is
better than 120 K for $T_{\rm eff}$, 0.15 dex for $\log g$, and 0.1 dex for
[Fe/H], as shown in Figure 3 of Paper I.
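The sample cuts listed above can be expressed as simple boolean masks. This is only an illustrative sketch with made-up catalogue values, not the actual selection pipeline; the cut thresholds are those quoted in the text:

```python
import numpy as np

# Hypothetical columns for a LAMOST x Gaia cross-matched catalogue
ebv   = np.array([0.02, 0.08, 0.03, 0.01])   # SFD E(B-V) [mag]
b     = np.array([35.0, 45.0, 10.0, -30.0])  # Galactic latitude [deg]
Z     = np.array([0.5, 0.4, 0.3, 0.1])       # distance to the disk [kpc]
snr_g = np.array([50.0, 80.0, 25.0, 30.0])   # LAMOST g-band S/N

# Cuts from the text: E(B-V) < 0.05, |b| > 20 deg, |Z| > 0.2 kpc, S/N_g > 20
mask = (ebv < 0.05) & (np.abs(b) > 20) & (np.abs(Z) > 0.2) & (snr_g > 20)
```

Only the first toy star passes all four cuts; each of the others fails exactly one criterion.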
We finally select a sample containing 779,691 main-sequence (MS) stars and
71,952 Red Giant Branch (RGB) stars, covering $9.5<G<17.5$ magnitude range.
Note that as proposed by the Gaia Collaboration et al. (2020) and Riello et
al. (2020), the corrected $G$ magnitudes for sources with 6-parameter
astrometric solutions are used in this paper.
Due to the different passbands between DR2 and EDR3, their reddening
coefficients are slightly different. Following Sun et al. (to be submitted),
we have empirically determined the temperature and reddening dependent
reddening coefficients for the EDR3 colors, as given by the Equations 1 and 2.
Note that the reddening corrections in this work do not take into account the
distances of the sources and assume that all sources lie beyond the source
of reddening. All colors referred to hereafter are dereddened using the SFD
map and the empirical coefficients.
$R(G_{\rm BP}-G)=1.428-0.539\times E(B-V)_{\rm SFD}+0.406\times{E(B-V)}^{2}_{\rm SFD}-1.976\times 10^{-4}\times T_{\rm eff}+2.004\times 10^{-5}\times T_{\rm eff}\times E(B-V)_{\rm SFD}-1.187\times 10^{-8}\times{T_{\rm eff}}^{2}$ (1)
$R(G_{\rm BP}-G_{\rm RP})=1.684-0.839\times E(B-V)_{\rm SFD}+0.555\times{E(B-V)}^{2}_{\rm SFD}-1.390\times 10^{-4}\times T_{\rm eff}-3.142\times 10^{-6}\times T_{\rm eff}\times E(B-V)_{\rm SFD}+1.520\times 10^{-8}\times{T_{\rm eff}}^{2}$ (2)
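Equations 1 and 2 translate directly into code. The sketch below simply evaluates the polynomials above (coefficients copied from the equations, $T_{\rm eff}$ in K and $E(B-V)$ in mag); the function names are ours, not from the paper:

```python
def R_bp_g(ebv, teff):
    """Reddening coefficient for G_BP - G, Equation (1)."""
    return (1.428 - 0.539 * ebv + 0.406 * ebv**2
            - 1.976e-4 * teff + 2.004e-5 * teff * ebv
            - 1.187e-8 * teff**2)

def R_bp_rp(ebv, teff):
    """Reddening coefficient for G_BP - G_RP, Equation (2)."""
    return (1.684 - 0.839 * ebv + 0.555 * ebv**2
            - 1.390e-4 * teff - 3.142e-6 * teff * ebv
            + 1.520e-8 * teff**2)

# Dereddening then amounts to: intrinsic colour = observed colour - R * E(B-V)_SFD
```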
As in Paper I, a control sample of $13.3<G<13.7$ mag is selected to define the
empirical relations between the Gaia intrinsic colors and the LAMOST DR7
stellar parameters. Then we divide the selected samples into six subsamples of
different evolution stages and colors. MS stars are divided into four
subsamples with the median $G_{\rm BP}-G_{\rm RP}$ colors of 1.12, 0.92, 0.76,
and 0.64 mag, respectively. RGB stars are divided into two subsamples, with
the median $G_{\rm BP}-G_{\rm RP}$ colors of 1.11 and 0.99 mag, respectively.
Applying these relations to a given (sub-)sample, the median values of the
color residuals between the SCR-derived colors and the corresponding Gaia EDR3
ones as a function of $G$ are obtained as the correction terms, as shown in
Figures 1 and 2. A 3$\sigma$-clipping is performed in the
process. Specifically, corrections can be expressed as:
$C^{\prime}=C+\Delta C$ (3)
where $C^{\prime}$ is the corrected color, $C$ is the Gaia EDR3 color, and
$\Delta C$ is the color correction term.
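The binned, sigma-clipped median residuals that define $\Delta C$ can be sketched as follows. This is a simplified illustration of the idea (median per $G$ bin with iterative 3$\sigma$ clipping), not the exact procedure of Paper I:

```python
import numpy as np

def correction_curve(G, residuals, bin_edges, nsig=3.0, niter=5):
    """Median colour residual per G-magnitude bin, with iterative
    nsig-sigma clipping of outliers inside each bin."""
    centres, terms = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        r = residuals[(G >= lo) & (G < hi)]
        for _ in range(niter):                 # iterative sigma clipping
            med, sig = np.median(r), np.std(r)
            keep = np.abs(r - med) <= nsig * sig
            if keep.all():
                break
            r = r[keep]
        centres.append(0.5 * (lo + hi))
        terms.append(np.median(r))
    return np.array(centres), np.array(terms)

# Toy residuals: a constant offset per bin plus one gross outlier
G = np.concatenate([np.full(100, 10.5), np.full(100, 11.5)])
res = np.concatenate([np.full(99, 0.001), [1.0], np.full(100, 0.002)])
centres, terms = correction_curve(G, res, np.array([10.0, 11.0, 12.0]))
```

The clipping removes the gross outlier, so each bin's correction term recovers the underlying median offset.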
## 3 Result and Discussion
Figure 3 plots color correction curves yielded by different subsamples. As
already demonstrated in Paper I: (1) the correction terms are independent of
the stellar evolution stages; (2) the inconsistency between the red and blue
subsamples when $G>14$ mag is due to the selection function of the LAMOST data
and the spatially dependent systematics of the SFD reddening map. Therefore,
we compute the recommended correction curves using the three blue MS
subsamples (MS 0.92, MS 0.76, and MS 0.64). The results are over-plotted in
black lines in Figure 3 and listed in Table 1. The standard deviations of the
difference between the recommended curve and the three blue MS subsamples for
$G-G_{\rm RP}$ and $G_{\rm BP}-G_{\rm RP}$ are 0.3 and 0.7 mmag, respectively.
This suggests that the calibration curve of each subsample has a typical
random error smaller than 1.0 mmag, and the random errors of the recommended
curves are even smaller. In the top panel of Figure 3, the systematic trend of
$G-G_{\rm RP}$ is smaller than the one found for Gaia DR2, with a maximum
range of less than 10 mmag in total. The feature arising from the change of
observation modes at $G\sim 16$ mag is gone, and those at $G\sim$ 11 and 13 mag
are visible but only at a level of about 1 mmag relative to the recommended
curve. This confirms the large improvement in the $G$-band photometry. The bottom panel
shows that the $G_{\rm BP}-G_{\rm RP}$ correction curves are no longer color-
dependent at the bright end, at least for F/G/K stars. However, the magnitude-
dependent trend is still significant, as in DR2.
Figure 3: Color correction curves yielded by different subsamples and the
recommended ones for $G-G_{\rm RP}$ (top) and $G_{\rm BP}-G_{\rm RP}$
(bottom).
To test our color correction curves, we select a MS sample within a narrow
[Fe/H] range of $-0.5<{\rm[Fe/H]}<-0.25$, and compare their distributions in
the color-color diagram before and after corrections using the recommended
curves in Table 1. The results are shown in the left panels of Figure 4.
Magnitude-dependent offsets in the color-color diagram are clearly seen from
the top left panel. After corrections, there are no offsets, and the total
width becomes narrower as expected. Furthermore, like Figure 32 in Fabricius
et al. (2020) of EDR3 and Figure 31 in Arenou et al. (2018) of DR2, we plot a
2D histogram of the $G-G_{\rm RP}$ residuals in the right panels of Figure 4
with respect to the metallicity-dependent stellar color locus. The improved
yet still present trend with magnitude is clearly seen in the top panel,
especially at the bright and faint ends. After corrections of this work, the
trend becomes flat and centered at zero, demonstrating the power of our
corrections in the color-color diagram.
Figure 4: From top to bottom: before and after color corrections. left: color-
color diagram of a MS sample within a narrow [Fe/H] range. right: Residuals
from a global $G-G_{\rm RP}=f(G_{\rm BP}-G_{\rm RP},{\rm[Fe/H]})$ relation.
There are also color corrections deduced from other approaches. For example,
using 10,000 well selected Landolt standard stars (Clem & Landolt, 2013), Yang
et al. (2021) have obtained magnitude and color corrections for Gaia EDR3 by
transforming the observed $UBVRI$ magnitudes into the Gaia EDR3 magnitudes and
colors. Their results (red lines in their Figure 4) are plotted in green lines
in Figure 5. Results from synthetic colors of the CALSPEC (Bohlin, 2014)
spectra are also plotted in black dots. Only 61 stars with
phot$\\_$bp$\\_$rp$\\_$excess$\\_$factor $<0.1$ after Riello et al. (2020)
correction and $G>8$ mag are shown. Note that Yang et al. (2021) adopted a
control sample of $17<G<17.5$ mag, so their correction curves were shifted to
match the results of the CALSPEC spectra. In this work, we have adopted a
different control sample of $13.2<G<13.6$ mag; therefore, a constant offset
may well exist. To make a straightforward comparison, the recommended curves
in this work, which are over-plotted in red solid lines, are systematically
shifted by a few mmag to match the Yang et al. (2021) corrections at 14 mag.
Given the small number of black dots and their internal scatter (about 10 mmag
for $G-G_{\rm RP}$ and 15 mmag for $G_{\rm BP}-G_{\rm RP}$), both results from
this work and Yang et al. (2021) are consistent with the black dots. However,
discrepancies between this work and Yang et al. (2021) are up to about $\pm$5
mmag for $G-G_{\rm RP}$ and $\pm$10 mmag for $G_{\rm BP}-G_{\rm RP}$, and are
correlated with the $G$ magnitude.
To investigate the origins and effects of the discrepancies between this work
and Yang et al. (2021), their differences of the color corrections without
shifting in $G_{\rm BP}-G_{\rm RP}$ are plotted against those in $G-G_{\rm
RP}$ in the bottom panel of Figure 5. A strong linear correlation is found.
The slope is in good agreement with the median value of $\frac{R(G-G_{\rm
RP})}{R(G_{\rm BP}-G_{RP})}$. Considering that reddening corrections are not
involved in Yang et al. (2021), the result suggests that the discrepancies are
mainly caused by imperfect reddening corrections in this work. We use the 2D
SFD reddening map. Despite the systematic errors that depend on spatial
position and dust temperature (e.g., Peek & Graves 2010; Sun et al. to be
submitted) in the SFD reddening map as discussed in Paper I, the map also
tends to overestimate reddening for stars that are within the Galactic dust
layer. Although in this work (and Paper I) we require stars of $|Z|>0.2$ kpc,
their reddening corrections may be overestimated to some extent, by around 0.01
mag in E(B-V), depending on their distances/magnitudes. This is not surprising,
as there is increasing evidence supporting the co-existence of a thin dust
disk and a thick dust disk in the Galaxy (e.g., Yuan et al. in prep.; Guo et
al. 2021; Zhang et al., to be submitted). The scale height of the thick dust
disk is about 200 - 400 pc in the solar neighborhood. Dust clouds in the
Galactic halo (Yuan et al. in prep.) and halos of other galaxies (e.g., M 31
and M 33, Zhang & Yuan 2020) are also detected. We redo the procedure with a
more restrictive sample of $|Z|>1$ kpc and a control sample of $15.3<G<15.5$ mag. Its
results are plotted in red dashed lines, which match the green lines much
better, consistent with the above scenario. It suggests that our correction
curves of individual colors are subject to systematic errors from reddening
correction using the SFD map. We further test other 2D reddening maps of Lenz
et al. (2017), Peek & Graves (2010), and Planck Collaboration et al. (2014),
and find no large differences from the SFD map.
Figure 5: Top and middle: Comparisons of different color correction curves.
Red solid line: $|Z|>0.2$ kpc using the SCR method. Red dashed line: $|Z|>1$
kpc using the SCR method. Black dots: from the CALSPEC spectra. Green line:
from Yang et al. (2021). Both red solid and dashed lines are systematically
shifted by a few mmag to match the green lines at 14 mag for comparison.
bottom: Differences of the color correction terms in $G_{\rm BP}-G_{\rm RP}$
versus those in $G-G_{\rm RP}$. The two dashed lines have the same slope,
determined by the median value of $\frac{R(G-G_{\rm RP})}{R(G_{\rm
BP}-G_{RP})}$.
Fortunately, even though the correction curves of both Gaia colors
suffer from systematic errors in the reddening correction, these errors
are largely canceled in the color-color diagram, as the reddening vector is
almost parallel to the stellar locus there. We estimate
the standard deviation of the differences between the purple points and their
corresponding gray line, which is 0.3 mmag and contributed by the random
errors of corrections from Yang et al. (2021) and this work. Given that the
typical errors of color correction curves of Yang et al. (2021) are about 0.2
– 0.4 mmag, the small standard deviation of 0.3 mmag suggests that the typical
uncertainties in this work are much smaller than 0.3 mmag. Moreover, with 0.7
million stars, our sample enables a much higher resolution in $G$ magnitude,
yielding improved corrections in the color-color diagram compared to Yang et
al. (2021) (See their Figure 5).
It is worth clarifying that our color corrections in this work and in Paper I
are precise to sub-mmag only in the cases where a color-color diagram is used.
In other cases, for example where a color-magnitude diagram is used, the
corrections from Yang et al. (2021) are preferred. Another point worth
mentioning is that we compute the curves within approximately $9.5<G<17.5$ and
$0.6<G_{\rm BP}-G_{\rm RP}<1.3$, so for stars outside the above ranges, e.g.,
very blue stars brighter than G $\sim$ 13 (Riello et al., 2020), the curves
should be used with caution. Nevertheless, this corrected color-color diagram
could be helpful in a number of studies, for example, determining reliable
photometric metallicities for an enormous and magnitude-limited sample of
stars from Gaia (Xu et al. to be submitted) and estimating binary fractions of
a volume-limited sample of stars (Niu et al. to be submitted).
Table 1: Color correction curves. The first column of each group is the $G$ magnitude; the second and third columns are the recommended $G-G_{\rm RP}$ and $G_{\rm BP}-G_{\rm RP}$ calibration terms, respectively, in units of mmag.
G | $G-G_{\rm RP}$ | $G_{\rm BP}-G_{\rm RP}$ | G | $G-G_{\rm RP}$ | $G_{\rm BP}-G_{\rm RP}$ | G | $G-G_{\rm RP}$ | $G_{\rm BP}-G_{\rm RP}$ | G | $G-G_{\rm RP}$ | $G_{\rm BP}-G_{\rm RP}$ | G | $G-G_{\rm RP}$ | $G_{\rm BP}-G_{\rm RP}$ | G | $G-G_{\rm RP}$ | $G_{\rm BP}-G_{\rm RP}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
9.50 | $-$ 2.54 | 2.95 | 10.85 | 0.17 | 6.00 | 12.20 | 1.68 | 3.84 | 13.55 | $-$ 0.04 | $-$ 0.02 | 14.90 | $-$ 4.65 | $-$ 7.07 | 16.25 | $-$ 5.41 | $-$ 10.68
9.53 | $-$ 2.48 | 3.01 | 10.88 | 0.30 | 6.19 | 12.23 | 1.68 | 3.68 | 13.58 | $-$ 0.04 | $-$ 0.10 | 14.93 | $-$ 4.74 | $-$ 7.19 | 16.28 | $-$ 5.38 | $-$ 10.75
9.56 | $-$ 2.39 | 3.10 | 10.91 | 0.43 | 6.30 | 12.26 | 1.69 | 3.55 | 13.61 | $-$ 0.06 | $-$ 0.21 | 14.96 | $-$ 4.81 | $-$ 7.26 | 16.31 | $-$ 5.33 | $-$ 10.90
9.59 | $-$ 2.33 | 3.16 | 10.94 | 0.60 | 6.36 | 12.29 | 1.68 | 3.39 | 13.64 | $-$ 0.07 | $-$ 0.32 | 14.99 | $-$ 4.90 | $-$ 7.28 | 16.34 | $-$ 5.28 | $-$ 11.09
9.62 | $-$ 2.26 | 3.24 | 10.97 | 0.74 | 6.38 | 12.32 | 1.66 | 3.29 | 13.67 | $-$ 0.09 | $-$ 0.41 | 15.02 | $-$ 4.96 | $-$ 7.34 | 16.37 | $-$ 5.24 | $-$ 11.28
9.65 | $-$ 2.18 | 3.32 | 11.00 | 0.87 | 6.42 | 12.35 | 1.64 | 3.23 | 13.70 | $-$ 0.11 | $-$ 0.47 | 15.05 | $-$ 5.01 | $-$ 7.46 | 16.40 | $-$ 5.17 | $-$ 11.53
9.68 | $-$ 2.12 | 3.38 | 11.03 | 1.00 | 6.79 | 12.38 | 1.62 | 3.23 | 13.73 | $-$ 0.14 | $-$ 0.54 | 15.08 | $-$ 5.07 | $-$ 7.64 | 16.43 | $-$ 5.12 | $-$ 11.66
9.71 | $-$ 2.04 | 3.46 | 11.06 | 1.15 | 6.66 | 12.41 | 1.59 | 3.20 | 13.76 | $-$ 0.18 | $-$ 0.66 | 15.11 | $-$ 5.12 | $-$ 7.83 | 16.46 | $-$ 5.07 | $-$ 11.71
9.74 | $-$ 1.98 | 3.53 | 11.09 | 1.25 | 6.50 | 12.44 | 1.56 | 3.03 | 13.79 | $-$ 0.21 | $-$ 0.78 | 15.14 | $-$ 5.17 | $-$ 7.97 | 16.49 | $-$ 5.00 | $-$ 11.75
9.77 | $-$ 1.92 | 3.60 | 11.12 | 1.34 | 6.32 | 12.47 | 1.53 | 2.86 | 13.82 | $-$ 0.26 | $-$ 0.94 | 15.17 | $-$ 5.22 | $-$ 8.13 | 16.52 | $-$ 4.94 | $-$ 11.77
9.80 | $-$ 1.84 | 3.68 | 11.15 | 1.41 | 6.20 | 12.50 | 1.50 | 2.71 | 13.85 | $-$ 0.32 | $-$ 1.09 | 15.20 | $-$ 5.26 | $-$ 8.21 | 16.55 | $-$ 4.89 | $-$ 11.80
9.83 | $-$ 1.77 | 3.75 | 11.18 | 1.46 | 6.15 | 12.53 | 1.47 | 2.55 | 13.88 | $-$ 0.38 | $-$ 1.20 | 15.23 | $-$ 5.30 | $-$ 8.28 | 16.58 | $-$ 4.82 | $-$ 11.85
9.86 | $-$ 1.69 | 3.83 | 11.21 | 1.50 | 6.21 | 12.56 | 1.45 | 2.46 | 13.91 | $-$ 0.46 | $-$ 1.31 | 15.26 | $-$ 5.35 | $-$ 8.41 | 16.61 | $-$ 4.77 | $-$ 11.87
9.89 | $-$ 1.60 | 3.90 | 11.24 | 1.51 | 6.27 | 12.59 | 1.44 | 2.41 | 13.94 | $-$ 0.53 | $-$ 1.38 | 15.29 | $-$ 5.39 | $-$ 8.55 | 16.64 | $-$ 4.69 | $-$ 11.85
9.92 | $-$ 1.52 | 3.93 | 11.27 | 1.52 | 6.33 | 12.62 | 1.42 | 2.33 | 13.97 | $-$ 0.60 | $-$ 1.43 | 15.32 | $-$ 5.43 | $-$ 8.73 | 16.67 | $-$ 4.63 | $-$ 11.85
9.95 | $-$ 1.43 | 4.00 | 11.30 | 1.52 | 6.33 | 12.65 | 1.40 | 2.25 | 14.00 | $-$ 0.72 | $-$ 1.48 | 15.35 | $-$ 5.48 | $-$ 8.92 | 16.70 | $-$ 4.57 | $-$ 11.92
9.98 | $-$ 1.38 | 4.06 | 11.33 | 1.52 | 6.20 | 12.68 | 1.39 | 2.15 | 14.03 | $-$ 0.81 | $-$ 1.52 | 15.38 | $-$ 5.50 | $-$ 9.04 | 16.73 | $-$ 4.50 | $-$ 12.05
10.01 | $-$ 1.35 | 4.13 | 11.36 | 1.50 | 5.97 | 12.71 | 1.38 | 2.06 | 14.06 | $-$ 0.90 | $-$ 1.56 | 15.41 | $-$ 5.54 | $-$ 9.10 | 16.76 | $-$ 4.44 | $-$ 12.24
10.04 | $-$ 1.31 | 4.23 | 11.39 | 1.48 | 5.76 | 12.74 | 1.36 | 1.96 | 14.09 | $-$ 1.02 | $-$ 1.68 | 15.44 | $-$ 5.56 | $-$ 9.17 | 16.79 | $-$ 4.38 | $-$ 12.51
10.07 | $-$ 1.26 | 4.28 | 11.42 | 1.47 | 5.61 | 12.77 | 1.33 | 1.89 | 14.12 | $-$ 1.14 | $-$ 1.86 | 15.47 | $-$ 5.57 | $-$ 9.30 | 16.82 | $-$ 4.32 | $-$ 12.79
10.10 | $-$ 1.21 | 4.29 | 11.45 | 1.43 | 5.49 | 12.80 | 1.30 | 1.86 | 14.15 | $-$ 1.24 | $-$ 2.03 | 15.50 | $-$ 5.59 | $-$ 9.45 | 16.85 | $-$ 4.26 | $-$ 12.97
10.13 | $-$ 1.15 | 4.25 | 11.48 | 1.41 | 5.50 | 12.83 | 1.25 | 1.81 | 14.18 | $-$ 1.39 | $-$ 2.22 | 15.53 | $-$ 5.60 | $-$ 9.55 | 16.88 | $-$ 4.18 | $-$ 13.11
10.16 | $-$ 1.09 | 4.18 | 11.51 | 1.39 | 5.61 | 12.86 | 1.20 | 1.75 | 14.21 | $-$ 1.51 | $-$ 2.39 | 15.56 | $-$ 5.61 | $-$ 9.70 | 16.91 | $-$ 4.12 | $-$ 13.24
10.19 | $-$ 1.04 | 4.15 | 11.54 | 1.38 | 5.65 | 12.89 | 1.14 | 1.69 | 14.24 | $-$ 1.63 | $-$ 2.54 | 15.59 | $-$ 5.62 | $-$ 9.79 | 16.94 | $-$ 4.05 | $-$ 13.40
10.22 | $-$ 0.99 | 4.16 | 11.57 | 1.37 | 5.57 | 12.92 | 1.05 | 1.54 | 14.27 | $-$ 1.79 | $-$ 2.67 | 15.62 | $-$ 5.63 | $-$ 9.73 | 16.97 | $-$ 3.95 | $-$ 13.60
10.25 | $-$ 0.96 | 4.23 | 11.60 | 1.36 | 5.49 | 12.95 | 0.98 | 1.41 | 14.30 | $-$ 1.91 | $-$ 2.77 | 15.65 | $-$ 5.63 | $-$ 9.63 | 17.00 | $-$ 3.88 | $-$ 13.77
10.28 | $-$ 0.93 | 4.27 | 11.63 | 1.34 | 5.40 | 12.98 | 0.91 | 1.30 | 14.33 | $-$ 2.06 | $-$ 2.95 | 15.68 | $-$ 5.62 | $-$ 9.61 | 17.03 | $-$ 3.80 | $-$ 13.89
10.31 | $-$ 0.89 | 4.17 | 11.66 | 1.34 | 5.33 | 13.01 | 0.83 | 1.17 | 14.36 | $-$ 2.22 | $-$ 3.18 | 15.71 | $-$ 5.61 | $-$ 9.63 | 17.06 | $-$ 3.71 | $-$ 13.97
10.34 | $-$ 0.87 | 4.08 | 11.69 | 1.34 | 5.25 | 13.04 | 0.72 | 1.03 | 14.39 | $-$ 2.35 | $-$ 3.39 | 15.74 | $-$ 5.61 | $-$ 9.72 | 17.09 | $-$ 3.62 | $-$ 13.99
## 4 Summary
Following Paper I, by combining $\sim$ 0.7 million high-quality common stars
in LAMOST DR7 with the SCR method, we obtain $G-G_{\rm RP}$ and $G_{\rm
BP}-G_{\rm RP}$ color corrections as a function of $G$ magnitude for
$9.5<G<17.5$ to sub-mmag precision for Gaia EDR3. Our results confirm the
improvements in the calibration process of the EDR3. The color term of the
$G_{\rm BP}-G_{\rm RP}$ for bright stars is removed. The discontinuity caused
by changes in the instrument configuration is significantly reduced. Yet
modest systematic trends with $G$ magnitude are still detected.
By comparing with the work of Yang et al. (2021), we find that our color
corrections of individual colors are subject to systematic errors in reddening
correction with the SFD map. In the case of color-color diagrams, our
corrections still achieve an unprecedented sub-mmag precision. Our work could
be beneficial to studies where a high-precision color-color diagram is
required, including estimates of Gaia photometric metallicities and
discrimination between binaries and single stars.
We acknowledge the anonymous referee for the valuable comments that
significantly improved the quality of this paper. This work is supported by
National Key Research and Development Program of China (NKRDPC) under grant
numbers 2019YFA0405503, 2019YFA0405504, and 2016YFA0400804, National Science
Foundation of China (NSFC) under grant numbers 11603002, 11988101, and
113300034, and Beijing Normal University grant No. 310232102. This work has
made use of data products from the Guoshoujing Telescope (the Large Sky Area
Multi-Object Fiber Spectroscopic Telescope, LAMOST). LAMOST is a National
Major Scientific Project built by the Chinese Academy of Sciences. Funding for
the project has been provided by the National Development and Reform
Commission. LAMOST is operated and managed by the National Astronomical
Observatories, Chinese Academy of Sciences. This work has made use of data
from the European Space Agency (ESA) mission Gaia
(https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and
Analysis Consortium (DPAC,
https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has
been provided by national institutions, in particular the institutions
participating in the Gaia Multilateral Agreement.
## References
* Arenou et al. (2018) Arenou, F., Luri, X., Babusiaux, C., et al. 2018, A&A, 616, A17, doi: 10.1051/0004-6361/201833234
* Bohlin (2014) Bohlin, R. C. 2014, AJ, 147, 127, doi: 10.1088/0004-6256/147/6/127
* Casagrande & VandenBerg (2018) Casagrande, L., & VandenBerg, D. A. 2018, MNRAS, 479, L102, doi: 10.1093/mnrasl/sly104
* Clem & Landolt (2013) Clem, J. L., & Landolt, A. U. 2013, AJ, 146, 88, doi: 10.1088/0004-6256/146/4/88
* Evans et al. (2018) Evans, D. W., Riello, M., De Angeli, F., et al. 2018, A&A, 616, A4, doi: 10.1051/0004-6361/201832756
* Fabricius et al. (2020) Fabricius, C., Luri, X., Arenou, F., et al. 2020, arXiv e-prints, arXiv:2012.06242. https://arxiv.org/abs/2012.06242
* Gaia Collaboration et al. (2020) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2020, arXiv e-prints, arXiv:2012.01533. https://arxiv.org/abs/2012.01533
* Gaia Collaboration et al. (2016) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, A&A, 595, A1, doi: 10.1051/0004-6361/201629272
* Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051
* Guo et al. (2021) Guo, H. L., Chen, B. Q., Yuan, H. B., et al. 2021, ApJ, 906, 47, doi: 10.3847/1538-4357/abc68a
* Lenz et al. (2017) Lenz, D., Hensley, B. S., & Doré, O. 2017, ApJ, 846, 38, doi: 10.3847/1538-4357/aa84af
* Luo et al. (2015) Luo, A. L., Zhao, Y.-H., Zhao, G., et al. 2015, Research in Astronomy and Astrophysics, 15, 1095, doi: 10.1088/1674-4527/15/8/002
* Maíz Apellániz & Weiler (2018) Maíz Apellániz, J., & Weiler, M. 2018, A&A, 619, A180, doi: 10.1051/0004-6361/201834051
* Niu et al. (2021) Niu, Z., Yuan, H., & Liu, J. 2021, arXiv e-prints, ApJ, accepted, arXiv:2101.04290. https://arxiv.org/abs/2101.04290
* Peek & Graves (2010) Peek, J. E. G., & Graves, G. J. 2010, ApJ, 719, 415, doi: 10.1088/0004-637X/719/1/415
* Planck Collaboration et al. (2014) Planck Collaboration, Abergel, A., Ade, P. A. R., et al. 2014, A&A, 571, A11, doi: 10.1051/0004-6361/201323195
* Riello et al. (2020) Riello, M., De Angeli, F., Evans, D. W., et al. 2020, arXiv e-prints, arXiv:2012.01916. https://arxiv.org/abs/2012.01916
* Schlegel et al. (1998) Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525, doi: 10.1086/305772
* Weiler (2018) Weiler, M. 2018, A&A, 617, A138, doi: 10.1051/0004-6361/201833462
* Yang et al. (2021) Yang, L., Yuan, H., Zhang, R., et al. 2021, arXiv e-prints, ApJ, accepted, arXiv:2101.00750. https://arxiv.org/abs/2101.00750
* Yuan et al. (2015) Yuan, H., Liu, X., Xiang, M., et al. 2015, ApJ, 799, 133, doi: 10.1088/0004-637X/799/2/133
* Zhang & Yuan (2020) Zhang, R., & Yuan, H. 2020, ApJ, 905, L20, doi: 10.3847/2041-8213/abccc4
* Zhao et al. (2012) Zhao, G., Zhao, Y.-H., Chu, Y.-Q., Jing, Y.-P., & Deng, L.-C. 2012, Research in Astronomy and Astrophysics, 12, 723, doi: 10.1088/1674-4527/12/7/002
# Formation of Temporally Shaped Electron Bunches for
Beam-Driven Collinear Wakefield Accelerators
Wei Hou Tan<EMAIL_ADDRESS>Northern Illinois Center for Accelerator & Detector
Development and Department of Physics, Northern Illinois University, DeKalb,
IL 60115, USA Philippe Piot Northern Illinois Center for Accelerator &
Detector Development and Department of Physics, Northern Illinois University,
DeKalb, IL 60115, USA Argonne National Laboratory, Lemont, IL 60439, USA
Alexander Zholents Argonne National Laboratory, Lemont, IL 60439, USA
(August 27, 2024)
###### Abstract
Beam-driven collinear wakefield accelerators (CWAs) that operate by using
slow-wave structures or plasmas hold great promise toward reducing the size of
contemporary accelerators. Sustainable acceleration of charged particles to
high energies in the CWA relies on using field-generating relativistic
electron bunches with a highly asymmetric peak current profile and a large
energy chirp. A new approach to obtaining such bunches has been proposed and
illustrated with the accelerator design supported by particle tracking
simulations. It has been shown that the required particle distribution in the
longitudinal phase space can be obtained without collimators, giving CWAs an
opportunity for employment in applications requiring a high repetition rate of
operation.
###### pacs:
29.27.-a, 41.85.-p, 41.75.Fr
††preprint:
## I Introduction
In a beam-based collinear wakefield accelerator (CWA), a high-charge drive
bunch generates an electromagnetic field passing through a slow-wave structure
(a dielectric-lined or corrugated waveguide) or plasma. This field, called the
wakefield, is used to accelerate a witness bunch propagating through the structure in
the same direction behind the drive bunch [1, 2, 3, 4, 5, 6, 7, 8]. An
important figure of merit for a CWA is the transformer ratio,
$\mathcal{R}\equiv\left|\mathcal{E}_{+}/\mathcal{E}_{-}\right|$, where
$\mathcal{E}_{+}$ is the maximum accelerating field behind the drive bunch,
and $\mathcal{E}_{-}$ is the maximum decelerating field within the drive
bunch. For a symmetric drive-bunch current distribution in time, $I(t)$, the
transformer ratio is limited to $\mathcal{R}<2$ [9]. However, asymmetric
$I(t)$ can significantly enhance the transformer ratio [9], albeit at the
expense of reduced $\mathcal{E}_{+}$ and $\mathcal{E}_{-}$ [10].
Bunch-shaping techniques investigated hitherto are photocathode-laser
intensity shaping [11, 12, 13], transverse-to-longitudinal phase-space
exchange [14, 15, 16], and use of multi-frequency linacs [17]. Despite
significant progress, they suffer either from their inability to deliver
highly asymmetric bunches or from prohibitively large beam losses on
collimators. Consequently, producing drive bunches with an asymmetric peak
current profile while preserving most of the bunch electrons has been an
active research topic.
Another important consideration for a drive bunch arises from its proneness to
the transverse beam-break-up (BBU) instability caused by the strong transverse
forces due to the transverse wakefield [18, 19, 20]. A possible BBU-mitigation
technique consists of imparting a large energy chirp along the drive bunch
[21, 22, 23] and creating a current profile $I(t)$ that stimulates a dynamic
adjustment of this chirp concurrently with the wakefield-induced bunch
deceleration in the CWA [24].
The work reported in this paper was motivated by a design of a high repetition
rate CWA for use in a free-electron laser (FEL) facility described in Refs.
[25, 26]. This facility plans to employ up to ten FELs individually driven by
a dedicated CWA. A single conventional accelerator delivers
$\sim 1\,\mathrm{GeV}$ drive electron bunches with a highly asymmetric
$I(t)$ and a large energy chirp to the ten CWAs. Since the drive-bunch charge
considered in [25, 26] is up to $10\text{\,}\mathrm{nC}$ and the bunch
repetition rate up to $500\text{\,}\mathrm{kHz}$, the electron beam carries
significant power. Therefore, using collimators to assist with the bunch
shaping is prohibitive and, consequently, preparing the drive bunches by
other means becomes a prime challenge.
To solve the problem, we undertook a new approach and distributed the task of
obtaining the highly asymmetric $I(t)$ over the entire drive bunch accelerator
beginning from the photocathode electron gun and ending by the final bunch
compressor. To the best of our knowledge, our work demonstrates for the first
time a pathway to obtaining electron bunches with a highly asymmetric $I(t)$,
avoiding prohibitively large electron losses on collimators. The employed
technique is rather generic and can be used for preparing the electron bunch
peak current distribution with profiles different than those considered in
this paper.
Although the main focus of the work was to obtain a drive bunch with the
required distribution in the longitudinal phase space (LPS), an equally
important additional objective was to ensure transverse
emittances commensurate with the small CWA aperture.
## II The Drive Bunch and the Wakefield
We define the longitudinal charge distribution in the electron bunch as $q(z)$
and consider bunches localized on the interval $0\leq{z}\leq{L}$, where $z$ is
the distance behind the bunch head. Therefore, we have
$\displaystyle\int_{0}^{L}q(z)\mathrm{d}z=Q,$ (1)
where $Q$ is the total bunch charge. Following [10], we use the Green’s
function $G(z)$ consisting only of a fundamental mode
$G(z)=2\kappa_{\parallel}\cos{(kz)}H(z)$ (it has been shown in [10] that a
multi-mode Green’s function is less effective in producing a high transformer
ratio), where $\kappa_{\parallel}$ is the loss factor of a point particle per
unit length, $k=2\pi/\lambda$ is the wave vector, $\lambda$ is the wavelength,
$H(z)$ is the Heaviside step function. The longitudinal electric field within
the electron bunch can be written as [27, 28]
$\displaystyle\mathcal{E}_{-}(z)=2\kappa_{\parallel}\int_{0}^{z}\cos{[k(z-z^{\prime})]}q(z^{\prime})\mathrm{d}z^{\prime},\quad z\leq L,$ (2)
which is a Volterra equation of the first kind for the function $q(z)$ with
the trigonometric kernel $\cos{[k(z^{\prime}-z)]}$. If we assume that
$\mathcal{E}_{-}(0)=0$ at the bunch head, then the solution of Eq. (2) is
given by [29],
$\displaystyle q(z)$
$\displaystyle=\frac{1}{2\kappa_{\parallel}}\left[\mathcal{E}^{\prime}_{-}(z)+k^{2}\int_{0}^{z}\mathcal{E}_{-}(x)\mathrm{d}x\right],$
(3)
where $\mathcal{E}_{-}(z)$ is a known function, and its derivative is taken
over $z$. Hence, $q(z)$ is defined.
Figure 1: Nominal (green trace) and modified doorstep distributions with
associated wakefields calculated using $L=\lambda$, $\chi=0$ and
$\chi=-\frac{1}{10\lambda}$, respectively. The wakefields are computed for a
bunch charge of $10\text{\,}\mathrm{nC}$ and use a single-mode Green’s
function, where $f=180\,\mathrm{GHz}$ and
$\kappa_{\parallel}=14.3\,\mathrm{kV/pC/m}$, calculated using ECHO [30]. The
transformer ratio for the modified doorstep distribution shown in the plot is
$\mathcal{R}=5.6$.
In order to maintain the stability of the drive bunch in the CWA throughout
its deceleration, we require the bunch’s relative chirp to be constant while
being decelerated by the wakefield $\mathcal{E}_{-}(z)$, based on studies in
[24]. This requirement is achieved by having a small linear variation in
energy loss within the bunch, where head particles lose more energy than
tail particles, such that
$\displaystyle\chi(s)=\frac{1}{E_{0}(s)}\frac{\partial E}{\partial z}\propto\mathcal{E}^{\prime}_{-}(z)\equiv\text{const},$ (4)
where $E_{0}(s)$ is the energy of the reference particle, and $s$
is the distance propagated by the bunch in the CWA. This is accomplished by
using the electron bunch producing $\mathcal{E}_{-}$ with a linear variation
in $z$. Similar to Ref. [9], we solve Eq. (3) considering $q(z)$ to be
constant in the range $0\leq z<\xi$ with $\xi=\frac{1}{k}\arccos(\chi/k)$, in
which case the continuity of $\mathcal{E}_{-}(z)$ and
$\mathcal{E}_{-}^{\prime}(z)$ is preserved over the entire bunch length:
$\displaystyle q(z)$ $\displaystyle=\begin{cases}q_{0},&0\leq z<\xi\;,\\\
q_{0}\left[1-k\xi\sin(k\xi)+\frac{k^{2}}{2}\xi^{2}\cos(k\xi)+\big{(}k\sin(k\xi)-k^{2}\xi\cos(k\xi)\big{)}z+\frac{k^{2}}{2}\cos(k\xi)z^{2}\right],&\xi\leq
z\leq L\;,\\\ \end{cases}$ (5) $\displaystyle q_{0}$
$\displaystyle=\frac{6Q}{6L+k^{2}\cos(k\xi)(L-\xi)^{3}+3k\sin(k\xi)(L-\xi)^{2}}\;.$
Setting $\chi=0$ simplifies $q(z)$ to the one used in [9]. Figure 1 shows an
example of a modified doorstep distribution with the associated wakefield
calculated using $L=\lambda$ and $\chi=-\frac{1}{10\lambda}$. In this example
we considered a corrugated waveguide with radius $a=1\,\mathrm{mm}$ and
fundamental mode frequency $f=180\,\mathrm{GHz}$, as discussed in Ref. [31].
The current profile has sharp features that are challenging to realize.
Consequently, the distribution defined by Eq. (5) is used only as a starting
point to construct a practically realizable distribution, shown in Fig. 2,
with similar final properties listed in Table 1.
Figure 2: A target drive bunch peak current (a) and longitudinal phase space
(b) distributions at the end of the drive bunch accelerator.
Table 1: Main parameters associated with the drive bunch distribution shown in Fig. 2. Bunch parameter | Value | Unit
---|---|---
Charge | $10$ | $\mathrm{nC}$
Reference energy | $1$ | $\mathrm{GeV}$
RMS length | $419$ | $\mu\mathrm{m}$
Peak current | $3.5$ | $\mathrm{kA}$
RMS fractional energy spread | $2.51$ | %
RMS fractional slice energy spread | $0.1$ | %
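Equation (5) can be checked directly. The sketch below evaluates the piecewise profile for the example values $L=\lambda$ and $\chi=-1/(10\lambda)$ used in the text (with a normalized wavelength, an assumption of the sketch) and confirms that it integrates to the total charge $Q$:

```python
import numpy as np

# Eq. (5) with the example values from the text: L = lambda, chi = -1/(10 lambda).
Q = 10e-9                      # total charge (10 nC)
lam = 1.0                      # normalized wavelength
L = lam
k = 2 * np.pi / lam
chi = -1.0 / (10.0 * lam)

xi = np.arccos(chi / k) / k    # length of the constant "doorstep" head
s, c = np.sin(k * xi), np.cos(k * xi)
q0 = 6 * Q / (6 * L + k**2 * c * (L - xi) ** 3 + 3 * k * s * (L - xi) ** 2)

z = np.linspace(0.0, L, 4001)
dz = z[1] - z[0]
tail = q0 * (1 - k * xi * s + 0.5 * (k * xi) ** 2 * c
             + (k * s - k**2 * xi * c) * z + 0.5 * k**2 * c * z**2)
q = np.where(z < xi, q0, tail)

# Trapezoidal check of Eq. (1): the profile integrates to Q.
total = np.sum((q[1:] + q[:-1]) / 2.0) * dz
```

The normalization constant $q_{0}$ is exactly what makes Eq. (1) hold, and the two branches join continuously at $z=\xi$.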
## III A preliminary design of the drive bunch accelerator
### III.1 Basic considerations
A block diagram of the drive bunch accelerator is shown in Fig. 3. It utilizes
a commonly used configuration (see, for example, [32, 33]) and includes a
photocathode-gun-based injector, three linac sections, and two bunch
compressors. Linac sections L1 and L2 are based on $650\text{\,}\mathrm{MHz}$
superconducting (SRF) linac structures, and linac section L39 is based on
$3.9\text{\,}\mathrm{GHz}$ SRF structures. It is used for linearization of the
electron distribution in the longitudinal phase space (LPS). Two bunch
compressors are labeled as BC1 and BC2. Here we take advantage of the
requirement to prepare the drive bunch with the energy chirp seen in Fig. 2(b)
and move BC2 to the end of the linac, since we do not need to use the linac to
remove the energy chirp after bunch compression.
Figure 3: Block diagram of the drive bunch accelerator.
Using the known LPS distribution $\Phi_{f}(z_{f},E_{f})$ at the end of the
accelerator, we performed the one-dimensional (1D) backward tracking proposed
in [11] to find the LPS distribution $\Phi_{i}(z_{i},E_{i})$ at the entrance
of L1. We stopped at L1 where the beam energy is approximately
$50\text{\,}\mathrm{MeV}$ considering that 1D tracking may not be reliable at
lower energies where transverse and longitudinal space charge effects are
stronger. The assumption is that at this point the backward tracking will
produce a plausible $\Phi_{i}(z_{i},E_{i})$ that can be matched by the
injector. Specifically, we constrained the peak current to
$I\leq 300\,\mathrm{A}$ and sought $\Phi_{i}(z_{i},E_{i})$ with
minimal high-order correlations.
A tracking program, twice [34], was developed for rapid prototyping of the
longitudinal dynamics in the linac without accounting for a transverse motion.
The program adopts an approach similar to that used in LiTrack [35]. An
important feature of twice is its ability to perform backward tracking
including time-reversal of the collective effect, see Appendix A.
The physics model implemented in twice includes the geometric wakefields in
the accelerating sections, longitudinal space charge effects (LSCs), and
coherent synchrotron radiation (CSR). The Green’s functions needed for
modeling of the geometric wakefield effects in the $650\text{\,}\mathrm{MHz}$
and $3.9\text{\,}\mathrm{GHz}$ linac sections were computed using the ECHO
software and the empirical formula documented in Ref. [36].
The backward tracking was performed to define $\Phi_{i}(z_{i},E_{i})$ using
$\Phi_{f}(z_{f},E_{f})$, shown in Fig. 2. The following constraints for the
accelerator components were observed. First, the BBU-mitigation scheme
implemented in the CWA requires a drive bunch with the negative chirp
$\frac{\partial E}{\partial z}<0$, which implies that the longitudinal
dispersions of BC1 and BC2 should be $R_{56}^{(1)}>0$ and $R_{56}^{(2)}>0$, as
we want to maintain a negative chirp throughout the entire accelerator.
Second, a total energy gain of $\sim 950\,\mathrm{MeV}$ in the linac
part after the injector is needed. Third, an overall compression factor of
$\sim 10$ is required from two bunch compressors.
In order to enforce all these constraints, twice was combined with the
multi-objective optimization framework deap [37]. The optimization was performed by
analyzing the LPS distributions upstream of BC1 and L1 to extract the central
energy of the beam slices at every $z$-coordinate and to fit the slice-energy
dependence on $z$ with the polynomial
$\displaystyle E(z)=c_{0}+c_{1}z+c_{2}z^{2}+c_{3}z^{3},$ (6)
where $c_{i}$ are constants derived from the fit. The optimizer was requested
to minimize the ratio $c_{2}/c_{1}$ in both locations.
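A sketch of this figure of merit with synthetic, hypothetical slice-energy data (arbitrary units, not values from the paper), fitting Eq. (6) with `numpy.polyfit`:

```python
import numpy as np

# Synthetic (hypothetical) slice-energy data in arbitrary units: a dominant
# linear chirp plus a weak quadratic correlation and a little noise.
rng = np.random.default_rng(0)
z = np.linspace(-1.0, 1.0, 200)
E = 1.0 + 2.0 * z + 0.1 * z**2 + rng.normal(0.0, 1e-3, z.size)

# Fit E(z) = c0 + c1 z + c2 z^2 + c3 z^3 (Eq. 6); np.polyfit returns the
# highest-order coefficient first.
c3, c2, c1, c0 = np.polyfit(z, E, 3)

# The optimizer's figure of merit: the relative size of the quadratic term.
ratio = abs(c2 / c1)
```

Minimizing `ratio` at both locations drives the slice-energy dependence toward the purely linear chirp the scheme requires.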
### III.2 Discussion of 1D simulation results
A list of optimized accelerator settings found with twice backward tracking is
given in Table 2 and the resulting $\Phi_{i}(z_{i},E_{i})$ is shown in Fig.
4(a,b). The forward tracking using this distribution recovers
$\Phi_{f}(z_{f},E_{f})$, as seen in Fig. 4(c,d). The excellent agreement
between Fig. 2(a,b) and Fig. 4(c,d) demonstrates the ability of twice to
properly handle collective effects in both forward and backward tracking.
Table 2: Optimized parameters from the one-dimensional model. Parameter | Value | Unit
---|---|---
Accelerating voltage L1 | $219.46$ | $\mathrm{MV}$
Phase L1 | $17.81$ | deg
Frequency L1 | 650 | $\mathrm{MHz}$
Accelerating voltage L39 | $9.57$ | $\mathrm{MV}$
Phase L39 | $205.72$ | deg
Frequency L39 | $3.9$ | $\mathrm{GHz}$
$R_{56}$ for bunch compressor 1 (BC1) | $0.1321$ | $\mathrm{m}$
$T_{566}$ for bunch compressor 1 (BC1) | $-0.1581$ | $\mathrm{m}$
Accelerating voltage L2 | $847.69$ | $\mathrm{MV}$
Phase L2 | $28$ | deg
Frequency L2 | $650$ | $\mathrm{MHz}$
$R_{56}$ for bunch compressor 2 (BC2) | $0.1301$ | $\mathrm{m}$
$T_{566}$ for bunch compressor 2 (BC2) | $0.22$ | $\mathrm{m}$
Figure 4: Current (a,c) and LPS (b,d) distributions obtained from the
backward-tracking optimization (a,b) and tracked forward to the end of BC2
(c,d) to confirm the agreement with the targeted distribution shown in Fig. 2.
Each accelerator component serves a special role in obtaining the above-shown
result. Linac section L1 provides energy gain and operates far from the crest
acceleration to produce the required negative chirp. Linac section L39
corrects a second-order correlation between $E$ and $z$ imprinted on the bunch
by the injector and L1 before it enters BC1. Linac section L2 operates even
further off-crest to impart the necessary large chirp required for maintaining
beam stability in the CWA. Both bunch compressors shorten the bunch lengths
and impact the LPS distributions. The values of $T_{566}$ selected in both
bunch compressors ensure achieving $\Phi_{f}(z_{f},E_{f})$ despite the large
energy chirp. The use of a negative $T_{566}$ in BC1 and a positive $T_{566}$
in BC2 enables the generation of a doorstep-like initial distribution without
giving rise to a current spike, where $T_{566}$ has the effect of shifting the
peak of current [38, 39]. In this paper, we adopt the convention that
$T_{566}$ with a negative (resp. positive) sign shifts the peak of current
distribution to the tail (resp. head).
The result of the backward-tracking optimization provides only a starting
point for obtaining a more realistic solution. For instance, the zigzag
feature observed in the tail of the LPS distribution in Fig. 4(b) is
challenging to create. In the following sections, we discuss how 1D backward
tracking results guide the design of a photocathode-gun-based injector and the
downstream accelerator lattice.
## IV Injector design
Given the required initial LPS distribution obtained from the backward
tracking, the next step is to explore whether such LPS distribution is
achievable downstream of the injector; our approach relies on temporally
shaping the photocathode laser pulse [40].
The injector beamline was modeled using the particle-in-cell beam-dynamics
program astra, which includes a quasi-static space-charge algorithm [41]. The
program was combined with the deap multivariate optimization framework to find
a possible injector configuration and the laser pulse shape that realize the
desired final bunch distribution while minimizing the transverse-emittance
downstream of the photoinjector.
The injector configuration consists of a $200\text{\,}\mathrm{MHz}$
quarter-wave SRF gun [42, 43, 44] coupled to a $650\text{\,}\mathrm{MHz}$
accelerator module composed of five 5-cell SRF cavities [45]. The gun includes
a high-$T_{c}$ superconducting (HTS) solenoid [46] for emittance control.
In the absence of collective effects, the photoemitted electron-bunch
distribution mirrors the laser pulse distribution. In practice, image-charge
and space-charge effects are substantial during the emission process and
distort the electron bunch distribution. Consequently, devising laser-pulse
distributions that compensate for the introduced deformities is critical to
the generation of bunches with tailored current profiles. The laser pulse
distribution is characterized by $I(t,r)=\Lambda(t)R(r)$, where $\Lambda(t)$
and $R(r)$ describe the laser temporal profile and the transverse envelope,
respectively. In our simulation, we assumed the transverse distribution to be
radially uniform, $R(r)=H(r_{c}-r)$, where $H(r_{c}-r)$ is the Heaviside step
function and $r_{c}$ is the maximum radius. The temporal profile is
parameterized as
$\displaystyle\Lambda(t)$ $\displaystyle=Af(t)S(a(t-f))S(-b(t-g))\text{,
where}$ (7) $\displaystyle f(t)$ $\displaystyle=\begin{cases}h,&0\leq t<c\\\
h+d(t-c)^{d-1},&c\leq t\leq 1\\\ 0,&\text{elsewhere}\end{cases},$
where $A$ is the normalization constant; and $a$, $b$, $c$, $d$, $f$, $g$, and
$h$ are the parameters controlling the bunch shape. The smooth edges at both
ends are characterized by $a$, $b$, $f$, $g$ via the logistic function
$S(u)=1/(1+\mathrm{e}^{-u})$; $c$ determines the length of the constant part
of the laser pulse analogous to the length of the bunch head of the doorstep
distribution; and $h$ determines the relative amplitude of the constant laser
pulse; see Fig. 5.
Figure 5: Programmed macroparticle distributions at the photocathode surface:
for an optimized laser pulse (blue trace), taking into account the
photocathode response (orange trace), and both the cathode response and finite
bandwidth (BW) of the laser pulse (green trace). The laser bandwidth is taken
to be $\delta f=2\,\mathrm{THz}$.
The overall shape resembles a smoothed version of the door-step distribution.
The laser-shape parameters introduced in Eq. (7), the laser spot size, the
phase and accelerating voltage of all RF cavities, and the HTS solenoid peak
magnetic field were taken as control parameters for the optimization
algorithm. The beam kinetic energy was constrained not to exceed 60 MeV. In
order to quantify the final distribution, we used the Wasserstein’s distance
[47] to quantify how close the shape of the simulated macroparticle
distribution at the injector exit $I^{(o)}(z)$ was to the shape of the target
macroparticle density distributions $I^{(t)}(z)$ obtained from backward
tracking results. Specifically, the Wasserstein’s distance is evaluated as
$\displaystyle{\cal D}=\sum_{i=1}^{N_{b}}\frac{||I_{i}^{(t)}-I_{i}^{(o)}||}{N_{b}},$ (8)
where $I_{i}^{(t,o)}$ are the corresponding histograms of the macroparticles’
longitudinal positions over the interval $i$ defined as $[z_{i}-\delta
z,z_{i}+\delta z]$, with $\delta
z\equiv\frac{\mbox{max}(z)-\mbox{min}(z)}{N_{b}}$ being the longitudinal bin
size and $N_{b}$ the number of bins used to compute the histogram.
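A sketch of Eq. (8) as written, i.e. a per-bin histogram distance; the density normalization and the bin count are choices of this sketch, not values from the paper:

```python
import numpy as np

def hist_distance(z_target, z_obtained, n_bins=64):
    """Per-bin distance of Eq. (8) between two sets of macroparticle
    longitudinal positions, histogrammed on a common grid."""
    lo = min(z_target.min(), z_obtained.min())
    hi = max(z_target.max(), z_obtained.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    h_t, _ = np.histogram(z_target, bins=edges, density=True)
    h_o, _ = np.histogram(z_obtained, bins=edges, density=True)
    return np.sum(np.abs(h_t - h_o)) / n_bins

# Sanity check on synthetic data: a matching distribution scores lower
# than a shifted one, so the optimizer can rank candidate shapes.
rng = np.random.default_rng(1)
z_t = rng.normal(0.0, 1.0, 100_000)          # stand-in "target" positions
d_close = hist_distance(z_t, rng.normal(0.0, 1.0, 100_000))
d_far = hist_distance(z_t, rng.normal(0.5, 1.0, 100_000))
```

This ranking behavior is all the multi-objective optimizer needs from the metric.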
Additionally, we need to have a small beam transverse emittance. Hence, the
Wasserstein’s distance and the beam transverse emittance were used as our
objective functions to be minimized.
Table 3: Optimized parameters for the injector and beam parameters at $s=$11.67\text{\,}\mathrm{m}$$ from the photocathode surface. The RF-cavity phases are referenced with respect to the maximum-energy phases. Parameter | Value | Unit
---|---|---
Laser spot radius | $2.810$ | mm
Laser duration | 91 | ps
RF gun peak electric field | 40 | MV/m
RF gun phase | $1.71$ | deg
Cavity C1 peak electric field | $13.25$ | MV/m
Cavity C1 phase | 11.28 | deg
Cavity C2 phase | -15.05 | deg
Cavities C2 to C5 peak electric field | $20$ | MV/m
Cavities C3 to C4 phase | 0 | deg
Cavity C5 phase | 20 | deg
Cavity C1 distance from the photocathode | 2.67 | m
Solenoid B-field | 0.2068 | T
Shape parameter $a$ | 93.55 | -
Shape parameter $b$ | 80.70 | -
Shape parameter $c$ | 0.196 | -
Shape parameter $d$ | 3.044 | -
Shape parameter $f$ | 0.030 | -
Shape parameter $g$ | 0.900 | -
Shape parameter $h$ | 0.207 | -
Final beam energy | $58.7$ | MeV
Final beam bunch length | $7.06$ | mm
Final beam transverse emittance | $8.36$ | $\mathrm{\SIUnitSymbolMicro m}$
Final beam rms radius | $1.64$ | mm
Figure 6: Current profile (a) with associated LPS (b), and horizontal (c) and
vertical (d) phase-space distributions simulated with Astra at the end of the
photoinjector ($11.67$ m from the photocathode). In plot (b), the red trace
represents the slice RMS energy spread $\sigma_{E}$. Figure 7: Axial electric
$\mathcal{E}_{z}$ (red trace) and magnetic $B_{z}$ (blue trace) fields
experienced by the reference particle as it propagates along the optimized
photoinjector (a) with corresponding kinetic energy (b), transverse (blue) and
longitudinal (red) beam emittances (c), and sizes (d) evolving along the
injector.
An example of the optimized injector settings is summarized in Table 3, and
the evolution of the associated beam parameters along the beamline are
presented in Figs. 6 and 7. The final bunch distribution $11.67$ m downstream
of the photocathode appears in Fig. 6. The beam transverse phase space indicates
some halo population. Ultimately, an alternative laser-shaping approach
implementing a spatiotemporal-tailoring scheme could provide better control
over the transverse emittance while producing the required shaped electron
beams [40]. We also find, as depicted in Fig. 6, that the current distribution
tends to have a peak current lower than that desired from the backward
tracking result shown in Fig. 4. Although higher currents are possible, they
come at the expense of transverse emittance. Consequently, the distribution
generated from the injector was considered as an input to the one-dimensional
forward tracking simulations. Iterations of one-dimensional forward tracking
simulation studies were done to further cross-check accelerator parameters
needed for the beam-shaping process. We especially found that the desired
final bunch shape at $1\text{\,}\mathrm{GeV}$ can be recovered by altering the
L39 phase and amplitude. Furthermore, the small slice rms energy spread
$\sigma_{E}<10$ keV simulated from the injector [see Fig. 6(b)] renders the
bunch prone to microbunching instability. Consequently, a laser heater is
required to increase the uncorrelated energy spread.
Figure 8: Updated accelerator design, with the addition of the injector
beamline and a laser heater section.
The correspondingly revised diagram of the accelerator beamline shown in Fig.
8 was used as a starting point to investigate the performance of the proposed
bunch-shaping process with elegant tracking simulations taking into account
the transverse beam dynamics.
Another challenge associated with the bunch formation pertains to the temporal
resolution of the bunch shaping process. Ultimately, the laser pulse shape can
only be controlled on a time scale $\delta t\geq 1/(2\pi\delta f_{L})$ limited
by the bandwidth of the photocathode laser $\delta f_{L}$. Contemporary laser
systems are capable of $\delta t\leq 150\,\mathrm{fs}$ (RMS) [48].
Additionally, the electron bunch shape is also affected by the time response
of the photoemission process. Given the required charge of
$\sim 10\,\mathrm{nC}$, we consider a Cs$_{2}$Te photocathode with a temporal
response numerically investigated in Refs. [49, 50]. Recent measurements
confirm that Cs$_{2}$Te has a photoemission response time below
$370\text{\,}\mathrm{fs}$ [51]. Figure 5 compares the optimized ideal laser
pulse shape described by Eq. (7) with the cases when the photocathode response
time and the laser finite bandwidth are taken into account. The added effects
have an insignificant impact on the final distribution due to relatively slow
temporal variations in the required peak current distribution.
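The quoted time scales can be checked directly; for the $\delta f_{L}=2\,\mathrm{THz}$ bandwidth assumed in Fig. 5:

```python
import math

# Resolution limit delta_t >= 1/(2 pi delta_f_L) for the Fig. 5 bandwidth.
delta_f = 2e12                               # 2 THz
delta_t_fs = 1e15 / (2 * math.pi * delta_f)  # laser-shaping resolution in fs
```

The result is roughly 80 fs, well below the $\sim 370$ fs Cs$_2$Te response time, so the photocathode response rather than the laser bandwidth sets the effective shaping resolution; both are short compared with the slow temporal variations of the required current profile.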
## V Final accelerator design
The strawman accelerator design developed with the help of 1D simulations
provides guidance for the final design of the accelerator.
### V.1 Accelerator components
##### Linacs:
For the $650\text{\,}\mathrm{MHz}$ L1 and L2 SRF linacs we adopted cryomodules
proposed for the PIP-II project [52]. The linac L1 consists of two
cryomodules, and L2 has eight cryomodules. Each cryomodule includes six
cavities containing five cells. We assume that in CW operation each cavity
provides an average accelerating gradient of up to $20\,\mathrm{MV/m}$.
The quadrupole magnet doublets are located between
cryomodules and produce a pseudo-periodic oscillation of the betatron
functions. The two cavities used in the $3.9\text{\,}\mathrm{GHz}$ L39 SRF
linac are similar to the cavity described in Ref. [36].
##### Bunch compressors:
We use an arc-shaped bunch compressor consisting of a series of FODO cells,
where each cell contains two quadrupoles and two dipole magnets. The latter
configuration nominally provides a positive $R_{56}$ [53, 54, 55, 56]
$\displaystyle R_{56}\simeq\frac{\theta^{2}_{\text{total}}L_{\text{total}}}{4N^{2}_{\text{cell}}\sin^{2}{(\psi_{x}/2)}}\;,$ (9)
where $\theta_{\text{total}}$ is the total bending angle, $L_{\text{total}}$
is the total path length, $N_{\text{cell}}$ is the total number of FODO cells,
and $\psi_{x}$ is the horizontal phase advance per cell. The dipole magnet
bending angles can be used to tune the $R_{56}$. The bending angle or dipole
polarity from cell to cell does not need to be identical, but the number of
cells should be selected to realize a phase advance
$\psi_{x,\text{total}}=2n\pi$ (with $n$ integer) over the compressor to
achieve the first-order achromat.
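Equation (9) is simple enough to evaluate directly. The sketch below uses hypothetical lattice numbers (not taken from the present design) to illustrate how $R_{56}$ scales with the cell phase advance:

```python
import math

def r56_fodo_arc(theta_total: float, l_total: float,
                 n_cell: int, psi_x: float) -> float:
    """Approximate R56 of an arc compressor built from FODO cells, Eq. (9).

    theta_total -- total bending angle [rad]
    l_total     -- total path length [m]
    n_cell      -- number of FODO cells
    psi_x       -- horizontal phase advance per cell [rad]
    """
    return (theta_total**2 * l_total) / (
        4.0 * n_cell**2 * math.sin(psi_x / 2.0)**2)

# Hypothetical lattice: 0.4 rad total bend over a 40 m path, split into
# 4 cells with 90-degree phase advance per cell; R56 comes out positive.
r56 = r56_fodo_arc(0.4, 40.0, 4, math.pi / 2.0)
```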
The second-order longitudinal dispersion produced by the bunch compressor is
given by [57, 58]
$\displaystyle T_{566}=\int_{0}^{L}\left[\frac{\eta_{1,x}(s^{\prime})}{\rho(s^{\prime})}+\frac{\eta_{x}^{\prime 2}(s^{\prime})}{2}\right]\mathrm{d}s^{\prime},$ (10)
where $L$ is the length of the beamline, $\rho$ is the bending radius,
$\eta_{1,x}(s)\equiv({E_{0}}^{2}/2)\partial^{2}x(s)/\partial E^{2}$ is the
second-order horizontal dispersion function, and $\eta_{x}^{\prime}(s)$ is the
derivative of the dispersion function. We incorporate 12 sextupole magnets to
control the $T_{566}$ and 12 octupole magnets to cancel the third-order
longitudinal transfer-map element $U_{5666}$ computed over BC1. If needed, a
non-vanishing value of $U_{5666}$ can enable higher-order control over the LPS
correlation [38].
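The integral in Eq. (10) can be approximated by quadrature once the optics functions are sampled along the beamline. A minimal sketch; the array-based calling convention is an assumption about how the optics data are stored:

```python
import numpy as np

def t566(s, eta1_x, rho, etap_x):
    """Second-order longitudinal dispersion T566 of Eq. (10),
    evaluated by trapezoidal quadrature on sampled optics functions.

    s       -- beamline coordinate samples [m]
    eta1_x  -- second-order horizontal dispersion eta_{1,x}(s) [m]
    rho     -- local bending radius rho(s) [m] (use np.inf outside dipoles)
    etap_x  -- derivative of the dispersion function eta'_x(s)
    """
    integrand = eta1_x / rho + 0.5 * etap_x**2
    # trapezoidal rule written out explicitly for portability
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s)))
```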
Figure 9: Layout of bunch compressor BC1 (top diagram) with evolution of
associated betatron function (a) and pertinent linear (b), second-order (c),
and third-order (d) transfer-map elements along the beamline (with $s=0$
corresponding to the beginning of BC1). In plots (b-d) the left and right axes
refer to the horizontal chromatic functions $\eta_{i,x}$ and accumulated
longitudinal transfer-map elements from $0$ to location $s$ along BC1. In the
top diagram the red, blue, green, and purple rectangles correspond,
respectively, to quadrupole, dipole, sextupole, and octupole magnets.
The sextupole and octupole magnets are also used to zero the chromatic
transfer-map elements $T_{166},T_{266}$, and $U_{1666}$, resulting in the
bunch compressors being achromatic up to the third order.
Figure 9 displays the BC1 configuration along with the evolution of the
betatron functions and relevant horizontal chromatic ($\eta_{x}$,
$\eta_{1,x}$, and $\eta_{2,x}$) and longitudinal accumulated transfer-map
elements ($R_{56}^{0\rightarrow s}$, $T_{566}^{0\rightarrow s}$, and
$U_{5666}^{0\rightarrow s}$) up to third order as a function of the beamline
coordinate $s$. It has two arcs, one bending the beam trajectory by
$22.92^{\circ}$ and the other bending it back. Each bending magnet has a
bending angle of $\theta=2.865^{\circ}$. This design eases the requirement on the
sextupole-magnet strength required to provide a $T_{566}<0$; see Table 2. The
strengths of the sextupole magnets were optimized using elegant to achieve the
required $T_{566}$ across BC1 while obtaining a second-order achromat by
constraining $T_{166}=T_{266}=0$. The three pairs of sextupole magnets in the
second arc are mirror-symmetric to the first three pairs, with opposite-
polarity magnet strengths. During the design process, the first pair of
sextupole magnets was inserted close to the region of highest dispersion in
the first arc to tune the desired $T_{566}$; its mirror-symmetric pair was
placed in the second arc, separated by a $2\pi$ phase advance. Two further
pairs of sextupole magnets were subsequently inserted to tune $T_{166}$;
their mirror-symmetric pairs were likewise separated by a $2\pi$ phase
advance. Finally, six pairs of octupole magnets were inserted, following the
same design process, to zero the overall $U_{i666}$ ($i=1,2,5$) transfer-map
elements. The BC2 compressor requires both $R_{56}$ and $T_{566}$
to be positive, which is naturally provided by the arc bunch compressor
introduced earlier. It has a total bending angle of
$32.63^{\circ}$, and each dipole has a bending angle of $4.079^{\circ}$.
Similar to BC1, we used
sextupole- and octupole-magnet families to adjust both $T_{566}$ and
$U_{5666}$ and produce the third-order achromat. The BC2 lattice appears in
Fig. 10 along with the evolution of the betatron functions and relevant
chromatic elements.
Figure 10: Layout of bunch compressor BC2 (top diagram) with evolution of
associated betatron function (a) and pertinent linear (b), second-order (c),
and third-order (d) transfer-map elements along the beamline (with $s=0$
corresponding to the beginning of BC2). In plots (b-d) the left and right axes
refer to the horizontal chromatic functions $\eta_{i,x}$ and accumulated
longitudinal transfer-map elements from $0$ to location $s$ along BC2. The top
diagram follows the same conventions as in Fig. 9.
Finally, the layout of the two bunch compressors is presented in Fig. 11.
Figure 11: The geometry of the bunch compressors BC1 (a) and BC2 (b), where
red, blue, green, and purple rectangles are quadrupoles, dipoles, sextupoles,
and octupoles magnets, respectively.
##### Matching sections:
All accelerator components are connected using matching sections composed of
quadrupole magnets and drift spaces.
The evolution of the betatron functions from the injector exit up to the end
of BC2 appears in Fig. 12. Throughout the entire accelerator, the betatron
functions are maintained at values $\beta_{x,y}<30\text{\,}\mathrm{m}$.
Figure 12: Evolution of the betatron (left axis) and horizontal dispersion
(right axis) functions along the proposed linac. The vertical dispersion is
zero throughout the linac. The magnetic-lattice color coding for the element
follows Fig. 9 with the accelerating cavities shown as gold rectangles.
### V.2 Tracking and optimization
The beam distribution obtained at the exit of the injector was used as input
to elegant for tracking and optimization.
Figure 13: Current profile (a) and associated LPS (b) distributions simulated
with Astra at the end of the photoinjector (see Fig. 6) with added
uncorrelated fractional energy spread following a Gaussian distribution with RMS
spread $\sigma_{E}/E=1.5\times 10^{-3}$. In plot (b) the red trace represents
the slice RMS energy spread $\sigma_{E}$.
We found that the slice energy spread needs to be increased to
$\sim 75\text{\,}\mathrm{keV}$ using the laser heater to suppress the
microbunching instability [59, 60]. In this study, this increase was emulated
by numerically adding Gaussian-distributed random noise to the
macroparticles’ energy using the scatter element available in elegant. Thus,
Fig. 13 shows the actual LPS distribution used at the beginning of the
accelerator in the tracking studies.
The accelerator settings obtained with twice were used as a starting point in
the accelerator optimization including transverse effects. The fine-tuning of
the above-described accelerator components was accomplished using elegant. A
multi-objective optimization was applied to determine the twelve accelerator
parameters controlling the longitudinal dynamics, i.e., voltages and phases of
L1, L2, L39, and values of $R_{56}$, $T_{566}$ in two bunch compressors. The
resulting beam distribution obtained downstream of BC2 was then used to
compute the wakefield generated in a $180\text{\,}\mathrm{GHz}$ corrugated
waveguide considered for the role of the wakefield accelerator in [31]. The
resulting peak accelerating field and transformer ratio were then adopted as
objective functions to be maximized with the accelerator parameters as control
variables. The trade-off between peak accelerating field and transformer ratio
was quantified in Eq. (30) of Ref. [10], hence providing a good measure to
verify whether our optimization reaches the optimal Pareto front.
Table 4: Main accelerator parameters and beam parameters at the end of BC2.

Parameter | Value | Unit
---|---|---
Accelerating voltage L1 | 193.22 | $\mathrm{MV}$
Phase L1 | 21.64 | deg
Frequency L1 | 650 | $\mathrm{MHz}$
Accelerating voltage L39 | 9.73 | $\mathrm{MV}$
Phase L39 | 202.52 | deg
Frequency L39 | 3.9 | $\mathrm{GHz}$
$R_{56}$ for bunch compressor 1 (BC1) | 0.1294 | $\mathrm{m}$
$T_{566}$ for bunch compressor 1 (BC1) | $-0.1294$ | $\mathrm{m}$
$U_{5666}$ for bunch compressor 1 (BC1) | 0 | $\mathrm{m}$
Accelerating voltage L2 | 857.92 | $\mathrm{MV}$
Phase L2 | 26.05 | deg
Frequency L2 | 650 | $\mathrm{MHz}$
$R_{56}$ for bunch compressor 2 (BC2) | 0.1312 | $\mathrm{m}$
$T_{566}$ for bunch compressor 2 (BC2) | 0.1465 | $\mathrm{m}$
$U_{5666}$ for bunch compressor 2 (BC2) | 0 | $\mathrm{m}$
Final beam energy | 998 | $\mathrm{MeV}$
Final beam bunch length | 414 | $\mathrm{\mu m}$
Final beam normalized emittance, $\varepsilon_{nx}$ | 31 | $\mathrm{\mu m}$
Final beam normalized emittance, $\varepsilon_{ny}$ | 12 | $\mathrm{\mu m}$
Peak accelerating wakefield $|\mathcal{E_{+}}|$ | 94.3 | $\mathrm{MV/m}$
Peak decelerating wakefield $|\mathcal{E_{-}}|$ | 18.8 | $\mathrm{MV/m}$
Transformer ratio $\mathcal{R}$ | 5.0 | –
Figure 14: Current (a) with associated LPS (b), and transverse horizontal (c)
and vertical (d) phase-space distributions simulated with elegant at the end
of BC2 using the optimized linac and bunch-compressor settings summarized in
Table 4 and the injector distributions from Fig. 13. In plot (b) the red trace
represents the slice RMS energy spread $\sigma_{E}$. Figure 15: Comparison of
the Pareto front with the analytical trade-off curve between the peak field
and transformer ratio described by Eq. (30) of Ref. [10]. Each blue dot
represents a numerically simulated configuration with the red star
representing the configuration with parameters listed in Table 4.
The optimal accelerator settings and final beam parameters are summarized in
Table 4. The LPS distribution at the end of the accelerator is shown in Fig.
14. We also calculated that the $\sim 1\text{\,}\mathrm{GeV}$,
$10\text{\,}\mathrm{nC}$ electron bunch having this distribution produces a
peak wakefield of $94.26\text{\,}\mathrm{MV/m}$ with a transformer ratio of
$5$ when propagating in a corrugated waveguide. Figure
15 demonstrates that our optimization has reached the optimal set of
solutions, where the Pareto front closely follows the analytically calculated
tradeoff curve [10]. The obtained current profile produces a wakefield
amplitude $\sim 15\%$ lower than the one expected from the ideal distribution
for a transformer ratio ${\cal R}\simeq 5$. Such an agreement gives confidence
in our optimization approach based on the trade-off between peak accelerating
field and transformer ratio. The simulations also indicate that the
horizontal normalized emittance increases to
$\varepsilon_{nx}=31\text{\,}\mathrm{\mu m}$ due to CSR and chromatic
aberrations in an electron bunch with large correlated energy variations.
Although significant, this emittance dilution is still acceptable.
Our main result is shown in Fig. 16. It compares the final distribution and
wakefield with the target distribution and wakefield from Fig. 2. The good
agreement demonstrates that a drive electron bunch with a highly asymmetric
peak current profile can indeed be obtained without employing collimators.
Figure 16: Target (from Fig. 2) and optimized final current distributions
(respectively shown as green- and red-shaded curves) with associated
wakefields (respectively displayed as green and red traces). The transformer
ratio for the simulated distribution is ${\cal R}=5.0$.
A comparison of Tables 3 and 4 indicates that the final accelerator settings
optimized by elegant deviate less than 10$\%$ from those obtained with twice.
This justifies the two-step strategy taken in this study to solve the
difficult problem of forming temporally shaped electron bunches for a
beam-driven collinear wakefield accelerator.
The nonlinear correlation observed in the tail of the LPS distribution
downstream of BC2 [see Fig. 14(b, blue trace)] originates from the CSR. As the
beam is compressed inside the bunch compressors, its tail experiences a
stronger CSR force due to its peak current being higher than the rest of the
bunch. It is worth noting that elegant uses a 1D projected model to treat the
CSR effect. The applicability of such a 1D treatment is governed by
Derbenev’s criterion [61], which suggests that projecting the bunch
distribution onto a line-charge distribution may overestimate the CSR force,
particularly when the bunch has a large transverse-to-longitudinal aspect
ratio ${\cal
A}(s)\equiv\left(\sigma_{x}(s)/\sigma_{z}(s)\right)\sqrt{\left(\sigma_{x}(s)/\rho(s)\right)}$.
In our design, the condition ${\cal A}\ll 1$ was not rigorously followed (but
rather the softer condition ${\cal A}<1$ was achieved), suggesting that the
impact of CSR may be overestimated in some regions of the bunch compressors.
We also note that the final beam distribution exhibits significant
longitudinal-horizontal ($z-x$) correlations due to CSR effects; see Fig. 17.
Although the associated projected-emittance dilution is tolerable, the
electrons in the longitudinal slices with the horizontal offsets seen in Fig.
17(c) will excite transverse wakefields in the CWA and ultimately seed the BBU
instability. These offsets come from CSR-induced energy loss occurring in the
BC2 that breaks the achromatic property of this beamline. Understanding the
impact of this distribution feature in the CWA linac along with finding
mitigation techniques is a current research focus.
Figure 17: Final $(z,x)$ (a) and $(z,y)$ (c) beam distributions corresponding
to the data shown in Fig. 14, and slice analysis for positions
$\langle{x}\rangle$ and $\langle{y}\rangle$ (b) and RMS beam size and
emittances (d).
### V.3 Impact of errors
In order to validate the robustness of the proposed design, it is instructive
to investigate the sensitivity of the proposed shaping technique to shot-to-
shot jitter of the amplitude and phase of the accelerating field in the
linac’s structures. Consistent with LCLS-II specifications [62], we considered
the relative RMS amplitude jitter of 0.01% and the phase jitter of 0.01
degree. For simplicity, we assume that the injector produced identical
bunches, as shown in Fig. 6, and performed 100 simulations of the accelerator
beamline (from the injector exit to the exit of BC2) for different random
realizations of the phase and amplitude for linacs L1, L2, and L39. The errors
in linac settings were randomly generated using a Gaussian probability
density function with standard deviations of 0.01% and $0.01^{\circ}$.
Figure 18 presents the wakefield
averaged over the 100 simulations and indicates that a stable transformer
ratio $5.00\pm 0.05$ can be maintained owing to the stable beam produced in
the superconducting linac.
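The jitter study can be reproduced schematically as follows. The sketch draws 100 random machine realizations from the quoted RMS levels; the nominal settings are taken from Table 4, while the seed and the data layout are arbitrary choices made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

REL_AMP_RMS = 1.0e-4    # 0.01 % relative amplitude jitter
PHASE_RMS_DEG = 0.01    # 0.01 degree phase jitter

# nominal (voltage [MV], phase [deg]) settings from Table 4
NOMINAL = {"L1": (193.22, 21.64), "L2": (857.92, 26.05), "L39": (9.73, 202.52)}

def random_realization():
    """One shot-to-shot realization of the linac amplitudes and phases."""
    return {name: (v * (1.0 + rng.normal(0.0, REL_AMP_RMS)),
                   phi + rng.normal(0.0, PHASE_RMS_DEG))
            for name, (v, phi) in NOMINAL.items()}

# 100 random machine settings, as used for the wakefield statistics in Fig. 18
realizations = [random_realization() for _ in range(100)]
```

Each realization would then be tracked from the injector exit to the BC2 exit to accumulate the wakefield statistics.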
Figure 18: Wakefields obtained from 100 simulations with jitter in linacs L1,
L2, and L39. All cavities are taken to have relative jitter in accelerating
voltage of 0.01% and phase jitter $0.01^{\circ}$. The red line shows the
average wakefield while the blue shaded region represents the fluctuation of
wakefields due to jitter over 100 random realizations of the linac settings.
The average transformer ratio is $5.00\pm 0.05$. The reconstructed current
profile (green-shaded curve) is obtained numerically using Eq. (3).
Likewise, we observe the impact of charge fluctuation on the shaping to be
tolerable. Cathode-to-end simulations combining astra and elegant indicate
that a relative charge variation of +2% (resp. -2%) yields a relative change
in the transformer ratio of -2% (resp. +1%) and a relative variation in peak
field of -1.7% (resp. +1.7%); see Fig. 19.
Figure 19: Current distribution (shaded curves, left axis) and associated
wakefields (traces) for the nominal charge and $\pm 2$% relative change in
charge (9.8 and $10.2\text{\,}\mathrm{nC}$).
## VI Summary
We have presented the design of an accelerator capable of generating
$1\text{\,}\mathrm{GeV}$ electron bunches with a highly asymmetric peak
current profile and a large energy chirp required for a collinear wakefield
accelerator. It has been achieved without the use of collimators. Our approach
is based on ab-initio temporal shaping of the photocathode laser pulse
followed by nonlinear manipulations of the electron distribution in the
longitudinal phase space throughout the accelerator using collective effects
and precision control of the longitudinal dispersion in two bunch compressors
up to the third order. Finding the optimal design consisted of first
implementing a simplified accelerator model and using it for backward tracking
of the longitudinal phase space distribution of electrons through the main
accelerator to provide the longitudinal phase space distribution required from
the injector. The program twice was developed to support such a capability and
used to optimize the global linac parameters and time-of-flight properties of
bunch compressors. Second, the simulation of the photo-injector using astra
was performed to generate the required distribution. Third, the linac design
was refined using elegant to account for the transverse beam dynamics.
Finally, the formation of longitudinally shaped drive bunches capable of
producing a transformer ratio of $\sim 5$ and a peak accelerating wakefield
close to $100\text{\,}\mathrm{MV/m}$ in the collinear wakefield accelerator
has been numerically demonstrated.
Although the proposed accelerator design is promising, we note that further
work is required to investigate whether the same accelerator can accelerate
the low-charge, low-emittance “witness bunches” that would be accelerated to
multi-$\mathrm{GeV}$ energies in the collinear wakefield accelerator and used
for the generation of x-rays in the downstream free-electron laser. Discussion
of this research is the subject of a forthcoming publication.
###### Acknowledgements.
The authors are grateful to Dr. Stanislav Baturin (NIU) for useful
discussions. WHT thanks Y. Park (UCLA) for several discussions on simulation
studies. This work is supported by the U.S. Department of Energy, Office of
Science, under award No. DE-SC0018656 with Northern Illinois University and
contract No. DE-AC02-06CH11357 with Argonne National Laboratory.
## Appendix A One-dimensional tracking model
A simple one-dimensional tracking program twice [34] was developed for rapid
assessment of the longitudinal dynamics of electrons in linear accelerators.
The program adopts an approach similar to the one used in LiTrack [35], where
only the accelerator components affecting the longitudinal beam dynamics are
considered and modeled analytically. A detailed description of twice is
published in [34]. In brief, the beam is represented by a set of $N$
macroparticles with identical charges $Q/N$ and given a set of initial LPS
coordinates $(z_{i},E_{i})$. A transformation $(z_{f},E_{f})=f(z_{i},E_{i})$
is applied to obtain final coordinates in the LPS.
#### A.0.1 Single particle dynamics
In twice the transformation for a macroparticle with coordinates
$(z_{i},E_{i})$ passing through a radiofrequency (RF) linac is given by
$\displaystyle\begin{pmatrix}z_{f}\\ E_{f}\end{pmatrix}=\begin{pmatrix}z_{i}\\ E_{i}(z_{i})\pm eV\cos(kz_{i}+\varphi)\end{pmatrix},$ (11)
where $V$, $k$, and $\varphi$ are, respectively, the accelerating voltage,
wave-vector amplitude, and off-crest phase associated with the accelerating
section, and $e$ is the electronic charge. In the latter and following
equations the $\pm$ sign indicates the forward (+) and backward (-) tracking
process detailed in Sec. A.0.3. Similarly, the transformation through a
longitudinally dispersive section, such as a bunch compressor, is given by
$\displaystyle\begin{pmatrix}z_{f}\\ E_{f}\end{pmatrix}=\begin{pmatrix}z_{i}\pm\left[R_{56}\frac{E_{i}-E_{0}}{E_{0}}+T_{566}\left(\frac{E_{i}-E_{0}}{E_{0}}\right)^{2}\right]\\ E_{i}\end{pmatrix},$ (12)
where $E_{0}$ is the reference-particle energy assumed to remain constant
during the transformation, and $R_{56}\equiv E_{0}\frac{\partial
z_{f}}{\partial E_{i}}$ and
$T_{566}\equiv\frac{{E_{0}}^{2}}{2}\frac{\partial^{2}z_{f}}{\partial
E_{i}^{2}}$ are the first- and second-order longitudinal-dispersion functions
introduced by the beamline. It should be noted that, given our LPS coordinate
conventions, a conventional four-bend “chicane” magnetic bunch compressor has
a longitudinal dispersion $R_{56}>0$. The latter equation ignores energy loss,
e.g., due to incoherent synchrotron radiation, occurring in the beamline
magnets.
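The maps (11) and (12) are straightforward to prototype. The sketch below is not the actual twice implementation; it assumes energies are expressed in eV (so the electron charge drops out of Eq. (11)) and uses `direction=+1` for forward and `-1` for backward tracking:

```python
import numpy as np

def rf_map(z, E, V, k, phi, direction=+1):
    """RF-linac map of Eq. (11): positions unchanged, energy kicked.
    Energies in eV and voltage in volts, so the charge e drops out."""
    return z, E + direction * V * np.cos(k * z + phi)

def dispersive_map(z, E, E0, R56, T566, direction=+1):
    """Longitudinally dispersive map of Eq. (12); E0 is the reference energy."""
    delta = (E - E0) / E0
    return z + direction * (R56 * delta + T566 * delta**2), E
```

Composing these maps element by element reproduces the forward-tracking mode; flipping `direction` and reversing the element order gives the backward mode.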
#### A.0.2 Collective effects
In twice, we implemented collective effects as an energy kick approximation
using the transformation
$\displaystyle\begin{pmatrix}z_{f}\\ E_{f}\end{pmatrix}=\begin{pmatrix}z_{i}\\ E_{i}(z_{i})\pm\Delta E(z_{i})\end{pmatrix},$ (13)
where $\Delta E(z)$ represents the energy change associated with the
considered collective effect. The energy kick is applied after each beamline
element in forward tracking and before it in backward tracking, following
the scheme shown in Fig. 20. The implemented collective effects include wakefields
modeled after a user-supplied Green’s function, one-dimensional steady-state
coherent synchrotron radiation (CSR), and longitudinal space charge (LSC)
described via an impedance. The collective effects require the estimation of
the beam’s charge density, which is done in twice either using a standard
histogram binning method with noise filtering or via the kernel-density
estimation technique [63].
Figure 20: Treatment of collective effects as energy kicks downstream of
beamline elements. In forward (resp. backward) tracking, transformations of
beamline elements (resp. energy kicks) were applied, followed by energy kicks
(resp. beamline elements).
In order to model the impact of a wakefield, the charge distribution $q(z)$ is
directly used to compute the wake potential given a tabulated Green’s function
$\displaystyle
W(z)=\int^{z}_{0}q(z^{\prime})G(z-z^{\prime})\text{d}z^{\prime}\;.$ (14)
The change in energy is computed as $\Delta E(z)=LW(z)$, where $L$ is the
effective length where the beam experiences the wakefield.
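On a uniform grid, Eq. (14) reduces to a causal discrete convolution. A minimal sketch; the grid orientation, with the bunch head at $z=0$, is an assumption:

```python
import numpy as np

def wake_potential(z, q, green):
    """Wake potential of Eq. (14), W(z) = int_0^z q(z') G(z - z') dz',
    for q and G sampled on the same uniform grid starting at z = 0."""
    dz = z[1] - z[0]
    # discrete causal convolution, truncated to the length of the grid
    return np.convolve(q, green)[: len(z)] * dz
```

The energy change then follows as $\Delta E(z)=LW(z)$.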
The LSC is implemented using a one-dimensional model detailed in [64], where
the impedance per unit length is
$\displaystyle Z(k)$ $\displaystyle=i\frac{Z_{0}}{\pi\gamma
r_{b}}\frac{1-2I_{1}(\xi_{b})K_{1}(\xi_{b})}{\xi_{b}}\;,$ (15)
where $\xi_{b}\equiv kr_{b}/\gamma$; $I_{1}$ and $K_{1}$ are modified Bessel
functions of the first and second kind, respectively; and $k$, $Z_{0}$ and
$r_{b}$ are, respectively, the wave-vector amplitude, impedance of free space
and a user-defined transverse beam radius, and $\gamma$ is the Lorentz factor.
Given the charge density, the Fourier-transformed current density
$\tilde{I}(k)$ is derived from
$\displaystyle\tilde{I}(k)={\cal F}[cq(z)],$ (16)
with ${\cal F}$ representing the Fourier transform. The change in energy is
computed as
$\displaystyle\Delta E=-\mathcal{F}^{-1}[eZ(k)\tilde{I}(k)L],$ (17)
where $\mathcal{F}^{-1}$ is the inverse Fourier transform, and $L$ is the
effective distance along which the LSC interaction occurs. In order to account
for LSC during acceleration, $\gamma$ is replaced by the geometric mean
$\sqrt{\gamma_{i}\gamma_{f}}$ of the Lorentz factors computed at the entrance
$\gamma_{i}$ and exit $\gamma_{f}$ of the linac section.
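Equations (15)-(17) map onto a few lines of FFT code. The sketch below is not the twice implementation: it assumes a uniform grid with periodic boundary conditions, uses `scipy.special` for the modified Bessel functions, and divides Eq. (17) by $e$ so the kick comes out in eV:

```python
import numpy as np
from scipy.special import i1, k1

Z0 = 376.730313668   # free-space impedance [ohm]
C = 299792458.0      # speed of light [m/s]

def lsc_kick(z, q, gamma, r_b, L):
    """LSC energy change of Eqs. (15)-(17), in eV, for a line density q(z) [C/m]."""
    n = len(z)
    dz = z[1] - z[0]
    I_k = np.fft.rfft(C * q)                  # Eq. (16), k >= 0 half-spectrum
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dz)
    xi = k * r_b / gamma
    with np.errstate(invalid="ignore", divide="ignore"):
        Zk = 1j * Z0 / (np.pi * gamma * r_b) * (1.0 - 2.0 * i1(xi) * k1(xi)) / xi
    Zk[0] = 0.0                               # the impedance vanishes at k = 0
    # Eq. (17); irfft supplies the Hermitian extension, so the kick is real
    return -np.fft.irfft(Zk * I_k * L, n=n)
```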
Finally, CSR energy kicks are applied downstream of the dispersive beamline
elements. For instance, a CSR energy kick can be applied after a dispersive
element with user-defined length and angle described by $R_{56}$ and
$T_{566}$. The effect of CSR is described using a one-dimensional model
commonly implemented in other beam-dynamics programs [65]. To simplify the
calculation, only the steady-state CSR is considered in twice. The energy
loss associated with CSR is obtained from [66] (here $z$ denotes the position
relative to the bunch centroid, with the bunch head at $z>0$),
$\displaystyle\Delta E(z)=\rho\theta\frac{\mathrm{d}E}{\mathrm{d}ct}=-\theta\frac{\gamma m_{e}c^{2}r_{e}}{e}\int^{z}_{-\infty}\frac{\partial q(z^{\prime})}{\partial z^{\prime}}I_{csr}(z,z^{\prime})\text{d}z^{\prime}\ ,$ (18)
with the integral kernel defined as
$\displaystyle
I_{csr}(z,z^{\prime})=\frac{4u(u^{2}+8)}{(u^{2}+4)(u^{2}+12)}\;,$ (19)
where $\theta$ is the bending angle, $m_{e}c^{2}$ is the electron rest-mass
energy, $r_{e}$ is the classical electron radius, and the variable $u$ is the
solution of $\frac{\gamma^{3}(z-z^{\prime})}{\rho}=\frac{u^{3}}{24}+\frac{u}{2}$. CSR
introduces an energy loss strongly dependent on the bunch length, which varies
within the dispersive sections used to compress the bunch. Consequently, the
longitudinally dispersive beamlines are segmented into several elements with
individual $(R_{56},T_{566})$ parameters. A CSR kick is applied after each of
the elements. A conventional chicane-type bunch compressor is usually broken
into two sections (two mirror-symmetric doglegs) but can in principle be
divided into an arbitrary number of segments to improve the resolution at the
expense of computational time.
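Evaluating the kernel of Eq. (19) requires inverting the cubic relation for $u$; since $u^{3}/24+u/2$ is strictly increasing, a Newton iteration converges reliably. A minimal sketch (the starting guess and iteration count are implementation choices):

```python
import numpy as np

def csr_kernel(dz, gamma, rho, iters=60):
    """I_csr(z, z') of Eq. (19) for dz = z - z' >= 0.

    u solves gamma^3 * dz / rho = u^3/24 + u/2 (monotonic, unique real root),
    found here by Newton iteration starting from the cubic-term asymptote.
    """
    rhs = gamma**3 * np.asarray(dz, dtype=float) / rho
    u = np.cbrt(24.0 * rhs)                 # starting guess from the cubic term
    for _ in range(iters):
        f = u**3 / 24.0 + u / 2.0 - rhs
        u = u - f / (u**2 / 8.0 + 0.5)      # the derivative is always >= 1/2
    return 4.0 * u * (u**2 + 8.0) / ((u**2 + 4.0) * (u**2 + 12.0))
```

Convolving this kernel with $\partial q/\partial z^{\prime}$ as in Eq. (18), segment by segment, gives the steady-state CSR energy loss.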
#### A.0.3 Backward tracking
An important feature of twice is its capability to track the beam in the
forward or backward directions (indicated by the $\pm$ sign in Eqs. (12) and
(13)) in the presence of collective effects [so far LSC, CSR, and wakefield
effects are included]. The effects of LSC and wakefield are straightforward to
implement as they only involve a change in energy, while handling of the CSR
requires extra care since the particles’ positions also change throughout the
dispersive section. Therefore, an energy kick is applied after the beamline
element in the forward-tracking mode and before the beamline element in
backward-tracking mode, as shown in Fig. 20. Although the treatment of CSR is
not exact, it nonetheless provides a good starting point to account for the
effect.
## References
* Voss and Weiland [1982] G. Voss and T. Weiland, _The wake field acceleration mechanism_ , Tech. Rep. DESY-82-074 (DESY, 1982).
* Briggs _et al._ [1974] R. J. Briggs, T. J. Fessenden, and V. K. Neil, Electron autoacceleration, in _Proceedings, 9th International Conference on the High-Energy Accelerators_ (1974) p. 278.
* Friedman [1973] M. Friedman, Autoacceleration of an intense relativistic electron beam, Phys. Rev. Lett. 31, 1107 (1973).
* Perevedentsev and Skrinsky [1978] E. A. Perevedentsev and A. N. Skrinsky, On the Use of the Intense Beams of Large Proton Accelerators to Excite the Accelerating Structure of a Linear Accelerator, in _Proc. 6th All-Union Conference Charged Particle Accelerators, Dubna (Institute of Nuclear Physics, Novosibirsk, USSR, 1978)_, Vol. 2 (1978) p. 272, English version is available in Proceedings of the 2nd ICFA Workshop on Possibilities and Limitations of Accelerators and Detectors (1979) p. 61.
* Sessler [1982] A. M. Sessler, The free electron laser as a power source for a high‐gradient accelerating structure, AIP Conference Proceedings 91, 154 (1982).
* Chen _et al._ [1985] P. Chen, J. M. Dawson, R. W. Huff, and T. Katsouleas, Acceleration of electrons by the interaction of a bunched electron beam with a plasma, Phys. Rev. Lett. 54, 693 (1985).
* Chin [1983] Y. Chin, The Wake Field Acceleration Using a Cavity of Elliptical Cross Section, in _12th international linear accelerator conference_ (1983) pp. 159–161.
* Gai _et al._ [1988] W. Gai, P. Schoessow, B. Cole, R. Konecny, J. Norem, J. Rosenzweig, and J. Simpson, Experimental demonstration of wake-field effects in dielectric structures, Phys. Rev. Lett. 61, 2756 (1988).
* Bane _et al._ [1985] K. L. Bane, P. Chen, and P. B. Wilson, On collinear wake field acceleration, Proceedings of the 1985 Particle Accelerator Conference (PAC1985): Accelerator Engineering and Technology Vancouver, BC May 13-16, 1985 32, 3524 (1985).
* Baturin and Zholents [2017] S. S. Baturin and A. Zholents, Upper limit for the accelerating gradient in the collinear wakefield accelerator as a function of the transformer ratio, Phys. Rev. Accel. Beams 20, 061302 (2017).
* Cornacchia _et al._ [2006] M. Cornacchia, S. Di Mitri, G. Penco, and A. A. Zholents, Formation of electron bunches for harmonic cascade x-ray free electron lasers, Phys. Rev. ST Accel. Beams 9, 120701 (2006).
* Penco _et al._ [2014] G. Penco, M. Danailov, A. Demidovich, E. Allaria, G. De Ninno, S. Di Mitri, W. M. Fawley, E. Ferrari, L. Giannessi, and M. Trovó, Experimental Demonstration of Electron Longitudinal-Phase-Space Linearization by Shaping the Photoinjector Laser Pulse, Phys. Rev. Lett. 112, 044801 (2014).
* Lemery and Piot [2015] F. Lemery and P. Piot, Tailored electron bunches with smooth current profiles for enhanced transformer ratios in beam-driven acceleration, Phys. Rev. Spec. Top. Accel. Beams 18, 081301 (2015).
* Jiang _et al._ [2012] B. Jiang, C. Jing, P. Schoessow, J. Power, and W. Gai, Formation of a novel shaped bunch to enhance transformer ratio in collinear wakefield accelerators, Phys. Rev. ST Accel. Beams 15, 011301 (2012).
* Ha _et al._ [2017] G. Ha, M. H. Cho, W. Namkung, J. G. Power, D. S. Doran, E. E. Wisniewski, M. Conde, W. Gai, W. Liu, C. Whiteford, Q. Gao, K.-J. Kim, A. Zholents, Y.-E. Sun, C. Jing, and P. Piot, Precision Control of the Electron Longitudinal Bunch Shape Using an Emittance-Exchange Beam Line, Phys. Rev. Lett. 118, 104801 (2017).
* Gao _et al._ [2018] Q. Gao, G. Ha, C. Jing, S. P. Antipov, J. G. Power, M. Conde, W. Gai, H. Chen, J. Shi, E. E. Wisniewski, D. S. Doran, W. Liu, C. E. Whiteford, A. Zholents, P. Piot, and S. S. Baturin, Observation of High Transformer Ratio of Shaped Bunch Generated by an Emittance-Exchange Beam Line, Phys. Rev. Lett. 120, 114801 (2018).
# Inferring COVID-19 Biological Pathways from Clinical Phenotypes via
Topological Analysis
Negin Karisani1, Daniel E. Platt2, Saugata Basu1, Laxmi Parida2 S. Basu and N.
Karisani were partially supported by NSF grant DMS-1620271.
###### Abstract
COVID-19 has caused thousands of deaths around the world and also resulted in
a large international economic disruption. Identifying the pathways associated
with this illness can help medical researchers to better understand the
properties of the condition. This process can be carried out by analyzing the
medical records. It is crucial to develop tools and models that can aid
researchers with this process in a timely manner. However, medical records are
often unstructured clinical notes, which poses significant challenges to
developing automated systems. In this article, we propose a pipeline to
aid practitioners in analyzing clinical notes and revealing the pathways
associated with this disease. Our pipeline relies on topological properties
and consists of three steps: 1) pre-processing the clinical notes to extract
the salient concepts, 2) constructing a feature space of the patients to
characterize the extracted concepts, and finally, 3) leveraging the
topological properties to distill the available knowledge and visualize the
result. Our experiments on a publicly available dataset of COVID-19 clinical
notes testify that our pipeline can indeed extract meaningful pathways.
## 1 Introduction
Since the early stages of the COVID-19 pandemic, the scientific community has
made tremendous efforts to understand the clinical course of the virus. However,
there is still a lot to reveal about COVID-19. For instance, most people who
contract COVID-19 develop mild to moderate symptoms (WHO 2020), some may show
no symptoms, while for others the disease can be fatal. To better understand
different strains of COVID-19, one approach is to study the underlying
pathways. The aim of this study is to investigate the application of
topological properties in automatically inferring candidate pathways. We use
unstructured clinical notes as the source of information to automatically
extract phenotypes to be used in our topological model. Phenotypes are the
symptoms and signs that reflect the presence of a disease; in what follows, we
refer to them as symptoms.
Advances in technology have helped scientists garner large amounts of
biomedical data. This has provided the community with unprecedented
opportunities to study and better understand the spread of diseases. However,
this burst of information has posed significant challenges to the traditional
data analysis and visualization techniques. Traditional infographics, such as
Venn diagrams, which are still widely used to compare and contrast sets of
symptoms, fail to aid practitioners in analyzing large sets of symptoms. Thus,
tools that can effectively employ the techniques in other scientific
communities to facilitate this process are of great value.
In this article, we rely on concepts from Topological Data Analysis and
propose a pipeline to automatically extract candidate pathways associated with
COVID-19 from clinical notes. Our pipeline, which is based on the notion of
Redescriptions (Parida and Ramakrishnan 2005; Mullins et al. 2006; Platt et
al. 2016), consists of three steps: 1) pre-processing the notes and identifying
the candidate symptoms, 2) mapping the symptoms to the space of the patients,
and finally, 3) extracting the topological properties and their visualization.
We have evaluated our pipeline on a publicly available dataset of COVID-19
clinical notes. The results show that our model is able to extract meaningful
pathways. For example, in Section 6.1 we demonstrate that there are
potentially distinctive pathways between coughers and non-coughers.
The remainder of this article is organized as follows: in Section 2 we provide
an overview of the concepts that we use. In Section 3 we present our pipeline
and in Section 4 we discuss the implementation details. In Section 5 we explain
the details of our experiments, and in Section 6 we present and discuss the
results. Finally, in Section 7 we conclude the article.
## 2 Background
In this section, we review the concepts used in the remainder of the paper.
### 2.1 Redescriptions
Redescriptions are used to identify the phenomena that occur in different
ways. The concept was first introduced in (Ramakrishnan et al. 2004) and was
later generalized in (Parida and Ramakrishnan 2005) to a framework called
redescription mining, for which the authors present applications to the Gene
Ontology database.
Redescriptions are mathematically formalized using Boolean algebra, which is
also used to model the cause-effect relationship among the symptoms. Two
different sets of symptoms which correspond to the same group of patients is
an example of redescriptions. More specifically, suppose $s_{1}$, $s_{2}$ are
two symptoms, and $P_{1}$, $P_{2}$ their respective set of patients. If the
presence of symptom $s_{1}$ implies the presence of symptom $s_{2}$, then
$P_{1}\subseteq P_{2}$. If we consider the combination of the symptoms (i.e.
$s_{1}\wedge s_{2}$), then the group of the patients who experience both
symptoms is $P_{1}\cap P_{2}=P_{1}$, which is the same group of patients that
we obtain by considering only the symptom $s_{1}$. Redescriptions–the
combinations of symptoms that give rise to the same group of patients–can
reveal logical associations among symptoms. They can highlight the underlying
pathways and are commonly used to derive rules in the pathways (Mullins et al.
2006).
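The set logic above can be sketched in a few lines of Python; the patient ids and symptom names below are hypothetical toy data, not the paper's records.

```python
# Toy sketch of the redescription logic (hypothetical data): if the presence
# of symptom s1 implies s2, then P1 is a subset of P2, and the patients with
# the combined pattern {s1, s2} are exactly P1 ∩ P2.

def patients_with(pattern, records):
    """Set of patient ids whose symptoms include every symptom in `pattern`."""
    return {pid for pid, symptoms in records.items() if pattern <= symptoms}

records = {
    1: {"fever", "cough"},
    2: {"fever", "cough", "fatigue"},
    3: {"fever"},
    4: {"cough", "fatigue"},
}

p_fatigue = patients_with({"fatigue"}, records)   # {2, 4}
p_cough = patients_with({"cough"}, records)       # {1, 2, 4}
print(p_fatigue <= p_cough)   # True: fatigue implies cough in this toy data
print(p_fatigue & p_cough == patients_with({"fatigue", "cough"}, records))  # True
```

Non-trivial redescriptions are then found by searching for *different* patterns whose patient groups coincide.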
### 2.2 Topological Data Analysis
Over the past two decades, Topological Data Analysis (TDA), which arose from
algebraic topology, has found its way into real-world applications. In
(Dagliati et al. 2019), TDA is used to model disease progression by inferring
temporal phenotypes. More recently, (Wang et al. 2020) used TDA and machine
learning techniques to investigate genome mutation of SARS-COV-2. Some other
examples in biology include analysis of brain neural activities (Dabaghian et
al. 2012; Nasrin et al. 2019), and cancer genomics (Nicolau, Levine, and
Carlsson 2011; Rabadán et al. 2020). In this section, we aim to provide a
brief overview of the primary TDA concept, i.e., persistent homology. We
avoid mathematical details, which are beyond the scope of this article. For a
thorough description see (Edelsbrunner and Harer 2008; Wasserman 2018).
Let $M$ be a continuous space equipped with a metric $\delta$. The topological
invariants of $M$ are defined as the properties that do not change under
continuous deformation (i.e. twisting but not tearing). The invariants of $M$
in the lower dimensions are usually referred to as the connected components,
the holes, and the voids, in dimensions 0, 1 and 2 respectively; in higher
dimensions, they are understood as $k$-dimensional holes. The number of
$k$-dimensional holes in $M$ is called the $k$-th Betti number.
Given data points $X$ and a distance function $\delta$—where $X$ represents
points sampled from $M$—the goal is to compute the topological invariants of
the underlying structure of $X$ (i.e. space $M$). A common approach to
accomplish this is by constructing $k$-simplexes over $X$. Intuitively, one
could think of a $k$-simplex as the smallest convex hull of $k+1$ points. A
collection of $k$-simplexes glued together is called a simplicial complex
(satisfying some conditions). Since it is not feasible to begin with all
possible $k$-simplexes over the data points in $X$, the technique is to add
the simplexes in a sequence of steps. First, a parameter is selected and the
initial simplicial complex $S$ is set to be the collection of points in $X$
as the 0-simplexes; then the parameter is increased so that at each step only
a specific set of simplexes, those satisfying some conditions, can be added
to $S$. This procedure creates a filtration of simplicial complexes on $X$,
which is then analyzed. The conditions used to select a subset of simplexes
at each step give rise to different types of simplicial complexes.
Figure 1: Recovering topological properties using simplicial complexes
An example of a simplicial complex is the Čech complex. Figure 1 shows a simple
illustrative example. The goal is to recover the topological invariants of the
space in Figure 1. It is clear that the 0-Betti number is one, since there is
only one connected component, and the 1-Betti number is two, since there are
two holes; the higher-dimensional Betti numbers are zero. The dataset $X$ is
given by the six sampled points in Figure 1. To construct the Čech complex
over $X$, we begin with the points in $X$ as 0-simplexes. In order to
construct the higher dimensional simplexes, we start growing a ball at each
point, as in Figure 1; at this stage, the 0-Betti number is six and the
1-Betti number is zero. By increasing the radius, some of the balls start to
overlap each other; for each $k+1$ overlapping balls we insert a $k$-simplex. Figure 1
shows a collection of 1-simplexes, line segments joining pairs of points,
created by the pairwise overlap of their corresponding balls; this simplicial
complex clearly recovers the topological properties of the underlying
structure in Figure 1. If we increase the radius further, the three balls at
the top begin to overlap each other, so we can add a 2-simplex—a filled-in
triangle—as in Figure 1. The hole created earlier thus disappears as the
radius increases, and that topological property is lost; the same could
eventually happen to the second hole as we continue to increase the radius
and add more simplexes. It
is important to notice that the topological properties that persist for a
longer period, before they disappear, best represent the properties of the
underlying structure. This characteristic is the basic principle of the
persistent homology method, which was first formalized in (Edelsbrunner,
Letscher, and Zomorodian 2002).
A diagram known as Barcode is commonly used to keep track of the lifetime of
topological properties. In the barcode, each topological property is
represented as a horizontal line segment. The line segments span the interval
over which the corresponding topological properties exist, along the parameter axis
(i.e. radius). We use the barcode in Section 6.1 (see Figure 2).
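For the simplest case, dimension 0, the barcode can be computed directly: sort the pairwise edges by length and merge components with union-find, recording each merge as the death of a bar. The following is a minimal sketch of this idea (my own illustration, not the implementation used in the paper).

```python
# Minimal dimension-0 persistence sketch: every point is a component born at
# parameter 0; processing edges in order of increasing length, each union-find
# merge kills one component, and the edge length at the merge is the bar's
# death time. One component survives forever.
import math

def barcode_dim0(points):
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(len(points)) for j in range(i + 1, len(points))
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # one bar [0, d) ends at this merge
    return [(0.0, d) for d in deaths] + [(0.0, math.inf)]

bars = barcode_dim0([(0, 0), (1, 0), (0, 1), (5, 5)])
print(bars)
```

For these four points, two short bars die at edge length 1, a third dies at $\sqrt{41}$ when the distant point joins the cluster, and one bar persists indefinitely; the long-lived bar reflects the single connected component of the underlying structure.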
## 3 Proposed Pipeline
In this section, we introduce our pipeline. The first step is to extract
structured data—the set of symptoms and their corresponding patients—from the
unstructured clinical records. The next step is to define the feature space, the
sampling strategy, and the metric to measure the similarity between the data
points. Finally, the topological properties are extracted and visualized.
Concept extraction: We carry out the concept extraction in three steps: 1) We
parse the clinical notes and map the biological terms to the concepts in a
medical ontology. 2) Since clinical notes use informal language, parsing them
can be noisy; thus we ask the user to curate the candidate
relations and resolve inconsistencies. 3) We use the health records to
construct an association matrix between the patients and the extracted
concepts.
Natural language processing techniques are widely used to analyze biomedical
documents (Demner-Fushman, Chapman, and McDonald 2009; Koleck et al. 2019).
Despite the significant advances in neural text processing over the last
decade, we found that the existing methods are not adequate to effectively
parse the medical records. Thus, to reduce the noise and ensure that the
extracted terms are indeed valid medical concepts we use manual supervision to
validate the automatic process.
Feature space construction: In our model, features correspond to the patients,
and the data points correspond to combinations of concepts–we call them
patterns. Given a feature vector–i.e., a data point–a feature is set to 1 if
the corresponding patient shows all the symptoms associated with the data
point. Thus, a data point is understood as a cluster of patients, who share
the same set of symptoms–i.e., pattern.
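This construction can be sketched on hypothetical records (the symptom names are illustrative, not the paper's ICD-10CM classes): each pattern of symptoms yields a 0/1 feature vector over the patients.

```python
# Sketch of the feature-space step: features are patients, a data point is a
# symptom combination ("pattern"), and its feature vector has a 1 for every
# patient who shows all symptoms in the pattern. Toy data, for illustration.
from itertools import combinations

records = {
    "p1": {"fever", "cough"},
    "p2": {"fever", "cough", "fatigue"},
    "p3": {"fever", "fatigue"},
}
symptoms = sorted({s for ss in records.values() for s in ss})
patients = sorted(records)

def feature_vector(pattern):
    return [int(set(pattern) <= records[p]) for p in patients]

# enumerate all 2-symptom patterns and their patient clusters
for pattern in combinations(symptoms, 2):
    print(pattern, feature_vector(pattern))
```

Each printed vector identifies the cluster of patients sharing that pattern; identical vectors for different patterns would indicate a candidate redescription.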
As mentioned in Section 2.1, pathways can be inferred by identifying the
redescriptions—i.e., patterns that have the same group of patients. However,
in order to make inference about the underlying pathways, it is important to
analyze the patterns whose clusters are statistically significant.
Since tests such as the binomial test are not successful in separating
higher-order correlations, which can distinguish groups of patients that
identify disease processes, from the impact of pairwise correlations, we use cumulant
correlation expansions. In quantum field theory, they emerge as connected
Feynman diagrams (1-particle irreducible). These are multivariate moments
related to cumulants appearing in statistics. In that context, their
generating functions factor according to partitions of sets.
Let $G_{\bullet}$ represent the moments in moment generating functions
$\mathbb{E}\left[\exp\left(\sum_{i}x_{i}J_{i}\right)\right]$ where the
$J_{i}$’s are the conjugate variables, and $\Gamma_{\bullet}$ represent the
higher dimensional cumulants, e.g. for the symptoms $x_{i},x_{j},x_{k}$ then
$G_{ij}=E(x_{i}x_{j})$ and $G_{ijkk}=E(x_{i}x_{j}x_{k}^{2})$, and
$\Gamma_{ij}$ and $\Gamma_{ijkk}$ are the corresponding cumulants. The
factorizations are as follows.
$\begin{split}\mathbb{E}\left[\exp\left(\sum_{i}x_{i}J_{i}\right)\right]=A+\sum_{i}J_{i}G_{i}+\frac{1}{2!}\sum_{ii^{\prime}}J_{i}J_{i^{\prime}}G_{ii^{\prime}}+\\
\frac{1}{3!}\sum_{ii^{\prime}i^{\prime\prime}}J_{i}J_{i^{\prime}}J_{i^{\prime\prime}}G_{ii^{\prime}i^{\prime\prime}}+\frac{1}{4!}\sum_{ii^{\prime}i^{\prime\prime}i^{\prime\prime\prime}}J_{i}J_{i^{\prime}}J_{i^{\prime\prime}}J_{i^{\prime\prime\prime}}G_{ii^{\prime}i^{\prime\prime}i^{\prime\prime\prime}}+\cdots\\
=\exp\left(\sum_{i}J_{i}\Gamma_{i}+\frac{1}{2!}\sum_{ii^{\prime}}J_{i}J_{i^{\prime}}\Gamma_{ii^{\prime}}+\frac{1}{3!}\sum_{ii^{\prime}i^{\prime\prime}}J_{i}J_{i^{\prime}}J_{i^{\prime\prime}}\Gamma_{ii^{\prime}i^{\prime\prime}}+\right.\\
\left.\frac{1}{4!}\sum_{ii^{\prime}i^{\prime\prime}i^{\prime\prime\prime}}J_{i}J_{i^{\prime}}J_{i^{\prime\prime}}J_{i^{\prime\prime\prime}}\Gamma_{ii^{\prime}i^{\prime\prime}i^{\prime\prime\prime}}+\cdots\right),\end{split}$
where $A$ is nominally 1, seen by setting the $J_{i}=0$.
The power series in the $J_{i}$’s then require
$\begin{aligned}
G_{k}&=\Gamma_{k}\\
G_{kk^{\prime}}&=\Gamma_{kk^{\prime}}+\Gamma_{k}\Gamma_{k^{\prime}}\\
G_{kk^{\prime}k^{\prime\prime}}&=\Gamma_{kk^{\prime}k^{\prime\prime}}+\Gamma_{k}\Gamma_{k^{\prime}k^{\prime\prime}}+\Gamma_{k^{\prime}}\Gamma_{k^{\prime\prime}k}+\Gamma_{k^{\prime\prime}}\Gamma_{kk^{\prime}}+\Gamma_{k}\Gamma_{k^{\prime}}\Gamma_{k^{\prime\prime}}\\
G_{kk^{\prime}k^{\prime\prime}k^{\prime\prime\prime}}&=\Gamma_{kk^{\prime}k^{\prime\prime}k^{\prime\prime\prime}}+\Gamma_{k}\Gamma_{k^{\prime}k^{\prime\prime}k^{\prime\prime\prime}}+\Gamma_{k^{\prime}}\Gamma_{k^{\prime\prime}k^{\prime\prime\prime}k}+\Gamma_{k^{\prime\prime}}\Gamma_{k^{\prime\prime\prime}kk^{\prime}}+\Gamma_{k^{\prime\prime\prime}}\Gamma_{kk^{\prime}k^{\prime\prime}}\\
&\quad+\Gamma_{k^{\prime\prime\prime}k^{\prime}}\Gamma_{k^{\prime\prime}k}+\Gamma_{k^{\prime}k^{\prime\prime}}\Gamma_{k^{\prime\prime\prime}k}+\Gamma_{k^{\prime\prime\prime}k^{\prime\prime}}\Gamma_{kk^{\prime}}+2\Gamma_{k}\Gamma_{k^{\prime}}\Gamma_{k^{\prime\prime}k^{\prime\prime\prime}}\\
&\quad+2\Gamma_{k}\Gamma_{k^{\prime\prime\prime}}\Gamma_{k^{\prime}k^{\prime\prime}}+2\Gamma_{k}\Gamma_{k^{\prime\prime}}\Gamma_{k^{\prime}k^{\prime\prime\prime}}+2\Gamma_{k^{\prime}}\Gamma_{k^{\prime\prime\prime}}\Gamma_{kk^{\prime\prime}}\\
&\quad+2\Gamma_{k^{\prime}}\Gamma_{k^{\prime\prime}}\Gamma_{kk^{\prime\prime\prime}}+2\Gamma_{k^{\prime\prime\prime}}\Gamma_{k^{\prime\prime}}\Gamma_{kk^{\prime}}+\Gamma_{k}\Gamma_{k^{\prime}}\Gamma_{k^{\prime\prime}}\Gamma_{k^{\prime\prime\prime}}
\end{aligned}$
We apply the above factorization to the clusters, and shuffle the symptoms to
test significance by constructing variances and null hypotheses.
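The inversion implied by these factorizations can be sketched for binary symptom indicators: solving the relations above gives $\Gamma_{ij}=G_{ij}-G_iG_j$ and the corresponding third-order expression. The 0/1 rows below are made-up data, and only the second- and third-order relations are implemented.

```python
# Sketch of the moment-to-cumulant inversion for binary symptom indicators
# (toy data, not the paper's records). moment(...) estimates E[x_i x_j ...]
# over rows of 0/1 data; the cumulants follow the factorizations in the text.
def moment(data, idx):
    return sum(all(row[i] for i in idx) for row in data) / len(data)

def cumulant2(data, i, j):
    # Gamma_ij = G_ij - G_i * G_j
    return moment(data, (i, j)) - moment(data, (i,)) * moment(data, (j,))

def cumulant3(data, i, j, k):
    # Solve G_ijk = Gamma_ijk + Gamma_i*Gamma_jk + Gamma_j*Gamma_ik
    #             + Gamma_k*Gamma_ij + Gamma_i*Gamma_j*Gamma_k  for Gamma_ijk.
    g1 = {a: moment(data, (a,)) for a in (i, j, k)}
    c2 = {(a, b): cumulant2(data, a, b) for a, b in ((i, j), (j, k), (i, k))}
    return (moment(data, (i, j, k))
            - g1[i] * c2[(j, k)] - g1[j] * c2[(i, k)] - g1[k] * c2[(i, j)]
            - g1[i] * g1[j] * g1[k])

data = [(1, 1, 1), (1, 1, 0), (0, 1, 1), (0, 0, 0)]
print(cumulant2(data, 0, 1), cumulant3(data, 0, 1, 2))
```

A shuffle test of the kind mentioned above would recompute these cumulants after permuting each symptom column independently, building a null distribution against which the observed values are compared.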
To search for the redescriptions, we need to investigate the cause–effect
relationships among the selected patterns. However, often due to the
misclassification of patients, e.g., caused by a wrong diagnosis, the set
inclusion property does not hold exactly in the data. Therefore, equality of
sets should be estimated. This estimation can be done with the Jaccard distance,
which measures the dissimilarity between sets. For the two sets $A$ and $B$,
Jaccard distance is defined by,
$d(A,B)=1-\frac{|A\cap B|}{|A\cup B|}.$
For the example in Section 2.1, when $P_{1}\subseteq P_{2}$, then the Jaccard
distance $d(P_{1}\cap P_{2},P_{1})=0$, otherwise if $P_{1}\not\subseteq P_{2}$
then $0<d(P_{1}\cap P_{2},P_{1})\leq 1$, which can be interpreted as the
probability that subjects picked from the two sets are not shared.
Hence, we use the Jaccard distance to measure the distances between the
sampled data points.
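The test for approximate redescriptions can be sketched as follows; the patient-id sets are toy data, not the paper's records.

```python
# Jaccard-distance sketch for approximate redescriptions (toy sets): exact
# redescriptions give distance 0; misclassified patients push it above 0.
def jaccard(a, b):
    return 1 - len(a & b) / len(a | b) if a | b else 0.0

p1 = {1, 2, 3, 4}
p2 = {1, 2, 3, 4, 5, 6}
print(jaccard(p1 & p2, p1))          # 0.0: here P1 is a subset of P2
print(round(jaccard(p1, p2), 3))     # 0.333: P1 and P2 are not a redescription
```

In practice, pairs of patterns whose patient sets fall below a small distance threshold are treated as candidate redescriptions.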
Topological analysis and visualization: To explore the structure of the space
created in the previous step, Vietoris–Rips (VR) complexes are employed to
construct the filtration. The VR complex is an abstract simplicial complex
with 0-simplexes as the data points, and $k$-simplexes are created for any
$k+1$ points whose pairwise distances are at most $2r$, for a fixed $r$.
The initial simplicial complex is a collection of 0-simplexes which correspond
to the sampled data points–i.e., the clusters of patients selected from the
previous step–and Jaccard distance is used as the filtration parameter to
construct the VR complexes. Finally, the barcode is generated and
representative cycles of the bars are retrieved for further analysis.
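This construction can be sketched as follows (illustrative only, not the paper's Dionysus-based code): given patient clusters, a $k$-simplex is added once all pairwise Jaccard distances among its $k+1$ vertices fall below the threshold.

```python
# Sketch of a Vietoris-Rips complex on patient clusters, with Jaccard distance
# as the filtration parameter (toy clusters, for illustration).
from itertools import combinations

def jaccard(a, b):
    return 1 - len(a & b) / len(a | b)

def rips(clusters, threshold, max_dim=2):
    n = len(clusters)
    d = {(i, j): jaccard(clusters[i], clusters[j])
         for i, j in combinations(range(n), 2)}
    simplexes = [(i,) for i in range(n)]       # 0-simplexes: the data points
    for k in range(2, max_dim + 2):            # k+1 vertices -> k-simplex
        for verts in combinations(range(n), k):
            if all(d[p] <= threshold for p in combinations(verts, 2)):
                simplexes.append(verts)
    return simplexes

clusters = [{1, 2, 3}, {1, 2, 4}, {1, 2}, {7, 8}]
print(rips(clusters, 0.5))
```

Here the three overlapping clusters form a filled triangle at threshold 0.5, while the disjoint cluster stays an isolated vertex; sweeping the threshold from 0 upward yields the filtration whose barcode is analyzed in Section 6.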
## 4 Implementation Details
To parse the clinical notes and extract the biomedical terms we used Amazon
Comprehend Medical (ACM)111https://aws.amazon.com/comprehend/medical/, an
online proprietary NLP programming interface to analyze the unstructured
clinical notes. For technical details regarding ACM see (Jin et al. 2018;
Bhatia, Busra Celikkaya, and Khalilia 2020). We also used the International
Classification of Diseases
(ICD-10CM)222https://www.cdc.gov/nchs/icd/icd10cm.htm to select the concepts,
which are mapped by ACM to the extracted terms. ICD is a medical ontology,
published by the World Health Organization to classify diseases, symptoms, and
other medical conditions.
In the TDA step, we used Dionysus333https://mrzv.org/software/dionysus2/
package for the construction of simplicial complexes and visualization. We
also incorporated the Cyclonysus444https://github.com/sauln/cyclonysus
implementation to retrieve the representative cycles of the 1-dimensional
topological properties.
## 5 Experimental Details
We begin this section by describing the dataset, then we discuss the steps of
the experiment.
### 5.1 Dataset
We used the dataset introduced in (Xu et al. 2020)555
https://github.com/beoutbreakprepared/nCoV2019/tree/master/latest_data. The
dataset is continually updated with the available records of confirmed
COVID-19 patients. We used the version published on June 8, 2020. Among the
available records in the data set, we retained all records whose
“symptom” field was non-empty; this amounted to 1,545 patients. This field,
which is a textual feature, is a clinical note describing the patient’s
medical state.
### 5.2 Experimental Setup
ACM associates a list of ICD-10CM codes to each extracted medical condition,
ordered by their confidence scores; we retained the code with the highest
confidence score. We only considered medical conditions that at least 0.3
percent of patients experienced. If the ICD-10CM codes associated with a medical
condition were at the same level of the hierarchical ontology and ACM
assigned high confidence scores to all of them, we considered them as one
class. An example is $R53.=\{R53.1:\textit{Weakness},\
R53.81:\textit{Malaise},\ R53.83:\textit{Other fatigue}\}$. We retained the
data corresponding to thirty-one ICD-10CM codes. Based on the data, Fever, Cough,
and Fatigue are the most common symptoms among the COVID-19 patients. Table 1
presents the list of selected classes and their number of patients, and Table
2 provides the number of patients who experienced $k$ medical conditions.
Description | ICD-10CM | $\sharp$ | $\%$ | Description | ICD-10CM | $\sharp$ | $\%$
---|---|---|---|---|---|---|---
Acute myocardial infarction | I21.9 | 5 | 0.3 | Chest pain | R07. | 24 | 1.6
Pulmonary heart disease | I27. | 6 | 0.4 | Abnormal sputum | R09.3 | 43 | 2.8
Cardiac arrhythmia | I49.9 | 5 | 0.3 | Nasal congestion | R09.81 | 11 | 0.7
Heart failure | I50.9 | 9 | 0.6 | Abdominal pain | R10.9 | 6 | 0.4
Acute pharyngitis | J02.9 | 136 | 8.8 | Nausea | R11. | 29 | 1.9
Pneumonia | J18. | 151 | 9.7 | Diarrhea | R19. | 28 | 1.8
Nasal sinuses | J34.89 | 65 | 4.2 | Dizziness | R42 | 6 | 0.4
Respiratory failure | J96. | 64 | 4.1 | Fever | R50.9 | 1073 | 69.4
Pain in joint | M25.50 | 23 | 1.5 | Headache | R51 | 76 | 5
Muscle spasm | M62.838 | 24 | 1.6 | Unspecified pain | R52 | 24 | 1.6
Myalgia | M79.10 | 70 | 4.5 | Fatigue | R53. | 177 | 11.5
Disorders of bone | M89.8X9 | 10 | 0.6 | Anorexia | R63.0 | 8 | 0.5
Kidney failure | N17.9 | 9 | 0.6 | Sepsis | R65.21 | 17 | 1.1
Cough | R05 | 594 | 38.4 | Chills | R68.83 | 41 | 2.7
Abnormalities of breathing | R06. | 138 | 9 | Dry mouth | R68.2 | 6 | 0.4
Sneezing | R06.7 | 17 | 1.1 | | | |
Table 1: Thirty-one ICD-10CM concepts with the number of patients in each class and their respective percentage of total.

$k$ | $\sharp$
---|---
1 | 651
2 | 431
3 | 286
4 | 115
5 | 45
6 | 11
7 | 5
8 | 1

Table 2: Number of patients with $k$ symptoms.
In the second step of the pipeline, we selected 632 data points with patterns
corresponding to subsets of the thirty-one ICD-10CM codes. To construct the
VR filtration, we set the threshold of the filtration parameter to $0.5$.
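As a minimal illustration of this step, the sketch below builds the 1-skeleton of a Vietoris-Rips filtration at threshold $0.5$ from toy patient sets; the sets and patient IDs are invented for illustration, while the real pipeline operates on the 632 selected patterns.

```python
from itertools import combinations

def jaccard_distance(a, b):
    """Jaccard distance between two sets of patient IDs: the fraction
    of patients in the union that are NOT in the intersection."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

# Toy stand-ins for the patient sets behind three ICD-10CM patterns.
patterns = {
    "R05":   {1, 2, 3, 4, 5, 6},   # Cough
    "R50.9": {3, 4, 5, 6, 7, 8},   # Fever
    "R09.3": {5, 6, 9},            # Abnormal sputum
}

# Edges of the VR complex at filtration threshold 0.5: a 1-simplex
# joins two patterns iff their Jaccard distance is at most 0.5.
THRESH = 0.5
edges = [
    (p, q, jaccard_distance(patterns[p], patterns[q]))
    for p, q in combinations(patterns, 2)
    if jaccard_distance(patterns[p], patterns[q]) <= THRESH
]
# Only the Cough-Fever edge survives for these toy sets.
```

Persistent homology of the resulting filtration would then be computed with a standard library; the distance matrix and threshold above are the only inputs such a tool needs.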
## 6 Results
In this section, we report the main result and discuss its significance.
### 6.1 Main Result
We obtained topological properties of dimensions 0 and 1; there were no
topological properties of higher dimension. We report one striking
1-dimensional property.
As mentioned in Section 3, we used the Jaccard distance. Therefore, for any two
data points, the lower the distance, the more similar their sets of patients
are. It follows that the topological properties whose 1-simplexes
correspond to low distances are of interest.
Figure 2 shows the barcode of the 1-dimensional topological properties, whose
lifetime is within the interval (0, 0.5). The horizontal axis corresponds to
the parameter of the filtration—Jaccard distance—and the vertical axis
corresponds to the number of properties. In light of the previous paragraph,
what stands out in the diagram is the first bar, annotated by the circled
line, which spans $0.23$ to $0.34$.
Figure 2: Barcode of 1-dimensional topological properties.
Figure 3: Representative cycle of the annotated bar.
Since the 1-dimensional topological properties are understood as the holes
made up of points and 1-simplexes, a cycle generating the annotated bar is
shown in Figure 3. Data points are illustrated by their associated combination
of ICD-10CM codes along with the number of patients who experienced them, and
the 1-simplexes joining the data points are labeled by the Jaccard distance
between the respective sets of patients. Therefore, as an example, the label
$(R05\cap R09.3):33$ means that there are 33 patients who experienced both
Cough and Abnormal sputum. The low values of the Jaccard distances imply
stronger associations among the respective clusters, which are important for
identifying redescriptions. In particular, this cycle suggests that, among the
subjects in $R09.3$ (Abnormal sputum), there is no particular interaction
between membership in $R05$ (Cough) and membership in $R50.9$ (Fever). This
raises the question of whether a distinctive signature reveals alternative
pathways to disease among non-coughers compared to coughers.
### 6.2 Discussion
To interpret the relationships between the symptoms in Figure 3, we rely on
the Jaccard distance. The equivalence of sets of subjects matching different
patterns produces logical constraints determined by biological processes, so
such equivalences may yield information about multigenic complex diseases
marked by multiple pathways leading to disease. However, phenotype definitions
are prone to misclassification for a number of reasons. Therefore, equivalence
may be meaningfully characterized by the chance that a subject in one of two
phenotype clusters is not in both of them, which is the Jaccard distance
described above.
In the case of Figure 3, there are two paths leading from $R09.3$ (abnormal
sputum) to $R05\cap R09.3\cap R50.9$, one passing through $R09.3\cap R50.9$
and the other through $R05\cap R09.3$, where $R05$ is cough, and $R50.9$ is
fever. In both pathways, the distance between sputum and cough is larger than
that between sputum and fever, so coughing is not as strongly associated with
abnormal sputum production as fever is. Moreover, the relationship between
sputum and fever is independent of coughing, since the cycle appears to be a
parallelogram: coughing is independent of fever among sputum-productive
subjects. This suggests the two paths are independent predictors of severe
disease.
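The parallelogram reading of Figure 3 can be phrased as a simple independence check: if coughing and fever are independent among sputum-productive subjects, the observed triple-overlap frequency should match the product of the conditional frequencies. In the sketch below, only $|R09.3|=43$ (Table 1) and $|R05\cap R09.3|=33$ (the Figure 3 label) come from the paper; the fever-related counts are invented for illustration.

```python
def independence_ratio(n_both, n_base, n_a, n_b):
    """If symptoms A and B occur independently within a base cohort of
    size n_base, then P(A and B) ~ P(A) * P(B), i.e.
    n_both / n_base ~ (n_a / n_base) * (n_b / n_base).
    Returns observed / expected; values near 1 support independence."""
    observed = n_both / n_base
    expected = (n_a / n_base) * (n_b / n_base)
    return observed / expected

n_sputum = 43          # |R09.3| from Table 1
n_cough_sputum = 33    # |R05 ∩ R09.3| from the Figure 3 label
n_fever_sputum = 36    # hypothetical |R50.9 ∩ R09.3|
n_all_three = 28       # hypothetical |R05 ∩ R09.3 ∩ R50.9|

r = independence_ratio(n_all_three, n_sputum, n_cough_sputum, n_fever_sputum)
# r close to 1 would support the claim that coughing and fever are
# independent among sputum-productive subjects.
```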
## 7 Conclusions and Future Work
In this study we investigated the application of topological properties in
extracting the candidate COVID-19 pathways from the clinical notes. We also
proposed a pipeline to pre-process the data, extract the salient concepts,
construct a feature space, and visualize the results. We evaluated our
pipeline on a set of 1,545 patients and showed that it can extract meaningful
associations between symptoms, and reveal intriguing candidate pathways.
One limitation of our study is the reliance on human validation. As
mentioned in Section 3, the available text-processing tools were not able to
effectively parse and extract the relevant medical concepts. To resolve this
shortcoming, we plan to exploit the structured data accompanying the medical
records to cluster the patients and automatically filter out improbable
associations.
# Implementing Asymmetric Dark Matter and Dark Electroweak Baryogenesis in a
Mirror Two-Higgs-Doublet Model
Alexander C. Ritter<EMAIL_ADDRESS>(corresponding author)
Raymond R. Volkas<EMAIL_ADDRESS>ARC Centre of Excellence for Dark
Matter Particle Physics, School of Physics,
The University of Melbourne, Victoria 3010, Australia
###### Abstract
Models of asymmetric dark matter (ADM) seek to explain the apparent
coincidence between the present-day mass densities of visible and dark matter,
$\Omega_{\mathrm{DM}}\simeq 5\Omega_{\mathrm{VM}}$. However, most ADM models
only relate the number densities of visible and dark matter without motivating
the similar particle masses. We expand upon a recent work that obtained a
natural mass relationship in a mirror matter ADM model with two Higgs doublets
in each sector, by looking to implement dark electroweak baryogenesis as the
means of asymmetry generation. We explore two aspects of the mechanism: the
nature of the dark electroweak phase transition, and the transfer of particle
asymmetries between the sectors by the use of portal interactions. We find
that both aspects can be implemented successfully for various regions of the
parameter space. We also analyse one portal interaction – the neutron portal –
in greater detail, in order to satisfy the observational constraints on dark
radiation.
## I Introduction
Determining the particle nature of dark matter (DM) remains one of the most
important problems in fundamental physics. While there are some important
constraints on its nature – for example, it cannot be hot DM because large-
scale structure formation then yields incorrect results – DM is famous, or
notorious, for being anything from “fuzzy” scalars at the $10^{-22}$ eV mass
scale [1], to several solar-mass primordial black holes [2], with many
different kinds of possibilities at intermediate mass scales [3]. It therefore
makes sense to carefully examine what we _do_ know observationally about DM,
because there may be clues already lurking in the data about what its
fundamental nature is.
One fact that may be important is the apparent coincidence in the present-day
cosmological mass densities of visible and dark matter, which obey
$\Omega_{\textrm{DM}}\simeq 5\,\Omega_{\textrm{VM}},$ (1)
where $\Omega_{X}$ is the mass density of $X$ divided by the critical density
[4]. Cosmologically one would expect different relic species to have very
different mass and/or number densities unless there are fundamental reasons
for it to be otherwise. For example, the equality of the number densities of
protons and electrons is a consequence of the basic requirement that the
universe be electrically neutral. It is thus worth exploring the hypothesis that
Eq. (1) is the result of a deep connection between visible and dark matter
rather than being a true coincidence.
For most DM candidates, the physics determining the relic density – for
example, the freeze out process for a thermal relic – has no connection with
the physics driving the mass density of visible matter: baryogenesis, which
sets the proton number density, and the confinement scale of quantum
chromodynamics (QCD), which sets the proton mass.
Asymmetric DM (ADM) is an exception to this general rule, since the relic
number density of DM particles is then determined by an asymmetry in the dark
sector that is chemically related to the baryon asymmetry. Asymmetric DM is a
paradigm, and many different models have been proposed (for reviews, see [5,
6, 7]). The vast majority of these proposals provide specific dynamics to
relate the number density asymmetries, but are silent on why the DM mass seems
apparently to be related to the proton mass. Yet without such a connection,
ADM models fail to explain the cosmological coincidence. Instead the factor of
five in Eq. (1) is used to “predict” the DM mass within schemes that only
relate the number densities. Clearly, this is unsatisfactory. The purpose of
this paper is to continue analysing ways to connect the DM and proton masses
within an ADM model. Note that it is not our goal to explain exactly why the
approximate ratio is the specific value of five. Rather, our goal is to
construct a theory where a ratio of order one is relatively generic, and the
precise value can be fitted by choosing parameters appropriately.
We are faced with the task of explaining why the DM mass should have anything
to do with the confinement scale of QCD. The most obvious idea is that DM is a
baryon-like bound state of an interaction in the dark sector that resembles
QCD. Several such schemes have been proposed and analysed in the literature,
though only a few of them have the DM-proton mass connection as a motivation
[8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. Indeed, with
an arbitrary confining gauge force in the dark sector and a general particle
content, there is no reason for the confinement scale to be near that of
visible QCD. Two exceptions to this have been proposed: (i) the dark gauge
group mirrors QCD by being SU(3), and the two confinement scales are related
through either an exact or somewhat broken symmetry that connects the two
SU(3) sectors, and (ii) the particle content is chosen so that the two running
coupling constants approach infrared fixed points whose magnitudes are similar
[22, 23]. Both ideas have merit, and in this paper we consider option (i).
_A priori_ , the two SU(3) sectors could be related by either a continuous or
a discrete symmetry. But it is difficult to make the former work, because of
the necessary appearance of multiplets that transform under both the dark
SU(3) and the usual standard model (SM) gauge group $G_{\textrm{SM}}$. We
therefore focus on the discrete symmetry possibility, specifically the
simplest case of $Z_{2}$. We seek a theory where the DM is a baryonic bound
state of “dark quarks” which are triplets under dark SU(3) and singlets under
$G_{\textrm{SM}}$. Because the usual quarks are in the
$(\mathbf{3},\mathbf{2},\frac{1}{6})$, $(\mathbf{3},\mathbf{1},\frac{2}{3})$
and $(\mathbf{3},\mathbf{1},-\frac{1}{3})$ representations of
$G_{\textrm{SM}}$, the only way their $Z_{2}$ partners can be singlets under
SM forces is if we duplicate the electroweak sector as well. We are evidently
driven to a mirror-matter type of gauge group,
$G\times G^{\prime}$ (2)
where $G^{\prime}$ is isomorphic to $G$ with prime denoting the dark sector.
In this paper we continue in the vein of Ref. [24] and consider the simplest
case where $G$ is just the SM gauge group SU(3)$\times$SU(2)$\times$U(1),
which is exactly the mirror-symmetric extension of the SM. The $Z_{2}$
symmetry interchanges the visible and dark sectors and enforces equal QCD and
dark-QCD coupling constants when it is exact. To relate the DM mass to the
proton mass we need some kind of a connection between the two QCD coupling
constants. This connection may be the strict equality of the coupling
constants and hence also the confinement scales, or some $Z_{2}$ breaking can
be introduced so as to remove the exact equality but retain a relationship.
The unbroken case has been extensively studied – see, for example, Refs. [25,
26, 27, 28, 29, 30, 10, 31, 32, 33, 34, 35, 36, 37, 38, 11, 39, 40, 12, 41,
42, 43]. We choose to follow Ref. [24] and explore a spontaneously broken
$Z_{2}$ scenario (see also Refs. [44, 45, 46, 14, 47, 48, 49, 18, 19, 20,
21]). Part of the motivation for that is to permit the DM candidate to be a
single dark-neutron-like particle, rather than having to deal with the
complicated (though very interesting) situation of exact mirror-DM.
Reference [24] was based on the process of “asymmetric symmetry breaking
(ASB)”. This is a spontaneous symmetry breaking scheme that permits the two
sectors to break quite differently despite the $Z_{2}$ symmetry of the
Lagrangian. It is distinct from the idea of introducing a $Z_{2}$-odd scalar
whose vacuum expectation value (VEV) breaks the mirror symmetry, in that in
general it affords more flexibility in the symmetry-breaking outcome. (The
ASB mechanism really comes into its own when one wants to break $G$ and
$G^{\prime}$ to _different subgroups_; that will not be the case in the model
analysed in this paper. However, the mirror-symmetric SM, rather than a
mirror-symmetric theory with an extended gauge group such as a grand unified
theory, permits us to focus on the DM physics rather than being distracted by
the many unrelated issues that arise from SM gauge-group extensions.
Ultimately, the proper context for ASB may well be a grand unified theory, as
discussed in the original papers [50, 51]. The mirror-symmetric SM analysed
here could then be the low-energy effective theory of a more ambitious model.)
Reference [24] analysed a mirror-symmetric model with two Higgs doublets in
each sector. The ASB process was then employed to ensure that the doublets,
one from each sector, that gain the dominant VEVs are _not_ $Z_{2}$ partners.
This allows the dark-fermion masses and mass ratios to be completely different
from the usual quark and lepton masses and ratios. Reference [24] described an
attempt at a full theory that saw visible and dark baryogenesis occur through
the familiar sphaleron-reprocessed type I seesaw leptogenesis dynamics driven
by the out-of-equilibrium decays of heavy neutral leptons.
The purpose of the present paper is:
* •
To construct an alternative version of the theory where asymmetry generation
occurs through dark electroweak baryogenesis, which is another reasonable
mechanism that is worth exploring. We show that there is sufficient freedom to
arrange for the dark electroweak phase transition to be strongly first order,
as required for this mechanism. Such a phenomenon may give rise to
gravitational waves that are detectable through future space-based
interferometers [52].
* •
To analyse some minimal possibilities for how the dark asymmetry may be
reprocessed into a visible asymmetry through various higher-dimension portal
interactions (for previous applications of asymmetry transfer through portal
interactions in ADM models, see [38, 53, 54]).
* •
To continue to analyse the quite difficult problem of how observational
constraints on dark radiation may be obeyed in such a theory. One of our goals
here is to present a clear account of the challenges in achieving this aim
without introducing fine-tuning that is as bad or worse than the cosmological
coincidence puzzle of Eq. 1.
The remainder of this paper is structured as follows: In Section II we outline
the model and provide some theoretical and experimental constraints on the
parameters of the theory. These will help guide our search in Section III,
where we analyse the dynamics of the dark electroweak phase transition and
identify areas of parameter space for which the transition is strongly first-
order. Such a transition is necessary to allow for the generation of a dark
baryon asymmetry through electroweak baryogenesis. In Section IV we consider
the partial reprocessing of this asymmetry into a visible baryon asymmetry
through a number of effective operator portal interactions. In Section V we
then analyse one of these possibilities – the “neutron portal” – in more
detail, as it can also play a role in avoiding strong observational bounds on
additional dark radiation. This introduces a number of difficulties, which we
clearly outline, before providing some concluding remarks in Section VI.
## II The Model and Constraints
As this work builds off the mirror two Higgs doublet model of Ref. [24], we do
not provide a fully detailed description of the theory in this section.
Rather, we summarise the salient details of the model so that the contents of
this paper can be understood in isolation, as well as highlighting the points
where we differ. We also provide some more specific restraints on the
parameters of the model, and in particular on the couplings and mass terms in
the scalar potential. These are especially relevant for Section III, where we
determine whether the model can accommodate a strong first-order dark
electroweak phase transition; the restrictions we discuss here will help guide
our search through the large parameter space of the scalar sector.
The gauge group is
SU(3)$\times$SU(2)$\times$U(1)$\times$SU(3)${}^{\prime}\times$SU(2)${}^{\prime}\times$U(1)′,
where the mirror (dark) sector is a duplicated version of the standard model.
The dark gauge groups and particles are indicated by primes. The dark particle
content is a copy of the visible particle content, as required by a discrete
$Z_{2}$ parity symmetry that exchanges SM particles with their dark
counterparts. The particle transformation properties are given by
$\phi\leftrightarrow\phi^{\prime},\quad G^{\mu}\leftrightarrow
G_{\mu}^{\prime},\quad f_{L}\leftrightarrow f_{R}^{\prime},\quad
f_{R}\leftrightarrow f_{L}^{\prime},$ (3)
where $\phi$, $G^{\mu}$, and $f$ are scalar, gauge, and fermion fields
respectively. Note that as a parity symmetry, the $Z_{2}$ exchanges left-
handed and right-handed particles. (The chirality flip feature is an aesthetic
choice, and is not essential.)
$q_{iL}$ | $(\mathbf{3},\mathbf{2},-\frac{1}{6})(\mathbf{1},\mathbf{1},0)$ | $q_{iR}^{\prime}$ | $(\mathbf{1},\mathbf{1},0)(\mathbf{3},\mathbf{2},-\frac{1}{6})$
---|---|---|---
$u_{iR}$ | $(\mathbf{3},\mathbf{1},\frac{2}{3})(\mathbf{1},\mathbf{1},0)$ | $u_{iL}^{\prime}$ | $(\mathbf{1},\mathbf{1},0)(\mathbf{3},\mathbf{1},\frac{2}{3})$
$d_{iR}$ | $(\mathbf{3},\mathbf{1},-\frac{1}{3})(\mathbf{1},\mathbf{1},0)$ | $d_{iL}^{\prime}$ | $(\mathbf{1},\mathbf{1},0)(\mathbf{3},\mathbf{1},-\frac{1}{3})$
$l_{iL}$ | $(\mathbf{1},\mathbf{2},-\frac{1}{2})(\mathbf{1},\mathbf{1},0)$ | $l_{iR}^{\prime}$ | $(\mathbf{1},\mathbf{1},0)(\mathbf{1},\mathbf{2},-\frac{1}{2})$
$e_{iR}$ | $(\mathbf{1},\mathbf{1},-1)(\mathbf{1},\mathbf{1},0)$ | $e_{iL}^{\prime}$ | $(\mathbf{1},\mathbf{1},0)(\mathbf{1},\mathbf{1},-1)$
$\Phi_{1}$ | $(\mathbf{1},\mathbf{2},0)(\mathbf{1},\mathbf{1},0)$ | $\Phi_{1}^{\prime}$ | $(\mathbf{1},\mathbf{1},0)(\mathbf{1},\mathbf{2},0)$
$\Phi_{2}$ | $(\mathbf{1},\mathbf{2},0)(\mathbf{1},\mathbf{1},0)$ | $\Phi_{2}^{\prime}$ | $(\mathbf{1},\mathbf{1},0)(\mathbf{1},\mathbf{2},0)$
Table 1: The particle content and their representations under the mirror-symmetric gauge group
(SU(3)$\times$SU(2)$\times$U(1))$\times$(SU(3)${}^{\prime}\times$SU(2)${}^{\prime}\times$U(1)′).
The total particle content of the model is given in Table 1. The fermion
content consists of the standard model fermions and their dark partners. Note
that unlike the original paper, we do not list right-handed singlet neutrinos
and their partners. These were introduced to allow for asymmetry generation
through thermal leptogenesis, whereas we will be considering dark electroweak
baryogenesis as the asymmetry-creation mechanism. (Of course, we need to
generate massive neutrinos somehow, but we may remain largely agnostic about
the precise mechanism for present purposes, only requiring that it neither
dominate asymmetry generation nor contribute significantly to
washout.) There are four scalars in the model: two Higgs doublets, $\Phi_{1}$
and $\Phi_{2}$, along with their dark counterparts $\Phi_{1}^{\prime}$ and
$\Phi_{2}^{\prime}$. The additional Higgs doublets allow for the ASB mechanism
[50] to be implemented; as a vital component of the model, understanding the
mechanism will be central to constructing the scalar potential.
### II.1 The Scalar Potential and Asymmetric Symmetry Breaking
To introduce the ASB mechanism we first consider the scalar potential in an
illustrative toy model. In addition to the mirror $Z_{2}$ symmetry exchanging
$\Phi_{1}$ and $\Phi_{2}$ with $\Phi_{1}^{\prime}$ and $\Phi_{2}^{\prime}$, we
impose extra discrete $Z_{2}$ symmetries such that only terms with even
numbers of a given scalar are allowed. Then, the scalar potential can be
written in the form
$\begin{split}V_{\mathrm{ASB}}&=\lambda_{1}\left(\Phi_{1}^{\dagger}\Phi_{1}+\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}-\frac{v^{2}}{2}\right)^{2}+\lambda_{2}\left(\Phi_{2}^{\dagger}\Phi_{2}+\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}-\frac{w^{2}}{2}\right)^{2}\\\
&+\kappa_{1}\left(\Phi_{1}^{\dagger}\Phi_{1}\right)\left(\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}\right)+\kappa_{2}\left(\Phi_{2}^{\dagger}\Phi_{2}\right)\left(\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}\right)\\\
&+\sigma_{1}\left(\left(\Phi_{1}^{\dagger}\Phi_{1}\right)\left(\Phi_{2}^{\dagger}\Phi_{2}\right)+\left(\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}\right)\left(\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}\right)\right)\\\
&+\sigma_{2}\left(\Phi_{1}^{\dagger}\Phi_{1}+\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}+\Phi_{2}^{\dagger}\Phi_{2}+\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}-\frac{v^{2}}{2}-\frac{w^{2}}{2}\right)^{2}.\end{split}$
(4)
In the parameter space region where each of
$\lambda_{1},\lambda_{2},\kappa_{1},\kappa_{2},\sigma_{1},\sigma_{2}$ is
positive, the global minimum occurs when all terms vanish independently.
This can be achieved by the following pattern of VEVs:
$\begin{split}\expectationvalue{\Phi_{1}}=\begin{bmatrix}0\\\
\frac{v}{\sqrt{2}}\end{bmatrix},\quad\expectationvalue{\Phi_{1}^{\prime}}=0,\\\
\expectationvalue{\Phi_{2}}=0,\quad\expectationvalue{\Phi_{2}^{\prime}}=\begin{bmatrix}0\\\
\frac{w}{\sqrt{2}}\end{bmatrix}.\end{split}$ (5)
This minimum clearly breaks the mirror $Z_{2}$ symmetry, with non-mirror
partner Higgs doublets gaining non-zero VEVs in the two sectors. The
motivation for breaking the mirror symmetry is to obtain differing particle
masses in the visible and dark sectors. As the masses of the visible and dark
baryons result from the QCD confinement energy of the SU(3) interaction in
each sector, we want the QCD confinement scale in each sector –
$\Lambda_{\mathrm{QCD}}$ and $\Lambda_{\mathrm{DM}}$ – to differ.
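A quick numerical check that the VEV pattern of Eq. (5) sits at the global minimum of the toy potential: with all six couplings positive, every term of Eq. (4) vanishes there, and any perturbation raises the potential. The sketch replaces each doublet by its real, non-negative modulus squared.

```python
import numpy as np

def V_ASB(phi1, phi1p, phi2, phi2p, lam1, lam2, k1, k2, s1, s2, v, w):
    """Toy ASB potential of Eq. (4), with each doublet replaced by
    |Phi|^2 = Phi^dagger Phi (a real, non-negative number)."""
    return (lam1 * (phi1 + phi1p - v**2 / 2) ** 2
            + lam2 * (phi2 + phi2p - w**2 / 2) ** 2
            + k1 * phi1 * phi1p + k2 * phi2 * phi2p
            + s1 * (phi1 * phi2 + phi1p * phi2p)
            + s2 * (phi1 + phi1p + phi2 + phi2p
                    - v**2 / 2 - w**2 / 2) ** 2)

rng = np.random.default_rng(0)
v, w = 1.0, 5.0
couplings = rng.uniform(0.1, 1.0, size=6)  # all positive

# ASB pattern of Eq. (5): <Phi_1> = v/sqrt(2) and <Phi_2'> = w/sqrt(2),
# while the non-mirror-partner doublets have <Phi_1'> = <Phi_2> = 0.
V_min = V_ASB(v**2 / 2, 0.0, 0.0, w**2 / 2, *couplings, v, w)
# Every term vanishes at this point, so V_min == 0 exactly; turning on
# a small <Phi_1'> makes the potential strictly positive.
```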
This was explored in Ref. [50], which considered the evolution of the SU(3)
gauge couplings $\alpha_{3}$ and $\alpha_{3}^{\prime}$. At temperatures above
the scale of mirror symmetry breaking, these couplings are equal; after the
mirror symmetry is broken by the development of an asymmetric minimum, their
running to low energies depends upon the spectrum of quark masses in each
sector. Thanks to the asymmetric symmetry breaking minimum, these two spectra
are independent, as the Higgs doublets that give masses to the quarks in each
sector are not mirror partners. If the minimum is constructed such that $w\gg
v$, then depending on the Yukawa couplings of the quarks to
$\Phi_{2}^{\prime}$, a dark confinement scale $\Lambda_{\mathrm{DM}}$ a factor
of a few higher than $\Lambda_{\mathrm{QCD}}$ can be easily achieved. This is
encapsulated in Fig. 1 of Ref. [24], which plots $\Lambda_{\mathrm{DM}}$
against the ratio of electroweak scales ($\rho\equiv w/v$) for a
selection of dark quark mass spectra.
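The running described above can be sketched at one loop: integrate out each quark at its mass threshold (the coefficient $b_0 = 11 - 2n_f/3$ grows as flavours decouple) and find the scale where the coupling diverges. The dark quark spectrum below is a hypothetical illustration, roughly the visible masses scaled up by $\rho\sim 5$, not a fit taken from Ref. [24], and both sectors are assumed to share the same coupling at the starting scale.

```python
import math

def lambda_conf(alpha_start, mu_start, quark_masses):
    """One-loop estimate of an SU(3) confinement scale (GeV).
    Runs 1/alpha down from mu_start, integrating out each quark at its
    mass threshold, and returns the scale where the coupling diverges.
    A rough sketch only, not a precision determination."""
    thresholds = sorted((m for m in quark_masses if m < mu_start),
                        reverse=True)
    nf = len(thresholds)            # active flavours at mu_start
    inv_alpha, mu = 1.0 / alpha_start, mu_start
    for m in thresholds:
        b0 = 11 - 2 * nf / 3
        lam = mu * math.exp(-2 * math.pi * inv_alpha / b0)
        if lam > m:                 # coupling blows up above next threshold
            return lam
        # one-loop: 1/alpha(m) = 1/alpha(mu) + b0/(2*pi) * ln(m/mu)
        inv_alpha += b0 / (2 * math.pi) * math.log(m / mu)
        mu, nf = m, nf - 1
    b0 = 11 - 2 * nf / 3
    return mu * math.exp(-2 * math.pi * inv_alpha / b0)

# Visible QCD: u, d, s, c, b, t masses in GeV (t sits above M_Z).
lam_vis = lambda_conf(0.118, 91.19, [0.002, 0.005, 0.1, 1.3, 4.2, 173.0])
# Hypothetical dark spectrum scaled by rho ~ 5: heavier quarks decouple
# earlier, the coupling runs faster below them, and Lambda_DM is larger.
lam_dark = lambda_conf(0.118, 91.19, [0.01, 0.025, 0.5, 6.5, 21.0, 865.0])
```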
While an asymmetric symmetry breaking minimum can be readily obtained for the
toy scalar potential, the situation is more complex when the full scalar
potential is considered. The most general mirror two-Higgs-doublet scalar
potential – where we only impose the mirror $Z_{2}$ exchange symmetry – is
given by
$\begin{split}V_{\mathrm{M2HDM}}&=m_{11}^{2}\left(\Phi_{1}^{\dagger}\Phi_{1}+\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}\right)+m_{22}^{2}\left(\Phi_{2}^{\dagger}\Phi_{2}+\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}\right)\\\
&+\left(m_{12}^{2}\left(\Phi_{1}^{\dagger}\Phi_{2}+\Phi_{1}^{\prime\dagger}\Phi_{2}^{\prime}\right)+h.c.\right)+\frac{1}{2}z_{1}\left(\left(\Phi_{1}^{\dagger}\Phi_{1}\right)^{2}+\left(\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}\right)^{2}\right)\\\
&+\frac{1}{2}z_{2}\left(\left(\Phi_{2}^{\dagger}\Phi_{2}\right)^{2}+\left(\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}\right)^{2}\right)+z_{3}\left(\Phi_{1}^{\dagger}\Phi_{1}\Phi_{2}^{\dagger}\Phi_{2}+\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}\right)\\\
&+z_{4}\left(\Phi_{1}^{\dagger}\Phi_{2}\Phi_{2}^{\dagger}\Phi_{1}+\Phi_{1}^{\prime\dagger}\Phi_{2}^{\prime}\Phi_{2}^{\prime\dagger}\Phi_{1}^{\prime}\right)+\frac{1}{2}z_{5}\left(\left(\Phi_{1}^{\dagger}\Phi_{2}\right)^{2}+\left(\Phi_{1}^{\prime\dagger}\Phi_{2}^{\prime}\right)^{2}+h.c.\right)\\\
&+\left[\left(z_{6}\Phi_{1}^{\dagger}\Phi_{1}+z_{7}\Phi_{2}^{\dagger}\Phi_{2}\right)\Phi_{1}^{\dagger}\Phi_{2}+\left(z_{6}\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}+z_{7}\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}\right)\Phi_{1}^{\prime\dagger}\Phi_{2}^{\prime}+h.c.\right]\\\
&+z_{8}\Phi_{1}^{\dagger}\Phi_{1}\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}+z_{9}\Phi_{2}^{\dagger}\Phi_{2}\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}+\left(z_{10}\Phi_{1}^{\dagger}\Phi_{2}\Phi_{1}^{\prime\dagger}\Phi_{2}^{\prime}+h.c.\right)\\\
&+\left(z_{11}\Phi_{1}^{\dagger}\Phi_{2}\Phi_{2}^{\prime\dagger}\Phi_{1}^{\prime}+h.c.\right)+z_{12}\left(\Phi_{1}^{\dagger}\Phi_{1}\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}+\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}\Phi_{2}^{\dagger}\Phi_{2}\right)\\\
&+\left[\left(z_{13}\Phi_{1}^{\dagger}\Phi_{1}+z_{14}\Phi_{2}^{\dagger}\Phi_{2}\right)\Phi_{1}^{\prime\dagger}\Phi_{2}^{\prime}+\left(z_{13}\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}+z_{14}\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}\right)\Phi_{1}^{\dagger}\Phi_{2}+h.c.\right].\end{split}$
(6)
The large number of new terms prevents us from constructing the potential in
such a way that its minimum exactly follows the asymmetric symmetry breaking
pattern of Eq. 4. In general, the global minimum is given by
$\expectationvalue{\Phi_{i}}=\begin{bmatrix}0\\
\frac{v_{i}}{\sqrt{2}}\end{bmatrix},\quad\expectationvalue{\Phi_{i}^{\prime}}=\begin{bmatrix}0\\
\frac{w_{i}}{\sqrt{2}}\end{bmatrix}$ (7)
meaning that all four doublets have nonzero VEVs. To recover the pattern of
Eq. 4, we transform to a basis in which one Higgs doublet in each sector has a
zero VEV. This “dual Higgs basis” is defined by
$\begin{split}H_{1}=\frac{v_{1}^{*}\Phi_{1}+v_{2}^{*}\Phi_{2}}{v},\quad
H_{2}=\frac{-v_{2}\Phi_{1}+v_{1}\Phi_{2}}{v},\\
H_{1}^{\prime}=\frac{w_{1}^{*}\Phi_{1}^{\prime}+w_{2}^{*}\Phi_{2}^{\prime}}{w},\quad
H_{2}^{\prime}=\frac{-w_{2}\Phi_{1}^{\prime}+w_{1}\Phi_{2}^{\prime}}{w},\end{split}$
(8)
where
$v=\sqrt{\absolutevalue{v_{1}}^{2}+\absolutevalue{v_{2}}^{2}},\quad
w=\sqrt{\absolutevalue{w_{1}}^{2}+\absolutevalue{w_{2}}^{2}}.$ (9)
With these assignments, only $H_{1}$ and $H_{1}^{\prime}$ gain non-zero VEVs,
and they will not be mirror partners if we have $v_{1}\neq w_{1}$ and
$v_{2}\neq w_{2}$.
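That the second doublet in each sector loses its VEV under this rotation can be checked directly. A minimal sketch, using illustrative real VEV values (so that $v_{i}^{*}=v_{i}$, $w_{i}^{*}=w_{i}$); the numbers are placeholders, not fitted values:

```python
import numpy as np

# Hypothetical VEVs, chosen only for illustration (real parameters).
v1, v2 = 245.0, 8.0
w1, w2 = 50.0, 7275.0

v = np.sqrt(v1**2 + v2**2)
w = np.sqrt(w1**2 + w2**2)

# Eq. (8): rotation from (Phi_1, Phi_2) to the dual Higgs basis (H_1, H_2),
# acting here on the VEV vector (v1, v2); only H_1 should retain a VEV.
R_visible = np.array([[v1, v2], [-v2, v1]]) / v
R_dark = np.array([[w1, w2], [-w2, w1]]) / w

vev_H = R_visible @ np.array([v1, v2])
vev_Hp = R_dark @ np.array([w1, w2])

print(vev_H)   # ~ [v, 0]
print(vev_Hp)  # ~ [w, 0]
```

The second component vanishes identically: $(-v_{2}v_{1}+v_{1}v_{2})/v=0$, and likewise in the dark sector.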
We wish to maintain the desirable features of exact asymmetric symmetry
breaking, where unrelated Higgs bosons are responsible for mass generation in
each sector. Thus, we want $H_{1}$ and $H_{1}^{\prime}$ to be largely
independent admixtures of $\Phi_{1}^{(\prime)}$ and $\Phi_{2}^{(\prime)}$.
This can be achieved by a global minimum that resembles the ASB minimum; that
is, one where
$v_{1}\gg v_{2},\quad w_{1}\ll w_{2},\quad w_{2}\gg v_{1}.$ (10)
We will refer to this as the “ASB limit”.
To obtain it, we want to choose the parameters for $V_{\mathrm{M2HDM}}$ such
that the potential is of a similar form to $V_{\mathrm{ASB}}$. Equating
coefficients between the two potentials, we obtain
$\begin{split}{m_{11}}^{2}=-\lambda_{1}v^{2}-\sigma_{2}(v^{2}+w^{2}),\quad
z_{1}=2\lambda_{1}+2\sigma_{2},&\quad
z_{8}=\kappa_{1}+2\lambda_{1}+2\sigma_{2},\\
{m_{22}}^{2}=-\lambda_{2}w^{2}-\sigma_{2}(v^{2}+w^{2}),\quad
z_{2}=2\lambda_{2}+2\sigma_{2},&\quad
z_{9}=\kappa_{2}+2\lambda_{2}+2\sigma_{2},\\
z_{3}=\sigma_{1}+2\sigma_{2},&\quad z_{12}=2\sigma_{2}.\end{split}$ (11)
The other parameters in $V_{\mathrm{M2HDM}}$ do not correspond to any terms
in $V_{\mathrm{ASB}}$. Thus, to approximately replicate the form of
$V_{\mathrm{ASB}}$ in the full potential, we can initially apply a rough
condition that these additional parameters are small with respect to those
listed above; that is,
$z_{1},z_{2},z_{3},z_{8},z_{9},z_{12}\gg
z_{4},z_{5},z_{6},z_{7},z_{10},z_{11},z_{13},z_{14}.$ (12)
Parameters | Values
---|---
${m_{11}}^{2}$ | $-(87~{}\mathrm{GeV})^{2}$
${m_{12}}^{2}$ | $-(90~{}\mathrm{GeV})^{2}$
${m_{22}}^{2}$ | $-(2600~{}\mathrm{GeV})^{2}$
$z_{1}$, $z_{2}$ | 0.129
$z_{3}$, $z_{8}$, $z_{9}$, $z_{10}$ | 0.8
$z_{4}$, $z_{5}$, $z_{6}$, $z_{7}$, $z_{11}$, $z_{13}$, $z_{14}$ | 0.01
$z_{12}$ | $1\times 10^{-8}$
Table 2: Benchmark parameter point for the full scalar potential, taken from
Table 1 in Ref. [24].
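For concreteness, the benchmark point can be encoded directly and checked against the hierarchy condition; a short sketch using the Table 2 values:

```python
# Benchmark parameter point of Table 2 (real parameters), in GeV^2 for the
# mass parameters and dimensionless for the quartic couplings.
bench = {
    "m11sq": -87.0**2, "m12sq": -90.0**2, "m22sq": -2600.0**2,
    "z1": 0.129, "z2": 0.129,
    "z3": 0.8, "z8": 0.8, "z9": 0.8, "z10": 0.8,
    "z4": 0.01, "z5": 0.01, "z6": 0.01, "z7": 0.01,
    "z11": 0.01, "z13": 0.01, "z14": 0.01,
    "z12": 1e-8,
}

# Eq. (13): couplings fixing the ASB structure of the minimum must dominate
# those that would distort it.
large = [bench[k] for k in ("z1", "z2", "z3", "z8", "z9")]
small = [bench[k] for k in ("z6", "z7", "z11", "z13", "z14")]
hierarchy = min(large) / max(small)
print(hierarchy)  # ~ 12.9: roughly one order of magnitude, as noted in the text
```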
To see how this might be applied, we consider the benchmark parameter point
given in Table 1 of the original paper [24], which we reproduce in Table 2.
Most of the parameters satisfy our rough condition, with two notable
exceptions. Firstly, $z_{12}$ is by far the smallest coupling; this will be
motivated when we consider the scalar masses of the theory. In addition,
$z_{10}$ is just as large as the other quartic couplings. Even though it does
not correspond to any terms in the toy model potential, making $z_{10}$ large
does not alter the asymmetric symmetry breaking pattern of the minimum; the
term
$(z_{10}\Phi_{1}^{\dagger}\Phi_{2}\Phi_{1}^{\prime\dagger}\Phi_{2}^{\prime}+h.c.)$
contains both Higgs bosons that gain small VEVs, and thus only provides a
small contribution to the potential at the asymmetric symmetry breaking
minimum. By this logic, $z_{4}$, $z_{5}$, and $z_{11}$ also do not necessarily
have to be small. Thus, ensuring an asymmetric symmetry breaking pattern for
the minimum of $V_{\mathrm{M2HDM}}$ only requires
$z_{1},z_{2},z_{3},z_{8},z_{9}\gg z_{6},z_{7},z_{11},z_{13},z_{14}.$ (13)
This is a rather rough condition, and there will be more nuance in exactly how
small these quartic couplings will need to be.
To conclude this discussion we note what happens for large values of the dark
electroweak scale $w$. As can be seen in Table 2, a valid parameter point can
be achieved with only one to two orders of magnitude difference between the
couplings (except for the aforementioned $z_{12}$). This benchmark point
corresponds to $w=7276$ GeV, thirty times greater than the visible VEV $v=246$
GeV. For values of $w$ one or more orders of magnitude larger than this, an
issue will arise from the term
$(z_{14}\Phi_{1}^{\dagger}\Phi_{2}\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}+h.c.)$.
Expanding around the VEV of $\Phi_{2}^{\prime}$ we obtain the term
$(z_{14}{w_{2}}^{2})\Phi_{1}^{\dagger}\Phi_{2}$, which will strongly alter the
tree-level ASB minimum when $z_{14}{w_{2}}^{2}$ is of the order of the larger
scalar couplings. To preserve the asymmetric symmetry breaking pattern as we
increase the value of $w$, $z_{14}$ must then be made smaller than $0.01$.
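The quadratic growth of this induced coefficient with $w_{2}$ can be illustrated numerically; the values below are for illustration only:

```python
# After Phi_2' gains its VEV w2, the term z14*(Phi_1^dag Phi_2)(Phi_2'^dag Phi_2')
# induces an effective bilinear coefficient of order (1/2)*z14*w2**2 multiplying
# Phi_1^dag Phi_2 (tree-level estimate).
z14 = 0.01
for w2 in (7276.0, 7.276e4, 7.276e5):  # benchmark value of w2, then 10x and 100x
    induced = 0.5 * z14 * w2**2  # GeV^2
    print(f"w2 = {w2:.3e} GeV -> induced coefficient = {induced:.3e} GeV^2")
# Growing like w2**2, the induced term eventually rivals the other quadratic
# coefficients, so z14 must shrink as w is raised to preserve the ASB minimum.
```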
### II.2 Scalar Masses
In deriving the masses of the scalar content of the model, we follow the
original paper and define the field content of the doublets by
$\begin{split}\Phi_{1}=\begin{bmatrix}G_{1}^{+}\\
\frac{1}{\sqrt{2}}(v_{1}+\phi_{1}+iG_{1})\end{bmatrix},&\quad\Phi_{1}^{\prime}=\begin{bmatrix}I_{1}^{+}\\
\frac{1}{\sqrt{2}}(w_{1}+\phi_{1}^{\prime}+ia_{1})\end{bmatrix},\\
\Phi_{2}=\begin{bmatrix}I_{2}^{+}\\
\frac{1}{\sqrt{2}}(v_{2}+\phi_{2}+ia_{2})\end{bmatrix},&\quad\Phi_{2}^{\prime}=\begin{bmatrix}{G_{2}^{+}}^{\prime}\\
\frac{1}{\sqrt{2}}(w_{2}+{\phi_{2}}^{\prime}+iG_{2})\end{bmatrix}.\end{split}$
(14)
Generically we obtain a 16 $\times$ 16 mass matrix which produces 10 nonzero
mass eigenstates when diagonalised. When working with real parameters in
$V_{\mathrm{M2HDM}}$, there will only be mixing between fields at the same
position in each doublet; thus, we only have to diagonalise four separate 4
$\times$ 4 mass matrices. These four matrices each involve mixing between
visible and dark fields, and thus the mass eigenstates are generically
admixtures of visible and dark states.
This is an issue, as we require one of these states to serve as the SM Higgs
boson $h$, which must not mix strongly with any of the new scalars. We must
especially avoid any dependence of the mass of $h$ on the dark scale $w$. To
see how this is achieved, we consider the mass mixing matrix for the $\phi$
fields. These terms derive from Appendix A in the original paper, where we
work in the ASB limit for the minimum; that is, we ignore terms involving the
small VEVs $v_{2}$ and $w_{1}$, and work only with real quartic couplings. We
then obtain the mass matrix
$\frac{1}{2}\left(\phi_{1},\phi_{2},\phi_{1}^{\prime},\phi_{2}^{\prime}\right)\begin{pmatrix}m_{\phi_{1}\phi_{1}}&m_{\phi_{1}\phi_{2}}&m_{\phi_{1}\phi_{1}^{\prime}}&m_{\phi_{1}\phi_{2}^{\prime}}\\
m_{\phi_{1}\phi_{2}}&m_{\phi_{2}\phi_{2}}&m_{\phi_{2}\phi_{1}^{\prime}}&m_{\phi_{2}\phi_{2}^{\prime}}\\
m_{\phi_{1}\phi_{1}^{\prime}}&m_{\phi_{2}\phi_{1}^{\prime}}&m_{\phi_{1}^{\prime}\phi_{1}^{\prime}}&m_{\phi_{1}^{\prime}\phi_{2}^{\prime}}\\
m_{\phi_{1}\phi_{2}^{\prime}}&m_{\phi_{2}\phi_{2}^{\prime}}&m_{\phi_{1}^{\prime}\phi_{2}^{\prime}}&m_{\phi_{2}^{\prime}\phi_{2}^{\prime}}\end{pmatrix}\begin{pmatrix}\phi_{1}\\
\phi_{2}\\ \phi_{1}^{\prime}\\ \phi_{2}^{\prime}\end{pmatrix}$ (15)
where
$\begin{split}m_{\phi_{1}\phi_{1}}&\simeq{m_{11}}^{2}+\frac{3}{2}v_{1}^{2}z_{1}+\frac{1}{2}w_{2}^{2}z_{12}\\
m_{\phi_{1}\phi_{2}}&\simeq{m_{12}}^{2}+\frac{3}{2}v_{1}^{2}z_{6}+\frac{1}{2}w_{2}^{2}z_{14}\\
m_{\phi_{1}\phi_{1}^{\prime}}&\simeq v_{1}w_{2}z_{13}\\
m_{\phi_{1}\phi_{2}^{\prime}}&\simeq v_{1}w_{2}z_{12}\\
m_{\phi_{2}\phi_{2}}&\simeq{m_{22}}^{2}+\frac{1}{2}v_{1}^{2}(z_{3}+z_{4}+z_{5})+\frac{1}{2}w_{2}^{2}z_{9}\\
\end{split}\quad\begin{split}m_{\phi_{2}\phi_{1}^{\prime}}&\simeq\frac{1}{2}v_{1}w_{2}z_{10}+\frac{1}{2}v_{1}w_{2}z_{11}\\
m_{\phi_{2}\phi_{2}^{\prime}}&\simeq v_{1}w_{2}z_{14}\\
m_{\phi_{1}^{\prime}\phi_{1}^{\prime}}&\simeq{m_{11}}^{2}+\frac{1}{2}v_{1}^{2}z_{8}+\frac{1}{2}w_{2}^{2}(z_{3}+z_{4}+z_{5})\\
m_{\phi_{1}^{\prime}\phi_{2}^{\prime}}&\simeq{m_{12}}^{2}+\frac{1}{2}v_{1}^{2}z_{13}+\frac{3}{2}w_{2}^{2}z_{7}\\
m_{\phi_{2}^{\prime}\phi_{2}^{\prime}}&\simeq{m_{22}}^{2}+\frac{1}{2}v_{1}^{2}z_{12}+\frac{3}{2}w_{2}^{2}z_{2}.\end{split}$
(16)
We consider the sizes of these terms given the constraint on relative
parameter sizes from Eq. 13. We first note that the off-diagonal terms
involving $\phi_{1}$ depend on couplings that we require to be small, with the
exception of $z_{12}$ in the term $m_{\phi_{1}\phi_{2}^{\prime}}$. In the
diagonal mass term for $\phi_{1}$, $m_{\phi_{1}\phi_{1}}$, $z_{12}$ also
controls the term’s dependence on $w_{2}$. So, setting $z_{12}$ to be very
small, as was done in the original paper’s benchmark point shown in Table 2,
ensures that $\phi_{1}$ is decoupled from the dark electroweak scale, and
has minimal mixing with any other scalars. This then means that there is a
mass eigenstate composed primarily of $\phi_{1}$, which we denote as $h$ and
identify as the SM Higgs boson.
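These decoupling claims can be checked numerically from the Eq. 16 entries at the benchmark point. The identifications $v_{1}\approx v=246$ GeV and $w_{2}\approx w=7276$ GeV are assumptions of the ASB limit, not exact values:

```python
# Mass-matrix entries of Eq. (16) at the Table 2 benchmark, ASB limit.
v1, w2 = 246.0, 7276.0                      # assumed v1 ~ v, w2 ~ w
m11sq, m12sq, m22sq = -87.0**2, -90.0**2, -2600.0**2
z1 = z2 = 0.129
z3 = z8 = z9 = z10 = 0.8
z4 = z5 = z6 = z7 = z11 = z13 = z14 = 0.01
z12 = 1e-8

m_p1p1 = m11sq + 1.5 * v1**2 * z1 + 0.5 * w2**2 * z12
m_p1p2 = m12sq + 1.5 * v1**2 * z6 + 0.5 * w2**2 * z14
m_p1p2p = v1 * w2 * z12
m_p2p2 = m22sq + 0.5 * v1**2 * (z3 + z4 + z5) + 0.5 * w2**2 * z9

# With z12 tiny, phi_1 barely couples to phi_2', and the w2-dependence of
# its diagonal entry is negligible; phi_1-phi_2 mixing is also small.
print(m_p1p2p)                           # O(0.01) GeV^2
print(0.5 * w2**2 * z12)                 # w2-dependence of m_p1p1: O(0.1) GeV^2
print(abs(m_p1p2) / (m_p2p2 - m_p1p1))   # mixing estimate: ~0.02
```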
With $z_{12}$ small, the off-diagonal terms involving $\phi_{2}^{\prime}$ are
also relatively small, so we identify the dark Higgs boson $h^{\prime}$ as the
mass eigenstate composed predominantly of $\phi_{2}^{\prime}$. The level of
mixing between the remaining neutral real scalars, $\phi_{2}$ and
$\phi_{1}^{\prime}$, is controlled by $z_{10}$. As we noted earlier, this
coupling is relatively large in the given benchmark point; this allows for the
other two mass eigenstates to be heavy, as their masses depend on the dark
electroweak scale $w$. As in the original paper, we denote these eigenstates
as $J_{1}^{0}$ and $J_{2}^{0}$. We take a similar approach with the remaining
mass eigenstates. Following the original paper, we name them $A_{1}^{0}$,
$A_{2}^{0}$, $H^{\pm}$, and $H^{\pm\prime}$; they too couple to the dark
electroweak scale $w$ and are thus much heavier than the visible Higgs boson
$h$. This allows the low-energy scalar sector of this theory to contain solely
an SM Higgs state, and is also relevant for meeting constraints from flavour-
changing neutral current measurements.
### II.3 Yukawa Couplings and Flavour-Changing Neutral Currents
The Yukawa sector of this theory is given by
$\begin{split}-\mathcal{L}_{Y}&=y_{1ij}^{u}(\bar{q_{L}^{i}}u_{R}^{j}\Phi_{1}+\bar{q_{R}^{i\prime}}u_{L}^{j\prime}\Phi_{1}^{\prime})+y_{1ij}^{d}(\bar{q_{L}^{i}}d_{R}^{j}\tilde{\Phi}_{1}+\bar{q_{R}^{i\prime}}d_{L}^{j\prime}\tilde{\Phi}_{1}^{\prime})+y_{1ij}^{l}(\bar{l_{L}^{i}}e_{R}^{j}\tilde{\Phi}_{1}+\bar{l_{R}^{i\prime}}e_{L}^{j\prime}\tilde{\Phi}_{1}^{\prime})\\
&+y_{2ij}^{u}(\bar{q_{L}^{i}}u_{R}^{j}\Phi_{2}+\bar{q_{R}^{i\prime}}u_{L}^{j\prime}\Phi_{2}^{\prime})+y_{2ij}^{d}(\bar{q_{L}^{i}}d_{R}^{j}\tilde{\Phi}_{2}+\bar{q_{R}^{i\prime}}d_{L}^{j\prime}\tilde{\Phi}_{2}^{\prime})+y_{2ij}^{l}(\bar{l_{L}^{i}}e_{R}^{j}\tilde{\Phi}_{2}+\bar{l_{R}^{i\prime}}e_{L}^{j\prime}\tilde{\Phi}_{2}^{\prime})+h.c.\end{split}$
(17)
where $\tilde{\Phi}=i\tau_{2}\Phi^{\star}$. We note that the mirror symmetry
enforces the Yukawa couplings of a doublet and its mirror counterpart to be
equal.
We are interested in how these couplings generate quark masses, as it is the
quark mass spectrum in each sector that affects the running of $\alpha_{3}$
and $\alpha_{3}^{\prime}$ and allows us to achieve different visible and dark
QCD scales after the mirror symmetry is broken. So, we work in the Higgs basis
of Eq. 8, where only $H_{1}$ and $H_{1}^{\prime}$ gain VEVs. Then, the
relevant Yukawa matrices are given by
$\begin{split}\tilde{y}_{1}^{q}=V_{L}^{q}\left(\frac{v_{1}y_{1}^{q}+v_{2}y_{2}^{q}}{v}\right)V_{R}^{q\dagger},\quad&\tilde{y}_{2}^{q}=V_{L}^{q}\left(\frac{-v_{2}y_{1}^{q}+v_{1}y_{2}^{q}}{v}\right)V_{R}^{q\dagger},\\
\tilde{y}_{1}^{q\prime}=W_{L}^{q}\left(\frac{w_{1}y_{1}^{q}+w_{2}y_{2}^{q}}{w}\right)W_{R}^{q\dagger},\quad&\tilde{y}_{2}^{q\prime}=W_{L}^{q}\left(\frac{-w_{2}y_{1}^{q}+w_{1}y_{2}^{q}}{w}\right)W_{R}^{q\dagger},\end{split}$
(18)
where $q=u,d$, and $\tilde{y}_{i}^{q(\prime)}$ is the Yukawa matrix for
couplings between $H_{i}^{(\prime)}$ and either up- or down-type quarks.
$V_{L,R}^{q}$ and $W_{L,R}^{q}$ are the left- and right-handed matrices that
respectively diagonalise $\tilde{y}_{1}^{q}$ and $\tilde{y}_{1}^{q\prime}$.
The Yukawa matrices relevant for generating quark masses in each sector are
$\tilde{y}_{1}^{q}$ and $\tilde{y}_{1}^{q\prime}$. In the ASB limit of Eq. 10,
we see that the visible and dark quark masses depend primarily on $y_{1}^{q}$
and $y_{2}^{q}$, respectively. This is just the statement that $\Phi_{1}$ and
$\Phi_{2}^{\prime}$ are the doublets primarily responsible for mass generation
in their respective sectors, allowing the quark mass spectrum in each sector
to be largely independent.
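This limit is easy to see numerically. The sketch below uses arbitrary placeholder Yukawa matrices (not fitted values) and omits the flavour rotations $V_{L,R}^{q}$ and $W_{L,R}^{q}$, which do not affect the argument:

```python
import numpy as np

# Eq. (18) in the ASB limit: with v1 >> v2 the visible mass-generating
# combination is dominated by y1, and with w2 >> w1 the dark one by y2.
y1 = np.array([[1.0, 0.1], [0.05, 0.5]])   # placeholder Yukawa matrices
y2 = np.array([[0.3, 0.2], [0.1, 0.8]])

v1, v2 = 246.0, 1.0
w1, w2 = 1.0, 7276.0
v = np.hypot(v1, v2)
w = np.hypot(w1, w2)

ytilde1 = (v1 * y1 + v2 * y2) / v          # visible mass-generating matrix
ytilde1_dark = (w1 * y1 + w2 * y2) / w     # dark mass-generating matrix

print(np.max(np.abs(ytilde1 - y1)))        # small: visible sector ~ y1
print(np.max(np.abs(ytilde1_dark - y2)))   # small: dark sector ~ y2
```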
The secondary Yukawa matrices in each sector are not diagonal. This leads to
flavour-changing neutral currents at tree-level, which are strongly suppressed
in the SM and are subject to strict experimental constraints [55]. This is
often controlled in 2HDMs by introducing additional discrete symmetries to
restrict which types of quarks each doublet can couple to; in effect, this
equates to setting some of $y_{1}^{q}$ and $y_{2}^{q}$ to zero. However, in
our case all of these matrices are relevant for mass generation, and must be
non-zero. Thus, in the visible sector $\Phi_{1}$ and $\Phi_{2}$ will both
couple to all quarks; this corresponds to a Type III 2HDM, in which tree-level
FCNCs are present, and must be sufficiently suppressed.
The original paper quoted an approximate result from [56], where FCNC bounds
were avoided in a Type III 2HDM for $m_{H_{2}}\gtrsim 150$ TeV. However, this
bound was obtained under the assumption that all Yukawa couplings of $H_{2}$
were the size of the SM top quark coupling. The more realistic Yukawa coupling
selection in this model leads to much less stringent bounds [57, 58]; we
follow the guide of the original paper, in which all stated mass values for
the additional scalars are heavy enough to sufficiently suppress FCNCs. We
then ensure that we consider parameter points where all additional scalars
have masses at least as large as those given in Table 1 of Ref. [24].
## III Dark Electroweak Phase Transition
To address the apparent coincidence of cosmological mass densities of Eq. 1, a
comprehensive dark matter theory must explain why both the particle masses and
number densities of visible and dark matter are similar. As outlined in the
previous section, the asymmetric symmetry breaking structure of the mirror two
Higgs doublet model allows for a dark neutron-like particle with a mass a
factor of a few larger than the visible proton. With the particle masses thus
linked, we now need to produce related number densities $n_{\mathrm{VM}}$ and
$n_{\mathrm{DM}}$.
In this section we implement electroweak baryogenesis (EWBG) as the asymmetry
generation mechanism [59]. EWBG occurs at a first-order electroweak phase
transition (EWPT), where the transition proceeds by bubble nucleation. The
Sakharov conditions [60] are satisfied by out-of-equilibrium $C$\- and
$CP$-violating Yukawa interactions at the bubble walls together with
$B$-violating electroweak sphaleron processes, and thus a baryon asymmetry is
generated during the transition.
While all these ingredients are present within the SM, the visible electroweak
phase transition (vEWPT) is a crossover, not first-order [61]. Even if that were
not so, the $CP$-violation in the SM Yukawa matrix would be insufficient to
generate the required asymmetry [62]. In our model, however, there will be a
dark electroweak phase transition (dEWPT) in which $\Phi_{2}^{\prime}$
gains a VEV of order $w$. Its dynamics are controlled by the scalar and Yukawa
couplings of the second Higgs doublet, which are only very weakly constrained
by SM measurements. So, we should have the flexibility to successfully
implement EWBG at the dEWPT, thus generating an asymmetry in the dark baryon
number $B^{\prime}$ and/or dark lepton number $L^{\prime}$.
In this section we analyse the dynamics of the dEWPT, searching for parameter
selections for the scalar potential $V_{\mathrm{M2HDM}}$ of Eq. 6 such that we
obtain a first-order electroweak phase transition that could allow for EWBG in
the dark sector. We begin by constructing the finite temperature effective
potential, and then specify the method by which we search for valid dark phase
transitions. We find that for a number of regions of parameter space, a viable
first-order EWPT can be readily achieved in the dark sector.
### III.1 The finite temperature effective potential
We begin by constructing the finite temperature effective potential (FTEP)
[63, 64], our perturbative tool for analysing the dEWPT.
The one-loop effective potential is calculated in terms of a constant
background classical field $\varphi$, and is given in general by
$V_{\mathrm{eff}}(\varphi,T)=V_{0}(\varphi)+V_{1}(\varphi,0)+\Delta
V_{1}(\varphi,T),$ (19)
where the zero-loop contribution $V_{0}(\varphi)$ is just the classical tree-
level potential and the one-loop contributions are split into zero-temperature
and finite temperature corrections.
For our mirror two-Higgs-doublet model, the FTEP will be a function of four
variables – $\varphi_{1}$, $\varphi_{2}$, $\varphi_{1}^{\prime}$, and
$\varphi_{2}^{\prime}$ – as we require a real constant classical background
field for each field that gains a VEV. We define the shorthand notation
$f(\varphi)\equiv
f(\varphi_{1},\varphi_{2},\varphi_{1}^{\prime},\varphi_{2}^{\prime})$ for any
function $f$. The background fields are incorporated by defining
$\begin{split}\Phi_{1}=\begin{bmatrix}G_{1}^{+}\\
\frac{1}{\sqrt{2}}(\varphi_{1}+\phi_{1}+iG_{1})\end{bmatrix},&\quad\Phi_{1}^{\prime}=\begin{bmatrix}I_{1}^{+}\\
\frac{1}{\sqrt{2}}(\varphi_{1}^{\prime}+\phi_{1}^{\prime}+ia_{1})\end{bmatrix},\\
\Phi_{2}=\begin{bmatrix}I_{2}^{+}\\
\frac{1}{\sqrt{2}}(\varphi_{2}+\phi_{2}+ia_{2})\end{bmatrix},&\quad\Phi_{2}^{\prime}=\begin{bmatrix}{G_{2}^{+}}^{\prime}\\
\frac{1}{\sqrt{2}}(\varphi_{2}^{\prime}+{\phi_{2}}^{\prime}+iG_{2})\end{bmatrix}.\end{split}$
(20)
Expanding $V_{\mathrm{M2HDM}}$ using the above definitions and assuming real
parameters, the tree-level component of the FTEP is given by
$\begin{split}V_{0}(\varphi)&=\frac{1}{2}m_{11}^{2}\left({\varphi_{1}}^{2}+{\varphi_{1}^{\prime}}^{2}\right)+\frac{1}{2}m_{22}^{2}\left({\varphi_{2}}^{2}+{\varphi_{2}^{\prime}}^{2}\right)+m_{12}^{2}\left(\varphi_{1}\varphi_{2}+\varphi_{1}^{\prime}\varphi_{2}^{\prime}\right)\\
&+\frac{1}{8}z_{1}\left({\varphi_{1}}^{4}+{\varphi_{1}^{\prime}}^{4}\right)+\frac{1}{8}z_{2}\left({\varphi_{2}}^{4}+{\varphi_{2}^{\prime}}^{4}\right)+\frac{1}{4}\left(z_{3}+z_{4}+z_{5}\right)\left({\varphi_{1}}^{2}{\varphi_{2}}^{2}+{\varphi_{1}^{\prime}}^{2}{\varphi_{2}^{\prime}}^{2}\right)\\
&+\frac{1}{2}z_{6}\left({\varphi_{1}}^{3}\varphi_{2}+{\varphi_{1}^{\prime}}^{3}\varphi_{2}^{\prime}\right)+\frac{1}{2}z_{7}\left(\varphi_{1}{\varphi_{2}}^{3}+\varphi_{1}^{\prime}{\varphi_{2}^{\prime}}^{3}\right)+\frac{1}{4}z_{8}{\varphi_{1}}^{2}{\varphi_{1}^{\prime}}^{2}+\frac{1}{4}z_{9}{\varphi_{2}}^{2}{\varphi_{2}^{\prime}}^{2}\\
&+\frac{1}{2}\left(z_{10}+z_{11}\right)\varphi_{1}\varphi_{2}\varphi_{1}^{\prime}\varphi_{2}^{\prime}+\frac{1}{4}z_{12}\left({\varphi_{1}}^{2}{\varphi_{2}^{\prime}}^{2}+{\varphi_{1}^{\prime}}^{2}{\varphi_{2}}^{2}\right)\\
&+\frac{1}{2}z_{13}\left({\varphi_{1}}^{2}\varphi_{1}^{\prime}\varphi_{2}^{\prime}+{\varphi_{1}^{\prime}}^{2}\varphi_{1}\varphi_{2}\right)+\frac{1}{2}z_{14}\left({\varphi_{2}}^{2}\varphi_{1}^{\prime}\varphi_{2}^{\prime}+{\varphi_{2}^{\prime}}^{2}\varphi_{1}\varphi_{2}\right).\end{split}$
(21)
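As a cross-check of this expansion, the tree-level piece can be coded directly. A minimal sketch (parameter values from the Table 2 benchmark) that also verifies the mirror symmetry of $V_{0}$ under the swap $\varphi_{1}\leftrightarrow\varphi_{1}^{\prime}$, $\varphi_{2}\leftrightarrow\varphi_{2}^{\prime}$:

```python
# Tree-level FTEP of Eq. (21) for real parameters; c is a dict of couplings.
def V0(p1, p2, p1p, p2p, c):
    return (0.5 * c["m11sq"] * (p1**2 + p1p**2)
            + 0.5 * c["m22sq"] * (p2**2 + p2p**2)
            + c["m12sq"] * (p1 * p2 + p1p * p2p)
            + 0.125 * c["z1"] * (p1**4 + p1p**4)
            + 0.125 * c["z2"] * (p2**4 + p2p**4)
            + 0.25 * (c["z3"] + c["z4"] + c["z5"]) * (p1**2 * p2**2 + p1p**2 * p2p**2)
            + 0.5 * c["z6"] * (p1**3 * p2 + p1p**3 * p2p)
            + 0.5 * c["z7"] * (p1 * p2**3 + p1p * p2p**3)
            + 0.25 * c["z8"] * p1**2 * p1p**2
            + 0.25 * c["z9"] * p2**2 * p2p**2
            + 0.5 * (c["z10"] + c["z11"]) * p1 * p2 * p1p * p2p
            + 0.25 * c["z12"] * (p1**2 * p2p**2 + p1p**2 * p2**2)
            + 0.5 * c["z13"] * (p1**2 * p1p * p2p + p1p**2 * p1 * p2)
            + 0.5 * c["z14"] * (p2**2 * p1p * p2p + p2p**2 * p1 * p2))

c = {"m11sq": -87.0**2, "m12sq": -90.0**2, "m22sq": -2600.0**2,
     "z1": 0.129, "z2": 0.129, "z3": 0.8, "z4": 0.01, "z5": 0.01,
     "z6": 0.01, "z7": 0.01, "z8": 0.8, "z9": 0.8, "z10": 0.8,
     "z11": 0.01, "z12": 1e-8, "z13": 0.01, "z14": 0.01}

# The mirror symmetry of V0: swapping visible and dark background fields
# leaves the tree-level potential invariant.
a = V0(10.0, 20.0, 30.0, 40.0, c)
b = V0(30.0, 40.0, 10.0, 20.0, c)
print(abs(a - b))  # 0.0
```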
#### III.1.1 One-loop corrections and renormalisation
The one-loop corrections $V_{1}(\varphi,0)$ and $\Delta V_{1}(\varphi,T)$ are
calculated using the Coleman-Weinberg [65] and finite temperature [66] methods
respectively, and are given by
$\begin{split}V_{1}(\varphi,0)&=\sum_{i}\pm\frac{n_{i}}{2}\int\frac{d^{4}p}{(2\pi)^{4}}\log\left(p^{2}+m_{i}^{2}(\varphi)\right),\\
\Delta V_{1}(\varphi,T)&=\frac{T^{4}}{2\pi^{2}}\sum_{i}\pm
n_{i}J_{\pm}\left(\frac{{m_{i}}^{2}(\varphi)}{T^{2}}\right),\end{split}$ (22)
where $i$ counts over all particle species with $+$ for bosons and $-$ for
fermions, and $n_{i}$ and $m_{i}(\varphi)$ are the multiplicity and field-
dependent mass of species $i$. The thermal functions $J_{\pm}(y^{2})$ are
given by
$J_{\pm}(y^{2})\equiv\int_{0}^{\infty}dx\;x^{2}\log\left(1\mp
e^{-\sqrt{x^{2}+y^{2}}}\right),$ (23)
and will be calculated numerically via the package CosmoTransitions [67].
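As an independent sanity check on Eq. 23 (the production runs use CosmoTransitions), the thermal functions can be evaluated by simple quadrature and compared against the known closed forms at $y=0$: $J_{+}(0)=-\pi^{4}/45$ and $J_{-}(0)=7\pi^{4}/360$. A sketch:

```python
import numpy as np

def J(ysq, sign):
    # sign = +1 for bosons (J_+, log(1 - e^{-E})), -1 for fermions (J_-),
    # following Eq. (23).  Trapezoidal quadrature on a truncated grid; the
    # tail beyond x = 40 is exponentially suppressed.
    x = np.linspace(1e-8, 40.0, 400001)
    E = np.sqrt(x**2 + ysq)
    f = x**2 * np.log(1.0 - sign * np.exp(-E))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

print(J(0.0, +1))  # ~ -pi^4/45   ~ -2.1646
print(J(0.0, -1))  # ~ 7*pi^4/360 ~  1.8941
```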
Before specifying a renormalisation scheme for the UV-divergent integral
$V_{1}(\varphi,0)$, we calculate the field-dependent masses
${m_{i}}^{2}(\varphi)$. The scalar boson mass matrix is obtained from
$V_{\mathrm{M2HDM}}$ given the field definitions from Eq. 20. The Goldstone
bosons that are massless at the tree-level minimum will not be massless in
general when we expand around the background fields, and thus must be included
in the one-loop corrections. In total, 16 scalars are included: the six
neutral mass eigenstates $h$, $h^{\prime}$, $A_{1}^{0}$, $A_{2}^{0}$,
$J_{1}^{0}$, and $J_{2}^{0}$, four charged scalars $H^{\pm}$ and
${H^{\pm}}^{\prime}$, and six Goldstone bosons $G^{0}$, $G^{\pm}$,
${G^{0}}^{\prime}$, and ${G^{\pm}}^{\prime}$, each with a multiplicity of one.
In the visible sector the gauge boson mass matrix with respect to the basis
$W^{\pm}$, $W^{3}$, $B$ is [68]
$\mathcal{M}_{\mathrm{gauge}}=\frac{{\varphi_{1}}^{2}+{\varphi_{2}}^{2}}{4}\begin{pmatrix}g^{2}&0&0&0\\
0&g^{2}&0&0\\ 0&0&g^{2}&gg^{\prime}\\
0&0&gg^{\prime}&{g^{\prime}}^{2}\end{pmatrix}.$ (24)
By mirror symmetry, the dark gauge bosons have an equivalent mass matrix with
respect to the basis ${W^{\pm}}^{\prime}$, ${W^{3}}^{\prime}$, $B^{\prime}$,
with $\varphi_{1}^{\prime}$ and $\varphi_{2}^{\prime}$ replacing $\varphi_{1}$
and $\varphi_{2}$. All gauge bosons have multiplicity three, corresponding to
one longitudinal and two transverse modes.
The mass of the top quark dominates the contributions from the visible
fermions, and is given by
${m_{t}}^{2}(\varphi)=\frac{1}{2}(y_{1}^{t}{\varphi_{1}}^{2}+y_{2}^{t}{\varphi_{2}}^{2}),$
(25)
where $y_{i}^{t}$ indicates the Yukawa coupling of the doublet $\Phi_{i}$ to
the top quark. In the dark sector, the heaviest quark is not necessarily the
mirror partner of the visible top quark. We denote the Yukawa couplings of the
heaviest dark quark by $y_{i}^{h}$, and obtain its mass by replacing
$y_{i}^{t}$, $\varphi_{1}$ and $\varphi_{2}$ with $y_{i}^{h}$,
$\varphi_{1}^{\prime}$ and $\varphi_{2}^{\prime}$ respectively in the above
equation. Since only $\Phi_{2}^{\prime}$ gains a VEV during the dEWPT,
$y_{2}^{h}$ is the relevant parameter when considering the fermionic
contributions to the FTEP; its value depends on the choice of dark Yukawa
couplings. The multiplicities of the top quark and heaviest dark quark are
both 12.
With the field-dependent masses determined, the next point to address is the
choice of renormalisation scheme for the zero-temperature one-loop
corrections. We will use a cut-off regularisation scheme rather than the
standard $\overline{MS}$ dimensional regularisation commonly applied in
two-Higgs-doublet models: the disparate electroweak scales in our model, $v$
and $w$, make the FTEP highly sensitive to the choice of renormalisation scale
$\mu$. In particular, the tree-level minimum changes drastically for differing
values of $\mu$, whereas cut-off regularisation automatically preserves
tree-level minima. The scheme is given by
$V_{1}(\varphi,0)=\sum_{i}\pm\frac{n_{i}}{64\pi^{2}}\left[{m_{i}}^{4}(\varphi)\left(\log\frac{{m_{i}}^{2}(\varphi)}{{m_{i}}^{2}(v)}-\frac{3}{2}\right)+{m_{i}}^{2}(\varphi){m_{i}}^{2}(v)\right],$
(26)
where ${m_{i}}^{2}(v)$ indicates that the mass is calculated at the tree-level
minimum given by $\varphi_{i}=v_{i}$ and $\varphi_{i}^{\prime}=w_{i}$.
However, $\log({m_{i}}^{2}(\varphi)/{m_{i}}^{2}(v))$ is logarithmically
divergent for $i=G^{0}$, $G^{\pm}$, ${G^{0}}^{\prime}$, and ${G^{\pm}}^{\prime}$,
as the Goldstone bosons are massless at the tree-level minimum. This has been
addressed in both the standard model [69] and a two-Higgs-doublet model [70];
the issue is alleviated by adjusting the Goldstone contributions to be
$\sum_{i=G^{0},G^{\pm}}\frac{n_{i}}{64\pi^{2}}\left[{m_{i}}^{4}(\varphi)\left(\log\frac{{m_{i}}^{2}(\varphi)}{{m_{\mathrm{IR}}}^{2}(v)}-\frac{3}{2}\right)\right],$
(27)
where ${m_{\mathrm{IR}}}^{2}(v)$ is some infrared mass scale that both
references take to be ${m_{h}}^{2}(v)$, the mass of the SM Higgs boson. We
adapt this condition to our situation by choosing ${m_{\mathrm{IR}}}^{2}(v)$
to be ${m_{h}}^{2}(v)$ for the visible Goldstone bosons ${G^{0}}$ and
${G^{\pm}}$, and ${m_{h}^{\prime}}^{2}(v)$ for the dark Goldstone bosons
${G^{0}}^{\prime}$ and ${G^{\pm}}^{\prime}$.
So, altogether, the one-loop corrections are given at zero temperature by
$\begin{split}V_{1}(\varphi,0)&=\sum_{i\in\mathcal{F}}\pm\frac{n_{i}}{64\pi^{2}}\left[{m_{i}}^{4}(\varphi)\left(\log\frac{{m_{i}}^{2}(\varphi)}{{m_{i}}^{2}(v)}-\frac{3}{2}\right)+{m_{i}}^{2}(\varphi){m_{i}}^{2}(v)\right]\\
&+\sum_{i=G^{0},G^{\pm}}\frac{n_{i}}{64\pi^{2}}\left[{m_{i}}^{4}(\varphi)\left(\log\frac{{m_{i}}^{2}(\varphi)}{{m_{h}}^{2}(v)}-\frac{3}{2}\right)\right]\\
&+\sum_{i={G^{0}}^{\prime},{G^{\pm}}^{\prime}}\frac{n_{i}}{64\pi^{2}}\left[{m_{i}}^{4}(\varphi)\left(\log\frac{{m_{i}}^{2}(\varphi)}{{m_{h}^{\prime}}^{2}(v)}-\frac{3}{2}\right)\right],\end{split}$
(28)
where
$\mathcal{F}=\{h,h^{\prime},A_{1}^{0},A_{2}^{0},J_{1}^{0},J_{2}^{0},H^{\pm},{H^{\pm}}^{\prime},W^{\pm},Z,{W^{\pm}}^{\prime},Z^{\prime},t,t^{\prime}\}$
lists all species except the Goldstone bosons.
#### III.1.2 Thermal masses
To be able to trust our perturbative calculations at the critical temperature,
we need to apply a daisy resummation procedure to account for higher-loop
corrections. Considering the one-loop corrections as a function of the field-
dependent masses,
$V_{1}({m_{i}}^{2}(\varphi),T)\equiv V_{1}(\varphi,0)+\Delta
V_{1}(\varphi,T),$ (29)
the standard resummation methods of Parwani [71] and Arnold and
Espinosa [72] produce
$V_{1,\mathrm{P}}=V_{1}({m_{i}}^{2}(\varphi,T),T)$ (30)
and
$V_{1,\mathrm{A-E}}=V_{1}({m_{i}}^{2}(\varphi),T)+\frac{T}{12\pi}\sum_{i=\mathrm{bosons}}n_{i}\left[{m_{i}}^{3}(\varphi)-{m_{i}}^{3}(\varphi,T)\right]$
(31)
respectively.
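The Arnold-Espinosa correction of Eq. 31 is straightforward to sketch once the field-dependent and thermal masses are known; the numbers below are toy inputs, not model predictions:

```python
import math

def daisy_AE(T, bosons):
    """Arnold-Espinosa daisy term of Eq. (31).

    bosons: list of (n_i, m_i(phi), m_i(phi, T)) tuples, masses in GeV; in the
    real calculation the masses come from diagonalising the (thermally
    corrected) mass matrices.
    """
    return (T / (12.0 * math.pi)) * sum(
        n * (m**3 - mT**3) for n, m, mT in bosons)

# One bosonic species whose thermal mass exceeds its field-dependent mass:
corr = daisy_AE(1500.0, [(1, 400.0, 600.0)])
print(corr)  # negative: the daisy term deepens the potential here
```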
These expressions require the calculation of the thermal masses
${m_{i}}^{2}(\varphi,T)$. Only scalars and the longitudinal components of the
gauge bosons gain thermal masses. These are calculated by adding thermal
correction matrices to the scalar and gauge boson mass matrices prior to
diagonalisation [73]. For the scalars, this is given by
$(\delta\mathcal{M}_{\mathrm{scalar}})_{ij}=\frac{T^{2}}{24}\sum_{k}c_{k}n_{k}\frac{\partial^{2}{m_{k}}^{2}(\varphi)}{\partial\varphi_{i}\partial\varphi_{j}}$
(32)
where $c_{k}$ is $1$ for bosons and $-1/2$ for fermions, the sum runs over all
particle species $k$, and $\varphi_{i}$ and $\varphi_{j}$ run over the four
background fields $\varphi_{1}$, $\varphi_{2}$, $\varphi_{1}^{\prime}$, and
$\varphi_{2}^{\prime}$. For the gauge bosons, the
thermal correction matrix is
$\delta\mathcal{M}_{\mathrm{gauge}}=2T^{2}\begin{pmatrix}g^{2}&0&0&0\\
0&g^{2}&0&0\\ 0&0&g^{2}&0\\ 0&0&0&{g^{\prime}}^{2}\end{pmatrix}.$ (33)
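Putting Eqs. 24 and 33 together, the thermally corrected longitudinal gauge-boson masses follow from diagonalising $\mathcal{M}_{\mathrm{gauge}}+\delta\mathcal{M}_{\mathrm{gauge}}$. A sketch with illustrative values of $g$, $g^{\prime}$, $T$, and the background fields (not fitted inputs):

```python
import numpy as np

# Thermal (Debye-corrected) longitudinal gauge masses: Eq. (24) plus Eq. (33).
g, gp = 0.65, 0.36                 # illustrative gauge couplings
phi1, phi2, T = 246.0, 0.0, 150.0  # illustrative field values and temperature

pref = (phi1**2 + phi2**2) / 4.0
M = pref * np.array([[g*g, 0, 0, 0],
                     [0, g*g, 0, 0],
                     [0, 0, g*g, g*gp],
                     [0, 0, g*gp, gp*gp]])
dM = 2.0 * T**2 * np.diag([g*g, g*g, g*g, gp*gp])

masses_sq = np.linalg.eigvalsh(M + dM)   # ascending squared masses
print(np.sqrt(masses_sq))  # longitudinal photon-like, W_L (x2), Z_L masses, GeV
```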
### III.2 Characterising the phase transition
With the effective potential in hand, we can now determine the properties of
the dEWPT. To allow for electroweak baryogenesis, we search for phase
transitions that are first-order – that is, where the effective potential
develops multiple minima separated by energy barriers. Such a transition is
characterised by the critical temperature $T_{C}$, determined by
$V_{\mathrm{eff}}(0)|_{T=T_{C}}=V_{\mathrm{eff}}(\phi_{C})|_{T=T_{C}},$ (34)
where $\phi_{C}$ indicates the field values at the symmetry-breaking minimum
at $T=T_{C}$. The other relevant parameter of the phase transition is its
_strength_ , given by $\xi=\phi_{C}/T_{C}$. For the dark baryon asymmetry to
avoid being washed out following the dEWPT, the sphaleron rate must be
sufficiently suppressed; this corresponds to the condition $\xi\gtrsim 1$, or
that the phase transition is _strongly first-order_. (While this is the
conventional criterion used to avoid sphaleron washout in electroweak
baryogenesis theories, it is not gauge invariant; both $\phi_{C}$ and $T_{C}$
suffer gauge dependence when calculated to a finite order of perturbation
theory [74]. Despite this, it remains in use in the literature, including in
2HDM implementations of EWBG; see e.g. [75, 76].)
In this section we describe our method for calculating these properties and
identifying strong first-order phase transitions. A number of difficulties
arise in this discussion that produce non-negligible theoretical uncertainties
in our calculations; however, we find that these are manageable, as we are not
aiming to precisely calculate the strength of the phase transition, but merely
ensuring that the bound $\xi\gtrsim 1$ is satisfied.
#### III.2.1 Finding the critical temperature
The actual temperature at which a first-order EWPT commences is the nucleation
temperature $T_{N}$, where tunnelling between the minima occurs at a
sufficient rate for bubbles of broken phase to be nucleated. This is a
difficult quantity to precisely determine, and requires calculation of
tunnelling probabilities and bubble profiles that are beyond the scope of this
analysis [77]. $T_{C}$ is much easier to determine, and is a common stand-in
when analysing first-order phase transitions, as it usually lies just above
$T_{N}$.
We now describe an algorithm for finding $T_{C}$. At $T=0$, the global minimum
of the effective potential is given by the ASB limit $\varphi_{i}=v_{i}$,
$\varphi_{i}^{\prime}=w_{i}$ where $v_{1}\gg v_{2}$, $w_{2}\gg w_{1}$, and
$w_{2}\gg v_{1}$. At very high temperatures, the only minimum of the effective
potential is at the origin. To identify a dark first-order transition, we
track the value of $\varphi_{2}^{\prime}$ at the minimum of
$V_{\mathrm{eff}}(\varphi,T)$, denoting this quantity as $w_{2}(T)$. As $T$
increases, we observe that $w_{2}(T)$ decreases from $w_{2}$ at $T=0$ until a
given temperature at which it has a sudden discontinuity and drops to the
symmetric phase where $w_{2}(T)\sim 0$. The temperature at which this drop
occurs is the critical temperature $T_{C}$. We take the transition strength to
be
$\xi=w_{2}(T_{C})/T_{C},$ (35)
as all other background field values are much smaller than
$\varphi_{2}^{\prime}$ at the asymmetric minimum near the dEWPT.
To find the temperature at which the drop occurs, we start at $T=0$ and begin
by increasing the temperature with a large step size (on the order of
$w_{2}$). At each new temperature $T$, we calculate $w_{2}(T)$ by numerically
finding the minimum of the effective potential using the methods provided by
the Python package CosmoTransitions. When $w_{2}(T)$ jumps to being very small
(we use the condition that $w_{2}(T)<1$ GeV), the symmetric minimum is now the
global minimum. We then decrease the step size by an order of magnitude and
begin decreasing the temperature until the asymmetric minimum becomes the
global minimum again and $w_{2}(T)$ jumps back up to a large value. Repeating
this process, we zero in with increasing accuracy on the temperature at which
there is a discontinuity in $w_{2}(T)$, and we terminate the process when the
step size is small enough that the critical temperature has been determined to
a desired precision.
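The bracketing scan described above can be sketched in a few lines of Python. This is an illustration only: a generic toy quartic potential $V=D(T^{2}-T_{0}^{2})\varphi^{2}-ET\varphi^{3}+\tfrac{\lambda}{4}\varphi^{4}$ stands in for the full $V_{\mathrm{eff}}$, a simple grid search replaces the CosmoTransitions minimiser, and all parameter values are invented for the example:

```python
import numpy as np

# Toy stand-in for V_eff(phi2', T): the generic quartic potential
#   V = D*(T^2 - T0^2)*phi^2 - E*T*phi^3 + (lam/4)*phi^4,
# which has a first-order transition at T_C = T0 / sqrt(1 - E^2/(lam*D)).
D, E, lam, T0 = 0.1, 0.05, 0.1, 100.0

def V(phi, T):
    return D * (T**2 - T0**2) * phi**2 - E * T * phi**3 + 0.25 * lam * phi**4

def w2_of_T(T):
    """Field value at the global minimum of V(., T) (grid search + refinement)."""
    grid = np.linspace(1.0, 500.0, 2000)
    phi0 = grid[np.argmin(V(grid, T))]
    fine = np.linspace(phi0 - 0.5, phi0 + 0.5, 2001)
    phi1 = fine[np.argmin(V(fine, T))]
    # The origin is always a stationary point with V = 0; the asymmetric
    # candidate is the global minimum only if it lies below that.
    return phi1 if V(phi1, T) < 0.0 else 0.0

def find_Tc(T=0.0, step=50.0, tol=0.01, boundary=1.0):
    """March in T, reversing direction and shrinking the step by 10x each
    time w2(T) crosses `boundary` (the w2(T) < 1 GeV criterion in the text)."""
    in_broken = w2_of_T(T) > boundary
    while step > tol:
        T += step if in_broken else -step
        now_broken = w2_of_T(T) > boundary
        if now_broken != in_broken:   # crossed the transition
            in_broken = now_broken
            step /= 10.0
    return T
```

For this toy potential the critical temperature is known analytically, $T_{C}=T_{0}/\sqrt{1-E^{2}/\lambda D}$, and the scan reproduces it to within the final step size; the `boundary` argument is where the adjusted termination criterion of Section III.2.2 would enter.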
However, when we apply this algorithm we run into an issue: for many parameter
values of interest, $w_{2}(T)$ undergoes not one but two discontinuities as
the temperature changes. This corresponds to the presence of a second
asymmetric minimum between the symmetric minimum and the main asymmetric
minimum in regions near the critical temperature. This extra minimum can be
seen in Fig. 1, where we plot the effective potential as a function of
$\varphi_{2}^{\prime}$ for a range of temperatures around the critical
temperature, setting $\varphi_{1}=\varphi_{2}=\varphi_{1}^{\prime}=0$. This
secondary minimum was identified in Ref. [75] as an anomalous effect due to
the presence of small or negative field-dependent particle masses. To explain
and account for the presence of this artifact, we must address the
perturbative validity of our daisy-resummed effective potential.
Figure 1: The effective potential $V_{\mathrm{eff}}$ as a function of
$\varphi_{2}^{\prime}$ for a range of temperatures around the critical
temperature at $T_{C}\sim 1500$ GeV. The other background fields have been set
to $\varphi_{1}=\varphi_{2}=\varphi_{1}^{\prime}=0$. The potentials at
different temperatures have been vertically translated such that they coincide
at the origin in order to more clearly illustrate the anomalous behaviour of
$V_{\mathrm{eff}}$ for small values of $\varphi_{2}^{\prime}$.
#### III.2.2 The perturbative validity of the effective potential
In Ref. [66], the perturbative validity of the daisy resummation scheme was
discussed in the context of a simple model involving a scalar singlet $\phi$
with a quartic coupling $\lambda$. They identified an expansion parameter
$\lambda T/m(\phi)$ that must be small for daisy resummation to be valid.
Since $m(\phi)\propto|\phi|$, the expansion parameter will be larger for
smaller values of $\phi$, and indeed it is at low values of the background
field that the anomalous minimum appears – we suggest that its presence
implies that our perturbative calculations are not valid when background field
values are too small.
To quantify this, we need to adapt the expansion parameter for the simple case
to our more complex theory. This was done in a two-Higgs-doublet model in Ref.
[75]; their expansion parameter was of the same form as for the simple case,
with $\lambda$ chosen to be the largest quartic coupling in their potential, and
$m(\phi)$ taken to be the mass of the lightest of the additional scalars in
the Higgs sector. Their argument for considering only the masses of the new
scalars was that these provided the dominant corrections to the one-loop
potential. In our case, we have a similar situation, where the heavy
additional scalars – $A_{1}^{0}$, $A_{2}^{0}$, $J_{1}^{0}$, $J_{2}^{0}$,
$H^{\pm}$, and ${H^{\pm}}^{\prime}$ – give the strongest contributions to the
effective potential. So, by analogy, we define the perturbative expansion
parameter $\epsilon$ for our model to be
$\epsilon\equiv\frac{\max(z_{i})T}{\min(m_{j}(\varphi))},$ (36)
where $z_{i}$ is any of the fourteen quartic couplings in
$V_{\mathrm{M2HDM}}$, and $j$ counts over the heavy scalar species listed
above.
So, for a given value of $T$, and with
$\varphi_{1}=\varphi_{2}=\varphi_{1}^{\prime}=0$, there is a specific
value of $|\varphi_{2}^{\prime}|$ at which $\epsilon=1$, which we call the
perturbative boundary. For values of $|\varphi_{2}^{\prime}|$ below this, the
expansion parameter is greater than one and we cannot trust the
perturbative techniques used to calculate $V_{\mathrm{eff}}(\varphi,T)$. To
account for this, we adjust the algorithm to find the temperature at which
$w_{2}(T)$ drops discontinuously not to a value close to zero, but to a value
below this perturbative boundary. This gives only an estimate for $T_{C}$, as
we cannot measure the true critical temperature directly if we do not trust
the effective potential at the origin. However, as noted before, we are not
looking to calculate the specific value of the strength; as we are only
identifying phase transitions satisfying the condition $\xi\gtrsim 1$, our new
method for calculating $T_{C}$ is sufficiently accurate for our purposes.
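Eq. (36) and the resulting boundary are straightforward to transcribe. The sketch below is schematic: the actual heavy-scalar masses come from diagonalising the mass matrices of $V_{\mathrm{M2HDM}}$, whereas here we assume a hypothetical linear dependence $m_{j}(\varphi)\approx c_{j}|\varphi_{2}^{\prime}|$ purely for illustration:

```python
def epsilon(T, phi2p, quartics, mass_coeffs):
    """Expansion parameter of Eq. (36): max(z_i) * T / min_j m_j(phi).
    mass_coeffs encodes a toy linear mass model m_j = c_j * |phi2'|."""
    masses = [c * abs(phi2p) for c in mass_coeffs]
    return max(quartics) * T / min(masses)

def perturbative_boundary(T, quartics, mass_coeffs):
    """|phi2'| at which epsilon = 1; below it the expansion cannot be trusted."""
    return max(quartics) * T / min(mass_coeffs)
```

In this toy form the boundary scales linearly with $T$, so at the high critical temperatures found below, the untrusted region of field space can be sizeable.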
### III.3 Results
We begin our search for a strong dark electroweak phase transition at a
parameter point given by Table 3. This corresponds to the parameter selection
from the original paper that we considered in Section II, but with $z_{1}$ and
$z_{2}$ each increased by a factor of two to account for an erratum in Ref.
[24]. These values provide a good starting point as they satisfy all
conditions from Section II, producing an asymmetric symmetry breaking minimum
with $v=246$ GeV and $w=7276$ GeV. We also note that we initially assume
$\Phi_{2}$ and $\Phi_{1}$ to have identical Yukawa couplings, setting
$y_{2}^{h}=1$ when calculating the field-dependent fermion masses.
The phase transition at this parameter point is second-order. In the following
sections, we alter these parameters to find regions of parameter space in
which the dark electroweak phase transition is strongly first-order. We
identify a number of qualitatively different regions for which this is
possible, guided by considerations from Section IV in which extra requirements
are placed on the VEVs and particle masses of the scalar sector to ensure the
feasibility of certain portal interactions.
Parameters | Values
---|---
${m_{11}}^{2}$ | $-(87~{}\mathrm{GeV})^{2}$
${m_{12}}^{2}$ | $-(90~{}\mathrm{GeV})^{2}$
${m_{22}}^{2}$ | $-(2600~{}\mathrm{GeV})^{2}$
$z_{1}$, $z_{2}$ | $0.258$
$z_{3}$, $z_{8}$, $z_{9}$, $z_{10}$ | $0.8$
$z_{4}$, $z_{5}$, $z_{6}$, $z_{7}$, $z_{11}$, $z_{13}$, $z_{14}$ | $0.01$
$z_{12}$ | $1\times 10^{-8}$
Table 3: The initial parameter selection for the scalar potential
$V_{\mathrm{M2HDM}}$ at which we start the search of the parameter space.
#### III.3.1 Decreasing the mass of the dark Higgs boson
Figure 2: Plots of the phase transition strength $\xi$ and perturbative
expansion parameter $\epsilon$ at the critical temperature for different
values of the quartic coupling $z_{2}$. The left-hand plot uses the Parwani
daisy resummation scheme, and the right-hand plot uses the Arnold-Espinosa
resummation scheme. These values are calculated for $\rho=30$ and
$y_{2}^{h}=1$. Other unspecified parameters of the scalar potential are as
described in the text.
In the standard model the EWPT is expected to be first-order for small values
of the Higgs mass [61]. So, we adjust our parameters such that the dark Higgs
mass $m_{h}^{\prime}$ is reduced: the relevant coupling to consider is
$z_{2}$. To keep $w$ the same when $z_{2}$ is altered, we set
${m_{22}}^{2}=-\frac{1}{2}z_{2}w^{2}$. In Fig. 2, we plot the strength of the
phase transition, $\xi$, in red for small values of $z_{2}$. As expected, for
low values of $z_{2}$ the dEWPT is sufficiently strong, and gets stronger as
$z_{2}$ gets smaller. This general behaviour remains true for both the
Parwani and Arnold-Espinosa resummation procedures – given by the left-hand
and right-hand plots, respectively – with the latter method producing slightly
stronger transitions for the same value of $z_{2}$. In the same figure we also
plot the value of the perturbative expansion parameter $\epsilon$ at the
critical temperature in blue. Since we can only trust our calculation of
$V_{\mathrm{eff}}(\varphi,T)$ for $\epsilon<1$, we only extend the graph until
the value of $z_{2}$ at which $\epsilon$ becomes too large. The inverse
relationship between $\epsilon$ and $\xi$ implies that our perturbative
methods are most reliable when the phase transition is very strong, which
allows us to be confident in our results in the relevant regions of parameter
space.
#### III.3.2 Increasing the dark electroweak scale
Figure 3: Plots of the phase transition strength $\xi$ and perturbative
expansion parameter $\epsilon$ at the critical temperature for different
values of the quartic coupling $z_{2}$. The left-hand plot uses the Parwani
daisy resummation scheme, and the right-hand plot uses the Arnold-Espinosa
resummation scheme. These values are calculated for $\rho=1000$ and
$y_{2}^{h}=1$. Other unspecified parameters of the scalar potential are as
described in the text.
We now explore what happens when we alter the dark electroweak scale $w$. This
is motivated by the upcoming discussion in Section IV, where varying the dark
electroweak scale gives us greater freedom in choosing portal interactions.
From now on we follow the lead of the original paper and work primarily with
the quantity $\rho\equiv w/v$. Since the value of $v$ is fixed by the VEV of
the SM Higgs boson, we have $w=246\rho$ GeV.
When we increase $\rho$, the only necessary change to make to the parameters
of Table 3 to satisfy the constraints of Section II is to decrease $z_{14}$.
For $\rho=1000$, we set $z_{14}=1\times 10^{-4}$. As before, we calculate the
strength and expansion parameter of the phase transition for small $z_{2}$;
these are shown in Fig. 3, and are qualitatively very similar to the results
of Fig. 2, where $\rho=30$. So, for a given selection of quartic couplings,
altering $\rho$ has minimal effect on the strength of the phase transition.
#### III.3.3 Decreasing the masses of the additional scalars
Figure 4: Plots of the ratio of the critical temperature $T_{C}$ and heaviest
additional scalar mass $M_{H}$ (left-most column), phase transition strength
$\xi$ (middle column), and perturbative expansion parameter $\epsilon$ at the
critical temperature (right-most column) for parameter points with a varying
scalar coupling $z_{2}$ and scaling factor $k$. The upper (lower) plot in each
column corresponds to calculations using the Parwani (Arnold-Espinosa) daisy
resummation scheme. These values were calculated for $\rho=300$ and
$y_{2}^{h}=1$. Other unspecified parameters of the scalar potential are as
described in the text.
In these regions of parameter space, while the small values of $z_{2}$ lower
the mass of the dark Higgs boson $h^{\prime}$, the additional scalars –
$A_{1}^{0}$, $A_{2}^{0}$, $J_{1}^{0}$, $J_{2}^{0}$, $H^{\pm}$, and
${H^{\pm}}^{\prime}$ – remain heavy. This allows for the FCNC constraints to
be met; however, the lepton portal interactions in Section IV require these
heavy scalars to be in thermal equilibrium, and thus they must have masses
lower than the critical temperature of the dark electroweak phase transition,
$T_{C}$. This cannot be achieved with the parameter selections we have
identified so far. For example, consider a strong first-order dEWPT at which
$z_{2}=0.02$, $\rho=30$,
${m_{22}}^{2}=-\frac{1}{2}z_{2}w^{2}$, and all other parameters are given as
in Table 3. At this parameter point we have a strong first-order transition
with $\xi=2.2$ and $T_{C}=1715$ GeV, while the heavy scalar masses range
between $4500$ and $4800$ GeV; so, we must identify new regions of parameter
space for these masses to lie under $T_{C}$.
The masses of the additional scalars depend primarily on $z_{3}$ and $z_{9}$,
so we must reduce these parameters. However, to maintain an asymmetric
symmetry breaking structure for the scalar potential, we must satisfy the
constraint that $z_{3},z_{8},z_{9}\gg z_{6},z_{7},z_{11},z_{13},z_{14}$. To
roughly achieve both requirements in a way that can be easily parametrised, we
divide all scalar couplings $z_{i}$ (with $i$ counting from 3 to 14) by a
scaling factor $k$. This leaves $z_{1}$ and $z_{2}$ unchanged – $z_{1}$ must
remain fixed to preserve the tree-level mass of the visible Higgs boson at the
standard model value of 125 GeV, and we vary $z_{2}$ independently from $k$ to
find parameter points for which the phase transition is strongly first-order
and the additional scalars are sufficiently light.
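The rescaling just described is simple to state in code. A minimal helper (the function name and dictionary layout are our own) divides $z_{3}$ through $z_{14}$ by $k$ while leaving $z_{1}$ and $z_{2}$ untouched:

```python
def scale_couplings(z, k):
    """Divide z_3 ... z_14 by the scaling factor k; z_1 and z_2 are left fixed,
    preserving the visible Higgs mass and the independent scan over z_2."""
    return {name: (val if name in ("z1", "z2") else val / k)
            for name, val in z.items()}
```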
As the additional scalars are now lighter, we have a potential conflict with
the FCNC constraints. We can avoid this concern by working in regions with
large values of $\rho$, as the higher dark electroweak scale $w$ raises both
the critical temperature and the masses of the additional scalars. This allows
these masses to be large enough to suppress the FCNCs while still being
smaller than the temperature of the dEWPT.
For our search we choose $\rho=300$; this necessitates that prior to being
divided by the scaling factor $k$, $z_{14}$ is set to $1\times 10^{-3}$. The
eleven scalar couplings from $z_{3}$ to $z_{13}$ again have their values
before scaling given by those from Table 3. We vary the scaling factor $k$
from 1 up to 75, and work in a region of very small values for $z_{2}$ given
by $5\times 10^{-4}\leq z_{2}\leq 5\times 10^{-3}$. The results of this scan
are given in Fig. 4, where all calculations are performed using both the
Parwani and Arnold-Espinosa daisy resummation schemes. The left-most column of
figures gives the value of the ratio $T_{C}/M_{H}$, where $T_{C}$ is the
critical temperature of the dEWPT and $M_{H}$ is the mass of the heaviest
additional scalar. Regions in white have $T_{C}/M_{H}<1$, indicating parameter
selections for which the additional scalars are not lighter than the dEWPT
temperature. The middle column of figures shows the strength of the phase
transition; here, white regions indicate that $\xi<1$, and thus give
parameters for which the phase transition is not sufficiently strong. The
final column of figures gives the perturbative expansion parameter $\epsilon$
at $T_{C}$. The black regions of these plots correspond to $\epsilon>1$, and
thus indicate regions where we cannot trust the calculations at the critical
temperature. So, the valid parameter points in this region of parameter space
are those that are not contained in any of these forbidden regions.
These results display some features that we expect to see, and some that are
more curious. Increasing the scaling factor $k$ allows for the masses of the
additional scalars to be below the critical temperature; decreasing $z_{2}$
both strengthens the phase transition and increases the validity of the
perturbative expansion, as expected. However, there are notable artifacts in
our results: points where the perturbative expansion parameter unexpectedly
becomes large again for low values of $z_{2}$. While similar features occur
when using both daisy resummation schemes, they do not occur at the same
parameter points, so we take them to be anomalous effects.
#### III.3.4 Decreasing the mass of the heaviest dark quark
Figure 5: Plots of the phase transition strength $\xi$ and perturbative
expansion parameter $\epsilon$ at the critical temperature for different
values of the quartic coupling $z_{2}$. The left-hand plot uses the Parwani
daisy resummation scheme, and the right-hand plot uses the Arnold-Espinosa
resummation scheme. These values are calculated for $\rho=30$ and
$y_{2}^{h}=0.02$. Figure 6: Plots of the ratio of the critical temperature
$T_{C}$ and heaviest additional scalar mass $M_{H}$ for parameter points with a
varying scalar coupling $z_{2}$ and scaling factor $k$. The left-hand plot
uses the Parwani daisy resummation scheme, and the right-hand plot uses the
Arnold-Espinosa resummation scheme. These values were calculated for
$\rho=300$ and $y_{2}^{h}=0.02$. Other unspecified parameters of the scalar
potential are as described in the text.
We now briefly consider one final situation that assists our asymmetry
reprocessing analysis: lowering the mass of the heaviest dark quark. We
consider the case when the largest dark quark Yukawa coupling $y_{2}^{h}$ is
equal in size to the standard model Yukawa coupling involving the bottom
quark, $y_{1}^{b}\sim 0.02$, and repeat the calculations from Fig. 2 and Fig.
4 with all other parameters unchanged. These new results are given in Fig. 5
and Fig. 6, and are markedly different to the earlier results. Comparing Fig.
5 with Fig. 2, we see that the strength of the phase transition for a given
value of $z_{2}$ is up to twice as strong, and our calculations remain
perturbatively valid to a slightly higher value of $z_{2}$. The more notable
differences come in the comparison between Fig. 6 and Fig. 4. We have only
included the plots for $T_{C}/M_{H}$ in the new figure, as the requirements
that $\xi>1$ and $\epsilon<1$ are now satisfied at every point in this region
of parameter space. Whereas the size of $T_{C}/M_{H}$ in the earlier case
depended mainly on the scaling parameter $k$, there is now a clear
dependence on both parameters, with the optimum value of $T_{C}/M_{H}$
occurring for larger values of $k$ and $z_{2}$.
#### III.3.5 Summary of results
In this section we identified a number of regions of parameter space for which
the dark electroweak phase transition is strongly first-order, providing the
out-of-equilibrium dynamics that allow for electroweak baryogenesis to occur
in the dark sector. To summarise these results, in Table 4 we give an example
parameter selection for each region, along with a link to the subsection in
which that region was motivated. To keep the results concise, we only state
the parameters of $V_{\mathrm{M2HDM}}$ that differ from those listed in Table
3, noting that $m_{22}^{2}$ is given by $-\frac{1}{2}z_{2}(v\rho)^{2}$. The
exception to this is when the scaling factor $k$ is increased. Then, the
quoted values of all parameters from $z_{3}$ to $z_{14}$ must be reduced by
this scaling factor, with the value of $z_{14}$ given prior to scaling. For
each parameter point we list the strength of the phase transition as well as
two parameters relevant to the discussions of the following section: the
critical temperature $T_{C}$ and heaviest scalar mass $M_{H}$.
Section | $z_{2}$ | $z_{14}$ | $k$ | $\rho$ | $y^{h}_{2}$ | $\xi$ | $T_{C}$ [GeV] | $M_{H}$ [GeV]
---|---|---|---|---|---|---|---|---
III.3.1 | $0.01$ | $0.01$ | $1$ | $30$ | $1$ | $2.7$ | $1.5\times 10^{3}$ | $4.8\times 10^{3}$
III.3.2 | $0.02$ | $0.0001$ | $1$ | $1000$ | $1$ | $2.6$ | $5.0\times 10^{4}$ | $1.6\times 10^{5}$
III.3.3 | $0.0035$ | $0.001$ | $25$ | $300$ | $1$ | $2.3$ | $1.7\times 10^{4}$ | $8.8\times 10^{3}$
III.3.4 | $0.01$ | $0.01$ | $1$ | $30$ | $0.02$ | $5.0$ | $1.2\times 10^{3}$ | $4.8\times 10^{3}$
III.3.4 | $0.01$ | $0.001$ | $15$ | $300$ | $0.02$ | $2.2$ | $1.8\times 10^{4}$ | $1.2\times 10^{4}$
Table 4: Example parameter points for each of the regions of parameter space
discussed in the sections indicated in the first column.
## IV Asymmetry Reprocessing
Following electroweak baryogenesis at the dark electroweak phase transition,
asymmetries are produced in the dark particle numbers $B^{\prime}$ and/or
$L^{\prime}$. In this section we investigate the cross-sector portal
interactions through which these asymmetries may be transferred to the visible
sector. Our goal is to do this in such a way that we reproduce the 5:1 ratio
between the present-day cosmological mass densities of visible and dark
matter, $\Omega_{\mathrm{DM}}\simeq 5\Omega_{\mathrm{VM}}$, where
$\Omega_{X}=n_{X}m_{X}/\rho_{c}$. Since both visible and dark matter are
predominantly composed of stable baryonic matter, $n_{\mathrm{VM}}$ and
$n_{\mathrm{DM}}$ are proportional to the net baryon numbers $B$ and
$B^{\prime}$, and $m_{\mathrm{VM}}$ and $m_{\mathrm{DM}}$ are proportional to
the confinement scales $\Lambda_{\mathrm{QCD}}$ and $\Lambda_{\mathrm{DM}}$.
We can then recast the mass density relationship as
$\frac{B^{\prime}}{B}\frac{\Lambda_{\mathrm{DM}}}{\Lambda_{\mathrm{QCD}}}\simeq
5.$ (37)
In the original paper [24], the thermal leptogenesis mechanism produced equal
baryon asymmetries between the two sectors, and so only the relative sizes of
the confinement scales could reproduce the ratio. As the standard model
confinement scale $\Lambda_{\mathrm{QCD}}\sim 200$ MeV, this required
$\Lambda_{\mathrm{DM}}\sim 1$ GeV. In our work, the transfer of baryon
asymmetry from the visible sector to the dark sector can produce a range of
different values for the ratio $B^{\prime}/B$, and thus allow for greater
variance in $\Lambda_{\mathrm{DM}}$. However, we require that
$\Lambda_{\mathrm{DM}}>\Lambda_{\mathrm{QCD}}$, as this is necessary to lower
the temperature of the dark sector relative to the visible sector and satisfy
bounds from Big Bang nucleosynthesis. So, the asymmetry reprocessing must be
able to produce a baryon number ratio satisfying
$\frac{B^{\prime}}{B}\lesssim 5.$ (38)
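Eqs. (37) and (38) can be checked numerically; a small sketch (function names and the value $\Lambda_{\mathrm{QCD}}\sim 200$ MeV taken from the text):

```python
LAMBDA_QCD = 0.2  # GeV, visible confinement scale (~200 MeV)

def lambda_dm_required(Bp_over_B, omega_ratio=5.0):
    """Dark confinement scale (GeV) that satisfies Eq. (37) for a given B'/B."""
    return omega_ratio * LAMBDA_QCD / Bp_over_B

def satisfies_bbn_bound(Bp_over_B):
    """Eq. (38): Lambda_DM > Lambda_QCD is equivalent to B'/B < 5."""
    return lambda_dm_required(Bp_over_B) > LAMBDA_QCD
```

The original paper's case $B^{\prime}/B=1$ indeed gives $\Lambda_{\mathrm{DM}}\sim 1$ GeV in this form.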
In this section we analyse the asymmetry transfer by working with chemical
potentials, where for a relativistic species $i$, its chemical potential
$\mu_{i}$ is related to its number density asymmetry (for $|\mu_{i}|\ll T$) by
$n_{i}-\bar{n}_{i}=\frac{g_{i}T^{3}}{6}\begin{cases}\beta\mu_{i},&(\mathrm{fermions})\\ 2\beta\mu_{i},&(\mathrm{bosons})\end{cases}$ (39)
where $g_{i}$ is the multiplicity of the particle species and $\beta=1/T$
[78]. At a given temperature, constraints on these potentials arise from the
reactions that are in thermal equilibrium. If there are fewer constraints than
chemical potentials, then there is a conserved charge associated with each
free parameter, and we can determine the ratio $B^{\prime}/B$ in terms of the
initial values of the conserved charges.
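Eq. (39) transcribes directly into code (with $\beta=1/T$; the function name is ours):

```python
def number_asymmetry(mu, T, g, fermion=True):
    """Number-density asymmetry n_i - nbar_i of a relativistic species
    with |mu| << T, Eq. (39); bosons carry twice the fermion coefficient."""
    base = g * T**3 / 6.0 * (mu / T)   # beta * mu with beta = 1/T
    return base if fermion else 2.0 * base
```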
Directly after the dEWPT, the initial conditions are $B,L=0$
and $B^{\prime},L^{\prime}\neq 0$, with the initial dark particle asymmetries
determined by the specifics of dark EWBG. In the minimal SM implementation of
EWBG, a baryon asymmetry is generated in the top quarks thanks to their large
Yukawa coupling [59]. Since dEWPT is mediated predominantly by the additional
Higgs doublet $\Phi_{2}^{\prime}$, whose $CP$-violating parameters and Yukawa
couplings are largely unconstrained, dark EWBG could involve a number of
baryonic and leptonic species. We do not investigate the specifics of
asymmetry creation in this work, and so leave the initial asymmetries in
$B^{\prime}$ and $L^{\prime}$ as free parameters. Their relative sizes can
then be restricted by the condition Eq. 38.
In this section we consider cross-sector effective operators that conserve a
total particle number – that is, some combination of particle numbers from
each sector – to avoid the washout of the initial dark asymmetries. There is
no restriction on whether viable operators need to involve just leptonic
species, just baryonic species, or both, in either sector. However, operators
involving just leptonic species in the visible sector must be active prior to
the vEWPT to allow electroweak sphaleron reprocessing to generate a visible
baryon asymmetry.
### IV.1 Chemical equilibrium conditions
Before addressing any specific cross-sector portal interactions, we list the
general chemical potential constraints that hold separately in each sector. We
also discuss the temperatures at which these interactions will be in thermal
equilibrium.
#### IV.1.1 The visible sector
For the effective operators we consider later in this section, the asymmetry
transfer will occur before the vEWPT. We assign chemical potentials to the
Higgs doublets $\Phi_{a}$, the left-handed lepton doublets $l_{iL}$, the
right-handed leptons $e_{iR}$, the left-handed quark doublets $q_{iL}$, and
the right-handed quarks $u_{iR}$ and $d_{iR}$ (note that each doublet receives
only one chemical potential; this is due to weak interactions involving
the $W^{\pm}$ gauge bosons, which are massless above the scale of electroweak
symmetry breaking and thus have $\mu_{W^{\pm}}=0$), where $a=1,2$ and
$i=1,2,3$ are generation indices. We choose to work in the diagonal Yukawa
basis for the quark fields as in Eq. 18; thus Cabibbo mixing between left-
handed quarks of different generations implies that
$\mu_{q_{1L}}=\mu_{q_{2L}}=\mu_{q_{3L}}\equiv\mu_{q_{L}}$ (PMNS mixing would
imply an equivalent relationship for left-handed leptons; however, as we do
not specify a neutrino mass generation mechanism in this work, we keep the
analysis general by maintaining independent chemical potentials for
left-handed leptons of different generations, as in Refs. [37, 38]). When both
Higgs doublets are in thermal equilibrium, mixing between them sets
$\mu_{\Phi_{1}}=\mu_{\Phi_{2}}\equiv\mu_{\Phi}$.
When in thermal equilibrium, Yukawa interactions provide the following
restrictions [78]:
$\begin{split}&\mu_{q_{L}}+\mu_{\Phi}-\mu_{u_{iR}}=0,\quad\mu_{q_{L}}-\mu_{\Phi}-\mu_{d_{iR}}=0,\\ &\mu_{l_{iL}}-\mu_{\Phi}-\mu_{e_{iR}}=0.\end{split}$ (40)
There are also restrictions from sphaleron processes – above the vEWPT,
both electroweak and QCD sphalerons will be in equilibrium,
leading to the conditions [79, 80]
$9\mu_{q_{L}}+\sum_{i=1}^{3}\mu_{l_{iL}}=0,\quad
6\mu_{q_{L}}-\sum_{i=1}^{3}(\mu_{u_{iR}}+\mu_{d_{iR}})=0.$ (41)
The final condition to consider above the vEWPT is the hypercharge neutrality
of the universe, which gives the relation
$2N_{\Phi}\mu_{\Phi}+3\mu_{q_{L}}+\sum_{i=1}^{3}(2\mu_{u_{iR}}-\mu_{d_{iR}}-\mu_{l_{iL}}-\mu_{e_{iR}})=0,$
(42)
where $N_{\Phi}$ is the number of Higgs doublets in thermal equilibrium.
While hypercharge neutrality applies at all temperatures above the vEWPT, the
other relations quoted above only apply when the interaction is in thermal
equilibrium. Above the vEWPT temperature, both sphaleron processes are in
thermal equilibrium for all temperatures $T<10^{12}$ GeV [78]. For a Yukawa
interaction with dimensionless coupling $\lambda$, the approximate rate
$\Gamma\sim\lambda^{2}T$ implies that a given Yukawa interaction is in
equilibrium for $T\lesssim\lambda^{2}\times 10^{16}$ GeV. Lighter fermions thus enter
thermal equilibrium at lower temperatures.
Finally, we give the combinations of chemical potentials that correspond to
the visible baryon and lepton numbers, $B$ and $L=\sum_{i=1}^{3}L_{i}$:
$\begin{split}B&\leftrightarrow 6\mu_{q_{L}}+\sum_{i=1}^{3}(\mu_{u_{iR}}+\mu_{d_{iR}}),\\ L_{i}&\leftrightarrow 2\mu_{l_{iL}}+\mu_{e_{iR}}.\end{split}$ (43)
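The counting implied by Eqs. (40)-(42) can be checked mechanically. The numpy sketch below is our own construction (assuming $N_{\Phi}=2$ and all three generations of Yukawa couplings in equilibrium): it assembles the constraint rows and counts the independent conserved charges as the dimension of the null space:

```python
import numpy as np

# Chemical potentials, ordered as:
# [mu_qL, mu_u1R..u3R, mu_d1R..d3R, mu_l1L..l3L, mu_e1R..e3R, mu_Phi]
N = 14
qL, uR, dR, lL, eR, Phi = 0, slice(1, 4), slice(4, 7), slice(7, 10), slice(10, 13), 13

rows = []
for i in range(3):
    yu = np.zeros(N); yu[qL] = 1; yu[Phi] = 1; yu[1 + i] = -1       # up-type Yukawa
    yd = np.zeros(N); yd[qL] = 1; yd[Phi] = -1; yd[4 + i] = -1      # down-type Yukawa
    ye = np.zeros(N); ye[7 + i] = 1; ye[Phi] = -1; ye[10 + i] = -1  # lepton Yukawa
    rows += [yu, yd, ye]                                            # Eq. (40)

ew = np.zeros(N); ew[qL] = 9; ew[lL] = 1                     # EW sphaleron, Eq. (41)
qcd = np.zeros(N); qcd[qL] = 6; qcd[uR] = -1; qcd[dR] = -1   # QCD sphaleron, Eq. (41)
N_Phi = 2
hyp = np.zeros(N)                                            # hypercharge, Eq. (42)
hyp[Phi] = 2 * N_Phi; hyp[qL] = 3; hyp[uR] = 2
hyp[dR] = -1; hyp[lL] = -1; hyp[eR] = -1
rows += [ew, qcd, hyp]

A = np.array(rows)
rank = np.linalg.matrix_rank(A)
n_free = N - rank   # number of independent conserved charges
```

The QCD-sphaleron row turns out to be a linear combination of the six quark-Yukawa rows, so only 11 of the 12 constraints are independent, leaving 3 free directions – the familiar conserved charges $B/3-L_{i}$ when all Yukawas are in equilibrium. The same construction applies to the dark-sector conditions, Eqs. (45)-(48).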
### IV.2 The dark sector
In the dark sector we only consider asymmetry transfer at temperatures below
the dEWPT. At this transition, the ${W^{\pm}}^{\prime}$ gauge bosons will
become massive and initially gain a chemical potential; thus, we assign a
different chemical potential for each field in a doublet. Due to the parity
symmetry between the visible and dark sectors, these doublets will be right-
handed, and we define their field content by
$q_{iR}^{\prime}=\begin{pmatrix}u_{iR}^{\prime}\\ d_{iR}^{\prime}\end{pmatrix},\quad l_{iR}^{\prime}=\begin{pmatrix}\nu_{iR}^{\prime}\\ e_{iR}^{\prime}\end{pmatrix},\quad\Phi_{jR}^{\prime}=\begin{pmatrix}{\phi_{j}^{+}}^{\prime}\\ {\phi_{j}^{0}}^{\prime}\end{pmatrix},$ (44)
where $i$ and $j$ are generation indices. We assign a chemical potential to
each of these fields, as well as to the left-handed leptons $e_{iL}^{\prime}$
and left-handed quarks $u_{iL}^{\prime}$ and $d_{iL}^{\prime}$. Here we work
in the diagonal Yukawa basis for the dark quarks (in this basis, the dark
fields we assign chemical potentials to are not the mirror partners of the
visible fields that are assigned potentials; thus, when we refer to dark
fermion generations and flavours, we are not referring to the mirror
counterparts of the visible fermion generations. Instead, the “dark top” and
“dark bottom” quarks, for example, refer to the most massive dark quarks with
dark electric charge $+2/3$ and $-1/3$, respectively), and thus Cabibbo mixing
sets
$\mu_{u_{1R}^{\prime}}=\mu_{u_{2R}^{\prime}}=\mu_{u_{3R}^{\prime}}\equiv\mu_{u_{R}^{\prime}}$
and
$\mu_{d_{1R}^{\prime}}=\mu_{d_{2R}^{\prime}}=\mu_{d_{3R}^{\prime}}\equiv\mu_{d_{R}^{\prime}}$.
If both Higgs doublets are in thermal equilibrium, mixing between them sets
$\mu_{{\phi_{1}^{+}}^{\prime}}=\mu_{{\phi_{2}^{+}}^{\prime}}\equiv\mu_{{\phi^{+}}^{\prime}}$
and
$\mu_{{\phi_{1}^{0}}^{\prime}}=\mu_{{\phi_{2}^{0}}^{\prime}}\equiv\mu_{{\phi^{0}}^{\prime}}$.
The formation of a vacuum condensate of ${\phi^{0}}^{\prime}$ bosons also sets
$\mu_{{\phi^{0}}^{\prime}}=0$ [81].
Below the dEWPT, all dark particles are now massive. So, at temperatures below
a particle’s mass, Boltzmann suppression leads the reactions which involve it
to fall out of thermal equilibrium. If the particle is unstable, it then
decays into other species, sending its chemical potential to zero. We thus
will not consider the chemical potential of species that fall out of
equilibrium. In the dark sector, this will always apply to the
${W^{\pm}}^{\prime}$ gauge bosons, as for all the parameter selections in
Section III, they will be more massive than the phase transition temperature
$T_{C}$, and thus will swiftly fall out of thermal equilibrium below the
dEWPT (although $\mu_{{W^{\pm}}^{\prime}}=0$, the fields in each doublet do
not need to have equal potentials as in the visible sector; this is because
the ${W^{\pm}}^{\prime}$ bosons have fallen out of thermal equilibrium, and so
the weak interactions that related the relevant chemical potentials have now
frozen out).
With independent chemical potentials for the fields in each doublet, the
Yukawa interactions provide twice as many constraints:
$\begin{split}&\mu_{u_{R}^{\prime}}+\mu_{{\phi^{0}}^{\prime}}-\mu_{u_{iL}^{\prime}}=0,\quad\mu_{d_{R}^{\prime}}-\mu_{{\phi^{0}}^{\prime}}-\mu_{d_{iL}^{\prime}}=0,\quad\mu_{e_{iR}^{\prime}}-\mu_{{\phi^{0}}^{\prime}}-\mu_{e_{iL}^{\prime}}=0,\\ &\mu_{d_{R}^{\prime}}+\mu_{{\phi^{+}}^{\prime}}-\mu_{u_{iL}^{\prime}}=0,\quad\mu_{u_{R}^{\prime}}-\mu_{{\phi^{+}}^{\prime}}-\mu_{d_{iL}^{\prime}}=0,\quad\mu_{\nu_{iR}^{\prime}}-\mu_{{\phi^{+}}^{\prime}}-\mu_{e_{iL}^{\prime}}=0.\end{split}$ (45)
Below the dEWPT, the dark electroweak sphaleron process is strongly suppressed,
while the dark QCD sphaleron remains active down to the dark quark-hadron
phase transition and provides the constraint:
$3(\mu_{u_{R}^{\prime}}+\mu_{d_{R}^{\prime}})-\sum_{i=1}^{3}(\mu_{u_{iL}^{\prime}}+\mu_{d_{iL}^{\prime}})=0.$
(46)
Even though the ${W^{\pm}}^{\prime}$ bosons are too heavy for gauge
interactions with on-shell ${W^{\pm}}^{\prime}$ to be in equilibrium, they can
act as a mediator for the four-lepton interactions
$e_{iR}^{\prime}+\bar{\nu}_{iR}^{\prime}\to e_{jR}^{\prime}+\bar{\nu}_{jR}^{\prime}$
(of course, there are also analogous four-quark interactions for the fields
within the right-handed dark quark doublets; however, as these fields already
have equal chemical potentials between different generations, these
interactions do not provide any additional constraints). A reaction mediated
by a massive gauge boson of
mass $m$ is in thermal equilibrium for $T\gtrsim(m/100~{}\mathrm{GeV})^{4/3}$
MeV [82]. So, for even the largest values of $m_{{W^{\pm}}^{\prime}}$ that we
consider, these interactions are in thermal equilibrium from at least $T=1$
GeV. This is far below the temperature ranges we are interested in, so the
four-lepton reactions provide the additional constraints:
$\mu_{e_{iR}^{\prime}}-\mu_{\nu_{iR}^{\prime}}-\mu_{e_{jR}^{\prime}}+\mu_{\nu_{jR}^{\prime}}=0.$
(47)
Below the dEWPT, dark hypercharge is broken to dark electric charge
$Q^{\prime}$. The dark charge neutrality of the universe then enforces [81]:
$6\mu_{u_{R}^{\prime}}-3\mu_{d_{R}^{\prime}}+2N_{\Phi^{\prime}}\mu_{{\phi^{+}}^{\prime}}+\sum_{i=1}^{3}(2\mu_{u_{iL}^{\prime}}-\mu_{d_{iL}^{\prime}}-\mu_{e_{iR}^{\prime}}-\mu_{e_{iL}^{\prime}})=0,$
(48)
where $N_{\Phi^{\prime}}$ is the number of dark Higgs doublets in thermal
equilibrium.
Lastly, we state the chemical potentials combinations that correspond to the
dark baryon and lepton numbers, $B^{\prime}$ and
$L^{\prime}=\sum_{i=1}^{3}L_{i}^{\prime}$:
$\begin{split}B^{\prime}&\leftrightarrow 3(\mu_{u_{R}^{\prime}}+\mu_{d_{R}^{\prime}})+\sum_{i=1}^{3}(\mu_{u_{iL}^{\prime}}+\mu_{d_{iL}^{\prime}}),\\ L_{i}^{\prime}&\leftrightarrow\mu_{e_{iR}^{\prime}}+\mu_{\nu_{iR}^{\prime}}+\mu_{e_{iL}^{\prime}}.\end{split}$ (49)
### IV.3 The Neutron Portal
The neutron portal operators are dimension-9 quark interactions involving one
singlet up-type and two singlet down-type quarks from each sector, for
example:
$\frac{1}{M^{5}}\bar{u}\bar{d}\bar{d}u^{\prime}d^{\prime}s^{\prime}+h.c.$ (50)
where we have simplified our notation by defining $u\equiv u_{1R}$, $d\equiv
d_{1R}$, $u^{\prime}\equiv u_{1L}^{\prime}$, $d^{\prime}\equiv
d_{1L}^{\prime}$, and $s^{\prime}\equiv d_{2L}^{\prime}$. This specific
neutron portal operator111111We note the flavour structure of this operator,
involving $u^{\prime}$, $d^{\prime}$, and $s^{\prime}$ quarks in the dark
sector. If this portal instead involved one $u^{\prime}$ and two $d^{\prime}$
quarks, then the dark neutron would be unstable to decay into visible
neutrons. As the dark neutron comprises the dark matter in the dark sector, we
cannot permit this to occur. was already considered in the original paper [24]
in the context of satisfying bounds on dark radiation from Big Bang
nucleosynthesis. Satisfying these phenomenological constraints is a vitally
important aspect of the original model; thus, we briefly review and summarise
the mechanism by which this is achieved, and in so doing motivate the neutron
portal as the asymmetry transfer mechanism.
#### IV.3.1 Dark radiation and thermal decoupling
The presence of dark relativistic degrees of freedom is a generic feature of
ADM models that faces strong constraints from BBN and CMB measurements. These
constraints can be satisfied if the temperature of the dark sector is
sufficiently less than that of the visible sector at the time of BBN; to
achieve this, the original paper considered a situation where the two sectors
decouple between the visible and dark confinement scales (also see Ref. [18]).
In this temperature region, the dark quark-hadron phase transition has taken
place and dark-colour confinement reduces the number of dark degrees of
freedom. This allows for a large transfer of entropy from the dark to the
visible sector while they are still in thermal equilibrium, causing the dark
sector to cool at a faster rate than the visible sector and leading to the
necessary temperature difference between the sectors.
For this process to naturally take place, there must be interactions that
maintain thermal equilibrium between the visible and dark sectors down to a
decoupling temperature between the two confinement scales,
$T_{\mathrm{dec}}\sim 1$ GeV. While it is not the only suggestion given for
this interaction, the neutron portal of Eq. 50 is the most promising candidate
proposed in the original paper; once the dark quarks become confined into
hadrons, they can decay through the neutron portal into unconfined visible
quarks, naturally providing the mechanism through which to transfer entropy
between the sectors. This portal also allows for the apparently fine-tuned
decoupling temperature $T_{\mathrm{dec}}$ to arise naturally; the large masses
for the dark hadrons – gained below the dark quark-hadron phase transition by
dark QCD confinement – cause these particles to become Boltzmann suppressed,
thus decoupling the neutron portal interaction in the desired temperature
range.
#### IV.3.2 Thermal equilibrium
The approximate rate of the neutron portal interaction is $\Gamma\sim
T^{11}/M^{10}$. Comparing this to the expansion rate $H\sim
T^{2}/m_{\mathrm{Pl}}$, we find that the interaction is in thermal equilibrium
for the temperature range
$M>T>\left(\frac{M^{10}}{m_{\mathrm{Pl}}}\right)^{\frac{1}{9}},$ (51)
where the upper bound is from the region of validity of the effective field
theory. The lower bound only applies if all quarks involved in the portal
interaction have masses below the lower bound temperature; otherwise, the
heaviest quark involved in the portal (of mass $m_{q}$) becomes Boltzmann
suppressed for $T<m_{q}$ and the interaction falls out of thermal equilibrium.
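The lower bound in Eq. 51 follows from equating the two rates: $\Gamma\sim T^{11}/M^{10}$ falls below $H\sim T^{2}/m_{\mathrm{Pl}}$ once $T^{9}<M^{10}/m_{\mathrm{Pl}}$. A minimal numerical sketch (taking $m_{\mathrm{Pl}}\approx 1.22\times 10^{19}$ GeV and dropping all $\mathcal{O}(1)$ factors, with an illustrative cutoff $M=10^{4}$ GeV):

```python
M_PL = 1.22e19  # Planck mass in GeV; O(1) factors in H are dropped

def gamma(t_gev: float, m_cut: float) -> float:
    """Approximate neutron-portal rate, Gamma ~ T^11 / M^10."""
    return t_gev ** 11 / m_cut ** 10

def hubble(t_gev: float) -> float:
    """Approximate expansion rate, H ~ T^2 / m_Pl."""
    return t_gev ** 2 / M_PL

M = 1e4  # example cutoff scale in GeV (an assumption for illustration)
t_low = (M ** 10 / M_PL) ** (1.0 / 9.0)  # lower edge of Eq. 51

# Just above t_low the portal is in equilibrium; just below it is not.
assert gamma(1.1 * t_low, M) > hubble(1.1 * t_low)
assert gamma(0.9 * t_low, M) < hubble(0.9 * t_low)
print(t_low)  # ~2.1e2 GeV for M = 1e4 GeV
```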
In this section, we perform a simplified analysis in which we only consider
neutron portals active at temperatures above the vEWPT temperature. This
captures some of the general properties of the asymmetry transfer that this
portal can achieve. However, we also lose the strong motivation from the
thermal decoupling considerations, as these require neutron portals to be
active down to the quark-hadron phase transition, well below the vEWPT
temperature. (The high-scale neutron portal we consider here can still be
possible in this model if one of the other interactions from the original
paper is implemented to maintain thermal equilibrium below the vEWPT
temperature. The CP-odd Higgs-mediated interactions, for example, could
fulfill this role as they do not violate lepton or baryon number within a
given sector, and so could maintain thermal equilibrium down to the decoupling
temperature without affecting the asymmetry transfer.) Analysing the asymmetry
transfer down to this level of around $1$ GeV introduces a large number of
difficulties, which we outline in Section V.
To find a neutron portal that operates in this temperature range, we either
take $M$ to be large enough that $(M^{10}/m_{\mathrm{Pl}})^{1/9}>T_{C}$, where
$T_{C}\sim 200$ GeV is the critical temperature of the vEWPT; or, we choose a
flavour structure for the specific portal operator such that it involves at
least one dark quark species with a mass greater than the vEWPT temperature.
Given the freedom we have in choosing the dark quark masses, the latter case
is easy to implement; for example, for $\rho\sim 50$ a dark quark with a
Yukawa coupling on the order of the bottom quark coupling has a mass over
$200$ GeV.
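Both options can be checked numerically. The following sketch assumes $m_{\mathrm{Pl}}\approx 1.22\times 10^{19}$ GeV, $v=246$ GeV, a bottom-like Yukawa $y_{b}=\sqrt{2}m_{b}/v$ with $m_{b}\approx 4.18$ GeV, and a mirror-sector mass relation $m^{\prime}=y\,\rho v/\sqrt{2}$ (the last relation is an assumption about the dark Yukawa structure, made here only for illustration):

```python
import math

M_PL = 1.22e19   # Planck mass in GeV
V = 246.0        # visible electroweak VEV in GeV
T_C = 200.0      # critical temperature of the vEWPT in GeV

# Option 1: cutoff large enough that the Eq. 51 lower bound exceeds T_C.
m_min = (T_C ** 9 * M_PL) ** 0.1
print(m_min)     # ~1e4 GeV: cutoffs above roughly 10 TeV suffice

# Option 2: a dark quark heavy enough to Boltzmann-suppress the portal.
y_b = math.sqrt(2) * 4.18 / V        # bottom-like Yukawa coupling
rho = 50.0                           # assumed VEV ratio w/v
m_dark_quark = y_b * rho * V / math.sqrt(2)
print(m_dark_quark)                  # ~209 GeV, above the vEWPT temperature
```

This reproduces both numbers quoted above: a cutoff of order $10^{4}$ GeV for the first option, and a dark quark mass just over $200$ GeV for $\rho\sim 50$ in the second.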
#### IV.3.3 Equilibrium conditions
The additional chemical potential constraint due to the neutron portal
operator of Eq. 50 is given by
$\mu_{u}+2\mu_{d}-\mu_{u^{\prime}}-\mu_{d^{\prime}}-\mu_{s^{\prime}}=0.$ (52)
Similar constraints arise from neutron portal operators with different flavour
structures. We note that when all Yukawa interactions involving non-Boltzmann
suppressed quarks are in thermal equilibrium, the right-handed visible singlet
quarks ($u_{iR}$, $d_{iR}$) and the left-handed dark singlet quarks
($u_{iL}^{\prime}$, $d_{iL}^{\prime}$) have equal chemical potentials between
the various generations, and so the final value for $B^{\prime}/B$ does not
depend on the specific flavour structure of the operator.
In the visible sector, the electron Yukawa interaction is in thermal
equilibrium for $T\lesssim 10^{5}$ GeV, and so for temperatures below
$10^{5}$ GeV all visible Yukawa interactions are in equilibrium in addition
to the electroweak and QCD sphalerons. Assuming that the mass of the
additional heavy scalars is above $10^{5}$ GeV, $\Phi_{2}$ will have fallen
out of equilibrium by the temperatures of interest, and so for the
hypercharge neutrality condition Eq. 42 we set $N_{\Phi}=1$.
In the dark sector, the dark quarks are massive and thus have the potential to
become Boltzmann suppressed in our temperature range of interest. We consider
two cases for the choices of Yukawa couplings in the dark sector: one where
they are the same as the standard model, and one where $y_{2}^{h}=y_{1}^{b}$,
as considered in Section III.
For the first case, the dark top quark has a mass greater than the dEWPT
temperature, and so falls out of thermal equilibrium by the temperature at
which the portal is active. So, all dark sector Yukawa constraints will apply,
except for the Yukawa interaction involving the dark top quarks. The charge
neutrality and QCD sphaleron conditions must also be altered by removing the
chemical potentials associated with the top quarks – that is, removing the
$\mu_{u_{3L}^{\prime}}$ terms entirely and reducing the $\mu_{u_{R}^{\prime}}$
terms by a factor of 1/3. We similarly alter the definition of $B^{\prime}$.
For the second case, the mass of the heaviest dark quark is now orders of
magnitude smaller, letting it lie below the dEWPT temperature. Thus for
reasonable values of $\rho$ we can work in a temperature range where all dark
Yukawa constraints apply in addition to the dark QCD sphaleron. Note that for
both of these cases, similarly to the visible sector, we set
$N_{\Phi^{\prime}}$ = 1 in the charge neutrality condition.
In either case, there are six unconstrained chemical potentials, corresponding
to six conserved charges. Not considering the portal interaction, the
conserved charges in each sector are given by $B/3-L_{i}$ for the visible
sector and $B^{\prime}$ and $L_{i}^{\prime}$ for the dark sector. As the
portal conserves $B+B^{\prime}$, the six conserved charges will then be given
by:
$\begin{split}\mathcal{L}_{1}&=\frac{1}{3}(B+B^{\prime})-L_{1},\\\
\mathcal{L}_{2}&=\frac{1}{3}(B+B^{\prime})-L_{2},\\\
\mathcal{L}_{3}&=\frac{1}{3}(B+B^{\prime})-L_{3},\\\
\end{split}\quad\begin{split}\mathcal{L}_{4}&=L_{1}^{\prime},\\\
\mathcal{L}_{5}&=L_{2}^{\prime},\\\
\mathcal{L}_{6}&=L_{3}^{\prime}.\end{split}$ (53)
The various particle numbers in the two sectors can then be expressed as a
linear combination of these conserved charges, as per
$B=\sum_{i=1}^{6}a_{i}\mathcal{L}_{i},\quad
L=\sum_{i=1}^{6}b_{i}\mathcal{L}_{i},\quad
B^{\prime}=\sum_{i=1}^{6}c_{i}\mathcal{L}_{i},\quad
L^{\prime}=\sum_{i=1}^{6}d_{i}\mathcal{L}_{i}.$ (54)
Solving the linear system of chemical potential constraints then allows us to
calculate the values of the coefficients in these expressions. As an example
of this calculation, we give the results for the case where
$y_{2}^{h}=y_{1}^{t}$ in Table 5.
$i$ | $a_{i}$ | $b_{i}$ | $c_{i}$ | $d_{i}$
---|---|---|---|---
1 | $\frac{476}{1959}$ | $\frac{-289}{653}$ | $\frac{158}{5877}$ | $0$
2 | $\frac{476}{1959}$ | $\frac{-289}{653}$ | $\frac{158}{5877}$ | $0$
3 | $\frac{476}{1959}$ | $\frac{-289}{653}$ | $\frac{158}{5877}$ | $0$
4 | $\frac{-56}{5877}$ | $\frac{-34}{1959}$ | $\frac{158}{5877}$ | $0$
5 | $\frac{-56}{5877}$ | $\frac{-34}{1959}$ | $\frac{158}{5877}$ | $0$
6 | $\frac{-56}{5877}$ | $\frac{-34}{1959}$ | $\frac{158}{5877}$ | $1$
Table 5: The values of the coefficients defined in Eq. 54, calculated using
the set of chemical potential constraints that apply when
$y_{2}^{h}=y_{1}^{t}$.
To obtain the final ratios of particle numbers, we simply specify the initial
conditions for the particle asymmetries. For example, we may assume that we
start only with an asymmetry in dark baryon number $B^{\prime}$, and call this
value $X$. Then, the only non-zero conserved charges are
$\mathcal{L}_{1}=\mathcal{L}_{2}=\mathcal{L}_{3}=X/3$, and we obtain
$\begin{split}&B=\sum_{i=1}^{3}\frac{a_{i}X}{3},\quad
L=\sum_{i=1}^{3}\frac{b_{i}X}{3},\\\
&B^{\prime}=\sum_{i=1}^{3}\frac{c_{i}X}{3},\quad
L^{\prime}=\sum_{i=1}^{3}\frac{d_{i}X}{3}.\end{split}$ (55)
Although we can now directly calculate $B^{\prime}/B$ from these results, that
only gives the ratio of particle numbers directly after the portal falls out
of thermal equilibrium. As this occurs at a temperature that is still above
the visible electroweak phase transition, $B$ is violated by the visible
electroweak sphaleron process. $B-L$ is preserved, however, and we can relate
it to $B$ after the freezeout of the neutron portal operator by the standard
relationship [78]
$B=\frac{28}{79}(B-L).$ (56)
Below the visible electroweak phase transition, $B$ is conserved, and the
ratio between the final baryon numbers $B_{f}^{\prime}$ and $B_{f}$ is given
by
$\begin{split}\frac{B_{f}^{\prime}}{B_{f}}&=\frac{79}{28}\frac{B^{\prime}}{B-L}\\\
&=\frac{79}{28}\frac{c_{1}+c_{2}+c_{3}}{a_{1}+a_{2}+a_{3}-b_{1}-b_{2}-b_{3}}.\end{split}$
(57)
#### IV.3.4 Results
The results for the two cases are given in Table 6. We note that both ratios
satisfy the rough condition of Eq. 38 that $B^{\prime}/B\lesssim 5$, ensuring
that the dark confinement scale lies above the visible confinement scale. From
Eq. 37, and taking $\Lambda_{\mathrm{QCD}}\sim 200$ MeV, we can also calculate
the dark confinement scale $\Lambda_{\mathrm{DM}}$. This then allows us to
restrict the permissible values of $\rho$, as given in the last column of the
table. These are determined from Fig. 1 in the original paper [24], which
plots $\Lambda_{\mathrm{DM}}$ against $\rho$ for a selection of different
choices for the spectrum of Yukawa couplings to $\Phi_{2}$. For a given value
of $\Lambda_{\mathrm{DM}}$, we can only choose values of $\rho$ for which the
necessary Yukawa spectrum matches the conditions on $y_{2}^{t}$ for each case.
We note that in this simplified analysis, the neutron portal can successfully
reprocess the asymmetries over regions of both large and small values of
$\rho$; thus, it seems applicable as the asymmetry transfer mechanism over a
wide region of parameter space.
Case | $B_{f}^{\prime}/B_{f}$ | $\Lambda_{\mathrm{DM}}$ | $\rho$
---|---|---|---
1 | 1.29 | 0.77 GeV | $\lesssim 100$
2 | 1.45 | 0.68 GeV | $\gtrsim 200$
Table 6: The results for the neutron portal. Case 1 corresponds to the
heaviest dark quark Yukawa coupling being equal to the visible top quark
Yukawa coupling ($y_{2}^{h}=y_{1}^{t}$) while in case 2 the heaviest dark
quark Yukawa coupling is equal to the visible bottom quark Yukawa coupling
($y_{2}^{h}$ = $y_{1}^{b}$).
### IV.4 The Lepton Portal
The effective interaction mediating this portal involves a lepton doublet and
a Higgs doublet from each sector, and is given by
$\frac{1}{M_{ab}}\bar{l}_{iL}\Phi_{a}^{c}l_{jR}^{\prime}\Phi_{b}^{\prime}+h.c.$
(58)
where the indices $i,j=1,2,3$ and $a,b=1,2$ specify the lepton generation and
Higgs doublet number, respectively.
These interactions allow for neutrino mass terms after electroweak symmetry
breaking; the Higgs doublets gain VEVs given by
$\expectationvalue{\Phi_{a}}=v_{a}/\sqrt{2}$ and
$\expectationvalue{\Phi_{b}^{\prime}}=w_{b}/\sqrt{2}$ and a mass term is
produced with
$m_{\nu}=\frac{v_{a}w_{b}}{2M_{ab}}.$ (59)
The observational data for neutrino masses give an upper bound of
$m_{\nu}\lesssim 0.12$ eV [4]. This translates to a lower bound on $M_{ab}$
given by
$M_{ab}\gtrsim\frac{v_{a}w_{b}}{0.5~\mathrm{eV}}.$ (60)
#### IV.4.1 Thermal equilibrium
The approximate rate of the lepton portal interaction is $\Gamma\sim
T^{3}/M_{ab}^{2}$. $\Gamma>H$ then implies that a given lepton portal is in
thermal equilibrium for $T>{M_{ab}}^{2}/m_{\mathrm{Pl}}$. Combining this with
Eqs. 59 and 60, we recast the condition as
$T\gtrsim
0.25\left(\frac{v_{a}w_{b}}{\mathrm{GeV}^{2}}\right)^{2}\mathrm{GeV}.$ (61)
Thus, the temperature range for which a given lepton portal is in thermal
equilibrium depends on which Higgs doublets take part in the interaction.
Consider the lepton portal involving $\Phi_{1}$ and $\Phi_{2}^{\prime}$ – the
two doublets which gain large VEVs $v_{1}\approx v$ and $w_{2}\approx w$. For
a given value of $\rho$, we have $v=246$ GeV and $w=246\rho$ GeV, so this
lepton portal is only in thermal equilibrium for $T>10^{9}\rho^{2}$ GeV. This
is a temperature range well above the dEWPT temperature, and so this lepton
portal operator cannot serve to reprocess the asymmetries.
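This threshold can be recovered from Eqs. 59–61. A rough check (assuming $m_{\mathrm{Pl}}\approx 1.22\times 10^{19}$ GeV and keeping only the leading scaling, so the $\mathcal{O}(1)$ prefactor differs slightly from Eq. 61):

```python
M_PL = 1.22e19      # Planck mass in GeV
V = 246.0           # visible electroweak VEV in GeV

def t_eq(rho: float) -> float:
    """Equilibrium threshold of the Phi_1-Phi_2' lepton portal.

    Uses the minimal cutoff M ~ v*w / (0.5 eV) from Eq. 60 with w = rho*v,
    and the equilibrium condition T > M^2 / m_Pl."""
    w = rho * V
    m_cut = V * w / 0.5e-9   # 0.5 eV = 0.5e-9 GeV
    return m_cut ** 2 / M_PL

print(t_eq(1.0))    # ~1e9 GeV, reproducing the quoted T > 10^9 rho^2 GeV scaling
```

Since $m_{\mathrm{cut}}\propto\rho$, the threshold scales as $\rho^{2}$, as stated in the text.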
The other Higgs doublets in each sector have VEVs that are much smaller than
$v$ and $w$, typically on the order of at most 1 GeV. Thus, ignoring Boltzmann
suppression, the lepton portal involving $\Phi_{2}$ and $\Phi_{1}^{\prime}$
remains in thermal equilibrium down to at least 1 GeV – well below the
temperature ranges we consider. However, these doublets are comprised of the
heavy additional scalars, which become Boltzmann suppressed at high
temperatures near the dark electroweak phase transition. For this portal to
remain in thermal equilibrium long enough to reprocess the particle
asymmetries between the sectors, we must work at a point in parameter space
where the additional scalars have masses lower than the dEWPT temperature. In
Section III, we found such parameter selections; for example, see the
parameter points for regions 3 and 5 in Table 4. We consider the point in
region 5, and thus work in an approximate temperature range between the
critical temperature, $T_{C}=1.8\times 10^{4}$ GeV, and the mass of the
heaviest additional scalar, $M_{H}=1.2\times 10^{4}$ GeV.
#### IV.4.2 Equilibrium conditions
The lepton portal in Eq. 58 induces chemical potential constraints given by
$\mu_{l_{iL}}+\mu_{\Phi}-\mu_{\nu_{jR}^{\prime}}-\mu_{{\phi^{0}}^{\prime}}=0,\quad\mu_{l_{iL}}+\mu_{\Phi}-\mu_{e_{jR}^{\prime}}-\mu_{{\phi^{+}}^{\prime}}=0.$
(62)
As we are also working in a temperature regime below $10^{5}$ GeV where all
Yukawa interactions are in equilibrium, the discussion of the additional
constraints in the visible and dark sectors is very similar to the neutron
portal. The only difference is that the additional scalars are in equilibrium
while the portals are active, and so we must set
$N_{\Phi}=N_{\Phi^{\prime}}=2$ in the visible hypercharge and dark charge
neutrality conditions. We also only consider the case where
$y_{2}^{h}=y_{1}^{b}$, and thus none of the particle species are Boltzmann
suppressed in our temperature region of interest. This is due to the
considerations from Fig. 1 in the original work; we are working at a parameter
point with a large value of $\rho$, which preferences a dark quark Yukawa
coupling structure where the largest coupling is only on the order of the
standard model bottom quark Yukawa coupling.
Recall that the visible lepton mass eigenstates are not mirror partners of the
dark lepton mass eigenstates. So, a lepton portal respecting the mirror
symmetry will induce interactions between all pairs of visible and dark lepton
mass eigenstates. As we assign chemical potentials to the mass eigenstates in
each sector, the lepton portal thus introduces all possible constraints of the
form given in Eq. 62. This has the effect of setting the chemical potentials
to be equal for all left-handed visible leptons and for all right-handed dark
leptons; that is, $\mu_{l_{1L}}=\mu_{l_{2L}}=\mu_{l_{3L}}\equiv\mu_{l_{L}}$,
$\mu_{e_{1R}^{\prime}}=\mu_{e_{2R}^{\prime}}=\mu_{e_{3R}^{\prime}}\equiv\mu_{e_{R}^{\prime}}$
and
$\mu_{\nu_{1R}^{\prime}}=\mu_{\nu_{2R}^{\prime}}=\mu_{\nu_{3R}^{\prime}}\equiv\mu_{\nu_{R}^{\prime}}$.
This then sets $L_{1}$ = $L_{2}$ = $L_{3}$ = $L/3$ and $L_{1}^{\prime}$ =
$L_{2}^{\prime}$ = $L_{3}^{\prime}$ = $L^{\prime}/3$; thus, when the lepton
portal is active, there are only two conserved charges:
$\mathcal{L}_{1}=B-L-L^{\prime},\quad\mathcal{L}_{2}=B^{\prime}.$ (63)
As before, we define the visible and dark particle numbers as linear
combinations of the conserved charges,
$B=\sum_{i=1}^{2}a_{i}\mathcal{L}_{i},\quad
L=\sum_{i=1}^{2}b_{i}\mathcal{L}_{i},\quad
B^{\prime}=\sum_{i=1}^{2}c_{i}\mathcal{L}_{i},\quad
L^{\prime}=\sum_{i=1}^{2}d_{i}\mathcal{L}_{i}.$ (64)
We define $X$ and $Y$ as the initial asymmetries in $B^{\prime}$ and
$L^{\prime}$ respectively, leading to the conserved charges
$\mathcal{L}_{1}=-Y$ and $\mathcal{L}_{2}=X$. The same behaviour occurs as
before with the final visible baryon asymmetry depending on the $B-L$
asymmetry transferred to the visible sector, and so we obtain
$\frac{B_{f}^{\prime}}{B_{f}}=\frac{79}{28}\frac{-c_{1}Y+c_{2}X}{-a_{1}Y+a_{2}X+b_{1}Y-b_{2}X}.$
(65)
#### IV.4.3 Results
For this section we give results as the range of relative sizes between $X$
and $Y$ that produce appropriate baryon ratios. To roughly identify the
allowed range of baryon ratios, we again look to Fig. 1 from the original
paper [24]. We are working at a parameter point where $\rho=300$, and the
heaviest of the new Yukawa couplings is on the order of the standard model
bottom quark Yukawa coupling. Thus, we have a value of $\Lambda_{\mathrm{DM}}$
between $0.5$ and $0.7$ GeV. To reproduce the 5:1 ratio between the dark and
visible mass densities, from Eq. 37 it follows that we need a value for
the baryon ratio satisfying $1.4\lesssim B_{f}^{\prime}/B_{f}\lesssim 2$.
The asymmetry transfer by our lepton portal produces a valid baryon ratio for
$-1.2<\frac{Y}{X}<-0.8,$ (66)
where $Y/X$ is the ratio of the initial dark lepton and baryon asymmetries.
Thus we favour a dark EWBG which generates a baryon and lepton asymmetry of
roughly equal magnitude but opposite sign. While we have not investigated the
initial asymmetry generation dynamics in this work, to achieve similar initial
asymmetries we expect that the heaviest dark quarks and leptons should have
masses of similar sizes. This is readily achievable given the freedom we have
for the dark sector Yukawa couplings; it also fits nicely with our assumption
in this section that the largest dark quark Yukawa coupling should be on the
order of the visible bottom quark coupling, which was motivated by the large
values of $\rho$ necessary for a feasible lepton portal effective operator. To
make any more precise statements about the viability of this portal would
require a detailed analysis of the dark electroweak baryogenesis dynamics.
## V Dark Radiation and the Neutron Portal
So far, we have investigated the transfer of particle number asymmetries at
temperatures well above the visible electroweak scale, providing a general
analysis of effective operators which preserve a total particle number. Of
these, the neutron portal was especially promising, as it could also naturally
allow for the stringent bounds on dark radiation to be alleviated. However, to
serve this dual role the neutron portal must remain in equilibrium until a
decoupling temperature $T_{\mathrm{dec}}$ that lies between the visible and
dark quark-hadron phase transition (QHPT) temperatures; in addition, this must
be achieved in a way that does not introduce excessive fine-tuning to the
extent that the model can no longer serve as a natural explanation of the
cosmological coincidence $\Omega_{\mathrm{DM}}\simeq 5\Omega_{\mathrm{VM}}$.
In this section we discuss the difficulties that arise when attempting to
implement the neutron portal at these low temperatures, in particular: (i)
specifying a valid UV completion of the neutron portal and (ii) tracking the
asymmetry transfer over a larger temperature range.
### V.1 UV-completing the neutron portal
For the neutron portal to be active below the dark QHPT, it must remain in
thermal equilibrium down to a temperature below $\Lambda_{\mathrm{DM}}$. While
the specific value of $\Lambda_{\mathrm{DM}}$ depends on the spectrum of dark
quark masses, it is at most a few GeV. From Eq. 51, the neutron portal
effective operator falls out of equilibrium at
$T\approx(M^{10}/m_{\mathrm{Pl}})^{1/9}$. So, for the neutron portal to remain
in operation at $T\sim 1$ GeV, we require a cutoff scale $M\lesssim 63$ GeV.
At temperatures above this scale the effective operator description will be
invalid, and so to properly analyse the asymmetry transfer we must provide a
renormalisable realisation of the neutron portal effective operator.
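The quoted cutoff can be checked by inverting the freeze-out condition: setting $(M^{10}/m_{\mathrm{Pl}})^{1/9}=1$ GeV gives $M=m_{\mathrm{Pl}}^{1/10}$, of order tens of GeV. (The precise value $M\lesssim 63$ GeV depends on the $\mathcal{O}(1)$ and $g_{*}$ factors in the rates, which this sketch drops.)

```python
M_PL = 1.22e19   # Planck mass in GeV

def m_max(t_dec: float) -> float:
    """Largest cutoff for which the portal is still in equilibrium at t_dec,
    from (M^10 / m_Pl)^(1/9) = t_dec, with all O(1) factors dropped."""
    return (t_dec ** 9 * M_PL) ** 0.1

print(m_max(1.0))   # ~81 GeV, the same order as the quoted 63 GeV
```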
A UV completion for this operator was given in Ref. [83], and a similar
interaction was given in Ref. [84] in the context of neutron-antineutron
oscillation. Similarly to these papers, we introduce a scalar diquark
$S\sim(\bm{3},\bm{1},\frac{2}{3})$ with baryon number $B=-\frac{2}{3}$ and a
gauge singlet fermion $N_{R}\sim(\bm{1},\bm{1},0)$ with baryon number $B=-1$,
as well as their mirror partners $S^{\prime}$ and $N_{L}^{\prime}$. Assuming
$B-B^{\prime}$ conservation, the new Yukawa and mass terms are given by
$\begin{split}\mathcal{L}&\supset\lambda_{i}(S\bar{u}_{Ri}N_{R}^{c}+S^{\prime}\bar{u}_{Li}^{\prime}N_{L}^{c\prime})+\kappa_{ij}(S\bar{d^{c}}_{Ri}d_{Rj}+S^{\prime}\bar{d^{c}}_{Li}^{\prime}d_{Lj}^{\prime})\\\
&+M_{S}^{2}(S^{\ast}S+S^{\ast\prime}S^{\prime})+M_{N}N_{R}N_{L}^{\prime}.\end{split}$
(67)
$B-B^{\prime}$ conservation forbids Majorana mass terms for the singlet
fermions, preventing the washout of any dark or visible baryon number
asymmetry carried by the respective singlet fermion. While the mirror symmetry
gives equal mass terms for $S$ and $S^{\prime}$, they can obtain differing
masses following symmetry breaking through their couplings to the Higgs
doublets. In the ASB limit, the relevant couplings are given by
$\begin{split}\mathcal{L}&\supset\eta_{1}(S^{\ast}S\Phi_{1}^{\dagger}\Phi_{1}+S^{\prime\ast}S^{\prime}\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime})+\eta_{2}(S^{\ast}S\Phi_{2}^{\dagger}\Phi_{2}+S^{\prime\ast}S^{\prime}\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime})\\\
&+\eta_{3}(S^{\ast}S\Phi_{1}^{\prime\dagger}\Phi_{1}^{\prime}+S^{\prime\ast}S^{\prime}\Phi_{1}^{\dagger}\Phi_{1})+\eta_{4}(S^{\ast}S\Phi_{2}^{\prime\dagger}\Phi_{2}^{\prime}+S^{\prime\ast}S^{\prime}\Phi_{2}^{\dagger}\Phi_{2}),\end{split}$
(68)
producing scalar diquark masses
$\begin{split}m_{S}^{2}=M_{S}^{2}+\frac{v^{2}}{2}(\eta_{1}+\rho^{2}\eta_{4}),\\\
m_{S^{\prime}}^{2}=M_{S}^{2}+\frac{v^{2}}{2}(\eta_{3}+\rho^{2}\eta_{2}).\end{split}$
(69)
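As a quick sketch of this mass-splitting mechanism (taking, purely for illustration, $M_{S}=0$ and $\eta_{1}=\eta_{4}=1$), the $\rho^{2}$-enhanced term dominates the visible diquark mass:

```python
import math

V = 246.0  # visible electroweak VEV in GeV

def m_s(m_s0: float, eta_a: float, eta_b: float, rho: float) -> float:
    """Scalar diquark mass from Eq. 69: m_S^2 = M_S^2 + v^2/2 (eta_a + rho^2 eta_b)."""
    return math.sqrt(m_s0 ** 2 + V ** 2 / 2.0 * (eta_a + rho ** 2 * eta_b))

# With rho ~ 100 and O(1) couplings, m_S lands near 10^4 GeV, as used below.
print(m_s(0.0, 1.0, 1.0, 100.0))   # ~1.7e4 GeV
```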
The neutron portal operators are induced by diagrams such as that given in
Fig. 7. As before, to ensure the stability of the dark neutron – our dark
matter candidate – we cannot allow the neutron portal to involve only the
lightest dark quarks, $u^{\prime}$ and $d^{\prime}$. We thus need to introduce
some additional flavour structure to the Yukawa interactions, such that one of
$\lambda_{1}$ or $\kappa_{11}$ is equal to zero.
Figure 7: Diagram inducing the neutron portal effective operator for
temperatures $T<m_{S},m_{S^{\prime}},M_{N}$.
At temperatures $T<m_{S},m_{S^{\prime}},M_{N}$, and assuming $\mathcal{O}(1)$
Yukawa couplings, the cutoff scale for the neutron portal effective operator
is given by $M\sim(m_{S}^{2}m_{S^{\prime}}^{2}M_{N})^{\frac{1}{5}}$. Visible
scalar diquarks at masses below a few TeV have been disfavoured by collider
searches [85], so we take $m_{S}\sim 10^{4}$ GeV, which can be easily achieved
with $\rho\sim 100$ and $\mathcal{O}(1)$ values of $\eta_{4}$. Then, for the
cutoff scale to satisfy $M\lesssim 63$ GeV, we require
$m_{S^{\prime}}^{2}M_{N}\lesssim 10~\mathrm{GeV}^{3}$. For
$m_{S^{\prime}}^{2}\gtrsim 10~\mathrm{GeV}^{2}$ we then require
$M_{N}\lesssim 1$ GeV, which means that at the dark QHPT $N_{R}$ and
$N_{L}^{\prime}$ remain in equilibrium and the neutron portal is not
described by a single portal operator. Achieving
$m_{S^{\prime}}^{2}\lesssim 10~\mathrm{GeV}^{2}$ is not feasible without
significant fine-tuning, requiring $\eta_{3}\lesssim 10^{-3}$ and
$\eta_{2}\lesssim 10^{-7}$ for $\rho\sim 100$. Thus, we do have to consider
the situation where $N_{R}$ and $N_{L}^{\prime}$ have masses smaller than the
dark QHPT temperature $T\sim 1$ GeV.
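The bound on $m_{S^{\prime}}^{2}M_{N}$ follows directly from inverting the scaling $M\sim(m_{S}^{2}m_{S^{\prime}}^{2}M_{N})^{1/5}$; a minimal check:

```python
M_MAX = 63.0   # GeV, required cutoff from the equilibrium condition
M_S = 1e4      # GeV, visible scalar diquark mass

# Invert M^5 = m_S^2 * m_S'^2 * M_N for the dark-sector combination:
bound = M_MAX ** 5 / M_S ** 2
print(bound)   # ~10 GeV^3, reproducing m_S'^2 * M_N <~ 10 GeV^3
```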
In this case, for the neutron portal to be active at the dark QHPT, we need
the effective operators induced by the $S$-/$S^{\prime}$-mediated interactions
of Fig. 8 to be in thermal equilibrium at $T\sim 1$ GeV. Assuming
$\mathcal{O}(1)$ Yukawa couplings, these operators are in thermal equilibrium
for $T\gtrsim(m_{S^{(\prime)}}^{4}/m_{\mathrm{Pl}})^{1/3}\approx 200$ MeV for
$m_{S},m_{S^{\prime}}\sim 10^{4}$ GeV. So, the given completion for the
neutron portal allows it to be active at the dark QHPT temperature for
$m_{S},m_{S^{\prime}}\sim 10^{4}$ GeV and $M_{N}$ small enough for $N_{R}$ and
$N_{L}^{\prime}$ to remain in thermal equilibrium.
Figure 8: Diagrams inducing effective operators for temperatures given by
$M_{N}<T<m_{S},m_{S^{\prime}}$.
However, this situation introduces an issue for the stability of our dark
matter candidate, the dark neutron: if $M_{N}$ is lower than $m_{n^{\prime}}$,
then the decay mode $n^{\prime}\rightarrow N_{L}^{\prime}\gamma^{\prime}$
becomes available. The dark neutron mass is a factor of a few times
$\Lambda_{\mathrm{QCD}}$, which also approximately gives the dark QHPT
temperature. So, if $M_{N}$ is smaller than the dark QHPT temperature to allow
the neutron portal to decouple between the visible and dark QHPTs, then it
will be lighter than $n^{\prime}$ and the dark matter will not be stable. The
flavour structure of the Yukawa couplings does prevent this decay occurring at
tree-level; however, the kinematically-allowed decay can still occur at one-
loop level.
This instability could be avoided if $M_{N}$ is greater than $m_{n^{\prime}}$,
at a value around a few GeV, and if the singlet fermions do not fall out of
thermal equilibrium until a temperature a factor of 10 or so smaller than
their mass. While possible, this places quite a tight restriction on
$M_{N}$, as it must be only slightly higher than $m_{n^{\prime}}$ to be able
to remain in thermal equilibrium down to a temperature between the visible and
dark QHPT temperatures. If this is the case, however, then the neutron portal
can remain active down to the desired decoupling temperature.
### V.2 Asymmetry transfer
We now analyse the asymmetry transfer due to this specific neutron portal. As
this cross-sector interaction is active from the dEWPT at around $10^{5}$ GeV
down to the decoupling temperature between the visible and dark QHPTs at
around $1$ GeV, determining the final baryon ratio $B^{\prime}/B$ is more
complicated than the cases considered in Section IV.
Recall that at a given temperature, our process is to (i) identify which
particle species are in equilibrium and assign them chemical potentials, (ii)
identify the interactions in thermal equilibrium that constrain these chemical
potentials, (iii) identify the conserved charges that correspond to the
remaining free parameters, and (iv) solve for the chemical potentials in terms
of the initial conditions on these conserved charges. Since the neutron portal
is now active over a large temperature range, the chemical potentials re-
equilibrate and new charges become conserved as various particle species and
interactions fall out of equilibrium.
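At each temperature, the four steps above amount to solving a square linear system: interaction constraints contribute homogeneous rows, and conserved-charge definitions contribute rows whose right-hand sides carry the initial asymmetries. A toy illustration of this recipe (the species, constraint, and charge assignments here are invented for the example, not taken from the model) using exact rational arithmetic:

```python
from fractions import Fraction

def solve(a, b):
    """Solve a.x = b by Gaussian elimination over exact Fractions."""
    n = len(a)
    m = [[Fraction(v) for v in row] + [Fraction(rhs)] for row, rhs in zip(a, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col] != 0:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[r][n] / m[r][r] for r in range(n)]

# Toy system with potentials (mu_u, mu_d, mu_e):
#   (ii) one sphaleron-like constraint: 2*mu_u + mu_d + mu_e = 0
#   (iii) conserved charges: B = mu_u + mu_d and L = mu_e
#   (iv) initial conditions B = 1, L = 0
constraints = [[2, 1, 1]]
charges = [[1, 1, 0], [0, 0, 1]]
rhs = [0, 1, 0]
mu = solve(constraints + charges, rhs)
print(mu)  # mu_u = -1, mu_d = 2, mu_e = 0
```

Re-equilibration is then handled by re-solving this system whenever a constraint row drops out and a new conserved-charge row (evaluated at that temperature) takes its place.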
To account for this re-equilibration, we first identify the new conserved
charges as well as the temperatures at which they begin to be conserved
following the freeze-out of particular particle species and interactions. When
each new charge begins to be conserved, we can calculate its asymmetry in
terms of the other conserved charges at that temperature. So, beginning at the
dEWPT temperature $T\sim 10^{5}$ GeV with initial asymmetries given by
$B^{\prime}=X$ and $L^{\prime}=Y$, we can calculate each new conserved charge
in terms of $X$ and $Y$. Continuing this process down to $1$ GeV, we obtain
the final baryon ratio $B^{\prime}/B$ immediately prior to the dark QHPT.
Although the neutron portal freezes out between the visible and dark QHPT
temperatures due to the Boltzmann suppression of the quarks involved, we do
not continue to track the baryon ratio after the dark QHPT commences. This is
due to the non-perturbative strong dynamics of the dark QHPT, which cannot be
handled by our approximate calculation method. While this introduces
additional uncertainty to our calculations, our goal is not to calculate a
precise ratio, but just to show that a reasonable ratio of $B^{\prime}/B<5$
can be obtained by the neutron portal operator acting until low temperatures.
Additionally, given that the neutron portal freezes out shortly after the dark
QHPT commences, we claim that the final baryon number ratio should not change
after the transition by more than a factor of a few.
#### V.2.1 Conserved charges
To simplify the analysis further, we will drop generational indices from our
chemical potentials, setting equal chemical potentials for the particles of
each type in thermal equilibrium. We first work at a temperature on the order
of $10^{4}$ GeV where the scalar diquarks $S$ and $S^{\prime}$ have frozen out
but the dark Higgs boson $h^{\prime}$ is still in equilibrium. The particle
species and interactions in equilibrium are then the same as in Section IV.3,
but with the neutron portal constraint replaced by constraints from the $S$\-
and $S^{\prime}$-mediated effective operators
$\mu_{u_{R}}+\mu_{N_{R}}+2\mu_{d_{R}}=0,\quad\mu_{u_{L}^{\prime}}+\mu_{N_{L}^{\prime}}+2\mu_{d_{L}^{\prime}}=0$
(70)
and the gauge singlet mass term setting $\mu_{N_{R}}=\mu_{N_{L}^{\prime}}$.
Then there are only two conserved charges,
$\mathcal{L}_{1}=B+B^{\prime}-L,\quad\mathcal{L}_{2}=L^{\prime},$ (71)
with initial conditions $\mathcal{L}_{1}=X$ and $\mathcal{L}_{2}=Y$.
After the dark Higgs boson $h^{\prime}$ freezes out at around $10^{4}$ GeV,
the dark Yukawa interactions are replaced by four-fermion interactions
mediated by the dark Higgs doublets,
$\mu_{u_{R}^{\prime}}-\mu_{u_{L}^{\prime}}+\mu_{e_{R}^{\prime}}-\mu_{e_{L}^{\prime}}=0,\quad\mu_{u_{R}^{\prime}}-\mu_{u_{L}^{\prime}}+\mu_{d_{R}^{\prime}}-\mu_{d_{L}^{\prime}}=0,\quad\mu_{d_{R}^{\prime}}-\mu_{d_{L}^{\prime}}-\mu_{e_{R}^{\prime}}+\mu_{e_{L}^{\prime}}=0,$
(72)
and there is a new conserved charge given by
$\mathcal{L}_{3}=\mu_{u_{R}^{\prime}}-\mu_{d_{R}^{\prime}}+\mu_{\nu_{R}^{\prime}}-\mu_{e_{R}^{\prime}}$;
that is, $u_{R}^{\prime}$ and $\nu_{R}^{\prime}$ have $\mathcal{L}_{3}$ charge
1 and $d_{R}^{\prime}$ and $e_{R}^{\prime}$ have $\mathcal{L}_{3}$ charge -1.
These Higgs-mediated interactions remain in equilibrium for temperatures given
by
$T>\left((m_{h^{\prime}}w)^{4}/(4{m_{1}}^{2}{m_{2}}^{2}m_{\mathrm{Pl}})\right)^{1/3}$,
where $m_{1}$ and $m_{2}$ are the masses of the fermion species involved and
we have used the Cheng-Sher _Ansatz_ [57] for the dark Yukawa couplings.
We work at a benchmark scenario with $\rho\simeq 100$, $z_{2}\simeq 0.025$,
and selected dark quark masses of $m_{c^{\prime}}\simeq 50$ GeV,
$m_{\mu^{\prime}}\simeq 50$ GeV, and $m_{b^{\prime}}\simeq 500$ GeV. Then the
Higgs-mediated interaction constraints of Eq. 72 apply until $T\simeq 60$ GeV
(for
$c_{L}^{\prime}+c_{R}^{\prime}\leftrightarrow\mu_{L}^{\prime}+\mu_{R}^{\prime}$),
$T\simeq 500$ GeV (for $c_{L}^{\prime}+c_{R}^{\prime}\leftrightarrow
b_{L}^{\prime}+b_{R}^{\prime}$), and $T\simeq 500$ GeV (for
$b_{L}^{\prime}+b_{R}^{\prime}\leftrightarrow\mu_{L}^{\prime}+\mu_{R}^{\prime}$),
respectively.
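As a rough numerical illustration, the freeze-out formula above can be evaluated directly. The values of $m_{h^{\prime}}$ and $w$ are not quoted in this excerpt, so the scales below are placeholders chosen only to land in the right ballpark; likewise, the `max()` criterion combining the rate condition with Boltzmann suppression of the heavy fermions is our reading of why the $b^{\prime}$ channels lapse near $500$ GeV, not a procedure stated by the authors.

```python
# Equilibrium window of a dark-Higgs-mediated four-fermion interaction,
# T_fo = ((m_h' * w)^4 / (4 m1^2 m2^2 m_Pl))^(1/3), all masses in GeV.
M_PL = 1.22e19  # Planck mass [GeV]

def t_freezeout(m_hprime, w, m1, m2):
    """Temperature below which the m1-m2 four-fermion interaction freezes out."""
    return ((m_hprime * w) ** 4 / (4.0 * m1 ** 2 * m2 ** 2 * M_PL)) ** (1.0 / 3.0)

def constraint_lapse_temperature(m_hprime, w, m1, m2):
    # Plausible combined criterion: the chemical-potential constraint applies
    # only while the interaction is in equilibrium AND both fermions are still
    # in the bath, i.e. down to roughly max(T_fo, m1, m2).  This is why the
    # channels involving the 500 GeV b' lapse near T ~ m_b', not at their
    # (lower) rate-based T_fo.
    return max(t_freezeout(m_hprime, w, m1, m2), m1, m2)

# Placeholder scales (NOT quoted in the text): m_h' ~ w ~ 1e4 GeV.
MH, W = 1.0e4, 1.0e4
print(f"{t_freezeout(MH, W, 50.0, 50.0):.0f}")  # c'-mu': same order as the quoted ~60 GeV
print(f"{constraint_lapse_temperature(MH, W, 500.0, 50.0):.0f}")  # c'-b': 500 GeV
```

Because Cheng-Sher couplings grow with the fermion masses, the rate criterion alone gives a *lower* freeze-out temperature for heavier final states; it is the heavy fermions leaving the bath that pushes the $b^{\prime}$ channels to $T\sim 500$ GeV.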
So, after the constraints
$\mu_{u_{R}^{\prime}}-\mu_{u_{L}^{\prime}}+\mu_{d_{R}^{\prime}}-\mu_{d_{L}^{\prime}}=0$
and
$\mu_{d_{R}^{\prime}}-\mu_{d_{L}^{\prime}}-\mu_{e_{R}^{\prime}}+\mu_{e_{L}^{\prime}}=0$
freeze out at $T\sim 500$ GeV, the conserved charge
$\mathcal{L}_{3}=\mu_{u_{R}^{\prime}}-\mu_{d_{R}^{\prime}}+\mu_{\nu_{R}^{\prime}}-\mu_{e_{R}^{\prime}}$
splits into two conserved charges
$\mathcal{L}_{3}=\mu_{u_{R}^{\prime}}-\mu_{e_{R}^{\prime}}$ and
$\mathcal{L}_{4}=\mu_{d_{R}^{\prime}}-\mu_{\nu_{R}^{\prime}}$.
The next stage is shortly after the vEWPT at the electroweak sphaleron freeze
out temperature $T\sim 150$ GeV. This assumes the vEWPT is crossover as it is
in the SM [61]; we make this assumption since the dynamics of the vEWPT are
controlled by the couplings of $\Phi_{1}$ which are very similar to that of
the SM Higgs doublet. After this point, the charge
$\mathcal{L}_{1}=B+B^{\prime}-L$ splits into two conserved charges,
$\mathcal{L}_{1}=B+B^{\prime}$ and $\mathcal{L}_{5}=L$.
After the freeze out of the visible Higgs around its mass of $125$ GeV, there
is a new conserved charge
$\mathcal{L}_{6}=\mu_{u_{L}}-\mu_{d_{L}}+\mu_{\nu_{L}}-\mu_{e_{L}}$. The final
new conserved charge is
$\mathcal{L}_{7}=\mu_{u_{R}^{\prime}}+\mu_{d_{R}^{\prime}}$, which becomes
conserved following the freeze out of the dark Higgs-mediated constraint
$\mu_{u_{R}^{\prime}}-\mu_{u_{L}^{\prime}}+\mu_{e_{R}^{\prime}}-\mu_{e_{L}^{\prime}}=0$
at $60$ GeV. A summary of these conserved charges and the temperatures at
which they are first conserved is presented in Table 7.
$T$ [GeV] | Conserved Charges
---|---
$10^{5}$ | $\\{B+B^{\prime}-L$, $L^{\prime}\\}$
$10^{4}$ | $\\{B+B^{\prime}-L$, $L^{\prime}$, $\mu_{u_{R}^{\prime}}-\mu_{d_{R}^{\prime}}+\mu_{\nu_{R}^{\prime}}-\mu_{e_{R}^{\prime}}\\}$
$500$ | $\\{B+B^{\prime}-L$, $L^{\prime}$, $\mu_{u_{R}^{\prime}}-\mu_{e_{R}^{\prime}}$, $\mu_{d_{R}^{\prime}}-\mu_{\nu_{R}^{\prime}}\\}$
$150$ | $\\{B+B^{\prime}$, $L^{\prime}$, $\mu_{u_{R}^{\prime}}-\mu_{e_{R}^{\prime}}$, $\mu_{d_{R}^{\prime}}-\mu_{\nu_{R}^{\prime}}$, $L\\}$
$125$ | $\\{B+B^{\prime}$, $L^{\prime}$, $\mu_{u_{R}^{\prime}}-\mu_{e_{R}^{\prime}}$, $\mu_{d_{R}^{\prime}}-\mu_{\nu_{R}^{\prime}}$, $L$, $\mu_{u_{L}}-\mu_{d_{L}}+\mu_{\nu_{L}}-\mu_{e_{L}}\\}$
$60$ | $\\{B+B^{\prime}$, $L^{\prime}$, $\mu_{u_{R}^{\prime}}-\mu_{e_{R}^{\prime}}$, $\mu_{d_{R}^{\prime}}-\mu_{\nu_{R}^{\prime}}$, $L$, $\mu_{u_{L}}-\mu_{d_{L}}+\mu_{\nu_{L}}-\mu_{e_{L}}$, $\mu_{u_{R}^{\prime}}+\mu_{d_{R}^{\prime}}\\}$
Table 7: The set of conserved charges $\\{\mathcal{L}_{i}\\}$ along with the
approximate temperature $T$ at which each new charge first becomes conserved.
#### V.2.2 Results
At around $1$ GeV, just before the dark QHPT commences, we can calculate
$B^{\prime}$ and $B$ in terms of the seven conserved charges. Starting with
the initial conditions $\mathcal{L}_{1}=X$ and $\mathcal{L}_{2}=Y$, we determine
each new conserved charge in terms of $X$ and $Y$; continuing all the way to
$1$ GeV, we obtain
$B^{\prime}_{f}\simeq 0.28X+0.07Y,\quad B_{f}\simeq 0.24X-0.003Y.$ (73)
Consider a case where no dark lepton asymmetry is generated during dark EWBG
and thus $Y=0$; this could easily arise if the Yukawa coupling of the dark top
quark follows the SM pattern and is much higher than the rest of the dark
Yukawa couplings. Then, the final ratio of baryon numbers is given by
$\left.\frac{B^{\prime}_{f}}{B_{f}}\right|_{Y=0}\simeq 1.1.$ (74)
So, the neutron portal scenario we consider can naturally generate similar
asymmetries in visible and dark baryon number. We also check when the
constraint $B^{\prime}/B<5$ is satisfied in terms of the relative sizes of the
initial dark baryon and lepton asymmetries, finding that it holds for
$Y/X\lesssim 11.4$. Thus, we can remain fairly agnostic about the specifics of
the asymmetry generation through dark EWBG, as a reasonable baryon number
ratio can be obtained for an initial lepton asymmetry up to an order of
magnitude larger than the initial baryon asymmetry.
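Both quoted numbers follow directly from Eq. (73). The short check below uses the rounded coefficients printed in the text, so its outputs differ slightly from the quoted $1.1$ and $11.4$; that offset is purely rounding.

```python
# Final asymmetries from Eq. (73): B'_f ~ 0.28 X + 0.07 Y, B_f ~ 0.24 X - 0.003 Y.
def b_dark(x, y):
    return 0.28 * x + 0.07 * y

def b_visible(x, y):
    return 0.24 * x - 0.003 * y

# Ratio with no initial dark lepton asymmetry (Y = 0):
ratio_y0 = b_dark(1.0, 0.0) / b_visible(1.0, 0.0)
print(round(ratio_y0, 2))  # 1.17 with rounded coefficients (text quotes ~1.1)

# Largest r = Y/X still satisfying B'/B < 5:
# 0.28 + 0.07 r = 5 (0.24 - 0.003 r)  =>  r = (1.2 - 0.28) / (0.07 + 0.015)
r_max = (5 * 0.24 - 0.28) / (0.07 + 5 * 0.003)
print(round(r_max, 1))  # 10.8 with rounded coefficients (text quotes 11.4)
assert abs(b_dark(1.0, r_max) / b_visible(1.0, r_max) - 5.0) < 1e-9
```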
## VI Conclusion
The 5:1 ratio between the present-day mass densities of dark matter and
visible matter is one of the few tantalising hints we have towards the
fundamental nature of dark matter. This apparent coincidence of both
cosmological number densities and mass scales between dark and visible relic
species suggests a deep link between the two forms of matter, motivating the
search for a comprehensive dark matter model where this relationship arises
naturally.
While asymmetric dark matter models provide a variety of ways to relate the
number densities of visible and dark matter, relating the particle masses
presents a challenge that is more difficult and thus less frequently
addressed. In this work we focused on extending the mirror two Higgs doublet
model of Ref. [24], where the dark matter consists of neutrons of a dark QCD
whose confinement scale is related to $\Lambda_{\mathrm{QCD}}$ by a discrete
symmetry that is spontaneously broken at a high scale. While this earlier work
generated related visible and dark baryon number densities through thermal
leptogenesis, we sought to implement electroweak baryogenesis at the dark
electroweak phase transition as the method for generating a particle
asymmetry.
In this work we did not present a fully detailed theory of electroweak
baryogenesis; rather, we completed some preliminary steps to demonstrate the
feasibility of such a model, and to show that it could be naturally realised
within the mirror 2HDM framework of Ref. [24]. We first showed in Section III
that for a number of regions of parameter space the dark electroweak phase
transition is strongly first-order, as is necessary to provide the out-of-
equilibrium dynamics in EWBG. In Section IV we then considered the
reprocessing of the dark baryon asymmetry generated through dark EWBG by
cross-sector effective operator interactions. For both interactions we
analysed – the neutron portal and the lepton portal – the final visible and
dark baryon number densities obtained were of a similar order. However, in the
case of the lepton portal, the present-day baryon asymmetry ratio depended on
the relative sizes of the dark lepton and baryon asymmetries generated at the
dEWPT. Determining these initial conditions requires a full calculation of the
EWBG dynamics.
In addition to providing the initial conditions for the lepton portal, a full
EWBG calculation would also show whether a sufficiently large dark baryon
asymmetry can be generated; that is, large enough that the correct visible
baryon number density is reproduced following the asymmetry transfer. The size
of the asymmetry depends upon the strength of the dEWPT and the magnitude of
$CP$-violation in the Yukawa interactions with $\Phi_{2}^{\prime}$ at the
bubble walls. Given the largely unconstrained couplings of the secondary Higgs
doublet, we expect that there is sufficient freedom to generate the required
baryon asymmetry; regardless, a quantitative EWBG analysis is necessary to
turn this work into a complete theory.
Lastly, in Section V we considered the difficult issue of alleviating the BBN
bounds on dark radiation, which presents a common challenge in ADM theories.
The most promising and natural solution given in the original paper was a
neutron portal operator holding the visible and dark sectors in thermal
equilibrium until a point shortly after the dark quark-hadron phase
transition. The notion of the neutron portal serving a dual role by
transferring asymmetries and addressing the dark radiation issue is greatly
appealing; however, we showed it is difficult to implement the neutron portal
down to temperatures around $1$ GeV. In particular, we identified a UV
completion that allowed the neutron portal to operate successfully, but only
if the gauge singlet fermions $N_{R}$ and $N_{L}^{\prime}$ have a mass just
larger than the dark neutron. This is quite a tight restriction, and presents
an unwanted source of fine-tuning. Given this UV completion, we then
calculated the asymmetry transfer – noting the large uncertainty introduced by
the non-perturbative dynamics of the dark QHPT – and showed that the neutron
portal can generate visible and dark baryon asymmetries of a similar order
while also helping to obey the dark radiation constraints.
## Acknowledgements
We thank Stephen Lonsdale for helpful correspondence. This work was supported
in part by the Australian Research Council and the Australian Government
Research Training Program Scholarship initiative.
## References
* Hu _et al._ [2000] W. Hu, R. Barkana, and A. Gruzinov, Cold and fuzzy dark matter, Phys. Rev. Lett. 85, 1158 (2000), arXiv:astro-ph/0003365 .
* Carr and Kuhnel [2020] B. Carr and F. Kuhnel, Primordial Black Holes as Dark Matter: Recent Developments, Ann. Rev. Nucl. Part. Sci. 70, 355 (2020), arXiv:2006.02838 [astro-ph.CO] .
* Feng [2010] J. L. Feng, Dark Matter Candidates from Particle Physics and Methods of Detection, Ann. Rev. Astron. Astrophys. 48, 495 (2010), arXiv:1003.0904 [astro-ph.CO] .
* Aghanim _et al._ [2020] N. Aghanim _et al._ (Planck), Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641, A6 (2020), arXiv:1807.06209 [astro-ph.CO] .
* Davoudiasl and Mohapatra [2012] H. Davoudiasl and R. N. Mohapatra, On Relating the Genesis of Cosmic Baryons and Dark Matter, New J. Phys. 14, 095011 (2012), arXiv:1203.1247 [hep-ph] .
* Petraki and Volkas [2013] K. Petraki and R. R. Volkas, Review of asymmetric dark matter, Int. J. Mod. Phys. A28, 1330028 (2013), arXiv:1305.4939 [hep-ph] .
* Zurek [2014] K. M. Zurek, Asymmetric Dark Matter: Theories, Signatures, and Constraints, Phys. Rept. 537, 91 (2014), arXiv:1308.0338 [hep-ph] .
* Nussinov [1985] S. Nussinov, Technocosmology: could a technibaryon excess provide a ’natural’ missing mass candidate?, Phys. Lett. B 165, 55 (1985).
* Barr _et al._ [1990] S. M. Barr, R. Chivukula, and E. Farhi, Electroweak Fermion Number Violation and the Production of Stable Particles in the Early Universe, Phys. Lett. B 241, 387 (1990).
* Hodges [1993] H. Hodges, Mirror baryons as the dark matter, Phys. Rev. D 47, 456 (1993).
* Foot [2004a] R. Foot, Mirror matter-type dark matter, Int. J. Mod. Phys. D 13, 2161 (2004a), arXiv:astro-ph/0407623 .
* Chacko _et al._ [2006] Z. Chacko, H.-S. Goh, and R. Harnik, The Twin Higgs: Natural electroweak breaking from mirror symmetry, Phys. Rev. Lett. 96, 231802 (2006), arXiv:hep-ph/0506256 .
* Kribs _et al._ [2010] G. D. Kribs, T. S. Roy, J. Terning, and K. M. Zurek, Quirky Composite Dark Matter, Phys. Rev. D 81, 095001 (2010), arXiv:0909.2034 [hep-ph] .
* An _et al._ [2010] H. An, S.-L. Chen, R. N. Mohapatra, and Y. Zhang, Leptogenesis as a Common Origin for Matter and Dark Matter, JHEP 03, 124, arXiv:0911.4463 [hep-ph] .
* Frandsen and Sarkar [2010] M. T. Frandsen and S. Sarkar, Asymmetric dark matter and the Sun, Phys. Rev. Lett. 105, 011301 (2010), arXiv:1003.4505 [hep-ph] .
* Cline _et al._ [2014] J. M. Cline, Z. Liu, G. Moore, and W. Xue, Composite strongly interacting dark matter, Phys. Rev. D 90, 015023 (2014), arXiv:1312.3325 [hep-ph] .
* Appelquist _et al._ [2013] T. Appelquist _et al._ (Lattice Strong Dynamics (LSD)), Lattice Calculation of Composite Dark Matter Form Factors, Phys. Rev. D 88, 014502 (2013), arXiv:1301.1693 [hep-ph] .
* Farina [2015] M. Farina, Asymmetric Twin Dark Matter, JCAP 11, 017, arXiv:1506.03520 [hep-ph] .
* Garcia Garcia _et al._ [2015] I. Garcia Garcia, R. Lasenby, and J. March-Russell, Twin Higgs Asymmetric Dark Matter, Phys. Rev. Lett. 115, 121801 (2015), arXiv:1505.07410 [hep-ph] .
* Farina _et al._ [2016] M. Farina, A. Monteux, and C. S. Shin, Twin mechanism for baryon and dark matter asymmetries, Phys. Rev. D 94, 035017 (2016), arXiv:1604.08211 [hep-ph] .
* Beauchesne [2020] H. Beauchesne, Mirror neutrons as dark matter in the Mirror Twin Two Higgs Doublet Model, JHEP 09, 048, arXiv:2007.00052 [hep-ph] .
* Bai and Schwaller [2014] Y. Bai and P. Schwaller, Scale of dark QCD, Phys. Rev. D 89, 063522 (2014), arXiv:1306.4676 [hep-ph] .
* Newstead and TerBeek [2014] J. L. Newstead and R. H. TerBeek, Reach of threshold-corrected dark QCD, Phys. Rev. D 90, 074008 (2014), arXiv:1405.7427 [hep-ph] .
* Lonsdale and Volkas [2018] S. J. Lonsdale and R. R. Volkas, Comprehensive asymmetric dark matter model, Phys. Rev. D97, 103510 (2018), arXiv:1801.05561 [hep-ph] .
* Lee and Yang [1956] T. Lee and C.-N. Yang, Question of Parity Conservation in Weak Interactions, Phys. Rev. 104, 254 (1956).
* Kobzarev _et al._ [1966] I. Kobzarev, L. Okun, and I. Pomeranchuk, On the possibility of experimental observation of mirror particles, Sov. J. Nucl. Phys. 3, 837 (1966).
* Pavsic [1974] M. Pavsic, External inversion, internal inversion, and reflection invariance, Int. J. Theor. Phys. 9, 229 (1974), arXiv:hep-ph/0105344 .
* Blinnikov and Khlopov [1982] S. I. Blinnikov and M. Yu. Khlopov, On possible effects of ’mirror’ particles, Sov. J. Nucl. Phys. 36, 472 (1982), [Yad. Fiz.36,809(1982)].
* Foot _et al._ [1991] R. Foot, H. Lew, and R. R. Volkas, A Model with fundamental improper space-time symmetries, Phys. Lett. B272, 67 (1991).
* Foot _et al._ [1992] R. Foot, H. Lew, and R. Volkas, Possible consequences of parity conservation, Mod. Phys. Lett. A 7, 2567 (1992).
* Foot and Volkas [1995] R. Foot and R. Volkas, Neutrino physics and the mirror world: How exact parity symmetry explains the solar neutrino deficit, the atmospheric neutrino anomaly and the LSND experiment, Phys. Rev. D 52, 6595 (1995), arXiv:hep-ph/9505359 .
* Berezhiani _et al._ [2001] Z. Berezhiani, D. Comelli, and F. L. Villante, The Early mirror universe: Inflation, baryogenesis, nucleosynthesis and dark matter, Phys. Lett. B503, 362 (2001), arXiv:hep-ph/0008105 [hep-ph] .
* Bento and Berezhiani [2002] L. Bento and Z. Berezhiani, Baryon asymmetry, dark matter and the hidden sector, Fortsch. Phys. 50, 489 (2002).
* Berezhiani [2004] Z. Berezhiani, Mirror world and its cosmological consequences, Int. J. Mod. Phys. A 19, 3775 (2004), arXiv:hep-ph/0312335 .
* Ignatiev and Volkas [2003] A. Ignatiev and R. Volkas, Mirror dark matter and large scale structure, Phys. Rev. D 68, 023518 (2003), arXiv:hep-ph/0304260 .
* Foot [2004b] R. Foot, Implications of the DAMA and CRESST experiments for mirror matter type dark matter, Phys. Rev. D 69, 036001 (2004b), arXiv:hep-ph/0308254 .
* Foot and Volkas [2003] R. Foot and R. Volkas, Was ordinary matter synthesized from mirror matter? An attempt to explain why $\Omega_{\mathrm{Baryon}}\approx 0.2\Omega_{\mathrm{Dark}}$, Phys. Rev. D 68, 021304 (2003), arXiv:hep-ph/0304261 .
* Foot and Volkas [2004] R. Foot and R. R. Volkas, Explaining $\Omega_{\mathrm{Baryon}}\approx 0.2\Omega_{\mathrm{Dark}}$ through the synthesis of ordinary matter from mirror matter: A more general analysis, Phys. Rev. D69, 123510 (2004), arXiv:hep-ph/0402267 [hep-ph] .
* Ciarcelluti [2005a] P. Ciarcelluti, Cosmology with mirror dark matter. 1. Linear evolution of perturbations, Int. J. Mod. Phys. D 14, 187 (2005a), arXiv:astro-ph/0409630 .
* Ciarcelluti [2005b] P. Ciarcelluti, Cosmology with mirror dark matter. 2. Cosmic microwave background and large scale structure, Int. J. Mod. Phys. D 14, 223 (2005b), arXiv:astro-ph/0409633 .
* Foot [2010] R. Foot, A comprehensive analysis of the dark matter direct detection experiments in the mirror dark matter framework, Phys. Rev. D 82, 095001 (2010), arXiv:1008.0685 [hep-ph] .
* Foot [2014] R. Foot, Mirror dark matter: Cosmology, galaxy structure and direct detection, Int. J. Mod. Phys. A 29, 1430013 (2014), arXiv:1401.3965 [astro-ph.CO] .
* Cerulli _et al._ [2017] R. Cerulli, P. Villar, F. Cappella, R. Bernabei, P. Belli, A. Incicchitti, A. Addazi, and Z. Berezhiani, DAMA annual modulation and mirror Dark Matter, Eur. Phys. J. C 77, 83 (2017), arXiv:1701.08590 [hep-ex] .
* Berezhiani [1996] Z. Berezhiani, Astrophysical implications of the mirror world with broken mirror parity, Acta Phys. Polon. B 27, 1503 (1996), arXiv:hep-ph/9602326 .
* Foot _et al._ [2000] R. Foot, H. Lew, and R. Volkas, Unbroken versus broken mirror world: A Tale of two vacua, JHEP 07, 032, arXiv:hep-ph/0006027 .
* Berezhiani and Lepidi [2009] Z. Berezhiani and A. Lepidi, Cosmological bounds on the ’millicharges’ of mirror particles, Phys. Lett. B 681, 276 (2009), arXiv:0810.1317 [hep-ph] .
* Cui _et al._ [2012] J.-W. Cui, H.-J. He, L.-C. Lu, and F.-R. Yin, Spontaneous Mirror Parity Violation, Common Origin of Matter and Dark Matter, and the LHC Signatures, Phys. Rev. D 85, 096003 (2012), arXiv:1110.6893 [hep-ph] .
* Gu [2013] P.-H. Gu, From Dirac neutrino masses to baryonic and dark matter asymmetries, Nucl. Phys. B 872, 38 (2013), arXiv:1209.4579 [hep-ph] .
* Addazi _et al._ [2015] A. Addazi, Z. Berezhiani, R. Bernabei, P. Belli, F. Cappella, R. Cerulli, and A. Incicchitti, DAMA annual modulation effect and asymmetric mirror matter, Eur. Phys. J. C 75, 400 (2015), arXiv:1507.04317 [hep-ex] .
* Lonsdale and Volkas [2014] S. J. Lonsdale and R. R. Volkas, Grand unified hidden-sector dark matter, Phys. Rev. D90, 083501 (2014), [Erratum: Phys. Rev.D91,no.12,129906(2015)], arXiv:1407.4192 [hep-ph] .
* Lonsdale [2015] S. J. Lonsdale, Unified dark matter with intermediate symmetry breaking scales, Phys. Rev. D 91, 125019 (2015), arXiv:1412.1894 [hep-ph] .
* Caprini _et al._ [2020] C. Caprini _et al._ , Detecting gravitational waves from cosmological phase transitions with LISA: an update, JCAP 03, 024, arXiv:1910.13125 [astro-ph.CO] .
* Shelton and Zurek [2010] J. Shelton and K. M. Zurek, Darkogenesis: A baryon asymmetry from the dark matter sector, Phys. Rev. D82, 123512 (2010), arXiv:1008.1997 [hep-ph] .
* Feng _et al._ [2013] W.-Z. Feng, A. Mazumdar, and P. Nath, Baryogenesis from dark matter, Phys. Rev. D88, 036014 (2013), arXiv:1302.0012 [hep-ph] .
* Anderson _et al._ [2001] S. Anderson _et al._ (CLEO), Improved upper limits on the FCNC decays $B\to K\ell^{+}\ell^{-}$ and $B\to$ K*(892) $\ell^{+}\ell^{-}$, Phys. Rev. Lett. 87, 181803 (2001), arXiv:hep-ex/0106060 [hep-ex] .
* Branco _et al._ [2012] G. C. Branco, P. M. Ferreira, L. Lavoura, M. N. Rebelo, M. Sher, and J. P. Silva, Theory and phenomenology of two-Higgs-doublet models, Phys. Rept. 516, 1 (2012), arXiv:1106.0034 [hep-ph] .
* Cheng and Sher [1987] T. Cheng and M. Sher, Mass Matrix Ansatz and Flavor Nonconservation in Models with Multiple Higgs Doublets, Phys. Rev. D 35, 3484 (1987).
* Atwood _et al._ [1997] D. Atwood, L. Reina, and A. Soni, Phenomenology of two Higgs doublet models with flavor changing neutral currents, Phys. Rev. D 55, 3156 (1997), arXiv:hep-ph/9609279 .
* Morrissey and Ramsey-Musolf [2012] D. E. Morrissey and M. J. Ramsey-Musolf, Electroweak baryogenesis, New J. Phys. 14, 125003 (2012), arXiv:1206.2942 [hep-ph] .
* Sakharov [1967] A. D. Sakharov, Violation of CP Invariance, C asymmetry, and baryon asymmetry of the universe, Pisma Zh. Eksp. Teor. Fiz. 5, 32 (1967), [Usp. Fiz. Nauk161,no.5,61(1991)].
* Gurtler _et al._ [1998] M. Gurtler, E.-M. Ilgenfritz, and A. Schiller, The Endpoint of the electroweak phase transition, _Contents of LAT97 proceedings_ , Nucl. Phys. Proc. Suppl. 63, 566 (1998), [,566(1997)], arXiv:hep-lat/9709019 [hep-lat] .
* Gavela _et al._ [1994] M. B. Gavela, P. Hernandez, J. Orloff, and O. Pene, Standard model CP violation and baryon asymmetry, Mod. Phys. Lett. A9, 795 (1994), arXiv:hep-ph/9312215 [hep-ph] .
* Dolan and Jackiw [1974] L. Dolan and R. Jackiw, Symmetry Behavior at Finite Temperature, Phys. Rev. D9, 3320 (1974).
* Jackiw [1974] R. Jackiw, Functional evaluation of the effective potential, Phys. Rev. D9, 1686 (1974).
* Coleman and Weinberg [1973] S. R. Coleman and E. J. Weinberg, Radiative Corrections as the Origin of Spontaneous Symmetry Breaking, Phys. Rev. D7, 1888 (1973).
* Quiros [1999] M. Quiros, Finite temperature field theory and phase transitions, in _Proceedings, Summer School in High-energy physics and cosmology: Trieste, Italy, June 29-July 17, 1998_ (1999) pp. 187–259, arXiv:hep-ph/9901312 [hep-ph] .
* Wainwright [2012] C. L. Wainwright, CosmoTransitions: Computing Cosmological Phase Transition Temperatures and Bubble Profiles with Multiple Fields, Comput. Phys. Commun. 183, 2006 (2012), arXiv:1109.4189 [hep-ph] .
* Cline [2006] J. M. Cline, Baryogenesis, in _Les Houches Summer School - Session 86: Particle Physics and Cosmology: The Fabric of Spacetime Les Houches, France, July 31-August 25, 2006_ (2006) arXiv:hep-ph/0609145 [hep-ph] .
* Espinosa _et al._ [1993] J. R. Espinosa, M. Quiros, and F. Zwirner, On the nature of the electroweak phase transition, Phys. Lett. B314, 206 (1993), arXiv:hep-ph/9212248 [hep-ph] .
* Cline _et al._ [2011] J. M. Cline, K. Kainulainen, and M. Trott, Electroweak Baryogenesis in Two Higgs Doublet Models and B meson anomalies, JHEP 11, 089, arXiv:1107.3559 [hep-ph] .
* Parwani [1992] R. R. Parwani, Resummation in a hot scalar field theory, Phys. Rev. D45, 4695 (1992), [Erratum: Phys. Rev.D48,5965(1993)], arXiv:hep-ph/9204216 [hep-ph] .
* Arnold and Espinosa [1993] P. B. Arnold and O. Espinosa, The Effective potential and first order phase transitions: Beyond leading-order, Phys. Rev. D47, 3546 (1993), [Erratum: Phys. Rev.D50,6662(1994)], arXiv:hep-ph/9212235 [hep-ph] .
* Kainulainen _et al._ [2019] K. Kainulainen, V. Keus, L. Niemi, K. Rummukainen, T. V. I. Tenkanen, and V. Vaskonen, On the validity of perturbative studies of the electroweak phase transition in the Two Higgs Doublet model, JHEP 06, 075, arXiv:1904.01329 [hep-ph] .
* Patel and Ramsey-Musolf [2011] H. H. Patel and M. J. Ramsey-Musolf, Baryon Washout, Electroweak Phase Transition, and Perturbation Theory, JHEP 07, 029, arXiv:1101.4665 [hep-ph] .
* Cline and Lemieux [1997] J. M. Cline and P.-A. Lemieux, Electroweak phase transition in two Higgs doublet models, Phys. Rev. D55, 3873 (1997), arXiv:hep-ph/9609240 [hep-ph] .
* Fromme _et al._ [2006] L. Fromme, S. J. Huber, and M. Seniuch, Baryogenesis in the two-Higgs doublet model, JHEP 11, 038, arXiv:hep-ph/0605242 [hep-ph] .
* Dine _et al._ [1992] M. Dine, R. G. Leigh, P. Y. Huet, A. D. Linde, and D. A. Linde, Towards the theory of the electroweak phase transition, Phys. Rev. D46, 550 (1992), arXiv:hep-ph/9203203 [hep-ph] .
* Buchmuller [2000] W. Buchmuller, Some aspects of baryogenesis and lepton number violation, in _Recent developments in particle physics and cosmology: Proceedings. NATO ASI 2000. Cascais, Portugal, July 26 - Jul 7, 2000_ (2000) pp. 281–314, arXiv:hep-ph/0101102 [hep-ph] .
* Kuzmin _et al._ [1985] V. A. Kuzmin, V. A. Rubakov, and M. E. Shaposhnikov, On the Anomalous Electroweak Baryon Number Nonconservation in the Early Universe, Phys. Lett. 155B, 36 (1985).
* Mohapatra and Zhang [1992] R. N. Mohapatra and X.-m. Zhang, QCD sphalerons at high temperature and baryogenesis at electroweak scale, Phys. Rev. D45, 2699 (1992).
* Harvey and Turner [1990] J. A. Harvey and M. S. Turner, Cosmological baryon and lepton number in the presence of electroweak fermion number violation, Phys. Rev. D42, 3344 (1990).
* Kolb and Turner [1990] E. W. Kolb and M. S. Turner, The Early Universe, Front. Phys. 69, 1 (1990).
* Berezhiani and Bento [2006] Z. Berezhiani and L. Bento, Neutron - mirror neutron oscillations: How fast might they be?, Phys. Rev. Lett. 96, 081801 (2006), arXiv:hep-ph/0507031 .
* Gu and Sarkar [2011] P.-H. Gu and U. Sarkar, Baryogenesis and neutron-antineutron oscillation at TeV, Phys. Lett. B 705, 170 (2011), arXiv:1107.0173 [hep-ph] .
* Sirunyan _et al._ [2018] A. M. Sirunyan _et al._ (CMS), Search for narrow and broad dijet resonances in proton-proton collisions at $\sqrt{s}=13$ TeV and constraints on dark matter mediators and other new particles, JHEP 08, 130, arXiv:1806.00843 [hep-ex] .
# SOSD-Net: Joint Semantic Object Segmentation and Depth Estimation from
Monocular images
Lei He <EMAIL_ADDRESS>, Jiwen Lu <EMAIL_ADDRESS>, Guanghui Wang <EMAIL_ADDRESS>, Shiyu Song <EMAIL_ADDRESS>, Jie Zhou <EMAIL_ADDRESS>
Baidu Autonomous Driving Technology Department (ADT)
Beijing National Research Center for Information Science and Technology (BNRist), Department of Automation, Tsinghua University, Beijing, 100084, China
Department of Computer Science, Ryerson University, Toronto, ON, Canada M5B 2K3
Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
###### Abstract
Depth estimation and semantic segmentation play essential roles in scene
understanding. The state-of-the-art methods employ multi-task learning to
simultaneously learn models for these two tasks at the pixel-wise level. They
usually focus on sharing the common features or stitching feature maps from
the corresponding branches. However, these methods lack in-depth consideration
of the correlation between the geometric cues and the scene parsing. In this paper,
we first introduce the concept of semantic objectness to exploit the geometric
relationship of these two tasks through an analysis of the imaging process,
then propose a Semantic Object Segmentation and Depth Estimation Network
(SOSD-Net) based on the objectness assumption. To the best of our knowledge,
SOSD-Net is the first network that exploits the geometry constraint for
simultaneous monocular depth estimation and semantic segmentation. In
addition, considering the mutual implicit relationship between these two
tasks, we exploit the iterative idea from the expectation-maximization
algorithm to train the proposed network more effectively. Extensive
experimental results on the Cityscapes and NYU v2 dataset are presented to
demonstrate the superior performance of the proposed approach.
###### keywords:
semantic objectness , depth estimation , semantic estimation , object
segmentation
## 1 Introduction
Depth estimation and semantic segmentation, as two major components in scene
understanding, have received a lot of attention in the computer vision
community. In recent years, with the successful applications of deep
convolutional neural networks, the performance of depth estimation and
semantic segmentation has been greatly improved [1, 2, 3, 4, 5], owing to the
superior representation ability of the deep features over the classical
handcrafted features [6, 7, 8, 9, 10].
Monocular depth estimation is an essential approach in understanding the 3D
geometry of a scene [11, 4, 12, 5, 13]. Depth estimation is usually formulated
as a regression problem that assigns each pixel a continuous depth value.
However, this task suffers from an inherent ambiguity in the absence of scene
priors, as analyzed in He et al. [12]. The scene priors refer to elements that
can remedy the ambiguity of monocular depth estimation, such as the physical
size of the objects in the scene and the focal length of the camera. To
improve the accuracy of monocular depth estimation, these prior elements need
to be properly integrated into the network during training and inference. With
the multi-scale fusion and
hierarchical representation of deep networks, the precision of semantic
segmentation [1, 2, 14] has been greatly improved. Nevertheless, most
segmentation models have limitations in certain scenarios, like segmenting
slender objects such as poles. If we can obtain an accurate depth map, there
generally exists a depth margin between the poles and the surrounding
background or objects. Thus, the depth information can greatly help to improve
the segmentation performance, especially in challenging situations.
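The role of such priors can be made concrete with the pinhole camera model: an object of known physical height $H$, imaged at apparent height $h$ pixels under focal length $f$ pixels, sits at depth $Z = fH/h$. The numbers below are hypothetical and purely for illustration.

```python
def depth_from_known_size(focal_px, real_height_m, image_height_px):
    """Pinhole relation Z = f * H / h: a single image fixes depth only up to
    scale unless a prior such as the object's physical size pins it down."""
    return focal_px * real_height_m / image_height_px

# Hypothetical example: a 1.5 m pedestrian imaged at 150 px with f = 800 px.
z_near = depth_from_known_size(800.0, 1.5, 150.0)  # 8.0 m
z_far = depth_from_known_size(800.0, 1.5, 75.0)    # 16.0 m: half the apparent
print(z_near, z_far)                               # size, twice the depth
```

Semantic labels supply exactly this kind of size prior (a "person" or "pole" class implies a typical physical extent), which is one way to read the geometric coupling between the two tasks that this paper exploits.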
In order to explore the correlation between depth information and semantic
segmentation, jointly training a network to simultaneously learn the two tasks
has become an attractive direction in scene understanding. One popular
approach is to combine multi-task neural activations through network
architecture interaction [15, 16, 17]. However, the geometric constraint is
not explicitly explored in the fusion process. To obtain an optimal descent
direction of the common weights, another approach [18, 19, 20] is to design
joint-optimization objective functions by adaptively selecting the loss weight
of each task during the training phase. This approach pursues only a better
shared feature representation, without considering the geometric relationship
between the two tasks.
In this paper, we propose to explore the geometric relationship between
monocular semantic segmentation and depth estimation, and design a novel
neural network (SOSD-Net) to embed the semantic objectness, making it possible
to simultaneously learn the geometric cues and scene parsing, as shown in
Figure 1. The proposed network is designed strictly according to the geometric
constraints to boost up the performance of the two tasks by integrating the
information of the objectness.
Figure 1: Joint optimization of monocular depth, semantic, and semantic
objectness.
Specifically, when inferring the monocular depth information, the semantic
objectness will fuse the features from the semantic segmentation, and vice
versa. In addition, the supervised learning of the two tasks is essentially a
parameter estimation problem of a Gaussian mixture model. Inspired by the idea
of the Expectation-Maximization (EM) algorithm, we propose an effective
learning strategy to alternately optimize the weights of the scene parsing and
the geometric cues during the training phase. The proposed method is extensively
evaluated on the CityScapes [21] and NYU v2 datasets [22], and the
experimental validation shows that the SOSD-Net outperforms the state-of-the-
art multi-task approaches in the one-stage training phase, demonstrating the
effectiveness of our proposed algorithm.
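The alternating idea can be sketched on a toy problem. This is not the authors' training code: it is a generic coordinate-descent analogue of the EM-style alternation, with two scalar "task" parameters coupled through a shared term standing in for the shared features of the two branches.

```python
def joint_loss(a, b):
    # Two task-specific parameters coupled through a shared term, a stand-in
    # for the shared-feature coupling between the depth and semantic branches.
    return (a + b - 3.0) ** 2 + (a - 1.0) ** 2 + (b - 2.0) ** 2

a, b = 5.0, -4.0
history = [joint_loss(a, b)]
for _ in range(25):
    a = (4.0 - b) / 2.0   # "E-like" step: exact minimisation over a, b frozen
    b = (5.0 - a) / 2.0   # "M-like" step: exact minimisation over b, a frozen
    history.append(joint_loss(a, b))

assert all(x >= y for x, y in zip(history, history[1:]))  # monotone descent
print(round(a, 4), round(b, 4))  # converges to (1.0, 2.0), the joint minimum
```

Each half-step minimises the joint objective exactly over one parameter group, so the loss is non-increasing throughout, which is the same guarantee that motivates EM-style alternation in the full network.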
In summary, the key contributions of this paper include:
1. 1.
We propose a Semantic Object Segmentation and Depth Estimation Network
(SOSD-Net) to enhance the learning ability of joint monocular depth estimation
and semantic segmentation.
2. 2.
An effective learning strategy is proposed to alternatively update the
specific weights of SOSD-Net, which significantly improves the performance of
the two tasks.
3. 3.
We achieve competing results over the state-of-the-art one-stage models on two
popular benchmarks.
## 2 Related Work
In this section, we review the related work in the following three problems:
semantic segmentation, depth estimation, and multi-task learning.
### 2.1 Semantic Segmentation
With their powerful representational and inference abilities, many models [23, 24]
based on deep convolutional neural networks have achieved significant
improvements on several segmentation datasets, especially compared with
classical hand-crafted methods. Long et al. [1] made a breakthrough by
successfully converting a classification network into a pixel-wise
segmentation network, replacing the fully connected layers with convolutional
layers. Inspired by this idea, recent semantic segmentation networks can
be broadly classified into three categories. The first group [25, 26, 2, 27,
28] designs convolutional encoder-decoder structures to gradually capture
high-level semantic information and recover spatial information.
The second group [29, 30, 31, 32, 14] exploits multi-scale
information to better grasp global and contextual information. The last group
[33, 34, 35, 36, 37, 32] explores conditional Markov Random Fields to
optimize the segmentation result. In addition, Krešo et al. [38] propose a
novel scale selection layer to extract convolutional features at the scale of
the reconstructed depth to improve semantic segmentation performance.
### 2.2 Depth Estimation
Learning depth from a single image has been extensively studied in the
literature. To tackle this task, classic methods [39, 40, 41] usually make
strong geometric assumptions about the scene structure and employ a Markov
Random Field (MRF) to infer the depth from hand-crafted features.
Non-parametric algorithms [42, 7] are another type of classical method; they
employ global scene features to search a training database for candidate
images that are close to the input image in feature space. Other methods
are based on advanced deep learning models [11, 4, 43, 44, 45, 5, 46, 47].
Eigen et al. [11] addressed this problem by fusing the depths from a global
network and a refinement network, which was extended to a multi-scale
convolutional network with a deeper architecture [4]. Recently,
unsupervised learning methods [48, 49, 50, 51, 52] have achieved significant
progress. By exploiting epipolar geometry constraints, [48, 49, 50, 51]
take the inferred monocular depth as an intermediate result when computing the
reconstruction loss. Due to the inherent ambiguity of monocular depth
estimation, He et al. [12] proposed a novel deep neural network to remedy the
ambiguity caused by the focal length.
### 2.3 Multi-task Learning
Multi-task learning aims to improve the performance of various computer vision
problems. According to the design of network structure and loss function, the
methods of multi-task learning are mainly divided into two categories. One of
the methods [15, 53, 16, 17] is to let the network automatically learn the
connection relationship among tasks, where the loss function is a weighted sum
of all branches. Another method [18, 19, 20] is to search the optimal descent
direction of the gradient by adaptively selecting the weighting factors during
the training process. Xu et al. [54] proposed a PAD-Net to utilize auxiliary
tasks to facilitate optimizing the semantic segmentation and depth estimation.
R. Zamir et al. [55] proposes a fully computational approach to model the
structure of space of visual tasks, building the relationship between the
depth estimation and the semantic segmentation from normals. However, these
methods only directly learn the two tasks without explicitly exploring the
geometric constraints between the monocular depth estimation and semantic
segmentation. In this work, we propose an SOSD-Net to achieve a deep geometric
relationship between monocular depth estimation and semantic segmentation.
Figure 2: The projection process of a planar object, where $O$ is the optical
center, $I$ is the image of the planar object $S$, $d$ is the depth of the
object, and $f$ is the focal length.
## 3 Method
In this section, we describe the proposed SOSD-Net for monocular depth
estimation and semantic segmentation. We first introduce the geometry
constraint to embed the deep relation of the monocular depth and semantic
information, then elaborate the network architecture of the SOSD-Net. Finally,
we present the details of the proposed learning strategy.
### 3.1 Geometry Constraint
Without loss of generality, we assume the space object is planar, as shown in
Figure 2. According to the perspective projection model [56], the image of the
planar space object $S$ under $(f,O)$ is $I$, which can be formulated by the
following equation:
$d\begin{bmatrix}u_{1}\\ v_{1}\\ 1\end{bmatrix}=\begin{bmatrix}f_{x}&0&u_{x}\\ 0&f_{y}&u_{y}\\ 0&0&1\end{bmatrix}\begin{bmatrix}X_{1}\\ Y_{1}\\ d\end{bmatrix}$ (1)
where $(X_{1},Y_{1})$ are the coordinates of a space point, $(u_{1},v_{1})$ are
the coordinates of that point on the image, $(u_{x},u_{y})$ are the
coordinates of the principal point, and $(f_{x},f_{y})$ correspond to the
camera's focal length.
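As a sanity check of equation (1), a minimal numpy sketch of this pinhole projection; the intrinsic values below are illustrative assumptions, not taken from either dataset:

```python
import numpy as np

# Illustrative intrinsics (not from the paper): focal lengths and principal point.
fx, fy, ux, uy = 500.0, 500.0, 320.0, 240.0
K = np.array([[fx, 0.0, ux],
              [0.0, fy, uy],
              [0.0, 0.0, 1.0]])

def project(X, Y, d, K):
    """Pinhole projection from equation (1): d * [u, v, 1]^T = K [X, Y, d]^T."""
    uvw = K @ np.array([X, Y, d])
    return uvw[0] / d, uvw[1] / d   # divide by depth d to recover pixel coordinates

u, v = project(1.0, 2.0, 5.0, K)    # a camera-frame point at depth 5
```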
From the above projection equation, it is notable that monocular depth
estimation is an ill-posed problem, which makes it difficult to accurately
recover the true depth. However, if we only consider object-level depth
and assume that the depth within the inner region of an object is approximately
constant, the geometric relationship reduces to the following 2D-3D
size relation for an object:
$\Delta u=\frac{f_{x}\Delta X}{d},\;\Delta v=\frac{f_{y}\Delta Y}{d}$ (2)
where $\Delta u=u_{1}-u_{2},\;\Delta v=v_{1}-v_{2}$, $\Delta
X=X_{1}-X_{2},\;\Delta Y=Y_{1}-Y_{2}$. Furthermore, we can extend the above
2D-3D size information to 2D-3D area information as below.
$d^{2}=\frac{f_{x}f_{y}\Delta X\Delta Y}{\Delta u\Delta v}$ (3)
The geometric relationship in equation (3) is called semantic objectness
in this paper; it embeds the correlation between the semantics and the
corresponding depth. In general, after semantic segmentation, we can obtain
the 2D area $\Delta u\Delta v$ of an object with a simple post-processing
operation. In addition, the 3D area $\Delta X\Delta Y$ of an object under a
specific perspective is unique. Thus, we can establish a close relationship
between the object-level semantics and the corresponding depth.
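To make equation (3) concrete, the following sketch (with made-up numbers) recovers an object-level depth from its 3D area and its imaged 2D area:

```python
import numpy as np

def depth_from_areas(dX, dY, du, dv, fx, fy):
    """Equation (3): d^2 = fx*fy*(dX*dY)/(du*dv), so d is its square root."""
    return np.sqrt(fx * fy * dX * dY / (du * dv))

# A 2 x 1 (3D units) object imaged as a 200 x 100 px box, with fx = fy = 500:
d = depth_from_areas(2.0, 1.0, 200.0, 100.0, 500.0, 500.0)
```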
In practice, the depth of the inner region of most objects is not constant.
However, if only a local area of the object is taken into consideration, the
assumption of constant depth is satisfied. Current deep neural networks
can express very complex functions due to their non-linearity and large
number of parameters. Therefore, we use this powerful tool to express the
local implicit relationship, and introduce a novel deep convolutional network
(SOSD-Net) to embed the semantic objectness relation between monocular
depth estimation and semantic segmentation.
Figure 3: Our proposed SOSD-Net architecture leverages a shared-encoder
backbone and a Decoder for semantic feature, common representation and depth
feature, followed by depth-to-semantic and semantic-to-depth modules to learn
semantic segmentation and depth estimation from a single image, respectively.
### 3.2 SOSD-Net Architecture
The overall SOSD-Net architecture is depicted in Figure 3. It consists of four
components, described below: a CNN backbone to extract contextual features,
a Decoder producing three feature maps (Common Representation, Semantic
Feature, Depth Feature), a semantic-to-depth unit to learn monocular depth,
and a depth-to-semantic unit to learn semantic segmentation.
Figure 4: The detailed structure of the Backbone.
#### 3.2.1 Backbone
The backbone takes an input image and generates an intermediate feature map to
be processed by each subtask. Similar to DeepLabV3+ [14], the backbone of the
proposed SOSD-Net consists of Xception-65 [57] and three parallel components,
i.e., an atrous spatial pyramid pooling (ASPP), a cross-channel learner, and a
full-image extractor, as shown in Figure 4. The ASPP and the pure $1\times 1$
convolution are applied to effectively fuse complex contextual information,
guided by the global information from the full-image extractor.
Figure 5: The detailed structure of the Decoder. The Refined fp (green block)
is generated from the Backbone.
#### 3.2.2 Decoder
Based on the global feature map from the Backbone, the role of the decoder is
mainly to extract fine-grained feature maps for Semantic Feature, Common
Representation, and Depth Feature, respectively. In order to remedy the
structural loss caused by strided convolutions, the decoder fuses in the
Refined fp (green block) from the Backbone in Figure 3. Based on the Refined
fp, we first apply one convolution layer to extract information for each task.
Then, after combining the upsampled global feature maps, the decoder employs
one convolutional layer and two convolutional layers to generate the Semantic
Feature and the Depth Feature, respectively. The detailed parameters of the
decoder are shown in Figure 5.
Figure 6: The structure of the semantic-to-depth module.
#### 3.2.3 Semantic-to-Depth
Having obtained the common features, a classic decoder for monocular depth
estimation employs the skip-connection and upsampling modules to obtain the
high-resolution depth maps. The weights of the network are updated by
minimizing the depth loss function. In order to embed the semantic objectness
information for the monocular depth estimation, as described in equation (3),
we propose a semantic-to-depth module to effectively fuse the deep 2D-3D area
information. As shown in Figure 6, the deep 3D area information is extracted
from the common feature maps, defined as the deep area information of an
object under a certain perspective.
In order to maintain the detailed structure of the subtasks, we set the stride
of the convolutions to 1. In the semantic-to-depth unit of SOSD-Net, we
first utilize two convolution layers with 2 and 1 channels to generate a
heatmap, referred to as the 3D latent shared representation of an object and
related to $\Delta X\Delta Y$, as shown in Figure 6. In addition, we apply
another two convolution layers with 2 and 1 channels to obtain a second
heatmap from the semantic segmentation, which is the 2D latent shared
representation of an object, related to $(\Delta u\Delta v)^{-1}$. Since the
public datasets have a fixed focal length, we use batch normalization to
automatically embed the focal length information into the two sub-branches.
Next, we employ the deep 2D-3D area features to infer the depth cue via
pixel-wise multiplication and square root operations. After combining the
information from the depth features and the Pixel Sqrt, this module applies a
convolution layer with 1 channel to infer depth maps. Finally, we employ an
upsampling operation (bilinear interpolation) on this depth to obtain the
full-resolution depth.
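The core fusion step of this unit, pixel-wise multiplication of the two latent heatmaps followed by a square root, can be sketched as follows. Shapes and values are illustrative, and the names `h3d` and `h2d` are ours, not from the paper's implementation:

```python
import numpy as np

H, W = 64, 128
h3d = np.full((H, W), 25.0)   # 3D latent map, related to fx*fy*(dX*dY)
h2d = np.full((H, W), 0.04)   # 2D latent map, related to (du*dv)^-1

# Pixel-wise multiplication and a square root yield the depth cue,
# mirroring d = sqrt(fx*fy*(dX*dY)/(du*dv)) from equation (3).
depth_cue = np.sqrt(h3d * h2d)
```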
Figure 7: The structure of the depth-to-semantic module.
#### 3.2.4 Depth-to-Semantic
Similar to the semantic-to-depth unit, the semantic branch also embeds the deep
2D-3D area feature by integrating the features from the previous components.
However, in terms of implementation details, the depth-to-semantic unit
differs from the semantic-to-depth unit, as shown in Figure 7. We
first apply two convolution layers with 64 and 32 channels to generate a
latent variable from the pure depth branch, which is related to $d^{-2}$.
As for the 3D latent shared representation of an object, we obtain it by
applying another two convolution layers with 64 and 32 channels to the
common representation. After fusing the information from the two sub-branches
by pixel-wise multiplication, we employ a $1\times 1$ convolution with 32
channels to parse the semantic cue. Having concatenated the feature maps from
the semantic feature and the semantic cue, this module adds one convolution
layer to infer the semantic segmentation. To obtain the full-resolution
segmentation, we apply the same upsampling operation to increase the
resolution of the semantic segmentation.
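Symmetrically, the depth-to-semantic fusion multiplies the $d^{-2}$-related latent map with the 3D latent shared representation. A minimal sketch with illustrative values (names are ours, not from the paper's code):

```python
import numpy as np

H, W = 64, 128
inv_d2 = np.full((H, W), 0.04)   # latent map related to d^-2 (here d = 5)
h3d = np.full((H, W), 25.0)      # 3D latent shared representation

# Pixel-wise multiplication gives a quantity proportional to (du*dv)/(fx*fy),
# i.e. the image-plane extent of the object, which serves as the semantic cue.
semantic_cue = inv_d2 * h3d
```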
Algorithm 1 EM Learning Strategy
1: Initialize parameters; set $p \leftarrow 0$
2: for $i=1$ to $N$ do
3:  if $p=0$ then $\triangleright$ learning depth
4:   for $t=1$ to $3$ do $\triangleright$ $(\theta^{dep},\theta^{3d},\theta^{2d})$
5:    $\Theta(t)=\Theta(t)-\eta\nabla_{\Theta(t)}\varphi(I,x_{sem};\Theta)$
6:   end for
7:   $\theta^{com}=\theta^{com}-\eta\sum_{t=1}^{3}\alpha^{t}\nabla_{\theta^{com}}\varphi(I,x_{sem};\Theta)$
8:   $p \leftarrow 1$
9:  else $\triangleright$ learning semantics
10:   for $t=1$ to $3$ do $\triangleright$ $(\theta^{sem},\theta^{3d},\theta^{d^{-2}})$
11:    $\Theta(t)=\Theta(t)-\eta\nabla_{\Theta(t)}\varphi(I,x_{dep};\Theta)$
12:   end for
13:   $\theta^{com}=\theta^{com}-\eta\sum_{t=1}^{3}\alpha^{t}\nabla_{\theta^{com}}\varphi(I,x_{dep};\Theta)$
14:   $p \leftarrow 0$
15:  end if
16: end for
### 3.3 Learning and Loss Function
In essence, the weight learning of deep networks is equivalent to a maximum
likelihood estimation problem. In classical machine learning, simultaneously
learning depth and semantic segmentation from a single image can be regarded
as fitting a Gaussian Mixture Model (GMM), which can be effectively solved by
the EM algorithm [58]. In the process of parameter optimization, EM simplifies
a complex estimation problem: it first optimizes some parameters ($\phi_{1}$)
while fixing the other parameters ($\phi_{2}$) in the parameter space, and
then optimizes $\phi_{2}$ while fixing $\phi_{1}$, until the optimal
parameters are reached. Inspired by this strategy, we propose an effective
training method that alternately learns the weights of SOSD-Net, taking the
deep 2D-3D area information as hidden variables: it first learns the weights
of the depth branch with the weights of the semantic branch fixed, and then
learns the weights of the semantic branch with the weights of the depth
branch fixed, until the proposed model converges.
Let $Y=\varphi(I,x_{sem};\Theta)$ denote the fused outputs of the depth branch
given an image $I$ and its semantic feature $x_{sem}$, where
$\Theta=(\theta^{dep},\theta^{3d},\theta^{2d},\theta^{com})$ corresponds to
the parameters involved in the depth features, $\Delta X\Delta Y$, $(\Delta
u\Delta v)^{-1}$ and the backbone network, respectively, as shown in Figure 6.
For the learning process of the monocular depth, we first update the weights
of the semantic-to-depth unit, and then merge the backward loss from each
branch to learn the weights of the common backbone. For example, for the green
branch ($(\Delta u\Delta v)^{-1}$) in Figure 6, the weight update can be
formulated as:
$\theta^{2d}(t)=\theta^{2d}(t)-\eta\nabla_{\theta^{2d}(t)}\varphi(I,x_{sem};\Theta)$
(4)
where $\eta$ is the learning rate, and
$\nabla_{\theta^{2d}(t)}\varphi(I,x_{sem};\Theta)$ is the gradient
of $\varphi$ with respect to $\theta^{2d}$.
We use the same weight-update strategy for the purple branch and the
orange branch, respectively. Having updated the weights of the three branches,
we merge the backward loss and learn the weights of the backbone as follows:
$\theta^{com}=\theta^{com}-\eta\sum_{t=1}^{3}\alpha^{t}\nabla_{\theta^{com}}\varphi(I,x_{sem};\Theta)$
(5)
where $\alpha^{t}$ is the weighting factor of the $t$-th subnet in
gradient-based backpropagation. We set $\alpha^{t}$ to 1 in this paper.
Similarly, when learning the semantic segmentation, we use the same strategy
to learn the weights of the proposed model, taking the monocular depth
information and the deep 3D area information as hidden variables. The detailed
learning strategy is shown in Algorithm 1. Finally, the parameters of the
proposed network are learned by gradient-based backpropagation, whose goal
is to minimize the loss function defined on the prediction and the
ground truth.
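The alternating schedule of Algorithm 1 can be illustrated with a toy example, where a simple quadratic loss stands in for the depth and semantic losses. This is only a sketch of the EM-style update pattern, not the actual network training:

```python
import numpy as np

def grad(theta):
    return 2.0 * theta            # gradient of the toy loss ||theta||^2

theta_dep = np.array([4.0])       # stands in for the depth-branch weights
theta_sem = np.array([-6.0])      # stands in for the semantic-branch weights
eta, p = 0.1, 0                   # learning rate and branch flag, as in Alg. 1

for i in range(100):
    if p == 0:                    # learning depth: semantic weights stay fixed
        theta_dep = theta_dep - eta * grad(theta_dep)
        p = 1
    else:                         # learning semantics: depth weights stay fixed
        theta_sem = theta_sem - eta * grad(theta_sem)
        p = 0

# Both branches approach the optimum even though they are never updated jointly.
```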
Semantic segmentation. The cross-entropy loss is employed to learn the
pixel-wise class probabilities; it is obtained by averaging the loss over the
pixels with semantic labels in each mini-batch during the training phase.
$L_{semantic}=-\frac{1}{N}\sum\limits_{i=1}^{N}c^{*}_{i}\log(c_{i})$ (6)
where $c_{i}=e^{z_{i,c}}/\sum_{c}e^{z_{i,c}}$ is the softmax class prediction
at pixel $i$ given the output $z$ of the final feature maps, $c^{*}_{i}$ is the
corresponding ground truth, and $N$ is the number of pixels.
Depth estimation. The $L_{1}$ loss is employed to learn the pixel-wise depth;
it minimizes the absolute distance between the depth prediction
and the corresponding ground truth.
$L_{depth}=\frac{1}{N}\sum\limits_{i=1}^{N}|y_{i}-y^{*}_{i}|$ (7)
where $y_{i}$ is the depth prediction of the $i$-$th$ pixel, $y^{*}_{i}$ is
the corresponding ground truth, and $N$ is the number of valid pixels.
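Both losses are straightforward to implement; a numpy sketch of equations (6) and (7):

```python
import numpy as np

def cross_entropy_loss(logits, labels):
    """Equation (6): mean pixel-wise cross-entropy.
    logits: (N, C) class scores z; labels: (N,) ground-truth class indices."""
    z = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_softmax[np.arange(len(labels)), labels].mean()

def l1_depth_loss(pred, gt):
    """Equation (7): mean absolute error over the N valid pixels."""
    return np.abs(pred - gt).mean()

l1 = l1_depth_loss(np.array([1.0, 2.0]), np.array([1.5, 2.5]))
ce = cross_entropy_loss(np.array([[2.0, 0.0], [0.0, 2.0]]), np.array([0, 1]))
```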
## 4 Experimental Analysis
To demonstrate the effectiveness of the proposed SOSD-Net for simultaneously
learning monocular depth and semantic segmentation, we carry out
comprehensive experiments on two publicly available datasets: CityScapes [21]
and NYU v2 [22]. In the following subsections, we report the details of our
implementation and the evaluation results. Some ablation studies based on
CityScapes are discussed to give a more detailed analysis of our method.
### 4.1 Experimental Setup
Datasets and Data Augmentation. The CityScapes dataset [21] is a large dataset
for road scene understanding. It comprises stereo imagery from automotive-
grade stereo cameras with a $22\,cm$ baseline, labeled with instance and
semantic segmentation of 20 classes. Inverse depth images are provided,
labeled with the SGM method [59]. The dataset was collected in 50 different
cities over several months, and consists of training, validation, and test
sets containing 2,975, 500, and 1,525 images, respectively.
Following the suggestion of Ozan et al. [20], the input images and the
corresponding depth maps are resized to $256\times 512$. The training data are
augmented on the fly during the training phase. The RGB and depth images are
scaled with a ratio randomly selected from $\{0.5,0.75,1,1.25,1.5,1.75\}$.
In addition, the RGB-D images are transformed using color transformations
and flipped with a probability of 0.5. Please note that the proposed method
has the potential to support training on full-resolution input through online
data preparation. For example, before feeding the data to the model, we can
randomly crop the input image into small patches, which consumes the same
GPU memory as resizing the input samples. However, the state-of-the-art
approaches adopted samples with a fixed resolution to train their models.
For fairness of comparison, we employ the same resolution to train the
proposed model for evaluation.
The NYU v2 dataset [22] consists of 464 scenes ($480\times 640$), captured
using a Microsoft Kinect. Following the official split, the training dataset
is composed of 249 scenes with 795 image pairs, and the testing dataset
includes 215 scenes with 654 image pairs. The input images and the
corresponding depths are augmented on the fly during the training phase:
they are scaled with a ratio randomly selected from $\{1,1.2,1.5\}$,
transformed using color transformations, and flipped with a probability of 0.5.
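A dependency-free sketch of this on-the-fly augmentation, using the NYU scale set (nearest-neighbour resizing via index repetition stands in for a proper resize, and the color transform is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(rgb, depth, scales=(1.0, 1.2, 1.5)):
    """Randomly scale an RGB-D pair, then flip it horizontally with probability 0.5."""
    s = rng.choice(scales)
    h, w = depth.shape
    ys = (np.arange(int(h * s)) / s).astype(int)   # nearest-neighbour row indices
    xs = (np.arange(int(w * s)) / s).astype(int)   # nearest-neighbour column indices
    rgb, depth = rgb[ys][:, xs], depth[ys][:, xs]
    if rng.random() < 0.5:                         # horizontal flip
        rgb, depth = rgb[:, ::-1], depth[:, ::-1]
    return rgb, depth

rgb_aug, depth_aug = augment(np.ones((4, 4, 3)), np.ones((4, 4)))
```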
Evaluation Metrics. For quantitative evaluation of the depth estimation on the
NYU v2 dataset, we report errors obtained with the following widely adopted
error metrics. To evaluate the performance of the semantic segmentation on the
NYU v2 dataset, we use mean Intersection over Union (mIoU), mean accuracy, and
pixel accuracy as metrics.
1. Average relative error: ${\bf rel}=\frac{1}{N}\sum_{i=1}^{N}\frac{|y_{i}-y^{*}_{i}|}{y^{*}_{i}}$
2. Root mean squared error: ${\bf rms}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}|y_{i}-y^{*}_{i}|^{2}}$
3. Average $\log_{10}$ error: ${\bf log}_{10}=\frac{1}{N}\sum_{i=1}^{N}|\log_{10}(y_{i})-\log_{10}(y^{*}_{i})|$
4. Accuracy with threshold $t$: percentage (%) of $y_{i}$ such that $\max(\frac{y^{*}_{i}}{y_{i}},\frac{y_{i}}{y^{*}_{i}})=\delta<t$, for $t\in\{1.25,1.25^{2},1.25^{3}\}$

where $y_{i}$ is the estimated depth, $y^{*}_{i}$ denotes the corresponding
ground truth, and $N$ is the total number of valid pixels in all images of the
validation set.
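These metrics can be computed directly from the predicted and ground-truth depth maps; a numpy sketch:

```python
import numpy as np

def depth_metrics(pred, gt):
    """The four metrics listed above, computed over valid pixels only."""
    rel = np.mean(np.abs(pred - gt) / gt)
    rms = np.sqrt(np.mean((pred - gt) ** 2))
    log10 = np.mean(np.abs(np.log10(pred) - np.log10(gt)))
    ratio = np.maximum(gt / pred, pred / gt)
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]
    return rel, rms, log10, deltas

pred = np.array([1.0, 2.0, 4.0])
gt = np.array([1.0, 2.5, 4.0])
rel, rms, log10, deltas = depth_metrics(pred, gt)
```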
For the CityScapes dataset, we use mean absolute error and mIoU to evaluate
the depth estimation and semantic segmentation, respectively.
Implementation Details. We implement the proposed model using both
PaddlePaddle [60] and TensorFlow frameworks, and train the network on the
NVIDIA Tesla P40 with 24GB memory. The results in this paper are from the
TensorFlow implementation. The objective function is optimized using Adam
method [61]. During the initialization stage, the weight layers in the first
part of the architecture are initialized using the corresponding pre-trained
model (Xception) on the ILSVRC [62] dataset for image classification. The
weights of the specific tasks are initialized by sampling from a Gaussian with
zero mean and 0.01 variance, and the learning rate is set to 0.0001. We set
the batch size to 16 for both datasets. Finally, our model is trained for 60
epochs for the NYU Depth v2 dataset, and 40 epochs for the CityScapes dataset.
### 4.2 Ablation Study
We conduct various ablation studies to analyze the performance of our
approach. The baseline model (MTL) is a classical multi-task model with a
backbone extracting common features and two task-specific paths to infer the
depth and the semantics, respectively; the corresponding optimization
objective is a linear combination of the branch losses. Following the
description in Section 3, the improved versions of the baseline are: (i)
SOSD-Net (adding semantic objectness to the baseline model), and (ii)
ESOSD-Net (SOSD-Net with the EM learning strategy). The comparative
experimental results are shown in Table 1, Table 2 and Table 3. Note that the
baseline model, the improved versions, and the single-task models share the
same advanced backbone (Xception), which extracts the features for the subnets
to infer task-specific information.
Method | Segmentation mIoU [%] | Disparity error [px] | Inference speed (ms) | Number of parameters (M)
---|---|---|---|---
Semantic only | 62.0 | - | 139.1 | 23.4
Depth only | - | 2.47 | 140.7 | 23.4
MTL | 65.6 | 2.64 | 142.2 | 23.6
SOSD-Net | 67.2 | 2.58 | 159.0 | 24.0
ESOSD-Net | 68.2 | 2.41 | 159.0 | 24.0
Table 1: Quantitative improvement when learning semantic segmentation and depth with the proposed SOSD-Net and EM-style learning strategy. Experiments were conducted on the CityScapes dataset (sub-sampled to a resolution of $256\times 512$). Results are shown for the validation set. While the inference speed and the number of parameters are comparable, we observe an improvement in performance when training with SOSD-Net over both the single-task models and MTL. Additionally, we observe a larger improvement when training the two tasks with the EM-style strategy (ESOSD-Net). The results show that SOSD-Net can automatically build a better relation embedding the scene parsing and depth estimation, and that the EM-style strategy learns the two tasks more effectively.

Method | Mean IoU | Mean Accuracy | Pixel Accuracy
---|---|---|---
Semantic only | 0.385 | 0.591 | 0.687
MTL | 0.417 | 0.610 | 0.710
SOSD-Net | 0.433 | 0.625 | 0.722
ESOSD-Net | 0.450 | 0.647 | 0.733
Table 2: Quantitative improvement when learning semantic segmentation with our
proposed model. Experiments are conducted on the NYU dataset ($480\times
640$). Results are shown from the test set. It is observed that SOSD-Net with
EM-style achieves better performance over both single-task models and MTL.
#### 4.2.1 SOSD-Net
On the CityScapes dataset, SOSD-Net obtains better performance than the MTL
model; e.g., SOSD-Net improves the segmentation mIoU by 1.6% (from 65.6% to
67.2%) and reduces the disparity error by 0.06 (from 2.64 to 2.58), as
reported in Table 1. SOSD-Net also outperforms the Semantic only method,
improving the mIoU of semantic segmentation by a margin of $5.2$, while
showing comparable performance to the Depth only approach. In addition,
compared with the independent models and the MTL model, SOSD-Net maintains a
comparable inference time and number of parameters.
Meanwhile, we also evaluated the SOSD-Net model on the NYU v2 dataset. As
reported in Table 2, SOSD-Net outperforms MTL (0.433 vs. 0.417, 0.625 vs.
0.610, 0.722 vs. 0.710) and the Semantic only model (0.433 vs. 0.385, 0.625
vs. 0.591, 0.722 vs. 0.687) on all semantic segmentation metrics. For the
depth estimation evaluation, SOSD-Net also obtains clear performance gains on
all metrics, as shown in Table 3. These ablation studies demonstrate that
using semantic objectness improves the performance of both monocular depth
estimation and semantic segmentation.
To further investigate the effect of SOSD-Net on the two tasks, we visualize
the feature maps learned by the semantic-to-depth unit, as shown in Figure 8.
The final depth is learned by fusing the information from the pure depth
branch and the semantic-to-depth branch, respectively. Compared with the pure
depth branch (third row, second column), the final depth (first row, second
column) has a more detailed structure over the entire area of the pedestrians,
which benefits from the semantic-to-depth branch (third row, first column).
The visualization further verifies the contribution of the semantic-to-depth
branch and is consistent with the geometric constraint described in
equation (3).
Method | rel | rms | $log_{10}$ | $\delta_{1}$ | $\delta_{2}$ | $\delta_{3}$
---|---|---|---|---|---|---
Depth only | 0.167 | 0.637 | 0.078 | 0.713 | 0.935 | 0.984
MTL | 0.159 | 0.567 | 0.067 | 0.775 | 0.949 | 0.986
SOSD-Net | 0.149 | 0.527 | 0.064 | 0.797 | 0.957 | 0.991
ESOSD-Net | 0.145 | 0.514 | 0.062 | 0.805 | 0.962 | 0.992
Table 3: Quantitative improvement when learning monocular depth with our proposed model. Experiments are conducted on the NYU dataset ($480\times 640$). Results are shown from the test set. We observe a significant performance improvement when training SOSD-Net with the EM-style strategy, over both single-task models and MTL.

Figure 8: Visualization of the feature maps from the semantic-to-depth unit. The first row shows the input image and monocular depth estimation, the second row shows the feature maps of the 2D latent shared representation and 3D latent shared representation, related to $(\triangle u\triangle v)^{-1}$ and $\triangle X\triangle Y$, the third row shows the depth intensity from the semantic-to-depth and pure depth branches, and the last row shows the ground truth of the semantic segmentation and depth estimation, respectively.

Method | Segmentation mIoU [%] | Disparity error [px]
---|---|---
Kendall [18] | 64.2 | 2.65
GradNorm [19] | 64.8 | 2.57
Ozan [20] | 66.6 | 2.54
ESOSD-Net | 68.2 | 2.41

Table 4: Performance of the multi-task algorithms on semantic segmentation and
depth estimation on the CityScapes dataset (sub-sampled to a resolution of
$256\times 512$).
#### 4.2.2 EM Learning Strategy
We verify the effectiveness of the EM learning strategy in boosting the
performance of monocular depth estimation and semantic segmentation. On
the CityScapes dataset, ESOSD-Net clearly outperforms MTL and SOSD-Net on
both tasks, as reported in Table 1. For example, compared with SOSD-Net,
ESOSD-Net improves the segmentation mIoU by 1.0% (from 67.2% to 68.2%) and
reduces the disparity error by 0.17 (from 2.58 to 2.41). Note that in terms
of inference time and number of parameters, ESOSD-Net is identical to
SOSD-Net. In addition, ESOSD-Net also outperforms the single-task models,
improving the mIoU of semantic segmentation by a margin of $6.2$ and reducing
the disparity error by 0.06 (from 2.47 to 2.41).
In addition, we also evaluated ESOSD-Net on the NYU v2 dataset. As reported in
Table 2, ESOSD-Net clearly outperforms the Semantic only model, MTL, and
SOSD-Net on all semantic segmentation metrics. For example, compared with
SOSD-Net, ESOSD-Net improves the Mean IoU by 1.7% (from 0.433 to 0.450), the
Mean Accuracy by 2.2% (from 0.625 to 0.647), and the Pixel Accuracy by 1.1%
(from 0.722 to 0.733).
For the depth estimation evaluation, ESOSD-Net also yields a large
improvement on all metrics, as shown in Table 3. For example, compared with
SOSD-Net, ESOSD-Net reduces the rel, rms, and $\log_{10}$ by 0.4%, 0.013 and
0.002, while simultaneously improving the $\delta_{1}$, $\delta_{2}$, and
$\delta_{3}$ by 0.8%, 0.5%, and 0.1%, respectively. In addition, ESOSD-Net
also outperforms the Depth only method, reducing the rel, rms, and $\log_{10}$
by 2.2%, 0.123, and 0.016, while simultaneously improving the $\delta_{1}$,
$\delta_{2}$, and $\delta_{3}$ by margins of 9.2%, 2.7%, and 0.8%,
respectively. ESOSD-Net thus clearly outperforms the single-task models, MTL,
and SOSD-Net, further demonstrating the effectiveness of the proposed EM
learning strategy.
### 4.3 Benchmark Performance
In the first series of experiments, we focus on the CityScapes dataset [21].
The proposed model is evaluated and compared with state-of-the-art methods,
including Kendall et al. [18], GradNorm [19] and Ozan [20], as reported in
Table 4. Our ESOSD-Net improves the accuracy of semantic segmentation by a
margin of $2\%\sim 4\%$ compared with previous methods in all settings. For
inverse depth estimation, our ESOSD-Net outperforms the previous methods by a
gap of $0.1\sim 0.2$ points in mean absolute error.
Figure 9: Qualitative examples of monocular depth estimation and 19-class
scene parsing results on the CityScapes dataset ($256\times 512$). The second
and fourth rows correspond to the predictions of depth estimation and
semantic segmentation. The third and last rows correspond to the ground truth
of depth estimation and semantic segmentation, respectively.
Meanwhile, we also evaluated the proposed model on the NYU Depth v2 dataset.
Comparisons with the state-of-the-art algorithms are shown in Table 5 and
Table 6, respectively. As observed from Table 5, compared with Deng et al.
[63], FCN [1], Eigen and Fergus [4], and Context [36], the proposed ESOSD-Net
achieves a remarkable improvement. When compared with RefineNet [26], our
proposed method shows superior performance on Mean Accuracy and competitive
performance on Mean IoU and Pixel Accuracy. As also observed in Table 5, our
proposed method is competitive with the two-stage approaches; e.g., ESOSD-Net
is comparable to the two-stage PAD-Net [54] (0.450 vs. 0.502, 0.647 vs. 0.623,
0.733 vs. 0.752), and outperforms Gupta et al. [64] and Arsalan et al. [53] on
all metrics. These results further demonstrate the effectiveness of ESOSD-Net.
Table 6 shows the evaluation results for depth estimation. With the same
number of samples (795) and a one-stage training strategy, the proposed
ESOSD-Net model outperforms the state-of-the-art methods. For example,
compared with E. and F. [4], ESOSD-Net improves the rel by 1.3% (from 0.158 to
0.145), the $\delta_{1}$ by 3.6% (from 76.9% to 80.5%), the $\delta_{2}$ by
1.2% (from 95.0% to 96.2%), and the $\delta_{3}$ by 0.4% (from 98.8% to
99.2%), respectively. Meanwhile, ESOSD-Net achieves an rms of 0.514, an
improvement of 0.127 over the 0.641 achieved by E. and F. [4]. The experiments
demonstrate the superior performance of ESOSD-Net in depth estimation.
Method | Mean IoU | Mean Accuracy | Pixel Accuracy
---|---|---|---
Two-stage: | | |
Gupta et al. [64] | 0.286 | - | 0.603
Arsalan et al. [53] | 0.392 | 0.523 | 0.686
PAD-Net [54] | 0.502 | 0.623 | 0.752
One-stage: | | |
Deng et al. [63] | - | 0.315 | 0.638
FCN [1] | 0.292 | 0.422 | 0.600
Eigen and Fergus [4] | 0.341 | 0.451 | 0.656
Context [36] | 0.406 | 0.536 | 0.700
RefineNet [26] | 0.465 | 0.589 | 0.736
ESOSD-Net | 0.450 | 0.647 | 0.733

Table 5: Quantitative comparison with state-of-the-art methods on the scene parsing task on the NYU Depth v2 dataset ($480\times 640$).

Method | samples | rel | rms | ${\bf log_{10}}$ | ${\bf\delta_{1}}$ | ${\bf\delta_{2}}$ | ${\bf\delta_{3}}$
---|---|---|---|---|---|---|---
Two-stage: | | | | | | |
Joint HCRF [15] | 795 | 0.220 | 0.745 | 0.094 | 0.605 | 0.890 | 0.970
Jafari et al. [65] | 795 | 0.157 | 0.673 | 0.068 | 0.762 | 0.948 | 0.988
PAD-Net [54] | 795 | 0.120 | 0.582 | 0.055 | 0.817 | 0.954 | 0.987
One-stage: | | | | | | |
Make3D [41] | 795 | 0.349 | 1.214 | - | 0.447 | 0.745 | 0.897
DepthTransfer [7] | 795 | 0.35 | 1.20 | 0.131 | - | - | -
Liu et al. [66] | 795 | 0.335 | 1.06 | 0.127 | - | - | -
Li et al. [67] | 795 | 0.232 | 0.821 | 0.094 | - | - | -
Liu et al. [68] | 795 | 0.230 | 0.824 | 0.095 | 0.614 | 0.883 | 0.975
Wang et al. [15] | 795 | 0.220 | 0.745 | 0.094 | 0.605 | 0.890 | 0.970
Eigen et al.[11] | 120k | 0.215 | 0.907 | - | 0.611 | 0.887 | 0.971
R. and T. [69] | 795 | 0.187 | 0.744 | 0.078 | - | - | -
E. and F. [4] | 795 | 0.158 | 0.641 | - | 0.769 | 0.950 | 0.988
He et al. [12] | 48k | 0.151 | 0.572 | 0.064 | 0.789 | 0.948 | 0.98
Lai [44] | 96k | 0.129 | 0.583 | 0.056 | 0.811 | 0.953 | 0.988
DORN [5] | 120k | 0.115 | 0.509 | 0.051 | 0.828 | 0.965 | 0.992
ESOSD-Net | 795 | 0.145 | 0.514 | 0.062 | 0.805 | 0.962 | 0.992
Table 6: Quantitative comparison with state-of-the-art methods on the depth
estimation task on the NYU Depth v2 dataset ($480\times 640$).
When compared with other one-stage approaches that use a large number of
training samples, ESOSD-Net also achieves competitive performance. As reported
in Table 6, ESOSD-Net outperforms He et al. [12] on all metrics, and achieves
comparable performance to Lai [44] and DORN [5] (on rms, $\delta_{1}$,
$\delta_{2}$, $\delta_{3}$), while being slightly weaker on rel and
$\log_{10}$. This is mainly because Lai [44] and DORN [5] use a large number
of samples, 96k and 120k respectively, which is roughly 120$\times$ and
150$\times$ the 795 samples used by our model.
Finally, we observe that the depth performance of ESOSD-Net is also
competitive with the two-stage approaches: ESOSD-Net outperforms the
two-stage approaches Joint HCRF [15] and Jafari et al. [65] on all metrics.
Compared with PAD-Net [54], ESOSD-Net achieves outstanding performance in
terms of rms (0.514 vs. 0.582), $\delta_{2}$ (0.962 vs. 0.954), and
$\delta_{3}$ (0.992 vs. 0.987), while being slightly weaker on rel
(0.145 vs. 0.120), $\log_{10}$ (0.062 vs. 0.055), and $\delta_{1}$ (0.805 vs.
0.817). Nevertheless, it should be noted that PAD-Net [54] is trained with
auxiliary tasks using a two-stage strategy, benefiting from the additional
supervised information.
In summary, the proposed method achieves better performance than the state-of-
the-art methods that use a one-stage training strategy, under the same number
of training samples. Furthermore, the proposed ESOSD-Net is also competitive
with the two-stage methods. The experiments strongly demonstrate the
effectiveness and superior performance of ESOSD-Net.
Figure 10: Qualitative examples of monocular depth estimation and 40-class
scene parsing results on the NYU Depth v2 dataset ($480\times 640$). The
second and fourth rows correspond to the predictions of depth estimation and
semantic segmentation; the third and last rows correspond to the ground truth
of depth estimation and semantic segmentation, respectively.
### 4.4 Visualization
We select several samples from the CityScapes dataset and the NYU Depth v2
dataset for visualization, as shown in Figure 9. Even when there are holes in
the depth ground truth of some vehicles, ESOSD-Net not only predicts accurate,
smooth depth maps, but also produces scene parsing results that are visually
consistent with the ground truth. In addition, as shown in Figure 10, the
proposed model produces qualitative results very close to the corresponding
ground truth on the NYU v2 dataset, which demonstrates the effectiveness of
the proposed method.
## 5 Conclusion
In this paper, we have proposed a geometric constraint that reveals the
semantic objectness relationship between monocular depth estimation and
semantic segmentation. Through this constraint, we can employ the semantic
information of the scene to alleviate the ambiguity in monocular depth
estimation, and simultaneously boost the accuracy of semantic segmentation.
To exploit this constraint, we propose a novel network structure (SOSD-Net)
that effectively embeds semantic objectness information from the geometry
cues and scene parsing. We have also proposed an EM-style learning strategy to
effectively train the SOSD-Net. Through extensive experimental evaluations and
comparisons on the CityScapes and NYU v2 datasets, the proposed ESOSD-Net
achieves outstanding performance over state-of-the-art multi-task methods
using a one-stage training strategy.
## Acknowledgement
This work was supported in part by the Shenzhen Fundamental Research Fund
(Subject Arrangement) under Grant JCYJ20170412170602564, and the National
Natural Science Foundation of China under 61822603, Grant U1813218, Grant
U1713214, Grant 61672306, Grant 61572271. This work was jointly supported by
Baidu Inc, Tsinghua University, and the University of Ryerson. The authors
would like to thank Baidu for providing the computing resources. Thanks go to
Ruijie Hou, Lixia Shen and Guangyao Yang for discussion.
## References
* [1] J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 3431–3440.
* [2] V. Badrinarayanan, A. Kendall, R. Cipolla, Segnet: A deep convolutional encoder-decoder architecture for image segmentation, IEEE transactions on pattern analysis and machine intelligence 39 (12) (2017) 2481–2495.
* [3] L. He, Q. Dong, G. Wang, Fast depth extraction from a single image, International Journal of Advanced Robotic Systems 13 (6) (2016) 1729881416663370.
* [4] D. Eigen, R. Fergus, Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture, in: Proceedings of the IEEE international conference on computer vision, 2015, pp. 2650–2658.
* [5] H. Fu, M. Gong, C. Wang, K. Batmanghelich, D. Tao, Deep ordinal regression network for monocular depth estimation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2002–2011.
* [6] F. Cen, X. Zhao, W. Li, G. Wang, Deep feature augmentation for occluded image classification, Pattern Recognition 111 (2020) 107737.
* [7] K. Karsch, C. Liu, S. B. Kang, Depth transfer: Depth extraction from video using non-parametric sampling, IEEE transactions on pattern analysis and machine intelligence 36 (11) (2014) 2144–2158.
* [8] W. Ma, Y. Wu, F. Cen, G. Wang, Mdfn: Multi-scale deep feature learning network for object detection, Pattern Recognition 100 (2020) 107149.
* [9] Y. Liu, B. Fan, L. Wang, J. Bai, S. Xiang, C. Pan, Semantic labeling in very high resolution images via a self-cascaded convolutional neural network, ISPRS journal of photogrammetry and remote sensing 145 (2018) 78–95.
* [10] L. Yan, B. Fan, H. Liu, C. Huo, S. Xiang, C. Pan, Triplet adversarial domain adaptation for pixel-level classification of vhr remote sensing images, IEEE Transactions on Geoscience and Remote Sensing 58 (5) (2019) 3558–3573.
* [11] D. Eigen, C. Puhrsch, R. Fergus, Depth map prediction from a single image using a multi-scale deep network, in: Advances in neural information processing systems, 2014, pp. 2366–2374.
* [12] L. He, G. Wang, Z. Hu, Learning depth from single images with deep neural network embedding focal length, IEEE Transactions on Image Processing 27 (9) (2018) 4676–4689.
* [13] J. Yang, J. Zhang, G. Wang, M. Li, Semantic map building based on object detection for indoor navigation, International Journal of Advanced Robotic Systems 12 (12) (2015) 178.
* [14] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, H. Adam, Encoder-decoder with atrous separable convolution for semantic image segmentation, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 801–818.
* [15] P. Wang, X. Shen, Z. Lin, S. Cohen, B. Price, A. L. Yuille, Towards unified depth and semantic prediction from a single image, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2800–2809.
* [16] I. Misra, A. Shrivastava, A. Gupta, M. Hebert, Cross-stitch networks for multi-task learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3994–4003.
* [17] Z. Zhang, Z. Cui, C. Xu, Z. Jie, X. Li, J. Yang, Joint task-recursive learning for semantic segmentation and depth estimation, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 235–251.
* [18] A. Kendall, Y. Gal, R. Cipolla, Multi-task learning using uncertainty to weigh losses for scene geometry and semantics, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7482–7491.
* [19] Z. Chen, V. Badrinarayanan, C.-Y. Lee, A. Rabinovich, Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks, in: International Conference on Machine Learning, 2018, pp. 793–802.
* [20] O. Sener, V. Koltun, Multi-task learning as multi-objective optimization, in: Advances in Neural Information Processing Systems, 2018, pp. 525–536.
* [21] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, B. Schiele, The cityscapes dataset for semantic urban scene understanding, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 3213–3223.
* [22] N. Silberman, D. Hoiem, P. Kohli, R. Fergus, Indoor segmentation and support inference from rgbd images, in: European Conference on Computer Vision, Springer, 2012, pp. 746–760.
* [23] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, Y. LeCun, Overfeat: Integrated recognition, localization and detection using convolutional networks, arXiv preprint arXiv:1312.6229.
* [24] C. Sun, A. Shrivastava, S. Singh, A. Gupta, Revisiting unreasonable effectiveness of data in deep learning era, in: Proceedings of the IEEE international conference on computer vision, 2017, pp. 843–852.
* [25] O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical image computing and computer-assisted intervention, Springer, 2015, pp. 234–241.
* [26] G. Lin, A. Milan, C. Shen, I. Reid, Refinenet: Multi-path refinement networks for high-resolution semantic segmentation, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1925–1934.
* [27] H. Noh, S. Hong, B. Han, Learning deconvolution network for semantic segmentation, in: Proceedings of the IEEE international conference on computer vision, 2015, pp. 1520–1528.
* [28] T. Pohlen, A. Hermans, M. Mathias, B. Leibe, Full-resolution residual networks for semantic segmentation in street scenes, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4151–4160.
* [29] W. Liu, A. Rabinovich, A. C. Berg, Parsenet: Looking wider to see better, arXiv preprint arXiv:1506.04579.
* [30] H. Zhao, J. Shi, X. Qi, X. Wang, J. Jia, Pyramid scene parsing network, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2881–2890.
* [31] L.-C. Chen, G. Papandreou, F. Schroff, H. Adam, Rethinking atrous convolution for semantic image segmentation, arXiv preprint arXiv:1706.05587.
* [32] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. L. Yuille, Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs, IEEE transactions on pattern analysis and machine intelligence 40 (4) (2018) 834–848.
* [33] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. L. Yuille, Semantic image segmentation with deep convolutional nets and fully connected crfs, arXiv preprint arXiv:1412.7062.
* [34] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, P. H. Torr, Conditional random fields as recurrent neural networks, in: Proceedings of the IEEE international conference on computer vision, 2015, pp. 1529–1537.
* [35] Z. Liu, X. Li, P. Luo, C.-C. Loy, X. Tang, Semantic image segmentation via deep parsing network, in: Proceedings of the IEEE international conference on computer vision, 2015, pp. 1377–1385.
* [36] G. Lin, C. Shen, A. Van Den Hengel, I. Reid, Efficient piecewise training of deep structured models for semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3194–3203.
* [37] V. Jampani, M. Kiefel, P. V. Gehler, Learning sparse high dimensional filters: Image filtering, dense crfs and bilateral neural networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4452–4461.
* [38] I. Krešo, D. Čaušević, J. Krapac, S. Šegvić, Convolutional scale invariance for semantic segmentation, in: German Conference on Pattern Recognition, Springer, 2016, pp. 64–75.
* [39] D. Hoiem, A. A. Efros, M. Hebert, Automatic photo pop-up, ACM transactions on graphics (TOG) 24 (3) (2005) 577–584.
* [40] A. Saxena, S. H. Chung, A. Y. Ng, 3-d depth reconstruction from a single still image, IJCV 76 (1) (2008) 53–69.
* [41] A. Saxena, M. Sun, A. Y. Ng, Make3d: Learning 3d scene structure from a single still image, IEEE transactions on pattern analysis and machine intelligence 31 (5) (2009) 824–840.
* [42] J. Konrad, M. Wang, P. Ishwar, C. Wu, D. Mukherjee, Learning-based, automatic 2d-to-3d image and video conversion, IEEE Transactions on Image Processing 22 (9) (2013) 3485–3496.
* [43] F. Liu, C. Shen, G. Lin, I. Reid, Learning depth from single monocular images using deep convolutional neural fields, IEEE transactions on pattern analysis and machine intelligence 38 (10) (2016) 2024–2039.
* [44] I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, N. Navab, Deeper depth prediction with fully convolutional residual networks, in: 2016 Fourth International Conference on 3D Vision (3DV), IEEE, 2016, pp. 239–248.
* [45] D. Xu, E. Ricci, W. Ouyang, X. Wang, N. Sebe, Multi-scale continuous crfs as sequential deep networks for monocular depth estimation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5354–5362.
* [46] L. He, M. Yu, G. Wang, Spindle-net: Cnns for monocular depth inference with dilation kernel method, 2018 24th International Conference on Pattern Recognition (ICPR) (2018) 2504–2509.
* [47] H. Liu, X. Tang, S. Shen, Depth-map completion for large indoor scene reconstruction, Pattern Recognition 99 (2020) 107112.
* [48] R. Garg, G. Carneiro, I. Reid, Unsupervised cnn for single view depth estimation: Geometry to the rescue, in: ECCV, Springer, 2016, pp. 740–756.
* [49] C. Godard, O. Mac Aodha, G. J. Brostow, Unsupervised monocular depth estimation with left-right consistency, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 270–279.
* [50] T. Zhou, M. Brown, N. Snavely, D. G. Lowe, Unsupervised learning of depth and ego-motion from video, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1851–1858.
* [51] R. Mahjourian, M. Wicke, A. Angelova, Unsupervised learning of depth and ego-motion from monocular video using 3d geometric constraints, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5667–5675.
* [52] B. Fan, H. Liu, H. Zeng, J. Zhang, X. Liu, J. Han, Deep unsupervised binary descriptor learning through locality consistency and self distinctiveness, IEEE Transactions on Multimedia.
* [53] A. Mousavian, H. Pirsiavash, J. Košecká, Joint semantic segmentation and depth estimation with deep convolutional networks, in: 2016 Fourth International Conference on 3D Vision (3DV), IEEE, 2016, pp. 611–619.
* [54] D. Xu, W. Ouyang, X. Wang, N. Sebe, Pad-net: multi-tasks guided prediction-and-distillation network for simultaneous depth estimation and scene parsing, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 675–684.
* [55] A. R. Zamir, A. Sax, W. Shen, L. J. Guibas, J. Malik, S. Savarese, Taskonomy: Disentangling task transfer learning, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 3712–3722.
* [56] G. Wang, Q. J. Wu, Guide to three dimensional structure and motion factorization, Springer, 2011.
* [57] F. Chollet, Xception: Deep learning with depthwise separable convolutions, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1251–1258.
* [58] A. P. Dempster, N. M. Laird, D. B. Rubin, Maximum likelihood from incomplete data via the em algorithm, Journal of the Royal Statistical Society: Series B (Methodological) 39 (1) (1977) 1–22.
* [59] H. Hirschmuller, Stereo processing by semiglobal matching and mutual information, IEEE Transactions on pattern analysis and machine intelligence 30 (2) (2008) 328–341.
* [60] Paddlepaddle: Parallel distributed deep learning (2017), https://github.com/PaddlePaddle/Paddle, accessed 15 Oct 2019.
* [61] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.
* [62] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., Imagenet large scale visual recognition challenge, International journal of computer vision 115 (3) (2015) 211–252.
* [63] Z. Deng, S. Todorovic, L. Jan Latecki, Semantic segmentation of rgbd images with mutex constraints, in: Proceedings of the IEEE international conference on computer vision, 2015, pp. 1733–1741.
* [64] S. Gupta, R. Girshick, P. Arbeláez, J. Malik, Learning rich features from rgb-d images for object detection and segmentation, in: European Conference on Computer Vision, Springer, 2014, pp. 345–360.
* [65] O. H. Jafari, O. Groth, A. Kirillov, M. Y. Yang, C. Rother, Analyzing modular cnn architectures for joint depth prediction and semantic segmentation, in: 2017 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2017, pp. 4620–4627.
* [66] M. Liu, M. Salzmann, X. He, Discrete-continuous depth estimation from a single image, in: CVPR, 2014, pp. 716–723.
* [67] B. Li, C. Shen, Y. Dai, A. van den Hengel, M. He, Depth and surface normal estimation from monocular images using regression on deep features and hierarchical crfs, in: CVPR, 2015, pp. 1119–1127.
* [68] F. Liu, C. Shen, G. Lin, Deep convolutional neural fields for depth estimation from a single image, in: CVPR, 2015, pp. 5162–5170.
* [69] A. Roy, S. Todorovic, Monocular depth estimation using neural regression forest, in: CVPR, 2016, pp. 5506–5514.
# Submodular Maximization via Taylor Series Approximation
Gözde Özcan1 Armin Moharrer1 Stratis Ioannidis1 1{gozcan, amoharrer,
<EMAIL_ADDRESS>Electrical and Computer Engineering Department,
Northeastern University, Boston, MA, USA.
###### Abstract
We study submodular maximization problems with matroid constraints, in
particular, problems where the objective can be expressed via compositions of
analytic and multilinear functions. We show that for functions of this form,
the so-called _continuous greedy_ algorithm [1] attains a ratio arbitrarily
close to $(1-1/e)\approx 0.63$ using a deterministic estimation via Taylor
series approximation. This drastically reduces execution time over prior art
that uses sampling.
## 1 Introduction.
Submodular functions are set functions that exhibit a diminishing returns
property. They naturally arise in many applications, including data
summarization [2, 3, 4], facility location [5], recommendation systems [6],
influence maximization [7], sensor placement [8], dictionary learning [9, 10],
and active learning [11]. In these problems, the goal is to maximize a
submodular function subject to matroid constraints. These problems are in
general NP-hard, but a celebrated greedy algorithm [12] achieves a $1-1/e$
approximation ratio on uniform matroids. Unfortunately, for general matroids
the approximation ratio drops to $1/2$ [13].
The _continuous greedy_ algorithm [14, 1] improves this bound. The algorithm
maximizes the _multilinear relaxation_ of a submodular function in the
continuous domain, guaranteeing a $1-1/e$ approximation ratio [1]. The
fractional solution is then rounded to a feasible integral solution (without
compromising the objective value), e.g., via pipage rounding [15] or swap
rounding [16]. The multilinear relaxation of a submodular function is its
expected value under independent Bernoulli trials; however, computing this
expectation is hard in general. The state of the art is to estimate the
multilinear relaxation via sampling [1, 14]. Nonetheless, the number of
samples required in order to achieve the superior $1-1/e$ guarantee is quite
high; precisely because of this, the resulting running time of continuous
greedy is $O(N^{8})$ in input size $N$ [1].
Nevertheless, for some submodular functions, the multilinear relaxation can be
computed efficiently. One well-known example is the _coverage function_ ,
which we describe in Sec. 4; given subsets of a ground set, the coverage
function computes the number of elements covered in the union of these
subsets. The multilinear relaxation for coverage can be computed precisely,
without sampling, in polynomial time. This is well-known, and has been
exploited in several different contexts [17, 18, 15].
We extend the range of problems for which the multilinear relaxation can be
computed efficiently. First, we observe that this property naturally extends
to _multilinear functions_ , a class that includes coverage functions. We then
consider a class of submodular objectives that are a summation over non-linear
functions of these multilinear functions. Our key observation is that the
polynomial expansions of these functions are again multilinear; hence,
compositions of multilinear functions with arbitrary _analytic_ functions,
that can be approximated by a Taylor series, can be computed efficiently. A
broad range of problems, e.g., data summarization, influence maximization,
facility location, and cache networks (c.f. Sec. 6), can be expressed in this
manner and solved efficiently via our approach.
In summary, we make the following contributions:
* •
We introduce a class of submodular functions that can be expressed as weighted
compositions of analytic and multilinear functions.
* •
We propose a novel polynomial series estimator for approximating the
multilinear relaxation of this class of problems.
* •
We provide strict theoretical guarantees for a variant of the continuous
greedy algorithm that uses our estimator. We show that the sub-optimality due
to our polynomial expansion is bounded by a quantity that can be made
arbitrarily small by increasing the polynomial order.
* •
We show that multiple applications, e.g., data summarization, influence
maximization, facility location, and cache networks can be cast as instances
of our framework.
* •
We conduct numerical experiments for multiple problem instances on both
synthetic and real datasets. We observe that our estimator achieves $74\%$
lower error, in $89\%$ less time, in comparison with the sampling estimator.
The remainder of the paper is organized as follows. We review related work and
technical background in Sections 2 and 3, respectively. We introduce
multilinear functions in Sec. 4. We present our estimator and main results in
Sec. 5, examples of cases that can be instances of our problem in Sec. 6, and
our numerical evaluation in Sec. 7. We conclude in Sec. 8.
## 2 Related Work.
We refer the reader to Krause and Golovin [5] for a thorough review of
submodularity and its applications.
Accelerating Greedy. The seminal greedy algorithm proposed by Nemhauser et al.
[12] provides a $1-1/e$ approximation ratio for submodular maximization
subject to uniform matroid constraints. However, for general matroids this
approximation ratio deteriorates to 1/2 [13]. Several works have introduced
variants of the greedy algorithm to accelerate it [19, 20, 21], particularly
for influence maximization [22, 23]. However, these accelerations do not
readily apply to the continuous greedy algorithm.
Multilinear Relaxation. The continuous greedy algorithm was proposed by
Vondrák [14] and Calinescu et al. [1]. Maximizing the multilinear relaxation
of submodular functions improves the 1/2 approximation ratio of the greedy
algorithm [13] to $1-1/e$ [1] over general matroids. Beyond maximization over
matroid constraints, the multilinear relaxation has been used to obtain
guarantees for non-monotone submodular maximization [24, 25], as well as in
pipage rounding [15]. All of these approaches resort to sampling; as we
provide general approximation guarantees, our approach can be used to
accelerate these algorithms as well.
DR-Submodularity. Submodular functions have also been studied in the
continuous domain recently. Continuous functions that exhibit the diminishing
returns property are termed _DR-submodular_ functions [26, 27, 28, 29, 30,
31], and arise in mean field inference [32], budget allocation [33], and non-
negative quadratic programming [27, 34]. DR-submodular functions are in
general neither convex nor concave; however, gradient-based methods [26, 27,
35, 28] provide constant approximation guarantees. The multilinear relaxation
is also a DR-submodular function; hence, obtaining fractional solutions to
multilinear relaxation maximization problems, without rounding, is of
independent interest. Our work can thus be used to accelerate precisely this
process.
Stochastic Submodular Maximization. Stochastic submodular maximization, in
which the objective is itself random, has attracted great interest recently
[36, 37, 17, 35, 38], both in the discrete and continuous domains. A
quintessential example is influence maximization [7], where the total number
of influenced nodes is determined by random influence models. In short, when
submodular or DR-submodular objectives are expressed as expectations, sampling
in gradient-based methods has two sources of randomness (one for sampling the
objective, and one for estimating the multilinear relaxation/sampling inputs);
continuous greedy still comes with guarantees. Our work is orthogonal, in that
it can be used to eliminate the second source of randomness. It can therefore
be used in conjunction with stochastic methods whenever our assumptions apply.
Connection to Other Works. Our work is closest to, and inspired by, Mahdian et
al. [39] and Karimi et al. [17]. To the best of our knowledge, the only other
work that approximates the multilinear relaxation via a power series is [39].
The authors apply this technique to a submodular maximization problem
motivated by cache networks. We depart by (a) extending this approach to more
general submodular functions, (b) establishing formal assumptions under which
this generalization yields approximation guarantees, and (c) improving upon
earlier guarantees for cache networks by [39]. In particular, the authors
assume that derivatives are bounded; we relax this assumption, which does not
hold for any of the problems we study here.
Karimi et al. [17] maximize stochastic _coverage functions_ subject to matroid
constraints, showing that many different problems can be cast in this setting.
Some of the examples we consider (see Sec. 6) consist of compositions of
analytic, non-linear functions with coverage functions; hence, our work can be
seen as a direct generalization of [17].
## 3 Technical Preliminaries.
### 3.1 Submodularity and Matroids.
Given a ground set $V=\\{1,\ldots,N\\}$ of $N$ elements, a set function
$f:2^{V}\rightarrow\mathbb{R}_{+}$ is submodular if and only if
$f(B\cup\\{e\\})-f(B)\leq f(A\cup\\{e\\})-f(A)$, for all $A\subseteq
B\subseteq V$ and $e\in V$. Function $f$ is _monotone_ if $f(A)\leq f(B)$, for
every $A\subseteq B$.
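For small ground sets, the diminishing-returns inequality above can be verified exhaustively. The sketch below does this for a toy coverage function (an example of our own construction; coverage functions are discussed in Sec. 4):

```python
import itertools

# Toy coverage function: f(S) = |union of the subsets indexed by S|.
subsets = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d", "e"}}

def f(S):
    covered = set()
    for i in S:
        covered |= subsets[i]
    return len(covered)

def is_submodular(V, f):
    """Brute-force check of f(B ∪ {e}) - f(B) <= f(A ∪ {e}) - f(A) for all A ⊆ B ⊆ V, e ∈ V."""
    powerset = [set(c) for r in range(len(V) + 1) for c in itertools.combinations(V, r)]
    for A in powerset:
        for B in powerset:
            if not A <= B:
                continue
            for e in V:
                if f(B | {e}) - f(B) > f(A | {e}) - f(A):
                    return False
    return True

V = {0, 1, 2}
print(is_submodular(V, f))  # coverage functions are submodular → True
```

The same routine returns False for, e.g., $f(S)=|S|^{2}$, which has increasing rather than diminishing returns.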
Matroids. Given a ground set $V$, a matroid is a pair
$\mathcal{M}=(V,\mathcal{I})$, where $\mathcal{I}\subseteq 2^{V}$ is a
collection of _independent sets_ , for which the following holds:
1. 1.
If $B\in\mathcal{I}$ and $A\subset B$, then $A\in\mathcal{I}$.
2. 2.
If $A,B\in\mathcal{I}$ and $|A|<|B|,$ there exists $x\in B\setminus A$ s.t.
$A\cup\\{x\\}\in\mathcal{I}$.
The _rank_ of a matroid, $r_{\mathcal{M}}(V)$, is the largest cardinality of
its independent sets, i.e.: $r_{\mathcal{M}}(V)=\max\\{|A|:{A}\in\mathcal{I}\\}.$ We
introduce two examples of matroids:
1. 1.
Uniform Matroids. The uniform matroid with cardinality $k$ is
$\mathcal{I}=\\{S\subseteq V,\,|S|\leq k\\}$.
2. 2.
Partition Matroids. Let $\mathcal{B}_{1},\ldots,\mathcal{B}_{m}\subseteq V$ be
a partitioning of $V$, i.e.,
$\mathcal{B}_{\ell}\cap\mathcal{B}_{\ell'}=\emptyset$ for all
$\ell\neq\ell'$, and $\bigcup_{\ell=1}^{m}\mathcal{B}_{\ell}=V$. Let also
$k_{\ell}\in\mathbb{N},\ell=1,\ldots,m$, be a set of cardinalities. A
partition matroid is defined as $\mathcal{I}=\\{S\subseteq
V\,\mid\,|S\cap\mathcal{B}_{\ell}|\leq k_{\ell},\text{ for all
}\ell=1,\ldots,m\\}.$
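The two matroid axioms and the partition-matroid definition above can be checked directly on small instances. A minimal sketch (the ground set, blocks, and capacities below are our own toy example, not from the paper):

```python
def is_independent_partition(S, blocks, caps):
    """S is independent in the partition matroid iff |S ∩ B_l| <= k_l for every block."""
    return all(len(S & B) <= k for B, k in zip(blocks, caps))

# V = {0,...,5} partitioned into two blocks, with capacities k_1 = 1, k_2 = 2.
blocks = [{0, 1, 2}, {3, 4, 5}]
caps = [1, 2]

print(is_independent_partition({0, 3, 4}, blocks, caps))  # True: 1 <= 1 and 2 <= 2
print(is_independent_partition({0, 1, 3}, blocks, caps))  # False: block 1 holds 2 > 1
```

Note that downward closure (axiom 1) is immediate here: removing elements from an independent set can only decrease each intersection count.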
Change of Variables. There is a one-to-one correspondence between a binary
vector $\mathbf{x}\in\\{0,1\\}^{N}$ and its support
$S=\texttt{supp}(\mathbf{x})$. Hence, a set function
$f:2^{V}\rightarrow\mathbb{R}_{+}$ can be interpreted as
$f:\\{0,1\\}^{N}\rightarrow\mathbb{R}_{+}$ via: $f(\mathbf{x})\triangleq
f(\texttt{supp}(\mathbf{x}))$ for $\mathbf{x}\in\\{0,1\\}^{N}$. We adopt this
convention for the remainder of the paper. We also treat matroids as subsets
of $\\{0,1\\}^{N}$, defined consistently with this change of variables via
(3.1)
$\displaystyle\mathcal{M}=\\{\mathbf{x}\in\\{0,1\\}^{N}:\operatorname{supp}(\mathbf{x})\in\mathcal{I}\\}.$
For example, a partition matroid is:
(3.2)
$\displaystyle\mathcal{M}=\textstyle\left\\{\mathbf{x}\in\\{0,1\\}^{N}\,\mid\,\sum_{i\in\mathcal{B}_{\ell}}x_{i}\leq k_{\ell},\text{ for all }\ell=1,\ldots,m\right\\}.$
The _matroid polytope_ $P(\mathcal{M})\subseteq[0,1]^{N}$ is the convex hull
of matroid $\mathcal{M}$, i.e., $P(\mathcal{M})=\texttt{conv}(\mathcal{M}).$
### 3.2 Submodular Maximization Subject to Matroid Constraints.
We consider the problem of maximizing a submodular function
$f:\\{0,1\\}^{N}\to\mathbb{R}_{+}$ subject to matroid constraints
$\mathcal{M}$:
(3.3) $\displaystyle\textstyle\max_{\mathbf{x}\in\mathcal{M}}f(\mathbf{x}).$
As mentioned in the introduction, the classic greedy algorithm achieves a 1/2
approximation ratio over general matroids, while the continuous greedy
algorithm [1] achieves a $1-1/e$ approximation ratio. We review the continuous
greedy algorithm below.
### 3.3 Continuous Greedy Algorithm.
The multilinear relaxation of a submodular function $f$ is the expectation of
$f$, assuming inputs $x_{i}$ are independent Bernoulli random variables, i.e.,
$G:[0,1]^{N}\rightarrow\mathbb{R}_{+}$, and
(3.4)
$\displaystyle G(\mathbf{y})=\mathbb{E}_{\mathbf{x}\sim\mathbf{y}}[f(\mathbf{x})]=\sum_{\mathbf{x}\in\\{0,1\\}^{N}}f(\mathbf{x})\prod_{i:x_{i}=1}y_{i}\prod_{i:x_{i}=0}(1-y_{i}),$
where $\mathbf{y}=[y_{i}]_{i=1}^{N}\in[0,1]^{N}$ is the vector of
probabilities $y_{i}=\mathbb{P}[x_{i}=1]$. The continuous greedy algorithm
first maximizes $G$ in the continuous domain, producing an approximate
solution to:
(3.5) $\displaystyle\textstyle\max_{\mathbf{y}\in
P(\mathcal{M})}G(\mathbf{y}).$
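For small $N$, the $2^{N}$-term sum in (3.4) can be evaluated exactly by enumeration; the sketch below does this for a toy coverage function (our own illustrative instance) purely to make the definition concrete — it is not a practical method:

```python
import itertools

def multilinear_relaxation(f, y):
    """Exact G(y) via the 2^N-term sum in (3.4); only viable for small N."""
    N = len(y)
    total = 0.0
    for bits in itertools.product([0, 1], repeat=N):
        # P[x = bits] under independent Bernoulli coordinates with means y.
        p = 1.0
        for yi, xi in zip(y, bits):
            p *= yi if xi else (1.0 - yi)
        total += p * f(bits)
    return total

# Toy coverage function: f(x) = size of the union of the selected subsets.
subsets = [{"a", "b"}, {"b", "c"}, {"c", "d"}]

def f(x):
    covered = set()
    for s, xi in zip(subsets, x):
        if xi:
            covered |= s
    return len(covered)

print(multilinear_relaxation(f, [0.5, 0.5, 0.5]))  # → 2.5
```

The value 2.5 matches a direct calculation: each element is covered with probability 0.5 ("a", "d") or 0.75 ("b", "c"), and the expected union size is the sum of these probabilities.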
The algorithm initially starts with $\mathbf{y}_{0}=\mathbf{0}$. Then, it
proceeds in iterations, where in the $k$-th iteration, it finds a feasible
point $\mathbf{m}_{k}\in P(\mathcal{M})$ which is a solution for the following
linear program:
(3.6) $\textstyle\max_{\mathbf{m}\in
P(\mathcal{M})}\big{\langle}\mathbf{m},\nabla G(\mathbf{y}_{k})\big{\rangle}.$
After finding $\mathbf{m}_{k}$, the algorithm updates the current solution
$\mathbf{y}$ as follows:
(3.7) $\mathbf{y}_{k+1}=\mathbf{y}_{k}+\gamma_{k}\mathbf{m}_{k},$
where $\gamma_{k}\in[0,1]$ is a step size. We summarize the continuous greedy
algorithm in Alg. 1.
The output of Alg. 1 is within a $1-1/e$ factor from the optimal solution
$\mathbf{y}^{*}\in P(\mathcal{M})$ to (3.5) (see Thm. 3.1 below). This
fractional solution can be rounded to produce a solution to (3.3) with the
same approximation guarantee using, e.g., either the pipage rounding [15] or
the swap rounding [1, 16] methods. Both are reviewed in detail in App. A.
Sample Estimator. The gradient $\nabla G$ is needed to perform step (3.6);
computing it directly via (3.4) involves a summation over $2^{N}$ terms.
Instead, Calinescu et al. [1] estimate it via sampling. First, observe that
function $G$ is affine w.r.t. each coordinate $y_{i}$. As a result,
(3.8) $({\partial G(\mathbf{y})}/{\partial
y_{i}})=\mathbb{E}_{\mathbf{x}\sim\mathbf{y}}[f\left([\mathbf{x}]_{+i}\right)]-\mathbb{E}_{\mathbf{x}\sim\mathbf{y}}[f\left([\mathbf{x}]_{-i}\right)],$
where $[\mathbf{x}]_{+i}$ and $[\mathbf{x}]_{-i}$ are equal to the vector
$\mathbf{x}$ with the $i$-th coordinate set to $1$ and $0$, respectively. The
gradient of $G$ can thus be estimated by (a) producing $T$ random samples
$\mathbf{x}^{(l)}$, for $l\in\\{1,\ldots,T\\}$ of the random vector
$\mathbf{x}$, consisting of independent Bernoulli coordinates with
$\mathbb{P}(x_{i}=1)=y_{i}$, and (b) computing the empirical mean of the
r.h.s. of (3.8), yielding:
(3.9) $\widehat{\frac{\partial G(\mathbf{y})}{\partial
y_{i}}}=\frac{1}{T}\sum\limits_{l=1}^{T}(f([\mathbf{x}^{(l)}]_{+i})-f([\mathbf{x}^{(l)}]_{-i})).$
This estimator yields the following guarantee:
###### Theorem 3.1
[Calinescu et al. [1]] Consider Algorithm 1, with $\nabla G(\mathbf{y}_{k})$
replaced by $\widehat{\nabla G}(\mathbf{y}_{k})$ given by (3.9). Set
$T=\frac{10}{\delta^{2}}(1+\ln{|V|})$, where $\delta=\frac{1}{40d^{2}|V|}$ and
$d=r_{\mathcal{M}}(V)$ is the rank of the matroid. The algorithm terminates
after $K=\frac{1}{\delta}$ steps and, w.h.p.,
(3.10) $\displaystyle
G(\mathbf{y}_{K})\geq(1-(1-\delta)^{\frac{1}{\delta}})G(\mathbf{y}^{*})\geq(1-\frac{1}{e})G(\mathbf{y}^{*})$
where $\mathbf{y}^{*}$ is an optimal solution to (3.5).
Algorithm 1 The Continuous Greedy Algorithm
1:Input: $G:P(\mathcal{M})\rightarrow\mathbb{R}_{+}$, $0<\gamma\leq 1$
2:$\mathbf{y}_{0}\leftarrow 0,\,t\leftarrow 0,\,k\leftarrow 0$
3:while $t<1$ do
4: $\mathbf{m}_{k}\leftarrow\mathop{\arg\,\max}_{\mathbf{m}\in
P(\mathcal{M})}\langle\mathbf{m},\nabla G(\mathbf{y}_{k})\rangle$
5: $\gamma_{k}\leftarrow\min(\gamma,1-t)$
6: $\mathbf{y}_{k+1}\leftarrow\mathbf{y}_{k}+\gamma_{k}\mathbf{m}_{k}$,
$t\leftarrow t+\gamma_{k}$, $k\leftarrow k+1$
7:end while
8:return $\mathbf{y}_{k}$
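A minimal sketch of Alg. 1 in Python (our own naming; the linear-programming oracle in line 4 is matroid-specific, and we illustrate it for a uniform matroid, where the maximizer is simply the indicator of the $k$ largest gradient entries):

```python
def continuous_greedy(grad_G, lp_oracle, N, gamma=0.05):
    """Sketch of Alg. 1 (continuous greedy / Frank-Wolfe).

    grad_G    : y -> gradient of the multilinear relaxation at y
    lp_oracle : g -> argmax_{m in P(M)} <m, g>   (matroid-dependent)
    """
    y, t = [0.0] * N, 0.0
    while t < 1.0:
        m = lp_oracle(grad_G(y))          # line 4 of Alg. 1
        step = min(gamma, 1.0 - t)        # line 5
        y = [yi + step * mi for yi, mi in zip(y, m)]
        t += step
    return y

def uniform_matroid_oracle(k):
    """LP oracle for a uniform (cardinality-k) matroid: the maximizer of a
    linear objective over P(M) is the indicator of the k largest entries."""
    def oracle(g):
        top = sorted(range(len(g)), key=lambda i: -g[i])[:k]
        return [1.0 if i in top else 0.0 for i in range(len(g))]
    return oracle
```

For a partition matroid, the oracle would instead pick the top-$k_{\ell}$ entries within each part.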
## 4 Multilinear Functions.
In practice, estimating $G$ (and, through (3.8), its gradient) via sampling
poses a considerable computational burden. Attaining the guarantees of Thm.
3.1 requires the number of samples per estimate to grow as $N^{2}d^{4}$, which
can quickly become prohibitive.
In some cases, however, the multilinear relaxation $G(\mathbf{y})$ has a
polynomially-computable closed form. A prominent example is the coverage
function, which arises in several different contexts [15, 17]. Let
$U=\\{\mathcal{J}_{1},\ldots,\mathcal{J}_{n}\\}$ be a collection of subsets of
some ground set $V=\\{1,\ldots,N\\}$. The coverage function
$f:\\{0,1\\}^{N}\rightarrow\mathbb{R}_{+}$ is:
(4.11)
$f(\mathbf{x})=\textstyle\sum_{\ell=1}^{n}\left(1-\prod_{i\in\mathcal{J}_{\ell}}(1-x_{i})\right).$
It is easy to confirm that:
$\displaystyle G(\mathbf{y})$
$\displaystyle=\mathbb{E}_{\mathbf{x}\sim\mathbf{y}}[f(\mathbf{x})]=\mathbb{E}_{\mathbf{x}\sim\mathbf{y}}\big{[}\sum_{\ell=1}^{n}\big{(}1-\prod_{i\in\mathcal{J}_{\ell}}(1-x_{i})\big{)}\big{]}$
(4.12)
$\displaystyle=\sum_{\ell=1}^{n}\big{(}1-\prod_{i\in\mathcal{J}_{\ell}}(1-\mathbb{E}_{\mathbf{x}\sim\mathbf{y}}[x_{i}])\big{)}=f(\mathbf{y}).$
In other words, the multilinear relaxation evaluated at
$\mathbf{y}\in[0,1]^{N}$ is equal to $f(\mathbf{y})$ when the latter has form
(4.11). Therefore, computing it does not require sampling; crucially,
evaluating (4.11) takes $O(nN)$ time, i.e., polynomial in the input size.
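The identity (4.12) is easy to check numerically; the sketch below (our own code) evaluates the $2^{N}$-term sum (3.4) by brute force and compares it against the closed form $f(\mathbf{y})$:

```python
import itertools
import math

def coverage(x, subsets):
    """Coverage function (4.11); accepts binary or fractional arguments."""
    return sum(1 - math.prod(1 - x[i] for i in J) for J in subsets)

def exact_relaxation(y, subsets):
    """G(y) via the 2^N-term sum (3.4) -- exponential, for illustration only."""
    N = len(y)
    total = 0.0
    for x in itertools.product([0, 1], repeat=N):
        # probability of drawing this binary vector under marginals y
        weight = math.prod(y[i] if x[i] else 1 - y[i] for i in range(N))
        total += weight * coverage(x, subsets)
    return total
```

The two agree up to floating-point rounding, while the brute-force sum is exponential in $N$.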
This clearly has a computational advantage when executing the continuous
greedy algorithm. In fact, (4.12) generalizes to a broader class of functions:
it holds as long as the objective $f$ is, itself, multilinear. Formally, a
function, $f:\mathbb{R}^{N}\rightarrow\mathbb{R}$ is multilinear if it is
affine w.r.t. each of its coordinates [40]. Put differently, multilinear
functions are polynomial functions in which the degree of each variable in a
monomial is at most $1$; that is, multilinear functions can be written as:
(4.13)
$g(\mathbf{x})=\textstyle\sum_{\ell\in\mathcal{I}}c_{\ell}\prod_{i\in\mathcal{J}_{\ell}}x_{i},$
where $c_{\ell}\in\mathbb{R}$ for $\ell$ in some index set $\mathcal{I}$, and
subsets $\mathcal{J}_{\ell}\subseteq V$ (by convention, if
$\mathcal{J}_{\ell}=\emptyset$, we set
$\prod_{i\in\mathcal{J}_{\ell}}x_{i}=1$). Clearly, both the coverage function
(4.11) and the multilinear relaxation (3.4) are multilinear in their
respective arguments.
Eq. (4.12) generalizes to _any multilinear function_. In particular:
###### Lemma 4.1
Let $f:\mathbb{R}^{N}\rightarrow\mathbb{R}_{+}$ be a multilinear function and
let $\mathbf{x}\in\\{0,1\\}^{N}$ be a random vector of independent Bernoulli
coordinates parameterized by $\mathbf{y}\in[0,1]^{N}$.
Then,
$G(\mathbf{y})=\mathbb{E}_{\mathbf{x}\sim\mathbf{y}}[f(\mathbf{x})]=f(\mathbf{y}).$
The proof can be found in App. B.1. Lem. 4.1 immediately implies that all
polytime-computable, submodular multilinear functions behave like the coverage
function: computing their multilinear relaxation _does not require sampling_.
Hence, continuous greedy admits highly efficient implementations in this
setting. Our main contribution is to extend this to a broader class of
functions, by leveraging Taylor series approximations. We discuss this in
detail in the next section.
## 5 Main Results
Table 1: Notation Summary
$\mathbb{R}$ | Set of real numbers
---|---
$\mathbb{R}_{+}$ | Set of non-negative real numbers
$G(V,E)$ | Graph $G$ with nodes $V$ and edges $E$
$V$ | Ground set of $N$ elements
$f$ | A monotone, submodular set function
$\mathcal{I}$ | Collection of independent sets in $2^{V}$
$\mathcal{M}$ | Matroid denoting the $(V,\mathcal{I})$ pair
conv$(\cdot)$ | Convex hull of a set
$k$ | Cardinality constraint of a uniform matroid
$\mathbf{x}$ | Global item placement vector of $x_{i}$’s in $\\{0,1\\}^{N}$
$[\mathbf{x}]_{+i}$ | Vector $\mathbf{x}$ with the $i$th coordinate set to $1$
$[\mathbf{x}]_{-i}$ | Vector $\mathbf{x}$ with the $i$th coordinate set to $0$
$y_{i}$ | Probability of $i\in S$
$\mathbf{y}$ | Vector of marginal probabilities $y_{i}$’s in $[0,1]^{N}$
$G(\mathbf{y})$ | Multilinear extension with marginals $\mathbf{y}$
$h_{i}$ | An analytic function
$g_{i}$ | A multilinear function
$w_{i}$ | Weights in $\mathbb{R}$
$\hat{h}_{L}$ | Polynomial estimator of $h_{i}$ of degree $L$
$R_{i,L}$ | Residual error of the estimator $\hat{h}_{L}$
$\hat{f}_{L}(\mathbf{x})$ | Polynomial estimator of $f(\mathbf{x})$ of degree $L$
$R_{L}(\mathbf{x})$ | Residual error vector of the polynomial estimator $\hat{f}_{L}(\mathbf{x})$
$\epsilon_{i,L}(\mathbf{y})$ | Residual error of the estimator $\partial\widehat{G(\mathbf{y})}/\partial y_{i}$
$\varepsilon(L)$ | Bias of the estimator $\widehat{\nabla G(\mathbf{y})}$
| Influence Maximization
$M$ | Number of cascades
| Facility Location
$V$ | Number of facilities
$M$ | Number of customers
| Summarization
$M$ | Number of partitions
In this section, we show that Eq. (4.12) can be extended to submodular
objectives that can be expressed via compositions of analytic functions and
multilinear functions. In a nutshell, our approach is based on two
observations: (a) when restricted to binary values, polynomials of multilinear
functions are themselves multilinear functions, and (b) analytic functions are
approximated at arbitrary accuracy via polynomials. Exploiting these two
facts, we approximate the multilinear relaxation of an arbitrary analytic
function via an appropriate Taylor series; the resulting approximation is
multilinear and, hence, directly computable without sampling.
### 5.1 Motivation and Intuition.
We begin by establishing that polynomials of multilinear functions are
themselves multilinear functions, when restricted to binary values. Formally:
###### Lemma 5.1
The set of multilinear functions restricted over the domain $\\{0,1\\}^{N}$ is
closed under addition, multiplication, and multiplication with a scalar.
Put differently, multilinear functions restricted over the domain
$\\{0,1\\}^{N}$ form both a ring and a vector space. The proof of Lem. 5.1 can
be found in App. B.2. It is important to note that multilinear functions are
closed under multiplication only when restricted to domain $\\{0,1\\}^{N}$.
The general set of multilinear functions
$f:[0,1]^{N}\rightarrow\mathbb{R}_{+}$ is _not_ closed under multiplication.
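A two-variable illustration of this point (our own toy example): on binary inputs $x_{i}^{2}=x_{i}$, so a product of multilinear functions collapses back to a multilinear one, but the collapse fails off the hypercube:

```python
import itertools

g1 = lambda x: x[0] + x[1]                 # multilinear
g2 = lambda x: x[0]                        # multilinear
product = lambda x: g1(x) * g2(x)          # = x0^2 + x0*x1
reduced = lambda x: x[0] + x[0] * x[1]     # multilinear form after x0^2 -> x0

# The two agree on every binary point...
for x in itertools.product([0, 1], repeat=2):
    assert product(x) == reduced(x)

# ...but not at fractional points, so the restriction to {0,1}^N matters:
assert product((0.5, 0.0)) != reduced((0.5, 0.0))
```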
Lem. 5.1 has the following implication. Consider a submodular function
$f:\\{0,1\\}^{N}\to\mathbb{R}_{+}$ of the form $f(\mathbf{x})=h(g(\mathbf{x}))$,
where $g:\mathbb{R}^{N}\to\mathbb{R}$ is a multilinear function and
$h:\mathbb{R}\to\mathbb{R}_{+}$ is an analytic function (e.g., $\log$, $\exp$,
$\sin$, etc.). As $h$ is analytic, it can be approximated by a polynomial
$\hat{h}$ around a certain value in its domain. This gives us a way to
estimate the multilinear relaxation of $f$ without sampling. First, we
approximate $f$ by replacing $h$ with $\hat{h}$, getting $\hat{f}=\hat{h}(g)$.
As $\hat{f}$ is the polynomial of a multilinear function restricted to
$\\{0,1\\}^{N}$, by Lem. 5.1, $\hat{f}$ _can also be expressed as a
multilinear function_. Thus, $G$ can be estimated _without sampling_ via the
estimator $\hat{G}(\mathbf{y})\triangleq\hat{f}(\mathbf{y})$.
In the remainder of this section, we elaborate further on this construction,
slightly generalizing the setup and providing formal approximation
guarantees.
### 5.2 Assumptions.
Formally, we consider set functions $f:\\{0,1\\}^{N}\to\mathbb{R}_{+}$ that
satisfy two assumptions:
###### Assumption 1
Function $f:\\{0,1\\}^{N}\to\mathbb{R}_{+}$ is monotone and submodular.
###### Assumption 2
Function $f:\\{0,1\\}^{N}\to\mathbb{R}_{+}$ has form
(5.14) $\displaystyle
f(\mathbf{x})=\textstyle\sum_{j=1}^{M}w_{j}h_{j}(g_{j}(\mathbf{x})),$
for some $M\in\mathbb{N}$, and $w_{j}\in\mathbb{R}$,
$h_{j}:[0,1]\rightarrow\mathbb{R}_{+}$, and $g_{j}:[0,1]^{N}\rightarrow[0,1]$,
for $j\in\\{1,\ldots,M\\}$. Moreover, for every $j\in\\{1,\ldots,M\\}$, the
following hold:
1. Function $g_{j}:[0,1]^{N}\to[0,1]$ is multilinear.
2. There exists a polynomial $\hat{h}_{L}:[0,1]\to\mathbb{R}$ of degree $L$ for
$L\in\mathbb{N}$, such that $|h_{j}(s)-\hat{h}_{L}(s)|\leq R_{j,L}(s)$, where
$\lim_{L\to\infty}R_{j,L}(s)=0$ for all $s\in[0,1]$.
Asm. 2 implies that $f$ can be written as a linear combination of compositions
of analytic functions $h_{j}$ with multilinear functions $g_{j}$. The former
can be arbitrarily well approximated by polynomials of degree $L$; any
residual error from this approximation converges to zero as the degree of the
polynomial increases.
Tab. 2 summarizes several problems that satisfy Assumptions 1 and 2. We review
each of these problems in more detail in Sec. 6; in the remainder of this
section, we provide approximation guarantees for objectives that satisfy these
two assumptions.
Table 2: Summary of problems satisfying Assumptions 1 & 2.
| Input | $g_{j}:\\{0,1\\}^{|V|}\rightarrow[0,1]$ $\mathbf{x}\rightarrow g_{j}(\mathbf{x})$ | $h_{j}:[0,1]\rightarrow\mathbb{R}_{+}$ $s\rightarrow h_{j}(s)$ | $f:\\{0,1\\}^{|V|}\rightarrow\mathbb{R}_{+}$ $\mathbf{x}\rightarrow f(\mathbf{x})$ | Bias $\varepsilon(L)$
---|---|---|---|---|---
SM | Partitions $\bigcup_{j=1}^{M}\\{P_{j}\\}=V$ weights $\mathbf{r}\in\mathbb{R}_{+}^{N}$, and $\sum_{i=1}^{N}r_{i}=1$ | $\sum\limits_{i\in P_{j}}r_{i}x_{i}$ | $\log(1+s)$ | $\sum\limits_{j=1}^{M}h(s_{j})$ | $\frac{M\sqrt{N}}{(L+1)2^{L}}$
IM | Instances $G=(V,E)$ of a directed graph, partitions $\\{P_{v}^{j}\\}_{j=1}^{N}\subset V$ | $\sum\limits_{i\in V}\frac{1}{N}\Big{(}1-\prod\limits_{u\in P_{i}^{j}}(1-x_{u})\Big{)}$ | $\log(1+s)$ | $\frac{1}{M}\sum\limits_{j=1}^{M}h(s_{j})$ | $\frac{\sqrt{N}}{(L+1)2^{L}}$
FL | Complete weighted bipartite graph $G=(V\cup V^{\prime})$ weights $w_{i_{\ell},j}\in[0,1]^{N\times M}$ | $\sum\limits_{\ell=1}^{N}(w_{i_{\ell},j}-w_{i_{\ell+1},j})\left(1-\prod\limits_{k=1}^{\ell}(1-x_{i_{k}})\right)$ | $\log(1+s)$ | $\frac{1}{M}\sum\limits_{j=1}^{M}h(s_{j})$ | $\frac{\sqrt{N}}{(L+1)2^{L}}$
CN | Graph $G=(V,E)$, service rates $\mu\in\mathbb{R}_{+}^{M}$, requests $r\in\mathcal{R}$, $P_{j}$ path of $r$, arrival rates $\lambda\in\mathbb{R}_{+}^{|\mathcal{R}|}$ | $\frac{1}{\mu_{j}}\sum_{r\in\mathcal{R}:j\in p^{r}}\lambda^{r}\prod_{k^{\prime}=1}^{k_{p^{r}}(v)}(1-x_{p_{k}^{r},i^{r}})$ | $\frac{s}{1-s}$ | $\sum\limits_{j=1}^{M}h(s_{0})-\sum\limits_{j=1}^{M}h(s_{j})$ | $2M\sqrt{{|V||\mathcal{C}|}}\frac{\bar{s}^{L+1}}{1-\bar{s}}$
### 5.3 A Polynomial Estimator.
Given a function $f$ that satisfies Asm. 2, we construct the _polynomial
estimator of $f(\mathbf{x})$ of degree $L$_ via
(5.15)
$\displaystyle\hat{f}_{L}(\mathbf{x})\triangleq\textstyle\sum_{j=1}^{M}w_{j}\hat{h}_{L}(g_{j}(\mathbf{x})).$
By Lem. 5.1, function $\hat{f}_{L}:\\{0,1\\}^{N}\to\mathbb{R}$ can be
expressed as a multilinear function. We define an estimator $\widehat{\nabla
G_{L}}$ of the gradient of the multilinear relaxation $G$ as follows: for all
$i\in V$,
$\displaystyle(\widehat{{\partial G_{L}}}/{\partial
y_{i}})\big{|}_{\mathbf{y}}$
$\displaystyle=\mathbb{E}_{\mathbf{y}}[\hat{f}_{L}([\mathbf{x}]_{+i})]-\mathbb{E}_{\mathbf{y}}[\hat{f}_{L}([\mathbf{x}]_{-i})]$
(5.16) $\displaystyle\stackrel{{\scriptstyle\text{Lem. 4.1}}}{{=}}\hat{f}_{L}([\mathbf{y}]_{+i})-\hat{f}_{L}([\mathbf{y}]_{-i}).$
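By definition, the estimator (5.16) is the gradient of the multilinear relaxation of $\hat{f}_{L}$. The sketch below (our code) computes that quantity by brute force over $\\{0,1\\}^{N}$, purely for illustration; the point of this section is precisely that, by Lem. 5.1, the same quantity also has a closed multilinear form that avoids this exponential sum:

```python
import itertools
import math

def poly_grad_bruteforce(f_hat, y):
    """Gradient of the multilinear relaxation of the polynomial estimator
    f_hat, per the first line of (5.16):
        E_y[f_hat([x]_{+i})] - E_y[f_hat([x]_{-i})].
    Brute force over 2^N binary points -- illustration only."""
    N = len(y)

    def expect(v):
        # E_{x~v}[f_hat(x)] with independent Bernoulli(v_i) coordinates
        tot = 0.0
        for x in itertools.product([0, 1], repeat=N):
            w = math.prod(v[i] if x[i] else 1 - v[i] for i in range(N))
            tot += w * f_hat(x)
        return tot

    grad = []
    for i in range(N):
        yp = list(y); yp[i] = 1.0   # pinning y_i = 1 realizes [x]_{+i}
        ym = list(y); ym[i] = 0.0   # pinning y_i = 0 realizes [x]_{-i}
        grad.append(expect(yp) - expect(ym))
    return grad
```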
We characterize the quality of this estimator via the following theorem, whose
proof is in App. C:
###### Theorem 5.1
Assume that function $f$ satisfies Asm. 2. Let $\widehat{\nabla G_{L}}$ be the
estimator of the gradient of the multilinear relaxation, given by (5.16), and
define
$R_{L}(\mathbf{x})\triangleq\sum_{j}|w_{j}||R_{j,L}(g_{j}(\mathbf{x}))|$ for
$\mathbf{x}\in\\{0,1\\}^{N}$. Then,
(5.17) $\big{\|}\nabla G(\mathbf{y})-\widehat{\nabla
G_{L}}(\mathbf{y})\big{\|}_{2}\leq\|\epsilon_{L}(\mathbf{y})\|_{2}$
where
$\epsilon_{L}(\mathbf{y})=[\epsilon_{i,L}(\mathbf{y})]_{i=1}^{N}\in\mathbb{R}^{N}$
and
(5.18)
$\displaystyle\epsilon_{i,L}(\mathbf{y})\triangleq\mathbb{E}_{\mathbf{y}}[R_{L}([\mathbf{x}]_{+i})]+\mathbb{E}_{\mathbf{y}}[R_{L}([\mathbf{x}]_{-i})].$
Moreover, $\lim_{L\to\infty}\|\epsilon_{L}(\mathbf{y})\|_{2}=0,$ uniformly on
$[0,1]^{N}$.
The theorem implies that, under Asm. 2, we can approximate $\nabla G$
arbitrarily well, uniformly over all $\mathbf{y}\in[0,1]^{N}$. This
approximation can be used in continuous greedy, achieving the following
guarantee:
###### Theorem 5.2
Assume a function $f:\\{0,1\\}^{N}\rightarrow\mathbb{R}_{+}$ satisfies
Assumptions 1 and 2, and consider Alg. 1, in which $\nabla G(\mathbf{y}_{k})$
is estimated via the polynomial estimator given in (5.16). Then,
(5.19) $\displaystyle
G(\mathbf{y}_{K})\geq\left(1-\frac{1}{e}\right)G(\mathbf{y}^{*})-D\,\varepsilon(L)-\frac{P}{2K},$
where $K=(1/\gamma)$ is the number of iterations, $\mathbf{y}^{*}$ is an
optimal solution to (3.5), $D=\max_{\mathbf{y}\in
P(\mathcal{M})}\|\mathbf{y}\|_{2}$ is the diameter of the matroid polytope,
$\varepsilon(L)=\max_{k}\|\epsilon_{L}(\mathbf{y}_{k})\|_{2}$ is the bias of
the estimator, and $P=2\max_{\mathbf{x}\in\mathcal{M}}f(\mathbf{x})$.
The proof can be found in App. D. Uniform convergence in Thm. 5.1 implies that
the estimator bias $\varepsilon(L)$ converges to zero. Hence, Thm. 5.2 implies
that we can obtain an approximation arbitrarily close to $1-1/e$, by setting
$L$ and $K$ appropriately.
We note that Thm. 5.2 provides a tighter guarantee than the one achieved by
Mahdian et al. [39] (see App. E for a detailed comparison); in particular,
they assume that derivatives of functions $h_{j}$ are bounded; we make no such
assumption. This is an important distinction, as none of the examples in Sec.
6/Tab. 2 have bounded derivatives (see App. G.1).
### 5.4 Time Complexity.
For all examples in Tab. 2, the error $\varepsilon(L)$ decays exponentially
with $L$. Hence, to achieve an approximation of $1-1/e-\varepsilon$, we must
have $L=\Theta\left(\log\left(\frac{1}{\varepsilon}\right)\right)$. Thus, if
the multilinear functions $g_{j}$, $j\in\\{1,\ldots,M\\}$, are polynomially
computable w.r.t. $N$ (as is the case for our examples), the total number of
terms in $\hat{f}_{L}$ is polynomial in both $N$ and
$\frac{1}{\varepsilon}$. We further elaborate on complexity issues in App. F.
## 6 Examples.
In this section, we list three problems that can be tackled through our
approach, also summarized in Tab. 2; we also review cache networks (CN) in
App. H.
### 6.1 Data Summarization (SM)[2, 6].
In data summarization, ground set $V$ is a set of tokens, representing, e.g.,
sentences in a document or documents in a corpus. The goal is to select a
“summary” $S\subseteq V$ that is representative of $V$. We present here the
diversity reward function proposed by Lin and Bilmes [2]. Assume that each
token $i$ has a value $r_{i}\in[0,1]$, where $\sum_{i}r_{i}=1$. The summary
$S$ should contain tokens of high value, but should simultaneously be diverse.
The authors achieve this by partitioning $V$ into sets $\\{P_{j}\\}_{j=1}^{M}$,
where each set $P_{j}\subset V$ contains tokens that are similar. They then
seek a summary that maximizes
(6.20) $f(S)=\textstyle\sum_{j=1}^{M}h\left(\sum_{i\in P_{j}\cap
S}r_{i}\right),$
where $h:\mathbb{R}_{+}\to\mathbb{R}_{+}$ is a non-decreasing concave function
(e.g., $h(s)=\log(1+s)$, $h(s)=s^{\alpha}$, where $\alpha<1$, etc.).
Intuitively, the use of $h$ suppresses the selection of similar items (in the
same $P_{j}$), even if they have high values, thereby promoting diversity.
Objective (6.20) is clearly of form (5.14). For example, for $h(s)=\log(1+s)$,
$f$ is monotone and submodular [2], and is the sum of compositions of $h$ with
multilinear functions $g_{j}(\mathbf{x})=\sum_{i\in P_{j}}r_{i}x_{i},$ as
illustrated in Tab. 2. Moreover, $h$ is analytic and can be approximated
within arbitrary accuracy by its $L^{\text{th}}$-order Taylor approximation
around 1/2, given by:
(6.21)
$\hat{h}_{L}(s)=\textstyle\sum_{\ell=0}^{L}\frac{h^{(\ell)}(1/2)}{\ell!}(s-{1}/{2})^{\ell}.$
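The coefficients of (6.21) for $h(s)=\log(1+s)$ have a simple closed form, since $h^{(\ell)}(s)=(-1)^{\ell-1}(\ell-1)!/(1+s)^{\ell}$ for $\ell\geq 1$. A sketch (our code):

```python
import math

def taylor_log1p(L, a=0.5):
    """Coefficients of the degree-L Taylor polynomial of h(s) = log(1+s)
    around s = a, as in (6.21): c_l = h^(l)(a) / l!."""
    coeffs = [math.log(1 + a)]
    for l in range(1, L + 1):
        # h^(l)(a)/l! = (-1)^(l-1) / (l * (1+a)^l)
        coeffs.append((-1) ** (l - 1) / (l * (1 + a) ** l))
    return coeffs

def h_hat(s, coeffs, a=0.5):
    """Evaluate the Taylor polynomial at s via Horner's rule."""
    v = 0.0
    for c in reversed(coeffs):
        v = v * (s - a) + c
    return v
```

On $[0,1]$ the approximation error decays roughly like $2^{-L}/(L+1)$, consistent with the bias bound of Thm. 6.1 below.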
We show in App. G.1 that this estimator ensures that $f$ indeed satisfies Asm.
2. Moreover, the estimator bias appearing in Thm. 5.2 is also bounded:
###### Theorem 6.1
Assume a diversity reward function $f:\\{0,1\\}^{N}\rightarrow\mathbb{R}_{+}$
given by (6.20), with $h(s)=\log(1+s)$, and consider the estimator
$\widehat{\nabla G}(\mathbf{y}_{k})$ given in (5.16) using $\hat{h}_{L}$, the
$L^{\text{th}}$-order Taylor polynomial of $h$ around $1/2$, given by (6.21).
Then, the bias of the estimator satisfies
$\varepsilon(L)\leq\frac{M\sqrt{N}}{(L+1)2^{L}}.$
The proof of this theorem can be found in App. G.1. Our work directly allows
for the optimization of such objectives over matroid constraints. For example,
a partition matroid (distinct from $\\{P_{j}\\}_{j=1}^{M}$) could be used to
enforce that no more than $k_{\ell}$ sentences come from the $\ell$-th
paragraph, etc.
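The objective (6.20) is straightforward to evaluate; the toy sketch below (our own code and data) also illustrates how the concave $h$ rewards diversity: spreading the selection across partitions beats concentrating it in one.

```python
import math

def diversity_reward(x, partitions, r, h=lambda s: math.log(1 + s)):
    """Diversity reward (6.20): sum_j h( sum_{i in P_j} r_i * x_i )."""
    return sum(h(sum(r[i] * x[i] for i in P)) for P in partitions)

# Toy data: 4 tokens of equal value, split into two similarity classes.
r = [0.25, 0.25, 0.25, 0.25]
P = [[0, 1], [2, 3]]
# Picking one token from each class scores higher than two from one class:
assert diversity_reward([1, 0, 1, 0], P, r) > diversity_reward([1, 1, 0, 0], P, r)
```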
### 6.2 Influence Maximization (IM) [7, 41].
Influence maximization problems can be expressed as weighted coverage
functions (see, e.g., [17]). In short, given a directed graph $G=(V,E)$, we
wish to maximize the expected fraction of nodes reached if we infect a set of
nodes $S\subseteq V$ and the infection spreads via the Independent Cascade
(IC) model [7]. In our notation this objective can be written as
(6.22) $\displaystyle
f(\mathbf{x})=\textstyle\frac{1}{M}\sum_{j=1}^{M}\frac{1}{N}\sum_{v\in
V}\left(1-\prod_{i\in P_{v}^{j}}(1-x_{i})\right),$
where $P_{v}^{j}\subseteq V$ is the set of nodes reachable from $v$ in a
random simulation of the IC model. This is a multilinear function. Our
approach allows us to extend this to maximizing the expectation of _analytic
functions_ $h$ of the fraction of infected nodes. For example, for
$h(s)=\log(1+s)$, we get:
(6.23) $g_{j}(\mathbf{x})=\textstyle\sum_{v\in
V}\frac{1}{N}\big{(}1-\prod_{i\in P_{v}^{j}}(1-x_{i})\big{)},$
for $j=1,\ldots,M$, and
(6.24)
$f(\mathbf{x})=\textstyle\frac{1}{M}\sum_{j=1}^{M}h\left(g_{j}(\mathbf{x})\right).$
Functions $g_{j}:[0,1]^{N}\to[0,1]$ are multilinear, monotone submodular, and
$O(N^{2})$ computable, while $h:[0,1]\to\mathbb{R}$ is non-decreasing and
concave. As a result, (6.24) satisfies Asm. 1. Again, $h$ can be approximated
within arbitrary accuracy by its $L^{\text{th}}$-order Taylor approximation
around 1/2, given by (6.21). This again ensures that $f$ indeed satisfies Asm.
2. Moreover, we bound the estimator bias appearing in Thm. 5.2 as follows:
###### Theorem 6.2
For a function $f:\\{0,1\\}^{N}\rightarrow\mathbb{R}_{+}$
given by (6.24), consider the estimator $\widehat{\nabla G}$ given in
(5.16) using $\hat{h}_{L}$, the $L^{\text{th}}$-order Taylor approximation of
$h$ around $1/2$, given by (6.21). Then, the bias of the estimator
$\widehat{\nabla G}$ satisfies
$\varepsilon(L)\leq\frac{\sqrt{N}}{(L+1)2^{L}}.$
The proof of the theorem can be found in App. G.2. Partition matroid
constraints could be used in this setting to bound the number of seeds from
some group (e.g., males/females, people in a zip code, etc.).
### 6.3 Facility Location (FL)[36, 42].
Facility location is another classic example of submodular maximization [5].
Given a complete weighted bipartite graph $G=(V\cup V^{\prime})$ and weights
$w_{v,v^{\prime}}\in[0,1]$, $v\in V$, $v^{\prime}\in V^{\prime}$, we wish to
maximize:
(6.25) $f(S)=\textstyle\frac{1}{M}\sum_{j=1}^{M}\max_{i\in S}w_{i,j}\,.$
Intuitively, $V$ and $V^{\prime}$ represent facilities and customers
respectively and $w_{v,v^{\prime}}$ is the utility of facility $v$ for
customer $v^{\prime}$. The goal is to select a subset of facility locations
$S\subset{V}$ to maximize the total utility, assuming every customer chooses
the facility with the highest utility in the selection $S$. This too becomes a
coverage problem by observing that $\max_{i\in S}w_{i,j}$ equals [17]:
(6.26)
$g_{j}(\mathbf{x})=\sum\limits_{\ell=1}^{N}(w_{i_{\ell},j}-w_{i_{\ell+1},j})\big{(}1-\prod\limits_{k=1}^{\ell}(1-x_{i_{k}})\big{)},$
where, for a given $j\in V^{\prime}$, weights have been pre-sorted in
descending order as $w_{i_{1},j}\geq\ldots\geq w_{i_{N},j}$, and
$w_{i_{N+1},j}\triangleq 0$. We can again extend this problem to maximizing
analytic functions $h$ of the utility of a user. For example, for
$h(s)=\log(1+s)$, we can maximize
(6.27)
$f(\mathbf{x})=\textstyle\frac{1}{M}\sum_{j=1}^{M}\log\left(1+g_{j}(\mathbf{x})\right).$
In a manner similar to the influence maximization problem, we can show that
this function again satisfies Assumptions 1 and 2, using the
$L^{\text{th}}$-order Taylor approximation of $h$, given by (6.21). Moreover,
as in Thm. 6.2, the corresponding estimator bias is again
$\varepsilon(L)\leq\frac{\sqrt{N}}{(L+1)2^{L}}$. We can again therefore
optimize such an objective over arbitrary matroids, which can enforce, e.g.,
that no more than $k$ facilities are selected from a geographic area or some
other partition of $V$.
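A sketch (our code) of the per-customer term (6.26); on binary $\mathbf{x}$ it reproduces $\max_{i\in S}w_{i,j}$, while remaining well defined at fractional points:

```python
def fl_coverage(x, w_col):
    """Multilinear form (6.26) of max_{i in S} w_{i,j} for one customer j.

    x     : binary (or fractional) selection vector over facilities
    w_col : weights w_{i,j} for this customer, indexed by facility
    """
    order = sorted(range(len(w_col)), key=lambda i: -w_col[i])  # descending
    val, miss = 0.0, 1.0
    for rank, i in enumerate(order):
        nxt = w_col[order[rank + 1]] if rank + 1 < len(order) else 0.0
        miss *= (1 - x[i])                 # prod_{k <= rank} (1 - x_{i_k})
        val += (w_col[i] - nxt) * (1 - miss)
    return val
```

Pre-sorting the weights once per customer makes each subsequent evaluation $O(N)$.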
## 7 Experimental Study.
instance | dataset | $M$ | $N$ | $\sum_{j=1}^{M}\mathcal{I}$ | $\bar{\mathcal{J}}$ | m | k | $f^{*}$
---|---|---|---|---|---|---|---|---
IM | IMsynth1 | 1 | 200 | 200 | 5.2 | 10 | 3 | 0.3722
IM | IMsynth2 | 1 | 200 | 200 | 5.1 | 10 | 3 | 0.6031
FL | FLsynth1 | 200 | 200 | 40000 | 4.3 | 10 | 5 | 0.5197
FL | MovieLens | 100 | 100 | 10000 | 4.6 | 10 | 4 | 0.5430
IM | Epinions | 10 | 100 | 1000 | 3.2 | 2 | 2 | 0.5492
SM | SMsynth1 | 5 | 200 | 200 | 7.4 | 2 | 10 | 0.7669
Table 3: Datasets and Experiment Parameters.
### 7.1 Experiment Setup.
We execute Alg. 1 with sampling and polynomial estimators over $6$ different
graph settings and $3$ different problem instances, summarized in Tab. 3. Our
code is publicly available at https://github.com/neu-spiral/WDNFFunctions.
Influence Maximization. We experiment on two synthetic datasets and one real
dataset. For synthetic data, we generate two bipartite graphs with
$|V_{1}|=|V_{2}|=100$, $|E|=400$ and $M=1$. Seeds are always selected from
$V_{1}$. We select the edges across $V_{1}$ and $V_{2}$ u.a.r. (`IMsynth1`) or
by a power law distribution (`IMsynth2`). We construct a partition matroid of
$m=10$ equal-size partitions of $V_{1}$ and set $k=3$. The real dataset is the
Epinions dataset [43] on SNAP [44]. We use the subgraph induced by the top
$N=100$ nodes with the largest out-degree and use the IC model [7] with $M=10$
cascades. The probability for each node to influence its neighbors is set to
$p=0.02$. We construct a matroid of $m=2$ equal-size partitions and set $k=5$.
Figure 1: Trajectory of the FW algorithm on (a) IMsynth1, (b) IMsynth2, (c)
FLsynth1, and (d) MovieLens. Utility of the function at the current
$\mathbf{y}$ as a function of time is marked for every $10$th iteration.
Facility Location. We experiment on one synthetic and one real dataset. We
generate a bipartite graph with $N=M=200$, $|E|=800$ and select the edges
across $V$ and $V^{\prime}$ u.a.r. (FLsynth1). Weights of the edges
($w_{i,j}$) are selected randomly from $\\{0.0,0.2,0.4,0.6,0.8,1.0\\}$. We
construct a matroid of $m=10$ equal-size partitions and set $k=4$. The real
one is a
subgraph of the MovieLens 1M dataset with the top $N=100$ users who rated the
most movies and the $M=100$ movies chosen u.a.r. among the movies rated by the
user who rated the most movies [45]. In this problem, we treat movies as
facilities, users as customers, and ratings as $w_{i,j}$. We construct a
matroid of $m=10$ partitions by dividing movies according to their genres. We
consider the first genre name listed if a movie belongs to multiple genres and
we set $k=2$.
Summarization. We generate a synthetic dataset with $N=200$ nodes (SMsynth1).
We assign a reward $r_{i}$ to each node $i$ u.a.r. in $[0,1]$ and divide each
$r_{i}$ by $\sum_{i}r_{i}$. We divide the nodes into $M=5$ equal-size
$P_{j}$. We construct a matroid of $m=2$ equal-size partitions and set $k=10$.
Figure 2: Comparison of different estimators on different problems: (a)
IMsynth1, (b) IMsynth2, (c) FLsynth1, (d) MovieLens, (e) Epinions, (f)
SMsynth1. Blue lines represent the performance of the POLY estimators, with
marked points corresponding to POLY1, POLY2, and POLY3, respectively; orange
lines represent the performance of the SAMP estimators, with marked points
corresponding to SAMP1, SAMP10, SAMP100, and SAMP1000, respectively.
Algorithms. We compare the performance of different estimators: (a) the
sampling estimator (SAMP) with $T=1,10,100,1000$, and (b) the polynomial
estimator (POLY) with $L=1,2,3$.
Metrics. We measure the performance of the estimators via
$\mathtt{err}=(f(\mathbf{y})-f^{*})/f^{*}$, where $f^{*}=\max f(\mathbf{y})$
is the maximum utility achieved using the best estimator for a given setting,
and execution time. $f^{*}$ values are reported in Table 3.
### 7.2 Results.
The trajectory of the normalized difference between the utility obtained at
each iteration of the continuous greedy algorithm ($\mathtt{err}$) is shown as
a function of time in Figure 1. In Fig. 1(a), we see that both POLY1 and POLY2
outperform the sampling estimators. Moreover, POLY1 is almost $60$ times faster
than SAMP100. In Fig. 1(b), POLY1 runs as fast as SAMP1 and outperforms all
estimators. It is important to note that POLY3 runs $2.5$ times faster than
SAMP1000. In Fig. 1(c), POLY1 visibly outperforms SAMP1, and in Fig. 1(d)
polynomial estimators give results comparable to sampling estimators. Note
that, even though a small number of samples gives comparable results, setting
$T\leq 100$ is below the value needed to attain the theoretical guarantees of
the continuous greedy algorithm. These comparable results can be explained by
the $1/2$ approximation guarantee of the greedy algorithm.
The $\mathtt{err}$ of the final results of the estimators is reported as a
function of time in Figure 2. In all figures except Fig. 2(a), POLY1
outperforms other estimators in terms of time and/or utility whereas in Fig.
2(a) POLY2 is the best performer. As the number of samples increases, the
quality of the sampling estimators increases and they catch up with the
polynomial estimators. However, considering the running time, POLY1 still
remains the better choice.
## 8 Conclusion.
We have shown that polynomial estimators can replace sampling of the
multilinear relaxation. Our approach applies to other tasks, including
rounding (see App. I) and stochastic optimization methods [17]. For example,
sampling terms of the polynomial approximation can extend our method to even
larger problems.
## References
* [1] G. Calinescu, C. Chekuri, M. Pal, and J. Vondrák, “Maximizing a monotone submodular function subject to a matroid constraint,” SICOMP, 2011.
* [2] H. Lin and J. Bilmes, “A class of submodular functions for document summarization,” in ACL, 2011.
* [3] H. Lin and J. Bilmes, “Multi-document summarization via budgeted maximization of submodular functions,” in NAACL, 2010.
* [4] M. Gygli, H. Grabner, and L. Van Gool, “Video summarization by learning submodular mixtures of objectives,” in CVPR, 2015.
* [5] A. Krause and D. Golovin, “Submodular function maximization,” in Tractability: Practical Approaches to Hard Problems, Cambridge University Press, 2014.
* [6] B. Mirzasoleiman, A. Badanidiyuru, and A. Karbasi, “Fast constrained submodular maximization: Personalized data summarization,” in ICML, 2016.
* [7] D. Kempe, J. Kleinberg, and É. Tardos, “Maximizing the spread of influence through a social network,” in KDD, 2003.
* [8] A. Krause, A. Singh, and C. Guestrin, “Near-optimal sensor placements in gaussian processes: Theory, efficient algorithms and empirical studies,” JMLR, 2008.
* [9] Z. Jiang, G. Zhang, and L. S. Davis, “Submodular dictionary learning for sparse coding,” in CVPR, 2012.
* [10] F. Zhu, L. Shao, and M. Yu, “Cross-modality submodular dictionary learning for information retrieval,” in CIKM, 2014.
* [11] A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, and A. Krause, “Streaming submodular maximization: Massive data summarization on the fly,” in KDD, 2014.
* [12] G. L. Nemhauser and L. A. Wolsey, “Best algorithms for approximating the maximum of a submodular set function,” Mathematics of operations research, 1978.
* [13] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, “An analysis of approximations for maximizing submodular set functions—i,” Mathematical programming, 1978.
* [14] J. Vondrák, “Optimal approximation for the submodular welfare problem in the value oracle model,” in STOC, 2008.
* [15] A. A. Ageev and M. I. Sviridenko, “Pipage rounding: A new method of constructing algorithms with proven performance guarantee,” Journal of Combinatorial Optimization, 2004.
* [16] C. Chekuri, J. Vondrak, and R. Zenklusen, “Dependent randomized rounding via exchange properties of combinatorial structures,” in FOCS, 2010.
* [17] M. Karimi, M. Lucic, H. Hassani, and A. Krause, “Stochastic submodular maximization: The case of coverage functions,” in NeurIPS, 2017.
* [18] Y. Singer, “How to win friends and influence people, truthfully: influence maximization mechanisms for social networks,” in WSDM, 2012.
* [19] M. Minoux, “Accelerated greedy algorithms for maximizing submodular set functions,” in Optimization techniques, Springer, 1978.
* [20] R. Kumar, B. Moseley, S. Vassilvitskii, and A. Vattani, “Fast greedy algorithms in mapreduce and streaming,” TOPC, 2015.
* [21] B. Mirzasoleiman, A. Badanidiyuru, A. Karbasi, J. Vondrák, and A. Krause, “Lazier than lazy greedy,” in AAAI, 2015.
* [22] C. Borgs, M. Brautbar, J. Chayes, and B. Lucier, “Maximizing social influence in nearly optimal time,” in SODA, 2014.
* [23] Y. Tang, Y. Shi, and X. Xiao, “Influence maximization in near-linear time: A martingale approach,” in SIGMOD, 2015.
* [24] M. Feldman, J. Naor, and R. Schwartz, “A unified continuous greedy algorithm for submodular maximization,” in FOCS, 2011.
* [25] C. Chekuri, J. Vondrák, and R. Zenklusen, “Submodular function maximization via the multilinear relaxation and contention resolution schemes,” SICOMP, 2014.
* [26] A. Bian, K. Levy, A. Krause, and J. M. Buhmann, “Continuous dr-submodular maximization: Structure and algorithms,” in NeurIPS, 2017.
* [27] A. A. Bian, B. Mirzasoleiman, J. Buhmann, and A. Krause, “Guaranteed non-convex optimization: Submodular maximization over continuous domains,” in AISTATS, 2017.
* [28] C. Chekuri, T. Jayram, and J. Vondrák, “On multiplicative weight updates for concave and submodular function maximization,” in ITCS, 2015.
* [29] F. Bach, “Submodular functions: from discrete to continuous domains,” Mathematical Programming, 2019.
* [30] R. Niazadeh, T. Roughgarden, and J. Wang, “Optimal algorithms for continuous non-monotone submodular and dr-submodular maximization,” in NeurIPS, 2018.
* [31] T. Soma and Y. Yoshida, “Non-monotone dr-submodular function maximization,” in AAAI, 2017.
* [32] Y. Bian, J. Buhmann, and A. Krause, “Optimal continuous dr-submodular maximization and applications to provable mean field inference,” in ICML, 2019.
* [33] M. Staib and S. Jegelka, “Robust budget allocation via continuous submodular functions,” in ICML, 2017.
* [34] M. Skutella, “Convex quadratic and semidefinite programming relaxations in scheduling,” JACM, 2001.
* [35] H. Hassani, M. Soltanolkotabi, and A. Karbasi, “Gradient methods for submodular maximization,” in NeurIPS, 2017.
* [36] A. Mokhtari, H. Hassani, and A. Karbasi, “Conditional gradient method for stochastic submodular maximization: Closing the gap,” in AISTATS, 2018.
* [37] A. Mokhtari, H. Hassani, and A. Karbasi, “Stochastic conditional gradient methods: From convex minimization to submodular maximization,” JMLR, 2020.
* [38] A. Asadpour, H. Nazerzadeh, and A. Saberi, “Stochastic submodular maximization,” in WINE, 2008.
* [39] M. Mahdian, A. Moharrer, S. Ioannidis, and E. Yeh, “Kelly cache networks,” IEEE/ACM Transactions on Networking, 2020.
* [40] J. Broida and S. Williamson, A Comprehensive Introduction to Linear Algebra. Advanced book program, Addison-Wesley, 1989.
* [41] W. Chen, Y. Wang, and S. Yang, “Efficient influence maximization in social networks,” in KDD, 2009.
* [42] G. Cornuejols, M. Fisher, and G. Nemhauser, “Location of bank accounts to optimize float: An analytic study of exact and approximate algorithms,” Management Science, 1977.
* [43] M. Richardson, R. Agrawal, and P. Domingos, “Trust management for the semantic web,” in ISWC, 2003.
* [44] J. Leskovec and A. Krevl, “SNAP Datasets: Stanford large network dataset collection,” June 2014.
* [45] F. M. Harper and J. A. Konstan, “The movielens datasets: History and context,” TiiS, 2015.
## A Rounding
Several poly-time algorithms can be used to round the fractional solution that
is produced by Alg. 1 to an integral $\mathbf{x}\in\mathcal{M}$. We briefly
review two such rounding algorithms: pipage rounding [15], which is
deterministic, and swap-rounding [16], which is randomized. As the
constraints in all the stated examples are partition matroids (see Sec. 3.1),
we limit our explanation to this case. For a more rigorous treatment, we refer
the reader to [15] for pipage rounding and [16] for swap rounding.
Pipage Rounding. This technique uses the following property of the multilinear
relaxation $G$: given a fractional solution $\mathbf{y}\in P(\mathcal{M})$,
there are at least two fractional variables $y_{i}$ and $y_{i^{\prime}}$,
where $i,i^{\prime}\in B_{j}$ for some $j\in\\{1,\ldots,m\\}$, such that
transferring mass from one to the other, $(1)$ makes at least one of them 0 or
1, $(2)$ the new $\hat{\mathbf{y}}$ remains feasible in $P(\mathcal{M})$, and
$(3)$ $G(\hat{\mathbf{y}})\geq G(\mathbf{y})$, that is, the expected
caching gain at $\hat{\mathbf{y}}$ is at least as good as at $\mathbf{y}$. This
process is repeated until $\hat{\mathbf{y}}$ does not have any fractional
elements, at which point pipage rounding terminates and returns
$\hat{\mathbf{y}}$. This procedure has a run-time of $O(N)$, and since (a) the
starting solution $\mathbf{y}$ is such that
$G(\mathbf{y})\geq(1-{1}/{e})G(\mathbf{y}^{*}),$
where $\mathbf{y}^{*}$ is an optimizer of $G$ in $P(\mathcal{M})$, and (b)
each rounding step can only increase $G$, it follows that the final integral
$\hat{\mathbf{y}}\in\mathcal{M}$ must satisfy
$f(\hat{\mathbf{y}})=G(\hat{\mathbf{y}})\geq
G(\mathbf{y})\geq(1-\frac{1}{e})G(\mathbf{y}^{*})\geq(1-\frac{1}{e})f(\mathbf{x}^{*}),$
where $\mathbf{x}^{*}$ is an optimal solution to (3.3). Here, the first
equality holds because $f$ and $G$ are equal at integral points, while the
last inequality holds because (3.5) is a relaxation of (3.3), maximizing the
same objective over a larger domain.
Note that pipage rounding requires evaluating the multilinear relaxation $G$.
This can be done via a sampling estimator, but also using the Taylor estimator
we have constructed in our work. We present approximation guarantees for
pipage rounding using our estimator in App. I.
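To make the procedure concrete, here is a minimal pipage-rounding sketch for a toy weighted-coverage objective, whose multilinear relaxation has a closed form. The universe, weights, and single-block cardinality budget below are illustrative assumptions, not the caching-gain setting of this paper. Along a direction that shifts mass between two coordinates, this $G$ is convex, so at least one of the two extreme transfers never decreases it.

```python
# Toy weighted-coverage objective: item i covers the elements in COVERS[i];
# f(S) is the total weight of elements covered by S (hypothetical data).
COVERS = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c"}, 3: {"a", "d"}}
W = {"a": 1.0, "b": 2.0, "c": 1.5, "d": 0.5}

def G(y):
    # Closed-form multilinear relaxation of coverage: element u stays
    # uncovered only if every item covering it is absent (prob. 1 - y_i).
    total = 0.0
    for u, w in W.items():
        p_unc = 1.0
        for i, cov in COVERS.items():
            if u in cov:
                p_unc *= 1.0 - y[i]
        total += w * (1.0 - p_unc)
    return total

def pipage_round(y, tol=1e-9):
    # Assumes a single-block partition (cardinality) constraint with an
    # integral budget, so no lone fractional coordinate can remain.
    y = list(y)
    while True:
        frac = [i for i, v in enumerate(y) if tol < v < 1.0 - tol]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        # Transfer mass between y_i and y_j until one of them hits {0, 1};
        # the total mass (and hence feasibility) is preserved.
        up = min(1.0 - y[i], y[j])
        dn = min(y[i], 1.0 - y[j])
        cand_up = y[:]; cand_up[i] += up; cand_up[j] -= up
        cand_dn = y[:]; cand_dn[i] -= dn; cand_dn[j] += dn
        y = cand_up if G(cand_up) >= G(cand_dn) else cand_dn
    return [round(v) for v in y]
```

Starting from the fractional point $(0.5,0.5,0.5,0.5)$ with budget 2, the routine returns an integral solution whose objective value is at least that of the fractional input.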
Swap rounding. In this method, given a fractional solution $\mathbf{y}\in
P(\mathcal{M})$ produced by Alg. 1 observe that it can be written as a convex
combination of integral vectors in $\mathcal{M}$, i.e.,
$\mathbf{y}=\sum_{k=1}^{K}\gamma_{k}\mathbf{m}_{k},$ where
$\gamma_{k}\in[0,1],\sum_{k=1}^{K}\gamma_{k}=1,$ and
$\mathbf{m}_{k}\in\mathcal{M}$. Moreover, by construction, each such vector
$\mathbf{m}_{k}$ is maximal, i.e., all constraints in (3.2) are satisfied with
equality.
Swap rounding iteratively merges these constituent integral vectors, producing
an integral solution. Starting from $\mathbf{c}_{1}=\mathbf{m}_{1}$, at each
iteration $k$ the present integral vector $\mathbf{c}_{k}$ is merged with
$\mathbf{m}_{k+1}\in\mathcal{M}$ into a new
integral solution $\mathbf{c}_{k+1}\in\mathcal{M}$ as follows: if the two
solutions $\mathbf{c}_{k}$, $\mathbf{m}_{k+1}$ differ at two indices
$i,i^{\prime}\in B_{j}$, for some $j\in[m]$, (the former vector is 1 at
element $i$ and 0 at $i^{\prime}$, while the latter is 1 at $i^{\prime}$ and 0
at $i$) the masses in the corresponding elements are swapped to reduce the set
difference. Either the mass (of 1) in the $i$-th element of $\mathbf{c}_{k}$
is transferred to the $i$-th element of $\mathbf{m}_{k+1}$ and its
$i^{\prime}$ is set to 0, or the mass in the $i^{\prime}$ element of
$\mathbf{m}_{k+1}$ is transferred to the $i$-th element in $\mathbf{c}_{k}$
and its $i$-th element is set to 0; the former occurs with probability
proportional to $\sum_{\ell=1}^{k}\gamma_{\ell}$, and the latter with
probability proportional to $\gamma_{k+1}$. The swapping is repeated until the
two integer solutions become identical; this merged solution becomes
$\mathbf{c}_{k+1}$. This process terminates after $K-1$ steps, after which all
the points $\mathbf{m}_{k}$ are merged into a single integral vector
$\mathbf{c}_{K}\in\mathcal{M}$.
Observe that, in contrast to pipage rounding, swap rounding does not require
any evaluation of the objective $G$ during rounding. This makes swap rounding
significantly faster than pipage rounding; this comes at the expense of the
quality guarantee, however, as the resulting $1-1/e$ approximation ratio holds
only in expectation.
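A minimal sketch of this randomized merge for a single-block partition matroid follows; the bases and weights in the test usage are made-up examples. Each swap makes the two bases agree on two more coordinates, so merging two bases of equal cardinality always terminates.

```python
import random

def swap_merge(c, m, gamma_c, gamma_m):
    # Merge two maximal 0/1 solutions of equal cardinality: repeatedly pick
    # a coordinate i where only c is 1 and a coordinate j where only m is 1,
    # then keep one of the two choices with probability prop. to its weight.
    c, m = list(c), list(m)
    while c != m:
        i = next(k for k in range(len(c)) if c[k] == 1 and m[k] == 0)
        j = next(k for k in range(len(c)) if c[k] == 0 and m[k] == 1)
        if random.random() < gamma_c / (gamma_c + gamma_m):
            m[i], m[j] = 1, 0   # m adopts c's choice at (i, j)
        else:
            c[i], c[j] = 0, 1   # c adopts m's choice at (i, j)
    return c

def swap_round(bases, gammas):
    # Fold the K constituent bases into one, accumulating the merged weight
    # so the k-th merge uses probability prop. to sum_{l <= k} gamma_l.
    c, g = list(bases[0]), gammas[0]
    for b, gb in zip(bases[1:], gammas[1:]):
        c = swap_merge(c, b, g, gb)
        g += gb
    return c
```

The result is random, but it is always an integral basis of the same cardinality as the inputs.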
## B Proofs of Multilinear Function Properties
### B.1 Proof of Lemma 4.1
As $g$ is multilinear, it can be written as
$g(\mathbf{x})=\sum_{\ell\in\mathcal{I}}c_{\ell}\prod_{i\in\mathcal{J}_{\ell}}x_{i}$,
for some index set $\mathcal{I}$, coefficients $c_{\ell}\in\mathbb{R}_{+}$, and index sets
$\mathcal{J}_{\ell}\subseteq\\{1,\ldots,N\\}$. Then,
$\displaystyle\mathbb{E}_{\mathbf{y}}[g(\mathbf{x})]$
$\displaystyle=\textstyle\sum_{\ell\in\mathcal{I}}c_{\ell}\mathbb{E}_{\mathbf{y}}\left[\prod_{i\in\mathcal{J}_{\ell}}x_{i}\right]$
$\displaystyle=\textstyle\sum_{\ell\in\mathcal{I}}c_{\ell}\prod_{i\in\mathcal{J}_{\ell}}\mathbb{E}_{\mathbf{y}}\left[x_{i}\right]=\sum_{\ell\in\mathcal{I}}c_{\ell}\prod_{i\in\mathcal{J}_{\ell}}y_{i}$
$\displaystyle=g(\mathbf{y}).$
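This identity is easy to check numerically: for independent Bernoulli coordinates $x_i\sim y_i$, the exact expectation of any multilinear $g$ equals $g(\mathbf{y})$. The particular $g$ below is an arbitrary illustrative choice, not one from the paper.

```python
import itertools

def g(x):
    # An arbitrary multilinear function with nonnegative coefficients.
    return 2.0 * x[0] * x[2] + 0.7 * x[1] + 1.3 * x[0] * x[1] * x[2]

def expect(y):
    # Exact E_y[g(x)] over independent Bernoulli(y_i) coordinates.
    total = 0.0
    for x in itertools.product([0, 1], repeat=len(y)):
        p = 1.0
        for xi, yi in zip(x, y):
            p *= yi if xi else 1.0 - yi
        total += p * g(x)
    return total

y = [0.3, 0.8, 0.5]
assert abs(expect(y) - g(y)) < 1e-9
```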
### B.2 Proof of Lemma 5.1
It is straightforward to see that the lemma holds for addition and
multiplication with a scalar.
To prove that the lemma holds for multiplication, consider two multilinear functions
$g_{1},g_{2}:\\{0,1\\}^{N}\rightarrow\mathbb{R}_{+}$, given by
$g_{1}(\mathbf{x})=\textstyle\sum_{\ell\in\mathcal{I}_{1}}c_{\ell}\prod_{i\in\mathcal{J}_{\ell}}x_{i}$
and
$g_{2}(\mathbf{x})=\textstyle\sum_{\ell^{\prime}\in\mathcal{I}_{2}}c_{\ell^{\prime}}\prod_{i\in\mathcal{J}_{\ell^{\prime}}}x_{i}.$
Observe that their product $g_{1}\cdot g_{2}$ is
$g_{1}(\mathbf{x})g_{2}(\mathbf{x})=\sum_{(\ell,\ell^{\prime})\in\mathcal{I}_{1}\times\mathcal{I}_{2}}c_{\ell}c_{\ell^{\prime}}\prod_{i\in\mathcal{J}_{\ell}\cap\mathcal{J}_{\ell^{\prime}}}x_{i}^{2}\prod_{i\in\mathcal{J}_{\ell}\triangle\mathcal{J}_{\ell^{\prime}}}x_{i}$
where $\triangle$ is the symmetric set difference. Since
$x_{i}\in\\{0,1\\}$, $x_{i}^{2}=x_{i}$. Therefore,
$g_{1}(\mathbf{x})g_{2}(\mathbf{x})=\textstyle\sum_{(\ell,\ell^{\prime})\in\mathcal{I}_{1}\times\mathcal{I}_{2}}c_{\ell}c_{\ell^{\prime}}\prod_{i\in\mathcal{J}_{\ell}\cup\mathcal{J}_{\ell^{\prime}}}x_{i}$
is multilinear.
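The union-of-index-sets expansion in the proof can be spot-checked on all binary inputs; $g_1$ and $g_2$ below are arbitrary small examples.

```python
import itertools

def g1(x):
    return 1.0 * x[0] + 2.0 * x[0] * x[1]

def g2(x):
    return 0.5 * x[1] + 3.0 * x[2]

def product_multilinear(x):
    # Expansion from the proof: one monomial per pair (l, l'), with index
    # set J_l ∪ J_l', valid on {0,1}^N because x_i^2 = x_i there.
    return (0.5 * x[0] * x[1]            # {0} ∪ {1}
            + 3.0 * x[0] * x[2]          # {0} ∪ {2}
            + 1.0 * x[0] * x[1]          # {0,1} ∪ {1}
            + 6.0 * x[0] * x[1] * x[2])  # {0,1} ∪ {2}

for x in itertools.product([0, 1], repeat=3):
    assert g1(x) * g2(x) == product_multilinear(x)
```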
## C Proof of Theorem 5.1
We start by showing that the norm of the residual error vector of the
estimator converges to $0$. Recall that, by Asm. 2 the residual error of the
polynomial estimation $\hat{h}_{j,L}(s)$ is bounded by $R_{j,L}(s)$. Thus, for
functions $f:\\{0,1\\}^{N}\rightarrow\mathbb{R}_{+}$ satisfying Asm. 2, we
have that
(C.1) $\displaystyle\lvert f(\mathbf{x})-\hat{f_{L}}(\mathbf{x})\rvert\leq
R_{L}(\mathbf{x}),$
where
$R_{L}(\mathbf{x})\triangleq\sum_{j}|w_{j}||R_{j,L}(g_{j}(\mathbf{x}))|$.
Since $\lim_{L\to\infty}R_{j,L}(s)=0$ for all $j\in M$ and $s\in[0,1]$, and
$g_{j}(\mathbf{x})\in[0,1]$ for all $j$ and $\mathbf{x}$, we get that, for all
$\mathbf{x}\in\\{0,1\\}^{N}$,
(C.2) $\displaystyle\lim_{L\to\infty}\lvert
f(\mathbf{x})-\hat{f_{L}}(\mathbf{x})\rvert\leq\lim_{L\to\infty}R_{L}(\mathbf{x})=0.$
In fact, this convergence happens uniformly over all
$\mathbf{x}\in\\{0,1\\}^{N}$, as $\\{0,1\\}^{N}$ is a finite set. Moreover,
$\displaystyle\left|\frac{\partial G(\mathbf{y})}{\partial
y_{i}}-\frac{\widehat{\partial G_{L}}(\mathbf{y})}{\partial y_{i}}\right|$
$\displaystyle=\big{|}\mathbb{E}_{\mathbf{y}}[f([\mathbf{x}]_{+i})]-\mathbb{E}_{\mathbf{y}}[f([\mathbf{x}]_{-i})]$
$\displaystyle-\mathbb{E}_{\mathbf{y}}[\hat{f_{L}}([\mathbf{x}]_{+i})]+\mathbb{E}_{\mathbf{y}}[\hat{f_{L}}([\mathbf{x}]_{-i})]\big{|}$
$\displaystyle\leq\mathbb{E}_{\mathbf{y}}[|f([\mathbf{x}]_{+i})-\hat{f_{L}}(\mathbf{[\mathbf{x}]_{+i}})|]$
$\displaystyle+\mathbb{E}_{\mathbf{y}}[|f([\mathbf{x}]_{-i})-\hat{f_{L}}([\mathbf{x}]_{-i})|]$
$\displaystyle\stackrel{{\scriptstyle\mbox{\tiny{(\ref{eq:R_L})}}}}{{\leq}}\mathbb{E}_{\mathbf{y}}[R_{L}([\mathbf{x}]_{+i})]+\mathbb{E}_{\mathbf{y}}[R_{L}([\mathbf{x}]_{-i})]$
$\displaystyle=\epsilon_{i,L}(\mathbf{y}),$
where $\epsilon_{i,L}$ is given by (5.18). By the uniform convergence (C.2),
$\lim_{L\to\infty}\epsilon_{i,L}(\mathbf{y})=0$, also uniformly on
$\mathbf{y}\in[0,1]^{N}$ (as the expectation is a weighted sum, with weights
in $[0,1]$). Setting
$\epsilon_{L}(\mathbf{y})=[\epsilon_{i,L}(\mathbf{y})]_{N}\in\mathbb{R}^{N}$,
we conclude that
$\displaystyle\big{\|}\nabla G(\mathbf{y})-\widehat{\nabla
G_{L}}(\mathbf{y})\big{\|}_{2}\leq\|\epsilon_{L}(\mathbf{y})\|_{2}$
where $\lim_{L\to\infty}\|\epsilon_{L}(\mathbf{y})\|=0$, for all
$\mathbf{y}\in[0,1]^{N}$.
## D Proof of Theorem 5.2
We begin by proving the following auxiliary lemma:
###### Lemma D.1
$G$ is P-Lipschitz continuous with $P=2\max_{x\in\mathcal{M}}f(\mathbf{x})$.
* Proof.
$\displaystyle|G(\mathbf{y})-G(\mathbf{y}^{\prime})|$
$\displaystyle=\Big{|}\sum_{x\in\\{0,1\\}^{N}}f(\mathbf{x})\prod_{x_{i}=1}y_{i}\prod_{x_{i}=0}(1-y_{i})$
$\displaystyle\quad-\sum_{x\in\\{0,1\\}^{N}}f(\mathbf{x})\prod_{x_{i}=1}y_{i}^{\prime}\prod_{x_{i}=0}(1-y_{i}^{\prime})\Big{|}$
$\displaystyle=\Big{|}\sum_{x\in\\{0,1\\}^{N}}f(\mathbf{x})\Big{(}\prod_{x_{i}=1}y_{i}\prod_{x_{i}=0}(1-y_{i})$
$\displaystyle\quad-\prod_{x_{i}=1}y_{i}^{\prime}\prod_{x_{i}=0}(1-y_{i}^{\prime})\Big{)}\Big{|}$
$\displaystyle\leq\sum_{x\in\\{0,1\\}^{N}}|f(\mathbf{x})|\Big{|}\prod_{x_{i}=1}y_{i}\prod_{x_{i}=0}(1-y_{i})$
$\displaystyle\quad-\prod_{x_{i}=1}y_{i}^{\prime}\prod_{x_{i}=0}(1-y_{i}^{\prime})\Big{|}$
$\displaystyle\leq\max_{x\in\mathcal{M}}f(\mathbf{x})\bigg{(}\sum_{x\in\\{0,1\\}^{N}}\Big{|}\prod_{x_{i}=1}y_{i}\prod_{x_{i}=0}(1-y_{i})\Big{|}$
$\displaystyle\quad+\sum_{x\in\\{0,1\\}^{N}}\Big{|}\prod_{x_{i}=1}y_{i}^{\prime}\prod_{x_{i}=0}(1-y_{i}^{\prime})\Big{|}\bigg{)}$
$\displaystyle\leq 2\max_{x\in\mathcal{M}}f(\mathbf{x}).$
The remainder of the proof follows the proof structure in [27]. Let
$\mathbf{m}^{*}\triangleq(\mathbf{y}^{*}\vee\mathbf{y})-\mathbf{y}=(\mathbf{y}^{*}-\mathbf{y})\vee\mathbf{0}\geq\mathbf{0}$,
where $\mathbf{x}\vee\mathbf{y}\triangleq[\max\\{x_{i},y_{i}\\}]_{i}$. Since
$\mathbf{m}^{*}\leq\mathbf{y}^{*}$ and $P(\mathcal{M})$ is down-closed,
$\mathbf{m}^{*}\in P(\mathcal{M})$. By Asm. 1, $f$ is monotone. Thus,
$G(\mathbf{y}+\mathbf{m}^{*})=G(\mathbf{y}^{*}\vee\mathbf{y})\geq
G(\mathbf{y}^{*})$. If we define a uni-variate auxiliary function
$h_{\mathbf{y},\mathbf{m}}(\xi)\triangleq G(\mathbf{y}+\xi\mathbf{m}^{*})$,
where $\xi\geq 0$,
$\frac{dh_{\mathbf{y},\mathbf{m}}(\xi)}{d\xi}=\langle\mathbf{m}^{*},\nabla
G(\mathbf{y}+\xi\mathbf{m}^{*})\rangle$. $h_{\mathbf{y},\mathbf{m}}(\xi)$ is
concave because the multilinear relaxation $G$ is concave along non-negative
directions due to submodularity of $f$, given by Asm. 1. Hence,
(D.3)
$\begin{split}h_{\mathbf{y},\mathbf{m}}(1)-h_{\mathbf{y},\mathbf{m}}(0)&=G(\mathbf{y}+\mathbf{m}^{*})-G(\mathbf{y})\\\
&\leq\frac{dh_{\mathbf{y},\mathbf{m}}(\xi)}{d\xi}\Bigg{|}_{\xi=0}\times
1=\langle\mathbf{m}^{*},\nabla G(\mathbf{y})\rangle\end{split}$
For the $k^{th}$ iteration of the continuous greedy algorithm, let
$\mathbf{m}_{k}\triangleq\mathop{\arg\,\max}_{\mathbf{m}\in
P(\mathcal{M})}\langle\mathbf{m},\nabla\widehat{G_{L}}(\mathbf{y}_{k})\rangle$,
$\mathbf{y}_{k}\in P(\mathcal{M})$ be the output solution obtained by the
algorithm and $\mathbf{y}^{*}$ be the optimal solution of (3.5). Since
$\mathbf{y}_{k}$ is a convex linear combination of the points in
$P(\mathcal{M})$, $\mathbf{y}_{k}\in P(\mathcal{M})$. Using Thm. 5.1 for
$\mathbf{m}\geq\mathbf{0}$, due to Asm. 2:
$\displaystyle\max_{\mathbf{m}\in
P(\mathcal{M})}\langle\mathbf{m},\nabla\widehat{G_{L}}(\mathbf{y}_{k})\rangle\stackrel{{\scriptstyle\mbox{\tiny{(\ref{eq:estimator_bound})}}}}{{\geq}}$
$\displaystyle\qquad\max_{\mathbf{m}\in P(\mathcal{M})}(\mathbf{m}^{T}\nabla
G(\mathbf{y}_{k})-\mathbf{m}^{T}\epsilon_{L}(\mathbf{y}_{k}))$
$\displaystyle\quad\geq\max_{\mathbf{m}\in P(\mathcal{M})}\mathbf{m}^{T}\nabla
G(\mathbf{y}_{k})-\max_{\mathbf{m}\in
P(\mathcal{M})}\mathbf{m}^{T}\epsilon_{L}(\mathbf{y}_{k})$
$\displaystyle\quad\geq\max_{\mathbf{m}\in P(\mathcal{M})}\mathbf{m}^{T}\nabla
G(\mathbf{y}_{k})-\max_{\mathbf{m}\in
P(\mathcal{M})}\|\mathbf{m}\|\,\|\epsilon_{L}(\mathbf{y}_{k})\|$
due to Cauchy-Schwarz inequality. Replacing $D=\max_{\mathbf{m}\in
P(\mathcal{M})}\|\mathbf{m}\|_{2}$ and
$\varepsilon(L)=\max_{k}\|\epsilon_{L}(\mathbf{y}_{k})\|_{2}$,
(D.4)
$\begin{split}\langle\mathbf{m}_{k},\nabla\hat{G}_{L}(\mathbf{y}_{k})\rangle&\geq\langle\mathbf{m}^{*},\nabla
G(\mathbf{y}_{k})\rangle-D\,\varepsilon(L)\\\
&\stackrel{{\scriptstyle\mbox{\tiny{(\ref{eq:
univariate})}}}}{{\geq}}G(\mathbf{y}_{k}+\mathbf{m}^{*})-G(\mathbf{y}_{k})-D\,\varepsilon(L)\\\
&\geq G(\mathbf{y}^{*})-G(\mathbf{y}_{k})-D\,\varepsilon(L)\end{split}$
The uni-variate auxiliary function $h_{\mathbf{y},\mathbf{m}}$ is
$P$-Lipschitz since the multilinear relaxation $G$ is $P$-Lipschitz by Lem.
D.1. Then, for $h_{\mathbf{y},\mathbf{m}}(\xi)$ with a $P$-Lipschitz continuous
derivative on $[0,1]$, where $P>0$, we have
(D.5) $\begin{split}-\frac{P}{2}\xi^{2}&\leq
h_{\mathbf{y},\mathbf{m}}(\xi)-h_{\mathbf{y},\mathbf{m}}(0)-\xi\,
h_{\mathbf{y},\mathbf{m}}^{\prime}(0)\\\
&=G(\mathbf{y}+\xi\mathbf{m})-G(\mathbf{y})-\xi\langle\mathbf{m},\nabla
G(\mathbf{y})\rangle\end{split}$
$\forall\xi\in[0,1]$. Hence the difference between the ${(k+1)}^{th}$ and
$k^{th}$ iteration becomes
$\displaystyle
G(\mathbf{y}_{k+1})-G(\mathbf{y}_{k})=G(\mathbf{y}_{k}+\gamma_{k}\mathbf{m}_{k})-G(\mathbf{y}_{k})$
$\displaystyle\quad=h_{\mathbf{y},\mathbf{m}}(\gamma_{k})-h_{\mathbf{y},\mathbf{m}}(0)$
$\displaystyle\quad\stackrel{{\scriptstyle\mbox{\tiny{(\ref{eq:
LipAssump})}}}}{{\geq}}\gamma_{k}\langle\mathbf{m}_{k},\nabla
G(\mathbf{y}_{k})\rangle-\frac{P}{2}\gamma_{k}^{2}$
$\displaystyle\quad\stackrel{{\scriptstyle\mbox{\tiny{(\ref{eq:
k-th_step})}}}}{{\geq}}\gamma_{k}[G(\mathbf{y}^{*})-G(\mathbf{y}_{k})]-\gamma_{k}D\,\varepsilon(L)-\frac{P}{2}\gamma_{k}^{2}$
Rearranging the terms,
$\displaystyle G(\mathbf{y}_{k+1})-G(\mathbf{y}^{*})$
$\displaystyle\geq(1-\gamma_{k})[G(\mathbf{y}_{k})-G(\mathbf{y}^{*})]$
$\displaystyle\quad-\gamma_{k}D\varepsilon(L)-\frac{P}{2}\gamma_{k}^{2}$
Applying this inequality recursively for $k=0,1,\ldots,K-1$, we get
$\displaystyle G(\mathbf{y}_{K})-G(\mathbf{y}^{*})$
$\displaystyle\geq\prod_{k=0}^{K-1}(1-\gamma_{k})[G(\mathbf{0})-G(\mathbf{y}^{*})]$
$\displaystyle-D\varepsilon(L)\sum_{k=0}^{K-1}\gamma_{k}-\frac{P}{2}\sum_{k=0}^{K-1}\gamma_{k}^{2}$
Knowing that $\sum_{k=0}^{K-1}\gamma_{k}=1$, and $1-\gamma_{k}\leq
e^{-\gamma_{k}}$,
$\displaystyle G(\mathbf{y}^{*})-G(\mathbf{y}_{K})$ $\displaystyle\leq
e^{-\sum_{k=0}^{K-1}\gamma_{k}}[G(\mathbf{y}^{*})-G(\mathbf{0})]$
$\displaystyle+D\varepsilon(L)+\frac{P}{2}\sum_{k=0}^{K-1}\gamma_{k}^{2}$
Rearranging the terms,
(D.6) $\displaystyle
G(\mathbf{y}_{K})\geq\Big{(}1-\frac{1}{e}\Big{)}G(\mathbf{y}^{*})-D\varepsilon(L)-\frac{P}{2}\sum_{k=0}^{K-1}\gamma_{k}^{2}+\frac{1}{e}G(\mathbf{0})$
To minimize $\sum_{k=0}^{K-1}\gamma_{k}^{2}$ subject to
$\sum_{k=0}^{K-1}\gamma_{k}=1$, the method of Lagrange multipliers can be
used. Let $\lambda$ be the Lagrange multiplier; then
$\displaystyle\mathcal{L}(\gamma_{0},...,\gamma_{K-1},\lambda)=\sum_{k=0}^{K-1}\gamma_{k}^{2}+\lambda\bigg{[}\sum_{k=0}^{K-1}\gamma_{k}-1\bigg{]}$
For $\gamma_{0}=\ldots=\gamma_{K-1}=\frac{1}{K}$,
$\sum_{k=0}^{K-1}\gamma_{k}^{2}$ attains its minimum, which is $\frac{1}{K}$.
Moreover, we have $\mathbf{y}_{0}=\mathbf{0}$, and hence
$G(\mathbf{y}_{0})=0$. Rewriting (D.6),
$\displaystyle
G(\mathbf{y}_{K})\geq\Big{(}1-\frac{1}{e}\Big{)}G(\mathbf{y}^{*})-D\varepsilon(L)-\frac{P}{2K}$
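The iteration analyzed above, Alg. 1 with the uniform step size $\gamma_k=1/K$, can be sketched as follows. The modular gradient and the top-$k$ oracle in the usage are illustrative stand-ins for $\widehat{\nabla G_{L}}$ and the LP (3.6); they are not the paper's estimator.

```python
def continuous_greedy(grad_hat, lp_oracle, n, K=100):
    # K steps of size 1/K, each in the feasible direction m maximizing
    # <m, grad_hat(y)>; the output is a convex combination of LP solutions.
    y = [0.0] * n
    for _ in range(K):
        m = lp_oracle(grad_hat(y))
        y = [yi + mi / K for yi, mi in zip(y, m)]
    return y

def top_k_oracle(grad, k=2):
    # LP over a uniform matroid polytope: indicator of the k largest
    # gradient coordinates.
    idx = sorted(range(len(grad)), key=lambda i: grad[i], reverse=True)[:k]
    return [1.0 if i in idx else 0.0 for i in range(len(grad))]
```

For a modular objective with constant gradient $(3,1,2,5)$ and $k=2$, the iterates place all mass on coordinates 0 and 3, recovering the optimal basis.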
## E Detailed Comparison to Bound by Mahdian et al. [39]
We start by rewriting the bound provided by Mahdian et al. [39] in our
notation. In App. C.2 of [39], given a set of continuous functions
$\\{h_{j}\\}_{j\in\\{1,\ldots,M\\}}$ where their first $L+1$ derivatives are
in $[0,1)$, they give an upper bound on the bias of the polynomial estimator
given in (5.16) as:
$\varepsilon(L)\leq\frac{2MW}{(L+1)!},$
where
$W=\max_{j\in\\{1,\ldots,M\\},s^{\prime}\in[0,1)}{h_{j}^{(L+1)}(s^{\prime})}$.
This statement holds under the assumption that $W$ is a finite constant,
independent of $L$. However, this does not hold for $h(s)=\frac{s}{1-s}$ or
$h(s)=\log(1+s)$: for both functions, $W$ grows to infinity as $L$ does. In
contrast, we make no such assumption on the derivatives when providing a
bound for the bias $\varepsilon(L)$ (see Appendices G.1, G.2, and H).
## F Complexity
The continuous-greedy algorithm described in Alg. 1 runs for $K=1/\gamma$
iterations. In each iteration, $\widehat{\nabla G_{L}}$ is calculated and
(3.6) is solved with that $\widehat{\nabla G_{L}}$. The complexity of
calculating $\widehat{\nabla G_{L}}$ is polynomial with the size of the ground
set, $N$, with the total number of monomials in (4.13),
$\sum_{j=1}^{M}\mathcal{I}$, and with the average number of variables
appearing in each monomial, $\bar{\mathcal{J}}$. For polymatroids, solving
(3.6) amounts to solving a linear program, which can also be done in
polynomial time that depends on the type of matroid [1]. Specifically for
partition matroids however, the solution has a simple water-filing property,
and can be obtained $N\log N$ time by sorting the gradient elements
corresponding to each partition. Hence, for partition matroids, the entire
algorithm takes $O(K(N(\sum_{j=1}^{M}\mathcal{I})\bar{\mathcal{J}}+m(N\log
N+k+m)))$ steps where $m$ is the number of partitions and $k$ is the
constraint on each partition.
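The water-filling step for partition matroids can be sketched as follows; the gradient values and partition structure in the usage are hypothetical. Sorting within each block dominates the cost, giving the $O(N\log N)$ behaviour noted above.

```python
def solve_partition_lp(grad, partitions, k):
    # max <m, grad> over the partition matroid polytope: within each
    # partition block, set m_i = 1 for the k coordinates with the largest
    # (positive) gradient entries.
    m = [0.0] * len(grad)
    for block in partitions:
        top = sorted(block, key=lambda i: grad[i], reverse=True)[:k]
        for i in top:
            if grad[i] > 0:
                m[i] = 1.0
    return m
```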
## G Proofs of Example Properties
### G.1 Proof of Theorem 6.1.
We begin by characterizing the residual error of the Taylor series of
$h(s)=\log(1+s)$ around $1/2$:
###### Lemma G.1
Let $\hat{h}_{L}(s)$ be the $L^{\text{th}}$ order Taylor approximation of
$h(s)=\log(1+s)$ around $1/2$, given by (6.21). Then $\hat{h}_{L}$ satisfies the
second condition of Asm. 2, with residuals:
(G.7) $R_{j,L}(s)=\frac{1}{(L+1)2^{L+1}}.$
* Proof.
By the Lagrange remainder theorem,
$\begin{split}\left\lvert
h(s)-\hat{h}_{L}(s)\right\rvert&=\left\lvert\frac{h^{(L+1)}(s^{\prime})}{(L+1)!}\left(s-\frac{1}{2}\right)^{L+1}\right\rvert\\\
&=\left\lvert\frac{\left(s-{1}/{2}\right)^{L+1}}{(L+1)\left(1+s^{\prime}\right)^{L+1}}\right\rvert\end{split}$
for some $s^{\prime}$ between $s$ and $1/2$. Since $s\in[0,1]$, (a)
$|s-\frac{1}{2}|\leq\frac{1}{2}$, and (b) $s^{\prime}\in[0,1]$. Hence
$\left\lvert h(s)-\hat{h}_{L}(s)\right\rvert\leq\frac{1}{(L+1)2^{L+1}}.$
To conclude the theorem, observe that:
$\displaystyle\epsilon_{i,L}(\mathbf{y})$
$\displaystyle=\mathbb{E}_{\mathbf{y}}[R_{L}([\mathbf{x}]_{+i})]+\mathbb{E}_{\mathbf{y}}[R_{L}([\mathbf{x}]_{-i})]$
$\displaystyle=2\mathbb{E}_{\mathbf{y}}\left[\textstyle\sum_{j=1}^{M}|R_{j,L}(s_{j})|\right]$
$\displaystyle\leq
2\mathbb{E}_{\mathbf{y}}\left[\textstyle\sum_{j=1}^{M}\frac{1}{(L+1)2^{L+1}}\right]=\frac{2M}{(L+1)2^{L+1}}$
Then, $\varepsilon(L)\leq\frac{M\sqrt{N}}{(L+1)2^{L}}$.
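The residual (G.7) can be checked numerically. The expansion below uses $\log(1+s)=\log\frac{3}{2}+\sum_{\ell\geq1}\frac{(-1)^{\ell+1}}{\ell}\big(\frac{s-1/2}{3/2}\big)^{\ell}$, i.e., the Taylor series of $h(s)=\log(1+s)$ around $1/2$; the order $L=6$ is an arbitrary choice.

```python
import math

def taylor_log1p_half(s, L):
    # L-th order Taylor expansion of log(1 + s) around 1/2.
    val = math.log(1.5)
    for l in range(1, L + 1):
        val += (-1) ** (l + 1) * ((s - 0.5) / 1.5) ** l / l
    return val

L = 6
bound = 1.0 / ((L + 1) * 2 ** (L + 1))   # the residual from (G.7)
worst = max(abs(math.log(1 + s) - taylor_log1p_half(s, L))
            for s in [i / 100 for i in range(101)])
assert worst <= bound
```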
### G.2 Proof of Theorem 6.2.
To prove the theorem, observe that:
$\begin{split}\epsilon_{i,L}(\mathbf{y})&=\mathbb{E}_{\mathbf{y}}[R_{L}([\mathbf{x}]_{+i})]+\mathbb{E}_{\mathbf{y}}[R_{L}([\mathbf{x}]_{-i})]\\\
&\leq
2\mathbb{E}_{\mathbf{y}}\left[\textstyle\sum_{j=1}^{M}\frac{1}{M(L+1)2^{L+1}}\right]=\frac{1}{(L+1)2^{L}}\end{split}$
Hence, for all $\mathbf{y}\in[0,1]^{N}$,
$\varepsilon(L)\leq\frac{\sqrt{N}}{(L+1)2^{L}}$.
## H Example: Cache Networks (CN)[39].
A Kelly cache network can be represented by a graph $G(V,E)$, $|E|=M$, service
rates $\mu_{j}$, $j\in E$, storage capacities $c_{v}$, $v\in V$, a set of
requests $\mathcal{R}$, and arrival rates $\lambda_{r}$, for
$r\in\mathcal{R}$. Each request is characterized by an item
$i^{r}\in\mathcal{C}$ requested, and a path $p^{r}\subset V$ that the request
follows. For a detailed description of these variables, please refer to [39].
Requests are forwarded on a path until they meet a cache storing the requested
item. In steady-state, the traffic load on an edge $(u,v)$ is given by
(H.8) $g_{(u,v)}(\mathbf{x})=\frac{1}{\mu_{(u,v)}}\sum_{r\in\mathcal{R}:(v,u)\in
p^{r}}\lambda^{r}\prod_{k^{\prime}=1}^{k_{p^{r}}(v)}(1-x_{p_{k^{\prime}}^{r},i^{r}}).$
where $\mathbf{x}\in\\{0,1\\}^{|V||\mathcal{C}|}$ is a vector of binary
coordinates $x_{vi}$ indicating if $i\in\mathcal{C}$ is stored in node $v\in
V$. If $s$ is the load on an edge, the expected total number of packets in the
system is given by $h(s)=\frac{s}{1-s}$. Then using the notation $j=(u,v)\in
E$ to index edges, the expected total number of packets in the system in
steady state can indeed be written as $\sum_{j=1}^{M}h_{j}(g_{j}(\mathbf{x}))$
[39]. Mahdian et al. maximize the _caching gain_
$f:\\{0,1\\}^{|V||\mathcal{C}|}\rightarrow\mathbb{R}_{+}$ as
(H.9)
$f(\mathbf{x})=\textstyle\sum_{j=1}^{M}h_{j}(g_{j}(\mathbf{0}))-\sum_{j=1}^{M}h_{j}(g_{j}(\mathbf{x}))$
subject to the capacity constraints in each class. The caching gain
$f(\mathbf{x})$ is monotone and submodular, and the capacity constraints form
a partition matroid [39]. Moreover, $h(s)=\frac{s}{1-s}$ can be approximated
within arbitrary accuracy by its $L^{\text{th}}$-order Taylor approximation
around $0$, given by:
(H.10) $\hat{h}_{L}(s)=\textstyle\sum_{\ell=1}^{L}s^{\ell}$
We show in the following lemma that this estimator ensures that $f$ indeed
satisfies Asm. 2:
###### Lemma H.1
Let $\hat{h}_{j,L}(s)$ be the $L^{th}$ Taylor polynomial of
$h_{j}(s)=\frac{s}{1-s}$ around $0$. Then, $h_{j}(s)$ and its polynomial
estimator of degree $L$, $\hat{h}_{L}(s)$, satisfy Asm. 2 where
(H.11) $R_{j,L}(s)\leq\frac{\bar{s}^{L+1}}{1-\bar{s}}.$
* Proof.
The $L^{\text{th}}$ Taylor polynomial of $h_{j}(s)$ around $0$ is
(H.12)
$\hat{h}_{L}(s)=\textstyle\sum_{\ell=0}^{L}\frac{h_{j}^{(\ell)}(0)}{\ell!}s^{\ell}=\sum_{\ell=1}^{L}s^{\ell}$
where $h_{j}^{(\ell)}(s)=\frac{\ell!}{(1-s)^{\ell+1}}$ for
$h_{j}(s)=\frac{s}{1-s}$.
$\displaystyle h_{j}(s)$
$\displaystyle=\frac{s}{1-s}=\textstyle\sum_{\ell=1}^{\infty}s^{\ell}=\sum_{\ell=1}^{L}s^{\ell}+\sum_{\ell=L+1}^{\infty}s^{\ell}$
$\displaystyle=\textstyle\sum_{\ell=1}^{L}s^{\ell}+s^{L}\sum_{\ell=1}^{\infty}s^{\ell}=\sum_{\ell=1}^{L}s^{\ell}+\frac{s^{L+1}}{1-s}$
Then, the bias of the Taylor series estimation around $0$ becomes:
$\displaystyle\left|\frac{s}{1-s}-\textstyle\sum_{\ell=1}^{L}s^{\ell}\right|=\frac{s^{L+1}}{1-s}\leq\frac{\bar{s}^{L+1}}{1-\bar{s}}=R_{j,L}(s),$
for all $s\in[0,\bar{s}]$, where $\bar{s}=\max_{j\in\\{1,\ldots,M\\}}s_{j}$.
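Lemma H.1's geometric remainder can likewise be verified numerically; the values of $L$ and $\bar{s}$ below are arbitrary choices with $\bar{s}\in[0,1)$.

```python
def taylor_geom(s, L):
    # L-th order Taylor polynomial of s/(1 - s) around 0: s + s^2 + ... + s^L.
    return sum(s ** l for l in range(1, L + 1))

L, s_bar = 5, 0.6
bound = s_bar ** (L + 1) / (1 - s_bar)   # R_{j,L} from (H.11)
errs = [abs(s / (1 - s) - taylor_geom(s, L))
        for s in [i / 100 * s_bar for i in range(101)]]
assert max(errs) <= bound + 1e-12
```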
Furthermore, we bound the estimator bias appearing in Thm. 5.2 as follows:
###### Theorem H.1
Assume a caching gain function
$f:\\{0,1\\}^{|V||\mathcal{C}|}\rightarrow\mathbb{R}_{+}$ that is given by (H.9).
Then, consider Algorithm 1 in which $\nabla G(\mathbf{y}_{K})$ is estimated
via the polynomial estimator given in (5.16) where $\hat{f}_{L}(\mathbf{x})$
is the $L^{th}$ Taylor polynomial of $f(\mathbf{x})$ around $0$. Then, the
bias of the estimator is bounded by
(H.13) $\varepsilon(L)\leq
2M\sqrt{{|V||\mathcal{C}|}}\frac{\bar{s}^{L+1}}{1-\bar{s}},$
where $\bar{s}<1$ is the largest load among all edges when caches are empty.
* Proof.
Since $\lim_{L\to\infty}\frac{\bar{s}^{L+1}}{1-\bar{s}}=0$, for all
$\bar{s}\in[0,1)$, Taylor approximation gives an approximation guarantee for
maximizing the queue size function by Asm. 2, where the error of the
approximation is given by Thm. 5.1 as
$\displaystyle\epsilon_{i,L}(\mathbf{y})$
$\displaystyle=\mathbb{E}_{\mathbf{y}}[R_{L}([\mathbf{x}]_{+i})]+\mathbb{E}_{\mathbf{y}}[R_{L}([\mathbf{x}]_{-i})]$
$\displaystyle=2\mathbb{E}_{\mathbf{y}}\left[\textstyle\sum_{j=1}^{M}|R_{j,L}(s_{j})|\right]$
$\displaystyle\leq
2\mathbb{E}_{\mathbf{y}}\left[\textstyle\sum_{j=1}^{M}\frac{\bar{s}^{L+1}}{1-\bar{s}}\right]=2M\frac{\bar{s}^{L+1}}{1-\bar{s}}$
Then, $\varepsilon(L)\leq
2M\sqrt{{|V||\mathcal{C}|}}\frac{\bar{s}^{L+1}}{1-\bar{s}}$.
## I Pipage Rounding via Taylor Estimator
As explained, each step of pipage rounding requires evaluating the multilinear
relaxation $G(\hat{\mathbf{y}}),$ which is generally infeasible and is usually
computed via the time-consuming sampling estimator (see Sec. 3.3). Here we
show that these evaluations can be alternatively done via the polynomial
estimator, while having theoretical guarantees. First note that similar to the
case of gradients in Thm. 5.1 the difference between $G$ and the multilinear
relaxation of polynomial estimator
$\hat{G}(\mathbf{y})\triangleq\mathbb{E}_{\mathbf{x}\sim\mathbf{y}}[{\hat{f}_{L}(\mathbf{x}})]=\hat{f}_{L}(\mathbf{y})$
is bounded:
(I.14) $\displaystyle|G(\mathbf{y})-\hat{G}(\mathbf{y})|$
$\displaystyle\leq\mathbb{E}_{\mathbf{x}\sim\mathbf{y}}[R_{L}(\mathbf{x})]\leq\bar{R}_{L},$
where $\bar{R}_{L}\triangleq\max_{\mathbf{y}\in
P(\mathcal{M})}\mathbb{E}_{\mathbf{x}\sim\mathbf{y}}[R_{L}(\mathbf{x})]$.
Again, similar to the proof in App. C and due to the uniform convergence in
(C.2), it holds that $\lim_{L\to\infty}\bar{R}_{L}=0.$ Now we can show our
main result on pipage rounding via our polynomial estimator.
###### Theorem I.1
Given a fractional solution $\mathbf{y}\in P(\mathcal{M})$ the pipage rounding
method in which the polynomial estimator $\hat{G}$ is used instead of $G$
terminates in $O(N)$ rounds and the obtained solution
$\hat{\mathbf{y}}\in\mathcal{M}$ satisfies the following
$\displaystyle G(\hat{\mathbf{y}})\geq G(\mathbf{y})-2(N+1)\bar{R}_{L}.$
* Proof.
At round $k$, given a solution $\mathbf{y}^{(k)}\in P(\mathcal{M})$ due to the
properties of the multilinear relaxation there exists a point
$\hat{\mathbf{y}}^{(k)}$, s.t., (a) $G(\hat{\mathbf{y}}^{(k)})\geq
G(\mathbf{y}^{(k)})$ and (b) $\hat{\mathbf{y}}^{(k)}$ has at least one less
fractional element, i.e.,
$\\{j\in\\{1,\ldots,N\\}\,|\,\mathbf{y}^{(k)}_{j}\in\\{0,1\\}\\}\subset\\{j\in\\{1,\ldots,N\\}\,|\,\hat{\mathbf{y}}^{(k)}_{j}\in\\{0,1\\}\\}$
[15]. From (I.14) and (a) we have the following:
$\displaystyle\hat{G}(\hat{\mathbf{y}}^{(k)})\stackrel{{\scriptstyle\mbox{\tiny{(\ref{eq:bound})}}}}{{\geq}}G(\hat{\mathbf{y}}^{(k)})-\bar{R}_{L}$
$\displaystyle\stackrel{{\scriptstyle\mbox{\tiny{(a)}}}}{{\geq}}G(\mathbf{y}^{(k)})-\bar{R}_{L}$
(I.15)
$\displaystyle\stackrel{{\scriptstyle\mbox{\tiny{(\ref{eq:bound})}}}}{{\geq}}\hat{G}(\mathbf{y}^{(k)})-2\bar{R}_{L},$
in other words the estimated objective at $\hat{\mathbf{y}}^{(k)}$ is at most
$2\bar{R}_{L}$ worse than the estimated value at $\mathbf{y}^{(k)}.$ Now, given
the input to pipage rounding $\mathbf{y}^{(0)}=\mathbf{y}$, and setting
$\mathbf{y}^{(k+1)}=\hat{\mathbf{y}}^{(k)}$ at each round, from (I.15) we have
that:
(I.16)
$\displaystyle\hat{G}(\mathbf{y}^{(k)})\geq\hat{G}(\mathbf{y}^{(0)})-2k\bar{R}_{L}\stackrel{{\scriptstyle\mbox{\tiny{(\ref{eq:bound})}}}}{{\geq}}G(\mathbf{y}^{(0)})-2k\bar{R}_{L}-\bar{R}_{L}.$
Furthermore, from (b) it follows that this process ends at $k^{*}\leq N$
rounds as $\mathbf{y}^{(0)}$ has at most $N$ fractional elements. Plus, for
the final solution $\hat{\mathbf{y}}=\mathbf{y}^{(k^{*})}$ it holds that:
$\displaystyle
G(\hat{\mathbf{y}})\stackrel{{\scriptstyle\mbox{\tiny{(\ref{eq:bound})}}}}{{\geq}}\hat{G}(\hat{\mathbf{y}})-\bar{R}_{L}\stackrel{{\scriptstyle\mbox{\tiny{(\ref{eq:round_telescope})}}}}{{\geq}}$
$\displaystyle G(\mathbf{y})-2(k^{*}+1)\bar{R}_{L}$ $\displaystyle\geq$
$\displaystyle G(\mathbf{y})-2(N+1)\bar{R}_{L}.$
# Compressive Spectral Image Reconstruction using Deep Prior and Low-Rank
Tensor Representation
Jorge Bacca, Yesid Fonseca, and Henry Arguello, Department of Systems
Engineering, Universidad Industrial de Santander, Bucaramanga, Colombia
<EMAIL_ADDRESS>
###### Abstract
Compressive spectral imaging (CSI) has emerged as an alternative spectral
image acquisition technology, which reduces the number of measurements at the
cost of requiring a recovery process. In general, the reconstruction methods
are based on hand-crafted priors used as regularizers in optimization
algorithms or recent deep neural networks employed as an image generator to
learn a non-linear mapping from the low-dimensional compressed measurements to
the image space. However, these data-driven methods need many spectral images
to obtain good performance. In this work, a deep recovery framework for CSI
without training data is presented. The proposed method is based on the fact
that the structure of some deep neural networks and an appropriate low-
dimensional representation are sufficient to impose structure on the underlying
spectral image recovered from CSI. We analyze the low-dimensional structure via
the Tucker representation, modeled in the first network layer. The proposed scheme is
obtained by minimizing the $\ell_{2}$-norm distance between the compressive
measurements and the predicted measurements, and the desired recovered
spectral image is formed just before the forward operator. Simulated and
experimental results verify the effectiveness of the proposed method.
Journal: osajournal. Article type: Research Article.
## 1 Introduction
Spectral imaging (SI) deals with capturing the spatial information of a target
in a broader range of the electromagnetic spectrum compared to a conventional
RGB imaging system. This additional information is useful for some
applications such as biomedical imaging [1], crop identification [2], and
surveillance [3]. SI can be denoted as a 3D tensor
$\bm{\mathcal{X}}\in\mathbb{R}^{M\times N\times L}$ with $M\times N$ as the
spatial pixels and $L$ spectral bands [2]. Traditional methods to acquire SI
are based on scanning along one of its tensor modes, which results in
time-consuming systems and therefore prohibits their usage in dynamic scenes
[4]. Alternatively, based on compressive sensing (CS) theory, new snapshot
imaging systems acquire 2D multiplexed projections of a scene instead of
directly acquiring all voxels, resulting in image compression via hardware
[5]. To date, different compressive spectral imaging (CSI) techniques have
been proposed [6, 7, 8, 9, 10, 11]. For instance, the pioneering coded
aperture snapshot spectral imaging (CASSI) system [10] uses optical elements
to encode and disperse the incoming light to acquire 2D intensity projections.
Even though CSI yields efficient sensing, a reconstruction process from the
compressed measurements is needed, since recovery amounts to solving an
under-determined system [5]. This recovery problem is addressed by
representing the 3D scene as a 1D vector and assuming particular priors on the
nature of spectral images, used as regularization terms in an optimization
problem [4, 12]. For instance, [13, 14] assume low total
variation, [9, 7] explore the sparsity assumption of the scene in some
orthogonal basis, [15, 16] use non-local similarity, and [17, 18] employ
low-rank structures. However, these hand-crafted priors often do not represent
the wide variety and non-linearity of spectral images, and the vectorization
ignores the high-dimensional structure of the scene, resulting in low
reconstruction quality [19].
On the other hand, data-driven recovery methods are based on the power of the
deep neural networks as image generators, where the goal is to learn a non-
linear transformation that maps a low-dimensional feature into realistic
spectral images [20]. In particular, with a vast spectral data set, [21, 22,
23, 24] learn inverse networks that map the low-dimensional compressed
measurements to the desired spectral image [25]. These methods have shown high
performance in speed and reconstruction quality. However, they are very
dependent on training data, and small variations in the sensing system would
require re-training of the model [19]. Alternative solutions such as [26] take
the sensing model into account when solving an optimization problem where the
prior is learned using a convolutional auto-encoder with a spectral data set;
more recently, [26, 19, 27, 28] use unrolled-based methods that incorporate
the sensing process into the deep network design, where the prior is
intrinsically learned through end-to-end optimization. Although these methods
have proven to be more general, they still depend on training data.
In this paper, a deep recovery framework for reconstructing spectral images
from CSI measurements without training data requirements is proposed. The
method is based on the fact that deep convolutional neural networks together
with an appropriate low-dimensional representation are sufficient to generate
the image representation without any training data and, therefore, to recover
a spectral image directly from the CSI measurements. In
particular, the proposed method designs a deep neural network, where the first
layer learns a low-dimensional 3D tensor, which is then refined by
convolutional operations to generate the desired reconstruction. The weights
of this neural network are randomly initialized and fitted to guarantee that
the reconstruction suits the CSI measurement via $\ell_{2}$-norm minimization
over the CSI measurement; therefore, the recovered image is formed just before
the forward operator. The proposed method is expressed as an end-to-end
optimization by modeling the forward compressive sensing model as a non-
trainable layer; consequently, it can be solved using any deep learning
algorithm such as stochastic gradient descent. Additionally, we analyze the
importance of the low-dimensional tensor structure in the first layer via a
low-rank Tucker representation, which imposes a low-rank 3D prior. Since there is
no more information available other than the compressive spectral
measurements, the proposed method is more related to hand-crafted techniques.
Results in simulated and real data demonstrate that the proposed method
outperforms the hand-crafted methods in many scenarios and obtains comparable
results with data-driven approaches.
## 2 Related work
### 2.1 Hand-Crafted CS Reconstruction
The traditional CS recovery algorithms are considered hand-designed since they
use some expert knowledge of the signal, known as a signal prior [26]. These
methods are based on optimization techniques that design a data fidelity term,
and incorporate the prior as a regularization term [29]. The most common prior
is assuming that the signal is sparse on a given basis, such as Wavelet [30],
discrete cosine transform (DCT) [5], among others [5]. This sparsity
assumption is imposed in different methods by applying $\ell_{0}$ or
$\ell_{1}$ regularizers. Examples of algorithms that use sparsity priors
include GPSR [29], ADMM [31], CSALSA [32], ISTA [33], and AMP [34], among
others. In CSI,
some specific kinds of prior are used. For instance, [9] assumes low total
variation, [7] explores the spatial sparsity assumption of the scene in
Wavelet domain, and the spectral sparsity in the DCT domain [15, 16];
furthermore, [17, 18] employ low-rank structures based on the linear mixture
model. Exploring tensor structure, low-rank tensor recovery methods have been
also proposed [12, 35]. However, these hand-crafted methods require expert
knowledge of the target to select which prior to use. Therefore, they do not
represent the wide variety and the non-linearity of spectral image
representations.
### 2.2 Data-Driven CS Reconstruction
Data-driven recovery methods are based on learning a non-linear inverse
mapping from the compressive measurements to a realistic image. In particular,
with a vast dataset of ground-truth and compressive measurement pairs, these
methods are used to learn a non-linear network by minimizing the distance
between the output of the net and the ground-truth. The main difference
between the state-of-the-art methods is their network architecture. For
instance, [36] learns a stacked auto-encoder, convolution layers are applied
in [37], and convolutional, residual, and fully-connected layers are also used
in [38, 39, 40, 41]. In particular, for CSI, [22] was the first work that used
a data-driven approach, where an initialization obtained from TwiST [42] was
refined using denoising networks; [19] proposed a particular model to explore
the spatial and spectral information and to design the coded aperture usually
included in CSI architectures. Furthermore, based on the structure of the
U-net, [24] proposed a non-linear mapping replacing the 2D convolutions with
3D convolutions, and [23] developed a generative model based on the U-net. These
methods have shown high performance in reconstruction quality, and once
trained, they allow real-time reconstruction. However, these approaches are
highly dependent on the data set used. Furthermore, small variations in the
compressive measurements, such as the type of noise or changes in the sensing
matrix, would require a time-consuming re-training.
### 2.3 Non-Linear Image Priors for CS Methods
Recently, some works have considered the sensing model to propose a mixed
approach that combines hand-crafted and data-driven CS reconstruction. In
particular, these methods use a deep network or denoiser to replace the
hand-crafted prior; this non-linear prior is then employed in the optimization
algorithm [38]. For instance, Plug-and-play priors (PnP) use pre-existing
denoisers as a proximal step [43, 44], [45] learns the proximal
mapping using a convolutional network, and [26] learns a SI prior, through a
convolutional autoencoder, which is then incorporated into the optimization
problem. More recently, D-AMP [46], ISTA-Net [47], ADMM-Net [48], and DNU [28]
use unrolled-based methods that incorporate the optimization steps into the
deep network architecture using residual networks; consequently, they can
learn the prior and the parameters via end-to-end training. This strategy is
also employed for CSI in [19, 27]. Although these methods have proven to be
more general, they still depend on training data, which is limited in SI.
### 2.4 Deep Image Prior using Generative Model
The generative model (GM) has been used for CS recovery [49]. The goal in GM
is to generate a realistic image from a low-dimensional latent representation.
For instance, [49, 50] use a pre-trained deep neural network and obtain the
low-dimensional representation that minimizes the distance between the
compressive measurements and the output of the net. On the other hand, [51]
shows that a pre-trained network is not necessary: instead of finding the
low-dimensional latent space, [51] uses a fixed random variable as the latent
space, and the weights of the model are updated to obtain an optimal result.
The drawback of this method is its sensitivity to changes in the application,
the fixed latent space, or the network architecture, which usually require
small random disturbances to obtain good performance. The proposed method in this
work is closely related to [50, 51], where the parameters of the network model
and the low-dimensional representation (based on a Tucker representation,
which is useful for SI) are optimized in an end-to-end approach for a CSI
architecture.
### Notation:
Throughout the paper, vectors are represented with boldface lowercase letters,
e.g., $\bm{x},$ and matrices are denoted as boldface capital letters
$\mathbf{X}$. The 3D tensors are denoted as
$\bm{\mathcal{X}}\in\mathbb{R}^{M\times N\times L}$ and the $1$-mode product
of a tensor $\bm{\mathcal{X}}_{o}\in\mathbb{R}^{M_{p}\times N_{p}\times
L_{p}}$ with a matrix $\mathbf{U}\in\mathbb{R}^{M\times M_{p}}$ is written as
$\bm{\mathcal{X}}=\bm{\mathcal{X}}_{o}\times_{1}\mathbf{U}$ where
$\bm{\mathcal{X}}\in\mathbb{R}^{M\times N_{p}\times L_{p}}$, and
$\displaystyle\bm{\mathcal{X}}_{(m,n,\ell)}=\sum_{\hat{m}=1}^{M_{p}}\mathbf{U}_{(m,\hat{m})}\bm{\mathcal{X}}_{o(\hat{m},n,\ell)}.$
In the same way, the 2-mode and 3-mode products can be defined. We introduce
the function $\text{shift}_{\ell}(\cdot):\mathbb{R}^{M\times
N}\rightarrow\mathbb{R}^{M\times(N+L-1)}$, which refers to a shifting
operator, i.e., for a given $\mathbf{X}$ the entries of the output are
$\displaystyle\text{shift}_{\ell}(\mathbf{X})_{(m,n)}:=\begin{cases}\mathbf{X}_{(m,n-\ell)},&\text{
if }1\leq n-\ell\leq N\\\ 0,&\text{ otherwise}.\end{cases}$
Finally, the function $\text{vect}(\cdot):\mathbb{R}^{M\times N\times
L}\rightarrow\mathbb{R}^{MNL}$ represents the vectorization of a tensor.
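As a concrete illustration, the two operators can be sketched in NumPy as follows (the function names and 0-based indexing are ours, not taken from the paper's code):

```python
import numpy as np

def shift(X, ell, L):
    """shift_ell: R^{M x N} -> R^{M x (N+L-1)}.

    Entry (m, n) of the output equals X[m, n - ell] when n - ell is a
    valid column index, and 0 otherwise (zero padding)."""
    M, N = X.shape
    out = np.zeros((M, N + L - 1))
    out[:, ell:ell + N] = X  # place X shifted by ell columns
    return out

def vect(T):
    """vect: R^{M x N x L} -> R^{MNL}, the vectorization of a tensor."""
    return np.asarray(T).reshape(-1)
```

Note that the output width is fixed to $N+L-1$ so that every band of an $L$-band cube fits after its dispersion shift.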
## 3 Compressed Measurements Acquisition
Figure 1: Physical sensing phenomena in CASSI, which is the CSI prototype used
to validate the proposed approach.
The CASSI sensing approach is used in order to acquire the compressed
measurements of a spectral scene [10]. This architecture is composed of three
main optical elements: a coded aperture, a prism as a dispersive element, and
a gray-scale detector, as illustrated in Fig 1. The spatial-spectral data cube
is represented as $\bm{\mathcal{X}}\in\mathbb{R}^{M\times N\times L}$ with
$M\times N$ spatial dimensions, $L$ spectral bands, and
$\mathbf{X}_{\ell}\in\mathbb{R}^{M\times N}$ denotes the 2D spectral intensity
image of $\bm{\mathcal{X}}$ at the $\ell$-th spectral band. As shown in Fig.
1, each spatial position of the scene is modulated by a coded aperture
$\mathbf{C}\in\\{0,1\\}^{M\times N}$, which blocks or unblocks the incoming
light; then, the coded spectral scene passes through the prism, creating a
horizontal shifting. Finally, the coded shifted spectral scene is integrated along the
spectral axis by the detector, resulting in the 2D compressed measurement
$\mathbf{Y}\in\mathbb{R}^{M\times(N+L-1)}$. In CSI, it is possible to acquire
$S<L$ different measurement snapshots of the same spectral data cube employing
$S$ different patterns in the coded aperture. Therefore, the output of the
sensing process at the $s$-th spectral snapshot can be mathematically
expressed as
$\mathbf{Y}^{(s)}=\sum_{\ell=1}^{L}\text{shift}_{\ell-1}\left(\mathbf{X}_{\ell}\odot\mathbf{C}^{(s)}\right),$
(1)
where the $\ell$-th spectral band, $\mathbf{X}_{\ell}$, of the tensor
$\bm{\mathcal{X}}$ is shifted with the operator
$\text{shift}_{\ell-1}(\cdot)$, and $\odot$ denotes the element-wise product
with the 2D coded aperture $\mathbf{C}^{(s)}$.
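A minimal NumPy sketch of the per-snapshot forward model in (1) follows (names are ours; a real CASSI model would also include calibration effects not modeled here):

```python
import numpy as np

def cassi_snapshot(X, C):
    """Eq. (1): Y^(s) = sum_ell shift_{ell-1}(X_ell elementwise-times C^(s)).

    X: M x N x L spectral cube, C: M x N binary coded aperture.
    Returns the M x (N + L - 1) compressed measurement."""
    M, N, L = X.shape
    Y = np.zeros((M, N + L - 1))
    for ell in range(L):
        coded = X[:, :, ell] * C       # element-wise coding of band ell
        Y[:, ell:ell + N] += coded     # shift by ell columns and integrate
    return Y
```

With $L=1$ there is no dispersion, and the measurement reduces to the coded band itself.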
The CASSI sensing model can be seen as a linear operator after stacking the
measurements of multiple shots as
$\bm{y}=[\text{vect}(\mathbf{Y}^{(1)})^{T},\cdots,\text{vect}(\mathbf{Y}^{(S)})^{T}]^{T}$.
Thus, the system matrix model can be expressed as
$\bm{y}=\mathbf{H}\text{vect}(\bm{\mathcal{X}}),$ (2)
where $\mathbf{H}\in\mathbb{R}^{SM(N+L-1)\times MNL}$ represents the linear
sensing matrix of CASSI.
## 4 Compressive Spectral Reconstruction
Figure 2: Visual representation of the proposed deep neural scheme, where the
boxes with background color represent the learning parameters, the white box
stand for the non-trainable CSI system, and the non-box blocks represent the
outputs of the layers.
The goal in CSI is to recover the spectral image
$\bm{\mathcal{X}}\in\mathbb{R}^{M\times N\times L}$ from the compressive
measurements $\bm{y}$. Since $SM(N+L-1)\ll MNL$, this problem consists in
solving an underdetermined system, which is addressed by restricting the
feasible set of solutions using image priors as regularizers. A tensor
formulation for addressing this problem is described below:
$\displaystyle\underset{\bm{\mathcal{Z}^{\prime}}_{o}\in\mathbb{R}^{M\times N\times
L}}{\mbox{minimize}}\hskip 8.53581pt$
$\displaystyle\frac{1}{2}\left\|\bm{y}-\mathbf{H}\text{vect}\left(\bm{\mathcal{X}}\right)\right\|^{2}_{2}+\lambda\cdot\phi(\bm{\mathcal{Z}^{\prime}}_{o})$
(3) subject to
$\displaystyle\bm{\mathcal{X}}=\bm{\mathcal{Z}^{\prime}}_{o}\times_{1}\mathbf{U}^{\prime}\times_{2}\mathbf{V}^{\prime}\times_{3}\mathbf{W}^{\prime},$
where the matrices $\mathbf{U}^{\prime}\in\mathbb{R}^{M\times
M},\mathbf{V}^{\prime}\in\mathbb{R}^{N\times N}$ and
$\mathbf{W}^{\prime}\in\mathbb{R}^{L\times L}$ are fixed and known orthogonal
matrices, which usually are the matrix representation of the Wavelet and the
Discrete Cosine transforms; $\bm{\mathcal{Z}^{\prime}}_{o}$ is the
representation of the spectral image in the given basis and
$\phi(\cdot):\mathbb{R}^{M\times N\times L}\rightarrow\mathbb{R}$ is a
regularization function that imposes particular image priors with $\lambda$ as
the regularization parameter [29].
Unlike hand-crafted priors such as sparsity [5], we explore the power of some
deep neural networks as image generators that map a low-dimensional feature
tensor $\bm{\mathcal{Z}}\in\mathbb{R}^{M\times N\times L}$ to the image as
$\bm{\mathcal{X}}=\mathcal{M}_{\bm{\theta}}(\bm{\mathcal{Z}}),$ (4)
where $\mathcal{M}_{\bm{\theta}}(\cdot)$ represents a deep network, with
${\bm{\theta}}$ as the net-parameters. To ensure a low-dimensional structure
over the feature tensor, this work used the Tucker representation, i.e.,
$\bm{\mathcal{Z}}=\bm{\mathcal{Z}}_{o}\times_{1}\mathbf{U}\times_{2}\mathbf{V}\times_{3}\mathbf{W}$
with $\bm{\mathcal{Z}}_{o}\in\mathbb{R}^{M_{\rho}\times N_{\rho}\times
L_{\rho}}$ as a 3D low dimensional tensor, with $M_{\rho}<M$, $N_{\rho}<N$ and
$L_{\rho}<L$. This representation maintains the 3D structure of the spectral
images, exploits the inherent low rank of this data [52, 53], and also
implicitly constrains the output $\bm{\mathcal{X}}$ to a low-dimensional
manifold via the architecture and the weights of the net [50].
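The Tucker reconstruction of the feature tensor can be written compactly with `einsum` (a sketch with our own names):

```python
import numpy as np

def tucker(Z0, U, V, W):
    """Z = Z0 x_1 U x_2 V x_3 W.

    Z0: Mp x Np x Lp core tensor; U: M x Mp, V: N x Np, W: L x Lp
    factor matrices. Returns the M x N x L feature tensor."""
    # contract each core mode (a, b, c) against the matching factor matrix
    return np.einsum('abc,ia,jb,kc->ijk', Z0, U, V, W)
```

With identity factors the core is returned unchanged; with $M_{\rho}<M$, $N_{\rho}<N$, $L_{\rho}<L$ the output necessarily lies on a low-rank manifold, which is the constraint the first layer imposes.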
In this paper, we focus on a blind representation: instead of having a
pre-trained network or a huge amount of data to train this deep neural
representation, we express an optimization problem which learns the weights
$\bm{\theta}$ of the generative network $\mathcal{M}_{\bm{\theta}}$ and also
the feature tensor $\bm{\mathcal{Z}}$ with its Tucker representation elements
$\bm{\mathcal{Z}}_{o},\mathbf{U},\mathbf{V}$, and $\mathbf{W}$. All the
parameters of this optimization problem are randomly initialized, and the only
available information is the compressive measurements and the sensing model,
i.e., the optimization problem is independent of training data. In particular,
we explore the prior implicitly captured by the choice of the generator
network structure, which is usually composed of convolutional operations, and
the importance of the low-rank feature representation. Therefore, the proposed
method consists of solving the following optimization problem:
$\displaystyle\underset{{\bm{\theta}},\bm{\mathcal{Z}}_{o},\mathbf{U,V,W}}{\mbox{minimize}}\hskip
8.53581pt$
$\displaystyle\frac{1}{2}\left\|\bm{y}-\mathbf{H}\text{vect}\left(\mathcal{M}_{{\bm{\theta}}}\left(\bm{\mathcal{Z}}\right)\right)\right\|^{2}_{2}$
(5) subject to
$\displaystyle\bm{\mathcal{Z}}=\bm{\mathcal{Z}}_{o}\times_{1}\mathbf{U}\times_{2}\mathbf{V}\times_{3}\mathbf{W},$
where the recovery is
$\bm{\mathcal{X}}^{*}=\mathcal{M}_{\bm{\theta}^{*}}(\bm{\mathcal{Z}}_{o}^{*}\times_{1}\mathbf{U}^{*}\times_{2}\mathbf{V}^{*}\times_{3}\mathbf{W}^{*})$.
This optimization problem can be solved using an end-to-end neural network
framework, as shown in Fig. 2. In this way, the input that is common to all
neural networks is replaced with a custom layer with
$\bm{\mathcal{Z}}_{o},\mathbf{U,V,W}$ as learnable parameters, which
constructs the low-rank Tucker representation of $\bm{\mathcal{Z}}$; this
tensor $\bm{\mathcal{Z}}$ is then refined with convolutional layers via
$\mathcal{M}_{\bm{\theta}}(\bm{\mathcal{Z}})$. These optimization variables
are represented by the first two blue blocks in Fig. 2. The final layer in the
proposed method is a non-trainable layer that models the forward sensing
operator
$\mathbf{H}\text{vect}\left(\mathcal{M}_{{\bm{\theta}}}\left(\bm{\mathcal{Z}}\right)\right)$
to obtain the compressive measurements $\bm{y}$ as the output of the net.
Therefore, the problem in (5) can be solved with state-of-the-art deep
learning optimization algorithms, such as stochastic gradient descent. Once
the parameters are optimized, the desired SI is recovered just before the
non-trainable layer labeled as "CSI system" in Fig. 2.
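To make the objective concrete, the loss in (5) can be sketched as below, with a one-layer element-wise ReLU map standing in for the convolutional network $\mathcal{M}_{\bm{\theta}}$ (the toy network and all names are ours; in practice the minimization runs through automatic differentiation, e.g., stochastic gradient descent in a deep learning framework, rather than a hand-coded loop):

```python
import numpy as np

def net(Z, theta):
    """Toy stand-in for M_theta: element-wise affine map + ReLU.
    The paper uses convolutional layers here (Fig. 2)."""
    a, b = theta
    return np.maximum(a * Z + b, 0.0)

def objective(y, H, Z0, U, V, W, theta):
    """Eq. (5): 0.5 * || y - H vect(M_theta(Z0 x1 U x2 V x3 W)) ||_2^2."""
    Z = np.einsum('abc,ia,jb,kc->ijk', Z0, U, V, W)  # Tucker first layer
    r = y - H @ net(Z, theta).reshape(-1)            # residual in measurement space
    return 0.5 * float(r @ r)
```

Because every operation above is differentiable (almost everywhere), gradients with respect to $\bm{\theta}$, $\bm{\mathcal{Z}}_{o}$, $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$ flow through the whole pipeline, which is what allows the randomly initialized parameters to be fitted end-to-end.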
## 5 Simulation and Results
In this section, the performance of the proposed compressive spectral image
reconstruction approach is presented. The performance metrics used are the
peak signal-to-noise ratio (PSNR) [5], the structural similarity (SSIM) [54],
and the spectral angle mapping (SAM) [17]. PSNR and SSIM are calculated as the
average of each 2D spatial image through the bands, and the SAM is the average
of all spectral pixels. Four different tests are presented to validate the
proposed method. The first test evaluates the importance of the low-rank
tensor representation; the second test compares the recovery of the data-
driven methods with the proposed method; the third evaluates the proposed
method in different noisy scenarios and for a different number of shots
against non-data-dependent state-of-the-art algorithms; and finally, the
proposed method is evaluated using two compressive spectral images obtained
with a real testbed implementation (the code can be found at
https://github.com/jorgebaccauis/Deep_Prior_Low_Rank).
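Under the averaging conventions just described, the PSNR and SAM metrics can be sketched as follows (names ours; SSIM requires the windowed computation of [54] and is omitted):

```python
import numpy as np

def psnr_band_avg(X, Xhat, peak=1.0):
    """PSNR per 2D spatial band, averaged over the L bands."""
    mse = ((X - Xhat) ** 2).mean(axis=(0, 1))          # one MSE per band
    return float(np.mean(10.0 * np.log10(peak ** 2 / mse)))

def sam_pixel_avg(X, Xhat, eps=1e-12):
    """Spectral angle (radians) per spatial pixel, averaged over pixels."""
    num = (X * Xhat).sum(axis=2)                       # inner product per pixel
    den = np.linalg.norm(X, axis=2) * np.linalg.norm(Xhat, axis=2) + eps
    return float(np.arccos(np.clip(num / den, -1.0, 1.0)).mean())
```

Lower SAM is better (0 means identical spectral signatures up to scale), while higher PSNR and SSIM are better.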
### 5.1 Rank level
Figure 3: Visual representation of the three network models used: U-Net-based,
AutoencoderNet-based and ResNet-based. The color represents the different
layers in each network. Figure 4: PSNR box plot for the different network
architectures, varying the rank factor $\rho$, over five trial runs.
This section evaluates the importance of the rank level in the 3D tensor using
the Tucker representation, which is placed at the first block of our model, as
illustrated in Fig. 2. For that, two spectral images with $M=N=256$ spatial
pixels and $L=10$ spectral bands between $400$ and $700$ nm from [55] were
chosen. Three different network architectures were tested as the
"Convolutional Layers" of the second block in Fig. 2. The first network
architecture is a simple ResNet-based model [56], with a single skip
connection and four convolutional layers, as shown in Fig. 3, with 2,150
parameters. The second architecture, also shown in Fig. 3, is a convolutional
Autoencoder-based model [57], with 8,160 training parameters and six
convolutional layers. The third architecture tested, depicted in Fig. 3, is a
Unet-based model [58] without drop-out layers; in the contracting part, the
feature information is increased using multiples of $L=10$, i.e., $L,2L$ and
$3L$, as illustrated in Fig. 3, resulting in 92,190 training parameters. This
test is focused on a single snapshot with a random coded aperture generated
from a Bernoulli distribution with mean 0.5.
As mentioned, the tensor feature $\bm{\mathcal{Z}}\in\mathbb{R}^{M\times
N\times L}$ comes from a low-dimensional kernel
$\bm{\mathcal{Z}}_{o}\in\mathbb{R}^{M_{p}\times N_{p}\times L_{p}}$; then, to
evaluate the importance of the rank-level in the Tucker representation, we
establish the following relationship
$\frac{M_{p}}{M}=\frac{N_{p}}{N}=\frac{L_{p}}{L}=\rho,$ (6)
where $\rho\in(0,1]$ is referred to as the rank factor hyper-parameter.
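For example, the core dimensions and the number of learnable parameters in the first layer implied by a given $\rho$ can be computed as follows (a sketch; the rounding convention is our own assumption, since the paper does not specify one):

```python
def kernel_dims(M, N, L, rho):
    """Eq. (6): core dimensions (Mp, Np, Lp) = rho * (M, N, L)."""
    return tuple(max(1, round(rho * d)) for d in (M, N, L))

def first_layer_params(M, N, L, rho):
    """Learnable parameters of the custom first layer: the core Z0
    plus the factor matrices U (M x Mp), V (N x Np), W (L x Lp)."""
    Mp, Np, Lp = kernel_dims(M, N, L, rho)
    return Mp * Np * Lp + M * Mp + N * Np + L * Lp
```

For the $256\times 256\times 10$ images of this section and $\rho=0.5$, the core is $128\times 128\times 5$, so the first layer carries 147,506 learnable values.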
Furthermore, as the parameters of the problem in (5) are randomly initialized,
we simulated five realizations. The average results for these five
realizations are summarized in Fig. 4. Notice that for the three network
architectures and the two datasets, the rank factor is a crucial
hyper-parameter for obtaining a good reconstruction. In particular, for
DataSet 1 the optimal values are $\rho=\\{0.5,0.7,0.6\\}$ for the ResNet-based,
AutoencoderNet-based, and Unet-based models, respectively, and
$\rho=\\{0.4,0.6,0.7\\}$ for DataSet 2. Furthermore, notice that a small value
of $\rho$ yields the worst case for all the networks. Also, notice that all
the network configurations reach around 31 dB, the best result obtained, at
different $\rho$ values; however, the AutoencoderNet-based model is more
stable compared with the other networks. This result shows the importance of
the low-rank tensor representation in the first layer, where the optimal value
changes for each dataset and each network architecture.
### 5.2 Data-Driven Methods Comparison
Figure 5: Two reconstructed scenes using the 5 learning-based methods and the
two variations of the proposed method, i.e., (AutoEncoder, UNet)-Based.
Although the proposed method does not need data to work, this test compares
its results with the data-driven approaches to demonstrate the quality
achieved. In particular, we use five learning-based methods for comparison:
HSCNN [22], ISTA-Net [47], Autoencoder [26], HIR-DSSP [19], and DNU [28]. These
methods were trained using the public ICVL [59], Harvard [60], and KAIST [26]
hyperspectral image data sets using their available codes; the sensing process
was evaluated for a single snapshot, according to [28]. For the proposed
method, the two network architectures were evaluated, i.e., AutoEncoder-based
and UNet-based. Two testing images of $512\times 512$ spatial resolution and
$31$ spectral bands were chosen to evaluate the different methods, and the
reconstruction results and ground truth are shown in Fig. 5. It can be
observed that the two variants of the proposed method outperform HSCNN,
ISTA-Net, AutoEncoder, and HIR-DSSP in both visual and quantitative results,
by up to $(5/0.030/0.020)$ in terms of (PSNR/SSIM/SAM), respectively, and show
comparable results with respect to the DNU method, which is the best
data-driven method. However, the proposed method has the advantage that it
does not require training data compared with the data-driven methods, i.e.,
only the compressive measurements are available for the proposed approach.
Table 1: Mean performance comparison for the different recovery methods
varying the number of snapshots and noise in SNR dB.
| Shots | Noise (dB) | Metric | GPSR | ADMM | CSALSA | PnP-ADMM | DIP | Prop. |
|---|---|---|---|---|---|---|---|---|
| 1 | $\infty$ | PSNR | 25.66 | 24.32 | 25.59 | 28.99 | 27.93 | 30.92 |
| | | SSIM | 0.701 | 0.726 | 0.790 | 0.860 | 0.766 | 0.874 |
| | | SAM | 0.145 | 0.108 | 0.152 | 0.060 | 0.089 | 0.055 |
| | 30 | PSNR | 25.52 | 22.68 | 25.46 | 28.82 | 27.19 | 29.29 |
| | | SSIM | 0.699 | 0.653 | 0.701 | 0.844 | 0.772 | 0.864 |
| | | SAM | 0.156 | 0.112 | 0.167 | 0.073 | 0.089 | 0.062 |
| | 20 | PSNR | 24.67 | 21.45 | 22.19 | 25.42 | 27.53 | 27.94 |
| | | SSIM | 0.682 | 0.625 | 0.672 | 0.713 | 0.783 | 0.794 |
| | | SAM | 0.210 | 0.220 | 0.195 | 0.138 | 0.084 | 0.080 |
| 2 | $\infty$ | PSNR | 28.15 | 27.14 | 29.45 | 32.28 | 30.578 | 32.88 |
| | | SSIM | 0.802 | 0.786 | 0.823 | 0.916 | 0.867 | 0.918 |
| | | SAM | 0.095 | 0.120 | 0.098 | 0.041 | 0.068 | 0.039 |
| | 30 | PSNR | 27.95 | 26.15 | 29.02 | 31.68 | 30.39 | 31.99 |
| | | SSIM | 0.789 | 0.726 | 0.835 | 0.903 | 0.878 | 0.910 |
| | | SAM | 0.099 | 0.130 | 0.099 | 0.055 | 0.066 | 0.053 |
| | 20 | PSNR | 26.34 | 23.41 | 24.49 | 27.48 | 30.06 | 30.56 |
| | | SSIM | 0.752 | 0.687 | 0.701 | 0.764 | 0.865 | 0.880 |
| | | SAM | 0.143 | 0.247 | 0.201 | 0.125 | 0.071 | 0.070 |
| 3 | $\infty$ | PSNR | 29.99 | 28.25 | 30.80 | 34.40 | 32.21 | 34.30 |
| | | SSIM | 0.820 | 0.775 | 0.880 | 0.949 | 0.894 | 0.950 |
| | | SAM | 0.092 | 0.098 | 0.075 | 0.031 | 0.064 | 0.031 |
| | 30 | PSNR | 29.53 | 27.50 | 30.78 | 33.19 | 31.54 | 33.45 |
| | | SSIM | 0.801 | 0.758 | 0.875 | 0.925 | 0.910 | 0.930 |
| | | SAM | 0.090 | 0.101 | 0.089 | 0.047 | 0.057 | 0.040 |
| | 20 | PSNR | 27.98 | 24.42 | 25.26 | 28.10 | 29.27 | 31.15 |
| | | SSIM | 0.775 | 0.685 | 0.695 | 0.778 | 0.883 | 0.900 |
| | | SAM | 0.114 | 0.220 | 0.205 | 0.124 | 0.066 | 0.062 |
| 4 | $\infty$ | PSNR | 30.59 | 30.78 | 31.24 | 38.15 | 32.60 | 36.42 |
| | | SSIM | 0.889 | 0.891 | 0.921 | 0.973 | 0.914 | 0.966 |
| | | SAM | 0.072 | 0.062 | 0.043 | 0.022 | 0.057 | 0.29 |
| | 30 | PSNR | 30.02 | 28.88 | 30.79 | 34.44 | 32.57 | 34.21 |
| | | SSIM | 0.832 | 0.804 | 0.820 | 0.939 | 0.847 | 0.939 |
| | | SAM | 0.085 | 0.902 | 0.062 | 0.043 | 0.076 | 0.042 |
| | 20 | PSNR | 28.16 | 25.67 | 26.74 | 28.33 | 32.30 | 33.74 |
| | | SSIM | 0.798 | 0.721 | 0.742 | 0.776 | 0.909 | 0.925 |
| | | SAM | 0.130 | 0.175 | 0.201 | 0.127 | 0.059 | 0.102 |
### 5.3 Robustness Analysis
Figure 6: Two RGB false-color reconstructed scenes using the non-data-driven
methods and the proposed method, with their respective metrics. Additionally,
the ground truth and a spectral point of each scene are shown.
Numerical simulations were conducted to demonstrate the robustness of the
proposed method at different levels of additive Gaussian noise and numbers of
shots, using the two spectral images obtained from [55]. It is well known
that, for data-driven methods, the distributions of the training and test data
must be similar to obtain good results; for this reason, in this experiment
the proposed method was compared with the state-of-the-art non-data-driven
methods.
Specifically, we compare the proposed method with GPSR [29], using a sparsity
assumption in the Kronecker Wavelet and Discrete Cosine transform basis
implemented as in [8]; ADMM [31], using a low-rank prior implemented as in
[17]; CSALSA [32], using 3D total variation; PnP-ADMM [43], using BM3D as the
denoiser; and Deep Image Prior [51], using the ResNet-based network. Three
different noise levels were evaluated: 20 and 30 dB of signal-to-noise ratio
(SNR), and the noiseless case, denoted as $\infty$ dB. Further, the number of
snapshots is varied between 1, 2, 3, and 4 using the CASSI system, expressed
mathematically as in (2). For this experiment, the ResNet-based network was
used as the "Convolutional layers" in the proposed model, and the rank factor
was fixed as $\rho=0.5$ and $\rho=0.4$ for DataSet 1 and DataSet 2,
respectively. Table 1 presents a comparison of the performance in terms of the
PSNR, SSIM, and SAM metrics for the different methods (the results are the
average over the two datasets). Boldface indicates the best result for each
case, and the second-best result is underlined. From Table 1, it can be seen
that the proposed method outperforms the other methods in almost all cases.
Furthermore, the proposed method shows better noise robustness compared to the
other methods, since its maximum quality difference between the noise levels
studied is 3 dB, compared to 3 dB, 5 dB, 5 dB, 10 dB, and 3 dB for GPSR, ADMM,
CSALSA, PnP-ADMM, and Deep Image Prior (DIP), respectively. Additionally, as
expected, when the number of snapshots per image increases, all the methods
improve their reconstruction quality; in particular, the difference between 1
and 4 snapshots for the noiseless case is up to 5 dB, 6 dB, 6 dB, 10 dB, 5 dB,
and 6 dB for GPSR, ADMM, CSALSA, PnP-ADMM, DIP, and the proposed method,
respectively.
To visualize the reconstructions and analyze the results in more detail,
Figure 6 shows an RGB false-color image of the reconstruction of each method
for a single CASSI shot, which is the extreme case in terms of compression.
Note that, in the zoomed insets, the proposed method's reconstruction is much
cleaner than its counterparts. Additionally, to examine the spectral behavior,
a single spatial point of each reconstruction for the two datasets is also
presented in Figure 6. It can be seen that the spectral signatures obtained by
the proposed method closely resemble the ground truth.
### 5.4 Validation in a Real Testbed Implementation
This section evaluates the proposed method with real measurements acquired
using a testbed implementation. For this section, the ResNet-based model was
used with $\rho=0.4$ and a learning rate of $10^{-3}$. Specifically, two
different scenarios of compressed projections were assessed, which are
described as follows.
Figure 7: Testbed CASSI implementation, where the relay lens focuses the light
encoded by the DMD onto the sensor after it is dispersed by the prism.
#### 5.4.1 Binary Coded Aperture
This scenario was carried out for one snapshot of the CASSI testbed laboratory
implementation depicted in Fig. 7. This setup contains a $100$-mm objective
lens, a high-speed digital micro-mirror device (DMD) (Texas Instruments
DLI4130) with a pixel size of $13.6\,\mu m$, where the coded aperture (CA) is
implemented, an Amici prism (Shanghai Optics), and a CCD camera (AVT Stingray
F-145B) with a spatial resolution of $1388\times 1038$ pixels and a pitch size
of $6.45\,\mu m$. The CA spatial distribution for the snapshot comes from
blue-noise patterns, i.e., this CA is designed according to [61]. Notice that
the robustness analysis summarized in Table 1 showed that the three best
recovery methods were PnP-ADMM, DIP, and the proposed method; therefore, we
decided to also compare them using this real data.
Figure 8: (Left) RGB visual representation of the scene obtained with the
different methods; (Right) two spectral signatures of the recovered scenes.
Figure 9: (Top) RGB visual representation of the scene obtained with the GPSR
method used in [62] and the proposed method; (Bottom) normalized spectral
signatures of the recovered scenes.
Figure 8 presents the RGB scene obtained with a traditional camera, and the
false-colored RGB images corresponding to the spectral images reconstructed by
the different solvers. Furthermore, the spectral responses at two particular
spatial locations in the scene, indicated as red points in the images, are
also included and compared with the spectral behavior measured by a
commercially available spectrometer (Ocean Optics USB2000+). The visual
results show that the proposed method yields better spatial and spectral
reconstruction: the reconstructed RGB image is sharper for the proposed
scheme, and the spectral signatures are closer to those captured by the
spectrometer. Quantitatively, the SAM of the normalized signatures is 0.188
for the PnP-ADMM algorithm, 0.205 for Deep Image Prior, and 0.120 for the
proposed method. These numerical results validate the performance of the
proposed method with real data for a real CASSI setup using a binary coded
aperture.
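For reference, the spectral angle mapper (SAM) used to score the signatures above measures the angle between a recovered signature and the spectrometer reference, and is invariant to global scaling; a minimal sketch (the paper's exact normalization convention is an assumption):

```python
import numpy as np

def sam(x, y):
    """Spectral angle mapper: angle (in radians) between two spectral signatures."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    cos_angle = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    # Clip to guard against round-off pushing the cosine outside [-1, 1].
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

ref = np.array([0.1, 0.4, 0.8, 0.5])
print(sam(ref, 2.0 * ref))   # scale-invariant: identical shapes give 0.0
```

A smaller SAM value thus means the recovered signature points in nearly the same direction as the reference, regardless of overall intensity.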
#### 5.4.2 Colored Coded Aperture
The real data for this second test was provided by [62]. The main difference
with respect to the data of Section 5.4.1 is that the spatial modulation is a
colored CA, where each pixel acts as a filter with its own spectral response
(further details regarding colored CAs can be found in [8, 62]). The optical
elements in this testbed implementation were the same as in the previous
setup, with the DMD used to emulate the colored CA. The coding and the scene
were implemented with a spatial resolution of $256\times 256$ pixels and
$L=8$ resolvable bands. Since [62] uses a hand-crafted method that does not
require training data, with the GPSR algorithm as the recovery algorithm, the
proposed method was compared against it. Figure 9 (top) shows the RGB mapping
of the recovered scene; the proposed method provides a cleaner version of the
scene. Additionally, two spatial points were chosen to evaluate the spectral
behavior. The normalized spectral responses of the proposed method and the
GPSR algorithm are included in Figure 9 (bottom). The spectral signature
provided by the proposed method is closer to that obtained with the
spectrometer than that of the GPSR method; in fact, the SAM of the normalized
signatures is 0.120 for the GPSR algorithm and 0.057 for the proposed method.
These results validate the effectiveness of the proposed method on real data
for two variations of CASSI systems.
## 6 Conclusions
A method for reconstructing spectral images from CSI measurements has been
proposed. The proposed scheme is based on the fact that spectral images can
be generated by a convolutional network whose input features come from a
low-rank Tucker representation. Although the proposed method is based on a
convolutional network, it does not require training data, only the compressed
measurements. The method was evaluated in three scenarios: noiseless, noisy,
and real-data implementation. In all of them, the proposed method achieves
better reconstruction quality than state-of-the-art methods; in particular,
with an SNR of 20 dB in the CSI measurements, it outperforms its counterparts
by up to 4 dB in PSNR. Although the proposed method was tested on two
variations of CSI systems, it can be extended to other compressive systems
where training data is limited.
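As a sketch of the low-rank Tucker representation that feeds the network (the sizes and ranks below are illustrative, not the paper's settings), a full spectral cube can be generated from a small core tensor multiplied along each mode by factor matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, L = 32, 32, 8            # spatial rows/cols and spectral bands (illustrative)
r1, r2, r3 = 4, 4, 3           # Tucker ranks (illustrative)

core = rng.standard_normal((r1, r2, r3))
A = rng.standard_normal((M, r1))   # mode-1 factor matrix
B = rng.standard_normal((N, r2))   # mode-2 factor matrix
C = rng.standard_normal((L, r3))   # mode-3 factor matrix

# X = core x_1 A x_2 B x_3 C: a full-size cube parameterized by few numbers.
X = np.einsum('abc,ia,jb,kc->ijk', core, A, B, C)
print(X.shape)                                      # (32, 32, 8)
print(np.linalg.matrix_rank(X.reshape(M, N * L)))   # mode-1 unfolding rank <= r1
```

The cube has $M N L$ entries but only $r_1 r_2 r_3 + M r_1 + N r_2 + L r_3$ free parameters, which is the low-rank prior the reconstruction exploits.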
## References
* [1] G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” Journal of biomedical optics 19, 010901 (2014).
* [2] L. Zhang, L. Zhang, D. Tao, and X. Huang, “Tensor discriminative locality alignment for hyperspectral image spectral–spatial feature extraction,” IEEE Transactions on Geoscience and Remote Sensing 51, 242–256 (2012).
* [3] P. W. Yuen and M. Richardson, “An introduction to hyperspectral imaging and its application for security, surveillance and target acquisition,” The Imaging Science Journal 58, 241–253 (2010).
* [4] C. Hinojosa, J. Bacca, and H. Arguello, “Coded aperture design for compressive spectral subspace clustering,” IEEE Journal of Selected Topics in Signal Processing 12, 1589–1600 (2018).
* [5] G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: An introduction,” IEEE Signal Processing Magazine 31, 105–115 (2014).
* [6] X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world,” IEEE Signal Processing Magazine 33, 95–108 (2016).
* [7] C. V. Correa, C. Hinojosa, G. R. Arce, and H. Arguello, “Multiple snapshot colored compressive spectral imager,” Optical Engineering 56, 041309 (2016).
* [8] H. Arguello and G. R. Arce, “Colored coded aperture design by concentration of measure in compressive spectral imaging,” IEEE Transactions on Image Processing 23, 1896–1908 (2014).
* [9] A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Applied optics 47, B44–B51 (2008).
* [10] M. Gehm, R. John, D. Brady, R. Willett, and T. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Optics express 15, 14013–14027 (2007).
* [11] S. Shauli, O. Yaniv, A. Marwan, A. Ibrahim, D. G. Blumberg, and A. Stern, “Dual-camera design for hyperspectral and panchromatic imaging, using a wedge shaped liquid crystal as a spectral multiplexer,” Scientific Reports (Nature Publisher Group) 10 (2020).
* [12] S. Zhang, L. Wang, Y. Fu, X. Zhong, and H. Huang, “Computational hyperspectral imaging based on dimension-discriminative low-rank tensor recovery,” in _Proceedings of the IEEE International Conference on Computer Vision,_ (2019), pp. 10183–10192.
* [13] D. Kittle, K. Choi, A. Wagadarikar, and D. J. Brady, “Multiframe image estimation for coded aperture snapshot spectral imagers,” Applied Optics 49, 6824–6833 (2010).
* [14] L. Wang, Z. Xiong, D. Gao, G. Shi, and F. Wu, “Dual-camera design for coded aperture snapshot spectral imaging,” Applied optics 54, 848–858 (2015).
* [15] Y. Fu, Y. Zheng, I. Sato, and Y. Sato, “Exploiting spectral-spatial correlation for coded hyperspectral image restoration,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,_ (2016), pp. 3727–3736.
* [16] L. Wang, Z. Xiong, G. Shi, F. Wu, and W. Zeng, “Adaptive nonlocal sparse representation for dual-camera compressive hyperspectral imaging,” IEEE transactions on pattern analysis and machine intelligence 39, 2104–2111 (2016).
* [17] J. Bacca, C. V. Correa, and H. Arguello, “Noniterative hyperspectral image reconstruction from compressive fused measurements,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 12, 1231–1239 (2019).
* [18] T. Gelvez, H. Rueda, and H. Arguello, “Joint sparse and low rank recovery algorithm for compressive hyperspectral imaging,” Applied optics 56, 6785–6795 (2017).
* [19] L. Wang, C. Sun, Y. Fu, M. H. Kim, and H. Huang, “Hyperspectral image reconstruction using a deep spatial-spectral prior,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,_ (2019), pp. 8032–8041.
* [20] R. Hyder and M. S. Asif, “Generative models for low-rank video representation and reconstruction from compressive measurements,” in _2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP),_ (IEEE, 2019), pp. 1–6.
* [21] L. Wang, T. Zhang, Y. Fu, and H. Huang, “Hyperreconnet: Joint coded aperture optimization and image reconstruction for compressive hyperspectral imaging,” IEEE Transactions on Image Processing 28, 2257–2270 (2018).
* [22] Z. Xiong, Z. Shi, H. Li, L. Wang, D. Liu, and F. Wu, “Hscnn: Cnn-based hyperspectral image recovery from spectrally undersampled projections,” in _Proceedings of the IEEE International Conference on Computer Vision Workshops,_ (2017), pp. 518–525.
* [23] X. Miao, X. Yuan, Y. Pu, and V. Athitsos, “$\lambda$-net: Reconstruct hyperspectral images from a snapshot measurement,” in _IEEE/CVF Conference on Computer Vision (ICCV),_ vol. 1 (2019).
* [24] D. Gedalin, Y. Oiknine, and A. Stern, “Deepcubenet: reconstruction of spectrally compressive sensed hyperspectral images with deep neural networks,” Optics Express 27, 35811–35822 (2019).
* [25] J. Bacca, L. Galvis, and H. Arguello, “Coupled deep learning coded aperture design for compressive image classification,” Optics Express 28, 8528–8540 (2020).
* [26] I. Choi, D. S. Jeon, G. Nam, D. Gutierrez, and M. H. Kim, “High-quality hyperspectral reconstruction using a spectral prior,” ACM Transactions on Graphics (TOG) 36, 1–13 (2017).
* [27] T. Zhang, Y. Fu, L. Wang, and H. Huang, “Hyperspectral image reconstruction using deep external and internal learning,” in _Proceedings of the IEEE International Conference on Computer Vision,_ (2019), pp. 8559–8568.
* [28] L. Wang, C. Sun, M. Zhang, Y. Fu, and H. Huang, “Dnu: Deep non-local unrolling for computational spectral imaging,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,_ (2020), pp. 1661–1671.
* [29] M. A. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE Journal of selected topics in signal processing 1, 586–597 (2007).
* [30] E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE signal processing magazine 25, 21–30 (2008).
* [31] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein _et al._ , “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine learning 3, 1–122 (2011).
* [32] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, “An augmented lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Transactions on Image Processing 20, 681–695 (2010).
* [33] I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences 57, 1413–1457 (2004).
* [34] D. L. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms for compressed sensing,” Proceedings of the National Academy of Sciences 106, 18914–18919 (2009).
* [35] S. Yang, M. Wang, P. Li, L. Jin, B. Wu, and L. Jiao, “Compressive hyperspectral imaging via sparse tensor and nonlinear compressed sensing,” IEEE Transactions on Geoscience and Remote Sensing 53, 5943–5957 (2015).
* [36] A. Mousavi, A. B. Patel, and R. G. Baraniuk, “A deep learning approach to structured signal recovery,” in _2015 53rd annual allerton conference on communication, control, and computing (Allerton),_ (IEEE, 2015), pp. 1336–1343.
* [37] A. Mousavi and R. G. Baraniuk, “Learning to invert: Signal recovery via deep convolutional networks,” in _2017 IEEE international conference on acoustics, speech and signal processing (ICASSP),_ (IEEE, 2017), pp. 2272–2276.
* [38] A. Dave, A. K. Vadathya, R. Subramanyam, R. Baburajan, and K. Mitra, “Solving inverse computational imaging problems using deep pixel-level prior,” IEEE Transactions on Computational Imaging 5, 37–51 (2018).
* [39] H. Palangi, R. Ward, and L. Deng, “Distributed compressive sensing: A deep learning approach,” IEEE Transactions on Signal Processing 64, 4504–4518 (2016).
* [40] H. Yao, F. Dai, S. Zhang, Y. Zhang, Q. Tian, and C. Xu, “Dr2-net: Deep residual reconstruction network for image compressive sensing,” Neurocomputing 359, 483–493 (2019).
* [41] K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, “Reconnet: Non-iterative reconstruction of images from compressively sensed measurements,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,_ (2016), pp. 449–458.
* [42] J. M. Bioucas-Dias and M. A. Figueiredo, “A new twist: Two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Transactions on Image processing 16, 2992–3004 (2007).
* [43] X. Yuan, Y. Liu, J. Suo, and Q. Dai, “Plug-and-play algorithms for large-scale snapshot compressive imaging,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,_ (2020), pp. 1447–1457.
* [44] S. H. Chan, X. Wang, and O. A. Elgendy, “Plug-and-play admm for image restoration: Fixed-point convergence and applications,” IEEE Transactions on Computational Imaging 3, 84–98 (2016).
* [45] J. Rick Chang, C.-L. Li, B. Poczos, B. Vijaya Kumar, and A. C. Sankaranarayanan, “One network to solve them all–solving linear inverse problems using deep projection models,” in _Proceedings of the IEEE International Conference on Computer Vision,_ (2017), pp. 5888–5897.
* [46] C. Metzler, A. Mousavi, and R. Baraniuk, “Learned d-amp: Principled neural network based compressive image recovery,” in _Advances in Neural Information Processing Systems,_ (2017), pp. 1772–1783.
* [47] J. Zhang and B. Ghanem, “Ista-net: Interpretable optimization-inspired deep network for image compressive sensing,” in _Proceedings of the IEEE conference on computer vision and pattern recognition,_ (2018), pp. 1828–1837.
* [48] J. Sun, H. Li, Z. Xu _et al._ , “Deep admm-net for compressive sensing mri,” in _Advances in neural information processing systems,_ (2016), pp. 10–18.
* [49] A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” arXiv preprint arXiv:1703.03208 (2017).
* [50] Y. Wu, M. Rosca, and T. Lillicrap, “Deep compressed sensing,” arXiv preprint arXiv:1905.06723 (2019).
* [51] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,_ (2018), pp. 9446–9454.
* [52] Y. Wang, L. Lin, Q. Zhao, T. Yue, D. Meng, and Y. Leung, “Compressive sensing of hyperspectral images via joint tensor tucker decomposition and weighted total variation regularization,” IEEE Geoscience and Remote Sensing Letters 14, 2457–2461 (2017).
* [53] K. M. León-López and H. A. Fuentes, “Online tensor sparsifying transform based on temporal superpixels from compressive spectral video measurements,” IEEE Transactions on Image Processing 29, 5953–5963 (2020).
* [54] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE transactions on image processing 13, 600–612 (2004).
* [55] M. Marquez, H. Rueda-Chacon, and H. Arguello, “Compressive spectral light field image reconstruction via online tensor representation,” IEEE Transactions on Image Processing 29, 3558–3568 (2020).
* [56] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition,_ (2016), pp. 770–778.
* [57] J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber, “Stacked convolutional auto-encoders for hierarchical feature extraction,” in _International conference on artificial neural networks,_ (Springer, 2011), pp. 52–59.
* [58] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in _International Conference on Medical image computing and computer-assisted intervention,_ (Springer, 2015), pp. 234–241.
* [59] B. Arad and O. Ben-Shahar, “Sparse recovery of hyperspectral signal from natural rgb images,” in _European Conference on Computer Vision,_ (Springer, 2016), pp. 19–34.
* [60] A. Chakrabarti and T. Zickler, “Statistics of real-world hyperspectral images,” in _CVPR 2011,_ (IEEE, 2011), pp. 193–200.
* [61] C. V. Correa, H. Arguello, and G. R. Arce, “Spatiotemporal blue noise coded aperture design for multi-shot compressive spectral imaging,” JOSA A 33, 2312–2322 (2016).
* [62] L. Galvis, E. Mojica, H. Arguello, and G. R. Arce, “Shifting colored coded aperture design for spectral imaging,” Applied optics 58, B28–B38 (2019).
# Average skew information-based coherence and its typicality for random
quantum states
Zhaoqi Wu1,4, Lin Zhang2,4, Shao-Ming Fei3,4, Xianqing Li-Jost4
1\. Department of Mathematics, Nanchang University, Nanchang 330031, P R China
2\. Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018, P R
China
3\. School of Mathematical Sciences, Capital Normal University, Beijing
100048, P R China
4\. Max Planck Institute for Mathematics in the Sciences, 04103 Leipzig,
Germany. Corresponding author:<EMAIL_ADDRESS>
Abstract We study the average skew information-based coherence for both random
pure and mixed states. Explicit formulae for the average skew information-
based coherence are derived and shown to be functions of the dimension $N$ of
the state space. We demonstrate that as $N$ approaches infinity, the average
coherence tends to $1$ for random pure states, and to a positive constant less
than $1/2$ for random mixed states. We also explore the typicality of the
average skew information-based coherence of random quantum states.
Furthermore, we identify a coherent subspace such that the amount of skew
information-based coherence of every pure state in this subspace is bounded
from below, almost always, by a fixed number arbitrarily close to the typical
value of coherence.
Key Words: Average coherence; skew information; random quantum states;
typicality
1\. Introduction
Quantum coherence is a fundamental issue in quantum mechanics, and an
important physical resource in quantum information theory [1]. An axiomatic
definition of a valid quantum coherence measure was proposed in [2], which
sparked great interest in quantifying and studying the properties of quantum
coherence. Many distance measures and information-related quantities,
such as relative entropy [2], $l_{1}$ norm [2], robustness of coherence [3],
max-relative entropy [4], geometric coherence [5, 6], fidelity [7], trace
distance [8], modified trace distance [9, 10], skew information [14, 12, 13,
15], coherence weight [16], affinity [17, 18], generalized
$\alpha$-$z$-relative Rényi entropy [19] and logarithmic coherence number
[20], have been exploited to quantify quantum coherence. Quantum coherence
from other resource-theoretical perspectives, such as coherence distillation
and coherence dilution [21, 22, 23, 24, 25, 26, 27, 28, 29], no-broadcasting
of quantum coherence [30, 31], interconversion between quantum coherence and
quantum entanglement [5, 32, 33, 34] or quantum correlations [35, 36, 37, 38,
39, 40] and cohering power of quantum operations [41]. Coherence manipulation
under incoherent operations [42] have also been studied.
Wigner-Yanase (WY) skew information [43] is a very important information
quantity, which has been widely used and explored in studying quantum
information problems in recent years. WY skew information has been exploited
to define different coherence measures, such as $K$-coherence [11], modified
$K$-coherence [13] and skew information-based coherence [14]. In particular,
skew information-based coherence has been proven to be a well-defined measure,
with tight connections to quantum correlations and a corresponding
experimental implementation.
In quantifying the coherence of a quantum state [2], a coherence measure is
defined with respect to a certain basis. To eliminate the impact of the basis,
two questions need to be addressed: first of all, for a given coherence
measure, if the coherence of a state with respect to one basis is very large,
how large could its coherence be with respect to another basis? Does any
tradeoff relation exist? This question has been examined for a set of
mutually unbiased bases (MUBs) for $l_{1}$ norm of coherence and relative
entropy of coherence in [44] and for skew information-based coherence in [45].
Secondly, is it possible to characterize the coherence of a quantum state
without referring to any particular basis? This question is answered by
considering the average coherence over all bases [44]. Since all
reference bases can be generated from unitary operations on a given basis,
what we need is to compute the integral over the unitary orbit of a fixed
basis, or equivalently, the integral over the unitary group equipped with the
normalized Haar measure [45]. This averaging shows the degree to which the
state is coherent when a basis is chosen at random, which has been studied for
$l_{1}$ norm of coherence and relative entropy of coherence in [44] and for
skew information-based coherence in [45]. It is found that for skew
information-based coherence, the average coherence over all orthonormal bases
is equal to the average coherence over any complete set of MUBs. These studies
reveal the intrinsic essence of the coherence encoded in a state.
On the other hand, random matrix theory provides new perspectives for studying
quantum physics and quantum information theory [46]. From the viewpoint of
probability and statistics, the average value represents the first moment, an
important numerical characteristic that can further be used to characterize
problems such as the law of large numbers and other convergence properties.
Random pure quantum states possess many important properties including the
concentration of measure phenomenon or typicality [47], which allow one to get
more information on the structures of the quantum system [46, 47, 48, 49]. The
entanglement features of pure bipartite quantum states sampled from the
uniform Haar measure have been studied in recent years [48, 49, 50, 51, 52,
53, 54, 55, 56, 57, 58, 59], among which the average entropy of a subsystem
has been calculated and investigated [50, 51, 52]. It has been shown that a
typical pure state of an $N\times N$ system is almost maximally entangled
[49]. New analytical formulae describing the levels of entanglement expected
in random pure states have also been presented [60].
The results in [44] and [45] concern the average coherence of given quantum
states. It is thus natural to consider average coherence of random quantum
states with respect to the Haar measure on the unitary group. Based on the
average value of coherence, the concentration of measure phenomenon can be
further studied, which can reveal statistical behavior and characteristics of
quantum coherence. In recent years, average coherence based on relative
entropy of coherence and its typicality for random pure states [61] and random
mixed states [62] have been derived, and average subentropy, coherence and
entanglement of random mixed quantum states have been discussed [63].
Moreover, the average uncertainty product for bounded observables has also
been calculated [64].
Since skew information-based coherence is of great significance, the following
questions naturally arise: can we calculate the average skew information-based
coherence for random pure/mixed states? What is the concentration of measure
phenomenon (typicality) of this average coherence for random pure/mixed
states? In this paper, we answer these questions.
The paper is organized as follows. In Sec. 2 we review the framework for
quantifying coherence, the skew information, and the coherence measure based
on it. In Sec. 3, we recall random pure quantum states, Lévy’s
Lemma, random mixed quantum states and related preliminaries. In Sec. 4, we
calculate the average skew information-based coherence for random pure states
sampled from the uniform Haar distribution, investigate the typicality of the
obtained average coherence, and figure out the dimension of the subspace of
the total Hilbert space such that all the pure states in this subspace have a
fixed nonzero amount of coherence. For random mixed states, we also calculate
the average skew information-based coherence and study its typicality in Sec.
5; these turn out to have different features compared with random pure states.
Finally, we conclude in Sec. 6 with a summary and a discussion of the
significance and implementation of the obtained results.
2\. Skew information-based coherence
Let $\mathcal{H}=\mathbb{C}^{N}$ be a Hilbert space of dimension $N$, and
$\mathrm{B}\mathcal{(H)}$, $\mathrm{S}\mathcal{(H)}$ and
$\mathrm{D}\mathcal{(H)}$ be the set of all bounded linear operators,
Hermitian operators and density operators on $\mathcal{H}$, respectively.
Mathematically, a state and a channel are described by a density operator
(positive operator of trace $1$) and a completely positive trace preserving
(CPTP) map, respectively [65].
Fix an orthonormal basis $\\{|k\rangle\\}^{N}_{k=1}$ of $\mathcal{H}$. The set
of incoherent states, which are diagonal in this basis, can be written as
$\mathcal{I}=\\{\delta\in\mathrm{D}\mathcal{(H)}|\delta=\sum^{N}_{k=1}p_{k}|k\rangle\langle
k|,~{}p_{k}\geq 0,~{}\sum^{N}_{k=1}p_{k}=1\\}.$
Let $\Lambda$ be a CPTP map $\Lambda(\rho)=\sum_{n}K_{n}\rho K_{n}^{\dagger},$
where $K_{n}$ are Kraus operators satisfying
$\sum_{n}K_{n}^{\dagger}K_{n}=I_{N}$ with $I_{N}$ the identity operator on
$\mathcal{H}$. $K_{n}$ are called incoherent Kraus operators if
$K_{n}\mathcal{I}K_{n}^{\dagger}\subseteq\mathcal{I}$ for all $n$, and the
corresponding $\Lambda$ is called an incoherent operation.
A well-defined coherence measure $C(\rho)$ of a quantum state should satisfy
the following conditions [2]:
$(C1)$ (Faithfulness) $C(\rho)\geq 0$ and $C(\rho)=0$ iff $\rho$ is
incoherent.
$(C2)$ (Convexity) $C(\cdot)$ is convex in $\rho$.
$(C3)$ (Monotonicity) $C(\Lambda(\rho))\leq C(\rho)$ for any incoherent
operation $\Lambda$.
$(C4)$ (Strong monotonicity) $C(\cdot)$ does not increase on average under
selective incoherent operations, i.e.,
$C(\rho)\geq\sum_{n}p_{n}C(\varrho_{n}),$ where $p_{n}=\mathrm{Tr}(K_{n}\rho
K_{n}^{\dagger})$ are probabilities and $\varrho_{n}=\frac{K_{n}\rho
K_{n}^{\dagger}}{p_{n}}$ are the post-measurement states, $K_{n}$ are
incoherent Kraus operators.
For a state $\rho\in\mathrm{D}\mathcal{(H)}$ and an observable
$K\in\mathrm{S}\mathcal{(H)}$, the Wigner-Yanase (WY) skew information is
defined by [43]
$I(\rho,K)=-\frac{1}{2}\mathrm{Tr}([\rho^{\frac{1}{2}},K]^{2}),$ (1)
where $[X,Y]:=XY-YX$ is the commutator of $X$ and $Y$.
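Definition (1) is straightforward to evaluate numerically via an eigendecomposition-based matrix square root; the sketch below (with a hypothetical state and observable) also illustrates that, for a pure state, $I(\rho,K)$ reduces to the variance of $K$:

```python
import numpy as np

def psd_sqrt(rho):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def skew_information(rho, K):
    """Wigner-Yanase skew information I(rho, K) = -1/2 Tr([sqrt(rho), K]^2)."""
    s = psd_sqrt(rho)
    comm = s @ K - K @ s
    return float(np.real(-0.5 * np.trace(comm @ comm)))

psi = np.array([1.0, 1.0]) / np.sqrt(2.0)   # the pure state |+>
rho = np.outer(psi, psi.conj())
K = np.diag([0.0, 1.0])
print(skew_information(rho, K))             # variance of K in |+>: 0.25
```

For a state diagonal in the eigenbasis of $K$ the commutator vanishes and the skew information is zero, as expected.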
Girolami has utilized the Wigner-Yanase skew information $I(\rho,K)$ to give a
coherence measure in a direct manner, where $K$ is diagonal in the basis
$\\{|k\rangle\\}_{k=1}^{N}$, and called it $K$-coherence [11]. This quantity
is in fact a quantifier for coherence of $\rho$ with respect to $K$ rather
than the orthonormal basis $\\{|k\rangle\\}_{k=1}^{N}$.
It is argued that the $K$-coherence satisfies $(C1)$ and $(C2)$, but fails to
meet the requirement $(C3)$ [66, 67]. By considering coherence with respect to
the Lüders measurements induced from the observable $K$, it is shown that the
$K$-coherence can be readily adapted to a bona fide measure of coherence
satisfying $(C1)$-$(C3)$ [13] (which is the coherence in the context of
partially decoherent operations, and has been called partial coherence in
[13]). Another way to resolve the above problem is to introduce the skew
information-based coherence measure defined by [14]
$C_{I}(\rho)=\sum_{k=1}^{N}I(\rho,|k\rangle\langle k|),$ (2)
where $I(\rho,|k\rangle\langle
k|)=-\frac{1}{2}\mathrm{Tr}([\sqrt{\rho},|k\rangle\langle k|]^{2})$ is the skew
information of the state $\rho$ with respect to the projection
$|k\rangle\langle k|$. Direct calculations show that (2) can be further
written as [14]
$C_{I}(\rho)=1-\sum_{k=1}^{N}\langle k|\sqrt{\rho}|k\rangle^{2}.$ (3)
It is easy to see that $\max_{\rho}C_{I}(\rho)=1-\frac{1}{N}$, and the maximum
is attained for the maximally coherent state
$|\psi\rangle=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}e^{i\theta_{j}}|j\rangle$.
If $\rho=|\psi\rangle\langle\psi|$ is a pure state, one has
$C_{I}(\psi)=1-\sum_{k=1}^{N}|\langle k|\psi\rangle|^{4}.$ (4)
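The closed form (3) is easy to check numerically; the sketch below evaluates $C_{I}$ for the maximally coherent state (giving $1-1/N$) and for an incoherent diagonal state (giving zero):

```python
import numpy as np

def coherence(rho):
    """Skew information-based coherence, Eq. (3): 1 - sum_k <k|sqrt(rho)|k>^2."""
    w, v = np.linalg.eigh(rho)
    s = (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T   # sqrt(rho)
    return float(1.0 - np.sum(np.real(np.diag(s)) ** 2))

N = 4
phases = np.exp(1j * np.random.default_rng(1).uniform(0.0, 2.0 * np.pi, N))
psi = phases / np.sqrt(N)                      # maximally coherent state
print(coherence(np.outer(psi, psi.conj())))    # 1 - 1/N = 0.75

print(coherence(np.diag([0.5, 0.2, 0.2, 0.1])))  # incoherent state: (numerically) zero
```

The maximum $1-1/N$ is attained independently of the phases $\theta_{j}$, consistent with the statement above.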
In [14], it has been proved that the coherence measure defined in (2)
satisfies all the criteria $(C1)$-$(C4)$, while the $K$-coherence does not
satisfy $(C4)$ (strong monotonicity). The advantage of this coherence measure
is that it has an analytic expression. Also, an operational meaning in
connection with quantum metrology has been revealed. The distribution of this
coherence measure among the multipartite systems has been investigated and a
corresponding polygamy relation has been proposed. It is also found that this
coherence measure provides the natural upper bounds of quantum correlations
prepared by incoherent operations. Moreover, it is shown that this coherence
measure can be experimentally measured [14]. Since the skew information-based
coherence measure (2) is well-defined and admits an analytic expression, it is
of great significance both theoretically and practically, and it is worthwhile
to evaluate the average coherence based on this measure for both random pure
and random mixed quantum states.
3\. Random pure quantum states, Lévy’s Lemma, random mixed quantum states
Random pure quantum states. Let $\mathcal{H}=\mathbb{C}^{N}$ be a Hilbert
space of dimension $N$, $\mathrm{U(N)}$ be the group of all $N\times N$
unitary matrices, $\mathrm{M_{N}}(\mathbb{C})$ be the set of all $N\times N$
complex matrices, and $\mathrm{D}(\mathbb{C}^{N})$ be the set of all density
matrices on $\mathbb{C}^{N}$. The set of pure states on $\mathbb{C}^{N}$ is
the complex projective space $\mathbb{C}\mathrm{P}^{N-1}$. For the space of
pure states $|\psi\rangle$ there exists a unique measure $\mathrm{d}(\psi)$
induced from the uniform Haar measure $\mathrm{d}\mu(U)$ on the unitary group
$\mathrm{U(N)}$, which implies that any random pure state $|\psi\rangle$ can
be obtained via a unitary operation on a fixed pure state $|\psi_{0}\rangle$:
$|\psi\rangle=U|\psi_{0}\rangle$. The average value of a function $g(\psi)$ of
pure states $|\psi\rangle$ is defined as
$\mathbb{E}_{\psi}[g(\psi)]=\int\mathrm{d}(\psi)g(\psi)=\int_{\mathrm{U(N)}}\mathrm{d}\mu(U)g(U\psi_{0}).$
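In practice, such Haar-distributed pure states can be sampled by normalizing a vector of i.i.d. complex Gaussian entries (equivalently, by applying a Haar-random unitary to a fixed state); a sketch, with the average of $g(\psi)=|\langle 1|\psi\rangle|^{2}$ checked against its value $1/N$, which follows from unitary invariance:

```python
import numpy as np

def haar_state(N, rng):
    """Sample |psi> from the unitarily invariant (Fubini-Study) measure on C^N."""
    z = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    return z / np.linalg.norm(z)

rng = np.random.default_rng(7)
N, samples = 16, 20000
# Monte Carlo estimate of E_psi[|<1|psi>|^2], which equals 1/N by symmetry.
est = np.mean([abs(haar_state(N, rng)[0]) ** 2 for _ in range(samples)])
print(est)   # close to 1/16 = 0.0625
```

Averages $\mathbb{E}_{\psi}[g(\psi)]$ of other functions can be estimated in the same way by replacing the integrand.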
Lipschitz continuous function and Lipschitz constant. Let $(X,d_{1})$ and
$(Y,d_{2})$ be two metric spaces and $T:X\rightarrow Y$ be a mapping. $T$ is
called a Lipschitz continuous mapping on $X$ with Lipschitz constant
$\eta>0$ if
$d_{2}(T(x),T(y))\leq\eta d_{1}(x,y)$
holds for all $x,y\in X$ [68]. Note that any real number larger than $\eta$ is
also a Lipschitz constant for the mapping $T$ [68].
In this work, we will use the concept of a Hilbert-Schmidt norm of a matrix
$A$, which is defined as $\|A\|_{2}:=\sqrt{\mathrm{Tr}A^{\dagger}A}$ [69].
Also, in deriving the Lipschitz constant for discussing the typicality for
random pure/mixed states, we need the notion of the gradient of a function.
The best linear approximation to a differentiable function
$f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ at a point $x$ in $\mathbb{R}^{n}$ is
a linear mapping from $\mathbb{R}^{n}$ to $\mathbb{R}$, often denoted by
$\mathrm{d}f_{x}$ or $Df(x)$ and called the differential or (total) derivative
of $f$ at $x$. The _gradient_ is related to the differential by the formula
$(\nabla f)_{x}\cdot v=\mathrm{d}f_{x}(v)$ for any $v\in\mathbb{R}^{n}$; that
is, the one-form (i.e., the linear functional) acting on vectors induces a
vector representation $(\nabla f)_{x}$ with respect to the scalar product
(just as in the Riesz representation theorem in Hilbert space). The function
$\mathrm{d}f$, which maps $x$ to $\mathrm{d}f_{x}$, is called the differential
or (exterior) derivative of $f$ and is an example of a differential one-form.
If $\mathbb{R}^{n}$ is viewed as the space of ($n$-dimensional) column vectors
of real numbers, then one can regard $\mathrm{d}f_{x}$ as the row vector with
components $\left(\frac{\partial f}{\partial x_{1}},\cdots,\frac{\partial
f}{\partial x_{n}}\right)$, so that $\mathrm{d}f_{x}(v)$ is given by matrix
multiplication. The gradient is then the corresponding column vector, i.e.,
$(\nabla f)_{x}=(\mathrm{d}f_{x})^{T}$ [70].
Lévy’s Lemma (see [47] and [49]). Let $T:\mathbb{S}^{k}\rightarrow\mathbb{R}$
be a Lipschitz continuous function from the $k$-sphere to the real line with
Lipschitz constant $\eta$ (with respect to the Hilbert-Schmidt norm). Let
$z\in\mathbb{S}^{k}$ be chosen uniformly at random. Then for any
$\epsilon>0$, we have
$\mathrm{Pr}\{|T(z)-\mathbb{E}[T]|>\epsilon\}\leq
2\,\mathrm{exp}\left(-\frac{(k+1)\epsilon^{2}}{9\pi^{3}\eta^{2}\,\mathrm{ln}2}\right),$
(5)
where $\mathbb{E}[T]$ is the expected value of $T$.
Note that the average over the Haar distributed $N$-dimensional pure states is
equivalent to the average over the $k$-sphere with $k=2N-1$.
Existence of small nets. To prove the existence of concentrated subspaces with
a fixed amount of coherence, we need the notion of small nets [48]. Given a
Hilbert space $\mathcal{H}$ of dimension $N$ and $0<\epsilon_{0}<1$, there
exists a set $\mathcal{N}$ of pure states in $\mathcal{H}$ with
$|\mathcal{N}|\leq(5/\epsilon_{0})^{2N}$ such that for every pure state
$|\psi\rangle\in\mathcal{H}$, there exists
$|\tilde{\psi}\rangle\in\mathcal{N}$ such that
$\||\psi\rangle-|\tilde{\psi}\rangle\|_{2}\leq\frac{\epsilon_{0}}{2}$, where
$\|\cdot\|_{2}$ is the Hilbert-Schmidt norm of a matrix. This set
$\mathcal{N}$ is called an $\epsilon_{0}$ net.
Random mixed quantum states. Quantum ensembles are defined by choosing
probability measures on $\mathrm{D}(\mathbb{C}^{N})$. It is worth noting that
such a measure need not be unique, and different measures may have different
physical motivations, advantages and drawbacks, while the Fubini-Study (FS)
measure is the only natural measure for defining random pure states [71].
We have to pay a high price for considering a Riemannian geometry on
$\mathrm{D}(\mathbb{C}^{N})$, since the monotone metrics that emerge are
usually difficult to handle when $N>2$. Luckily, the measures induced from
some chosen monotone metrics are not that difficult to deal with. The
technique is the same as the one used in flat space, where the Euclidean
measure is decomposed into a product. The set of quantum mixed states that can
be written in the form $\rho=U\Lambda U^{\dagger}$, for a fixed diagonal
matrix $\Lambda$ with strictly positive eigenvalues, is a flag manifold ${\bf
F}^{(N)}=\mathrm{U(N)/[U(1)]}^{N}$. It is natural to assume that a probability
distribution on $\mathrm{D}(\mathbb{C}^{N})$ is invariant with respect to
unitary rotations, $P(\rho)=P(W\rho W^{\dagger})$. This assumption is
guaranteed if (a) the eigenvalues and eigenvectors are chosen independently,
and (b) the eigenvectors are drawn according to the Haar measure,
$\mathrm{d\mu_{\mathrm{Haar}}}(W)=\mathrm{d\mu_{\mathrm{Haar}}}(UW)$ [71].
Combining the two measures, a product measure on the Cartesian product of the
flag manifold and the simplex ${\bf F}^{(N)}\times\Delta_{N-1}$ can be
defined: $\mathrm{d\mu(\rho)=d\mu_{Haar}}(U)\times\mathrm{d}\nu(\Lambda)$,
which induces the corresponding probability distribution,
$P(\rho)=P_{\mathrm{Haar}}({\bf F}^{(N)})\times P(\Lambda)$, where the first
factor denotes the natural, unitarily invariant distribution on the flag
manifold ${\bf F}^{(N)}=\mathrm{U(N)/[U(1)]}^{N}$ induced by the Haar measure
on $\mathrm{U(N)}$. Note that the Haar measure on $\mathrm{U(N)}$ is unique
while there is no unique choice for $\nu$ [63, 71, 72].
The measures used frequently over $\mathrm{D}(\mathbb{C}^{N})$ can be obtained
by taking partial trace over a $M$-dimensional environment of an ensemble of
pure states distributed according to the unique, unitarily invariant FS
measure on the space $\mathbb{C}\mathrm{P}^{MN-1}$ of pure states of the
composite system. There is a simple physical motivation for such measures:
they can be used if nothing is known about the density matrix apart from the
dimensionality $M$ of the environment. When $M=1$, we get the FS measure on
the space of pure states. Since the rank of $\rho$ is limited by $M$, when
$M\geq N$ the induced measure covers the full set of
$\mathrm{D}(\mathbb{C}^{N})$. Since the pure state $|\psi\rangle$ is drawn
according to the FS measure, the induced measure is of the product form
$P(\rho)=P_{\mathrm{Haar}}({\bf F}^{(N)})\times P(\Lambda)$. Hence the
distribution of the eigenvectors of $\rho$ is determined by the Haar measure
on $\mathrm{U(N)}$ [71].
The measure for the joint probability distribution of the spectrum
$\Lambda=\{\lambda_{1},\ldots,\lambda_{N}\}$ of $\rho$ is given by [72]
$\displaystyle\mathrm{d}\nu_{N,M}(\Lambda)=C_{N,M}\delta\left(1-\sum^{N}_{j=1}\lambda_{j}\right)\prod_{1\leq
i<j\leq
N}(\lambda_{i}-\lambda_{j})^{2}\prod^{N}_{j=1}\lambda^{M-N}_{j}\theta(\lambda_{j})\mathrm{d}\lambda_{j},$
(6)
where $\delta$ is the Dirac delta function, the Heaviside step function
$\theta$ ensures that the eigenvalues $\lambda_{j}$ are nonnegative, and
$C_{N,M}$ is the normalization
constant,
$C_{N,M}=\frac{\Gamma(NM)}{\prod^{N-1}_{j=0}\Gamma(N-j+1)\Gamma(M-j)}.$
In particular, we will consider the case $N=M$ in this paper. In this
scenario, we deal with the non-Hermitian square random matrices characteristic
of the Ginibre ensemble [73, 74] and obtain the Hilbert-Schmidt measure [71].
Denote $\mathrm{d\nu_{N,N}=d\nu}$ and $C_{N}^{\mathrm{HS}}=C_{N,N}$. Thus we
have [64, 72]
$\mathrm{d\mu_{HS}(\rho)=d\mu_{Haar}}(U)\times\mathrm{d}\nu(\Lambda)$ (7)
for $\rho=U\Lambda U^{\dagger}$. Here $\mathrm{d\nu(\Lambda)}$ is given by
[64, 72]
$\mathrm{d\nu(\Lambda)}=C_{N}^{\mathrm{HS}}\delta\left(1-\sum_{j=1}^{N}\lambda_{j}\right)|\Delta(\lambda)|^{2}\prod_{j=1}^{N}\mathrm{d}\lambda_{j},$
(8)
where $|\Delta(\lambda)|^{2}=\prod_{1\leq i<j\leq
N}(\lambda_{i}-\lambda_{j})^{2}$, and $C_{N}^{\mathrm{HS}}$ is the
normalization constant,
$C_{N}^{\mathrm{HS}}=\frac{\Gamma(N^{2})}{\Gamma(N+1)\prod^{N}_{j=1}\Gamma(j)^{2}}.$
(9)
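The $N=M$ case admits a simple sampling recipe that is useful for numerical experiments: drawing an $N\times N$ Ginibre matrix $G$ (i.i.d. standard complex Gaussian entries) and normalising $GG^{\dagger}$ yields a state distributed according to the Hilbert-Schmidt measure (7). A minimal sketch in Python (illustrative only):

```python
import numpy as np

def hs_random_state(n, rng):
    """Sample an n x n density matrix from the Hilbert-Schmidt measure by
    normalising a Ginibre matrix: rho = G G^dagger / Tr(G G^dagger)."""
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
rho = hs_random_state(4, rng)  # Hermitian, positive semidefinite, unit trace
```

The resulting matrix is Hermitian and positive semidefinite with unit trace, as a density matrix must be.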
## 4. Average skew information-based coherence and its typicality for random pure states
We first calculate the average skew information-based coherence for random
pure states.
Theorem 1 The average skew information-based coherence for a random pure state
$|\psi\rangle\in\mathbb{C}^{N}$ is given by
$\mathbb{E}_{\psi}[C_{I}(\psi)]=\frac{N-1}{N+1}.$ (10)
Proof. From Eq. (4), the expected value of the coherence based on skew
information is given by
$\mathbb{E}_{\psi}[C_{I}(\psi)]:=\int\mathrm{d}\mu(\psi)\left(1-\sum_{k=1}^{N}|\langle
k|\psi\rangle|^{4}\right),$ (11)
where $\mu$ is a unitarily invariant uniform probability measure.
Take $|\psi\rangle=U|1\rangle$, where $U$ is sampled from the Haar
distribution and $|1\rangle$ is a fixed state. Noting that the Haar measure is
left-invariant, we obtain
$\displaystyle\mathbb{E}_{\psi}[C_{I}(\psi)]$ $\displaystyle=$ $\displaystyle
1-\sum_{k=1}^{N}\int\mathrm{d}\mu(U)|\langle k|U|1\rangle|^{4}$
$\displaystyle=$ $\displaystyle 1-N\int\mathrm{d}\mu(U)|U_{11}|^{4},$
where $U_{11}=\langle 1|U|1\rangle$. The distribution of $|U_{11}|^{2}$ is
given by $(N-1)(1-r)^{N-2}\mathrm{d}r$, where $0\leq r\leq 1$ [61]. Therefore,
we get
$\mathbb{E}_{\psi}[C_{I}(\psi)]=1-N(N-1)\int_{0}^{1}r^{2}(1-r)^{N-2}\mathrm{d}r=1-N(N-1)B(3,N-1),$
(12)
where $B(\alpha,\beta)$ is the Beta function
$B(\alpha,\beta):=\int_{0}^{1}r^{\alpha-1}(1-r)^{\beta-1}\mathrm{d}r=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}.$
(13)
Noting that
$B(3,N-1)=\frac{\Gamma(3)\Gamma(N-1)}{\Gamma(N+2)}=\frac{2}{(N+1)N(N-1)},$
we obtain from Eq. (12) the formula (10). $\Box$
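Theorem 1 can be checked by direct Monte Carlo sampling: normalising a vector of i.i.d. complex Gaussians gives a Haar-random pure state, and for a pure state $C_{I}(\psi)=1-\sum_{k}|\langle k|\psi\rangle|^{4}$. The sketch below (dimension and sample size chosen only for illustration) reproduces $\frac{N-1}{N+1}=0.6$ for $N=4$:

```python
import numpy as np

def haar_pure_state(n, rng):
    # Normalising an i.i.d. complex Gaussian vector yields a Haar-random pure state.
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

def coherence_pure(psi):
    # C_I(psi) = 1 - sum_k |<k|psi>|^4 in the computational basis.
    p = np.abs(psi) ** 2
    return 1.0 - np.sum(p ** 2)

rng = np.random.default_rng(7)
n = 4
mean_ci = np.mean([coherence_pure(haar_pure_state(n, rng)) for _ in range(20000)])
# mean_ci should be close to (n - 1) / (n + 1) = 0.6
```

With 20000 samples the empirical mean agrees with the closed form to well within the Monte Carlo error.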
By Theorem 1, it is easy to see that
$\mathbb{E}_{\psi}[C_{I}(\psi)]=\frac{1}{3}$ for qubit pure states and
$\mathbb{E}_{\psi}[C_{I}(\psi)]=\frac{1}{2}$ for qutrit pure states. Moreover,
$\lim_{N\rightarrow\infty}\mathbb{E}_{\psi}[C_{I}(\psi)]=1$.
Moreover, it is easy to see that
$\left(1-\frac{1}{N}\right)-\frac{N-1}{N+1}<\frac{N-1}{N+1}$ for all integers $N\geq
2$, i.e.,
$\max_{\psi}C_{I}(\psi)-\mathbb{E}_{\psi}[C_{I}(\psi)]<\mathbb{E}_{\psi}[C_{I}(\psi)]-\min_{\psi}C_{I}(\psi)$,
which means that for the skew information-based coherence measure the average
coherence is always closer to the maximum coherence than to the minimum
coherence. It can also be found that
$\lim_{N\rightarrow\infty}(\max_{\psi}C_{I}(\psi)-\mathbb{E}_{\psi}[C_{I}(\psi)])=\lim_{N\rightarrow\infty}\frac{N-1}{N(N+1)}=0$
and, since
$\frac{\max_{\psi}C_{I}(\psi)}{\mathbb{E}_{\psi}[C_{I}(\psi)]}=\frac{N+1}{N}$,
$\lim_{N\rightarrow\infty}\frac{\max_{\psi}C_{I}(\psi)}{\mathbb{E}_{\psi}[C_{I}(\psi)]}=1$.
These facts illustrate that for high dimensional quantum systems, the quantum
coherence of a randomly-chosen pure state sampled from the uniform Haar
measure is almost maximal.
Based on the above result, we can further give the following theorem about the
concentration of measure phenomenon for quantum coherence with respect to
random pure states.
Theorem 2 (Typicality of skew information-based coherence for random pure
states) Let $|\psi\rangle\in\mathbb{C}^{N}$ be a random pure
state. Then for all $\epsilon>0$, we have
$\mathrm{Pr}\left\{\left|C_{I}(\psi)-\frac{N-1}{N+1}\right|>\epsilon\right\}\leq
2\,\mathrm{exp}\left(-\frac{N^{3}\epsilon^{2}}{72\pi^{3}\,\mathrm{ln}2}\right).$
(14)
Proof. Consider the map $T:|\psi\rangle\rightarrow T(\psi):=C_{I}(\psi)$. It
follows from Eq. (10) that $\mathbb{E}_{\psi}[T(\psi)]=\frac{N-1}{N+1}$. Set
$k=2N-1$ in Eq. (5). We need to fix the Lipschitz constant $\eta$ for $T$
satisfying $|T(\psi)-T(\phi)|\leq\eta\|\psi-\phi\|_{2}$. Suppose that
$|\psi\rangle=\sum_{k=1}^{N}\psi_{k}|k\rangle$ with
$\sum_{k=1}^{N}|\psi_{k}|^{2}=1$. Denote $p_{k}=|\psi_{k}|^{2}$. Since
$T(\psi)=1-\sum_{k=1}^{N}|\langle
k|\psi\rangle|^{4}=1-\sum_{k=1}^{N}|\psi_{k}|^{4}$, we have
$\displaystyle\eta^{2}:=\sup_{\langle\psi|\psi\rangle\leq 1}\nabla
T\cdot\nabla T$ $\displaystyle=$
$\displaystyle\sup_{\langle\psi|\psi\rangle\leq
1}\sum_{k=1}^{N}(4|\psi_{k}|^{3})^{2}=\sup_{\langle\psi|\psi\rangle\leq
1}16\sum_{k=1}^{N}|\psi_{k}|^{6}$ (15) $\displaystyle=$
$\displaystyle\sup_{\langle\psi|\psi\rangle\leq 1}16\sum_{k=1}^{N}p_{k}^{3}$
$\displaystyle=$ $\displaystyle
16N\left(\frac{1}{N}\right)^{3}=\frac{16}{N^{2}},$
where the first equality in the last line of Eq. (15) can be obtained by using
Lagrange multipliers. Therefore, $\eta\leq\frac{4}{N}$. By definition, we can
take $\eta=\frac{4}{N}$ as the Lipschitz constant. This completes the proof.
$\Box$
The inequality (14) implies that, similar to the relative entropy of
coherence, for large $N$ the skew information-based coherence of an
$N$-dimensional random pure state is, with high probability, close to
$\frac{N-1}{N+1}$. Namely, most randomly-chosen pure states carry almost
$\frac{N-1}{N+1}$ skew information-based coherence. This is the so-called
concentration of the skew information-based coherence around its expected
value, i.e., the typicality of the skew information-based coherence.
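The concentration phenomenon is easy to observe numerically: the empirical spread of $C_{I}$ over Haar-random pure states shrinks quickly as the dimension grows. A rough sketch (the dimensions 8 and 64 and the sample sizes are arbitrary illustrative choices):

```python
import numpy as np

def haar_pure_state(n, rng):
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

def coherence_pure(psi):
    p = np.abs(psi) ** 2
    return 1.0 - np.sum(p ** 2)

def empirical_spread(n, trials, rng):
    # Standard deviation of C_I over Haar-random pure states of dimension n.
    vals = [coherence_pure(haar_pure_state(n, rng)) for _ in range(trials)]
    return float(np.std(vals))

rng = np.random.default_rng(1)
spread_8 = empirical_spread(8, 4000, rng)
spread_64 = empirical_spread(64, 4000, rng)  # much smaller: concentration
```

The spread at dimension 64 is an order of magnitude smaller than at dimension 8, consistent with the exponential bound (14).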
Next, we shall identify a coherent subspace, i.e., a large subspace of the
Hilbert space such that the amount of the skew information-based coherence for
each pure state in this subspace can be bounded from below almost always by a
fixed number that is arbitrarily close to the typical value of coherence.
Theorem 3 (Coherent subspaces) Let $\mathcal{H}=\mathbb{C}^{N}$ be a Hilbert
space of dimension $N$. Then for any $0<\epsilon<\frac{1}{N}$, there exists a
subspace $\mathcal{S}\subset\mathcal{H}$ of dimension
$s=\left\lfloor\frac{N^{3}\epsilon^{2}-1}{3095(3-\mathrm{ln}\epsilon
N)}\right\rfloor,$ (16)
such that all the pure states $|\psi\rangle\in\mathcal{S}$ almost always
satisfy $C_{I}(\psi)\geq\frac{N-1}{N+1}-\epsilon$. Here $\lfloor\cdot\rfloor$
denotes the floor function.
Proof. Let $\mathcal{S}$ be a random $s$-dimensional subspace of
$\mathcal{H}$. Let $\mathcal{N}_{S}$ be an $\epsilon_{0}$ net for states on
$\mathcal{S}$, where $\epsilon_{0}=\frac{\epsilon}{4/N}$. It follows from the
definition that $|\mathcal{N}_{S}|\leq(5/\epsilon_{0})^{2s}$. Identify
$\mathcal{S}$ with $U\mathcal{S}_{0}$, where $\mathcal{S}_{0}$ is fixed, and
$U$ is a unitary matrix distributed according to the Haar measure. Construct a
net $\mathcal{N}_{S_{0}}$ on $\mathcal{S}_{0}$ and let
$\mathcal{N}_{S}=U\mathcal{N}_{S_{0}}$. Given $|\psi\rangle\in\mathcal{S}$, we
can choose $|\tilde{\psi}\rangle\in\mathcal{N}_{S}$ such that
$\||\psi\rangle-|\tilde{\psi}\rangle\|_{2}\leq\frac{\epsilon_{0}}{2}$. Since
$C_{I}(\psi)$ is a Lipschitz continuous function with Lipschitz constant
$\eta=\frac{4}{N}$, by the definition of the $\epsilon_{0}$ net, we have
$|C_{I}(\psi)-C_{I}(\tilde{\psi})|\leq\eta\||\psi\rangle-|\tilde{\psi}\rangle\|_{2}\leq\eta\frac{\epsilon_{0}}{2}=\epsilon/2.$
Define
$\mathbb{P}=\mathrm{Pr}\left\{\min_{|\psi\rangle\in\mathcal{S}}C_{I}(\psi)<\frac{N-1}{N+1}-\epsilon\right\}$.
From Theorem 2 we have
$\displaystyle\mathbb{P}$ $\displaystyle\leq$
$\displaystyle\mathrm{Pr}\left\{\min_{|\tilde{\psi}\rangle\in\mathcal{N}_{S}}C_{I}(\tilde{\psi})<\frac{N-1}{N+1}-\frac{\epsilon}{2}\right\}$
(17) $\displaystyle\leq$
$\displaystyle|\mathcal{N}_{S}|\,\mathrm{Pr}\left\{C_{I}(\tilde{\psi})<\frac{N-1}{N+1}-\frac{\epsilon}{2}\right\}$
$\displaystyle\leq$ $\displaystyle 2\left(\frac{20}{\epsilon
N}\right)^{2s}\mathrm{exp}\left(-\frac{N^{3}\epsilon^{2}}{72\pi^{3}\,\mathrm{ln}2}\right).$
If the probability $\mathbb{P}<1$, a subspace with the properties mentioned in
the theorem will exist. This fact holds if
$s<\frac{N^{3}\epsilon^{2}-1}{3095(3-\mathrm{ln}\epsilon N)}.$
Noting that $\epsilon<\frac{1}{N}$, for $s\geq 2$, we require that $N\geq
32941$. Therefore, we get
$s=\left\lfloor\frac{N^{3}\epsilon^{2}-1}{3095(3-\mathrm{ln}\epsilon
N)}\right\rfloor$. This completes the proof. $\Box$
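Eq. (16) is simple to evaluate; the snippet below (the parameters $N=10^{5}$ and $\epsilon=5\times10^{-6}$ are illustrative choices satisfying $\epsilon<\frac{1}{N}$) shows a case with a nontrivial coherent subspace, consistent with the remark that $s\geq 2$ requires $N\geq 32941$.

```python
import math

def coherent_subspace_dim(n, eps):
    """Subspace dimension from Eq. (16):
    s = floor((n^3 eps^2 - 1) / (3095 (3 - ln(eps n))))."""
    return math.floor((n ** 3 * eps ** 2 - 1) / (3095 * (3 - math.log(eps * n))))

s = coherent_subspace_dim(10 ** 5, 5e-6)  # a nontrivial coherent subspace
```

For much smaller dimensions the formula returns $s\leq 1$, i.e., no useful subspace.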
## 5. Average skew information-based coherence and its typicality for random mixed states
We now turn to the average skew information-based coherence and its typicality
for random mixed quantum states. We first present the following lemma.
Lemma 1 Denote $|\Delta(\mu)|^{2}=\prod_{1\leq i<j\leq
N}(\mu_{i}-\mu_{j})^{2}$. It holds that
$\displaystyle\int_{\mathbb{R}_{+}^{N}}\sqrt{\mu_{1}\mu_{2}}\mathrm{exp}\left(-\sum_{j=1}^{N}\mu_{j}\right)|\Delta(\mu)|^{2}\prod_{j=1}^{N}\mathrm{d}\mu_{j}$
$\displaystyle=(N-2)!\prod^{N}_{j=1}\Gamma(j)^{2}\left[\left(\sum_{k=0}^{N-1}I_{kk}^{(\frac{1}{2})}\right)^{2}-\sum_{k,l=0}^{N-1}\left(I_{kl}^{(\frac{1}{2})}\right)^{2}\right],$
(18)
where
$I_{kl}^{(\frac{1}{2})}=\sum_{r=0}^{\min(k,l)}(-1)^{k+l}\tbinom{\frac{1}{2}}{k-r}\tbinom{\frac{1}{2}}{l-r}\frac{\Gamma(\frac{3}{2}+r)}{r!}.$
The proof of Lemma 1 is given in Appendix A. Based on the above lemma, we can
give the analytical formula of average skew information-based coherence for
random mixed states in terms of the dimension $N$.
Theorem 4 The average skew information-based coherence for a random mixed
state $\rho\in\mathrm{D}(\mathbb{C}^{N})$ is given by
$\displaystyle\mathbb{E}_{\rho}[C_{I}(\rho)]$ $\displaystyle:=$
$\displaystyle\int_{\mathrm{D}(\mathbb{C}^{N})}C_{I}(\rho)\mathrm{d\mu_{HS}}(\rho)$
(19) $\displaystyle=$ $\displaystyle
1-\frac{1}{N+1}\left(2+\frac{1}{N^{2}}\left[\left(\sum_{k=0}^{N-1}I_{kk}^{(\frac{1}{2})}\right)^{2}-\sum_{k,l=0}^{N-1}\left(I_{kl}^{(\frac{1}{2})}\right)^{2}\right]\right),$
where $\mathrm{d\mu_{HS}}$ is the normalized Hilbert-Schmidt measure, i.e.,
$\int_{\mathrm{D}(\mathbb{C}^{N})}\mathrm{d\mu_{HS}}(\rho)=1$, and
$I_{kl}^{(\frac{1}{2})}=\sum_{r=0}^{\min(k,l)}(-1)^{k+l}\tbinom{\frac{1}{2}}{k-r}\tbinom{\frac{1}{2}}{l-r}\frac{\Gamma(\frac{3}{2}+r)}{r!}.$
The proof of Theorem 4 is given in Appendix B. Setting $N=2$ and $N=3$ in
Theorem 4, we obtain the explicit values of the average coherence for qubit
states and qutrit states,
$\mathbb{E}_{\rho}[C_{I}(\rho)]=1-\frac{1}{3}\left(2+\frac{3\pi}{16}\right)=\frac{1}{3}-\frac{\pi}{16}\approx
0.137$
and
$\mathbb{E}_{\rho}[C_{I}(\rho)]=1-\frac{1}{4}\left(2+\frac{103\pi}{256}\right)=\frac{1}{2}-\frac{103\pi}{1024}\approx
0.184,$
respectively.
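The closed form in Theorem 4 is straightforward to evaluate numerically; the sketch below (illustrative, using `math.gamma` for the fractional binomial coefficients $\tbinom{1/2}{m}$) reproduces the qubit and qutrit values above.

```python
import math

def binom_half(m):
    # Generalised binomial coefficient C(1/2, m) = Gamma(3/2) / (m! Gamma(3/2 - m)).
    return math.gamma(1.5) / (math.gamma(m + 1) * math.gamma(1.5 - m))

def I_half(k, l):
    # I_{kl}^{(1/2)} as in Theorem 4.
    return sum((-1) ** (k + l) * binom_half(k - r) * binom_half(l - r)
               * math.gamma(1.5 + r) / math.factorial(r)
               for r in range(min(k, l) + 1))

def avg_mixed_coherence(n):
    # E_rho[C_I(rho)] from Eq. (19), with sums over k, l = 0, ..., n-1.
    diag = sum(I_half(k, k) for k in range(n))
    full = sum(I_half(k, l) ** 2 for k in range(n) for l in range(n))
    return 1 - (2 + (diag ** 2 - full) / n ** 2) / (n + 1)
```

For $N=2$ this evaluates to $\frac{1}{3}-\frac{\pi}{16}\approx 0.137$ and for $N=3$ to $\frac{1}{2}-\frac{103\pi}{1024}\approx 0.184$, matching the values stated above.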
In Figure 1, we plot the average skew information-based coherence for random
mixed states. The vertical axis shows the value of
$A=\mathbb{E}_{\rho}[C_{I}(\rho)]$ given by Eq. (19). Numerical calculations
show that as the dimension $N$ increases, the expectation value
$\mathbb{E}_{\rho}[C_{I}(\rho)]$ approaches a number close to $0.28$.
Numerical computation also shows that, unlike for random pure states, for
random mixed states the average skew information-based coherence is closer to
the minimal coherence $0$ than to the maximum coherence $1-\frac{1}{N}$.
Figure 1: The average skew information-based coherence
$A=\mathbb{E}_{\rho}[C_{I}(\rho)]$ as a function of $N=2^{m}$.
Based on the above result, we can similarly discuss the typicality of quantum
coherence $C_{I}(\rho)$ for random mixed states.
Theorem 5 (Typicality of skew information-based coherence for random mixed
states) Let $\rho_{A}\in\mathrm{D}(\mathbb{C}^{N})$ be a random mixed quantum
state obtained via partial tracing over a Haar distributed pure state
$|\psi\rangle_{AB}$ in $\mathbb{C}^{N}\otimes\mathbb{C}^{N}$. Then for all
$\epsilon>0$, we have
$\mathrm{Pr}\left\{\left|C_{I}(\rho_{A})-\mathbb{E}_{\rho}[C_{I}(\rho_{A})]\right|>\epsilon\right\}\leq
2\,\mathrm{exp}\left(-\frac{N^{2}\epsilon^{2}}{72\pi^{3}\,\mathrm{ln}2}\right),$ (20)
where $\mathbb{E}_{\rho}[C_{I}(\rho_{A})]$ is given by Eq. (19).
Proof. Define the map $T:\mathbb{S}^{2N^{2}-1}\rightarrow\mathbb{R}$ as
$T(\psi_{AB})=C_{I}(\rho_{A})$. Let
$|\psi\rangle_{AB}=\sum_{k,l=1}^{N}\psi_{kl}|k\rangle_{A}|l\rangle_{B}$. Then
$\rho_{A}=\sum_{k,k^{\prime}=1}^{N}p_{kk^{\prime}}|k\rangle_{A}\langle
k^{\prime}|$, where
$p_{kk^{\prime}}=\sum_{l=1}^{N}\psi_{kl}\overline{\psi_{k^{\prime}l}}$. For a
bipartite pure state $|\psi\rangle_{AB}$, it has been shown that
$1-C_{I}(\psi_{AB})\leq[1-C_{I}(\rho_{A})][1-C_{I}(\rho_{B})]$ [14]. Since
$0\leq C_{I}(\rho_{B})\leq 1-\frac{1}{N}$, we have $C_{I}(\rho_{A})\leq
C_{I}(\psi_{AB})=1-\sum_{k,l=1}^{N}|\langle k\otimes
l|\psi\rangle|^{4}=1-\sum_{k,l=1}^{N}|\psi_{kl}|^{4}$. Denote
$\tilde{T}(\psi_{AB})=C_{I}(\psi_{AB})$. Noting that
$p_{kk}=\sum_{l=1}^{N}|\psi_{kl}|^{2}$ with $\sum_{k=1}^{N}p_{kk}=1$, we have
$\displaystyle\eta^{2}:=\sup_{\langle\psi|\psi\rangle\leq
1}\nabla\tilde{T}\cdot\nabla\tilde{T}$ $\displaystyle=$
$\displaystyle\sup_{\langle\psi|\psi\rangle\leq
1}\sum_{k,l=1}^{N}(4|\psi_{kl}|^{3})^{2}=\sup_{\langle\psi|\psi\rangle\leq
1}16\sum_{k,l=1}^{N}|\psi_{kl}|^{6}$ (21) $\displaystyle=$
$\displaystyle\sup_{\langle\psi|\psi\rangle\leq
1}16\sum_{k,l=1}^{N}(|\psi_{kl}|^{2})^{3}$ $\displaystyle\leq$
$\displaystyle\sup_{\langle\psi|\psi\rangle\leq
1}16\left(\sum_{k,l=1}^{N}|\psi_{kl}|^{2}\right)^{3}=16,$
which implies that $\eta\leq 4$. Now, the Lipschitz constant for $T$ can be
obtained in the following way. Suppose that $\sigma_{A}$ is the reduced state
of another pure state $|\phi\rangle_{AB}$. Without loss of generality, assume
that $C_{I}(\sigma_{A})\leq C_{I}(\rho_{A})$. We can choose
$|\phi\rangle_{AB}$ such that $C_{I}(\sigma_{A})=C_{I}(\phi_{AB})$. Then
$C_{I}(\rho_{A})-C_{I}(\sigma_{A})\leq
C_{I}(\psi_{AB})-C_{I}(\phi_{AB})\leq\eta\||\psi\rangle_{AB}-|\phi\rangle_{AB}\|_{2}.$
Thus the Lipschitz constant of $T$ is bounded by that of $\tilde{T}$ and can
be chosen to be $4$. This completes the proof. $\Box$
## 6. Conclusions and discussions
We have deduced the explicit formulae for the average skew information-based
coherence for both random pure states and random mixed states. It is found
that as $N$ approaches infinity, the limit of the average coherence for random
pure states is $1$, while by numerical computation this limit for random mixed
states is a positive constant less than $\frac{1}{2}$. The average skew
information-based coherence is always closer to the maximum coherence than to
the minimum coherence for random pure states, while it is always closer to the
minimum coherence than to the maximum coherence for random mixed states, which
demonstrates that a randomly-chosen pure state may give rise to more coherence
as a resource than a randomly-chosen mixed one. This property coincides with
the one observed when the relative entropy of coherence is taken into
consideration.
From Eq. (10) it follows that $0\leq\mathbb{E}_{\psi}[C_{I}(\psi)]\leq 1$,
i.e., the average skew information-based coherence for a random pure state is
always uniformly bounded, while the average relative entropy of coherence for
a random pure state is $\mathbb{E}_{\psi}[C_{r}(\psi)]=H_{N}-1$ [61], which
approaches infinity as the dimension $N$ increases, where
$H_{N}=\sum_{k=1}^{N}1/k$ is the $N$th harmonic number. Unlike the pure state
case, it is shown in [62] that the average relative entropy of coherence for a
random mixed state is $\mathbb{E}_{\rho}[C_{r}(\rho)]=\frac{N-1}{2N}$.
Combining this fact with the equality given in Eq. (19), we conclude that in
the mixed state case, the average values of both the skew information-based
coherence and the relative entropy of coherence are uniformly bounded. It can
also be seen that
$\mathbb{E}_{\psi}[C_{r}(\psi)]>\mathbb{E}_{\psi}[C_{I}(\psi)]$ and
$\mathbb{E}_{\rho}[C_{r}(\rho)]>\mathbb{E}_{\rho}[C_{I}(\rho)]$, which implies
that for both a random pure state and a random mixed state, more coherence as
a resource could be generated when the relative entropy of coherence measure
is utilized rather than the skew information-based one. Moreover, in the
random pure state case, it is interesting to note that for the skew
information-based coherence, the gap between the maximal coherence and the
average coherence is
$1-\frac{1}{N}-\frac{N-1}{N+1}=\frac{N-1}{N(N+1)}>0$, which approaches $0$ as
$N$ approaches infinity, while for the relative entropy of coherence this gap
is $\mathrm{ln}\,N-H_{N}+1$, which stays bounded away from $0$ (it tends to
$1-\gamma\approx 0.42$, where $\gamma$ is the Euler-Mascheroni constant).
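The pure state comparison can be made concrete in a few lines (a sketch; $H_{N}$ is computed directly as the harmonic number):

```python
def avg_pure_ci(n):
    # Average skew information-based coherence of a random pure state, Eq. (10).
    return (n - 1) / (n + 1)

def avg_pure_cr(n):
    # Average relative entropy of coherence of a random pure state [61]: H_N - 1.
    return sum(1 / k for k in range(1, n + 1)) - 1

# E[C_r] exceeds E[C_I] in every dimension, and only E[C_r] is unbounded in N.
gaps = [avg_pure_cr(n) - avg_pure_ci(n) for n in range(2, 200)]
```

The gap is positive for every dimension checked, and grows without bound since $H_{N}-1$ diverges while $\frac{N-1}{N+1}<1$.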
Furthermore, we have shown that the average skew information-based coherence
of pure quantum states (resp. mixed quantum states) sampled randomly from the
uniform Haar measure is typical, i.e., the probability that the skew
information-based coherence of a randomly chosen pure quantum state (resp.
mixed quantum state) deviates from the average skew information-based
coherence (by more than an arbitrarily small error) is exponentially small in
the dimension of the Hilbert space.
We have also identified a coherent subspace, a large subspace of the Hilbert
space such that the amount of the skew information-based coherence for each
pure state in this subspace can be bounded from below almost always by a fixed
number that is arbitrarily close to the typical value of coherence. The
obtained results in this paper complement the corresponding results for
relative entropy of coherence, and may shed new light on the study of quantum
coherence from the probabilistic and statistical perspective.
## Acknowledgements
The authors would like to thank the referees for their valuable comments,
which greatly improved this paper. This work was supported by National Natural
Science Foundation of China (Grant Nos. 11701259, 11971140, 11461045,
11675113), the China Scholarship Council (Grant No.201806825038), Natural
Science Foundation of Jiangxi Province of China (Grant No. 20202BAB201001),
the Key Project of Beijing Municipal Commission of Education (Grant No.
KZ201810028042), Beijing Natural Science Foundation (Grant No. Z190005),
Natural Science Foundation of Zhejiang Province of China (Grant
No.LY17A010027). This work was completed while Zhaoqi Wu and Lin Zhang were
visiting Max-Planck-Institute for Mathematics in the Sciences in Germany.
## Appendix A: Proof of Lemma 1
Proof of Lemma 1. Note that $\prod_{1\leq i<j\leq N}(\mu_{i}-\mu_{j})$ is the
classical Vandermonde determinant
$\prod_{1\leq i<j\leq
N}(\mu_{i}-\mu_{j})=\left|\begin{array}[]{ccc}1&\cdots&1\\\
\mu_{1}&\cdots&\mu_{N}\\\ \vdots&\ddots&\vdots\\\
\mu_{1}^{N-1}&\cdots&\mu_{N}^{N-1}\\\ \end{array}\right|.$
It can be seen that if $P_{0},P_{1},\cdots,P_{N-1}$ are polynomials of
respective degrees $0,1,\cdots,N-1$ with leading coefficients
$a_{0},a_{1},\cdots,a_{N-1}$, then
$\prod_{1\leq i<j\leq
N}(\mu_{i}-\mu_{j})=\frac{1}{\prod_{k=0}^{N-1}a_{k}}\left|\begin{array}[]{ccc}P_{0}(\mu_{1})&\cdots&P_{0}(\mu_{N})\\\
P_{1}(\mu_{1})&\cdots&P_{1}(\mu_{N})\\\ \vdots&\ddots&\vdots\\\
P_{N-1}(\mu_{1})&\cdots&P_{N-1}(\mu_{N})\\\ \end{array}\right|$
Now choose $P_{k}(x)$ to be the Laguerre polynomials $L_{k}(x)$:
$L_{k}(x)=\sum_{j=0}^{k}(-1)^{j}\tbinom{k}{k-j}\frac{x^{j}}{j!}.$
Note that the $L_{k}(x)$ satisfy the orthogonality property
$\int_{0}^{\infty}L_{k}(x)L_{l}(x)e^{-x}dx=\delta_{kl},$ (22)
and the leading coefficient of $L_{k}(x)$ is
$a_{k}=\frac{(-1)^{k}}{k!}$. We have
$\displaystyle\prod_{1\leq i<j\leq N}(\mu_{i}-\mu_{j})^{2}$ $\displaystyle=$
$\displaystyle\frac{1}{\prod_{k=0}^{N-1}a_{k}^{2}}\left|\begin{array}[]{ccc}L_{0}(\mu_{1})&\cdots&L_{0}(\mu_{N})\\\
L_{1}(\mu_{1})&\cdots&L_{1}(\mu_{N})\\\ \vdots&\ddots&\vdots\\\
L_{N-1}(\mu_{1})&\cdots&L_{N-1}(\mu_{N})\\\ \end{array}\right|^{2}$ (27)
$\displaystyle=$ $\displaystyle\prod_{k=0}^{N-1}(k!)^{2}\sum_{\sigma,\tau\in
S_{N}}\mathrm{sgn}(\sigma)\mathrm{sgn}(\tau)\prod_{m=1}^{N}L_{\sigma(m)-1}(\mu_{m})L_{\tau(m)-1}(\mu_{m}),$
(28)
which implies that
$\displaystyle\int_{\mathbb{R}_{+}^{N}}\sqrt{\mu_{1}\mu_{2}}\mathrm{exp}\left(-\sum_{j=1}^{N}\mu_{j}\right)|\Delta(\mu)|^{2}\prod_{j=1}^{N}\mathrm{d}\mu_{j}$
$\displaystyle=\prod_{k=0}^{N-1}(k!)^{2}\sum_{\sigma,\tau\in
S_{N}}\mathrm{sgn}(\sigma)\mathrm{sgn}(\tau)\left(\int_{0}^{\infty}\sqrt{\mu_{1}}e^{-\mu_{1}}L_{\sigma(1)-1}(\mu_{1})L_{\tau(1)-1}(\mu_{1})\mathrm{d}\mu_{1}\right)$
$\displaystyle\left(\int_{0}^{\infty}\sqrt{\mu_{2}}e^{-\mu_{2}}L_{\sigma(2)-1}(\mu_{2})L_{\tau(2)-1}(\mu_{2})\mathrm{d}\mu_{2}\right)\left(\prod_{k=3}^{N}\int_{0}^{\infty}e^{-\mu_{k}}L_{\sigma(k)-1}(\mu_{k})L_{\tau(k)-1}(\mu_{k})\mathrm{d}\mu_{k}\right),$
where $S_{N}$ is the permutation group on $\{1,2,\cdots,N\}$.
Denote $I_{kl}^{(q)}:=\int_{0}^{\infty}L_{k}(x)L_{l}(x)e^{-x}x^{q}dx$, where
$q>-1$. It holds that [52]
$I_{kl}^{(q)}=\sum_{r=0}^{\min(k,l)}(-1)^{k+l}\tbinom{q}{k-r}\tbinom{q}{l-r}\frac{\Gamma(q+r+1)}{r!},~{}~{}q>-1.$
(29)
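Both the orthogonality relation (22) and the closed form (29) can be verified numerically by expanding the Laguerre polynomials and using $\int_{0}^{\infty}x^{m+q}e^{-x}\mathrm{d}x=\Gamma(m+q+1)$; the sketch below (illustrative, pure standard library) checks the case $q=\frac{1}{2}$ for small $k,l$.

```python
import math

def laguerre_coeffs(k):
    # L_k(x) = sum_j c_j x^j with c_j = (-1)^j C(k, j) / j!.
    return [(-1) ** j * math.comb(k, j) / math.factorial(j) for j in range(k + 1)]

def I_moment(k, l, q):
    # int_0^inf L_k(x) L_l(x) e^{-x} x^q dx, evaluated term by term via
    # int_0^inf x^{i+j+q} e^{-x} dx = Gamma(i + j + q + 1).
    ck, cl = laguerre_coeffs(k), laguerre_coeffs(l)
    return sum(a * b * math.gamma(i + j + q + 1)
               for i, a in enumerate(ck) for j, b in enumerate(cl))

def binom_q(q, m):
    # Generalised binomial coefficient C(q, m) for non-integer q.
    return math.gamma(q + 1) / (math.gamma(m + 1) * math.gamma(q - m + 1))

def I_closed(k, l, q=0.5):
    # Closed form Eq. (29).
    return sum((-1) ** (k + l) * binom_q(q, k - r) * binom_q(q, l - r)
               * math.gamma(q + r + 1) / math.factorial(r)
               for r in range(min(k, l) + 1))
```

Setting $q=0$ recovers the orthonormality (22), while $q=\frac{1}{2}$ matches Eq. (29).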
Note that
$\int_{0}^{\infty}\sqrt{\mu_{i}}e^{-\mu_{i}}L_{\sigma(i)-1}(\mu_{i})L_{\tau(i)-1}(\mu_{i})\mathrm{d}\mu_{i}=I_{\sigma(i)-1,\tau(i)-1}^{(\frac{1}{2})},~{}~{}i=1,2$
and
$\int_{0}^{\infty}\sqrt{\mu_{i}}e^{-\mu_{i}}L_{\sigma(1)-1}(\mu_{i})L_{\sigma(2)-1}(\mu_{i})\mathrm{d}\mu_{i}=I_{\sigma(1)-1,\sigma(2)-1}^{(\frac{1}{2})},~{}~{}i=1,2.$
We calculate the integral
$\int_{\mathbb{R}_{+}^{N}}\sqrt{\mu_{1}\mu_{2}}\mathrm{exp}\left(-\sum_{j=1}^{N}\mu_{j}\right)|\Delta(\mu)|^{2}\prod_{j=1}^{N}\mathrm{d}\mu_{j}$
by considering the following two cases.
Case I: $\sigma=\tau$. Denote $I=\sum_{k=0}^{N-1}I_{kk}^{(\frac{1}{2})}$. We
have
$\displaystyle\sum_{\sigma,\tau\in
S_{N},\sigma=\tau}\mathrm{sgn}(\sigma)\mathrm{sgn}(\tau)\left(\int_{0}^{\infty}\sqrt{\mu_{1}}e^{-\mu_{1}}L_{\sigma(1)-1}(\mu_{1})L_{\tau(1)-1}(\mu_{1})\mathrm{d}\mu_{1}\right)$
$\displaystyle\left(\int_{0}^{\infty}\sqrt{\mu_{2}}e^{-\mu_{2}}L_{\sigma(2)-1}(\mu_{2})L_{\tau(2)-1}(\mu_{2})\mathrm{d}\mu_{2}\right)\left(\prod_{k=3}^{N}\int_{0}^{\infty}e^{-\mu_{k}}L_{\sigma(k)-1}(\mu_{k})L_{\tau(k)-1}(\mu_{k})\mathrm{d}\mu_{k}\right)$
$\displaystyle=\sum_{\sigma\in
S_{N}}I_{\sigma(1)-1,\sigma(1)-1}^{(\frac{1}{2})}I_{\sigma(2)-1,\sigma(2)-1}^{(\frac{1}{2})}=(N-2)!\sum_{k\neq
l}I_{kk}^{(\frac{1}{2})}I_{ll}^{(\frac{1}{2})}$
$\displaystyle=(N-2)!\left[\left(\sum_{k=0}^{N-1}I_{kk}^{(\frac{1}{2})}\right)^{2}-\sum_{k=0}^{N-1}\left(I_{kk}^{(\frac{1}{2})}\right)^{2}\right].$
(30)
Case II: $\sigma\neq\tau$. First, note that if there exists
$k_{0}\in\{3,4,\cdots,N\}$ such that $\sigma(k_{0})\neq\tau(k_{0})$, then by
Eq. (22) we have
$\prod_{k=3}^{N}\int_{0}^{\infty}e^{-\mu_{k}}L_{\sigma(k)-1}(\mu_{k})L_{\tau(k)-1}(\mu_{k})\mathrm{d}\mu_{k}=0,$
so the corresponding term in the sum vanishes.
Otherwise, $\sigma(i)=\tau(i)$ for $i=3,\cdots,N$, which implies that
$\sigma(1)=\tau(2)$ and $\sigma(2)=\tau(1)$, i.e., $\tau=\sigma(12)$. Then we
have
$\displaystyle\sum_{\sigma,\tau\in
S_{N},\sigma\neq\tau}\mathrm{sgn}(\sigma)\mathrm{sgn}(\tau)\left(\int_{0}^{\infty}\sqrt{\mu_{1}}e^{-\mu_{1}}L_{\sigma(1)-1}(\mu_{1})L_{\tau(1)-1}(\mu_{1})\mathrm{d}\mu_{1}\right)$
$\displaystyle\left(\int_{0}^{\infty}\sqrt{\mu_{2}}e^{-\mu_{2}}L_{\sigma(2)-1}(\mu_{2})L_{\tau(2)-1}(\mu_{2})\mathrm{d}\mu_{2}\right)\left(\prod_{k=3}^{N}\int_{0}^{\infty}e^{-\mu_{k}}L_{\sigma(k)-1}(\mu_{k})L_{\tau(k)-1}(\mu_{k})\mathrm{d}\mu_{k}\right)$
$\displaystyle=\sum_{\sigma\in
S_{N}}(-1)I_{\sigma(1)-1,\sigma(2)-1}^{(\frac{1}{2})}I_{\sigma(2)-1,\sigma(1)-1}^{(\frac{1}{2})}=-(N-2)!\sum_{k\neq
l}\left(I_{kl}^{(\frac{1}{2})}\right)^{2}.$ (31)
Combining Eqs. (30) and (31), we have
$\displaystyle\int_{\mathbb{R}_{+}^{N}}\sqrt{\mu_{1}\mu_{2}}\mathrm{exp}\left(-\sum_{j=1}^{N}\mu_{j}\right)|\Delta(\mu)|^{2}\prod_{j=1}^{N}\mathrm{d}\mu_{j}$
$\displaystyle=\prod_{k=0}^{N-1}(k!)^{2}\left[(N-2)!\left(\left(\sum_{k=0}^{N-1}I_{kk}^{(\frac{1}{2})}\right)^{2}-\sum_{k=0}^{N-1}\left(I_{kk}^{(\frac{1}{2})}\right)^{2}\right)-(N-2)!\sum_{k\neq
l}\left(I_{kl}^{(\frac{1}{2})}\right)^{2}\right]$
$\displaystyle=(N-2)!\prod^{N}_{j=1}\Gamma(j)^{2}\left[\left(\sum_{k=0}^{N-1}I_{kk}^{(\frac{1}{2})}\right)^{2}-\sum_{k,l=0}^{N-1}\left(I_{kl}^{(\frac{1}{2})}\right)^{2}\right],$
(32)
where
$I_{kl}^{(\frac{1}{2})}=\sum_{r=0}^{\min(k,l)}(-1)^{k+l}\tbinom{\frac{1}{2}}{k-r}\tbinom{\frac{1}{2}}{l-r}\frac{\Gamma(\frac{3}{2}+r)}{r!}.$
$\Box$
## Appendix B: Proof of Theorem 4
Proof of Theorem 4. Since $\mathrm{d\mu_{HS}}$ is a normalized Hilbert-Schmidt
measure, by the definition of $C_{I}(\rho)$, we have
$\displaystyle\int_{\mathrm{D}(\mathbb{C}^{N})}C_{I}(\rho)\mathrm{d\mu_{HS}}(\rho)$
$\displaystyle=$
$\displaystyle\int_{\mathrm{D}(\mathbb{C}^{N})}\left[1-\sum_{k=1}^{N}\langle
k|\sqrt{\rho}|k\rangle^{2}\right]\mathrm{d\mu_{HS}}(\rho)$ (33)
$\displaystyle=$ $\displaystyle
1-\int_{\mathrm{D}(\mathbb{C}^{N})}\sum_{k=1}^{N}\langle k^{\otimes
2}|\sqrt{\rho}^{\otimes 2}|k^{\otimes 2}\rangle\mathrm{d\mu_{HS}}(\rho)$
$\displaystyle=$ $\displaystyle 1-\sum_{k=1}^{N}\left\langle k^{\otimes
2}\left|\int_{\mathrm{D}(\mathbb{C}^{N})}\sqrt{\rho}^{\otimes
2}\mathrm{d\mu_{HS}}(\rho)\right|k^{\otimes 2}\right\rangle.$
It suffices to compute the integral
$\int_{\mathrm{D}(\mathbb{C}^{N})}\sqrt{\rho}^{\otimes
2}\mathrm{d\mu_{HS}}(\rho).$ In fact, by the factorization in Eq. (7), it
follows that
$\displaystyle\int_{\mathrm{D}(\mathbb{C}^{N})}\sqrt{\rho}^{\otimes
2}\mathrm{d\mu_{HS}}(\rho)$ (34)
$\displaystyle=\int\mathrm{d\nu(\Lambda)}\int_{\mathrm{U(N)}}\left[(U\otimes
U)(\sqrt{\Lambda}\otimes\sqrt{\Lambda})(U\otimes
U)^{{\dagger}}\mathrm{d\mu_{Haar}}(U)\right].$
Using the following formula for integral over unitary groups [75]:
$\displaystyle\int_{\mathrm{U(N)}}(U\otimes U)A(U\otimes
U)^{{\dagger}}\mathrm{d\mu_{Haar}}(U)$
$\displaystyle=\left(\frac{\mathrm{Tr}(A)}{N^{2}-1}-\frac{\mathrm{Tr}(AF)}{N(N^{2}-1)}\right)\mathbf{1}_{N^{2}}-\left(\frac{\mathrm{Tr}(A)}{N(N^{2}-1)}-\frac{\mathrm{Tr}(AF)}{N^{2}-1}\right)F,$
(35)
where $A\in M_{N^{2}}(\mathbb{C})$ and $F$ is the swap operator defined by
$F|ij\rangle=|ji\rangle$ for all $i,j=1,2,\cdots,N$, we have
$\displaystyle\int_{\mathrm{U(N)}}(U\otimes
U)(\sqrt{\Lambda}\otimes\sqrt{\Lambda})(U\otimes
U)^{{\dagger}}\mathrm{d\mu_{Haar}}(U)=\frac{N(\mathrm{Tr}\sqrt{\Lambda})^{2}-1}{N(N^{2}-1)}\mathbf{1}_{N^{2}}+\frac{N-(\mathrm{Tr}\sqrt{\Lambda})^{2}}{N(N^{2}-1)}F.$
(36)
Noting that
$\displaystyle\int(\mathrm{Tr}\sqrt{\Lambda})^{2}\mathrm{d\nu(\Lambda)}$
$\displaystyle=$ $\displaystyle\int\mathrm{d\nu(\Lambda)}+2\int\sum_{1\leq
i<j\leq N}\sqrt{\lambda_{i}\lambda_{j}}\mathrm{d\nu(\Lambda)}$ (37)
$\displaystyle=$ $\displaystyle 1+2\int\sum_{1\leq i<j\leq
N}\sqrt{\lambda_{i}\lambda_{j}}\mathrm{d\nu(\Lambda)}$ $\displaystyle=$
$\displaystyle 1+2C_{\mathrm{HS}}^{N}\int_{\mathbb{R}_{+}^{N}}\sum_{1\leq
i<j\leq
N}\sqrt{\lambda_{i}\lambda_{j}}\delta\left(1-\sum_{j=1}^{N}\lambda_{j}\right)|\Delta(\lambda)|^{2}\prod_{j=1}^{N}\mathrm{d}\lambda_{j}$
$\displaystyle=$ $\displaystyle
1+2C_{\mathrm{HS}}^{N}\tbinom{N}{2}\int_{\mathbb{R}_{+}^{N}}\sqrt{\lambda_{1}\lambda_{2}}\delta\left(1-\sum_{j=1}^{N}\lambda_{j}\right)|\Delta(\lambda)|^{2}\prod_{j=1}^{N}\mathrm{d}\lambda_{j},$
where $C_{\mathrm{HS}}^{N}$ is given in Eq. (9), we only need to calculate
$\int_{\mathbb{R}_{+}^{N}}\sqrt{\lambda_{1}\lambda_{2}}\delta\left(1-\sum_{j=1}^{N}\lambda_{j}\right)|\Delta(\lambda)|^{2}\prod_{j=1}^{N}\mathrm{d}\lambda_{j}.$
Denote
$F(t)=\int_{\mathbb{R}_{+}^{N}}\sqrt{\lambda_{1}\lambda_{2}}\delta\left(t-\sum_{j=1}^{N}\lambda_{j}\right)|\Delta(\lambda)|^{2}\prod_{j=1}^{N}\mathrm{d}\lambda_{j}.$
By performing the Laplace transform $(t\rightarrow s)$ of $F(t)$, and letting
$\mu_{j}=s\lambda_{j},j=1,2,\cdots,N$, we get
$\displaystyle\tilde{F}(s)$ $\displaystyle=$
$\displaystyle\int_{\mathbb{R}_{+}^{N}}\sqrt{\lambda_{1}\lambda_{2}}\mathrm{exp}\left(-s\sum_{j=1}^{N}\lambda_{j}\right)|\Delta(\lambda)|^{2}\prod_{j=1}^{N}\mathrm{d}\lambda_{j}$
(38) $\displaystyle=$ $\displaystyle
s^{-(N^{2}+1)}\int_{\mathbb{R}_{+}^{N}}\sqrt{\mu_{1}\mu_{2}}\mathrm{exp}\left(-\sum_{j=1}^{N}\mu_{j}\right)|\Delta(\mu)|^{2}\prod_{j=1}^{N}\mathrm{d}\mu_{j}.$
Utilizing the inverse Laplace transform $(s\rightarrow
t):\mathscr{L}^{-1}(s^{\alpha})=\frac{t^{-\alpha-1}}{\Gamma(-\alpha)}$, we
obtain
$F(t)=\frac{t^{N^{2}}}{\Gamma(N^{2}+1)}\int_{\mathbb{R}_{+}^{N}}\sqrt{\mu_{1}\mu_{2}}\mathrm{exp}\left(-\sum_{j=1}^{N}\mu_{j}\right)|\Delta(\mu)|^{2}\prod_{j=1}^{N}\mathrm{d}\mu_{j}.$
(39)
Thus
$\displaystyle\int_{\mathbb{R}_{+}^{N}}\sqrt{\lambda_{1}\lambda_{2}}\delta\left(1-\sum_{j=1}^{N}\lambda_{j}\right)|\Delta(\lambda)|^{2}\prod_{j=1}^{N}\mathrm{d}\lambda_{j}$
$\displaystyle=\frac{1}{\Gamma(N^{2}+1)}\int_{\mathbb{R}_{+}^{N}}\sqrt{\mu_{1}\mu_{2}}\mathrm{exp}\left(-\sum_{j=1}^{N}\mu_{j}\right)|\Delta(\mu)|^{2}\prod_{j=1}^{N}\mathrm{d}\mu_{j}.$
(40)
Evaluating the integral on the right-hand side of Eq. (40) yields
$\displaystyle\int_{\mathbb{R}_{+}^{N}}\sqrt{\lambda_{1}\lambda_{2}}\delta\left(1-\sum_{j=1}^{N}\lambda_{j}\right)|\Delta(\lambda)|^{2}\prod_{j=1}^{N}\mathrm{d}\lambda_{j}$
$\displaystyle=\frac{(N-2)!\prod^{N}_{j=1}\Gamma(j)^{2}}{\Gamma(N^{2}+1)}\left[\left(\sum_{k=1}^{N}I_{kk}^{(\frac{1}{2})}\right)^{2}-\sum_{k,l=1}^{N}\left(I_{kl}^{(\frac{1}{2})}\right)^{2}\right],$
(41)
which by Eqs. (9) and (37) gives rise to
$\displaystyle\int(\mathrm{Tr}\sqrt{\Lambda})^{2}\mathrm{d\nu(\Lambda)}=1+\frac{1}{N^{2}}\left[\left(\sum_{k=1}^{N}I_{kk}^{(\frac{1}{2})}\right)^{2}-\sum_{k,l=1}^{N}\left(I_{kl}^{(\frac{1}{2})}\right)^{2}\right].$
(42)
Combining Eqs. (34), (36) and (42), we obtain
$\displaystyle\int_{\mathrm{D}(\mathbb{C}^{N})}\sqrt{\rho}^{\otimes
2}\mathrm{d\mu_{HS}}(\rho)$
$\displaystyle=\int\left[\frac{N(\mathrm{Tr}\sqrt{\Lambda})^{2}-1}{N(N^{2}-1)}\mathbf{1}_{N^{2}}+\frac{N-(\mathrm{Tr}\sqrt{\Lambda})^{2}}{N(N^{2}-1)}F\right]\mathrm{d\nu(\Lambda)}$
$\displaystyle=\frac{N\mathbf{1}_{N^{2}}-F}{N(N^{2}-1)}\int(\mathrm{Tr}\sqrt{\Lambda})^{2}\mathrm{d\nu(\Lambda)}+\frac{NF-\mathbf{1}_{N^{2}}}{N(N^{2}-1)}\int\mathrm{d\nu(\Lambda)}$
$\displaystyle=\frac{N\mathbf{1}_{N^{2}}-F}{N(N^{2}-1)}\left(1+\frac{1}{N^{2}}\left[\left(\sum_{k=1}^{N}I_{kk}^{(\frac{1}{2})}\right)^{2}-\sum_{k,l=1}^{N}\left(I_{kl}^{(\frac{1}{2})}\right)^{2}\right]\right)+\frac{NF-\mathbf{1}_{N^{2}}}{N(N^{2}-1)}.$
Finally, using the fact that $\sum_{k=1}^{N}\langle k^{\otimes
2}|F|k^{\otimes 2}\rangle=N,$ we have
$\sum_{k=1}^{N}\langle k^{\otimes 2}|N\mathbf{1}_{N^{2}}-F|k^{\otimes
2}\rangle=\sum_{k=1}^{N}\langle k^{\otimes 2}|NF-\mathbf{1}_{N^{2}}|k^{\otimes
2}\rangle=N^{2}-N,$ so that dividing by the common prefactor gives $\frac{N^{2}-N}{N(N^{2}-1)}=\frac{1}{N+1}.$
From Eq. (33) we get (19). $\Box$
## References
* [1] Streltsov A, Adesso G and Plenio M B 2017 Colloquium: Quantum coherence as a resource Rev. Mod. Phys. 89 041003
* [2] Baumgratz T, Cramer M and Plenio M B 2014 Quantifying coherence Phys. Rev. Lett. 113 140401
* [3] Napoli C, Bromley T R, Cianciaruso M, Piani M, Johnston N and Adesso G 2016 Robustness of Coherence: An operational and observable measure of quantum coherence Phys. Rev. Lett. 116 150502
* [4] Bu K, Singh U, Fei S M, Pati A K and Wu J 2017 Maximum relative entropy of coherence: an operational coherence measure Phys. Rev. Lett. 119 150405
* [5] Streltsov A, Singh U, Dhar H S, Bera M N and Adesso G 2015 Measuring quantum coherence with entanglement Phys. Rev. Lett. 115 020403
* [6] Xiong C and Wu J 2018 Geometric coherence and quantum state discrimination J. Phys. A: Math. Theor. 51 414005
* [7] Shao L H, Xi Z, Fan H and Li Y 2015 Fidelity and Trace-Norm Distances for Quantifying Coherence Phys. Rev. A 91 042120
* [8] Rana S, Parashar P and Lewenstein M 2016 Trace-distance measure of coherence Phys. Rev. A 93 012110
* [9] Yu X D, Zhang D J, Xu G F and Tong D M 2016 Alternative framework for quantifying coherence Phys. Rev. A 94 060302(R)
* [10] Chen B and Fei S M 2018 Notes on modified trace distance measure of coherence Quantum Inf. Process. 17 107
* [11] Girolami D 2014 Observable Measure of Quantum Coherence in Finite Dimensional Systems Phys. Rev. Lett. 113 170401
* [12] Luo S and Sun Y 2017 Quantum coherence versus quantum uncertainty Phys. Rev. A 96 022130
* [13] Luo S and Sun Y 2017 Partial coherence with application to the monotonicity problem of coherence involving skew information Phys. Rev. A 96 022136
* [14] Yu C S 2017 Quantum coherence via skew information and its polygamy Phys. Rev. A 95 042337
* [15] Luo S and Sun Y 2018 Coherence and complementarity in state-channel interaction Phys. Rev. A 98 012113
* [16] Bu K, Anand N and Singh U 2018 Asymmetry and coherence weight of quantum states Phys. Rev. A 97 032342
* [17] Xiong C, Kumar A and Wu J 2018 Family of coherence measure and duality between quantum coherence and path distinguishability Phys. Rev. A 98 032324
* [18] Xiong C, Kumar A, Huang M, Das S, Sen U and Wu J 2019 Partial coherence and quantum correlation with fidelity and affinity distances Phys. Rev. A 99 032305
* [19] Zhu X N, Jin Z X and Fei S M 2019 Quantifying quantum coherence based on the generalized $\alpha$-$z$-relative Rényi entropy Quantum Inf. Process. 18 179
* [20] Xi Z and Yuwen S 2019 Coherence measure: Logarithmic coherence number Phys. Rev. A 99 022340
* [21] Winter A and Yang D 2016 Operational resource theory of coherence Phys. Rev. Lett. 116 120404
* [22] Chitambar E, Streltsov A, Rana S, Bera M N, Adesso G and Lewenstein M 2016 Assisted distillation of quantum coherence Phys. Rev. Lett. 116 070402
* [23] Regula B, Fang K, Wang X and Adesso G 2018 One-shot coherence distillation Phys. Rev. Lett. 121 010401
* [24] Zhao Q, Liu Y, Yuan X, Chitambar E and Winter A 2019 IEEE Trans. Inf. Theory 65 6441
* [25] Fang K, Wang X, Lami L, Regula B and Adesso G 2018 Probabilistic distillation of quantum coherence Phys. Rev. Lett. 121 070404
* [26] Liu C L and Zhou D L 2019 Deterministic coherence distillation Phys. Rev. Lett. 123 070402
* [27] Lami L, Regula B and Adesso G 2019 Generic bound coherence under strictly incoherent operations Phys. Rev. Lett. 122 150402
* [28] Zhao J M, Ma T, Quan Q, Fan H and Pereira R 2019 $l_{1}$-norm coherence of assistance Phys. Rev. A 100 012315
* [29] Zhao Q, Liu Y, Yuan X, Chitambar E and Ma X 2018 One-shot coherence dilution Phys. Rev. Lett. 120 070403
* [30] Lostaglio M and Müller M P 2019 Coherence and asymmetry cannot be broadcast Phys. Rev. Lett. 123 020403
* [31] Marvian I and Spekkens R W 2019 No-broadcasting theorem for quantum asymmetry and coherence and a trade-off relation for approximate broadcasting Phys. Rev. Lett. 123 020404
* [32] Chitambar E and Hsieh M H 2016 Relating the resource theories of entanglement and quantum coherence Phys. Rev. Lett. 117 020402
* [33] Zhu H, Ma Z, Cao Z, Fei S M and Vedral V 2017 Operational one-to-one mapping between coherence and entanglement measures Phys. Rev. A 96 032316
* [34] Xi Y, Zhang T , Zheng Z J, Li-Jost X and Fei S M 2019 Converting quantum coherence to genuine multipartite entanglement and nonlocality Phys. Rev. A 100 022310
* [35] Ma J, Yadin B, Girolami D, Vedral V and Gu M 2016 Converting coherence to quantum correlations Phys. Rev. Lett. 116 160407
* [36] Sun Y, Mao Y and Luo S 2017 From quantum coherence to quantum correlations Europhys. Lett. 118 60007
* [37] Hu M L, Hu X, Wang J, Peng Y, Zhang X R and Fan H 2018 Quantum coherence and geometric quantum discord Phys. Rep. 762-764 1
* [38] Kim S, Li L, Kumar A and Wu J 2018 Interrelation between partial coherence and quantum correlations Phys. Rev. A 98 022306
* [39] Wu K D, Hou Z, Zhao Y Y, Xiang G Y, Li C F, Guo G C, Ma J, He Q Y, Thompson J and Gu M 2018 Experimental cyclic interconversion between coherence and quantum correlations Phys. Rev. Lett. 121 050401
* [40] Guo Z and Cao H 2019 Creating quantum correlation from coherence via incoherent quantum operations J. Phys. A: Math. Theor. 52 265301
* [41] Bu K, Kumar A, Zhang L and Wu J 2017 Cohering power of quantum operations Phys. Lett. A 381 1670
* [42] Du S, Bai Z and Qi X 2019 Coherence Manipulation under incoherent operations Phys. Rev. A 100 032313
* [43] Wigner E P and Yanase M M 1963 Information contents of distributions Proc. Natl. Acad. Sci. USA 49 910
* [44] Cheng S and Hall M J W 2015 Complementarity relations for quantum coherence Phys. Rev. A 92 042101
* [45] Luo S and Sun Y 2019 Average versus maximal coherence Phys. Lett. A 383 2869
* [46] Collins B and Nechita I 2016 Random matrix techniques in quantum information theory J. Math. Phys. 57 015215
* [47] Ledoux M 2015 The Concentration of Measure Phenomenon (American Mathematical Society, Providence, RI)
* [48] Hayden P, Leung D, Shor P W and Winter A 2004 Randomizing quantum states: Constructions and applications Commun. Math. Phys. 250 371
* [49] Hayden P, Leung D W and Winter A 2006 Aspects of Generic Entanglement Commun. Math. Phys. 265 95
* [50] Page D N 1993 Average entropy of a subsystem Phys. Rev. Lett. 71 1291
* [51] Foong S K and Kanno S 1994 Proof of Page’s Conjecture on the average entropy of a subsystem Phys. Rev. Lett. 72 1148
* [52] Sánchez-Ruiz J 1995 Simple proof of Page’s conjecture on the average entropy of a subsystem Phys. Rev. E 52 5653
* [53] Sen S 1996 Average entropy of a quantum subsystem Phys. Rev. Lett. 77 1
* [54] Malacarne L C, Mendes R S and Lenzi E K 2002 Average entropy of a subsystem from its average Tsallis entropy Phys. Rev. E 65 046131
* [55] Datta A 2010 Negativity of random pure states Phys. Rev. A 81 052312
* [56] Hamma A, Santra S and Zanardi P 2012 Quantum entanglement in random physical states Phys. Rev. Lett. 109 040502
* [57] Dahlsten O C O, Lupo C, Mancini S and Serafini A 2014 Entanglement typicality J. Phys. A: Math. Theor. 47 363001
* [58] Zhang L and Xiang H 2017 Average entropy of a subsystem over a global unitary orbit of a mixed bipartite state Quantum Inf. Process. 16 112
* [59] Werner R F and Holevo A S 2002 Counterexample to an additivity conjecture for output purity of quantum channels J. Math. Phys. 43 4353
* [60] Scott A J and Caves C M 2003 Entangling power of the quantum baker’s map J. Phys. A: Math. Gen. 36 9553
* [61] Singh U, Zhang L and Pati A K 2016 Average coherence and its typicality for random pure states Phys. Rev. A 93 032125
* [62] Zhang L 2017 Average coherence and its typicality for random mixed quantum states J. Phys. A: Math. Theor. 50 155303
* [63] Zhang L, Singh U and Pati A K 2017 Average subentropy, coherence and entanglement of random mixed quantum states Ann. Phys. 377 125
* [64] Zhang L and Wang J 2018 Average of uncertainty product for bounded observables Open Syst. Inf. Dyn. 25(2) 1850008
* [65] Nielsen M A and Chuang I L 2000 Quantum Computation and Quantum Information (Cambridge University Press, Cambridge)
* [66] Du S and Bai Z 2015 The Wigner-Yanase information can increase under phase sensitive incoherent operations Ann. Phys. (NY) 359 136
* [67] Marvian I, Spekkens R W and Zanardi P 2016 Quantum speed limits, coherence and asymmetry Phys. Rev. A 93 052331
* [68] Ó Searcóid M 2007 Metric Spaces (Springer-Verlag, London)
* [69] Wilde M M 2013 Quantum Information Theory (Cambridge University Press, Cambridge, UK)
* [70] Korn G A and Korn T M 2000 Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review (Dover Publications, Dover)
* [71] Bengtsson I and Życzkowski K 2017 Geometry of Quantum States: An Introduction to Quantum Entanglement 2nd ed (Cambridge University Press, Cambridge)
* [72] Życzkowski K and Sommers H J 2001 Induced measures in the space of mixed quantum states J. Phys. A : Math. Gen. 34 7111
* [73] Ginibre J 1965 Statistical ensembles of complex, quaternion, and real matrices J. Math. Phys. 6 440
* [74] Mehta M 1991 Random Matrices 2nd ed (Academic Press, New York)
* [75] Zhang L Matrix integrals over unitary groups: An application of Schur-Weyl duality arXiv:1408.3782
Chan, Har-Peled, and Jones [2020] recently developed locality-sensitive ordering (LSO), a new tool that allows one to reduce problems in the Euclidean space $\mathbb{R}^d$ to the $1$-dimensional line. They used LSO's to solve a host of problems.
Later, Buchin, Har-Peled, and Oláh [2019,2020] used the LSO of Chan et al. to construct very sparse reliable spanners for the Euclidean space. A highly desirable feature of a reliable spanner is its ability to withstand a massive failure: the network remains functioning even if 90% of the nodes fail.
In a follow-up work, Har-Peled, Mendel, and Oláh [2021] constructed reliable spanners for general and topologically structured metrics. Their construction uses a different approach, based on sparse covers.
In this paper, we develop the theory of LSO's in non-Euclidean metrics by introducing new types of LSO's suitable for general and topologically structured metrics.
We then construct such LSO's, as well as considerably improved LSO's for doubling metrics.
Afterwards, we use our new LSO's to construct reliable spanners with improved stretch and sparsity parameters.
Most prominently, we construct $\tilde{O}(n)$-size reliable spanners for trees and planar graphs with the optimal stretch of $2$.
Along the way to constructing LSO's and reliable spanners, we introduce and construct ultrametric covers, and build $2$-hop reliable spanners for the line.
§ INTRODUCTION
The Algorithmist's toolkit consists of diverse “tools” frequently utilized for many different problems. In the geometric context, some tools apply to general metric spaces such as metric embeddings <cit.> and padded decompositions <cit.>, while many tools apply mainly to Euclidean spaces, such as dimension reduction <cit.>, locality-sensitive hashing <cit.>, well-separated pair decomposition (WSPD) <cit.>, and many others. Recently, Chan, Har-Peled, and Jones <cit.> developed a new and exciting tool for Euclidean spaces called Locality-Sensitive Ordering (LSO).
Given a metric space $(X,d_{X})$, we say that a collection $\Sigma$
of orderings is a $(\tau,\rho)$-LSO (locality-sensitive ordering) if
$\left|\Sigma\right|\le\tau$, and for every $x,y\in X$, there is
a linear ordering $\sigma\in\Sigma$ such that (w.l.o.g.) $x\prec_{\sigma}y$ and the points between $x$ and $y$ w.r.t. $\sigma$ could be partitioned
into two consecutive intervals $I_{x},I_{y}$ where $I_{x}\subseteq B_{X}(x,\rho\cdot d_{X}(x,y))$ and $I_{y}\subseteq B_{X}(y,\rho\cdot d_{X}(x,y))$. Parameter $\rho$ is called the stretch parameter.
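To make the quantifiers of this definition concrete, here is a small brute-force verifier (illustrative only; the point set and orderings below are arbitrary toy choices, not from the paper). On the real line, the single sorted ordering is an LSO, since the points between $x$ and $y$ can always be split at their midpoint:

```python
import itertools

def is_lso(n, dist, orderings, rho):
    """Brute-force check of the (tau, rho)-LSO property: for every pair x, y,
    some ordering must allow splitting the points lying between x and y into
    a prefix I_x within B(x, rho*d(x,y)) and a suffix I_y within
    B(y, rho*d(x,y))."""
    for x, y in itertools.combinations(range(n), 2):
        d = dist(x, y)
        found = False
        for sigma in orderings:
            i, j = sorted((sigma.index(x), sigma.index(y)))
            a, b = sigma[i], sigma[j]       # roles of x and y in this ordering
            between = sigma[i + 1:j]
            # try every split point of the in-between points
            for t in range(len(between) + 1):
                if all(dist(p, a) <= rho * d for p in between[:t]) and \
                   all(dist(p, b) <= rho * d for p in between[t:]):
                    found = True
                    break
            if found:
                break
        if not found:
            return False
    return True

# Points on the line; the sorted ordering [0, 1, 2, 3, 4] is a (1, 1/2)-LSO.
pts = [0.0, 1.0, 3.0, 7.0, 12.0]
line = lambda u, v: abs(pts[u] - pts[v])
```

Shuffling the ordering (e.g., swapping two non-adjacent points) breaks the property, which the checker detects.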
The main reason that LSO has become an extremely useful tool is that it reduces the problem at hand in the $d$-dimensional Euclidean space to the same problem in a much simpler space: the $1$-dimensional line.
<cit.> constructed an $(O(\epsilon)^{-d}\log\frac{1}{\epsilon},\eps)$-LSO for any given set of points in the $d$-dimensional Euclidean space $\mathbb{R}^d$ (more generally, Chan <cit.> constructed $\left(O(\epsilon^{-1}\cdot \log n)^{O(d)},\eps\right)$-LSO for metric spaces with doubling dimension[A metric space $(X, d)$ has doubling dimension $d$ if every ball of radius $2r$ can be covered by $2^{d}$ balls of radius $r$.] $d$).
They used their LSO to design simple dynamic algorithms for approximate nearest neighbor search, approximate bichromatic closest pair, approximate MST, spanners, and fault-tolerant spanners. Afterwards, Buchin, Har-Peled, and Oláh <cit.> used the LSO of Chan <cit.> to construct reliable spanners (see <Ref>) for Euclidean spaces following the same methodology: reducing the problem to the construction on the line. In this work, we introduce new notions of LSO and apply them to construct reliable spanners for non-Euclidean metrics.
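To convey the flavor of these reductions: on the line itself, the sorted order is trivially a locality-sensitive ordering, and nearest-neighbor search degenerates to predecessor search; in the LSO-based schemes above, one would inspect the neighbors of the query in each of the $\tau$ orderings and keep the best candidate. A toy sketch (the point set is made up):

```python
import bisect

# Points on the line; the sorted order plays the role of the (single) ordering.
pts = sorted([0.5, 2.0, 3.1, 7.4, 9.9])

def nearest(q):
    # In a general LSO-based scheme one would inspect the O(1) neighbors of q
    # in each of the tau orderings and return the closest candidate; on the
    # line a single ordering suffices and the answer is exact.
    i = bisect.bisect_left(pts, q)
    cands = [pts[j] for j in (i - 1, i) if 0 <= j < len(pts)]
    return min(cands, key=lambda p: abs(p - q))
```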
Given a metric space $(X,d_X)$, a $t$-spanner is a weighted graph $H=(X,E,w)$ over[Often in the literature, the metric space $(X,d_X)$ is the shortest path metric of a graph $G$, and there is a requirement that $H$ will be a subgraph of $G$. We will not have such a requirement in this paper.] $X$ where for every pair of points $x,y\in X$, $d_X(x,y)\le d_H(x,y)\le t\cdot d_X(x,y)$, with $d_H$ being the shortest path metric of $H$. The parameter $t$ is called the stretch of the spanner. A highly desirable property of a $t$-spanner is the ability to withstand extensive vertex failures. Levcopoulos, Narasimhan, and Smid <cit.> introduced the notion of a fault-tolerant spanner. A subgraph $H=(V,E_H,w)$ is an $f$-vertex-fault-tolerant $t$-spanner of a weighted graph $G=(V,E,w)$, if for every set $F\subset V$ of at most $f$ vertices, it holds that $\forall u,v\notin F$, $d_{H\setminus F}(u,v)\le t\cdot d_{G\setminus F}(u,v)$. A major limitation of fault-tolerant spanners is that the number of failures must be determined in advance; in particular, such spanners cannot withstand a massive failure.
One can imagine a scenario where a significant portion (even 90%) of a network fails and ceases to function (due to, e.g., a shutdown during a pandemic). In such a scenario, it is important that the remaining parts of the network (or at least most of them) remain highly connected and functioning. To this end, Bose <cit.> introduced the notion of a reliable spanner. Here, given a failure set $B\subseteq X$, the residual spanner $H\setminus B$ is a $t$-spanner for $X\setminus B^+$, where $B^+\supseteq B$ is a set slightly larger than $B$.
Buchin <cit.> relaxed the notion of reliable spanners by allowing the size of $B^+$ to be bounded only in expectation.
A weighted graph $H$ over point set $X$ is a deterministic $\nu$-reliable $t$-spanner
of a metric space $(X,d_{X})$ if $d_{H}$ dominates
[Metric space $(X,d_H)$ dominates metric space $(X,d_X)$ if $\forall u,v\in X$, $d_X(u,v)\le d_H(u,v)$.]
$d_{X}$, and for every
set $B\subseteq X$ of points, called an attack set, there is a set $B^{+}\supseteq B$, called a faulty extension of $B$,
such that:
* $|B^{+}|\le(1+\nu)|B|$.
* For every $x,y\notin B^{+}$, $d_{H[X\setminus B]}(x,y)\le t\cdot d_{X}(x,y)$.
An oblivious $\nu$-reliable $t$-spanner is a distribution $\mathcal{D}$ over dominating graphs $H$, such that for every attack set $B\subseteq X$ and $H\in\supp(\mathcal{D})$,
there exists a superset $B^{+}$ of $B$ such that, for
every $x,y\notin B^{+}$, $d_{H[X\setminus B]}(x,y)\le t\cdot d_{X}(x,y)$,
and $\mathbb{E}_{H\sim\mathcal{D}}\left[|B^{+}|\right]\le(1+\nu)|B|$. We say that the oblivious spanner $\mathcal{D}$ has $m$ edges if every graph $H\in\supp(\mathcal{D})$ has at most $m$ edges.
We call the distribution $\mathcal{D}$ in <Ref> an oblivious $\nu$-reliable $t$-spanner because the adversary is oblivious to the specific spanner produced by the distribution (though it may be aware of the distribution itself).
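The definition can be made concrete with a small brute-force verifier: given a spanner $H$, an attack $B$, a candidate faulty extension $B^{+}$, and a stretch bound $t$, condition 2 is checked by running Dijkstra in $H[X\setminus B]$. The graphs and metric below are toy choices for illustration only:

```python
import heapq

def survives_attack(n, edges, dist, B, B_plus, t):
    """Check that in H[X \\ B] every pair outside B^+ has stretch at most t.
    edges: {(u, v): weight}; dist(u, v): the underlying metric."""
    alive = [v for v in range(n) if v not in B]
    adj = {v: [] for v in alive}
    for (u, v), w in edges.items():
        if u not in B and v not in B:
            adj[u].append((v, w))
            adj[v].append((u, w))
    good = [v for v in alive if v not in B_plus]
    for s in good:
        # Dijkstra from s in the surviving graph
        d = {v: float("inf") for v in alive}
        d[s] = 0.0
        pq = [(0.0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d[u]:
                continue
            for v, w in adj[u]:
                if du + w < d[v]:
                    d[v] = du + w
                    heapq.heappush(pq, (d[v], v))
        if any(d[v] > t * dist(s, v) for v in good if v != s):
            return False
    return True

# Four points on the line; the complete graph survives any attack with
# B^+ = B, while the path graph does not survive removal of an interior
# vertex unless B^+ swallows one side.
pts = [0.0, 1.0, 2.0, 3.0]
metric = lambda u, v: abs(pts[u] - pts[v])
clique = {(u, v): metric(u, v) for u in range(4) for v in range(u + 1, 4)}
path = {(i, i + 1): 1.0 for i in range(3)}
```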
For constant dimensional Euclidean spaces, Bose <cit.> constructed a deterministic reliable $O(1)$-spanner, such that for every attack $B$, the faulty extension $B^+$ contains at most $O(|B|^2)$ vertices. The construction of reliable spanners where the size of $B^+$ is a linear function of $B$ was left as an open question.
For every $\nu,\eps\in(0,1)$, and $n$ points in $d$-dimensional Euclidean space $(\R^d,\|\cdot\|_2)$, Buchin <cit.> used the LSO of Chan <cit.> to construct a deterministic $\nu$-reliable $(1+\eps)$-spanner with $n\cdot\nu^{-6}\cdot\tilde{O}(\epsilon)^{-7d}\cdot\tilde{O}(\log n)$
edges (see also <cit.>).
Later, for the oblivious case, Buchin <cit.> applied the same LSO to construct an oblivious $\nu$-reliable $(1+\eps)$-spanner with
$n\cdot\tilde{O}(\epsilon)^{-2d}\cdot\tilde{O}(\nu^{-1}(\log\log n)^{2})$ edges.
Very recently, Har-Peled, Mendel, and Oláh <cit.> constructed reliable spanners for general metric spaces, as well as for topologically structured spaces (e.g. trees and planar graphs). They showed that for every integer $k$, every general $n$-point metric space admits an oblivious $\nu$-reliable $(512\cdot k)$-spanner
[<cit.> did not compute the constant explicitly. Their construction is based on the Ramsey-trees of Mendel-Naor <cit.>, which have stretch $128k$. Using state of the art Ramsey trees <cit.> of stretch $2ek$ instead (see also <cit.>), the approach of <cit.> provides stretch $8ek$.]
with $n^{1+1/k}\cdot O(\nu^{-1}k\log^{2}\Phi\log\frac{n}{\nu})$ edges, where $\Phi=\frac{\max_{x,y}d_X(x,y)}{\min_{x,y}d_X(x,y)}$ is the aspect ratio of the metric space (also known as the spread, which a priori is unbounded). Additionally, they showed that ultrametrics (see <Ref>) admit oblivious $\nu$-reliable $(2+\eps)$-spanners with $n\cdot \tilde{O}(\nu^{-1}\epsilon^{-2}\log^{2}\Phi)$ edges,
tree metrics admit oblivious $\nu$-reliable $(3+\eps)$-spanners with $n\cdot \tilde{O}(\nu^{-1}\epsilon^{-2}\log^2n\log^{2}\Phi)$ edges,
and planar metrics admit oblivious $\nu$-reliable $(3+\eps)$-spanners with $n\cdot \tilde{O}(\nu^{-1}\epsilon^{-4}\log^{2}\Phi)$ edges (see <Ref>).
The reliable spanner constructions of Har-Peled <cit.> are based on sparse covers. A $(\tau,\rho)$-sparse cover is a collection $\mathcal{C}$ of clusters such that every point belongs to at most $\tau$ clusters, and for every pair $x,y\in X$, there is a cluster $C\in\mathcal{C}$ containing both $x,y\in C$ where $\diam(C)\le \rho\cdot d_X(x,y)$; $\rho$ is called the stretch of the cover $\mathcal{C}$. They then treat each cluster in $\mathcal{C}$ as a uniform metric, construct a reliable spanner for each cluster, and return the union of all the constructed spanners.
Thus the main task becomes constructing a reliable spanner for the uniform metric.
Specifically, instead of the oblivious $\nu$-reliable $1$-spanner for the line constructed in <cit.>, Har-Peled <cit.> constructed an oblivious $\nu$-reliable $2$-spanner for the uniform metric, which is the best stretch possible for subquadratic size spanners (see <Ref>).
Indeed, this additional factor $2$ appears in the stretch parameter in all the spanners in <cit.>. Most prominently, for trees they constructed an $(O(\eps^{-1}\log\Phi\log n),2+\eps)$-sparse cover, resulting in a stretch $4+\eps$ spanner, [With an additional effort, <cit.> reduced the stretch of the spanner to $3+\eps$. This analysis is tight, and their technique cannot give a reliable spanner with a stretch factor smaller than $3$.] while the natural lower bound is $2$ (<Ref>).
A similar phenomenon occurs for planar graphs.
An additional drawback in the sparse cover based approach of <cit.> is its dependency on the aspect ratio $\Phi$ (which a priori can be unbounded). This dependency on the aspect ratio is inherent in their technique and cannot be avoided (see Lemma 20 in <cit.>).
§.§ Our contribution
Our major contribution is to the theory of locality-sensitive orderings. Specifically, we significantly improve the parameters of LSO in doubling metrics $^{\ref{foot:doubling}}$, and extend the idea of LSO to general metrics, as well as to topologically structured metrics. This is done by introducing left-sided LSO and triangle-LSO (see <Ref>).
LSO's are a powerful tool enabling one to reduce many problems to the line. LSO's already have many applications in computational geometry <cit.>; we expect that our LSO for doubling metrics, as well as those for general and topologically structured graphs, will find many additional applications in the future.
Next, we use these newly introduced LSO's (or improved in the case of doubling) to construct oblivious reliable spanners. Our constructions have smaller stretch (optimal in the case of topologically structured metrics) and smaller sparsity (see <Ref>). Below we describe each type of LSO in detail, and which spanners it was used to construct.
Our constructions of LSO for general and doubling metrics are going through the construction of ultrametric covers. An ultrametric cover is a collection of dominating ultrametrics such that the distance between every pair of points is well approximated by some ultrametric in the collection.
We construct the first ultrametric cover for doubling metrics with stretch $1+\eps$ (previously only tree covers were known), and improve the stretch parameter in the ultrametric covers of general metrics (see <Ref>).
Finally, a crucial ingredient in constructing reliable spanners using LSO's is a reliable spanner for the line. Buchin <cit.> constructed such spanners; however, their spanners have $\Omega(\log n)$ hops, which would incur an additional $\log n$ factor in the stretch (in all cases other than Euclidean/doubling). To avoid this overhead, we construct a $2$-hop reliable spanner for the line, as well as a $2$-hop left spanner, a newly defined type of spanner suitable for our left-sided LSO (see <Ref>).
See <Ref> for a graphic illustration of how the different parts in the paper are related.
Finally, we answer an open question by Har-Peled <cit.> regarding sub-graph reliable spanners, by providing matching upper and lower bounds for reliable connectivity preservers.
Relationships between different concepts; new concepts introduced in this paper are green-shaded.
LSO type | Metric space | # of orderings ($\tau$) | Stretch | Ref
(Classic) LSO | Euclidean space $\mathbb{R}^d$ | $O(\epsilon)^{-d}\cdot \log\frac{1}{\epsilon}$ | $\eps$ | <cit.>
(Classic) LSO | Doubling dimension $d$ | $O(\epsilon^{-1}\cdot \log n)^{O(d)}$ | $\eps$ | <cit.>
(Classic) LSO | Doubling dimension $d$ | $\eps^{-O(d)}$ | $\eps$ | <Ref>
Triangle-LSO | General metric | $\tilde{O}(n^{1/k}\cdot\eps^{-1})$ | $2k+\eps$ | <Ref>
Triangle-LSO | Ultrametric | $1$ | $1$ | <Ref>
Left-sided LSO | Tree | $\log n$ | $1$ | <Ref>
Left-sided LSO | Treewidth $k$ | $k\cdot \log n$ | $1$ | <Ref>
Left-sided LSO | Planar graph | $\frac{1}{\eps}\cdot \log^2 n$ | $1+\eps$ | <Ref>
Left-sided LSO | Minor-free | $\frac{1}{\eps}\cdot \log^2 n$ | $1+\eps$ | <Ref>
Summary of all known results on the different types of locality-sensitive orderings (LSO). $k\in\N$ is an integer, and $\eps\in(0,1)$ is an arbitrarily small parameter.
Family | Stretch | Guarantee | Size | Ref
$(\R^d,\|\cdot\|_2)$ | $O(1)$ | Deterministic | $\Omega(n\log n)$ |
$(\R^d,\|\cdot\|_2)$ | $1+\eps$ | Deterministic | $n\cdot\tilde{O}(\epsilon)^{-7d}\nu^{-6}\cdot\tilde{O}(\log n)$ |
$(\R^d,\|\cdot\|_2)$ | $1+\eps$ | Oblivious | $n\cdot\tilde{O}(\epsilon)^{-2d}\cdot\tilde{O}(\nu^{-1}(\log\log n)^{2})$ | <cit.>
Doubling dimension $d$ | $1+\eps$ | Deterministic | $n\cdot\epsilon^{-O(d)}\nu^{-6}\cdot\tilde{O}(\log n)$ |
Doubling dimension $d$ | $1+\eps$ | Oblivious | $n\cdot\epsilon^{-O(d)}\nu^{-1}\log\nu^{-1}\cdot\tilde{O}(\log\log n)^{2}$ |
General metric | $512\cdot k$ $^{\ref{foot:MN07}}$ | Oblivious | $\tilde{O}(n^{1+\nicefrac{1}{k}}\cdot\nu^{-1})\cdot\log^{2}\Phi$ | <cit.>
General metric | $8k+\eps$ | Oblivious | $\tilde{O}(n^{1+\nicefrac{1}{k}}\cdot\eps^{-2})\cdot\nu^{-1}$ | <Ref>
Tree, planar | $k$ | Deterministic | $\Omega(n^{1+1/k})$ | <cit.>
Tree, planar | $k<2$ | Oblivious | $\Omega(n^2)$ | <Ref>
Ultrametric | $2+\eps$ | Oblivious | $n\cdot \tilde{O}(\nu^{-1}\epsilon^{-2}\log^{2}\Phi)$ | <cit.>
Ultrametric | $2$ (tight) | Oblivious | $n\cdot \tilde{O}\left(\log^{2}n+\nu^{-1}\log n\right)$ | <Ref>
Tree | $3+\eps$ | Oblivious | $n\cdot \tilde{O}(\nu^{-1}\epsilon^{-2}\log^{2}n\log^{2}\Phi)$ | <cit.>
Tree | $2$ (tight) | Oblivious | $n\cdot O(\nu^{-1}\log^{3}n)$ | <Ref>
Treewidth $k$ | $2$ (tight) | Oblivious | $n\cdot O(\nu^{-1}k^{2}\log^{3}n)$ | <Ref>
Planar | $3+\eps$ | Oblivious | $n\cdot \tilde{O}(\nu^{-1}\epsilon^{-4}\log^{2}n\log^{2}\Phi)$ | <cit.>
Planar | $2+\eps$ (tight) | Oblivious | $n\cdot O(\nu^{-1}\eps^{-2}\log^{5}n)$ | <Ref>
Minor-free | $2+\eps$ (tight) | Oblivious | $n\cdot O(\nu^{-1}\eps^{-2}\log^{5}n)$ | <Ref>
Comparison between previous and new constructions of reliable spanners. All spanners (except <cit.>) are constructed on $n$-point metric spaces with reliability parameter $\nu$.
For doubling metrics, we recover the same strong results previously known only for Euclidean space (up to a polynomial dependence on $\eps$).
Both lower bounds hold for the uniform metric (which is a sub-metric of a star metric).
For all other metric spaces,
we improve both stretch and sparsity, and remove the undesirable dependence on the aspect ratio (spread) $\Phi$.
Most notably, for trees the stretch was improved from $3+\eps$ to the best possible stretch $2$, and for planar graphs to $2+\eps$.
Finally, for general graphs, our spanner has stretch $8k$, considerably improving the constant hidden in <cit.>. This constant is highly important, as it governs the parameter in the power of $n$.
Classic LSO. Chan <cit.> constructed a $\left((\eps^{-1}\cdot \log n)^{O(d)},\eps\right)$-LSO for metric spaces of doubling dimension $d$. Applying the (implicit) framework of <cit.> yields a reliable spanner with $n\cdot(\eps^{-1}\cdot \log n)^{O(d)}$ edges.
In this work, we completely remove the dependency on $n$ of the number of orderings. Specifically, we construct an $\left(\epsilon^{-O(d)},\epsilon\right)$-LSO for doubling metrics (<Ref>); this immediately implies reliable spanners for metric spaces of doubling dimension $d$ with the same performance, up to the dependency on $\eps$, as for Euclidean spaces (<Ref>).
Triangle LSO. A $(\tau,\rho)$-triangle LSO for a metric space $(X,d_X)$ is a collection $\Sigma$ of at most $\tau$ linear orderings over $X$, such that for every $x,y\in X$, there is an ordering $\sigma\in\Sigma$ such that for every two points $z,q\in X$ satisfying $x\preceq_{\sigma}z\preceq_{\sigma}q\preceq_{\sigma}y$ (or $y\preceq_{\sigma}z\preceq_{\sigma}q\preceq_{\sigma}x$) it holds that $d_X(z,q)\le \rho\cdot d_X(x,y)$ (<Ref>).
Note that every $(\tau,\rho)$-triangle LSO is also a $(\tau,\rho)$-LSO; however, a $(\tau,\rho)$-LSO is only a $(\tau,2\rho+1)$-triangle LSO (by the triangle inequality). Hence a triangle-LSO is preferable to the classic LSO.
We observe that ultrametrics admit a $(1,1)$-triangle-LSO (<Ref>), and show that general $n$-point metric spaces admit an $\left(\tilde{O}(n^{\frac{1}{k}}\cdot\eps^{-1}),2k+\eps\right)$-triangle-LSO (<Ref>).
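For intuition, here is a brute-force check of the triangle-LSO property, together with a toy instance of the observation above: the left-to-right leaf order of an ultrametric (here a small two-cluster hierarchy) is a $(1,1)$-triangle-LSO. This is an illustrative sketch; the specific metric is an arbitrary example:

```python
import itertools

def is_triangle_lso(n, dist, orderings, rho):
    """For every pair x, y, some ordering must place all points of the
    segment between x and y pairwise within rho * d(x, y) of each other."""
    for x, y in itertools.combinations(range(n), 2):
        d = dist(x, y)
        ok = False
        for sigma in orderings:
            i, j = sorted((sigma.index(x), sigma.index(y)))
            seg = sigma[i:j + 1]
            if all(dist(z, q) <= rho * d
                   for z, q in itertools.combinations(seg, 2)):
                ok = True
                break
        if not ok:
            return False
    return True

# Ultrametric on {0,1,2,3}: two clusters {0,1} and {2,3} of diameter 1,
# merged at diameter 4 (distance = diameter of the smallest common cluster).
def ultra(u, v):
    if u == v:
        return 0.0
    return 1.0 if (u < 2) == (v < 2) else 4.0
```

The cluster-respecting order `[0, 1, 2, 3]` passes, while an order interleaving the two clusters fails.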
We then prove a meta-theorem stating that every metric space admitting a $(\tau,\rho)$-triangle LSO has an oblivious $\nu$-reliable $2\rho$-spanner with $n\tau\cdot O\left(\log^{2}n+\nu^{-1}\tau\log n\cdot\log\log n\right)$ edges (<Ref>). This gives oblivious reliable spanners for ultrametrics and general metric spaces.
Our spanners for general metrics have significantly smaller stretch compared to <cit.> ($8k$ compared to $512k$ $^{\ref{foot:MN07}}$); this constant is highly important as it governs the parameter in the power of $n$. An additional advantage is that we remove the dependency on the aspect ratio (which a priori can be unbounded).
Left-sided LSO.
A $(\tau,\rho)$-left-sided LSO for a metric space $(X,d_X)$ is a collection $\Sigma$ of linear orderings over subsets of $X$, called partial orderings, such that every point $x$ belongs to at most $\tau$ partial orderings, and for every $x,y\in X$, there is a partial ordering $\sigma\in\Sigma$ such that for every two points $x',y'\in X$ satisfying $x\preceq_{\sigma}x'$ and $y'\preceq_{\sigma}y$, it holds that $d_X(x',y')\le \rho\cdot d_X(x,y)$ (<Ref>). Note that the stretch guarantee of a $(\tau,\rho)$-left-sided LSO implies that of a $(\tau,\rho)$-LSO (but not vice versa). However, there could be $\Omega(n)$ (partial) orderings in a $(\tau,\rho)$-left-sided LSO. By lifting the restriction on the total number of partial orderings, we can construct a left-sided LSO with an optimal stretch of $1$ or a nearly optimal stretch of $1+\eps$; see <Ref>. This small stretch ultimately leads to the (nearly) optimal stretch for the reliable spanners of tree and planar metrics constructed in this work, which was not attainable in previous work <cit.>.
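The following brute-force verifier spells out the quantifiers of this definition on a toy example: a unit-weight star, where a single partial ordering listing the center first already achieves stretch $1$. The points, metric, and orderings are illustrative choices only, not a construction from the paper:

```python
import itertools

def is_left_sided_lso(n, dist, orderings, rho):
    """orderings: partial orderings (lists over subsets of range(n)).
    For every pair, some ordering (with one of the two role assignments)
    must satisfy: every a' at or after a, and every b' at or before b,
    are within rho * d(a, b) of each other."""
    def ok(a, b, d):
        for sigma in orderings:
            if a in sigma and b in sigma:
                right = sigma[sigma.index(a):]       # all a' with a before a'
                left = sigma[:sigma.index(b) + 1]    # all b' with b' before b
                if all(dist(u, v) <= rho * d for u in right for v in left):
                    return True
        return False
    return all(ok(x, y, dist(x, y)) or ok(y, x, dist(x, y))
               for x, y in itertools.combinations(range(n), 2))

# Unit star: point 0 is the center, points 1..3 are leaves.
star = lambda u, v: 0.0 if u == v else (1.0 if 0 in (u, v) else 2.0)
```

With stretch bound $\rho=1$ the single ordering `[0, 1, 2, 3]` suffices; a smaller $\rho$ fails, as expected.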
We then prove a meta-theorem stating that every metric space admitting a $(\tau,\rho)$-left-sided LSO has an oblivious $\nu$-reliable $2\rho$-spanner with $n\cdot O(\nu^{-1}\tau^2\log n)$ edges (<Ref>).
We show that $n$-vertex trees admit a $(\log n,1)$-left-sided LSO and conclude that trees have oblivious $\nu$-reliable $2$-spanners with $n\cdot O(\nu^{-1}\log^{3}n)$ edges (<Ref>).
Note that the stretch parameter $2$ is optimal.
Later, we show that planar graphs admit a $(\frac{1}{\eps}\log^2 n,1+\eps)$-left-sided LSO for every $\eps\in(0,1)$ (<Ref>). An oblivious $\nu$-reliable $(2+\eps)$-spanner with $n\cdot O(\nu^{-1}\eps^{-2}\log^{5}n)$ edges follows (<Ref>). The same results also hold for bounded treewidth graphs and graphs excluding a fixed minor.
Ultrametric cover.
A $(\tau,\rho)$-tree cover for a metric space $(X,d_X)$ is a set $\mathcal{T}$ of $\tau$ dominating trees $^{\ref{foot:dominating}}$ such that the distance between every pair of points is preserved up to a factor $\rho$ in at least one tree ($\forall u,v,~\min_{T\in\mathcal{T}}d_T(u,v)\le \rho\cdot d_X(u,v)$). When all trees in the cover are ultrametrics, we call it an ultrametric cover (<Ref>).
The first study on tree covers was for Euclidean spaces, by Arya <cit.>, who constructed the so-called Dumbbell trees.
For general metrics, Mendel and Naor <cit.> (implicitly) constructed an ultrametric cover from Ramsey type embeddings. These covers actually have a stronger guarantee, where every vertex $v$ is guaranteed to have an ultrametric in the cover approximating its shortest path tree ($\forall v\exists T\forall u,~d_T(v,u)\le \rho\cdot d_X(v,u)$).
There is a long line of work on Ramsey-type embeddings <cit.>. The state-of-the-art covers follow from Naor and Tao <cit.> and imply an $\left(O(k\cdot n^{1/k}),2e\cdot k\right)$-ultrametric cover.
For doubling metrics, Bartal, Fandina, and Neiman <cit.> constructed an $(\eps^{-O(d)},1+\eps)$-tree cover.
We refer to <cit.> for further results and background on tree covers.
We observe that every ultrametric admits a $(1,1)$-triangle LSO, which implies that given a $(\tau,\rho)$-ultrametric cover, one can construct a $(\tau,\rho)$-triangle LSO (<Ref>).
Indeed, the main step in our construction of a triangle LSO for general metrics is a construction of a $\left(\tilde{O}(n^{\frac{1}{k}}\cdot\eps^{-1}),2k+\eps\right)$-ultrametric cover (<Ref>). Our construction provides a constant improvement in the stretch parameter (equivalently, a polynomial improvement in the number of ultrametrics in the cover) compared to previous results.
A more structured case is that of a $(\tau,\rho,k,\delta)$-ultrametric cover, where in addition to being a $(\tau,\rho)$-ultrametric cover, we require that each ultrametric will be a $k$-HST of degree at most $\delta$ (see <Ref>).
We show that every $\Omega(\frac1\eps)$-HST of degree bounded by $\delta$ admits a (classic) $(\frac\delta2,\eps)$-LSO (<Ref>). It follows that a $(\tau,\rho,\Omega(\frac1\eps),\delta)$-ultrametric cover implies a $\left(\tau\cdot\frac{\delta}{2}, (1+\eps)\rho\right)$-LSO (<Ref>).
The trees in the tree cover for doubling metrics of <cit.> are far from being ultrametrics and cannot be used in our framework.
We then construct an $(\epsilon^{-O(d)},1+\eps,\frac{1}{\epsilon}, \epsilon^{-O(d)})$-ultrametric cover for spaces with doubling dimension $d$ (<Ref>), which implies the respective LSO. Interestingly, having such an ultrametric cover is a characterizing property for metric spaces of bounded doubling dimension (<Ref>).
See <Ref> for a summary.
| Space | Cover type | Stretch | # of trees | Ref |
|---|---|---|---|---|
| Euclidean $\mathbb{R}^d$ | tree | $1+\eps$ | $O((\frac{d}{\eps})^d\log\frac{d}{\eps})$ | <cit.> |
| doubling dimension $d$ | ultrametric | $O(d^2)$ | $O(d\log d)$ | <cit.> |
| doubling dimension $d$ | tree | $1+\eps$ | $\eps^{-O(d)}$ | <cit.> |
| doubling dimension $d$ | ultrametric | $1+\eps$ | $\eps^{-O(d)}$ | <Ref> |
| general metric | ultrametric | $2e\cdot k$ | $O(k\cdot n^{1/k})$ | <cit.> |
| general metric | ultrametric | $2k+\eps$ | $\tilde{O}(n^{\frac{1}{k}}\cdot\eps^{-1})$ | <Ref> |
New and previous constructions of tree and ultrametric covers.
$2$-hop reliable spanners for the path graph.
Using (different types of) LSO, we can reduce the problem of constructing reliable spanners for different complicated metric spaces to that of constructing reliable spanners for the $1$-dimensional path graph.
Buchin <cit.> constructed an oblivious $\nu$-reliable $1$-spanner with $n\cdot\tilde{O}\left(\nu^{-1}(\log\log n)^{2}\right)$ edges for the path graph. However, the shortest path between two given vertices in their spanner could contain $\Omega(\log n)$ edges (called hops).
While $(\log n)$-hop spanners are acceptable when applied on top of a $(\tau,\eps)$-LSO, using an $h$-hop spanner of the path graph with a $(\tau,\rho)$-triangle-LSO results in distortion $h\cdot \rho$. It is therefore desirable to minimize the number of hops used by the spanner. A $1$-hop spanner requires $\Omega(n^2)$ edges; we thus settle for the next best thing: a $2$-hop reliable spanner.
Specifically, we construct an oblivious $\nu$-reliable, $2$-hop $1$-spanner with $n\cdot O\left(\log^{2}n+\nu^{-1}\log n\cdot\log\log n\right)$ edges for the path graph (<Ref>).
This spanner is later used in <Ref> to construct reliable spanners for metric spaces admitting a $(\tau,\rho)$-triangle-LSO.
For the left-sided LSO case, we also need a $2$-hop spanner for the path graph. However, the shortest $2$-hop path between the $i$'th and the $j$'th vertices for $i < j$ must go through a vertex to the left of $i$, i.e., a vertex in $[1,i]$ (as opposed to a vertex in $[i,j]$ in the triangle-LSO case). This requirement inspires us to define a left-spanner (<Ref>); we then construct an oblivious $\nu$-reliable $2$-hop left-spanner with $n\cdot O(\nu^{-1}\log n)$ edges (<Ref>). The left-spanner is later used in <Ref> to construct a reliable spanner from a left-sided LSO.
| Type | Guarantee | Size | Hops | Ref |
|---|---|---|---|---|
| $1$-spanner | Deterministic | $n\cdot O(\log n\cdot \nu^{-6})$ | $O(\log n)$ | <cit.> |
| $1$-spanner | Oblivious | $n\cdot O(\nu^{-1}\cdot\log\nu^{-1})$ | $O(\log n)$ | <cit.> |
| $1$-spanner | Oblivious | $n\cdot O\left(\log^{2}n+\nu^{-1}\log n\cdot\log\log n\right)$ | $2$ | <Ref> |
| Left-spanner | Oblivious | $n\cdot O(\nu^{-1}\log n)$ | $2$ | <Ref> |
Construction of reliable spanners for the line.
<cit.> and <cit.> constructed sparse (both deterministic and oblivious) reliable $1$-spanners for points on the line.
However, their spanners have $O(\log n)$ hops, which incurs distortion $O(\rho\cdot\log n)$ when applied to a $(\tau,\rho)$-triangle LSO (with $\rho>1$). We construct a $1$-spanner with only $2$ hops, which we later use to construct a reliable spanner from a triangle-LSO. In addition, we construct a $2$-hop left-spanner for the line, which is later used to construct a reliable spanner from a left-sided LSO.
Connectivity preservers. While research on reliable spanners for metric spaces has been fruitful, nothing is known for reliable spanners of graphs, where we require the spanner to be a subgraph of the input graph. In a recent talk, Har-Peled <cit.> asked a “probably much harder question”: whether it is possible to construct a non-trivial subgraph reliable spanner. We show that, even for the much simpler problem where one seeks a subgraph that only preserves connectivity for vertices outside $B^+$, the faulty extension of $B$, the subgraph must have $\Omega(n^2)$ edges in the worst case. Indeed, our lower bound is much more general: it applies to $g$-reliable connectivity preservers for an arbitrary function $g$. A $g$-reliable connectivity preserver of a graph $G=(V,E)$ is a subgraph $H$ of $G$ such that for every attack $B\subseteq V$, there is a superset $B^+\supseteq B$ of size at most $g(|B|)$, such that for every $u,v\in V\setminus B^{+}$, if $u$ and $v$ are connected in $G\setminus B$, then they are also connected in $H\setminus B$. Observe that a $\nu$-reliable spanner as defined in <Ref> is a $g$-reliable (non-subgraph) spanner for the linear function $g(x)=(1+\nu)x$. We show that there is an $n$-vertex graph $G$ such that every oblivious $g_{k}$-reliable connectivity preserver has $\Omega(n^{1+1/k})$ edges for any function $g_{k}(x) = O(x^k)$ (see <Ref>). Taking $k = 1$ gives a lower bound of $\Omega(n^2)$ on the number of edges of subgraph $\nu$-reliable spanners.
On the positive side, we provide a construction of a deterministic connectivity preserver matching the lower bound (<Ref>).
§.§ Related work
The tradeoff between stretch and sparsity (number of edges) of (regular) $t$-spanners has been extensively studied <cit.>; see the recent survey of Ahmed <cit.>, and the book <cit.> and references therein for more details. The bottom line is that $n$-point metric spaces admit $(2k-1)$-spanners (for every integer $k$) with $O(n^{1+1/k})$ edges <cit.>, while the metric induced by $n$ points in $d$ dimensional Euclidean space admits a $(1+\eps)$-spanner with $n\cdot O(\eps)^{1-d}$ edges <cit.>. Similarly, $n$-point metric spaces with doubling dimension $d$ admit $(1+\eps)$-spanners with $n\cdot \eps^{-O(d)}$ edges <cit.>.
For vertex-fault-tolerant spanner, it was shown that every $n$-point set in $\mathbb{R}^d$, or more generally in a space of doubling dimension $d$, admits an $f$-vertex-fault-tolerant $(1+\eps)$-spanner with $\eps^{-O(d)}\cdot f\cdot n$ edges <cit.>.
For general graphs, after a long line of works <cit.>, it was shown that every $n$-vertex graph admits an efficiently constructible $f$-vertex-fault-tolerant $(2k-1)$-spanner with $O(f^{1-1/k}\cdot n^{1+1/k})$ edges, which is optimal assuming Erdős' Girth Conjecture <cit.>.
A related notion is that of a vertex-fault-tolerant (VFT) emulator. Unlike spanners, emulators are not required to be subgraphs, and the weight of an emulator edge is determined w.r.t. the faulty set.
It was recently shown that VFT emulators are asymptotically sparser than their spanner counterparts <cit.>.
In addition to vertex-fault-tolerant (VFT) spanners, edge-fault-tolerant (EFT) spanners were also studied, where the guarantee is to withstand up to $f$ edge faults (as opposed to $f$ vertex faults in the VFT setting).
Bodwin, Dinitz, and Robelle <cit.> constructed $f$-EFT $(2k-1)$-spanners with $O(k^2f^{\frac12-\frac{1}{2k}}\cdot n^{1+\frac1k}+kfn)$ edges for odd values of $k$, and with $O(k^2f^{\frac12}\cdot n^{1+\frac1k}+kfn)$ edges for even values of $k$.
There is also a lower bound of $\Omega(f^{\frac12-\frac{1}{2k}}n^{1+1/k})$ <cit.>.
Abam <cit.> introduced the notion of region fault-tolerant spanners for the Euclidean plane. They showed that one can construct a $t$-spanner with $O(n\log n)$ edges in such a way that if points belonging to a convex region are deleted, the residual graph is still a spanner for the remaining points.
Spanners with low hop diameter for Euclidean spaces of fixed dimension were studied in the pioneering work of Arya <cit.>. The state of the art is a $(1+\epsilon)$-spanner constructible in $O(n\log n)$ time by Solomon <cit.> that has $O(n\alpha_k(n))$ [$\alpha_k(n)$ is the inverse of a very fast growing function at level $k$ of the primitive recursive hierarchy; see <cit.> for a more formal description.] edges and hop diameter $k$.
In addition to having a small number of edges, it is desirable to have a spanner with a small total edge weight, called a light spanner. Light spanners have been thoroughly studied in the spanner literature <cit.>.
Sparse (and light) spanners were constructed efficiently in different computational models such as LOCAL <cit.>, CONGEST <cit.>, streaming <cit.>, massive parallel computation (MPC) <cit.> and dynamic graph algorithms <cit.>.
§ PRELIMINARIES
Let $(X,d_X)$ be a metric space. The aspect ratio, or spread, denoted by $\Phi$, is defined as follows: $\Phi = \frac{\max_{x,y\in X}d_X(x,y)}{\min_{x\not=y \in X}d_X(x,y)}$.
We denote by $[n]$ the set of integers $\{1,2,\ldots, n\}$. For two integers $a\leq b$, we define $[a:b] = \{a,a+1,\ldots, b\}$.
The $\tilde{O}$ notation hides poly-logarithmic factors; that is, $\tilde{O}(f)=O(f)\cdot\log^{O(1)}(f)$.
Let $G$ be a graph. We denote the vertex set and edge set of $G$ by $V(G)$ and $E(G)$, respectively. When we want to explicitly specify the vertex set $V$ and edge set $E$ of $G$, we write $G=(V,E)$. If $G$ is a weighted graph, we write $G=(V,E,w)$ with $w:E\rightarrow\R_+$ being the weight function on the edges of $G$.
For every pair of vertices $x,y\in V$, we denote by $d_G(x,y)$ the shortest path distance between $x$ and $y$ in $G=(V,E,w)$. Given a path $P\subseteq G$, we define the hop length of $P$ to be the number of edges on the path.
A $t$-spanner for a metric space $(X,d_X)$ is a weighted graph $H(V,E,w)$ that has $V = X$, $w(u,v) = d_X(u,v)$ for every edge $(u,v)\in E$ and $d_X(x,y) \leq d_H(x,y) \leq t\cdot d_X(x,y)$ for every pair of points $x,y\in X$. We say that a $t$-spanner $H$ has hop number $h$ if for every pair of vertices $x,y$, there is an $x-y$ path $P$ in $H$ of at most $h$ hops such that $w_H(P)\le t\cdot d_X(x,y)$.
The path graph $P_{n}$ contains $n$ vertices $v_{1},v_{2},\dots,v_{n}$, and there is an (unweighted) edge between $v_{i}$ and $v_{j}$ iff $|i-j|=1$. A path $v_{i_{1}},v_{i_{2}},\dots, v_{i_{s}}$ is monotone iff $i_{j}<i_{j+1}$ for every $j$. Note that if a spanner $H$ contains a monotone path between $v_{i},v_{j}$, then $d_{H}(v_{i},v_{j})=d_{P_{n}}(v_{i},v_{j})=|i-j|$. We sometimes identify vertices of $P_n$ with numbers in $\{1,2,\ldots, n\}$, and refer to $\{1,2,\ldots, n\}$ as the vertex set of $P_n$.
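As an illustration of how $2$-hop spanners with monotone paths arise, a classic (non-reliable) divide-and-conquer construction connects every vertex to the middle vertex of its recursion interval, giving a $2$-hop $1$-spanner of $P_n$ with $O(n\log n)$ edges. The sketch below is ours and is only for intuition; the reliable constructions discussed in this paper are more involved.

```python
def two_hop_path_spanner(n):
    """Build a 2-hop 1-spanner of the path graph on vertices 1..n.

    Divide and conquer: connect every vertex of the current interval to its
    middle vertex, then recurse on the two halves.  Any pair i < j straddles
    the middle vertex m of some interval, so i -> m -> j is a monotone 2-hop
    path of weight exactly |i - j|.  Each recursion level contributes at
    most n edges, so the spanner has O(n log n) edges.
    """
    edges = set()

    def build(lo, hi):
        if hi <= lo:
            return
        mid = (lo + hi) // 2
        for v in range(lo, hi + 1):
            if v != mid:
                edges.add((min(v, mid), max(v, mid)))
        build(lo, mid - 1)
        build(mid + 1, hi)

    build(1, n)
    return edges
```

Every edge $(i,j)$ carries weight $|i-j|$, so the $2$-hop path through an interval's middle vertex is monotone and hence exact.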
A metric $(X,d_X)$ has doubling dimension $d$ if every ball of radius $r$ can be covered by at most $2^{d}$ balls of radius $r/2$. The following lemma gives the standard packing property of doubling metrics (see, e.g., <cit.>).
Let $(X,d)$ be a metric space with doubling dimension $d$.
If $S \subseteq X$ is a subset of points with minimum interpoint distance $r$ that is contained in a ball of radius $R$, then
$|S| = \left(\frac{2R}{r}\right)^{O(d)}$ .
In the following lemma, we show that when constructing oblivious spanners, it is enough to bound the number of edges in expectation to obtain a worst-case guarantee.
Consider an $n$-vertex graph $G=(V,E,w)$ that admits an oblivious $\nu$-reliable $t$-spanner with $m$ edges in expectation.
Then $G$ admits an oblivious $2\nu$-reliable $t$-spanner with $2m$ edges in the worst case.
Formally, there is a distribution $\mathcal{D}$ over spanners $H$ such that for every attack $B\subseteq V$, $\mathbb{E}[|B^+\setminus B|]\le \nu|B|$, and $\mathbb{E}[|H|]\le m$.
Let $\mathcal{D}'$ be the distribution over spanners $H$ obtained by conditioning $\mathcal{D}$ on the event $|H|\le 2m$. Clearly, all the spanners in $\supp\{\mathcal{D}'\}$ have at most $2m$ edges. Furthermore, for every attack $B\subseteq V$, it holds that
\begin{align*}
\mathbb{E}_{H\sim\mathcal{D}'}[|B^{+}\setminus B|] & =\mathbb{E}_{H\sim\mathcal{D}}[|B^{+}\setminus B|~\bigl|~|H|\le2m]\\
& =\frac{1}{\Pr\left[|H|\le2m\right]}\cdot\left(\mathbb{E}_{H\sim\mathcal{D}}[|B^{+}\setminus B|~\bigl| ~|H|\le2m]\cdot\Pr\left[|H|\le2m\right]\right)\\
& \le\frac{1}{\Pr\left[|H|\le2m\right]}\cdot\mathbb{E}_{H\sim\mathcal{D}}[|B^{+}\setminus B|]\le2\nu\cdot|B|~,
\end{align*}
where the last inequality uses Markov's inequality, which gives $\Pr\left[|H|\le2m\right]\ge\frac{1}{2}$.
§ ULTRAMETRIC COVERS
Ultrametric. An ultrametric $\left(X,d\right)$ is a metric space satisfying a strong form of the triangle inequality, that is, for all $x,y,z\in X$,
$d(x,z)\le\max\left\{ d(x,y),d(y,z)\right\}$. A related notion is a $k$-hierarchical well-separated tree ($k$-HST).
A metric $(X,d_X)$ is a $k$-hierarchical well-separated tree ($k$-HST) if there exists a bijection $\varphi$ from $X$ to leaves of a rooted tree $T$ in which:
* Each node $v\in T$ is associated with a label $\Gamma_{v}$ such that $\Gamma_{v} = 0$ if $v$ is a leaf and $\Gamma_{v}\geq k\Gamma_{u}$ if $v$ is an internal node and $u$ is any child of $v$.
* $d_X(x,y) = \Gamma_{\lca(\varphi(x),\varphi(y))}$ where $\lca(u,v)$ is the least common ancestor of any two given nodes $u,v$ in $T$.
It is well known that any ultrametric is a $1$-HST, and any $k$-HST is an ultrametric (see <cit.>).
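Concretely, the distance between two points of a $k$-HST is the label of the least common ancestor of the corresponding leaves. The following sketch (class and function names are ours) encodes a small $2$-HST and reads distances off the labels, then checks the strong triangle inequality:

```python
class HSTNode:
    """Node of a hierarchical well-separated tree.  Internal nodes carry a
    label; leaves have label 0 and correspond to the points of the metric."""
    def __init__(self, label, children=(), point=None):
        self.label = label
        self.children = list(children)
        self.point = point  # set only on leaves

def hst_distance(root, x, y):
    """d(x, y) = label of the least common ancestor of the leaves x and y."""
    if x == y:
        return 0

    def contains(node, p):
        if node.point == p:
            return True
        return any(contains(c, p) for c in node.children)

    # The LCA is the deepest node whose subtree contains both leaves.
    node = root
    while True:
        nxt = next((c for c in node.children
                    if contains(c, x) and contains(c, y)), None)
        if nxt is None:
            return node.label
        node = nxt
```

For instance, a root with label $8$ over an internal node with label $4$ is a valid $2$-HST, since $8 \ge 2\cdot 4$.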
Ultrametric cover. Consider a metric space $(X,d_X)$. A distance measure $d_Y$ over $X$ is said to be dominating if $d_X(x,y)\le d_Y(x,y)$ for all $x,y\in X$. A tree/ultrametric over $X$ is said to be dominating if its induced metric is dominating.
Bartal, Fandina, and Neiman <cit.> studied tree covers: a metric space $(X,d_X)$ admits a $(\tau,\rho)$-tree cover if there are at most $\tau$ dominating trees $\{T_1,T_2,\dots,T_{\tau}\}$ such that $X\subseteq V(T_i)$ for every $i\in [\tau]$ and for every pair of points $x,y \in X$, there is some tree $T_i$ where $d_{T_i}(x,y)\le\rho\cdot d_X(x,y)$. Bartal <cit.> observed that the previous constructions of Ramsey trees[Ramsey trees have an additional desired property compared to general tree covers: for every vertex $v$, there is a single tree in the cover satisfying all its pairwise distances, as opposed to the union of all the trees in a general tree cover.] <cit.> give an $(\tilde{O}(n^{1/k}),2ek)$-tree cover for general metrics, and explicitly constructed an $(\eps^{-O(d)},1+\eps)$-tree cover for metric spaces with doubling dimension $d$.
Here we initiate the study of ultrametric covers.
A $(\tau,\rho)$-ultrametric cover for a space $(X,d)$ is a collection of at most $\tau$ dominating ultrametrics $\mathcal{U} = \{(U_i,d_{U_i})\}_{i=1}^{\tau}$ over $X$, such that for every $x,y\in X$ there is an ultrametric $U_i$ for which $d_{U_i}(x,y)\le \rho\cdot d_X(x,y)$.
If every metric $(U,d_{U}) \in \mathcal{U}$ is a $k$-HST and the corresponding tree $T_{U}$ of $U$ has maximum degree at most $\delta$, we say that $\mathcal{U}$ is a $(\tau,\rho,k,\delta)$-ultrametric cover of $(X,d_X)$.
Note that ultrametrics are much more structured than general trees. For example, every ultrametric embeds isometrically into $\ell_{2}$, while trees require distortion $\sqrt{\log\log n}$ <cit.>.
Later, we will show how to use ultrametric covers to construct locality-sensitive orderings (see <Ref> and <Ref>).
The first main result of this section is <Ref> where we construct an ultrametric cover for general metrics.
[Ultrametric Cover For General Metrics]theoremGeneralUltrametricCover
For every $k\in \mathbb{N}$ and $\eps\in(0,\frac12)$, every $n$-point metric space admits an $\left(O(n^{\frac{1}{k}}\cdot\log n\cdot\frac{k^{2}}{\eps}\cdot\log\frac{k}{\eps}),2k+\eps\right)$-ultrametric cover.
Interestingly, the tree cover in <cit.> for general metrics actually consists of ultrametrics; in other words, Bartal <cit.> observed that Ramsey trees constitute an $(\tilde{O}(n^{1/k}),2ek)$-ultrametric cover. Thus, we obtain a polynomial improvement in the number of ultrametrics in the cover. Specifically, to guarantee stretch $\approx2(k+1)$, our cover uses $\tilde{O}(n^{1/k})$ ultrametrics, while previous covers have $\Omega(n^{\frac{e}{k+1}})$ ultrametrics.
Next, in <Ref> below, we show that every metric space with doubling dimension $d$ admits an $(\epsilon^{-O(d)},1+\eps,\frac{1}{\epsilon}, \epsilon^{-O(d)})$-ultrametric cover for any parameter $\eps \in (0,\frac{1}{6})$. It turns out that this property is actually a characterization of doubling spaces. The proof of <Ref> is provided in <Ref>.
[Ultrametric Cover For Doubling Metrics]theoremDoublingUltrametricCover
Every metric space $(X,d_{X})$ with doubling dimension $d$ admits an $(\epsilon^{-O(d)},1+\eps,\frac{1}{\epsilon}, \epsilon^{-O(d)})$-ultrametric cover for any parameter $\eps \in (0,\frac{1}{6})$.
Conversely, if a metric space $(X,d_X)$ admits a $(\tau, \rho, k, \delta)$-ultrametric cover for $k \geq 2\rho$, then it has doubling dimension $d\le\log(\tau\delta)$.
The main tool in proving <Ref> is pairwise partition cover, a newly introduced notion, which is closely related to the previously introduced stochastic/padded decompositions and sparse covers <cit.>.
A partition $\mathcal{P}$ of a metric space $(X,d_X)$ is $\Delta$-bounded if every cluster $C\in\mathcal{P}$ has diameter at most $\Delta$.
A collection of partitions $\mathbb{P} = \{\mathcal{P}_{1},\dots,\mathcal{P}_{s}\}$
is a $(\tau,\rho,\eps,\Delta)$-pairwise partition cover if (a) $s\le \tau$, (b) every partition $\mathcal{P}_{i}$ is $\Delta$-bounded,
and (c) for every pair $x,y$ such that $\frac{\Delta}{2\rho}\le d_X(x,y)\le\frac{\Delta}{\rho}$, there is a cluster $C$ in one of the partitions $\mathcal{P}_{i}$ such that $C$ contains both closed balls $B(x,\eps\Delta),B(y,\eps\Delta)$.
A space $(X,d_{X})$ admits a $(\tau,\rho,\eps)$-pairwise partition cover scheme if for every $\Delta$, it admits a $(\tau,\rho,\eps,\Delta)$-pairwise partition cover.
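For small examples, this definition can be checked by brute force. The following sketch (all names are ours, not the paper's) verifies conditions (a)-(c) of a $(\tau,\rho,\eps,\Delta)$-pairwise partition cover directly:

```python
def is_pairwise_partition_cover(partitions, points, dist, tau, rho, eps, Delta):
    """Brute-force verifier for the pairwise partition cover definition.

    `partitions` is a list of partitions, each a list of sets over `points`;
    `dist` is the metric.  Intended only as a checker on small examples."""
    # (a) at most tau partitions
    if len(partitions) > tau:
        return False
    # (b) every partition is Delta-bounded
    for P in partitions:
        for C in P:
            if any(dist(x, y) > Delta for x in C for y in C):
                return False
    # (c) every pair at distance in [Delta/(2*rho), Delta/rho] has both
    # closed eps*Delta-balls inside a single cluster of some partition
    def ball(x):
        return {p for p in points if dist(x, p) <= eps * Delta}
    for x in points:
        for y in points:
            if Delta / (2 * rho) <= dist(x, y) <= Delta / rho:
                if not any(ball(x) | ball(y) <= C
                           for P in partitions for C in P):
                    return False
    return True
```

On the integer line with $\Delta=4$, $\rho=2$, $\eps=0$, two shifted interval partitions form a cover, while a single one does not (the pair straddling a cluster boundary is uncovered).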
We will show that given a pairwise partition cover scheme, one can construct an ultrametric cover. The proof appears in <Ref>.
Suppose that a metric space $(X,d_{X})$ admits a $(\tau,\rho,\epsilon)$-pairwise partition cover scheme for $\tau\in\N$, $\rho\ge1$, and $\eps\in(0,\frac12)$. Then $X$ admits an $\left(O(\frac{\tau}{\epsilon}\log\frac{\rho}{\epsilon}),\rho(1+7\epsilon)\right)$-ultrametric cover.
Furthermore, every ultrametric in the cover is a $\Theta(\frac{\rho}{\epsilon})$-HST.
In <Ref> we construct a pairwise partition cover for general metrics:
Every $n$-point metric space $(X,d_X)$ admits an
$(O(n^{\frac{1}{k}}\log n), 2k+\delta,\frac{\delta}{2k(2k+\delta)})$-pairwise partition cover scheme for any $\delta \in [0,1]$ and integer $k\ge 1$.
We are now ready to prove <Ref>:
Let $(X,d_X)$ be an $n$-point metric space, and fix $\delta=\frac\eps8$.
By <Ref>,
$X$ admits an $(O(n^{\frac{1}{k}}\log n), 2k+\delta,\frac{\delta}{2k(2k+\delta)})$-pairwise partition cover.
By <Ref>,
$X$ admits an ultrametric cover with $O(\frac{n^{\frac{1}{k}}\log n}{\delta}\cdot2k(2k+\delta)\cdot\log\frac{(2k+\delta)2k(2k+\delta)}{\delta})=O(n^{\frac{1}{k}}\log n\cdot\frac{k^{2}}{\delta}\cdot\log\frac{k}{\delta})=O(n^{\frac{1}{k}}\log n\cdot\frac{k^{2}}{\eps}\cdot\log\frac{k}{\eps})$
ultrametrics, and stretch $(2k+\delta)(1+\frac{7\delta}{2k(2k+\delta)})<2k+8\delta=2k+\eps$.
§.§ From Pairwise Partition Cover to Ultrametric Cover: Proof of <Ref>
<Ref> is a reduction from pairwise partition cover scheme to ultrametric cover.
In essence, an ultrametric is simply a hierarchical partition.
Thus, this reduction takes unrelated partitions in all possible scales, and combines them into hierarchical/laminar partitions. Reductions similar in spirit were constructed in the context of the Steiner point removal problem <cit.>, stochastic Steiner point removal <cit.>, universal Steiner tree <cit.>, and others.
We follow here a bottom-up approach, where the ratio between consecutive scales in a single hierarchical partition (a.k.a. ultrametric) is $O(\frac\rho\eps)$. When constructing the next level in the hierarchical partition, we take partitions from a pairwise partition cover of the current scale, and slightly “round” them around the “borders” so that no previously created cluster will be divided (see <Ref>). The argument is that due to the large ratio between consecutive scales, the effects of this rounding are marginal.
Assume w.l.o.g. that the minimal pairwise distance in $X$ is $1$, while the maximal pairwise distance is $\Phi$.
Let $c\ge 1$ be a parameter to be determined later.
For $i\ge0$, set
$\Delta_{i}=c\cdot(\frac{4\rho}{\epsilon})^{i}$, and let
$\mathbb{P}_i=\{\mathcal{P}_{1}^{i},\dots,\mathcal{P}_{\tau}^{i}\}$ be a $(\tau,\rho,\eps,\Delta_{i})$-pairwise partition cover (we assume that $\mathbb{P}_i$ has exactly $\tau$ partitions; we can enforce this assumption by duplicating partitions if necessary).
Fix some $j$, let $\mathcal{P}^{-1}_{j}$ be the partition where each vertex is a singleton, and consider $\{\mathcal{P}^{i}_{j}\}_{i\ge -1}$. We will inductively define a new set of partitions, enforcing it to be a laminar system. The basic idea is to produce a hierarchy of partitions in which each lower level refines the level above it; we achieve this by merging each lower-level cluster into one of the higher-level clusters that intersects it.
The lowest level $\mathcal{P}^{-1}_{j}$, in which each set of the partition is a singleton, stays as-is.
Inductively, for any $i\geq 0$, after constructing $\tilde{\mathcal{P}}_{j}^{i-1}$ from $\mathcal{P}_{j}^{i-1}$, we will construct $\tilde{\mathcal{P}}_{j}^{i}$
from $\mathcal{P}_{j}^{i}$ using $\tilde{\mathcal{P}}_{j}^{i-1}$.
Let $\mathcal{P}_{j}^{i}=\left\{ C_{1},\dots,C_{\phi}\right\}$ be the clusters in the partition $\mathcal{P}_{j}^{i}$. For each $q\in[1,\phi]$, let $Y_{q}=X\setminus\cup_{a<q}\tilde{C}_{a}$ be the set of unclustered points (w.r.t. level $i$, before iteration $q$).
Let $C'_{q}=C_{q}\cap Y_{q}$ be the cluster $C_q$ restricted to vertices in $Y_q$, and let $S_{C'_{q}}=\left\{ C\in\tilde{\mathcal{P}}_{j}^{i-1}\mid C\cap C'_{q}\ne\emptyset\right\}$ be the set of new level-$(i-1)$ clusters with non empty intersection with $C'_{q}$.
We set the new cluster $\tilde{C}_{q}=\cup S_{C'_{q}}$ to be the union of these clusters, and continue iteratively.
See <Ref> for illustration.
Clearly, $\tilde{\mathcal{P}}_{j}^{i-1}$
is a refinement of $\tilde{\mathcal{P}}_{j}^{i}$. We conclude that
$\left\{ \tilde{\mathcal{P}}_{j}^{i}\right\} _{i\ge-1}$ is a laminar hierarchical set of partitions that refine each other.
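The rounding step can be sketched as follows for a single level: given the current-level partition (`coarse`) and the already-rounded previous-level partition (`fine`), each coarse cluster in turn grabs all not-yet-claimed fine clusters it intersects. Names and the set-based representation are ours, not the paper's pseudocode.

```python
def make_laminar(coarse, fine):
    """Round the clusters of `coarse` so that each becomes a union of
    clusters of `fine` (the partition built at the previous level).

    Both arguments are lists of sets partitioning the same ground set.
    Coarse clusters are processed in order; cluster C is restricted to the
    still-unclustered points C', and the new cluster is the union of all
    remaining fine clusters intersecting C'.
    """
    rounded = []
    unclustered = set().union(*fine)
    remaining_fine = list(fine)
    for C in coarse:
        C_prime = C & unclustered
        grabbed = [F for F in remaining_fine if F & C_prime]
        if not grabbed:
            continue
        new_C = set().union(*grabbed)
        remaining_fine = [F for F in remaining_fine if not (F & C_prime)]
        unclustered -= new_C
        rounded.append(new_C)
    return rounded
```

By construction, every fine cluster ends up wholly inside exactly one rounded cluster, so the output refines nothing it should not and the two levels are laminar.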
Illustration of the construction of the partition $\tilde{\mathcal{P}}_{j}^{i}$ given $\mathcal{P}_{j}^{i}$ and $\tilde{\mathcal{P}}_{j}^{i-1}$.
The black lines in both the left and right parts of the figure border clusters in $\tilde{\mathcal{P}}_{j}^{i-1}$. The left side illustrates the partition $\mathcal{P}_{j}^{i}=\{C_1,C_2,C_3,C_4\}$, where different clusters are colored by different colors. The right side illustrates the modified partition $\tilde{\mathcal{P}}_{j}^{i}=\{\tilde{C}_1,\tilde{C}_2,\tilde{C}_3,\tilde{C}_4\}$. $\tilde{C}_1$ contains all the clusters in $\tilde{\mathcal{P}}_{j}^{i-1}$ intersecting $C_1$, $\tilde{C}_2$ contains all the clusters in $\tilde{\mathcal{P}}_{j}^{i-1}\setminus \tilde{C}_1$ intersecting $C_2$, and so on.
We next argue by induction that every cluster of $\tilde{\mathcal{P}}_{j}^{i}$ has diameter at most $\Delta_{i}(1+\epsilon)$.
Consider $\tilde{C}_{q}\in\tilde{\mathcal{P}}_{j}^{i}$; it consists
of $C'_{q}\subseteq C_{q}\in\mathcal{P}_{j}^{i}$ and of clusters in $\tilde{\mathcal{P}}_{j}^{i-1}$
intersecting $C'_{q}$.
As the diameter of $C'_{q}$ is bounded by $\diam(C_{q})\le \Delta_i$, and by the induction hypothesis, the diameter of each cluster $C\in \tilde{\mathcal{P}}_{j}^{i-1}$ is bounded by $(1+\eps)\Delta_{i-1}$, we conclude that the diameter of $\tilde{C}_{q}$ is bounded by
\[
\Delta_{i}+2\cdot(1+\epsilon)\Delta_{i-1}=\Delta_{i}\left(1+\frac{2(1+\epsilon)}{4\rho}\cdot\epsilon\right)\le\Delta_{i}(1+\epsilon)~,
\]
since $\rho \geq 1$ and $\eps < 1$.
Next we argue that
$ \tilde{\mathbb{P}}_i = \{\tilde{\mathcal{P}}_{1}^{i},\dots,\tilde{\mathcal{P}}_{\tau}^{i}\}$ is a $(\tau,(1+\eps)\rho,0,(1+\eps)\Delta_{i})$-pairwise partition cover. Observe that it contains $\tau$ partitions, and we have shown that all the clusters have diameter at most $(1+\eps)\Delta_{i}$. Thus, it remains to prove that
for every pair $u,v$ at distance $d_{X}(u,v)\in\left[\frac{(1+\eps)\Delta_{i}}{2(1+\eps)\rho},\frac{(1+\eps)\Delta_{i}}{(1+\eps)\rho}\right]=\left[\frac{\Delta_{i}}{2\rho},\frac{\Delta_{i}}{\rho}\right]$,
both points are contained in a single cluster.
As $d_{X}(u,v)\in\left[\frac{\Delta_{i}}{2\rho},\frac{\Delta_{i}}{\rho}\right]$, there is some index $j$ such that $B_X(u,\eps\Delta_i),B_X(v,\eps\Delta_i)\subseteq C_{i}\in\mathcal{P}_{j}^{i}$. That is, the balls of radius $\eps\Delta_i$ around $u,v$ are contained in a cluster of $\mathcal{P}_{j}^{i}$.
We argue that
$u,v\in \tilde{C}_{i}\in\tilde{\mathcal{P}}_{j}^{i}$.
Let $\tilde{C}_{v},\tilde{C}_{u}\in\tilde{\mathcal{P}}_{j}^{i-1}$ be the clusters containing $v,u$ respectively at $(i-1)$-th level. Note that they both have diameter at most $(1+\eps)\Delta_{i-1}=\frac{(1+\epsilon)\eps}{4\rho}\Delta_{i}<\eps\Delta_{i}$. Hence $\tilde{C}_{v}\subseteq B_X(v,\eps \Delta_i)\subseteq C_{i}$, and similarly $\tilde{C}_{u}\subseteq C_{i}$.
By the partitioning algorithm, it follows that $\tilde{C}_{u},\tilde{C}_{v}\subseteq \tilde{C}_{i}$ (as $\tilde{C}_u,\tilde{C}_v$ do not intersect any other clusters), and in particular $u,v\in \tilde{C}_{i}$ as required.
Finally, we construct an ultrametric cover.
Fix an index $j\in [1,\tau]$; we construct a $(\frac{4\rho}{\eps})$-HST $U_j$ as follows. Leaves of $U_j$ bijectively correspond to points in $X$ and have label $0$. For each $i \in [0,I]$ where $I = \lceil \log_{\nicefrac{4\rho}{\epsilon}}\nicefrac\Phi c\rceil$, internal nodes at level $i$ bijectively correspond to the clusters of $\tilde{\mathcal{P}}_j^i$ (the leaves of $U_j$ are at level $-1$), and have label $(1+\eps)\Delta_i$. There is an edge from each node corresponding to a cluster $\tilde{C}_{i-1}\in\tilde{\mathcal{P}}_j^{i-1}$ to the node corresponding to the unique cluster $\tilde{C}_{i}\in\tilde{\mathcal{P}}_j^{i}$ containing $\tilde{C}_{i-1}$. The root of $U_j$ corresponds to the unique cluster in $\tilde{\mathcal{P}}_j^{I}$.
Clearly, the ultrametric cover $\{U_j\}_{j=1}^{\tau}$ is dominating, and every ultrametric is a $\frac{4\rho}{\eps}$-HST.
To bound the stretch, we will construct such an ultrametric cover with $c=(1+\eps)^l$ for every $l \in [0, \lfloor\log_{1+\epsilon}\frac{4\rho}{\epsilon}\rfloor]$. The final ultrametric cover will be a union of these $O(\log_{1+\epsilon}\frac{4\rho}{\epsilon})$ ultrametric covers. Clearly, their cardinality is bounded by $\tau\cdot O(\log_{1+\epsilon}\frac{4\rho}{\epsilon})=O(\frac{\tau}{\eps}\log\frac{\rho}{\epsilon})$.
Consider a pair $x,y\in X$.
Let $l\in[0,\lfloor\log_{1+\epsilon}\frac{4\rho}{\epsilon}\rfloor]$, and $i\ge0$ be the unique indices such that $(1+\epsilon)^{l-1}(\frac{4\rho}{\epsilon})^i\leq (1+\epsilon)\rho \cdot d_{X}(x,y)\le(1+\epsilon)^{l}(\frac{4\rho}{\epsilon})^i$.
For $c=(1+\epsilon)^{l}$, there is some index $j$ and a cluster $\tilde{C}_i\in\tilde{\mathcal{P}}_j^i$ such that $x,y\in\tilde{C}_{i}$. Thus, in the corresponding ultrametric, $x,y$ are both descendants of an internal node with label
$(1+\epsilon)^{l+1}(\frac{4\rho}{\epsilon})^{i}\le(1+\eps)^{3}\rho\cdot d_{X}(x,y)$; the stretch guarantee follows.
In summary, we have constructed an $\left(O(\frac{\tau}{\epsilon}\log\frac{\rho}{\epsilon}),\rho(1+7\epsilon)\right)$-ultrametric cover, consisting of $(\frac{4\rho}{\eps})$-HST's.
§.§ Pairwise Partition Cover for General Metrics: Proof of <Ref>
Fix a parameter $\delta\in(0,1]$. We begin by creating a distribution over partitions such that for every pair of points $u,v$ at distance at most $\frac{\Delta}{2k+\delta}$, there is a non-trivial probability that small balls around $u$ and $v$ are contained in a single cluster.
Later, <Ref> will follow by taking the union of many independently sampled such partitions.
For every $n$-point metric space $(X,d_X)$, integer $k\ge 1$, $\delta\in[0,1]$, and $\Delta>0$, there is a distribution over $\Delta$-bounded partitions such that for every pair of points $u,v$ where $d_X(u,v)\le \frac{\Delta}{2k+\delta}$, with probability at least $n^{-\frac{1}{k}}$, the balls $B_X(u,\frac{\delta}{2k(2k+\delta)}\Delta),B_X(v,\frac{\delta}{2k(2k+\delta)}\Delta)$ are contained in a single cluster.
For the case $\delta=0$, this is a distribution formerly constructed by the first author <cit.>.[This is the full version, see also the conference version <cit.>.]
Our proof here follows the steps of <cit.> (which is based on the <cit.> partition).
Pick u.a.r. a radius $r\in\{\frac1k,\frac2k,\dots,\frac kk\}$, and a random permutation $\pi=\{v_1,v_2,\dots,v_n\}$ over the points. Then set $C_i=B_X(v_i,r\cdot\frac{\Delta}{2})\setminus\cup_{j<i}B_X(v_j,r\cdot\frac{\Delta}{2})$.
As a result, we obtain a $\Delta$-bounded partition $\{C_i\}_{i=1}^n$.
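This sampling procedure (a random radius plus a random permutation, in the spirit of the <cit.> partition) can be transcribed directly; `dist`, the function name, and the set-based representation are illustrative:

```python
import random

def random_partition(points, dist, Delta, k, seed=None):
    """Sample one partition from the distribution described above.

    Pick a radius r uniformly from {1/k, 2/k, ..., k/k} and a uniformly
    random order of the points; the cluster of the i-th point in the order
    is the ball of radius r*Delta/2 around it, minus all earlier clusters.
    `dist` is the metric on `points`.
    """
    rng = random.Random(seed)
    r = rng.randint(1, k) / k
    order = list(points)
    rng.shuffle(order)
    unassigned = set(points)
    clusters = []
    for center in order:
        ball = {p for p in unassigned if dist(center, p) <= r * Delta / 2}
        if ball:
            clusters.append(ball)
            unassigned -= ball
    return clusters
```

Since every cluster sits inside a ball of radius at most $\Delta/2$, the resulting partition is $\Delta$-bounded regardless of the random choices.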
Let $T=B_X(u,\frac{\delta}{2k(2k+\delta)}\cdot\Delta)\cup B_X(v,\frac{\delta}{2k(2k+\delta)}\cdot\Delta)$.
Note that for every pair of points $x,y\in T$, by triangle inequality it holds that
\[
d_{X}(x,y)\le d_{X}(u,v)+\frac{2\delta}{2k(2k+\delta)}\cdot\Delta\le\left(\frac{1}{2k+\delta}+\frac{2\delta}{2k(2k+\delta)}\right)\cdot\Delta=\frac{\Delta}{2k}~.
\]
Let $A_s=\{v_j\mid d_X(v_j,T)\le\frac{s}{k}\cdot\frac{\Delta}{2}\}$. Then $A_0=T$.
Suppose that $r=\frac sk$, and let $v_i$ be the vertex with minimal index such that $d_X(T,v_i)\le\frac sk\cdot\frac\Delta2$. Then no vertex in $T$ will join the clusters $C_1,\dots,C_{i-1}$, and some vertex in $T$ will join $C_i$.
Let $z\in T\cap C_i$, and suppose further that $v_i\in A_{s-1}$.
By the triangle inequality, for every $y\in T$, $d_{X}(y,v_{i})\le d_{X}(y,z)+d_{X}(z,v_{i})\le\frac{\Delta}{2k}+\frac{s-1}{k}\cdot\frac{\Delta}{2}=\frac{s}{k}\cdot\frac{\Delta}{2} $. Hence all the points in $T$ will join the cluster of $v_i$.
Denote by $\Psi$ the event that all the vertices in $T$ are contained in a single cluster. Using the law of total probability, we conclude
\[
\Pr[\Psi]=\frac{1}{k}\cdot\sum_{s=1}^{k}\Pr[\Psi\mid r=\frac{s}{k}]\ge\frac{1}{k}\cdot\sum_{s=1}^{k}\frac{|A_{s-1}|}{|A_{s}|}\ge\left(\Pi_{s=1}^{k}\frac{|A_{s-1}|}{|A_{s}|}\right)^{\frac{1}{k}}=\left(\frac{|A_{0}|}{|A_{k}|}\right)^{\frac{1}{k}}\ge n^{-\frac{1}{k}}~,
\]
where the second inequality follows by the inequality of arithmetic and geometric means.
We are now ready to prove <Ref> (restated below for convenience).
Fix $\Delta$.
Sample $s=n^{\frac{1}{k}}\cdot2\ln n$ i.i.d. partitions using <Ref>. Consider a pair of points $u,v$ such that $d_X(u,v)\le \frac{\Delta}{2k+\delta}$. Then in each sampled partition, the probability that the balls $B_X(u,\frac{\delta}{2k(2k+\delta)}\Delta),B_X(v,\frac{\delta}{2k(2k+\delta)}\Delta)$ are contained in a single cluster is at least $p=n^{-\frac{1}{k}}$. The probability that $u,v$ are not satisfied by any partition is at most $(1-p)^{s}\le e^{-ps}=e^{-2\ln n}=n^{-2}$. As there are at most ${n\choose2}\le\frac{n^2}{2}$ pairs at distance at most $\frac{\Delta}{2k+\delta}$, by the union bound, with probability at least $\frac12$, every pair is satisfied by some partition. It follows that the union of the $s$ random partitions is, with probability at least $\frac12$, an $(O(n^{\frac{1}{k}}\log n), 2k+\delta,\frac{\delta}{2k(2k+\delta)})$-pairwise partition cover, as required.
§.§ Ultrametric cover for doubling spaces
In this section, we construct a pairwise partition cover for doubling spaces, and then use it to construct ultrametric covers, thereby proving <Ref>.
We begin with the following combinatorial lemma.
Consider a graph $G=(V,E_{b}\cup E_{r})$ with
disjoint sets of blue edges $E_{b}$ and red edges $E_{r}$, such that the maximal blue degree
is $\delta_{b}\ge1$ and the maximal red degree is $\delta_{r}\ge1$. Then
there is a set of at most $\gamma = O(\delta_{r}\delta_{b})$ matchings $\mathcal{M} = \{M_{1},M_{2},\dots,M_{\gamma}\}$
of $G$ such that (a) $E_{b}\subseteq\cup_{i=1}^{\gamma}M_{i}$, and (b) for every matching $M\in \mathcal{M}$, there is no red edge whose both endpoints are matched by $M$.
We construct $\mathcal{M}$ greedily. Initially, $\mathcal{M} = \emptyset$. Let $E_{b}'$ be the set of blue edges of $G$ that have not been added to any matching in $\mathcal{M}$. Let $M\subseteq E_{b}'$ be a maximal matching such that there is no red edge whose endpoints are both matched by $M$ (such a maximal matching can be found greedily in linear time); we add $M$ to $\mathcal{M}$ and repeat.
We argue by contradiction that the greedy algorithm adds at most $4\delta_{r}\delta_{b}$ matchings to $\mathcal{M}$. Consider a vertex $v$ such that after $\delta_{b}(2\delta_{r}+2)$
maximal matchings have been added to $\mathcal{M}$, there remains at least one blue edge incident to $v$ that is not covered by any matching in $\mathcal{M}$. Since there are at most $\delta_{b}$ blue edges incident to $v$, there must be a set $\mathcal{M}_v \subseteq \mathcal{M}$ of at least $\delta_{b}(2\delta_{r}+1)$ matchings such that $v$ is not matched by any matching in $\mathcal{M}_v$. By maximality, in each matching $M\in \mathcal{M}_v$, either:
(a) A red neighbor of $v$ is matched by $M$.
(b) For every blue neighbor $u$ of $v$, either $u$ is matched, or a red neighbor of $u$ is matched by $M$, which prevents $u$ from being matched.
Since $v$ has at most $\delta_{r}$ red neighbors, and each of them can be matched at most $\delta_{b}$ times, case (a) happens at most $\delta_{b}\delta_{r}$ times.
The blue neighbors of $v$ can be matched at most $\delta_{b}-1$ times, while their red neighbors can be matched at most $\delta_{r}\delta_{b}$ times. Thus, case (b) happens at most $\delta_{b}-1+\delta_{r}\delta_{b}=\delta_b(\delta_r+1)-1$ times.
We conclude that $|\mathcal{M}_{v}|\leq\delta_{r}\delta_{b}+\delta_{b}(\delta_{r}+1)-1=\delta_{b}(2\delta_{r}+1)-1$, a contradiction.
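For intuition, the greedy procedure from the proof can be sketched in Python (a toy sketch under the lemma's setting; the edge-list representation is our own choice, not from the paper):

```python
def cover_blue_edges(blue, red):
    """Greedy construction from the lemma: repeatedly extract a maximal
    matching of still-uncovered blue edges such that no red edge has both
    endpoints matched, until every blue edge is covered."""
    red_adj = {}
    for u, v in red:
        red_adj.setdefault(u, set()).add(v)
        red_adj.setdefault(v, set()).add(u)
    uncovered = set(frozenset(e) for e in blue)
    matchings = []
    while uncovered:
        matched = set()
        M = []
        for e in sorted(uncovered, key=sorted):
            u, v = sorted(e)
            # adding {u,v} is forbidden if an endpoint is already matched, or
            # if a red neighbour of u or v is matched (that would create a
            # red edge with both endpoints matched)
            forbidden = {u, v} | red_adj.get(u, set()) | red_adj.get(v, set())
            if matched & forbidden:
                continue
            M.append((u, v))
            matched |= {u, v}
            uncovered.discard(e)
        matchings.append(M)
    return matchings
```

Each round covers at least one blue edge (a fresh matching always admits the first candidate), so the procedure terminates; the lemma's counting argument bounds the number of rounds by $O(\delta_r\delta_b)$.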
Every metric space $(X,d_X)$ with doubling dimension $d$ admits an $(\epsilon^{-O(d)}, (1+\epsilon),\eps)$-pairwise partition cover scheme for any $\epsilon \in (0,1/16)$.
Let $\Delta > 0$ be any given real number. We show that $(X,d_X)$ admits an $(\epsilon^{-O(d)}, (1+8\epsilon),\frac{\eps}{2}, (1+8\epsilon)\Delta)$-pairwise partition cover $\mathbb{P}$, the lemma then follows by rescaling $\eps$ and $\Delta$.
Let $N$ be an $(\epsilon\Delta)$-net of $(X,d_X)$. We construct a graph $G$ with $N$ as the vertex set; there is a blue edge $(u,v)\in E_b$ in $G$ iff $d_{X}(u,v)\in\left[(1-4\eps)\frac{\Delta}{2},(1+2\eps)\Delta\right]$, and there is a red edge $(u,v) \in E_r$ iff $d_{X}(u,v)\le 4\eps\Delta$.
As $\eps<\frac{1}{12}$, the sets of blue and red edges are disjoint.
By the packing property of doubling metrics (<Ref>), every vertex in $G$ has blue degree $\epsilon^{-O(d)}$ and red degree $2^{O(d)}$. Let $\mathcal{M}$ be the set of matchings of $G=(N,E_b\cup E_r)$ guaranteed by <Ref>; $|\mathcal{M}| = O(\epsilon^{-O(d)}\cdot2^{O(d)}) = \epsilon^{-O(d)}$.
For each matching $M\in \mathcal{M}$, we construct a partition $\mathcal{P}_{M}$ as follows: for every edge $\{u, v\}\in M$, we add $B_X(u,2\eps\Delta)\cup B_X(v,2\eps\Delta)$ as a cluster to $\mathcal{P}_M$.
Denote by $N_M$ the set of net points that remain unclustered.
For every net point $x\in N_M$, we initiate a new cluster $C_x$ containing $x$ only. Then, every remaining unclustered point $z\in X$ joins the cluster of its closest net point $x_z$ (from either $N_M$ or $N\setminus N_M$).
See <Ref> for an illustration.
Illustration of the partition $\mathcal{P}_M$. The black points represent metric points, while the red points represent the net points $N$. The blue edges are the matching $M$. For each edge $\{u,v\}\in M$, the cluster $B_X(u,2\eps\Delta)\cup B_X(v,2\eps\Delta)$ is added to $\mathcal{P}_M$. These clusters are illustrated with colored filled boxes.
The remaining points are clustered around unclustered net points. These clusters are encircled by green lines.
Distances in the figure are not drawn to scale, for better visualization.
We observe that for any two edges $(u,v)$ and $(u',v')$ in matching $M$, $B_X(u,2\eps\Delta)\cap B_X(u',2\eps\Delta) = \emptyset$ since otherwise, there is a red edge between $u$ and $u'$, contradicting item (b) in <Ref>. Thus, $\mathcal{P}_M$ is indeed a partition of $X$.
We next bound the diameter of each cluster in $\mathcal{P}_M$. Clearly every cluster $C_x$ for $x\in N_M$ has diameter at most $2\eps\Delta$. On the other hand, by the construction and the triangle inequality, the diameter of every cluster resulting from the matching is bounded by $2\cdot (\eps\Delta+2\eps\Delta)+(1+2\eps)\Delta=(1+8\eps)\Delta$.
Thus $\mathcal{P}_M$ is $\left((1+8\eps)\Delta\right)$-bounded.
Let $\mathbb{P} = \{\mathcal{P}_M\}_{M\in \mathcal{M}}$.
It remains to show that for every $x,y\in X$ such that $d_{X}(x,y)\in\left[\frac{(1+8\eps)\Delta}{2(1+8\eps)},\frac{(1+8\eps)\Delta}{(1+8\eps)}\right]=\left[\frac{\Delta}{2},\Delta\right]$, there is a cluster $C$ in a partition $\mathcal{P} \in \mathbb{P}$ containing both $B_X(x,\frac{\eps}{2}\cdot (1+8\eps)\Delta)$ and $B_X(y,\frac{\eps}{2}\cdot (1+8\eps)\Delta)$. Note that as $\eps\le\frac{1}{16}$, $\frac{\eps}{2}\cdot (1+8\eps)\Delta\le \eps\Delta$.
Let $x',y'\in N$ be net points such that $d_X(x,x'),d_X(y,y')\le\eps\Delta$. Then by the triangle inequality
$\left|d_{X}(x',y')-d_{X}(x,y)\right|\le2\eps\Delta$, implying that $d_{X}(x',y')\in\left[(1-4\eps)\frac{\Delta}{2},(1+2\eps)\Delta\right]$.
Hence, $G$ contains a blue edge between $x'$ and $y'$. It follows that there is a matching $M$ containing the edge $\{x',y'\}$, and a partition $\mathcal{P}_M$ containing the cluster $C=B_X(x',2\eps\Delta)\cup B_X(y',2\eps\Delta)$. In particular, $B_X(x,\eps\Delta)\cup B_X(y,\eps\Delta)\subseteq C$ as required.
We are finally ready to prove <Ref> that we restate below for convenience.
We begin with the first assertion (doubling metrics admit ultrametric covers).
After appropriate rescaling, by <Ref> and
<Ref>, we obtain an
$\left(\eps^{-O(d)},1+\epsilon\right)$-ultrametric cover where every ultrametric in the cover is a $\frac{1}{\epsilon}$-HST.
It remains to show that every ultrametric in the cover returned by <Ref> w.r.t. the pairwise partition cover scheme constructed in <Ref> has bounded degree.
We will use the terminology of <Ref>.
Consider some ultrametric $\mathcal{U}_j$ and some cluster $\tilde{C}$ at level $i$ with label $(1+\eps) \Delta_i$.
The clusters at level $i-1$ correspond to points of an $(\eps\Delta_{i-1})$-net. The cluster $\tilde{C}$ has diameter $(1+\eps)\Delta_i$, and hence contains at most $\eps^{-O(d)}$ $(\epsilon\Delta_{i-1})$-net points.
In particular $\tilde{C}$ can contain at most $\eps^{-O(d)}$ level-$(i-1)$ clusters. The bound on the degree follows.
Next, we prove the second assertion. Consider a metric space $(X,d_X)$ admitting a $(\tau,\rho,k,\delta)$-ultrametric cover with $k\geq 2\rho$. Let $B_X(x,r)$ be some ball of radius $r$. In each ultrametric $U_i$ in the cover, let $L_i$ be the node closest to the root that is an ancestor of $x$ and has label at most $\rho\cdot r$. Let $\{L_{i,1},L_{i,2},\dots\}$ be the set of at most $\delta$ children of $L_i$ in $U_i$. For each $L_{i,j}$, we pick an arbitrary leaf $u_{i,j}\in X$ that is a descendant of $L_{i,j}$. We argue that
$$B_X(x,r)\subseteq \cup_{i,j}B_X(u_{i,j},\frac r2)~,$$
as the number of balls in the union is at most $\tau\delta$, the theorem will follow.
Consider a vertex $y\in B_X(x,r)$. There is necessarily an ultrametric $U_i$ such that $d_{U_i}(x,y)\le \rho\cdot r$. In particular, in $U_i$, $x$ and $y$ are both descendants of a node with label at most $\rho\cdot r$. Recall that $L_i$ is such a node with maximal label. Let $L_{i,j}$ be the child of $L_i$ such that $y$ is a descendant of $L_{i,j}$. As $U_i$ is a $k$-HST, the label of $L_{i,j}$ is bounded by $\frac{\rho r}{k}\le\frac r2$ since $k\geq 2\rho$. In particular, $y\in B_X(u_{i,j},\frac r2)$ as required.
§ LOCALITY SENSITIVE ORDERING
Locality-Sensitive Ordering. Chan et al. <cit.> introduced and studied the notion of locality-sensitive ordering (<Ref>).
In the same paper, Chan et al. <cit.> showed that the Euclidean metric of dimension $d$ has an $\left(O(\epsilon)^{-d}\log\frac{1}{\epsilon},\epsilon\right)$-LSO. They also presented various applications of the LSO to fundamental geometric problems in Euclidean spaces. The proof relies on the following lemma by Walecki <cit.>.
Given a set of $n$ elements $[n] = \{1,\ldots,n\}$, there exists a set $\Sigma$ of $\lceil \frac{n}{2} \rceil$ orderings such that for any two elements $i\not= j\in [n]$, there exists an ordering $\sigma$ in which $i$ and $j$ are adjacent.
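For intuition, Walecki's orderings can be generated with the classic round-robin "circle method"; the Python sketch below is one standard realization of the lemma (the zigzag indexing is our own choice, not code from the paper):

```python
def walecki_orderings(n):
    """Circle-method Hamiltonian-path decomposition of K_m (m even): produce
    ceil(n/2) orderings of {0,...,n-1} in which every pair of elements is
    adjacent in at least one ordering. For odd n, a dummy element is padded
    in and then dropped (dropping it only merges its two neighbours)."""
    m = n if n % 2 == 0 else n + 1
    orderings = []
    for j in range(m // 2):
        order = []
        for i in range(m):
            # zigzag offsets 0, +1, -1, +2, -2, ... around the circle
            off = (i + 1) // 2 if i % 2 == 1 else -(i // 2)
            order.append((j + off) % m)
        orderings.append([x for x in order if x < n])  # drop the dummy
    return orderings
```

For even $m$, the $j$-th ordering is the Hamiltonian path $j, j+1, j-1, j+2, j-2, \dots$ (mod $m$); the $m/2$ paths are edge-disjoint and together cover all edges of $K_m$, which gives the adjacency property.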
We show that metrics admitting an ultrametric cover of bounded degree have an LSO with a small number of orders. Our proof relies on the following lemma.
Every $\alpha$-HST $\left(U,d_{U}\right)$ of degree $\delta$ admits a
$(\left\lceil \frac{\delta}{2}\right\rceil ,\frac{1}{\alpha})$-LSO.
For simplicity, we will assume that the number of children in each
node is exactly $\delta$. This could be achieved by adding dummy nodes. By <Ref>, every set of $\delta$ vertices can be ordered into $\left\lceil \frac{\delta}{2}\right\rceil $
orderings such that every two vertices are adjacent in at least one of them. Denote these orderings by $\sigma_{1},\dots,\sigma_{\left\lceil \frac{\delta}{2}\right\rceil }$. We construct the set of orderings for $(U,d_U)$ inductively.
Let $A$ be the root of the HST with children
$A_{1},\dots,A_{\delta}$. By the induction hypothesis, each $A_{i}$
admits a set $\sigma_{1}^{i},\dots,\sigma_{\left\lceil \frac{\delta}{2}\right\rceil }^{i}$
orderings. We construct $\lceil \delta/2 \rceil$ orderings as follows: for each $j \in [\lceil \delta/2 \rceil]$, order
the vertices inside each $A_{i}$ w.r.t. $\sigma_{j}^{i}$, and order
the sets in between them w.r.t. $\sigma_{j}$. The resulting ordering is denoted by
$\tilde{\sigma}_{j}$. This finishes the construction.
Next, we argue that this is a $(\left\lceil \frac{\delta}{2}\right\rceil ,\frac{1}{\alpha})$-LSO.
Clearly, we used exactly $\left\lceil \frac{\delta}{2}\right\rceil $
orderings. We prove the stretch guarantee by induction on the height of the HST; the base case is trivial, since every leaf has label $0$. Let $\Delta$ be the label of the root, and consider a pair
of leaves $x,y$. If $d_{U}(x,y)<\Delta$, then there is some $i$
such that $x,y\in A_{i}$. By the induction hypothesis, there is
some ordering $\sigma_{j}^{i}$ of $A_{i}$ such that (w.l.o.g.) $x\prec_{\sigma_{j}^{i}}y$,
and the points between $x$ and $y$ w.r.t. $\sigma_{j}^{i}$ can
be partitioned into two consecutive intervals $I_{x},I_{y}$ such
that $I_{x}\subseteq B_{U}(x,d_{U}(x,y)/\alpha)$ and $I_{y}\subseteq B_{U}(y, d_{U}(x,y)/\alpha)$.
Note that $\sigma_{j}^{i}$ is a sub-ordering of $\tilde{\sigma}_{j}$.
In particular, the lemma holds.
The remaining case is $d_{U}(x,y)=\Delta$. Then there are $i\ne i'$
such that $x\in A_{i}$ and $y\in A_{i'}$. There is some ordering
$\sigma_{j}$ in which $A_{i}$ and $A_{i'}$ are consecutive. In
particular, all the vertices between $x$ and $y$ in $\tilde{\sigma}_{j}$
can be partitioned into two sets, the first belonging to $A_{i}$ and
the second to $A_{i'}$. The lemma follows, as all the vertices in $A_{i}$
(resp. $A_{i'}$) are at distance at most $\frac{\Delta}{\alpha}=\frac{d_{U}(x,y)}{\alpha}$
from $x$ (resp. $y$).
If a metric $(X,d_X)$ admits a $\left(\tau, \rho, k, \delta\right)$-ultrametric cover, then it has a $\left(\tau\cdot \lceil \frac{\delta}{2}\rceil, \frac{\rho}{k}\right)$-LSO.
Let $\mathcal{U}$ be an ultrametric cover for $(X,d_X)$. For each ultrametric $(U,d_U)$, let $\Sigma_U$ be the set of orderings obtained by applying <Ref>. Let $\Sigma = \cup_{U\in \mathcal{U}}\Sigma_U$. We show that $\Sigma$ is the LSO claimed by the lemma. Clearly, it contains at most $\tau\cdot\lceil\frac{\delta}{2}\rceil$ orderings.
Consider two points $x\not=y\in X$. Let $U$ be an ultrametric in $\mathcal{U}$ such that $d_U(x,y)\leq \rho\cdot d_X(x,y)$. By <Ref>, there is an ordering $\sigma \in \Sigma_U$ such that (w.l.o.g.) $x\prec_{\sigma} y$ and the points between $x$ and $y$ w.r.t. $\sigma$ can be partitioned into two consecutive intervals $I_x$ and $I_y$ where $I_x\subseteq B_{U}(x,\frac{d_{U}(x,y)}{k})$ and $I_y\subseteq B_{U}(y,\frac{d_{U}(x,y)}{k})$. Since $d_U(x,y)\leq \rho\cdot d_X(x,y)$ and $d_X\le d_U$ pointwise, we conclude that $I_{x}\subseteq B_{X}(x,\frac{\rho}{k}\cdot d_{X}(x,y))$ and $I_y\subseteq B_{X}(y,\frac{\rho}{k}\cdot d_{X}(x,y))$ as desired.
Using <Ref> and <Ref> with $\rho = (1+\epsilon)$ and $k = O(\frac{1}{\epsilon})$, we conclude
[LSO For Doubling Metrics]corollaryDoublingLSO
For every sufficiently small $\eps>0$, every metric space $(X,d_{X})$ of doubling dimension $d$ admits an $\left(\eps^{-O(d)},\epsilon\right)$-LSO.
Reliable $(1+\eps)$-Spanners from LSO. Buchin et al. <cit.> constructed reliable $(1+\epsilon)$-spanners for Euclidean metrics using the $(\tau(\eps),\epsilon)$-LSO of Chan et al. <cit.>.
Specifically, their spanner for the deterministic case has $n\cdot O(\epsilon)^{-7d}\log^{7}(\frac{1}{\epsilon})\cdot\nu^{-6}\log n(\log\log n)^{6}$ edges, while for the oblivious case, they constructed a spanner with an almost linear number of edges: $n\cdot O(\epsilon)^{-2d}\log^{3}\frac{1}{\epsilon}\cdot\nu^{-1}\log\nu^{-1}(\log\log n)^{2}\log\log\log n$.
Their key idea is to reduce the problem to the construction of reliable $(1+\epsilon)$-spanners for the (unweighted) path graph $P_n$ with $n$ vertices. We observe that their construction of reliable $(1+\eps)$-spanners does not use any property of the metric space other than the existence of an LSO.
Suppose that for any $\epsilon\in(0,1)$, an $n$-point metric space $(X,d_{X})$
admits a $(\tau(\epsilon),\epsilon)$-LSO for some function $\tau:(0,1)\rightarrow\mathbb{N}$.
Then for every $\nu\in(0,1)$ and $\epsilon\in(0,1)$:
* $(X,d_{X})$ admits a deterministic $\nu$-reliable $(1+\epsilon)$-spanner with
$n\cdot O\left(\left(\tau(\frac{\epsilon}{c_d})\right)^{7}\nu^{-6}\log n(\log\log n)^{6}\right)$ edges for some universal constant $c_d$.
* $(X,d_{X})$
admits an oblivious $\nu$-reliable $(1+\epsilon)$-spanner with
$n\cdot O\left(\left(\tau(\frac{\epsilon}{c_0})\right)^{2}\nu^{-1}(\log\log n)^{2}\cdot\log\left(\frac{\tau(\frac{\epsilon}{c_0})\log\log n}{\nu}\right)\right)$ edges for some universal constant $c_0$.
By <Ref> and <Ref>, we have:
Consider a metric space $(X,d_X)$ with doubling dimension $d$.
Then for every $\nu\in(0,1)$ and $\epsilon\in(0,1)$, $(X,d_{X})$
admits a deterministic $\nu$-reliable $(1+\epsilon)$-spanner
with $n\cdot\epsilon^{-O(d)}\nu^{-6}\log n(\log\log n)^{6}$ edges,
and an oblivious $\nu$-reliable $(1+\epsilon)$-spanner with
$n\cdot\epsilon^{-O(d)}\cdot\nu^{-1}(\log\log n)^{2}\cdot\log\left(\frac{\log\log n}{\epsilon\nu}\right)=n\cdot\epsilon^{-O(d)}\cdot\tilde{O}\left(\nu^{-1}(\log\log n)^{2}\right)$ edges.
§ TRIANGLE LOCALITY SENSITIVE ORDERING
A triangle locality-sensitive ordering (triangle-LSO) is defined as follows.
Given a metric space $(X,d_{X})$, we say that a collection $\Sigma$
of orderings is a $(\tau,\rho)$-triangle-LSO if
$\left|\Sigma\right|\le\tau$, and for every $x,y\in X$, there is
an ordering $\sigma\in\Sigma$ such that (w.l.o.g.) $x\prec_{\sigma}y$, and for every $a,b\in X$ such that $x\preceq_{\sigma}a\preceq_{\sigma}b\preceq_{\sigma}y$ it holds that $d_X(a,b)\le\rho\cdot d_X(x,y)$.
Note that every $(\tau,\rho)$-triangle-LSO is also a $(\tau,\rho)$-LSO; conversely, by the triangle inequality, every $(\tau,\rho)$-LSO is a $(\tau,2\rho+1)$-triangle-LSO.
Hence, for stretch parameter $\rho>1$, a triangle-LSO is preferable to the classic LSO.
Similar to <Ref>, we show that a metric space admitting an ultrametric cover has a triangle-LSO with a small number of orderings.
If a metric $(X,d_X)$ admits a $(\tau,\rho)$-ultrametric cover $\mathcal{U}$, then it has a $\left(\tau, \rho\right)$-triangle-LSO.
Let $(U,d_U)$ be an ultrametric in the cover $\mathcal{U}$. Note that $U$ is also a $1$-HST. We create a single ordering $\sigma_U$ by putting the leaves (only) in pre-order fashion. That is, given a node $v$ with children $v_1,\dots,v_\delta$, we recursively add all the descendants of $v_1$, then of $v_2$, and so on, until all the descendants of $v$ have been added to $\sigma_{U}$. It holds that for every $x\prec_{\sigma_U} y$ with least common ancestor $z$, all the vertices $a,b$ such that
$x\prec_{\sigma_{U}} a\prec_{\sigma_{U}} b\prec_{\sigma_{U}} y$ are also descendants of $z$. Thus, $d_U(a,b)\leq d_U(x,y)$.
We now show that $\{\sigma_U\}_{U\in \mathcal{U}}$ is a $(\tau,\rho)$-triangle-LSO. For any given $x,y$, there exists $U\in \mathcal{U}$ such that $d_X(x,y)\leq d_U(x,y)\leq \rho\cdot d_X(x,y)$. Then for every pair of vertices $a,b$ such that
$x\prec_{\sigma_{U}} a\prec_{\sigma_{U}} b\prec_{\sigma_{U}} y$, $d_X(a,b)\leq d_{U}(a,b)\leq d_U(x,y)\leq \rho\cdot d_X(x,y)$ as desired.
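For intuition, the pre-order construction and the resulting triangle property can be checked on a toy HST (a Python sketch; the tuple-based tree encoding `(label, children)` with string leaves is our own, not from the paper):

```python
def preorder_leaves(node):
    """Pre-order list of leaf names; a node is (label, [children]) or a leaf str."""
    if isinstance(node, str):
        return [node]
    out = []
    for child in node[1]:
        out.extend(preorder_leaves(child))
    return out

def hst_distance(node, x, y):
    """Ultrametric distance in an HST: the label of the lca of leaves x and y."""
    if isinstance(node, str) or x == y:
        return 0
    label, children = node
    for child in children:
        leaves = set(preorder_leaves(child))
        if x in leaves and y in leaves:
            return hst_distance(child, x, y)
    return label
```

Because any $a,b$ lying between $x$ and $y$ in the pre-order are descendants of the lca of $x$ and $y$, their ultrametric distance never exceeds $d_U(x,y)$, which is exactly the triangle-LSO condition with a single ordering per ultrametric.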
Using <Ref> and <Ref>, we conclude.
For every $k\in\N$, and $\eps\in(0,1)$, every $n$-point metric space admits an $\left(O(n^{\frac{1}{k}}\cdot\log n\cdot\frac{k^{2}}{\eps}\cdot\log\frac{k}{\eps}),2k+\eps\right)$-triangle-LSO.
Reliable Spanners from triangle-LSO. We show that if a metric space admits a $(\tau,\rho)$-triangle-LSO, then it has an oblivious $(2\rho)$-spanner with roughly $n\cdot\tau^2$ edges. We use this result to construct reliable $(8k+\epsilon)$-spanners for general metrics (and reliable $2$-spanners for ultrametrics).
theoremTriangleLSOtoSpanner Suppose that a metric space $(X,d_{X})$ admits a $(\tau,\rho)$-Triangle-LSO. Then for every $\nu\in(0,1)$, $X$ admits an oblivious $\nu$-reliable, $(2\rho)$-spanner with $n\tau\cdot O\left(\log^{2}n+\nu^{-1}\tau\log n\cdot\log\log n\right)$ edges.
The proof of <Ref> is deferred to <Ref>. Using <Ref> with parameters $2k$ and $\frac\eps2$, and <Ref> we conclude:
For every $n$-point metric space $(X,d)$ and parameters $\nu\in(0,1)$, $\eps\in(0,\frac12)$, $k\in\N$, $(X,d)$ admits an oblivious $\nu$-reliable $(8k+\eps)$-spanner with $n^{1+\frac{1}{k}}\cdot\nu^{-1}\cdot\log^{3}n\cdot\log\log n\cdot\frac{k^{4}}{\eps^{2}}\cdot O(\log\frac{k}{\eps})^{2}=n^{1+\frac{1}{k}}\cdot\nu^{-1}\cdot\tilde{O}\left(\log^{3}n\cdot\frac{k^{4}}{\eps^{2}}\right)$ edges.
By <Ref> and <Ref>, we obtain:
For every parameter $\nu\in(0,1)$, every $n$-point ultrametric space $(X,d)$ admits an oblivious $\nu$-reliable, $2$-spanner with $n\cdot O\left(\log^{2}n+\nu^{-1}\log n\cdot\log\log n\right)$ edges.
The stretch parameter in <Ref> is tight; see <Ref>.
For the rest of this section, we show how to construct reliable spanners for metric spaces admitting a triangle-LSO. Following the approach of Buchin et al. <cit.>, we reduce the problem to the construction of reliable spanners for the (unweighted) path graph $P_n$. However, in our setting, we face a very different challenge: when $\rho > 1$, the stretch of the reliable spanner for $(X,d_X)$ grows linearly with the number of hops of the reliable spanner for the path graph. In the Euclidean setting studied by Buchin et al. <cit.>, the stretch parameter is $\rho = \epsilon < 1$, and as a result, the stretch is not significantly affected by the number of hops of the spanner for the path graph.
Buchin et al. <cit.> constructed an oblivious $\nu$-reliable $1$-spanner for the path graph $P_n$ with $O(n\cdot\nu^{-1}\log \nu^{-1})$ edges. However, their spanner has hop diameter $2\log n$; if we used their reliable spanner for the path graph, we would end up with a spanner for $(X,d_X)$ of stretch $2\rho\log n$.
To have a stretch $2\rho$, we construct a reliable spanner for the path graph $P_n$ with hop diameter $2$.
As a consequence, the sparsity of our spanner has some additional logarithmic factors.
Note that a $2$-hop spanner for the path graph $P_n$, even without any reliability guarantee, must contain $\Omega(n\log n)$ edges (see Exercise 12.10 in <cit.>).
Our result is summarized in the following lemma whose proof is deferred to <Ref>.
For every $\nu\in(0,1)$, the path graph $P_{n}$ admits an
oblivious $\nu$-reliable, $2$-hop $1$-spanner $H$ with $n\cdot O\left(\log^{2}n+\nu^{-1}\log n\cdot\log\log n\right)$ edges.
Using <Ref>, we can construct a reliable spanner for metric spaces admitting a $(\tau,\rho)$-triangle LSO as claimed by <Ref>.
§.§ Proof of <Ref>
Let $\Sigma$ be a $(\tau,\rho)$-triangle LSO as assumed by the theorem. Let $\nu'=\frac{\nu}{\tau}$.
For every ordering $\sigma\in \Sigma$, we form an unweighted path graph $P_{\sigma}$ with vertex set $X$, where the order of the vertices along the path is $\sigma$. Using <Ref>, we construct a $\nu'$-reliable $2$-hop spanner $H_\sigma=(X,E_{\sigma},w_{\sigma})$ for $P_{\sigma}$ with $n\cdot O\left(\log^{2}n+\nu'^{-1}\log n\cdot\log\log n\right)$ edges. Note that for every edge $\{u,v\}\in E_{\sigma}$ of $H_{\sigma}$, $w_{\sigma}(u,v)$ is the distance between $u$ and $v$ in the (unweighted) path graph $P_{\sigma}$.
We form a new weight function $w_{X}$ that assigns each edge $\{u,v\}\in E_{\sigma}$ the weight $w_X(u,v) = d_X(u,v)$. The reliable spanner for $(X,d_X)$ is $H = \bigcup_{\sigma\in \Sigma}(X,E_{\sigma},w_X)$. We observe that the total number of edges in $H$ is bounded by
\[
\sum_{\sigma\in\Sigma}n\cdot O\left(\log^{2}n+\nu'^{-1}\log n\cdot\log\log n\right)=n\tau\cdot O\left(\log^{2}n+\nu^{-1}\tau\log n\cdot\log\log n\right)~.
\]
Let $B\subseteq X$ be an oblivious attack. Let $B^+_{\sigma}$ be the faulty extension of $B$ in $H_{\sigma}$, and $B^+ = \cup_{\sigma\in \Sigma}B^+_{\sigma}$ be the faulty extension of $B$ in $H$. We observe that:
\[
\mathbb{E}\left[\left|B^{+}\right|\right]\le|B|+\sum_{\sigma}\mathbb{E}\left[\left|B_{\sigma}^{+}\setminus B\right|\right]\le|B|+\tau\nu'\cdot|B|\le(1+\nu)\cdot|B|~.
\]
It remains to show the stretch guarantee of $H$. For every pair of points $x,y\notin B^+$, let $\sigma \in \Sigma$ be an ordering satisfying the triangle-LSO property for $x$ and $y$: for every $a,b\in X$ such that $x\preceq_{\sigma} a \preceq_{\sigma} b \preceq_{\sigma} y$, $d_X(a,b)\leq \rho\cdot d_X(x,y)$. (Here we assume w.l.o.g. that $x\preceq_{\sigma}y$.) Since $x,y\notin B^+$, we have $x,y \notin B_{\sigma}^+$. Since $H_\sigma=(X,E_{\sigma},w_{\sigma})$ is a $2$-hop $1$-spanner for $P_{\sigma}$, there must be $z\not\in B$ such that $x\preceq_{\sigma}z\preceq_{\sigma}y$ and $\{x,z\},\{z,y\}\in E_\sigma$. We conclude that
\[
d_{H}(x,y)\le w_X(x,z) + w_X(z,y) = d_{X}(x,z)+d_{X}(z,y)\le2\rho\cdot d_{X}(x,y)~;
\]
the theorem follows.
§.§ $2$-hop spanner: Proof of <Ref>
In this section, we construct a $2$-hop reliable spanner for the path graph $P_n$. Our construction uses the concept of a shadow, introduced by Buchin et al. <cit.>.
Let $B$ be a subset of $[n]$.
The left $\alpha$-shadow of $B$, denoted $\mathcal{S}_L(\alpha,B)$, is the set of all vertices $b$ such that for some $a\le b$, $|[a:b]\cap B|\ge\alpha\cdot|[a:b]|$. The right $\alpha$-shadow $\mathcal{S}_R(\alpha,B)$ is defined symmetrically. The set $\mathcal{S}(\alpha,B)=\mathcal{S}_L(\alpha,B)\cup \mathcal{S}_R(\alpha,B)$ is called the $\alpha$-shadow of $B$.
Buchin et al. <cit.> showed that for every $\alpha\in (\frac23,1)$ and every $B\subseteq [n]$, $|\mathcal{S}_L(\alpha,B)|\le\frac{|B|}{2\alpha-1}$. In particular, for $\beta<\frac13$,
\begin{equation}
|\mathcal{S}(1-\beta,B)\setminus B|\le\frac{|B|}{2(1-\beta)-1}-|B|\le(1+6\beta)|B|-|B|=6\beta\cdot|B|~.\label{eq:shaddow}
\end{equation}
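For intuition, the left shadow can be computed by brute force directly from the definition (a Python sketch, not code from the paper; here we allow $a\le b$ so that $B$ is always contained in its shadow):

```python
def left_shadow(alpha, B, n):
    """Brute-force left alpha-shadow of B inside [1..n]: all b such that for
    some a <= b, |[a:b] ∩ B| >= alpha * |[a:b]|."""
    Bset = set(B)
    shadow = set()
    for b in range(1, n + 1):
        count = 0
        for a in range(b, 0, -1):
            count += a in Bset
            if count >= alpha * (b - a + 1):
                shadow.add(b)
                break
    return shadow
```

This quadratic-time computation is only for illustration; the point of the bound above is that the shadow is not much larger than $B$ itself.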
We also use a construction of a (non-reliable) $2$-hop $1$-spanner for path graph $P_n$ with $O(n\log n)$ edges <cit.>.
There is a $2$-hop $1$-spanner for $P_n$ with $O(n\log n)$ edges.
We are now ready to prove <Ref>, which we restate below for convenience.
We will assume that $\nu>\frac1n$, as otherwise, we can take all the possible edges to the spanner. We will construct a spanner with reliability parameter $O(\nu)$; the lemma will follow by rescaling.
Let $M=\log n$, and for every $j\in[0,M]$, let $p_{j}=c\left(\ln n+\nu^{-1}\ln(\log n)\right)\cdot2^{-j}$ for a constant $c$ to be determined later. We construct a vertex set $N_j$ by sampling each vertex with probability $\min\{1,p_{j}\}$. Vertices in $N_j$ are called centers.
The spanner $H$ is constructed as follows: for every vertex $v$ and parameter $j\in[0,M]$, we add edges from $v$ to $[v-2^{j+1}:v+2^{j+1}] \cap N_{j}$, that is, to every vertex $u\in N_{j}$ such that $|v-u|\le2^{j+1}$. We also add to $H$ a $2$-hop spanner of size $O(n\log n)$ guaranteed by <Ref>.
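For intuition, the sampled part of the construction can be sketched in Python (a toy sketch; the deterministic $O(n\log n)$ backbone is omitted, and the constant `c` is a placeholder for the constant in the lemma):

```python
import math
import random

def sampled_center_edges(n, nu, c=4.0, seed=0):
    """Sketch of the sampled part of the construction: for each level j,
    sample centers N_j with probability ~ c*(ln n + ln(log n)/nu)*2^-j,
    and connect every vertex to all centers within distance 2^(j+1)
    on the path. (The deterministic 2-hop backbone is omitted.)"""
    rng = random.Random(seed)
    M = max(1, math.ceil(math.log2(n)))
    edges = set()
    for j in range(M + 1):
        p = min(1.0, c * (math.log(n) + math.log(max(math.log(n), 2.0)) / nu) * 2.0 ** (-j))
        centers = [v for v in range(n) if rng.random() < p]
        for v in range(n):
            for u in centers:
                if u != v and abs(u - v) <= 2 ** (j + 1):
                    edges.add((min(u, v), max(u, v)))
    return edges
```

At the lowest levels the sampling probability is capped at $1$, so nearby pairs are always directly connected; the analysis below bounds the expected number of edges and the faulty extension.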
To bound the number of edges, we charge only to the centers; each edge between two different centers is charged twice. The expected number of edges charged to a vertex $a$ is
\[
\sum_{j=0}^{M}\Pr\left[a\in N_{j}\right]\cdot2^{j+1}\cdot2\le c\left(\ln n+\nu^{-1}\ln\log n\right)\cdot\sum_{j=0}^{M}2^{-j}\cdot2^{j+2}=O(\log n)\cdot\left(\ln n+\nu^{-1}\ln\log n\right)~,
\]
implying that the expected number of edges is $n\cdot O\left(\log^{2}n+\nu^{-1}\log n\cdot\log\log n\right)$. The worst-case guarantee promised in the lemma follows by using <Ref>.
Let $B$ be an oblivious attack. The faulty extension $B^{+}$ will consist of $B$ and all the vertices $v$ such that for some $j$, (a) $v>2^{j}$ and $[v-2^{j}, v]\cap N_{j}\subseteq B$ or (b) $v\le n-2^j$ and $[v, v+2^{j}]\cap N_{j}\subseteq B$. This way, for every pair of vertices $u,v\not\in B^+$ whose distance is in $[2^{i},2^{i+1})$, there is a vertex $z\in N_{i}\setminus B$ such that both will have edges to $z$. This implies the monotone $2$-hop property for $u$ and $v$.
Finally we analyze the expected size of $B^{+}$. If $B=\emptyset$, then $B^+=\emptyset$ as we added a $2$-hop spanner.
Otherwise, a vertex $v\notin B$ joins $B^+$ iff $N_j\cap[v:v+2^{j}]\subseteq B$ or $N_j\cap[v-2^{j}:v]\subseteq B$ for some $j\in [0,M]$.
Fix an $i \leq \lfloor\log\frac{1}{3\nu}\rfloor$. For every vertex $a\notin \mathcal{S}_L(1-2^{i}\cdot \nu,B)$ and every $j\in [0,M]$, it holds that $|[a,a+2^{j}]\cap B|\le (1-2^{i}\cdot \nu)\cdot 2^j$ by the definition of the left shadow. In particular, there are at least $2^{i+j}\cdot \nu$ vertices in $[a,a+2^{j}]\setminus B$. Vertex $a$ joins $B^+$ iff for some $j$, $([a:a+2^{j}]\setminus B)\cap N_j = \emptyset$ or $([a-2^{j}:a]\setminus B)\cap N_j = \emptyset$. (Note that if the distance from $a$ to $1$ (resp. $n$) is smaller than $2^{j}$, then the corresponding condition in the definition of $B^{+}$ does not apply.) We observe that:
\begin{align*}
\Pr\left[[a,a+2^{j}]\cap N_{j}\subseteq B\mid a\notin\mathcal{S}(1-2^{i}\cdot\nu,B)\right] & \le\left(1-2^{-j}\cdot c\cdot\nu^{-1}\cdot\ln(\log n)\right)^{2^{i+j}\cdot\nu}\\
& \le e^{-2^{-j}\cdot c\cdot\nu^{-1}\cdot\ln\log n\cdot2^{i+j}\cdot\nu}=(\log n)^{-c\cdot 2^{i}}~.
\end{align*}
Thus, the probability that $a$ is added to $B^+$ is at most:
\begin{align*}
\Pr\left[a\in B^{+}\mid a\notin\mathcal{S}(1-2^{i}\cdot\nu,B)\right] & \le\sum_{j=1}^{M}2\cdot\Pr\left[[a,a+2^{j}]\cap N_{j}\subseteq B\mid a\notin\mathcal{S}(1-2^{i}\cdot\nu,B)\right]\\
& \le\sum_{j=1}^{M}2\cdot(\log n)^{-c\cdot 2^{i}}\le2\cdot(\log n)^{-c\cdot 2^{i}+1}~.
\end{align*}
On the other hand, for a vertex $a\notin\mathcal{S}(\frac{2}{3},B)$, we have that
\[
\Pr\left[[a,a+2^{j}]\cap N_{j}\subseteq B\mid a\notin\mathcal{S}(\frac{2}{3},B)\right]\le\left(1-2^{-j}\cdot c\cdot\ln n\right)^{\frac{1}{3}\cdot2^{j}}\le e^{-2^{-j}\cdot c\cdot\ln n\cdot\frac{1}{3}\cdot2^{j}}=n^{-\frac{c}{3}}~,
\]
implying that
\begin{align*}
\Pr\left[a\in B^{+}\mid a\notin\mathcal{S}(\frac{2}{3},B)\right] & \le\sum_{j=1}^{M}2\cdot\Pr\left[[a,a+2^{j}]\cap N_{j}\subseteq B\mid a\notin\mathcal{S}(\frac{2}{3},B)\right]\\
& \le\sum_{j=1}^{M}2\cdot n^{-\frac{c}{3}}\le2\log n\cdot n^{-\frac{c}{3}}~.
\end{align*}
We conclude that
\begin{align*}
\mathbb{E}\left[B^{+}\right] & \le|\mathcal{S}(1-\nu,B)|+\sum_{i=1}^{\log\frac{1}{3\nu}}|\mathcal{S}(1-2^{i}\cdot\nu,B)\setminus B|\cdot\Pr\left[a\in B^{+}\mid a\notin\mathcal{S}(1-2^{i-1}\cdot\nu,B)\right]\\
& \phantom{~\le|\mathcal{S}(1-\nu,B)|}+\left|[n]\setminus\mathcal{S}(\frac{2}{3},B)\right|\cdot\Pr\left[a\in B^{+}\mid a\notin\mathcal{S}(\frac{2}{3},B)\right]\\
& \overset{(\ref{eq:shaddow})}{\le}(1+6\nu)|B|+\sum_{i=1}^{\log\frac{1}{3\nu}}6\cdot2^{i}\nu\cdot|B|\cdot2\cdot(\log n)^{-c\cdot2^{i-1}+1}+n\cdot2\log n\cdot n^{-\frac{c}{3}}\quad=\quad(1+O(\nu))|B|~,
\end{align*}
for a sufficiently large constant $c$.
§ LEFT-SIDED LOCALITY SENSITIVE ORDERING
We use left-sided LSOs to construct reliable spanners with optimal stretch for trees, planar graphs, bounded treewidth graphs, and minor-free graphs. We say that an ordering of $(X,d_X)$ is partial if it is a linear ordering of a subset of the points in $X$.
Given a metric space $(X,d_{X})$, we say that a collection $\Sigma$
of partial orderings is a $(\tau,\rho)$-left-sided LSO if every point $x\in X$ belongs to at most $\tau$ orderings in the collection, and for every $x,y\in X$, there is an order $\sigma\in\Sigma$ such that for every $x'\preceq_\sigma x$ and $y'\preceq_\sigma y$ it holds that $d_X(x',y')\le\rho \cdot d_X(x,y)$.
Unlike the classic LSO and the triangle-LSO, a $(\tau,\rho)$-left-sided LSO may contain $\Omega(n)$ (partial) orderings; it only guarantees that each point belongs to at most $\tau$ (partial) orderings.
Note that the stretch guarantee of a $(\tau,\rho)$-left-sided LSO also fulfills the stretch requirement of a $(\tau,\rho)$-LSO.
To see this, consider a pair of points $x,y\in X$, and let $\sigma\in \Sigma$ be the ordering guaranteed above, where w.l.o.g $x\preceq_\sigma y$. Then for every $z$ such that $x \preceq_\sigma z \preceq_\sigma y$, it holds that $d_X(x,z)\le \rho \cdot d_X(x,y)$ (as $x\preceq_\sigma z$ and $z\preceq_\sigma y$). In particular $z\in B_X(x,\rho\cdot d_X(x,y))$.
However, there is no $f(.)$ such that a $(\tau,\rho)$-left-sided LSO will be guaranteed to be an $(f(\tau),\rho)$-LSO as the number of orderings in a $(\tau,\rho)$-left-sided LSO, by definition, could be $\Omega(n)$ even if $\tau$ is a constant.
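To make the definition concrete, the following Python sketch (all names are ours) checks by brute force whether a given collection of partial orderings is a $(\tau,\rho)$-left-sided LSO for a small finite metric; it mirrors the two conditions above directly and is intended only as an executable restatement of the definition.

```python
from itertools import combinations

def is_left_sided_lso(points, d, orderings, tau, rho):
    """Brute-force check of the (tau, rho)-left-sided LSO definition.

    `orderings` is a list of partial orderings, each a list of points;
    `d` is the metric, d(x, y) -> float.
    """
    # Condition 1: each point belongs to at most tau orderings.
    for x in points:
        if sum(x in sigma for sigma in orderings) > tau:
            return False
    # Condition 2: every pair x, y has a witnessing ordering sigma such that
    # all x' preceding (or equal to) x and y' preceding (or equal to) y in
    # sigma satisfy d(x', y') <= rho * d(x, y).
    for x, y in combinations(points, 2):
        ok = False
        for sigma in orderings:
            if x not in sigma or y not in sigma:
                continue
            pre_x = sigma[: sigma.index(x) + 1]
            pre_y = sigma[: sigma.index(y) + 1]
            if all(d(a, b) <= rho * d(x, y) for a in pre_x for b in pre_y):
                ok = True
                break
        if not ok:
            return False
    return True
```

For instance, a single ordering of the uniform metric on three points is a $(1,1)$-left-sided LSO, while a single ordering of three collinear points under $d(a,b)=|a-b|$ is not (the far-away first point violates the left-sided condition for the rightmost pair).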
We show that minor-free metrics admit a left-sided LSO where each point belongs to a small number of orderings.
theoremLeftLSOTreewidth Treewidth-$k$ metrics admit a $(k\log n, 1)$-left-sided LSO.
In particular, tree metrics admit a $(\log n, 1)$-left-sided LSO.
theoremLeftLSOMinorFree For every $\eps\in(0,\frac12)$, every graph with an $\SPD$ of depth $k$ admits an $(O(\frac{k}{\epsilon}\log n),1+\eps)$-left-sided-LSO.
In particular, every graph
excluding a fixed minor (e.g. a planar graph) admits an $(O(\frac{1}{\epsilon}\log^2 n),1+\eps)$-left-sided-LSO.
The proof of <Ref> is deferred to <Ref>.
The definition of an $\SPD$ and the proof of <Ref> are deferred to <Ref>.
Reliable Spanners from left-sided LSO's. Similar to <Ref>, we can construct a reliable spanner with a nearly linear number of edges from a left-sided LSO.
[Reliable Spanner From Left-sided LSO]theoremSpannerLeftLSO
Consider an $n$-point metric space $(X,d_X)$ that admits a $(\tau,\rho)$-left-sided LSO. Then for every $\nu\in(0,1)$, $X$ admits an oblivious $\nu$-reliable, $(2\rho)$-spanner with $n\cdot O(\nu^{-1}\tau^{2}\log n)$ edges.
The proof of <Ref> is deferred to <Ref>. By applying <Ref> on <Ref> and <Ref> we have:
Treewidth-$k$ graphs admit oblivious $\nu$-reliable $2$-spanners with $n\cdot O(\nu^{-1}k^{2}\log^{3}n)$ edges. In particular, trees admit oblivious $\nu$-reliable $2$-spanners with $n\cdot O(\nu^{-1}\log^{3}n)$ edges.
For every $\eps\in(0,\frac12)$, every graph with an $\SPD$ of depth $k$ admits a
$\nu$-reliable $2(1+\eps)$-spanner with $n\cdot O(\nu^{-1}\frac{k^2}{\eps^{2}}\log^{3}n)$ edges.
In particular, every graph
excluding a fixed minor (e.g. a planar graph) admits a $\nu$-reliable $2(1+\eps)$-spanner with $n\cdot O(\nu^{-1}\eps^{-2}\log^{5}n)$ edges.
Finally, we observe that the stretch of reliable spanners in <Ref> and <Ref> is essentially optimal even for tree metrics.
For every $\nu\in(0,1)$, and $n\in \N$, there is an $n$-vertex unweighted tree $T=(V,E)$ such that every oblivious $\nu$-reliable spanner with stretch $t<2$ has $\Omega(n^{2})$ edges.
Let $T$ be the star graph. That is, there is a single vertex $r$ of degree $n-1$, and $n-1$ vertices of degree $1$. Let $H$ be a $\nu$-reliable spanner of stretch $t$, and let $B=\{r\}$ be an attack. Every pair of vertices $u,v\notin B^+$ must be adjacent in $H$, as every path between them with at least $2$ edges has weight at least $4>t\cdot d_T(u,v)$. It follows that $H$ must contain $\mathbb{E}\left[{\left|V\setminus B^{+}\right| \choose 2}\right]=\Omega\left(\left(n-\mathbb{E}\left[|B^{+}|\right]\right)^{2}\right)=\Omega(n^{2})$ edges in expectation.
Note that <Ref> will also hold for ultrametrics, as the uniform metric on $n$ points is an ultrametric (here the attack $B$ could be $\emptyset$). In particular, the stretch parameter in <Ref> is tight.
§.§ $k$-separable graphs
A graph family $\mathcal{G}$ is $k$-separable if for every graph $G=(V,E,w)\in\mathcal{G}$, and every induced subgraph $G'=G[V']$ for $V'\subseteq V$,
there is a set $K\subseteq V'$ of at most $k$ vertices such that each connected component in $G'\setminus K$ contains at most $|V'|/2$ vertices.
Note that trees belong to a $1$-separable graph family and treewidth $k$ graphs belong to a $k$-separable graph family. Hence <Ref> is an immediate corollary of the following lemma.
[$k$-Separators to LSO]lemmaSeperatorToLSO
Let $\mathcal{G}$ be a $k$-separable family, then every $n$-vertex graph $G=(V,E,w)\in\mathcal{G}$ admits a $(k\log n,1)$-left-sided LSO.
We recursively construct a set of orderings $\Sigma$; initially, $\Sigma = \emptyset$. Let $K$ be a separator containing at most $k$ vertices such that every connected component in $G\setminus K$ has size at most $\frac{n}{2}$. For every $r\in K$, let $l_r$ be an ordering of $V$ w.r.t. distances from $r$. That is, $l_r=(v_1,v_2,\dots,v_n)$ where $d_G(v_1,r)\le d_G(v_2,r)\le\dots\le d_G(v_n,r)$. The $k$ orderings $\{l_r\}_{r\in K}$ are added to $\Sigma$.
Let $\{G_1,\ldots, G_s\}$ be the set of connected components of $G\setminus K$. For each $G_i$, let $\Sigma_i$ be the set of orderings obtained by recursively applying the construction to $G_i$; each ordering in $\Sigma_i$ is a partial ordering of $G$. We then update $\Sigma \leftarrow \Sigma\bigcup (\cup_{i\in [1,s]}\Sigma_i)$. This completes the construction of $\Sigma$.
Next, we argue that $\Sigma$ is a $(k\log n,1)$-left-sided LSO. By construction, each vertex belongs to at most $k\log n$ orderings since the depth of the recursion is at most $\log n$ and at each level of the recursion, each vertex belongs to at most $k$ partial orderings.
Finally, consider a pair of vertices $u,v\in V$. Let $P_{u,v}$ be a shortest path from $u$ to $v$ in $G$.
Let $G'$ be the subgraph of $G$ at highest depth in the recursive construction that contains all the vertices of $P_{u,v}$.
Let $K$ be the separator of $G'$ of size at most $k$; there is a vertex $r\in K\cap P_{u,v}$, as otherwise $P_{u,v}$ would survive intact in some connected component of $G'\setminus K$, contradicting the maximality of the depth of $G'$. As $G'$ contains all the vertices of $P_{u,v}$, it holds that $d_{G'}(u,v)=d_{G'}(u,r)+d_{G'}(r,v)=d_{G}(u,r)+d_{G}(r,v)$.
Let $l_r$ be the ordering that is constructed for $r$. Let $u',v'$ be any vertices such that $u'\preceq_{l_r} u$ and $v'\preceq_{l_r} v$. Then:
\begin{align*}
d_{G}(u',v') & \le d_{G}(u',r)+d_{G}(r,v')\le d_{G'}(u',r)+d_{G'}(r,v')\le d_{G'}(u,r)+d_{G'}(r,v)=d_{G}(u,v)~;
\end{align*}
the lemma follows.
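The recursive construction in the proof above can be sketched as follows (a Python illustration with names of our choosing). The caller supplies a separator oracle `find_separator`, which returns the separator $K$ and the connected components; here we instantiate it for the path graph, which is $1$-separable.

```python
def build_orderings(vertices, d, find_separator):
    """Recursive construction from the lemma: at each level, order the
    current vertex set by distance from each separator vertex, then
    recurse on the components. `find_separator(vs)` returns (K, components)."""
    if len(vertices) <= 1:
        return []
    K, components = find_separator(vertices)
    orderings = [sorted(vertices, key=lambda v: d(v, r)) for r in K]
    for comp in components:
        orderings += build_orderings(comp, d, find_separator)
    return orderings

def path_separator(vs):
    """Separator oracle for the path on integer vertices: the middle
    vertex splits the path into two halves of at most |vs|/2 vertices."""
    vs = sorted(vs)
    mid = vs[len(vs) // 2]
    left = [v for v in vs if v < mid]
    right = [v for v in vs if v > mid]
    return [mid], [c for c in (left, right) if c]
```

On the path with $8$ vertices, each vertex ends up in at most $1\cdot\log_2 8=3$ partial orderings, matching the $k\log n$ bound of the lemma for $k=1$.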
§.§ $\SPD$ Graphs
This section is devoted to proving <Ref>.
Abraham <cit.> introduced the family of graphs with a shortest path decomposition (abbreviated $\SPD$) of bounded depth.
A graph has an $\SPD$ of depth $1$ if and only if it is a (weighted) path. A graph $G$ has an $\SPD$ of depth $k \geq 2$ if there exists a shortest path $P$, such that deleting $P$ from the graph $G$ results in a graph whose connected components all have an $\SPD$ of depth at most $k-1$.
It is known (see <cit.>) that minor-free graphs have an $\SPD$ of depth $k = O(\log n)$. However, the family of graphs with bounded $\SPD$ depth is much richer and contains dense graphs with $K_r$ as a minor, for arbitrarily large $r$.
To prove <Ref>, we will show that graphs with an $\SPD$ of depth $k$ admit a $(\frac{k}{\epsilon}\log \frac{n}{\eps},1+\eps)$-left-sided LSO; the second assertion in <Ref> is an immediate corollary.
Our construction will rely on the following lemma, which implicitly appeared in <cit.> (see <cit.> for an explicit proof).
Consider a weighted graph $G=(V,E,w)$ with parameter $\eps\in(0,1)$, and let $P$ be a shortest path in $G$. Then one can find for each $v\in V$ a set of vertices, called landmarks, $L_v$ on $P$ of size $|L_v|=O(\frac1\eps)$, such that for any $v,u\in V$ whose shortest path between them intersects $P$, there exists $x\in L_v$ and $y\in L_u$ satisfying
$d_G(v,x)+d_P(x,y)+d_G(y,u)\le(1+\epsilon)\cdot d_G(v,u)$.
We will assume that $\eps>\frac{1}{n^2}$, as otherwise we can simply return ${n\choose2}$ orderings, where for every pair $u,v$, there is an ordering where $u,v$ are the first two vertices.
We recursively construct a set of orderings $\Sigma$; initially, $\Sigma = \emptyset$. Let $P$ be a shortest path of $G$ such that every component of $G\setminus P$ has an $\SPD$ of depth $k-1$.
We construct a set of orderings for $G$ from $P$ to add to $\Sigma$ as follows. Let $\{L_v\}_{v\in V}$ be the set of landmarks provided by <Ref> w.r.t. $P$. For every vertex $v\in V$, denote $l(v)=|L_v|$. We construct an auxiliary tree $T$ that initially contains the shortest path $P$ only. For every vertex $v\in V$, let $L_v=\{x_1,\dots,x_{l(v)}\}\subseteq P$. We will add the vertices $\{v_i\}_{i=1}^{l(v)}$, which are $l(v)$ copies of $v$, to $T$, and for every $i$ connect $v_i$ to $x_i$ with an edge of weight $d_G(v,x_i)$. Note that $T$ has $\sum_{v\in V}l(v)=O(\frac{n}{\eps})$ vertices. The distances in $T$ dominate the distances in the original graph, that is for every $v_i,u_j$ copies of $v,u$ respectively, it holds that $d_{T}(v_i,u_j)\ge d_G(v,u)$. Additionally, by <Ref>, for every pair of vertices $u,v$, such that the shortest path between them intersects $P$, it holds that $\min_{i,j}d_{T}(v_i,u_j)\le(1+\epsilon)\cdot d_G(v,u)$. Let $\Sigma_T$ be an $(O(\log \frac{n}{\epsilon}),1)$-left-sided LSO for $T$ provided by <Ref>.
Let $\Sigma'_T$ be the collection of orderings $\Sigma_T$ where we delete duplicated occurrences. That is, for every ordering $\sigma\in\Sigma$ and vertex $v$, $\sigma$ might contain multiple copies of $v$: $v_{i_1},v_{i_2},\dots$,
we will keep only the leftmost copy of $v$ in $\sigma$; the resulting ordering is denoted by $\sigma'$. As $\eps>n^{-2}$, each copy $v_i$ of a vertex $v$ appears in at most $O(\log \frac{n}{\epsilon})=O(\log n)$ orderings in $\Sigma_T$. As each vertex has $O(\frac1\eps)$ copies, we conclude that each vertex $v$ appears in at most $O(\frac1\eps\cdot\log n)$ orderings in $\Sigma'_T$.
Let $\{G_1,\ldots, G_s\}$ be the set of connected components of $G\setminus P$. For each $G_i$, let $\Sigma_i$ be the set of orderings obtained by recursively applying the construction to $G_i$; each ordering in $\Sigma_i$ is a partial ordering of $G$. We then set $\Sigma \leftarrow \Sigma'_T\bigcup (\cup_{i\in [1,s]}\Sigma_i)$. This completes the construction of $\Sigma$.
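The deduplication step, which keeps only the leftmost copy of every vertex in an ordering over vertex copies, is mechanical; a minimal Python sketch (representing a copy $v_i$ as the pair `(v, i)`):

```python
def keep_leftmost_copies(sigma):
    """Given an ordering over (vertex, copy_index) pairs, keep only the
    leftmost copy of each vertex, producing an ordering over vertices."""
    seen, out = set(), []
    for v, _i in sigma:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out
```

Since scanning left to right keeps the first occurrence, the relative order of the surviving vertices is exactly the order of their leftmost copies, which is what the stretch argument below relies on.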
Next, we argue that $\Sigma$ is an $(O(\frac{k}{\epsilon}\cdot\log n),1+\eps)$-left-sided LSO. Since the recursion depth is $k$, and each vertex belongs to at most $O(\frac{1}{\epsilon}\cdot\log n)$ partial orderings in each recursive level, it follows that each vertex belongs to at most $O(\frac{k}{\epsilon}\cdot\log n)$ partial orderings in $\Sigma$.
Finally, consider a pair of vertices $u,v\in V$. Let $P_{u,v}$ be a shortest path from $u$ to $v$ in $G$.
Let $G'$ be the subgraph of $G$ at highest depth in the recursive construction that contains all the vertices of $P_{u,v}$.
In particular $d_{G'}(v,u)= d_G(v,u)$.
Let $P$ be the shortest path deleted from $G'$. Let $T$ be the tree constructed for $P,G'$.
By maximality $P\cap P_{u,v}\not= \emptyset$, thus there are two copies $v_i,u_j$ of $v,u$ such that $d_{T}(v_i,u_j)\le(1+\epsilon)\cdot d_{G'}(v,u)=(1+\epsilon)\cdot d_G(v,u)$.
By <Ref>, $\Sigma_T$ contains an ordering $\sigma$ such that $v_i,u_j\in \sigma$,
and for every $x\preceq_\sigma v_i$, $y\preceq_\sigma u_j$ it holds that $d_T(x,y)\le d_T(v_i,u_j)$.
The collection of orderings $\Sigma'_T$ contains an ordering $\sigma'$, which is simply $\sigma$ with removed duplicates. In particular, $\sigma'$ contains only original vertices from $V$.
Let $v_{i'},u_{j'}$ be the leftmost duplicates of $v,u$ (respectively) in $\sigma$.
For every $x,y\in V$ such that $x\preceq_{\sigma'} v$, $y\preceq_{\sigma'}u$, there are copies $x_p,y_q$ of $x,y$ such that $x_p\preceq_\sigma v_{i'}\preceq_\sigma v_i$ and $y_q\preceq_\sigma u_{j'}\preceq_\sigma u_j$. It thus holds that
\[
d_{G}(x,y)\le d_{T}(x_{p},y_{q})\le d_{T}(v_{i},u_{j})\le(1+\eps)\cdot d_{G}(v,u)~.
\]
This completes the proof of <Ref>.
§.§ Reliable Spanners from Left-sided LSO
To create a reliable spanner from a left-sided LSO, we will use left spanners:
Given a path graph $P_n$, a $2$-hop left spanner $H$ is a graph such that for every $a<b$ there is $c\le a$, such that $\{v_c,v_a\},\{v_c,v_b\}\in E(H)$.
A distribution over $2$-hop left spanners $H$ is
$\nu$-reliable if for every attack $B\subseteq X$, there is a superset $B^{+}\supseteq B$ such that $\mathbb{E}[|B^{+}|]\le(1+\nu)|B|$, and for every $a<b$ such that $v_a,v_b\notin B^+$, there is $c\le a$ with $c\notin B$ such that $\{v_c,v_a\},\{v_c,v_b\} \in E(H)$.
An oblivious spanner $\mathcal{D}$ is said to have $m$ edges if every spanner in the support of $\mathcal{D}$ has at most $m$ edges (see <Ref>).
In the following lemma, we construct a reliable left spanner. Interestingly, it is sparser than the reliable $2$-hop spanner for the path graph constructed in <Ref>. The proof is deferred to <Ref>.
For every $\nu\in(0,1)$, the $n$-vertex path $P_n$ admits an oblivious $\nu$-reliable $2$-hop left spanner with $n\cdot O(\nu^{-1}\log n)$ edges.
Using <Ref>, we can construct a reliable spanner using a left-sided LSO, and thus prove <Ref>. The meta theorem is restated below for convenience.
Let $\Sigma$ be a $(\tau,\rho)$-left-sided LSO; $\Sigma$ is a collection of partial orderings. Set $\nu'=\frac{\nu}{\tau}$. For every ordering $\sigma\in \Sigma$ which contains $n_\sigma$ vertices, using <Ref>, we construct a $\nu'$-reliable $2$-hop left spanner $H_\sigma$ with $n_\sigma\cdot O(\nu'^{-1}\log n_\sigma)$ edges.
Set $H=\cup_\sigma H_\sigma$.
The total number of edges is thus bounded by
\[
|H|=\sum_{\sigma}|H_{\sigma}|=\sum_{\sigma}n_{\sigma}\cdot O(\nu'^{-1}\log n_{\sigma})=O(\nu'^{-1}\log n)\cdot\sum_{\sigma}n_{\sigma} = n\cdot O(\nu^{-1}\tau^2\log n)~.
\]
For an attack $B$, let $B^+_\sigma$ be the faulty extension of $B$ w.r.t. $H_\sigma$, and let $B^+=\cup B^+_\sigma$.
For every pair of points $x,y\notin B^+$, let $\sigma$ be the promised left-sided partial ordering from <Ref>. As $x,y\notin B_\sigma^+$, there is some vertex $z\notin B$ such that $z\preceq_{\sigma}x,y$ and $(x,z),(y,z)\in E(H_\sigma)$. We conclude that
\[
d_{H}(x,y)\le d_{X}(x,z)+d_{X}(z,y)\le2\rho\cdot d_{X}(x,y)~.
\]
To bound the size of $B^+$, we observe that
\[
\mathbb{E}\left[\left|B^{+}\right|\right]\le|B|+\sum_{\sigma}\mathbb{E}\left[\left|B_{\sigma}^{+}\setminus B\right|\right]\le|B|+\tau\nu'\cdot|B|\le(1+\nu)\cdot|B|~.
\]
The theorem now follows.
§.§.§ Left Spanner: proof of <Ref>
We restate the lemma for convenience.
We will construct a spanner with reliability parameter $O(\nu)$; afterward, the constants can be readjusted accordingly.
Let $h$ be a universal constant such that $H_{s}-H_{t}=\frac{1}{t+1}+\frac{1}{t+2}+\dots+\frac{1}{s}>h\cdot\ln\frac{s}{t}$.
We create a set $N$ of centers as follows: sample each vertex $a\in[n]$ to $N$ with probability $\frac{1}{a}\cdot\frac{c}{\nu}$ (or $1$ for small enough $a$) where $c=\max\{\frac2h,1\}$.
The spanner $H$ is then defined as follows: for each vertex $a\in V$ set $N_a=[1,a]\cap N$. We add edges between $a$ and all the vertices in $N_a$ (alternatively, each vertex adds edges to all the centers with smaller indices).
Clearly, the expected number of edges is bounded by
\[
\sum_{a=1}^{n}\frac{1}{a}\cdot\frac{c}{\nu}\cdot(n-a)\le n\cdot\frac{c}{\nu}\cdot\sum_{a=1}^{n}\frac{1}{a}=n\cdot O(\nu^{-1}\log n)~.
\]
By <Ref>, we can obtain the same guarantee on the number of edges also in the worst case.
Given an attack $B$, we add a vertex $a$ to $B^+$ iff $N_a\subseteq B$. The stretch bound easily follows: for a pair of vertices $a<b\notin B^+$ there is a center $x\in N_a\subseteq N_b$, thus $\{x,a\},\{x,b\}\in H$ as required.
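The sampling construction and the faulty extension rule above can be sketched in a few lines of Python (the concrete default `c=2.0` is ours; in the proof $c=\max\{\frac2h,1\}$):

```python
import random

def left_spanner(n, nu, c=2.0, rng=None):
    """Sample a set N of centers (vertex a joins N with probability
    min(1, c/(nu*a))) and connect every vertex to all centers with a
    smaller index. Vertices are 1..n."""
    rng = rng or random.Random(0)
    centers = [a for a in range(1, n + 1)
               if rng.random() < min(1.0, c / (nu * a))]
    edges = {(x, a) for a in range(1, n + 1) for x in centers if x < a}
    return centers, edges

def faulty_extension(centers, B, n):
    """A vertex a joins B+ iff N_a = [1, a] ∩ N is fully inside B."""
    Bset = set(B)
    return {a for a in range(1, n + 1)
            if all(x in Bset for x in centers if x <= a)}
```

Note that the first few vertices (those with $a\le c/\nu$) are centers with probability $1$, so any surviving small-index vertex serves as the common $2$-hop midpoint, which is exactly the stretch argument above.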
Finally we bound the expected size of $B^+$ for an attack $B$.
We will begin by analyzing the specific attack $B=[1,x]$ for some $x\in[n]$. Later, we will argue that our analysis holds also for an arbitrary attack $B$.
The vertex $x+i$ joins $B^{+}$ only if none of the vertices $x+1,\dots,x+i$
becomes a center in $N$. The probability of this event is
\[
\Pr\left[x+i\in B^{+}\right]=\prod_{j=1}^{i}\left(1-\frac{1}{x+j}\cdot\frac{c}{\nu}\right)<e^{-\frac{c}{\nu}\sum_{j=1}^{i}\frac{1}{x+j}}<e^{-\frac{c}{\nu}\cdot h\left(\ln\frac{x+i}{x}\right)}\le(1+\frac{i}{x})^{-\frac{2}{\nu}}\,.
\]
The expected size of $B^{+}\setminus B$ is thus bounded by
\[
\mathbb{E}\left[\left|B^{+}\setminus B\right|\right]<\sum_{i=1}^{n-x}(1+\frac{i}{x})^{-\frac{2}{\nu}}<\sum_{i\ge1}(1+\frac{i}{x})^{-\frac{2}{\nu}}
\]
For every $s\ge0$, and $i\in[s\nu x,(s+1)\nu x)$, we
have that $(1+\frac{i}{x})^{-\frac{2}{\nu}}<(1+s\nu)^{-\frac{2}{\nu}}<e^{-\frac{s\nu}{2}\cdot\frac{2}{\nu}}=e^{-s}$.
We conclude:
\[
\mathbb{E}\left[\left|B^{+}\setminus B\right|\right]\le\sum_{s\ge0}\sum_{i\in[s\nu x,(s+1)\nu x)}(1+\frac{i}{x})^{-\frac{2}{\nu}}<\nu x\sum_{s\ge0}e^{-s}=O(\nu)\cdot|B|~.
\]
It follows that $\mathbb{E}\left[\left|B^{+}\right|\right]=(1+O(\nu))|B|$ as required.
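The geometric-grouping bound above can be sanity-checked numerically: the left-hand side $\sum_{i\ge1}(1+\frac ix)^{-\frac2\nu}$ should stay below $\nu x\sum_{s\ge0}e^{-s}=\nu x\cdot\frac{e}{e-1}$. A small Python check (truncation cutoff is ours; the tail decays polynomially with exponent $\frac2\nu\ge2$ and is negligible at the chosen cutoff):

```python
import math

def lhs(x, nu, i_max=100000):
    """Truncated sum_{i>=1} (1 + i/x)^(-2/nu)."""
    return sum((1 + i / x) ** (-2 / nu) for i in range(1, i_max))

def rhs(x, nu):
    """The bound nu * x * sum_{s>=0} e^{-s} = nu * x * e/(e-1)."""
    return nu * x * math.e / (math.e - 1)
```

For example, with $x=100$ and $\nu=\frac12$ the truncated sum is roughly $\frac{x}{3}\approx33$, comfortably below the bound of about $79$.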
Finally, consider an arbitrary attack $B\subseteq [n]$ of size $x$. Let $a_1,a_2,\dots,a_{n-x}$ be the vertices not in $B$, ordered from left to right. Note that $a_i$ has $i$ vertices to the left of (or equal to) it not in $B$, all from the interval $[1,x+i]$. To bound $\Pr\left[a_{i}\in B^{+}\right]$, we observe that this probability is maximized when $B = [1,x]$, since the probability that vertices are sampled to $N$ decreases monotonically. Thus, $\Pr\left[a_{i}\in B^{+}\right]\le\prod_{j=1}^{i}\left(1-\frac{1}{x+j}\cdot\frac{c}{\nu}\right)$. The rest of the analysis of $|B^+|$ follows exactly the same argument.
§ SUBGRAPH RELIABLE CONNECTIVITY PRESERVERS
Classically, throughout the literature, given a graph $G=(V,E,w)$ a spanner $H$ of $G$ is required to be a subgraph of $G$. A natural question in the context of this paper is: Is it possible to construct a reliable spanner with a subquadratic number of edges that only uses edges of the input graph?
Since removing a single vertex could disconnect the graph into two equal parts, a necessary relaxation is to require that distances are preserved only w.r.t. the induced graph $G[V\setminus B]$ (similarly to the case of fault-tolerant spanners).
Here we show that even for the less ambitious task of constructing a $\nu$-reliable connectivity preserver, it is not possible without using $\Omega(n^2)$ edges. To this end, we introduce a generalized definition of reliability, and present an oblivious lower bound and a matching deterministic upper bound.
Given a graph $G=(V,E)$, and a monotonically non-decreasing function $g:\mathbb{N}\rightarrow\mathbb{N}$, a subgraph $H$ of $G$ is a deterministic $g$-reliable connectivity preserver if for every attack $B\subseteq V$, there is a superset $B\subseteq B^{+}$
of size at most $g(|B|)$, such that for every $u,v\in V\setminus B^{+}$, if $u$ and $v$ are connected in $G\setminus B$, then they are also connected in $H\setminus B$.
An oblivious $g$-reliable connectivity preserver is a distribution $\mathcal{D}$ over subgraphs $H$ of $G$, such that for every attack $B$ and $H\in \supp(\mathcal{D})$, there is an algorithm producing a superset $B^{+}$ of $B$ such that, for
every $u,v\notin B^{+}$, if $u$ and $v$ are connected in $G\setminus B$, then they are also connected in $H\setminus B$,
and furthermore, $\mathbb{E}_{H\sim\mathcal{D}}\left[|B^{+}|\right]\le g(|B|)$. We say that an oblivious $g$-reliable connectivity preserver $\mathcal{D}$ has $m$ edges if every graph $H$ in $\supp(\mathcal{D})$ has at most $m$ edges.
Note that using the notation of <Ref>, the rest of the paper is concerned with constructing oblivious $g$-reliable (non-subgraph) spanners for the linear function $g(x)=(1+\nu)x$.
We begin with the upper bound.
Let $k>1$ be an integer and $g_{k}(x)= \Omega(x^k)$. Then every $n$-vertex graph admits a deterministic $g_{k}$-reliable connectivity preserver with $O(n^{1+1/k})$ edges.
The construction is by reduction to $f$-fault-tolerant spanners. Recall that a subgraph $H$ of $G$ is an $f$-vertex-fault-tolerant $t$-spanner if for every subset $B$ of at most $f$ vertices, for every $u,v\in V\setminus B$, $d_{H\setminus B}(u,v)\le t\cdot d_{G\setminus B}(u,v)$. Bodwin and Patel <cit.> constructed $f$-vertex-fault-tolerant $(2t-1)$-spanners with $O(n^{1+\frac1t}\cdot f^{1-\frac1t})$ edges (later Bodwin, Dinitz, and Robelle <cit.> showed how to construct such spanners efficiently). Assume that $g_{k}(x)\geq cx^k$ for some constant $c$. Set $f=(n/c)^{\frac{1}{k}}$, and construct an $f$-vertex-fault-tolerant $O(\log n)$-spanner $H$ with $O(n\cdot f)=O(n^{1+1/k})$ edges.
We argue that $H$ is a deterministic $g_{k}$-reliable connectivity preserver. Clearly $H$ has $O(n^{1+1/k})$ edges. Consider an attack $B\subseteq V$,
* If $|B|\le f$, set $B^{+}=B$. Clearly $|B^+|\le g(|B|)$, and for every $u,v\in V\setminus B^+$, if $u,v$ are connected in $G\setminus B$, then $d_{H\setminus B}(u,v)\le t\cdot d_{G\setminus B}(u,v)<\infty$, and in particular $u$ and $v$ are connected in $H\setminus B$.
* Else, $|B|>f$, set $B^{+}=V$. Then $|B^+|=n= c((n/c)^{\frac1k})^k\le g(|B|)$. The connectivity preservation holds trivially.
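The case split above is simple enough to state as executable pseudocode; the following Python sketch (names are ours) returns the faulty extension $B^+$ prescribed by the proof:

```python
def extended_attack(B, n, k, c=1.0):
    """Case split from the proof: if |B| is within the fault tolerance
    f = (n/c)^(1/k), keep B+ = B; otherwise give up and declare every
    vertex faulty, B+ = V."""
    f = (n / c) ** (1.0 / k)
    return set(B) if len(B) <= f else set(range(n))

def g_k(x, k, c=1.0):
    """The reliability budget g_k(x) = c * x^k."""
    return c * x ** k
```

In both branches $|B^+|\le g_k(|B|)$: trivially in the first, and because $n=c\left((n/c)^{1/k}\right)^k\le g_k(|B|)$ whenever $|B|>f$ in the second.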
Note that in <Ref> we actually constructed a $g_k$-reliable deterministic $O(\log n)$-spanner. One can also observe that by the same construction, we get the following corollary.
Let $k>1$ be an integer and $g_{k}(x)= \Omega(x^k)$. Then every $n$-vertex graph $G=(V,E,w)$ admits a deterministic $g_{k}$-reliable $(2t-1)$-spanner with $O(n^{1+\frac{t+k-1}{t\cdot k}})$ edges that only uses edges of $G$.
Next, we provide a matching lower bound for the connectivity preserver; our lower bound is actually stronger as it holds in the oblivious case.
Let $k>1$ be an integer and $g_{k}(x)= O(x^k)$. For every large enough $n$, there is an $n$-vertex graph $G$ such that every oblivious $g_{k}$-reliable connectivity preserver has $\Omega(n^{1+1/k})$ edges.
For simplicity, we show the lower bound for $g_k(x) = x^k$; the lower bound could be extended to any function $g_{k}(x)\leq c_0\cdot x^k$ for some constant $c_0$ by adjusting the constants in our proof.
We define a graph $G$ to be a “thick” cycle. Specifically, for $c=12$ we will have $s=cn^{1-\frac{1}{k}}$ sets $A_0,A_1,\dots,A_{s-1}$, each containing $\frac{1}{c}n^{1/k}$ vertices. There is an edge between $v\in A_i$ and $u\in A_j$ if and only if $i-j\equiv\pm1\pmod{s}$.
All the index computation in the rest of the proof will be done modulo $s$, and we will omit it in the notation.
We will also ignore rounding issues (which can be easily fixed).
Note that $|V|=n$, while $|E|=cn^{1-\frac{1}{k}}\cdot\frac{1}{c^{2}}n^{\frac{2}{k}}=\frac{1}{c}n^{1+1/k}$. See <Ref> for illustration.
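The thick cycle and its edge count are easy to reproduce programmatically; the following Python sketch (ours) builds the graph for parameter values where the group sizes are integral and confirms $|E|=\frac1c n^{1+1/k}$:

```python
def thick_cycle(n, k, c=12):
    """Build the 'thick' cycle: s = c * n^(1-1/k) groups of n^(1/k)/c
    vertices each, with a complete bipartite graph between consecutive
    groups (indices mod s). n must make the group sizes integral."""
    group = round(n ** (1 / k) / c)
    s = n // group
    A = [list(range(i * group, (i + 1) * group)) for i in range(s)]
    edges = {frozenset((u, v))
             for i in range(s) for u in A[i] for v in A[(i + 1) % s]}
    return A, edges
```

For $k=2$ and $n=576$ we get $s=288$ groups of $2$ vertices and $288\cdot4=1152=\frac{1}{12}\cdot576^{3/2}$ edges, matching the count in the text.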
We begin by proving that every deterministic reliable connectivity preserver has $\Omega(n^{1+1/k})$ edges. Later, we will generalize to the oblivious case.
We argue that only the graph $G$ itself is a deterministic $g_{k}$-reliable connectivity preserver. Suppose for contradiction that there is a $g_{k}$-reliable connectivity preserver $H$ missing the
edge $\{u,v\}$, where $u\in A_i$ and $v\in A_{i+1}$. Consider the attack $B=\left(A_i\cup A_{i+1}\cup A_{i+\frac s2}\right)\setminus \{u,v\}$. Note that $|B|<\frac{3}{c}\cdot n^{1/k}$.
Note that the graph $G\setminus B$ is connected.
The graph $G'$ obtained by removing the edge $\{u,v\}$ from $G\setminus B$ has two connected components $C_1=\{v\}\cup A_{i+2}\cup \cdots\cup A_{i+\frac s2-1}$ and $C_2=A_{i+\frac s2+1}\cup\cdots \cup A_{i-1}\cup\{u\}$; see <Ref> for an illustration.
The set $B^+$ must contain all the vertices in either $C_1$ or $C_2$, as otherwise there will be vertices connected in $G\setminus B$, but not connected in $H\setminus B\subseteq G'\setminus B$.
It follows that $|B^+|\ge\min\{|C_1\cup B|,|C_2\cup B|\}\ge\frac n2$.
But $g_{k}(|B|)<\left(\frac{3}{c}\cdot n^{1/k}\right)^{k}=\left(\frac{3}{c}\right)^{k}\cdot n<\frac{n}{4}\le|B^+|$,
a contradiction. It follows that every deterministic $g_{k}$-reliable
connectivity preserver has $\Omega(n^{1+1/k})$ edges.
Next we generalize the lower bound to the oblivious case. Let $\mathcal{D}$ be a distribution over connectivity preservers, and suppose
that there is an edge $e=\{u,v\}$ as above such that $\Pr_{H\sim\mathcal{D}}\left[e\in H\right]<\frac{1}{2}$.
Then using the same argument as above, with the same attack $B$ (w.r.t. $\{u,v\}$) we get that in all the preservers $H\in\supp(\mathcal{D})$ not containing $\{u,v\}$, it holds that $|B^+|\ge\frac n2$. We conclude
\[
\mathbb{E}_{H\sim\mathcal{D}}[|B^{+}|]\ge\Pr\left[e\notin H\right]\cdot\mathbb{E}_{H\sim\mathcal{D}}[|B^{+}|\mid e\notin H]\ge\frac{1}{2}\cdot\frac{n}{2}~.
\]
As $g_{k}(|B|)<\frac{n}{4}$, this contradicts the reliability guarantee; hence $H$ contains each edge $\{u,v\}$ of $G$ with probability at least $\frac12$.
$\mathbb{E}_{H\sim\mathcal{D}}\left[|H|\right]\ge\frac{1}{2}|G|=\Omega(n^{1+1/k})$. The theorem now follows.
On the left, the thick cycle consisting of $12$ sets $A_0,\dots,A_{11}$ is illustrated; there is a complete bipartite graph between $A_i$ and $A_{i+1}$.
On the right, the graph $G'$ lacking an edge $\{u,v\}$ (colored in blue) is illustrated. The attack $B=\left(A_i\cup A_{i+1}\cup A_{i+\frac s2}\right)\setminus \{u,v\}$ (encircled in red) disconnects $G'$ into two nearly equal halves, while keeping $G\setminus B$ connected.
By setting $g(x) = (1+\nu)x$, we obtain a lower bound $\Omega(n^2)$ on the number of edges of any oblivious $\nu$-reliable subgraph connectivity preserver, and hence any oblivious $\nu$-reliable subgraph $t$-spanner for any finite $t$.
§ CONCLUSIONS
In this paper, we have presented different types of locality-sensitive orderings and used them to construct reliable spanners.
For the construction of the LSO's, we introduced and constructed ultrametric covers. Finally, in order to use the LSO's to construct reliable spanners, we construct $2$-hop spanners and left spanners for the path graph. Several open questions naturally arise from our work:
* Can we construct a $\nu$-reliable $2$-hop $1$-spanner for the path graph $P_n$ with $O(n\log n)$ edges for constant $\nu$? Note that a $2$-hop spanner for the path graph $P_n$, even without any reliability guarantee, must contain $\Omega(n\log n)$ edges (see Exercise 12.10 in <cit.>).
* A major open question is the construction of deterministic reliable spanners for general metric spaces. <cit.> constructed deterministic reliable $O(t^2)$-spanners for general metrics with $\tilde{O}(n^{1+\frac1t})$ edges, and deterministic reliable $O(t)$-spanners for trees and planar graphs with $\tilde{O}(n^{1+\frac1t})$ edges, while showing an $\Omega(n^{1+\frac1t})$ lower bound on the number of edges in a deterministic reliable $t$-spanner for the uniform metric.
Using the new LSO's constructed in this paper, it could be possible to improve the stretch parameters by a constant factor and remove the dependency on aspect ratio from the sparsity. However, the lower bound for uniform metrics applies to trees and planar graphs as well; thus using $\tilde{O}(n^{1+\frac1t})$ edges to obtain stretch $t$ is necessary.
On the other hand, general metrics are far from understood. Closing the gap between the current $O(t^2)$ upper bound to the $t$ lower bound is a fascinating open question.
* Can we construct a reliable spanner of stretch $2$ (as opposed to stretch $2+\epsilon$ presented in this paper) for planar metrics with a nearly linear number of edges?
# Channelized Axial Attention –
Considering Channel Relation within Spatial Attention for Semantic
Segmentation
Ye Huang1, Di Kang2, Wenjing Jia1, Xiangjian He1, Liu Liu3
###### Abstract
Spatial and channel attentions, modelling the semantic interdependencies in
spatial and channel dimensions respectively, have recently been widely used
for semantic segmentation. However, computing spatial and channel attentions
separately sometimes causes errors, especially for those difficult cases. In
this paper, we propose Channelized Axial Attention (CAA) to seamlessly
integrate channel attention and spatial attention into a single operation with
negligible computation overhead. Specifically, we break down the dot-product
operation of the spatial attention into two parts and insert channel relation
in between, allowing for independently optimized channel attention on each
spatial location. We further develop grouped vectorization, which allows our
model to run with very little memory consumption without slowing down the
running speed. Comparative experiments conducted on multiple benchmark
datasets, including Cityscapes, PASCAL Context, and COCO-Stuff, demonstrate
that our CAA outperforms many state-of-the-art segmentation models (including
dual attention) on all tested datasets.
Figure 1: Different dual attention designs: (a) Parallel dual attention sums
the results from spatial and channel attentions directly, which may cause
conflicts because spatial and channel attentions are focusing on different
aspects. (b) Sequential dual attention performs spatial attention after
channel attention, where the spatial attention may override correct features
extracted by the channel attention. (c) Our channelized attention seamlessly
merges the spatial and channel attentions into a single operation (see Sect.
4.2), removing the potential conflict caused by (a) or (b).
## 1 Introduction
Semantic segmentation is a fundamental task in many computer vision
applications, which assigns a class label to each pixel in the image. Most of
the existing approaches (Yuan, Chen, and Wang 2020; Yang et al. 2018; Fu et
al. 2019; Li et al. 2019) have adopted a pipeline similar to the one that is
defined by Fully Convolutional Networks (FCNs) (Long, Shelhamer, and Darrell
2015) using fully convolutional layers to output the pixel-level segmentation
results of input images. These approaches have achieved state-of-the-art
performance. After the FCNs, there have been many approaches dedicated to
extracting enhanced pixel representations from the backbone. Earlier
approaches, including PSPNet (Zhao et al. 2017) and DeepLab (Chen et al.
2018), used a Pyramid Pooling Module or an Atrous Spatial Pyramid Pooling
module to expand the receptive field to enhance the representation
capabilities. Recently, many works focus on using the attention mechanisms to
enhance pixel representations. The first attempts in this direction included
Squeeze and Excitation Networks (SENets) (Hu, Shen, and Sun 2018) that
introduced a simple yet effective channel attention module to explicitly model
the interdependencies between channels. Meanwhile, spatial attention relied on
self-attention proposed in (Wang et al. 2018; Vaswani et al. 2017) to model
long-range dependencies in spatial domain, so as to produce more correct pixel
representations. For each pixel in the feature maps, spatial attention
“corrects” its representation with the representations of other pixels
depending on their similarity. In contrast, channel attention identifies
important channels based on all spatial locations and reweights the extracted
features.
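As a minimal NumPy sketch of the two mechanisms contrasted above (our simplification: features are flattened to shape `(N, C)` with `N` spatial positions and `C` channels; no learned projections):

```python
import numpy as np

def spatial_attention(feat):
    """Non-local-style self-attention: each position is re-expressed as
    a similarity-weighted sum over all positions. feat: (N, C)."""
    sim = feat @ feat.T                        # (N, N) dot-product similarity
    attn = np.exp(sim - sim.max(1, keepdims=True))
    attn /= attn.sum(1, keepdims=True)         # softmax over positions
    return attn @ feat                         # (N, C)

def channel_attention(feat):
    """SE-style channel attention: global pooling over all positions
    yields one weight per channel, used to rescale the features."""
    w = 1 / (1 + np.exp(-feat.mean(0)))        # sigmoid of pooled descriptor
    return feat * w                            # broadcast over positions
```

The key structural difference is visible in the code: `spatial_attention` mixes information across positions per channel, while `channel_attention` computes a single global weight per channel and applies it uniformly to every position.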
Parallel dual attention (e.g., (Fu et al. 2019)) was proposed to gain the
advantages of both spatial attention and channel attention. This approach
directly fused their results with an element-wise addition (see Fig. 1(a)).
Although they have achieved improved performance, the relationship between the
contributions of spatial and channel attentions to the final results is
unclear. Moreover, calculating the two attentions separately not only
increases the computational complexity, but may also result in conflicting
importance of feature representations. For example, some channels may appear
to be important in spatial attention for a pixel that belongs to a partial
region in the feature maps. However, channel attention may have its own
perspective, which is calculated by summing up the similarities over the
entire feature maps, and weakens the impact of spatial attention.
Sequential dual attention, which combines channel attention and spatial
attention in a sequential manner (Fig. 1(b)) has similar issues. For example,
channel attention can ignore the partial region representation obtained from
the overall perspective. However, this partial region representation may be
required by spatial attention. Thus, directly fusing the spatial and channel
attention results may yield incorrect importance weights for pixel
representations. In Sect. 5, we develop an approach to visualize the impact of
the conflicting feature representation on the final segmentation results.
In order to overcome the aforementioned issues, we propose Channelized Axial
Attention (CAA), which breaks down the axial attention into more basic parts
and inserts channel attention into them, combining spatial attention and
channel attention together seamlessly and efficiently. Specifically, when
applying the axial attention maps to the input signal (Wang et al. 2018), we
capture the intermediate results of the dot product before they are summed up
along the corresponding axes. Capturing these intermediate results allows
channel attention to be integrated for each column and each row, instead of
computing on the mean or sum of the features in the entire feature maps. We
also develop a novel grouped vectorization approach to maximize the
computation speed in limited GPU memory.
In summary, our contributions in this paper include:
- We are the first to explicitly identify the potential conflicts between spatial and channel attention in existing dual attention designs by visualizing the effects of each attention on the final result.
- We propose a novel Channelized Axial Attention, which breaks down the axial attention into more basic parts and inserts channel attention in between, integrating spatial attention and channel attention together seamlessly and efficiently, with only a minor computation overhead compared to the original axial attention.
- To balance the computation speed and GPU memory usage, a grouped vectorization approach for computing the channelized attentions is proposed. This is particularly advantageous when processing large images.
- Experiments on three challenging benchmark datasets, including PASCAL Context (Everingham et al. 2009), COCO-Stuff (Caesar, Uijlings, and Ferrari 2018) and Cityscapes (Marius et al. 2016), demonstrate the superiority of our approach over the state-of-the-art approaches.
## 2 Related Work
Spatial attention. Non-local networks (Wang et al. 2018) and Transformer
(Vaswani et al. 2017) introduced the self-attention mechanism to examine the
pixel relationship in the spatial domain. It typically calculates a
dot-product or cosine similarity between every pair of pixels in the feature
maps, and recomputes the feature representation of each pixel according to its
similarity with the others. Self-attention has
successfully addressed the feature map coverage issue of multiple fixed-range
approaches (Chen et al. 2017; Zhao et al. 2017), but it also introduces a huge
computation cost, since the attention similarity between each pixel and every
other pixel in the feature maps must be computed. Recently, many approaches
(Huang et al. 2020; Zhu et al. 2019) have been developed to reduce the GPU
memory cost of spatial self-attention.
Channel Attention. Channel attention (Hu, Shen, and Sun 2018) examines the
relationships between channels and enhances the important ones so as to
improve performance. SENet (Hu, Shen, and Sun 2018) performs global average
pooling to obtain mean feature representations, which then pass through two
fully connected layers: the first reduces the number of channels and the
second recovers the original number, yielding channel-wise weights
according to the importance of channels. In DANet (Fu et al. 2019), channel-
wise relationships were modelled by a 2D attention matrix, similar to the
self-attention used in the spatial domain, except that it computed the
attention with a dimension of $C\times C$ rather than $(H\times
W)\times(H\times W)$ (here, $C$ represents the number of channels, and $H$ and
$W$ represent the height and width of the feature maps, respectively).
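The SE-style channel attention described above can be sketched in a few lines; this is a minimal numpy illustration, where `se_gate` is a hypothetical name and the $(C, C/r)$ weight matrices stand in for the two fully connected layers:

```python
import numpy as np

def se_gate(x, w1, w2):
    """SE-style channel attention (after Hu, Shen, and Sun 2018), sketched in numpy.

    x: (H, W, C) feature map; w1: (C, C//r) reduces the channels and
    w2: (C//r, C) restores them; the sigmoid output rescales each channel of x.
    """
    pooled = x.mean(axis=(0, 1))          # global average pooling, shape (C,)
    hidden = np.maximum(pooled @ w1, 0.0) # first FC layer + ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # second FC layer + sigmoid
    return x * gate                       # channel-wise reweighting
```

With zero weights the gate is sigmoid(0) = 0.5, so every channel is simply halved, which makes the rescaling behaviour easy to verify.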
Spatial Attention + Channel Attention. Combining spatial attention and
channel attention can provide fully optimized pixel representations in a
feature map. However, it is not easy to enjoy both advantages seamlessly. In
DANet (Fu et al. 2019), the results of the channel attention and spatial
attention are directly added together. Suppose a pixel belongs to a semantic
class that occupies only a tiny region of the feature maps: spatial attention
can still find its similar pixels, but the channel representation of that
class may carry little weight from the perspective of the entire feature maps,
and so may be ignored in the channel attention computation. Computing
self-attention and channel attention separately (as illustrated in Fig. 1(a))
can therefore produce conflicting results that weaken each other when the two
outputs are summed. Similarly, in the cascaded model (see Fig. 1(b)), the
spatial attention module after the channel attention module may pick up an
incorrect pixel representation enhanced by channel attention, because channel
attention computes channel importance according to the entire feature maps.
Figure 2: Our designs for visualizing the effects of dual attention in the parallel and sequential settings.
Figure 3: Conflicting features in parallel dual attention designs. Top: the bad spatial attention representation negatively influences the good channel attention representation. Bottom: the bad channel attention representation negatively influences the good spatial attention representation. See the boxed areas.
Figure 4: In sequential dual attention designs, the spatial attention representation (the 4th column) ignores the correct channel attention representation (the 3rd column).
## 3 Exploring Conflicting Features
As analyzed in Sect. 2, computing spatial and channel attention separately can
cause conflicting features. To illustrate this conflicting-feature issue in
existing dual attention approaches, we designed a simple way to visualize the
effects of spatial attention and channel attention on the pixel
representations in our experiments.
### 3.1 Visualizing Conflicts
For a parallel dual attention design such as DANet (Fu et al. 2019), since it
has two auxiliary losses for each of spatial attention and channel attention,
we directly use their logits during inference to generate corresponding
segmentation results and compare them with the result generated by the main
logits. For a sequential dual attention design, we add an extra branch that
directly uses the pixel representation obtained from channel attention to
produce the segmentation logits. Note that, since the original sequential
design does not have independent logits after the channel attention module, we
stop the gradient from back-propagating to the main branch, to ensure that our
newly added branch has no effect on the main branch.
### 3.2 Examples of Conflicting Features
To visualize the impact of the feature conflicting issue in the existing dual
attention designs (see Sect. 2), we present examples of the segmentation
results obtained with the conflicting features in the parallel dual attention
design (see Fig. 3) and the sequential dual attention design (see Fig. 4). As
observed from Fig. 3, the parallel design of dual attention directly sums up
the pixel representations obtained from spatial attention and channel
attention. With this approach, the advantages of the pixel representations
obtained from one can be weakened by the other.
The sequential way of combining the dual attentions avoids directly summing
their outputs but still has its own problem. As shown in Fig. 4, the pixel
representation obtained from the spatial attention ignores the correct
representation obtained from the channel attention, and worsens the
prediction.
## 4 Methods
Figure 5: The detailed architecture of the proposed CAA (Row Attention). We
present the way to apply channel attention seamlessly in (b). We mark the
independent spatial dimension in bold style. This allows channel attention to
also consider spatial unique information. Note that, in our design, the
“value” for row attention is obtained from the result of column attention. See
Eq. 11 for details, and the Appendix for the full architecture.
### 4.1 Preliminaries
#### Formulation of the Spatial Self-attention
Following Non Local (Wang et al. 2018) and Stand Alone Self Attention
(Ramachandran et al. 2019), a 2D self-attention operation in spatial domain
can be defined by:
$\mathbf{y}_{i,j}=\sum_{\forall
m,n}f(\mathbf{x}_{i,j},\mathbf{x}_{m,n})g(\mathbf{x}_{m,n}).$ (1)
Here, a pairwise function $f$ computes the similarity between the pixel
representation $\mathbf{x}_{i,j}$ (query) at position $(i,j)$ and the pixel
representation $\mathbf{x}_{m,n}$ (key) at all other possible positions
$(m,n)$. The unary function $g$ maps the original representation at position
$(m,n)$ to a new domain (value). In our work, we use the similarity function
(Wang et al. 2018) as $f$, i.e.,
$f(\mathbf{x}_{i,j},\mathbf{x}_{m,n})=\text{softmax}_{m,n}(\theta(\mathbf{x}_{i,j})^{T}\theta(\mathbf{x}_{m,n})),$
(2)
where $\theta$ is a $1\times 1$ convolution layer transforming the feature
maps $\mathbf{x}$ to a new domain to calculate dot-product similarity (Wang et
al. 2018) between every two pixels. Note that, following a common practice (Li
et al. 2020), we use the same $1\times 1$ convolution weights for both query
and key. Then, these similarities are used as the weights (Eq. (1)) to
aggregate features of all pixels, producing an enhanced pixel representation
$\mathbf{y}_{i,j}$ at position $(i,j)$.
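The spatial self-attention of Eqs. (1) and (2) can be sketched in numpy; `self_attention_2d` is an illustrative name, and the $(C, C)$ matrices stand in for the $1\times 1$ convolutions $\theta$ and $g$ (each is a per-pixel linear map), with the same $\theta$ used for query and key as described above:

```python
import numpy as np

def self_attention_2d(x, w_theta, w_g):
    """Spatial self-attention of Eqs. (1)-(2) on an (H, W, C) feature map.

    w_theta is the shared 1x1 convolution for query and key; w_g maps
    features to the "value" domain. Both are (C, C) matrices here, since
    a 1x1 convolution acts as a per-pixel linear map.
    """
    H, W, C = x.shape
    flat = x.reshape(H * W, C)                  # treat pixels as a sequence
    q = flat @ w_theta                          # shared query/key projection
    logits = q @ q.T                            # dot-product similarity, (HW, HW)
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)     # softmax over (m, n), Eq. (2)
    y = attn @ (flat @ w_g)                     # weighted sum of values, Eq. (1)
    return y.reshape(H, W, C)
```

For a constant input every similarity is equal, so the softmax weights are uniform and each output pixel is the mean of the values, which gives a quick sanity check.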
#### Formulation of the Axial Attention
From the above equations, we can see the computational complexity of the self-
attention module is $O(H^{2}W^{2})$, which requires large computation
resources and hinders real-time applications such as autonomous driving. Several
subsequent works (Huang et al. 2020; Ho et al. 2019) focused on reducing the
computational complexity while maintaining high accuracy. In this work, we
adopt axial-attention to perform spatial attention. In axial attention, the
attention map is calculated for the column and row that cover the current
pixel, reducing the computational complexity to $O(HW^{2}+H^{2}W)$.
For convenience, we call the attention values calculated along the $Y$ axis
“column attention”, and the attention values calculated along the $X$ axis
“row attention”. Similar to Eq. 2, we define axial similarity functions by:
$A_{\text{col}}(\mathbf{x}_{i,j},\mathbf{x}_{m,j})=\text{softmax}_{m}\left(\theta(\mathbf{x}_{i,j})^{T}\theta(\mathbf{x}_{m,j})\right)\
,\ m\in[H],$ (3)
and
$A_{\text{row}}(\mathbf{x}_{i,j},\mathbf{x}_{i,n})=\text{softmax}_{n}\left(\phi(\mathbf{x}_{i,j})^{T}\phi(\mathbf{x}_{i,n})\right)\
,\ n\in[W].$ (4)
Note that we use different feature transformations ($\theta,\phi$) for the row
and column attention calculations.
With the column and row attention maps $A_{\text{col}}$ and $A_{\text{row}}$,
the final value weighted by the column and row attention maps can be
represented by:
$\mathbf{y}_{i,j}=\sum_{\forall
n}\left(A_{\text{row}}(\mathbf{x}_{i,j},\mathbf{x}_{i,n})(\sum_{\forall
m}A_{\text{col}}(\mathbf{x}_{i,j},\mathbf{x}_{m,j})g(\mathbf{x}_{m,n}))\right).$
(5)
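A direct (loop-based) transcription of Eqs. (3)-(5) might look as follows; `axial_attention` is an illustrative name and the $(C, C)$ matrices stand in for the $1\times 1$ convolutions $\theta$, $\phi$ and $g$:

```python
import numpy as np

def axial_attention(x, w_theta, w_phi, w_g):
    """Axial attention of Eqs. (3)-(5) on an (H, W, C) feature map.

    Column attention (theta) is computed along the Y axis, row attention
    (phi) along the X axis; w_* are (C, C) per-pixel linear maps standing
    in for 1x1 convolutions.
    """
    H, W, C = x.shape

    def softmax(v):
        v = v - v.max(axis=-1, keepdims=True)
        e = np.exp(v)
        return e / e.sum(axis=-1, keepdims=True)

    q_col = x @ w_theta
    q_row = x @ w_phi
    g = x @ w_g
    y = np.zeros_like(g)
    for i in range(H):
        for j in range(W):
            # Eq. (3): similarities with pixels in the same column j
            a_col = softmax(q_col[:, j] @ q_col[i, j])   # (H,)
            # inner sum of Eq. (5): column-aggregated values per column n
            inner = np.einsum('m,mnc->nc', a_col, g)     # (W, C)
            # Eq. (4): similarities with pixels in the same row i
            a_row = softmax(q_row[i] @ q_row[i, j])      # (W,)
            y[i, j] = a_row @ inner                      # outer sum of Eq. (5)
    return y
```

Each pixel only attends to its own row and column, which is where the $O(HW^{2}+H^{2}W)$ complexity comes from.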
### 4.2 Channelized Axial Attention
In order to address the feature conflicting issue of the existing dual
attention designs, we propose a novel Channelized Axial Attention (CAA), which
seamlessly combines the advantages of spatial attention and channel attention.
As mentioned in the above sections, feature conflicts may be caused by the
different interests of spatial and channel attentions. Ideally, channel
attention should not ignore the regional features that are interesting to
spatial attention. Conversely, spatial attention should consider channel
relation as well.
Thus, we propose to compute channel attention within spatial attention.
Specifically, we first break down spatial attention into more basic parts
(see below). Then, spatially varying channel attention is generated with
$\bm{\alpha}_{i,j,m,n}$ and $\bm{\beta}_{i,j,n}$. In this way, channel
attention is incorporated into spatial attention, and the regions highlighted
by spatial attention, such as small objects, are no longer ignored, combining
spatial and channel attention seamlessly and effectively.
#### Breaking Down Axial Attention.
For convenience, we firstly define two variables $\bm{\alpha}_{i,j,m,n}$ and
$\bm{\beta}_{i,j,n}$ to represent the intermediate weighted features as
follows:
$\bm{\alpha}_{i,j,m,n}=A_{\text{col}}(\mathbf{x}_{i,j},\mathbf{x}_{m,j})g(\mathbf{x}_{m,n})$
(6)
and
$\bm{\beta}_{i,j,n}=A_{\text{row}}(\mathbf{x}_{i,j},\mathbf{x}_{i,n})\sum_{\forall
m}\bm{\alpha}_{i,j,m,n}.$ (7)
Thus, Eq. (5) can be rewritten as:
$\mathbf{y}_{i,j}=\sum_{\forall n}\bm{\beta}_{i,j,n}=\sum_{\forall
n}A_{\text{row}}(\mathbf{x}_{i,j},\mathbf{x}_{i,n})\left(\sum_{\forall
m}\bm{\alpha}_{i,j,m,n}\right).$ (8)
Eqs. (6), (7) and (8) show that the computation of the dot product is composed
of two steps: 1) Reweighting: re-weighting features on selected locations by
column attention as in Eq. (6) and row attention as in Eq. (7), and 2)
Summation: summing the elements along row and column axes in Eq. (8). Note
that, this breakdown is also applicable to regular self-attention (see Table 3
and Appendix).
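The breakdown of Eqs. (6)-(8) can be illustrated for a single query position $(i, j)$; `axial_intermediates` and its weight matrices are hypothetical stand-ins for the $1\times 1$ convolutions, and summing the returned $\bm{\beta}$ over $n$ (Eq. (8)) reproduces the fused result of Eq. (5):

```python
import numpy as np

def softmax(v):
    v = v - v.max(axis=-1, keepdims=True)
    e = np.exp(v)
    return e / e.sum(axis=-1, keepdims=True)

def axial_intermediates(x, w_theta, w_phi, w_g, i, j):
    """Intermediate terms of Eqs. (6)-(8) for one query position (i, j).

    Returns alpha[m, n, :], beta[n, :] and y_{i,j}; the final sum of beta
    over n (Eq. 8) equals the fused form of Eq. (5). w_* are (C, C)
    per-pixel linear maps standing in for 1x1 convolutions.
    """
    q_col = x @ w_theta
    q_row = x @ w_phi
    g = x @ w_g
    a_col = softmax(q_col[:, j] @ q_col[i, j])   # Eq. (3), shape (H,)
    a_row = softmax(q_row[i] @ q_row[i, j])      # Eq. (4), shape (W,)
    alpha = a_col[:, None, None] * g             # Eq. (6), shape (H, W, C)
    beta = a_row[:, None] * alpha.sum(axis=0)    # Eq. (7), shape (W, C)
    return alpha, beta, beta.sum(axis=0)         # Eq. (8): y_{i,j}
```

Capturing `alpha` and `beta` before the final summations is exactly the hook that the channelization of the next subsection exploits.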
#### Spatially Varying Channel Attention.
With the intermediate results $\bm{\alpha}_{i,j,m,n}$ and $\bm{\beta}_{i,j,n}$
in Eqs. (6) and (7), channel relation can be applied inside spatial attention,
seamlessly combining them into one operation. In addition, channel attention
is now independently conducted on each column or row (on each pixel in regular
self-attention) and provides spatial perspective for the channel relation
modeling, resulting in our spatially varying channel attention. Enhanced with
spatially varying channel attentions, now $C_{\text{col}}$ and
$C_{\text{row}}$ are written as:
$C_{\text{col}}(\bm{\alpha}_{i,j,m,n})=\text{Sigmoid}\left(\text{ReLU}\left(\frac{\sum_{\forall m,j}\bm{\alpha}_{i,j,m,n}}{H\times W}\omega_{c1}\right)\omega_{c2}\right)\bm{\alpha}_{i,j,m,n},$ (9)
and
$C_{\text{row}}(\bm{\beta}_{i,j,n})=\text{Sigmoid}\left(\text{ReLU}\left(\frac{\sum_{\forall i,n}\bm{\beta}_{i,j,n}}{H\times W}\omega_{r1}\right)\omega_{r2}\right)\bm{\beta}_{i,j,n},$ (10)
where $\text{Sigmoid}(\cdot)$ produces the learned channel attention, and
$\omega_{c1}$, $\omega_{c2}$, $\omega_{r1}$ and $\omega_{r2}$ are learned
weights modelling the relationships between channels according to
$\bm{\alpha}_{i,j,m,n}$ and $\bm{\beta}_{i,j,n}$.
Thus, instead of directly using $\bm{\alpha}_{i,j,m,n}$ and
$\bm{\beta}_{i,j,n}$ as in Eq. (8), for each column and row, we obtain the
channelized axial attention features, where the intermediate results
$\bm{\alpha}_{i,j,m,n}$ and $\bm{\beta}_{i,j,n}$ are weighted by the spatially
varying channel attention defined in Eqs. (9) and (10) as:
$\mathbf{y}_{i,j}=\sum_{\forall
n}C_{\text{row}}\left(A_{\text{row}}(\bm{x}_{i,j},\bm{x}_{i,n})(\sum_{\forall
m}C_{\text{col}}(\bm{\alpha}_{i,j,m,n}))\right).$ (11)
Note that the spatially varying channel attention keeps a $W$ dimension after
averaging over the $H\times W$ terms in the channel attention (Fig. 5). Each
row now has its own channel attention thanks to the breakdown of the spatial
axial attention.
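Assuming the intermediate tensor $\bm{\alpha}$ is materialized with explicit $(i, j, m, n, C)$ axes, the spatially varying gate of Eq. (9) might be sketched as follows; `channel_gate` and the bottleneck shapes are illustrative, and in practice the full 5-D tensor is too large for big inputs, which is what motivates the grouped vectorization described later:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def channel_gate(alpha, w1, w2):
    """Spatially varying channel attention of Eq. (9), sketched in numpy.

    alpha: intermediate column-weighted features with shape (I, J, M, N, C),
    i.e. one (C,)-vector per query position (i, j) and source index (m, n).
    The gate averages over m and j (H*W terms), passes the result through a
    two-layer bottleneck (w1: C -> C//r, w2: C//r -> C), and rescales alpha
    channel-wise. The i and n axes survive the average, which is what makes
    the channel attention spatially varying.
    """
    pooled = alpha.mean(axis=(1, 2))                   # (I, N, C): average over j, m
    gate = sigmoid(np.maximum(pooled @ w1, 0.0) @ w2)  # (I, N, C)
    return alpha * gate[:, None, None, :, :]           # broadcast back over j, m
```

Unlike a plain SE gate, the pooled statistic here is different for every $(i, n)$ pair, so each row gets its own channel reweighting.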
#### Going Deeper in Channel Attention.
Similar to the work in (Hu, Shen, and Sun 2018), we use two fully connected
layers, followed by ReLU and sigmoid activations respectively, to first reduce
the channel number and then increase it to the original channel number.
To further boost performance, we explore more powerful channel attention
module designs for our channelization, since our attention module keeps the
spatial dimension and thus contains more information than a regular SE module
($1\times 1\times C\times W$ (or $H$) vs $1\times 1\times C$, see Fig. 5).
We experimented with increasing the depth and/or width of the hidden layers to
enhance the capacity of the spatially varying channel attention. We find that
deeper hidden layers allow channel attention to find a better relationship
between channels, while increasing the layer width is not as effective as
adding depth (see Table 1).
Furthermore, in the spatial domain, each channel of a pixel contains unique
information that can lead to a unique semantic representation. We find that
using Leaky ReLU (Maas, Hannun, and Ng 2013) is more effective than ReLU in
preventing the loss of information through deeper activations (Sandler et al.
2018). Note that this replacement is effective only in our spatially varying
channel attention.
#### Grouped Vectorization.
Computing spatial attention row by row and column by column can save
computation but it is still too slow (about 2.5 times slower on a single V100
with feature map size = $33\times 33$) even with parallelization. Full
vectorization can achieve a very high speed but it has a high requirement on
GPU memory (about 2 times larger GPU memory usage than no vectorization on a
single V100 with feature map size = $33\times 33$) for storing the
intermediate partial axial attention results $\bm{\alpha}$ (which has a
dimension of $H\times H\times W\times C$) and $\bm{\beta}$ (which has a
dimension of $W\times H\times W\times C$) in Eqs. (6) and (7). To enjoy the
high speed benefit of vectorization with limited GPU memory usage, in our
implementation we propose grouped vectorization to dynamically batch rows and
columns into multiple groups, and then perform vectorization for each group
individually. The technical details and impact of our group vectorization are
detailed in the Appendix.
## 5 Experiments
To demonstrate the effectiveness of the proposed CAA, we compare comprehensive
experimental results with the state-of-the-art methods on three benchmark
datasets, i.e., PASCAL Context (Everingham et al.
2009), COCO-Stuff (Caesar, Uijlings, and Ferrari 2018) and Cityscapes (Marius
et al. 2016).
Using similar settings as in other existing works, we measure the segmentation
accuracy using mean intersection over union (mIOU). Moreover, to demonstrate
the efficiency of our CAA, we also compare the number of floating point
operations (FLOPs) of different approaches. Experimental results show that our CAA
outperforms the state-of-the-art methods on all tested datasets. Due to page
limitations, we focus on ResNet-101 (with naive upsampling) results in the
main paper for the fairest comparison. Results obtained with EfficientNet or
results on other extra datasets are presented in the Appendix.
### 5.1 Implementation Details
#### Backbone
Our network is built on ResNet-101 (He et al. 2016) pre-trained on ImageNet.
The original ResNet results in a feature map of ${1}/{32}$ of the input size.
Following other works (Chen et al. 2018; Li et al. 2019), we apply dilated
convolution at the output stride = 16 for ablation experiments if not
specified. We conduct experiments with the output stride = 8 to compare with
the state-of-the-art methods.
#### Naive Upsampling
Unless otherwise specified, we directly upsample the logits bilinearly to the
input size without refinement using any low-level, high-resolution features.
#### Training Settings
We employ stochastic gradient descent (SGD) for optimization, where the
polynomial decay learning rate policy $(1-\frac{iter}{maxiter})^{0.9}$ is
applied with an initial learning rate = 0.01. We use synchronized batch
normalization with batch size = 16 (8 for Cityscapes) during training. For
data augmentation, we only apply the most basic data augmentation strategies
as in (Chen et al. 2018), including random flip, random scale and random crop.
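The polynomial decay schedule above is simple to write down; `poly_lr` is an illustrative name, a minimal sketch of the stated rule $lr = lr_{0}\,(1-\frac{iter}{maxiter})^{0.9}$:

```python
def poly_lr(iteration, max_iter, base_lr=0.01, power=0.9):
    """Polynomial learning-rate decay: base_lr * (1 - iter/max_iter)**power.

    Starts at base_lr at iteration 0 and decays smoothly to 0 at max_iter.
    """
    return base_lr * (1.0 - iteration / max_iter) ** power
```

This would typically be evaluated once per training step and fed to the SGD optimizer.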
### 5.2 Results on PASCAL Context
The PASCAL Context (Mottaghi et al. 2014) dataset has 59 classes with 4,998
images for training and 5,105 images for testing. We train our CAA on the
training set for 40k iterations. In the following, we first present a series
of ablative experiments to show the effectiveness of our method. Then,
quantitative and qualitative comparisons with other state-of-the-art methods
are presented.
Note that, in the ablation studies below, we report the mean result with
standard deviation (numbers in parentheses) calculated over 5 repeated
experiments (see Deterministic in the Appendix for technical details).
Layer Counts | # of Channels | mIOU (%) | FLOPs
---|---|---|---
- | - | 50.27($\pm$0.2) | 68.7G
1 | 128 | 50.75($\pm$0.2) | +0.00024G
3 | 128 | 50.85($\pm$0.2) | +0.00027G
5 | 128 | 51.06($\pm$0.2) | +0.00030G
7 | 128 | 50.40($\pm$0.3) | +0.00043G
5 | 64 | 50.12($\pm$0.2) | +0.00015G
5 | 256 | 50.35($\pm$0.4) | +0.00098G
Table 1: Results without using channelization (Row 1) and using
channelization with different layer counts and channel numbers. Numbers in
parentheses indicate standard deviations (see Sect. 5.2).
#### Effectiveness of the Proposed Channelization
We first report the impact of adding channelized axial attention with
different depths and widths in Table 1, where ‘-’ in the baseline row
indicates that no channelization is performed.
Axial Attention | \+ SE | \+ Our Channelization
---|---|---
50.27($\pm$0.2) | 50.37($\pm$0.2) | 51.06($\pm$0.2)
Table 2: Result comparison between axial attention, axial attention + SE and
channelized axial attention.
As can be seen from this table, our proposed channelization improves the mIOU
over the baseline regardless of the layer counts and the number of channels
used. In particular, the best performance is achieved when the Layer Counts =
5 and the number of Channels = 128.
We also compare our model with the sequential design of “Axial Attention +
SE”, as shown in Table 2. We find the sequential design brings only marginal
contributions to performance, showing that our proposed channelization method
can combine the advantages of both spatial attention and channel attention
more effectively. In Table 5, results obtained with other backbones are
provided to demonstrate the effectiveness and robustness of CAA.
#### Channelized Self-Attention
In this section, we conduct experiments on the PASCAL Context by applying
channelization to the original self-attention. We report its single-scale
performance in Table 3 with ResNet-101. Our channelized method can also
further improve the performance of self-attention by 0.67% (51.09% vs 50.42%).
Attention Base | Eval OS | Channelized | mIOU (%)
---|---|---|---
Axial Attention | 16 | | 50.27
16 | ✓ | 51.06
Self Attention | 16 | | 50.42
16 | ✓ | 51.09
Table 3: Ablation study of applying our Channelized Attention on self-
attention with ResNet-101. Eval OS: Output strides (Chen et al. 2018) during
evaluation.
#### Impact of the Testing Strategies
We compare the performance and computation cost of our proposed model against
the baseline and DANet (Fu et al. 2019) with different testing strategies in
Table 4. Using the same settings as in other works (Fu et al. 2019), we apply
multi-scale and left-right flipping during inference and an auxiliary loss
during training. The accuracies of CAA are further boosted with output stride
= 8, since the channel attention can learn from and optimize three times more
pixels.
Methods | OS | MF | Aux | mIOU (%) | FLOPs
---|---|---|---|---|---
ResNet | 16 | | - | - | 59.85G
-101 | 8 | | - | - | 190.70G
DANet | 8 | | | | +101.25G
| 8 | ✓ | ✓ | 52.60 | -
Axial | 16 | | | 50.27($\pm$0.2) | +8.85G
Attention | 16 | ✓ | | 52.01($\pm$0.2) | -
| 8 | | | 51.24($\pm$0.2) | +34.33G
| 8 | ✓ | | 52.51($\pm$0.2) | -
Our | 16 | | | 51.06($\pm$0.2) | +8.85G
CAA | 16 | ✓ | | 53.09($\pm$0.3) | -
| 8 | | | 52.73($\pm$0.1) | +34.33G
| 8 | ✓ | | 54.05($\pm$0.1) | -
Our | 16 | | ✓ | 51.80($\pm$0.2) | +8.85G
CAA | 16 | ✓ | ✓ | 53.52($\pm$0.2) | -
+ | 8 | | ✓ | 53.48($\pm$0.3) | +34.33G
Aux loss | 8 | ✓ | ✓ | 54.65($\pm$0.4) | -
Table 4: Comparison results with different testing strategies. OS: Output stride in training and inference. MF: Apply multi-scale and left-right flipping during inference. Aux: Add auxiliary loss during training. “$\bm{+}$” refers to the extra FLOPs over the baseline FLOPs of ResNet-101.

Backbone | OS | AA | C | mIOU (%)
---|---|---|---|---
ResNet-50 | 16 | ✓ | | 49.73
(He et al. 2016) | 16 | ✓ | ✓ | 50.23
Xception65 | 16 | ✓ | | 52.42
(Chollet 2017) | 16 | ✓ | ✓ | 52.65
EfficientNetB7 | 16 | ✓ | | 57.24
(Tan and Le 2019) | 16 | ✓ | ✓ | 57.93
| 8 | ✓ | ✓ | 58.40
Table 5: Ablation study of other backbones. All results are obtained in single scale without flipping. OS: Output strides during evaluation. AA: Axial Attention. C: Channelized.

Methods | Backbone | mIOU (%)
---|---|---
ENCNet (Zhang et al. 2018) | ResNet-101 | 51.7
ANNet (Zhu et al. 2019) | ResNet-101 | 52.8
EMANet (Li et al. 2019) | ResNet-101 | 53.1
SPYGR (Li et al. 2020) | ResNet-101 | 52.8
CPN (Yu et al. 2020) | ResNet-101 | 53.9
CFNet (Zhang et al. 2019) | ResNet-101 | 54.0
DANet (Fu et al. 2019) | ResNet-101 | 52.6
Our CAA (OS = 16) | ResNet-101 | 53.7
Our CAA (OS = 8) | ResNet-101 | 55.0
Table 6: Comparisons with other state-of-the-art approaches on the PASCAL
Context test set. For a fair comparison, all compared methods used ResNet-101
and naive upsampling.
#### Comparison with the State-of-the-art
Finally, in Table 6, we compare our approach with the state-of-the-art
approaches. Like other similar works, we apply multi-scale and left-right flip
during inference. For a fair comparison, we only compare with the methods that
use ResNet-101 and naive upsampling in the main paper. More results using
alternative backbones are included in Table 5.
As shown in this table, our proposed CAA outperforms all listed state-of-the-
art models that are trained with an output stride = 8. Our CAA also performs
better than EMANet and SPYGR that are trained with output stride = 16. Note
that, in this and the following tables, we report the best results of our
approach obtained in experiments.
In Fig. 6, we show some results obtained by our CAA model, FCN and Dual
attention. Our model is able to handle previous failure cases better,
especially when a class A covering a smaller region is surrounded by another
class B covering a much larger region (see the boxed regions).
Figure 6: Examples of the segmentation results obtained on the PASCAL Context
dataset using FCN, DANet and CAA.
### 5.3 Results on the COCO-Stuff 10K
Following other works (Fu et al. 2019), we evaluate our CAA on the COCO-Stuff
10K dataset (Caesar, Uijlings, and Ferrari 2018), which contains 9,000
training images and 1,000 testing images with 172 classes. Our model is
trained for 40k iterations. As shown in Table 7, our proposed CAA outperforms
all other state-of-the-art approaches by a large margin of $1.3\%$,
demonstrating that our model can better handle complex images with a large
number of classes.
Methods | Backbone | mIOU (%)
---|---|---
DSSPN (Liang, Zhou, and Xing 2018) | ResNet-101 | 38.9
SVCNet (Ding et al. 2019) | ResNet-101 | 39.6
EMANet (Li et al. 2019) | ResNet-101 | 39.9
SPYGR (Li et al. 2020) | ResNet-101 | 39.9
OCR (Yuan, Chen, and Wang 2020) | ResNet-101 | 39.5
DANet (Fu et al. 2019) | ResNet-101 | 39.7
Our CAA | ResNet-101 | 41.2
Table 7: Comparisons with other state-of-the-art approaches on the COCO-Stuff 10K test set. For a fair comparison, all compared methods use ResNet-101 and naive upsampling.

Methods | Backbone | mIOU (%)
---|---|---
CFNet (Zhang et al. 2019) | ResNet-101 | 79.6
ANNet (Zhu et al. 2019) | ResNet-101 | 81.3
CCNet (Huang et al. 2020) | ResNet-101 | 81.4
CPN (Yu et al. 2020) | ResNet-101 | 81.3
SPYGR (Li et al. 2020) | ResNet-101 | 81.6
OCR (Yuan, Chen, and Wang 2020) | ResNet-101 | 81.8
DANet (Fu et al. 2019) | ResNet-101 | 81.5
Our CAA | ResNet-101 | 82.6
Table 8: Comparisons with other state-of-the-art approaches on the Cityscapes
Test set. For a fair comparison, all compared methods use ResNet-101 and naive
upsampling.
### 5.4 Results on the Cityscapes
The Cityscapes dataset (Marius et al. 2016) has 19 classes. Its fine set
contains high quality pixel-level annotations of 2,975, 500 and 1,525 images
in the training, validation, and test sets, respectively. Following previous
works (Fu et al. 2019), we use only the fine set with a crop size of
769$\times$769 during training, and train for 80k iterations. We report our
results on the test set in Table 8, showing that our CAA also works well on
high-resolution images.
## 6 Conclusion
In this paper, we have proposed a novel and effective Channelized Axial
Attention, effectively combining the advantages of the popular spatial-
attention and channel attention. Specifically, we first break down spatial
attention into two steps and insert channel attention in between, enabling
different spatial positions to have their own channel attentions. Experiments
on the three popular benchmark datasets have demonstrated the effectiveness of
our proposed channelized axial attention.
## 7 Acknowledgments
We thank TensorFlow Research Cloud (TFRC) for TPUs.
## Appendix A Appendix
### A.1 Extra Experiments
#### Stronger Backbone in PASCAL Context
As mentioned in the main paper, our CAA outperforms the SOTA methods (Zhang et
al. 2019; Li et al. 2019) with the same settings (ResNet-101 + naive
upsampling). Furthermore, we show our proposed CAA is suitable for different
backbones.
In this section, we report our CAA’s performance with EfficientNet (Tan and Le
2019) in Table 9. Note that, this is not a fair comparison, since the listed
methods were not trained under the same settings, or using the same backbone.
The results show that our method can outperform the state-of-the-art
Transformer (Dosovitskiy et al. 2021) based hybrid models such as SETR (Sixiao
et al. 2021) and DPT (Ranftl, Bochkovskiy, and Koltun 2021) with the CNN
backbone EfficientNet-B7. The simple decoder merges the low-level features at
output stride = 4 during the final upsampling (see (Chen et al. 2018) for
details).
Methods | mIOU (%)
---|---
CTNet (Li, Sun, and Tang 2021) \+ JPU | 55.5
SETR-MLA (Sixiao et al. 2021) | 55.83
HRNetV2 + OCR (Ding et al. 2019) | 56.2
ResNeSt-269 (Zhang et al. 2020) \+ DeepLab V3+ | 58.9
HRNetV2 + OCR + RMI | 59.6
DPT-Hybrid (Ranftl, Bochkovskiy, and Koltun 2021) | 60.46
Our CAA (EfficientNet-B7, w/o decoder) | 60.12
Our CAA (EfficientNet-B7 + simple decoder) | 60.50
Table 9: Result comparison with the state-of-the-art approaches on the PASCAL
Context test set for multi-scale prediction. Note that, the listed methods
were not trained under the same settings, or using same backbone.
#### Stronger Backbone in COCOStuff-10K
We also report our CAA’s results using Efficientnet-b7 (Tan and Le 2019) as
the backbone in Table 10.
#### Results in COCOStuff-164k
The recent method Segformer (Xie et al. 2021) was the first to use
COCO-Stuff-164K (164,000 images), i.e., the full set of which COCO-Stuff-10K
is a subset, to validate its performance. Since Segformer uses a strong
backbone, in this section we also use EfficientNet-B5 + CAA to verify the
robustness of our CAA on COCO-Stuff-164K. Table 11 shows that our method
outperforms the recent strong baselines Segformer and SETR (Sixiao et al.
2021) by a large margin, indicating that our CAA maintains its superior
performance with large training data.
Methods | mIOU (%)
---|---
HRNetV2 + OCR (Ding et al. 2019) | 40.5
DRAN | 41.2
HRNetV2 + OCR + RMI | 45.2
Our CAA (EfficientNet-B7) | 45.4
Table 10: Result comparison with the state-of-the-art approaches on the COCO-Stuff-10K test set for multi-scale prediction. Note that, the listed methods were not trained under the same settings, or using the same backbone.

Methods | mIOU (%)
---|---
ResNet-50 + DeepLabV3+ (Chen et al. 2018) | 38.4
HRNetV2 + OCR | 42.3
SETR (Sixiao et al. 2021) | 45.8
Segformer-B5 (Xie et al. 2021) | 46.7
Our CAA (EfficientNet-B5) | 47.30
Table 11: Result comparison with the state-of-the-art approaches on the COCO-
Stuff-164K test set for multi-scale prediction. Note that, the listed methods
were not trained under same settings, or using same backbone. Methods other
than CAA and Segformer are reproduced in Segformer paper.
### A.2 Extra Visualizations
Figure 7: Examples of the results obtained on the COCO-Stuff 10K dataset with our proposed CAA in comparison to the results obtained with FCN, DANet and the ground truth.
Figure 8: Examples of the results obtained on the PASCAL Context dataset with our proposed CAA in comparison to the results obtained with FCN, DANet and the ground truth.
Figure 9: Extra examples of the segmentation results obtained on the Cityscapes validation set (Marius et al. 2016) with our proposed CAA in comparison to the results obtained with DANet (Fu et al. 2019) and the ground truth.
#### COCO-Stuff-10K
Fig. 7 shows some examples of the segmentation results obtained on COCO-Stuff
10K with our proposed CAA in comparison to the results of FCN (Long,
Shelhamer, and Darrell 2015), DANet (Fu et al. 2019), and the ground truth
(output stride = 8, ResNet-101). As can be seen, our CAA segments common
objects such as buildings, people, or the sea very well.
#### Extra PASCAL Context
In the main paper, due to the page limit, only 3 images from PASCAL Context
are presented. In this section, we show more examples of the segmentation
results obtained on PASCAL Context in Fig. 8. The results show that the
failure cases of FCN and DANet, especially the hard cases (see the 2nd row),
are segmented much better by our CAA.
#### Cityscapes
In Fig. 9, we compare the segmentation results obtained on the Cityscapes
validation set with DANet and with our CAA. Key areas of difference are
highlighted with white boxes. The results show that many errors produced by
DANet no longer appear in our CAA results.
### A.3 Group Vectorization
Algorithm 1 presents the pseudo code of implementing the proposed grouped
vectorization.
Algorithm 1 Our proposed grouped vectorization algorithm
1:Input: $G$: group number; $A$: attention map $[N,H,H,W]$; $X$: feature map $[N,C,H,W]$
2:$padding\leftarrow(G-H\>\%\>G)\>\%\>G$
3:$A\leftarrow$ Transpose $A$ into $[H,N,H,W]$
4:$H^{+}\leftarrow H+padding$
5:$A\leftarrow$ Zero-pad $A$ into $[H^{+},N,H,W]$
6:$A\leftarrow$ Reshape $A$ into $[G,H^{+}\>//\>G,N,H,W]$
7:for $g\in\{1,\dots,G\}$ do
8: $Y_{g}\leftarrow$ Channelization $(X,A_{g}),\>Y_{g}\in[H^{+}\>//\>G,N,C,W]$
9:end for
10:$Y\leftarrow$ Concat$(Y_{1,\dots,G}),\>Y\in[G,H^{+}\>//\>G,N,C,W]$
11:$Y\leftarrow$ Reshape $Y$ into $[H^{+},N,C,W]$
12:$Y\leftarrow$ Remove padding from $Y$ into $[H,N,C,W]$
13:$Y\leftarrow$ Transpose $Y$ into $[N,C,H,W]$
14:return $Y$
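The grouping and padding bookkeeping above can be sketched in a few lines. This is our own NumPy illustration, not the paper's GPU implementation: the `channelize` helper here is a stand-in weighted sum over the spatial rows, whereas the paper's Channelization operator performs channel attention inside spatial attention.

```python
import numpy as np

def channelize(X, A_g):
    # Stand-in for the paper's Channelization operator (assumption):
    # X: [N, C, H, W], A_g: [R, N, H, W]  ->  Y_g: [R, N, C, W]
    return np.einsum('rnhw,nchw->rncw', A_g, X)

def grouped_vectorization(A, X, G):
    # A: attention map [N, H, H, W]; X: feature map [N, C, H, W]
    N, H, _, W = A.shape
    # Pad so the row axis splits evenly into G groups
    # (H % G alone would not suffice when H % G != 0).
    pad = (G - H % G) % G
    A = A.transpose(1, 0, 2, 3)                              # [H, N, H, W]
    A = np.concatenate([A, np.zeros((pad, N, H, W))], axis=0)
    A = A.reshape(G, (H + pad) // G, N, H, W)
    Y = np.concatenate([channelize(X, A[g]) for g in range(G)], axis=0)
    return Y[:H].transpose(1, 2, 0, 3)                       # [N, C, H, W]

A = np.random.rand(2, 5, 5, 7)
X = np.random.rand(2, 3, 5, 7)
out1 = grouped_vectorization(A, X, 1)
out4 = grouped_vectorization(A, X, 4)
print(np.allclose(out1, out4))  # True: grouping must not change the result
```

Because the stand-in `channelize` acts row by row, splitting into groups changes only the peak memory of each call, never the output.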
#### Effectiveness of Our Grouped Vectorization
In our main paper, we introduced the grouped vectorization to split tensors
into multiple groups so as to reduce the GPU memory usage when performing
channel attention inside spatial attention. As more groups are used,
proportionally less GPU memory is needed for the computation, at the cost of
a longer running time. In this section, we conduct experiments to show how
the inference time (seconds/image) varies when different numbers of groups
are used.
Fig. 10 shows the results for three different input resolutions. As shown in
this graph, when splitting the vectorization into a small number of groups,
e.g., 2 or 4, our grouped vectorization achieves a similar inference speed
using only one half or one quarter of the original spatial complexity. For
example, separating into 4 groups has a similar inference speed to no
separation (1 group).
Figure 10: Inference time (seconds/image) when applying different numbers of
groups in grouped vectorization.
### A.4 Extra Technical Details
#### Determinism
In our paper, all experiments are conducted with deterministic mode enabled
and a fixed seed value to reduce the effect of randomness. However, in current
deep learning frameworks (e.g., PyTorch or TensorFlow), completely
reproducible results are not guaranteed, since not all operations have a
deterministic implementation. To show the robustness of our proposed method,
in the ablation studies of the main paper we repeat each experiment 5 times
and report the mean results and standard deviation. To learn more about
determinism, please check “https://pytorch.org/docs/stable/notes/randomness.html” and
“https://github.com/NVIDIA/framework-determinism”.
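As a minimal illustration of the fixed-seed setup, the sketch below uses only the Python standard library; in the actual experiments the analogous framework-level calls (e.g., PyTorch's `torch.manual_seed` together with its deterministic flags, per the notes linked above) would be used in addition — that mapping is our assumption, not the paper's exact code.

```python
import random

# Fixed-seed reproducibility with the standard library only. Framework-level
# determinism (PyTorch/TensorFlow) additionally requires enabling
# deterministic algorithm implementations, as discussed above.
def run_trial(seed: int, n: int = 5) -> list:
    rng = random.Random(seed)          # local RNG: no global-state leakage
    return [rng.random() for _ in range(n)]

assert run_trial(42) == run_trial(42)  # same seed -> identical stream
assert run_trial(42) != run_trial(43)  # different seeds -> different streams
```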
#### No Extra Tricks
In our work, we strictly follow the training settings and implementation of
DANet (Fu et al. 2019) when comparing with other methods. Recently, settings
such as a cosine-decay learning rate or layer normalization have been widely
used in computer vision to boost performance. In our experiments, we found
that they also work well for our CAA, but we did not include them in this
paper in order to keep the comparisons as fair as possible.
### A.5 The Detailed Architecture of CAA
Due to the page limit, we are only able to present a partial architecture
(row attention) of our CAA in the main paper. In this section, we show the
complete CAA architecture in Fig. 11.
Figure 11: The detailed architecture of our CAA
## References
* Caesar, Uijlings, and Ferrari (2018) Caesar, H.; Uijlings, J.; and Ferrari, V. 2018. COCO-Stuff: Thing and Stuff Classes in Context. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Chen et al. (2017) Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2017. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. _IEEE Transactions on Pattern Analysis and Machine Intelligence_.
* Chen et al. (2018) Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; and Adam, H. 2018. Encoder-Decoder with atrous separable convolution for semantic image segmentation. In _European Conference on Computer Vision_.
* Chollet (2017) Chollet, F. 2017. Xception: Deep Learning With Depthwise Separable Convolutions. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Ding et al. (2019) Ding, H.; Jiang, X.; Shuai, B.; Liu, A. Q.; and Wang, G. 2019. Semantic Correlation Promoted Shape-Variant Context for Segmentation. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Dosovitskiy et al. (2021) Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In _International Conference on Learning Representations_.
* Everingham et al. (2009) Everingham, M.; Gool, L. V.; Williams, C. K. I.; Winn, J.; and Zisserman, A. 2009. The Pascal Visual Object Classes (VOC) Challenge. _International Journal of Computer Vision_.
* Fu et al. (2019) Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; and Lu, H. 2019. Dual Attention Network for Scene Segmentation. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Ho et al. (2019) Ho, J.; Kalchbrenner, N.; Weissenborn, D.; and Salimans, T. 2019. Axial Attention in Multidimensional Transformers.
* Hu, Shen, and Sun (2018) Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Huang et al. (2020) Huang, Z.; Wang, X.; Wei, Y.; Huang, L.; Shi, H.; Liu, W.; and Huang, T. S. 2020. CCNet: Criss-Cross Attention for Semantic Segmentation. _IEEE Transactions on Pattern Analysis and Machine Intelligence_.
* Li et al. (2020) Li, X.; Yang, Y.; Zhao, Q.; Shen, T.; Lin, Z.; and Liu, H. 2020. Spatial Pyramid Based Graph Reasoning for Semantic Segmentation. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Li et al. (2019) Li, X.; Zhong, Z.; Wu, J.; Yang, Y.; Lin, Z.; and Liu, H. 2019. Expectation-Maximization Attention Networks for Semantic Segmentation. In _International Conference on Computer Vision_.
* Li, Sun, and Tang (2021) Li, Z.; Sun, Y.; and Tang, J. 2021. CTNet: Context-based Tandem Network for Semantic Segmentation. _arXiv preprint arXiv:2104.09805_.
* Liang, Zhou, and Xing (2018) Liang, X.; Zhou, H.; and Xing, E. 2018. Dynamic-structured Semantic Propagation Network. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Long, Shelhamer, and Darrell (2015) Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Marius et al. (2016) Marius, C.; Mohamed, O.; Sebastian, R.; Timo, R.; Markus, E.; Rodrigo, B.; Uwe, F.; Roth, S.; and Bernt, S. 2016. The Cityscapes Dataset for Semantic Urban Scene Understanding. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Mass, Hannun, and Ng (2013) Mass, A. L.; Hannun, A. Y.; and Ng, A. Y. 2013. Rectifier Nonlinearities Improve Neural Network Acoustic Models. In _International Conference on Machine Learning_.
* Mottaghi et al. (2014) Mottaghi, R.; Chen, X.; Liu, X.; Cho, N.-G.; Lee, S.-W.; Fidler, S.; Urtasun, R.; and Yuille, A. 2014. The Role of Context for Object Detection and Semantic Segmentation in the Wild. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Ramachandran et al. (2019) Ramachandran, P.; Parmar, N.; Vaswani, A.; Bello, I.; Levskaya, A.; and Shlens, J. 2019. Stand-Alone Self-Attention in Vision Models. In _Conference on Neural Information Processing Systems_.
* Ranftl, Bochkovskiy, and Koltun (2021) Ranftl, R.; Bochkovskiy, A.; and Koltun, V. 2021. Vision Transformers for Dense Prediction. In _International Conference on Computer Vision_.
* Sandler et al. (2018) Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; and Chen, L.-C. 2018. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Sixiao et al. (2021) Sixiao, Z.; Jiachen, L.; Hengshuang, Z.; Xiatian, Z.; Zekun, L.; Yabiao, W.; Yanwei, F.; Jianfeng, F.; Tao, X.; H.S., T. P.; and Li, Z. 2021. Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Tan and Le (2019) Tan, M.; and Le, Q. 2019. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In _International Conference on Machine Learning_.
* Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Łukasz Kaiser; and Polosukhin, I. 2017. Attention Is All You Need. In _Conference on Neural Information Processing Systems_.
* Wang et al. (2018) Wang, X.; Girshick, R.; Gupta, A.; and He, K. 2018. Non-local Neural Networks. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Xie et al. (2021) Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021. SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. _arXiv preprint arXiv:2105.15203_.
* Yang et al. (2018) Yang, M.; Yu, K.; Zhang, C.; Li, Z.; and Yang, K. 2018. DenseASPP for semantic segmentation in street scenes. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Yu et al. (2020) Yu, C.; Wang, J.; Gao, C.; Yu, G.; Shen, C.; and Sang, N. 2020. Context Prior for Scene Segmentation. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Yuan, Chen, and Wang (2020) Yuan, Y.; Chen, X.; and Wang, J. 2020. Object-Contextual Representations for Semantic Segmentation. In _European Conference on Computer Vision_.
* Zhang et al. (2018) Zhang, H.; Dana, K.; Shi, J.; Zhang, Z.; Wang, X.; Tyagi, A.; and Agrawal, A. 2018. Context Encoding for Semantic Segmentation. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Zhang et al. (2020) Zhang, H.; Wu, C.; Zhang, Z.; Zhu, Y.; Zhang, Z.; Lin, H.; Sun, Y.; He, T.; Muller, J.; Manmatha, R.; Li, M.; and Smola, A. 2020. ResNeSt: Split-Attention Networks. _arXiv preprint arXiv:2004.08955_.
* Zhang et al. (2019) Zhang, H.; Zhan, H.; Wang, C.; and Xie, J. 2019. Semantic Correlation Promoted Shape-Variant Context for Segmentation. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Zhao et al. (2017) Zhao, H.; Shi, J.; Qi, X.; Wang, X.; and Jia, J. 2017. Pyramid Scene Parsing Network. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Zhu et al. (2019) Zhu, Z.; Xu, M.; Bai, S.; Huang, T.; and Bai, X. 2019. Asymmetric Non-local Neural Networks for Semantic Segmentation. In _International Conference on Computer Vision_.
# Dissipative dynamics of a particle coupled to field via internal degrees of
freedom
Kanupriya Sinha<EMAIL_ADDRESS>Department of Electrical Engineering,
Princeton University, Princeton, New Jersey 08544, USA Adrián Ezequiel Rubio
López<EMAIL_ADDRESS>Institute for Quantum Optics and Quantum
Information of the Austrian Academy of Sciences,
Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck,
Austria Yiğit Subaşı<EMAIL_ADDRESS>Computer, Computational and
Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, NM
87545, USA
###### Abstract
We study the non-equilibrium dissipative dynamics of the center of mass of a
particle coupled to a field via its internal degrees of freedom. We model the
internal and external degrees of freedom of the particle as quantum harmonic
oscillators in 1+1 D, with the internal oscillator coupled to a scalar quantum
field at the center of mass position. Such a coupling results in a nonlinear
interaction between the three pertinent degrees of freedom – the center of
mass, internal degree of freedom, and the field. It is typically assumed that
the internal dynamics is decoupled from that of the center of mass owing to
their disparate characteristic time scales. Here we use an influence
functional approach that allows one to account for the self-consistent
backaction of the different degrees of freedom on each other, including the
coupled non-equilibrium dynamics of the internal degree of freedom and the
field, and their influence on the dissipation and noise of the center of mass.
Considering a weak nonlinear interaction term, we employ a perturbative
generating functional approach to derive a second order effective action and a
corresponding quantum Langevin equation describing the non-equilibrium
dynamics of the center of mass. We analyze the resulting dissipation and noise
arising from the field and the internal degree of freedom as a composite
environment. Furthermore, we establish a generalized fluctuation-dissipation
relation for the late-time dissipation and noise kernels. Our results are
pertinent to open quantum systems that possess intermediary degrees of freedom
between system and environment, such as in the case of optomechanical
interactions.
## I Introduction
Dissipative non-equilibrium dynamics of complex quantum systems comprising
different degrees of freedom is a subject of significant interest from the
perspective of the theory of open quantum systems, as well as emerging quantum
information applications and devices BPBook ; Weiss ; Clerk20 . While the
dissipative dynamics of a reduced quantum system coupled linearly to its
environment has been studied extensively, the case of a nonlinear coupling
between a system and its environment via an intermediate degree of freedom and
its effect on the resulting dynamics is seldom discussed HPZ93 ; Maghrebi16 ;
Lampo16 . A commonly employed assumption is that the time-scales associated
with the different degrees of freedom are sufficiently disparate so that their
dynamics can be treated as being effectively decoupled from each other
Stenholm86 ; Lan08 ; Haake . Such an assumption precludes the possibility of
studying the rich interplay between the different degrees of freedom, and its
effect on the non-equilibrium dynamics of the system of interest. Moreover, an
adiabatic elimination of fast variables does not allow one to see the effects
of the non-equilibrium dynamics and quantum fluctuations of the fast degrees
of freedom on the system dissipation and noise.
A canonical example of such a complex open quantum system is the
optomechanical interaction between a neutral polarizable particle and a field,
for instance, a moving atom, nanoparticle, or molecule interacting with the
electromagnetic (EM) field Wieman ; Yin ; Molecules . An essential feature of
these systems is that they possess internal electronic degrees of freedom
that interact with the EM field and are located at the center of mass position, represented
by the mechanical degree of freedom (MDF). While the MDF of the particle does
not couple to the EM field directly, its internal degrees of freedom (IDF)
interact with the EM field at the center of mass position, thus mediating an
effective coupling between the MDF and the field. Considering the MDF as the
system of interest, with the EM field and the IDF acting as an environment,
such an effective interaction leads to dissipation and noise in system
dynamics. There is an interesting interplay between the three degrees of
freedom (the MDF, the IDF, and the EM field) that takes place in such a
scenario as has been previously explored in KSthesis ; MOF1 ; MOF2 ; Wang14 .
Considering the dynamics from an approach that includes the self-consistent
backaction of all the degrees of freedom on each other allows one to examine
the role of IDF in the center of mass dynamics. For example, it has been shown
that the IDF can facilitate a transfer of correlations between the field and
center of mass in the case of optomechanical interactions KSthesis ; MOF2 and
affect the center of mass decoherence Brun16 . On the other hand, it has also
been demonstrated that including the quantized center of mass motion can
affect the radiation reaction and thermalization of the IDF AERL19 .
We study here a minimal model that captures the interplay between different
degrees of freedom in a composite system and the nonequilibrium dissipative
center of mass dynamics. Such a model forms the basis for optomechanical
interactions between neutral particles and fields, and more generally applies
to open quantum systems that couple to an environment via internal degrees of
freedom. The remainder of this paper is organized as follows. In Section II we
describe the model in consideration, in Section III we describe the derivation
of a second order effective action for the center of mass by integrating out
the IDF and the field using a perturbative influence functional approach HPZ93
. In Section IV we write an effective Langevin equation for the center of
mass, discuss the resulting dynamics and present a generalized fluctuation-
dissipation relation. We summarize our findings and present an outlook of the
work in Section V.
Figure 1: Schematic representation of a mechanical oscillator of mass $M$ and
frequency $\Omega$ coupled to a massless scalar field $\Phi$ via an IDF. The
internal oscillator with position coordinate $Q$ is described as a harmonic
oscillator of mass $m$ and frequency $\omega_{0}$ and couples to the scalar
field $\Phi\left({x},t\right)$ at the center of mass position ${X}$ of the
mechanical oscillator.
## II Model
Let us consider a particle with its center of mass described by a mechanical
oscillator of mass $M$ and frequency $\Omega$, and its polarization described
by an IDF as a harmonic oscillator of mass $m$ and resonance frequency
$\omega_{0}$. The composite system is assumed to be interacting with a quantum
field, which we consider to be a scalar field $\Phi(x,t)$ in $1+1$ D. The
total action of the system can be written as
$\displaystyle S=S_{M}+S_{I}+S_{F}+S_{\mathrm{int},}$ (1)
where
$\displaystyle S_{M}$
$\displaystyle=\int\mathrm{d}\tau\left(\frac{1}{2}M\dot{X}^{2}-\frac{1}{2}M\Omega^{2}X^{2}\right)$
(2)
refers to the free action for the mechanical oscillator with
$\left\\{X,\dot{X}\right\\}$ referring to the center of mass position and
velocity, while
$\displaystyle S_{I}$
$\displaystyle=\int\mathrm{d}\tau\left(\frac{1}{2}m\dot{Q}^{2}-\frac{1}{2}m\omega_{0}^{2}Q^{2}\right)$
(3)
corresponds to the free action for the internal degree of freedom with
amplitude $Q$, and
$\displaystyle S_{F}$
$\displaystyle=\int\mathrm{d}\tau\int\mathrm{d}{x}\frac{\epsilon_{0}}{2}\left[\left(\partial_{t}\Phi(x,\tau)\right)^{2}-\left({\mathbf{\partial}_{x}}\Phi(x,\tau)\right)^{2}\right]$
(4)
is the free action for the scalar field $\Phi\left(x,t\right)$, $\epsilon_{0}$
being the vacuum permittivity. We consider $\hbar=c=1$ throughout this paper.
The correspondence with the EM field is established by identifying the scalar
field $\Phi\left({x},t\right)$ as the vector potential.
Considering that the particle interacts with the field via its IDF, at the
position determined by the MDF, the interaction action is given as
$\displaystyle S_{\mathrm{int}}$
$\displaystyle=\int\mathrm{d}\tau\int\mathrm{d}{x}\lambda
Q\Phi({x},\tau)\delta\left({x}-X(\tau)\right)$ (5)
with the strength of the interaction defined by $\lambda$. We note that the
above interaction action is nonlinear, with the two degrees of freedom and the
field interacting together. Such a model, previously referred to as the
mirror-oscillator-field model has been studied in the context of describing
optomechanical interactions from a microscopic perspective MOF1 ; MOF2 ;
KSthesis . It provides a self-consistent treatment of the different degrees of
freedom involved, in that one does not need to impose the boundary conditions
on the field by hand, and the mechanical effects of the field are consistently
described upon tracing over the IDF.
In typical experimental systems of relevance, such as those of atoms and
nanoparticles confined in traps, the center of mass motion is restricted to a
region of space smaller than the wavelengths of the field modes that interact
resonantly with the internal degrees of freedom. Thus, motivated by practical
considerations, we assume that the motion of the MDF is restricted to small
displacements about the average equilibrium position $X_{0}$, such that we can
expand the interaction action to first order in displacements from the
equilibrium as follows
$\displaystyle S_{\mathrm{int}}$
$\displaystyle\approx\int\mathrm{d}\tau\left[\lambda Q\Phi(X_{0},\tau)+\lambda
Q\partial_{x}\Phi(X_{0},\tau)\left(X-X_{0}\right)\right].$ (6)
The first term in the above action represents the linear interaction between
the IDF and the field, similar to the $\sim{\mathbf{p}}\cdot{\mathbf{A}}$
coupling in atom-field interactions. The second term stands for the tripartite
interaction between the IDF, the MDF and the field. Thus, as we show in the
following, the effective coupling of the center of mass to the field via the
IDF leads to its dissipative dynamics. From now on, for simplicity and without
loss of generality, we assume $X_{0}=0$.
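As an illustration of the tripartite coupling (our addition, obtained by applying the Euler–Lagrange equations to the actions in Eqs.(2), (3) and (6) with $X_{0}=0$), the classical equations of motion read

$\displaystyle M\ddot{X}+M\Omega^{2}X=\lambda\,Q\,\partial_{x}\Phi(0,t),\qquad m\ddot{Q}+m\omega_{0}^{2}Q=\lambda\left[\Phi(0,t)+\partial_{x}\Phi(0,t)\,X\right],$

so the force on the center of mass is mediated entirely by the IDF amplitude $Q$: for $\lambda\to 0$ or $Q\to 0$ the MDF decouples from the field.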
### II.1 Scalar field as bath of harmonic oscillators
The above separation of the total interaction action (Eq.(6)) suggests
splitting the field into two separate oscillator baths such that one of the
baths interacts with the IDF, while the other is coupled to both the internal
and mechanical degrees. We thus make an eigenmode decomposition of the field
to rewrite it in terms of a bath of oscillators as follows QBM3
$\displaystyle\Phi({x},\tau)=\sum_{n}\sqrt{\frac{1}{\epsilon_{0}{L}}}\left[q_{n}^{(-)}\cos({\omega}_{n}{x})+q_{n}^{(+)}\sin({\omega}_{n}{x})\right],$
(7)
where $L$ is the mode volume of the field, $\omega_{n}$ refers to the mode
frequency, and $q_{n}^{(+)}$ and $q_{n}^{(-)}$ refer to two independent sets of
eigenmodes of the field, assuming periodic boundary conditions. One can thus
rewrite the free field action as a sum $S_{F}=S_{F}^{(+)}+S_{F}^{(-)}$,
where
$\displaystyle
S_{F}^{(\pm)}=\int\mathrm{d}\tau\sum_{n}\frac{1}{2}\left[\left.\dot{q}_{n}^{(\pm)}\right.^{2}-\omega_{n}^{2}\left.{q}_{n}^{(\pm)}\right.^{2}\right]$
(8)
with $S_{F}^{(\pm)}$ corresponding to two separate baths of $(+)$ and $(-)$
oscillators. Thus, for $X_{0}=0$ the interaction action in Eq.(6) can be
written as a sum of two separate interaction terms
$S_{\mathrm{int}}=S_{\mathrm{int}}^{(-)}+S_{\mathrm{int}}^{(+)}$, with
$\displaystyle S_{\mathrm{int}}^{(-)}$
$\displaystyle=\int\mathrm{d}\tau\lambda Q\sum_{n}q_{n}^{(-)}$ (9)
$\displaystyle S_{\mathrm{int}}^{(+)}$
$\displaystyle=\int\mathrm{d}\tau\lambda Q\sum_{n}\omega_{n}Xq_{n}^{(+)}.$
(10)
Thus the interaction action $S_{\mathrm{int}}^{(-)}$ linearly couples the
$(-)$ bath of oscillators of the scalar field to the IDF, leading to the
dissipative dynamics of the IDF. The interaction action
$S_{\mathrm{int}}^{(+)}$ couples the center of mass to the $(+)$ bath of
oscillators via the IDF. We note that $S_{\mathrm{int}}^{(+)}$ is essentially
responsible for all the mechanical effects of the field, such as radiation
pressure due to optical fields and vacuum-induced forces PierreBook ;
PeterQOBook . For instance, one can see that eliminating the IDF from the
interaction action $S_{\mathrm{int}}^{(+)}$ yields an effective nonlinear
interaction between the MDF and the field, similar to the intensity-position
coupling in optomechanical interactions KSthesis . We also remark here that
the above separation of the field into two uncorrelated baths is akin to the
Einstein-Hopf theorem that establishes the statistical independence of the
blackbody radiation field and its derivative EinHopf1 ; EinHopf3 .
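As a consistency check (our reading: Eqs.(9)–(10) absorb the mode normalisation into the coupling), substituting the expansion Eq.(7) into Eq.(6) at $X_{0}=0$, where $\cos(\omega_{n}x)\to 1$, $\sin(\omega_{n}x)\to 0$ and $\partial_{x}\sin(\omega_{n}x)\to\omega_{n}$, gives

$\displaystyle S_{\mathrm{int}}\approx\int\mathrm{d}\tau\,\lambda Q\sum_{n}\sqrt{\frac{1}{\epsilon_{0}L}}\left[q_{n}^{(-)}+\omega_{n}Xq_{n}^{(+)}\right],$

which reproduces $S_{\mathrm{int}}^{(-)}+S_{\mathrm{int}}^{(+)}$ once the factor $\sqrt{1/(\epsilon_{0}L)}$ is absorbed into $\lambda$ (or, equivalently, into the mode amplitudes $q_{n}^{(\pm)}$).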
## III Effective action derivation from perturbative influence functional
approach
The nonlinearity of the interaction action $S_{\mathrm{int}}^{(+)}$ limits the
possibility of obtaining an exact analytical solution for the center of mass
dynamics. We therefore follow a perturbative generating functional approach as
prescribed in HPZ93 , assuming that the characteristic nonlinear coupling
strength $\left(\sim\lambda\omega_{n}\right)$ can be treated perturbatively to
write the resulting dynamics of the center of mass.
Let us consider the evolution of the system density matrix $\hat{\rho}_{M}$ as
follows
$\displaystyle\hat{\rho}_{M}\left(t\right)=\mathrm{Tr}_{I}\mathrm{Tr}_{F^{+}}\mathrm{Tr}_{F^{-}}\left[\hat{\mathcal{U}}(t,0)\left(\hat{\rho}_{M}(0)\otimes\hat{\rho}_{I}(0)\otimes\hat{\rho}_{F}^{(+)}(0)\otimes\hat{\rho}_{F}^{(-)}(0)\right)\hat{\mathcal{U}}^{\dagger}(t,0)\right],$
(11)
where we have assumed that the density operators for the different degrees of
freedom are initially uncorrelated with each other, with $\hat{\rho}_{I}$ and
$\hat{\rho}_{F}^{(\pm)}$ referring to the density matrices for the IDF and the
$(\pm)$ bath, and $\hat{\mathcal{U}}(t,0)$ corresponding to the time evolution
operator. We write the density matrices in a coordinate representation such
that
$\displaystyle\hat{\rho}_{M}(\tau)=$
$\displaystyle\int\mathrm{d}X\mathrm{d}X^{\prime}\rho_{M}\left(X,X^{\prime};\tau\right)\left|X\right\rangle\left\langle
X^{\prime}\right|,$ (12) $\displaystyle\hat{\rho}_{I}(\tau)=$
$\displaystyle\int\mathrm{d}Q\mathrm{d}Q^{\prime}\rho_{I}\left(Q,Q^{\prime};\tau\right)\left|Q\right\rangle\left\langle
Q^{\prime}\right|,$ (13) $\displaystyle\hat{\rho}_{F}^{(\pm)}(\tau)=$
$\displaystyle\prod_{n}\int\mathrm{d}q_{n}^{(\pm)}\mathrm{d}{q^{(\pm)}_{n}}^{\prime}\rho_{F}^{(\pm)}\left(\left\\{q_{n}^{(\pm)},{q^{(\pm)}_{n}}^{\prime}\right\\};\tau\right)\left|q_{n}^{(\pm)}\right\rangle\left\langle{q^{(\pm)}_{n}}^{\prime}\right|,$
(14)
such that the system evolution in Eq. (11) can be rewritten as
$\displaystyle\rho_{M}\left({X}_{f},{X}_{f}^{\prime};t\right)=$
$\displaystyle\prod_{m,n}\int\mathrm{d}Q_{f}\mathrm{d}q^{(+)}_{nf}\mathrm{d}q^{(-)}_{mf}\int\mathrm{d}{X}_{i}\mathrm{d}Q_{i}\mathrm{d}q^{(+)}_{ni}\mathrm{d}q^{(-)}_{mi}\int\mathrm{d}{X}_{i}^{\prime}\mathrm{d}Q_{i}^{\prime}\mathrm{d}{q^{(+)}_{ni}}^{\prime}\mathrm{d}{q^{(-)}_{mi}}^{\prime}$
$\displaystyle\left[\rho_{M}\left({X}_{i},{X}_{i}^{\prime};0\right)\otimes\rho_{I}\left(Q_{i},Q_{i}^{\prime};0\right)\otimes\rho_{F}\left(\left\\{q^{(+)}_{ni},{q^{(+)}_{ni}}^{\prime}\right\\};0\right)\otimes\rho_{F}\left(\left\\{q^{(-)}_{mi},{q^{(-)}_{mi}}^{\prime}\right\\};0\right)\right.$
$\displaystyle\left.\mathcal{J}\left(X_{f},Q_{f},\left\\{{q_{nf}^{(+)}},{q_{mf}^{(-)}}\right\\};t\big{|}X_{i},Q_{i},\left\\{{q_{ni}^{(+)}},{q_{mi}^{(-)}}\right\\};0\right)\mathcal{J}^{\dagger}\left(X^{\prime}_{f},Q_{f},\left\\{{q_{nf}^{(+)}},{q_{mf}^{(-)}}\right\\};t\big{|}X^{\prime}_{i},Q^{\prime}_{i},\left\\{{q_{ni}^{(+)}}^{\prime},{q_{mi}^{(-)}}^{\prime}\right\\};0\right)\right],$
(15)
where $\left\\{{X}_{i},{X}_{i}^{\prime}\right\\}$ and
$\left\\{{X}_{f},{X}_{f}^{\prime}\right\\}$ refer to the initial and final
coordinates corresponding to the center of mass variables respectively,
$\left\\{Q_{i},Q_{i}^{\prime};Q_{f},Q_{f}^{\prime}\right\\}$ are those for the
internal oscillator, and
$\left\\{q_{ni}^{(\pm)},{q_{ni}^{(\pm)}}^{\prime};q_{nf}^{(\pm)},{q_{nf}^{(\pm)}}^{\prime}\right\\}$
are the initial and final coordinates for the $n^{\mathrm{th}}$ oscillator of
the $(\pm)$ bath. We assume that the initial states
$\rho_{F}^{(\pm)}\left(\left\\{q^{(\pm)}_{ni},{q^{(\pm)}_{ni}}^{\prime}\right\\};0\right)$
of the field and $\rho_{I}\left(Q_{i},Q_{i}^{\prime};0\right)$ of the IDF are
thermal with a temperature $T_{F}$ and $T_{I}$ respectively. The forward
propagator is defined as
$\mathcal{J}\left(X_{f},Q_{f},\left\\{q_{nf}^{(+)},q_{mf}^{(-)}\right\\};t\big{|}X_{i},Q_{i},\left\\{q_{ni}^{(+)},q_{mi}^{(-)}\right\\};0\right)\equiv\prod_{m,n}\left\langle{X}_{f}\right|\left\langle
Q_{f}\right|\left\langle q^{(+)}_{nf}\right|\left\langle
q^{(-)}_{mf}\right|\hat{\mathcal{U}}\left(t,0\right)\left|q^{(-)}_{mi}\right\rangle\left|q^{(+)}_{ni}\right\rangle\left|Q_{i}\right\rangle\left|{X}_{i}\right\rangle$.
Thus the first and last terms in the integral refer to the forward and
backward propagators that can be expressed in path integral representation to
write the evolution as
$\displaystyle\rho_{M}\left({X}_{f},{X}_{f}^{\prime};t\right)=\int\mathrm{d}{X}_{i}\int\mathrm{d}{X}_{i}^{\prime}\,\rho_{M}\left({X}_{i},{X}_{i}^{\prime};0\right)\int_{{X}(0)={X}_{i}}^{{X}(t)={X}_{f}}\mathcal{D}{X}\int_{{X}^{\prime}(0)={X}^{\prime}_{i}}^{{X}^{\prime}(t)=X^{\prime}_{f}}\mathcal{D}{X}^{\prime}e^{i\left(S_{M}[{X}]-S_{M}[{X}^{\prime}]\right)}\mathcal{F}\left[{X},{X}^{\prime}\right],$
(16)
where $\mathcal{F}\left[{X},{X}^{\prime}\right]$ refers to the influence
functional of the field and the internal oscillator acting on the MDF
FeynmanTrick , which can be written explicitly as
$\displaystyle\mathcal{F}\left[{X},{X}^{\prime}\right]\equiv$
$\displaystyle\prod_{m,n}\int\mathrm{d}Q_{f}\mathrm{d}q^{(+)}_{nf}\mathrm{d}q^{(-)}_{mf}\int\mathrm{d}Q_{i}\mathrm{d}q^{(+)}_{ni}\mathrm{d}q^{(-)}_{mi}\int\mathrm{d}Q_{i}^{\prime}\mathrm{d}{q^{(+)}_{ni}}^{\prime}\mathrm{d}{q^{(-)}_{mi}}^{\prime}\rho_{I}\left(Q_{i},Q_{i}^{\prime};0\right)\otimes\rho_{F}^{(+)}\left(\left\\{q^{(+)}_{ni},{q^{(+)}_{ni}}^{\prime}\right\\};0\right)\otimes\rho_{F}^{(-)}\left(\left\\{q^{(-)}_{mi},{q^{(-)}_{mi}}^{\prime}\right\\};0\right)$
$\displaystyle\int_{Q(0)=Q_{i}}^{Q(t)=Q_{f}}\mathcal{D}Q\int_{Q^{\prime}(0)=Q^{\prime}_{i}}^{Q^{\prime}(t)=Q_{f}}\mathcal{D}Q^{\prime}e^{i\left(S_{I}[Q]-S_{I}[Q^{\prime}]\right)}\int_{q^{(+)}_{n}(0)=q^{(+)}_{ni}}^{q^{(+)}_{n}(t)=q^{(+)}_{nf}}\mathcal{D}q_{n}^{(+)}\int_{{q^{(+)}_{n}}^{\prime}(0)={q^{(+)}_{ni}}^{\prime}}^{{q^{(+)}_{n}}^{\prime}(t)=q^{(+)}_{nf}}\mathcal{D}{q^{(+)}_{n}}^{\prime}e^{i\left(S_{F}^{(+)}\left[\left\\{q_{n}^{(+)}\right\\}\right]-S_{F}^{(+)}\left[\left\\{{q_{n}^{(+)}}^{\prime}\right\\}\right]\right)}$
$\displaystyle\int_{q^{(-)}_{m}(0)=q^{(-)}_{mi}}^{q^{(-)}_{m}(t)=q^{(-)}_{mf}}\mathcal{D}q_{m}^{(-)}\int_{{q^{(-)}_{m}}^{\prime}(0)={q^{(-)}_{mi}}^{\prime}}^{{q^{(-)}_{m}}^{\prime}(t)=q^{(-)}_{mf}}\mathcal{D}{q^{(-)}_{m}}^{\prime}e^{i\left(S_{F}^{(-)}\left[\left\\{q_{m}^{(-)}\right\\}\right]-S_{F}^{(-)}\left[\left\\{{q_{m}^{(-)}}^{\prime}\right\\}\right]\right)}e^{i\left(S_{\mathrm{int}}^{(-)}\left[Q,\left\\{q_{m}^{(-)}\right\\}\right]-S_{\mathrm{int}}^{(-)}\left[Q^{\prime},\left\\{{q_{m}^{(-)}}^{\prime}\right\\}\right]\right)}$
$\displaystyle
e^{i\left(S_{\mathrm{int}}^{(+)}\left[{X},Q,\left\\{q_{n}^{(+)}\right\\}\right]-S_{\mathrm{int}}^{(+)}\left[{X}^{\prime},Q^{\prime},\left\\{{q_{n}^{(+)}}^{\prime}\right\\}\right]\right)}.$
(17)
Thus we have treated here the MDF as the system and the IDF and the field as
the bath, with the influence functional capturing the influence of the bath on
the evolution of the MDF. We will now evaluate the influence functional in a
perturbative manner, by first tracing out the $(-)$ bath modes that couple
only to the IDF, and then the $(+)$ bath modes and the IDF that couple to the
center of mass.
### III.1 Tracing over the $(-)$ bath
Let us define ${\mathcal{F}^{(-)}\left[Q,Q^{\prime}\right]}$ as the influence
functional that accounts for the influence of the $(-)$ bath on the IDF as
follows
$\displaystyle\mathcal{F}^{(-)}\left[Q,Q^{\prime}\right]\equiv\prod_{m}\int\mathrm{d}q^{(-)}_{mf}\int\mathrm{d}q^{(-)}_{mi}\mathrm{d}{q^{(-)}_{mi}}^{\prime}\rho_{F}^{(-)}\left(\left\\{q^{(-)}_{mi},{q^{(-)}_{mi}}^{\prime}\right\\};0\right)\int_{q^{(-)}_{m}(0)=q^{(-)}_{mi}}^{q^{(-)}_{m}(t)=q^{(-)}_{mf}}\mathcal{D}q_{m}^{(-)}\int_{{q^{(-)}_{m}}^{\prime}(0)={q^{(-)}_{mi}}^{\prime}}^{{q^{(-)}_{m}}^{\prime}(t)=q^{(-)}_{mf}}\mathcal{D}{q^{(-)}_{m}}^{\prime}$
$\displaystyle
e^{i\left(S_{F}^{(-)}\left[\left\\{q_{m}^{(-)}\right\\}\right]-S_{F}^{(-)}\left[\left\\{{q_{m}^{(-)}}^{\prime}\right\\}\right]\right)}e^{i\left(S_{\mathrm{int}}^{(-)}\left[Q,\left\\{q_{m}^{(-)}\right\\}\right]-S_{\mathrm{int}}^{(-)}\left[Q^{\prime},\left\\{{q_{m}^{(-)}}^{\prime}\right\\}\right]\right)}.$
(18)
Performing the path integrals by considering an initial thermal state of the
$(-)$ bath oscillators with temperature $T_{F}$ corresponding to the
temperature of the field, one obtains CalzettaHu
$\displaystyle\mathcal{F}^{(-)}\left[Q,Q^{\prime}\right]=e^{{i}S^{(-)}_{\mathrm{I,IF}}\left[Q,Q^{\prime}\right]},$
(19)
where the influence action due to the $(-)$ bath is given as
$\displaystyle
S^{(-)}_{\mathrm{I,IF}}\left[Q,Q^{\prime}\right]\equiv-\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{\tau_{1}}\mathrm{d}\tau_{2}\left[Q\left(\tau_{1}\right)-Q^{\prime}\left(\tau_{1}\right)\right]\eta^{(-)}\left(\tau_{1}-\tau_{2}\right)\left[Q\left(\tau_{2}\right)+Q^{\prime}\left(\tau_{2}\right)\right]$
$\displaystyle+i\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{\tau_{1}}\mathrm{d}\tau_{2}\left[Q\left(\tau_{1}\right)-Q^{\prime}\left(\tau_{1}\right)\right]\nu^{(-)}\left(\tau_{1}-\tau_{2}\right)\left[Q\left(\tau_{2}\right)-Q^{\prime}\left(\tau_{2}\right)\right].$
(20)
The above influence action corresponds to that of the quantum Brownian motion
(QBM) model with a linear system-bath coupling HPZ92 . The dissipation and
noise kernels, $\eta^{(-)}(\tau)$ and $\nu^{(-)}(\tau)$ respectively, are
$\displaystyle\eta^{(-)}(\tau)=$
$\displaystyle-\sum_{n}\frac{\lambda^{2}}{2\omega_{n}}\sin\left(\omega_{n}\tau\right){\Theta(\tau)}$
(21) $\displaystyle\nu^{(-)}(\tau)=$
$\displaystyle\sum_{n}\frac{\lambda^{2}}{2\omega_{n}}\coth\left(\frac{\omega_{n}}{2k_{B}T_{F}}\right)\cos\left(\omega_{n}\tau\right).$
(22)
The spectral density associated with the $(-)$ bath is thus
$\displaystyle
J^{(-)}\left(\omega\right)=\sum_{n}\frac{\lambda^{2}}{2\omega_{n}}\delta\left(\omega-\omega_{n}\right).$
(23)
We note from the above that the influence of the $(-)$ bath on the IDF gives
rise to QBM dynamics with a sub-Ohmic spectral density, which necessitates a
non-Markovian treatment of the resulting dynamics Weiss .
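As an illustrative numerical sketch (not part of the derivation), the kernels of Eq. (21) and Eq. (22) can be evaluated for a finite, discretised set of bath modes; the coupling, temperature, and mode grid below are arbitrary placeholder values, in units with hbar = 1:

```python
import numpy as np

# Placeholder parameters (units with hbar = 1); not taken from the text.
lam = 0.1                                # coupling strength lambda
kB_T = 1.0                               # k_B * T_F of the field
omega_n = np.linspace(0.1, 10.0, 200)    # discretised bath mode frequencies

def eta_minus(tau):
    """Eq. (21) without the step function Theta(tau): odd in tau."""
    return -np.sum(lam**2 / (2.0 * omega_n) * np.sin(omega_n * tau))

def nu_minus(tau):
    """Eq. (22): even in tau, with coth(x) = 1/tanh(x)."""
    return np.sum(lam**2 / (2.0 * omega_n)
                  / np.tanh(omega_n / (2.0 * kB_T)) * np.cos(omega_n * tau))
```

For a finite mode set the kernels are quasi-periodic; a genuine continuum limit requires a density of states, as encoded in the spectral density of Eq. (23).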
We now consider the effect of the IDF in turn on the dynamics of the center of
mass. Using Eq. (III.1) and (19), we can rewrite the total influence
functional in Eq. (III) as
$\displaystyle\mathcal{F}\left[{X},{X}^{\prime}\right]=\prod_{n}$
$\displaystyle\int\mathrm{d}Q_{f}\mathrm{d}q^{(+)}_{nf}\int\mathrm{d}Q_{i}\mathrm{d}q^{(+)}_{ni}\int\mathrm{d}Q_{i}^{\prime}\mathrm{d}{q^{(+)}_{ni}}^{\prime}\,\rho_{I}\left(Q_{i},Q_{i}^{\prime};0\right)\otimes\rho_{F}^{(+)}\left(\left\\{q^{(+)}_{ni},{q^{(+)}_{ni}}^{\prime}\right\\};0\right)$
$\displaystyle\int_{Q(0)=Q_{i}}^{Q(t)=Q_{f}}\mathcal{D}Q\int_{Q^{\prime}(0)=Q^{\prime}_{i}}^{Q^{\prime}(t)=Q^{\prime}_{f}}\mathcal{D}Q^{\prime}e^{i\left(S_{I}[Q]-S_{I}[Q^{\prime}]\right)}\int_{q_{n}^{(+)}(0)=q_{ni}^{(+)}}^{q_{n}^{(+)}(t)=q_{nf}^{(+)}}\mathcal{D}q_{n}\int_{{q^{(+)}_{n}}^{\prime}(0)={q^{(+)}_{ni}}^{\prime}}^{{q^{(+)}_{n}}^{\prime}(t)=q^{(+)}_{nf}}\mathcal{D}{q^{(+)}_{n}}^{\prime}$
$\displaystyle
e^{i\left(S_{F}\left[\left\\{q_{n}^{(+)}\right\\}\right]-S_{F}\left[\left\\{{q_{n}^{(+)}}^{\prime}\right\\}\right]\right)}e^{i\left(S_{\mathrm{int}}^{(+)}\left[{X},Q,\left\\{q_{n}^{(+)}\right\\}\right]-S^{(+)}_{\mathrm{int}}\left[{X}^{\prime},Q^{\prime},\left\\{{q_{n}^{(+)}}^{\prime}\right\\}\right]\right)}e^{iS_{\mathrm{I,IF}}^{(-)}\left[Q,Q^{\prime}\right]},$
(24)
where the integrations over the IDF and the $(+)$ bath of oscillators remain.
Thus far the above influence functional is exact in that we have not made any
additional approximations with regard to the strength of coupling when tracing
over the $(-)$ bath.
### III.2 Tracing over the $(+)$ bath and the internal degree of freedom
Next we would like to integrate out the $(+)$ bath and the IDF, which are
nonlinearly coupled to the system, using a perturbative generating functional
approach. The generating functional is simply the influence functional of the
environment where the bath oscillators are linearly coupled to the system. We
define the influence action $S_{\mathrm{M,IF}}\left[X,X^{\prime}\right]$
corresponding to the influence functional
$\mathcal{F}\left[{X},{X}^{\prime}\right]$ in Eq.(III.1) such that
$\mathcal{F}\left[{X},{X}^{\prime}\right]\equiv e^{iS_{\mathrm{M,IF}}}$. The
influence action up to second order in the coupling strength $\lambda$ can be
obtained as (see appendix B for a detailed derivation) HPZ93 :
$\displaystyle S_{\mathrm{M,IF}}^{(2)}[{X},{X}^{\prime}]\approx$
$\displaystyle\left\langle
S_{\mathrm{int}}^{(+)}\left[{X},\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\right\rangle_{0}-\left\langle
S_{\mathrm{int}}^{(+)}\left[{X}^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\right\rangle_{0}$
$\displaystyle+\frac{i}{2}\left(\left\langle\left\\{S_{\mathrm{int}}^{(+)}\left[{X},\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\right\\}^{2}\right\rangle_{0}-\left\\{\left\langle
S_{\mathrm{int}}^{(+)}\left[{X},\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\right\rangle_{0}\right\\}^{2}\right)$
$\displaystyle+\frac{i}{2}\left(\left\langle\left\\{S_{\mathrm{int}}^{(+)}\left[{X}^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\right\\}^{2}\right\rangle_{0}-\left\\{\left\langle
S_{\mathrm{int}}^{(+)}\left[{X}^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\right\rangle_{0}\right\\}^{2}\right)$
$\displaystyle-i\left(\left\langle
S_{\mathrm{int}}^{(+)}\left[{X},\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]S_{\mathrm{int}}^{(+)}\left[{X}^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\right\rangle_{0}-\left\langle
S_{\mathrm{int}}^{(+)}\left[{X},\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\right\rangle_{0}\left\langle
S_{\mathrm{int}}^{(+)}\left[{X}^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\right\rangle_{0}\right),$ (25)
where we have defined ${\bf J}\equiv\left(J,\left\\{J_{n}\right\\}\right)$,
${\bf
J^{\prime}}\equiv\left(J^{\prime},\left\\{J^{\prime}_{n}\right\\}\right)$, and
the expectation values
$\left\langle\mathcal{O}[{J},{J^{\prime}}]\right\rangle_{0}\equiv\left.\mathcal{O}[{J},{J^{\prime}}]\mathcal{G}^{(1)}[{J},{J^{\prime}}]\right|_{{J}={J^{\prime}}=0}$.
The generating functional $\mathcal{G}^{(1)}[{J},{J^{\prime}}]$ for the $(+)$
bath and the IDF is defined as
$\displaystyle\mathcal{G}^{(1)}[J,J^{\prime},\left\\{J_{n},J^{\prime}_{n}\right\\}]\equiv\prod_{n}\mathcal{F}^{(1)}_{n}\left[J_{n},J^{\prime}_{n}\right]\mathcal{F}^{(1)}_{I}\left[J,J^{\prime}\right]$
(26)
with $\mathcal{F}^{(1)}_{I}\left[J,J^{\prime}\right]$ and
$\mathcal{F}^{(1)}_{n}\left[J_{n},J^{\prime}_{n}\right]$ as the influence
functionals for the internal oscillator and the $n^{\mathrm{th}}$ oscillator
of the $(+)$ bath respectively, with a linear coupling to the corresponding
source current terms $\left\\{J,J^{\prime}\right\\}$ and
$\left\\{J_{n},J^{\prime}_{n}\right\\}$. An explicit derivation of the
generating functional is given in Appendix A.
We can calculate the second order influence action in Eq. (III.2) as shown in
Appendix C to obtain a form similar to that of linear QBM dynamics HPZ92 :
$\displaystyle S_{\mathrm{M,IF}}^{(2)}\left[X,X^{\prime}\right]=$
$\displaystyle-\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t_{1}}\mathrm{d}t_{2}\,\left[X(t_{1})-X^{\prime}(t_{1})\right]\eta^{(2)}(t_{1},t_{2})\left[X(t_{2})+X^{\prime}(t_{2})\right]$
$\displaystyle+i\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t_{1}}\mathrm{d}t_{2}\,\left[X(t_{1})-X^{\prime}(t_{1})\right]\nu^{(2)}(t_{1},t_{2})\left[X(t_{2})-X^{\prime}(t_{2})\right],$
(27)
where dissipation and noise kernels are defined as
$\displaystyle\eta^{(2)}(t_{1},t_{2})=$
$\displaystyle\frac{1}{2}\eta^{(+)}\left(t_{1}-t_{2}\right)\left\\{\nu_{GG}\left(t_{1},t_{2}\right)+\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle\right\\}+\frac{1}{4}\nu^{(+)}\left(t_{1}-t_{2}\right){g}\left(t_{1}-t_{2}\right)\Theta\left(t_{1}-t_{2}\right),$
(28) $\displaystyle\nu^{(2)}(t_{1},t_{2})=$
$\displaystyle\frac{1}{2}\nu^{(+)}\left(t_{1}-t_{2}\right)\left\\{\nu_{GG}\left(t_{1},t_{2}\right)+\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle\right\\}-\frac{1}{4}\eta^{(+)}\left(t_{1}-t_{2}\right){g}\left(t_{1}-t_{2}\right),$
(29)
with ${G}(t)=g(t)\Theta(t)$ the propagator for the IDF defined as in Eq.(67).
The noise correlation of the IDF is given by
$\displaystyle\nu_{GG}(t_{1},t_{2})\equiv\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{t}\mathrm{d}\tau_{2}{G}\left(t_{1}-\tau_{1}\right)\nu^{(-)}\left(\tau_{1}-\tau_{2}\right){G}\left(t_{2}-\tau_{2}\right).$
(30)
The noise arising from the dispersion in the initial conditions of the IDF is
captured in the term $\sim\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle$, where
$Q_{h}\left(t\right)$ is the classical solution to the homogeneous Langevin
equation (Eq. (66)) corresponding to the dissipative dynamics of the IDF in
the presence of the $(-)$ bath. The average $\left\llangle\dots\right\rrangle$
is taken over the initial position and momentum distribution of the IDF, as
defined in Eq. (65).
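The double time integral defining $\nu_{GG}$ in Eq. (30) can be discretised directly; the propagator and noise kernel below are simple analytic stand-ins chosen for illustration, not the ones derived in the text:

```python
import numpy as np

dt = 0.05
tau = np.arange(0.0, 20.0, dt)            # quadrature grid on [0, t], t = 20
T1, T2 = np.meshgrid(tau, tau, indexing="ij")

def G(s):
    """Stand-in causal IDF propagator G(t) = g(t) * Theta(t)."""
    return np.where(s > 0.0, np.exp(-0.2 * s) * np.sin(s), 0.0)

def nu_minus(s):
    """Stand-in (even) noise kernel of the (-) bath."""
    return np.exp(-np.abs(s)) * np.cos(2.0 * s)

def nu_GG(t1, t2):
    """Discretised double integral of Eq. (30)."""
    return np.sum(G(t1 - T1) * nu_minus(T1 - T2) * G(t2 - T2)) * dt**2
```

The symmetry $\nu_{GG}(t_1,t_2)=\nu_{GG}(t_2,t_1)$ follows from the evenness of $\nu^{(-)}$ and is a useful consistency check on any discretisation.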
The dissipation and noise kernels associated with the $(+)$ bath, defined for
a bilinear system-bath coupling, are
$\displaystyle\eta^{(+)}\left(\tau\right)\equiv$
$\displaystyle-\frac{1}{2}\sum_{n}\lambda^{2}\omega_{n}\sin\left(\omega_{n}\tau\right){\Theta(\tau)}$
(31) $\displaystyle\nu^{(+)}\left(\tau\right)\equiv$
$\displaystyle\frac{1}{2}\sum_{n}\lambda^{2}\omega_{n}\coth\left(\frac{\omega_{n}}{2k_{B}T_{F}}\right)\cos\left(\omega_{n}\tau\right),$
(32)
similar to those of the $(-)$ bath (Eq. (21) and Eq. (22)) with a coupling
$\lambda\rightarrow\lambda\omega_{n}$. The spectral density associated with
the $(+)$ bath is thus
$\displaystyle
J^{(+)}\left(\omega\right)=\sum_{n}\frac{\lambda^{2}\omega_{n}}{2}\delta\left(\omega-\omega_{n}\right),$
(33)
corresponding to an Ohmic case CalzettaHu . We note that while the two baths
$(+)$ and $(-)$ originate from the same field, the gradient coupling of the
center of mass to the field gives the $(+)$ bath an Ohmic spectral density,
whereas the $(-)$ bath is sub-Ohmic (see Eq. (23)).
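The Ohmic/sub-Ohmic contrast between Eq. (33) and Eq. (23) can be made explicit in a continuum sketch, replacing the mode sums by smooth envelopes under an assumed flat mode density; the numbers are placeholders:

```python
import numpy as np

lam = 0.1                           # coupling (placeholder)
omega = np.linspace(0.01, 5.0, 500)

# Continuum envelopes of Eqs. (23) and (33) for a flat mode density:
J_minus = lam**2 / (2.0 * omega)    # sub-Ohmic: grows as omega -> 0
J_plus = lam**2 * omega / 2.0       # Ohmic: vanishes linearly at omega = 0

# The gradient coupling of the center of mass converts the 1/omega weight
# of the (-) bath into the linear-in-omega weight of the (+) bath:
ratio = J_plus / J_minus            # equals omega**2
```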
One can note a few important features from the influence action (Eq. (27)) and
the dissipation and noise kernels therein (Eq. (28) and Eq. (29)):
* •
The overall structural similarity of the second-order influence action to that
of linear QBM extends to the structure of the dissipation and noise kernels as
well. That is, writing the interaction Hamiltonian as
$H\sim\hat{X}\hat{\mathcal{B}}$, where $\hat{\mathcal{B}}$ is the total bath
operator (possibly nonlinear), one can write an effective action of the same
form as linear QBM, with dissipation and noise kernels given in terms of the
expectation values of the commutator and anticommutator of the two-time bath
correlation functions,
$\eta\left(\tau\right)=\left\langle\left[\hat{\mathcal{B}}\left(\tau\right),\hat{\mathcal{B}}\left(0\right)\right]\right\rangle$
and
$\nu\left(\tau\right)=\left\langle\left\\{\hat{\mathcal{B}}\left(\tau\right),\hat{\mathcal{B}}\left(0\right)\right\\}\right\rangle$.
This is a general property of bilinear system-bath couplings. The
influence functional of Eq.(III.1) for $X=0=X^{\prime}$ reduces to a Gaussian
path integral, so it is reasonable that the second-order result takes a
linear QBM form. As a check, one can verify that unitarity
of the evolution implies $S_{\mathrm{M,IF}}^{(2)}\left[X,X\right]=0$.
* •
The dissipation kernel contains two parts that can be interpreted as: (1)
the dissipation kernel of the $(+)$ bath
$(\eta^{(+)}(t_{1}-t_{2}))$ combined with the noise from the IDF
$(\nu_{GG}\left(t_{1},t_{2}\right)+\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle)$, and (2) the
linear propagator of the IDF $(G\left(t_{1}-t_{2}\right))$ combined with the
noise from the $(+)$ bath $(\nu^{(+)}\left(t_{1}-t_{2}\right))$. Notice that
the quantity $\nu_{GG}\left(t_{1},t_{2}\right)+\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle$ stands for the
full quantum correlations of the IDF under the influence of the $(-)$ bath
alone. The term $\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle$ accounts for the
relaxation of the initial-conditions contribution, thus carrying the
information about the initial state of the IDF, and is determined only by the
dissipation caused by the $(-)$ bath. On the other hand, the term
$\nu_{GG}\left(t_{1},t_{2}\right)$ stands for the fluctuations of the IDF
generated by the fluctuations of the $(-)$ bath.
* •
The noise kernel combines the noise from the IDF
$(\nu_{GG}\left(t_{1},t_{2}\right)+\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle)$ with that from
the $(+)$ bath $(\nu^{(+)}\left(t_{1}-t_{2}\right))$. In addition, the noise
kernel contains a term combining the two dissipation kernels, from the IDF and
the $(+)$ bath.
## IV Langevin equation and fluctuation-dissipation relation
Having determined the influence action for the center of mass, we can now turn
to studying its dynamics. This section is devoted to the deduction of an
effective Langevin equation of motion for the center of mass and a
corresponding generalized fluctuation-dissipation relation in the long-time
limit.
### IV.1 Effective equation of motion for the center of mass
Considering the influence action obtained as a result of tracing out the IDF
and the field (Eq.(27)), the total effective action for the center of mass
variables up to second order reads:
$\displaystyle
S_{\mathrm{M,eff}}\left[X,X^{\prime}\right]=S_{M}\left[X\right]-S_{M}\left[X^{\prime}\right]+S_{\mathrm{M,IF}}^{(2)}\left[X,X^{\prime}\right].$
(34)
The above action is quadratic and has the same form as in the case of linear
QBM CalzettaHu . One can thus deduce an equation of motion for the center of
mass by extremizing the above action to obtain
$\displaystyle\ddot{X}\left(t\right)+\Omega^{2}X\left(t\right)+\frac{2}{M}\int_{0}^{t}d\tau~{}\eta^{(2)}(t,\tau)~{}X(\tau)=0.$
(35)
A Langevin equation can be worked out by implementing the Feynman and Vernon
procedure for Gaussian path integrals FeynmanTrick ; Behunin1 . First, we
notice that $iS_{\mathrm{M,IF}}^{(2)}=i{\rm Re}[S_{\mathrm{M,IF}}^{(2)}]-{\rm
Im}[S_{\mathrm{M,IF}}^{(2)}]$, and that both the real and imaginary parts of
the effective action are quadratic functionals of $\\{X,X^{\prime}\\}$. Thus,
${\rm exp}(-{\rm Im}[S_{\mathrm{M,IF}}^{(2)}])$ is a Gaussian functional,
owing to the quadratic nature of the action. Considering that a
Gaussian functional can be written in terms of its functional Fourier
transform (which is also a Gaussian functional), we can write ${\rm exp}(-{\rm
Im}[S_{\mathrm{M,IF}}^{(2)}])$ as an influence functional over a new variable
$\xi(t)$
$\displaystyle
e^{iS_{\mathrm{M,IF}}^{(2)}\left[X,X^{\prime}\right]}=\int\mathcal{D}\xi~{}\mathcal{P}\left[\xi\right]e^{i\int_{0}^{t}dt_{1}\left[X(t_{1})-X^{\prime}(t_{1})\right]\left[\xi(t_{1})-\int_{0}^{t_{1}}dt_{2}\eta^{(2)}(t_{1},t_{2})\left[X(t_{2})+X^{\prime}(t_{2})\right]\right]},$
(36)
where the distribution of the new functional variable is given by
$\displaystyle\mathcal{P}\left[\xi\right]=e^{-\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}\xi(t_{1})\left[4\nu^{(2)}(t_{1},t_{2})\right]^{-1}\xi(t_{2})}.$
(37)
It is worth noting that the new variable $\xi\left(t\right)$ replaces the
kernel describing the quantum and thermal fluctuations of the composite
environment and drives the dynamics as an external stochastic force. To recover
the influence action as in Eq. (36), one needs to integrate over $\xi$ given
the functional distribution $\mathcal{P}[\xi]$, which is positive definite
since the noise kernel $\nu^{(2)}\left(t_{1},t_{2}\right)$ is symmetric and
positive. Thus, the stochastic variable $\xi\left(t\right)$ acts as a
classical fluctuating force, which can be interpreted as a noise with a
probability distribution $\mathcal{P}[\xi]$. Furthermore, due to the
Gaussianity of $\mathcal{P}\left[\xi\left(t\right)\right]$ the noise is
completely characterized by its first and second moments:
$\displaystyle\left\langle\xi\left(t\right)\right\rangle_{\xi}=0,$
$\displaystyle\left\langle\xi(t_{1})\xi(t_{2})\right\rangle_{\xi}=4\nu^{(2)}(t_{1},t_{2}),$
(38)
where
$\left\langle...\right\rangle_{\xi}=\int\mathcal{D}\xi\mathcal{P}\left[\xi\right](...)$.
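Numerically, a noise realisation $\xi(t)$ with the moments of Eq. (38) can be drawn via a Cholesky factorisation of its covariance; the stationary Gaussian covariance below is a stand-in for $4\nu^{(2)}(t_1,t_2)$, used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)

# Stand-in for 4*nu^(2)(t1, t2): a stationary Gaussian covariance.
tau = t[:, None] - t[None, :]
cov = np.exp(-tau**2 / 2.0)

# A small diagonal jitter keeps the factorisation numerically stable.
L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(t)))

# Each row of xi is one realisation with <xi> = 0 and <xi xi> = cov.
xi = rng.standard_normal((5000, len(t))) @ L.T
sample_cov = xi.T @ xi / xi.shape[0]
```

Averages over such realisations then approximate the stochastic averages $\left\langle...\right\rangle_{\xi}$.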
Now, we can re-define an effective action for the center of mass including
this new variable as:
$\displaystyle\tilde{S}_{\mathrm{M,eff}}\left[X,X^{\prime},\xi\right]=S_{M}\left[X\right]-S_{M}\left[X^{\prime}\right]+{\mathrm{Re}}\left[S_{\mathrm{M,IF}}^{(2)}\left[X,X^{\prime}\right]\right]+\int_{0}^{t}dt_{1}\left[X(t_{1})-X^{\prime}(t_{1})\right]\xi(t_{1}).$
(39)
Finally, associated with this new action we can derive an effective equation
of motion for the center of mass $(\delta\tilde{S}_{\mathrm{M,eff}}/\delta
X_{\Delta})|_{X_{\Delta}=0}=0$, which now gives the Langevin equation:
$\displaystyle\ddot{X}\left(t\right)+\Omega^{2}X\left(t\right)+\frac{2}{M}\int_{0}^{t}d\tau~{}\eta^{(2)}(t,\tau)~{}X(\tau)=\xi\left(t\right),$
(40)
subject to the statistical properties of $\xi\left(t\right)$ as in Eq.
(IV.1). We can see that averaging the above equation over the stochastic force
($\left\langle...\right\rangle_{\xi}$) recovers Eq.(35), which we can thus
interpret as the Langevin equation of motion after averaging over the possible
noise realizations. As has been shown previously in CRV , the correlations of
the system observables can be obtained from the solutions of such a Langevin
equation.
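A minimal time-stepping sketch of Eq. (40) is given below; the exponential memory kernel (negative at short times, mimicking the sign of the dissipation kernels above) and the crude white-noise realisation are illustrative stand-ins, not the kernels derived in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
M, Omega = 1.0, 1.0                   # placeholder mass and trap frequency
dt, n_steps = 0.01, 2000
t = np.arange(n_steps) * dt

def eta2(t1, t2):
    """Stand-in memory kernel: causal, exponentially decaying."""
    return -0.1 * np.exp(-(t1 - t2))

xi = 0.1 * rng.standard_normal(n_steps)   # crude noise realisation

X, V = np.zeros(n_steps), np.zeros(n_steps)
X[0] = 1.0                                # initial displacement
for i in range(n_steps - 1):
    # Memory force: (2/M) * integral_0^t eta2(t, tau) X(tau) d tau
    mem = (2.0 / M) * np.sum(eta2(t[i], t[:i + 1]) * X[:i + 1]) * dt
    a = -Omega**2 * X[i] - mem + xi[i]
    V[i + 1] = V[i] + a * dt              # semi-implicit Euler step
    X[i + 1] = X[i] + V[i + 1] * dt
```

Averaging many such trajectories over noise realisations recovers the mean dynamics of Eq. (35).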
### IV.2 Generalized fluctuation-dissipation relation
We now analyze the relations between the dissipation and noise kernels of the
composite environment. Considering the definitions of the dissipation and
noise kernels given in Eqs.(28) and (29), respectively, we first take the
long-time limit. In this limit, the dissipation and noise kernels simplify to
$\displaystyle\eta^{(2)}(t_{1},t_{2})\longrightarrow$
$\displaystyle\left[\frac{1}{2}\eta^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}\left(t_{1},t_{2}\right)+\frac{1}{4}g\left(t_{1}-t_{2}\right)\nu^{(+)}\left(t_{1}-t_{2}\right)\right]\Theta\left(t_{1}-t_{2}\right),$
(41) $\displaystyle\nu^{(2)}(t_{1},t_{2})\longrightarrow$
$\displaystyle\frac{1}{2}\nu^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}\left(t_{1},t_{2}\right)-\frac{1}{4}g\left(t_{1}-t_{2}\right)\eta^{(+)}\left(t_{1}-t_{2}\right),$
(42)
where the term $\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle$ associated with
the relaxation of the IDF vanishes in the late time limit, and the kernel
$\nu_{GG}(t_{1},t_{2})$ reads:
$\displaystyle\nu_{GG}(t_{1},t_{2})\equiv\int_{0}^{\infty}\mathrm{d}\tau_{1}\int_{0}^{\infty}\mathrm{d}\tau_{2}{G}\left(t_{1}-\tau_{1}\right)\nu^{(-)}\left(\tau_{1}-\tau_{2}\right){G}\left(t_{2}-\tau_{2}\right).$
(43)
Notice that this limit holds regardless of the initial state of the IDF. This
is in agreement with the notion that the late-time dynamics is determined by
the field fluctuations alone.
It can be shown that the Fourier and Laplace transforms of the dissipation and
noise kernels corresponding to the individual baths can be related in terms of
the following fluctuation-dissipation relations (see Appendix D for details)
$\displaystyle\bar{\nu}^{(\pm)}(\omega)=$
$\displaystyle-2\coth\left(\frac{\omega}{2k_{B}T_{F}}\right){\rm
Im}\left[\bar{\eta}^{(\pm)}(\omega)\right]$ (44)
$\displaystyle\bar{\nu}_{GG}(\omega)=$
$\displaystyle\coth\left(\frac{\omega}{2k_{B}T_{F}}\right){\rm
Im}[\bar{G}(\omega)],$ (45)
where $\bar{F}(\omega)$ denotes the Fourier transform of $F(t)$, defined as
$\bar{F}(\omega)=\int_{-\infty}^{+\infty}{dt}e^{i\omega t}F(t)$. Note that
both $\eta^{(\pm)}$ and $G$ are causal functions, as is also true for the
dissipation kernel $\eta^{(2)}$.
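The relation of Eq. (44) can be checked numerically for a single bath mode, regularised by a small exponential decay so that the Fourier transforms exist as ordinary integrals; all parameter values are illustrative:

```python
import numpy as np

A, w0, kB_T, eps = 1.0, 2.0, 1.0, 0.01   # single regularised mode (placeholders)
dt = 0.005
t = np.arange(0.0, 1000.0, dt)           # eps * t_max = 10: tail negligible

coth = 1.0 / np.tanh(w0 / (2.0 * kB_T))
eta = -A * np.sin(w0 * t) * np.exp(-eps * t)       # causal dissipation kernel
nu = A * coth * np.cos(w0 * t) * np.exp(-eps * t)  # noise kernel on t >= 0

w = w0                                    # evaluate at the mode frequency
eta_bar = np.sum(np.exp(1j * w * t) * eta) * dt
nu_bar = 2.0 * np.sum(np.cos(w * t) * nu) * dt     # nu(t) is even in t

lhs = nu_bar                              # left side of Eq. (44)
rhs = -2.0 * coth * eta_bar.imag          # right side of Eq. (44)
```

As eps -> 0 the Lorentzians sharpen into the delta functions of the discrete spectral density, and the relation holds frequency by frequency.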
One can thus rewrite Eqs.(41) and (42) in terms of the Fourier transformed
kernels as follows
$\displaystyle\bar{\eta}^{(2)}(\omega)=$
$\displaystyle\frac{1}{2}\left[\bar{\nu}_{GG}\ast\bar{\eta}^{(+)}\right](\omega)+\frac{1}{4}\left[\bar{G}\ast\bar{\nu}^{(+)}\right](\omega),$
(46) $\displaystyle\bar{\nu}^{(2)}(\omega)=$
$\displaystyle\frac{1}{2}\left[\bar{\nu}_{GG}\ast\bar{\nu}^{(+)}\right](\omega)+\left[{\rm
Im}\left(\bar{G}\right)\ast{\rm
Im}\left(\bar{\eta}^{(+)}\right)\right](\omega).$ (47)
The convolution product of two functions is defined as $[A\ast
B](\omega)\equiv\int_{-\infty}^{+\infty}\frac{d\omega^{\prime}}{2\pi}A(\omega-\omega^{\prime})B(\omega^{\prime})$,
which satisfies $[A\ast B](\omega)=[B\ast A](\omega)$.
Using the fluctuation-dissipation relations for the kernels of the two baths
and the IDF, we can write the dissipation kernel for the MDF in terms of those
of the IDF and the baths in frequency domain as follows
$\displaystyle{\rm Im}\left[\bar{\eta}^{(2)}(\omega)\right]=$
$\displaystyle\int_{0}^{+\infty}\frac{d\omega^{\prime}}{2\pi}\frac{1}{2}\left[\frac{1}{2}\left\\{\bar{\nu}^{(+)}(\omega-\omega^{\prime})-\bar{\nu}^{(+)}(\omega+\omega^{\prime})\right\\}{\rm
Im}[\bar{G}(\omega^{\prime})]+\left\\{\bar{\nu}_{GG}(\omega-\omega^{\prime})-\bar{\nu}_{GG}(\omega+\omega^{\prime})\right\\}{\rm
Im}\left[\bar{\eta}^{(+)}(\omega^{\prime})\right]\right].$ (48)
We note from the above equation that the contributions to the total
dissipation come from the three body processes between the MDF, IDF and the
$(+)$ bath, with the propagator of the IDF combining with the noise of the
$(+)$ bath and vice versa. Each term in the above equation accounts for the
four possible processes involved in the dynamics at second order, wherein one
of the thermal baths contributes the initial excitation, while the other bath
acts as an extra channel for partial energy exchange.
Furthermore, the above kernel can be alternatively expressed in a compact way
as follows
$\displaystyle{\rm Im}\left[\bar{\eta}^{(2)}(\omega)\right]=$
$\displaystyle\int_{-\infty}^{+\infty}\frac{d\omega^{\prime}}{2\pi}\frac{1}{2}\left[\coth\left(\frac{\omega-\omega^{\prime}}{2k_{B}T_{F}}\right)-\coth\left(\frac{\omega^{\prime}}{2k_{B}T_{F}}\right)\right]{\rm
Im}[\bar{G}(\omega-\omega^{\prime})]~{}{\rm
Im}\left[\bar{\eta}^{(+)}(\omega^{\prime})\right].$ (49)
We can similarly express the noise kernel in terms of the dissipation kernels
of the IDF and the two baths as follows
$\displaystyle\bar{\nu}^{(2)}(\omega)=$
$\displaystyle\int_{-\infty}^{+\infty}\frac{d\omega^{\prime}}{2\pi}\frac{1}{2}\left[1-\coth\left(\frac{\omega-\omega^{\prime}}{2k_{B}T_{F}}\right)\coth\left(\frac{\omega^{\prime}}{2k_{B}T_{F}}\right)\right]{\rm
Im}[\bar{G}(\omega-\omega^{\prime})]~{}{\rm
Im}\left[\bar{\eta}^{(+)}(\omega^{\prime})\right].$ (50)
It is not possible to find a simple relation between the above transforms of
the dissipation and noise kernels $\\{\bar{\eta}^{(2)},\bar{\nu}^{(2)}\\}$
because of the presence of the integrals over $\omega^{\prime}$, which arise
from the convolution products. We therefore define new kernels
$D^{(2)}(\omega,\omega^{\prime})$ and $N^{(2)}(\omega,\omega^{\prime})$ that
depend on two frequency variables, accounting for processes where a given
frequency-component contribution is modified by other frequencies such that
$\bar{\eta}^{(2)}(\omega)=\int_{-\infty}^{+\infty}\frac{d\omega^{\prime}}{2\pi}D^{(2)}(\omega,\omega^{\prime})$
and
$\bar{\nu}^{(2)}(\omega)=\int_{-\infty}^{+\infty}\frac{d\omega^{\prime}}{2\pi}N^{(2)}(\omega,\omega^{\prime})$.
Thus we can find a relation between these new kernels that reads
$\displaystyle
N^{(2)}\left(\omega,\omega^{\prime}\right)=2\coth\left[\frac{\omega-2\omega^{\prime}}{2k_{B}T}\right]{\rm
Im}\left[D^{(2)}(\omega,\omega^{\prime})\right],$ (51)
or inversely
$\displaystyle{\rm
Im}\left[D^{(2)}(\omega,\omega^{\prime})\right]=\frac{1}{2}\tanh\left[\frac{\omega-2\omega^{\prime}}{2k_{B}T}\right]N^{(2)}\left(\omega,\omega^{\prime}\right).$
(52)
The introduction of the extra variable $\omega^{\prime}$ reflects the
complexity of the environment's energy exchange with the system. The nonlinear
coupling allows the two parts of the environment (the field and the IDF) to
simultaneously exchange energy with the system. From these relations, it is
straightforward to show that
$\displaystyle{\rm Im}\left[\bar{\eta}^{(2)}(\omega)\right]=$
$\displaystyle\int_{-\infty}^{+\infty}\frac{d\omega^{\prime}}{2\pi}\frac{1}{2}\tanh\left[\frac{\omega^{\prime}}{2k_{B}T}\right]N^{(2)}\left(\omega,\frac{\omega-\omega^{\prime}}{2}\right),$
(53)
and
$\displaystyle\bar{\nu}^{(2)}(\omega)=$
$\displaystyle\int_{-\infty}^{+\infty}\frac{d\omega^{\prime}}{2\pi}2\coth\left[\frac{\omega^{\prime}}{2k_{B}T}\right]D^{(2)}\left(\omega,\frac{\omega-\omega^{\prime}}{2}\right).$
(54)
The above equation corresponds to a generalized fluctuation-dissipation
relation (FDR) for the late-time dissipation and noise kernels.
We can physically interpret the results by comparing the present situation to
the case of linear QBM. The FDR that holds between the dissipation and noise
kernels in QBM can be stated in the frequency domain, for a single frequency
component $\omega$, in the general form
$[\mathrm{Noise}](\omega)=[\mathrm{Thermal\;factor}](\omega)\times[\mathrm{Dissipation}](\omega)$,
wherein a single frequency connects both sides of the relation. In our case,
the nonlinear nature of the coupling precludes such a relation at a single
frequency. Physically, such a nonlinear coupling leads to processes wherein an
excitation from one of the baths interacts with the system while being
perturbed by the other bath. We can thus formulate a generalized FDR, Eq.
(51), by introducing two frequency variables to account for the nonlinearity.
## V Discussion
We have derived the non-equilibrium dissipative center of mass dynamics of a
particle interacting with a field via its IDF. The model considered in this
paper underlies microscopic optomechanical interactions between neutral
particles and fields, and is generally applicable to open quantum systems that
possess intermediary quantum degrees of freedom between system and bath. We
show that the field can be separated into two baths, referred to as $(+)$ and
$(-)$ baths, with the $(-)$ bath coupled linearly to the IDF, and the $(+)$
bath coupled nonlinearly to both IDF and MDF (see Eq. (9) and Eq. (10)). Such
a decomposition allows one to systematically trace over the $(-)$ bath and
express its resulting influence on the IDF in terms of an exact second-order
effective action as given by Eq. (20). As the nonlinear coupling between the MDF, IDF
and the $(+)$ bath poses a challenge in terms of writing the exact dynamics of
the MDF, we use a perturbative influence functional approach to trace over the
IDF and the $(+)$ bath to obtain a second-order effective action for the MDF.
We find that the effective action and the dissipation and noise kernels
resulting from the composite environment (Eqs. (27), (28) and (29),
respectively) are structurally similar to those of linear QBM dynamics. This can be attributed to the
quadratic nature of coupling between the system variable and the composite
bath that goes as $\sim X\mathcal{B}$, where the bath operator
$\mathcal{B}\left(\sim Q\sum_{n}q_{n}^{(+)}\right)$ can be nonlinear. We find
a Langevin equation of motion for the MDF, describing its dissipative non-
equilibrium dynamics (Eq. (40)). The dissipation and noise kernels at late
time can be related via a generalized FDR as given by Eq. (51).
This work highlights three aspects:
1. 1.
Dissipation and noise of the open quantum system in the presence of an IDF –
It can be seen from the dissipation and noise kernels (Eq. (28) and (29))
arising from the composite bath of the IDF+field that the total dissipation of
the MDF is a combination of the dissipation kernel corresponding to the IDF
and noise kernel corresponding to the $(+)$ bath, and vice versa. Similarly
the noise kernel involves a combination of the noise of the IDF and that of
the $(+)$ bath. Additionally, it also includes a contribution resulting from
the combination of the dissipation of the IDF and that of the $(+)$ bath,
which is needed to obtain a generalized FDR for the kernels of our composite
environment. As discussed in Sec. III.2, the dissipation and noise kernels are
similar in structure to those of linear QBM up to second order in the
perturbative expansion of the influence action. The present approach can be
extended to higher orders in a systematic way, thus extending the results of
Ref. HPZ93 to complex environments.
2. 2.
Coupled non-equilibrium dynamics of different degrees of freedom with self-
consistent backaction – The equation of motion for the MDF given in Eq. (40)
describes the quantum center of mass motion including the self-consistent
backaction of the various degrees of freedom on each other. The approach based
on the separation of the field into two uncorrelated baths is akin to the
Einstein-Hopf theorem that establishes the statistical independence of the
blackbody radiation field and its derivative EinHopf1 ; EinHopf3 . The
influence of the $(-)$ bath on the IDF is included via the effective action
Eq. (20), and the perturbative second order effective action in Eq. (27)
includes the influence of the $(+)$ bath and the IDF (affected by the $(-)$
bath). In this way, the quantum field influences the MDF’s dynamics by two
means, by its direct interaction as the $(+)$ bath and through the
fluctuations of the IDF as the $(-)$ bath. Furthermore, considering the zero-
temperature limit of the dynamics, we find that a mechanical oscillator
interacting with the vacuum field exhibits dissipative dynamics as a result of
the quantum fluctuations of its composite nonlinear environment. Such a
vacuum-induced noise poses a fundamental constraint on preparing mechanical
objects in quantum states KSYS1 ; Skatepark .
3. 3.
Fluctuation-dissipation relations for a system coupled to a composite
environment – We find a generalized FDR between the late-time dissipation and
noise kernels, that has a similar structure to linear QBM (see Eq. (51))
albeit involving two frequency variables due to the nonlinearity of the
interaction. This allows one to interpret the single-frequency response of the
effective kernels for the MDF in terms of contributions from various frequency
components of the IDF and field. Our result provides an example of FDR for an
open quantum system with nonlinear dynamics when linear response theory is no
longer valid HPZ93 ; JT20 .
As a future prospect, our results can be extended to the study of
non-equilibrium dynamics in a wide range of physical systems, particularly
when the time scales of the different degrees of freedom are no longer
disparate.
In the context of atom-field interactions, it has been shown for example that
the internal and external degrees of freedom of atoms in a standing laser wave
can exhibit synchronization Argonov05 . Furthermore, the IDF dynamics can be
slow enough to support a dark internal state, which can lead to highly
correlated internal-external dynamics, as in the velocity-selective coherent
population trapping scheme Aspect88 ; Aspect89 , and in long-lived internal
states of atomic clock transitions Boyd .
experimentally observed that the internal and external degrees of freedom can
be efficiently coupled in the presence of colored noise Machluf10 . On the
other hand, when the center of mass dynamics is fast enough to be comparable
to the internal dynamics, one requires a careful consideration of the coupled
internal-external dynamics. Such a situation can arise in the case of the
dynamical Casimir effect (DCE) Lin18 , when considering the self-consistent
backaction of the field and the IDF on the mirror Butera ; Belen19 . Finally,
our results can be extended to analyze the thermalization properties and
non-equilibrium dynamics of the center of mass of nanoparticles Yin ; AERL18 .
## VI Acknowledgments
We are grateful to Bei-Lok Hu, Esteban A. Calzetta, and Peter W. Milonni for
insightful discussions. Y.S. acknowledges support from the Los Alamos National
Laboratory ASC Beyond Moore’s Law project and the LDRD program. A.E.R.L. wants
to thank the AMS for the support.
## Appendix A Generating functional derivation
The generating functionals corresponding to the IDF and the $(+)$ bath can be
explicitly written as
$\displaystyle\mathcal{F}^{(1)}_{I}\left[J,J^{\prime}\right]=\int\mathrm{d}Q_{f}\int\mathrm{d}Q_{i}\int\mathrm{d}Q^{\prime}_{i}\,\rho_{I}\left(Q_{i},Q^{\prime}_{i};0\right)\int_{Q(0)=Q_{i}}^{Q(t)=Q_{f}}\mathcal{D}Q\int_{Q^{\prime}(0)=Q^{\prime}_{i}}^{Q^{\prime}(t)=Q_{f}}\mathcal{D}Q^{\prime}$
$\displaystyle
e^{i\left(S_{I}[Q]-S_{I}[Q^{\prime}]+S_{\mathrm{I,IF}}^{(-)}[Q,Q^{\prime}]\right)}$
$\displaystyle
e^{i\int_{0}^{t}\mathrm{d}\tau\left[J(\tau)Q(\tau)-J^{\prime}(\tau)Q^{\prime}(\tau)\right]}$
(55) $\displaystyle\mathcal{F}^{(1)}_{n}\left[J_{n},J^{\prime}_{n}\right]=$
$\displaystyle\int\mathrm{d}q_{nf}^{(+)}\int\mathrm{d}q_{ni}^{(+)}\int\mathrm{d}{q_{ni}^{(+)}}^{\prime}\,\rho_{F,n}^{(+)}\left(\left\\{q_{ni}^{(+)},{q_{ni}^{(+)}}^{\prime}\right\\};0\right)$
$\displaystyle\int_{q_{n}^{(+)}(0)=q^{(+)}_{ni}}^{q^{(+)}_{n}(t)=q^{(+)}_{nf}}\mathcal{D}q^{(+)}_{n}\int_{{q_{n}^{(+)}}^{\prime}(0)={q_{ni}^{(+)}}^{\prime}}^{{q_{n}^{(+)}}^{\prime}(t)=q^{(+)}_{nf}}\mathcal{D}{{q_{n}}^{(+)}}^{\prime}e^{i\left(S_{F,n}^{(+)}[\left\\{q_{n}^{(+)}\right\\}]-S_{F,n}^{(+)}[\left\\{{q_{n}^{(+)}}^{\prime}\right\\}]\right)}e^{i\int_{0}^{t}\mathrm{d}\tau\left[J_{n}(\tau)q_{n}^{(+)}(\tau)-J^{\prime}_{n}(\tau){{q_{n}^{(+)}}^{\prime}}(\tau)\right]},$
(56)
where $\rho_{F,n}^{(+)}\left(q_{ni}^{(+)},{q_{ni}^{(+)}}^{\prime};0\right)$
represents the initial state of the $n^{\mathrm{th}}$ oscillator of the $(+)$
bath, and $S_{F,n}^{(+)}$ is the corresponding free action. We can then see
from Eq. (III.1) and Eq. (19) that the generating functional for the $(+)$
bath is simply the influence functional for linear QBM given by
$\displaystyle\mathcal{F}^{(1)}_{n}\left[J_{n},J^{\prime}_{n}\right]=e^{iS_{\mathrm{IF},n}^{(+)}[J_{n},J_{n}^{\prime}]},$
(57)
where the influence action $S_{\mathrm{IF},n}^{(+)}[J_{n},J_{n}^{\prime}]$ is
defined as
$\displaystyle
S^{(+)}_{\mathrm{IF},n}\left[J_{n},J_{n}^{\prime}\right]\equiv-\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{\tau_{1}}\mathrm{d}\tau_{2}\left[J_{n}\left(\tau_{1}\right)-J_{n}^{\prime}\left(\tau_{1}\right)\right]\eta_{n}^{(+)}\left(\tau_{1}-\tau_{2}\right)\left[J_{n}\left(\tau_{2}\right)+J_{n}^{\prime}\left(\tau_{2}\right)\right]$
$\displaystyle+i\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{\tau_{1}}\mathrm{d}\tau_{2}\left[J_{n}\left(\tau_{1}\right)-J_{n}^{\prime}\left(\tau_{1}\right)\right]\nu_{n}^{(+)}\left(\tau_{1}-\tau_{2}\right)\left[J_{n}\left(\tau_{2}\right)-J_{n}^{\prime}\left(\tau_{2}\right)\right],$
(58)
with the dissipation and noise kernels
$\displaystyle\eta_{n}^{(+)}(\tau)=$
$\displaystyle-\frac{1}{2\omega_{n}}\sin\left(\omega_{n}\tau\right)$ (59)
$\displaystyle\nu_{n}^{(+)}(\tau)=$
$\displaystyle\frac{1}{2\omega_{n}}\coth\left(\frac{\omega_{n}}{2k_{B}T_{F}}\right)\cos\left(\omega_{n}\tau\right).$
(60)
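Eqs. (59)–(60) make the parity of the single-mode kernels explicit: $\eta_{n}^{(+)}$ is odd in $\tau$ while $\nu_{n}^{(+)}$ is even, facts used repeatedly in the reductions of Appendix C. A minimal numerical sketch (with illustrative values of $\omega_{n}$ and $T_{F}$, in units $\hbar=k_{B}=1$) confirms this:

```python
import numpy as np

# Single-mode dissipation and noise kernels of the (+) bath, Eqs. (59)-(60).
# omega_n and T_F are illustrative values; units with hbar = k_B = 1.
omega_n, T_F = 2.0, 0.5

def eta_plus(tau):
    return -np.sin(omega_n * tau) / (2.0 * omega_n)

def nu_plus(tau):
    # coth(x) = 1 / tanh(x)
    return np.cos(omega_n * tau) / (2.0 * omega_n * np.tanh(omega_n / (2.0 * T_F)))

tau = np.linspace(-5.0, 5.0, 1001)
assert np.allclose(eta_plus(tau), -eta_plus(-tau))  # dissipation kernel is odd
assert np.allclose(nu_plus(tau), nu_plus(-tau))     # noise kernel is even
```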
Now to evaluate the generating functional for the IDF in terms of
$\left\\{J,J^{\prime}\right\\}$, we follow the approach in CRV . Consider the
effective action pertaining to the IDF in Eq. (A)
$\displaystyle S_{I,\mathrm{eff}}\left[Q,Q^{\prime},J,J^{\prime}\right]=$
$\displaystyle
S_{I}\left[Q\right]-S_{I}\left[Q^{\prime}\right]+S_{I,\mathrm{IF}}^{(-)}\left[Q,Q^{\prime}\right]+\int_{0}^{t}\mathrm{d}\tau\left[J(\tau)Q(\tau)-J^{\prime}(\tau)Q^{\prime}(\tau)\right]$
(61) $\displaystyle=$
$\displaystyle\int_{0}^{t}\mathrm{d}\tau\left[\frac{1}{2}M\dot{Q}^{2}-\frac{1}{2}M\omega_{0}^{2}Q^{2}+JQ\right]-\left[\frac{1}{2}M\dot{Q}^{\prime
2}-\frac{1}{2}M\omega_{0}^{2}Q^{\prime 2}+J^{\prime}Q^{\prime}\right]$
$\displaystyle-\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{\tau_{1}}\mathrm{d}\tau_{2}\left[Q(\tau_{1})-Q^{\prime}(\tau_{1})\right]\eta^{(-)}\left(\tau_{1}-\tau_{2}\right)\left[Q(\tau_{2})+Q^{\prime}(\tau_{2})\right]$
$\displaystyle+i\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{\tau_{1}}\mathrm{d}\tau_{2}\left[Q(\tau_{1})-Q^{\prime}(\tau_{1})\right]\nu^{(-)}\left(\tau_{1}-\tau_{2}\right)\left[Q(\tau_{2})-Q^{\prime}(\tau_{2})\right]$
(62) $\displaystyle=$
$\displaystyle\int_{0}^{t}\mathrm{d}\tau\left[\frac{1}{2}M\dot{Q}_{\Sigma}(\tau)\dot{Q}_{\Delta}(\tau)-\frac{1}{2}M\omega_{0}^{2}Q_{\Sigma}(\tau)Q_{\Delta}(\tau)+Q_{\Sigma}(\tau)J_{\Delta}(\tau)+Q_{\Delta}(\tau)J_{\Sigma}(\tau)\right]$
$\displaystyle-\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{\tau_{1}}\mathrm{d}\tau_{2}Q_{\Delta}(\tau_{1})\eta^{(-)}\left(\tau_{1}-\tau_{2}\right)Q_{\Sigma}(\tau_{2})+i\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{\tau_{1}}\mathrm{d}\tau_{2}Q_{\Delta}(\tau_{1})\nu^{(-)}\left(\tau_{1}-\tau_{2}\right)Q_{\Delta}(\tau_{2}),$
(63)
where we have defined the new coordinates as $Q_{\Sigma}\equiv Q+Q^{\prime}$,
$Q_{\Delta}\equiv Q-Q^{\prime}$,
$J_{\Sigma}\equiv\left(J+J^{\prime}\right)/2$, and
$J_{\Delta}\equiv\left(J-J^{\prime}\right)/2$. The dissipation and noise
kernels $\eta^{(-)}\left(t\right)$ and $\nu^{(-)}\left(t\right)$ are as
defined in Eq. (21) and Eq. (22).
Rewriting the generating functional in terms of the relative and center-of-mass coordinates and following the approach outlined in CRV , Eq. (A) simplifies to
$\displaystyle\mathcal{F}^{(1)}_{I}\left[J,J^{\prime}\right]=$
$\displaystyle\left\llangle e^{2iJ_{\Delta}\cdot\,Q_{h}}\right\rrangle
e^{-2\left(J_{\Delta}\cdot\,{G}\right)\cdot{\nu}^{(-)}\cdot\left(J_{\Delta}\cdot\,{G}\right)^{T}}e^{2iJ_{\Delta}\cdot{G}\cdot
J_{\Sigma}},$ (64)
where
$\displaystyle\left\llangle\dots\right\rrangle=\int\mathrm{d}Q^{i}\mathrm{d}\Pi^{i}W\left(Q^{i},\Pi^{i}\right)(\dots),$
(65)
denotes the average over the initial Wigner distribution $W(Q^{i},\Pi^{i})$.
$Q_{h}$ is the solution to the homogeneous Langevin equation
$\displaystyle{L}\cdot Q_{h}=0,$ (66)
with initial conditions given by $\left\\{Q^{i},\Pi^{i}\right\\}$ and the
differential operator $L(t,t^{\prime})=M(\frac{d^{2}}{dt^{\prime 2}}+\omega_{0}^{2})\delta(t-t^{\prime})+2\eta^{(-)}(t-t^{\prime})$. The
Green’s function corresponding to the Langevin operator ${{L}}$ is defined as
$\displaystyle{L}\cdot{G}={\delta},$ (67)
where $A\cdot B\equiv\int_{0}^{t}\mathrm{d}\tau A(\tau)B(\tau)$.
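Equivalently, $G$ can be obtained by time-stepping the homogeneous equation with delta-source initial data $g(0)=0$, $\dot{g}(0)=1/M$. The sketch below assumes, purely for illustration, a local (Ohmic) limit of the dissipation kernel, $2\eta^{(-)}(t-t^{\prime})\to 2M\gamma\,\delta^{\prime}(t-t^{\prime})$, for which the underdamped Green's function is known in closed form:

```python
import numpy as np

# Retarded Green's function of the Langevin operator L, Eq. (67), in an
# illustrative local (Ohmic) limit: M g'' + 2 M gamma g' + M omega0^2 g = 0
# with delta-source initial data g(0) = 0, g'(0) = 1/M.
M, omega0, gamma = 1.3, 2.0, 0.3
Omega = np.sqrt(omega0**2 - gamma**2)   # underdamped frequency

def step(y, dt):
    # one RK4 step for y = (g, g')
    def f(y):
        g, gd = y
        return np.array([gd, -2.0 * gamma * gd - omega0**2 * g])
    k1 = f(y); k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt = 1e-3
ts = np.arange(0.0, 10.0 + dt, dt)
y = np.array([0.0, 1.0 / M])            # g(0) = 0, g'(0) = 1/M
gs = []
for _ in ts:
    gs.append(y[0]); y = step(y, dt)
g_num = np.array(gs)
g_exact = np.exp(-gamma * ts) * np.sin(Omega * ts) / (M * Omega)
assert np.max(np.abs(g_num - g_exact)) < 1e-6
```

For the full nonlocal kernel the same time-stepping applies, with the damping term replaced by a memory convolution over the stored history of $g$.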
## Appendix B Perturbative effective action derivation
Having obtained the generating functionals for the IDF (Eq. (64)) and the (+) bath (Eq. (A)), we substitute them back into the total generating functional defined in Eq. (26) to obtain the effective action perturbatively, up to second order in the system-bath coupling parameter, term by term as follows.
$\displaystyle F[X,X^{\prime}]=$
$\displaystyle\left.\exp{\left\\{{i}\left(S_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta
J},\left\\{\frac{1}{i}\frac{\delta}{\delta
J_{n}}\right\\}\right]-S_{\mathrm{int}}^{(+)}\left[X^{\prime},\frac{-1}{i}\frac{\delta}{\delta
J^{\prime}},\left\\{\frac{-1}{i}\frac{\delta}{\delta
J^{\prime}_{n}}\right\\}\right]\right)\right\\}}\mathcal{G}^{(1)}[J,J^{\prime},\left\\{J_{n},J^{\prime}_{n}\right\\}]\right|_{{\bf
J}={\bf J^{\prime}}=0}$
$\displaystyle\equiv\exp{\left\\{{i}S_{\mathrm{M,IF}}[X,X^{\prime}]\right\\}}.$
(68)
So far the above expression is exact for the given system-bath interaction, since we have not yet invoked any weak-coupling approximation. We now expand the interaction action in the exponent perturbatively, up to second order in the system-bath coupling strength. Let us consider
$S_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta
J},\left\\{\frac{1}{i}\frac{\delta}{\delta
J_{n}}\right\\}\right]\equiv\varepsilon\tilde{S}_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta
J},\left\\{\frac{1}{i}\frac{\delta}{\delta J_{n}}\right\\}\right]$ and
$S_{\mathrm{int}}^{(+)}\left[X^{\prime},-\frac{1}{i}\frac{\delta}{\delta
J^{\prime}},\left\\{-\frac{1}{i}\frac{\delta}{\delta
J^{\prime}_{n}}\right\\}\right]\equiv\varepsilon^{\prime}\tilde{S}_{\mathrm{int}}^{(+)}\left[X^{\prime},-\frac{1}{i}\frac{\delta}{\delta
J^{\prime}},\left\\{-\frac{1}{i}\frac{\delta}{\delta
J^{\prime}_{n}}\right\\}\right]$, where the dimensionless parameters
$\varepsilon$ and $\varepsilon^{\prime}$ characterize the system-bath coupling
strength, about which we expand the influence action:
$\displaystyle
S_{\mathrm{M,IF}}^{(2)}[X,X^{\prime}]\approx\left.S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}+\varepsilon\left.\frac{\delta}{\delta\varepsilon}S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}+\varepsilon^{\prime}\left.\frac{\delta}{\delta\varepsilon^{\prime}}S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}$
$\displaystyle+\frac{\varepsilon^{2}}{2}\left.\frac{\delta^{2}}{\delta\varepsilon^{2}}S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}+\frac{\varepsilon^{\prime
2}}{2}\left.\frac{\delta^{2}}{\delta\varepsilon^{\prime
2}}S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}+{\varepsilon\varepsilon^{\prime}}\left.\frac{\delta^{2}}{\delta\varepsilon\delta\varepsilon^{\prime}}S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}$
(69)
Let us consider the above expression term by term:
1.
$\left.S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}=0$,
this can be understood as the influence action corresponding to a non-
interacting bath, which trivially vanishes.
2.
$\begin{aligned}
\left.\frac{\delta}{\delta\varepsilon}S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}&=\left.\tilde{S}_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\mathcal{G}^{(1)}[\bf{J},\bf{J^{\prime}}]\right|_{{\bf J}={\bf
J^{\prime}}=0}\end{aligned}$
3.
$\begin{aligned}
\left.\frac{\delta}{\delta\varepsilon^{\prime}}S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}&=-\left.\tilde{S}_{\mathrm{int}}^{(+)}\left[X^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\mathcal{G}^{(1)}[\bf{J},\bf{J^{\prime}}]\right|_{{\bf
J}={\bf J^{\prime}}=0}\end{aligned}$
4.
$\begin{aligned}
\left.\frac{\delta^{2}}{\delta\varepsilon^{2}}S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}={i}\left(\left.\left[\left(\tilde{S}_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\right)^{2}\mathcal{G}^{(1)}[\bf{J},\bf{J^{\prime}}]\right]\right|_{{\bf
J}={\bf
J^{\prime}}=0}-\left[\left.\tilde{S}_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\mathcal{G}^{(1)}[\bf{J},\bf{J^{\prime}}]\right|_{{\bf J}={\bf
J^{\prime}}=0}\right]^{2}\right)\end{aligned}$
5.
$\begin{aligned} \left.\frac{\delta^{2}}{\delta\varepsilon^{\prime
2}}S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}=i&\left(\left.\left[\left(\tilde{S}_{\mathrm{int}}^{(+)}\left[X^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\right)^{2}\mathcal{G}^{(1)}[\bf{J},\bf{J^{\prime}}]\right]\right|_{{\bf
J}={\bf J^{\prime}}=0}\right.\\\
&\left.-\left[\left.\tilde{S}_{\mathrm{int}}^{(+)}\left[X^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\mathcal{G}^{(1)}[\bf{J},\bf{J^{\prime}}]\right|_{{\bf
J}={\bf J^{\prime}}=0}\right]^{2}\right)\end{aligned}$
6.
$\begin{aligned}
\left.\frac{\delta^{2}}{\delta\varepsilon\delta\varepsilon^{\prime}}S_{\mathrm{M,IF}}[X,X^{\prime}]\right|_{\varepsilon=\varepsilon^{\prime}=0}=&-i\left(\left.\left[\tilde{S}_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\tilde{S}_{\mathrm{int}}^{(+)}\left[X^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\mathcal{G}^{(1)}[\bf{J},\bf{J^{\prime}}]\right]\right|_{{\bf
J}={\bf J^{\prime}}=0}\right.\\\
&\left.-\left[\left.\tilde{S}_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\mathcal{G}^{(1)}[\bf{J},\bf{J^{\prime}}]\right|_{{\bf J}={\bf
J^{\prime}}=0}\right]\left[\left.\tilde{S}_{\mathrm{int}}^{(+)}\left[X^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\mathcal{G}^{(1)}[\bf{J},\bf{J^{\prime}}]\right|_{{\bf
J}={\bf J^{\prime}}=0}\right]\right)\end{aligned}$
Putting all the terms together, we can rewrite the influence action up to second order in the system-bath coupling as in Eq. (III.2).
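The subtractions of squared first derivatives in terms 4–6 above are exactly what turn raw averages into connected (cumulant) correlators. As a toy illustration with a single random "interaction" $A$, defining $e^{iS(\varepsilon)}\equiv\langle e^{i\varepsilon A}\rangle$, the second-order expansion of $S$ involves the variance rather than the bare second moment (the discrete distribution below is an arbitrary choice):

```python
import numpy as np

# Toy cumulant check behind the second-order expansion (69):
# S(eps) = -i log< exp(i eps A) >
#        ~ eps <A> + (i eps^2 / 2) (<A^2> - <A>^2) + O(eps^3).
a = np.array([-1.0, 0.5, 2.0])          # values of the random "interaction" A
p = np.array([0.2, 0.5, 0.3])           # probabilities (sum to 1)
mean = p @ a
var = p @ a**2 - mean**2

for eps in [1e-2, 1e-3]:
    S_exact = -1j * np.log(np.sum(p * np.exp(1j * eps * a)))
    S_2nd = eps * mean + 0.5j * eps**2 * var
    # the remainder is third order in eps (third cumulant)
    assert abs(S_exact - S_2nd) < 10 * eps**3
```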
## Appendix C Calculation of averages
### C.1 First order average
The first order average in the influence action can be calculated as follows
$\displaystyle\left\langle
S_{\mathrm{int}}^{(+)}\left[{X},\frac{1}{i}\frac{\delta}{\delta{J}},\left\\{\frac{1}{i}\frac{\delta}{\delta{J_{n}}}\right\\}\right]\right\rangle_{0}$
$\displaystyle=\left.S_{\mathrm{int}}^{(+)}\left[{X},\frac{1}{i}\frac{\delta}{\delta{J}},\left\\{\frac{1}{i}\frac{\delta}{\delta{J_{n}}}\right\\}\right]\mathcal{G}^{(1)}[{J},{J^{\prime}},\left\\{J_{n},J_{n}^{\prime}\right\\}]\right|_{{\bf
J}={\bf J^{\prime}}=0}$
$\displaystyle=\left.\int_{0}^{t}\mathrm{d}\tau\sum_{n}\lambda\omega_{n}\left[\frac{1}{i}\frac{\delta}{\delta{J_{n}(\tau)}}\right]\left[\frac{1}{i}\frac{\delta}{\delta{J(\tau)}}\right]{X}(\tau)\prod_{p}F^{(1)}_{p}[J_{p},J_{p}^{\prime}]F^{(1)}_{I}[J,J^{\prime}]\right|_{{\bf
J}={\bf J^{\prime}}=0}$ (70)
where we have used the generating functional as defined in Eq. (26) and the interaction action from Eq. (10). Note that a first-order functional derivative of the influence action, which is quadratic in $\left\\{J_{n},J_{n}^{\prime}\right\\}$, brings in a factor that is linear in $\left\\{J_{n},J_{n}^{\prime}\right\\}$ and thus vanishes when we set $J_{n}=J_{n}^{\prime}=0$. Thus, we obtain
$\displaystyle\left\langle
S_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta{J}},\left\\{\frac{1}{i}\frac{\delta}{\delta{J_{n}}}\right\\}\right]\right\rangle_{0}=\left\langle
S_{\mathrm{int}}^{(+)}\left[X^{\prime},\frac{1}{i}\frac{\delta}{\delta{J^{\prime}}},\left\\{\frac{1}{i}\frac{\delta}{\delta{J^{\prime}_{n}}}\right\\}\right]\right\rangle_{0}=0.$
(71)
### C.2 Second order averages
We calculate the second order terms in the influence action as follows:
1.
$\displaystyle\left\langle\left\\{S_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\right\\}^{2}\right\rangle_{0}=$
$\displaystyle\left(\int_{0}^{t}\mathrm{d}t_{1}\sum_{n}\lambda\omega_{n}\left[\frac{1}{i}\frac{\delta}{\delta{J(t_{1})}}\right]\left[\frac{1}{i}\frac{\delta}{\delta{J_{n}(t_{1})}}\right]X(t_{1})\right)$
$\displaystyle\left(\int_{0}^{t}\mathrm{d}t_{2}\sum_{m}\lambda
k_{m}\left[\frac{1}{i}\frac{\delta}{\delta{J{}(t_{2})}}\right]\left[\frac{1}{i}\frac{\delta}{\delta{J_{m}(t_{2})}}\right]X(t_{2})\right)\left.F^{(1)}_{I}[J,J^{\prime}]\,\prod_{p}F^{(1)}_{p}[J_{p},J_{p}^{\prime}]\right|_{{\bf
J}={\bf J^{\prime}}=0}$ (72)
$\displaystyle=\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\sum_{m,n}\lambda^{2}\omega_{m}\omega_{n}X(t_{1})X(t_{2})\,\zeta_{I}\left(t_{1},t_{2}\right)\,\sum_{p}\zeta^{(p)}_{m,n}\left(t_{1},t_{2}\right)$
(73)
where we have defined the influence of the (+) bath and the IDF as
$\displaystyle\zeta^{(p)}_{m,n}(t_{1},t_{2})\equiv$
$\displaystyle\left.\frac{\delta}{\delta
J_{n}(t_{1})}\left\\{\frac{\delta}{\delta
J_{m}(t_{2})}\left[F^{(1)}_{p}[J_{p},J^{\prime}_{p}]\right]\right\\}\right|_{J_{p}=J^{\prime}_{p}=0}$
$\displaystyle=$
$\displaystyle-{i}\delta_{mp}\delta_{np}\left[\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\Theta\left(t_{1}-t_{2}\right)+\eta_{p}^{(+)}\left(t_{2}-t_{1}\right)\Theta\left(t_{2}-t_{1}\right)\right]$
$\displaystyle-\delta_{mp}\delta_{np}\left[\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\Theta\left(t_{1}-t_{2}\right)+\nu_{p}^{(+)}\left(t_{2}-t_{1}\right)\Theta\left(t_{2}-t_{1}\right)\right]$
(74) $\displaystyle=$
$\displaystyle\delta_{mp}\delta_{np}\left[-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)-\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right]$
(75) $\displaystyle\zeta_{I}(t_{1},t_{2})\equiv$
$\displaystyle\frac{\delta}{\delta J(t_{1})}\left\\{\frac{\delta}{\delta
J(t_{2})}\left[F^{(1)}_{I}[J,J^{\prime}]\right]\right\\}$ $\displaystyle=$
$\displaystyle-\frac{1}{2}\left[\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{t}\mathrm{d}\tau_{2}\left({G}\left(t_{1}-\tau_{1}\right)\nu^{(-)}\left(\tau_{1}-\tau_{2}\right){G}\left(t_{2}-\tau_{2}\right)+{G}\left(t_{2}-\tau_{1}\right)\nu^{(-)}\left(\tau_{1}-\tau_{2}\right){G}\left(t_{1}-\tau_{2}\right)\right)\right]$
$\displaystyle-\frac{i}{2}\left[{G}\left(t_{1}-t_{2}\right)+{G}\left(t_{2}-t_{1}\right)\right]-\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle.$ (76)
Let us further define an odd function ${g}\left(t\right)$ such that
${G}\left(t_{1}-t_{2}\right)\equiv{g}\left(t_{1}-t_{2}\right)\Theta\left(t_{1}-t_{2}\right)$,
and
$\displaystyle\nu_{GG}\left(t_{1},t_{2}\right)\equiv$
$\displaystyle\int_{0}^{t_{1}}\mathrm{d}\tau_{1}\int_{0}^{t_{2}}\mathrm{d}\tau_{2}\left[{g}\left(t_{1}-\tau_{1}\right)\nu^{(-)}\left(\tau_{1}-\tau_{2}\right){g}\left(t_{2}-\tau_{2}\right)\right],$
(77)
such that we can rewrite $\zeta_{I}\left(t_{1},t_{2}\right)$ as
$\displaystyle\zeta_{I}\left(t_{1},t_{2}\right)=-\nu_{GG}\left(t_{1},t_{2}\right)-\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)-\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle.$ (78)
We have made use of the fact that the function $g\left(s\right)=-g\left(-s\right)$ is odd, and the noise kernel $\nu^{(-)}\left(s\right)=\nu^{(-)}\left(-s\right)$ is even.
This allows us to rewrite Eq.(73) as
$\displaystyle\left\langle\left\\{S_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\right\\}^{2}\right\rangle_{0}$ $\displaystyle=$
$\displaystyle\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\sum_{p}\lambda^{2}k_{p}^{2}X(t_{1})X(t_{2})\left[-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)-\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right]$
$\displaystyle\left[-\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)-\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)\right]$
(79) $\displaystyle=$
$\displaystyle\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\sum_{p}\lambda^{2}k_{p}^{2}X(t_{1})X(t_{2})\left[i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\right.$
$\displaystyle\left.-\frac{1}{2}{g}\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)+\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)\right],$
(80)
where
$\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\equiv\nu_{GG}\left(t_{1},t_{2}\right)+\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle$.
2.
$\displaystyle\left\langle\left\\{S_{\mathrm{int}}^{(+)}\left[X^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\right\\}^{2}\right\rangle_{0}=\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\sum_{m,n}\lambda^{2}\omega_{m}\omega_{n}X^{\prime}(t_{1})X^{\prime}(t_{2})\,\widehat{\zeta}_{I}\left(t_{1},t_{2}\right)\,\sum_{p}\widehat{\zeta}^{(p)}_{m,n}\left(t_{1},t_{2}\right),$
(81)
where
$\displaystyle\widehat{\zeta}^{(p)}_{m,n}(t_{1},t_{2})\equiv$
$\displaystyle\left.\frac{\delta}{\delta
J^{\prime}_{n}(t_{1})}\left\\{\frac{\delta}{\delta
J^{\prime}_{m}(t_{2})}\left[F^{(1)}_{p}[J_{p},J^{\prime}_{p}]\right]\right\\}\right|_{J_{p}=J^{\prime}_{p}=0}$
$\displaystyle=$
$\displaystyle{i}\delta_{mp}\delta_{np}\left[\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\Theta\left(t_{1}-t_{2}\right)+\eta_{p}^{(+)}\left(t_{2}-t_{1}\right)\Theta\left(t_{2}-t_{1}\right)\right]$
$\displaystyle-\delta_{mp}\delta_{np}\left[\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\Theta\left(t_{1}-t_{2}\right)+\nu_{p}^{(+)}\left(t_{2}-t_{1}\right)\Theta\left(t_{2}-t_{1}\right)\right]$
(82) $\displaystyle=$
$\displaystyle\delta_{mp}\delta_{np}\left[i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)-\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right]$
(83) $\displaystyle\widehat{\zeta}_{I}(t_{1},t_{2})\equiv$
$\displaystyle\frac{\delta}{\delta
J^{\prime}(t_{1})}\left\\{\frac{\delta}{\delta
J^{\prime}(t_{2})}\left[F^{(1)}_{I}[J,J^{\prime}]\right]\right\\}$
$\displaystyle=$
$\displaystyle-\frac{1}{2}\left[\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{t}\mathrm{d}\tau_{2}\left({G}\left(t_{1}-\tau_{1}\right)\nu^{(-)}\left(\tau_{1}-\tau_{2}\right){G}\left(t_{2}-\tau_{2}\right)+{G}\left(t_{2}-\tau_{1}\right)\nu^{(-)}\left(\tau_{1}-\tau_{2}\right){G}\left(t_{1}-\tau_{2}\right)\right)\right]$
$\displaystyle+\frac{i}{2}\left[{G}\left(t_{1}-t_{2}\right)+{G}\left(t_{2}-t_{1}\right)\right]-\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle$ (84)
$\displaystyle=$
$\displaystyle-\nu^{\prime}_{GG}\left(t_{1},t_{2}\right)+\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)$
(85)
Substituting back into Eq.(81),
$\displaystyle\left\langle\left\\{S_{\mathrm{int}}^{(+)}\left[X^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]\right\\}^{2}\right\rangle_{0}$ $\displaystyle=$
$\displaystyle\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\sum_{p}\lambda^{2}k_{p}^{2}X^{\prime}(t_{1})X^{\prime}(t_{2})\left[i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)-\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right]$
$\displaystyle\left[-\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)+\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)\right]$
(86) $\displaystyle=$
$\displaystyle\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\sum_{p}\lambda^{2}k_{p}^{2}X^{\prime}(t_{1})X^{\prime}(t_{2})\left[-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\right.$
$\displaystyle\left.-\frac{1}{2}{g}\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)-\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)\right].$
(87)
3.
$\displaystyle\left\langle
S_{\mathrm{int}}^{(+)}\left[X^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]S_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\right\rangle_{0}=\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\sum_{m,n}\lambda^{2}\omega_{m}\omega_{n}X(t_{1})X^{\prime}(t_{2})\,\widetilde{\zeta}_{I}\left(t_{1},t_{2}\right)\,\sum_{p}\widetilde{\zeta}^{(p)}_{m,n}\left(t_{1},t_{2}\right)$
(88)
where
$\displaystyle\widetilde{\zeta}^{(p)}_{m,n}(t_{1},t_{2})\equiv$
$\displaystyle\left.\frac{\delta}{\delta
J_{n}(t_{1})}\left\\{\frac{\delta}{\delta
J^{\prime}_{m}(t_{2})}\left[F^{(1)}_{p}[J_{p},J^{\prime}_{p}]\right]\right\\}\right|_{J_{p}=J^{\prime}_{p}=0}$
$\displaystyle=$
$\displaystyle-i\delta_{mp}\delta_{np}\left[\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\Theta\left(t_{1}-t_{2}\right)-\eta_{p}^{(+)}\left(t_{2}-t_{1}\right)\Theta\left(t_{2}-t_{1}\right)\right]$
$\displaystyle+\delta_{mp}\delta_{np}\left[\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\Theta\left(t_{1}-t_{2}\right)+\nu_{p}^{(+)}\left(t_{2}-t_{1}\right)\Theta\left(t_{2}-t_{1}\right)\right]$
(89) $\displaystyle=$
$\displaystyle\delta_{mp}\delta_{np}\left[-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right],$
(90)
and
$\displaystyle\widetilde{\zeta}_{I}(t_{1},t_{2})\equiv$
$\displaystyle\frac{\delta}{\delta J(t_{1})}\left\\{\frac{\delta}{\delta
J^{\prime}(t_{2})}\left[F^{(1)}_{I}[J,J^{\prime}]\right]\right\\}$
$\displaystyle=$
$\displaystyle\frac{1}{2}\left[\int_{0}^{t}\mathrm{d}\tau_{1}\int_{0}^{t}\mathrm{d}\tau_{2}\left({G}\left(t_{1}-\tau_{1}\right)\nu^{(-)}\left(\tau_{1}-\tau_{2}\right){G}\left(t_{2}-\tau_{2}\right)+{G}\left(t_{2}-\tau_{1}\right)\nu^{(-)}\left(\tau_{1}-\tau_{2}\right){G}\left(t_{1}-\tau_{2}\right)\right)\right]$
$\displaystyle-\frac{i}{2}\left[{G}\left(t_{1}-t_{2}\right)-{G}\left(t_{2}-t_{1}\right)\right]+\left\llangle
Q_{h}\left(t_{1}\right)Q_{h}\left(t_{2}\right)\right\rrangle$ (91)
$\displaystyle=$
$\displaystyle\nu^{\prime}_{GG}\left(t_{1},t_{2}\right)-\frac{i}{2}g\left(t_{1}-t_{2}\right).$
(92)
Substituting the above into Eq.(88),
$\displaystyle\left\langle
S_{\mathrm{int}}^{(+)}\left[X^{\prime},-\frac{1}{i}\frac{\delta}{\delta{\bf
J^{\prime}}}\right]S_{\mathrm{int}}^{(+)}\left[X,\frac{1}{i}\frac{\delta}{\delta{\bf
J}}\right]\right\rangle_{0}$ $\displaystyle=$
$\displaystyle\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\sum_{p}\lambda^{2}k_{p}^{2}X(t_{1})X^{\prime}(t_{2})\left[-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right]\left[\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)-\frac{i}{2}g\left(t_{1}-t_{2}\right)\right]$
(93) $\displaystyle=$
$\displaystyle\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\sum_{p}\lambda^{2}k_{p}^{2}X(t_{1})X^{\prime}(t_{2})\left[-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\right.$
$\displaystyle\left.-\frac{1}{2}g\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)-\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right].$
(94)
Putting together terms (1), (2), and (3) above in Eq.(III.2), we obtain the second-order influence action as
$\displaystyle S_{\mathrm{M,IF}}^{(2)}\left[X,X^{\prime}\right]$
$\displaystyle=$
$\displaystyle\frac{i}{2}\sum_{p}\lambda^{2}k_{p}^{2}\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\left[X(t_{1})X(t_{2})\left\\{i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\right.\right.$
$\displaystyle\left.\left.-\frac{1}{2}{g}\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)+\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)\right\\}\right.$
$\displaystyle\left.+X^{\prime}(t_{1})X^{\prime}(t_{2})\left\\{-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\right.\right.$
$\displaystyle\left.\left.-\frac{1}{2}{g}\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)-\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)\right\\}\right.$
$\displaystyle\left.-2X(t_{1})X^{\prime}(t_{2})\left\\{-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\right.\right.$
$\displaystyle\left.\left.-\frac{1}{2}g\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)-\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right\\}\right]$
(95) $\displaystyle=$
$\displaystyle\frac{i}{8}\sum_{p}\lambda^{2}k_{p}^{2}\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\left[\left\\{X_{\Sigma}(t_{1})+X_{\Delta}(t_{1})\right\\}\left\\{X_{\Sigma}(t_{2})+X_{\Delta}(t_{2})\right\\}\left\\{i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)\right.\right.$
$\displaystyle\left.\left.+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)-\frac{1}{2}{g}\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)+\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)\right\\}\right.$
$\displaystyle\left.+\left\\{X_{\Sigma}(t_{1})-X_{\Delta}(t_{1})\right\\}\left\\{X_{\Sigma}(t_{2})-X_{\Delta}(t_{2})\right\\}\left\\{-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)\right.\right.$
$\displaystyle\left.\left.+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)-\frac{1}{2}{g}\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)-\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\mathrm{sign}\left(t_{1}-t_{2}\right)\right\\}\right.$
$\displaystyle\left.-2\left\\{X_{\Sigma}(t_{1})+X_{\Delta}(t_{1})\right\\}\left\\{X_{\Sigma}(t_{2})-X_{\Delta}(t_{2})\right\\}\left\\{-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\right.\right.$
$\displaystyle\left.\left.-\frac{1}{2}g\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)-\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right\\}\right]$
(96) $\displaystyle=$
$\displaystyle\frac{i}{4}\sum_{p}\lambda^{2}k_{p}^{2}\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}$
$\displaystyle\left[X_{\Sigma}(t_{1})X_{\Sigma}(t_{2})\left\\{-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)-\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right\\}\right.$
$\displaystyle\left.+X_{\Delta}(t_{1})X_{\Delta}(t_{2})\left\\{2\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)-g\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)-i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\right.\right.$
$\displaystyle\left.\left.-\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right\\}\right.$
$\displaystyle\left.+X_{\Sigma}(t_{1})X_{\Delta}(t_{2})\left\\{i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\left(\mathrm{sign}\left(t_{1}-t_{2}\right)-1\right)+\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\left(\mathrm{sign}\left(t_{1}-t_{2}\right)-1\right)\right.\right.$
$\displaystyle\left.\left.+\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)-\frac{1}{2}g\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\right\\}\right.$
$\displaystyle\left.+X_{\Delta}(t_{1})X_{\Sigma}(t_{2})\left\\{i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\left(\mathrm{sign}\left(t_{1}-t_{2}\right)+1\right)+\frac{i}{2}{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\left(\mathrm{sign}\left(t_{1}-t_{2}\right)+1\right)\right.\right.$
$\displaystyle\left.\left.-\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)-\frac{1}{2}g\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\right\\}\right]$
(97) $\displaystyle=$
$\displaystyle\frac{i}{4}\sum_{p}\lambda^{2}k_{p}^{2}\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}\left[X_{\Delta}(t_{1})X_{\Delta}(t_{2})\left\\{2\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)-g\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\right\\}\right.$
$\displaystyle\left.+X_{\Delta}(t_{1})X_{\Sigma}(t_{2})\left\\{2i\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)+i{g}\left(t_{1}-t_{2}\right)\nu_{p}^{(+)}\left(t_{1}-t_{2}\right)\right\\}\Theta\left(t_{1}-t_{2}\right)\right],$
(98)
where we have defined $X_{\Sigma}\left(t\right)\equiv
X\left(t\right)+X^{\prime}\left(t\right)$, and $X_{\Delta}\left(t\right)\equiv
X\left(t\right)-X^{\prime}\left(t\right)$, and for the last line we have used
the property
$\int_{0}^{t}dt_{1}\int_{0}^{t}dt_{2}f\left(t_{1}\right)f\left(t_{2}\right)F\left(t_{1}-t_{2}\right)H\left(t_{1},t_{2}\right)=0$
provided that $F\left(t_{2}-t_{1}\right)=-F\left(t_{1}-t_{2}\right)$ and
$H\left(t_{2},t_{1}\right)=H\left(t_{1},t_{2}\right)$. Using the definitions of the dissipation and noise kernels for the (+) bath as in Eq.(31) and Eq.(32),
$\displaystyle
S_{\mathrm{M,IF}}^{(2)}=\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t}\mathrm{d}t_{2}$
$\displaystyle\left[\frac{i}{4}X_{\Delta}(t_{1})X_{\Delta}(t_{2})\left\\{2\nu^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)-g\left(t_{1}-t_{2}\right)\eta_{p}^{(+)}\left(t_{1}-t_{2}\right)\right\\}\right.$
$\displaystyle\left.-\frac{1}{4}X_{\Delta}(t_{1})X_{\Sigma}(t_{2})\left\\{{g}\left(t_{1}-t_{2}\right)\nu^{(+)}\left(t_{1}-t_{2}\right)+2\eta^{(+)}\left(t_{1}-t_{2}\right)\nu_{GG}^{\prime}\left(t_{1},t_{2}\right)\right\\}\Theta\left(t_{1}-t_{2}\right)\right],$
(99)
which reduces to Eq. (27).
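The symmetry property used above to discard terms can be spot-checked numerically. The following sketch (my own illustration, with arbitrary choices of $f$, $F$ and $H$; not from the paper) confirms that the double integral vanishes whenever $F$ is antisymmetric and $H$ is symmetric.

```python
import numpy as np

# Numerical spot-check (my own illustration, with arbitrary f, F, H) of the
# symmetry property: if F(t2 - t1) = -F(t1 - t2) and H(t2, t1) = H(t1, t2),
#   I = int_0^t int_0^t f(t1) f(t2) F(t1 - t2) H(t1, t2) dt1 dt2 = 0,
# since relabelling t1 <-> t2 maps I to -I.
t, n = 3.0, 801
s = np.linspace(0.0, t, n)
T1, T2 = np.meshgrid(s, s, indexing="ij")
f = np.cos(2.0 * s) + 0.3 * s                 # arbitrary real f
F = np.sin(T1 - T2)                           # antisymmetric kernel
H = np.cos(T1 - T2) + T1 * T2                 # symmetric kernel
integrand = np.outer(f, f) * F * H
I = integrand.sum() * (t / (n - 1)) ** 2      # quadrature choice is irrelevant here
print(abs(I))                                 # 0 up to floating-point roundoff
```

Any quadrature rule with weights symmetric under the swap of the two time axes cancels exactly, mirroring the continuum argument.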
## Appendix D Fourier transform of the dissipation and noise kernels
The Fourier transform of the dissipation and noise kernels corresponding to
the baths associated with the scalar field $\\{\eta^{(\pm)},\nu^{(\pm)}\\}$
are
$\displaystyle\bar{\eta}^{(\pm)}(\omega)=$ $\displaystyle i{\rm
Im}\left[\bar{\eta}^{(\pm)}(\omega)\right]=-i\frac{\pi}{2}\left[J^{(\pm)}(\omega)-J^{(\pm)}(-\omega)\right],$
(100) $\displaystyle\bar{\nu}^{(\pm)}(\omega)=$
$\displaystyle\pi\coth\left(\frac{\omega}{2k_{B}T_{F}}\right)\left[J^{(\pm)}(\omega)-J^{(\pm)}(-\omega)\right].$
(101)
From these expressions, we immediately obtain a fluctuation-dissipation
relation for the $\left(\pm\right)$ baths as in Eq.(44).
We now consider the kernel $\nu_{GG}(t_{1},t_{2})$ in the late-time limit
(Eq.(43)) as follows:
$\displaystyle\nu_{GG}(t_{1},t_{2})=\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}\bar{\nu}^{(-)}\left(\omega\right)\int_{0}^{t_{1}}\mathrm{d}\tau_{1}~{}G\left(t_{1}-\tau_{1}\right)e^{-i\omega\tau_{1}}\int_{0}^{t_{2}}\mathrm{d}\tau_{2}~{}G\left(t_{2}-\tau_{2}\right)e^{i\omega\tau_{2}},$
(102)
where we have written $\nu^{(-)}(t)$ in terms of its Fourier transform. The
time integrals on a finite interval are convolutions that can be re-cast in
terms of the Laplace transform of $G$ (defined as
$\tilde{F}(z)=\int_{0}^{+\infty}{dt}e^{-zt}F(t)$) via its inverse as:
$\displaystyle\int_{0}^{t}\mathrm{d}\tau~{}G\left(t-\tau\right)e^{\pm
i\omega\tau}=\int_{l-i\infty}^{l+i\infty}\frac{dz}{2\pi
i}~{}e^{zt}\frac{\tilde{G}(z)}{(z\mp i\omega)},$ (103)
where $l$ has to be larger than the real part of every pole of
$\tilde{G}$ to ensure causality. Specifically, since $\tilde{G}$ has poles
$\\{z_{k}\\}$ with negative real parts (${\rm Re}(z_{k})<0$), any $l>0$
suffices. Using Cauchy’s theorem, the causal property of the retarded
propagator allows us to express the convolution as a sum over the residues of
the poles of $\tilde{G}(z)/(z\mp i\omega)$:
$\displaystyle\int_{0}^{t}\mathrm{d}\tau~{}G\left(t-\tau\right)e^{\pm
i\omega\tau}=e^{\pm i\omega
t}\bar{G}(\mp\omega)+\sum_{k}e^{z_{k}t}\frac{\mathcal{R}_{k}}{(z_{k}\mp
i\omega)},$ (104)
with $\mathcal{R}_{k}\equiv{\rm Res}[\tilde{G}(z),z_{k}]$ and where we have
used the relation between the Fourier and Laplace transforms of causal
functions $\tilde{G}(\pm i\omega)=\bar{G}(\mp\omega)$.
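The residue formula above can be verified numerically. The sketch below (my own illustration, assuming a damped-oscillator form of the propagator; not taken from the paper) compares the finite-time convolution against Eq. (104) for $\tilde{G}(z)=1/(mz^{2}+2m\gamma z+m\omega_{R}^{2})$, whose poles and residues are known in closed form.

```python
import numpy as np

# Numerical check (my own illustration, assuming a damped-oscillator
# propagator; not taken from the paper) of the residue formula (104).
# For Gtilde(z) = 1/(m z^2 + 2 m gamma z + m wR^2) the poles are
# z_pm = -gamma +- i*Omega, Omega = sqrt(wR^2 - gamma^2), with residues
# R_pm = +-1/(2 i m Omega), and G(t) = exp(-gamma t) sin(Omega t)/(m Omega).
m, gamma, wR, w, t = 1.0, 0.4, 2.0, 1.3, 5.0
Omega = np.sqrt(wR**2 - gamma**2)
zp, zm = -gamma + 1j * Omega, -gamma - 1j * Omega
Rp, Rm = 1.0 / (2j * m * Omega), -1.0 / (2j * m * Omega)

def trap(y, x):                  # trapezoidal rule (works for complex y)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

tau = np.linspace(0.0, t, 100001)
G = np.exp(-gamma * (t - tau)) * np.sin(Omega * (t - tau)) / (m * Omega)
lhs = trap(G * np.exp(1j * w * tau), tau)

Gtilde = lambda z: 1.0 / (m * z**2 + 2.0 * m * gamma * z + m * wR**2)
rhs = (np.exp(1j * w * t) * Gtilde(1j * w)          # pole at z = i*w
       + np.exp(zp * t) * Rp / (zp - 1j * w)        # transient pole terms,
       + np.exp(zm * t) * Rm / (zm - 1j * w))       # decaying like e^{-gamma t}
print(abs(lhs - rhs))            # small: quadrature error only
```

As $t$ grows the pole terms die off like $e^{-\gamma t}$, leaving only the first term, which is the content of Eq. (105).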
Since ${\rm Re}(z_{k})<0$, in the late-time limit
$t\rightarrow+\infty$:
$\displaystyle\int_{0}^{t}\mathrm{d}\tau~{}G\left(t-\tau\right)e^{\pm
i\omega\tau}\xrightarrow{t\rightarrow+\infty}e^{\pm i\omega
t}\bar{G}(\mp\omega),$ (105)
which immediately allows us to obtain the long-time limit of Eq. (102):
$\displaystyle\nu_{GG}(t_{1},t_{2})\rightarrow\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}e^{-i\omega(t_{1}-t_{2})}\left|\bar{G}(\omega)\right|^{2}\bar{\nu}^{(-)}\left(\omega\right),$
(106)
where we have used that
$\bar{G}\left(-\omega\right)=\bar{G}^{*}\left(\omega\right)$, since $G$ is
real. Note that the last expression is a function of
$t_{1}-t_{2}$ only, which is generally not the case during the course of the
evolution. In the late-time limit, we can define the Fourier transform of the
kernel as
$\bar{\nu}_{GG}(\omega)\equiv|\bar{G}\left(\omega\right)|^{2}\bar{\nu}^{(-)}\left(\omega\right)$.
We note that $\bar{\nu}_{GG}(\omega)$ is real and even, as expected.
Furthermore, considering Eq.(67), we have that the Fourier transform of the
retarded propagator reads:
$\displaystyle\bar{G}(\omega)=\frac{1}{m\omega_{0}^{2}-m\omega^{2}+2\bar{\eta}^{(-)}(\omega)}.$
(107)
Noticing that $\bar{\eta}^{(-)}(\omega)$ has both real and imaginary parts, we
absorb the real part into a renormalization of the center-of-mass frequency,
while the imaginary part accounts for the dissipation. Introducing the
renormalized frequency
$\omega_{R}^{2}\equiv\omega_{0}^{2}+2{\rm Re}[\bar{\eta}^{(-)}(\omega)]/m$, we
can re-cast the Fourier transform of the retarded propagator as:
$\displaystyle\bar{G}(\omega)=\frac{1}{m\omega_{R}^{2}-m\omega^{2}+2i{\rm
Im}[\bar{\eta}^{(-)}(\omega)]},$ (108)
from which we obtain that ${\rm
Im}[\bar{G}(\omega)]=-2|\bar{G}(\omega)|^{2}{\rm
Im}[\bar{\eta}^{(-)}(\omega)]$. Combining this with the fluctuation-
dissipation relation for the bath $(-)$ of Eq.(44), we can directly arrive at
the fluctuation-dissipation relation between
$\bar{\nu}_{GG}\left(\omega\right)$ and $\bar{G}$ as in Eq. (45).
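The algebraic step from Eq. (108) to ${\rm Im}[\bar{G}]=-2|\bar{G}|^{2}{\rm Im}[\bar{\eta}^{(-)}]$ can be checked directly. The snippet below (my own illustration, using an arbitrary odd model for ${\rm Im}[\bar{\eta}^{(-)}]$; not from the paper) confirms the identity across a range of frequencies.

```python
import numpy as np

# Quick check (my own illustration, with an arbitrary real, odd model for
# Im[eta^(-)]; not from the paper) of the relation below Eq. (108):
# for Gbar(w) = 1/(m wR^2 - m w^2 + 2i Im[eta^(-)](w)), whose denominator is
# real apart from the dissipative term, one has
#   Im[Gbar(w)] = -2 |Gbar(w)|^2 Im[eta^(-)](w).
m, wR = 1.0, 2.0
w = np.linspace(-5.0, 5.0, 1001)
im_eta = 0.3 * w**3                       # any real, odd dissipation model
Gbar = 1.0 / (m * wR**2 - m * w**2 + 2j * im_eta)
err = np.max(np.abs(Gbar.imag + 2.0 * np.abs(Gbar)**2 * im_eta))
print(err)                                # ~ 0 (exact algebraic identity)
```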
# The limit of the harmonic flow on flat complex vector bundle
Xi Zhang
School of Mathematical Sciences
University of Science and Technology of China
Hefei, 230026, P.R. China
<EMAIL_ADDRESS>
###### Abstract.
In this paper we study the limiting behaviour of the harmonic flow on a flat
complex vector bundle, and prove that the limit must be isomorphic to the
graded flat complex vector bundle associated to the Jordan-Hölder filtration.
###### Key words and phrases:
Projectively flat bundle, Higgs bundle, non-Kähler, the Hermitian-Yang-Mills
flow, $\epsilon$-regularity theorem.
###### Mathematics Subject Classification:
53C07, 58E15
The author is partially supported by NSFC Grants No. 11625106, 11801535 and
11721101. The research was partially supported by the project “Analysis and
Geometry on Bundle” of the Ministry of Science and Technology of the People’s
Republic of China, No. SQ2020YFA070080.
## 1\. Introduction
Let $(E,D)$ be a flat complex vector bundle of rank $r$ over a compact
Riemannian manifold $(M,g)$. We say $(E,D)$ is simple if it has no proper
$D$-invariant sub-bundle, and $(E,D)$ is semi-simple if it is a direct sum of
simple $D$-invariant sub-bundles. In the general case, there is a filtration
of sub-bundles
(1.1) $0=E_{0}\subset E_{1}\subset\cdots\subset E_{i}\subset\cdots\subset E_{l}=E,$
such that every sub-bundle $E_{i}$ is $D$-invariant and every quotient bundle
$(Q_{i},D_{i}):=(E_{i}/E_{i-1},D_{i})$, with $D_{i}$ the induced connection, is
flat and simple; this is called the Jordan-Hölder filtration of the flat
complex vector bundle $(E,D)$. It is well known that the above filtration may
not be unique, but the following graded
flat complex vector bundle
(1.2) $Gr^{JH}(E,D)=\oplus_{i=1}^{l}(Q_{i},D_{i})$
is unique in the sense of isomorphism. By the Riemann-Hilbert correspondence,
we know that there is a one-to-one correspondence between the moduli space of
fundamental group representations and the moduli space of flat vector bundles.
If the flat bundle $(E,D)$ corresponds to a representation
$\tau:\pi_{1}(M)\rightarrow GL(r,\mathbb{C})$, then the graded object
$Gr^{JH}(E,D)$ corresponds to the semi-simplification of $\tau$.
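A minimal concrete example (my own toy illustration, not from the paper): over $M=S^{1}$ a flat rank-2 bundle is determined by the monodromy matrix $A=\tau(\text{generator of }\pi_{1}(S^{1}))$, and taking $A$ to be a Jordan block exhibits a non-semi-simple flat bundle together with its graded object.

```python
import numpy as np

# Toy illustration (my own, not from the paper): over M = S^1, a flat rank-2
# bundle is determined by the monodromy A = tau(generator of pi_1(S^1)).
# A Jordan block gives a non-semi-simple example: the only A-invariant line
# is span(e1), so the Jordan-Hoelder filtration is
#   0 = E_0  subset  E_1 = span(e1)  subset  E_2 = E,
# and the graded object corresponds to the semi-simplification diag(2, 2).
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
e1 = np.array([1.0, 0.0])
print(np.allclose(A @ e1, 2.0 * e1))   # True: span(e1) is A-invariant
A_graded = np.diag(np.diag(A))         # monodromy of Gr^JH(E, D)
print(A_graded)
```

Here $(E,D)$ itself is not a direct sum of $A$-invariant lines, but $Gr^{JH}(E,D)$ is, matching the general discussion above.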
Given a Hermitian metric $H$ on $E$, there is a unique decomposition
(1.3) $D=D_{H}+\psi_{H},$
where $D_{H}$ is a unitary connection and
$\psi_{H}\in\Omega^{1}(\mbox{End}(E))$ is self-adjoint with respect to $H$. A
Hermitian metric $H$ is called harmonic on $(E,D)$ if it is a critical point
of the energy functional $\int_{M}|\psi_{H}|^{2}dV_{g}$, i.e. it satisfies the
Euler-Lagrange equation
(1.4) $D_{H}^{\ast}\psi_{H}=0.$
Under the assumption that the flat complex vector bundle $(E,D)$ is semi-
simple, Corlette ([3]) and Donaldson ([7]) proved the existence of a harmonic
metric. Furthermore, when $(M,g)$ is a Kähler manifold, the existence of a
harmonic metric $H$ implies that there exists a poly-stable Higgs structure
$(D_{H}^{0,1},\psi_{H}^{1,0})$ on $E$. On the other hand, by the work of
Hitchin ([9]) and Simpson ([14]) on the Donaldson-Uhlenbeck-Yau theorem for Higgs
bundles, one has the non-abelian Hodge correspondence, i.e. there is an
equivalence of categories between the category of poly-stable Higgs bundles
with vanishing Chern numbers and the category of semi-simple flat bundles.
In order to obtain harmonic metrics, Corlette ([3]) introduced the following
heat flow
(1.5) $\frac{\partial\sigma(t)}{\partial
t}\cdot(\sigma(t))^{-1}=D_{t,K}^{\ast}\psi_{t,K},$
where $K$ is a fixed Hermitian metric on $(E,D)$,
$\sigma(t)\in\Gamma(\mbox{Aut}E)$ and
(1.6) $D_{t}=\sigma(t)\\{D\\}=\sigma(t)\cdot
D\cdot\sigma^{-1}(t)=D_{t,K}+\psi_{t,K}.$
The heat flow (1.5) is equivalent to the following heat flow which involves
flat connections,
(1.7) $\frac{\partial D_{t}}{\partial
t}=-D_{t}\\{D_{t,K}^{\ast}\psi_{t,K}\\}.$
We call the above heat flow (1.7) the harmonic flow on the flat bundle
$(E,D)$. Corlette proved the existence of a long-time solution $D_{t}$ ($0\leq
t<\infty$) for the heat flow (1.7). By choosing a subsequence and taking
suitable unitary gauge transformations, $D_{t_{i}}$ converges weakly to a flat
connection $D_{\infty}$ in $L_{1}^{p}$. Furthermore, if $(E,D)$ is simple,
Corlette showed that the limit must lie in the complex gauge orbit of $D$,
i.e. there exists $\eta_{\infty}\in\Gamma(\mbox{Aut}E)$ such that
$D_{\infty}=\eta_{\infty}\cdot D\cdot\eta^{-1}_{\infty}$.
In this paper, we consider the limit of the harmonic flow (1.7) on the flat
bundle $(E,D)$ which is not necessarily simple. Firstly, let’s recall the
limiting behaviour of the Yang-Mills flow on holomorphic vector bundles. For
the Riemann surface case, Atiyah and Bott ([1]) pointed out that the limiting
holomorphic bundle should be isomorphic to the graded bundle associated to the
Harder-Narasimhan-Seshadri filtration, and this conjecture has been proved by
Daskalopoulos ([4]). In [2], Bando and Siu proposed the interesting question
of whether the above Atiyah-Bott conjecture still holds for a reflexive sheaf
$\mathcal{E}$ over a higher-dimensional Kähler manifold. When the sheaf
$\mathcal{E}$ is locally free, this question was answered in the affirmative
by Daskalopoulos and Wentworth ([5]) in the Kähler surface case, and by Jacob
([11]) and Sibley ([13]) in the higher-dimensional case. The general reflexive
sheaf case was confirmed by Li, Zhang and the author ([12]). Inspired by this,
it is natural to ask: must the limit of the harmonic flow (1.7) be
isomorphic to the graded flat complex vector bundle associated to the Jordan-
Hölder filtration? When the base manifold $(M,g)$ is Kähler, this was also
conjectured by Deng ([6]) in his doctoral dissertation. In this paper, we
solve this problem, i.e. we prove the following theorem.
###### Theorem 1.1.
Let $(E,D)$ be a flat complex vector bundle over a compact Riemannian manifold
$(M,g)$, and $D_{t}$ be the long time solution of the harmonic flow (1.7) with
initial data $D$. Then the limiting flat bundle $(E,D_{\infty})$ must be
isomorphic to the graded flat complex vector bundle associated to the Jordan-
Hölder filtration of $(E,D)$, i.e. we have:
(1.8) $(E,D_{\infty})\cong Gr^{JH}(E,D).$
This paper is organized as follows. In Section 2, we introduce some basic
concepts and results about the harmonic flow on flat complex vector bundles.
In Section 3, we give a proof of Theorem 1.1.
## 2\. Preliminaries
Let $(M,g)$ be a compact Riemannian manifold of dimension $n$, $E$ be a
complex vector bundle over $M$ with rank $r$. Given any connection $D$ and
Hermitian metric $H$ on $E$, there is a unique decomposition
(2.1) $D=D_{H}+\psi_{H},$
where $D_{H}$ is an $H$-unitary connection,
$\psi_{H}\in\Omega^{1}(\mbox{End}(E))$ is $H$-self-adjoint, i.e.
$\psi_{H}^{\ast H}=\psi_{H}$, and
(2.2) $H(\psi_{H}X,Y)=\frac{1}{2}\\{H(DX,Y)+H(X,DY)-dH(X,Y)\\}$
for any $X,Y\in\Gamma(E)$. Suppose $K$ is another Hermitian metric on $E$,
then we have
(2.3) $\begin{split}\psi_{H}&=\frac{1}{2}h^{-1}\circ\psi_{K}\circ
h+\frac{1}{2}\psi_{K}+\frac{1}{2}(D_{K}-h^{-1}\circ D_{K}\circ h)\\\
&=h^{-1}\circ\psi_{K}\circ h+\frac{1}{2}(D-h^{-1}\circ D\circ h)\\\
\end{split}$
and
(2.4) $\begin{split}D_{H}&=\frac{1}{2}(\psi_{K}-h^{-1}\circ\psi_{K}\circ
h+D_{K}+h^{-1}\circ D_{K}\circ h)\\\ &=\psi_{H}-h^{-1}\circ\psi_{K}\circ
h+h^{-1}\circ D_{K}\circ h\\\ &=h^{-1}\circ D_{K}\circ
h+\frac{1}{2}(D-h^{-1}\circ D\circ h).\\\ \end{split}$
where $h=K^{-1}H$.
If $D$ is a flat connection, then
(2.5)
$0=F_{D}=D_{H}^{2}+\psi_{H}\wedge\psi_{H}+D_{H}\circ\psi_{H}+\psi_{H}\circ
D_{H}.$
Considering the self-adjoint and anti-self-adjoint parts of the above
identity, we obtain
(2.6) $D_{H}(\psi_{H})=0,$
and
(2.7) $D_{H}^{2}+\psi_{H}\wedge\psi_{H}=0.$
Let $H(t)$ be a family of Hermitian metrics on $E$. By direct computation, one
can find that
(2.8) $\frac{\partial\psi_{H(t)}}{\partial
t}=-\frac{1}{2}D_{H}(H^{-1}\frac{\partial H}{\partial
t})+\frac{1}{2}\psi_{H}\circ H^{-1}\frac{\partial H}{\partial
t}-\frac{1}{2}H^{-1}\frac{\partial H}{\partial t}\circ\psi_{H}.$
Choosing local coordinates $\\{x^{i}\\}_{i=1}^{n}$ on $M$, we write
$g=g_{ij}dx^{i}\otimes dx^{j}$, $\psi_{H}=(\psi_{H})_{k}dx^{k}$ and
(2.9) $|\psi_{H}|_{H}^{2}=g^{ij}\mbox{\rm
tr\,}\\{(\psi_{H})_{i}\circ(\psi_{H})_{j}^{\ast H}\\},$
where $(\psi_{H})_{i}\in\Gamma({\rm End}(E))$ and $(g^{ij})$ is the inverse
matrix of $(g_{ij})$. After a straightforward calculation, one can check that
(2.10) $\begin{split}\frac{\partial}{\partial
t}|\psi_{H(t)}|_{H}^{2}&=2Re\langle\frac{\partial\psi_{H(t)}}{\partial
t}-\frac{1}{2}\psi_{H}\circ H^{-1}\frac{\partial H}{\partial
t}+\frac{1}{2}H^{-1}\frac{\partial H}{\partial
t}\circ\psi_{H},\psi_{H}\rangle_{H}\\\ &=-Re\langle D_{H}(h^{-1}\frac{\partial
h}{\partial t}),\psi_{H}\rangle_{H}.\\\ \end{split}$
Let $K$ be a fixed metric on $E$. Denote by $\mathcal{G}$ the group of smooth
automorphisms of $E$, and by $\mathcal{U}_{K}$ the subgroup of automorphisms
preserving the metric $K$.
Every $\sigma\in\mathcal{G}$ acts on the connection $D$ by
(2.11) $\sigma(D):=\sigma\circ D\circ\sigma^{-1}.$
For $\sigma\in\mathcal{G}$, set $H=K\sigma^{\ast K}\sigma$, i.e.
$H(X,Y)=K(\sigma X,\sigma Y)$ for any $X,Y\in\Gamma(E)$. One can see that
(2.12)
$\begin{split}K(\psi_{\sigma(D),K}(X),Y)&=\frac{1}{2}\\{K(\sigma(D)X,Y)+K(X,\sigma(D)Y)-dK(X,Y)\\}\\\
&=\frac{1}{2}\\{H(D\circ\sigma^{-1}X,\sigma^{-1}Y)+H(\sigma^{-1}X,D\circ\sigma^{-1}Y)-dH(\sigma^{-1}X,\sigma^{-1}Y)\\}\\\
&=H(\psi_{D,H}\circ\sigma^{-1}(X),\sigma^{-1}Y)\\\
&=K(\sigma\circ\psi_{D,H}\circ\sigma^{-1}(X),\sigma^{-1}Y).\\\ \end{split}$
Then
(2.13) $\psi_{\sigma(D),K}=\sigma\circ\psi_{D,H}\circ\sigma^{-1}$
and
(2.14) $\sigma(D)_{K}=\sigma\circ D_{H}\circ\sigma^{-1}.$
By (2.3), (2.4), (2.13) and (2.14), we have
(2.15) $\psi_{\sigma(D),K}=(\sigma^{\ast
K})^{-1}\circ\psi_{D,K}\circ\sigma^{\ast K}+\frac{1}{2}\sigma\circ
D\circ\sigma^{-1}-\frac{1}{2}(\sigma^{\ast K})^{-1}\circ D\circ\sigma^{\ast
K}$
and
(2.16) $\sigma(D)_{K}=(\sigma^{\ast K})^{-1}\circ D_{K}\circ\sigma^{\ast
K}+\frac{1}{2}\sigma\circ D\circ\sigma^{-1}-\frac{1}{2}(\sigma^{\ast
K})^{-1}\circ D\circ\sigma^{\ast K}.$
Let $h=\sigma^{\ast K}\circ\sigma$. Then
(2.17) $\psi_{\sigma(D),K}=(\sigma^{\ast
K})^{-1}\circ(\psi_{D,K}-\frac{1}{2}D(h)\circ h^{-1})\circ\sigma^{\ast K}$
and
(2.18) $\sigma(D)_{K}=(\sigma^{\ast K})^{-1}\circ(D_{K}-\frac{1}{2}D(h)\circ
h^{-1})\circ\sigma^{\ast K}.$
From the definition, it is easy to see that
(2.19)
$\langle\varphi_{1},\varphi_{2}\rangle_{H}=\langle\sigma\circ\varphi_{1}\circ\sigma^{-1},\sigma\circ\varphi_{2}\circ\sigma^{-1}\rangle_{K}$
for any $\varphi_{1},\varphi_{2}\in\Gamma({\rm End}(E))$. We know
(2.20)
$\begin{split}\langle\psi_{D,H},D_{H}\varphi\rangle_{H}&=\langle\psi_{D,H},(\sigma^{-1}\circ(\sigma(D))_{K}\circ\sigma)(\varphi)\rangle_{H}\\\
&=\langle\psi_{D,H},\sigma^{-1}\circ\\{(\sigma(D))_{K}(\sigma\circ\varphi\circ\sigma^{-1})\\}\circ\sigma\rangle_{H}\\\
&=\langle\sigma\circ\psi_{D,H}\circ\sigma^{-1},(\sigma(D))_{K}(\sigma\circ\varphi\circ\sigma^{-1})\rangle_{K}\\\
&=\langle\psi_{\sigma(D),K},(\sigma(D))_{K}(\sigma\circ\varphi\circ\sigma^{-1})\rangle_{K}\\\
\end{split}$
and then
(2.21) $(\sigma(D))_{K}^{\ast}\psi_{\sigma(D),K}=\sigma\circ
D_{H}^{\ast}\psi_{D,H}\circ\sigma^{-1}.$
On the other hand, one can check that
$(\sigma(D))_{K}^{\ast}\psi_{\sigma(D),K}$ is self-adjoint with respect to the
metric $K$.
###### Lemma 2.1.
Let $(E,D)$ be a flat complex vector bundle on a compact Riemannian manifold
$(M,g)$, and $K$ be a Hermitian metric on $E$. For any $\sigma\in\mathcal{G}$,
we have
(2.22)
$\langle\sigma^{-1}\circ(\sigma(D))_{K}^{\ast}\psi_{\sigma(D),K}\circ\sigma-
D_{K}^{\ast}\psi_{D,K},h\rangle_{K}=\frac{1}{2}\Delta\mbox{\rm
tr\,}h-\frac{1}{2}\langle D(h)\circ h^{-1},D(h)\rangle_{K},$ (2.23) $\langle
D_{K}^{\ast}\psi_{D,K}-\sigma^{-1}\circ(\sigma(D))_{K}^{\ast}\psi_{\sigma(D),K}\circ\sigma,h^{-1}\rangle_{K}=\frac{1}{2}\Delta\mbox{\rm
tr\,}h^{-1}-\frac{1}{2}\langle h\circ D(h^{-1}),D(h^{-1})\rangle_{K}$
and
(2.24)
$\langle\sigma^{-1}\circ(\sigma(D))_{K}^{\ast}\psi_{\sigma(D),K}\circ\sigma-
D_{K}^{\ast}\psi_{D,K},s\rangle_{K}=\frac{1}{4}\Delta|s|_{K}^{2}-\frac{1}{2}\langle
D(h)\circ h^{-1},D(s)\rangle_{K},$
where $h=\sigma^{\ast K}\circ\sigma$ and $s=\log h$.
###### Proof.
By (2.17) and (2.18), and choosing local normal coordinates
$\\{x^{i}\\}_{i=1}^{n}$ centered at the considered point, one can easily check
that
(2.25)
$\begin{split}&\sigma^{-1}\circ(\sigma(D))_{K}^{\ast}\psi_{\sigma(D),K}\circ\sigma=h^{-1}\circ(D_{K}^{\ast}\psi_{D,K}-\frac{1}{2}D_{K}^{\ast}(D(h)\circ
h^{-1}))\circ h\\\ &+\frac{1}{2}g^{ij}h^{-1}\circ(D_{\frac{\partial}{\partial
x^{i}}}(h)\circ h^{-1}\circ\psi_{D,K}(\frac{\partial}{\partial
x^{j}})-\psi_{D,K}(\frac{\partial}{\partial x^{j}})\circ
D_{\frac{\partial}{\partial x^{i}}}(h)\circ h^{-1})\circ h,\end{split}$
and then
(2.26)
$\begin{split}&\langle\sigma^{-1}\circ(\sigma(D))_{K}^{\ast}\psi_{\sigma(D),K}\circ\sigma-
D_{K}^{\ast}\psi_{D,K},s\rangle_{K}\\\
=&\langle-\frac{1}{2}D_{K}^{\ast}(D(h)\circ h^{-1}),h^{-1}\circ s\circ
h\rangle_{K}\\\ &+\langle\frac{1}{2}g^{ij}\circ(D_{\frac{\partial}{\partial
x^{i}}}(h)\circ h^{-1}\circ\psi_{D,K}(\frac{\partial}{\partial
x^{j}})-\psi_{D,K}(\frac{\partial}{\partial x^{j}})\circ
D_{\frac{\partial}{\partial x^{i}}}(h)\circ h^{-1}),h^{-1}\circ s\circ
h\rangle_{K}\\\ =&-\frac{1}{2}\mbox{\rm tr\,}(D_{K}^{\ast}(D(h)\circ
h^{-1})\circ s)\\\ &+\frac{1}{2}g^{ij}\mbox{\rm
tr\,}(D_{\frac{\partial}{\partial x^{i}}}(h)\circ
h^{-1}\circ(\psi_{D,K}(\frac{\partial}{\partial x^{j}})\circ
s-s\circ\psi_{D,K}(\frac{\partial}{\partial x^{j}})))\\\
=&\frac{1}{2}g^{ij}\frac{\partial}{\partial x^{j}}\mbox{\rm
tr\,}(D_{\frac{\partial}{\partial x^{i}}}(h)\circ h^{-1}\circ s)\\\
&-\frac{1}{2}g^{ij}\mbox{\rm tr\,}(D_{\frac{\partial}{\partial x^{i}}}(h)\circ
h^{-1}\circ(D_{K,\frac{\partial}{\partial
x^{j}}}(s)-\psi_{D,K}(\frac{\partial}{\partial x^{j}})\circ
s+s\circ\psi_{D,K}(\frac{\partial}{\partial x^{j}})))\\\
=&\frac{1}{4}\Delta|s|_{K}^{2}-\frac{1}{2}\langle D(h)\circ
h^{-1},D(s)\rangle_{K}.\end{split}$
Here we have used the following identity
(2.27) $\mbox{\rm tr\,}(D_{\frac{\partial}{\partial x^{i}}}(h)\circ
h^{-1}\circ s)=\mbox{\rm tr\,}(s\circ D_{\frac{\partial}{\partial x^{i}}}s).$
Identities (2.22) and (2.23) can be proved in a similar way.
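Identity (2.27) can also be verified numerically in a finite-dimensional model. The sketch below (my own illustration, not from the paper) checks that for $h(x)=e^{s(x)}$ with $s(x)$ symmetric, $\mbox{tr}(h'h^{-1}s)=\mbox{tr}(s\,s')$, which holds because $s$ commutes with $e^{-as}se^{as}=s$ inside the trace.

```python
import numpy as np

# Numerical check (my own illustration, not from the paper) of identity
# (2.27): for h(x) = exp(s(x)) with s(x) symmetric, one has
#   tr( h'(x) h(x)^{-1} s(x) ) = tr( s(x) s'(x) ).
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
S0, S1 = 0.3 * (A + A.T), 0.3 * (B + B.T)      # two fixed symmetric matrices

def s(x):                                       # a symmetric, x-dependent s
    return S0 + x * S1

def expm_sym(S):                                # exp of a symmetric matrix
    lam, V = np.linalg.eigh(S)
    return (V * np.exp(lam)) @ V.T

x, eps = 0.7, 1e-5
dh = (expm_sym(s(x + eps)) - expm_sym(s(x - eps))) / (2.0 * eps)
lhs = np.trace(dh @ np.linalg.inv(expm_sym(s(x))) @ s(x))
rhs = np.trace(s(x) @ S1)                       # s'(x) = S1
print(abs(lhs - rhs))                           # ~ 0 up to finite differences
```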
###### Lemma 2.2.
Let $(E,\hat{D})$ be a flat complex vector bundle on a compact Riemannian
manifold $(M,g)$, and $K$ be a Hermitian metric on $E$. Assume
$\hat{D}_{K}^{\ast}\psi_{\hat{D},K}=0$, then $(E,\hat{D})$ must be semi-
simple.
###### Proof.
Suppose that $(E,\hat{D})$ is not simple, and choose a $\hat{D}$-invariant
sub-bundle $S$ of minimal rank. Then there exists an exact sequence
(2.28) $0\rightarrow S\rightarrow E\rightarrow Q\rightarrow 0.$
Denote by $D_{S}$ and $D_{Q}$ (respectively, $K_{S}$ and $K_{Q}$) the connections
(respectively, metrics) on the sub-bundle $S$ and the quotient bundle $Q$
induced by the connection $\hat{D}$ (respectively, metric $K$). For the
Hermitian metric $K$ on $E$, we have the following bundle isomorphism
(2.29) $f_{K}:S\oplus Q\rightarrow E,\qquad(X,[Y])\mapsto
i(X)+(\textrm{Id}_{E}-\pi_{K})(Y),$
where $X\in\Gamma(S)$, $Y\in\Gamma(E)$, $i:S\hookrightarrow E$ is the
inclusion and $\pi_{K}:E\rightarrow E$ is the orthogonal projection onto $S$
with respect to the metric $K$. Since $S$ is $\hat{D}$-invariant, we know
(2.30) $\pi_{K}=(\pi_{K})^{2}=(\pi_{K})^{\ast K}$
and
(2.31) $(\textrm{Id}_{E}-\pi_{K})\circ\hat{D}(\pi_{K})=0.$
By definition, the pulled-back metric is
(2.32) $f_{K}^{\ast}(K)=\begin{pmatrix}K_{S}&0\\\ 0&K_{Q}\\\ \end{pmatrix},$
and the pulled-back connection is
(2.33) $f_{K}^{\ast}(\hat{D})=\begin{pmatrix}D_{S}&\beta\\\ 0&D_{Q}\\\
\end{pmatrix},$
where $\beta\in\Omega^{1}({\rm Hom}(Q,S))$ will be called the second
fundamental form. One can check that
(2.34) $\beta([Y])=-\pi_{K}\circ(\hat{D}\pi_{K})(Y),$
where $Y\in\Gamma(E)$. Because $\hat{D}$ is flat, we have
(2.35) $D_{S}^{2}=0,\quad D_{Q}^{2}=0,\quad D_{S}\circ\beta+\beta\circ
D_{Q}=0.$
It is easy to see that
(2.36)
$f_{K}^{\ast}(\psi_{\hat{D},K})=\begin{pmatrix}\psi_{D_{S},K_{S}}&\frac{1}{2}\beta\\\
\frac{1}{2}\beta^{\ast}&\psi_{D_{Q},K_{Q}}\\\ \end{pmatrix},$
(2.37) $f_{K}^{\ast}(\hat{D}_{K})=\begin{pmatrix}D_{K_{S}}&\frac{1}{2}\beta\\\
-\frac{1}{2}\beta^{\ast}&D_{K_{Q}}\\\ \end{pmatrix},$
where $\beta^{\ast}\in\Omega^{1}({\rm Hom}(S,Q))$ is the adjoint of $\beta$
with respect to the metrics $K_{S}$ and $K_{Q}$. In the following, we choose
the normal coordinates $\\{x^{i}\\}_{i=1}^{n}$ centered at the considered
point $p\in M$. A direct calculation yields
(2.38) $\begin{split}&f_{K}^{-1}\circ\hat{D}_{K}^{\ast}\psi_{\hat{D},K}\circ
f_{K}=f_{K}^{\ast}(\hat{D}_{K})^{\ast}\\{f_{K}^{\ast}(\psi_{\hat{D},K})\\}\\\
&=\begin{pmatrix}D_{K_{S}}^{\ast}\psi_{D_{S},K_{S}}-\frac{1}{2}g^{ij}\beta_{i}\circ\beta_{j}^{\ast}&\frac{1}{2}D_{K,Q^{\ast}\otimes
S}^{\ast}\beta-\frac{1}{2}g^{ij}(\beta_{i}\circ\psi_{Q,j}-\psi_{S,j}\circ\beta_{i})\\\
\frac{1}{2}D_{K,S^{\ast}\otimes
Q}^{\ast}\beta^{\ast}+\frac{1}{2}g^{ij}(\beta_{i}^{\ast}\circ\psi_{S,j}-\psi_{Q,j}\circ\beta_{i}^{\ast})&D_{K_{Q}}^{\ast}\psi_{D_{Q},K_{Q}}+\frac{1}{2}g^{ij}\beta_{i}^{\ast}\circ\beta_{j}\\\
\end{pmatrix},\end{split}$
where $\beta_{i}=\beta(\frac{\partial}{\partial x^{i}})$,
$\beta_{j}^{\ast}=\beta^{\ast}(\frac{\partial}{\partial x^{j}})$,
$\psi_{S,i}=\psi_{S}(\frac{\partial}{\partial x^{i}})$ and
$\psi_{Q,j}=\psi_{Q}(\frac{\partial}{\partial x^{j}})$. Due to
$\hat{D}_{K}^{\ast}\psi_{\hat{D},K}=0$, (2.38) implies
(2.39)
$D_{K_{S}}^{\ast}\psi_{D_{S},K_{S}}-\frac{1}{2}g^{ij}\beta_{i}\circ\beta_{j}^{\ast}=0,$
and
(2.40) $\int_{M}|\beta|^{2}dV_{g}=2\int_{M}\langle
D_{K_{S}}^{\ast}\psi_{D_{S},K_{S}},\textrm{Id}_{S}\rangle_{K_{S}}dV_{g}=0.$
So $(E,\hat{D})\cong(S,D_{S})\oplus(Q,D_{Q})$, where $(S,D_{S})$ is a simple
flat bundle and $(Q,D_{Q})$ is a flat bundle with
$D_{K_{Q}}^{\ast}\psi_{D_{Q},K_{Q}}=0$. Applying the above argument to
$(Q,D_{Q})$, we obtain an isomorphism
(2.41) $(E,\hat{D})\cong\oplus_{i=1}^{l}(Q_{i},D_{Q_{i}}),$
where every $(Q_{i},D_{Q_{i}})$ is a simple flat bundle.
Let $\sigma(t)$ be a solution of the heat flow (1.5), i.e. it satisfies
(2.42) $\frac{\partial\sigma(t)}{\partial
t}\circ\sigma^{-1}(t)=(\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K},$
then
(2.43) $\begin{split}\frac{\partial}{\partial
t}\sigma(t)\\{D\\}&=-\sigma(t)\\{D\\}(\frac{\partial\sigma(t)}{\partial
t}\circ\sigma^{-1}(t))\\\
&=-\sigma(t)\\{D\\}((\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K}).\\\
\end{split}$
Considering the self-adjoint and anti-self-adjoint parts of the above
identity, we have
(2.44) $\frac{\partial}{\partial
t}\sigma(t)\\{D\\}_{K}=-[\psi_{\sigma(t)\\{D\\},K},(\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K}],$
and
(2.45) $\frac{\partial}{\partial
t}\psi_{\sigma(t)\\{D\\},K}=-\sigma(t)\\{D\\}_{K}((\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K}).$
Let’s recall some basic estimates of the heat flows (1.5) and (1.7).
###### Lemma 2.3.
([3]) Let $(E,D)$ be a flat complex vector bundle on a compact Riemannian
manifold $(M,g)$, and $K$ be a Hermitian metric on $E$. If $\sigma(t)$ is a
solution of the heat flow (1.5), then we have
(2.46)
$\frac{d}{dt}\|\psi_{\sigma(t)\\{D\\},K}\|_{L^{2}}^{2}=-2\|(\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K}\|_{L^{2}}^{2},$
(2.47) $\begin{split}&(\Delta-\frac{\partial}{\partial
t})|\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}=2|\nabla^{(\sigma(t)\\{D\\})_{K}}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}\\\
&+2\langle\psi_{\sigma(t)\\{D\\},K}\circ
Ric,\psi_{\sigma(t)\\{D\\},K}\rangle_{K}+2|[\psi_{\sigma(t)\\{D\\},K},\psi_{\sigma(t)\\{D\\},K}]|_{K}^{2}\end{split}$
and
(2.48) $\begin{split}&(\Delta-\frac{\partial}{\partial
t})|(\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}\\\
&=2|(\sigma(t)\\{D\\})_{K}((\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K})|_{K}^{2}+2|[\psi_{\sigma(t)\\{D\\},K},(\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K}]|_{K}^{2}.\end{split}$
###### Proposition 2.1.
([3]) Let $(E,D)$ be a flat complex vector bundle on a compact Riemannian
manifold $(M,g)$, and $K$ be a Hermitian metric on $E$. The heat flow
(1.5) has a long-time solution $\sigma(t)$ for $t\in[0,\infty)$. Furthermore,
for every sequence $t_{i}\rightarrow\infty$ there exists a subsequence $t_{j}$
such that
$\sigma(t_{j})\\{D\\}=(\sigma(t_{j})\\{D\\})_{K}+\psi_{\sigma(t_{j})\\{D\\},K}$
converges weakly, modulo $K$-unitary gauge transformations, to a flat
connection $D_{\infty}=D_{\infty,K}+\psi_{\infty,K}$ in $L_{1}^{p}$-topology,
and $D_{\infty,K}^{\ast}\psi_{\infty,K}=0$.
In the following, we will show that the above convergence can be strengthened
to convergence in the $C^{\infty}$-topology. Using (2.47), we deduce
(2.49) $(\Delta-\frac{\partial}{\partial
t})|\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}\geq-
C_{1}|\psi_{\sigma(t)\\{D\\},K}|_{K}^{2},$
equivalently
(2.50) $(\Delta-\frac{\partial}{\partial
t})(e^{-C_{1}t}|\psi_{\sigma(t)\\{D\\},K}|_{K}^{2})\geq 0,$
where $C_{1}$ is a positive constant depending only on the Ricci curvature of
$(M,g)$. Let
$f(x,t)=\int_{M}\chi(x,y,t-t_{0})e^{-C_{1}t_{0}}|\psi_{\sigma(t_{0})\\{D\\},K}|_{K}^{2}(y)dV_{g}(y)$,
where $\chi$ is the heat kernel of $(M,g)$. Then (2.50) implies:
(2.51) $(\Delta-\frac{\partial}{\partial
t})(e^{-C_{1}t}|\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}-f(x,t))\geq 0$
and
(2.52)
$f(\cdot,t_{0})=e^{-C_{1}t_{0}}|\psi_{\sigma(t_{0})\\{D\\},K}|_{K}^{2}.$
From the maximum principle and (2.46), for any $t_{0}\geq 0$, it follows that
(2.53)
$\begin{split}\max_{M}e^{-C_{1}(t_{0}+1)}|\psi_{\sigma(t_{0}+1)\\{D\\},K}|_{K}^{2}\leq&\max_{M}f(x,t_{0}+1)\\\
=&\int_{M}\chi(x,y,1)e^{-C_{1}t_{0}}|\psi_{\sigma(t_{0})\\{D\\},K}|_{K}^{2}(y)dV_{g}(y)\\\
\leq&C_{2}e^{-C_{1}t_{0}}\int_{M}|\psi_{\sigma(t_{0})\\{D\\},K}|_{K}^{2}dV_{g}\\\
\leq&C_{2}e^{-C_{1}t_{0}}\int_{M}|\psi_{D,K}|_{K}^{2}dV_{g},\\\ \end{split}$
and then
(2.54) $\max_{M}|\psi_{\sigma(t_{0}+1)\\{D\\},K}|_{K}^{2}\leq
C_{2}e^{C_{1}}\int_{M}|\psi_{D,K}|_{K}^{2}dV_{g},$
where $C_{2}$ is a positive constant depending only on the upper bound of
$\chi(x,y,1)$. Hence we know that $\sup_{M}|\psi_{\sigma(t)\\{D\\},K}|_{K}$ is
uniformly bounded.
Choosing local normal coordinates $\\{x^{i}\\}_{i=1}^{n}$ centered at the
considered point, we have
(2.55)
$\begin{split}&\Delta|\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}=2|\nabla^{\sigma}\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}\\\
&+2Re\\{g^{ij}\langle\nabla^{\sigma}_{\frac{\partial}{\partial
x^{i}}}\nabla^{\sigma}_{\frac{\partial}{\partial
x^{i}}}\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K},\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}\rangle\\},\end{split}$
where $\nabla^{\sigma}$ is the covariant derivative induced by the connection
$(\sigma(t)\\{D\\})_{K}$ and the Levi-Civita connection $\nabla$ of $(M,g)$.
Denote
(2.56) $\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}=\psi_{m,l}dx^{m}\otimes
dx^{l},$
and we have
(2.57) $g^{ij}\nabla^{\sigma}_{\frac{\partial}{\partial
x^{i}}}\nabla^{\sigma}_{\frac{\partial}{\partial
x^{i}}}\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}=g^{ij}\psi_{m,lji}dx^{m}\otimes
dx^{l},$
where $\psi_{m,lji}$ denotes the component of the covariant derivative.
According to the Ricci identity, one can check that
(2.58)
$\begin{split}g^{ij}\psi_{m,lji}=&g^{ij}(\psi_{j,iml}+\psi_{a,l}R^{a}_{jmi}+\psi_{a}R^{a}_{jmi,l}+\psi_{a,i}R^{a}_{mlj}+\psi_{a}R^{a}_{mlj,i}+\psi_{a,m}R^{a}_{jli}+\psi_{j,a}R^{a}_{mli}\\\
&+[F_{im,l},\psi_{j}]+[F_{im},\psi_{j,l}]+[F_{il},\psi_{j,m}]+[F_{jl,i},\psi_{m}]+[F_{jl},\psi_{m,i}]),\end{split}$
where $R$ denotes the Riemannian curvature of $g$ and $F$ denotes the
curvature of the connection $(\sigma(t)\\{D\\})_{K}$. By the harmonic flow
(1.5), we have
(2.59) $\begin{split}&\frac{\partial}{\partial
t}|\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}=-2Re\langle\nabla^{\sigma}((\sigma(t)\\{D\\})_{K}((\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K})),\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}\rangle\\\
&-2Re\langle[[\psi_{\sigma(t)\\{D\\},K},(\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K}],\psi_{\sigma(t)\\{D\\},K}],\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}\rangle.\end{split}$
On the other hand, it is straightforward to check that
(2.60)
$\nabla^{\sigma}((\sigma(t)\\{D\\})_{K}((\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K}))=-g^{ij}\psi_{j,iml}dx^{m}\otimes
dx^{l}.$
From (2.55), (2.58), (2.59) and (2.60), we get
(2.61) $\begin{split}&(\Delta-\frac{\partial}{\partial
t})|\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}\geq
2|\nabla^{\sigma}\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}\\\
&-C_{3}(|Rm|+|\psi_{\sigma(t)\\{D\\},K}|_{K}^{2})|\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}\\\
&-C_{4}(|\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}|(\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K}|_{K}+|\psi_{\sigma(t)\\{D\\},K}|_{K}|\nabla
Rm|)|\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}|_{K},\\\ \end{split}$
where $C_{3}$ and $C_{4}$ are uniform constants depending only on $\dim(M)$
and ${\rm rank}(E)$. (2.47) and (2.54) yield
(2.62)
$\int_{M\times[t_{0},t_{0}+3]}|\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}dV_{g}dt\leq
C_{5}$
for all $t_{0}\geq 0$, where $C_{5}$ is a uniform constant. Making use of
(2.61) and Moser’s parabolic estimate, one can derive
(2.63)
$\sup_{M\times[t_{0}+1,t_{0}+2]}|\nabla^{\sigma}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}\leq
C_{6}.$
Furthermore, it holds that
(2.64) $\begin{split}&(\Delta-\frac{\partial}{\partial
t})|(\nabla^{\sigma})^{\alpha}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}\geq
2|(\nabla^{\sigma})^{\alpha+1}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}\\\
&-C_{7}|(\nabla^{\sigma})^{\alpha}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2}-C_{8}|(\nabla^{\sigma})^{\alpha}\psi_{\sigma(t)\\{D\\},K}|_{K},\end{split}$
where $C_{7}$ and $C_{8}$ are uniform positive constants depending only on
$|Rm|$, $|\nabla Rm|$, $\cdots$, $|(\nabla)^{\alpha}Rm|$, and
$|\psi_{\sigma(t)\\{D\\},K}|_{K},\cdots,|(\nabla^{\sigma})^{\alpha-1}\psi_{\sigma(t)\\{D\\},K}|_{K}$.
Using (2.7), (2.64) and repeating the above argument, we have the following
uniform $C^{\infty}$-estimates,
(2.65) $\sup_{[0,\infty)\times
M}(|(\nabla^{\sigma})^{\alpha}F_{(\sigma(t)\\{D\\})_{K}}|^{2}+|(\nabla^{\sigma})^{\alpha}\psi_{\sigma(t)\\{D\\},K}|_{K}^{2})\leq\hat{C}_{\alpha}$
for every $0\leq\alpha<\infty$. Then, applying a result of Donaldson-Kronheimer
(Theorem 2.3.7 in [8]) and Hong-Tian’s argument (Proposition 6 in
[10]), we obtain the following proposition.
###### Proposition 2.2.
Let $(E,D)$ be a flat complex vector bundle over a compact Riemannian manifold
$(M,g)$, and $K$ be a Hermitian metric on $E$. If $\sigma(t)$ is a longtime
solution of the heat flow (1.5), then for every sequence
$t_{i}\rightarrow\infty$ there exists a subsequence $t_{j}$ such that
$\sigma(t_{j})\\{D\\}=(\sigma(t_{j})\\{D\\})_{K}+\psi_{\sigma(t_{j})\\{D\\},K}$
converges, modulo $K$-unitary gauge transformations, to a flat connection
$D_{\infty}=D_{K,\infty}+\psi_{\infty}$ in $C^{\infty}$-topology, and
$D_{K,\infty}^{\ast}\psi_{\infty}=0$.
## 3\. Proof of Theorem 1.1
The following proposition about the existence of the Jordan-Hölder filtration
and the uniqueness of the graded flat complex vector bundle should be well
known to experts, and we give a proof here for the reader’s convenience.
###### Proposition 3.1.
Let $(E,D)$ be a flat complex vector bundle. There is a filtration of sub-
bundles
(3.1) $0=E_{0}\subset E_{1}\subset\cdots\subset E_{i}\cdots\subset E_{l}=E,$
such that every sub-bundle $E_{i}$ is $D$-invariant and every quotient bundle
$(Q_{i},D_{Q_{i}}):=(E_{i}/E_{i-1},D_{Q_{i}})$ is flat and simple.
Furthermore, the graded flat complex vector bundle
$\oplus_{i=1}^{l}(Q_{i},D_{Q_{i}})$ is unique in the sense of isomorphism.
###### Proof.
Suppose that $(E,D)$ is not simple. Let $E_{1}$ be a $D$-invariant sub-bundle
of $E$ with minimal rank. Then we have the following exact sequence
(3.2) $0\rightarrow E_{1}\xrightarrow{i_{0}}E\xrightarrow{P}Q\rightarrow 0,$
and $(E_{1},D_{E_{1}})$ is simple. By induction, we can assume that there is a
Jordan-Hölder filtration of the flat bundle $(Q,D_{Q})$, i.e.
(3.3)
$0=\hat{Q}_{0}\subset\hat{Q}_{1}\subset\cdots\subset\hat{Q}_{i}\cdots\subset\hat{Q}_{l-1}=Q.$
Choosing a Hermitian metric $K$ on $E$, we get a bundle isomorphism $P^{\ast
K}:Q\rightarrow E_{1}^{\bot}$ , where $P^{\ast K}$ is the adjoint of the
projection $P$ with respect to the metric $K$. Set
(3.4) $E_{i}=E_{1}\oplus P^{\ast K}(\hat{Q}_{i-1})$
for all $1<i\leq l$. One can easily find that every $E_{i}$ is $D$-invariant
and $(E_{i}/E_{i-1},D_{Q_{i}})$ is simple. So, we obtain a Jordan-Hölder
filtration of $(E,D)$.
Suppose that there is another Jordan-Hölder filtration of $(E,D)$,
(3.5)
$0=\tilde{E}_{0}\subset\tilde{E}_{1}\subset\cdots\subset\tilde{E}_{i}\cdots\subset\tilde{E}_{\tilde{l}}=E.$
We will show that
(3.6)
$\oplus_{i=1}^{l}(Q_{i},D_{Q_{i}})\cong\oplus_{\alpha=1}^{\tilde{l}}(\tilde{Q}_{\alpha},D_{\tilde{Q}_{\alpha}}).$
It is straightforward to check that there exist bundle isomorphisms
$f:\oplus_{i=1}^{l}Q_{i}\rightarrow E$ and
$\tilde{f}:\oplus_{\alpha=1}^{\tilde{l}}\tilde{Q}_{\alpha}\rightarrow E$ such
that
(3.7)
$\begin{split}f^{*}(D)=\left(\begin{array}[]{ccccc}D_{Q_{1}}&\cdots&\beta_{1l}\\\
\vdots&\ddots&\vdots\\\ 0&\cdots&D_{Q_{l}}\end{array}\right)\end{split}$
and
(3.8)
$\begin{split}\tilde{f}^{*}(D)=\left(\begin{array}[]{ccccc}D_{\tilde{Q}_{1}}&\cdots&\gamma_{1\tilde{l}}\\\
\vdots&\ddots&\vdots\\\
0&\cdots&D_{\tilde{Q}_{\tilde{l}}}\end{array}\right).\end{split}$
Let $\eta=f^{-1}\circ\tilde{f}=(\eta_{i\alpha})$, where $\eta_{i\alpha}\in{\rm
Hom}(\tilde{Q}_{\alpha},Q_{i})$. (3.7) and (3.8) imply
(3.9)
$\begin{split}\eta\circ\left(\begin{array}[]{ccccc}D_{\tilde{Q}_{1}}&\cdots&\gamma_{1\tilde{l}}\\\
\vdots&\ddots&\vdots\\\
0&\cdots&D_{\tilde{Q}_{\tilde{l}}}\end{array}\right)=\left(\begin{array}[]{ccccc}D_{Q_{1}}&\cdots&\beta_{1l}\\\
\vdots&\ddots&\vdots\\\
0&\cdots&D_{Q_{l}}\end{array}\right)\circ\eta.\end{split}$
Assume that $\eta_{l1}=0$, $\cdots$, $\eta_{(k+1)1}=0$ and $\eta_{k1}\neq 0$.
We express $\eta$ as a partitioned matrix
(3.10)
$\eta=\left(\begin{array}[]{ccccc}\check{\eta}_{11}&&\check{\eta}_{12}\\\
\eta_{k1}&&\check{\eta}_{22}\\\ 0&&\check{\eta}_{32}\end{array}\right),$
where $\check{\eta}_{11}=\left(\begin{array}[]{ccccc}\eta_{11}\\\ \vdots\\\
\eta_{(k-1)1}\end{array}\right)$,
$\check{\eta}_{12}=\left(\begin{array}[]{ccccc}\eta_{12}&\cdots&\eta_{1\tilde{l}}\\\
\vdots&\ddots&\vdots\\\
\eta_{(k-1)2}&\cdots&\eta_{(k-1)\tilde{l}}\end{array}\right)$,
$\check{\eta}_{22}=\left(\begin{array}[]{ccccc}\eta_{k2}&\cdots&\eta_{k\tilde{l}}\\\
\end{array}\right)$,
$\check{\eta}_{32}=\left(\begin{array}[]{ccccc}\eta_{(k+1)2}&\cdots&\eta_{(k+1)\tilde{l}}\\\
\vdots&\ddots&\vdots\\\
\eta_{l2}&\cdots&\eta_{l\tilde{l}}\end{array}\right)$. Write:
(3.11)
$\begin{split}\left(\begin{array}[]{ccccc}D_{\tilde{Q}_{1}}&\cdots&\gamma_{1\tilde{l}}\\\
\vdots&\ddots&\vdots\\\
0&\cdots&D_{\tilde{Q}_{\tilde{l}}}\end{array}\right)=\left(\begin{array}[]{ccccc}D_{\tilde{Q}_{1}}&&\check{\gamma}_{12}\\\
0&&\check{D}_{\tilde{Q}_{2\tilde{l}}}\end{array}\right)\end{split}$
and
(3.12)
$\begin{split}\left(\begin{array}[]{ccccc}D_{Q_{1}}&\cdots&\beta_{1l}\\\
\vdots&\ddots&\vdots\\\
0&\cdots&D_{Q_{l}}\end{array}\right)=\left(\begin{array}[]{ccccc}\check{D}_{Q_{1(k-1)}}&\check{\beta}_{12}&\check{\beta}_{13}\\\
0&D_{Q_{k}}&\check{\beta}_{23}\\\
0&0&\check{D}_{Q_{(k+1)l}}\end{array}\right),\end{split}$
where
$\check{\gamma}_{12}=\left(\begin{array}[]{ccccc}\gamma_{12}&\cdots&\gamma_{1\tilde{l}}\\\
\end{array}\right)$,
$\check{D}_{\tilde{Q}_{2\tilde{l}}}=\left(\begin{array}[]{ccccc}D_{\tilde{Q}_{2}}&\cdots&\gamma_{2\tilde{l}}\\\
\vdots&\ddots&\vdots\\\ 0&\cdots&D_{\tilde{Q}_{\tilde{l}}}\end{array}\right)$,
$\check{D}_{Q_{1(k-1)}}=\left(\begin{array}[]{ccccc}D_{Q_{1}}&\cdots&\beta_{1(k-1)}\\\
\vdots&\ddots&\vdots\\\ 0&\cdots&D_{Q_{k-1}}\end{array}\right)$,
$\check{\beta}_{12}=\left(\begin{array}[]{ccccc}\beta_{1k}\\\ \vdots\\\
\beta_{(k-1)k}\end{array}\right)$,
$\check{\beta}_{13}=\left(\begin{array}[]{ccccc}\beta_{1(k+1)}&\cdots&\beta_{1l}\\\
\vdots&\ddots&\vdots\\\
\beta_{(k-1)(k+1)}&\cdots&\beta_{(k-1)l}\end{array}\right)$,
$\check{\beta}_{23}=\left(\begin{array}[]{ccccc}\beta_{k(k+1)}&\cdots&\beta_{kl}\end{array}\right)$,
$\check{D}_{Q_{(k+1)l}}=\left(\begin{array}[]{ccccc}D_{Q_{k+1}}&\cdots&\beta_{(k+1)l}\\\
\vdots&\ddots&\vdots\\\ 0&\cdots&D_{Q_{l}}\end{array}\right)$. (3.9) tells us
that
(3.13) $\eta_{k1}\circ D_{\tilde{Q}_{1}}=D_{Q_{k}}\circ\eta_{k1}.$
Since $(\tilde{Q}_{1},D_{\tilde{Q}_{1}})$ and $(Q_{k},D_{Q_{k}})$ are simple
and $\eta_{k1}\neq 0$, a Schur-type argument (the kernel and image of a
parallel morphism are invariant sub-bundles) shows that
$\eta_{k1}:(\tilde{Q}_{1},D_{\tilde{Q}_{1}})\rightarrow(Q_{k},D_{Q_{k}})$ is
an isomorphism. Denote
(3.14)
$\begin{split}A:=\left(\begin{array}[]{ccccc}\textrm{Id}_{Q_{1}\oplus\cdots\oplus
Q_{k-1}}&-\check{\eta}_{11}\circ(\eta_{k1})^{-1}&0\\\
0&\textrm{Id}_{Q_{k}}&0\\\ 0&0&\textrm{Id}_{Q_{k+1}\oplus\cdots\oplus
Q_{l}}\end{array}\right)\end{split}$
and
(3.15)
$\begin{split}B:=\left(\begin{array}[]{ccccc}\textrm{Id}_{\tilde{Q}_{1}}&-(\eta_{k1})^{-1}\circ\check{\eta}_{22}\\\
0&&\textrm{Id}_{\tilde{Q}_{2}\oplus\cdots\oplus\tilde{Q}_{\tilde{l}}}\end{array}\right).\end{split}$
By direct calculation, we have:
(3.16) $A\circ\eta\circ
B=\left(\begin{array}[]{ccccc}0&&\check{\eta}_{12}-\check{\eta}_{11}\circ(\eta_{k1})^{-1}\check{\eta}_{22}\\\
\eta_{k1}&&0\\\ 0&&\check{\eta}_{32}\end{array}\right).$
(3.9) is equivalent to the following formula
(3.17) $\begin{split}A\circ\eta\circ B\circ
B^{-1}\circ\left(\begin{array}[]{ccccc}D_{\tilde{Q}_{1}}&\cdots&\gamma_{1\tilde{l}}\\\
\vdots&\ddots&\vdots\\\
0&\cdots&D_{\tilde{Q}_{\tilde{l}}}\end{array}\right)=A\circ\left(\begin{array}[]{ccccc}D_{Q_{1}}&\cdots&\beta_{1l}\\\
\vdots&\ddots&\vdots\\\ 0&\cdots&D_{Q_{l}}\end{array}\right)A^{-1}\circ
A\circ\eta\circ B,\end{split}$
and then
(3.18)
$\begin{split}&\left(\begin{array}[]{ccccc}0&&(\check{\eta}_{12}-\check{\eta}_{11}\circ(\eta_{k1})^{-1}\check{\eta}_{22})\circ\check{D}_{\tilde{Q}_{2\tilde{l}}}\\\
\eta_{k1}\circ
D_{\tilde{Q}_{1}}&&\check{\eta}_{22}\circ\check{D}_{\tilde{Q}_{2\tilde{l}}}-D_{Q_{k}}\circ\check{\eta}_{22}+\eta_{k1}\circ\check{\gamma}_{12}\\\
0&&\check{\eta}_{32}\circ\check{D}_{\tilde{Q}_{2\tilde{l}}}\end{array}\right)\\\
=&\left(\begin{array}[]{ccccc}\check{D}_{Q_{1(k-1)}}\circ\check{\eta}_{11}-\check{\eta}_{11}\circ
D_{\tilde{Q_{1}}}+\check{\beta}_{12}\circ\eta_{k1}&&\check{D}_{Q_{1(k-1)}}\circ(\check{\eta}_{12}-\check{\eta}_{11}\circ(\eta_{k1})^{-1}\check{\eta}_{22})+\check{\alpha}\circ\check{\eta}_{32}\\\
D_{Q_{k}}\circ\eta_{k1}&&\check{\beta}_{23}\circ\check{\eta}_{32}\\\
0&&\check{D}_{Q_{(k+1)l}}\circ\check{\eta}_{32}\end{array}\right),\\\
\end{split}$
where
$\check{\alpha}=\check{\beta}_{13}-\check{\eta}_{11}\circ(\eta_{k1})^{-1}\check{\beta}_{23}$.
Set
(3.19)
$\check{\eta}=\left(\begin{array}[]{ccccc}\check{\eta}_{12}-\check{\eta}_{11}\circ(\eta_{k1})^{-1}\check{\eta}_{22}\\\
\check{\eta}_{32}\end{array}\right).$
From (3.16) and (3.18), it is not hard to see that
$\check{\eta}:\tilde{Q}_{2}\oplus\cdots\oplus\tilde{Q}_{\tilde{l}}\rightarrow
Q_{1}\oplus\cdots\oplus Q_{k-1}\oplus
Q_{k+1}\oplus\cdots\oplus Q_{l}$ is a bundle isomorphism and satisfies
(3.20)
$\check{\eta}\circ\check{D}_{\tilde{Q}_{2\tilde{l}}}=\left(\begin{array}[]{ccccc}\check{D}_{Q_{1(k-1)}}&&\check{\alpha}\\\
0&&\check{D}_{Q_{(k+1)l}}\end{array}\right)\circ\check{\eta}.$
According to (3.13), (3.20) and induction, there must exist an
isomorphism between $\oplus_{i=1}^{l}(Q_{i},D_{Q_{i}})$ and
$\oplus_{\alpha=1}^{\tilde{l}}(\tilde{Q}_{\alpha},D_{\tilde{Q}_{\alpha}})$.
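The inductive argument above can be made concrete in the simplest nontrivial case. Over a base whose fundamental group is generated by a single loop, a flat bundle is determined by one monodromy matrix; the following numpy toy (purely illustrative, not part of the paper's argument) exhibits a non-simple rank-2 monodromy, its invariant line $E_{1}$, and the graded object read off from the diagonal blocks.

```python
import numpy as np

# Monodromy of a toy rank-2 flat bundle: upper-triangular, hence not simple.
M = np.array([[2.0, 1.0],
              [0.0, 0.5]])

# The line E_1 spanned by e1 is invariant: M e1 is parallel to e1.
e1 = np.array([1.0, 0.0])
invariant = np.isclose(float(np.cross(M @ e1, e1)), 0.0)

# Graded object of the Jordan-Hoelder filtration 0 < E_1 < E:
# the direct sum Q_1 (+) Q_2 of the quotients, with monodromy the diagonal part.
Gr = np.diag(np.diag(M))
```

Any other Jordan-Hölder filtration of this example yields the same graded monodromy up to permutation of the eigenvalues, illustrating the uniqueness statement.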
###### Proposition 3.2.
Let $(\hat{E},\hat{D})$ be a flat complex vector bundle over a compact
Riemannian manifold $(M,g)$, $\hat{K}$ be a Hermitian metric on $\hat{E}$ and
$i_{0}:\hat{S}\hookrightarrow\hat{E}$ be a $\hat{D}$-invariant sub-bundle of
$\hat{E}$. Suppose that there is a sequence of gauge transformations
$\hat{\sigma}_{l}$ such that
$\hat{D}_{l}:=\hat{\sigma}_{l}(\hat{D})\rightarrow\hat{D}_{\infty}$ weakly in
$L_{1}^{p}$-topology as $l\rightarrow+\infty$. Furthermore,
$\|\hat{D}_{l,K}^{\ast}\psi_{\hat{D}_{l},K}\|_{L^{\infty}}$ and
$\|\psi_{\hat{D}_{l},K}\|_{L^{\infty}}$ are uniformly bounded. Then there is a
subsequence of $\eta_{l}:=\hat{\sigma}_{l}\circ i_{0}$ which, after rescaling,
converges weakly to a nonzero map $\eta_{\infty}:\hat{S}\rightarrow\hat{E}$
satisfying $\eta_{\infty}\circ D_{\hat{S}}=\hat{D}_{\infty}\circ\eta_{\infty}$
in $L_{2}^{p}$-topology, where $D_{\hat{S}}=\hat{D}|_{\hat{S}}$ is the induced
flat connection on $\hat{S}$.
###### Proof.
With respect to the Hermitian metric $K$, we have the following decomposition
(3.21) $\hat{D}_{l}=\hat{D}_{l,K}+\psi_{\hat{D}_{l},K},$
where $\hat{D}_{l,K}$ is a $K$-unitary connection,
$\psi_{\hat{D}_{l},K}\in\Omega^{1}(\mbox{End}(E))$ is $K$-self-adjoint. Under
the condition $\hat{D}_{l}\rightarrow\hat{D}_{\infty}$ weakly in
$L_{1}^{p}$-topology, we know that
(3.22) $\hat{D}_{l,K}\rightarrow\hat{D}_{\infty,K},\quad
and\quad\psi_{\hat{D}_{l},K}\rightarrow\psi_{\hat{D}_{\infty},K}$
weakly in $L_{1}^{p}$-topology as $l\rightarrow\infty$. Choose local
coordinates $\\{x^{i}\\}_{i=1}^{n}$ on $M$, and write $g=g_{ij}dx^{i}\otimes
dx^{j}$. Note that $\hat{S}$ is a $\hat{D}$-invariant sub-bundle, and we have
(3.23) $\eta_{l}\circ D_{\hat{S}}=\hat{D}_{l}\circ\eta_{l},$ (3.24)
$\begin{split}\Delta_{K,l}\eta_{l}&=g^{ij}(\nabla^{K,l}_{\frac{\partial}{\partial
x^{i}}}(\hat{D}_{l,K}\circ\eta_{l}-\eta_{l}\circ
D_{\hat{S},K}))(\frac{\partial}{\partial x^{j}})\\\
&=\hat{D}_{l,K}^{\ast}\psi_{\hat{D}_{l},K}\circ\eta_{l}-\eta_{l}\circ
D_{\hat{S},K}^{\ast}\psi_{D_{\hat{S}},K}-2g^{ij}\psi_{\hat{D}_{l},K}(\frac{\partial}{\partial
x^{i}})\circ\eta_{l}\circ\psi_{D_{\hat{S}},K}(\frac{\partial}{\partial
x^{j}})\\\ &+g^{ij}\psi_{\hat{D}_{l},K}(\frac{\partial}{\partial
x^{i}})\circ\psi_{\hat{D}_{l},K}(\frac{\partial}{\partial
x^{j}})\circ\eta_{l}+g^{ij}\eta_{l}\circ\psi_{D_{\hat{S}},K}(\frac{\partial}{\partial
x^{i}})\circ\psi_{D_{\hat{S}},K}(\frac{\partial}{\partial x^{j}})\end{split}$
and
(3.25)
$\Delta|\eta_{l}|_{K}^{2}\geq-\hat{C}_{0}(|\hat{D}_{l,K}^{\ast}\psi_{\hat{D}_{l},K}|_{K}+|D_{\hat{S},K}^{\ast}\psi_{D_{\hat{S}},K}|_{K}+|\psi_{\hat{D}_{l},K}|_{K}^{2}+|\psi_{D_{\hat{S}},K}|_{K}^{2})|\eta_{l}|_{K}^{2},$
where $\hat{C}_{0}$ is a constant depending only on the dimension of $M$. Set
$\tilde{\eta}_{l}=\frac{\eta_{l}}{\|\eta_{l}\|_{L^{2}}}$. From (3.25) and the
Moser’s iteration, we see that there exists a uniform constant $\hat{C}_{1}$
such that
(3.26) $\|\tilde{\eta}_{l}\|_{L^{\infty}}\leq\hat{C}_{1}$
for all $l$. By the above uniform $C^{0}$-estimate (3.26), the equation (3.24)
and the assumption that $\hat{D}_{l}\rightarrow\hat{D}_{\infty}$ weakly in
$L_{1}^{p}$-topology, elliptic theory implies that there exists a
subsequence of $\tilde{\eta}_{l}$ which converges weakly in
$L_{2}^{p}$-topology to a map $\eta_{\infty}$ such that $\eta_{\infty}\circ
D_{\hat{S}}=\hat{D}_{\infty}\circ\eta_{\infty}$. On the other hand, the fact
that $\|\tilde{\eta}_{l}\|_{L^{2}}=1$ for all $l$ implies that the map
$\eta_{\infty}$ is non-zero.
###### Theorem 3.1.
Let $(E,D)$ be a rank $r$ flat complex vector bundle over a compact Riemannian
manifold $(M,g)$, $K$ be a Hermitian metric on $E$. Suppose that there is a
sequence of gauge transformations $\sigma_{j}$ such that
$D_{j}:=\sigma_{j}(D)\rightarrow D_{\infty}$ weakly in $L_{1}^{p}$-topology
with $D_{\infty,K}^{\ast}\psi_{D_{\infty},K}=0$. Furthermore,
$\|D_{j,K}^{\ast}\psi_{D_{j},K}\|_{L^{\infty}}$ and
$\|\psi_{D_{j},K}\|_{L^{\infty}}$ are uniformly bounded. Then, we have:
(3.27) $(E,D_{\infty})\cong Gr^{JH}(E,D),$
where $Gr^{JH}(E,D)$ is the graded flat complex vector bundle associated to
the Jordan-Hölder filtration of $(E,D)$.
###### Proof.
We prove this by induction. Let’s assume that the conclusion of this theorem
is true for ${\rm rank}(E)<r$. If $(E,D)$ is simple, Proposition 3.2 implies
that there exists an isomorphism between $(E,D)$ and $(E,D_{\infty})$.
Suppose $(E,D)$ is not simple, and then we have the following Jordan-Hölder
filtration of sub-bundles
(3.28) $0=E_{0}\subset E_{1}\subset\cdots\subset E_{i}\cdots\subset E_{l}=E,$
such that every sub-bundle $E_{i}$ is $D$-invariant and every quotient bundle
$(Q_{i},D_{i}):=(E_{i}/E_{i-1},D_{i})$ is flat and simple. Let $S=E_{1}$ and
$Q=E/E_{1}$. We consider the following exact sequence
(3.29) $0\rightarrow S\xrightarrow{i_{0}}E\xrightarrow{P}Q\rightarrow 0.$
Let $D_{S}$ and $D_{Q}$ denote the induced connections on $S$ and $Q$,
$H_{j}=K\sigma_{j}^{\ast}\circ\sigma_{j}$, $\pi_{1}^{H_{j}}$ is the orthogonal
projection onto $E_{1}$ with respect to the metric $H_{j}$. Set
$\pi_{1}^{j}=\sigma_{j}\circ\pi_{1}^{H_{j}}\circ\sigma_{j}^{-1}$. It is easy
to see that
(3.30) $(\pi_{1}^{j})^{\ast K}=\pi_{1}^{j}=(\pi_{1}^{j})^{2}$
and
(3.31) $(\textrm{Id}_{E}-\pi_{1}^{j})\circ D_{j}\pi_{1}^{j}=0.$
According to (2.31), (2.34), (2.38), (2.21) and the conditions of the theorem,
we derive
(3.32)
$\begin{split}\int_{M}|D_{j}\pi_{1}^{j}|_{K}^{2}dV_{g}&=\int_{M}|\pi_{1}^{j}\circ
D_{j}\pi_{1}^{j}|_{K}^{2}dV_{g}=\int_{M}|\pi_{1}^{H_{j}}\circ
D\pi_{1}^{H_{j}}|_{H_{j}}^{2}dV_{g}\\\ &=2\int_{M}\langle
D_{S,H_{j}}^{\ast}\psi_{D_{S},H_{j}}-\pi_{1}^{H_{j}}\circ
D_{H_{j}}^{\ast}\psi_{D,H_{j}}\circ
i_{0},\textrm{Id}_{S}\rangle_{H_{j}}dV_{g}\\\
&=-2\int_{M}\langle\pi_{1}^{H_{j}}\circ D_{H_{j}}^{\ast}\psi_{D,H_{j}}\circ
i_{0},\textrm{Id}_{S}\rangle_{H_{j}}dV_{g}\\\ &\leq
2\int_{M}|D_{H_{j}}^{\ast}\psi_{D,H_{j}}|_{H_{j}}dV_{g}\\\
&=2\int_{M}|D_{j}^{\ast}\psi_{D_{j},K}|_{K}dV_{g}\rightarrow 0.\end{split}$
On the other hand, $|\pi_{1}^{j}|_{K}^{2}\equiv{\rm rank}(S)$. After going to a
subsequence, one can obtain $\pi_{1}^{j}\rightarrow\pi_{1}^{\infty}$ strongly
in $L^{p}\cap L_{1}^{2}$ , and
(3.33) $D_{\infty}\pi_{1}^{\infty}=0.$
We know that $\pi_{1}^{\infty}$ determines a $D_{\infty}$-invariant sub-bundle
$E_{1}^{\infty}$ of $(E,D_{\infty})$ with ${\rm rank}(E_{1}^{\infty})={\rm
rank}(E_{1})$, and
(3.34)
$(E,D_{\infty})\cong(E_{1}^{\infty},D_{1,\infty})\oplus(Q_{\infty},D_{Q_{\infty}}),$
where $Q_{\infty}=(E_{1}^{\infty})^{\bot K}$, $D_{1,\infty}$ and
$D_{Q_{\infty}}$ are the induced connections on $E_{1}^{\infty}$ and
$Q_{\infty}$ by the connection $D_{\infty}$.
Proposition 3.2 yields that there is a subsequence of
$\eta_{j}:=\frac{\sigma_{j}\circ i_{0}}{\|\sigma_{j}\circ i_{0}\|_{L^{2}}}$
which converges to a nonzero map $\eta_{\infty}:S\rightarrow E$
satisfying $\eta_{\infty}\circ D_{S}=D_{\infty}\circ\eta_{\infty}$. Due to
$\pi_{1}^{j}\circ\sigma_{j}\circ i_{0}=\sigma_{j}\circ i_{0}$, we have:
(3.35) $\pi_{1}^{\infty}\circ\eta_{\infty}=\eta_{\infty}.$
The condition $D_{\infty,K}^{\ast}\psi_{D_{\infty},K}=0$ implies that
$D_{\infty}$ is smooth, and then $\eta_{\infty}$ is also smooth. Because
$E_{1}$ is simple, it is easy to see that $\eta_{\infty}$ is an isomorphism
between $(E_{1},D_{S})$ and $(E_{1}^{\infty},D_{1,\infty})$.
Let $\\{e_{\alpha}\\}$ be a local frame of $E_{1}$, and
$H_{j,\alpha\beta}=\langle\eta_{j}(e_{\alpha}),\eta_{j}(e_{\beta})\rangle_{K}$.
We write
(3.36) $\pi_{1}^{j}(Y)=\langle
Y,\eta_{j}(e_{\beta})\rangle_{K}H_{j}^{\alpha\beta}\eta_{j}(e_{\alpha})$
for any $Y\in\Gamma(E)$, where $(H_{j}^{\alpha\beta})$ is the inverse of the
matrix $(H_{j,\alpha\beta})$. Since $\eta_{j}\rightarrow\eta_{\infty}$ weakly
in $L_{2}^{p}$-topology, and $\eta_{\infty}$ is injective, we know that
$\pi_{1}^{j}\rightarrow\pi_{1}^{\infty}$ weakly in $L_{2}^{p}$-topology.
Here, $\pi_{1}^{\infty}:E\rightarrow E$ is just the projection onto
$E_{1}^{\infty}$ with respect to the metric $K$.
Using Lemma 5.12 in [4], we can choose a sequence of $K$-unitary gauge
transformations $u_{j}$ such that $\pi_{1}^{j}=u_{j}\circ\pi_{1}^{\infty}\circ
u_{j}^{-1}$ and $u_{j}\rightarrow\textrm{Id}_{E}$ weakly in
$L_{2}^{p}$-topology as $j\rightarrow\infty$. It is straightforward to check
that $u_{j}(Q_{\infty})=u_{j}((E_{1}^{\infty})^{\bot
K})=(\pi_{1}^{j}(E))^{\bot K}$, and the $K$-unitary gauge transformation
$u_{0}$ satisfies $u_{0}((E_{1}^{\infty})^{\bot K})=E_{1}^{\bot K}$. Set
(3.37) $D_{j}^{Q}=(P^{\ast K})^{-1}\circ u_{0}\circ(\pi_{1}^{\infty})^{\bot
K}\circ u_{j}^{-1}\circ D_{j}\circ u_{j}\circ(\pi_{1}^{\infty})^{\bot K}\circ
u_{0}^{-1}\circ P^{\ast K},$ (3.38) $\hat{\sigma}_{j}=(P^{\ast K})^{-1}\circ
u_{0}\circ(\pi_{1}^{\infty})^{\bot K}\circ u_{j}^{-1}\circ\sigma_{j}\circ
P^{\ast K}$
and
(3.39) $\hat{\sigma}_{j}^{-1}=(P^{\ast K})^{-1}\circ(\pi_{1}^{\infty})^{\bot
K}\circ\sigma_{j}^{-1}\circ u_{j}\circ u_{0}\circ P^{\ast K}.$
One can find that
(3.40) $D_{j}^{Q}=\hat{\sigma}_{j}\circ D_{Q}\circ\hat{\sigma}_{j}^{-1}$
and
(3.41) $D_{j}^{Q}\rightarrow D_{\infty}^{Q}=(P^{\ast K})^{-1}\circ u_{0}\circ
D_{Q_{\infty}}\circ u_{0}^{-1}\circ P^{\ast K}$
weakly in $L_{1}^{p}$-topology. From (2.36), $(\ref{codazzi1})$ and (3.32), it
follows that $\|\psi_{D^{Q}_{j},K}\|_{L^{\infty}}$ and
$\|(D^{Q}_{j,K})^{\ast}\psi_{D^{Q}_{j},K}\|_{L^{\infty}}$ are uniformly
bounded, and $D_{Q_{\infty},K}^{\ast}\psi_{D_{Q_{\infty}},K}=0$. According to
the induction hypothesis, we have
(3.42) $(Q_{\infty},D_{Q_{\infty}})\cong(Q,D_{\infty}^{Q})\cong
Gr^{JH}(Q,D_{Q}).$
This completes the proof of the theorem.
###### Proof of Theorem 1.1.
Thanks to Proposition 2.1 ([3]), we know that the harmonic flow (1.5) has a
long time solution $\sigma(t)$ for $t\in[0,\infty)$, and there exists a
sequence $t_{j}\rightarrow\infty$ such that $\sigma(t_{j})\\{D\\}$ converges
weakly, modulo $K$-unitary gauge transformations, to a flat connection
$D_{\infty}$ in $L_{1}^{p}$-topology, and
$D_{\infty,K}^{\ast}\psi_{\infty,K}=0$. (2.54) and (2.48) imply that
$\|\psi_{\sigma(t)\\{D\\},K}\|_{L^{\infty}}$ and
$\|(\sigma(t)\\{D\\})_{K}^{\ast}\psi_{\sigma(t)\\{D\\},K}\|_{L^{\infty}}$ are
uniformly bounded. By Theorem 3.1, we deduce
(3.43) $(E,D_{\infty})\cong Gr^{JH}(E,D).$
## References
* [1] M.F.Atiyah and R.Bott, The Yang-Mills equations over Riemann surfaces, Philos. Trans. Roy. Soc. London Ser. A, 308(1983), no. 1505, 523-615.
* [2] S.Bando and Y.T.Siu, Stable sheaves and Einstein-Hermitian metrics, in Geometry and Analysis on Complex Manifolds, World Sci. Publ., River Edge, NJ, 1994, 39-50.
* [3] K.Corlette, Flat $G$-bundles with canonical metrics, J. Differential Geom., 28 (1988), no. 3, 361-382.
* [4] G.D.Daskalopoulos, The topology of the space of stable bundles on a compact Riemann surface, J. Differential Geom., 36(1992), no. 3, 699-746.
* [5] G.D.Daskalopoulos and R.A.Wentworth, Convergence properties of the Yang-Mills flow on Kähler surfaces, J. Reine Angew. Math., 575(2004), 69-99.
* [6] Y.Deng, Generalized Okounkov bodies, hyperbolicity-related and direct image problems, Université Grenoble Alpes (Ph.D. thesis), 2017.
* [7] S.K.Donaldson, Twisted harmonic maps and the self-duality equations, Proc. London Math. Soc. (3), 55 (1987), no. 1, 127-131.
* [8] S.K.Donaldson and P.B.Kronheimer The Geometry of Four-Manifolds, Clarendon Press, Oxford (1990).
* [9] N.J.Hitchin, The self-duality equations on a Riemann surface, Proc. London Math. Soc. (3), 55 (1987), no. 1, 59-126.
* [10] M.C.Hong and G.Tian, Asymptotical behaviour of the Yang-Mills flow and singular Yang-Mills connections, Math. Ann. 330(2004), no. 3, 441–472.
* [11] A.Jacob, The Yang-Mills flow and the Atiyah-Bott formula on compact Kähler manifolds, Amer. J. Math., 138(2016), no. 2, 329-365.
* [12] J.Y.Li, C.J.Zhang and X.Zhang, The limit of the Hermitian-Yang-Mills flow on reflexive sheaves, Adv. Math. 325 (2018), 165-214.
* [13] B.Sibley, Asymptotics of the Yang-Mills flow for holomorphic vector bundles over Kähler manifolds: the canonical structure of the limit, J. Reine Angew. Math., 706(2015), 123-191.
* [14] C.Simpson, Higgs bundles and local systems, Inst. Hautes tudes Sci. Publ. Math., 75 (1992), 5-95.
# Accelerating Derivative-Free Optimization with Dimension Reduction and
Hyperparameter Learning
Jordan R. Hall and Varis Carey
###### Abstract.
We consider convex, black-box functions $f$ with additive or multiplicative
noise with a high-dimensional parameter space $\Lambda$ and a data space
$\mathcal{D}$ of lower dimension, where $\nabla f$ exists, but may be
inaccessible. We investigate Derivative-Free Optimization (DFO) in this
setting and propose a novel method, Active STARS (ASTARS), based on STARS [CW]
and dimension reduction in $\Lambda$ via Active Subspace methods
[Constantine2015]. STARS hyperparameters are inversely proportional to the
known dimension $P$ of $\Lambda$, resulting in heavy smoothing and small step
sizes for large $P$. When possible, ASTARS learns a lower-dimensional Active
Subspace, $\mathcal{A}\subset\Lambda$, defining a set of directions in
$\Lambda$ causing the majority of the variance in $f$. ASTARS iterates are
updated with steps only taken in $\mathcal{A}$, reducing the value of $f$ more
efficiently than STARS, which updates iterates in the full variables,
$\Lambda$. In addition to computational savings made by stepping only in
$\mathcal{A}$ when it exists, computational costs may be reduced further by
estimating hyperparameters and $\mathcal{A}$ using STARS iterates, reducing
the total evaluations of $f$ and eliminating the requirement that the user
specify hyperparameters, which may be unknown in our setting. We call this
method Fully Automated ASTARS (FAASTARS). We show that STARS and ASTARS will
both converge – with a certain complexity – even with inexact, estimated
hyperparameters. We also find that FAASTARS converges with the use of estimated
$\mathcal{A}$ and hyperparameters. We explore the effectiveness of ASTARS and
FAASTARS in numerical examples which compare ASTARS and FAASTARS to STARS.
###### Contents
1. 1 Introduction
2. 2 Methods
3. 3 Results
4. 4 Conclusion and Discussion
## 1\. Introduction
In this paper, we present an efficient optimization algorithm for expensive,
noisy objective functions which avoids the use of full gradient information.
Efficiency is achieved by performing parameter space dimension reduction
whenever possible, which reduces the burden of many computational tasks,
including optimization. Additionally, we present a fully-automated version of
our algorithm which estimates its hyperparameters and performs approximate
dimension reduction by reusing iterates, saving costly function evaluations.
Optimizing a deterministic function in the presence of its noise is a problem
of Optimization Under Uncertainty (OUU) and quite often arises as a necessary
step in important mathematical and statistical applications. Many objective
functions appearing in Uncertainty Quantification (UQ) problems and
applications involve post-processing evaluations of noisy functions (i.e.,
physical models). For instance, Stochastic Inverse Problems (SIPs) may be
solved by posing equivalent deterministic, convex optimization problems (under
the heavy assumptions that the true model $f$ is linear and distributions
are Gaussian). We carefully address this topic and propose the use of our
methods for optimization in this setting in a forthcoming paper.
We define a parameter space $\Lambda$ with $P:=\dim\left(\Lambda\right)$, a
data space $\mathcal{D}$, and a parameter-to-observable map or model
$f:\Lambda\rightarrow\mathcal{D}$, which we assume is convex. We assume
$\dim\left(\mathcal{D}\right)=:D<P$. (We focus on the case
$\mathcal{D}\subseteq\mathbb{R}$.) Points in $\mathcal{D}$ may be known values
of $f(\lambda),\lambda\in\Lambda$; we may write $d=f(\lambda)$ to denote the
particular datum corresponding to the evaluation of a point
$\lambda\in\Lambda$ under $f$.
Many physical systems of interest possess turbulent or chaotic behavior. The
physical state of a time-dependent system $u(t,\lambda)$ and the corresponding
parameter-to-observable map may be modeled as a stochastic process,
$\hat{f}(u(t,\lambda))$, a deterministic function with additive or
multiplicative noise. We model a noisy signal with
$\hat{f}(\lambda;\xi)=f(\lambda)+\epsilon(\xi)$ (additive noise) or
$\hat{f}(\lambda;\xi)=f(\lambda)(1+\epsilon(\xi))$ (multiplicative noise). Now
$f$ represents some true signal and $\hat{f}$ represents a noisy, polluted
signal. In both cases $\epsilon(\cdot)$ denotes a random variable specified by
realizations or draws $\epsilon(\xi)$ and we assume $\epsilon(\cdot)$ has
bounded variance $\sigma^{2}<\infty$. We postulate that $\hat{f}$ is an
expensive, black-box function, so that $f$ lacks a closed form entirely and a
single evaluation may take minutes to hours.
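As a concrete stand-in for this setup (the function names and the choice of a quadratic $f$ are illustrative, not from the paper), a noisy black-box can be mocked by wrapping a cheap convex signal with additive or multiplicative noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(lam):
    """True deterministic, convex signal (here a toy quadratic)."""
    return float(np.dot(lam, lam))

def f_hat(lam, sigma=0.1, mode="additive"):
    """Noisy black-box evaluation: f + eps or f * (1 + eps), eps ~ N(0, sigma^2)."""
    eps = rng.normal(0.0, sigma)
    if mode == "additive":
        return f(lam) + eps
    return f(lam) * (1.0 + eps)

lam = np.ones(10)
vals = [f_hat(lam) for _ in range(2000)]
# Since eps is unbiased, the sample mean of f_hat recovers f(lam) = 10.
```

Only `f_hat` would be callable in the paper's setting; `f` is exposed here solely so the sketch can be checked.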
The efficient and accurate extraction of gradients of $\hat{f}$ in parameter
space is a challenging undertaking, as popular techniques based on
linearization, including adjoint methods, are inaccurate [lea2000, Qiqi2014].
The finite-difference approximations to $\nabla f$ involve
$P=\text{dim}(\Lambda)$ additional, usually nonlinear model solves for
physical system states $u(t,\lambda+\delta)$, and may be greatly polluted by
the noise in $\hat{f}=f+\epsilon$ or $\hat{f}=f(1+\epsilon)$.
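To see the pollution numerically (a toy sketch; all names and constants are ours, not the paper's), forward differences on a noisy quadratic cost $P+1$ evaluations and pick up an error of order $\sigma/\delta$ per component, so shrinking the stencil $\delta$ eventually makes the estimate worse rather than better:

```python
import numpy as np

rng = np.random.default_rng(1)
P, sigma = 50, 1e-2

def f_hat(lam):
    # Noisy quadratic: the true gradient at lam is exactly lam.
    return 0.5 * float(np.dot(lam, lam)) + rng.normal(0.0, sigma)

def fd_grad(lam, delta):
    """Forward-difference gradient: P + 1 noisy model evaluations."""
    base = f_hat(lam)
    g = np.empty(P)
    for i in range(P):
        step = np.zeros(P)
        step[i] = delta
        g[i] = (f_hat(lam + step) - base) / delta
    return g

lam = np.ones(P)
# The noise term ~ sigma/delta dominates for tiny delta.
err_tiny = np.linalg.norm(fd_grad(lam, 1e-4) - lam)
err_moderate = np.linalg.norm(fd_grad(lam, 1e-1) - lam)
```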
As a consequence of these difficulties, optimization in this setting often
needs to be performed by algorithms which do not require gradient
approximations. Otherwise, we expect to surpass a reasonable computational
budget since even one gradient approximation involves $\mathcal{O}(P)$
evaluations of $\hat{f}$ using, for instance, finite differencing. With
$\nabla f$ considered unavailable and infeasible to approximate, we consider
Derivative-Free Optimization (DFO) techniques, which avoid evaluations of
$\nabla f$, as well as expensive and possibly inaccurate approximations to
full pointwise gradients. In the following section we provide a discussion of
DFO, DFO hyperparameter learning, and parameter space dimension reduction via
Active Subspace (AS) methods.
### 1.1. Derivative-Free Optimization (DFO)
As in [CW, Nesterov, ARDFDS], we consider the additive noise OUU problem
(1)
$\displaystyle\min_{\lambda\in\Lambda}\quad\mathbb{E}\left[f(\lambda)+\epsilon(\xi)\right],$
and the multiplicative noise OUU problem
(2)
$\displaystyle\min_{\lambda\in\Lambda}\quad\mathbb{E}\left[f(\lambda)(1+\epsilon(\xi))\right],$
where we assume:
1. (i.) $f:\Lambda=\mathbb{R}^{P}\to\mathbb{R}=\mathcal{D}$ is continuously differentiable, convex, and $\nabla f$ has a real-valued Lipschitz constant $L_{1}>0$;
2. (ii.) $\epsilon(\xi)$ is a random variable with probability density $\pi_{\epsilon}(\epsilon(\xi))$;
3. (iii.) for all $\lambda$ and $\xi$ the noise $\epsilon(\xi)$ is independent and identically distributed, has bounded variance $\sigma^{2}$, and is unbiased, i.e., $\mathbb{E}_{\xi}(\epsilon(\xi))=0$;
4. (iv.) for multiplicative OUU, the additional assumptions that the signal-to-noise ratio is bounded, i.e., $\mathbb{E}((1+\epsilon(\xi))^{-1})<b$ for some $b>0$, and that the support of $\epsilon(\xi)$ is bounded by $\pm a$, $a<1$.
We consider DFO algorithms suited for additive and multiplicative noise. DFO
algorithms require nothing more than the ability to evaluate the noisy model
and randomly draw from a normal distribution; $\nabla f$ is not required.
Given an initial iterate $\lambda^{(0)}$, many DFO algorithms find subsequent
iterates $\lambda^{(k)}$ by random walks or ball walks in $\Lambda$ specified
by draws from a $P$-dimensional Gaussian distribution. Iterates are often
controlled by prescribed hyperparameters (e.g., step size) which depend on
potentially unknown properties of the model. For example, in the DFO algorithm
considered in this paper [CW], the hyperparameters depend on the variance in
the noise, $\sigma^{2}$, and on the first-degree or gradient Lipschitz
constant of $f$, denoted $L_{1}$, which we assume to exist; i.e., the real
number $L_{1}>0:||\nabla f(\lambda^{1})-\nabla f(\lambda^{2})||\leq
L_{1}||\lambda^{1}-\lambda^{2}||$ $\forall\lambda^{1},\lambda^{2}\in\Lambda$,
where $||\cdot||$ denotes a norm on $\Lambda$ (which is a normed linear
space). We discuss strategies for estimating or learning DFO hyperparameters
in Section 1.1.2.
#### 1.1.1. STARS
The authors in [CW] present the STep-size Approximation in Randomized Search
(STARS), a DFO algorithm used to numerically solve the additive and
multiplicative OUU problems (1) and (2) under assumptions (i.)-(iv.). STARS
uses small perturbations of iterates in $\Lambda$ by the addition of a random
vector with components drawn from a normal distribution, computes the noisy
function value at the randomly perturbed point, and updates iterates using a
Gaussian-smoothed finite-difference for approximate gradient information in a
gradient-descent-type scheme. STARS only requires the ability to evaluate
$\hat{f}$ and take random draws from a normal distribution. All in all, the
algorithm can be implemented in about 10 lines of code in any standard
computing language. (We used Python 3.7 for the results presented in this
paper.)
STARS requires prescribing the values of two key hyperparameters – denoted
$\mu^{*}_{k}$ and $h$ – which are the algorithm’s smoothing factor and step
size, respectively. ($k=1,\ldots,M$ denotes STARS iterations 1 through $M$,
the maximum number of iterations.) The step length $h$ will remain constant
for all iterations regardless of the type of noise in $\hat{f}$. In the case
of additive OUU, $\mu^{*}_{k}$ will also be constant, i.e.,
$\mu^{*}_{k}=\mu^{*}$, a fixed value for all iterations $k=1,\ldots,M$.
However, in the case of multiplicative OUU, the smoothing factor $\mu^{*}_{k}$
will need to be adjusted at every iteration $k$. For the additive noise OUU
problem (1), the values for $\mu^{*}$ and $h$ are
(3)
$\displaystyle\mu^{*}:=\left(\frac{8\sigma^{2}P}{L_{1}^{2}(P+6)^{3}}\right)^{1/4}\quad
h:=(4L_{1}(P+4))^{-1},$
which are proven as optimal values for STARS’ convergence in [CW].
In the multiplicative noise OUU problem (2), the step length $h$ remains the
same, exactly as in (3) above, held constant for each iteration. However the
smoothing factor must be updated for each iteration $k=1,\ldots,M.$ As shown
in [CW], the optimal smoothing factor for an iterate $k$ is given by
(4)
$\displaystyle\mu^{*}_{k}:=\left(\frac{16\sigma^{2}\hat{f}(\lambda^{(k)})^{2}P}{L_{1}^{2}(1+3\sigma^{2})(P+6)^{3}}\right)^{1/4}.$
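As a concrete illustration, the formulas (3) and (4) translate directly into code. The function names below are ours, not from [CW]; this is a minimal sketch assuming $\sigma^{2}$, $L_{1}$, $P$, and (for the multiplicative case) the current noisy value $\hat{f}(\lambda^{(k)})$ are known:

```python
def stars_step_size(L1, P):
    # Step length h from (3); constant in both the additive and multiplicative cases.
    return 1.0 / (4.0 * L1 * (P + 4))

def smoothing_additive(sigma2, L1, P):
    # Fixed smoothing factor mu* from (3) for additive noise.
    return (8.0 * sigma2 * P / (L1**2 * (P + 6) ** 3)) ** 0.25

def smoothing_multiplicative(sigma2, L1, P, f_k):
    # Iteration-dependent smoothing factor mu*_k from (4); f_k = f_hat(lambda^(k)).
    return (16.0 * sigma2 * f_k**2 * P
            / (L1**2 * (1.0 + 3.0 * sigma2) * (P + 6) ** 3)) ** 0.25
```

Note that $\sigma^{2}$ enters only the smoothing factors, not the step length; this asymmetry matters for the sensitivity discussion later.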
We present STARS suited for additive or multiplicative OUU as in (1) and (2)
in the pseudocode below.
Input: maxit$=:M$; $\lambda^{(0)}$; $f_{0}:=\hat{f}(\lambda^{(0)})$; $h$;
$k=1$
while _$k\leq M$_ do
Form smoothing factor $\mu^{*}_{k}$
Draw $u^{(k)}$, where $u^{(k)}_{p}\sim N(0,1)$ for $p=1,\ldots,P$;
Evaluate $g_{k}:=\hat{f}(\lambda^{(k-1)}+\mu^{*}_{k}\cdot u^{(k)})$;
Set $\displaystyle d^{(k)}:=\frac{g_{k}-f_{k-1}}{\mu^{*}_{k}}\cdot u^{(k)}$;
Set $\lambda^{(k)}=\lambda^{(k-1)}-h\cdot d^{(k)}$;
Evaluate $f_{k}:=\hat{f}(\lambda^{(k)})$;
Set $k=k+1$;
end while
Output: ($\lambda^{(M)}$, $f_{M}$)
Algorithm 1 STARS [CW]
STARS typically converges to a minimum when the hyperparameters $\mu_{k}^{*}$
and $h$ are within an order of magnitude of their true values. The closer the
user-defined $\mu_{k}^{*}$ and $h$ values are to the truth, the faster STARS
converges. If $\mu_{k}^{*}$ and $h$ are underestimated, STARS will take very
small and heavily smoothed steps, converging slowly; however, if the values
are overestimated, the algorithm may diverge, in the sense that
function values will grow with each iterate.
It is then of high interest to tune the values $\mu_{k}^{*}$ and $h$ so that
function evaluations are not wasted. In the following subsection, we discuss
learning the quantities that $\mu_{k}^{*}$ and $h$ depend on – namely,
$\sigma^{2}$ and $L_{1}$.
#### 1.1.2. Learning STARS Hyperparameters
STARS [CW], like many DFO algorithms [Nesterov], exhibits optimal convergence
if and only if its hyperparameters – namely the smoothing factor and step size
– are properly tuned. Tuning STARS hyperparameters is a matter of learning
$\sigma^{2}$ and $L_{1}$. We examine and propose algorithmic methods of
estimating STARS hyperparameters so that one need not specify the
hyperparameters at all, fully automating the process of solving problems (1)
and (2).
To estimate $\sigma^{2}$, we rely on the ECNoise algorithm [MW] which even for
$P$ large requires few evaluations of $\hat{f}$ – often 6 to 10 evaluations
will suffice. Briefly, ECNoise uses a set of nearby samples
$s^{i}:=(\lambda^{i},\hat{f}(\lambda^{i}))$ of points $\lambda^{i}$ along a
line in $\Lambda$. Forming a classical difference table of iterative
residuals, the authors show that estimators $\hat{\sigma}^{2}$ of $\sigma^{2}$
may be formed using scaled averaging and selection criteria discussed more in
the next section. Learning $\sigma^{2}$ is performed prior to STARS, and
herein is viewed as a computational expense one must pay up front to ensure
convergence of the algorithm.
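The difference-table idea can be sketched as follows. This is a simplified stand-in for the full ECNoise algorithm of [MW] (whose selection criteria we omit), using the fact that $k$-th finite differences along a line annihilate a smooth trend of degree less than $k$ and scale the variance of i.i.d. noise by $\binom{2k}{k}$:

```python
import math
import numpy as np

def estimate_sigma2(fvals, k=4):
    """Estimate noise variance from function values at equispaced points on a line.

    Simplified ECNoise-style estimator: the k-th differences of the samples
    annihilate a polynomial trend of degree < k, leaving differenced noise
    whose mean square is binom(2k, k) * sigma^2.
    """
    d = np.asarray(fvals, dtype=float)
    for _ in range(k):
        d = np.diff(d)                 # build successive rows of the difference table
    scale = math.comb(2 * k, k)        # Var(Delta^k eps) = C(2k, k) * sigma^2
    return float(np.mean(d**2) / scale)
```

In practice ECNoise also checks that the differenced values look noise-like before accepting an estimate; the sketch above trusts the chosen $k$ blindly.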
Learning $L_{1}$ in our setting is a challenge mainly due to our postulated
lack of access to $\nabla f$. Given $S$ pointwise estimates to the gradient,
which we denote with $\hat{\nabla}f(\lambda^{i})$, $i=1,\ldots,S$, and
assuming $\lambda^{i}\neq\lambda^{j}$ for $i\neq j$ we could consider an
estimator such as
(5)
$\hat{L_{1}}:=\max_{i\neq j}\frac{\left\|\hat{\nabla}f(\lambda^{i})-\hat{\nabla}f(\lambda^{j})\right\|-2\epsilon^{*}}{\|\lambda^{i}-\lambda^{j}\|},\quad\epsilon^{*}=\sup_{\xi}|\epsilon(\xi)|,\quad i,j=1,\ldots,S,$
given in [Calliess].
There are several problems with such an approach in this setting. Mainly,
forming each $\hat{\nabla}f$ is expensive, requiring at least $P+1$
evaluations of $\hat{f}$ for each approximation. Even if one uses only 3
samples $s^{i}$, $i=1,2,3$, forming $\hat{L_{1}}$ requires $3(P+1)$
evaluations of $\hat{f}$, which will often exceed the number of function evaluations needed
for STARS to converge, assuming its hyperparameters are reasonably tuned.
Another challenge involves specifying $\epsilon^{*}$ in (5), which is
subtracted from the estimator’s numerator to control for noise in $\hat{f}$.
To save computational expense, we propose forming an initial estimate to
$L_{1}$ by re-using the samples from ECNoise to approximate directional
derivatives in a finite-difference fashion, avoiding the expensive approach
above (as well as approximating the full $\nabla f$). Then, once STARS is
initiated, $\hat{L_{1}}$ may be updated using information from approximate
directional derivatives formed from iterates (and their corresponding function
values). We shall see that the iterates (and intermediate iterates) formed
during STARS lend themselves naturally to finite differencing to estimate
$L_{1}$ – if a larger (more pessimistic) value of $L_{1}$ is discovered, we
replace or update $\hat{L_{1}}$ with this new value.
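The un-centered finite-difference idea can be sketched as follows (the helper names are ours). Given three collinear, generally unevenly spaced samples, the second divided difference approximates half the second derivative of $f$ along that line, and the magnitude of any directional second derivative is bounded by $L_{1}$:

```python
def directional_L1_estimate(t, f):
    """Lower-bound estimate of L_1 from three collinear, un-centered samples.

    t = (t0, t1, t2): scalar positions of the points along a line in Lambda;
    f = (f0, f1, f2): the corresponding function values.
    The second divided difference approximates f''/2 along the line,
    and |f''| <= L_1 in any direction.
    """
    t0, t1, t2 = t
    f0, f1, f2 = f
    dd = ((f2 - f1) / (t2 - t1) - (f1 - f0) / (t1 - t0)) / (t2 - t0)
    return 2.0 * abs(dd)

def update_L1(L1_hat, t, f):
    # Keep the more pessimistic (larger) of the current and the new estimate.
    return max(L1_hat, directional_L1_estimate(t, f))
```

Because each estimate only lower-bounds $L_{1}$ along one line, taking the running maximum over many iterate triples is what makes the estimate progressively safer.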
We also propose fitting a surrogate using $\hat{f}$ values often collected
from STARS iterates. One may use the closed-form surrogate to estimate
$L_{1}$. We observe the surrogate-based method for estimating $L_{1}$ is
typically more accurate than finite differences, which – as we discussed above
– are sensitive to noise.
Updates to $\hat{L_{1}}$ may be performed between iterations of STARS. As
additional $\hat{f}$ evaluations are obtained, one may update $\hat{L_{1}}$
anytime an estimate $L_{1}^{\text{update}}$ with
$\hat{L_{1}}<L_{1}^{\text{update}}$ is found. The updated value
$\hat{L_{1}}=L_{1}^{\text{update}}$ may be formed using an un-centered second-
order finite difference scheme, which is similar to how $L_{1}^{\text{init}}$
is formed, or by refitting a surrogate with the newly-gathered samples of
$\hat{f}$. (One must use care to adjust the finite difference formulas to
account for iterates and intermediate function evaluations, which are
generally un-centered.)
### 1.2. Dimension Reduction via the Active Subspace (AS) Method
We consider functions which map a high-dimensional space $\Lambda$ to a data
space $\mathcal{D}$ of smaller dimension; i.e.,
$\dim\mathcal{D}=D\ll P=\dim\Lambda$. Many functions of interest actually
represent post-processed quantities from the solution of complex physical
models. It is not often the case that every parameter has an equal impact on
function values; usually some parameters matter more than others. If it is
possible to mimic the response of $f$ by processing fewer parameters, we can
expect computational savings.
We consider AS methods described by Paul Constantine in [Constantine2015] and
an equivalent method by T.M. Russi in [Russi]. These techniques seek to
explain outputs $f(\Lambda)$ in an AS denoted $\mathcal{A}\subset\Lambda$ for
which $\dim(\mathcal{A})<P$. Here we discuss the theoretical formulation of
$\mathcal{A}$. The details of finding $\mathcal{A}$ algorithmically are
discussed in the following section.
We note that AS requires, at the very least, approximations to $\nabla f$. For
the discussion in this section, we continue with the understanding that
$\nabla f$ is approximated in some fashion, the details of which are
discussed in the following Methods section. We assume that $\nabla
f(\lambda)$ is square integrable in $\Lambda$ with respect to a probability
density $\pi_{\Lambda}(\lambda)$ that is positive everywhere in $\Lambda$ and
0 otherwise.
In AS techniques – and many other dimension reduction techniques [Russi] – we
transform inputs $\lambda$ to a bounded domain with some fixed variance,
typically so that $\lambda\in[-1,1]^{P}$ for all $\lambda$. Then, as in
[ConstantineMC], we write the sensitivity matrix
(6) $W=\int_{\Lambda}\nabla f(\lambda)\nabla
f(\lambda)^{\top}\pi_{\Lambda}(\lambda)d\lambda,$
which is a $P\times P$ symmetric positive semi-definite matrix defining a
certain covariance of $\nabla f$ in $\Lambda$. This interpretation of (6)
suggests the computation of the eigendecomposition of $W$,
(7) $W=VQV^{\top},$
where $V$ is $P\times P$ unitary with columns given by the eigenvectors
$v^{i}$, $i=1,\ldots,P$ of $W$ and $Q$ is a diagonal matrix containing the
ordered eigenvalues of $W$, $\{q_{i}\}_{i=1}^{P}$. To find the AS, we seek a
drop-off in magnitude between some pair of eigenvalues, $q_{j}$ and $q_{j+1}$,
$1\leq j<P$, where $q_{j}\gg q_{j+1}$. In this paper, we typically
use $95\%$ of the eigenvalues by weight, so that
$q_{1}+\cdots+q_{j}\geq\tau(q_{1}+\cdots+q_{P})$, where $\tau=0.95$. We shall
see that sometimes the eigenvalue threshold $\tau\in(0,1)$ must be changed –
depending on the problem – to obtain a quality AS. The active subspace of $f$
is the span of $v^{1},\ldots,v^{j}$, which we denote
(8) $\mathcal{A}(f)=\text{span}\{v^{1},\ldots,v^{j}\}.$
Likewise, we define the inactive subspace of $f$ with
(9) $\mathcal{I}(f)=\text{span}\{v^{j+1},\ldots,v^{P}\}.$
The fact that $v^{1},\ldots,v^{j}$ correspond to large eigenvalues is
precisely why they account for most of the variance in function values.
In fact, one can view an AS as a reasonable choice of principal components
after a full Principal Component Analysis (PCA) is performed in the gradient
space $W$; for more details on this viewpoint, we refer the reader to Section
6.4 in [Russi].
For a point $\lambda\in\Lambda$, we define
(10)
$\mathcal{P}_{\mathcal{A}}(\lambda)=\sum_{i=1}^{j}\left(({v^{i}})^{T}\lambda\right)v^{i}\in\mathcal{A},$
which is a projection of the point $\lambda$ into the AS of $f$. We call this
projection an active variable, which is a point in the AS $\mathcal{A}$. We
define a submatrix of $V$ with $V_{\mathcal{A}}:=V_{1:P,1:j}$, the first $j$
columns of $V$ from the eigendecomposition of $W$ in (7). Then (10) can be
rewritten as
$\mathcal{P}_{\mathcal{A}}(\lambda)=V_{\mathcal{A}}V_{\mathcal{A}}^{\top}\lambda.$
We have arrived at the property that
(11) $f\left(\mathcal{P}_{\mathcal{A}}(\lambda)\right)\approx f(\lambda).$
The above property gives the ability to save computational expense in many
scenarios in UQ, including optimization, approximation, and solving inverse
problems [Constantine2015]. We analogously define a projection into the
inactive variables with
(12)
$\mathcal{P}_{\mathcal{I}}(\lambda)=\sum_{i=j+1}^{P}\left(({v^{i}})^{T}\lambda\right)v^{i}\in\mathcal{I}.$
We define another submatrix of $V$ with $V_{\mathcal{I}}:=V_{1:P,j+1:P}$, the
last $P-j$ columns of $V$ from the eigendecomposition of $W$ in (7). Then (12)
can be rewritten as
$\mathcal{P}_{\mathcal{I}}(\lambda)=V_{\mathcal{I}}V_{\mathcal{I}}^{\top}\lambda.$
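The constructions (6)-(10) can be sketched numerically with a Monte Carlo average of sampled gradients. The code below is a minimal illustration under the assumption that gradient samples are available (the function names are ours):

```python
import numpy as np

def active_subspace(grads, tau=0.95):
    """Estimate an active subspace from sampled gradients (rows of `grads`).

    Monte Carlo version of (6)-(8): W ~ (1/S) grads^T grads, eigendecomposed;
    j is the smallest index capturing a fraction tau of the eigenvalue mass.
    """
    G = np.asarray(grads, dtype=float)
    W = G.T @ G / G.shape[0]                  # sample average of grad grad^T
    q, V = np.linalg.eigh(W)                  # ascending eigenvalues of symmetric W
    q, V = q[::-1], V[:, ::-1]                # reorder to descending
    j = int(np.searchsorted(np.cumsum(q) / q.sum(), tau)) + 1
    return V[:, :j], q                        # V_A (P x j) and the full spectrum

def project_active(V_A, lam):
    # Projection (10): P_A(lambda) = V_A V_A^T lambda.
    return V_A @ (V_A.T @ lam)
```

For a ridge function $f(\lambda)=(a\cdot\lambda)^{2}$ all sampled gradients are parallel to $a$, so the estimated AS is one-dimensional and property (11) holds exactly.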
In cases in which $j=1$ or $2$, we can use visualizations to check the extent
to which the AS accounts for function values $\hat{f}(\lambda)$ by checking
for resolution in a _sufficient summary plot_ [Constantine2015], where one
plots active variables against function values. Interpolating these values
results in a curve ($j=1$) or surface ($j=2$) called a
response surface. In these plots, we hope to see a pattern between the active
variables and their function values. For example, if $f$ is quadratic in
its active variables, then we expect to see a quadratic response surface. When
$j>2$, one may check the correlation coefficient between the response
surface and true function values, since visualization techniques become
infeasible.
In the event that $\nabla f$ is unavailable, the methods above are unusable
and approximation methods are required. We turn our attention to briefly
discuss estimating an AS.
#### 1.2.1. AS Learning
We present two methods we utilized for learning an AS from $\hat{f}$
evaluations. In both cases, we often heavily violate a standard assumption
that $\Lambda$ samples are independent and random – the samples we use are
deterministic samples from evaluations of $\hat{f}$ from ECNoise (usually 7-10
samples) and STARS (where we often only take enough steps in full variables to
train a linear or quadratic surrogate). (Recall that points in ECNoise are
drawn along a line, but the points in STARS will be at least somewhat random,
dictated by a random vector.) In practice, finding an AS of $f$ without $\nabla
f$ will require forming an approximation to $W$ in (6) in some fashion
[ConstantineMC] and necessarily involves full $\nabla f$ approximations. We
present two methods for AS learning we considered – an approach involving a
Monte Carlo approximation to $W$, and an approach involving the use of a
closed-form surrogate to obtain $\nabla f$ approximations.
The Monte Carlo approach is simple to implement, found in [Russi], and
equivalent to the method in [ConstantineMC]. For a draw
$\lambda^{i}\in\Lambda$, we obtain an approximation to $\nabla f$ and store
the row vector $\nabla f(\lambda^{i})^{\top}$ in a matrix $\tilde{W}$. The
eigendecomposition of $\tilde{W}=\tilde{V}Q\tilde{V}^{\top}$ gives
approximations to the eigenvectors and eigenvalues of $W$ as in (7).
One initializes the method by performing $S$ random draws of
$\lambda^{i}\in\Lambda$. We then compute $\hat{f}(\lambda^{i})$ for all
$i=1,\ldots,S$ samples, which we note will require $S$ evaluations of $f$; in
a realistic setting, this would require $S$ model solves. We define
$D_{S}:=\{s^{i}\}_{i=1}^{S}$, a set of $S$ pairs of samples $\lambda^{i}$
and their function values. Next, we need $\nabla f$ – which we assume that we
do not have in closed form – evaluated at $\lambda^{i}$ for all
$i=1,\ldots,S$. Hence, we generally need a gradient approximation method
[Constantine2015, Smith]. Here we form a local linear, global linear, or
global quadratic surrogate to $f$ using $D_{S}$ along the lines of
[Constantine2015]. We also consider RBFs as an additional surrogate method.
The gradient of the closed-form surrogate is used to approximate $\nabla f$.
(Despite the fact that we have now formed a potentially global approximation
to $\nabla_{\Lambda}f$ via the surrogate, following [CW] we will still prefer the
directional derivative approximations when taking steps in STARS, since we take
steps in exactly the direction we are approximating, $\hat{D}_{v}^{i}$ for some
descent direction $v$.) Using this approximation, we denote each estimate of
$\nabla f(\lambda^{i})$ by $\hat{\nabla}f(\lambda^{i})$ and we define the
$S\times P$ matrix $\tilde{W}$ (which is presented below as
$\tilde{W}^{\top}$)
(13)
$\tilde{W}^{\top}:=\begin{bmatrix}\hat{\nabla}f(\lambda^{1})&\cdots&\hat{\nabla}f(\lambda^{S})\end{bmatrix}.$
Applying an eigendecomposition to $\tilde{W}^{\top}\tilde{W}$, which is
$P\times P$, we obtain
$\tilde{W}^{\top}\tilde{W}=\tilde{V}\tilde{Q}\tilde{V}^{\top}$ and search for
a drop-off in the magnitude of the numerical eigenvalues
$\{\tilde{q}_{i}\}_{i=1}^{P}$ (using an eigenvalue threshold
$\tau\in(0,1)$). Assuming such a drop-off occurs for an index
$\tilde{j}:1\leq\tilde{j}<P$, we let
(14)
$\tilde{\mathcal{A}}\left(\hat{f};D_{S}\right):=\text{span}\{\tilde{v}^{1},\ldots,\tilde{v}^{\tilde{j}}\}$
denote the AS of $\hat{f}$ with respect to the samples $D_{S}$. (We use this
notation including $D_{S}$ to emphasize the dependence of the approximated AS,
$\tilde{\mathcal{A}}$, on the samples taken in $\Lambda$.)
Instead of performing the Monte Carlo method above, one may use a closed-form
surrogate to obtain an approximate (but also closed-form) gradient function,
which may be used in place of the exact gradient in the formulations of the
last section. We often prefer the surrogate-based approach for its simplicity,
as well as its estimation quality and performance (which is often better than
the Monte Carlo method).
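As a minimal sketch of the surrogate route (helper names ours, not code from [Constantine2015] or [Russi]): fit a global linear surrogate $F(\lambda)=c+g\cdot\lambda$ by least squares and use its constant gradient $g$ in place of $\nabla f$. The resulting $\tilde{W}$ then has rank one, so this variant can only ever recover a one-dimensional AS, which is exactly its appeal for ridge-like functions:

```python
import numpy as np

def linear_surrogate_gradient(lams, fvals):
    """Least-squares fit of F(lam) = c + g . lam; returns the constant gradient g."""
    A = np.hstack([np.ones((len(fvals), 1)), np.asarray(lams, dtype=float)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(fvals, dtype=float), rcond=None)
    return coef[1:]                        # drop intercept; g approximates grad f

def as_from_linear_surrogate(lams, fvals):
    g = linear_surrogate_gradient(lams, fvals)
    v = g / np.linalg.norm(g)              # single active direction
    return v.reshape(-1, 1)                # V_A with j = 1
```

A quadratic or RBF surrogate would be fit the same way but yields a non-constant gradient, and hence a possibly higher-dimensional estimated AS.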
$\ast$ $\ast$ $\ast$
In the following Methods section (2), we present the Active STARS (ASTARS)
and Fully-Automated ASTARS (FAASTARS) algorithms. ASTARS leverages AS dimension
reduction in $\Lambda$ to perform STARS steps in a lower-dimensional space
when possible. FAASTARS is fully automated in the sense that the user need not
specify hyperparameters nor provide the true AS of $f$. In the Results section
(3), we analyze and compare the performance of STARS, ASTARS, and FAASTARS in
a series of examples. Finally, in the Conclusion and Discussion section (4),
we review the extent of ASTARS and FAASTARS efficiency. Limitations are
discussed and future research questions are posed.
## 2. Methods
### 2.1. Active STARS (ASTARS)
Given $\hat{f}$ and the exact AS $\mathcal{A}$ of the noise-free signal $f$,
we are interested in investigating the effectiveness of optimizing $\hat{f}$
in its active variables. There are several approaches one may consider, and
some of those approaches and their corresponding results are discussed in the
remainder of the paper; in this section, we focus on performing STARS in the
true, known AS of the true signal, $f$, denoted $\mathcal{A}$.
Active STARS, or ASTARS, is a modification of STARS in which iterates only
take random walks in directions lying in $\mathcal{A}$. In detail, at
iteration $k$, STARS uses random walks in directions given by drawing a random
vector $u^{(k)}$ of dimension $P$ in which every component
$u_{i}^{(k)},i=1,\ldots,P$ of $u^{(k)}$ is drawn from a specified normal
distribution. Instead, given the first $j$ eigenvectors $v^{1},\ldots,v^{j}$
spanning $\mathcal{A}$, ASTARS takes $j$ draws from a specified normal
distribution, which we denote $r_{i}\sim N(0,\omega_{i}^{2})$, $i=1,\ldots,j$,
defining the random vector $u_{\mathcal{A}}^{(k)}$ for the $k$-th random
active direction as
(15) $\displaystyle u_{\mathcal{A}}^{(k)}=\sum_{i=1}^{j}r_{i}v^{i},\quad
r_{i}\sim N(0,\omega_{i}^{2}),\quad i=1,\ldots,j.$
The direction $u_{\mathcal{A}}^{(k)}$ is a randomly-weighted linear
combination of the active directions of $f$ and is the direction used in place
of $u^{(k)}$ in STARS. In the case that there is not a large drop-off in the
spectrum of $\tilde{W}$, then all $P$ directions are active, and ASTARS
reduces to performing STARS (in all variables).
In ASTARS (Algorithm 2 below), we equally weight the $j$ active directions
using unit variance, in the sense that $\omega_{i}^{2}=1$ for $i=1,\ldots,j$.
The use of unit variance in the random coefficients matches the theoretical
assumptions in [CW], and is an assumption in the proofs in our technical
report. However, one may suspect other choices of $\omega_{i}$ may improve
ASTARS performance in some cases.
Other weighting schemes considered include taking
$\omega_{i}=\sqrt{\tilde{q_{1}}}/\sqrt{\tilde{q_{i}}}$, $i=1,\ldots,j$, where
we recall $\tilde{q_{i}}$ denotes the $i$-th numerical eigenvalue obtained
from the eigendecomposition of $\tilde{W}^{\top}\tilde{W}$ which are indexed
so that $\tilde{q_{1}}\geq\cdots\geq\tilde{q_{j}}$. In our numerical
experiments, this and other alternate weighting schemes exhibited promising
ASTARS convergence, and further research is needed, which we plan to present
in a follow-up paper investigating this and other extensions of the ASTARS
framework.
ASTARS requires modifying the initialization and changing the second step of
STARS (Algorithm 1) by replacing $u^{(k)}$ with $u^{(k)}_{\mathcal{A}}$ as we
discussed above. ASTARS also uses modified STARS hyperparameters. For the
additive noise OUU problem, we define the active hyperparameters
$\mu_{\mathcal{A}}^{*}$ and $h_{\mathcal{A}}$
(16)
$\displaystyle\mu_{\mathcal{A}}^{*}:=\left(\frac{8\sigma^{2}j}{L_{1}^{2}(j+6)^{3}}\right)^{1/4}\quad h_{\mathcal{A}}:=(4L_{1}(j+4))^{-1},$
the active smoothing factor and step length, respectively. For the
multiplicative noise OUU problem, the step length $h_{\mathcal{A}}$ remains
the same, exactly as in (16) above but the optimal active smoothing factor for
the $k$-th iterate of ASTARS, $k=1,\ldots,M$, is given by
(17)
$\displaystyle(\mu_{\mathcal{A}}^{*})_{k}:=\left(\frac{16\sigma^{2}\hat{f}(\lambda^{(k)})^{2}j}{L_{1}^{2}(1+3\sigma^{2})(j+6)^{3}}\right)^{1/4}.$
We may use $\mu_{\mathcal{A}}^{*}$ to generally denote and discuss the active
smoothing factor hereon with the understanding that in the case of
multiplicative noise one actually uses $(\mu_{\mathcal{A}}^{*})_{k}$.
In Algorithm 2 below, we present ASTARS.
Input: maxit$=:M$; $\lambda^{(0)}$; $f_{0}:=\hat{f}(\lambda^{(0)})$;
$h_{\mathcal{A}}$; $\tilde{V}_{\mathcal{A}}:=\tilde{V}_{1:P,1:j}$; $k=1$
while _$k\leq M$_ do
Form smoothing factor $(\mu^{*}_{\mathcal{A}})_{k}$;
Draw $r^{(k)}$, where $r^{(k)}_{p}\sim N(0,1)$ for $p=1,\ldots,j$ and set
$u^{(k)}_{\mathcal{A}}:=\tilde{V}_{\mathcal{A}}r^{(k)}$;
Evaluate
$g_{k}:=\hat{f}(\lambda^{(k-1)}+(\mu^{*}_{\mathcal{A}})_{k}u_{\mathcal{A}}^{(k)})$;
Set
$d^{(k)}:=\frac{g_{k}-f_{k-1}}{(\mu^{*}_{\mathcal{A}})_{k}}u_{\mathcal{A}}^{(k)}$;
Set $\lambda^{(k)}=\lambda^{(k-1)}-h_{\mathcal{A}}\cdot d^{(k)}$;
Evaluate $f_{k}:=\hat{f}(\lambda^{(k)})$;
Set $k=k+1$;
end while
Output: ($\lambda^{(M)}$, $f_{M}$)
Algorithm 2 ASTARS
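Algorithm 2 uses the same update as STARS; only the direction draws, confined to $\text{span}(V_{\mathcal{A}})$ via (15), and the $j$-dependent hyperparameters (16) change. A minimal additive-noise illustration (our code, with $V_{\mathcal{A}}$ assumed given):

```python
import numpy as np

def astars_additive(f_hat, lam0, V_A, sigma2, L1, maxit, rng=None):
    """ASTARS (Algorithm 2) sketch for additive noise, stepping only in span(V_A)."""
    rng = np.random.default_rng(rng)
    j = V_A.shape[1]                                       # active dimension
    h = 1.0 / (4.0 * L1 * (j + 4))                         # active step length, eq. (16)
    mu = (8.0 * sigma2 * j / (L1**2 * (j + 6) ** 3)) ** 0.25  # active smoothing, eq. (16)
    lam, f_prev = lam0.copy(), f_hat(lam0)
    for _ in range(maxit):
        r = rng.standard_normal(j)             # j unit-variance draws
        u = V_A @ r                            # random active direction, eq. (15)
        g = f_hat(lam + mu * u)
        d = (g - f_prev) / mu * u
        lam = lam - h * d
        f_prev = f_hat(lam)
    return lam, f_final if (f_final := f_prev) is not None else f_prev
```

Because every step lies in $\text{span}(V_{\mathcal{A}})$, the inactive components of the iterate never change, which is the source of the $\mathcal{I}$-dependent error floor discussed below.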
ASTARS Corollary. Let the vectors $u_{\mathcal{A}}^{(k)}$ denote those drawn
using Algorithm 2 (zero mean, unit variance in each component); let
$f\in\mathcal{C}^{1,1}(\Lambda)$ and assume $f$ is convex; and assume that the
i.i.d. noise draws $\epsilon(\xi)$ are additive, zero mean, with bounded
variance $\sigma^{2}$ for all $\xi$. Fixing the step size $h_{\mathcal{A}}$ in
(16), the active smoothing parameter $\mu_{\mathcal{A}}^{*}$ in (16) minimizes
the error between the gradient oracle in Algorithm 2, given by
$\frac{\hat{f}(\lambda^{(k-1)}+\mu^{*}_{\mathcal{A}}u_{\mathcal{A}}^{(k)})-\hat{f}(\lambda^{(k-1)})}{\mu^{*}_{\mathcal{A}}}u_{\mathcal{A}}^{(k)},$
and the true directional derivative of $f$ in the direction
$u_{\mathcal{A}}^{(k)}$ in the $j$-dimensional $\mathcal{A}$.
Remark. The preceding corollary implies that, using the fixed step length
$h_{\mathcal{A}}$, ASTARS uses an optimal choice of smoothing parameter
$\mu^{*}_{\mathcal{A}}$, in the sense that $\mu^{*}_{\mathcal{A}}$ minimizes
the error between the approximate directional derivative formed in Algorithm
2 and the true directional derivative of $f$ in the sampled active direction.
Since ASTARS takes steps in the $j$-dimensional space $\mathcal{A}$, the
hyperparameters from STARS must be redefined to remain optimal. The
hyperparameters in (3) and (4) are proven to be optimal for
the convergence of STARS in the $P$-dimensional space $\Lambda$ by the authors
in Theorem 4.3 of [CW]. Now, replacing $P=\dim\Lambda$ in (3) and (4) with
$j=\dim\mathcal{A}$, we obtain (16) and (17). We will present a proof of the
corollary above as ASTARS Corollary 2 in our technical report. Note also that
the authors [CW] have an analogous result for the case of multiplicative
noise, and so we note that we, too, have an optimal active smoothing constant
given in (17) in the case of multiplicative noise.
In our technical report, we prove that the complexity of ASTARS is
(18) $\displaystyle
M\sim\mathcal{O}\left(\frac{L_{1}jR^{2}_{\mathcal{A}}}{\epsilon_{\text{tol}}}\right).$
Here, $R^{2}_{\mathcal{A}}$ is a bound on the squared norm of the difference
between any initial iterate and the true minimum of $f$, both projected in the
inactive subspace $\mathcal{I}$. $\epsilon_{\text{tol}}>0$ is a final accuracy
which is bounded below by terms involving the variance in the noise, as well
as by terms involving the inactive subspace of $f$; for details, refer to the
technical report.
### 2.2. Fully-Automated ASTARS (FAASTARS)
We now introduce a fully-automated version of ASTARS (Algorithm 2), in the
sense that the user need not specify anything beyond an initial iterate
$\lambda^{(0)}$, its evaluation $\hat{f}(\lambda^{(0)})$, the black-box
objective function $\hat{f}$, and a maximum number of iterations, $M$. We call
this algorithm Fully-Automated ASTARS (FAASTARS).
We first note that $\hat{f}$ need not be in closed form; we only require that
$\hat{f}$ is a callable function, which we recall may actually represent a
post-processed quantity from, for instance, a noisy solution of a PDE
evaluated at some point in parameter (e.g., phase) space. FAASTARS estimates
$\sigma^{2}$ and $L_{1}$ from a handful of upfront samples taken from
performing ECNoise (which are recycled for $L_{1}$ learning). In addition, $\mathcal{A}$
is estimated from regular STARS iterates (supplemented with the original
ECNoise samples) during a STARS burn-in phase.
In the following, we outline FAASTARS by breaking it down into three phases:
(1.) a hyperparameter learning phase; (2.) a STARS burn-in phase; (3.) an
approximate ASTARS phase.
In the first phase, the estimator $\hat{\sigma}^{2}$ will be formed
immediately by using ECNoise on 7 to 10 sampled points in $\Lambda$. As
discussed above, the samples $s^{i}$ created by ECNoise lend themselves to
forming $L_{1}^{\text{init}}$ in a finite-difference scheme approximating
directional derivatives and $\hat{L_{1}}=L_{1}^{\text{init}}$ is used.
We can also form a surrogate $F$ from the ECNoise points to estimate $L_{1}$.
By computing the closed-form Hessian of the surrogate $F$,
$\nabla^{2}F$, we obtain a lower bound on $L_{1}$, as we will see. The surrogate
can be improved by incorporating STARS iterates into the set of samples used
to form the surrogate, which we will present in the next section, about
adaptive sampling.
We will first need approximated hyperparameters, given by
(19)
$\hat{\mu}^{*}:=\left(\frac{8\hat{\sigma}^{2}P}{\hat{L_{1}}^{2}(P+6)^{3}}\right)^{1/4}\quad\quad\hat{h}=(4\hat{L_{1}}(P+4))^{-1}.$
Note that the approximated active smoothing factor will be optimal in
$\Lambda$ – given the available information – in a somewhat analogous fashion
as our result in ASTARS Corollary 2 (see technical report), but with a
specified loss of optimality as the estimates $\hat{L_{1}}$ and $\hat{\sigma}$
stray from their true values. (See STARS Theorem 4.3 (Modified) in 4.2.) In
particular, we find that STARS may diverge or make no progress when these
values are poorly estimated; notably, underestimation of either value may
lead to divergence, but for distinct reasons in each case.
When the variance in the noise $\sigma^{2}$ is underestimated, $\mu^{*}$ is
also underestimated, and thus we may not take a large enough step to
successfully perturb $\hat{f}$ enough to see a change in function value larger
than the noise level itself, leading to inaccurate derivative information and
potentially descent steps of poor quality. (Note $\sigma^{2}$ does not appear
in the step size.) When the gradient Lipschitz constant $L_{1}$ is
underestimated, both $\mu^{*}$ and $h$ will be too large, meaning we may take
a step too large for the finite-difference approximation to be accurate, and
we may take too large a descent step (in a bad direction), causing a quick
rise in the function values we see. Indeed, underestimating $L_{1}$ is to be
avoided at all costs.
We may also form a linear surrogate $F$ from ECNoise samples (if we have
enough data) to initiate our approximation to $L_{1}$ by computing the matrix
norm of the closed-form Hessian of $F$, $||\nabla^{2}F||$, for which we have
$||\nabla^{2}F||\approx||\nabla^{2}f||\leq L_{1}$. The first phase of FAASTARS
is given in the algorithm below.
Input: $\lambda^{(0)}$; $k=0$
while _$k=0$_ do
Run ECNoise using $\lambda^{(0)}$ as base point and obtain $\hat{\sigma}^{2}$;
Initialize storage array $D_{S}$ for samples of $f$ and store $\{s^{i}\}$
formed by ECNoise;
Use $\{s^{i}\}$ to form second-order FD approximation (or form linear
surrogate $F$ if $S>P+1$ to compute $||\nabla^{2}F||$) for
$L_{1}^{\text{init}}$;
end while
Output: $\hat{\sigma}^{2}$; $L_{1}^{\text{init}}$; $D_{S}$
Algorithm 3 FAASTARS for Additive or Multiplicative OUU, Phase 1:
Hyperparameter Learning
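The Phase-1 computations can be sketched as follows. This is a simplified stand-in for ECNoise and the FD curvature estimate, not the authors' implementation; all names are illustrative:

```python
import numpy as np
from math import factorial

def phase1_estimates(f_hat, lam0, delta=1e-2, n_pts=8, k=3, rng=None):
    """Sample f_hat along one random line through lam0; estimate the noise
    variance from a k-th order difference table (the ECNoise idea: the
    variance of the k-th difference of i.i.d. noise is binom(2k,k)*sigma^2,
    so the scaling (k!)^2/(2k)! recovers sigma^2); and estimate L1 from
    second-order central differences of the sampled values."""
    rng = np.random.default_rng() if rng is None else rng
    d = rng.standard_normal(lam0.size)
    d /= np.linalg.norm(d)                       # unit direction in Lambda
    pts = [lam0 + (delta * i) * d for i in range(n_pts)]
    vals = np.array([f_hat(p) for p in pts])
    diffs = np.diff(vals, n=k)                   # k-th order differences
    sigma2_hat = factorial(k) ** 2 / factorial(2 * k) * np.mean(diffs ** 2)
    # |f(t-dt) - 2 f(t) + f(t+dt)| / dt^2 approximates |f''| along d.
    second = np.abs(vals[:-2] - 2.0 * vals[1:-1] + vals[2:]) / delta ** 2
    return float(sigma2_hat), float(second.max()), list(zip(pts, vals))
```

For a noisy quadratic, the curvature estimate recovers the directional curvature along the sampled line up to a noise term of order $\sigma/\delta^{2}$.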
Next, in the second phase, we perform standard STARS (Algorithm 1) until
enough samples are obtained to perform AS analysis via a surrogate to form
needed $\nabla f$ evaluations. We let $M_{\mathcal{A}}$ denote the number of
iterations needed to find the AS from samples, as we see in the first phase of
FAASTARS above. We note that $M_{\mathcal{A}}$ will depend on the type of
surrogate formed (e.g., using local linear versus global quadratic versus
RBFs). FAASTARS will not begin its ASTARS routine until enough samples have
been taken to form a surrogate, based on the chosen surrogate method – RBFs
are the default when none is provided – with the required sample counts given
by known formulas.
For example, if one wishes to use a globally quadratic surrogate,
$(P+1)(P+2)/2$ samples of $f$ are required [Smith]. Recalling that every STARS
step requires two evaluations of $f$, $M_{\mathcal{A}}=(P+1)(P+2)/4$ will
suffice for quadratic surrogates. $M_{\mathcal{A}}$ is not a value the user
has to provide; FAASTARS will automatically terminate its STARS burn-in period
as soon as $k$ is large enough for the chosen or default surrogate to be
formed.
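The sample-count bookkeeping above can be made concrete with a small illustrative helper (name hypothetical; the iteration count is rounded up so that enough samples exist):

```python
from math import ceil

def burn_in_length(P, surrogate="quadratic"):
    """Number of STARS burn-in iterations M_A before a surrogate (and hence
    an AS estimate) can be formed: a global quadratic surrogate in P
    variables needs (P+1)(P+2)/2 samples, a linear one needs P+1, and each
    STARS iteration yields two evaluations of f."""
    if surrogate == "quadratic":
        n_samples = (P + 1) * (P + 2) // 2
    elif surrogate == "linear":
        n_samples = P + 1
    else:
        raise ValueError(f"no sample-count formula for {surrogate!r}")
    return ceil(n_samples / 2)  # two samples per STARS iteration
```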
After all steps of approximate STARS in Phase 2 are taken,
$\tilde{\mathcal{A}}$ is found using the Monte Carlo method described in
Chapter 1 using the samples/iterates $\{s^{i}\}$ collected from both ECNoise
and $M_{\mathcal{A}}$ steps of standard STARS. We present the second phase of
FAASTARS in the pseudocode below.
Input: $\lambda^{(0)}$; $f_{0}:=\hat{f}(\lambda^{(0)})$; $k=1$; Surrogate
method (optional, default is RBFs); $\hat{\sigma}^{2}$; $L_{1}^{\text{init}}$;
$D_{S}$; eigenvalue threshold $0<\tau<1$ (optional, default is $\tau=0.95$)
Determine $M_{\mathcal{A}}$ based on chosen surrogate method or RBFs if none
is provided;
Form step length $\hat{h}$ using $\hat{\sigma}^{2}$ and $L_{1}^{\text{init}}$
while _$1\leq k\leq M_{\mathcal{A}}$_ do
Form smoothing factor $\hat{\mu}^{*}_{k}$ using $\hat{\sigma}^{2}$ and
$L_{1}^{\text{init}}$;
Find $\lambda^{(k)}$ and evaluate $f_{k}:=\hat{f}(\lambda^{(k)})$ via STARS
(Algorithm 1);
Store $(\lambda^{(k-1)}+\hat{\mu}^{*}_{k}u^{(k)},g_{k})$ and
$(\lambda^{(k)},f_{k})$ as samples $\{s^{i}\}$ in $D_{S}$;
Optional: Form $(L_{1}^{\text{update}})_{k}$ via FD with $D_{S}$ (or use
$||\nabla^{2}F||$ for surrogate $F$ with $D_{S}$);
Optional: If $(L_{1}^{\text{update}})_{k}>L_{1}^{\text{init}}$, set
$L_{1}^{\text{init}}=(L_{1}^{\text{update}})_{k}$ and re-compute $h$;
Set $k=k+1$;
end while
Use $D_{S}$ to form surrogate via selected method;
Form $\tilde{W}$, and apply SVD to obtain
$\tilde{W}=\tilde{V}\tilde{Q}\tilde{V}^{\top}$;
Find a drop-off index $\tilde{j}$ for which
$\tilde{q}_{1}+\cdots+\tilde{q}_{\tilde{j}}\geq\tau(\tilde{q}_{1}+\cdots+\tilde{q}_{P})$;
Set $\tilde{V}_{\tilde{\mathcal{A}}}:=\tilde{V}_{1:P,1:\tilde{j}}$ and form
$h_{\mathcal{A}}$ using $\tilde{j}$ for $\dim\tilde{\mathcal{A}}$,
$\hat{\sigma}^{2}$, and $L_{1}^{\text{init}}$;
Output: $\lambda^{(M_{\mathcal{A}})}$,
$f_{M_{\mathcal{A}}}:=\hat{f}(\lambda^{(M_{\mathcal{A}})})$,
$\tilde{V}_{\tilde{\mathcal{A}}}$
Algorithm 4 FAASTARS for Additive or Multiplicative OUU, Phase 2: Approximate
STARS burn-in
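The AS-recovery steps at the end of Algorithm 4 – forming $\tilde{W}$ from (approximate) gradient samples, eigendecomposing, and applying the drop-off rule with threshold $\tau$ – might look like the following sketch (names hypothetical):

```python
import numpy as np

def active_subspace(grads, tau=0.95):
    """Approximate the active subspace: W_tilde = (1/N) sum_g g g^T is
    symmetric positive semidefinite, so its SVD coincides with its
    eigendecomposition; keep the leading j_tilde eigenvectors whose
    eigenvalues capture a fraction tau of the total."""
    G = np.asarray(grads)              # N x P array of gradient samples
    W = G.T @ G / G.shape[0]           # P x P Monte Carlo estimate
    q, V = np.linalg.eigh(W)           # eigenvalues in ascending order
    q, V = q[::-1], V[:, ::-1]         # reorder to descending
    csum = np.cumsum(q)
    j_tilde = int(np.searchsorted(csum, tau * csum[-1]) + 1)  # drop-off index
    return V[:, :j_tilde], q, j_tilde
```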
Note that at the end of each standard STARS iteration in the burn-in phase, we
have the functionality to form candidates $L_{1}^{\text{update}}$ for
$\hat{L_{1}}$ via finite difference (FD) approximations to the directional
derivatives in the direction of the corresponding descent step. We can also
use a surrogate as in Phase 1. Regardless, we reject the candidate update
anytime $L_{1}^{\text{update}}\leq L_{1}^{\text{init}}$, since we are always
searching for the most pessimistic bound to $L_{1}$ to avoid divergence.
With $\tilde{\mathcal{A}}$ in hand, we must first update the hyperparameters
so that they are computed with the value $\tilde{j}=\dim\tilde{\mathcal{A}}$
(and not $j=\dim\mathcal{A}$, since it is generally unknown in this setting).
We define the approximated active hyperparameters,
(20)
$\hat{\mu}^{*}_{\tilde{\mathcal{A}}}:=\left(\frac{8\hat{\sigma}^{2}\tilde{j}}{\hat{L_{1}}^{2}(\tilde{j}+6)^{3}}\right)^{1/4}\quad\quad\hat{h}_{\tilde{\mathcal{A}}}:=(4\hat{L_{1}}(\tilde{j}+4))^{-1}.$
In the third phase, we pick up where standard STARS left off, and we perform
ASTARS in the approximated AS for the remaining iterations until the maximum
number of iterations, $M$, is met. We have the functionality to also update
$\hat{L_{1}}$ in a similar fashion as the burn-in phase during the ASTARS
phase. That is, we may continue to use finite differences, or use the
surrogate $F$ that we form for AS approximations to update our approximation
to $L_{1}$. (In practice, this surrogate could be formed from the initial
ECNoise points and also be recalculated at every step as we gain more samples
of $\hat{f}$.) We may take $L_{1}^{\text{update}}=||\nabla^{2}F||$ and
similarly to above reject the candidate update anytime
$L_{1}^{\text{update}}\leq L_{1}^{\text{init}}$; otherwise, we have a new
initial $L_{1}$, where $L_{1}^{\text{init}}=||\nabla^{2}F||$.
Input: maxit$=:M$; $\lambda^{(M_{\mathcal{A}})}$;
$f_{M_{\mathcal{A}}}:=\hat{f}(\lambda^{(M_{\mathcal{A}})})$;
$k=M_{\mathcal{A}}+1$; $\hat{\sigma}^{2}$; $L_{1}^{\text{init}}$;
$\tilde{V}_{\tilde{\mathcal{A}}}$
while _$M_{\mathcal{A}} <k\leq M$_ do
Form step size $\hat{h}_{\tilde{\mathcal{A}}}$ and smoothing factor
$\hat{\mu}^{*}_{\tilde{\mathcal{A}}}$ using $\tilde{j}$ for
$\dim\tilde{\mathcal{A}}$, $\hat{\sigma}^{2}$, and $L_{1}^{\text{init}}$;
Draw $r^{(k)}$, where $r^{(k)}_{p}\sim N(0,1)$ for $p=1,\ldots,\tilde{j}$ and
set $u^{(k)}_{\tilde{\mathcal{A}}}:=\tilde{V}_{\tilde{\mathcal{A}}}r^{(k)}$;
Evaluate
$g_{k}:=\hat{f}(\lambda^{(k-1)}+\hat{\mu}^{*}_{\tilde{\mathcal{A}}}\cdot
u^{(k)}_{\tilde{\mathcal{A}}})$;
Set
$d^{(k)}:=\frac{g_{k}-f_{k-1}}{\hat{\mu}^{*}_{\tilde{\mathcal{A}}}}u^{(k)}_{\tilde{\mathcal{A}}}$;
Set $\lambda^{(k)}=\lambda^{(k-1)}-\hat{h}_{\tilde{\mathcal{A}}}\cdot
d^{(k)}$;
Evaluate $f_{k}:=\hat{f}(\lambda^{(k)})$;
Optional: Update $L_{1}^{\text{init}}$ with FD or surrogates as in Phase 2
(requires updating $D_{S}$);
Set $k=k+1$;
end while
Output: ($\lambda^{(M)}$, $f_{M}$)
Algorithm 5 FAASTARS for Additive or Multiplicative OUU, Phase 3: Approximate
ASTARS
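A single iteration of the Phase-3 loop above can be sketched as follows (sample storage and the optional $L_{1}$ updates are omitted; names illustrative):

```python
import numpy as np

def astars_step(f_hat, lam, f_lam, V_active, mu, h, rng):
    """One approximate-ASTARS step as in Algorithm 5: draw a j_tilde-
    dimensional Gaussian, lift it into Lambda via V_active (orthonormal
    columns spanning the approximate AS), form the finite-difference
    gradient oracle, and take the smoothed descent step."""
    j_tilde = V_active.shape[1]
    r = rng.standard_normal(j_tilde)
    u = V_active @ r                    # Gaussian direction lifted into Lambda
    g = f_hat(lam + mu * u)             # perturbed evaluation
    d = (g - f_lam) / mu * u            # finite-difference gradient oracle
    lam_new = lam - h * d               # smoothed descent step
    return lam_new, f_hat(lam_new)
```

On a noise-free sphere function with $V_{\text{active}}=I$, repeated calls contract the iterate toward the minimiser in expectation.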
We now have all the necessary components for performing FAASTARS. In summary,
FAASTARS has three major phases: (1.) an initial, relatively inexpensive
learning phase where we acquire the estimates $\hat{\sigma}^{2}$ and
$L_{1}^{\text{init}}$; (2.) a STARS burn-in phase (in full variables) where we
acquire enough samples to compute an AS using the Monte Carlo methods above;
and (3.) an ASTARS phase, where we use the learned AS, $\tilde{\mathcal{A}}$.
Note that in both of the latter phases, we update $\hat{L_{1}}$ if and only if
we obtain a more pessimistic (larger) estimate.
In our technical report, we show that the approximately active hyperparameters
(20) are chosen to minimize the error in finite difference approximations to
directional derivatives. We also prove that the complexity of FAASTARS is
similar to that of ASTARS, but with a specified inflation of certain bounds,
which arise from approximating both hyperparameters and $\mathcal{A}$.
#### 2.2.1. AS Retraining
We introduce a method we call AS retraining which can be optionally applied in
place of Phase 3 FAASTARS. AS retraining begins with an initial approximation
$\tilde{\mathcal{A}}$ for $\mathcal{A}$, which is obtained once the minimum
number of samples $M_{\mathcal{A}}$ are gathered during FAASTARS Phase 2. We
then take $k_{T}\geq 1$ approximate ASTARS steps using $\tilde{\mathcal{A}}$
(as in FAASTARS Phase 3) and recompute $\tilde{\mathcal{A}}$ with our new set
of samples of $f$. We continue in this fashion until the maximum iteration
count $M$ is reached.
Samples obtained during FAASTARS Phase 3 contain information which is more
local to a given Phase 3 iterate. Hence, adaptive sampling incorporates local
information to approximate $\mathcal{A}$, more aligned with how $f$ changes in
the region of recent samples. Including more samples to improve
$\tilde{\mathcal{A}}$ is almost always beneficial, as borne out in our
numerical results. Indeed, we use AS retraining to produce all FAASTARS
results in this work. We present the adaptive sampling algorithm below.
Input: maxit$=:M$; $\lambda^{(M_{\mathcal{A}})}$;
$f_{M_{\mathcal{A}}}:=\hat{f}(\lambda^{(M_{\mathcal{A}})})$; re-training phase
length $k_{T}$; $D_{S}$; $\hat{\sigma}^{2}$; $L_{1}^{\text{init}}$;
$\tilde{V}_{\tilde{\mathcal{A}}}$ from FAASTARS Phase 2; $l=1$;
$k=M_{\mathcal{A}}+1$;
while _$k\leq M_{\mathcal{A}}+lk_{T}$_ do
Form approximate active step size $\hat{h}_{\tilde{\mathcal{A}}}$ and
approximate active smoothing factor
$(\hat{\mu}^{*}_{\tilde{\mathcal{A}}})_{k}$;
Take $k_{T}$ steps of (approximate) ASTARS with $\tilde{\mathcal{A}}$ (as in
FAASTARS Phase 3) and store all samples of $f$ in $D_{S}$;
Retrain $\tilde{\mathcal{A}}$ and recompute $\tilde{V}_{\tilde{\mathcal{A}}}$
with $D_{S}$ (containing $k_{T}$ new samples);
Set $l=l+1$ and exit loop only if $M_{\mathcal{A}}+lk_{T}\geq M$;
end while
Output: ($\lambda^{(M)}$, $f_{M}$)
Algorithm 6 AS Retraining
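The control flow of Algorithm 6 can be sketched as follows, with `refit_as` and `step` standing in for the AS computation and a single approximate-ASTARS step (all names illustrative):

```python
def as_retraining(lam, f_lam, D, k_T, M, M_A, refit_as, step):
    """Alternate k_T approximate-ASTARS steps with re-estimation of the
    active subspace from the growing sample set D, until the maximum
    iteration count M is reached. refit_as(D) returns the current AS basis;
    step(lam, f_lam, V) performs one step and returns new samples to store."""
    k = M_A
    V = refit_as(D)
    while k < M:
        for _ in range(min(k_T, M - k)):       # one retraining phase
            lam, f_lam, new_samples = step(lam, f_lam, V)
            D.extend(new_samples)
            k += 1
        V = refit_as(D)                        # retrain with local samples
    return lam, f_lam
```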
At times, a challenge with adaptive sampling is the poor quality of samples
obtained from the FAASTARS Phase 3 steps, after the original formation of
$\tilde{\mathcal{A}}$. When the step size is small and $f$ changes very little –
sometimes due to inaccurate descent directions – samples are generally
uninformative about the behavior of $f$. This challenge is also present, more
generally, in all AS approximations involving samples of $\hat{f}$ taken in a
partially deterministic DFO algorithm. Again, these samples are often
clustered together in $\Lambda$ due to the small steps we take both in finite
differencing and in a descent step. Some of these challenges are addressed in
a forthcoming extensions paper. Regardless, we find incorporating as many
samples as possible for computations involving $\mathcal{A}$ usually improves
numerical results.
## 3\. Results
In this section, we present results of using STARS, ASTARS, and FAASTARS in
several examples with noisy objective functions with high-dimensional
parameter spaces.
Example 1. (Toy) Let
$\hat{f}:\Lambda=\mathbb{R}^{P}\to\mathbb{R}=\mathcal{D}$. Fixing a weight
vector, $w\in\Lambda$, where $w\neq 0$, we define
(21)
$\displaystyle\hat{f}(\lambda;\xi):=\left(w^{\top}\lambda\right)^{2}+\epsilon(\xi),$
$\epsilon(\cdot)\sim N(0,\sigma^{2})$, where $\sigma^{2}=1\times 10^{-4}.$ We
note the noise-free signal $f$ is convex. The minimum of $\hat{f}$ is given by
$0\in\Lambda$ with minimizer $f^{*}=0$. Here, the noise-free signal $f$ has a
one-dimensional AS (i.e., $j=1$) in the direction $w$. Also note
$L_{1}=2||w||^{2}$, since the Hessian of $f$ is $2ww^{\top}$. We
considered $P=20$ and note $D=1$. We took $w_{i}=1$ in all components and an
initial iterate $\lambda^{(0)}$ with components drawn from a zero-mean
Gaussian with unit variance, scaled by 10. First, we performed 1000 trials
each of STARS with exact hyperparameters; ASTARS with exact active
hyperparameters and the exact AS; and FAASTARS with exact hyperparameters but
a learned AS. A maximum iteration count of $2P^{2}=800$
was used. AS’s were re-trained every $2P=40$ steps for FAASTARS using an
eigenvalue threshold of 99 percent. No noise regularization was required. We
then performed 500 trials each of STARS and FAASTARS using estimated
hyperparameters and a maximum iteration count of $500$. In this case, AS’s
were relearned every $P=20$ steps for FAASTARS using an eigenvalue threshold
$\tau=0.95$ and a noise regularization at the level of $\sigma^{2}$.
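For reference, a noisy objective of the form (21) can be implemented as follows (an illustrative sketch, not the ASTARS package code):

```python
import numpy as np

def f_hat_toy(lam, w, sigma2=1e-4, rng=None):
    """Noisy toy objective (21): (w . lam)^2 + eps with eps ~ N(0, sigma2).
    The noise-free signal has Hessian 2 w w^T, hence a one-dimensional
    active subspace spanned by w."""
    rng = np.random.default_rng() if rng is None else rng
    noise = np.sqrt(sigma2) * rng.standard_normal()
    return float(np.dot(w, lam)) ** 2 + noise
```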
Figure 1. We show the convergence of STARS, FAASTARS, and ASTARS with and
without hyperparameter learning for Example 1.
Example 2. (Active sphere) Let
$\hat{f}:\Lambda=\mathbb{R}^{P}\to\mathbb{R}=\mathcal{D}$,
(22)
$\displaystyle\hat{f}(\lambda;\xi):=\sum_{i=1}^{j}\lambda_{i}^{2}+\epsilon(\xi),$
$\epsilon(\cdot)\sim N(0,\sigma^{2})$, where $\sigma^{2}=1\times 10^{-3}.$ We
note the noise-free signal $f$ is convex. The minimum of $\hat{f}$ is given by
$0\in\mathcal{A}$ with arbitrary components in $\mathcal{I}$ with minimizer
$f^{*}=0$. Here, $\hat{f}$ has a $j$-dimensional AS spanned by the first $j$
standard basis vectors in $\Lambda$. Also note $L_{1}=2$. We considered
$P=20$, $j=10$, and note $D=1$. We took initial iterate $\lambda^{(0)}$ with
components drawn from a zero-mean Gaussian with unit variance, scaled by a
factor of 10. First, we performed 100 trials each of STARS with exact
hyperparameters; ASTARS with exact active hyperparameters and the exact AS;
and FAASTARS with exact hyperparameters but a learned AS. A
maximum iteration count of $2P^{2}=800$ was used. AS’s were re-trained every
$P=20$ steps using an eigenvalue threshold of 99.9 percent. Noise
regularization of $\sigma^{2}$ improved results. We then performed 250 trials
of STARS and FAASTARS using estimated hyperparameters and a maximum iteration
count of $4P^{2}=1600$. In this case, AS’s were relearned every $P=20$ steps
for FAASTARS using an eigenvalue threshold $\tau=0.9995$ and a noise
regularization at the level of $\sigma^{2}$.
Figure 2. We show the convergence of STARS, FAASTARS, and ASTARS with and
without hyperparameter learning for Example 2.
Example 3. (Nesterov’s function – active version) Let
$\hat{f}:\Lambda=\mathbb{R}^{P}\to\mathbb{R}=\mathcal{D},$
(23)
$\displaystyle\hat{f}(\lambda;\xi)=\frac{1}{2}\left(\lambda_{1}^{2}+\sum_{i=1}^{j-1}(\lambda_{i}-\lambda_{i+1})^{2}+\lambda_{j}^{2}\right)-\lambda_{1}+\epsilon(\xi),$
$\epsilon(\cdot)\sim N(0,\sigma^{2})$, where $\sigma^{2}=1\times 10^{-4}.$
$\hat{f}$ possesses additive noise with variance $\sigma^{2}.$ This function
is a test function from [Nesterov] that we have modified so that there is a
distinct AS. We considered $P=50$ and $j=5$, note $D=1$, and the
non-stochastic $f$ is convex. Here, we have a $j$-dimensional $\mathcal{A}$
spanned by the standard basis vectors $e^{i}$, $i=1,\ldots,j$. Note that the
minimum of $f$, $\lambda^{*}$ is given by $\lambda^{*}_{i}=1-i/(j+1)$ for
$i=1,\ldots,j$ and $\lambda^{*}_{i}$ is arbitrary for $i=j+1,\ldots,P$ and has
minimizer $f^{*}=-1/2(1-1/(j+1))$ [Nesterov]. Also, $L_{1}=4$ [Nesterov]. We
performed 50 trials each of STARS with exact hyperparameters; ASTARS with
exact active hyperparameters and the exact AS; and FAASTARS with exact
hyperparameters but a learned AS. A maximum iteration
count of $3P^{2}=7500$ was used. AS’s were relearned every $2P=100$ steps
using an eigenvalue threshold $\tau=0.999$. Noise regularization at the level
of $\sigma^{2}$ improved results. We do not present hyperparameter learning in
this example – further extensions to our methods are needed and discussed in
our next paper.
Figure 3. We show the convergence of STARS, FAASTARS, and ASTARS (without
hyperparameter learning) for Example 3.
Example 4. (Full sphere) Let
$\hat{f}:\Lambda=\mathbb{R}^{P}\to\mathbb{R}=\mathcal{D},$
(24)
$\displaystyle\hat{f}(\lambda;\xi):=\sum_{i=1}^{P}\omega_{i}\lambda_{i}^{2}+\epsilon(\xi),$
$\epsilon(\cdot)\sim N(0,\sigma^{2})$, where $\sigma^{2}=1\times 10^{-5}.$ We
took $P=10$ and $\omega_{i}=1$, $i=1,\ldots,P$, so $\hat{f}$ is the sphere
function in $\mathbb{R}^{P}$ with additive noise. Note $L_{1}=2$ here. We
examined the extent to which the presented theoretical bounds for $L_{1}$
estimates hold in practice. Define $K_{1}>0$ as the scale factor for which
$L_{1}^{2}=K_{1}\hat{L_{1}}^{2}$. In STARS Theorem 4.5 (in our technical
report), we find the requirement $0<K_{1}<4$. We write $\hat{L_{1}}=c\,L_{1}$,
where $c=1/\sqrt{K_{1}}$ and we have $\frac{1}{2}<c<\infty$. We used STARS to
minimize (24) where we used the correct $\sigma^{2}$ (not estimated) and fixed
$\hat{L_{1}}=c\,L_{1}$ for $c=0.1,\,0.2,\,1,$ and $4$. We performed 100 trials
for each value $c$ and a maximum iteration count of $2P^{3}=2000$. Note that
we allow $c<1/2$; in some cases we find $c$ may be less than $1/2$ and STARS
with estimated hyperparameters will still converge. In general,
overestimation of $L_{1}$ ($c>1$) slows convergence and, as $c\to\infty$,
perturbations are not taken in $\Lambda$, and the value of $\hat{f}$ cannot be
meaningfully changed
at all. Underestimation of $L_{1}$ ($c<1$) may in fact improve convergence in
some cases (as we see in this example), but as $c\to 0^{+}$, STARS will
eventually diverge.
Figure 4. We show the convergence of STARS for various $c\,L_{1}$ for Example
4.
Example 5. (Nesterov-inspired, Nesterov 2) Let
$\hat{f}:\Lambda=\mathbb{R}^{P}\to\mathbb{R}=\mathcal{D},$
(25)
$\displaystyle\hat{f}(\lambda;\xi)=\sum_{i=1}^{P}2^{(-1)^{(i-1)}(i-1)}\lambda_{i}^{2}+\epsilon(\xi),$
$\epsilon(\cdot)\sim N(0,\sigma^{2})$, where $\sigma^{2}=1\times 10^{-3}.$ We
note $f$ is convex. We considered $P=10$ and note $D=1$. Note that the minimum
of $\hat{f}$ is given by $0\in\Lambda$. Here, as $i$ increases, terms in
$\hat{f}$ become either more important or less important, depending on whether
$i$ is even or odd. We take an initial iterate similar to prior examples. Also
note that the largest coefficient in (25) is $2^{P-1}$ (attained at $i=P$)
when $P$ is odd and $2^{P-2}$ (at $i=P-1$) when $P$ is even, so $L_{1}=2^{P}$
for odd $P$ and $L_{1}=2^{P-1}$ for even $P$. Thus, with $P=10$, we have
$L_{1}=2^{9}=512$. Here, the determination of which
variables are active depends completely on one’s choice of threshold. We show
convergence of STARS compared with FAASTARS using fixed active dimensions
$\tilde{j}=2,\,4,$ and $8$ in turn. We performed 25 trials for each method and
used a maximum iteration count of $5P^{3}=5000$.
Figure 5. We show the convergence of STARS versus FAASTARS for various fixed
$\tilde{j}$ for Example 5.
In our examples, we see the computational benefit in stepping in the AS
instead of in the full variables. Recall that the hyperparameters in STARS are
dimension-dependent, so anytime an AS resolves $\hat{f}$ well – which occurred
in the above examples by obvious design – we expect ASTARS in the exact active
variables to converge more quickly than STARS in full variables.
### 3.1. Software Dissemination
A Python 3.7 package called ASTARS was used to produce results in this
section, and is open-source and publicly available online [ASTARS]. The ASTARS
package has the functionality to perform STARS, ASTARS, and FAASTARS. STARS is
not otherwise publicly available, to our knowledge. The algorithms used here
which are open-source and publicly available online include the AS software of
Constantine et al. [AS2] – updated in Python 3.6 (required for the ASTARS
package) by Varis Carey [AS3] – and ECNoise by Wild and Moré [ECN].
## 4\. Conclusion and Discussion
We presented combinations of and modifications to well-documented algorithms,
including STARS [CW], Monte Carlo AS learning [ConstantineMC], noise variance
learning [MW], and Lipschitz constant learning [Calliess], to produce the
fully-automated ASTARS algorithm. In addition, we presented several model
problems that were used for testing ASTARS and FAASTARS.
There is no guarantee that a general noise-free mapping
$f:\Lambda\to\mathcal{D}$ permits dimension reduction via AS. AS methods may
fail to improve STARS for many reasons – sometimes occurring in combination
with each other – including (nearly) equal importance of modeled parameters, a
$\nabla f$ that is poorly approximated by surrogates, or too much noise.
Regardless, in the case that no AS is clearly defined, recall that ASTARS is
equivalent to STARS in full variables.
If an AS exists, we observed that performing ASTARS and FAASTARS provided
computational savings in our numerical examples when compared to STARS, since
their convergence outpaces that of STARS on average; see our technical
report for detailed theoretical statements. We note that at times, any of the
presented algorithms may take a very lucky step in $\Lambda$, causing quick
convergence. After all, the considered DFO algorithms depend on random (but
smoothed) steps.
We also observed that it is possible to learn the AS of the functions
considered using the samples obtained from deterministic ECNoise and STARS
iterates, which greatly reduces the usual computational expense of an AS
computation. This result is not obvious, since AS discovery typically relies
on (many) random, iid samples in $\Lambda$.
Sometimes, when FAASTARS has minimized $\hat{f}$ in $\tilde{\mathcal{A}}$ as
much as possible, there may be remaining variables in the inactive subspace
that are not minimized, as in Example 5 above. Indeed, we saw that for various
fixed AS dimensions $\tilde{j}$, ASTARS may behave almost identically to
STARS, or much worse than STARS, depending on $\tilde{j}$. Even without a
fixed AS dimension, we still find that if FAASTARS determines $\tilde{j}<j$
(or $\tilde{j}>j$), iterates may not provide enough information for FAASTARS
to update $\tilde{j}$ closer to $j$, incorrectly fixing $\tilde{j}$, and
causing behavior identical to the flat-lining we produced in Example 5.
At its core, this problem is directly related to the choice of eigenvalue
threshold $\tau$, which determines how many directions to include in
$\tilde{\mathcal{A}}$. When $\tau$ is fixed and samples are not informative
enough to increase (or decrease) $\tilde{j}$, $\tilde{\mathcal{A}}$ cannot be
substantially updated based on local information.
In an upcoming follow-up paper, we address the flat-lining behavior witnessed
in some numerical examples by introducing a method of adaptive thresholding,
which will change $\tau$ if and when flat-lining occurs. Other extensions of
the ASTARS method, such as alternative weighting schemes and other approaches
to address flat-lining, may also improve convergence and behavior, and will be
considered as well.
In their unpublished manuscript, the authors of [ARDFDS] present an
accelerated version of Nesterov's classic DFO algorithm, called $RS_{\mu}$ in
[Nesterov], by leveraging techniques of mirror descent. (Note, $RS_{\mu}$ and
STARS exhibit identical complexities [Nesterov, CW].) As
authors in [ARDFDS] observe, complexity in $M$ – the maximum number of
iterations – is equivalent to the complexity of oracle calls (i.e., the number
of times we must approximate certain directional derivatives). These
complexities are then proportional to the number of $\hat{f}$ evaluations, as
well. Since we postulate that $\hat{f}$ calls are expensive, we always seek
methods that evaluate our map as few times as possible. We will note, though,
that we also know that there is a trade-off between fewer evaluations
(samples) and the quality of the numerically estimated AS.
For a more concise comparison between the complexity results here and in [CW,
Nesterov, ARDFDS], we let $\epsilon=\epsilon_{\text{tol}}$ and recall
$R^{2}=||\lambda^{(0)}-\lambda^{*}||_{2}^{2}$ denotes the distance from the
true minimizer to our initial iterate. Recall, for all STARS-based methods, we
achieve the main complexity statement in (74) so long as
(26)
$\sigma\leq\frac{\sqrt{2K_{2}}(2-\sqrt{K_{1}})\epsilon_{\text{tol}}}{8(P+4)C_{4}},$
where $K_{1}>0$ and $K_{2}>0$ are the scale factors for which
$L_{1}^{2}=K_{1}\hat{L_{1}}^{2}$ and $\sigma^{2}=K_{2}\hat{\sigma}^{2}$.
If we satisfy (26), then we obtain the results shown in Table 1 below. Other
methods will have $\sigma^{2}$ appearing in their complexity results, since no
bounds are assumed directly on the variance in the noise.
Table 1. Complexity of DFO Algorithms

DFO Algorithm | Complexity, $\mathcal{O}(\cdot)$
---|---
$RS_{\mu}$, [Nesterov] and STARS, [CW] | $\frac{L_{1}PR^{2}}{\epsilon}$
STARS (estimated hyperparameters) | $\frac{L_{1}PR^{2}}{\sqrt{K_{1}}(2-\sqrt{K_{1}})\epsilon}$
ASTARS (estimated hyperparameters) | $\frac{L_{1}jR^{2}}{\sqrt{K_{1}}(2-\sqrt{K_{1}})\epsilon}$
FAASTARS (estimated hyperparameters) | $\frac{L_{1}\tilde{j}R^{2}}{\sqrt{K_{1}}(2-\sqrt{K_{1}})\epsilon}$
ARDFDS, [ARDFDS] | $\max\left\{P^{\frac{1}{2}+\frac{1}{q}}\sqrt{\frac{L_{1}R^{2}}{\epsilon}},\frac{P^{\frac{2}{q}}\sigma^{2}R^{2}}{\epsilon^{2}}\right\}$
The methods in [ARDFDS] achieve accelerated convergence by utilizing a
different random direction in forming their gradient oracle, drawn randomly
from the unit hypersphere in $\Lambda$ in a mirror descent scheme. For future
work, we propose investigating whether dimension reduction could accelerate
the methods in [ARDFDS].
As a final note of caution, anytime the variance of the noise $\sigma^{2}$
approaches the magnitude of $f$ values, we expect failure of all methods
presented. The assumptions we made in Chapter 1 – assumptions ubiquitous in
the DFO literature – forbid noise of this order, and with good reason: anytime
the noise is on the order of function evaluations, it becomes difficult (and
eventually impossible) to distinguish the true signal from the effects of
noise. Filtering or smoothing methods must be used in this scenario, which is
outside the scope and focus of this dissertation.
## References
## Technical Report
We present theoretical results regarding the convergence of the methods
provided in the preceding section. In the first part of this section, we
provide key results needed for proofs in the remaining parts. In the second
part of this section, we prove a series of modified STARS results culminating
in a statement about the convergence of the algorithm with approximate
hyperparameters. In the third part, we prove a series of results showing the
convergence of ASTARS with exact hyperparameters and an exact AS. Finally, we
prove a series of FAASTARS results culminating in a statement about the
convergence of FAASTARS with approximate hyperparameters and with an
approximate AS.
Broadly, our contribution is showing: (1) STARS will still converge if $L_{1}$
and $\sigma$ are unknown and replaced with estimates $\hat{L_{1}}$ and
$\hat{\sigma}$ in the formation of STARS hyperparameters; (2) ASTARS will
converge with exact hyperparameters and an exact AS; and (3) FAASTARS will
analogously converge also with uncertain hyperparameters, and even with an
approximated $\tilde{\mathcal{A}}$ in place of the true $\mathcal{A}$.
### 4.1. Preliminaries
We provide the equations and results needed from [Nesterov], which are also
summarized in [CW]. We also modify certain key results needed for ASTARS
theoretical arguments.
We focus only on the case of additive noise in $\hat{f}$ and we assume $f$ is
convex and differentiable in $\Lambda$. Recall that $\Lambda=\mathbb{R}^{P}$
with $\lambda$’s denoting vectors, $\lambda\in\Lambda$; at times
$u,x,y,z\in\Lambda$ will denote vectors, too. Also, we note that for the
FAASTARS analysis, we leave out the optional steps in FAASTARS related to the
updating of $L_{1}^{\text{init}}$. We will fix our approximation to $L_{1}$ at the
beginning of FAASTARS using the samples formed via ECNoise [MW]; i.e.,
$\hat{L_{1}}=L_{1}^{\text{init}}$ throughout. We also note that we will not
consider adaptive sampling methods for learning $\tilde{\mathcal{A}}$, so
$\tilde{\mathcal{A}}$ will be fixed after FAASTARS Phase 2 (STARS burn-in
phase).
We assume the true signal $f$ is convex and differentiable – this is
Assumption 4.1 in [CW]. The signal we access is $\hat{f}$, which has additive
noise so $\hat{f}(\lambda;\xi):=f(\lambda)+\epsilon(\xi)$, where
$\mathbb{E}_{\xi}(\epsilon(\xi))=0$ $\forall\xi$ and
$0<$Var${}_{\xi}(\epsilon(\xi))=\sigma^{2}<\infty$ $\forall\xi$. These
assumptions make up Assumption 4.2 in [CW]. First, let
(1) $f_{\mu}(\lambda):=\mathbb{E}_{u}[f(\lambda+\mu
u)],\,u\in\mathbb{R}^{P},\,u_{i}\sim N(0,1),\,i=1,\ldots,P,$
which is the Gaussian smoothing of $f$.
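A Monte Carlo estimate of (1) is immediate; a sketch:

```python
import numpy as np

def f_mu_estimate(f, lam, mu, n_samples=20000, rng=None):
    """Monte Carlo estimate of the Gaussian smoothing (1):
    f_mu(lam) = E_u[f(lam + mu u)] with u ~ N(0, I_P)."""
    rng = np.random.default_rng() if rng is None else rng
    U = rng.standard_normal((n_samples, lam.size))
    return float(np.mean([f(lam + mu * u) for u in U]))
```

For the sphere function $f(x)=||x||^{2}$ one has $f_{\mu}(x)=||x||^{2}+\mu^{2}P$ exactly, a useful sanity check.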
Also, note for a direction
$u=u_{\mathcal{A}}\in\mathcal{A}\subset\mathbb{R}^{P}$, the last $P-j$
components of $u_{\mathcal{A}}$ are zero and $u_{i}\sim N(0,1)$ for
$i=1,\ldots,j$. Hence, (1) becomes
(2) $f_{\mu}^{\mathcal{A}}(\lambda)=\mathbb{E}_{u_{\mathcal{A}}}[f(\lambda+\mu
u_{\mathcal{A}})],\,u_{\mathcal{A}}\in\mathcal{A}\subset\mathbb{R}^{P},\,u_{i}\sim
N(0,1),\,i=1,\ldots,j,$
and similarly for $\tilde{\mathcal{A}}$ with
$\dim\tilde{\mathcal{A}}=\tilde{j}$.
The existence of $L_{1}$ implies
(3) $\left|f(y)-f(x)-\langle\nabla
f(x),y-x\rangle\right|\leq\frac{L_{1}}{2}||x-y||^{2}\quad\forall
x,y\in\Lambda.$
Now let $\lambda^{*}$ denote a global minimizer of $f$. Then
(4) $||\nabla f(\lambda)||^{2}\leq
2L_{1}(f(\lambda)-f(\lambda^{*}))\quad\forall\lambda\in\Lambda,$
proven in [Zhou]. A differentiable function $f$ is convex iff
(5) $f(y)\geq f(x)+\langle\nabla f(x),y-x\rangle\quad\forall x,y\in\Lambda.$
Note that this implies that the left-hand side of (3) is nonnegative, so long
as $f$ is convex. An interpretation of (5) is that a convex function is always
underestimated by its linear approximation.
We now present the needed results on Gaussian smoothing from [Nesterov], also
presented in [CW]. First, some notation; for $\mu>0$ and $u$ a Gaussian random
vector (as in (1) or (2)),
(6) $g_{\mu}(x):=\frac{f(x+\mu u)-f(x)}{\mu}u,$
which is a first-order approximation to the directional derivative of $f$ in
the direction of $u.$ If $u=u_{\mathcal{A}}$, then $g_{\mu}^{\mathcal{A}}$
will be the same object as (6), but with $u=u_{\mathcal{A}}$. Then
$g_{\mu}^{\mathcal{A}}$ estimates the directional derivative of $f$ for
directions $u_{\mathcal{A}}$ strictly in $\mathcal{A}$.
Now for $p\geq 0$, let
(7) $M_{p}(u):=\mathbb{E}_{u}\left(||u||^{p}\right),$
the $p$-th moment of the norm of the random vector $u$, $||u||$. Here and
throughout this section, $||\cdot||$ denotes the 2-norm in $\mathbb{R}^{P}$.
Let $u$ continue to denote a random vector as in (1). For $M_{p}(u)$ defined
in (7) above,
(8) $M_{p}(u)\leq(P)^{p/2},\quad p\in[0,2]\quad\text{and}\quad
M_{p}(u)\leq(P+p)^{p/2},\quad p>2,$
where we recall $P=\dim\Lambda$.
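The bounds (8) are easy to check empirically; for instance, $M_{2}(u)=P$ exactly. A sketch:

```python
import numpy as np

def empirical_moment(p, P, n_samples=20000, rng=None):
    """Empirical M_p(u) = E||u||^p for u ~ N(0, I_P), to check the bounds
    (8): M_p <= P^{p/2} for p in [0,2] (with M_2 = P exactly) and
    M_p <= (P+p)^{p/2} for p > 2."""
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(rng.standard_normal((n_samples, P)), axis=1)
    return float(np.mean(norms ** p))
```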
Now for $u=u_{\mathcal{A}}$ (as in (2)) and $j<P$, we have a sharper but
analogous result for these moments, which we explain by considering the case
of $p=2$. From [Nesterov], we have
(9)
$M_{2}(u)=\frac{1}{c}\int_{\mathbb{R}^{P}}uu^{\top}\,e^{-\frac{1}{2}||u||^{2}}\,\,du=B^{-1},$
where $B$ is the matrix specifying the norm $||u||^{2}=\langle Bu,u\rangle$
for a given inner product $\langle\cdot,\cdot\rangle$ in $\mathbb{R}^{P}$ and
$c$ is a normalization factor. Throughout, we will let $B=I_{P}$ and use the
Euclidean inner product, so that $||\cdot||$ will continue to denote the
standard (Euclidean) 2-norm in $\Lambda$.
Note that since $u_{\mathcal{A}}\in\mathcal{A}$, we only integrate over the
$j$ parameters corresponding to $\mathcal{A}$ since the components of
$u_{\mathcal{A}}$ in $\mathcal{I}$ are fixed and zero in expectation. Also,
since $u_{\mathcal{A}}$ is zero in its last $P-j$ components, we can compute
the norm of $u_{\mathcal{A}}$ truncated after its $j$-th entry in
$\mathbb{R}^{j}$ instead of in $\mathbb{R}^{P}$. In particular, define
$\underline{u_{\mathcal{A}}}=(u_{\mathcal{A}})_{1:j}$ and let
$||\cdot||_{\underline{\mathcal{A}}}$ denote the 2-norm in $\mathbb{R}^{j}$. We
have
$||u_{\mathcal{A}}||=||\underline{u_{\mathcal{A}}}||_{\underline{\mathcal{A}}}$,
and therefore
(10)
$\mathbb{E}_{u_{\mathcal{A}}}\left(\underline{u_{\mathcal{A}}}\,\underline{u_{\mathcal{A}}}^{T}\right)=\frac{1}{c}\int_{\mathbb{R}^{j}}\underline{u_{\mathcal{A}}}\,\underline{u_{\mathcal{A}}}^{T}\,e^{-\frac{1}{2}||\underline{u_{\mathcal{A}}}||^{2}_{\underline{\mathcal{A}}}}\,\,d\underline{u_{\mathcal{A}}}=I_{j}.$
As in [Nesterov], taking the trace of the left-hand and right-hand
sides shows $M_{2}(u_{\mathcal{A}})=j$. Using similar arguments,
one can prove the generalized bounds
(11) $M_{p}(u_{\mathcal{A}})\leq(j)^{p/2},\quad p\in[0,2]\quad\text{and}\quad
M_{p}(u_{\mathcal{A}})\leq(j+p)^{p/2},\quad p>2,$
which involves making similar changes to those outlined above. In particular,
one must substitute $u$ with $\underline{u_{\mathcal{A}}}$ and $||\cdot||$
with $||\cdot||_{\underline{\mathcal{A}}}$ and rewrite the integrals
corresponding to the expectation over all $\mathbb{R}^{P}$ as integrals over
$\mathbb{R}^{j}$, as we presented for $M_{2}$.
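As a quick sanity check, the moment bounds (8) and (11) can be verified numerically by Monte Carlo. The sketch below is illustrative only; the dimensions `P` and `j` are arbitrary choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def moment_bound(dim, p):
    """Upper bound on M_p = E(||u||^p) for a standard Gaussian in R^dim, as in (8)/(11)."""
    return dim ** (p / 2) if p <= 2 else (dim + p) ** (p / 2)

def estimate_moment(dim, p, n_samples=200_000):
    """Monte Carlo estimate of M_p(u)."""
    u = rng.standard_normal((n_samples, dim))
    return np.mean(np.linalg.norm(u, axis=1) ** p)

P, j = 20, 5  # ambient dimension and active-subspace dimension (illustrative)
for p in (4, 6):
    assert estimate_moment(P, p) <= moment_bound(P, p)   # bound (8)
    assert estimate_moment(j, p) <= moment_bound(j, p)   # sharper bound (11)
# M_2(u) = P exactly, so the p = 2 bound is tight:
assert abs(estimate_moment(P, 2) - P) / P < 0.02
```

Restricting to the $j$ active coordinates is what produces the sharper bound, exactly as in the truncation argument for (10) and (11).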
We now present Gaussian smoothing results in the case for $f$ convex. If $f$
is convex, then
(12) $f_{\mu}(x)\geq f(x)\quad\forall x\in\Lambda,$
which can be verified by writing $\mathbb{E}_{u}(f(x+\mu
u))\geq\mathbb{E}_{u}(f(x)+\langle\nabla f(x),\mu u\rangle)$, where the
inequality arises from applying the definition of convexity ((5) with $y=x+\mu
u$ and $x=x$). Then since $f(x)$ is constant with respect to $u$ and
$\mathbb{E}_{u}(\langle\nabla f(x),\mu u\rangle)=0$ (since each component of
$u$ is zero-mean), we obtain $\mathbb{E}_{u}(f(x)+\langle\nabla f(x),\mu
u\rangle)=f(x)$, verifying (12). Also note that (12) holds with
$f_{\mu}=f_{\mu}^{\mathcal{A}}$ (as in (2)).
If $f$ is convex and $f\in\mathcal{C}^{1,1}(\Lambda)$, which is the space of
functions $f:\Lambda=\mathbb{R}^{P}\to\mathbb{R}$ that are continuously
differentiable with a gradient that is Lipschitz continuous with constant $L_{1}$, then
(13) $f_{\mu}(x)-f(x)\leq\frac{L_{1}\mu^{2}}{2}P\quad\forall x\in\Lambda,$
where we recall (12) implies the left-hand side of (13) above is nonnegative.
To verify (13), we use the fact that $f$ is convex (i.e., (5) with $y=x+\mu u$
and $x=x$) to write
$0\leq f(x+\mu u)-f(x)-\langle\nabla f(x),\mu u\rangle.$
Applying (3),
$f(x+\mu u)-f(x)-\langle\nabla f(x),\mu
u\rangle\leq\frac{L_{1}\mu^{2}}{2}||u||^{2}.$
Applying the expectation in $u$ to both sides and using (8) for the second
moment of $||u||$, we obtain (13). If $u=u_{\mathcal{A}}$ and we have
$f_{\mu}^{\mathcal{A}}$ (as in (2)), then using (11),
(14) $f_{\mu}^{\mathcal{A}}(x)-f(x)\leq\frac{L_{1}\mu^{2}}{2}j\quad\forall
x\in\Lambda.$
We will also work with the objects $\nabla f_{\mu}(x)$ and $\nabla
f_{\mu}^{\mathcal{A}}(x)$. To quote [CW], we note that with "$\nabla
f_{\mu}(x)$ we denote the gradient (with respect to $x$) of the Gaussian
approximation" $f_{\mu}(x)$ defined in (1), and likewise for $\nabla
f_{\mu}^{\mathcal{A}}(x)$, with $f_{\mu}^{\mathcal{A}}$ as in (2). In
[Nesterov], the authors obtain the form in (15) below for $\nabla f_{\mu}$ by
performing a substitution in the integral corresponding to the expectation in
$u$ arising in the definition of $f_{\mu}$ and then differentiating both sides in
$x$, yielding
(15) $\nabla f_{\mu}(x)=\frac{1}{c}\int_{\mathbb{R}^{P}}\frac{f(x+\mu
u)-f(x)}{\mu}ue^{-\frac{1}{2}||u||^{2}}\,du=\frac{1}{c}\int_{\mathbb{R}^{P}}g_{\mu}(x)e^{-\frac{1}{2}||u||^{2}}\,du,$
which shows by definition
(16) $\nabla f_{\mu}(x)=\mathbb{E}_{u}\left(g_{\mu}(x)\right)\quad\forall
x\in\Lambda.$
Analogously, we have
(17) $\nabla
f_{\mu}^{\mathcal{A}}(x)=\mathbb{E}_{u}\left(g_{\mu}^{\mathcal{A}}(x)\right)\quad\forall
x\in\Lambda,$
by taking $u=u_{\mathcal{A}}$ in our arguments above.
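The identity (16) can be illustrated numerically: for a quadratic $f$, $\nabla f_{\mu}=\nabla f$ exactly, so averaging the oracle $g_{\mu}$ over many Gaussian directions should recover the true gradient. A minimal sketch, where the quadratic test function and all numerical values are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

P = 8
H = np.diag(np.arange(1.0, P + 1))        # SPD Hessian of a toy quadratic
f = lambda x: 0.5 * x @ H @ x             # grad f(x) = H x

x = rng.standard_normal(P)
mu = 1e-3
U = rng.standard_normal((100_000, P))     # Gaussian directions u, as in (1)
Y = x + mu * U
f_vals = 0.5 * np.einsum('ij,ij->i', Y @ H, Y)        # f evaluated at each x + mu*u
g_mu = ((f_vals - f(x)) / mu)[:, None] * U            # oracle g_mu of (6), one row per u
grad_est = g_mu.mean(axis=0)              # Monte Carlo estimate of grad f_mu(x), per (16)
assert np.linalg.norm(grad_est - H @ x) / np.linalg.norm(H @ x) < 0.05
```

For a non-quadratic $f$ the same average converges to $\nabla f_{\mu}(x)$ rather than $\nabla f(x)$, with the gap controlled by (13).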
If $f$ is differentiable at $x$ and $f\in\mathcal{C}^{1,1}$,
(18) $\mathbb{E}_{u}\left(||g_{\mu}(x)||^{2}\right)\leq 2(P+4)||\nabla
f(x)||^{2}+\frac{\mu^{2}}{2}L_{1}^{2}(P+6)^{3}\quad\forall x\in\Lambda.$
To verify (18), it is helpful to first bound the quantity
$\mathbb{E}_{u}\left(||u||^{2}\langle\nabla f(x),u\rangle^{2}\right)$. We
follow [Nesterov]. Applying Cauchy–Schwarz, we write $||u||^{2}\langle\nabla
f(x),u\rangle^{2}\leq||\nabla f(x)||^{2}||u||^{4}$. Here we could apply
$\mathbb{E}_{u}(\cdot)$ and use (8) to bound $M_{4}(u)$, yielding
$\mathbb{E}_{u}\left(||u||^{2}\langle\nabla
f(x),u\rangle^{2}\right)\leq(P+4)^{2}||\nabla f(x)||^{2}.$
However, the authors in [Nesterov] obtain a tighter bound by minimizing the
integral form of $\mathbb{E}_{u}\left(||u||^{2}\langle\nabla
f(x),u\rangle^{2}\right)$. The proof is technical but mainly involves
parameterizing and then minimizing the argument of the exponential function
appearing in the integral form associated with the expectation over $u$.
[Nesterov] show
$\mathbb{E}_{u}\left(||u||^{2}\langle\nabla
f(x),u\rangle^{2}\right)\leq(P+4)||\nabla f(x)||^{2}.$
For the case of $g_{\mu}^{\mathcal{A}}$, we can show
(19) $\mathbb{E}_{u}\left(||g_{\mu}^{\mathcal{A}}(x)||^{2}\right)\leq 2(j+4)||\nabla
f(x)||^{2}+\frac{\mu^{2}}{2}L_{1}^{2}(j+6)^{3}\quad\forall x\in\Lambda.$
Justifying (19) requires the same substitutions as in justifying (11). Again,
one must substitute $u$ with $\underline{u_{\mathcal{A}}}$ and $||\cdot||$
with $||\cdot||_{\underline{\mathcal{A}}}$ and rewrite the integrals
corresponding to the expectation over all $\mathbb{R}^{P}$ as integrals over
$\mathbb{R}^{j}$. Then $P$’s in the arguments above may be safely replaced
with $j$’s as in (19).
Next, the authors in [CW] introduce two more pieces of notation integral to
the STARS process. First, for an iterate $k$, let
(20)
$s_{\mu_{k}}(x^{(k)};u^{(k)},\xi_{k-1},\xi_{k}):=\frac{\hat{f}(x^{(k)}+\mu_{k}u^{(k)};\xi_{k-1})-\hat{f}(x^{(k)};\xi_{k})}{\mu_{k}}u^{(k)},$
which is the "stochastic gradient-free oracle" for the directional derivative
of $f$ (with noise) in the direction $u$. The authors also define the error
between the oracle (the forward-difference approximation) and the true
directional derivative of $f$ in the direction $u$ as
(21)
$\mathcal{E}(\mu)=\mathcal{E}(\mu;x,u,\xi_{1},\xi_{2})=||s_{\mu}(x;u,\xi_{1},\xi_{2})-\langle\nabla
f(x),u\rangle u||^{2}.$
Note when $u=u_{\mathcal{A}}$ in either (20) or (21), we write
$s^{\mathcal{A}}_{\mu_{k}}$ and $\mathcal{E}^{\mathcal{A}}(\mu)$ to emphasize
that these values are computed for $u_{\mathcal{A}}\in\mathcal{A}$. If we are
working with $\tilde{\mathcal{A}}$ to approximate $\mathcal{A}$, one must
replace $\mathcal{A}$ with $\tilde{\mathcal{A}}$ and $j$ with $\tilde{j}$ to
obtain the analogous definitions for the approximate $\tilde{\mathcal{A}}$
case.
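In code, the oracle (20) is simply a forward difference of two independently-noised function evaluations, scaled by the sampled direction. A minimal sketch; the test function, dimension, and noise model below are illustrative assumptions, not quantities from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

P = 6
sigma = 1e-3                              # additive-noise standard deviation (assumed)
f = lambda x: 0.5 * x @ x                 # smooth convex test function
f_hat = lambda x, eps: f(x) + eps         # noisy evaluation f(x) + epsilon(xi)

def stars_oracle(x, mu):
    """Stochastic gradient-free oracle s_mu of (20): a forward difference of
    two independently-noised evaluations, scaled by the random direction u."""
    u = rng.standard_normal(P)                    # u drawn as in (1)
    eps1, eps2 = sigma * rng.standard_normal(2)   # two i.i.d. noise draws
    return (f_hat(x + mu * u, eps1) - f_hat(x, eps2)) / mu * u

s = stars_oracle(np.ones(P), mu=0.1)
assert s.shape == (P,)
```

For the active-subspace variant $s^{\mathcal{A}}_{\mu_{k}}$ one would instead sample $u$ only within the $j$ active directions.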
### 4.2. STARS Convergence with Estimated Hyperparameters
In the following, we follow [CW] closely. However, we provide more detail and
in fact correct some minor details from their unpublished manuscript while
generalizing their results to the case in which hyperparameters are estimated.
In the latter sections, we build on results in [CW] and [ConstantineK], a
paper which contains crucial theoretical results regarding the approximation
of active subspaces.
Let the positive, finite values $\hat{L_{1}}$ and $\hat{\sigma}$ denote
estimators of the true (also positive and finite) values $L_{1}$ and
$\sigma$. Recall that, in our setting, we assume $0<L_{1}<\infty$ and
$0<\sigma^{2}<\infty.$ We do not let $L_{1}=0$, since that would imply $f$ is
constant; we do not let $\sigma^{2}=0$, since that would imply zero noise. Then
there exist $K_{1}>0$ and $K_{2}>0$ so that
(22)
$L_{1}^{2}=K_{1}\hat{L_{1}}^{2}\quad\text{and}\quad\sigma^{2}=K_{2}\hat{\sigma}^{2}.$
Note that if $K_{i}<1$ for $i=1$ or $2$, then we have overestimated the
corresponding value, $L_{1}$ or $\sigma^{2}$; similarly, when $K_{i}>1$, the
corresponding value has been underestimated. Hence, as a particular $K_{i}\to
1$, the corresponding estimate of either $L_{1}$ or $\sigma^{2}$ approaches
the truth. Finally, note that when the true values $L_{1}$ and $\sigma^{2}$
are unknown, $K_{1}$ and $K_{2}$ are also generally unknown.
Below, we recall the approximate smoothing size and step length to replace the
STARS hyperparameters in the case that $L_{1}$ and $\sigma$ are unknown and
estimated by values $\hat{L_{1}}$ and $\hat{\sigma}$.
(23)
$\hat{\mu}^{*}:=\left(\frac{8\hat{\sigma}^{2}P}{\hat{L_{1}}^{2}(P+6)^{3}}\right)^{1/4}\quad\quad\hat{h}:=\left(4\hat{L_{1}}(P+4)\right)^{-1}$
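The approximate hyperparameters (23) are cheap to compute from the estimates alone. A sketch, where the numerical inputs are placeholders:

```python
def stars_hyperparameters(L1_hat, sigma_hat, P):
    """Smoothing size mu_hat and step length h_hat of (23), built from the
    *estimated* Lipschitz constant and noise level."""
    mu_hat = (8.0 * sigma_hat ** 2 * P / (L1_hat ** 2 * (P + 6) ** 3)) ** 0.25
    h_hat = 1.0 / (4.0 * L1_hat * (P + 4))
    return mu_hat, h_hat

mu_hat, h_hat = stars_hyperparameters(L1_hat=10.0, sigma_hat=1e-4, P=20)
assert mu_hat > 0 and h_hat > 0
```

Replacing $P$ with $j$ (or $\tilde{j}$) gives the corresponding active-subspace versions.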
Shortly, we will show precisely how the bound on $\mathcal{E}(\hat{\mu^{*}})$ is
just a modification of the bound on $\mathcal{E}(\mu^{*})$ proven in [CW].
With this point of view, one will see our choice of $\hat{\mu^{*}}$ is the
best that we can do with uncertain estimators of $L_{1}$ and $\sigma$. We note
that both in [CW] and in this work, $h$ (and also $\hat{h}$) is a fixed
choice, not necessarily optimal; more investigation could be done into the
step length, but $\hat{h}$ works well in our numerical experiments.
It is helpful to quote [CW] here: "Our goal is to find $\mu^{*}$ that
minimizes an upper bound on"
$\mathbb{E}_{u,\xi_{1},\xi_{2}}(\mathcal{E}(\mu))$, where we have
$\mathcal{E}$ defined in (21) and the expectation is taken over the random
vector $u$, as well as two draws of additive noise from the two function
evaluations that occur in $s_{\mu}$. The noise draws are denoted as
$\epsilon(\xi_{1})$ and $\epsilon(\xi_{2})$.
The major difference between our result and the STARS result is that we use
estimates of $L_{1}$ and $\sigma^{2}$ rather than their true values, to which
we have postulated we lack access.
STARS Theorem 4.3 (Modified): We assume random vectors $u^{(k)}$ are drawn
according to (1); $f\in\mathcal{C}^{1,1}(\Lambda)$ and $f$ is convex; and that
the i.i.d. noise draws $\epsilon(\xi)$ are additive, zero mean, with bounded
variance $\sigma^{2}$ for all $\xi$. If the smoothing stepsize is chosen as
$\mu=\hat{\mu^{*}}$ in (23), then for any iterate $x\in\Lambda$ and random
vector $u$, noting $K_{1}>0$ and $K_{2}>0$ as in (22),
(24)
$\mathbb{E}_{u,\xi_{1},\xi_{2}}\left(\mathcal{E}\left(\hat{\mu^{*}}\right)\right)\leq\frac{K_{1}+K_{2}}{\sqrt{2K_{1}K_{2}}}\sigma
L_{1}\sqrt{P(P+6)^{3}}.$
Proof: Let $u\in\Lambda$ be a random vector as in (1), $x\in\Lambda$ denote a
general STARS iterate, and $\epsilon(\xi_{1})$ and $\epsilon(\xi_{2})$ denote
the two i.i.d. draws of the additive noise in $\hat{f}$ which appear in
$\mathcal{E}(\mu)$, (21). Plugging equation (20) into equation (21), we obtain
(25) $\mathcal{E}(\mu)=\left|\left|\frac{f(x+\mu
u)+\epsilon(\xi_{1})-\left(f(x)+\epsilon(\xi_{2})\right)}{\mu}u-\langle\nabla
f(x),u\rangle u\right|\right|^{2}.$
Rearranging,
(26)
$\mathcal{E}(\mu)=\left|\left|\left(\frac{\left(\epsilon(\xi_{1})-\epsilon(\xi_{2})\right)+\left(f(x+\mu
u)-f(x)-\left\langle\nabla f(x),\mu
u\right\rangle\right)}{\mu}\right)u\right|\right|^{2}.$
We have
(27)
$\mathcal{E}(\mu)\leq\frac{X^{2}}{\mu^{2}}||u||^{2},\,\,\text{where}\,\,X:=\left(\epsilon(\xi_{1})-\epsilon(\xi_{2})\right)+\left(f(x+\mu
u)-f(x)-\left\langle\nabla f(x),\mu u\right\rangle\right).$
Expanding the form of $X$,
(28)
$\begin{split}X^{2}&=\epsilon(\xi_{1})^{2}-2\epsilon(\xi_{1})\epsilon(\xi_{2})+\epsilon(\xi_{2})^{2}+2(\epsilon(\xi_{1})-\epsilon(\xi_{2}))\left(f(x+\mu
u)-f(x)-\left\langle\nabla f(x),\mu u\right\rangle\right)\\\ &+\left(f(x+\mu
u)-f(x)-\left\langle\nabla f(x),\mu u\right\rangle\right)^{2}.\end{split}$
We begin by examining the expectation of $X^{2}$ with respect to the two
stochastic noise draws. Recall that $\mathbb{E}_{\xi}(\epsilon(\xi))=0$ and
Var$(\epsilon(\xi))=\sigma^{2}>0$ for all draws $\xi$; hence, we also have
$\mathbb{E}_{\xi}(\epsilon(\xi)^{2})=\sigma^{2}$ for all draws $\xi$. Since
the two draws are independent and zero-mean, every cross term in (28)
involving $\epsilon(\xi_{1})$ or $\epsilon(\xi_{2})$ vanishes in expectation,
leaving
(29) $\mathbb{E}_{\xi_{1},\xi_{2}}(X^{2})=2\sigma^{2}+\left(f(x+\mu
u)-f(x)-\left\langle\nabla f(x),\mu u\right\rangle\right)^{2}.$
Noting that neither $u$ nor $\mu$ depends on noise draws $\xi$, we have
(30)
$\mathbb{E}_{u,\xi_{1},\xi_{2}}(\mathcal{E}(\mu))=\mathbb{E}_{u}\left(\frac{\mathbb{E}_{\xi_{1},\xi_{2}}(X^{2})||u||^{2}}{\mu^{2}}\right)=\frac{\mathbb{E}_{u}\left(2\sigma^{2}||u||^{2}+\left(f(x+\mu
u)-f(x)-\left\langle\nabla f(x),\mu
u\right\rangle\right)^{2}||u||^{2}\right)}{\mu^{2}}.$
Now substituting $y=x+\mu u$ and squaring both sides, we can re-write
(3) as
(31) $(f(x+\mu u)-f(x)-\langle\nabla f(x),\mu
u\rangle)^{2}\leq\frac{L_{1}^{2}}{4}\mu^{4}||u||^{4}.$
Now, combining (30) and (31), we have
(32)
$\mathbb{E}_{u,\xi_{1},\xi_{2}}(\mathcal{E}(\mu))\leq\frac{1}{\mu^{2}}\mathbb{E}_{u}\left(2\sigma^{2}||u||^{2}+\frac{L_{1}^{2}}{4}\mu^{4}||u||^{6}\right).$
Using the bounds on the moments $M_{p}$ of $||u||$ given in (8) with
$p=2$ and $p=6$, we have
(33)
$\mathbb{E}_{u,\xi_{1},\xi_{2}}(\mathcal{E}(\mu))\leq\frac{2\sigma^{2}}{\mu^{2}}M_{2}+\frac{L_{1}^{2}\mu^{2}}{4}M_{6}\leq\frac{2\sigma^{2}P}{\mu^{2}}+\frac{L_{1}^{2}\mu^{2}(P+6)^{3}}{4}.$
We have
(34)
$\mathbb{E}_{u,\xi_{1},\xi_{2}}(\mathcal{E}(\mu))\leq(2\sigma^{2}P)\frac{1}{\mu^{2}}+\left(\frac{L_{1}^{2}(P+6)^{3}}{4}\right)\mu^{2}.$
The authors in [CW] observe that the right-hand side of the above inequality
is uniformly convex for $\mu>0$, taking the form $t(\mu)=a\mu^{-2}+b\mu^{2}$
for positive constants $a$ and $b$; calculus shows that the minimizer of $t$
for $\mu>0$ is $\mu^{*}:=\left(a/b\right)^{1/4}$ with $t(\mu^{*})=2\sqrt{ab}$.
Using $a=2\sigma^{2}P$ and $b=(L_{1}^{2}(P+6)^{3})/4$, we recover
$\mu^{*}=\left(\frac{8\sigma^{2}P}{L_{1}^{2}(P+6)^{3}}\right)^{1/4}$, the
optimal (in the sense of minimizing the upper bound on $\mathcal{E}$)
smoothing step length proven in [CW]. Our optimal choice of smoothing, given
the information we have available, will require us to swap out $L_{1}$ and
$\sigma^{2}$ in $\mu^{*}$ with their estimates, $\hat{L_{1}}$ and
$\hat{\sigma}^{2}$, recovering $\hat{\mu}^{*}$ (23). This particular choice
$\mu=\hat{\mu^{*}}$ can be plugged into (34), which gives us the bound
(35)
$\mathbb{E}_{u,\xi_{1},\xi_{2}}(\mathcal{E}(\hat{\mu^{*}}))\leq\frac{K_{1}+K_{2}}{\sqrt{2K_{1}K_{2}}}\sigma
L_{1}\sqrt{P(P+6)^{3}},$
our main result. $\blacksquare$
Note that we can recover the exact result of STARS Theorem 4.3 by taking
$K_{1}=K_{2}=1$, the case in which our hyperparameters are estimated exactly.
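The factor $(K_{1}+K_{2})/\sqrt{2K_{1}K_{2}}$ in (24) quantifies the cost of mis-estimation: by the AM–GM inequality it is at least $\sqrt{2}$, with equality exactly when $K_{1}=K_{2}$, i.e. when both hyperparameters are mis-scaled by the same factor. A small numerical illustration:

```python
import math

def penalty(K1, K2):
    """Factor multiplying sigma * L1 * sqrt(P (P+6)^3) in the bound (24)."""
    return (K1 + K2) / math.sqrt(2.0 * K1 * K2)

assert math.isclose(penalty(1.0, 1.0), math.sqrt(2.0))  # exact estimates: original [CW] bound
assert math.isclose(penalty(9.0, 9.0), math.sqrt(2.0))  # equal mis-scaling does not inflate this bound
assert penalty(4.0, 0.25) > penalty(1.0, 1.0)           # unbalanced errors do inflate it
```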
We next derive an upper bound on $\mathbb{E}(||s_{\mu_{k}}||^{2})$, where
$\mathbb{E}$ will now denote the expectation over every noise draw and random
vector used in STARS up to (and including) the $k$-th iterate; that is, the
expectations are now taken with respect to $\xi_{0},\ldots,\xi_{k}$ and
$u^{(1)},\ldots,u^{(k)}$ unless stated otherwise. Recall that $s_{\mu_{k}}$, the
stochastic gradient-free oracle, was defined in (20). We prove this result in
a modified Lemma very similar to STARS Lemma 4.4 in [CW]; similarly to the
result above, we will only use the estimated values to $L_{1}$ and $\sigma$.
STARS Lemma 4.4 (Modified): We assume random vectors $u^{(k)}$ are drawn
according to (1); $f\in\mathcal{C}^{1,1}(\Lambda)$ and $f$ is convex; and that
the i.i.d. noise draws $\epsilon(\xi)$ are additive, zero mean, with bounded
variance $\sigma^{2}$ for all $\xi$. If we use $\mu_{k}=\hat{\mu^{*}}$ as in
(23) for all STARS iterates $k$, then noting $K_{1}>0$ and $K_{2}>0$ as in
(22), STARS generates steps satisfying
(36) $\mathbb{E}(||s_{\mu_{k}}||^{2})\leq 2(P+4)||\nabla
f(x^{(k)})||^{2}+\frac{3K_{1}+K_{2}}{\sqrt{2K_{1}K_{2}}}L_{1}\sigma\sqrt{P(P+6)^{3}}.$
Proof: First, we set $\mu_{k}=\hat{\mu^{*}}$. For a STARS iterate $k$, let
(37) $g_{0}(x^{(k)}):=\langle\nabla f(x^{(k)}),u^{(k)}\rangle u^{(k)},$
which is the exact directional derivative of $f$ at the point
$x^{(k)}\in\Lambda$ in the direction $u^{(k)}$, scaled by $u^{(k)}$. We can
use this notation to re-express (35) as
(38) $\mathbb{E}\left(||s_{\mu_{k}}||^{2}-2\langle
s_{\mu_{k}},g_{0}(x^{(k)})\rangle+||g_{0}(x^{(k)})||^{2}\right)\leq\frac{K_{1}+K_{2}}{\sqrt{2K_{1}K_{2}}}\sigma
L_{1}\sqrt{P(P+6)^{3}},$
where we have also expanded $\mathcal{E}$, defined in (21). Recalling that all
draws of the noise are zero-mean, the expectation of the oracle $s_{\mu_{k}}$
(defined in (20)) with respect to the appearing noise draws $\xi_{k-1}$ and
$\xi_{k}$ is given by
(39)
$\mathbb{E}_{\xi_{k-1},\xi_{k}}(s_{\mu_{k}})=\frac{f(x^{(k)}+\mu_{k}u^{(k)})-f(x^{(k)})}{\mu_{k}}u^{(k)}=g_{\mu}(x^{(k)}),$
which is the (noise-free) first-order approximation to the directional
derivative of $f$ in the direction of $u$, defined in (6). The linearity of
$\mathbb{E}$ allows us to rewrite (38) as
(40) $\mathbb{E}\left(||s_{\mu_{k}}||^{2}\right)\leq\mathbb{E}\left(2\langle
s_{\mu_{k}},g_{0}(x^{(k)})\rangle-||g_{0}(x^{(k)})||^{2}\right)+C_{1},$
where $C_{1}:=\frac{K_{1}+K_{2}}{\sqrt{2K_{1}K_{2}}}\sigma
L_{1}\sqrt{P(P+6)^{3}}$. The only term involving noise draws on the right-hand
side of (40) is $s_{\mu_{k}}$; thus, passing through the expectation with
respect to all noise draws $\xi_{k}$, we can use our result in (39) to write
(41)
$\begin{split}\mathbb{E}\left(||s_{\mu_{k}}||^{2}\right)&\leq\mathbb{E}\left(2\langle
s_{\mu_{k}},g_{0}(x^{(k)})\rangle-||g_{0}(x^{(k)})||^{2}\right)+C_{1}\\\
&=\mathbb{E}_{u^{(k)}}\left(2\langle
g_{\mu}(x^{(k)}),g_{0}(x^{(k)})\rangle-||g_{0}(x^{(k)})||^{2}\right)+C_{1}.\end{split}$
Adding and subtracting $||g_{\mu}(x^{(k)})||^{2}$ inside of the
$\mathbb{E}_{u^{(k)}}$ (and then factoring) in (41) gives
(42)
$\mathbb{E}\left(||s_{\mu_{k}}||^{2}\right)\leq\mathbb{E}_{u^{(k)}}\left(-||g_{0}(x^{(k)})-g_{\mu}(x^{(k)})||^{2}+||g_{\mu}(x^{(k)})||^{2}\right)+C_{1}.$
Using the linearity of $\mathbb{E}_{u^{(k)}}$ and observing that
$-||x||^{2}\leq 0$ for all $x\in\Lambda$ gives
(43)
$\mathbb{E}\left(||s_{\mu_{k}}||^{2}\right)\leq\mathbb{E}_{u^{(k)}}\left(||g_{\mu}(x^{(k)})||^{2}\right)+C_{1}.$
Recalling the result in (18), we have arrived at
(44) $\mathbb{E}\left(||s_{\mu_{k}}||^{2}\right)\leq 2(P+4)||\nabla
f(x^{(k)})||^{2}+\frac{\mu_{k}^{2}L_{1}^{2}}{2}(P+6)^{3}+C_{1}.$
Equivalently, recalling (22) – which equates $L_{1}^{2}$ to $\hat{L_{1}}^{2}$
scaled by a positive constant $K_{1}$ – we also have
(45) $\mathbb{E}\left(||s_{\mu_{k}}||^{2}\right)\leq 2(P+4)||\nabla
f(x^{(k)})||^{2}+\frac{K_{1}\mu_{k}^{2}\hat{L_{1}}^{2}}{2}(P+6)^{3}+C_{1}.$
Recall we have set $\mu_{k}=\hat{\mu^{*}}$ (from (23)) in (44) for all
iterations $k$. Plugging in this value, we obtain
(46) $\mathbb{E}\left(||s_{\mu_{k}}||^{2}\right)\leq 2(P+4)||\nabla
f(x^{(k)})||^{2}+C_{2},$
where
$C_{2}:=\frac{3K_{1}+K_{2}}{\sqrt{2K_{1}K_{2}}}L_{1}\sigma\sqrt{P(P+6)^{3}}=\frac{3K_{1}+K_{2}}{\sqrt{2}}\hat{L_{1}}\hat{\sigma}\sqrt{P(P+6)^{3}}$,
our main result. $\blacksquare$
Note that in a fashion analogous to our modification of STARS Theorem 4.3, we
can recover the exact result of STARS Lemma 4.4 by taking $K_{1}=K_{2}=1$.
We can now present the final result, which shows that STARS converges with
estimates replacing the exact values of $L_{1}$ and $\sigma$. We need just a bit
more notation, borrowed directly from [CW]. Let $x^{*}\in\Lambda$ denote a
minimizer with the associated stochastic-free function evaluation
$f^{*}:=f(x^{*})$. Also, define
$\mathcal{Q}_{k}:=\\{\xi_{0},\ldots,\xi_{k}\\}$ and
$\mathcal{U}_{k}:=\\{u^{(1)},\ldots,u^{(k)}\\}$, which are two sets containing
all random variables that appear in STARS up through iteration $k$. Let
$\phi_{0}:=f(x^{(0)})$ and
$\phi_{k}:=\mathbb{E}_{\mathcal{Q}_{k-1},\mathcal{U}_{k-1}}(f(x^{(k)}))$,
$k\geq 1.$ Define $M\in\mathbb{N}$ as the total number of STARS iterates
performed.
STARS Theorem 4.5 (Modified): Let Assumptions 3.1, 4.1, and 4.2 hold – here
those assumptions mean that random vectors $u^{(k)}$ are drawn according to
(1); $f\in\mathcal{C}^{1,1}(\Lambda)$ and $f$ is convex; and that the i.i.d.
noise draws $\epsilon(\xi)$ are additive, zero mean, with bounded variance
$\sigma^{2}$ for all $\xi$. Let $\\{x^{(k)}\\}_{k\geq 0}$ denote a sequence of
STARS iterates formed using a fixed step length $h_{k}=\hat{h}$ and fixed
smoothing $\mu_{k}=\hat{\mu^{*}}$ (both given in (23)) for all STARS iterates
$k$. Finally, we require $0<K_{1}<4$ and $K_{2}>0$, the values defined in
(22). Then for any total number of STARS iterations $M$,
(47)
$\sum_{k=0}^{M}\frac{\phi_{k}-f^{*}}{M+1}\leq\frac{4L_{1}(P+4)||x^{(0)}-x^{*}||^{2}}{\sqrt{K_{1}}(2-\sqrt{K_{1}})(M+1)}+\frac{4\sigma(P+4)}{\sqrt{2K_{2}}(2-\sqrt{K_{1}})}C_{5},$
where $C_{5}:=\sqrt{K_{1}}\cdot 0.036+\frac{3K_{1}+K_{2}}{16}\cdot 1.034$.
Proof: For a STARS iterate $k\geq 0$, let $r_{k}:=||x^{(k)}-x^{*}||$, the
distance from a given STARS iterate to a true minimizer of $f$, denoted
$x^{*}\in\Lambda$. Along the lines of [CW], we will bound
$\mathbb{E}(r_{k+1}^{2})$ in terms of $r_{k}^{2}$, the expected change in $x$
after each iteration in our setting. We note that with this viewpoint, every
iterate so far is known; that is, the sequence of vectors
$\\{x^{(i)}\\}_{i=0}^{k}$ is known/fixed up until index $k$, and thus, the
sequence $\\{r_{i}\\}_{i=0}^{k}$ of distances is also known/fixed. In
particular, both sequences are non-stochastic, meaning that they are constant
with respect to any expected values we may apply to them, in the context of a
given step.
First, observe that by using definitions, we may write
(48)
$r_{k+1}^{2}=||x^{(k+1)}-x^{*}||^{2}=||x^{(k)}-\hat{h}s_{\mu_{k}}-x^{*}||^{2}.$
Rearranging and expanding,
(49)
$r_{k+1}^{2}=||(x^{(k)}-x^{*})-\hat{h}s_{\mu_{k}}||^{2}=r_{k}^{2}-2\hat{h}\langle
s_{\mu_{k}},x^{(k)}-x^{*}\rangle+\hat{h}^{2}||s_{\mu_{k}}||^{2}.$
Let $\mathbb{E}$ continue to denote the expectation over $\mathcal{Q}_{k}$ and
$\mathcal{U}_{k}$, all of the random vectors and noise draws defining our
first $k$ iterates. Recall that one of our current assumptions is that all of
the $x^{(k)}$’s (and thus also all of the $r_{k}$’s) are given/fixed, as well
as $\hat{h}$ and $x^{*}$. Hence, $\mathbb{E}(r_{k}^{2})=r_{k}^{2}$,
$\mathbb{E}(x^{(k)})=x^{(k)}$, $\mathbb{E}(\hat{h})=\hat{h}$, and
$\mathbb{E}(x^{*})=x^{*}$ as we already have these constant objects in hand.
However, the next iterate, $x^{(k+1)}$, will depend on the stochastic direction
$u^{(k)}$, as well as the stochastic noise values $\xi_{k-1}$ and $\xi_{k}$ –
and since these stochastic objects determine $r_{k+1}$ and
$s_{\mu_{k}}$, the expectations will not drop from these terms.
Applying $\mathbb{E}$ to both sides of (49),
(50)
$\mathbb{E}(r_{k+1}^{2})=r_{k}^{2}-2\hat{h}\langle\mathbb{E}\left(s_{\mu_{k}}\right),x^{(k)}-x^{*}\rangle+\hat{h}^{2}\mathbb{E}\left(||s_{\mu_{k}}||^{2}\right).$
We begin by noting the appearance of $\mathbb{E}\left(s_{\mu_{k}}\right)$,
which we can characterize using a pair of previous results. First, we found in
(39) that $\mathbb{E}_{\xi_{k-1},\xi_{k}}(s_{\mu_{k}})=g_{\mu}(x^{(k)})$.
Next, we recall (16), $\mathbb{E}_{u}\left(g_{\mu}(x)\right)=\nabla
f_{\mu}(x)\quad\forall x\in\Lambda$. Putting these results together, we have
$\mathbb{E}\left(s_{\mu_{k}}\right)=\nabla f_{\mu}(x^{(k)})$. We can invoke
the main result in STARS Lemma 4.4 (Modified) (summarized by (46)) to bound
$\mathbb{E}\left(||s_{\mu_{k}}||^{2}\right)$ and our new characterization of
$\mathbb{E}\left(s_{\mu_{k}}\right)=\nabla f_{\mu}(x^{(k)})$ to write
(51) $\mathbb{E}(r_{k+1}^{2})\leq r_{k}^{2}-2\hat{h}\langle\nabla
f_{\mu}(x^{(k)}),x^{(k)}-x^{*}\rangle+\hat{h}^{2}\left(2(P+4)||\nabla
f(x^{(k)})||^{2}+C_{2}\right).$
Now, to reach our next key result, we shall need to verify that $f_{\mu}$ is
convex. Let $x,y\in\Lambda$ and $\mu>0$. By definition,
$f_{\mu}(y)=\mathbb{E}_{u}(f(y+\mu u))$. Recalling that we have assumed the
convexity of $f$ we invoke (5), writing
(52) $f_{\mu}(y)=\mathbb{E}_{u}(f(y+\mu u))\geq\mathbb{E}_{u}\left(f(x+\mu
u)+\langle\nabla f(x+\mu u),y-x\rangle\right).$
Now using the properties of $\mathbb{E}_{u}$ and definitions,
(53) $f_{\mu}(y)\geq\mathbb{E}_{u}\left(f(x+\mu
u)\right)+\langle\nabla\left(\mathbb{E}_{u}\left(f(x+\mu
u)\right)\right),y-x\rangle=f_{\mu}(x)+\langle\nabla f_{\mu}(x),y-x\rangle,$
which holds for any $x,y\in\Lambda$ and $\mu>0$, proving that $f_{\mu}$ is
convex (so long as $f$ is also convex). We shall require a restatement of (53)
for our purposes; we also have for any $x,y\in\Lambda$ and $\mu>0$:
(54) $\langle\nabla f_{\mu}(x),x-y\rangle\geq f_{\mu}(x)-f_{\mu}(y).$
Hence, plugging $x=x^{(k)}$ and $y=x^{*}$ into (54) and multiplying both sides
of the inequality by $-2\hat{h}$, we obtain
(55) $-2\hat{h}\langle\nabla
f_{\mu}(x^{(k)}),x^{(k)}-x^{*}\rangle\leq-2\hat{h}\left(f_{\mu}(x^{(k)})-f_{\mu}(x^{*})\right).$
Now (12) implies $-2\hat{h}f_{\mu}(x^{(k)})\leq-2\hat{h}f(x^{(k)})$. Hence,
(56) $-2\hat{h}\langle\nabla
f_{\mu}(x^{(k)}),x^{(k)}-x^{*}\rangle\leq-2\hat{h}\left(f(x^{(k)})-f_{\mu}(x^{*})\right).$
Recalling (4), we can write $||\nabla f(x^{(k)})||^{2}\leq
2L_{1}(f(x^{(k)})-f(x^{*}))$. Using this result along with (56), we can update
our bound in (51), writing
(57) $\mathbb{E}(r_{k+1}^{2})\leq
r_{k}^{2}-2\hat{h}\left(f(x^{(k)})-f_{\mu}(x^{*})\right)+\hat{h}^{2}\left(4L_{1}(P+4)(f(x^{(k)})-f(x^{*}))+C_{2}\right).$
Now we add and subtract $-2\hat{h}f(x^{*})$ on the RHS of (57), obtaining
(58) $\mathbb{E}(r_{k+1}^{2})\leq
r_{k}^{2}-2\hat{h}\left(f(x^{*})-f_{\mu}(x^{*})\right)-2\hat{h}(f(x^{(k)})-f(x^{*}))+\hat{h}^{2}\left(4L_{1}(P+4)(f(x^{(k)})-f(x^{*}))+C_{2}\right).$
Now (13) implies $-\frac{\mu^{2}}{2}L_{1}P\leq
f(x)-f_{\mu}(x)\implies-2\hat{h}(f(x)-f_{\mu}(x))\leq\hat{h}\mu^{2}L_{1}P$ for
all $x$, in particular for $x=x^{*}$; hence,
(59) $\mathbb{E}(r_{k+1}^{2})\leq
r_{k}^{2}+\hat{h}\mu^{2}L_{1}P-2\hat{h}(f(x^{(k)})-f(x^{*}))+\hat{h}^{2}\left(4L_{1}(P+4)(f(x^{(k)})-f(x^{*}))+C_{2}\right).$
Manipulating and rearranging a little,
(60) $\mathbb{E}(r_{k+1}^{2})\leq
r_{k}^{2}-2\hat{h}(f(x^{(k)})-f(x^{*}))(1-2\hat{h}L_{1}(P+4))+C_{3},$
where $C_{3}:=\hat{h}\mu^{2}L_{1}P+\hat{h}^{2}C_{2}.$ Now recall that we have
fixed $\hat{h}=(4\hat{L_{1}}(P+4))^{-1}$; plugging in this particular value,
we obtain
(61) $\mathbb{E}(r_{k+1}^{2})\leq
r_{k}^{2}-\frac{\sqrt{K_{1}}(2-\sqrt{K_{1}})}{4L_{1}(P+4)}(f(x^{(k)})-f(x^{*}))+C_{3}.$
We rewrite $C_{3}$ (with our $\hat{h}$ and $\hat{\mu}^{*}$ plugged in) with a
function $g(P)$, $P=\dim\Lambda$ given by:
(62)
$C_{3}=\frac{\sqrt{K_{1}}}{\sqrt{K_{2}}}\cdot\frac{\sigma}{\sqrt{2}L_{1}}g(P),\quad\,g(P):=\left(\frac{\sqrt{K_{1}}}{P+4}\cdot\left(\frac{P}{P+6}\right)^{3/2}+\frac{3K_{1}+K_{2}}{16}\cdot\frac{\sqrt{P(P+6)^{3}}}{(P+4)^{2}}\right).$
We may recover the function called $g_{1}$ from [CW] by setting
$K_{1}=K_{2}=1$, so that our estimates to $L_{1}$ and $\sigma^{2}$ are exact.
We shall bound our more general $g$ by bounding each of its terms. Note that
the asymptotic analysis of each term gives us hope to find such a bound, as
the first term tends to zero as $P\to\infty$ and the second term tends to the
constant $(3K_{1}+K_{2})/16$. Using calculus and numerics along the lines of
[CW], we find
(63) $g(P)\leq\sqrt{K_{1}}\cdot 0.036+\frac{3K_{1}+K_{2}}{16}\cdot 1.034.$
With $K_{1}=K_{2}=1$, we have $g(P)\approx 0.2895<3/10$ again matching [CW]
(using the bound of $3/10$). We may now define a constant, $C_{4}$, which
bounds $C_{3}$ over all dimensions $P$ in terms of $L_{1}$, $\sigma$, $K_{1}$,
and $K_{2}$:
(64)
$C_{3}\leq\frac{\sqrt{K_{1}}}{\sqrt{K_{2}}}\cdot\frac{\sigma}{\sqrt{2}L_{1}}\left(\sqrt{K_{1}}\cdot
0.036+\frac{3K_{1}+K_{2}}{16}\cdot 1.034\right)=:C_{4}.$
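The dimension-independent bound (63), and hence the constant $C_{4}$ above, can be checked numerically across dimensions and estimation errors. A sketch; the $(K_{1},K_{2})$ pairs are arbitrary test values:

```python
import numpy as np

def g(P, K1, K2):
    """The function g(P) defined in (62)."""
    t1 = np.sqrt(K1) / (P + 4) * (P / (P + 6)) ** 1.5
    t2 = (3 * K1 + K2) / 16 * np.sqrt(P * (P + 6) ** 3) / (P + 4) ** 2
    return t1 + t2

def g_bound(K1, K2):
    """Right-hand side of (63)."""
    return np.sqrt(K1) * 0.036 + (3 * K1 + K2) / 16 * 1.034

P = np.arange(1, 10_001, dtype=float)
for K1, K2 in [(1.0, 1.0), (0.25, 4.0), (3.0, 0.5)]:
    assert np.all(g(P, K1, K2) <= g_bound(K1, K2))
```

The two terms of $g$ peak at different values of $P$ (near $P=8$ and $P\approx 12$, respectively), which is why the bound (63) combines their separate maxima.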
Thus, we can update the bound in (61) to write
(65) $\mathbb{E}(r_{k+1}^{2})\leq
r_{k}^{2}-\frac{\sqrt{K_{1}}(2-\sqrt{K_{1}})}{4L_{1}(P+4)}(f(x^{(k)})-f(x^{*}))+C_{4}.$
Applying the expectation over $\mathcal{U}_{k}$ and $\mathcal{Q}_{k}$,
(66)
$\mathbb{E}_{\mathcal{U}_{k},\mathcal{Q}_{k}}(r_{k+1}^{2})\leq\mathbb{E}_{\mathcal{U}_{k-1},\mathcal{Q}_{k-1}}(r_{k}^{2})-\frac{\sqrt{K_{1}}(2-\sqrt{K_{1}})}{4L_{1}(P+4)}(\phi_{k}-f^{*})+C_{4}.$
Rearranging, we have
(67)
$\phi_{k}-f^{*}\leq\frac{4L_{1}(P+4)}{\sqrt{K_{1}}(2-\sqrt{K_{1}})}\left(\mathbb{E}_{\mathcal{U}_{k-1},\mathcal{Q}_{k-1}}(r_{k}^{2})-\mathbb{E}_{\mathcal{U}_{k},\mathcal{Q}_{k}}(r_{k+1}^{2})+C_{4}\right).$
Summing over $k=0,\ldots,M$ and dividing by $M+1$, we obtain
(68)
$\sum_{k=0}^{M}\frac{\phi_{k}-f^{*}}{M+1}\leq\frac{4L_{1}(P+4)}{\sqrt{K_{1}}(2-\sqrt{K_{1}})(M+1)}\left(r_{0}^{2}-\mathbb{E}_{\mathcal{U}_{M},\mathcal{Q}_{M}}(r_{M+1}^{2})\right)+\frac{4L_{1}(P+4)}{\sqrt{K_{1}}(2-\sqrt{K_{1}})}C_{4}.$
Dropping the nonpositive term on the RHS (involving
$\mathbb{E}_{\mathcal{U}_{M},\mathcal{Q}_{M}}(r_{M+1}^{2})$), plugging in
the definition of $r_{0}^{2}$ (which is a constant, and not stochastic), and noting
that we require $0<K_{1}<4$ and $K_{2}>0$, we have found
(69)
$\sum_{k=0}^{M}\frac{\phi_{k}-f^{*}}{M+1}\leq\frac{4L_{1}(P+4)||x^{(0)}-x^{*}||^{2}}{\sqrt{K_{1}}(2-\sqrt{K_{1}})(M+1)}+\frac{4\sigma(P+4)}{\sqrt{2K_{2}}(2-\sqrt{K_{1}})}C_{5},$
where $C_{5}:=\sqrt{K_{1}}\cdot 0.036+\frac{3K_{1}+K_{2}}{16}\cdot 1.034$, our
main result. $\blacksquare$
Again, we recover the exact result of STARS Theorem 4.5 by taking
$K_{1}=K_{2}=1$.
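To see the modified results in action, one can run the STARS iteration $x^{(k+1)}=x^{(k)}-\hat{h}s_{\mu_{k}}$ with deliberately mis-estimated hyperparameters. In the sketch below, $L_{1}$ is overestimated by a factor of 2 (so $K_{1}=1/4$) and $\sigma$ is underestimated by a factor of 2 (so $K_{2}=4$), which satisfies $0<K_{1}<4$; the quadratic test problem and all numbers are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

P, sigma, L1 = 10, 1e-5, 4.0
H = L1 * np.eye(P)                         # Hessian; gradient is L1-Lipschitz
f = lambda x: 0.5 * x @ H @ x              # convex quadratic, f* = 0 at x* = 0

L1_hat, sigma_hat = 2.0 * L1, 0.5 * sigma  # K1 = 1/4 and K2 = 4, per (22)
mu_hat = (8 * sigma_hat**2 * P / (L1_hat**2 * (P + 6) ** 3)) ** 0.25
h_hat = 1.0 / (4 * L1_hat * (P + 4))       # the fixed choices in (23)

x = rng.standard_normal(P)
f0 = f(x)
for _ in range(2000):
    u = rng.standard_normal(P)
    eps1, eps2 = sigma * rng.standard_normal(2)
    s = ((f(x + mu_hat * u) + eps1 - (f(x) + eps2)) / mu_hat) * u   # oracle (20)
    x = x - h_hat * s                      # STARS step with estimated hyperparameters

assert f(x) < 0.05 * f0                    # large decrease despite mis-estimation
```

Consistent with the modified Theorem 4.5, mis-estimation slows convergence through the factor $\sqrt{K_{1}}(2-\sqrt{K_{1}})$ but does not prevent it.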
Remark: We now mimic the analysis in [CW] to explain the implications of our
modified Theorem 4.5. First, let $||x^{(0)}-x^{*}||^{2}\leq R^{2}$. Define
$x^{\dagger}:=\text{argmin}_{x}\\{f(x):x\in\\{x^{(0)},\ldots,x^{(M)}\\}\\}$ and
$\phi_{\dagger}:=\mathbb{E}_{\mathcal{U}_{M},\mathcal{Q}_{M}}(f(x^{\dagger}))$.
Then the value $\phi_{\dagger}-f^{*}$ must be less than or equal to the
average improvement for any given run of STARS; that is, (69), along with our
new definitions, implies that
(70)
$\phi_{\dagger}-f^{*}\leq\sum_{k=0}^{M}\frac{\phi_{k}-f^{*}}{M+1}\leq\frac{4L_{1}(P+4)}{\sqrt{K_{1}}(2-\sqrt{K_{1}})(M+1)}R^{2}+\frac{4\sigma(P+4)}{\sqrt{2K_{2}}(2-\sqrt{K_{1}})}C_{5}.$
Along the lines of [CW], let us now assume that we wish to achieve a final
accuracy of $\epsilon_{\text{tol}}>0$. Then we will need
$\phi_{\dagger}-f^{*}\leq\epsilon_{\text{tol}}$. If we take
(71)
$\frac{4\sigma(P+4)}{\sqrt{2K_{2}}(2-\sqrt{K_{1}})}C_{5}\leq\frac{\epsilon_{\text{tol}}}{2},$
then we must require that the noise not exceed the following threshold:
(72)
$\sigma\leq\frac{\sqrt{2K_{2}}(2-\sqrt{K_{1}})\epsilon_{\text{tol}}}{8(P+4)C_{5}}.$
If we satisfy (72), then we can achieve $\epsilon_{\text{tol}}$ accuracy as
long as
(73)
$\frac{4L_{1}(P+4)R^{2}}{\sqrt{K_{1}}(2-\sqrt{K_{1}})(M+1)}\leq\frac{\epsilon_{\text{tol}}}{2}\iff
M\geq\frac{8L_{1}(P+4)R^{2}}{\sqrt{K_{1}}(2-\sqrt{K_{1}})\epsilon_{\text{tol}}}-1.$
Hence, we achieve $\epsilon_{\text{tol}}$ accuracy as long as the noise is
small enough, and $M$ is large enough, with details of those bounds given by
(72) and (73). In particular, we achieve $\epsilon_{\text{tol}}$ accuracy in
(74)
$M\sim\mathcal{O}\left(\frac{L_{1}PR^{2}}{\sqrt{K_{1}}(2-\sqrt{K_{1}})\epsilon_{\text{tol}}}\right)$
iterations.
This analysis also shows that, given a particular noise level $\sigma$, the
achievable accuracy can be no better (i.e., no smaller) than the value
given below:
(75)
$\epsilon_{\text{tol}}\geq\frac{8(P+4)C_{5}}{\sqrt{2K_{2}}(2-\sqrt{K_{1}})}\sigma.$
As usual, we recover the results in [CW] by setting $K_{1}=K_{2}=1$.
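The remark above translates directly into a planning computation: given the estimates and a target accuracy, (72) and (73) yield a noise ceiling and a minimum iteration budget. A sketch using the dimension-independent constant $C_{5}$ from (47); all numerical inputs are placeholders:

```python
import math

def stars_budget(L1, P, R2, eps_tol, K1, K2):
    """Noise ceiling (72) and minimum iteration count (73) for accuracy eps_tol."""
    C5 = math.sqrt(K1) * 0.036 + (3 * K1 + K2) / 16 * 1.034
    sigma_max = math.sqrt(2 * K2) * (2 - math.sqrt(K1)) * eps_tol / (8 * (P + 4) * C5)
    M_min = 8 * L1 * (P + 4) * R2 / (math.sqrt(K1) * (2 - math.sqrt(K1)) * eps_tol) - 1
    return sigma_max, math.ceil(M_min)

sigma_max, M_min = stars_budget(L1=1.0, P=50, R2=1.0, eps_tol=1e-2, K1=1.0, K2=1.0)
assert sigma_max > 0 and M_min > 0
# Poorer estimates (K1 far from 1) shrink the noise ceiling and raise the budget:
assert stars_budget(1.0, 50, 1.0, 1e-2, 3.5, 1.0)[1] > M_min
```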
### 4.3. ASTARS Convergence
We now investigate the convergence of ASTARS. We will build upon the
theoretical results of the last section, meaning [CW] will be heavily invoked
again in this section. Given the exact $j$-dimensional AS $\mathcal{A}$ of
$f$, we shall also need results generally regarding the distance between the
minimum of $f$ and the minimum that ASTARS obtains. We shall also need to
discuss the corresponding minimizers. Recall that we denote the minimizer of
$f$ with $x^{*}$ and we have the stochastic-free minimum of $f$,
$f^{*}=f(x^{*})$, as before. Since ASTARS steps in $\mathcal{A}$ only, given
an initial iterate $x^{(0)}$, the minimizer ASTARS is able to attain will be
of the form
$x^{*}_{\mathcal{A}}:=P_{\mathcal{A}}(x^{*})+P_{\mathcal{I}}(x^{(0)})$. We
analogously define the stochastic-free
$f^{*}_{\mathcal{A}}:=f(x^{*}_{\mathcal{A}})$. Since the initial iterate will
not be changed in $\mathcal{I}$ during ASTARS, the components in
$x^{*}_{\mathcal{A}}$ are fixed in $\mathcal{I}$ at the given initial iterate,
given by the inactive coordinates of $P_{\mathcal{I}}(x^{(0)})$. However, we
do step towards the true $x^{*}$ in its coordinates corresponding to
$\mathcal{A}$, which is why we also obtain $P_{\mathcal{A}}(x^{*})$ in the
definition of $x^{*}_{\mathcal{A}}$. Notice that with our definitions,
$x^{*}-x^{*}_{\mathcal{A}}=P_{\mathcal{I}}(x^{*}-x^{(0)})$. Again, the
difference between $x^{*}$ and $x^{*}_{\mathcal{A}}$ lies entirely in the
inactive subspace $\mathcal{I}$, and is given exactly by the projection of
$x^{*}-x^{(0)}$ into $\mathcal{I}$, since ASTARS iterations do not perturb
$\mathcal{I}$-coordinates.
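The decomposition above is easy to check numerically. A minimal sketch with hypothetical dimensions and randomly generated stand-ins for $W$, $x^{*}$, and $x^{(0)}$ (none come from an actual ASTARS run):

```python
import numpy as np

rng = np.random.default_rng(0)
P, j = 6, 2   # hypothetical ambient and active-subspace dimensions

# Hypothetical sensitivity matrix W (symmetric PSD); its leading j
# eigenvectors span A and the trailing P - j span I.
B = rng.standard_normal((P, P))
W = B @ B.T
_, V = np.linalg.eigh(W)
V = V[:, ::-1]                       # descending eigenvalue order
V_A, V_I = V[:, :j], V[:, j:]
P_A, P_I = V_A @ V_A.T, V_I @ V_I.T  # orthogonal projectors onto A and I

x_star = rng.standard_normal(P)      # stands in for the true minimizer x*
x0 = rng.standard_normal(P)          # stands in for the initial iterate x^(0)
x_star_A = P_A @ x_star + P_I @ x0   # the minimizer ASTARS can attain

# The gap to x* lies entirely in I (equal in norm to P_I(x^(0) - x*)).
assert np.allclose(x_star - x_star_A, P_I @ (x_star - x0))
assert np.allclose(P_A + P_I, np.eye(P))
```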
ASTARS estimates directional derivatives of $f$ strictly for directions in
$\mathcal{A}$ – iterates are not perturbed in $\mathcal{I}$. Consequently, the
ASTARS gradient oracle can only provide gradient information in the $j$ active
directions of $f$. Hence, the gradients we approximate in ASTARS are denoted
$\nabla_{\mathcal{A}}f(x)\in\mathcal{A}$, which is the gradient of $f$ in
$\mathcal{A}$. Gradients in $\mathcal{I}$ will also be needed for our proofs,
and they are defined similarly with $\nabla_{\mathcal{I}}f(x)\in\mathcal{I}$.
Note the subspace gradients $\nabla_{\mathcal{A}}f(x)$ and
$\nabla_{\mathcal{I}}f(x)$ are still computed in $\Lambda$, but each will fall
into their respective subspaces upon computation. In particular,
$\nabla_{\mathcal{A}}f(x)_{i}=0$ for $i=j+1,\ldots,P$ and
$\nabla_{\mathcal{I}}f(x)_{i}=0$ for $i=1,\ldots,j$.
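These subspace gradients are simply orthogonal projections of the full gradient onto $\mathcal{A}$ and $\mathcal{I}$. A small numerical sketch on a hypothetical quadratic (all dimensions and matrices are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
P, j = 5, 2

# Orthonormal eigenbasis V; the first j columns span A, the rest span I.
V, _ = np.linalg.qr(rng.standard_normal((P, P)))
V_A = V[:, :j]

# Hypothetical quadratic f(x) = 0.5 x^T H x, so grad f(x) = H x.
H = V @ np.diag([10.0, 8.0, 1e-3, 1e-3, 1e-3]) @ V.T
x = rng.standard_normal(P)
g = H @ x

g_A = V_A @ (V_A.T @ g)   # grad_A f(x): projection of the gradient onto A
g_I = g - g_A             # grad_I f(x): the complementary projection onto I

assert np.allclose(g_A + g_I, g)
# In the eigenbasis, grad_A has zero coordinates for i = j+1, ..., P,
# and grad_I has zero coordinates for i = 1, ..., j.
assert np.allclose((V.T @ g_A)[j:], 0.0)
assert np.allclose((V.T @ g_I)[:j], 0.0)
```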
We first present a lemma which will be used in both the ASTARS and FAASTARS
convergence analyses. Recall that the vectors $r^{(k)}$ and $\tilde{r}^{(k)}$
have components which are drawn from a $N(0,1)$ distribution. These vectors
are used to form random coefficients in a linear combination in $\mathcal{A}$
(or $\tilde{\mathcal{A}}$) to perform ASTARS steps. Here, we write
$r^{(k)}\sim N(0,I_{j})$, a multivariate normal distribution, where $0$
denotes the zero vector in $\Lambda^{j}$ and $I_{j}$ is the $j\times j$
identity matrix, so that every element of $r^{(k)}$ has unit variance and the
elements are mutually independent, with zero covariance between distinct
elements. Analogously, we have $\tilde{r}^{(k)}\sim N(0,I_{\tilde{j}})$. In the
first lemma, we show that the random directions for ASTARS steps $u^{(k)}$ and
$\tilde{u}^{(k)}$ are also distributed normally with zero mean and unit
covariance.
ASTARS/FAASTARS Lemma 1: Let $\tilde{\mathcal{A}}$ denote a
$\tilde{j}$-dimensional approximated AS of $\hat{f}$ and let $\mathcal{A}$
denote the true $j$-dimensional AS of $f$. Recall
$V_{\mathcal{A}}:=V_{1:P,1:j}$, where
$V$ comes from the eigendecomposition (ED) of the exact sensitivity matrix $W$;
as well, recall that
$\tilde{V}_{\tilde{\mathcal{A}}}:=\tilde{V}_{1:P,1:\tilde{j}}$, where
$\tilde{V}$ comes from the ED of the sensitivity matrix $\tilde{W}$,
approximated from samples of $\hat{f}$. Let $r^{(k)}$ denote a random vector
such that $r^{(k)}\sim N(0,I_{j})$; likewise, let $\tilde{r}^{(k)}$ denote a
random vector such that $\tilde{r}^{(k)}\sim N(0,I_{\tilde{j}})$. Let
$u^{(k)}:=V_{\mathcal{A}}r^{(k)}$ and
$\tilde{u}^{(k)}:=\tilde{V}_{\tilde{\mathcal{A}}}\tilde{r}^{(k)}$. Then both
$u^{(k)}$ and $\tilde{u}^{(k)}$ are normal random vectors; i.e., $u^{(k)}\sim
N(0,I_{j})$ and $\tilde{u}^{(k)}\sim N(0,I_{\tilde{j}})$. Also,
$u^{(k)}\in\mathcal{A}$ and $\tilde{u}^{(k)}\in\tilde{\mathcal{A}}$ for all
$k$.
Proof: We begin by considering the case in which we have the exact AS
$\mathcal{A}$. We recall that since $W$ is a real $P\times P$
symmetric matrix, its ED is $W=VQV^{\top}$ where $V$ contains the $P$
eigenvectors of $W$, which are orthonormal due to the symmetry of
$W$, meaning $V$ is an orthogonal matrix. (Note $Q$ contains the eigenvalues of
$W$ along its diagonal in descending order.) Recall that
$V_{\mathcal{A}}:=V_{1:P,1:j}$; hence, $V_{\mathcal{A}}$ has orthonormal
columns, so $(V_{\mathcal{A}})^{\top}V_{\mathcal{A}}=I_{j}$.
By definition, $u^{(k)}=V_{\mathcal{A}}r^{(k)}$. Since $r^{(k)}\sim
N(0,I_{j})$, the vector $u^{(k)}$ is Gaussian with mean zero and covariance
$V_{\mathcal{A}}I_{j}(V_{\mathcal{A}})^{\top}=V_{\mathcal{A}}(V_{\mathcal{A}})^{\top}$,
the orthogonal projector onto $\mathcal{A}$, which acts as the identity on
$\mathcal{A}$. Equivalently, the coordinates of $u^{(k)}$ in the basis
$V_{\mathcal{A}}$ are $(V_{\mathcal{A}})^{\top}u^{(k)}=r^{(k)}\sim N(0,I_{j})$.
Therefore $u^{(k)}\sim N(0,I_{j})$ within $\mathcal{A}$, our desired result for
the case of an exact AS.
In the case that we are dealing with an approximated $\tilde{j}$-dimensional
AS $\tilde{\mathcal{A}}$, the matrix $\tilde{V}_{\tilde{\mathcal{A}}}$ still
has orthonormal columns by construction, and so we can
follow the proof above analogously, only replacing $j$ with $\tilde{j}$, and
state $\tilde{u}^{(k)}\sim N(0,I_{\tilde{j}})$ as well.
To verify $u^{(k)}\in\mathcal{A}$ and $\tilde{u}^{(k)}\in\tilde{\mathcal{A}}$,
recall $u^{(k)}=V_{\mathcal{A}}r^{(k)}$. Then $u^{(k)}$ is a linear
combination of the columns of $V_{\mathcal{A}}$ with coefficients given by
$r^{(k)}$. The columns of $V_{\mathcal{A}}$ are the eigenvectors $v^{i}$,
$i=1,\ldots,j$, of the associated sensitivity matrix $W$, meaning $u^{(k)}$ is
a linear combination of the first $j$ eigenvectors of $W$. Since the span of
those $j$
eigenvectors equals $\mathcal{A}$ by definition, $u^{(k)}\in\mathcal{A}$.
(Subspaces are closed under linear combinations of their elements.) The
argument is analogous for $\tilde{u}^{(k)}\in\tilde{\mathcal{A}}$.
$\blacksquare$
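Lemma 1 can also be sanity-checked by Monte Carlo. A sketch with an invented orthonormal $V_{\mathcal{A}}$; the sample size and tolerances are arbitrary choices for illustration, not part of the lemma:

```python
import numpy as np

rng = np.random.default_rng(1)
P, j, n = 5, 2, 200_000

# Orthonormal basis for a hypothetical active subspace A.
V_A, _ = np.linalg.qr(rng.standard_normal((P, j)))

R = rng.standard_normal((n, j))  # rows are draws r^(k) ~ N(0, I_j)
U = R @ V_A.T                    # rows are u^(k) = V_A r^(k)

# Every u^(k) lies in A: projecting onto A leaves it unchanged.
assert np.allclose(U @ (V_A @ V_A.T), U, atol=1e-10)

# The empirical covariance of u^(k) approaches V_A V_A^T, the projector
# onto A, so the coordinates of u^(k) within A behave like N(0, I_j).
cov = U.T @ U / n
assert np.abs(cov - V_A @ V_A.T).max() < 0.05
```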
We now formulate a series of ASTARS results, where we assume $\mathcal{A}$ is
correct and not estimated. We first show that using a fixed step size
$h_{\mathcal{A}}$ in (16), the active smoothing parameter
$\mu_{\mathcal{A}}^{*}$ in (16) is optimal, in the sense that the error in the
gradient oracle used in Algorithm 2 is minimized. This result is a direct
corollary of ASTARS Proposition 1 above and STARS Theorem 4.3 (Modified) in the
previous section, but with $K_{1}=K_{2}=1$, so that we have $L_{1}$ and
$\sigma^{2}$ exactly to form the active hyperparameters.
ASTARS Corollary 2: Let the vectors $u_{\mathcal{A}}^{(k)}$ denote those drawn
using Algorithm 2; let $f\in\mathcal{C}^{1,1}(\Lambda)$ and assume $f$ is
convex; and assume that the i.i.d. noise draws $\epsilon(\xi)$ are additive,
zero mean, with bounded variance $\sigma^{2}$ for all $\xi$. By fixing the
step size $h_{\mathcal{A}}$ in (16), the active smoothing parameter
$\mu_{\mathcal{A}}^{*}$ in (16) minimizes the error between the gradient
oracle in Algorithm 2 and the true directional derivative of $f$ in the
direction $u_{\mathcal{A}}^{(k)}$ in the $j$-dimensional AS $\mathcal{A}$.
That is, $\mathcal{E}^{\mathcal{A}}(\mu)$ in (21) (with
$u=u_{\mathcal{A}}^{(k)}$) is minimized by the choice
$\mu=\mu^{*}_{\mathcal{A}}$. In particular, we have the bound
(76)
$\mathbb{E}_{u_{\mathcal{A}}^{(k)},\xi_{1},\xi_{2}}\left(\mathcal{E}^{\mathcal{A}}\left(\mu^{*}_{\mathcal{A}}\right)\right)\leq\sqrt{2}\sigma
L_{1}\sqrt{j(j+6)^{3}}.$
Proof: Replacing $\mathcal{E}$ with $\mathcal{E}^{\mathcal{A}}$ and taking the
expectation over the noise and $u=u_{\mathcal{A}}$, the proof is identical to
the proof of STARS Theorem 4.3 (Modified), until we formulate (as in (32))
(77)
$\mathbb{E}_{u_{\mathcal{A}},\xi_{1},\xi_{2}}(\mathcal{E}^{\mathcal{A}}(\mu))\leq\frac{1}{\mu^{2}}\mathbb{E}_{u}\left(2\sigma^{2}||u||^{2}+\frac{L_{1}^{2}}{4}\mu^{4}||u||^{6}\right)$
and proceed to bound the right hand side. Applying ASTARS Proposition 1, we
have $(u_{\mathcal{A}}^{(k)})_{p}\sim N(0,1)$ for $p=1,\ldots,P$. Taking
$u=u_{\mathcal{A}}^{(k)}$ and noting $u_{\mathcal{A}}^{(k)}\in\mathcal{A}$
(and $\dim\mathcal{A}=j$), we apply the bounds on the moments $M_{p}$ of
$||u||^{p}$ given in (11). Using $p=2$ and $p=6$ we have
(78)
$\mathbb{E}_{u_{\mathcal{A}}^{(k)},\xi_{1},\xi_{2}}(\mathcal{E}^{\mathcal{A}}(\mu))\leq(2\sigma^{2}j)\frac{1}{\mu^{2}}+\left(\frac{L_{1}^{2}(j+6)^{3}}{4}\right)\mu^{2}.$
We again observe that the right-hand side of the above inequality is uniformly
convex for $\mu>0$ with minimizer
$\mu^{*}_{\mathcal{A}}:=\left(\frac{8\sigma^{2}j}{L_{1}^{2}(j+6)^{3}}\right)^{1/4}$.
This particular choice $\mu=\mu^{*}_{\mathcal{A}}$ can be plugged into
(78), and we obtain the bound in (76), our main result. $\blacksquare$
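The closed form for $\mu^{*}_{\mathcal{A}}$ can be verified directly: the right-hand side of (78) is convex in $\mu>0$, and the stated minimizer beats a fine grid of alternatives. The constants below are hypothetical:

```python
import numpy as np

sigma, L1, j = 0.01, 2.0, 3   # hypothetical noise level, Lipschitz constant, AS dim

def rhs(mu):
    # Right-hand side of (78): 2 sigma^2 j / mu^2 + (L1^2 (j+6)^3 / 4) mu^2.
    return 2 * sigma**2 * j / mu**2 + (L1**2 * (j + 6)**3 / 4) * mu**2

# mu*_A from the proof: the stationary point of rhs on mu > 0.
mu_star = (8 * sigma**2 * j / (L1**2 * (j + 6)**3)) ** 0.25

# mu*_A should beat every value on a fine grid spanning four orders of magnitude.
grid = np.geomspace(mu_star / 100, mu_star * 100, 10_001)
assert rhs(mu_star) <= rhs(grid).min() + 1e-12
```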
Next, define $\mathcal{P}_{k}:=\\{\xi_{i}\\}_{i=0}^{k}$ and
$\mathcal{U}_{k}^{\mathcal{A}}:=\\{u^{(i)}_{\mathcal{A}}\\}_{i=1}^{k}$, which
are two sets containing all random variables that appear in ASTARS up to (and
including) iteration $k$. Let $\phi_{0}:=f(x^{(0)})$ and
$\phi_{k}^{\mathcal{A}}:=\mathbb{E}_{\mathcal{P}_{k-1},\mathcal{U}^{\mathcal{A}}_{k-1}}(f(x^{(k)}))$,
$k\geq 1,$ where the $x^{(k)}$’s are now ASTARS iterates. $\mathbb{E}$ will
now denote the expectation over every noise draw and random vector used in
ASTARS up to (and including) the $k$-th iterate; that is, the expectations are
now taken with respect to $\xi_{0},\ldots,\xi_{k}$ and
$u^{(1)}_{\mathcal{A}},\ldots,u^{(k)}_{\mathcal{A}}$ unless stated otherwise.
Now, given that the active smoothing parameter $\mu^{*}_{\mathcal{A}}$
is optimal, in the sense of minimizing $\mathcal{E}^{\mathcal{A}}$, we present
the following result, showing the convergence of ASTARS. The following result
is a direct corollary of STARS Lemma 4.4 (Modified), STARS Theorem 4.5
(Modified), ASTARS/FAASTARS Lemma 1, and ASTARS Corollary 2.
ASTARS Corollary 3: Let random vectors $u^{(k)}_{\mathcal{A}}$ be drawn
according to Algorithm 2; let $f\in\mathcal{C}^{1,1}(\Lambda)$ be convex; and
assume that the i.i.d. noise draws $\epsilon(\xi)$ are additive, zero mean,
with bounded variance $\sigma^{2}$ for all $\xi$. Let $\\{x^{(k)}\\}_{k\geq 0}$
denote a
sequence of ASTARS iterates formed using a fixed active step length
$h_{\mathcal{A}}$ and fixed active smoothing $\mu=\mu^{*}_{\mathcal{A}}$ (both
given in (16)) for all ASTARS iterates $k$. Then for any total number of
ASTARS iterations $M$,
(79)
$\sum_{k=0}^{M}\frac{\phi_{k}^{\mathcal{A}}-f^{*}_{\mathcal{A}}}{M+1}\leq\frac{4L_{1}(j+4)||P_{\mathcal{A}}(x^{(0)}-x^{*})||^{2}}{(M+1)}+\frac{3\sqrt{2}\sigma(j+4)}{5}.$
Proof: The proof is almost identical to the proofs of STARS Lemma 4.4
(Modified) and STARS Theorem 4.5 (Modified), but with $j$’s replacing the
roles of $P$’s (since we take steps with $j$-dimensional
$u^{(k)}_{\mathcal{A}}$ vectors and not $u^{(k)}\in\Lambda$),
$\mathcal{E}^{\mathcal{A}}$ replacing $\mathcal{E}$,
$\mathcal{U}^{\mathcal{A}}$ replacing $\mathcal{U}$, $\mu^{*}_{\mathcal{A}}$
replacing $\hat{\mu}^{*}$, $\phi_{k}^{\mathcal{A}}$ replacing $\phi_{k}$,
$x^{*}_{\mathcal{A}}$ replacing $x^{*}$ (since we are converging to the
minimum of $f_{\mathcal{A}}$), and $K_{1}=K_{2}=1$, since we are assuming exact
active hyperparameters, formed with the true values for $L_{1}$ and
$\sigma^{2}$. Note $||\cdot||=||\cdot||_{\Lambda}$, a norm on $\Lambda$
throughout. We outline the required changes one must make to the proofs of
STARS Lemma 4.4 (Modified) and STARS Theorem 4.5 (Modified) to obtain our
desired result.
We begin by obtaining a bound analogous to that of STARS Lemma 4.4 (Modified),
but note that ASTARS steps are taken with random vectors
$u^{(k)}_{\mathcal{A}}\in\mathcal{A}$, and $\dim\mathcal{A}=j$. First, we
replace $u^{(k)}$ in (37) with $u^{(k)}_{\mathcal{A}}$, so
$g_{0}(x^{(k)}):=\langle\nabla f(x^{(k)}),u^{(k)}_{\mathcal{A}}\rangle
$u^{(k)}_{\mathcal{A}}$. Similarly, note that we set
$u^{(k)}=u^{(k)}_{\mathcal{A}}$ in (20). Then, using ASTARS Corollary 2 –
instead of STARS Theorem 4.3 (Modified) – we have
(80) $\mathbb{E}\left(||s_{\mu^{*}_{\mathcal{A}}}^{\mathcal{A}}||^{2}-2\langle
s_{\mu^{*}_{\mathcal{A}}}^{\mathcal{A}},g_{0}(x^{(k)})\rangle+||g_{0}(x^{(k)})||^{2}\right)\leq\sqrt{2}\sigma
L_{1}\sqrt{j(j+6)^{3}}.$
Using (19), we have
(81)
$\mathbb{E}\left(||s_{\mu^{*}_{\mathcal{A}}}^{\mathcal{A}}||^{2}\right)\leq
2(j+4)||\nabla
f(x^{(k)})||^{2}+\frac{(\mu^{*}_{\mathcal{A}})^{2}L_{1}^{2}}{2}(j+6)^{3}+C_{1},$
where we recall we have $C_{1}=\sqrt{2}\sigma L_{1}\sqrt{j(j+6)^{3}}$ here.
Plugging in the value of $\mu^{*}_{\mathcal{A}}$, we obtain the bound
(82)
$\mathbb{E}\left(||s_{\mu^{*}_{\mathcal{A}}}^{\mathcal{A}}||^{2}\right)\leq
2(j+4)||\nabla f(x^{(k)})||^{2}+C_{2},$
where $C_{2}=2\sqrt{2}L_{1}\sigma\sqrt{j(j+6)^{3}}$. The bound in (82) is the
analogous result to STARS Lemma 4.4 (Modified) in the case of ASTARS performed
in the known and exact $\mathcal{A}$ and with $K_{1}=K_{2}=1$ (exact
hyperparameters).
We now proceed to proving the analogous result to STARS Theorem 4.5 (Modified)
in our case. We redefine $r_{k}:=||x^{(k)}-x^{*}_{\mathcal{A}}||$ for ASTARS
iterates $x^{(k)}$. The first three equations appearing in the proof of STARS
Theorem 4.5 (Modified), (48) through (50), are nearly identical for this proof
– one must replace $x^{*}$ with $x^{*}_{\mathcal{A}}$, replace $\hat{h}$ with
$h_{\mathcal{A}}$, and note that here we have
$s_{\mu^{(k)}}=s_{\mu^{*}_{\mathcal{A}}}^{\mathcal{A}}$. Then, invoking (82),
we rewrite (51) in our case as
(83) $\mathbb{E}(r_{k+1}^{2})\leq r_{k}^{2}-2h_{\mathcal{A}}\langle\nabla
f_{\mu}^{\mathcal{A}}(x^{(k)}),x^{(k)}-x^{*}_{\mathcal{A}}\rangle+h_{\mathcal{A}}^{2}\left(2(j+4)||\nabla
f(x^{(k)})||^{2}+C_{2}\right),$
where we recall $C_{2}=2\sqrt{2}L_{1}\sigma\sqrt{j(j+6)^{3}}$ here. Now,
again, equations (52) through (56) are identical for this proof as long as
$x^{*}_{\mathcal{A}}$ replaces $x^{*}$ and $h_{\mathcal{A}}$ replaces
$\hat{h}$ throughout, and similarly for equations (57) through (59) with the
additional needed replacement of $j$ for $P$ and using $C_{2}$ as we have
defined it here. Taking
$C_{3}=h_{\mathcal{A}}\mu^{*}_{\mathcal{A}}L_{1}j+h_{\mathcal{A}}^{2}C_{2}$,
(60) holds (with the usual replacements) and plugging $h_{\mathcal{A}}$ into
the modified (60) gives the following modification to (61):
(84) $\mathbb{E}(r_{k+1}^{2})\leq
r_{k}^{2}-\frac{1}{4L_{1}(j+4)}(f(x^{(k)})-f(x^{*}_{\mathcal{A}}))+C_{3}.$
Now plugging $h_{\mathcal{A}}$ into $C_{3}$, we obtain
$C_{3}\leq\frac{3\sigma}{10\sqrt{2}L_{1}}=:C_{4}$, so
(85) $\mathbb{E}(r_{k+1}^{2})\leq
r_{k}^{2}-\frac{1}{4L_{1}(j+4)}(f(x^{(k)})-f(x^{*}_{\mathcal{A}}))+C_{4}.$
We may now apply the expectation over $\mathcal{P}_{k}$ and
$\mathcal{U}_{k}^{\mathcal{A}}$, rearrange, and sum over $k=0,\ldots,M$ as
before. We obtain a modification to (68) with
(86)
$\sum_{k=0}^{M}\frac{\phi_{k}^{\mathcal{A}}-f^{*}_{\mathcal{A}}}{M+1}\leq\frac{4L_{1}(j+4)}{(M+1)}\left(r_{0}^{2}-\mathbb{E}_{\mathcal{U}_{k}^{\mathcal{A}},\mathcal{P}_{k}}(r_{k+1}^{2})\right)+4L_{1}(j+4)C_{4}.$
We drop the nonpositive term (again involving
$\mathbb{E}_{\mathcal{P}_{k},\mathcal{U}_{k}^{\mathcal{A}}}$) and plug in the
definition of $r_{0}^{2}=||x^{(0)}-x^{*}_{\mathcal{A}}||^{2}$. Then, recalling
$x^{*}_{\mathcal{A}}=P_{\mathcal{A}}(x^{*})+P_{\mathcal{I}}(x^{(0)})$, writing
$x^{(0)}=P_{\mathcal{A}}(x^{(0)})+P_{\mathcal{I}}(x^{(0)})$, and noting that
$P_{\mathcal{A}}$ is linear, we have
$r_{0}^{2}=||P_{\mathcal{A}}(x^{(0)}-x^{*})||^{2}$. We obtain (79). $\blacksquare$
We have shown the convergence of ASTARS to the minimum $f^{*}_{\mathcal{A}}$
of $f_{\mathcal{A}}$ with correct hyperparameters and the correct and known
$\mathcal{A}$. Ultimately, to obtain complexity results for ASTARS, we desire
a statement about the convergence of ASTARS to $f^{*}$, the minimum of $f$. We
pay a price for stepping only in active variables in ASTARS, which is that
inactive variables are not minimized or even perturbed at all. Because ASTARS
converges to $f_{\mathcal{A}}^{*}$, we will not minimize $f$ in its inactive
variables $\mathcal{I}$, and $|f^{*}-f_{\mathcal{A}}^{*}|$ may be nonzero. By
modifying results in [ConstantineK], we show that this difference will usually
be negligible, as it is bounded by the square root of the sum of the
eigenvalues associated with $\mathcal{I}$ (which are usually small), scaled by
a constant related to the distance from $x^{*}$ to $x^{*}_{\mathcal{A}}$
(which is also small in many cases).
ASTARS Corollary 4: Let $x^{(0)}$ denote any initial iterate for ASTARS. Let
$x^{*}$ denote a true minimizer of $f$ with the corresponding stochastic-free
function evaluation given by $f^{*}$. Let $\mathcal{A}$ continue to denote the
true $j$-dimensional AS of $f$. Denote the ASTARS minimizer with
$x^{*}_{\mathcal{A}}$ and corresponding noiseless function evaluation
$f^{*}_{\mathcal{A}}$. Assume
$||x^{*}-x^{*}_{\mathcal{A}}||_{\Lambda}^{2}<\infty$, where
$||\cdot||_{\Lambda}$ denotes a norm. Then we may bound the difference
$|f^{*}-f^{*}_{\mathcal{A}}|$ with
(87) $|f^{*}-f^{*}_{\mathcal{A}}|\leq\sqrt{a_{1}(q_{j+1}+\cdots+q_{P})},$
where $0\leq a_{1}<\infty$ is an eigenvalue of the positive semi-definite
matrix $(x^{*}-x^{*}_{\mathcal{A}})(x^{*}-x^{*}_{\mathcal{A}})^{\top}$ and the
$q$’s are our notations for the exact eigenvalues associated with the
eigendecomposition of the sensitivity matrix $W$. Also,
$a_{1}=||P_{\mathcal{I}}(x^{(0)}-x^{*})||_{\Lambda}^{2}.$
Proof: We bound the quantity $(f^{*}-f^{*}_{\mathcal{A}})^{2}$. First, we
expand $f^{*}=f(x^{*})$ around $x^{*}_{\mathcal{A}}$ using a special case of
Taylor’s theorem, sometimes called the Extended Mean Value Theorem. For
$c\in[0,1]$ and $z:=cx_{\mathcal{A}}^{*}+(1-c)x^{*}$ we have
$f^{*}=f(x^{*})=f(x_{\mathcal{A}}^{*})+\nabla
f(z)^{\top}(x^{*}-x^{*}_{\mathcal{A}})$. Note that since the components of
$x_{\mathcal{A}}^{*}$ and $x^{*}$ must match for indices $i=1,\ldots,j$ (by
definition), the point $z$ varies along $\mathcal{I}$ only; that is, $z$ is
fixed in $\mathcal{A}$. Thus, $\nabla f(z)=\nabla_{\mathcal{I}}f(z)$, the
gradient taken in the inactive subspace only. The expansion of $f^{*}$ around
$x^{*}_{\mathcal{A}}$ allows us to write
$(f^{*}-f^{*}_{\mathcal{A}})^{2}=\left(\nabla_{\mathcal{I}}f(z)^{\top}(x^{*}-x^{*}_{\mathcal{A}})\right)^{2}=\nabla_{\mathcal{I}}f(z)^{\top}A\nabla_{\mathcal{I}}f(z)$,
where $A:=(x^{*}-x^{*}_{\mathcal{A}})(x^{*}-x^{*}_{\mathcal{A}})^{\top}$ is a
$P\times P$ matrix. Note that $A$ is a square, positive semi-definite, rank-1
matrix. Hence, it has one nonnegative eigenvalue, which we denote by
$a_{1}\geq 0$; all other $P-1$ eigenvalues are $0$. We find
$a_{1}=||x^{*}-x^{*}_{\mathcal{A}}||^{2}_{\Lambda}$ and so by definition,
$a_{1}=||P_{\mathcal{I}}(x^{(0)}-x^{*})||_{\Lambda}^{2}$. Observe
$a_{1}<\infty$ because $||x^{*}-x^{*}_{\mathcal{A}}||_{\Lambda}<\infty$ and
also $a_{1}=0\iff x^{*}=x^{*}_{\mathcal{A}}$, in which case
$f^{*}=f^{*}_{\mathcal{A}}$ so that $|f^{*}-f^{*}_{\mathcal{A}}|=0$. We have
$(f^{*}-f^{*}_{\mathcal{A}})^{2}\leq
a_{1}\nabla_{\mathcal{I}}f(z)^{\top}\nabla_{\mathcal{I}}f(z)$. Applying the
expectation over $\mathcal{I}$ to both sides (where the left-hand side is
constant with respect to this expectation) we have
$(f^{*}-f^{*}_{\mathcal{A}})^{2}\leq
a_{1}\mathbb{E}_{\mathcal{I}}\left(\nabla_{\mathcal{I}}f(z)^{\top}\nabla_{\mathcal{I}}f(z)\right)$.
Citing [ConstantineK] Lemma 2.2, we have
$\mathbb{E}_{\mathcal{I}}\left(\nabla_{\mathcal{I}}f(z)^{\top}\nabla_{\mathcal{I}}f(z)\right)<q_{j+1}+\cdots+q_{P}$,
where $q_{i}$, $i=j+1,\ldots,P$ are the last $P-j$ eigenvalues of the
sensitivity matrix $W$. Hence, $(f^{*}-f^{*}_{\mathcal{A}})^{2}\leq
a_{1}(q_{j+1}+\cdots+q_{P})$. Applying a square root to both sides, we obtain
(87). $\blacksquare$
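The rank-1 structure used in the proof is easy to confirm numerically; the vector below is a randomly generated stand-in for $x^{*}-x^{*}_{\mathcal{A}}$:

```python
import numpy as np

rng = np.random.default_rng(2)
P = 6
d = rng.standard_normal(P)  # stands in for x* - x*_A
A = np.outer(d, d)          # (x* - x*_A)(x* - x*_A)^T: PSD and rank 1

eigvals = np.linalg.eigvalsh(A)  # ascending order
a1 = eigvals[-1]

# The single nonzero eigenvalue is a_1 = ||d||^2; the other P - 1 vanish.
assert np.isclose(a1, d @ d)
assert np.abs(eigvals[:-1]).max() < 1e-10
```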
We now present a statement about the convergence of ASTARS to $f^{*}$ by
combining the previous two results. Recall $\phi_{0}:=f(x^{(0)})$ and
$\phi_{k}^{\mathcal{A}}:=\mathbb{E}_{\mathcal{P}_{k-1},\mathcal{U}_{k-1}^{\mathcal{A}}}(f(x^{(k)}))$,
$k\geq 1,$ where the $x^{(k)}$’s are ASTARS iterates.
ASTARS Theorem 5: Let random vectors $u^{(k)}_{\mathcal{A}}$ be drawn
according to Algorithm 2; let $f\in\mathcal{C}^{1,1}(\Lambda)$ be convex; and
assume that the i.i.d. noise draws $\epsilon(\xi)$ are additive, zero mean,
with bounded variance $\sigma^{2}$ for all $\xi$. Let $\\{x^{(k)}\\}_{k\geq 0}$
denote a
sequence of ASTARS iterates formed using a fixed active step length
$h_{\mathcal{A}}$ and fixed active smoothing $\mu=\mu^{*}_{\mathcal{A}}$ (both
given in (16)) for all ASTARS iterates $k$. For any total number of ASTARS
iterations $M\geq 1$,
(88)
$\sum_{k=0}^{M}\frac{\phi_{k}^{\mathcal{A}}-f^{*}}{M+1}\leq\frac{4L_{1}(j+4)||P_{\mathcal{A}}(x^{(0)}-x^{*})||^{2}}{(M+1)}+\frac{3\sqrt{2}\sigma(j+4)}{5}+\sqrt{a_{1}(q_{j+1}+\cdots+q_{P})}.$
Proof: By the triangle inequality, we have
$|\phi_{k}^{\mathcal{A}}-f^{*}|\leq|\phi_{k}^{\mathcal{A}}-f_{\mathcal{A}}^{*}|+|f^{*}-f_{\mathcal{A}}^{*}|$.
Noting that $\phi_{k}^{\mathcal{A}}-f_{\mathcal{A}}^{*}\geq 0$ for all $k$, we
can bound the left-hand side of (88), writing
(89)
$\sum_{k=0}^{M}\frac{\phi_{k}^{\mathcal{A}}-f^{*}}{M+1}\leq\sum_{k=0}^{M}\frac{\phi_{k}^{\mathcal{A}}-f^{*}_{\mathcal{A}}}{M+1}+|f^{*}-f_{\mathcal{A}}^{*}|.$
Now the first term on the right-hand side of (89) is bounded by ASTARS
Corollary 3 and the second term is bounded by ASTARS Corollary 4. Plugging in
those bounds, we obtain (88). $\blacksquare$
We use the results above to analyze the complexity of ASTARS in the following
remark.
Remark: We now mimic the complexity analysis we performed in the preceding
section for STARS with approximated hyperparameters for our case in this
section, ASTARS with correct hyperparameters and $\mathcal{A}$. Let
$||\cdot||$ denote a norm in $\Lambda$ throughout. Define
$R_{\mathcal{A}}^{2}$ as a bound $||P_{\mathcal{A}}(x^{(0)}-x^{*})||^{2}\leq
R_{\mathcal{A}}^{2}$. We recall
$a_{1}=||P_{\mathcal{I}}(x^{(0)}-x^{*})||_{\Lambda}^{2}$ and define a bound
$a_{1}\leq R_{\mathcal{I}}^{2}$.
Now define
$x^{\dagger}:=\text{argmin}_{x}\\{f(x):x\in\\{x^{(0)},\ldots,x^{(M)}\\}\\}$
and
$\phi_{\dagger}:=\mathbb{E}_{\mathcal{U}_{M}^{\mathcal{A}},\mathcal{P}_{M}}(f(x^{\dagger}))$.
Then the value $\phi_{\dagger}-f^{*}$ must be less than or equal to the
average improvement for any given run of ASTARS; that is, (88), along with our
new definitions, implies that
(90)
$\phi_{\dagger}-f^{*}\leq\sum_{k=0}^{M}\frac{\phi_{k}^{\mathcal{A}}-f^{*}}{M+1}\leq\frac{4L_{1}(j+4)}{(M+1)}R_{\mathcal{A}}^{2}+\frac{3\sqrt{2}\sigma(j+4)}{5}+R_{\mathcal{I}}\sqrt{q_{j+1}+\cdots+q_{P}}.$
We assume that we wish to achieve a final accuracy of
$\epsilon_{\text{tol}}>0$. Then we will need
$\phi_{\dagger}-f^{*}\leq\epsilon_{\text{tol}}$. If we take
(91) $\frac{3\sqrt{2}\sigma(j+4)}{5}\leq\frac{\epsilon_{\text{tol}}}{3},$
then we must require that the noise not exceed the following threshold:
(92) $\sigma\leq\frac{5\epsilon_{\text{tol}}}{9\sqrt{2}(j+4)}.$
If we satisfy (92) and also have
(93)
$R_{\mathcal{I}}\sqrt{(q_{j+1}+\cdots+q_{P})}\leq\frac{\epsilon_{\text{tol}}}{3},$
then we can achieve $\epsilon_{\text{tol}}$ accuracy as long as
(94)
$\frac{4L_{1}(j+4)}{(M+1)}R_{\mathcal{A}}^{2}\leq\frac{\epsilon_{\text{tol}}}{3}\iff
M\geq\frac{12L_{1}(j+4)R_{\mathcal{A}}^{2}}{\epsilon_{\text{tol}}}-1.$
Hence, we achieve $\epsilon_{\text{tol}}$ accuracy as long as: the noise is
small enough; the eigenvalues of the inactive subspace and the distance from
$x^{*}$ to $x^{*}_{\mathcal{A}}$ are small enough; and $M$ is large enough.
Details of those required bounds are given by (92), (93), and (94), respectively.
With these assumptions, we achieve $\epsilon_{\text{tol}}$ accuracy in
(95)
$M\sim\mathcal{O}\left(\frac{L_{1}jR_{\mathcal{A}}^{2}}{\epsilon_{\text{tol}}}\right).$
This analysis also shows that, given a particular noise level
$\sigma$, as well as the term involving the eigenvalues of the inactive
subspace and the distance from $x^{*}$ to $x^{*}_{\mathcal{A}}$, the achievable
accuracy can be no better (i.e., no smaller) than the value given below:
(96)
$\epsilon_{\text{tol}}\geq\max\left\\{\frac{9\sqrt{2}(j+4)}{5}\sigma,3R_{\mathcal{I}}\sqrt{q_{j+1}+\cdots+q_{P}}\right\\}.$
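The iteration budget in (94) is straightforward to evaluate for concrete constants. The values below are hypothetical and only illustrate the scaling in (95):

```python
import math

# Hypothetical constants: Lipschitz bound L1, AS dimension j,
# R_A^2 bounding ||P_A(x^(0) - x*)||^2, and target accuracy eps_tol.
L1, j, R_A2, eps_tol = 2.0, 3, 1.0, 0.5

# (94): M >= 12 L1 (j+4) R_A^2 / eps_tol - 1 iterations suffice, provided
# the noise bound (92) and the inactive-eigenvalue bound (93) also hold.
M = math.ceil(12 * L1 * (j + 4) * R_A2 / eps_tol - 1)
print(M)  # → 335
```

Note that $M$ grows linearly in the active dimension $j$ rather than the ambient dimension $P$, which is the payoff of stepping only in $\mathcal{A}$.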
### 4.4. FAASTARS Convergence
Now we focus on analyzing the convergence of FAASTARS. Here, we must consider
that FAASTARS uses approximate information both for hyperparameters and for
the AS in its phases. We have already analyzed the convergence of performing
STARS with estimated hyperparameters in 4.2, which corresponds to the first
phase of FAASTARS. Before we can state our main result about the convergence
of FAASTARS, we will need results analogous to those in 4.3, but with
estimated hyperparameters and an estimated AS $\tilde{\mathcal{A}}$. We first
reintroduce the approximately-optimal ASTARS hyperparameters, needed to perform
the third and final phase of FAASTARS:
(97)
$\hat{\mu}^{*}_{\tilde{\mathcal{A}}}:=\left(\frac{8\hat{\sigma}^{2}\tilde{j}}{\hat{L_{1}}^{2}(\tilde{j}+6)^{3}}\right)^{1/4}\quad\quad\hat{h}_{\tilde{\mathcal{A}}}:=(4\hat{L_{1}}(\tilde{j}+4))^{-1}.$
We also must modify our definitions from the previous section to account for
the approximated subspace. Recall that the third phase of FAASTARS will begin
with the initial iterate $x^{(M_{\mathcal{A}})}$ (the last iterate from phase
two). Define
$x^{*}_{\tilde{\mathcal{A}}}:=P_{\tilde{\mathcal{A}}}(x^{*})+P_{\tilde{\mathcal{I}}}(x^{(M_{\mathcal{A}})})$
with its associated stochastic-free
$f^{*}_{\tilde{\mathcal{A}}}:=f(x^{*}_{\tilde{\mathcal{A}}})$. Observe that
with our definitions,
$x^{*}-x^{*}_{\tilde{\mathcal{A}}}=P_{\tilde{\mathcal{I}}}(x^{*}-x^{(M_{\mathcal{A}})})$.
We define
$f|_{\tilde{\mathcal{A}}}(\lambda):=f(V_{\tilde{\mathcal{A}}}V_{\tilde{\mathcal{A}}}^{\top}\lambda)=f(P_{\tilde{\mathcal{A}}}(\lambda))$
and let $f_{\tilde{\mathcal{A}}}:=f|_{\tilde{\mathcal{A}}}$. Note that
$f_{\tilde{\mathcal{A}}}$ is convex since $f$ is convex and we can define
$f_{\tilde{\mathcal{I}}}$ analogously. Also, when we evaluate gradients for
points $x\in\tilde{\mathcal{A}}$, we note we obtain the object
$\nabla_{\tilde{\mathcal{A}}}f(x)\in\tilde{\mathcal{A}}$, and similarly for
$\tilde{\mathcal{I}}$.
We begin by providing a modification to ASTARS Corollary 2 for the case of
estimated hyperparameters and $\tilde{\mathcal{A}}$. The proof is a blend of
the proofs of STARS Theorem 4.3 (Modified) and ASTARS Corollary 2, with
additional consideration for the now $\tilde{j}$-dimensional
$\tilde{\mathcal{A}}$.
FAASTARS Corollary 1 (Modified ASTARS Corollary 2): Let the vectors
$u_{\tilde{\mathcal{A}}}^{(k)}$ denote those drawn using Algorithm 5; let
$f\in\mathcal{C}^{1,1}(\Lambda)$ and assume $f$ is convex; and assume that the
i.i.d. noise draws $\epsilon(\xi)$ are additive, zero mean, with bounded
variance $\sigma^{2}$ for all $\xi$. We assume we have fixed estimates
$\hat{L_{1}}$ and $\hat{\sigma}$ with $K_{1}>0$ and $K_{2}>0$ as in (22). By
fixing the step size as $\hat{h}_{\tilde{\mathcal{A}}}$ in (97), the
approximately active smoothing parameter $\hat{\mu}_{\tilde{\mathcal{A}}}^{*}$
in (97) minimizes the error between the gradient oracle in Algorithm 5 and the
true directional derivative of $f$ in the direction
$u_{\tilde{\mathcal{A}}}^{(k)}$ in the $\tilde{j}$-dimensional AS
$\tilde{\mathcal{A}}$. That is, $\mathcal{E}^{\tilde{\mathcal{A}}}(\mu)$ in (21) (with
$u=u_{\tilde{\mathcal{A}}}^{(k)}$) is minimized by the choice
$\mu=\hat{\mu}_{\tilde{\mathcal{A}}}^{*}$. In particular, we have the bound
(98)
$\mathbb{E}_{u_{\tilde{\mathcal{A}}}^{(k)},\xi_{1},\xi_{2}}\left(\mathcal{E}^{\tilde{\mathcal{A}}}\left(\hat{\mu}^{*}_{\tilde{\mathcal{A}}}\right)\right)\leq\frac{K_{1}+K_{2}}{\sqrt{2K_{1}K_{2}}}\sigma
L_{1}\sqrt{\tilde{j}(\tilde{j}+6)^{3}}.$
Proof: The proof is identical to the proof of STARS Theorem 4.3 (Modified) and
ASTARS Corollary 2 (making the usual substitutions), until we formulate (as in
(32))
(99)
$\mathbb{E}_{u,\xi_{1},\xi_{2}}(\mathcal{E}^{\tilde{\mathcal{A}}}(\mu))\leq\frac{1}{\mu^{2}}\mathbb{E}_{u}\left(2\sigma^{2}||u||^{2}+\frac{L_{1}^{2}}{4}\mu^{4}||u||^{6}\right)$
and proceed to bound the right hand side. Applying ASTARS Proposition 1, we
have $(u_{\tilde{\mathcal{A}}}^{(k)})_{p}\sim N(0,1)$ for $p=1,\ldots,P$.
Taking $u=u_{\tilde{\mathcal{A}}}^{(k)}$ and noting
$u_{\tilde{\mathcal{A}}}^{(k)}\in\tilde{\mathcal{A}}$ (and
$\dim\tilde{\mathcal{A}}=\tilde{j}$), we apply the bounds on the moments
$M_{p}$ of $||u||^{p}$ given in (11). Using $p=2$ and $p=6$ – but replacing
the AS dimension $j$ with $\tilde{j}$ – we have
(100)
$\mathbb{E}_{u_{\tilde{\mathcal{A}}}^{(k)},\xi_{1},\xi_{2}}(\mathcal{E}^{\tilde{\mathcal{A}}}(\mu))\leq(2\sigma^{2}\tilde{j})\frac{1}{\mu^{2}}+\left(\frac{L_{1}^{2}(\tilde{j}+6)^{3}}{4}\right)\mu^{2}.$
We again observe that the right-hand side of the above inequality is uniformly
convex for $\mu>0$ with minimizer
$\mu^{*}_{\tilde{\mathcal{A}}}:=\left(\frac{8\sigma^{2}\tilde{j}}{L_{1}^{2}(\tilde{j}+6)^{3}}\right)^{1/4}$.
Analogously to the proof of STARS Theorem 4.3 (Modified), our optimal choice
of smoothing, given the information we have available, will require us to swap
out $L_{1}$ and $\sigma^{2}$ in $\mu^{*}_{\mathcal{A}}$ with their estimates,
$\hat{L_{1}}$ and $\hat{\sigma}^{2}$, and to swap out $j$ for $\tilde{j}$,
recovering $\hat{\mu}^{*}_{\tilde{\mathcal{A}}}$ (97). This particular choice
$\mu=\hat{\mu}^{*}_{\tilde{\mathcal{A}}}$ can be plugged into (100), which
gives us the bound in (98), our main result. $\blacksquare$
Next, we redefine $\mathcal{Q}_{k}:=\\{\xi_{i}\\}_{i=1}^{k}$ and
$\mathcal{U}_{k}:=\\{u^{(i)}\\}_{i=1}^{k}$ for $1\leq k\leq M_{\mathcal{A}}$,
which are two sets containing all random variables that appear in FAASTARS’
regular STARS burn-in phase, iterations $k=1,\ldots,M_{\mathcal{A}}$. Likewise,
we extend the definitions so that $\mathcal{Q}_{k}=\\{\xi_{i}\\}_{i=1}^{k}$ and
$\mathcal{U}_{k}^{\tilde{\mathcal{A}}}=\\{u^{(i)}_{\tilde{\mathcal{A}}}\\}_{i=M_{\mathcal{A}}+1}^{k}$
for $M_{\mathcal{A}}+1\leq k\leq M$ also contain all random variables that
appear in FAASTARS’ approximate ASTARS phase, iterations
$k=M_{\mathcal{A}}+1,\ldots,M$.
Let $\phi_{0}:=f(x^{(0)})$,
$\phi_{k}:=\mathbb{E}_{\mathcal{Q}_{k-1},\mathcal{U}_{k-1}}(f(x^{(k)}))$,
$1\leq k\leq M_{\mathcal{A}},$ and
$\phi_{k}:=\mathbb{E}_{\mathcal{Q}_{k-1},\mathcal{U}_{k-1}^{\tilde{\mathcal{A}}}}(f(x^{(k)}))$,
$M_{\mathcal{A}}+1\leq k\leq M$ where the $x^{(k)}$’s are now STARS iterates
for $1\leq k\leq M_{\mathcal{A}},$ and FAASTARS iterates for
$M_{\mathcal{A}}+1\leq k\leq M$.
FAASTARS Corollary 2 (FAASTARS, approximate ASTARS phase result): For all
FAASTARS iterates in phase 3, $k=M_{\mathcal{A}}+1,\ldots,M$, let the vectors
$u_{\tilde{\mathcal{A}}}^{(k)}$ denote those drawn using Algorithm 5. Also,
let $f\in\mathcal{C}^{1,1}(\Lambda)$ with $f$ convex, and let i.i.d. noise
draws $\epsilon(\xi)$ be additive, zero mean, with bounded variance
$\sigma^{2}$ for all appearing $\xi$. We assume we have fixed estimates
$\hat{L_{1}}$ and $\hat{\sigma}$ with $K_{1}>0$ and $K_{2}>0$ as in (22). Let
$\tilde{\mathcal{A}}$ denote the approximated $\tilde{j}$-dimensional AS of
$f$. Let $x^{(M_{\mathcal{A}})}$ be fixed and given and let
$\\{x^{(k)}\\}_{k=M_{\mathcal{A}}+1}^{M}$ denote a sequence of FAASTARS
iterates formed using the approximate active hyperparameters in (97).
Finally, we require $0<K_{1}<4$ and $K_{2}>0$, the values defined in (22).
Then for any $M-M_{\mathcal{A}}$ total number of approximate ASTARS iterations
within FAASTARS, $k=M_{\mathcal{A}}+1,\ldots,M$,
(101)
$\sum_{k=M_{\mathcal{A}}}^{M}\frac{\phi_{k}-f^{*}_{\tilde{\mathcal{A}}}}{M-M_{\mathcal{A}}+1}\leq\frac{4L_{1}(\tilde{j}+4)||P_{\tilde{\mathcal{A}}}(x^{(M_{\mathcal{A}})}-x^{*})||^{2}}{\sqrt{K_{1}}(2-\sqrt{K_{1}})(M-M_{\mathcal{A}}+1)}+\frac{4\sigma(\tilde{j}+4)}{\sqrt{2K_{2}}(2-\sqrt{K_{1}})}C_{5},$
where $C_{5}:=\sqrt{K_{1}}\cdot 0.036+\frac{3K_{1}+K_{2}}{16}\cdot 1.034$.
Proof: The proof is almost identical to the proof of ASTARS Corollary 3, but
with $\tilde{j}$’s replacing the roles of $j$’s (since we take steps with
$\tilde{j}$-dimensional $u^{(k)}_{\tilde{\mathcal{A}}}$ vectors and not
$u^{(k)}_{\mathcal{A}}\in\mathcal{A}$), $\hat{\mu}^{*}_{\tilde{\mathcal{A}}}$
replacing $\mu^{*}_{\mathcal{A}}$, and $x^{*}_{\tilde{\mathcal{A}}}$ replacing
$x^{*}_{\mathcal{A}}$ (since we are converging to the minimum of
$f_{\tilde{\mathcal{A}}}$). Here, $K_{1}$ and $K_{2}$ are not necessarily
equal to $1$ as before, since we are assuming inexact active hyperparameters,
formed with the estimates $\hat{L_{1}}$ and $\hat{\sigma}^{2}$. We outline the
required changes one must make to the proofs of STARS Lemma 4.4 (Modified) and
STARS Theorem 4.5 (Modified) to obtain our desired result. These changes
essentially amount to keeping the logic from STARS Lemma 4.4 (Modified) and
STARS Theorem 4.5 (Modified) to account for $K_{1}$ and $K_{2}$ but to
replacing $j$ with $\tilde{j}$.
We begin by obtaining a bound analogous to that of STARS Lemma 4.4 (Modified),
but note that the approximate ASTARS steps are taken with random vectors
$u^{(k)}_{\tilde{\mathcal{A}}}\in\tilde{\mathcal{A}}$, and
$\dim\tilde{\mathcal{A}}=\tilde{j}$. First, we replace $u^{(k)}$ in (37) with
$u^{(k)}_{\tilde{\mathcal{A}}}$, so $g_{0}(x^{(k)}):=\langle\nabla
f(x^{(k)}),u^{(k)}_{\tilde{\mathcal{A}}}\rangle
u^{(k)}_{\tilde{\mathcal{A}}}$. Similarly, note that we set
$u^{(k)}=u^{(k)}_{\tilde{\mathcal{A}}}$ in (20). Also, let
$s_{\hat{\mu}^{*}_{\tilde{\mathcal{A}}}}=s_{\hat{\mu}^{*}_{\tilde{\mathcal{A}}}}^{\tilde{\mathcal{A}}}$
for cleaner notation. Then, using FAASTARS Corollary 1 – instead of ASTARS
Corollary 2 – (38) becomes
(102)
$\mathbb{E}\left(||s_{\hat{\mu}^{*}_{\tilde{\mathcal{A}}}}||^{2}-2\langle
s_{\hat{\mu}^{*}_{\tilde{\mathcal{A}}}},g_{0}(x^{(k)})\rangle+||g_{0}(x^{(k)})||^{2}\right)\leq\frac{K_{1}+K_{2}}{\sqrt{2K_{1}K_{2}}}\sigma
L_{1}\sqrt{\tilde{j}(\tilde{j}+6)^{3}}.$
Continuing with $u^{(k)}_{\tilde{\mathcal{A}}}$ replacing $u^{(k)}$, we
proceed identically, noting
$C_{1}=\frac{K_{1}+K_{2}}{\sqrt{2K_{1}K_{2}}}\sigma
L_{1}\sqrt{\tilde{j}(\tilde{j}+6)^{3}}$. Modifying (19) for
$x^{(k)}\in\tilde{\mathcal{A}}$ with $\dim\tilde{\mathcal{A}}=\tilde{j}$, we
have
(103)
$\mathbb{E}\left(||s_{\hat{\mu}^{*}_{\tilde{\mathcal{A}}}}||^{2}\right)\leq
2(\tilde{j}+4)||\nabla
f(x^{(k)})||^{2}+\frac{(\hat{\mu}^{*}_{\tilde{\mathcal{A}}})^{2}L_{1}^{2}}{2}(\tilde{j}+6)^{3}+C_{1}.$
Plugging in the value of $\hat{\mu}^{*}_{\tilde{\mathcal{A}}}$, we obtain the
bound
(104)
$\mathbb{E}\left(||s_{\hat{\mu}^{*}_{\tilde{\mathcal{A}}}}||^{2}\right)\leq
2(\tilde{j}+4)||\nabla f(x^{(k)})||^{2}+C_{2},$
where
$C_{2}:=\frac{3K_{1}+K_{2}}{\sqrt{2K_{1}K_{2}}}L_{1}\sigma\sqrt{\tilde{j}(\tilde{j}+6)^{3}}=\frac{3K_{1}+K_{2}}{\sqrt{2}}\hat{L_{1}}\hat{\sigma}\sqrt{\tilde{j}(\tilde{j}+6)^{3}}$.
The bound in (104) is the analogous result to STARS Lemma 4.4 (Modified) in
the case of ASTARS performed in the estimated $\tilde{\mathcal{A}}$ and with
inexact hyperparameters.
We now proceed to proving the analogous result to STARS Theorem 4.5 (Modified)
in our case. We redefine $r_{k}:=||x^{(k)}-x^{*}_{\tilde{\mathcal{A}}}||$ for
approximate ASTARS iterates $x^{(k)}$. The first three equations appearing in
the proof of STARS Theorem 4.5 (Modified), (48) through (50), are nearly
identical for this proof – one must replace $x^{*}$ with
$x^{*}_{\tilde{\mathcal{A}}}$, replace $\hat{h}$ with
$\hat{h}_{\tilde{\mathcal{A}}}$, and note that here we have
$s_{\mu^{(k)}}=s^{\tilde{\mathcal{A}}}_{\hat{\mu}^{*}_{\tilde{\mathcal{A}}}}$.
Then, invoking (104), we rewrite (51) in our case as
(105) $\mathbb{E}(r_{k+1}^{2})\leq
r_{k}^{2}-2\hat{h}_{\tilde{\mathcal{A}}}\langle\nabla
f_{\mu}(x^{(k)}),x^{(k)}-x^{*}_{\tilde{\mathcal{A}}}\rangle+\hat{h}_{\tilde{\mathcal{A}}}^{2}\left(2(\tilde{j}+4)||\nabla
f(x^{(k)})||^{2}+C_{2}\right),$
where we recall
$C_{2}=\frac{3K_{1}+K_{2}}{\sqrt{2}}\hat{L_{1}}\hat{\sigma}\sqrt{\tilde{j}(\tilde{j}+6)^{3}}$
here. Now, again, equations (52) through (56) are identical for this proof as
long as $x^{*}_{\tilde{\mathcal{A}}}$ replaces $x^{*}$ and
$\hat{h}_{\tilde{\mathcal{A}}}$ replaces $\hat{h}$ throughout, and similarly
for equations (57) through (59) with the additional needed replacement of
$\tilde{j}$ for $P$ and using $C_{2}$ as we have defined it here. Taking
$C_{3}=\hat{h}_{\tilde{\mathcal{A}}}\hat{\mu}^{*}_{\tilde{\mathcal{A}}}L_{1}\tilde{j}+\hat{h}_{\tilde{\mathcal{A}}}^{2}C_{2}$,
(60) holds (with the usual replacements); plugging
$\hat{h}_{\tilde{\mathcal{A}}}$ into (105) gives:
(106) $\mathbb{E}(r_{k+1}^{2})\leq
r_{k}^{2}-\frac{1}{4L_{1}(\tilde{j}+4)}(f(x^{(k)})-f(x^{*}_{\tilde{\mathcal{A}}}))+C_{3}.$
Now plugging $\hat{h}_{\tilde{\mathcal{A}}}$ into $C_{3}$, we obtain
$C_{3}\leq\frac{\sqrt{K_{1}}}{\sqrt{K_{2}}}\cdot\frac{\sigma}{\sqrt{2}L_{1}}\left(\sqrt{K_{1}}\cdot
0.036+\frac{3K_{1}+K_{2}}{16}\cdot 1.034\right)=:C_{4}$, so
(107) $\mathbb{E}(r_{k+1}^{2})\leq
r_{k}^{2}-\frac{1}{4L_{1}(\tilde{j}+4)}(f(x^{(k)})-f(x^{*}_{\tilde{\mathcal{A}}}))+C_{4}.$
We may now apply the expectation over $\mathcal{P}_{k}$ and
$\mathcal{U}_{k}^{\tilde{\mathcal{A}}}$, $k=M_{\mathcal{A}},\ldots,M$,
rearrange, and sum over $k=M_{\mathcal{A}},\ldots,M$, similarly to before. We
obtain a modification to (68) with
(108)
$\sum_{k=M_{\mathcal{A}}}^{M}\frac{\phi_{k}-f^{*}_{\tilde{\mathcal{A}}}}{M-M_{\mathcal{A}}+1}\leq\frac{4L_{1}(\tilde{j}+4)}{(M-M_{\mathcal{A}}+1)}\left(r_{M_{\mathcal{A}}}^{2}-\mathbb{E}_{\mathcal{U}_{k}^{\tilde{\mathcal{A}}},\mathcal{P}_{k}}(r_{k+1}^{2})\right)+4L_{1}(\tilde{j}+4)C_{4}.$
We drop the nonpositive term (again involving
$\mathbb{E}_{\mathcal{P}_{k},\mathcal{U}_{k}^{\tilde{\mathcal{A}}}}$) and plug
in the definition
$r_{M_{\mathcal{A}}}^{2}=||x^{(M_{\mathcal{A}})}-x^{*}_{\tilde{\mathcal{A}}}||^{2}$.
Then, recalling
$x^{*}_{\tilde{\mathcal{A}}}=P_{\tilde{\mathcal{A}}}(x^{*})+P_{\tilde{\mathcal{I}}}(x^{(M_{\mathcal{A}})})$,
writing
$x^{(M_{\mathcal{A}})}=P_{\tilde{\mathcal{A}}}(x^{(M_{\mathcal{A}})})+P_{\tilde{\mathcal{I}}}(x^{(M_{\mathcal{A}})})$,
and noting that $P_{\tilde{\mathcal{A}}}$ is linear, we have
$r_{M_{\mathcal{A}}}^{2}=||P_{\tilde{\mathcal{A}}}(x^{(M_{\mathcal{A}})}-x^{*})||^{2}$.
Plugging in the value of $C_{4}$, we obtain (101). $\blacksquare$
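As a hedged numerical aside (our own illustration, not part of the text), the constant $C_{5}$ appearing in (101) is straightforward to evaluate; with exact hyperparameters ($K_{1}=K_{2}=1$) it is roughly $0.29$:

```python
def c5(k1, k2):
    """Constant C_5 from (101): sqrt(K1) * 0.036 + (3*K1 + K2)/16 * 1.034."""
    return (k1 ** 0.5) * 0.036 + (3 * k1 + k2) / 16 * 1.034

# Exact hyperparameters (K1 = K2 = 1): C_5 = 0.036 + (4/16) * 1.034 = 0.2945
print(c5(1.0, 1.0))
```

Larger hyperparameter misestimates (larger $K_{1}$, $K_{2}$) inflate $C_{5}$ and hence the noise-dependent term of the bound.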
FAASTARS Corollary 2 shows that the FAASTARS iterates in its third phase
(approximate ASTARS) converge to $f^{*}_{\tilde{\mathcal{A}}}$; however, as in
the last section, we need a result on convergence to $f^{*}$.
In particular, we need a result analogous to ASTARS Corollary 4, generally
regarding the distance between evaluations of $f$ and
$f_{\tilde{\mathcal{A}}}$ (instead of $f_{\mathcal{A}}$) at their respective
minimizers, $x^{*}$ and $x^{*}_{\tilde{\mathcal{A}}}$. Recall, by
$f_{\tilde{\mathcal{A}}}$, we mean
$f_{\tilde{\mathcal{A}}}(\lambda):=f(V_{\tilde{\mathcal{A}}}V_{\tilde{\mathcal{A}}}^{\top}\lambda)$,
where the application of
$V_{\tilde{\mathcal{A}}}V_{\tilde{\mathcal{A}}}^{\top}$ is a linear
transformation of $\lambda$ into $\tilde{\mathcal{A}}$, as in [ConstantineK],
which we again rely on heavily for the following result.
FAASTARS Corollary 3: Let $x^{(M_{\mathcal{A}})}$ denote any initial iterate
for phase 3 of FAASTARS. Let $\mathcal{A}$ continue to denote the true
$j$-dimensional AS of $f$, let $\tilde{\mathcal{A}}$ denote the approximate
$\tilde{j}$-dimensional AS of $f$, and write $x^{*}_{\tilde{\mathcal{A}}}$ for
the true minimizer of $f_{\tilde{\mathcal{A}}}$, with corresponding noiseless
function evaluation $f^{*}_{\tilde{\mathcal{A}}}$. Assume
$||x^{*}-x^{*}_{\tilde{\mathcal{A}}}||_{\Lambda}<\infty$, where
$||\cdot||_{\Lambda}$ denotes a norm. Assume $\tilde{j}=j$ and for $\delta>0$,
$||V-\tilde{V}||_{2}<\delta$, where $||\cdot||_{2}$ here denotes the matrix
2-norm induced by the 2-norm in $\Lambda$. Also, assume the sign of
$(\tilde{V}_{\tilde{\mathcal{I}}})_{i}$, the $i$-th column of
$\tilde{V}_{\tilde{\mathcal{I}}}$ to be chosen so that
$||(\tilde{V}_{\tilde{\mathcal{I}}})_{i}-(V_{\mathcal{I}})_{i}||_{2}$ is
minimized for $i=j+1,\ldots,P$, where $||\cdot||_{2}$ denotes the vector
2-norm. Then the difference $|f^{*}-f^{*}_{\tilde{\mathcal{A}}}|$ is bounded
by
(109)
$|f^{*}-f^{*}_{\tilde{\mathcal{A}}}|\leq\sqrt{\tilde{a}_{1}}\left(\delta\sqrt{q_{1}+\cdots+q_{j}}+\sqrt{q_{j+1}+\cdots+q_{P}}\right),$
where $0\leq\tilde{a}_{1}<\infty$ is an eigenvalue of the positive
semi-definite matrix
$(x^{*}-x^{*}_{\tilde{\mathcal{A}}})(x^{*}-x^{*}_{\tilde{\mathcal{A}}})^{\top}$
and the $q$’s are our notations for the exact eigenvalues associated with the
eigendecomposition of the sensitivity matrix $W$. Also,
$\tilde{a}_{1}=||P_{\tilde{\mathcal{I}}}(x^{(M_{\mathcal{A}})}-x^{*})||_{\Lambda}^{2}.$
Proof: We bound the quantity $(f^{*}-f^{*}_{\tilde{\mathcal{A}}})^{2}$. First,
we expand $f^{*}=f(x^{*})$ around $x^{*}_{\tilde{\mathcal{A}}}$ by applying
the Extended Mean Value Theorem analogously to ASTARS Corollary 4. For $c\in[0,1]$
and $\tilde{z}:=cx_{\tilde{\mathcal{A}}}^{*}+(1-c)x^{*}$ we have
$f^{*}=f(x^{*})=f(x_{\tilde{\mathcal{A}}}^{*})+\nabla
f(\tilde{z})^{\top}(x^{*}-x^{*}_{\tilde{\mathcal{A}}})$. Note that since the
components of $x_{\tilde{\mathcal{A}}}^{*}$ and rotated
$V_{\tilde{\mathcal{A}}}V_{\tilde{\mathcal{A}}}^{\top}x^{*}$ must match for
indices $i=1,\ldots,j$, the point $\tilde{z}$ varies along
$\tilde{\mathcal{I}}$ only; that is, $\tilde{z}$ is fixed in
$\tilde{\mathcal{A}}$. Thus, $\nabla
f(\tilde{z})=\nabla_{\tilde{\mathcal{I}}}f(\tilde{z})$, the gradient taken in
the approximate inactive subspace only.
The expansion of $f^{*}$ around $x^{*}_{\tilde{\mathcal{A}}}$ gives
$(f^{*}-f^{*}_{\tilde{\mathcal{A}}})^{2}=\left(\nabla_{\tilde{\mathcal{I}}}f(\tilde{z})^{\top}(x^{*}-x^{*}_{\tilde{\mathcal{A}}})\right)^{2}=\nabla_{\tilde{\mathcal{I}}}f(\tilde{z})^{\top}\tilde{A}\nabla_{\tilde{\mathcal{I}}}f(\tilde{z})$,
where
$\tilde{A}:=(x^{*}-x^{*}_{\tilde{\mathcal{A}}})(x^{*}-x^{*}_{\tilde{\mathcal{A}}})^{\top}$
is a $P\times P$ matrix. Note that $\tilde{A}$ is a square, positive
semi-definite, rank-1 matrix. Hence, it has one eigenvalue that is positive or
zero, which we denote by $\tilde{a}_{1}\geq 0$; all other $P-1$ eigenvalues are
$0$. We find
$\tilde{a}_{1}=||x^{*}-x^{*}_{\tilde{\mathcal{A}}}||^{2}_{\Lambda}$ so by
definition,
$\tilde{a}_{1}=||P_{\tilde{\mathcal{I}}}(x^{(M_{\mathcal{A}})}-x^{*})||_{\Lambda}^{2}.$
Observe $\tilde{a}_{1}<\infty$ because
$||x^{*}-x^{*}_{\tilde{\mathcal{A}}}||_{\Lambda}<\infty$ and also
$\tilde{a}_{1}=0\iff x^{*}=x^{*}_{\tilde{\mathcal{A}}}$, in which case
$f^{*}=f^{*}_{\tilde{\mathcal{A}}}$ so that
$|f^{*}-f^{*}_{\tilde{\mathcal{A}}}|=0$. Then
$(f^{*}-f^{*}_{\tilde{\mathcal{A}}})^{2}\leq\tilde{a}_{1}\nabla_{\tilde{\mathcal{I}}}f(\tilde{z})^{\top}\nabla_{\tilde{\mathcal{I}}}f(\tilde{z})$.
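This rank-one eigenvalue fact is easy to check numerically. A minimal sketch (our own illustration; the random vector below is only a stand-in for $x^{*}-x^{*}_{\tilde{\mathcal{A}}}$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)          # stand-in for x* - x*_{A~}, P = 5
A_tilde = np.outer(x, x)            # P x P, symmetric, PSD, rank 1

eigvals = np.sort(np.linalg.eigvalsh(A_tilde))
a1 = eigvals[-1]                    # the single nonzero eigenvalue

# a1 equals ||x||^2; all other eigenvalues vanish to machine precision
assert np.isclose(a1, np.dot(x, x))
assert np.allclose(eigvals[:-1], 0.0, atol=1e-12)
```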
Recall $\tilde{j}=j$ and for $\delta>0$, $||V-\tilde{V}||_{2}<\delta$. Then we
have comparable partitions, in the sense that the submatrices
$V_{\mathcal{A}}$ and $\tilde{V}_{\tilde{\mathcal{A}}}$ (of $V$ and
$\tilde{V}$ respectively) are both $j$-dimensional, and likewise the
submatrices $V_{\mathcal{I}}$ and $V_{\tilde{\mathcal{I}}}$ are both
$P-j$-dimensional. We shall need the results from Lemma 3.4 in [ConstantineK]
which state $||V_{\mathcal{I}}^{\top}\tilde{V}_{\tilde{\mathcal{I}}}||_{2}\leq
1$ and
$||V_{\mathcal{A}}^{\top}\tilde{V}_{\tilde{\mathcal{I}}}||_{2}\leq\delta$,
where $||\cdot||_{2}$ denotes the matrix 2-norm. Note that Lemma 3.4 requires
that the sign of $(\tilde{V}_{\tilde{\mathcal{I}}})_{i}$, the $i$-th column of
$\tilde{V}_{\tilde{\mathcal{I}}}$ to be chosen so that
$||(\tilde{V}_{\tilde{\mathcal{I}}})_{i}-(V_{\mathcal{I}})_{i}||_{2}$ is
minimized for $i=j+1,\ldots,P$, where $||\cdot||_{2}$ denotes the vector
2-norm. Now the chain rule provides
$\nabla_{\tilde{\mathcal{I}}}f=V_{\mathcal{I}}^{\top}\tilde{V}_{\tilde{\mathcal{I}}}\nabla_{\mathcal{I}}f+V_{\mathcal{A}}^{\top}\tilde{V}_{\tilde{\mathcal{I}}}\nabla_{\mathcal{A}}f$
([ConstantineK], pp. A1510).
Applying the expectation over both $\mathcal{I}$ and $\mathcal{A}$ to both
sides (where the left-hand side is constant with respect to this expectation),
we have
$(f^{*}-f^{*}_{\tilde{\mathcal{A}}})^{2}\leq\tilde{a}_{1}\mathbb{E}_{\mathcal{A},\mathcal{I}}\left(\nabla_{\tilde{\mathcal{I}}}f(\tilde{z})^{\top}\nabla_{\tilde{\mathcal{I}}}f(\tilde{z})\right)$.
Using the chain rule and
$||V_{\mathcal{I}}^{\top}\tilde{V}_{\tilde{\mathcal{I}}}||_{2}\leq 1$ and
$||V_{\mathcal{A}}^{\top}\tilde{V}_{\tilde{\mathcal{I}}}||_{2}\leq\delta$
gives
(110)
$(f^{*}-f^{*}_{\tilde{\mathcal{A}}})^{2}\leq\tilde{a}_{1}\left(\mathbb{E}_{\mathcal{I}}\left(\nabla_{\mathcal{I}}f(\tilde{z})^{\top}\nabla_{\mathcal{I}}f(\tilde{z})\right)+2\delta\,\mathbb{E}_{\mathcal{A},\mathcal{I}}\left(\nabla_{\mathcal{I}}f(\tilde{z})^{\top}\nabla_{\mathcal{A}}f(\tilde{z})\right)+\delta^{2}\mathbb{E}_{\mathcal{A}}\left(\nabla_{\mathcal{A}}f(\tilde{z})^{\top}\nabla_{\mathcal{A}}f(\tilde{z})\right)\right),$
where linearity of $\mathbb{E}$ allows for breaking the expectation of the
appearing sum into a sum of expectations taken over only $\mathcal{A}$, only
$\mathcal{I}$, or both $\mathcal{A}$ and $\mathcal{I}$, depending on which
types of gradients appear. The Cauchy-Schwarz inequality may be applied to the
second term on the right-hand side of (110) to write
(111)
$2\delta\,\mathbb{E}_{\mathcal{A},\mathcal{I}}\left(\nabla_{\mathcal{I}}f(\tilde{z})^{\top}\nabla_{\mathcal{A}}f(\tilde{z})\right)\leq
2\delta\,\mathbb{E}_{\mathcal{I}}\left(\nabla_{\mathcal{I}}f(\tilde{z})^{\top}\nabla_{\mathcal{I}}f(\tilde{z})\right)^{1/2}\mathbb{E}_{\mathcal{A}}\left(\nabla_{\mathcal{A}}f(\tilde{z})^{\top}\nabla_{\mathcal{A}}f(\tilde{z})\right)^{1/2},$
where we can split up the two terms inside
$\mathbb{E}_{\mathcal{A},\mathcal{I}}(\cdot)$ into two separate expectations
taken over $\mathcal{I}$ and $\mathcal{A}$ individually, due to their
independence. Substituting this bound into (110) and then factoring the
resulting terms yields
(112)
$(f^{*}-f^{*}_{\tilde{\mathcal{A}}})^{2}\leq\tilde{a}_{1}\left(\mathbb{E}_{\mathcal{I}}\left(\nabla_{\mathcal{I}}f(\tilde{z})^{\top}\nabla_{\mathcal{I}}f(\tilde{z})\right)^{1/2}+\delta\,\mathbb{E}_{\mathcal{A}}\left(\nabla_{\mathcal{A}}f(\tilde{z})^{\top}\nabla_{\mathcal{A}}f(\tilde{z})\right)^{1/2}\right)^{2}.$
Citing [ConstantineK] Lemma 2.2, we have
$\mathbb{E}_{\mathcal{A}}\left(\nabla_{\mathcal{A}}f(\tilde{z})^{\top}\nabla_{\mathcal{A}}f(\tilde{z})\right)<q_{1}+\cdots+q_{j}$,
where $q_{i}$, $i=1,\ldots,j$ are the first $j$ eigenvalues of the sensitivity
matrix $W$ and
$\mathbb{E}_{\mathcal{I}}\left(\nabla_{\mathcal{I}}f(\tilde{z})^{\top}\nabla_{\mathcal{I}}f(\tilde{z})\right)<q_{j+1}+\cdots+q_{P}$,
where $q_{i}$, $i=j+1,\ldots,P$ are the last $P-j$ eigenvalues of the
sensitivity matrix $W$. Hence,
$(f^{*}-f^{*}_{\tilde{\mathcal{A}}})^{2}\leq\tilde{a}_{1}\left(\delta\,\sqrt{q_{1}+\cdots+q_{j}}+\sqrt{q_{j+1}+\cdots+q_{P}}\right)^{2}$.
Applying a square root to both sides, we obtain (109). $\blacksquare$
We now present a statement about the convergence of FAASTARS to $f^{*}$ by
combining the previous two results, along with STARS Theorem 4.5 (Modified).
Recall for $k=1,\ldots,M_{\mathcal{A}}$ iterates correspond to STARS with
estimated hyperparameters and for $k=M_{\mathcal{A}}+1,\ldots,M$ the iterates
correspond to ASTARS with estimated hyperparameters and estimated
$\mathcal{A}$.
FAASTARS Theorem 4: For $k=M_{\mathcal{A}},\ldots,M$, let the assumptions from
FAASTARS Corollaries 2 and 3 hold. For any total number of FAASTARS iterations
$M\geq 1$,
(113)
$\begin{split}\sum_{k=M_{\mathcal{A}}}^{M}\frac{\phi_{k}-f^{*}}{M-M_{\mathcal{A}}+1}\leq&\frac{4L_{1}(j+4)||P_{\tilde{\mathcal{A}}}(x^{(M_{\mathcal{A}})}-x^{*})||^{2}}{\sqrt{K_{1}}(2-\sqrt{K_{1}})(M-M_{\mathcal{A}}+1)}+\frac{4\sigma(j+4)}{\sqrt{2K_{2}}(2-\sqrt{K_{1}})}C_{5}\\\\
&+\sqrt{\tilde{a}_{1}}\left(\delta\sqrt{q_{1}+\cdots+q_{j}}+\sqrt{q_{j+1}+\cdots+q_{P}}\right),\end{split}$
where $C_{5}:=\sqrt{K_{1}}\cdot 0.036+\frac{3K_{1}+K_{2}}{16}\cdot 1.034$.
Proof: Using the triangle inequality, we have
$|\phi_{k}-f^{*}|\leq|\phi_{k}-f^{*}_{\tilde{\mathcal{A}}}|+|f^{*}-f^{*}_{\tilde{\mathcal{A}}}|.$
Thus, noting that the quantity $\phi_{k}-f^{*}_{\tilde{\mathcal{A}}}$ is
always nonnegative, we use the triangle inequality to rewrite the summation on
the left-hand side of (113) as
(114)
$\sum_{k=M_{\mathcal{A}}}^{M}\frac{\phi_{k}-f^{*}}{M-M_{\mathcal{A}}+1}\leq\sum_{k=M_{\mathcal{A}}}^{M}\frac{\phi_{k}-f^{*}_{\tilde{\mathcal{A}}}}{M-M_{\mathcal{A}}+1}+|f^{*}-f^{*}_{\tilde{\mathcal{A}}}|.$
Now the sum on the right-hand side of (114) is in a more useful form
for us, since the final phase of iterates, $k=M_{\mathcal{A}},\ldots,M$,
consists of (approximate) ASTARS iterates converging to $f^{*}_{\tilde{\mathcal{A}}}$ (not
$f^{*}$); thus, we recall that we already proved a bound for this sum in
FAASTARS Corollary 2. Note that since we assume in FAASTARS Corollary 3 that
$\tilde{j}=j$, we replace $\tilde{j}$ with $j$ in the result from FAASTARS
Corollary 2. Otherwise, we just invoke FAASTARS Corollary 3, bounding the last
term on the right-hand side of (114) to obtain (113). $\blacksquare$
We use the results above to analyze the complexity of FAASTARS in the
following remark.
Remark: We again mimic the complexity analyses we performed in the preceding
sections for FAASTARS. Let $||\cdot||$ denote a norm in $\Lambda$ throughout.
Define $R_{\tilde{\mathcal{A}}}^{2}$ as a bound
$||P_{\tilde{\mathcal{A}}}(x^{(M_{\mathcal{A}})}-x^{*})||^{2}\leq
R_{\tilde{\mathcal{A}}}^{2}$. We recall
$\tilde{a}_{1}=||P_{\tilde{\mathcal{I}}}(x^{(M_{\mathcal{A}})}-x^{*})||_{\Lambda}^{2}$
and define a bound $\tilde{a}_{1}\leq R_{\tilde{\mathcal{I}}}^{2}$.
Define
$x^{\dagger}:=\text{argmin}_{x}\\{f(x):x\in\\{x^{(0)},\ldots,x^{M}\\}\\}$ and
$\phi_{\dagger}:=\mathbb{E}_{\mathcal{U}_{k-1},\mathcal{P}_{k-1}}(f(x^{\dagger}))$
as before. Then the value $\phi_{\dagger}-f^{*}$ must be less than or equal to
the average improvement for any given run of FAASTARS; that is, (113), along with
our new definitions, implies that
(115)
$\begin{split}\phi_{\dagger}-f^{*}\leq&\frac{4L_{1}(j+4)}{\sqrt{K_{1}}(2-\sqrt{K_{1}})(M-M_{\mathcal{A}}+1)}R_{\tilde{\mathcal{A}}}^{2}+\frac{4\sigma(j+4)}{\sqrt{2K_{2}}(2-\sqrt{K_{1}})}C_{5}\\\\
&+R_{\tilde{\mathcal{I}}}\left(\delta\sqrt{q_{1}+\cdots+q_{j}}+\sqrt{q_{j+1}+\cdots+q_{P}}\right).\end{split}$
Now in this analysis, we are only taking enough approximate STARS steps to
learn a surrogate for $f$ to obtain $\tilde{\mathcal{A}}$. Thus, we have
$M_{\mathcal{A}}\sim\mathcal{O}(L(P))$, where $L$ is a function of $P$ that
depends on the surrogate method used to learn $\mathcal{A}$. For instance, if
we use a linear surrogate or RBFs – which begin by fitting a linear
surrogate, and then later a quadratic surrogate once enough ASTARS steps are
taken – then $L(P)=P$. If we use quadratic surrogates, then $L(P)=P^{2}$.
(Typically we use quadratic surrogates for higher-quality active subspaces.)
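The scaling $L(P)$ simply reflects how many coefficients each surrogate must fit. As a rough illustration (our own sketch, with hypothetical helper names): a linear model in $P$ variables has $P+1$ coefficients, while a full quadratic has $1+P+P(P+1)/2$, so the sample requirements grow like $P$ and $P^{2}$, respectively:

```python
def linear_coeffs(P):
    # intercept + P slopes
    return P + 1

def quadratic_coeffs(P):
    # intercept + P linear terms + P*(P+1)/2 distinct quadratic terms
    return 1 + P + P * (P + 1) // 2

# Coefficient counts for a modest and a larger dimension P
for P in (5, 50):
    print(P, linear_coeffs(P), quadratic_coeffs(P))
```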
The terms involving Phase 3 of FAASTARS (approximate ASTARS phase) are the
three terms on the right-hand side of (115). We assume that we wish to achieve
a final accuracy of $\epsilon_{\text{tol}}>0$. Then we will need
$\phi_{\dagger}-f^{*}\leq\epsilon_{\text{tol}}$. If we take
(116)
$\frac{4\sigma(j+4)}{\sqrt{2K_{2}}(2-\sqrt{K_{1}})}C_{5}\leq\frac{\epsilon_{\text{tol}}}{3},$
then we must require that the noise not exceed the following threshold:
(117)
$\sigma\leq\frac{\sqrt{2K_{2}}(2-\sqrt{K_{1}})\epsilon_{\text{tol}}}{12C_{5}(j+4)}.$
If we satisfy (117) and also have
(118)
$R_{\tilde{\mathcal{I}}}\left(\delta\sqrt{q_{1}+\cdots+q_{j}}+\sqrt{q_{j+1}+\cdots+q_{P}}\right)\leq\frac{\epsilon_{\text{tol}}}{3},$
then we can achieve $\epsilon_{\text{tol}}$ accuracy as long as
(119)
$\frac{4L_{1}(j+4)}{\sqrt{K_{1}}(2-\sqrt{K_{1}})(M-M_{\mathcal{A}}+1)}R_{\tilde{\mathcal{A}}}^{2}\leq\frac{\epsilon_{\text{tol}}}{3}\iff
M\geq
M_{\mathcal{A}}+\frac{12L_{1}(j+4)R_{\tilde{\mathcal{A}}}^{2}}{\epsilon_{\text{tol}}\sqrt{K_{1}}(2-\sqrt{K_{1}})}-1.$
Hence, we achieve $\epsilon_{\text{tol}}$ accuracy as long as: the noise is
small enough; the eigenvalues of the inactive subspace and the distance from
$x^{*}$ to $x^{*}_{\tilde{\mathcal{A}}}$ are small enough; and $M$ is large
enough. Details of these required bounds are given by (117), (118), and (119),
respectively. With these assumptions, we achieve $\epsilon_{\text{tol}}$
accuracy in
(120)
$M\sim\mathcal{O}\left(\max\left\\{L(P),\,\frac{L_{1}jR_{\tilde{\mathcal{A}}}^{2}}{\epsilon_{\text{tol}}\sqrt{K_{1}}(2-\sqrt{K_{1}})}\right\\}\right).$
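The bound (119) can be read as an iteration budget. A hedged sketch (all parameter values below are made up for illustration; `min_iterations` is our own hypothetical helper, not code from the text):

```python
import math

def min_iterations(M_A, L1, j, R_A_sq, eps, K1):
    """Smallest integer M satisfying (119):
    M >= M_A + 12*L1*(j+4)*R_A^2 / (eps*sqrt(K1)*(2 - sqrt(K1))) - 1."""
    bound = M_A + 12 * L1 * (j + 4) * R_A_sq \
        / (eps * math.sqrt(K1) * (2 - math.sqrt(K1))) - 1
    return math.ceil(bound)

# Exact hyperparameters (K1 = 1), active dimension j = 2, unit constants
print(min_iterations(M_A=100, L1=1.0, j=2, R_A_sq=1.0, eps=0.5, K1=1.0))
```

As expected from (120), the budget grows linearly in $1/\epsilon_{\text{tol}}$ and in $j$.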
This analysis also shows that, given a particular noise level $\sigma$, as
well as the term involving the eigenvalues of the inactive subspace and the
distance from $x^{*}$ to $x^{*}_{\tilde{\mathcal{A}}}$, the achievable
accuracy can be no better (i.e., no smaller) than the value given below:
(121)
$\epsilon_{\text{tol}}\geq\max\left\\{\frac{12(j+4)C_{5}}{\sqrt{2K_{2}}(2-\sqrt{K_{1}})}\sigma,\,3R_{\tilde{\mathcal{I}}}\left(\delta\sqrt{q_{1}+\cdots+q_{j}}+\sqrt{q_{j+1}+\cdots+q_{P}}\,\right)\right\\},$
showing that the achievable accuracy will either be limited (mainly) by
hyperparameter approximations or (mainly) by the error in
$\tilde{\mathcal{A}}$.
We find complexity results similar to those of ASTARS, but we also pay a price
for the approximation $\tilde{\mathcal{A}}$, especially when the approximation
is poor, usually due to both an insufficient number of samples and samples
that do not explore $\Lambda$ sufficiently. (Recall that since $\tilde{\mathcal{A}}$
is formed using information from a surrogate trained on STARS samples in the
FAASTARS routine, it is really poor surrogates that hurt convergence.)
# A review of one-phase Hele–Shaw flows and a level-set method for non-standard configurations
Liam C. Morrow1,2, Timothy J. Moroney2, Michael C. Dallaston2 and Scott W.
McCue2
1Department of Engineering Science, University of Oxford, Oxford OX1 3PJ,
United Kingdom
2School of Mathematical Sciences, Queensland University of Technology,
Brisbane, QLD, 4001, Australia
###### Abstract
The classical model for studying one-phase Hele–Shaw flows is based on a
highly nonlinear moving boundary problem with the fluid velocity related to
pressure gradients via a Darcy-type law. In a standard configuration with the
Hele–Shaw cell made up of two flat stationary plates, the pressure is
harmonic. Therefore, conformal mapping techniques and boundary integral
methods can be readily applied to study the key interfacial dynamics,
including the Saffman–Taylor instability and viscous fingering patterns. As
well as providing a brief review of these key issues, we present a flexible
numerical scheme for studying both standard and non-standard Hele–Shaw flows.
Our method consists of using a modified finite difference stencil in
conjunction with the level set method to solve the governing equation for
pressure on complicated domains and track the location of the moving boundary.
Simulations show that our method is capable of reproducing the distinctive
morphological features of the Saffman–Taylor instability on a uniform
computational grid. By making straightforward adjustments, we show how our
scheme can easily be adapted to solve for a wide variety of non-standard
configurations, including cases where the gap between the plates is linearly
tapered, the plates are separated in time, and the entire Hele–Shaw cell is
rotated at a given angular velocity.
Keywords: Hele–Shaw flow; Saffman–Taylor instability; viscous fingering
patterns; moving boundary problem; conformal mapping; level-set method.
## 1 Introduction
Viscous fingering patterns that develop in a Hele–Shaw flow are very well
studied in fluid dynamics. These patterns, which arise due to the
Saffman–Taylor instability [118], occur when a viscous fluid that fills a gap
between two narrowly separated parallel plates is displaced by a less viscous
fluid, which is injected into (or withdrawn from) the cell. Provided these two
fluids are immiscible, an interface forms that is usually unstable and
develops visually striking patterns characterised by their branching
morphology. As the governing equation for the velocity of the viscous fluid is
the same as Darcy’s law, Hele–Shaw flow can be interpreted as a
two-dimensional paradigm for flow through a homogeneous porous medium. Further,
the Hele–Shaw framework has also been used to model interfacial instabilities
appearing in other scenarios including bacterial colony growth [13], crystal
solidification [79], random walk simulations [82], and the flow of
electrolytes [95, 50]. We refer the reader to Refs [18, 63, 88, 136] for a
historical summary and comprehensive overview of the broad applicability of
the Hele–Shaw model.
If we assume that the viscosity of the less viscous fluid (air, say) can be
neglected entirely, then the classical model for flow in the more viscous
fluid, $\Omega(t)$, is the one-phase moving boundary problem
$\boldsymbol{v}=-\frac{b^{2}}{12\mu}\nabla p,\quad\nabla\cdot\boldsymbol{v}=0,\quad\boldsymbol{x}\in\Omega(t),$
(1)
where $\boldsymbol{v}$ is the fluid velocity (averaged across the distance $b$
between the parallel Hele–Shaw plates), $p$ is the fluid pressure and $\mu$ is
the constant viscosity, together with the boundary conditions
$p=-\gamma\kappa+\text{constant},\quad
v_{n}=-\frac{b^{2}}{12\mu}\frac{\partial p}{\partial
n},\quad\boldsymbol{x}\in\partial\Omega(t),$ (2)
where $\gamma$ is the surface tension, $\kappa$ is the signed curvature of
$\partial\Omega$, defined to be positive if the interface is convex from the
side of the more viscous fluid, and $v_{n}$ is the normal speed of the
interface. Typically the flow is driven by injection or withdrawal of fluid
through a point or at infinity. This original model for two immiscible fluids
was described by Saffman & Taylor [118] in 1958, except that for the most part
they did not neglect the flow details of the less viscous fluid.
The one-phase Hele–Shaw model that we are concerned with has been applied to
three main configurations, namely an expanding bubble of air into an infinite
body of fluid, a contracting finite blob of fluid, and the displacement of
viscous fluid in a Hele–Shaw channel. In each of these three scenarios, the
fluid boundary is unstable (the Saffman–Taylor instability), and a typical
outcome involves portions of the interface propagating increasingly faster
than other portions, in some cases leading to a striking fingering pattern at
the boundary. For the special zero-surface-tension case (also known as
Laplacian growth), a host of mathematical studies based mostly on conformal
mapping, conserved moments and the Baiocchi transform have highlighted the
possible scenarios for this ill-posed model, including exact solutions and
finite-time blow-up [29, 31, 30, 34, 43, 61, 62, 69, 70, 71, 77, 92, 93]. For
the more physically realistic nonzero surface tension case (which is
well-posed), the broader strategies to study this problem include stability
analysis [94, 107], small-surface-tension asymptotics [19, 128], employing
harmonic moments and conserved quantities [78] and fully numerical methods
mostly with boundary integral methods [20, 33, 66, 68, 75, 101] but also the
level set formulation [67]. While we shall devote much of our attention in
this article to certain non-standard variations of (1)-(2), we do not provide
any commentary on how the boundary conditions (2) may be altered by
considering additional physical effects on the boundary apart from surface
tension, including the effects of a dynamic contact angle, thin wetting films
and the related issue of kinetic undercooling [5, 6, 9, 22, 35, 36, 41, 106,
112, 117, 138, 139]. Similarly, we do not review non-Newtonian flows, which
themselves are well-studied [10, 45, 47, 76, 90, 116]. Finally, our focus is
on time-dependent problems and so we are not intending to review the extensive
literature on travelling wave problems involving a steadily propagating finger
[21, 27, 51, 52, 65, 91, 117, 123, 126, 132] or bubble [59, 64, 87, 125, 129,
127, 135, 134].
In recent years, there has been increased interest in studying how variations
to the classic Hele–Shaw model influence the development of viscous fingering
patterns. Many of these studies consider the effect of imposing a
time-dependent injection rate, specifically to control or reduce the growth of
fingers [11, 12, 28, 39, 40, 38, 57, 81]. Further, much attention has been
devoted to manipulating the geometry of the Hele–Shaw cell. One of the
earliest examples of this approach is by Zhao et al. [140], who considered the
classic Saffman–Taylor experiment [118] and linearly tapered the gap between
the plates in the direction of the fluid flow. Since this experiment, numerous
studies have been performed to generate further insight into how the taper
angle influences viscous fingering [1, 2, 14, 74, 86, 99]. Other popular
physical alterations to the Hele–Shaw cell include uniformly separating the
plates in time [43, 44, 83, 100, 122, 133, 143], rotating the entire Hele–Shaw
cell at a given angular velocity [7, 17, 43, 119], or replacing one of the
plates with an elastic membrane [3, 32, 48, 85, 89, 108, 109, 110, 111]. All
of these configurations have been shown to produce patterns distinct from
traditional Saffman–Taylor fingering.
One of the most commonly used analytical tools for studying both standard and
non-standard Hele–Shaw flow is linear stability analysis. For the standard
configuration, Paterson [107] showed that modes of perturbation to the
circular solution become successively unstable as the bubble expands,
predicting the most unstable wave number for a given bubble radius. Further,
linear stability analysis has also been used to derive injection rates to
control [80] or minimise [40] the development of viscous fingering. For
non-standard Hele–Shaw flow, linear stability analysis provides insight into how
manipulating the geometry of the cell influences the development of viscous
fingers, including when the plates are tapered [1, 2], rotating [17], or are
being separated [122]. While linear stability analysis is a flexible tool that
leads to analytic predictions [39, 56, 81], it only leads to an accurate
description of solutions for small time. As such, this restriction increases
the need for flexible and accurate numerical methods that can be used to
understand the full nonlinear behaviour of these problems.
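To make the linear-stability discussion concrete, the growth rate of an azimuthal mode $n$ on an expanding circular interface takes the classical Paterson-type form $\lambda_{n}=\frac{Q}{2\pi R^{2}}(n-1)-\frac{\gamma b^{2}}{12\mu R^{3}}n(n^{2}-1)$, restated here as an assumption in our own notation (see Paterson [107] for the derivation). A short sketch locating the most unstable integer mode:

```python
import math

def growth_rate(n, Q, R, gamma, b, mu):
    """Paterson-type growth rate for azimuthal mode n on a circular interface
    of radius R: injection rate Q, surface tension gamma, gap b, viscosity mu.
    (Stated here as an assumption, not taken from this paper's text.)"""
    return (Q / (2 * math.pi * R**2)) * (n - 1) \
        - (gamma * b**2 / (12 * mu * R**3)) * n * (n**2 - 1)

def most_unstable_mode(Q, R, gamma, b, mu, n_max=200):
    """Integer mode with the largest linear growth rate."""
    return max(range(2, n_max), key=lambda n: growth_rate(n, Q, R, gamma, b, mu))

# Illustrative (made-up) parameter values; the most unstable mode number
# increases with the bubble radius R, consistent with Paterson's prediction.
print(most_unstable_mode(Q=1.0, R=1.0, gamma=1e-2, b=0.2, mu=1.0))
```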
Computing numerical solutions to Hele–Shaw flow (and related moving boundary
problems) can be a challenging task, as interfacial patterns develop, which
requires solving the governing equations in complicated moving domains. Such
approaches can be classified as either front tracking, where the interface is
solved for explicitly, or front capturing, where the interface is represented
implicitly. For the classic Hele–Shaw problem, as the pressure of the viscous
fluid satisfies Laplace’s equation, the most popular choice is the boundary
integral method (also known as the boundary element method), which is
classified as a front tracking method. In particular, the boundary integral
method has been used to solve the classic one-phase Hele–Shaw problem [33, 80,
81], as well as two-phase flow [73, 114], problems for which the plates are
uniformly separated in a time-dependent fashion [122, 141, 142], and Hele–Shaw
flow in channel geometry [37]. However, for non-standard Hele–Shaw
configurations, the pressure may no longer be harmonic and the boundary
integral method becomes a less suitable tool. Another disadvantage of front
tracking methods is that the mesh may need to be regenerated as the interface
evolves, in which case care must be taken to avoid mesh distortion effects.
A popular alternative to the boundary integral method is the level set method,
which represents the interface implicitly as the zero level set of a higher
dimensional hypersurface [104]. A commonly cited advantage of the level set
method is that it can easily handle complicated interfacial behaviour such as
the merging and splitting of interfaces. Another, more pertinent, advantage of
the level set method is that it can describe the formation of complicated
interfacial patterns (such as those that occur in Hele–Shaw flow) on a uniform
grid, eliminating the need to re-mesh as the interface evolves. One of the
most significant drawbacks of the level set method is that it can suffer from
mass loss/gain in regions where the mesh is under resolved. However, this
issue can be mitigated by using the particle level set method [42], which uses
massless marker particles to correct the location of the interface when mass
is lost/gained. The level set method is a popular tool for studying moving
boundary problems in fluid dynamics, and has been used to investigate
interfacial instabilities that occur in Stefan problems [25, 54] and Hele–Shaw
flow [67, 84]. We have previously applied the method to these problems
ourselves, in particular to conduction-limited melting of crystal dendrites
[98], bubbles shrinking and breaking up in a porous medium [97], and bubbles
expanding in various Hele–Shaw configurations [99]. We refer to Refs [55, 103, 121] for
more information about the level set method, including details regarding
implementation and applications.
While the level set method is used to implicitly represent the location of the
interface, to numerically simulate Hele–Shaw flow we are also required to
determine the pressure within the viscous fluid, which involves solving a
partial differential equation in a complicated domain that changes in time.
When applying the boundary integral method for the classic Hele–Shaw problem,
the solution to Laplace’s equation can be expressed in terms of Green’s
functions. As such, the problem is reformulated as an integro-differential
equation, and nodes need only be placed on the fluid-fluid interface. An
alternative choice is to solve for the pressure using the finite difference
method, which can be modified to solve problems on complicated domains when
coupled with level set functions that describe the location of the interface
[26, 53]. An advantage of this approach is that the finite difference method
can be easily adapted to wide classes of partial differential equations.
However, while the boundary integral method can easily handle non-trivial
far-field boundary conditions, their inclusion into the finite difference stencil
is not so straightforward. One solution to overcome this difficulty is to use
a very large computational domain, but this in turn results in significantly
longer computational times. Another, more elegant, solution is to use a
Dirichlet-to-Neumann map [58], which has been shown to accurately capture the
far-field boundary condition even when the interface is relatively close to
the curve on which the Dirichlet-to-Neumann map is applied [97, 98, 99].
In this work, we provide a brief review of the one-phase Hele–Shaw model,
touching on the use of complex variable and conformal mapping techniques as
well as the mathematical consequences of including or excluding surface
tension in the model. We focus on the three well-studied scenarios, namely an
expanding bubble, a contracting blob and displacement of fluid in a linear
channel. Our approach is to write down a generalised model that allows for a
number of variations of the standard approach, including a time-dependent flow
rate, a spatially and/or time-dependent gap between the plates, or rotating
plates. We then present a flexible numerical framework for solving this
generalised model [97, 98, 99]. Our scheme is based on the work of Chen [24]
and Hou et al. [67], and uses a level set based approach to track the location
of the liquid-air interface. There are several novel aspects of our numerical
framework. The first is that our scheme overcomes the limitations of the
boundary integral method in that it can easily solve Hele–Shaw flow in non-
homogeneous media, i.e. where the plates are not parallel. Second, by
representing the interface implicitly by a higher dimensional level set
function, we are able to represent the complicated interfacial patterns easily on
a uniform mesh. By performing a series of simulations, we show that our
numerical solutions are able to reproduce the morphological features of
viscous fingering in a Hele–Shaw cell. Further, by making straightforward
adjustments, we show that our scheme can easily be modified for a wide range
of non-standard Hele–Shaw configurations, including where the plates are
linearly tapered, uniformly separated in time, or rotated. For all the
configurations considered, our numerical solutions are shown to compare well
with previous simulations and experiments.
## 2 Review of one-phase Hele–Shaw model
### 2.1 Generalised Hele–Shaw model
We consider a generalised one-phase model of Hele–Shaw flow where the gap
between the plates is either spatially or temporally dependent such that $b\to
b(\boldsymbol{x},t)$ and the Hele–Shaw plates can rotate with angular velocity
$\bar{\omega}$. We suppose an inviscid bubble is injected into the viscous
fluid at rate $Q(t)$, and denote the domain occupied by the inviscid fluid to
be $\Omega(t)$. The interface separating the inviscid bubble and the viscous
fluid is denoted by $\partial\Omega(t)$. As is commonplace with Hele–Shaw
flows, we consider a depth-averaged model that comes about from averaging
Stokes flow over the gap between the plates, which itself is assumed to be
small.
Denoting $P$, $\mu$ and $\rho$ as the pressure, viscosity and density of the
viscous fluid, respectively, the governing equations for the velocity of
the viscous fluid are
$\displaystyle\boldsymbol{v}$
$\displaystyle=-\frac{b^{2}}{12\mu}(\mbox{\boldmath$\nabla$}P-\bar{\omega}^{2}\rho
r\boldsymbol{e}_{r}),$ $\displaystyle\boldsymbol{x}$
$\displaystyle\in\mathbb{R}^{2}\backslash\Omega(t),$ (3)
$\displaystyle\mbox{\boldmath$\nabla$}\cdot(b\boldsymbol{v})$
$\displaystyle=-\frac{\partial b}{\partial t},$ $\displaystyle\boldsymbol{x}$
$\displaystyle\in\mathbb{R}^{2}\backslash\Omega(t),$ (4)
where $r=|\boldsymbol{x}|$. Equation (3) is Darcy’s law modified to include
the rotational effects of the Hele–Shaw cell, while (4) ensures that the mass
of the fluid is conserved. Defining a reduced pressure according to
$p=P-\bar{\omega}^{2}\rho r^{2}/2$ and then substituting (3) into (4) generates
the governing equation for pressure,
$\displaystyle\mbox{\boldmath$\nabla$}\cdot\left(\frac{b^{3}}{12\mu}\mbox{\boldmath$\nabla$}p\right)$
$\displaystyle=\frac{\partial b}{\partial t},$
$\displaystyle\boldsymbol{x}\in\mathbb{R}^{2}\backslash\Omega(t).$ (5)
When the gap between the plates is both spatially and temporally uniform, (5)
reduces to Laplace’s equation $\nabla^{2}p=0$. We have two boundary conditions
on the fluid-fluid interface given by
$\displaystyle p$
$\displaystyle=-\gamma\left(\kappa+\frac{2}{b}\right)-\frac{\rho\bar{\omega}^{2}r^{2}}{2},$
$\displaystyle\boldsymbol{x}$ $\displaystyle\in\partial\Omega(t),$ (6)
$\displaystyle v_{n}$ $\displaystyle=-\frac{b^{2}}{12\mu}\frac{\partial
p}{\partial n},$ $\displaystyle\boldsymbol{x}$
$\displaystyle\in\partial\Omega(t),$ (7)
where $\gamma$ is the surface tension, $\kappa$ is the signed curvature of
$\partial\Omega$, $\rho$ is the density of the viscous fluid, and $v_{n}$ is
the normal speed of the interface. The dynamic boundary condition (6)
incorporates both the effects of surface tension as well as the rotation of
the Hele–Shaw plates. The kinematic boundary condition (7) relates the
velocity of the viscous fluid with the normal velocity of the interface. We
also have the far-field boundary condition
$\displaystyle\frac{b^{3}}{12\mu}\frac{\partial p}{\partial r}$
$\displaystyle\sim-\frac{Q}{2\pi r}+\frac{1}{2}r\frac{\partial b}{\partial
t},$ $\displaystyle r\to\infty,$ (8)
which acts as a source/sink term at infinity. For $Q>0$ ($Q<0$), this
condition corresponds to the bubble area expanding (contracting) at rate $Q$.
The inclusion of the $\partial b/\partial t$ term in (8) comes about from the
non-homogeneous term in (5), and ensures that the rate of change of volume of
the bubble is $Q$.
To nondimensionalise (5)-(8), we introduce $r_{0}$ as the average initial
radius of the bubble and $Q_{0}$ as the average injection rate over the
duration of a simulation, and $b_{0}=b(0,0)$. Then space, time, pressure, and
velocity are scaled according to
$\displaystyle\hat{\boldsymbol{x}}=\frac{\boldsymbol{x}}{r_{0}},\quad\hat{t}=\frac{t}{T},\quad\hat{b}=\frac{b}{b_{0}},\quad\hat{p}=\frac{b_{0}^{2}T}{12\mu
r_{0}^{2}}p,\quad\hat{\boldsymbol{v}}=\frac{T}{r_{0}}\boldsymbol{v},$ (9)
where $T$ is a representative time-scale. Dropping the hats and retaining our
original variable names, (5)-(8) become
$\displaystyle\nabla\cdot\left(b^{3}\nabla p\right)$
$\displaystyle=\frac{\partial b}{\partial t},$ $\displaystyle\boldsymbol{x}$
$\displaystyle\in\mathbb{R}^{2}\backslash\Omega(t),$ (10) $\displaystyle p$
$\displaystyle=-\sigma\left(\kappa+\frac{2R_{0}}{b}\right)-\omega^{2}r^{2},\qquad\qquad$
$\displaystyle\boldsymbol{x}$ $\displaystyle\in\partial\Omega(t),$ (11)
$\displaystyle v_{n}$ $\displaystyle=-b^{2}\frac{\partial p}{\partial n},$
$\displaystyle\boldsymbol{x}$ $\displaystyle\in\partial\Omega(t),$ (12)
$\displaystyle b^{3}\frac{\partial p}{\partial r}$
$\displaystyle\sim-\frac{Q}{2\pi r}+\frac{1}{2}r\frac{\partial b}{\partial t}$
$\displaystyle r$ $\displaystyle\to\infty,$ (13)
where $\sigma=b_{0}^{2}T\gamma/12\mu r_{0}^{3}$, $R_{0}=r_{0}/b_{0}$, and
$\omega^{2}=\rho b_{0}^{2}T\bar{\omega}^{2}/24\mu$. For this configuration, an
appropriate time-scale could be $T=r_{0}^{2}b_{0}/Q_{0}$, in which case the
dimensionless average injection rate is unity.
In addition to this expanding bubble problem, we shall also be concerned with
two other scenarios, namely the blob geometry, where viscous fluid occupies
$\Omega(t)$ and is withdrawn from a point or the cell rotates around a
perpendicular axis, and the channel geometry, where viscous fluid occupies a
long rectangular channel and is displaced by the inviscid fluid that is
injected at one end. For these two scenarios, modifications to (10)-(13) will
be made as appropriate.
### 2.2 Complex variable formulation
Before outlining our numerical scheme in Section 3, we take some time to
illustrate some of the mathematical properties of the Hele-Shaw problem,
especially in the special case of zero surface tension. This mathematical
exploration, which relies heavily on complex variable theory and conformal
mapping, is based on many previous studies in this spirit [29, 31, 30, 34, 61,
62, 69, 70, 71, 92, 93]. To keep the discussion contained and to connect with
numerical simulations described later in this paper, we restrict ourselves to
examples of three geometries (the expanding bubble problem, the blob problem
and the channel geometry).
In the standard set-up in which $b=1$ (parallel, stationary plates) and
$\omega=0$ (no rotation), equations (10)-(13) reduce to
$\displaystyle\nabla^{2}p$ $\displaystyle=0,$ $\displaystyle\boldsymbol{x}$
$\displaystyle\in\mathbb{R}^{2}\backslash\Omega(t),$ (14) $\displaystyle p$
$\displaystyle=-\sigma\kappa,\qquad\qquad$ $\displaystyle\boldsymbol{x}$
$\displaystyle\in\partial\Omega(t),$ (15) $\displaystyle v_{n}$
$\displaystyle=-\frac{\partial p}{\partial n},$ $\displaystyle\boldsymbol{x}$
$\displaystyle\in\partial\Omega(t),$ (16) $\displaystyle\frac{\partial
p}{\partial r}$ $\displaystyle\sim-\frac{Q}{2\pi r},$ $\displaystyle r$
$\displaystyle\to\infty.$ (17)
It is instructive to reformulate this problem using complex variable methods
as follows. Given the fluid pressure $p$ satisfies Laplace’s equation (14), it
can be interpreted as the negative real part of an analytic function
$W(z,t)=-p(x,y,t)+\mathrm{i}\psi(x,y,t)$ of the complex variable
$z=x+\mathrm{i}y$. Here, $W$ is acting as a complex potential, while $\psi$ is
a streamfunction.
Further, there exists a time-dependent conformal map $z=f(\zeta,t)$ from the
unit disc in the plane of an auxiliary variable $\zeta$ to the fluid region in
the $z$-plane (i.e., $\mathbb{R}^{2}\backslash\Omega(t)$) and the unit circle
$|\zeta|=1$ to the fluid interface $\partial\Omega(t)$, as depicted
schematically in Figure 1. The map will be univalent (that is, one-to-one) and
analytic in the unit disc except at a single point, which we choose to be
$\zeta=0$, and which represents $z\to\infty$. In the limit $\zeta\rightarrow 0$, we
have $f\sim a(t)/\zeta$. By fixing a rotational degree of freedom we force
$a(t)$ to be real. Now the complex potential $W(z,t)$ is also an analytic
function of $\zeta$ and so we write $w(\zeta,t)=W(f(\zeta,t),t)$. Given the
far-field condition (17), which implies $W\sim(Q/2\pi)\log z$ as
$|z|\rightarrow\infty$, we now have the local behaviour
$w\sim-(Q/2\pi)\log\zeta$ as $\zeta\rightarrow 0$.
To formulate the kinematic condition (16) in terms of the map $f$, it is
useful to introduce some complex variable equivalents of standard concepts
from vector algebra. Firstly, a complex number can be used to represent a
vector (with components given by the real and imaginary parts), such as the
normal to an interface. The unit normal to the unit circle $|\zeta|=1$ is
$n^{(\zeta)}=\zeta$, and given the interface $\partial\Omega$ is the image of
the unit circle under $z=f(\zeta,t)$, the normal $n^{(z)}$ in the $z$-plane is
found by rotating $n^{(\zeta)}$ by the argument of $f_{\zeta}$, thus
$n^{(z)}=\zeta f_{\zeta}/|\zeta f_{\zeta}|$ (see Figure 1). Secondly, the
equivalent of the dot product between two complex numbers $a$ and $b$ is
$\Re\\{a\overline{b}\\}$. The time derivative of the map, $f_{t}$, evaluated at
a point on the unit circle gives the velocity vector of the corresponding point
on the interface $\partial\Omega$. Therefore the normal velocity $v_{n}$ of the
interface $\partial\Omega$ as a function of $\zeta$ is given by
$\Re\\{f_{t}\overline{\zeta f_{\zeta}}\\}/|\zeta f_{\zeta}|$, while the normal
derivative $\partial p/\partial n$ is given by $-\Re\\{\zeta
W_{z}f_{\zeta}\\}/|f_{\zeta}|=\Re\\{\zeta w_{\zeta}\\}/|f_{\zeta}|$. This
calculation allows the kinematic condition to be represented as
$\Re\\{f_{t}\overline{\zeta f_{\zeta}}\\}=\Re\\{\zeta
w_{\zeta}\\},\qquad|\zeta|=1.$
Now to reformulate the dynamic condition (15) we note the curvature on the
fluid boundary can be written as $\Re\\{\zeta(\zeta
f_{\zeta})_{\zeta}\overline{\zeta f_{\zeta}}\\}/|\zeta f_{\zeta}|^{3}$ on the
unit circle. Given (15) and the logarithmic behaviour of $w$ as
$\zeta\rightarrow 0$, we can write
$w=-(Q/2\pi)\log\zeta-\sigma\mathcal{K}(\zeta,t)$, where
$\mathcal{K}(\zeta,t)$ is an analytic function of $\zeta$ in the unit disc
whose real part on the unit circle $|\zeta|=1$ is given by the curvature
$\kappa$. Combining these ideas, we arrive at the single governing equation
$\Re\\{f_{t}\overline{\zeta
f_{\zeta}}\\}=-\frac{Q}{2\pi}-\sigma\Re\\{\zeta\mathcal{K}_{\zeta}\\}\qquad|\zeta|=1.$
(18)
This equation is often referred to as the Polubarinova–Galin equation,
especially when surface tension is ignored [60].
Figure 1: A schematic diagram indicating the time-dependent complex mapping
$f(\zeta,t)$ from the auxiliary $\zeta$ plane to the physical
$z(=x+\mathrm{i}y)$ plane. The interface $\partial\Omega(t)$ is the image of the
unit circle $|\zeta|=1$, and the complex representation of the unit normal
vector can be expressed in terms of the derivative of $f$.
We shall briefly outline five illustrative examples, chosen to demonstrate the
key features for the special case in which surface tension is ignored. We
shall later refer back to these examples when we implement our numerical
scheme with surface tension included. First, we shall consider the mapping
$f=a(t)/\zeta+b(t)\zeta^{N}$, $N\geq 2$ [69]. By substituting into (18) with
$\sigma=0$, we find that $a$ and $b$ must satisfy the coupled system of ODEs
$a\dot{a}-Nb\dot{b}=Q/2\pi$, $N\dot{a}b-a\dot{b}=0$. Say, for definiteness,
$a(0)=1$ and $b(0)=\epsilon$, then the second of these equations gives
$b=\epsilon a^{N}$, while the first equation integrates to give the time-
dependence
$t=\frac{\pi}{Q}\left(a^{2}-1-\epsilon^{2}N(a^{2N}-1)\right).$
Thus we have an exact solution, as shown by the red dashed curves in the top
panel of Figure 2(a). The innermost curve is the initial bubble boundary. Here
we have chosen $N=5$, $Q=2\pi$ and $\epsilon=0.01$, so this inner curve is a
circle with a six-fold perturbation. As time increases, the bubble expands and
starts to develop six small fingers.
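This exact solution is straightforward to evaluate numerically. The sketch below (Python; the function names and the bisection inversion are our own illustrative choices, with the parameter values $N=5$, $Q=2\pi$ and $\epsilon=0.01$ used in Figure 2(a)) traces the bubble boundary as the image of the unit circle and inverts the implicit relation $t(a)$, which is monotone in $a$ between $a=1$ and the blow-up value.

```python
import numpy as np

EPS, N, Q = 0.01, 5, 2.0 * np.pi

def boundary(a, n=400):
    """Image of the unit circle under f = a/zeta + b*zeta**N, with b = EPS*a**N."""
    zeta = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))
    return a / zeta + EPS * a**N * zeta**N

def time_of(a):
    """Implicit time-dependence t(a) from the exact solution."""
    return (np.pi / Q) * (a**2 - 1.0 - EPS**2 * N * (a**(2 * N) - 1.0))

# t(a) is monotone increasing for 1 <= a < a*, where a* is the value of a
# at which the critical points reach the unit circle; invert by bisection.
A_STAR = (1.0 / (EPS * N))**(1.0 / (N - 1))

def a_at_time(t_target):
    lo, hi = 1.0, A_STAR
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if time_of(mid) < t_target else (lo, mid)
    return 0.5 * (lo + hi)
```

Evaluating `boundary(a_at_time(t))` for a sequence of times reproduces the expanding six-fingered boundaries of Figure 2(a).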
Figure 2: Solutions with and without surface tension. The red dashed curves in
(a)-(e) are zero-surface-tension solutions described by the five examples in
Section 2.2. The solid blue curves are numerical solutions, including surface
tension, for the same initial conditions. For the examples in (a), (c) and
(d), the zero-surface-tension solutions involve a form of finite-time blow-up
characterised by cusps forming on the interface; the inclusion of surface
tension regularises these singularities, allowing the full solution to
continue past these blow-up times. For the examples in (b) and (e), blow-up in
the zero-surface-tension solution is prevented by the presence of a
logarithmic singularity; here the numerical solution with small surface
tension remains close to the zero-surface-tension solution for small time and
then deviates away so that the long-time behaviour is different. Each case
includes a sketch of the $\zeta$-plane with the unit circle and critical
points and logarithmic singularities indicated by solid red dots and black
diamonds, respectively. Note in (b) we do not plot the critical points at
$t=0$ as they are outside the field of view here. For $(a)$-$(e)$, as time
increases, the critical points and logarithmic singularities move towards the
unit circle.
It is of interest to track the critical points, $\zeta=\zeta^{*}$, which are
the points at which $f_{\zeta}=0$. For this example, $\zeta^{*}=(1/\epsilon
Na^{N-1})^{1/(N+1)}$. Clearly there are $N+1$ critical points that are equally
spaced along a circle that is outside the unit circle in the $\zeta$-plane. As
time evolves, each of these critical points moves in a straight line towards
the origin and intersects the unit circle when $|\zeta^{*}|=1$, i.e.
$a=1/(\epsilon N)^{1/(N-1)}$. We can compute the exact time that this occurs,
namely
$t^{*}=\frac{\pi}{Q}\left(\frac{N-1}{N(\epsilon
N)^{2/(N-1)}}-1+\epsilon^{2}N\right).$
For the case in Figure 2(a), for which $N=5$, we can see the six critical
points (red dots) in the bottom panel moving towards the unit circle. At
$t=t^{*}$, there is finite-time blow-up, characterised by six cusps of order
$3/2$ along the bubble boundary, which we can see in the top panel of Figure
2(a) (a cusp of order $3/2$ is characterised by a curvature singularity that
appears locally like two branches of $y^{2}=x^{3}$ meeting at a cusp, suitably
scaled and rotated). The solution cannot be continued past $t=t^{*}$ as the
conformal mapping ceases to be univalent.
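These formulae can be checked numerically: the critical points annihilate the derivative of the map, and the blow-up time $t^{*}$ is consistent with the implicit relation $t(a)$. A minimal Python sketch (variable names are ours; parameters as in Figure 2(a)):

```python
import numpy as np

EPS, N, Q = 0.01, 5, 2.0 * np.pi

def f_zeta(zeta, a):
    """Derivative of the map f = a/zeta + EPS*a**N * zeta**N."""
    return -a / zeta**2 + N * EPS * a**N * zeta**(N - 1)

def zeta_star(a):
    """The real positive member of the N+1 equally spaced critical points."""
    return (1.0 / (EPS * N * a**(N - 1)))**(1.0 / (N + 1))

# Blow-up occurs when |zeta*| = 1, i.e. a = 1/(EPS*N)**(1/(N-1))
a_star = (1.0 / (EPS * N))**(1.0 / (N - 1))
t_star = (np.pi / Q) * ((N - 1) / (N * (EPS * N)**(2.0 / (N - 1))) - 1.0
                        + EPS**2 * N)
```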
The second zero-surface-tension example we consider is for
$f=a(t)/\zeta+\log(\zeta-d(t))-\log(\zeta+d(t))$ [69]. Again, the geometry is
an expanding bubble, however this time the behaviour is qualitatively
different. By substituting this map for $f$ into (18) with $\sigma=0$, we find
that $a$ and $d$ satisfy the coupled system:
$\dot{a}=\frac{Q}{2\pi}\frac{ad^{4}-a+4d}{a^{2}d^{4}-(2d-a)^{2}},\quad\dot{d}=-\frac{Q}{2\pi}\frac{d(d^{4}-1)}{a^{2}d^{4}-(2d-a)^{2}}.$
For initial conditions, we choose both $a(0)>d(0)>1$ so the denominators are
initially positive, which means that $\dot{a}>0$ and $\dot{d}<0$ for small
time. To determine the precise time-dependent behaviour, one option is to
integrate this system numerically, which demonstrates that $a(t)$ continues to
increase while $d(t)$ decreases towards $d=1$ as $t$ increases. To make
progress analytically, by dividing one equation by the other, we can also
derive a first-order ODE with exact solution
$a=\frac{1}{d}\left[\ln\left(\frac{(d(0)^{2}-1)(d^{2}+1)}{(d(0)^{2}+1)(d^{2}-1)}\right)+a(0)d(0)\right],$
which shows that $a\sim-\ln(d-1)$ as $d\rightarrow 1^{+}$. Clearly the map has
logarithmic singularities at $\zeta_{s}=\pm d$ and critical points where
$\zeta^{*}=\pm d\sqrt{a}/\sqrt{a-2d}$. For sufficiently large time, we have
$a-2d>0$ and so both $\zeta_{s}$ and $\zeta^{*}$ lie on the real $\zeta$-axis
and move towards the unit circle as $t\rightarrow\infty$. Since
$|\zeta^{*}|\sim d+d^{2}/a$ as $a\rightarrow\infty$, we see the critical
points $\zeta^{*}$ are further away from the unit circle than the logarithmic
singularities $\zeta_{s}$, and therefore finite-time blow-up is avoided (each
logarithmic singularity asymptotes to the unit circle, but does not cross it;
see Howison et al. [72], and the discussion on the channel problem at the end
of this section).
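If one does wish to integrate the coupled system for $a$ and $d$ numerically, a standard fixed-step Runge–Kutta scheme suffices, and the first integral relating $a$ and $d$ quoted above provides a convenient accuracy check. A minimal Python sketch (the step size and final time are illustrative choices of ours):

```python
import numpy as np

Q = 2.0 * np.pi

def rhs(y):
    """Right-hand side of the coupled ODEs for a(t) and d(t)."""
    a, d = y
    denom = a**2 * d**4 - (2.0 * d - a)**2
    da = (Q / (2.0 * np.pi)) * (a * d**4 - a + 4.0 * d) / denom
    dd = -(Q / (2.0 * np.pi)) * d * (d**4 - 1.0) / denom
    return np.array([da, dd])

def rk4(y, dt, n):
    """Classical fourth-order Runge-Kutta with fixed step size."""
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y

a0, d0 = 4.0, 10.0 / 3.0
a, d = rk4(np.array([a0, d0]), 1e-3, 5000)   # integrate to t = 5

# First integral relating a and d (independent of the time parametrisation)
a_check = (np.log((d0**2 - 1.0) * (d**2 + 1.0)
                  / ((d0**2 + 1.0) * (d**2 - 1.0))) + a0 * d0) / d
```

As expected, $a$ increases and $d$ decreases towards unity, and the numerical trajectory preserves the first integral to high accuracy.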
This second example is illustrated in Figure 2(b) using $a(0)=4$, $d(0)=10/3$,
$Q=2\pi$. In the top panel, the exact solution is denoted by the dashed red
curves. Initially the bubble boundary looks oval in shape on this scale. As
time increases the interface expands, leaving two fjords behind, centred on the
positive and negative $x$-axes. In the bottom panel, both the critical
points (red dots) and logarithmic singularities (black diamonds) are indicated in
the $\zeta$-plane. As just described, even though the critical points move
towards the unit circle, they are further away than the logarithmic
singularities; therefore, the logarithmic singularities have the effect of
shielding the critical points and preventing blow-up. An interesting
observation is that each of the two fjords appears to take the shape of a
classical Saffman-Taylor finger [118]. To see this, note that on the unit
circle near $\zeta=1$ we can derive a local analysis by setting
$\zeta=1+\mathrm{i}\eta$, so that $f\sim a+\log(1-d+\mathrm{i}\eta)$. Taking
real and imaginary parts and then eliminating $\eta$ gives
$x=\mathrm{constant}-\ln(\cos y)$, which is the famous Saffman-Taylor finger
shape with width $\pi$.
The third example is probably the most well-known example of an exact solution
in Hele–Shaw flow. Here suppose the geometry is such that there is a blob of
fluid in the Hele–Shaw cell, occupying a region $\Omega(t)$, surrounded by
inviscid fluid. If the fluid is withdrawn from a point in space (the origin,
say), then the blob boundary contracts and we have the less viscous fluid
displacing the more viscous fluid. The governing equations (14)-(16) apply in
$\Omega(t)$, while the far-field condition (17) is replaced by $\partial p/\partial r\sim Q/2\pi r$ as
$r\rightarrow 0$. The map we have in mind here for this example is the
quadratic map $f=a(t)\zeta+b(t)\zeta^{2}$, where the initial conditions
$a(0)=1$, $b(0)=\epsilon\ll 1$ correspond to an initial blob boundary that is
a perturbed circle [113]. Substituting this map into the Polubarinova–Galin
equation with $\sigma=0$ leads to a coupled system of integrable ODEs with the
exact solution
$a^{2}b=\epsilon,\quad
t=\frac{\pi}{Q}\left(1-a^{2}+2(\epsilon^{2}-b^{2})\right).$
There is one critical point $\zeta^{*}=-a^{3}/2\epsilon$, which is initially
located in the $\zeta$-plane at $-1/2\epsilon$ and moves towards the origin as
time evolves and $a$ decreases. At $t=t^{*}$ this critical point hits the unit
circle, where
$t^{*}=\frac{\pi}{Q}\left(1+2\epsilon^{2}-\frac{3}{2}(2\epsilon)^{2/3}\right),$
causing a cusp of order $3/2$ to form on the blob boundary. This behaviour is
illustrated in Figure 2(c) for $\epsilon=0.1$ and $Q=2\pi$. On the left panel,
the blob boundary is represented by the dashed red curves. Initially, this
curve appears circular, as the perturbation is very small. For intermediate
times, the left portion of the boundary begins contracting faster than the
remainder of the boundary until the cusp forms, corresponding to finite-time
blow-up. On the right panel of Figure 2(c) the location of the critical point in
the $\zeta$-plane is represented (red dots) for the same three times that the
boundary is drawn for in the left panel. Here we see that the critical point
touches the unit circle at the precise time that finite-time blow-up occurs.
Note that for polynomial maps like the one in this example, it has been proven
that a cusp will always form before the interface reaches the sink [61]; other
explicit solutions exist whose boundary evolves to the location of the sink
before or at the same time as cusp formation (see [115], for example). The
time reversibility of the system (14)-(17) in the absence of surface tension
($\sigma=0$) implies that the only initial condition that will lead to the
removal of all fluid is a disc centred on the sink, that is,
$f(\zeta,t)=a(t)\zeta$.
For completeness we include two more examples, providing only the key details.
These examples are for the geometry of flow in a Hele–Shaw channel, which we
fix to be $2\pi$ units wide. For the fourth example, the map takes the form
$f=-\log\zeta+a(t)+b(t)\zeta$, with initial conditions $a(0)=0$,
$b(0)=\epsilon\ll 1$, corresponding to a slightly perturbed flat interface
[72]. This case is analogous to the first and third examples above. The
functions $a$ and $b$ satisfy a coupled system of ODEs with the exact
solution
$a-\ln b=-\ln\epsilon,\quad
t=\frac{\pi}{Q}\left(2a-b^{2}+\epsilon^{2}\right).$
There are critical points at $\zeta^{*}=1/b$ that move towards the origin as
$b$ increases and ultimately intersect the unit circle at
$t^{*}=\pi(2\ln(1/\epsilon)-1+\epsilon^{2})/Q$, at which time a $3/2$ cusp
forms on the interface. This example is illustrated in Figure 2(d), where the
interface profiles are shown in the top panel as red dashed curves. In the
bottom panel the critical points are indicated (red dots). Here $\epsilon=0.1$
and so the critical point is initially at $\zeta^{*}=10$ and ultimately hits
the unit circle at $t=t^{*}$.
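As with the previous examples, the cusp time follows directly from the exact solution; a short Python check (parameters as in Figure 2(d); names ours):

```python
import numpy as np

EPS, Q = 0.1, 2.0 * np.pi

def t_of(a):
    """Exact solution: a - ln b = -ln EPS, so b = EPS * exp(a)."""
    b = EPS * np.exp(a)
    return (np.pi / Q) * (2.0 * a - b**2 + EPS**2)

# The critical point zeta* = 1/b reaches the unit circle when b = 1
a_cusp = np.log(1.0 / EPS)
t_cusp = t_of(a_cusp)   # = pi*(2*ln(1/EPS) - 1 + EPS**2)/Q
```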
Finally, the fifth example is for $f=-\log\zeta+a(t)+\alpha\log(\zeta+d(t))$
with initial conditions $a(0)=0$, $d(0)=1/\epsilon\gg 1$ [72], which is
analogous to the second example above. There is a critical point at
$\zeta^{*}=d/(\alpha-1)$ and a logarithmic singularity at $\zeta_{s}=-d$. We
shall not include the details here, but it is possible to derive a coupled
system of ODEs for $a(t)$ and $d(t)$ that can be solved numerically or reduced
further analytically by dividing one by the other. For $0<\alpha<2$ the
singularity $\zeta_{s}$ is always smaller in magnitude, and therefore closer
to the unit circle, than $\zeta^{*}$. As a result, it turns out that $d$ is a
decreasing function and that $\dot{d}\rightarrow 0^{+}$ as $d\rightarrow
1^{+}$. Therefore, neither $\zeta^{*}$ nor $\zeta_{s}$ intersect the unit
circle but in fact approach it asymptotically as $t\rightarrow\infty$. As
such, there is no cusp formation, and instead the proximity of $\zeta_{s}$ to
the unit circle results in the interface forming a long finger, whose width is
$(2-\alpha)\pi$. In Figure 2(e) we present an example with $\epsilon=0.2$,
$\alpha=1.2$. The interface in the top panel, given by the dashed red curves,
clearly approaches a finger in shape. The bottom panel shows the critical
point (red dots) moving towards the unit circle, but always further away than
the logarithmic singularity (black diamonds).
In each of Figure 2(a)(top panel), (b)(top panel), (c)(left panel), (d)(top
panel) and (e)(top panel), we have included numerical solutions drawn as solid
blue curves that are computed using the same initial conditions but with a
nonzero value of surface tension. While we discuss these (more physically
realistic) solutions at various points later in the paper, it is worth
repeating here that nonzero surface tension is required in the model in order
to relate to the real physics of a Hele-Shaw experiment. The historical
interest in complex singularities and finite-time blow-up is therefore mostly
of a mathematical nature.
## 3 Numerical scheme
### 3.1 The level set method
To numerically solve (10)-(13), following the methodology of Osher and Sethian
[104], we construct a level set function $\phi$ such that the fluid-fluid
interface $\partial\Omega$ is the zero level set of $\phi$ or
$\displaystyle\partial\Omega(t)=\left\\{\boldsymbol{x}\,|\,\phi(\boldsymbol{x},t)=0\right\\}.$
(19)
If the interface has the normal speed $v_{n}$, then we wish to construct a
speed function, $F$, such that $v_{n}=F$ on
$\boldsymbol{x}\in\partial\Omega(t)$, and is continuous over the entire
computational domain. Thus $\phi$ satisfies the level set equation
$\displaystyle\frac{\partial\phi}{\partial
t}+F|\mbox{\boldmath$\nabla$}\phi|=0.$ (20)
We discuss how $F$ is computed in Section 3.2. To solve (20), we approximate
the spatial derivatives using a second order essentially non-oscillatory
scheme (see Osher and Fedkiw [102, chapter 3] and Sethian [120, chapter 6] for
details), and integrate in time using second order total variation diminishing
(TVD) Runge-Kutta (RK), which is performed by taking two forward Euler steps
$\displaystyle\tilde{\phi}^{(n+1)}$ $\displaystyle=\phi^{(n)}-\Delta
tF^{(n)}|\mbox{\boldmath$\nabla$}\phi^{(n)}|,$ (21)
$\displaystyle\phi^{(n+2)}$ $\displaystyle=\tilde{\phi}^{(n+1)}-\Delta
tF^{(n+1)}|\mbox{\boldmath$\nabla$}\tilde{\phi}^{(n+1)}|,$ (22)
and then take an averaging step
$\phi^{(n+1)}=\left(\phi^{(n)}+\phi^{(n+2)}\right)/2$. We note that the
inclusion of the second order curvature term, $\kappa$, in the dynamic
boundary condition (6) would typically require $\Delta t\sim\Delta x^{2}$.
However, we find that for the results presented in this work, the surface
tension parameter is sufficiently small such that we can maintain numerical
stability by choosing $\Delta t=\Delta x/(4\max|F|)$.
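To make the time stepping concrete, the sketch below implements (21)-(22) and the averaging step in one space dimension on a periodic grid. For brevity it uses first-order Godunov upwinding in place of the second-order ENO scheme, and holds the speed $F$ fixed over the step; both are simplifications of the scheme described above.

```python
import numpy as np

def grad_norm_upwind(phi, dx, F):
    """First-order Godunov upwind approximation of |grad phi| (a simplified
    stand-in for the second-order ENO discretisation)."""
    dm = (phi - np.roll(phi, 1)) / dx    # backward difference
    dp = (np.roll(phi, -1) - phi) / dx   # forward difference
    pos = np.sqrt(np.maximum(np.maximum(dm, 0.0)**2, np.minimum(dp, 0.0)**2))
    neg = np.sqrt(np.maximum(np.minimum(dm, 0.0)**2, np.maximum(dp, 0.0)**2))
    return np.where(F > 0.0, pos, neg)

def tvd_rk2_step(phi, F, dx, dt):
    """One TVD-RK2 step: two forward Euler steps, (21)-(22), then averaging."""
    phi1 = phi - dt * F * grad_norm_upwind(phi, dx, F)      # step (21)
    phi2 = phi1 - dt * F * grad_norm_upwind(phi1, dx, F)    # step (22)
    return 0.5 * (phi + phi2)                               # averaging step
```

Applied to a signed distance function with $F=1$, the interface moves outward at unit speed, as expected.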
The level set function $\phi$ is initialised as a signed distance function
satisfying
$\displaystyle\phi=\begin{cases}d&\mbox{if
}\boldsymbol{x}\in\mathbb{R}^{2}\backslash\Omega(t)\\\ -d&\mbox{if
}\boldsymbol{x}\in\Omega(t)\end{cases},$ (23)
where $d$ is the minimum distance between $\boldsymbol{x}$ and
$\partial\Omega$, via the method of crossing times [102, chapter 7]. That is,
we advect $\phi$ in the normal direction to the interface by solving (20) with
$F=1$ and determine the point in time where each value of $\phi$ crosses from
positive to negative. This process is repeated for $F=-1$. To reduce numerical
error, which can result when the gradient of $\phi$ becomes excessively small
or large, we periodically perform re-initialisation in order to keep $\phi$
approximately equal to a signed distance function, which satisfies
$|\mbox{\boldmath$\nabla$}\phi|=1$, over the duration of a simulation. Re-
initialisation is performed by solving
$\displaystyle\frac{\partial\phi}{\partial\tau}+S(\phi)(|\mbox{\boldmath$\nabla$}\phi|-1)=0,$
(24)
where
$\displaystyle S(\phi)=\frac{\phi}{\sqrt{\phi^{2}+\Delta x^{2}}},$ (25)
to steady state. Here $\tau$ is a pseudo time variable where
$\Delta\tau=\Delta x/5$. We find that performing re-initialisation every five
time steps is sufficiently frequent.
While the level set method has successfully been used as a framework for
studying a variety of moving boundary problems, a limitation of the method is
that it can suffer from volume loss or gain in regions where the mesh is
underresolved. In an effort to alleviate this problem, Enright et al. [42]
proposed the particle level set method, which combines the Eulerian based
level set method with a marker particle based Lagrangian approach. We briefly
describe the algorithm here, and refer the reader to Enright et al. [42] for a
more comprehensive description, as well as examples illustrating the
effectiveness of the particle level set method.
The method works by placing massless particles in the regions where
$\phi>0$ and $\phi<0$, i.e. on both sides of the interface; these are referred
to as positive and negative particles, respectively. We denote by $r_{p}$ the
minimum distance between the interface and the particle’s location. The marker
particles are advected according to
$\displaystyle\frac{\textrm{d}\boldsymbol{x}_{p}}{\textrm{d}t}=F\boldsymbol{n},$
(26)
where $\boldsymbol{x}_{p}$ is the location of the particle and
$\boldsymbol{n}=\mbox{\boldmath$\nabla$}\phi/|\mbox{\boldmath$\nabla$}\phi|$
is a unit vector that reduces to the outward facing normal on the interface.
If a particle crosses the interface, this indicates that mass has been lost
(or gained). We mitigate this error by locally rebuilding the interface by
constructing a local level set function from the four adjacent nodes to the
particle defined as
$\phi_{p}(\mbox{\boldmath$x$})=s_{p}(r_{p}-|\mbox{\boldmath$x$}-\mbox{\boldmath$x$}_{p}|)$,
where $s_{p}=1$ and $-1$ for positive and negative particles, respectively.
Using $\phi_{p}$, $\phi$ is corrected using
$\phi=\begin{cases}\phi^{+}&\mbox{if }|\phi^{+}|\leq|\phi^{-}|\\ \phi^{-}&\mbox{if }|\phi^{+}|>|\phi^{-}|\end{cases},$
where $\phi^{+}=\max(\phi_{p},\phi^{+})$ and
$\phi^{-}=\min(\phi_{p},\phi^{-})$. This procedure is performed both when the
level set equation (20) is solved as well as when re-initialisation is
performed.
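The particle correction can be sketched per grid node. This is an illustrative Python sketch (not the authors' code); the function and argument names are ours, and `escaped` is assumed to hold only particles that have crossed the interface near the node.

```python
import numpy as np

def particle_phi(x, x_p, r_p, s_p):
    # Local level set carried by a particle: phi_p(x) = s_p (r_p - |x - x_p|),
    # with s_p = +1 for positive and -1 for negative particles.
    return s_p * (r_p - np.linalg.norm(np.asarray(x) - np.asarray(x_p)))

def correct_phi_at_node(x, phi, escaped):
    # Interface-rebuilding correction at one node:
    # phi+ = max over positive-particle phi_p (initialised with phi),
    # phi- = min over negative-particle phi_p (initialised with phi),
    # then keep whichever has the smaller magnitude.
    phi_plus = phi_minus = phi
    for x_p, r_p, s_p in escaped:
        pp = particle_phi(x, x_p, r_p, s_p)
        if s_p > 0:
            phi_plus = max(phi_plus, pp)
        else:
            phi_minus = min(phi_minus, pp)
    return phi_plus if abs(phi_plus) <= abs(phi_minus) else phi_minus
```

For example, a node that has wrongly acquired a negative value while a positive escaped particle sits nearby is pulled back to the positive side.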
### 3.2 Solving for $F$
From the kinematic boundary condition (12), we have the expression for the
speed function
$\displaystyle
F=-b^{2}\mbox{\boldmath$\nabla$}p\cdot\boldsymbol{n}\qquad\boldsymbol{x}\in\mathbb{R}^{2}\backslash\Omega(t),$
(27)
recalling $F$ is required to solve the level set equation (20) and
$\boldsymbol{n}=\mbox{\boldmath$\nabla$}\phi/|\mbox{\boldmath$\nabla$}\phi|$
reduces to the outward facing normal on the interface. Thus, evaluating (27)
on the interface, we recover $F=v_{n}$ on $\boldsymbol{x}\in\partial\Omega(t)$,
recalling $v_{n}$ is the normal speed of the interface. Further, (27) provides a
continuous expression for $F$ in the viscous fluid region. The derivatives in
(27) are evaluated using central differencing. However, to solve (20) we
require an expression for $F$ over the entire computational domain. It was
proposed by Moroney et al. [96] that the speed function be extended into the
inviscid fluid region by solving the biharmonic equation
$\displaystyle\nabla^{4}F=0\qquad\boldsymbol{x}\in\Omega(t).$ (28)
This ensures that $F=v_{n}$ is satisfied on the interface and gives a
continuous expression for $F$ away from the interface. For the purpose of
discretisation, the sign of $\phi$ is used to determine nodes inside the
interface that need to be included in the biharmonic stencil. This
discretisation results in a symmetric system of linear equations that is
solved using LU decomposition. As such, the location of the interface does not
need to be known explicitly, similar to the level set method itself. This
velocity extension process is a variant of a thin plate spline in two
dimensions [15]. To illustrate the velocity extension process, we consider an
example $F$ defined in the region $\Omega(t)$, shown in Figure 3$(a)$. The red
line represents the fluid-fluid interface $\partial\Omega(t)$. Figure 3$(b)$
shows $F$ after the biharmonic equation (28) is solved. We see that this
velocity extension process gives a differentiable expression for $F$ over the
entire computational domain, which can be used to solve (20).
Figure 3: An illustration of the velocity extension process used to extend $F$
to be defined over the entire computational domain. $(a)$ shows an example
function that is undefined where $\mbox{\boldmath$x$}\in\Omega$. $(b)$ shows
this region being ‘filled-in’ by solving the biharmonic equation (28). This
gives us a differentiable extension of $F$ over the entire domain.
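The velocity extension can be illustrated with a 1D analogue. The paper solves the 2D biharmonic equation (28) via LU decomposition; the sketch below (Python, our own construction) instead solves the discrete 1D equation $F''''=0$ for the unknown nodes, with natural conditions $F''=F'''=0$ at the outer edge as an assumption of our choosing, and moves known nodes to the right-hand side exactly as a stencil-based discretisation would.

```python
import numpy as np

def extend_biharmonic_1d(F_known, k):
    """Extend F, known at nodes i >= k, into nodes i = 0..k-1 by solving the
    discrete biharmonic equation F'''' = 0 (a 1D sketch of eq. (28)).
    Natural conditions F'' = F''' = 0 close the system at the outer edge;
    the grid spacing cancels and is therefore not needed. Requires k >= 4."""
    n = k
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0:3] = [1.0, -2.0, 1.0]        # F''(0) = 0
    A[1, 0:4] = [-1.0, 3.0, -3.0, 1.0]  # F'''(0) = 0
    stencil = [1.0, -4.0, 6.0, -4.0, 1.0]
    for row, i in enumerate(range(2, k)):
        for s, c in zip(range(i - 2, i + 3), stencil):
            if s < n:
                A[row + 2, s] += c
            else:                       # known node: move to right-hand side
                b[row + 2] -= c * F_known[s - n]
    return np.linalg.solve(A, b)
```

With these boundary conditions the discrete solution is linear, so linear known data is continued exactly; the 2D version analogously yields a smooth (thin-plate-spline-like) extension.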
### 3.3 Solving for pressure
To evaluate the speed function $F$, we must first compute the pressure field.
We consider (10)-(13) in polar coordinates with $p=p(r,\theta,t)$ and the
location of the interface is given by $r=s(\theta,t)$. Thus (10) becomes
$\displaystyle\frac{1}{r}\frac{\partial}{\partial r}\left(rb^{3}\frac{\partial
p}{\partial
r}\right)+\frac{1}{r^{2}}\diffp{}{\theta}\left(b^{3}\diffp{p}{\theta}\right)=\frac{\partial
b}{\partial t}\qquad r>s(\theta,t)$
For nodes away from the interface, the derivatives in (3.3) are discretised
using a standard 5-point stencil, illustrated in Figure 4$(a)$. Denoting
$\beta=rb^{3}$, the $r$-derivatives in (3.3) are approximated via
$\displaystyle\frac{1}{r}\frac{\partial}{\partial r}\left(\beta\frac{\partial
p}{\partial r}\right)\to\frac{1}{r_{i,j}\Delta
r}\left(\beta_{i+1/2,j}\frac{p_{i+1,j}-p_{i,j}}{\Delta
r}-\beta_{i-1/2,j}\frac{p_{i,j}-p_{i-1,j}}{\Delta r}\right),$ (29)
where $\beta_{i+1/2,j}=(\beta_{i+1,j}+\beta_{i,j})/2$ and
$\beta_{i-1/2,j}=(\beta_{i-1,j}+\beta_{i,j})/2$. The derivatives in the
$\theta$-direction are discretised in a similar fashion.
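The radial part of the stencil (29) can be written compactly along one radial line. The following Python sketch (vectorised over interior nodes; the function name is ours) is exact for $p=r^{2}$ with $b=1$, for which $(1/r)(r\,p_{r})_{r}=4$.

```python
import numpy as np

def radial_term(p, r, b, dr):
    """Discrete (1/r) d/dr (beta dp/dr) with beta = r b^3, as in eq. (29).
    p, r, b are 1D arrays over the radial nodes; returns values at the
    interior nodes 1..n-2."""
    beta = r * b**3
    bp = 0.5 * (beta[2:] + beta[1:-1])    # beta_{i+1/2,j}
    bm = 0.5 * (beta[:-2] + beta[1:-1])   # beta_{i-1/2,j}
    return (bp * (p[2:] - p[1:-1]) - bm * (p[1:-1] - p[:-2])) / (r[1:-1] * dr**2)
```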
Figure 4: An illustration of how the pressure of the viscous fluid is solved
for using the finite difference method. $(a)$ For nodes away from the
interface, we solve for pressure (3.3) using a standard 5-point finite
difference stencil (29). $(b)$ However, this stencil cannot be used for nodes
adjacent to the interface, as in this case the southern node is not in the
domain $\boldsymbol{x}\in\mathbb{R}^{2}\backslash\Omega(t)$, and thus cannot
be used in our stencil. Instead we impose a ghost node, denoted by the red
dots, on the interface whose location corresponds to the point where $\phi=0$.
The value at this ghost node is determined from the dynamic boundary condition
(11). This leads to the non-standard finite difference stencil (31).
As illustrated in Figure 4$(b)$, special care must be taken when solving for
nodes adjacent to the interface. Suppose that the interface is located at
$r=r_{I}$ with $r_{i-1,j}<r_{I}<r_{i,j}$, where the nodes $r_{i-1,j}$ and
$r_{i,j}$ lie in the inviscid and viscous fluid regions, respectively. When
discretising (3.3), we can no longer incorporate $p_{i-1,j}$ into our finite
difference stencil as it is not in the domain
$\boldsymbol{x}\in\mathbb{R}^{2}\backslash\Omega(t)$. Instead, we define a
ghost node at $r_{I}$ (denoted by the red dots in Figure 4$(b)$) whose value
is $p_{I}$. By noting that $\phi$ is a signed distance function, the distance
between $r_{i,j}$ and $r_{I}$ is computed via
$\displaystyle h=\Delta
r\left|\frac{\phi_{i,j}}{\phi_{i-1,j}-\phi_{i,j}}\right|.$ (30)
As per Chen et al. [25], our finite difference stencil becomes
$\begin{split}\frac{1}{r}\frac{\partial}{\partial r}\left(\beta\frac{\partial
p}{\partial r}\right)&\approx\frac{2}{r_{i,j}(\Delta
r+h)}\left(\beta_{i+1/2,j}\frac{p_{i+1,j}-p_{i,j}}{\Delta
r}-\hat{\beta}_{i-1/2,j}\frac{p_{i,j}}{h}\right)\\ &+\frac{2}{r_{i,j}h(\Delta
r+h)}\hat{\beta}_{i-1/2,j}p_{I}.\end{split}$ (31)
Here $\hat{\beta}_{i-1/2,j}=(\beta_{i,j}+\beta_{I})/2$ where $\beta_{I}$ is
the value of $\beta$ on the interface. When the node and interface are
sufficiently close together such that $h<\Delta r^{2}$, we set
$p_{i,j}=p_{I}$. A similar procedure is applied if the interface lies between
$r_{i,j}<r_{I}<r_{i+1,j}$ and in the azimuthal direction.
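The ghost-node distance (30) and the modified stencil (31) are straightforward to evaluate; the Python sketch below (function and argument names are ours) also confirms that (31) reduces to the standard stencil (29) when $h=\Delta r$ and $p_{i-1,j}=p_{I}$.

```python
def ghost_distance(phi_i, phi_im1, dr):
    # eq. (30): distance from node i to the interface, using that phi is a
    # signed distance function
    return dr * abs(phi_i / (phi_im1 - phi_i))

def ghost_radial_term(p_ip1, p_i, p_I, beta_ip_half, beta_hat, r_i, dr, h):
    # eq. (31): non-standard radial stencil for a node adjacent to the
    # interface, with the ghost value p_I on the interface
    interior = (2.0 / (r_i * (dr + h))) * (
        beta_ip_half * (p_ip1 - p_i) / dr - beta_hat * p_i / h)
    ghost = (2.0 / (r_i * h * (dr + h))) * beta_hat * p_I
    return interior + ghost
```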
The value of $p_{I}$ is computed from the dynamic boundary condition (11). To
determine the value of $p_{I}$, we first compute the curvature of $\phi$ via
$\kappa=\mbox{\boldmath$\nabla$}\cdot\boldsymbol{n}$ over the entire
computational domain, recalling
$\boldsymbol{n}=\mbox{\boldmath$\nabla$}\phi/|\mbox{\boldmath$\nabla$}\phi|$.
The value of $\kappa$ on the interface is computed via linear interpolation
using $\kappa_{i,j}$ and $\kappa_{i-1,j}$ leading to
$\displaystyle
p_{I}=-\sigma\left(\kappa_{i,j}-\frac{h(\kappa_{i,j}-\kappa_{i-1,j})}{\Delta
r}+\frac{2}{b(r_{I},\theta_{j})}\right)-\omega^{2}r_{I}^{2}.$ (32)
For more information about solving both elliptic and parabolic problems in
irregular domains using the finite difference method in conjunction with level
set functions, we refer the reader to Coco and Russo [26], Gibou et al. [53]
(and references therein). Once the finite difference stencil has been formed,
the resulting system of linear equations is solved using LU decomposition.
#### 3.3.1 Far-field boundary condition
To incorporate the far-field boundary condition (13) into our finite
difference stencil, we utilise a Dirichlet-to-Neumann map. This map is
implemented by imposing an artificial circular boundary at $r=R$ such that
$R>s(\theta,t)$. By only considering the region $s(\theta,t)\leq r\leq R$, we
seek a solution to (10) of the form
$\displaystyle\hat{p}(r,\theta,t)=A_{0}-\frac{Q}{2\pi}\log
r+\frac{r^{2}}{4}\frac{\partial b}{\partial
t}+\sum_{n=1}^{\infty}r^{-n}\left(A_{n}\cos n\theta+B_{n}\sin n\theta\right),$
(33)
where $A_{0}$, $A_{n}$, and $B_{n}$ are unknown, and $\hat{p}=b^{3}p$. The
expansion (33) assumes that $b$ is spatially uniform in $r\geq R$ so that
$\hat{p}$ satisfies the appropriate Poisson equation. Considering the value of
pressure on the artificial boundary, suppose that $\hat{p}(R,\theta,t)$ can be
represented as a Fourier series
$\displaystyle\hat{p}(R,\theta,t)=a_{0}+\sum_{n=1}^{\infty}a_{n}\cos
n\theta+b_{n}\sin n\theta,$ (34)
where
$a_{0}=\frac{1}{2\pi}\int_{0}^{2\pi}\hat{p}(R,\theta,t)\hskip
1.49994pt\text{d}\theta,\quad
a_{n}=\frac{1}{\pi}\int_{0}^{2\pi}\hat{p}(R,\theta,t)\cos n\theta\hskip
1.49994pt\text{d}\theta,\quad
b_{n}=\frac{1}{\pi}\int_{0}^{2\pi}\hat{p}(R,\theta,t)\sin n\theta\hskip
1.49994pt\text{d}\theta.$
By equating (34) with (33) evaluated at $r=R$, we find that
$A_{0}=a_{0}+(Q/2\pi)\log R-\dot{b}R^{2}/4$, $A_{n}=R^{n}a_{n}$ and
$B_{n}=R^{n}b_{n}$.
To incorporate our expression for $\hat{p}$ into our finite difference
stencil, we differentiate (33) with respect to $r$ and evaluate it at $r=R$ to
give
$\displaystyle\frac{\partial}{\partial r}\hat{p}(R,\theta_{j})=-\frac{Q}{2\pi
R}+\frac{R}{2}\frac{\partial b}{\partial
t}-\sum_{n=1}^{\infty}\frac{n}{R}\left(a_{n}\cos n\theta_{j}+b_{n}\sin
n\theta_{j}\right).$ (35)
By approximating the integrals in our expressions for $a_{n}$ and $b_{n}$ as
$\displaystyle
a_{n}\approx\frac{\Delta\theta}{\pi}\sum_{k=1}^{m}\hat{p}(R,\theta_{k})\cos
n\theta_{k}\quad\textrm{and}\quad
b_{n}\approx\frac{\Delta\theta}{\pi}\sum_{k=1}^{m}\hat{p}(R,\theta_{k})\sin
n\theta_{k},$ (36)
then
$\displaystyle\frac{\partial}{\partial
r}\hat{p}(R,\theta_{j})\approx-\frac{Q}{2\pi R}+\frac{R}{2}\frac{\partial
b}{\partial
t}-\frac{\Delta\theta}{R\pi}\sum_{k=1}^{m}v_{jk}\hat{p}(R,\theta_{k}),$ (37)
where
$\displaystyle v_{jk}=\sum_{n=1}^{\infty}n\cos(n(\theta_{k}-\theta_{j})).$
(38)
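In practice the infinite sum in (38) must be truncated; the Python sketch below (the truncation level $N$ and the function names are our assumptions) builds the weights $v_{jk}$ and applies the map (37) with $Q=\partial b/\partial t=0$. For a single Fourier mode resolved by the grid ($N<m/2$), discrete orthogonality makes the map reproduce the exact radial derivative.

```python
import numpy as np

def dtn_matrix(m, N):
    """Truncated Dirichlet-to-Neumann weights v_jk of eq. (38), keeping the
    first N Fourier modes (exact on the grid for modes with n <= N < m/2)."""
    theta = 2 * np.pi * np.arange(m) / m
    dth = theta[:, None] - theta[None, :]   # cos is even, so the sign is moot
    n = np.arange(1, N + 1)
    return np.tensordot(n, np.cos(n[:, None, None] * dth[None, :, :]), axes=1)

def dtn_derivative(p_hat_R, R, N):
    """Approximate d p_hat / dr at r = R via eq. (37), with Q = db/dt = 0."""
    m = len(p_hat_R)
    dtheta = 2 * np.pi / m
    return -(dtheta / (R * np.pi)) * dtn_matrix(m, N).dot(p_hat_R)
```

For example, $\hat{p}(R,\theta)=3\cos\theta$ decays like $r^{-1}$, so the map returns $-(3/R)\cos\theta$.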
Defining $I$ as the outermost index at which $r=R$, then our expression for
$\partial p/\partial r$ is incorporated into our finite difference stencil,
$\frac{1}{r}\frac{\partial}{\partial r}\left(\beta\frac{\partial p}{\partial
r}\right)\to\frac{2}{R\Delta
r}\left\{-\beta_{I-1/2,j}\frac{p_{I,j}-p_{I-1,j}}{\Delta
r}+R\left[-\frac{Q}{2\pi R}+\frac{R}{2}\frac{\partial b}{\partial
t}-b^{3}\frac{\Delta\theta}{R\pi}\sum_{k=1}^{m}v_{jk}p_{I,k}\right]\right\},$
(39)
recalling $\beta=rb^{3}$.
As an aside, we note this procedure can be adapted to model a Dirichlet
boundary condition of the form $p\sim p_{\infty}$ as $r\to\infty$ where
$p_{\infty}$ is prescribed. This type of boundary condition would be more
appropriate for the model of a Stefan problem [98], where now $p$ would
represent temperature that is prescribed in the far-field. Assuming that $b$
is constant, then (33) becomes
$\displaystyle p=p_{\infty}+A_{0}\ln
r+\sum_{n=1}^{\infty}r^{-n}\left(A_{n}\cos n\theta+B_{n}\sin n\theta\right).$
(40)
By following the same steps outlined above, we have
$\displaystyle\frac{\partial}{\partial r}p(R,\theta_{j})$
$\displaystyle=\frac{a_{0}-p_{\infty}}{R\ln
R}-\sum_{n=1}^{\infty}\frac{n}{R}\left(a_{n}\cos n\theta_{j}+b_{n}\sin
n\theta_{j}\right).$ (41)
This expression is then incorporated into our finite difference stencil as per
usual.
### 3.4 General algorithm
We summarise our numerical algorithm as follows:
* Step 1
For a given initial interface $s(\theta,0)$, initialise $\phi$ as a signed
distance function satisfying (23) using the method of crossing times.
* Step 2
Place marker particles around the interface, noting which side of the
interface they are on, as well as their minimum distance from the interface.
* Step 3
Solve for pressure in the domain
$\boldsymbol{x}\in\mathbb{R}^{2}\backslash\Omega$ using the procedure
described in Section 3.3.
* Step 4
Compute $F$ according to (27), and then extend $F$ into the region
$\boldsymbol{x}\in\Omega$ by solving the biharmonic equation (28).
* Step 5
Using $F$, update both $\phi$ and the marker particles by solving (20) and
(26), respectively.
* Step 6
Correct $\phi$ (if necessary) using the marker particles.
* Step 7
Re-initialise the level set function by solving (24), and then correct $\phi$
using the marker particles.
* Step 8
Repeat steps 2-7 until $t=t_{f}$.
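The control flow of Steps 1-8 can be sketched as a single driver loop. This Python skeleton is purely illustrative: every solver callable is a hypothetical placeholder for the corresponding operation described above, and the names are ours, not the authors'.

```python
def run_simulation(phi, particles, t_f, dt, solvers, reinit_every=5):
    """Steps 2-8 of the algorithm; `solvers` maps step names to callables
    standing in for the pressure solve, speed evaluation, biharmonic
    extension, advection, particle motion, correction, and re-initialisation."""
    t, step = 0.0, 0
    while t < t_f:
        p = solvers["pressure"](phi)                       # Step 3
        F = solvers["extend"](solvers["speed"](p, phi))    # Step 4
        phi = solvers["advect"](phi, F, dt)                # Step 5 (level set)
        particles = solvers["move_particles"](particles, F, dt)
        phi = solvers["correct"](phi, particles)           # Step 6
        step += 1
        if step % reinit_every == 0:                       # Step 7
            phi = solvers["correct"](solvers["reinit"](phi), particles)
        t += dt
    return phi, particles
```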
## 4 Numerical results
In this section, we present a selection of results demonstrating the
capabilities of the numerical scheme presented in Section 3. We show how our
framework can be modified to solve for a wide range of different
configurations of the Hele–Shaw cell, and provide examples illustrating that
simulations are capable of producing solutions consistent with previous
experimental and numerical results.
### 4.1 Expanding bubble problem
We first consider the standard Hele–Shaw problem in which the inviscid bubble
is injected into the viscous fluid while the plates are parallel and
stationary such that $b=1$ and $\omega=0$.
#### 4.1.1 Numerical validation
As a preliminary test for our scheme, we demonstrate that numerical solutions
converge for a sufficiently refined grid. To do so, we perform simulations
with the initial condition
$\displaystyle s(\theta,0)=1+\varepsilon\cos m\theta,$ (42)
where $\varepsilon=0.1$, $m=6$, on the computational domain $0\leq r\leq 7.5$
and $0\leq\theta<2\pi$, and are performed using an increasingly refined mesh.
These simulations, shown in Figure 5, indicate that the interfacial
profiles are converging as the mesh is refined, and that grid independence is
achieved, at this scale, using $750\times 628$ equally spaced nodes. Further,
our solutions appear to maintain six fold symmetry over the duration of the
simulation.
Figure 5: Convergence test of numerical scheme for the evolution of a bubble
with initial condition (42), where solutions are computed using $(a)$
$350\times 293$, $(b)$ $550\times 461$, $(c)$ $750\times 628$, and $(d)$
$850\times 712$ equally spaced nodes. Additionally, $Q=1$, $\sigma=5\times
10^{-4}$, $\omega=0$, and $t_{f}=100$. Simulations are performed on the domain
$0\leq r\leq 7.5$ and $0\leq\theta<2\pi$. Solutions are plotted in time
intervals of $t=10$.
We also demonstrate that our use of the Dirichlet-to-Neumann map, described in
Section 3.3.1, results in the bubble’s volume changing at rate $Q$. To do so,
we consider three different injection rates. The first is the constant
injection rate $Q=1$, the second is the sinusoidal injection rate,
$\displaystyle Q=1+0.2\sin(4\pi t/t_{f}),$ (43)
and the third is the piecewise rate
$\displaystyle Q=\left\{\begin{array}{ll}0.8&\quad t\leq t_{f}/2\\
1.2&\quad t>t_{f}/2\end{array}\right.$ (46)
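The three injection-rate profiles are simple functions of time; a minimal Python sketch (function names are ours) is:

```python
import math

def q_constant(t, t_f=100.0):
    return 1.0

def q_sinusoidal(t, t_f=100.0):
    # eq. (43)
    return 1.0 + 0.2 * math.sin(4 * math.pi * t / t_f)

def q_piecewise(t, t_f=100.0):
    # eq. (46)
    return 0.8 if t <= t_f / 2 else 1.2
```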
We find that the rate of change of volume computed from the numerical
simulations compares well with the corresponding exact rate of expansion,
shown in Figure 6$(a)$. The relative error, shown in Figure 6$(b)$, suggests
that over the duration of a simulation, we experience mass loss of only
approximately $0.1\%$. This result suggests that the Dirichlet-to-Neumann map
correctly ensures that the bubble expands at the correct rate for both
constant and time-dependent $Q$.
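The diagnostic behind this check requires the bubble volume; with $b=1$ a simple estimate from the level set is cell counting over the region $\phi<0$. The Python sketch below is our own (the paper does not specify how the volume was computed), and its accuracy is first order in the grid spacing.

```python
import numpy as np

def bubble_area(phi, dx, dy):
    """Estimate the area of the inviscid region (phi < 0) by cell counting;
    a simple volume diagnostic for the mass-conservation check."""
    return dx * dy * np.count_nonzero(phi < 0)
```

Tracking this quantity at successive times and differencing gives the numerical $\dot{V}$ compared against $Q$.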
Figure 6: $(a)$ The rate of change of volume, $\dot{V}$, of numerical solution
with constant (red), periodic (blue) and piecewise (yellow) injection rates.
Dotted black lines denote the corresponding exact rate of change of volume.
$(b)$ Corresponding relative error. Simulations are performed on the domain
$0\leq r\leq 7.5$ and $0\leq\theta<2\pi$ using $750\times 628$ equally spaced
nodes with the initial condition (42). The surface tension parameter is
$\sigma=5\times 10^{-4}$ and final time of simulations is $t=100$.
#### 4.1.2 Effect of surface tension
Surface tension, modelled via the inclusion of the signed curvature term in
the dynamic boundary condition (11), acts to regularise the Hele–Shaw model.
As shown in Section 2.2, in the absence of surface tension, exact solutions
exist that exhibit very different behaviour (finite-time cusp formation or
finger formation) despite initial conditions that are arbitrarily close; hence
the zero-surface-tension problem is ill-posed. Adding surface tension removes
the possibility of cusp formation; in Figure 2, we have included solutions for
small but nonzero surface tension, computed using the method described in
Section 3, for each of the cases described in Section 2.2. With nonzero
surface tension, the interface remains smooth and solutions exist for all
time, or, in the case of the finite blob (Figure 2(c)), until the interface
intersects the point at which fluid is being withdrawn.
Linear stability analysis [107] indicates that increasing the surface tension
parameter $\sigma$ acts to make the interface less unstable, and nonlinear
numerical simulations and experiments [24, 33, 67] show that increasing the
injection rate, which is mathematically equivalent to decreasing the
dimensionless surface tension $\sigma$, results in an increase in the number
of fingers that develop. We perform numerical simulations for $Q=1$ with
values of surface tension varying over several orders of magnitude with the
initial condition
$\displaystyle s(\theta,0)=1+\varepsilon\left(\cos m\theta+\sin
n\theta\right),$ (47)
where $\varepsilon=0.1$, $m=3$, and $n=2$. These numerical simulations, shown
in Figure 7, are able to reproduce the key morphological features of the
Saffman–Taylor instability. For each value of $\sigma$ considered, the
interface is unstable, and the sinusoidal perturbations in the initial
condition (47) grow and evolve into viscous fingers. As $\sigma$ is decreased,
the interface becomes more unstable and the number of fingers that develop
over the duration of a simulation increases due to tip-splitting occurring
(see $(1)$ in Figure 7$(b)$ for example). Additionally, our simulations
are able to produce so called ‘shielding’ behaviour, where neighbouring
fingers can block one another off, which in turn results in fingers retracting
(denoted by $(2)$ in Figure 7$(c)$). This behaviour is known to occur
experimentally (see Figure 1 in [107] for example). Finally, when surface
tension is sufficiently small, ‘feathering’ can occur, where a finger does not
strictly tip-split but instead develops ripples along one of its sides as it
expands (denoted by $(3)$ in Figure 7$(d)$). Again this behaviour has
been observed in experiments (see Figure 3 in [24]).
Figure 7: The numerical solution to (10)-(13) with $Q=1$ for values of surface
tension parameter $\sigma$ $(a)$ $1.5\times 10^{-3}$, $(b)$ $5\times 10^{-4}$,
$(c)$ $1.75\times 10^{-4}$, and $(d)$ $6.25\times 10^{-5}$ with initial
condition (47). Our numerical scheme captures different morphological
behaviour including $(1)$ tip-splitting, $(2)$ shielding, and $(3)$
feathering, all of which have been observed experimentally. Simulations are
performed on the domain $0\leq r\leq 10$ and $0\leq\theta<2\pi$ using
$1000\times 628$ equally spaced nodes. Solutions are plotted in time intervals
of $t=5$ up to $t_{f}=55$.
### 4.2 Tapered plates
One of the more popular modifications to the Hele–Shaw cell (particularly in
recent years) is to consider that the plates of the Hele–Shaw cell are no
longer parallel but instead linearly tapered (either converging or diverging)
in the direction of the flow. The concept was first introduced in Zhao et al.
[140], who considered tapered plates in a channel geometry, discussed further
in Section 4.4. Imposing tapered plates in radial geometry has been studied
using linear stability analysis [2], weakly-nonlinear stability analysis [8],
experimentally [2, 14], and by numerical simulations [74]. Results indicate
that tapering the gap between the plates either such that they converge or
diverge can have a stabilising or de-stabilising effect on the interface
depending on the injection rate. The numerical scheme presented in Section 3
was used in Morrow et al. [99] to study how tapering the plates while
injecting at either a constant or time-dependent rate can be used to reduce
the development of viscous fingering patterns.
Here we perform numerical simulations where the gap between the plates is
linearly tapered according to
$\displaystyle b(r)=\begin{cases}b_{\infty}+\alpha(r-r_{B})&\text{if }r\leq
r_{B},\\\ b_{\infty}&\text{if }r>r_{B},\end{cases}$ (48)
where $\alpha$ is the gradient of the taper ($\alpha=0$ being the unmodified
Hele–Shaw cell). Experiments were recently performed by Bongrand and Tsai
[14], who considered a Hele–Shaw cell whose plate gap is given by (48) with
$\alpha>0$, for different injection rates. Bongrand and Tsai showed that if the
injection rate is sufficiently small, the interface is completely stabilised
over the duration of the experiment. In our nondimensionalisation, for which
the time-scale is set by the injection rate, a smaller (dimensional) injection
rate corresponds to a larger (dimensionless) surface tension value $\sigma$.
To confirm that our numerical scheme produces solutions consistent with these
experiments, we perform simulations with two different values of $\sigma$,
shown in Figure 8, to model two different (dimensional) injection rates.
The initial condition of these simulations is (47) where $0\leq\theta<2\pi$,
$\varepsilon=5\times 10^{-3}$, $m=5$, and $n=4$. For a larger $\sigma$
(Figure 8$(a)$), we indeed find that the interface is stabilised and
remains circular over the duration of the simulation. For a smaller $\sigma$
(Figure 8$(b)$), we find that the interface is unstable; in particular,
these fingers appear distinct from traditional Saffman–Taylor fingers (see
Figure 7 for example). This behaviour is consistent with the results of
Bongrand and Tsai [14], who described the interface as becoming ‘wavy’ as it
expanded. These results suggest our numerical simulations are producing the
correct behaviour when the plates are of the form of (48).
Figure 8: Numerical simulations where the gap between the plates is linearly
tapered according to (48) with $Q=1$, $b_{\infty}=1/15$, $R_{0}=8/3$,
$r_{B}=7$, and $\alpha=2/15$, with surface tension $\sigma$ $(a)$ 6 and $(b)$
1. Simulations are performed on the domain $0\leq r\leq 7.5$ and
$0\leq\theta<2\pi$ using $750\times 628$ equally spaced nodes. Solutions are
plotted in time intervals of 5.6 up to $t_{f}=44.8$.
### 4.3 Blob problem
In this subsection, we now consider the complementary problem to (10)-(13),
for which a blob of viscous fluid now occupies the region $\Omega(t)$ and the
inviscid fluid is in $\mathbb{R}^{2}\backslash\Omega(t)$. This problem is
typically studied by considering the withdrawal of the viscous fluid, which in
turn causes fingers to develop inward (as in Figure 2(c), for example).
However, popular modifications, including considering the gap between the
plates to be time-dependent or rotating the entire Hele–Shaw cell, have also
received interest. In this subsection, we explain how the numerical scheme
presented in Section 3 is modified to solve for these variations, and show
that our simulations are consistent with previous experimental and numerical
results.
For the case in which the inviscid bubble is injected into the viscous fluid
(Sections 4.1-4.2, say), the velocity of the fluid is driven by a sink term in
the far-field given by (13). For the blob problem considered here, the
withdrawal of the viscous fluid is incorporated into the model via a sink at
the origin. To include this sink into our numerical model, we follow Hou et
al. [67] and introduce a smoothed Dirac delta function in (10) such that
$\displaystyle\mbox{\boldmath$\nabla$}\cdot\left(b^{3}\mbox{\boldmath$\nabla$}p\right)=\frac{\partial
b}{\partial t}+S,$ (49)
where
$\displaystyle
S=\begin{cases}\dfrac{Q}{b\bar{r}_{0}^{2}}\left(1+\cos\dfrac{\pi
r}{\bar{r}_{0}}\right)&\text{if }r\leq\bar{r}_{0},\\\ 0&\text{if
}r>\bar{r}_{0}.\end{cases}$ (50)
As shown by Hou et al. [67], this choice of source term ensures the rate of
change of volume of the viscous fluid is $Q$. For all results in this
subsection, we use $\bar{r}_{0}=0.05$. We note that it is straightforward to
extend (50) to include multiple sink/source points.
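Evaluating the smoothed sink (50) is straightforward; the Python sketch below (function name is ours) also makes explicit a useful property of this choice: $S$ is continuous, vanishing at $r=\bar{r}_{0}$, so the point sink never enters the stencil abruptly.

```python
import numpy as np

def sink_term(r, Q, b, r0=0.05):
    """Smoothed sink S of eq. (50): supported on r <= r0 and continuous,
    since 1 + cos(pi r / r0) vanishes at r = r0."""
    S = (Q / (b * r0**2)) * (1.0 + np.cos(np.pi * r / r0))
    return np.where(r <= r0, S, 0.0)
```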
When solving the governing equation for pressure (10) for the case where
an inviscid fluid is injected into a viscous one (Sections 4.1-4.2), we
consider the model in polar coordinates, as it is simpler to incorporate the
far-field boundary condition (13) via the Dirichlet-to-Neumann map (Section
3.3.1) on a circle. However, when the inviscid region surrounds the viscous
fluid, we no longer have a far-field boundary condition and, as such, it is
more convenient to solve for pressure in Cartesian coordinates, although of
course either coordinate system could be used. The discretisation of the
spatial derivatives in (49) is performed as described in Section 3.3. That
is, we use central finite differences for nodes away from the interface, and
incorporate a ghost value of $p$ for nodes adjacent to the interface. We
refer to Refs Chen et al. [25], Gibou et al. [53, 54] for more details on
implementing this stencil in Cartesian coordinates.
In a similar way to that described in Section 3.2, once we have computed $F$
via (27), we extend it into the region
$\boldsymbol{x}\in\mathbb{R}^{2}\backslash\Omega(t)$ by solving the biharmonic
equation (28). When solving (10)-(13), the speed function was known outside
the interface and was extended in. We now have an expression for $F$ inside
the interface that needs to be extended outward. This means we require
boundary conditions on each of the four computational boundaries. When forming
our biharmonic stencil, we apply homogeneous Neumann boundary conditions on
each of the boundaries. We illustrate this process in Figure 9, which shows an
example $F$ before (Figure 9$(a)$) and after (Figure 9$(b)$) the velocity
extension process. Again we see that by solving the biharmonic equation we
obtain a smooth expression for $F$ over the entire computational domain.
Figure 9: An illustration of the velocity extension process where the viscous
fluid is surrounded by the inviscid bubble. As discussed in Section 3.2, this
is done by solving the biharmonic equation in the region where
$\boldsymbol{x}\in\mathbb{R}^{2}\backslash\Omega(t)$, where now we apply
homogeneous Neumann boundary conditions on the edge of the computational
domain.
#### 4.3.1 Withdrawal of viscous fluid
We consider the case where the viscous fluid is withdrawn such that as the
interface contracts, viscous fingers form inward. As described in Section 2.2,
exact solutions in the absence of surface tension ($\sigma=0$) are ill-posed,
and unphysical cusps can form before the interface reaches the sink (the point
at which liquid is withdrawn). Numerical simulations performed using the
boundary integral method have investigated the regularising effects of surface
tension preventing these cusps from forming [20, 75]. Experimental results
[107, 130] show that the fingers that form exhibit morphological features
distinct from traditional Saffman–Taylor fingers, in that these fingers do not
appear to undergo tip-splitting but instead appear to be in competition to
‘race’ toward the sink.
As well as the comparison with the zero surface tension solution made in
Figure 2(c), we perform two further different simulations for this
configuration. The first is with an initially circular interface centred at
$(0,-0.1)$ shown in Figure 10$(a)$. This simulation shows that as the
interface contracts, it becomes non-convex until a single finger develops that
tends towards the origin. This behaviour compares well with previous numerical
simulations performed using the boundary integral method [75]. For the second
simulation, shown in Figure 10$(b)$, we consider a perturbed circle centred at
the origin of the form
$\displaystyle s(\theta,0)=1+\varepsilon\left(\cos 3\theta+\sin 7\theta+\cos
15\theta+\sin 25\theta\right),$ (51)
where $\varepsilon=5\times 10^{-3}$. We find that the interface initially
develops numerous short fingers. These fingers do not appear to exhibit the
same morphological features as the case in which the inviscid bubble is
injected, such as tip-splitting and feathering (see Figure 7 for example),
but instead the number of fingers remains constant. Due to the pressure
differential between the sink and the boundary of the bubble, the velocity of
one of the fingers rapidly increases, and the simulation is stopped when this
finger reaches the origin. We note that this behaviour compares well with
experimental results (see Figure 15 in [130] for example), as well as
numerical simulations in [23].
Figure 10: Numerical simulation where the viscous fluid is withdrawn at a
point located at the origin where $Q=1$. For $(a)$, initial condition is a
circle of radius unity centred at $(0.1,0)$ where $\sigma=8\times 10^{-4}$ and
$t_{f}=1.88$. For $(b)$, initial condition is (51) where $\sigma=1.6\times
10^{-6}$ and $t_{f}=1.45$. Black dot denotes region where $r\leq 0.05$.
Simulations are performed on the domain $-1.15\leq x\leq 1.15$ and $-1.15\leq
y\leq 1.15$ using $400\times 400$ equally spaced nodes.
#### 4.3.2 Lifting plates
A popular modification to the blob problem is to consider the case where the
upper plate is uniformly lifted in time such that $b\to b(t)$. The volume of
viscous fluid remains constant ($Q=0$), so that as the plates are separated,
viscous fingers develop inward. For this problem, the Hele–Shaw
approximation itself can only remain valid for as long as the gap $b(t)$ is
sufficiently small. For example, in dimensional terms, we must be very careful
about using the model when the gap width is of the same order as the important
length scales in the lateral direction.
For this lifting plates configuration, the governing equation for pressure
(49) becomes Poisson’s equation. However, the pressure can be transformed via
$P=p-\dot{b}|\boldsymbol{x}|^{2}/(4b^{3})$, essentially moving the
non-homogeneous term to the dynamic boundary condition (11). This transformation
reduces (10) to Laplace’s equation, meaning this configuration can also be
solved numerically with the boundary integral method [141, 142].
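The reduction to Laplace's equation can be checked directly. Assuming $b=b(t)$ is spatially uniform, (49) with $S=0$ gives $\nabla^{2}p=\dot{b}/b^{3}$, and since $\nabla^{2}|\boldsymbol{x}|^{2}=4$ in two dimensions,

$\displaystyle\nabla^{2}P=\nabla^{2}p-\frac{\dot{b}}{4b^{3}}\nabla^{2}|\boldsymbol{x}|^{2}=\frac{\dot{b}}{b^{3}}-\frac{\dot{b}}{4b^{3}}\cdot 4=0.$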
The lifting plate problem was first considered mathematically by Shelley et
al. [122]. In the absence of surface tension ($\sigma=0$), Shelley et al.
[122] argued that the generic behaviour is that a cusp will develop in a
finite time. When surface tension is included, numerical simulations suggested
a relationship between the number of fingers that develop and the surface
tension parameter. In particular, it was shown that the number of fingers is a
monotonically decreasing function of time. This behaviour was later shown to
be consistent with experimental results [83, 100]. As a point of comparison,
for the traditional Hele–Shaw configuration, discussed in Section 4.1, the
number of fingers typically increases with time due to tip-splitting. We note
that the case in which the inviscid bubble is injected into the viscous fluid
while the plates are separated has also received attention [99, 133, 143].
We perform a simulation using the linearly increasing gap between the plates
$b=1+t$ for different values of $\sigma$. Note that this time-dependence with
$\dot{b}=1$ effectively chooses the appropriate time-scale $T$ in (9). When
$\sigma=10^{-4}$ (row one of Figure 11), we find that the interface
quickly destabilises and approximately 15 fingers develop by $t=1$. As time
increases further, neighbouring fingers begin to merge with each other and the
overall number of fingers significantly decreases, and we find only around
five fingers remain by the conclusion of the simulation. For the lower value
of surface tension $\sigma=5\times 10^{-5}$ (row two of Figure 11), we
find the number of fingers that initially develop has increased compared to
when $\sigma=10^{-4}$, about 25 at $t=1$. However as the interface contracts,
again fingers begin to merge and only 8 fingers remain at $t=4$. We perform
these simulations over a longer time period (not shown), which reveals that
the interface will become circular when the gap between the plates is
sufficiently large. This behaviour is consistent both with previous
experimental and numerical results [83, 100, 122].
Figure 11: Time evolution of viscous fluid where the plates are separated
according to $b=1+t$ with initial condition (51), for $\sigma=10^{-4}$ (row
one) and $\sigma=5\times 10^{-5}$ (row two). Solutions are plotted at times
(left to right) $t=0$, 1, 1.8, 2.8, and 4. Simulations are performed on the
domain $-1.1\leq x\leq 1.1$ and $-1.1\leq y\leq 1.1$ using $400\times 400$
equally spaced nodes.
#### 4.3.3 Rotating plates
While Saffman–Taylor fingers traditionally form due to the
injection/withdrawal of one immiscible fluid into another, it is known that
these fingers can also be triggered by body forces. The two most commonly
studied body forces are gravity and centrifugal forces. For the latter, when
the entire Hele–Shaw cell is rotated, this rotation results in the dense
viscous fluid being propelled outward, which in turn leads to finger formation
[119]. Experimental [17] and numerical [4, 46, 105] studies reveal that the
interface patterns are distinct from the traditional Saffman–Taylor
instability. That is, fingers appear more ‘stretched-out’ and generally do not
undergo tip-splitting. We note that our model (10)-(49) ignores the effect of
Coriolis forces; however, several studies have investigated their effect on
interfacial dynamics [4, 119, 137]. Further, we considered the case where the
inviscid bubble is injected into the viscous fluid while the plates are
rotated in Morrow et al. [99], where we showed that the angular velocity has a
stabilising effect on the interface.
The incorporation of the centrifugal term is straightforward. While body
forces appear in the governing equation for the velocity of the viscous fluid
(3), scaling pressure means that the angular velocity term can be moved to the
dynamic boundary condition (11) (this can be done for any conservative body
force $\boldsymbol{f}$ satisfying
$\mbox{\boldmath$\nabla$}\times\boldsymbol{f}=\boldsymbol{0}$). The rotating
Hele–Shaw cell has previously been studied numerically using boundary integral
[119] and diffusive interface [23, 105] techniques. We perform simulations
where the volume of viscous fluid between the plates is constant ($Q=0$) and
the Hele–Shaw plates are rotated with $\omega=1$ for different values of
$\sigma$, shown in Figure 12. Note that this choice of $\omega$
effectively fixes the appropriate time-scale $T$ in (9).
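The pressure rescaling can be sketched as follows (in assumed dimensional notation, with $\mu$ the viscosity, $\rho$ the density, and $\kappa$ the interface curvature; the article's own nondimensionalisation and sign conventions may differ). Because the centrifugal force is conservative, it can be absorbed into a reduced pressure:

```latex
% Darcy flow with a conservative centrifugal body force:
\mathbf{u} = -\frac{b^{2}}{12\mu}\left(\nabla p - \boldsymbol{f}\right),
\qquad
\boldsymbol{f} = \rho\omega^{2} r\,\hat{\boldsymbol{r}}
             = \nabla\!\left(\tfrac{1}{2}\rho\omega^{2} r^{2}\right).
% Defining the reduced pressure \tilde{p} = p - (1/2)\rho\omega^{2}r^{2}:
\mathbf{u} = -\frac{b^{2}}{12\mu}\nabla\tilde{p},
\qquad
\tilde{p} = \sigma\kappa - \tfrac{1}{2}\rho\omega^{2} r^{2}
\quad\text{on the interface}.
```

The field equation for $\tilde{p}$ is then unchanged, and the rotation enters only through the modified dynamic boundary condition, as stated above.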
For each value of $\sigma$ considered in Figure 12, we find that the
interface is unstable, and the fingers that develop are distinct from
traditional Saffman–Taylor fingers. In particular, the fingers that develop do
not appear to tip-split but instead remain intact. Additionally, the number
of fingers that develop increases as the surface tension parameter is
decreased: when $\sigma=10^{-2}$ (Figure 12$(a)$), 7 fingers form, while for
$\sigma=10^{-3}$ (Figure 12$(d)$), approximately 21 fingers form. These
results are consistent with experimental results [17] and numerical
simulations [4, 105].
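Finger counts like those quoted above can be extracted from a simulation automatically. One possible diagnostic (our illustration, not the authors' method) treats the interface as a perturbed circle $r(\theta)$ and reads off the dominant azimuthal Fourier mode of the radius perturbation:

```python
import numpy as np

def count_fingers(theta: np.ndarray, r: np.ndarray) -> int:
    """Estimate the number of fingers on a star-shaped interface r(theta)
    as the dominant azimuthal Fourier mode of the radius perturbation.
    A rough diagnostic only; it assumes a single dominant mode."""
    perturbation = r - r.mean()
    amplitudes = np.abs(np.fft.rfft(perturbation))
    amplitudes[0] = 0.0  # ignore the mean (mode 0)
    return int(np.argmax(amplitudes))

# Synthetic interface with a 7-fold perturbation, as in Figure 12(a):
theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
r = 1.0 + 0.05 * np.cos(7 * theta)
print(count_fingers(theta, r))  # -> 7
```

For strongly nonlinear interfaces with merging fingers (as in Figure 11), counting local maxima of $r(\theta)$ may be more robust than a single Fourier mode.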
Figure 12: Numerical simulation of a rotating Hele–Shaw cell with $\omega=1$,
$Q=0$, and $\sigma$ $(a)$ $10^{-2}$, $(b)$ $5\times 10^{-3}$, $(c)$ $2.5\times
10^{-3}$, and $(d)$ $10^{-3}$. Corresponding final time of simulations is
$t=0.61$, $0.55$, $0.425$, and $0.36$. Initial condition for each simulation
is (51). Simulations are performed on the domain $-2\leq x\leq 2$ and $-2\leq
y\leq 2$ using $500\times 500$ equally spaced nodes.
### 4.4 Channel geometry
In subsections 4.1-4.3, we considered Hele–Shaw flow in radial geometry, where
the bubble-fluid interface is completely immersed in an infinite body of
viscous or inviscid fluid. In this subsection, we focus on another well-studied
version of the Hele–Shaw cell: channel (or rectangular) geometry,
where the shape of the cell is a narrow rectangle of infinite length and width
$L$. As discussed in subsection 2.2, for the zero-surface-tension case, exact
solutions are known to exist, which may involve a type of blow-up in finite
time with a cusp forming on the boundary (as in Figure 2(d)).
This channel problem dates back to the work of Saffman and Taylor [118], who
showed that when an inviscid bubble is injected into the channel filled with a
viscous fluid, typically a single finger develops that propagates through the
channel (see the numerical solution in Figure 2(e)). Since the work of Saffman
and Taylor, extensive research has been carried out determining how the
parameters of the model influence the width (relative to $L$) and speed of
this finger. In particular, for a fixed injection rate, as the surface tension
parameter is increased, it is established that the width and speed of the
finger increase and decrease, respectively. We refer to Homsy [63] and
Saffman [117] (and references therein) for a comprehensive overview of the
problem. In this subsection (and subsection 2.2), we restrict ourselves to the
classic configuration where $b$ is constant, but we note that similar to the
radial problem, our scheme can easily be used to study non-standard cases,
such as those in the papers [2, 49, 131, 140].
As with the blob problem discussed in Section 4.3, it is more convenient to
consider this problem in Cartesian coordinates such that $p\to p(x,y,t)$ and
the interface is given by $x=f(y,t)$. Similar to the radial case (see (17),
for example), the velocity of the fluid is driven by the sink term in the far-
field
$\displaystyle b^{3}\frac{\partial p}{\partial
x}\sim\frac{Q}{L}\qquad\textnormal{as}\quad x\to\pm\infty.$ (52)
Equation (52) is incorporated into our finite difference stencil using a
Dirichlet-to-Neumann map. We do not provide full details, but note the
procedure is similar to that described in Section 3.3.1, where we impose an
artificial boundary at $x=X$, and seek a solution to (10) of the form
$\displaystyle\hat{p}(x,y,t)=\frac{Q}{L}x+\sum_{n=0}^{\infty}A_{n}\textrm{e}^{-\lambda_{n}x}\cos\lambda_{n}y,$
(53)
where $\lambda_{n}=2\pi n/L$ and $A_{n}$ is to be determined. We also
impose $\partial p/\partial y=0$ on $y=\pm L/2$.
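To illustrate the idea behind the Dirichlet-to-Neumann map, the following sketch (our illustration; the function name and the explicit evaluation are assumptions, since the article's stencil instead builds this relation into the linear system) projects symmetric boundary data at $x=X$ onto cosine modes of a far-field expansion that decays away from the interface, then differentiates the series term by term to recover $\partial p/\partial x$ on the artificial boundary:

```python
import numpy as np

def dtn_normal_derivative(p_boundary, y, L, Q, X, n_modes=32):
    """Given p sampled on the artificial boundary x = X (assumed symmetric
    in y), fit the decaying far-field form
        p ~ (Q/L) x + sum_n a_n exp(-lambda_n (x - X)) cos(lambda_n y),
    with lambda_n = 2 pi n / L, and return dp/dx on x = X."""
    N = y.size
    g = p_boundary - (Q / L) * X              # decaying part of the data
    dpdx = np.full(N, Q / L)                  # far-field contribution
    for n in range(1, n_modes + 1):
        lam = 2.0 * np.pi * n / L
        a_n = (2.0 / N) * np.sum(g * np.cos(lam * y))  # cosine projection
        dpdx -= lam * a_n * np.cos(lam * y)   # x-derivative of mode n at x = X
    return dpdx

# Consistency check against a single-mode field
# p = (Q/L) x + 0.2 exp(-lam1 (x - X)) cos(lam1 y):
L_, Q_, X_ = 1.0, 1.0, 3.0
y = np.linspace(-L_ / 2, L_ / 2, 128, endpoint=False)
lam1 = 2.0 * np.pi / L_
p_b = (Q_ / L_) * X_ + 0.2 * np.cos(lam1 * y)
exact = Q_ / L_ - lam1 * 0.2 * np.cos(lam1 * y)
print(np.allclose(dtn_normal_derivative(p_b, y, L_, Q_, X_), exact))  # -> True
```

In a solver, this relation between boundary values and boundary normal derivatives is what closes the finite-difference system at the truncated domain edge.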
We perform a series of simulations using the same parameters as those of
DeGregoria and Schwartz [37], who studied this problem using a boundary-
integral approach, to demonstrate our solutions are consistent with the
expected behaviour. We choose the initial condition
$\displaystyle f(y,0)=\varepsilon\cos 2\pi y,$ (54)
where $\varepsilon=0.05$, and perform simulations over a range of values of
$\sigma$, shown in Figure 13. In this figure $L=1$, which sets the length
scale $r_{0}$ in (9), while $Q=1$ fixes $T$ in these equations. For low
$\sigma$ (row one of Figure 13), we find that as the bubble expands, a finger
grows, which is unstable and splits into two. This is consistent with the
results of DeGregoria and Schwartz [37], and this behaviour is also observed
experimentally by Tabeling et al. [124] when the injection rate is
sufficiently large. For larger values of $\sigma$ (rows two to four of Figure
13), a single stable finger propagates through the channel, whose speed
decreases and width increases as $\sigma$ increases. Again, this behaviour is
consistent with previous experimental and numerical results [37, 124].
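The width trend reported above can be quantified directly from the computed solution. As a simple diagnostic sketch (ours, assuming the inviscid region is available as a boolean mask on the uniform grid, with the channel direction along axis 0):

```python
import numpy as np

def finger_width_fraction(inviscid_mask: np.ndarray) -> float:
    """Relative finger width: at each station along the channel (axis 0),
    the fraction of nodes across the channel (axis 1) occupied by the
    inviscid finger; return the maximum over stations. A post-processing
    diagnostic, not part of the level-set scheme itself."""
    occupancy = inviscid_mask.mean(axis=1)
    return float(occupancy.max())

# Synthetic half-width finger on a 650 x 100 grid (channel along axis 0):
x = np.linspace(-0.5, 6.0, 650)[:, None]
y = np.linspace(-0.5, 0.5, 100)[None, :]
mask = (np.abs(y) < 0.25) & (x < 2.0)
print(finger_width_fraction(mask))  # -> 0.5
```

Tracking this quantity against $\sigma$ reproduces the classical width-selection curve discussed at the start of this subsection.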
Figure 13: Numerical simulations in channel geometry with values of the
surface tension parameter (top to bottom) $\sigma=2\times 10^{-4}$, $5\times
10^{-4}$, $7\times 10^{-3}$, and $2\times 10^{-2}$. Initial condition for all
simulations is (54) with $\varepsilon=0.05$. Simulations are performed on the
domain $-0.5\leq x\leq 0.5$ and $-0.5\leq y\leq 6$ using $100\times 650$
equally spaced nodes. Solutions are plotted in time intervals of $t=0.5$ up to
$t_{f}=3$.
## 5 Conclusion
In this article, we have reviewed a suite of Hele-Shaw configurations with two
immiscible fluids separated by a sharp interface. Our focus is on one-phase
models, which arise by assuming one fluid is much less viscous than the other
(indeed we assume the less viscous fluid is inviscid and ignore its
contribution). For the standard Hele-Shaw configuration with parallel
stationary plates, we have summarised how complex variable and conformal
mapping techniques can be applied to the zero-surface-tension model to deduce
a variety of exact analytical results. The three geometries we have focussed
on involve a bubble expanding into a body of viscous fluid, a blob of fluid
withdrawn from a point, and viscous fluid displaced by an inviscid fluid in a
Hele-Shaw channel. Despite the drawbacks of Hele-Shaw models without surface
tension in terms of physical applicability, these complex variable approaches
are very well studied by applied mathematicians and have motivated numerous
papers on moving boundary problems in general.
We have also reviewed a series of alterations to the standard one-phase Hele-
Shaw model. For these alterations applied in various combinations with the
three geometries, we have presented a flexible numerical scheme based on the
level set method. We have shown that our scheme is capable of reproducing the
complicated interfacial patterns that form in Hele–Shaw flow while using a
uniform computational grid. By making straightforward, appropriate adjustments
to the scheme, we have been able to solve for a wide range of configurations.
We have presented a selection of some of the more well-studied configurations,
including the expanding bubble problem, linearly tapered plates, the
withdrawal of fluid from a viscous blob, time-dependent plate gap, rotating
Hele-Shaw cell, and flow in a channel geometry. For all of these
configurations, we have demonstrated that our simulations compare well with
previous experimental and numerical results.
While we have considered a range of different Hele–Shaw configurations in this
article, this is by no means an exhaustive list. Using our numerical scheme,
opportunities exist to study configurations that have not previously been
considered either experimentally or numerically. For example, while the
linearly tapered configuration, discussed in Section 4.2, has received
significant attention [1, 8, 14, 74], including our own study in Morrow et al.
[99], an open question is to determine the effect of tapering the plates for
the corresponding blob problem. Our scheme could be used to gain insight the
effect of the taper angle on viscous finger development when the fluid is
withdrawn compared to the parallel plate case discussed in Section 4.3.
Further, additional physical effects on the interface between fluids could be
easily included, such as kinetic undercooling [6, 35] or dynamic wetting
effects [16, 106]. Further adjustments could be made to apply the scheme to
study controlling instabilities in Hele-Shaw cells with an elastic membrane
[108, 109, 111] or with an external electric field [50, 95], among other possibilities.
### Acknowledgements
SWM acknowledges the support of the Australian Research Council via the
Discovery Project DP140100933. He would like to thank the Isaac Newton
Institute for Mathematical Sciences, Cambridge, for support and hospitality
during the programme Complex Analysis: Techniques, Applications and
Computations where part of the work on this paper was undertaken. This
programme was supported by the EPSRC grant EP/R014604/1. He is grateful for
the generous support of the Simons Foundation who provided further financial
support for his visit to the Isaac Newton Institute via a Simons Foundation
Fellowship.
## References
* Al-Housseiny and Stone [2013] T. T. Al-Housseiny and H. A. Stone. Controlling viscous fingering in tapered Hele-Shaw cells. _Physics of Fluids_ , 25:092102, 2013.
* Al-Housseiny et al. [2012] T. T. Al-Housseiny, P. A. Tsai, and H. A. Stone. Control of interfacial instabilities using flow geometry. _Nature Physics_ , 8:747, 2012.
* Al-Housseiny et al. [2013] T. T. Al-Housseiny, I. C. Christov, and H. A. Stone. Two-phase fluid displacement and interfacial instabilities under elastic membranes. _Physical Review Letters_ , 111:034502, 2013.
* Alvarez-Lacalle et al. [2008] E. Alvarez-Lacalle, H. Gadêlha, and J. A. Miranda. Coriolis effects on fingering patterns under rotation. _Physical Review E_ , 78:026305, 2008.
* Anjos and Miranda [2013] P. H. A. Anjos and J. A. Miranda. Radial viscous fingering: Wetting film effects on pattern-forming mechanisms. _Physical Review E_ , 88:053003, 2013.
* Anjos et al. [2015] P. H. A. Anjos, E. O. Dias, and J. A. Miranda. Kinetic undercooling in Hele-Shaw flows. _Physical Review E_ , 92:043019, 2015.
* Anjos et al. [2017] P. H. A. Anjos, V. M. M. Alvarez, E. O. Dias, and J. A. Miranda. Rotating Hele-Shaw cell with a time-dependent angular velocity. _Physical Review Fluids_ , 2:124003, 2017.
* Anjos et al. [2018] P. H. A. Anjos, E. O. Dias, and J. A. Miranda. Fingering instability transition in radially tapered Hele-Shaw cells: Insights at the onset of nonlinear effects. _Physical Review Fluids_ , 3:124004, 2018.
* Anjos et al. [2021] P. H. A. Anjos, M. Zhao, J. Lowengrub, W. Bao, and S. Li. Controlling fingering instabilities in Hele-Shaw flows in the presence of wetting film effects. _Physical Review E_ , 103:063105, 2021.
  * Aronsson and Janfalk [1992] G. Aronsson and U. Janfalk. On Hele-Shaw flow of power-law fluids. _European Journal of Applied Mathematics_ , 3:343–366, 1992.
* Arun et al. [2020] R. Arun, S. T. M. Dawson, P. J. Schmid, A. Laskari, and B. J. McKeon. Control of instability by injection rate oscillations in a radial Hele-Shaw cell. _Physical Review Fluids_ , 5:123902, 2020.
* Beeson-Jones and Woods [2017] T. H. Beeson-Jones and A. W. Woods. Control of viscous instability by variation of injection rate in a fluid with time-dependent rheology. _Journal of Fluid Mechanics_ , 829:214–235, 2017.
* Ben-Jacob and Garik [1990] E. Ben-Jacob and P. Garik. The formation of patterns in non-equilibrium growth. _Nature_ , 343:523–530, 1990.
* Bongrand and Tsai [2018] G. Bongrand and P. A. Tsai. Manipulation of viscous fingering in a radially tapered cell geometry. _Physical Review E_ , 97:061101, 2018.
  * Bookstein [1989] F. L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 11:567–585, 1989.
* Bretherton [1961] F. P. Bretherton. The motion of long bubbles in tubes. _Journal of Fluid Mechanics_ , 10:166–188, 1961.
* Carrillo et al. [1996] L. Carrillo, F. X. Magdaleno, J. Casademunt, and J. Ortín. Experiments in a rotating Hele-Shaw cell. _Physical Review E_ , 54:6260, 1996.
* Casademunt [2004] J. Casademunt. Viscous fingering as a paradigm of interfacial pattern formation: Recent results and new challenges. _Chaos_ , 14:809–824, 2004.
* Ceniceros and Hou [2000] H. D. Ceniceros and T. Y. Hou. The singular perturbation of surface tension in Hele-Shaw flows. _Journal of Fluid Mechanics_ , 409:251–272, 2000.
* Ceniceros et al. [1999] H. D. Ceniceros, T. Y. Hou, and H. Si. Numerical study of Hele-Shaw flow with suction. _Physics of Fluids_ , 11:2471–2486, 1999.
* Chapman [1999] S. J. Chapman. On the role of Stokes lines in the selection of Saffman-Taylor fingers with small surface tension. _European Journal of Applied Mathematics_ , 10:513–534, 1999.
* Chapman and King [2003] S. J. Chapman and J. R. King. The selection of Saffman–Taylor fingers by kinetic undercooling. _Journal of Engineering Mathematics_ , 46:1–32, 2003.
* Chen et al. [2014] C.-Y. Chen, Y.-S. Huang, and J. A. Miranda. Radial Hele-Shaw flow with suction: fully nonlinear pattern formation. _Physical Review E_ , 89:053006, 2014.
* Chen [1987] J.-D. Chen. Radial viscous fingering patterns in Hele-Shaw cells. _Experiments in Fluids_ , 5:363–371, 1987.
* Chen et al. [1997] S. Chen, B. Merriman, S. Osher, and P. Smereka. A simple level set method for solving Stefan problems. _Journal of Computational Physics_ , 135:8–29, 1997.
* Coco and Russo [2013] A. Coco and G. Russo. Finite-difference ghost-point multigrid methods on Cartesian grids for elliptic problems in arbitrary domains. _Journal of Computational Physics_ , 241:464–501, 2013.
* Combescot et al. [1986] R. Combescot, T. Dombre, V. Hakim, Y. Pomeau, and A. Pumir. Shape selection of Saffman-Taylor fingers. _Physical Review Letters_ , 56:2036, 1986.
* Coutinho and Miranda [2020] Í. M. Coutinho and J. A. Miranda. Control of viscous fingering through variable injection rates and time-dependent viscosity fluids: Beyond the linear regime. _Phys. Rev. E_ , 102:063102, 2020.
* Crowdy and Tanveer [2004] D. Crowdy and S. Tanveer. The effect of finiteness in the Saffman–Taylor viscous fingering problem. _Journal of Statistical Physics_ , 114:1501–1536, 2004.
* Cummings and King [2004] L. J. Cummings and J. R. King. Hele–Shaw flow with a point sink: generic solution breakdown. _European Journal of Applied Mathematics_ , 15:1–37, 2004.
  * Cummings et al. [1999] L. J. Cummings, S. D. Howison, and J. R. King. Two-dimensional Stokes and Hele-Shaw flows with free surfaces. _European Journal of Applied Mathematics_ , 10:635–680, 1999.
* Cuttle et al. [2020] C. Cuttle, D. Pihler-Puzović, and A. Juel. Dynamics of front propagation in a compliant channel. _Journal of Fluid Mechanics_ , 886:A20, 2020.
* Dai and Shelley [1993] W.-S. Dai and M. J. Shelley. A numerical study of the effect of surface tension and noise on an expanding Hele–Shaw bubble. _Physics of Fluids_ , 5:2131–2146, 1993.
* Dallaston and McCue [2012] M. C. Dallaston and S. W. McCue. New exact solutions for Hele-Shaw flow in doubly connected regions. _Physics of Fluids_ , 24:052101, 2012.
* Dallaston and McCue [2013] M. C. Dallaston and S. W. McCue. Bubble extinction in Hele-Shaw flow with surface tension and kinetic undercooling regularization. _Nonlinearity_ , 26:1639–1665, 2013.
* Dallaston and McCue [2014] M. C. Dallaston and S. W. McCue. Corner and finger formation in Hele–Shaw flow with kinetic undercooling regularisation. _European Journal of Applied Mathematics_ , 25:707–727, 2014.
* DeGregoria and Schwartz [1986] A. J. DeGregoria and L. W. Schwartz. A boundary-integral method for two-phase displacement in Hele-Shaw cells. _Journal of Fluid Mechanics_ , 164:383–400, 1986.
* Dias and Miranda [2010] E. O. Dias and J. Miranda. Control of radial fingering patterns: A weakly nonlinear approach. _Physical Review E_ , 81:016312, 2010.
* Dias et al. [2010] E. O. Dias, F. Parisio, and J. A. Miranda. Suppression of viscous fluid fingering: A piecewise-constant injection process. _Physical Review E_ , 82:067301, 2010.
* Dias et al. [2012] E. O. Dias, E. Alvarez-Lacalle, M. S. Carvalho, and J. A. Miranda. Minimization of viscous fluid fingering: a variational scheme for optimal flow rates. _Physical Review Letters_ , 109:144502, 2012.
* Ebert et al. [2007] U. Ebert, B. Meulenbroeck, and L. Schäfer. Convective stabilization of a Laplacian moving boundary problem with kinetic undercooling. _SIAM Journal on Applied Mathematics_ , 68:292–310, 2007.
* Enright et al. [2002] D. Enright, R. Fedkiw, J. Ferziger, and I. Mitchell. A hybrid particle level set method for improved interface capturing. _Journal of Computational Physics_ , 183:83–116, 2002.
* Entov et al. [1995] V. M. Entov, P.I. Etingof, and D. Y. Kleinbock. On nonlinear interface dynamics in Hele-Shaw flows. _European Journal of Applied Mathematics_ , 6:399–420, 1995.
  * Eslami et al. [2020] A. Eslami, R. Basak, and S. M. Taghavi. Multiphase viscoplastic flows in a nonuniform Hele-Shaw cell: a fluidic device to control interfacial patterns. _Industrial & Engineering Chemistry Research_ , 59:4119–4133, 2020.
  * Fast et al. [2001] P. Fast, L. Kondic, M. J. Shelley, and P. Palffy-Muhoray. Pattern formation in non-Newtonian Hele–Shaw flow. _Physics of Fluids_ , 13:1191, 2001.
* Folch et al. [2009] R. Folch, E. Alvarez-Lacalle, J. Ortín, and J. Casademunt. Pattern formation and interface pinch-off in rotating Hele-Shaw flows: A phase-field approach. _Physical Review E_ , 80:056305, 2009.
* Fontana et al. [2014] J. V. Fontana, E. O. Dias, and J. A. Miranda. Controlling and minimizing fingering instabilities in non-Newtonian fluids. _Physical Review E_ , 89:013016, 2014.
* Fontana et al. [2021] J. V. Fontana, A. Juel, N. Bergemann, M. Heil, and A. L. Hazel. Modelling finger propagation in elasto-rigid channels. _Journal of Fluid Mechanics_ , 916:A27, 2021.
* Franco-Gómez et al. [2016] A. Franco-Gómez, A. B. Thompson, A. L. Hazel, and A. Juel. Sensitivity of Saffman–Taylor fingers to channel-depth perturbations. _Journal of Fluid Mechanics_ , 794:343–368, 2016.
* Gao et al. [2019] T. Gao, M. Mirzadeh, P. Bai, K. M. Conforti, and M. Z. Bazant. Active control of viscous fingering using electric fields. _Nature Communications_ , 10:1–8, 2019.
* Gardiner et al. [2015a] B. P. J. Gardiner, S. W. McCue, M. C. Dallaston, and T. J. Moroney. Saffman-Taylor fingers with kinetic undercooling. _Physical Review E_ , 91:023016, 2015a.
* Gardiner et al. [2015b] B. P. J. Gardiner, S. W. McCue, and T. J. Moroney. Discrete families of Saffman-Taylor fingers with exotic shapes. _Results in Physics_ , 5:103–104, 2015b.
  * Gibou et al. [2002] F. Gibou, R. P. Fedkiw, L.-T. Cheng, and M. Kang. A second-order-accurate symmetric discretization of the Poisson equation on irregular domains. _Journal of Computational Physics_ , 176:205–227, 2002.
* Gibou et al. [2003] F. Gibou, R. Fedkiw, R. Caflisch, and S. Osher. A level set approach for the numerical simulation of dendritic growth. _Journal of Scientific Computing_ , 19:183–199, 2003.
* Gibou et al. [2018] F. Gibou, R. Fedkiw, and S. Osher. A review of level-set methods and some recent applications. _Journal of Computational Physics_ , 353:82–109, 2018.
* Gin and Daripa [2015] C. Gin and P. Daripa. Stability results for multi-layer radial Hele-Shaw and porous media flows. _Physics of Fluids_ , 27:012101, 2015.
* Gin and Daripa [2021] C. Gin and P. Daripa. Time-dependent injection strategies for multilayer Hele-Shaw and porous media flows. _Physical Review Fluids_ , 6:033901, 2021.
* Givoli [2013] D. Givoli. _Numerical methods for problems in infinite domains_ , volume 33 of _Studies in Applied Mechanics_. Elsevier, 2013.
* Green et al. [2017] C. C. Green, C. J. Lustri, and S. W. McCue. The effect of surface tension on steadily translating bubbles in an unbounded Hele-Shaw cell. _Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences_ , 473:20170050, 2017.
  * Gustafsson and Vasil’ev [2006] B. Gustafsson and A. Vasil’ev. _Conformal and potential analysis in Hele-Shaw cells_. Birkhäuser-Verlag, 2006.
* Hohlov and Howison [1993] Y. E. Hohlov and S. D. Howison. On the classification of solutions to the zero-surface-tension model for Hele–Shaw free-boundary flows. _Quarterly of Applied Mathematics_ , 51:777–789, 1993.
* Hohlov et al. [1994] Y. E. Hohlov, S. D. Howison, C. Huntingford, J. R. Ockendon, and A. A. Lacey. A model for nonsmooth free boundaries in Hele–Shaw flows. _The Quarterly Journal of Mechanics and Applied Mathematics_ , 47:107–128, 1994.
* Homsy [1987] G. M. Homsy. Viscous fingering in porous media. _Annual Review of Fluid Mechanics_ , 19:271–311, 1987.
* Hong and Family [1988] D. C. Hong and F. Family. Bubbles in the Hele–Shaw cell — pattern selection and tip perturbations. _Physical Review A_ , 38:5253–5259, 1988.
* Hong and Langer [1986] D. C. Hong and J. S. Langer. Analytic theory of the selection mechanism in the Saffman-Taylor problem. _Physical Review Letters_ , 56:2032, 1986.
* Hou et al. [1994] T. Y. Hou, J. S. Lowengrub, and M. J. Shelley. Removing the stiffness from interfacial flow with surface tension. _Journal of Computational Physics_ , 114:312–338, 1994.
* Hou et al. [1997] T. Y. Hou, Z. Li, S. Osher, and H. Zhao. A hybrid method for moving interface problems with application to the Hele–Shaw flow. _Journal of Computational Physics_ , 134:236–252, 1997.
* Hou et al. [2001] T. Y. Hou, J. S. Lowengrub, and M. J. Shelley. Boundary Integral Methods for Multicomponent Fluids and Multiphase Materials. _Journal of Computational Physics_ , 169:302–362, 2001.
* Howison [1986a] S. D. Howison. Bubble growth in porous media and Hele–Shaw cells. _Proceedings of the Royal Society of Edinburgh Section A: Mathematics_ , 102:141–148, 1986a.
* Howison [1986b] S. D. Howison. Cusp Development in Hele–Shaw Flow with a Free Surface. _SIAM Journal on Applied Mathematics_ , 46:20–26, 1986b.
* Howison [1986c] S. D. Howison. Fingering in Hele-Shaw cells. _Journal of Fluid Mechanics_ , 167:439–453, 1986c.
* Howison et al. [1985] S. D. Howison, J. R. Ockendon, and A. A. Lacey. Singularity development in moving boundary problems. _The Quarterly Journal of Mechanics and Applied Mathematics_ , 38:343–354, 1985.
* Jackson et al. [2015] S. J. Jackson, D. Stevens, H. Power, and D. Giddings. A boundary element method for the solution of finite mobility ratio immiscible displacement in a Hele-Shaw cell. _International Journal for Numerical Methods in Fluids_ , 78:521–551, 2015.
* Jackson et al. [2017] S. J. Jackson, H. Power, D. Giddings, and D. Stevens. The stability of immiscible viscous fingering in Hele-Shaw cells with spatially varying permeability. _Computer Methods in Applied Mechanics and Engineering_ , 320:606–632, 2017.
* Kelly and Hinch [1997] E. D. Kelly and E. J. Hinch. Numerical simulations of sink flow in the Hele-Shaw cell with small surface tension. _European Journal of Applied Mathematics_ , 8:533–550, 1997.
* Kondic et al. [1998] L. Kondic, M. J. Shelley, and P. Palffy-Muhoray. Non-Newtonian Hele–Shaw flow and the Saffman–Taylor instability. _Physical Review Letters_ , 80:1433–1436, 1998.
* Lacey [1982] A. A. Lacey. Moving boundary problems in the flow of liquid through porous media. _ANZIAM Journal_ , 24:171–193, 1982.
* Leshchiner et al. [2010] A. Leshchiner, M. Thrasher, M. B. Mineev-Weinstein, and H. L. Swinney. Harmonic moment dynamics in Laplacian growth. _Physical Review E_ , 81:016206, 2010.
* Li et al. [2004] S. Li, J.S. Lowengrub, P. H. Leo, and V. Cristini. Nonlinear theory of self-similar crystal growth and melting. _Journal of Crystal Growth_ , 267:703–713, 2004.
* Li et al. [2007] S. Li, J. S. Lowengrub, and P. H. Leo. A rescaling scheme with application to the long-time simulation of viscous fingering in a Hele–Shaw cell. _Journal of Computational Physics_ , 225:554–567, 2007.
* Li et al. [2009] S. Li, J. S. Lowengrub, J. Fontana, and P. Palffy-Muhoray. Control of viscous fingering patterns in a radial Hele-Shaw cell. _Physical Review Letters_ , 102:174501, 2009.
* Liang [1986] S. Liang. Random-walk simulations of flow in Hele Shaw cells. _Physical Review A_ , 33:2663, 1986.
* Lindner et al. [2005] A. Lindner, D. Derks, and M. J. Shelley. Stretch flow of thin layers of Newtonian liquids: Fingering patterns and lifting forces. _Physics of Fluids_ , 17:072107, 2005.
* Lins and Azaiez [2017] T. F. Lins and J. Azaiez. Resonance-like dynamics in radial cyclic injection flows of immiscible fluids in homogeneous porous media. _Journal of Fluid Mechanics_ , 819:713–729, 2017.
* Lister et al. [2013] J. R. Lister, G. G. Peng, and J. A. Neufeld. Viscous control of peeling an elastic sheet by bending and pulling. _Physical Review Letters_ , 111:154501, 2013.
* Lu et al. [2020] D. Lu, F. Municchi, and I. C. Christov. Computational analysis of interfacial dynamics in angled Hele-Shaw cells: instability regimes. _Transport in Porous Media_ , 131:907–934, 2020.
* Lustri et al. [2020] C. J. Lustri, C. C. Green, and S. W. McCue. Selection of a Hele-Shaw bubble via exponential asymptotics. _SIAM Journal on Applied Mathematics_ , 80:289–311, 2020.
* McCloud and Maher [1995] K. V. McCloud and J. V. Maher. Experimental perturbations to Saffman-Taylor flow. _Physics Reports_ , 260:139–185, 1995.
* McCue [2018] S. W. McCue. Short, flat-tipped, viscous fingers: Novel interfacial patterns in a Hele-Shaw channel with an elastic boundary. _Journal of Fluid Mechanics_ , 834:1–4, 2018.
* McCue and King [2011] S. W. McCue and J. R. King. Contracting bubbles in Hele-Shaw cells with a power-law fluid. _Nonlinearity_ , 24:613–641, 2011.
* McLean and Saffman [1981] J. W. McLean and P. G. Saffman. The effect of surface tension on the shape of fingers in a Hele Shaw cell. _Journal of Fluid Mechanics_ , 102:455–469, 1981.
* Mineev–Weinstein [1998] M. Mineev–Weinstein. Selection of the Saffman–Taylor finger width in the absence of surface tension: an exact result. _Physical Review Letters_ , 80:2113–2116, 1998.
* Mineev–Weinstein et al. [2000] M. Mineev–Weinstein, P. B. Wiegmann, and A. Zabrodin. Integrable structure of interface dynamics. _Physical Review Letters_ , 84:5106–5109, 2000.
* Miranda and Widom [1998] J. Miranda and M. Widom. Radial fingering in a Hele-Shaw cell: a weakly nonlinear analysis. _Physica D_ , 120:315–328, 1998.
* Mirzadeh and Bazant [2017] M. Mirzadeh and M. Z. Bazant. Electrokinetic control of viscous fingering. _Physical Review Letters_ , 119:174501, 2017.
* Moroney et al. [2017] T. J. Moroney, D. R. Lusmore, S. W. McCue, and D. L. S. McElwain. Extending fields in a level set method by solving a biharmonic equation. _Journal of Computational Physics_ , 343:170–185, 2017.
  * Morrow et al. [2019a] L. C. Morrow, M. C. Dallaston, and S. W. McCue. Interfacial dynamics and pinch-off singularities for axially symmetric Darcy flow. _Physical Review E_ , 100:053109, 2019a.
* Morrow et al. [2019b] L. C. Morrow, J. R. King, T. J. Moroney, and S. W. McCue. Moving boundary problems for quasi-steady conduction limited melting. _SIAM Journal on Applied Mathematics_ , 79:2107–2131, 2019b.
* Morrow et al. [2019c] L. C. Morrow, T. J. Moroney, and S. W. McCue. Numerical investigation of controlling interfacial instabilities in non-standard Hele-Shaw configurations. _Journal of Fluid Mechanics_ , 877:1063–1097, 2019c.
* Nase et al. [2011] J. Nase, D. Derks, and A. Lindner. Dynamic evolution of fingering patterns in a lifted Hele–Shaw cell. _Physics of Fluids_ , 23:123101, 2011.
* Nie and Tian [1998] Q. Nie and F. R. Tian. Singularities in Hele–Shaw flows. _SIAM Journal on Applied Mathematics_ , 58:34–54, 1998.
* Osher and Fedkiw [2003] S. Osher and R. Fedkiw. _Level set methods and dynamic implicit surfaces_ , volume 153. Springer, 2003.
* Osher and Fedkiw [2001] S. Osher and R. P. Fedkiw. Level set methods: an overview and some recent results. _Journal of Computational Physics_ , 169:463–502, 2001.
* Osher and Sethian [1988] S. Osher and J. A. Sethian. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. _Journal of Computational Physics_ , 79:12–49, 1988.
* Paiva et al. [2019] A. S. S. Paiva, S. H. A. Lira, and R. F. S. Andrade. Non-linear effects in a closed rotating radial Hele-Shaw cell. _AIP Advances_ , 9:025121, 2019.
* Park and Homsy [1984] C.-W. Park and G. M. Homsy. Two-phase displacement in Hele Shaw cells: theory. _Journal of Fluid Mechanics_ , 139:291–308, 1984.
* Paterson [1981] L. Paterson. Radial fingering in a Hele Shaw cell. _Journal of Fluid Mechanics_ , 113:513–529, 1981.
* Pihler-Puzović et al. [2012] D. Pihler-Puzović, P. Illien, M. Heil, and A. Juel. Suppression of complex fingerlike patterns at the interface between air and a viscous fluid by elastic membranes. _Physical Review Letters_ , 108:074502, 2012.
* Pihler-Puzović et al. [2013] D. Pihler-Puzović, R. Périllat, M. Russell, A. Juel, and M. Heil. Modelling the suppression of viscous fingering in elastic-walled Hele-Shaw cells. _Journal of Fluid Mechanics_ , 731:162–183, 2013.
* Pihler-Puzović et al. [2014] D Pihler-Puzović, A. Juel, and M. Heil. The interaction between viscous fingering and wrinkling in elastic-walled Hele-Shaw cells. _Physics of Fluids_ , 26:022102, 2014.
* Pihler-Puzović et al. [2018] D. Pihler-Puzović, G. G. Peng, J. R. Lister, M. Heil, and A. Juel. Viscous fingering in a radial elastic-walled Hele-Shaw cell. _Journal of Fluid Mechanics_ , 849:163–191, 2018.
  * Pleshchinskii and Reissig [2002] N. B. Pleshchinskii and M. Reissig. Hele-Shaw flows with nonlinear kinetic undercooling regularization. _Nonlinear Analysis_ , 50:191–203, 2002.
  * Polubarinova-Kochina [1945] P. Y. Polubarinova-Kochina. On motion of the contour of an oil layer. In _Dokl. Akad. Nauk SSSR_ , volume 47, pages 254–257, 1945.
* Power et al. [2013] H. Power, D. Stevens, K. A. Cliffe, and A. Golin. A boundary element study of the effect of surface dissolution on the evolution of immiscible viscous fingering within a Hele-Shaw cell. _Engineering Analysis with Boundary Elements_ , 37:1318–1330, 2013.
* Richardson [1972] S. Richardson. Hele–Shaw flows with a free boundary produced by the injection of fluid into a narrow channel. _Journal of Fluids Mechanics_ , 56:609–618, 1972.
* Sader et al. [1994] J. E. Sader, D. Y. C. Chan, and B. D. Hughes. Non-Newtonian effects on immiscible viscous fingering in a radial Hele–Shaw cell. _Physical Review E_ , 49:420–432, 1994.
* Saffman [1986] P. G. Saffman. Viscous fingering in Hele-Shaw cells. _Journal of Fluid Mechanics_ , 173:73–94, 1986.
* Saffman and Taylor [1958] P. G. Saffman and G. I. Taylor. The penetration of a fluid into a porous medium or Hele-Shaw cell containing a more viscous liquid. _Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences_ , 245:312–329, 1958.
* Schwartz [1989] L. W. Schwartz. Instability and fingering in a rotating Hele–Shaw cell or porous medium. _Physics of Fluids_ , 1:167–169, 1989.
* Sethian [1993] J. A. Sethian. _Level set methods and fast marching methods: evolving interfaces in computational geometry, fluid mechanics, computer vision, and materials science_ , volume 3. Cambridge University Press, Cambridge, UK, 1993.
* Sethian and Smereka [2003] J. A. Sethian and P. Smereka. Level set methods for fluid interfaces. _Annual Review of Fluid Mechanics_ , 35:341–372, 2003.
* Shelley et al. [1997] M. J. Shelley, F. Tian, and K. Wlodarski. Hele-Shaw flow and pattern formation in a time-dependent gap. _Nonlinearity_ , 10:1471, 1997.
* Shraiman [1986] B. I. Shraiman. Velocity selection and the Saffman–Taylor problem. _Physical Review Letters_ , 56:2028–2031, 1986.
* Tabeling et al. [1987] P. Tabeling, G. Zocchi, and A. Libchaber. An experimental study of the Saffman-Taylor instability. _Journal of Fluid Mechanics_ , 177:67–82, 1987.
* Tanveer [1987a] S. Tanveer. New solutions for steady bubbles in a Hele–Shaw cell. _Physics of Fluids_ , 30:651–658, 1987a.
* Tanveer [1987b] S. Tanveer. Analytic theory for the selection of a symmetric Saffman-Taylor finger in a Hele-Shaw cell. _Physics of Fluids_ , 30:1589–1605, 1987b.
* Tanveer [1989] S. Tanveer. Analytic theory for the determination of velocity and stability of bubbles in a Hele-Shaw cell. _Theoretical and Computational Fluid Dynamics_ , 1:135–163, 1989.
* Tanveer [2000] S. Tanveer. Surprises in viscous fingering. _Journal of Fluid Mechanics_ , 409:273–308, 2000.
* Tanveer and Saffman [1987] S. Tanveer and P. G. Saffman. Stability of bubbles in a Hele–Shaw cell. _Physics of Fluids_ , 30:2624–2635, 1987.
* Thomé et al. [1989] H. Thomé, M. Rabaud, V. Hakim, and Y. Couder. The Saffman–Taylor instability: From the linear to the circular geometry. _Physics of Fluids_ , 1:224–240, 1989.
* Thompson et al. [2014] A. B. Thompson, A. Juel, and A. L. Hazel. Multiple finger propagation modes in Hele-Shaw channels of variable depth. _Journal of Fluid Mechanics_ , 746:123–164, 2014.
* Vanden-Broeck [1983] J.-M. Vanden-Broeck. Fingers in a Hele-Shaw Cell with surface tension. _Physics of Fluids_ , 26:2033, 1983.
* Vaquero-Stainer et al. [2019] C. Vaquero-Stainer, M. Heil, A. Juel, and D. Pihler-Puzović. Self-similar and disordered front propagation in a radial Hele-Shaw channel with time-varying cell depth. _Physical Review Fluids_ , 4:064002, 2019.
* Vasconcelos [2001] G. Vasconcelos. Exact solutions for steady bubbles in a Hele-Shaw cell with rectangular geometry. _Journal of Fluid Mechanics_ , 444:175–198, 2001.
* Vasconcelos [1994] G. L. Vasconcelos. Multiple bubbles in a Hele–Shaw cell. _Physical Review E_ , 50:R3306–R3309, 1994.
* Vasil’ev [2009] A. Vasil’ev. From the hele-shaw experiment to integrable systems: A historical overview. _Compl. Anal. Oper. Theor._ , 3:551–585, 2009.
* Waters and Cummings [2005] S. L. Waters and L. J. Cummings. Coriolis effects in a rotating Hele-Shaw cell. _Physics of Fluids_ , 17:048101, 2005.
* Xie [2019] X. Xie. Rigorous results in existence and selection of Saffman–Taylor fingers by kinetic undercooling. _European Journal of Applied Mathematics_ , 30:63–116, 2019.
* Xie [2021] X. Xie. Analytic solution to an interfacial flow with kinetic undercooling in a time-dependent gap Hele-Shaw cell. _Discrete and Continuous Dynamical Systems Series B_ , 26:4663–4680, 2021\.
* Zhao et al. [1992] H. Zhao, J. Casademunt, C. Yeung, and J. V. Maher. Perturbing Hele-Shaw flow with a small gap gradient. _Physical Review A_ , 45:2455, 1992.
* Zhao et al. [2018] M. Zhao, X. Li, W. Ying, A. Belmonte, J. Lowengrub, and S. Li. Computation of a shrinking interface in a Hele-Shaw cell. _SIAM Journal on Scientific Computing_ , 40:B1206–B1228, 2018.
* Zhao et al. [2021] M. Zhao, Z. Niroobakhsh, J. Lowengrub, and S. Li. Nonlinear limiting dynamics of a shrinking interface in a Hele-Shaw cell. _Journal of Fluid Mechanics_ , 910:A41, 2021.
* Zheng et al. [2015] Z. Zheng, H. Kim, and H. A. Stone. Controlling viscous fingering using time-dependent strategies. _Physical Review Letters_ , 115:174501, 2015.
|
# Fast Convergence of DETR with Spatially Modulated Co-Attention
Peng Gao1 Minghang Zheng3 Xiaogang Wang1 Jifeng Dai2 Hongsheng Li1
1Multimedia Laboratory, The Chinese University of Hong Kong
2SenseTime Research 3Peking University
###### Abstract
The recently proposed Detection Transformer (DETR) model successfully applies
the Transformer to object detection and achieves performance comparable with
two-stage object detection frameworks such as Faster-RCNN. However, DETR
suffers from slow convergence: training DETR [4] from scratch needs 500 epochs to
achieve a high accuracy. To accelerate its convergence, we propose a simple
yet effective scheme for improving the DETR framework, namely Spatially
Modulated Co-Attention (SMCA) mechanism. The core idea of SMCA is to conduct
regression-aware co-attention in DETR by constraining co-attention responses
to be high near initially estimated bounding box locations. Our proposed SMCA
increases DETR’s convergence speed by replacing the original co-attention
mechanism in the decoder while keeping other operations in DETR unchanged.
Furthermore, by integrating multi-head and scale-selection attention designs
into SMCA, our fully-fledged SMCA can achieve better performance compared to
DETR with a dilated convolution-based backbone (45.6 mAP at 108 epochs vs.
43.3 mAP at 500 epochs). We perform extensive ablation studies on the COCO
dataset to validate the effectiveness of the proposed SMCA.
## 1 Introduction
The recently proposed DETR [4] has significantly simplified object detection
pipeline by removing hand-crafted anchors [35] and non-maximum suppression
(NMS) [2]. However, the convergence speed of DETR is slow compared with two-
stage [11, 10, 35] or one-stage [27, 33, 25] detectors (500 vs. 40 epochs).
Slow convergence of DETR increases the algorithm design cycle, makes it
difficult for researchers to further extend this algorithm, and thus hinders
its widespread usage.
Figure 1: Comparison of convergence of DETR-DC5 trained for 500 epochs, and
our proposed SMCA trained for 50 epochs and 108 epochs. The convergence speed
of the proposed SMCA is much faster than the original DETR.
In DETR, there are a series of object query vectors responsible for detecting
objects at different spatial locations. Each object query interacts with the
spatial visual features encoded by a Convolution Neural Network (CNN) [15] and
adaptively collects information from spatial locations with a co-attention
mechanism and then estimates the bounding box locations and object categories.
However, in the decoder of DETR, the co-attended visual regions for each
object query might be unrelated to the bounding box to be predicted by the
query. Thus the decoder of DETR needs long training epochs to search for the
properly co-attended visual regions to accurately identify the corresponding
objects.
Motivated by this observation, we propose a novel module named Spatially
Modulated Co-attention (SMCA), which is a plug-and-play module to replace the
existing co-attention mechanism in DETR and achieves faster convergence and
improved performance with very simple modifications. The proposed SMCA
dynamically predicts initial center and scale of the box corresponding to each
object query to generate a 2D spatial Gaussian-like weight map. The weight map
is element-wisely multiplied with the co-attention feature maps of object
query and image features to more effectively aggregate query-related
information from the visual feature map. In this way, the spatial weight map
effectively modulates the search range of each object query’s co-attention to
be properly around the initially estimated object center and scale. By
leveraging the predicted Gaussian-distributed spatial prior, our SMCA can
significantly speed up the training of DETR.
Although naively incorporating the spatially-modulated co-attention mechanism
into DETR speeds up the convergence, the performance is worse compared with
DETR (41.0 mAP at 50 epochs, 42.7 at 108 epochs vs. 43.3 mAP at 500 epochs).
Motivated by the effectiveness of multi-head attention-based Transformer [40]
and multi-scale feature [24] in previous research work, our SMCA is further
augmented with the multi-scale visual feature encoding in the encoder and the
multi-head attention in the decoder. For multi-scale visual feature encoding
in the encoder, instead of naively rescaling and upsampling the multi-scale
features from the CNN backbone to form a joint multi-scale feature map, Intra-
scale and multi-scale self-attention mechanisms are introduced to directly and
efficiently propagate information between the visual features of multiple
scales. For the proposed multi-scale self-attention, visual features at all
spatial locations of all scales interact with each other via self-attention.
However, as the number of all spatial locations at all scales is quite large
and leads to large computational cost, we introduce the intra-scale self-
attention to alleviate the heavy computation. The properly combined intra-
scale and multi-scale self-attention achieve efficient and discriminative
multi-scale feature encoding. In the decoder, each object query can adaptively
select features of proper scales via the proposed scale-selection attention.
For the multiple co-attention heads in the decoder, all heads estimate head-
specific object centers and scales to generate a series of different spatial
weight maps for spatially modulating the co-attention features. Each of the
multiple heads aggregates visual information from slightly different locations
and thus improves the detection performance.
Our SMCA is motivated by the following research directions. DRAW [12] proposed
a differentiable read-and-write operator with dynamically predicted Gaussian
sampling points for image generation. Gaussian Transformer [13] has been
proposed for accelerating natural language inference with Gaussian prior.
Different from Gaussian Transformer, SMCA predicts a dynamically spatial
weight map to tackle the dynamic search range of the objects. Deformable DETR
[46] achieved fast convergence of DETR with learnable sparse sampling.
Compared with Deformable DETR, our proposed SMCA explores another direction
for fast convergence of DETR by exploring dynamic Gaussian-like spatial prior.
Besides, SMCA can accelerate the training of DETR by only replacing co-
attention in the decoder. Deformable DETR replaces the Transformer with
deformable attention for both the encoder and decoder, which explores local
information rather than global information. SMCA demonstrates that exploring
global information can also result in the fast convergence of DETR. Besides
the above-mentioned methods, SMCA is also motivated by feature pyramids and
dynamic modulation, which will be introduced in related work.
We summarize our contributions below:
* We propose a novel Spatially Modulated Co-Attention (SMCA), which can accelerate
the convergence of DETR by conducting location-constrained object regression.
SMCA is a plug-and-play module in the original DETR. The basic version of SMCA
without multi-scale features and multi-head attention can already achieve 41.0
mAP at 50 epochs and 42.7 mAP at 108 epochs. It takes 265 V100 GPU hours to
train the basic version of SMCA for 50 epochs.
* Our full SMCA further integrates multi-scale features and multi-head spatial
modulation, which can further significantly improve and surpass DETR with much
fewer training iterations. SMCA can achieve 43.7 mAP at 50 epochs and 45.6 mAP
at 108 epochs, while DETR-DC5 achieves 43.3 mAP at 500 epochs. It takes 600
V100 GPU hours to train the full SMCA for 50 epochs.
* We perform extensive ablation studies on the COCO 2017 dataset to validate the
proposed SMCA module and the network design.
## 2 Related Work
### 2.1 Object Detection
Motivated by the success of deep learning on image classification [22, 15],
deep learning has been successfully applied to object detection [11]. Deep
learning-based object detection frameworks can be categorized into two-stage,
one-stage, and end-to-end ones.
For two-stage object detectors including RCNN [11], Fast RCNN [10] and Faster
RCNN [35], the region proposal layer generates a few regions from dense
sliding windows first, and the ROI align [14] layer then extracts fine-grained
features and perform classification over the pooled features. For one-stage
detectors such as YOLO [33] and SSD [27], they conduct object classification
and location estimation directly over dense sliding windows. Both two-stage and
one-stage methods need complicated post-processing to generate the final
bounding box predictions.
Recently, another branch of object detection methods [37, 36, 34, 4] beyond
one-stage and two-stage ones has gained popularity. They directly supervise
bounding box predictions end-to-end with Hungarian bipartite matching.
However, DETR [4] suffered from slow convergence compared with two-stage and
one-stage object detectors. Deformable DETR [46] accelerates the convergence
speed of DETR via learnable sparse sampling coupled with multi-scale
deformable encoder. TSP [38] analyzed the possible causes of slow convergence
in DETR and identified co-attention and bipartite matching as the two main causes. It
then combined RCNN- or FCOS-based methods with DETR. TSP-RCNN and TSP-FCOS
achieve fast convergence with better performance. Deformable DETR, TSP-RCNN
and TSP-FCOS only explored local information while our SMCA explores global
information with a self-attention and co-attention mechanism. Adaptive
Clustering Transformer (ACT) [45] proposed a run-time pruning of attention on
DETR’s encoder by LSH approximate clustering. Different from ACT, we
accelerate the convergence of training, while ACT targets faster inference
without re-training. UP-DETR [5] proposes a novel self-supervised loss to
enhance the convergence speed and performance of DETR.
Loss balancing and multi-scale information have been actively studied in object
detection. There usually exists an imbalance between positive and negative
samples, so the gradients of negative samples would dominate the training
process. Focal loss [25] proposed an improved version of cross entropy loss to
attenuate the gradients generated by negative samples in object detection.
Feature Pyramid Network (FPN) [24] and its variants [20] proposed a bottom-up
and top-down way to generate multi-scale features for achieving better
performance for object detection. Different from the multi-scale features
generated from FPN, SMCA adopts a simple cascade of intra-scale and multi-
scale self-attention modules to conduct information exchange between features
at different positions and scales.
### 2.2 Transformer
CNNs [23] and LSTMs [16] can be used for modeling sequential data. A CNN processes
input sequences in a weight-shared sliding-window manner. An LSTM processes
inputs with a recurrence mechanism controlled by several dynamically predicted
gating functions. Transformer [40] introduces a new architecture beyond CNN
and LSTM by performing information exchange between all pairs of input using
key-query value attention. Transformer has achieved success on machine
translation, after which Transformer has been adopted in different fields,
including model pre-training [6, 31, 32, 3], visual recognition [30, 7], and
multi-modality fusion [44, 8, 29]. Transformer has quadratic complexity for
information exchange between all pairs of inputs, which is difficult to scale
up for longer input sequences. Many methods have been proposed to tackle this
problem. Reformer [21] proposed a reversible FFN and clustering self-
attention. Linformer [41] and FastTransformer [19] proposed to remove the
softmax in the transformer and perform matrix multiplication between query and
value first to obtain a linear-complexity transformer. Longformer [1] performs
self-attention within a local window instead of the whole input sequence.
Transformer has been utilized in DETR to enhance the features by performing
feature exchange between different positions and object queries. In SMCA,
intra-scale and multi-scale self-attention are utilized for information
exchange within and across scales. In this paper, our SMCA is based on
the original Transformer. We will explore memory-efficient transformers in
SMCA in future work.
### 2.3 Dynamic Modulation
Dynamic modulation has been actively studied in different research fields of
deep learning. In LSTM, a dynamic gate would be predicted to control the
temporal information flow. Recent attention mechanism can be seen as a variant
of dynamic modulation. Look-Attend-Tell [43] applied dynamic modulation in
image captioning using attention. At each time step, an attention map is
predicted, a weighted summation is computed over the residual features, and the
word for the current step is predicted. The attention patterns in [43] can be
interpreted as where the model is looking. Dynamic filter [18] generates a dynamic
convolution kernel from a prediction network and applies the predicted
convolution over features in a sliding window fashion. Motivated by the
dynamic filter, QGHC [9] adopted a dynamic group-wise filter to guide the
information aggregation in the visual branch using language guided
convolution. Lightweight convolution [42] used dynamically predicted depth-wise
filters in machine translation and surpassed the performance of the Transformer. SE-
Net [17] successfully applies channel-wise attention to modulate deep features
for image recognition. Motivated by the dynamic modulation mechanism in
previous research, we design a simple scale-selection attention to dynamically
select the corresponding scale for each object query.
## 3 Spatially Modulated Co-Attention
### 3.1 Overview
In this section, we will first revisit the basic design of DETR [4] and then
introduce the basic version of SMCA. We will then describe how to integrate
multi-head and scale-selection attention mechanisms into
SMCA. The overall pipeline of SMCA is illustrated in Figure 2.
### 3.2 A Revisit of DETR
Figure 2: The overall pipeline of Spatially Modulated Co-Attention (SMCA) with
intra-scale self-attention, multi-scale self-attention, spatial modulation,
and scale-selection attention modules. Each object query performs spatially
modulated co-attention and then predicts the target bounding boxes and their
object categories. $N$ stands for the number of object queries. $L$ stands for
the layers of decoder.
End-to-end object DEtection with TRansformer (DETR) [4] formulates object
detection as a set prediction problem. A Convolution Neural Network (CNN) [15]
extracts visual feature maps $f\in\mathbb{R}^{C\times H\times W}$ from an
image $I\in\mathbb{R}^{3\times H_{0}\times W_{0}}$, where $H,W$ and
$H_{0},W_{0}$ are the height/width of the input image and the visual feature
map, respectively.
The visual features augmented with position embedding $f_{pe}$ would be fed
into the encoder of the Transformer. Self-attention would be applied to
$f_{pe}$ to generate the key, query, and value features $K,Q,V$ to exchange
information between features at all spatial positions. To increase the feature
diversity, such features would be split into multiple groups along the channel
dimension for the multi-head self-attention. The multi-head normalized dot-
product attention is conducted as
$\displaystyle E_{i}$
$\displaystyle=\operatorname{Softmax}(K_{i}^{T}Q_{i}/{\sqrt{d}})V_{i},$ (1)
$\displaystyle E$ $\displaystyle=\operatorname{Concat}(E_{1},\dots,E_{H}),$
where $K_{i},Q_{i},V_{i}$ denote the $i$th feature group of the key, query,
and value features. There are $H$ groups for each type of features, and the
output encoder features $E$ are then further transformed and input into the
decoder of the Transformer.
Given the visual feature $E$ encoded from the encoder, DETR performs co-
attention between object queries $O_{q}\in\mathbb{R}^{N\times C}$ and the
visual features $E\in\mathbb{R}^{L\times C}$, where $N$ denotes the number of
pre-specified object queries and $L$ is the number of the spatial visual
features.
$\displaystyle Q$
$\displaystyle=\operatorname{FC}(O_{q}),\,\,K,V=\operatorname{FC}(E)$
$\displaystyle C_{i}$
$\displaystyle=\operatorname{Softmax}(K_{i}^{T}Q_{i}/\sqrt{d})V_{i},$ (2)
$\displaystyle C$ $\displaystyle=\operatorname{Concat}(C_{1},\dots,C_{H}),$
where $\operatorname{FC}$ denotes a single-layer linear transformation, and
$C_{i}$ denotes the co-attended feature for the object query $O_{q}$ from the
$i$th co-attention head. The decoder’s output features of each object query is
then further transformed by a Multi-Layer Perceptron (MLP) to output class
score and box location for each object.
Given box and class prediction, the Hungarian algorithm is applied between
predictions and ground-truth box annotations to identify the learning targets
of each object query.
### 3.3 Spatially Modulated Co-Attention
The original co-attention in DETR is unaware of the predicted bounding boxes
and thus requires many iterations to generate the proper attention map for
each object query. The core idea of our SMCA is to combine the learnable co-
attention maps with handcrafted query spatial priors, which constrain the
attended features to be around the object queries’ initial estimations and
thus to be more related to the final object predictions. SMCA module is
illustrated in the Figure 2 in orange.
Dynamic spatial weight maps. Each object query first dynamically predicts the
center and scale of its responsible object, which are then used to generate a
2D Gaussian-like spatial weight map. The center of the Gaussian-like
distribution are parameterized in the normalized coordinates of
$[0,1]\times[0,1]$. The initial prediction of the normalized center
$c^{\mathrm{norm}}_{h},c^{\mathrm{norm}}_{w}$ and scale $s_{h},s_{w}$ of the
Gaussian-like distribution for object query $O_{q}$ is formulated as
$\displaystyle c^{\mathrm{norm}}_{h},c^{\mathrm{norm}}_{w}$
$\displaystyle=\operatorname{sigmoid}(\operatorname{MLP}(O_{q})),$ (3)
$\displaystyle s_{h},s_{w}$ $\displaystyle=\operatorname{FC}(O_{q}),$
where the object query $O_{q}$ is projected to obtain normalized prediction
center in the two dimensions $c^{\mathrm{norm}}_{h},c^{\mathrm{norm}}_{w}$
with a 2-layer MLP followed by a sigmoid activation function. The predicted
center is then unnormalized to obtain the center coordinates $c_{h},c_{w}$ in
the original image. $O_{q}$ would also dynamically estimate the object scales
$s_{h},s_{w}$ along the two dimensions to create the 2D Gaussian-like weight
map, which is then used to re-weight the co-attention map to emphasize
features around the predicted object location.
Objects in natural images show diverse scales and height/width ratios. The
design of predicting width- and height-independent $s_{h},s_{w}$ can better
tackle the complex object aspect ratios in real-world scenarios. For large or
small objects, SMCA dynamically generates $s_{h},s_{w}$ of different values,
so that the modulated co-attention map by the spatial weight map $G$ can
aggregate sufficient information from all parts of large objects or suppress
background clutters for the small objects. After predicting the object center
$c_{w},c_{h}$ and scale $s_{w},s_{h}$, SMCA generates the Gaussian-like weight
map as
$\displaystyle G(i,j)=\operatorname{exp}\left(-\frac{(i-c_{w})^{2}}{\beta
s_{w}^{2}}-\frac{(j-c_{h})^{2}}{\beta s_{h}^{2}}\right),$ (4)
where $(i,j)\in[0,W]\times[0,H]$ is the spatial indices of the weight map $G$,
and $\beta$ is a hyper-parameter to modulate the bandwidth of the Gaussian-
like distribution. In general, the weight map $G$ assigns high importance to
spatial locations near the center and low importance to positions far from the
center. $\beta$ can be manually tuned with a handcrafted scheme to ensure $G$
covering a large spatial range at the beginning of training so that the
network can receive more informative gradients.
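The weight map of Eq. (4) can be computed as below; a minimal NumPy sketch, assuming the center and scales have already been unnormalized to pixel coordinates.

```python
import numpy as np

def gaussian_weight_map(c_w, c_h, s_w, s_h, W, H, beta=1.0):
    """2D Gaussian-like spatial weight map of Eq. (4). (c_w, c_h) is the
    unnormalized object center, (s_w, s_h) the per-axis scales, and beta
    modulates the bandwidth. Returns an (H, W) map peaking at 1 at the
    center and decaying with distance from it."""
    i = np.arange(W)[None, :]   # column (width) index
    j = np.arange(H)[:, None]   # row (height) index
    return np.exp(-(i - c_w) ** 2 / (beta * s_w ** 2)
                  - (j - c_h) ** 2 / (beta * s_h ** 2))

# usage: a 16x8 feature map with the object centered at (8, 4)
G = gaussian_weight_map(c_w=8.0, c_h=4.0, s_w=3.0, s_h=2.0, W=16, H=8)
```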
Spatially-modulated co-attention. Given the dynamically generated spatial
prior $G$, we modulate the co-attention maps $C_{i}$ between object query
$O_{q}$ and the self-attention encoded features $E$ with the spatial prior $G$. For
each co-attention map $C_{i}$ generated with the dot-product attention (Eq.
(2)), we modulate the co-attention maps $C_{i}$ with the spatial weight map
$G$, where $G$ is shared for all co-attention heads in the basic version of
our SMCA,
$\displaystyle C_{i}=\operatorname{softmax}(K_{i}^{T}Q_{i}/\sqrt{d}+\log
G)V_{i}.$ (5)
Our SMCA performs element-wise addition between the logarithm of the spatial
map $G$ and the dot-product co-attention logits $K_{i}^{T}Q_{i}/\sqrt{d}$, followed by
softmax normalization over all spatial locations. By doing so, the decoder co-
attention would weight more around the predicted bounding box locations, which
can limit the search space of the spatial patterns of the co-attention and
thus increases the convergence speed. The Gaussian-like weight map is
illustrated in Figure 2, which constrains the co-attention to focus more on
regions near the predicted bounding box location and thus significantly
increases the convergence speed of DETR. In the basic version of SMCA, co-
attention maps $C_{i}$ of the multiple attention heads share the same
Gaussian-like weight map $G$.
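Eq. (5) amounts to adding $\log G$ to the attention logits before the softmax; a single-head NumPy sketch, with illustrative shapes and a small `eps` floor to keep the logarithm finite where $G$ is zero:

```python
import numpy as np

def smca_coattention(Q, K, V, G, eps=1e-9):
    """Spatially modulated co-attention of Eq. (5), single head.
    Q: (N, d) object queries; K, V: (L, d) encoded features flattened over
    spatial locations; G: (N, L) per-query Gaussian weight maps. Adding
    log G to the logits re-weights attention toward each query's spatial
    prior: locations where G ~ 0 receive near-zero attention."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) + np.log(G + eps)
    logits -= logits.max(axis=-1, keepdims=True)   # numerically stable softmax
    A = np.exp(logits)
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V, A
```

A location with zero spatial prior contributes essentially nothing to the output, which is exactly the constraint that narrows the co-attention search space.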
SMCA with multi-head modulation. We also investigate to modulate co-attention
features differently for different co-attention heads. Each head starts from a
head-shared center $[c_{w},c_{h}]$, similar to that of the basic version of
SMCA, and then predicts a head-specific center offset $[\Delta c_{w,i},\Delta
c_{h,i}]$ and head-specific scales $s_{w,i},s_{h,i}$. The Gaussian-like
spatial weight map $G_{i}$ can thus be generated based on the head-specific
center $[c_{w}+\Delta c_{w,i},c_{h}+\Delta c_{h,i}]$ and scales
$s_{w,i},s_{h,i}$. The co-attention feature maps $C_{1},\dots,C_{H}$ can be
obtained as
$\displaystyle C_{i}=\operatorname{softmax}(K_{i}^{T}Q_{i}/\sqrt{d}+\log
G_{i})V_{i}\quad\text{ for }i=1,\dots,H.$ (6)
Different from Eq. (5) that shares $\operatorname{log}G$ for all attention
heads, the above Eq. (6) modulates co-attention maps by head-specific spatial
weight maps $\operatorname{log}G_{i}$. The multiple spatial weight maps can
emphasize diverse context and improve the detection accuracy.
SMCA with multi-scale visual features. Feature pyramid is popular in object
detection frameworks and generally leads to significant improvements over
single-scale feature encoding. Motivated by the feature pyramid network [24]
in previous works, we also integrate multi-scale features into SMCA. The basic
version of SMCA conducts co-attention between object queries and single-scale
feature maps. As objects naturally have different scales, we can further
improve the framework by replacing single-scale feature encoding with multi-
scale feature encoding in the encoder of the Transformer.
Given an image, the CNN extracts the multi-scale visual features with
downsampling rates 16, 32, 64 to obtain multi-scale features $f_{16}$,
$f_{32}$, $f_{64}$, respectively. The multi-scale features are directly
obtained from the CNN backbone and Feature Pyramid Network is not used to save
the computational cost. For multi-scale self-attention encoding in the
encoder, features at all locations of different scales are treated equally.
The self-attention mechanism propagates and aggregates information between all
feature pixels of different scales. However, the number of feature pixels of
all scales is quite large and the multi-scale self-attention operation is
therefore computationally costly. To tackle the issue, we introduce the intra-
scale self-attention encoding as an auxiliary operator to assist the multi-
scale self-attention encoding. Specifically, dot-product attention is used to
propagate and aggregate features only between feature pixels within each
scale. The weights of the Transformer block (with self-attention and
feedforward sub-networks) are shared across different scales. Our empirical
study shows that parameter sharing across scales enhances the generalization
capability of intra-scale self-attention encoding. For the final design of the
encoder in SMCA, it adopts 2 blocks of intra-scale self-attention encoding,
followed by 1 block of multi-scale self-attention, and another 2 blocks of
intra-scale self-attention. The design has a very similar detection
performance to that of 5 blocks of multi-scale self-attention encoding but has
a much smaller computational footprint.
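The encoder layout described above can be sketched as follows; a toy single-head version with weights shared across scales, omitting the FFN, LayerNorm, and positional encodings that a real Transformer block would include:

```python
import numpy as np

def self_attn(X, Wq, Wk, Wv):
    """Toy single-head self-attention over token matrix X (n, d)."""
    d = Wq.shape[1]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    logits = Q @ K.T / np.sqrt(d)
    A = np.exp(logits - logits.max(-1, keepdims=True))
    A /= A.sum(-1, keepdims=True)
    return A @ V

def encode_multiscale(scales, Wq, Wk, Wv):
    """Final SMCA encoder layout: 2 intra-scale blocks (weights shared
    across scales), 1 multi-scale block over the concatenated tokens of
    all scales, then 2 more intra-scale blocks."""
    for _ in range(2):                              # intra-scale: per scale
        scales = [self_attn(X, Wq, Wk, Wv) for X in scales]
    split_at = np.cumsum([len(X) for X in scales])[:-1]
    joint = self_attn(np.concatenate(scales, axis=0), Wq, Wk, Wv)  # multi-scale
    scales = np.split(joint, split_at, axis=0)
    for _ in range(2):                              # intra-scale again
        scales = [self_attn(X, Wq, Wk, Wv) for X in scales]
    return scales
```

The intra-scale blocks cost the sum of squared per-scale token counts, while the single multi-scale block pays the quadratic cost over all tokens only once.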
Given the encoded multi-scale features $E_{16}$, $E_{32}$, $E_{64}$ with
downsampling rates of 16, 32, 64, a naive solution for the decoder to perform
co-attention would be first re-scaling and concatenating the multi-scale
features to form a single-scale feature map, and then conducting co-attention
between object query and the resulting feature map. However, we notice that
some queries might only require information from a specific scale but not
always from all the scales. For example, the information for small objects is
missing in low-resolution feature map $E_{64}$. Thus the object queries
responsible for small objects should more effectively acquire information only
from high-resolution feature maps. On the other hand, traditional methods,
such as FPN, assign each bounding box explicitly to the feature map of a
specific scale. Different from FPN [24], we propose to automatically select
scales for each box using learnable scale-selection attention. Each object
query generates scale-selection attention weights as
$\displaystyle\alpha_{16},\alpha_{32},\alpha_{64}=\operatorname{Softmax}(\operatorname{FC}(O_{q})),$
(7)
where $\alpha_{16}$, $\alpha_{32}$, $\alpha_{64}$ stand for the importance of
selecting $f_{16}$, $f_{32}$, $f_{64}$. To conduct co-attention between the
object query $O_{q}$ and the multi-scale visual features
$E_{16},E_{32},E_{64}$, we first obtain the multi-scale key and value features
$K_{i,16},K_{i,32},K_{i,64}$ and $V_{i,16},V_{i,32},V_{i,64}$ for attention
head $i$, respectively, from $E_{16}$, $E_{32}$, $E_{64}$ with separate linear
projections. To conduct co-attention for each head $i$ between $O_{q}$ and
key/value features of each scale $j\in\\{16,32,64\\}$, the spatially-modulated
co-attention in Eq. (5) is adaptively weighted and aggregated by the scale-
selection weights $\alpha_{16},\alpha_{32},\alpha_{64}$ as
$\displaystyle C_{i,j}$
$\displaystyle=\operatorname{Softmax}(K_{i,j}^{T}Q_{i}/\sqrt{d}+\operatorname{log}{G_{i}})V_{i,j}\odot\alpha_{j},$
(8) $\displaystyle C_{i}$ $\displaystyle=\sum_{\text{all
}j}C_{i,j},\quad\text{ for }\,j\in\\{16,32,64\\},$ (9)
where $C_{i,j}$ stands for the co-attention features of the $i$th co-attention
head between the query and the visual features of scale $j$. The $C_{i,j}$'s
are aggregated with the scale-selection weights $\alpha_{j}$
obtained in Eq. (7). With such a scale-selection attention mechanism, the
scale most related to each object query is softly selected while the visual
features from other scales are suppressed.
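The scale-selection mechanism of Eq. (7)-(9) can be sketched in a few lines of plain Python: a softmax over three per-scale logits, then a weighted sum of the per-scale co-attention outputs. The vectors and logits below are illustrative placeholders, not the paper's actual tensors:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scale_selection_aggregate(scale_logits, co_attn_by_scale):
    """Sketch of Eq. (7)-(9): alpha = Softmax(FC(O_q)), C_i = sum_j alpha_j * C_{i,j}.

    scale_logits: 3 logits (the FC output), one per scale 16, 32, 64.
    co_attn_by_scale: dict mapping scale j -> per-scale co-attention vector C_{i,j}.
    """
    scales = [16, 32, 64]
    alphas = softmax(scale_logits)                 # alpha_16, alpha_32, alpha_64
    dim = len(next(iter(co_attn_by_scale.values())))
    out = [0.0] * dim
    for a, s in zip(alphas, scales):
        for k in range(dim):
            out[k] += a * co_attn_by_scale[s][k]   # C_{i,j} weighted by alpha_j, summed
    return alphas, out
```

Because the weights come from a softmax, a query that strongly favors one scale drives the other two contributions toward zero, which is the "soft selection" described above.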
Equipped with intra-inter-scale attention and scale selection attention
mechanisms, our full SMCA can better tackle object detection than the basic
version.
SMCA box prediction. After conducting co-attention between the object query
$O_{q}$ and the encoded image features, we can obtain the updated features
$D\in\mathbb{R}^{N\times C}$ for object query $O_{q}$. In the original DETR, a
3-layer MLP and a linear layer are used to predict the bounding box and
classification confidence. We denote the prediction as
$\displaystyle\mathrm{Box}=\operatorname{Sigmoid}(\operatorname{MLP}(D)),$
(10)
$\displaystyle\mathrm{Score}=\operatorname{FC}(D),$
(11)
where “$\mathrm{Box}$” stands for the center, height, width of the predicted
box in the normalized coordinate system, and “$\mathrm{Score}$” stands for the
classification prediction. In SMCA, co-attention is constrained to be around
the initially predicted object center
$[c^{\mathrm{norm}}_{h},c^{\mathrm{norm}}_{w}]$. We then use the initial
center as a prior for constraining bounding box prediction, which is denoted
as
$\displaystyle\widehat{\mathrm{Box}}=\operatorname{MLP}(D),$
$\displaystyle\widehat{\mathrm{Box}}[:2]=\widehat{\mathrm{Box}}[:2]+[\widehat{c^{\mathrm{norm}}_{h}},\widehat{c^{\mathrm{norm}}_{w}}],$
(12)
$\displaystyle\mathrm{Box}=\operatorname{Sigmoid}(\widehat{\mathrm{Box}}),$
where $\widehat{\mathrm{Box}}$ stands for the raw box prediction, and
$[\widehat{c^{\mathrm{norm}}_{h}},\widehat{c^{\mathrm{norm}}_{w}}]$ represents
the center of the initial object prediction before the sigmoid function. In Eq.
(12), we add the center of the predicted box to the center of the initial spatial
prior $[\widehat{c^{\mathrm{norm}}_{h}},\widehat{c^{\mathrm{norm}}_{w}}]$
before the sigmoid function. This procedure ensures that the bounding box
prediction is highly related to the highlighted co-attention regions in SMCA.
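The center-prior trick of Eq. (12) amounts to shifting the predicted center logits by the initial center before applying the sigmoid; a minimal sketch (the box layout and helper names here are ours, not from the released code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_box(raw_box, center_prior):
    """Sketch of SMCA box prediction (Eq. 12).

    raw_box: [cx, cy, h, w] logits from the 3-layer MLP (pre-sigmoid).
    center_prior: [c_h_hat, c_w_hat], the initial object center before sigmoid.
    Only the two center logits are shifted by the prior; then every entry
    passes through the sigmoid to land in normalized [0, 1] coordinates.
    """
    shifted = list(raw_box)
    shifted[0] += center_prior[0]
    shifted[1] += center_prior[1]
    return [sigmoid(v) for v in shifted]
```

Because the MLP only predicts a residual around the prior, the final box stays anchored near the region the co-attention was modulated toward.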
Method | Epochs | time(s) | GFLOPs | mAP | APS | APM | APL
---|---|---|---|---|---|---|---
DETR | 500 | 0.038 | 86 | 42.0 | 20.5 | 45.8 | 61.1
DETR-DC5 | 500 | 0.079 | 187 | 43.3 | 22.5 | 47.3 | 61.1
SMCA w/o multi-scale | 50 | 0.043 | 86 | 41.0 | 21.9 | 44.3 | 59.1
SMCA w/o multi-scale | 108 | 0.043 | 86 | 42.7 | 22.8 | 46.1 | 60.0
SMCA | 50 | 0.100 | 152 | 43.7 | 24.2 | 47.0 | 60.4
SMCA | 108 | 0.100 | 152 | 45.6 | 25.9 | 49.3 | 62.6
Table 1: Comparison with DETR model over training epochs, mAP, inference time
and GFLOPs.
## 4 Experiments
### 4.1 Experiment setup
Dataset. We validate our proposed SMCA over COCO 2017 [26] dataset.
Specifically, we train on COCO 2017 training dataset and validate on the
validation dataset, which contains 118k and 5k images, respectively. We report
mAP for performance evaluation following previous research [4].
Implementation details. We follow the experiment setup in the original DETR
[4]. We denote the SMCA model with a ResNet-50 [15] backbone as SMCA-R50. Different
from DETR, we use 300 object queries instead of 100 and replace the original
cross-entropy classification loss with focal loss [25] to better tackle the
positive-negative imbalance problem in foreground/background classification.
The initial probability of focal loss is set as 0.01 to stabilize the training
process.
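The 0.01 initial probability is commonly realized by setting the bias of the final classification layer so that its sigmoid output starts at 0.01, which keeps the abundant easy negatives from dominating the loss early in training. This is the standard focal-loss initialization; that SMCA implements it exactly this way is our assumption:

```python
import math

def focal_loss_bias(prior_prob=0.01):
    # Bias b chosen so that sigmoid(b) = prior_prob, i.e. every class
    # starts with probability ~0.01 before any training.
    return -math.log((1.0 - prior_prob) / prior_prob)
```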
We report the performance of models trained for 50 epochs, with the learning
rate decreased to 1/10 of its original value at the 40th epoch. The learning
rate is set as $10^{-4}$ for the Transformer encoder-decoder and $10^{-5}$ for
the pre-trained ResNet backbone, optimized with the AdamW optimizer [28].
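The step schedule described above can be expressed as a small helper (a sketch of the stated policy, not the authors' training code; whether the drop applies at epoch index 40 exactly is an assumption):

```python
def learning_rate(epoch, base_lr, drop_epoch=40, drop_factor=0.1):
    """Step learning-rate schedule for SMCA's 50-epoch runs.

    base_lr is 1e-4 for the Transformer encoder-decoder and 1e-5 for the
    pre-trained ResNet backbone; the rate drops to 1/10 at the 40th epoch.
    """
    return base_lr * (drop_factor if epoch >= drop_epoch else 1.0)
```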
For multi-scale feature encoding, we use downsampling ratios of 16, 32, 64 by
default. For bipartite matching [37, 4], the coefficients of classification
loss, L1 distance loss, and GIoU loss are set as 2, 5, 2, respectively. After
bounding box assignment via bipartite matching, SMCA is trained by minimizing
the classification loss, bounding box L1 loss, and GIoU loss with coefficients
2, 5, 2, respectively. For Transformer layers [40], we use post-norm similar
to those in previous approaches [4]. We use random crop for data augmentation
with the largest width or height set as 1333 for all experiments following
[4]. All models are trained on 8 V100 GPUs with 1 image per GPU.
### 4.2 Comparison with DETR
SMCA shares the same architecture with DETR except for the proposed new co-
attention modulation in the decoder and an extra linear network for generating
the spatial modulation prior. The increases in computational cost and
per-epoch training time of SMCA are marginal. For SMCA with single-scale features
(denoted as “SMCA w/o multi-scale”), we keep the dimension of self-attention
to be 256 and the intermediate dimension of FFN to be 2048. For SMCA with
multi-scale features, we set the intermediate dimension of FFN to be 1024 and
use 5 layers of intra-scale and multi-scale self-attention in the encoder to
have a similar number of parameters and a fair comparison with DETR. As shown in
Table 1, the performance of “SMCA w/o multi-scale” reaches 41.0 mAP with
single-scale features and 43.7 mAP with multi-scale features at 50 epochs.
Given a longer training schedule, the mAP of SMCA increases from 41.0 to 42.7 with
single-scale features and from 43.7 to 45.6 with multi-scale features. "SMCA
w/o multi-scale" achieves better APS and APM compared with DETR. SMCA can
achieve better overall performance on objects of all scales by adopting multi-
scale information and the proposed spatial modulation. The convergence speed
of SMCA is 10 times faster than DETR-based methods.
Compared with the significant gains in convergence speed and performance, the
increases in FLOPs and inference time of SMCA are marginal. With single-scale
features, the inference time increases from $0.038s\rightarrow 0.041s$ and
FLOPs increase by 0.06G. With multi-scale features, the inference time
increases from $0.079s\rightarrow 0.100s$, while the GFLOPs actually decrease
because our multi-scale SMCA uses only 5 self-attention layers in the encoder.
The thinner Transformer layers, together with the convolution without dilation
in the last stage of the ResNet backbone, achieve efficiency similar to that of
the original dilated DETR model.
### 4.3 Ablation Study
To validate different components of our proposed SMCA, we perform ablation
studies on the importance of the proposed spatial modulation, multi-head vs.
head-shared modulation, and multi-scale encoding and scale-selection attention
in comparison with the baseline DETR.
Method | mAP | AP50 | AP75
---|---|---|---
Baseline: DETR-R50 | 34.8 | 56.2 | 36.9
Head-shared Spatial Modulation: +Indep. (bs8) | 40.2 | 61.4 | 42.7
Head-shared Spatial Modulation: +Indep. (bs16) | 40.2 | 61.3 | 42.9
Head-shared Spatial Modulation: +Indep. (bs32) | 39.9 | 61.0 | 42.4
Multi-head Spatial Modulation: +Fixed | 38.5 | 60.7 | 40.2
Multi-head Spatial Modulation: +Single | 40.4 | 61.8 | 43.3
Multi-head Spatial Modulation: +Indep. | 41.0 | 62.2 | 43.6
Table 2: Ablation study on the importance of spatial modulation and the multi-head mechanism. mAP, AP50, and AP75 are reported on the COCO 2017 validation set.

Method | mAP | Params (M)
---|---|---
SMCA | 41.0 | 41.0
SMCA (2Intra-Multi-2Intra) | 43.7 | 39.5
SMCA w/o SSA (2Intra-Multi-2Intra) | 42.6 | 39.5
3Intra | 42.9 | 37.9
3Multi | 43.3 | 37.9
5Intra | 43.3 | 39.5
Weight Share: Shared FFN | 43.0 | 42.2
Weight Share: Shared SA | 42.8 | 44.7
Weight Share: No Share | 42.3 | 47.3
Table 3: Ablation study on the importance of combining intra-scale and multi-
scale propagation, and on weight sharing for intra-scale self-attention.
"Shared FFN" stands for sharing only the weights of the feedforward network of
intra-scale self-attention. "Shared SA" stands for sharing the weights of the
self-attention network. "No Share" stands for no weight sharing in intra-scale
self-attention.
The baseline DETR model. We choose DETR with ResNet-50 backbone as our
baseline model. It is trained for 50 epochs with the learning rate dropping to
1/10 of the original value at the 40th epoch. Different from the original
DETR, we increase the object query from 100 to 300 and replace the original
cross entropy loss with focal loss. As shown in Table 2, the baseline DETR
model can achieve an mAP of 34.8 at 50 epochs.
Head-shared spatially modulated co-attention. Based on the baseline DETR, we
first test adding a head-shared spatial modulation as specified in Eq. (5) by
keeping factors including the learning rate, training schedule, self-attention
parameters, and coefficients of the loss to be the same as the baseline. The
spatial weight map is generated from the predicted height and width, is shared
across all heads, and uses height- and width-independent scale prediction to
better tackle the scale-variance problem. We denote the method as "Head-shared
Spatial Modulation + Indep." in Table 2. The performance increases from 34.8 to
40.2 compared with the baseline DETR. The large performance gain (+5.4) validates
the effectiveness of SMCA, which not only accelerates the convergence of
DETR but also improves its performance by a large margin. We further test the
performance of head-shared spatial modulation with batch sizes of 8, 16, and
32, as shown in Table 2. The results show that SMCA is insensitive to the
batch size.
Multi-head vs. head-shared spatially modulated co-attention. For spatial
modulation with multiple heads of separate predictable scales, all heads in
Transformer are modulated by different spatial weight maps $G_{i}$ following
Eq. (6). All heads start from the same object center and predict offsets
w.r.t. the common center and head-specific scales. The design of multi-head
spatial modulation for co-attention enables the model to learn diverse
attention patterns simultaneously. After switching from head-shared spatial
modulation to multi-head spatial modulation (denoted as “Multi-head Spatial
Modulation + Indep.” in Table 2), the performance increases from 40.2 to 41.0
compared with the head-shared modulated co-attention in SMCA. The importance
of multi-head mechanism has also been discussed in Transformer [40]. From
visualization in Figure 3, we observe that the multi-head modulation naturally
focuses on different parts of the objects to be predicted by the object
queries.
Design of multi-head spatial modulation for co-attention.
We test whether the width and height scales of the spatial weight maps should
be manually set, shared, or independently predicted. As shown in Table 2, we
test fixed-scale Gaussian-like spatial map (only predicting the center and
fixing the scale of the Gaussian-like distribution to be the constant 1). The
fixed-scale spatial modulation results in a 38.5 mAP (denoted as “+Fixed”),
which has +3.7 gain over the baseline DETR-R50 and validates the effectiveness
of predicting centers for spatial modulation to constrain the co-attention. As
objects in natural images have varying sizes, scales can be predicted to adapt
to objects of different size. Thus we allow the scale to be a single
predictable variable as in Eq. (3). With such a single predictable scale for
spatial modulation (denoted as "+Single"), SMCA achieves 40.4 mAP, which is
+1.9 compared with the above fixed-scale modulation. By further predicting
independent scales for height and width, our SMCA can achieve 41.0 mAP
(denoted as “+Indep.”), which is +0.6 higher compared with the SMCA with a
single predictable scale. The results demonstrate the importance of predicting
height and width scales for the proposed spatial modulation. As visualized by
the co-attention patterns in Figure 3, we observe that independent spatial
modulation can generate more accurate and compact co-attention patterns
compared with fixed-scale and shared-scale spatial modulation.
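Although Eq. (5) is not reproduced in this excerpt, a Gaussian-like spatial weight map with independent height and width scales can be sketched as follows. The exact functional form used by SMCA may differ; this assumes an unnormalized 2D Gaussian centered at the predicted object center:

```python
import math

def gaussian_weight_map(h, w, c_h, c_w, s_h, s_w):
    """Gaussian-like spatial prior with independent height/width scales.

    (c_h, c_w) is the predicted object center on an h x w feature map;
    s_h and s_w are the independently predicted scales (the "+Indep."
    variant). The map peaks at 1.0 at the center and decays with distance,
    faster along the axis with the smaller scale.
    """
    return [[math.exp(-(((i - c_h) ** 2) / (2.0 * s_h ** 2)
                        + ((j - c_w) ** 2) / (2.0 * s_w ** 2)))
             for j in range(w)] for i in range(h)]
```

With a wider width scale than height scale, the map is elongated horizontally, which is how independent scales help with objects of large aspect ratios.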
Multi-scale feature encoding and scale-selection attention. The above SMCA
only conducts co-attention between single-scale feature maps and the object
query. As objects in natural images exist in different scales, we conduct
multi-scale feature encoding in the encoder via adopting 2 layers of intra-
scale self-attention, followed by 1 layer of multi-scale self-attention, and
then another 2 layers of intra-scale self-attention. We denote the above
design as “SMCA (2Intra-Multi-2Intra)”. As shown in Table 3, we start from
SMCA with a single-scale visual feature map, which achieves 41.0 mAP. After
integrating multi-scale features with the 2intra-multi-2intra self-attention
design, the performance is enhanced from 41.0 to 43.7. As we introduce 3
convolutions to project the ResNet-50 output features to 256 dimensions, we
decrease the hidden dimension of the FFN from 2048 to 1024 and the number of
encoder layers from 6 to 5 to keep the parameter count comparable to other
models. To validate the effectiveness of scale-selection attention (SSA), we
perform ablation studies on SMCA without integrating SSA (denoted as “SMCA w/o
SSA”). As shown in Table 3, SMCA w/o SSA decreases the performance from 43.7
to 42.6.
After validating the effectiveness of the proposed multi-scale feature
encoding and scale-selection attention module, we further validate the
effectiveness of the design of 2intra-multi-2intra-scale self-attention. By
switching the 2intra-multi-2intra design to simply stacking 5 intra-scale
self-attention layers, the performance drops from 43.7 to 43.3, due to the
lack of cross-scale information exchange. 5 layers of intra-scale self-
attention (denoted as “5Intra”) encoder achieves better performance than
3Intra self-attention, which validates the effectiveness of a deeper intra-
scale self-attention encoder. A 3-layer multi-scale (denoted as “3Multi”)
self-attention encoder achieves better performance than a 3-layer intra-scale
(3Intra) self-attention encoder. It demonstrates that enabling multi-scale
information exchange leads to better performance than only conducting intra-
scale information exchange alone. However, the large increase in FLOPs from
replacing intra-scale with multi-scale self-attention makes us choose a
combination of intra-scale and multi-scale self-attention encoders, namely the
2intra-multi-2intra design. In this multi-scale encoder, we share both the
self-attention and FFN weights across the intra-scale self-attention layers,
which reduces the number of parameters and learns common patterns of
multi-scale features. This increases the generalization of the proposed SMCA
and achieves a better performance of 43.7 with fewer parameters.
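One simple way to realize intra-scale self-attention over a concatenated multi-scale token sequence is to mask out attention between tokens of different scales; a sketch (the paper's implementation details are not given here, so this mask formulation is an assumption):

```python
def intra_scale_mask(token_scales):
    """Boolean attention mask: True where attention is allowed.

    token_scales: the downsampling ratio (16, 32, or 64) of each token in
    the flattened multi-scale sequence. An intra-scale layer lets a token
    attend only to tokens of the same scale; a multi-scale layer would use
    an all-True mask, enabling cross-scale information exchange.
    """
    n = len(token_scales)
    return [[token_scales[q] == token_scales[k] for k in range(n)]
            for q in range(n)]
```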
Figure 3: Visualization of co-attention of SMCA with fixed-scale, single-
scale, independent-scale spatial modulation, and co-attention of DETR. The
larger images show the average co-attention of 8 heads. Small images show the
attention pattern of each head. In the head-specific modulation of co-
attention of SMCA, we visualize the process of spatial modulation. Red circles
in SMCA variants stand for the head-specific offset starting from the same red
rectangular center.
Visualization of SMCA. We provide visualization of co-attention weight maps by
SMCA. As shown in Figure 3, we compare the detection results of fixed-scale
SMCA, single-scale SMCA, and independent-scale SMCA (default SMCA). From the
visualization, we can see independent-scale SMCA can better tackle objects of
large aspect ratios. Different spatial modulation heads focus on different
parts of the object to aggregate diverse information for final object
recognition. Finally, we show the co-attention map of the original DETR. Our
SMCA can better focus on features around the object of interest, which the
query needs to estimate, while DETR's co-attention maps show sparse patterns
that are unrelated to the object it aims to predict.
Model | Epochs | GFLOPs | Params (M) | AP | AP50 | AP75 | APS | APM | APL
---|---|---|---|---|---|---|---|---|---
DETR-R50 [4] | 500 | 86 | 41 | 42.0 | 62.4 | 44.2 | 20.5 | 45.8 | 61.1
DETR-DC5-R50 [4] | 500 | 187 | 41 | 43.3 | 63.1 | 45.9 | 22.5 | 47.3 | 61.1
Faster RCNN-FPN-R50 [4] | 36 | 180 | 42 | 40.2 | 61.0 | 43.8 | 24.2 | 43.5 | 52.0
Faster RCNN-FPN-R50++ [4] | 108 | 180 | 42 | 42.0 | 62.1 | 45.5 | 26.6 | 45.4 | 53.4
Deformable DETR-R50 (Single-scale) [46] | 50 | 78 | 34 | 39.7 | 60.1 | 42.4 | 21.2 | 44.3 | 56.0
Deformable DETR-R50 (50 epochs) [46] | 50 | 173 | 40 | 43.8 | 62.6 | 47.7 | 26.4 | 47.1 | 58.0
Deformable DETR-R50 (150 epochs) [46] | 150 | 173 | 40 | 45.3 | * | * | * | * | *
UP-DETR-R50 [5] | 150 | 86 | 41 | 40.5 | 60.8 | 42.6 | 19.0 | 44.4 | 60.0
UP-DETR-R50+ [5] | 300 | 86 | 41 | 42.8 | 63.0 | 45.3 | 20.8 | 47.1 | 61.7
TSP-FCOS-R50 [38] | 36 | 189 | * | 43.1 | 62.3 | 47.0 | 26.6 | 46.8 | 55.9
TSP-RCNN-R50 [38] | 36 | 188 | * | 43.8 | 63.3 | 48.3 | 28.6 | 46.9 | 55.7
TSP-RCNN+-R50 [38] | 96 | 188 | * | 45.0 | 64.5 | 49.6 | 29.7 | 47.7 | 58.0
SMCA-R50 | 50 | 152 | 40 | 43.7 | 63.6 | 47.2 | 24.2 | 47.0 | 60.4
SMCA-R50 | 108 | 152 | 40 | 45.6 | 65.5 | 49.1 | 25.9 | 49.3 | 62.6
DETR-R101 [4] | 500 | 152 | 60 | 43.5 | 63.8 | 46.4 | 21.9 | 48.0 | 61.8
DETR-DC5-R101 [4] | 500 | 253 | 60 | 44.9 | 64.7 | 47.7 | 23.7 | 49.5 | 62.3
Faster RCNN-FPN-R101 [4] | 36 | 256 | 60 | 42.0 | 62.1 | 45.5 | 26.6 | 45.4 | 53.4
Faster RCNN-FPN-R101+ [4] | 108 | 246 | 60 | 44.0 | 63.9 | 47.8 | 27.2 | 48.1 | 56.0
TSP-FCOS-R101 [38] | 36 | 255 | * | 44.4 | 63.8 | 48.2 | 27.7 | 48.6 | 57.3
TSP-RCNN-R101 [38] | 36 | 254 | * | 44.8 | 63.8 | 49.2 | 29.0 | 47.9 | 57.1
TSP-RCNN+-R101 [38] | 96 | 254 | * | 46.5 | 66.0 | 51.2 | 29.9 | 49.7 | 59.2
SMCA-R101 | 50 | 218 | 58 | 44.4 | 65.2 | 48.0 | 24.3 | 48.5 | 61.0
Table 4: Comparison with DETR-like object detectors on COCO 2017 validation
set.
### 4.4 Overall Performance Comparison
In Table 4, we compare our proposed SMCA with other object detection
frameworks on COCO 2017 validation set. DETR [4] uses an end-to-end
Transformer for object detection. DETR-R50 and DETR-DC5-R50 stand for DETR
with ResNet-50 and DETR with dilated ResNet-50 backbone. Compared with DETR,
our SMCA can achieve fast convergence and better performance in terms of
detection of the small, medium, and large objects. Faster RCNN [35] with FPN
[24] is a two-stage approach for object detection. Our method can achieve
better mAP than Faster RCNN-FPN-R50 at 108 epochs (45.6 vs 42.0 AP). As Faster
RCNN uses RoI-Align and a feature pyramid with downsampled {8, 16, 32, 64}
features, it is superior at detecting small objects (26.6 vs 25.9
APS). Thanks to the multi-scale self-attention mechanism that can propagate
information between features at all scales and positions, our SMCA is better
at localizing large objects (62.6 vs 53.4 APL).
Deformable DETR [46] replaces the original self-attention of DETR with local
deformable attention for both the encoder and the decoder. It achieves faster
convergence compared with the original DETR. Exploring local information in
Deformable DETR results in fast convergence at the cost of degraded
performance for large objects. Compared with DETR, the $\text{AP}_{L}$ of
Deformable DETR drops from 61.1 to 58.0. Our SMCA explores a new approach for
fast convergence of the DETR by performing spatially modulated co-attention.
As SMCA constrains co-attention near dynamically estimated object locations,
SMCA achieves faster convergence by reducing the search space in co-attention.
As SMCA uses global self-attention for information exchange between all scales
and positions, our SMCA can achieve better performance for large objects
compared with Deformable DETR. Deformable DETR uses downsampled 8, 16, 32, 64
multi-scale features and 8 sampling points for deformable attention. Our SMCA
only uses downsampled 16, 32, 64 features and 1 center point for dynamic
Gaussian-like spatial prior. SMCA achieves comparable mAP with Deformable DETR
at 50 epochs (43.7 vs. 43.8 AP). As SMCA focuses more on global information
and Deformable DETR focuses more on local features, SMCA is better at
detecting large objects (60.4 vs 58.0 APL) while inferior at detecting small
objects (24.2 vs 26.4 APS).
UP-DETR [5] explores unsupervised learning for DETR. UP-DETR can achieve fast
convergence and better performance compared with the original DETR due to the
exploitation of unsupervised auxiliary tasks. The convergence speed and
performance of SMCA are better than those of UP-DETR (45.6 at 108 epochs vs.
42.8 at 300 epochs). TSP-FCOS and TSP-RCNN [38] combine DETR's Hungarian
matching with the FCOS [39] and RCNN [35] detectors, which results in faster
convergence and better performance than DETR. As TSP-FCOS and TSP-RCNN inherit
the structures of FCOS and RCNN, which use local-region features for bounding
box detection, they are strong on small objects but weak on large ones,
similar to the above-mentioned Deformable DETR and Faster RCNN-FPN. For short
training schedules, TSP-RCNN and SMCA-R50 achieve comparable mAP (43.8 at 36
epochs vs 43.7 at 50 epochs), both better than the 43.1 at 36 epochs of
TSP-FCOS. For long training schedules, SMCA achieves better performance than
TSP-RCNN (45.6 at 108 epochs vs 45.0 at 96 epochs). We observe similar trends
by replacing
ResNet-50 backbone with ResNet-101 backbone as shown in the lower half part of
Table 4.
## 5 Conclusion
DETR [4] proposed an end-to-end solution for object detection beyond previous
two-stage [35] and one-stage approaches [33]. By integrating the Spatially
Modulated Co-attention (SMCA) into DETR, the original 500-epoch training
schedule can be reduced to 108 epochs and the mAP increases from 43.3 to 45.6
under comparable inference cost. SMCA demonstrates the potential power of
exploring global information for achieving high-quality object detection. In
the future, we will explore the application of SMCA in more scenarios beyond
object detection, such as general visual representation learning. We will also
explore flexible fusions of local and global features for faster and more
robust object detection.
## References
* [1] Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
* [2] Navaneeth Bodla, Bharat Singh, Rama Chellappa, and Larry S Davis. Soft-nms–improving object detection with one line of code. In Proceedings of the IEEE international conference on computer vision, pages 5561–5569, 2017.
* [3] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
* [4] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. arXiv preprint arXiv:2005.12872, 2020.
* [5] Zhigang Dai, Bolun Cai, Yugeng Lin, and Junying Chen. Up-detr: Unsupervised pre-training for object detection with transformers. arXiv preprint arXiv:2011.09094, 2020.
* [6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
* [7] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
* [8] Peng Gao, Zhengkai Jiang, Haoxuan You, Pan Lu, Steven CH Hoi, Xiaogang Wang, and Hongsheng Li. Dynamic fusion with intra-and inter-modality attention flow for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6639–6648, 2019.
* [9] Peng Gao, Hongsheng Li, Shuang Li, Pan Lu, Yikang Li, Steven CH Hoi, and Xiaogang Wang. Question-guided hybrid convolution for visual question answering. In Proceedings of the European Conference on Computer Vision (ECCV), pages 469–485, 2018.
* [10] Ross Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448, 2015.
* [11] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Region-based convolutional networks for accurate object detection and segmentation. IEEE transactions on pattern analysis and machine intelligence, 38(1):142–158, 2015.
* [12] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
* [13] Maosheng Guo, Yu Zhang, and Ting Liu. Gaussian transformer: a lightweight approach for natural language inference. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6489–6496, 2019.
* [14] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017.
* [15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [16] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
* [17] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132–7141, 2018.
* [18] Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V Gool. Dynamic filter networks. In Advances in neural information processing systems, pages 667–675, 2016.
* [19] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pages 5156–5165. PMLR, 2020.
* [20] Seung-Wook Kim, Hyong-Keun Kook, Jee-Young Sun, Mun-Cheon Kang, and Sung-Jea Ko. Parallel feature pyramid network for object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pages 234–250, 2018.
* [21] Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451, 2020.
* [22] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
* [23] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
* [24] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2117–2125, 2017.
* [25] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988, 2017.
* [26] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
* [27] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pages 21–37. Springer, 2016.
* [28] Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in Adam. 2018.
* [29] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13–23, 2019.
* [30] Niki Parmar, Prajit Ramachandran, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jonathon Shlens. Stand-alone self-attention in vision models. 2019.
* [31] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training, 2018.
* [32] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
* [33] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 779–788, 2016.
* [34] Mengye Ren and Richard S Zemel. End-to-end instance segmentation with recurrent attention. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6656–6664, 2017.
* [35] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE transactions on pattern analysis and machine intelligence, 39(6):1137–1149, 2016.
* [36] Amaia Salvador, Miriam Bellver, Victor Campos, Manel Baradad, Ferran Marques, Jordi Torres, and Xavier Giro-i Nieto. Recurrent neural networks for semantic instance segmentation. arXiv preprint arXiv:1712.00617, 2017.
* [37] Russell Stewart, Mykhaylo Andriluka, and Andrew Y Ng. End-to-end people detection in crowded scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2325–2333, 2016.
* [38] Zhiqing Sun, Shengcao Cao, Yiming Yang, and Kris Kitani. Rethinking transformer-based set prediction for object detection. arXiv preprint arXiv:2011.10881, 2020.
* [39] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. Fcos: Fully convolutional one-stage object detection. In Proceedings of the IEEE international conference on computer vision, pages 9627–9636, 2019.
* [40] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30:5998–6008, 2017.
* [41] Sinong Wang, Belinda Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020.
* [42] Felix Wu, Angela Fan, Alexei Baevski, Yann N Dauphin, and Michael Auli. Pay less attention with lightweight and dynamic convolutions. arXiv preprint arXiv:1901.10430, 2019.
* [43] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048–2057, 2015.
* [44] Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6281–6290, 2019.
* [45] Minghang Zheng, Peng Gao, Xiaogang Wang, Hongsheng Li, and Hao Dong. End-to-end object detection with adaptive clustering transformer. arXiv preprint arXiv:2011.09315, 2020.
* [46] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159, 2020.
# Single versus Multiple Annotation for Named Entity Recognition of Mutations
David Martinez Iraola and Antonio Jimeno Yepes, IBM Research, Southbank, VIC,
Australia
###### Abstract
The focus of this paper is to address the knowledge acquisition bottleneck for
Named Entity Recognition (NER) of mutations, by analysing different approaches
to build manually-annotated data. We address first the impact of using a
single annotator vs two annotators, in order to measure whether multiple
annotators are required. Once we evaluate the performance loss when using a
single annotator, we apply different methods to sample the training data for
second annotation, aiming at improving the quality of the dataset without
requiring a full pass. We use held-out double-annotated data to build two
scenarios with different types of rankings: similarity-based and confidence
based. We evaluate both approaches on: (i) their ability to identify training
instances that are erroneous (cases where single-annotator labels differ from
double-annotation after discussion), and (ii) on Mutation NER performance for
state-of-the-art classifiers after integrating the fixes at different
thresholds.
###### keywords:
Natural Language Processing, Information Extraction, Named Entity Recognition,
Data Acquisition
Journal: Journal of Biomedical Informatics
## 1 Introduction
Natural Language Processing (NLP) is a research field that is receiving
increasing attention, because of its potential to transform the bulk of
written human knowledge into actionable structured data. One example of this
potential is applied to research in the area of DNA mutations, whose findings
are mainly distributed through publications describing experiments and their
outcomes. Reading the large number of new articles has become an unmanageable
task, and NLP tools are being used to automatically extract information from
the research literature. This information can be represented in databases or
knowledge bases, and allow researchers to more efficiently access knowledge.
Mutations are curated from the scientific literature into databases such as
COSMIC [6], which are valuable resources in clinical pipelines. Since these
resources are curated from the literature, automating the extraction of
mutation information using text analytics will speed up the transfer of
knowledge into structured databases. Genetic mutations are usually represented
in text using standard nomenclature, such as the one defined by the HGVS
(Human Genome Variation Society, https://varnomen.hgvs.org). Many tools have
used regular expressions, or a combination of regular expressions and machine
learning to identify these mentions in the literature [17, 3, 10]. In addition
to these structured representations, mutations are as well mentioned in the
literature using free natural language [11].
One limitation of using NLP tools is the lack of generic solutions that can be
applied to text in multiple domains for extracting information. In particular
for mutation extraction, existing Biomedical NLP tools (MetaMap, etc.) will
not cover certain kinds of phenomena, and state-of-the-art pre-trained models
such as BERT [5] require training data from the domain, which is hard to
obtain. In the mutation domain, Jimeno-Yepes et al. [11] developed annotation
and tools to extract different types of mentions and related entities. They
relied on a group of domain-expert annotators, and found that high performance
can be achieved by using deep learning. Their work also showed that an
important bottleneck to achieve high performance was the cost of manual
annotation, and they illustrated the importance of multiple annotators per
instance to increase annotation quality. Manual annotation inconsistencies or
errors can have a large effect on performance, and they are difficult to
identify and fix once the annotation process is finished. The best approach
when resources are available (used in [11]) consists of multiple annotators
going over the same texts separately, and discussing when there are
disagreements. However, often this approach is not feasible, and a single
annotator is used for most of the annotation, with the intervention of a
second one over a sample, to provide inter-annotator agreement statistics.
Some widely used datasets for domain-specific NLP follow this approach, for
instance BIONLP-2013-GENIA [12], or NICTA-PIBOSO [13].
The goal of this paper is to address the knowledge acquisition bottleneck by
analysing different approaches to build manually-annotated data. We focus
first on the cost of using a single annotator vs two annotators, in order to
measure whether there is a large gap in results between the two options. Once
we evaluate the performance loss using a single annotator, we apply different
methods to sample the training data for second annotation, aiming at improving
the quality of the dataset. We use held-out double-annotated data to build two
scenarios with different types of rankings: similarity-based and confidence
based. We evaluate both methods on: (i) their ability to identify training
instances that are erroneous (cases where single-annotator labels differ from
double-annotation), and (ii) on Mutation NER performance for state-of-the-art
classifiers after integrating the fixes at different thresholds. This work is
related to active learning [21], which aims at effective sampling of unlabeled
data for annotation. The difference is that in our case we focus on comparing
single vs double annotation in a challenging domain, and on providing methods
to improve over single annotation with targeted second annotation.
The rest of the article is organised as follows. We present related work in
Section 2, and then introduce the experimental setting in Section 3. The
results are explained in Section 4, and we provide discussion and conclusions
in Sections 5 and 6.
## 2 Related work
Manual annotation is a time consuming and expensive activity, but required to
train supervised machine learning methods, and to tune or evaluate algorithms.
Previous work on annotating mutation related entities [24, 11] has relied on
manual annotation from several annotators that is used to measure inter-
annotator agreement, but as well to improve the quality of single annotated
corpora.
There are several methods considered to improve the speed or cost at which
manual annotation is collected. One of such methods is active learning [21],
the idea being to achieve high accuracy with fewer training examples, by
posing queries in the form of unlabeled instances to be labeled by an oracle
(typically a human annotator). The instances are chosen using techniques to
optimise the performance of the classifiers. In our case the number of
annotated instances is fixed, and we apply methods to identify a sample to re-
annotate in order to improve the performance of the models.
The cost is not the only factor that affects manual annotation outcomes.
Disagreements between human annotators might affect the quality of manually
annotated sets. This is clearly a problem in Mechanical Turk (MT) settings
[19], in which random annotators need to be identified and discarded. There
are also approaches such as task routing where instance difficulty is modeled,
and appropriate annotators are chosen for each instance [25], leading to
improved datasets. When multiple annotators per instance are available,
methods to integrate annotations from multiple sources such as in repeated
labelling [9, 20] (e.g. using MT) provide higher performance. In our work,
annotators have been trained during the generation of the guidelines and
annotation inconsistencies are due to the complexity of the domain (they may
be as well linked to domain experience), and errors might be caused by
overlooking entities present in the documents.
There is related work in the area of automatically fixing annotation errors.
In [1] manual rules and machine learning are combined to automatically re-
annotate an existing public dataset, leading to improved models built on the
data.
## 3 Experimental setting
The goal of the experimental pipeline is to explore the following research
questions:
1. What is the performance difference between using single-annotated versus double-annotated data to build models?
2. Can we reduce the gap between models built on single- and double-annotated data by using similarity- and confidence-based methods to select instances for double annotation?
We rely on the IBM-Mutation dataset (Section 3.3) to build and evaluate two
classifiers based on deep learning: BioBERT [15], and Bilstm-crf [23]. Both
classifiers are state of the art, and both provide log probabilities with
their predictions, which we use as confidence scores.
We define two scenarios to evaluate methods that can improve the manual
annotation process. In the first scenario (Figure 1), we use both a small
adjudicated dataset, and a larger pre-adjudicated dataset. The pre-adjudicated
dataset is employed to train the initial classifiers that are tested over the
small adjudicated dataset, and the instances with classification errors point
to similar instances in the training data via textual similarity methods. The
intuition in this case is that single-annotated training examples that are
similar to test sentences producing errors are more likely to be erroneous
themselves.
Figure 1: Similarity-based scenario. Training data is ranked according to its
similarity to model errors over development data.
In the second scenario (Figure 2) we rely on a small sample of the data that
is double-annotated, and we build the initial classifiers by training the
models on this dataset. Then we run the classifiers over the rest of the data,
and rank all sentences according to the confidence scores. The intuition is
that the sentences with lower confidence are the most likely to be difficult
to annotate, and therefore prone to errors.
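As a minimal sketch of this confidence-based ranking step, assuming per-sentence confidence scores (e.g. the log probabilities of the predictions) are already available; the sentences and scores below are toy placeholders:

```python
def rank_for_second_annotation(sentences, confidences):
    """Order sentences for second annotation, least confident first:
    low-confidence sentences are assumed to be the hardest to annotate
    and therefore the most error-prone."""
    order = sorted(range(len(sentences)), key=lambda i: confidences[i])
    return [sentences[i] for i in order]

sents = ["sent A", "sent B", "sent C"]
confs = [0.9, 0.2, 0.6]  # toy confidence scores
ranking = rank_for_second_annotation(sents, confs)  # "sent B" ranked first
```

In the experiments below, the top-k sentences of such a ranking are the ones sent for second annotation.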
We describe the classifiers we have used in Section 3.1, the sentence
similarity approaches required for the first scenario in Section 3.2, and the
dataset in Section 3.3.
Figure 2: Confidence-based scenario. Training data is ranked according to the
confidence of the classifiers.
### 3.1 Classifiers
#### 3.1.1 Bilstm-crf
We have used a Bilstm-crf method [23] that is comprised of an encoding
mechanism that uses a bidirectional LSTM (Long Short Term Memory) network [8],
which acts as a feature encoding that substitutes manual engineering of
features, and a decoding method that uses a neural CRF [14] to generate the
named entity recognition annotation tags.
Three layers of bidirectional LSTM have been stacked using residual
connections. The LSTM dimension we used is 100, running 100 epochs with a
dropout rate of 0.5. All the other parameters are the same as in previous work
[23]. Named entity recognition is a sequence labeling task. In Bilstm-crf, the
probability of a tag sequence $Y$ for a sentence $S$ is given in equation 1:
the score $s(Y,S)$, as explained in [23], is normalised with a softmax over the
scores of all possible sequences,
$\sum_{Y^{\prime}\in\mathbb{Y}}e^{s(Y^{\prime},S)}$. We use the output of
equation 1 as the confidence of the Bilstm-crf classifier.

$Pr(Y|S)=\frac{e^{s(Y,S)}}{\sum_{Y^{\prime}\in\mathbb{Y}}e^{s(Y^{\prime},S)}}$ (1)
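In practice the sum over $\mathbb{Y}$ is computed with the CRF forward algorithm; as an illustrative sketch of equation 1 with a toy, explicitly enumerated set of candidate scores (not the actual Bilstm-crf implementation):

```python
import math

def sequence_confidence(score_y, candidate_scores):
    """Softmax over sequence scores (equation 1), computed with the
    log-sum-exp trick for numerical stability."""
    m = max(candidate_scores)
    log_z = m + math.log(sum(math.exp(s - m) for s in candidate_scores))
    return math.exp(score_y - log_z)

# Toy example: scores s(Y', S) for three candidate tag sequences.
scores = [2.0, 1.0, 0.5]
confidence = sequence_confidence(scores[0], scores)  # Pr of the best sequence
```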
#### 3.1.2 BioBERT
BioBERT is a domain specific pre-training of BERT (Bidirectional Encoder
Representations from Transformers) [5], which uses English Wikipedia,
BooksCorpus, PubMed abstracts and PubMed Central (PMC) full text articles as
corpus. We relied on pre-trained weights from the combination of BioBERT Base
1.0 with added PubMed and PMC documents. For NER fine-tuning, BioBERT has a
single output layer that uses the output of the last BERT layer to predict
BIO2 NER labels. Due to the random initialization of BERT, different results
can be expected for each run; in order to reduce the variability we average
results over 10 runs, and provide the standard deviation when using BioBERT.
The average of the logits from the output layer is used as the confidence of
the prediction of the BioBERT NER system.
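A hedged sketch of this aggregation; the exact per-token reduction is not spelled out above, so taking the logit of the best-scoring label per token before averaging is our assumption for illustration:

```python
def sentence_confidence(token_logits):
    """Aggregate output-layer logits into a sentence-level confidence.
    Illustrative assumption: take the logit of the best-scoring BIO2
    label for each token, then average over the sentence."""
    best_per_token = [max(logits) for logits in token_logits]
    return sum(best_per_token) / len(best_per_token)

# Toy logits for a 3-token sentence over 3 BIO2 labels.
logits = [[2.0, 0.1, -1.0], [0.5, 3.0, 0.2], [1.0, 0.0, 0.5]]
conf = sentence_confidence(logits)
```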
### 3.2 Sentence Similarity
Semantic textual similarity (STS) is a widely studied problem, and multiple
approaches exist [4]. One of the most successful systems in recent public
evaluations is based on word alignments across the input texts [22]. This
method is unsupervised, and it uses the Paraphrase Database (PPDB) [7] to
identify semantically similar words, by computing dependencies and surface-
form neighbors of the two words to determine their contextual similarity. We
rely on the implementation by Brychcín and Svoboda [2] to apply this approach.
Apart from the similarity score, this method provides alignments between text
snippets in the different sentences, which can be used to provide an
interpretation of the score. This is an example of interpretable semantic
textual similarity (ISTS), and we use this term to refer to this method.
As a different approach, similarity methods based on word embeddings have
proven successful over shared datasets and tasks [18]. In the biomedical
domain, recent work has shown the robust performance of BioBERT [16], and our
second similarity method will rely on this technique, by averaging the word
embeddings obtained before running the last layer of BioBERT.
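A minimal sketch of this second similarity method, assuming contextual token vectors (e.g. from BioBERT's last hidden layer) have already been extracted; the 2-dimensional vectors below are toy stand-ins for real embeddings:

```python
import math

def mean_pool(token_vectors):
    """Mean-pool per-token embeddings into one sentence vector."""
    dim = len(token_vectors[0])
    n = len(token_vectors)
    return [sum(v[d] for v in token_vectors) / n for d in range(dim)]

def cosine(a, b):
    """Cosine similarity between two sentence vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 2-d "embeddings" standing in for contextual token vectors.
error_sent = mean_pool([[1.0, 0.0], [0.0, 1.0]])
train_sent = mean_pool([[1.0, 0.0], [1.0, 0.0]])
sim = cosine(error_sent, train_sent)
```

Training sentences are then ranked by their maximum similarity to any error sentence.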
### 3.3 Dataset
In this study, we have used the IBM-Mutation corpus presented in [11] and
publicly available at https://github.com/ibm-aur-nlp/amia-18-mutation-corpus.
The annotation was performed by 5 domain experts on MEDLINE citations related
to colorectal cancer. During the manual annotation process two annotators
worked on the same document independently and disagreements were afterwards
resolved. The data set contains 167 MEDLINE citations, with 60 mentions of DNA
modifications, 1324 mentions of genes/proteins, 320 mentions of loci, 1337
mentions of mutations and 23 mentions of RNAs. It also contains 94
component-of relations, 52 has-modification relations and 907 has-mutation
relations. For our experiments we focus on entities of type mutation.
The IBM-Mutation corpus has been double-annotated, and each sentence has been
separately annotated by two experts. We rely on this annotation to build two
versions of the dataset: pre-adjudicated (single annotation), and adjudicated
(agreed labels after discussion). For our experiments, when an instance has
different labels in the different versions, we consider the pre-adjudicated
label to be erroneous. For instance, in the adjudicated dataset the following
sentence contains the annotations in bold, both of which are missing in the
pre-adjudicated dataset: ”Sixteen of 17 mutations were at residue 599
(V599E).” We incorporated the partial annotations to the existing distribution
of the IBM-Mutation dataset for the research community.
For our experiments, we randomly split the data into 4 groups: training,
development, test-held-out (test1), and test (test2). The number of sentences
and annotations are given in Table 1. We initially split the dataset into
training and test (80%/20%); then further split each part randomly into two
similar sets, to be used as training/development and test-held-out/test.
The table does not provide pre-annotated cases for test, since they are not
used (we evaluate on the adjudicated labels only). We can see that the
adjudicated corpus contains more annotations, meaning that the single
annotation tends to miss cases.
Table 1: Dataset splits used for the experiments.

Split | Sentences | Pre-adjudicated annotations | Adjudicated annotations
---|---|---|---
Training | 717 | 823 | 927
Development | 614 | 489 | 616
Test-held-out (test1) | 167 | - | 124
Test (test2) | 195 | - | 226
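The random splitting procedure described above can be sketched as follows; the 100-item corpus and the seed are illustrative choices, not the actual data:

```python
import random

def split_dataset(sentences, seed=0):
    """Randomly split the corpus 80%/20% into (training+development) and
    test, then halve each part, as described in the text. The seed is an
    arbitrary illustrative choice."""
    rng = random.Random(seed)
    shuffled = list(sentences)
    rng.shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    train_dev, test = shuffled[:cut], shuffled[cut:]
    half = len(train_dev) // 2
    train, dev = train_dev[:half], train_dev[half:]
    test1, test2 = test[: len(test) // 2], test[len(test) // 2:]
    return train, dev, test1, test2

train, dev, test1, test2 = split_dataset(list(range(100)))
```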
## 4 Results
We first compute the performance of different NER settings in Section 4.1, and
then we evaluate the scores of ranking methods to detect erroneous sentences
in Section 4.2. Finally in Section 4.3 we re-train and test the models with
different thresholds of double-annotated instances.
### 4.1 NER classifier performance
Our first goal is to measure the performance of NER classifiers over the IBM-
Mutation dataset, and compare the results to the state of the art. The results
at the bottom of Table 2 show the performance of our two classifiers over test
data (test2) when trained over double-annotated (adjudicated) training data.
For comparison with the state of the art, we include some relevant
performances reported over this dataset by [11]. Direct comparison is not
possible, due to different data splits, but the results illustrate that our
BioBERT classifier is competitive, and even scores close to the reported
inter-annotator agreement for the dataset.
Table 2: Performance of NER systems for Mutation NER.

System | Precision | Recall | F-score (Std Dev)
---|---|---|---
Bilstm-crf ([11]) | 82.0 | 68.8 | 74.9
Inter-annotator agreement ([11]) | 79.0 | 77.6 | 78.3
Hybrid system ([11]) | 85.0 | 74.8 | 79.6
Bilstm-crf | 72.0 | 72.4 | 72.2
BioBERT | 76.1 | 76.1 | 76.0 (2.1)
We next measure the scores when only single-annotated data is used to train
the models. We test our classifiers using the same test set (test2), and the
results are given in Table 3. We can see that in this scenario the performance
drops, especially for BioBERT (-9.8%) and less sharply for Bilstm-crf
(-4.6%). This is the first indication of the importance of second annotators
when building NER datasets.
Table 3: Performance of NER systems for Mutation NER using single-labeled data for training.

System | Precision | Recall | F-score (Std Dev)
---|---|---|---
BioBERT | 73.4 | 63.3 | 67.9 (2.8)
Bilstm-crf | 71.4 | 64.1 | 67.6
### 4.2 Finding annotation errors in single-labeled data
In order to reduce the gap in performance between training with double and
single-labeled data, we explore the possibility of selectively double-
annotating training instances to improve the model, without having to double-
annotate the full dataset. There are different methods to identify the best
instances for re-annotation, and in this work we focus on confidence and
similarity-based techniques. We first explore the ability of these methods to
identify single-annotated cases in training data that change labels when they
go over second annotation.
Our first step is to evaluate similarity-based methods (cf. Figure 1). For
this experiment, we run models that are trained with pre-adjudicated data over
the held-out test data (test1), and identify the errors committed by the
model. These errors are used as guide to find instances to fix in training
data, via similarity metrics. As a reference, the performance of the models
over held-out test data (test1) is shown in Table 4, when trained both in
single- and double-annotated data. We can see that the scores for this held-
out partition are better than for the final test. This could happen because we
did not stratify the splits, and this test set is better aligned with the
training data. In any case the difference between single and double annotation
is again clear for both our classifiers.
Table 4: Performance of NER systems for Mutation NER trained with pre-adjudicated and adjudicated annotations, over held-out test data (test1).

Training data | System | Precision | Recall | F-score (Std Dev)
---|---|---|---|---
Pre-adjudicated | BioBERT | 78.1 | 63.6 | 70.1 (2.4)
Pre-adjudicated | Bilstm-crf | 72.2 | 70.7 | 71.4
Adjudicated | BioBERT | 81.0 | 74.7 | 77.7 (2.4)
Adjudicated | Bilstm-crf | 85.7 | 72.7 | 78.7
From the held-out test set, we identify all sentences that contain at least a
false positive or false negative. These sentences may contain errors because
of their similarity with inconsistently annotated training instances, and we
explore this possibility by finding the most similar training sentences using
two different methods: Interpretable Semantic Textual Similarity (ISTS) and
sentence embeddings.
We evaluate the rankings provided by the different approaches by relying on
thresholds for the number of instances to be checked. There are 1,331
sentences in the training and development data, out of which 207 (15.6%) contain discrepancies
with adjudicated ground truth. Therefore we expect a random baseline to
perform with a precision close to 15.6%. As a sanity check we perform
different runs selecting random rankings from the training data, and show the
results in Table 5. We can see that we obtain the expected scores.
Table 5: Performance of sentences identified randomly. Total sentences in training and development data: 1331, out of which 207 (15.6%) contain discrepancies with the adjudicated ground truth. Highest score per column in bold.

Threshold | Sentences to check | Precision | Recall | F-score (Std Dev)
---|---|---|---|---
100 | 7.5% | 16.0 | 8.6 | 11.2 (1.6)
200 | 15.0% | 14.0 | 15.5 | 14.7 (2.2)
500 | 37.6% | 15.9 | 41.0 | 22.8 (1.6)
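The evaluation of a ranking at a given threshold can be sketched as follows; the sentence ids and the set of erroneous ids are toy values, not the real data:

```python
def ranking_scores(ranked_ids, erroneous_ids, k):
    """Precision/recall/F-score of the top-k ranked sentences against
    the set of sentences whose single-annotation labels were later
    changed during adjudication."""
    top = set(ranked_ids[:k])
    hits = len(top & set(erroneous_ids))
    precision = hits / k
    recall = hits / len(erroneous_ids)
    f = 0.0 if hits == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f

# Toy ranking of 10 sentence ids, 4 of which are truly erroneous.
p, r, f = ranking_scores(list(range(10)), [0, 2, 8, 9], k=5)
```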
For the similarity-based methods, we rely on the errors found in the Bilstm-crf
and BioBERT experiments over the held-out test set (test1), and for each error
sentence we identify the most similar sentences using ISTS and sentence
embeddings in the full training set. We measure the ability to identify
sentences where single-annotated and double-annotated labels differ, and the
results of the experiment are given in Table 6.
Table 6: Performance of sentences identified by ISTS and sentence embeddings for the different NER classifiers. Total sentences in training and development data: 1331, out of which 207 (15.6%) contain discrepancies with the adjudicated ground truth. Highest score per column in bold.

NER | Similarity method | Threshold | Sentences to check | Precision | Recall | F-score
---|---|---|---|---|---|---
Baseline | - | All | 1331 (100%) | 15.6 | 100.0 | 26.9
Bilstm-crf | ISTS | Top-100 | 7.5% | 29.0 | 14.0 | 18.9
Bilstm-crf | ISTS | Top-200 | 15.0% | 24.5 | 23.7 | 24.1
Bilstm-crf | ISTS | Top-500 | 37.6% | 19.8 | 47.8 | 28.0
BioBERT | ISTS | Top-100 | 7.5% | 19.0 | 9.2 | 12.4
BioBERT | ISTS | Top-200 | 15.0% | 20.0 | 19.3 | 19.7
BioBERT | ISTS | Top-500 | 37.6% | 16.8 | 40.6 | 23.8
BioBERT | Sentence embeddings | Top-100 | 7.5% | 16.0 | 7.7 | 10.4
BioBERT | Sentence embeddings | Top-200 | 15.0% | 17.5 | 16.9 | 17.2
BioBERT | Sentence embeddings | Top-500 | 37.6% | 15.4 | 37.2 | 21.8
The results illustrate that in most cases the precision of the ranking methods
improves the random baseline. The highest precision is achieved by Bilstm-crf
with ISTS for the top-100 and top-200 thresholds, showing that the rankings
can retrieve a significantly higher proportion of useful instances than random
selection. For BioBERT, we can see that ISTS performs better than sentence
embeddings, which do not improve on the random-baseline precision.
For our second experiment on ranking evaluation, we use confidence scores, as
represented by the log probabilities of predictions of models trained on
double-annotated data. The target sentences are those in training and
development data, and for each model (Bilstm-crf and BioBERT) a ranking is
generated according to the confidence scores (cf. Figure 2). The results for
different thresholds are given in Table 7.
Table 7: Performance of sentences ranked by confidence for the different NER classifiers. Total sentences in training and development data: 1331, out of which 207 (15.6%) contain discrepancies with the adjudicated ground truth.

NER | Threshold | Sentences to check | Precision | Recall | F-score
---|---|---|---|---|---
Baseline | All | 1331 (100%) | 15.6 | 100.0 | 26.9
Bilstm-crf | Top-100 | 7.5% | 34.0 | 16.4 | 22.1
Bilstm-crf | Top-200 | 15.0% | 32.0 | 30.9 | 31.4
Bilstm-crf | Top-500 | 37.6% | 28.2 | 68.1 | 39.9
BioBERT | Top-100 | 7.5% | 32.0 | 15.5 | 20.8
BioBERT | Top-200 | 15.0% | 27.5 | 26.6 | 27.0
BioBERT | Top-500 | 37.6% | 22.4 | 54.1 | 31.7
We can see that the best results are achieved by using the confidence scores
from Bilstm-crf, and this method is able to achieve the highest F-score for
all different thresholds. This indicates that simply using the confidence
values from the Bilstm-crf prototype can help find the best instances to
double-annotate in the dataset.
### 4.3 Retraining models
Finally we measure the impact of building models with double-annotated
samples, and we rely on two of the ranking methods above to detect those
instances: Bilstm-crf confidence scores, and ISTS alignment over Bilstm-crf
outputs. Because of its better performance on its own, we apply BioBERT with
different numbers of double-annotated documents, and we observe whether the
gap with the full annotation has closed. The results are given in Table 8.
Table 8: Results over test data (test2) after retraining BioBERT with confidence- and ISTS-based Bilstm-crf rankings for double annotation. Best performance per column is given in bold.

Ranking method | Threshold | Precision | Recall | F-score (Std Dev)
---|---|---|---|---
Check none | 0 | 71.5 | 62.6 | 66.7 (2.9)
Check all | 1331 | 76.8 | 76.2 | 76.5 (1.1)
Random | 100 | 72.6 | 62.9 | 67.3 (2.2)
Random | 200 | 73.5 | 66.1 | 69.6 (2.0)
Random | 500 | 73.7 | 69.0 | 71.3 (2.0)
Bilstm-crf with confidence score | Top-100 | 75.1 | 66.0 | 70.2 (1.9)
Bilstm-crf with confidence score | Top-200 | 76.0 | 68.2 | 71.9 (2.2)
Bilstm-crf with confidence score | Top-500 | 75.4 | 68.7 | 71.9 (1.3)
Bilstm-crf with ISTS | Top-100 | 75.0 | 66.0 | 70.2 (2.7)
Bilstm-crf with ISTS | Top-200 | 75.4 | 68.4 | 71.7 (2.0)
Bilstm-crf with ISTS | Top-500 | 74.9 | 71.0 | 72.9 (1.5)
We can see that the two ranking approaches (confidence-based and ISTS) perform
similarly for the lower thresholds (100 and 200), and they clearly improve
over random selection. For the top-500 threshold, we see that the confidence-
based method does not get any improvement, while the similarity-based approach
is able to gain another percentage point. This could indicate that the
similarity-based method is better suited for fixing the errors of the model,
even if it performs worse in the task of predicting instances that have the
wrong label (Section 4.2). The reason for this could be that similarity-based
approaches rely on the errors found on held-out data as a starting point. This
would make them better tuned to find sentences that have higher impact for the
model, as opposed to sentences that are easier to predict as erroneous.
## 5 Discussion
Our experiments show that there is a big difference in performance when the
training data is annotated by one or two annotators. This effect is clear for
different classifiers and test splits, and raises questions about the reliance
on single-annotated data for NER in challenging domains. The performance drop
takes most models from F-scores in the high 70s to F-scores in the low 70s or
60s; this could have large effects for applications built on top of NER
methods.
We explored the possibility of automatically identifying the discrepancies
between pre-adjudicated and adjudicated examples by automatically ranking pre-
adjudicated instances for second annotation. We proposed two scenarios:
confidence-based and similarity-based. Our experiments show that the
confidence-based method is able to perform with high precision (given the high
bias towards negative instances), and obtain higher results than the
similarity-based techniques in this task. For similarity based techniques,
ISTS obtains the best result, with sentence embeddings failing to improve the
random baseline.
For our last experiment we explored how the identification of errors in pre-
adjudicated data translates to better NER models after re-training. We observe
that in this case the biggest gains are obtained by the similarity-based
technique ISTS. The reason for this could be that ISTS exploits errors over
held-out data to find similar training instances, which have larger impact to
build more accurate models. Another advantage of ISTS is that it provides
alignment of terms and phrases in the compared sentences, which could help
explain the predictions of the classifier, and make the second annotation
easier for the user. By relying on ISTS, the gap in F-score for the pre-
adjudicated dataset is reduced from 9.8% to 3.6% when adjudicating 37.6% of
the examples; when adjudicating only 100 examples (7.5% of the training), the
gap is reduced to 6.3% for both ISTS and Bilstm-crf confidence.
## 6 Conclusion
In this article we explored the issue of the quality of manually annotated
training data for NER, and the effect of using single versus double annotation
per instance. Our results show a large gap in model performance when relying
on the former, and we explored different ranking approaches to help choose
instances for double-annotation. We evaluated these approaches on their
ability to find manual annotation differences, and also on the impact for re-
training NER models. We found that confidence-based methods perform best for
identifying training differences, while similarity-based techniques have the
most impact for re-training NER models. Using these methods the gap between
single and double-annotation can be significantly reduced without having to
double-annotate the full dataset.
## 7 References
* Abaho et al. [2019] Micheal Abaho, Danushka Bollegala, Paula Williamson, and Susanna Dodd. Correcting crowdsourced annotations to improve detection of outcome types in evidence based medicine. In _CEUR Workshop Proceedings_ , volume 2429, pages 1–5, 2019.
* Brychcín and Svoboda [2016] Tomáš Brychcín and Lukáš Svoboda. UWB at SemEval-2016 task 1: Semantic textual similarity using lexical, syntactic, and semantic information. In _Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)_ , pages 588–594, San Diego, California, June 2016\. Association for Computational Linguistics. doi: 10.18653/v1/S16-1089. URL https://www.aclweb.org/anthology/S16-1089.
* Caporaso et al. [2007] J Gregory Caporaso, William A Baumgartner Jr, David A Randolph, K Bretonnel Cohen, and Lawrence Hunter. MutationFinder: a high-performance system for extracting point mutation mentions from text. _Bioinformatics_ , 23(14):1862–1865, 2007.
* Cer et al. [2017] Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In _Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)_ , pages 1–14, Vancouver, Canada, August 2017. Association for Computational Linguistics. doi: 10.18653/v1/S17-2001. URL https://www.aclweb.org/anthology/S17-2001.
* Devlin et al. [2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ , 2018.
* Forbes et al. [2010] Simon A Forbes, Nidhi Bindal, Sally Bamford, Charlotte Cole, Chai Yin Kok, David Beare, Mingming Jia, Rebecca Shepherd, Kenric Leung, Andrew Menzies, et al. COSMIC: mining complete cancer genomes in the Catalogue of Somatic Mutations in Cancer. _Nucleic acids research_ , 39(suppl_1):D945–D950, 2010.
# Robust Bayesian Inference for Big Data: Combining Sensor-based Records with
Traditional Survey Data
Ali Rafei, Carol A. C. Flannagan, Brady T. West, and Michael R. Elliott
Corresponding author: Ali Rafei; address: 426 Thompson St., Rm 4068 ISR, Ann Arbor, MI 48109; email<EMAIL_ADDRESS>
Survey Methodology Program, University of Michigan; University of Michigan Transportation Research Institute
###### Abstract
Big Data often presents as massive non-probability samples. Not only is the
selection mechanism often unknown, but larger data volume amplifies the
relative contribution of selection bias to total error. Existing bias
adjustment approaches assume that the conditional mean structures have been
correctly specified for the selection indicator or key substantive measures.
In the presence of a reference probability sample, these methods rely on a
pseudo-likelihood method to account for the sampling weights of the reference
sample, which is parametric in nature. Under a Bayesian framework, handling
the sampling weights is an even bigger hurdle. To further protect against
model misspecification, we expand the idea of double robustness such that more
flexible non-parametric methods as well as Bayesian models can be used for
prediction. In particular, we employ Bayesian additive regression trees, which
not only capture non-linear associations automatically but permit direct
quantification of the uncertainty of point estimates through its posterior
predictive draws. We apply our method to sensor-based naturalistic driving
data from the second Strategic Highway Research Program using the 2017
National Household Travel Survey as a benchmark.
###### keywords:
Big Data, non-probability sample, quasi-randomization, prediction model, doubly robust, augmented inverse propensity weighting, Bayesian additive regression trees
## 1 Introduction
The 21st century is witnessing a re-emergence of non-probability sampling in
various domains (Murdoch and Detsky, 2013; Daas et al., 2015; Lane, 2016;
Senthilkumar et al., 2018). Probability sampling is facing new challenges,
mainly because of a steady decline in response rates (Groves, 2011; Johnson
and Smith, 2017; Miller, 2017). At the same time, new modes of data collection
via sensors, web portals, and smart devices have emerged that routinely
capture a variety of human activities. These automated processes have led to
an ever-accumulating massive volume of unstructured information, so-called
“Big Data” (Couper, 2013; Kreuter and Peng, 2014; Japec et al., 2015). Being
easy to access, inexpensive to collect, and highly detailed, this broad range
of data is perceived to be valuable for producing official statistics as an
alternative or supplement to probability surveys (Struijs et al., 2014;
Kitchin, 2015). However, “Big Data” typically have a self-selecting data-
generating process, which can lead to biased estimates. When this is the case,
the larger data volume in the non-probability sample increases the relative
contribution of selection bias to absolute or squared error. Meng et al.
(2018) call this phenomenon a “Big Data Paradox”, and these authors show both
theoretically and empirically that the impact of selection bias on the
effective sample size can be extremely large.
The motivating application in this article comes from naturalistic driving
studies (NDS), which are one real-world example of Big Data for rare event
investigations. Since traffic collisions are inherently rare events, measuring
accurate pre-crash behaviors as well as exposure frequency in normal driving
demands accurate long-term follow-up of the population of drivers. Thus, NDS
are designed to continually monitor
drivers’ behavior via in-vehicle sensors, cameras, and advanced wireless
technologies (Guo et al., 2009). The detailed information collected by NDS is
considered a rich resource for assessing various aspects of transportation
such as traffic safety, crash causality, and travel patterns (Huisingh et al.,
2019; Tan et al., 2017). In particular, we consider the sensor-based Big Data
from the second phase of the Strategic Highway Research Program (SHRP2), which
is the largest NDS conducted to date. This study recruited a convenience
sample from geographically restricted regions (6 US states: Florida, Indiana,
New York, North Carolina, Pennsylvania, and Washington) and attempted to
oversample both younger and older drivers, leading to potential selection bias
in the sample mean of some trip-related variables (Antin et al., 2015). To
deal with this, we employ the 2017 National Household Travel Survey (NHTS) as
a “reference survey”, which can serve as a probability sample representing the
population of American drivers (Santos et al., 2011). While daily trip
measures in SHRP2 are recorded via sensors, NHTS asks respondents to self-
report their trip measures through an online travel log. By analyzing the
aggregated data at the day level, we develop adjusted sensor-based estimates
from SHRP2 for measures such as frequency of trips, trip duration, trip speed,
and starting time of trip that can be compared with self-reported weighted
estimates in NHTS to assess the performance of our proposed methods in terms
of bias and efficiency, as well as estimates of maximum speed, brake use per
mile driven, and stop time that are available only in SHRP2.
Standard design-based approaches cannot be applied to non-probability samples
for the simple reason that the probabilities of selection are unknown (Chen et
al., 2020). Thus the American Association for Public Opinion Research (AAPOR)
task force on non-probability samples recommends that adjustment methods
should rely on models and external auxiliary information (Baker et al., 2013).
In the presence of a relevant probability sample with a set of common
auxiliary variables, which is often called a “reference survey”, two general
approaches can be taken: (1) _prediction modeling_ (PM)–fitting models on the
non-probability sample to predict the response variable for units in the
reference survey (Rivers, 2007; Kim and Rao, 2012; Wang et al., 2015; Kim et
al., 2018), and (2) _quasi-randomization_ (QR)–estimating the probabilities of
being included in the non-probability sample, also known as propensity scores
(PS), while treating the Big Data as a quasi-random sample (Lee, 2006; Lee and
Valliant, 2009; Valliant and Dever, 2011). Our focus is on the PM setting,
since in our application the key measures of interest are not available in the
probability (reference) survey, and our goal is to use prediction to impute
them.
Correct specification of the model predicting the outcome is critical for
imputation. To help relax this assumption, the PM approach can be combined
with the QR method, in a way that the adjusted estimate of a population
quantity is consistent if either the propensity or the outcome model holds.
Augmented inverse propensity weighting (AIPW) was the first of these so-called
“doubly-robust” (DR) methods (Robins et al., 1994), with applications to
causal inference (Scharfstein et al., 1999; Bang and Robins, 2005; Tan, 2006;
Kang et al., 2007; Tan et al., 2019) and adjustment for non-response bias
(Kott, 1994; Kim and Park, 2006; Kott, 2006; Haziza and Rao, 2006; Kott and
Chang, 2010). Further extension to multiple robustness has been developed by
Han and Wang (2013), where multiple models are specified and consistency is
obtained as long as at least one of the models is correctly specified. Chen et
al. (2020) offer further adjustments to adapt the AIPW estimator to a non-
probability sampling setting where an external benchmark survey is available.
While their method employs a modified pseudo-likelihood approach to estimate
the selection probabilities for the non-probability sample, a parametric model
is used to impute the outcome for units of the reference survey.
In a non-probability sample setting, Rafei et al. (2020) utilized BART in the
QR approach outlined in Elliott and Valliant (2017). In this paper, we extend
Rafei et al. (2020) in two major ways: first, by blending the QR and PS
methods into a novel DR method that is made even more robust by using BART,
which provides a strong non-parametric predictive tool by automatically
capturing non-linear associations as well as high-order interactions (Chipman
et al., 2007). The proposed method separates the propensity model from the
sampling weights in a two-step process, allowing for a broader range of models
to be utilized for imputing the missing inclusion probabilities. This allows
us to consider both parametric (linear and generalized linear models) and non-
parametric (BART) models for both propensity and outcome. In addition, the
posterior predictive distribution produced by BART makes it easier to quantify
the uncertainty due to the imputation of pseudo-weights and the outcome
variable (Tan et al., 2019; Kaplan and Chen, 2012). Second, we derive
asymptotic variance estimators for the previously proposed QR estimators in
Rafei et al. (2020) as well as proofs of the double robustness of the proposed
DR estimators.
The rest of the article is organized as follows. In Section 2, we develop the
theoretical background behind the proposed methods and associated variance
estimators. A simulation study is designed in Section 3 to assess the repeated
sampling properties of the proposed estimator, i.e. bias and efficiency.
Section 4 uses the NHTS to develop adjusted estimates from the SHRP2 using the
methods discussed and developed in the previous sections. All the statistical
analyses in both the simulations and empirical studies have been performed
using R version 3.6.1. The annotated R code is available for public use at
https://github.com/arafei/drinfNP. Finally, Section 5 reviews the strengths
and weaknesses of the study in more detail and suggests some future research
directions.
## 2 Methods
### 2.1 Notation
Denote by $U$ a finite population of size $N<\infty$. We consider the values
of a scalar outcome variable, $y_{i}$, $i=1,2,...,N$ and
$x_{i}=[1,x_{i1},x_{i2},...,x_{ip}]$, the values of a $p$-dimensional vector
of relevant auxiliary variables, $X$. Let $S_{B}$ be a non-probability sample
of size $n_{B}$ selected from $U$. The goal is to estimate an unknown finite
population quantity, e.g. $Q(Y)$. Here, the quantity of interest is considered
to be the finite population mean that is a function of the outcome variable,
i.e. $Q(Y)=\overline{y}_{U}=\sum_{i=1}^{N}y_{i}/N$. Suppose
$\delta^{B}_{i}=I(i\in S_{B})$ $(i=1,...,N)$ is the inclusion indicator
variable of the “big data” survey $S_{B}$ of size $n_{B}$ in $U$. Further, we
initially assume that given $X$, elements in $S_{B}$ are independent draws
from $U$, but later, we relax this assumption by considering $S_{B}$ to have a
single-stage clustering design as is the case in the real-data application of
this article.
Suppose $S_{R}$ is a parallel reference survey of size $n_{R}$, for which the
same set of covariates, $X$, has been measured. We also define
$d_{i}=[1,d_{i1},d_{i2},...,d_{iq}]$, the values of a $q$-dimensional vector
of design variables for the reference survey. We assume $y_{i}$ is unobserved
for $i\in S_{R}$; otherwise inference could be directly drawn based on
$S_{R}$. Also, let $\delta^{R}_{i}=I(i\in S_{R})$ denote the inclusion
indicator variable associated with $S_{R}$ for $i\in U$. To avoid unnecessary
complexity, we assume that units of $S_{R}$ are selected independently. Being
a full probability sample implies that the selection mechanism in $S_{R}$ is
ignorable given its design features, i.e.
$f(\delta^{R}_{i}|y_{i},d_{i})=f(\delta^{R}_{i}|d_{i})$ for $i\in U$, where
$d_{i}$ denotes a $q$-dimensional vector of associated design variables. Thus,
one can define the selection probabilities and sampling weights in $S_{R}$ as
$\pi^{R}_{i}=p(\delta^{R}_{i}=1|d_{i})$ and $w^{R}_{i}=1/\pi^{R}_{i}$,
respectively, for $i\in U$, which we assume are known.
While $X$ and $D$ may overlap or correlate, we define
$x^{*}_{i}=[x_{i},d_{i}]^{T}$, the $(p+q)$-dimensional vector of all auxiliary
variables associated with $S_{B}$ and $S_{R}$. To be able to make unbiased
inference for $S_{B}$, we consider the following assumptions for $S_{B}$:
1. C1.
Positivity—$S_{B}$ actually does have a probabilistic sampling mechanism,
albeit unknown. That means $p(\delta^{B}_{i}=1|x_{i})>0$ for all possible
values of $x_{i}$ in $U$.
2. C2.
Ignorability—the selection mechanism of $S_{B}$ is fully governed by $X$,
which implies $Y\perp\delta^{B}\mid X$. Then,
for $i\in U$, the unknown “pseudo-inclusion” probability associated with
$S_{B}$ is defined as $\pi^{B}_{i}=p(\delta^{B}_{i}=1|x_{i})$.
3. C3.
Independence—conditional on $X^{*}$, $S_{R}$ and $S_{B}$ are selected
independently, i.e. $\delta^{B}\perp\delta^{R}\mid X^{*}$.
Note that the first two assumptions are collectively called “strong
ignorability” by Rosenbaum and Rubin (1983). Considering C1-C3, the joint
density of $y_{i}$, $\delta^{B}_{i}$ and $\delta^{R}_{i}$ can be factorized as
below:
$f(y_{i},\delta^{B}_{i},\delta^{R}_{i}|x^{*}_{i};\theta,\beta,\gamma)=f(y_{i}|x_{i}^{*};\theta)\,f(\delta^{B}_{i}|x_{i};\beta)\,f(\delta^{R}_{i}|d_{i};\gamma),\quad\forall i\in U$ (2.1)
where $\Psi=(\theta,\beta,\gamma)$ are some distributional parameters. While
$\theta$ and $\beta$ are unknown, $\gamma$ may be known as $S_{R}$ is a
probability sample. A QR approach involves modeling
$f(\delta^{B}_{i}|x^{*}_{i};\beta)$, whereas a PM approach deals with modeling
$f(y_{i}|x^{*}_{i};\theta)$.
Now suppose $S_{B}$ and $S_{R}$ have trivial overlap, i.e.
$p(\delta^{B}_{i}+\delta^{R}_{i}=2)\approx 0$. This assumption is reasonable
when the sampling fraction in both samples is small. Note that under the
ignorable assumption, the propensity model for $S_{B}$ depends on $X$ observed
for the entire population. Thus, given the combined sample, $S=S_{B}\cup
S_{R}$, with $n=n_{B}+n_{R}$ being the sample size, it is reasonable to expect
that the pseudo-inclusion probabilities, $\pi^{B}_{i}$’s, are a function of
both $x_{i}$ and $d_{i}$ for $i\in S$. Let $z_{i}=I(i\in S_{B}|\delta_{i}=1)$
be the indicator of subject $i$ belonging to the non-probability sample in the
combined sample where $\delta_{i}=\delta^{B}_{i}+\delta^{R}_{i}$. Note that
since $S_{B}\cap S_{R}=\emptyset$, $\delta_{i}$ can take values of either $0$
or $1$ as below:
$\delta_{i}=\begin{cases}0,&\text{if }\delta^{R}_{i}=0\text{ and }\delta^{B}_{i}=0\\ 1,&\text{if }\delta^{R}_{i}=1\text{ or }\delta^{B}_{i}=1\end{cases}$
Figure 1 illustrates the data structure in both the finite population and the
combined sample.
Figure 1: Data structure in the population and the combined sample
### 2.2 Quasi-randomization
In QR, the non-probability sample is treated as if the self-selection
mechanism of population units mimics a stochastic process, but with unknown
selection probabilities. Then, attempts are made to estimate these missing
quantities in $S_{B}$ based on external information. Conditional on
$x^{*}_{i}$, suppose $\pi^{B}_{i}$ follows a logistic regression model in the
finite population. We have
$\pi^{B}(x_{i};\beta)=p(\delta^{B}_{i}=1|x_{i};\beta)=\frac{\exp\\{\beta^{T}x_{i}\\}}{1+\exp\\{\beta^{T}x_{i}\\}},\quad\forall i\in U$ (2.2)
where $\beta$ is a vector of model parameters in $U$. Using a modified pseudo-
maximum likelihood approach (PMLE), Chen et al. (2020) demonstrate that, given
$S$, a consistent estimate of $\beta$ can be obtained by solving the following
estimating equation with respect to $\beta$:
$U(\beta)=\sum_{i=1}^{n_{B}}x_{i}-\sum_{i=1}^{n_{R}}\pi^{B}(x_{i};\beta)x_{i}/\pi^{R}_{i}=0$
(2.3)
The estimates of the $\pi^{B}_{i}$’s, which we also call propensity scores
(PS), are obtained by plugging the solution of Eq. 2.3, i.e.
$\widehat{\beta}$, into Eq. 2.2. It is important to note that the proposed PS
estimator by Chen et al. (2020) depends implicitly on $d_{i}$ in addition to
$x_{i}$, because we know that $\pi^{R}_{i}$ is a deterministic function of
$d_{i}$ for $i\in U$. Under certain regularity conditions, the authors show
that the inverse PS weighted (IPSW) mean from $S_{B}$ yields a consistent and
asymptotically unbiased estimate for the population mean.
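For concreteness, the estimating equation in Eq. 2.3 can be solved with a few lines of code. The sketch below is a minimal Newton iteration on simulated inputs (our own illustration under the logistic model of Eq. 2.2, not the authors' implementation; the function and variable names are hypothetical):

```python
import numpy as np

def fit_pmle(x_B, x_R, w_R, n_iter=50, tol=1e-10):
    """Solve U(beta) = sum_{S_B} x_i - sum_{S_R} pi(x_i; beta) x_i / pi^R_i = 0
    (Eq. 2.3) by Newton's method. w_R = 1 / pi^R_i are the reference weights."""
    beta = np.zeros(x_B.shape[1])
    t_B = x_B.sum(axis=0)                          # first term of U(beta)
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-(x_R @ beta)))   # logistic propensity, Eq. 2.2
        U = t_B - (w_R[:, None] * pi[:, None] * x_R).sum(axis=0)
        # Jacobian: dU/dbeta = -sum_{S_R} w_i pi_i (1 - pi_i) x_i x_i^T
        J = -(x_R * (w_R * pi * (1.0 - pi))[:, None]).T @ x_R
        step = np.linalg.solve(J, U)
        beta = beta - step
        if np.abs(step).max() < tol:
            break
    return beta
```

Because the implied pseudo-log-likelihood is concave in $\beta$, the Newton iteration starting from zero converges quickly in practice.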
Note that the solution of Eq. 2.3 is not a typical output of
standard logistic regression software. With one
additional assumption, which is mutual exclusiveness of the two samples, i.e.
$S_{B}\cap S_{R}=\emptyset$, we show that estimating $\pi^{B}_{i}$’s can be
reduced to modeling $Z_{i}$ for $i\in S$ instead of modeling $\delta^{B}_{i}$
for $i\in U$. Intuitively, one can view the selection process of the $i$-th
population unit in $S_{B}$ as being initially selected in the joint sample
($\delta_{i}=1$) and then being selected in $S_{B}$ given the combined sample
($Z_{i}=1$). By conditioning on $x^{*}_{i}$, the selection probabilities in
$S_{B}$ are factorized as
$\displaystyle p(\delta^{B}_{i}=1|x^{*}_{i})$
$\displaystyle=p(\delta^{B}_{i}=1,\delta_{i}=1|x^{*}_{i})$ (2.4)
$\displaystyle=p(\delta^{B}_{i}=1|\delta_{i}=1,x^{*}_{i})\,p(\delta_{i}=1|x^{*}_{i})$
$\displaystyle=p(Z_{i}=1|x^{*}_{i})\,p(\delta_{i}=1|x^{*}_{i}),\quad i\in S$
Note that the last expression in Eq. 2.4 results from the definition of
$Z_{i}$ given $S$. The same factorization can be derived for the selection
probabilities in $S_{R}$. Thus, we have
$\displaystyle
p(\delta^{R}_{i}=1|x^{*}_{i})=p(Z_{i}=0|x^{*}_{i})p(\delta_{i}=1|x^{*}_{i})$
(2.5)
By dividing the two sides of the equations 2.4 and 2.5, one can get rid of
$p(\delta_{i}=1|x^{*}_{i})$ and obtain the pseudo-selection probabilities in
$S_{B}$ as below:
$p(\delta^{B}_{i}=1|x^{*}_{i})=p(\delta^{R}_{i}=1|x^{*}_{i})\frac{p(Z_{i}=1|x^{*}_{i})}{p(Z_{i}=0|x^{*}_{i})}$
(2.6)
It is clear that $p(\delta^{R}_{i}=1|x^{*}_{i})=\pi^{R}_{i}$ as $x^{*}_{i}$
contains $d_{i}$ and the sampling design of $S_{R}$ is known given $d_{i}$. As
will be seen in Section 2.4, conditioning on $d_{i}$ is vital for the DR
estimator, as Chen’s method requires the dimension of the auxiliary variables
to be the same in the QR and PM models.
Note that Eq. 2.6 is identical to the pseudo-weighting formula Elliott and
Valliant (2017) derive for a non-probability sample. Unlike the PMLE approach,
modeling $Z_{i}$ in $S$ can be performed using the standard binary logistic
regression or any alternative classification methods, such as supervised
machine learning algorithms. Under a logistic regression model, we have
$p(Z_{i}=1|x^{*}_{i})=\frac{\exp\\{\beta^{T}_{1}x_{i}^{*}\\}}{1+\exp\\{\beta^{T}_{1}x_{i}^{*}\\}}$
(2.7)
where $\beta_{1}$ denotes the vector of model parameters being estimated via
maximum likelihood estimation (MLE). Hence, in situations where $\pi^{R}_{i}$
is known or can be calculated for $i\in S_{B}$, the estimate of $\pi^{B}_{i}$
for $i\in S_{B}$ is given by
$\widehat{\pi}^{B}_{i}=\pi^{R}_{i}\exp\\{\widehat{\beta}^{T}_{1}x_{i}^{*}\\}=\pi^{R}_{i}\frac{p_{i}(\widehat{\beta}_{1})}{1-p_{i}(\widehat{\beta}_{1})}$
(2.8)
where $\widehat{\beta}_{1}$ denotes the MLE estimate of the logistic
regression model parameters, and $p_{i}(\widehat{\beta}_{1})$ is a shorthand
of $p(Z_{i}=1|x^{*}_{i};\widehat{\beta}_{1})$. Intuitively, one can envision
that the first factor in 2.8 treats $S_{B}$ as if it is selected under the
design of $S_{R}$, and the second factor attempts to balance the distribution
of $x$ in $S_{B}$ with respect to that in $S_{R}$.
Having $\pi^{B}_{i}$ estimated based on 2.8 for all $i\in S_{B}$, one can
construct the Hájek-type pseudo-weighted estimator for the finite population
mean as below:
$\widehat{\overline{y}}_{PAPW}=\frac{1}{\widehat{N}_{B}}\sum_{i=1}^{n_{B}}\frac{y_{i}}{\widehat{\pi}^{B}_{i}}$
(2.9)
where $\widehat{N}_{B}=\sum_{i=1}^{n_{B}}1/\widehat{\pi}^{B}_{i}$. Hereafter, we refer
to the estimator in 2.9 as propensity-adjusted probability weighting (PAPW).
Under mild regularity conditions, the ignorability assumption in $S_{B}$ given
$x$, the logistic regression model and the additional assumption of $S_{B}\cap
S_{R}=\emptyset$, Appendix 8.1 shows that this estimator is consistent and
asymptotically unbiased for $\overline{y}_{U}$. Further, when $\pi_{i}^{R}$ is
known, the sandwich-type variance estimator for
$\widehat{\overline{y}}_{PAPW}$ is given by
$\displaystyle\widehat{Var}\left(\widehat{\overline{y}}_{PAPW}\right)=\frac{1}{N^{2}}\sum_{i=1}^{n_{B}}\big(1-\widehat{\pi}^{B}_{i}\big)\left(\frac{y_{i}-\widehat{\overline{y}}_{PAPW}}{\widehat{\pi}^{B}_{i}}\right)^{2}$
$\displaystyle-2\frac{\widehat{b}^{T}}{N^{2}}\sum_{i=1}^{n_{B}}\big(1-p_{i}(\widehat{\beta}_{1})\big)\left(\frac{y_{i}-\widehat{\overline{y}}_{PAPW}}{\widehat{\pi}^{B}_{i}}\right)x^{*}_{i}$ (2.10)
$\displaystyle+\widehat{b}^{T}\left[\frac{1}{N^{2}}\sum_{i=1}^{n}p_{i}(\widehat{\beta}_{1})x^{*}_{i}x_{i}^{*T}\right]\widehat{b}$
where
$\widehat{b}^{T}=\bigg\\{\frac{1}{N}\sum_{i=1}^{n_{B}}\left(\frac{y_{i}-\widehat{\overline{y}}_{PAPW}}{\widehat{\pi}^{B}_{i}}\right)x^{*T}_{i}\bigg\\}\bigg\\{\frac{1}{N}\sum_{i=1}^{n}p_{i}(\widehat{\beta}_{1})x^{*}_{i}x_{i}^{*T}\bigg\\}^{-1}$ (2.11)
where $\widehat{\pi}^{B}_{i}$ is the estimated pseudo-selection probability
based on Eq. 2.8 for $i\in S_{B}$. See Appendix 8.1 for the derivation.
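The full two-step PAPW construction (Eqs. 2.7–2.9) can be sketched as follows. This is an illustrative implementation under a plain logistic model for $Z$, with hypothetical function names, not the authors' code:

```python
import numpy as np

def logistic_irls(X, z, n_iter=25):
    """Unpenalised logistic regression of z on X via Newton-Raphson (MLE)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        beta += np.linalg.solve((X * (p * (1.0 - p))[:, None]).T @ X,
                                X.T @ (z - p))
    return beta

def papw_mean(y_B, x_B, pi_R_B, x_R):
    """PAPW estimate of the population mean: model Z in the combined sample
    (Eq. 2.7), convert to pseudo-inclusion probabilities (Eq. 2.8), and form
    the Hajek-type mean (Eq. 2.9). pi_R_B holds the reference-design
    selection probabilities evaluated for the units of S_B."""
    X = np.vstack([x_B, x_R])
    z = np.r_[np.ones(len(x_B)), np.zeros(len(x_R))]   # Z_i given S
    p = 1.0 / (1.0 + np.exp(-(x_B @ logistic_irls(X, z))))
    pi_B = pi_R_B * p / (1.0 - p)                      # Eq. 2.8
    w = 1.0 / pi_B
    return np.sum(w * y_B) / np.sum(w)                 # Eq. 2.9
```

In a real application one would replace `logistic_irls` with any classifier that returns calibrated probabilities, e.g. BART, as proposed later in the paper.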
In situations where $\pi^{R}_{i}$ is unknown for $i\in S_{B}$, Elliott and
Valliant (2017) suggest predicting this quantity for units of the non-
probability sample. Note that, in this situation, it is no longer required to
condition on $d_{i}$ in addition to $x_{i}$. Treating $\pi^{R}_{i}$ as a
random variable for $i\in S_{B}$ conditional on $x_{i}$, one can obtain this
quantity by regressing the $\pi^{R}_{i}$’s on the $x_{i}$’s in the reference
survey. We have
$\displaystyle p(\delta^{R}_{i}=1|x_{i})$
$\displaystyle=\int_{0}^{1}p(\delta^{R}_{i}=1|\pi^{R}_{i},x_{i})\,p(\pi^{R}_{i}|x_{i})\,d\pi^{R}_{i}$ (2.12)
$\displaystyle=\int_{0}^{1}\pi^{R}_{i}\,p(\pi^{R}_{i}|x_{i})\,d\pi^{R}_{i}$
$\displaystyle=E(\pi^{R}_{i}|x_{i}),\quad i\in S_{R}.$
Since the outcome is continuous and bounded within $(0,1)$,
fitting a Beta regression model is recommended (Ferrari and Cribari-Neto,
2004). Note that $\pi^{R}_{i}$ is fixed given $d_{i}$, as $S_{R}$ is a
probability sample, but conditional on $x_{i}$ alone, $\pi^{R}_{i}$ can be regarded
as a random variable.
Rafei et al. (2020) call this approach propensity-adjusted probability
prediction (PAPP). This two-step derivation of pseudo-inclusion probabilities
is especially useful, as it separates sampling weights in $S_{R}$ from the
propensity model computationally. When the true model is unknown, this feature
enables us to fit a broader and more flexible range of models, such as
algorithmic tree-based methods. It is worth noting that modeling
$E(\pi^{R}_{i}|x_{i})$ does not impose an additional ignorable assumption in
$S_{R}$ given $x$, because in the extreme case if
$\delta^{R}_{i}\perp x_{i}$, the
weighted and unweighted distributions of $x$ are identical in $S_{R}$, and
therefore the $\pi^{R}_{i}$’s can be safely ignored in propensity modeling.
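The prediction step of PAPP can be sketched as below. As a simple stand-in for the Beta regression the text recommends, this illustration regresses $\mathrm{logit}(\pi^{R}_{i})$ on $x$ by ordinary least squares and back-transforms; the function name and data layout are our own:

```python
import numpy as np

def papp_predict_piR(x_R, pi_R, x_B):
    """Predict E(pi^R | x) for units of S_B when the reference-design
    probabilities are unknown there (the PAPP idea, Eq. 2.12). Here a
    logit-scale least-squares fit replaces the recommended Beta regression."""
    t = np.log(pi_R / (1.0 - pi_R))            # logit of the known ref. probs
    coef, *_ = np.linalg.lstsq(x_R, t, rcond=None)
    return 1.0 / (1.0 + np.exp(-(x_B @ coef))) # back-transform to (0, 1)
```

The returned predictions then take the place of $\pi^{R}_{i}$ in Eq. 2.8 when forming the pseudo-inclusion probabilities for $S_{B}$.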
### 2.3 Prediction modeling approach
An alternative approach to deal with selectivity in Big Data is modeling
$f(y|x^{*})$ (Smith, 1983). In a fully model-based fashion, this essentially
involves imputing $y$ for the population non-sampled units, $U-S_{B}$. When
$x^{*}$ is unobserved for non-sampled units, it is recommended that a
synthetic population is generated by undoing the selection mechanism of
$S_{R}$ through a non-parametric Bayesian bootstrap method using the design
variables in $S_{R}$ (Dong et al., 2014; Zangeneh and Little, 2015). In the
non-probability sample context, Elliott and Valliant (2017) propose an
extension of the General Regression Estimator (GREG) when only summary
information about $x^{*}$, such as totals, is known regarding $U$. In
situations where an external probability sample is available with $x^{*}$
measured, an alternative is to limit the outcome prediction to the units in
$S_{R}$, and then, use design-based approaches to estimate the population
quantity (Rivers, 2007; Kim et al., 2018).
However, to the best of our knowledge, none of the prior literature
distinguish the role of $D$ from $X$ in the conditional mean structure of the
outcome, while the likelihood factorization in Eq. 2.1 indicates that
predicting $y$ requires conditioning not only on $x$ but also on $d$. Suppose
$U$ is a realization of a repeated random sampling process from a super-
population under the following model:
$\displaystyle E(y_{i}|x^{*}_{i};\theta)=m(x^{*}_{i};\theta),\quad\forall i\in U$ (2.13)
where $m(x^{*}_{i};\theta)$ can be either a parametric model with $m$ being a
continuous differentiable function or an unspecified non-parametric form.
Under the _ignorable_ condition where
$\displaystyle
f(y_{i}|x^{*}_{i},z_{i}=1;\theta)=f(y_{i}|x^{*}_{i},z_{i}=0;\theta)=f(y_{i}|x_{i},d_{i};\theta)$
(2.14)
an MLE estimate of $\theta$ can be obtained by regressing $Y$ on $X^{*}$ given
$S_{B}$. The predictions for units in $S_{R}$ are then given by
$\displaystyle\widehat{y}_{i}=E(y_{i}|x^{*}_{i},z_{i}=0;\widehat{\theta})=m(x^{*}_{i};\widehat{\theta}),\quad\forall i\in S_{R}$ (2.15)
where, under a linear working model, $m(x^{*}_{i};\widehat{\theta})=\widehat{\theta}^{T}x^{*}_{i}$. Once $y$
is imputed for all units in the reference survey, the population mean can be
estimated through the Hájek formula as below:
$\displaystyle\widehat{\overline{y}}_{PM}=\frac{1}{\widehat{N}_{R}}\sum_{i=1}^{n_{R}}\frac{\widehat{y}_{i}}{\pi^{R}_{i}}$
(2.16)
where $\widehat{y}_{i}=m(x^{*}_{i};\widehat{\theta})$ for $i\in S_{R}$,
$\widehat{N}_{R}=\sum_{i=1}^{n_{R}}w^{R}_{i}$ and $\pi^{R}_{i}$ is the
selection probability for subject $i\in S$.
The asymptotic properties of the estimator in 2.16, including consistency and
unbiasedness, have been investigated by Kim et al. (2018). Note that in
situations where $\pi^{R}_{i}$ is available for $i\in S_{B}$, one can use
$w^{R}_{i}$ instead of the high-dimensional $d_{i}$ as a predictor in $m(.)$.
This method is known as linear-in-the-weight prediction (LWP) (Scharfstein et
al., 1999; Bang and Robins, 2005; Zhang and Little, 2011). However, since
outcome imputation relies fully on extrapolation, even minor misspecification
of the underlying model can be seriously detrimental to bias correction.
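The PM estimator of Eq. 2.16 under a linear working model is only a few lines of code. The sketch below is our own illustration (hypothetical function name); in the paper's proposal, the linear fit would be replaced by BART:

```python
import numpy as np

def pm_mean(y_B, x_B, x_R, w_R):
    """Prediction-model (PM) estimate of the population mean (Eq. 2.16):
    fit m(x*; theta) = theta^T x* on S_B, impute y for S_R (Eq. 2.15),
    then Hajek-weight the imputations with the design weights w_R."""
    theta, *_ = np.linalg.lstsq(x_B, y_B, rcond=None)  # MLE under normal errors
    y_hat = x_R @ theta                                # Eq. 2.15
    return np.sum(w_R * y_hat) / np.sum(w_R)           # Eq. 2.16
```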
### 2.4 Doubly robust adjustment approach
To reduce the sensitivity to model misspecification, Chen et al. (2020)
reconcile the two aforementioned approaches, i.e. QR and PM, in a way that
estimates remain consistent even if one of the two models is incorrectly
specified. Their method involves an extension of the augmented inverse
propensity weighting (AIPW) proposed by Robins et al. (1994). When $N$ is
known, the expanded AIPW estimator takes the following form:
$\overline{y}_{DR}=\frac{1}{N}\sum_{i=1}^{n_{B}}\frac{\\{y_{i}-m(x^{*}_{i};\theta)\\}}{\pi^{B}(x^{*}_{i};\beta)}+\frac{1}{N}\sum_{j=1}^{n_{R}}\frac{m(x^{*}_{j};\theta)}{\pi^{R}_{j}}$
(2.17)
where given $x^{*}$, $\beta$ and $\theta$ are estimated using the modified
PMLE and MLE methods mentioned in sections 2.2 and 2.3, respectively. The
theoretical proof of the asymptotic unbiasedness of $\overline{y}_{DR}$ under
the correct modeling of $\pi^{B}(x^{*}_{i};\beta)$ or $m(x^{*}_{i};\theta)$ is
reviewed in Appendix 8.1.
To avoid using $\pi^{R}$ in modeling $\delta^{B}_{i}$ because of the PMLE
restrictions we discussed in Section 2.2, in this study, we suggest estimating
$\pi^{B}_{i}$ for $i\in S_{B}$ in Eq. 2.17 based on the PAPW/PAPP method
depending on whether $\pi^{R}_{i}$ is available for $i\in S_{B}$ or not. As a
result, in situations where $\pi^{R}_{i}$ is known for $i\in S_{B}$, our
proposed DR estimator is given by
$\widehat{\overline{y}}_{DR}=\frac{1}{N}\sum_{i=1}^{n_{B}}\frac{1}{\pi^{R}_{i}}\left[\frac{1-p_{i}(\beta_{1})}{p_{i}(\beta_{1})}\right]\\{y_{i}-m(x^{*}_{i};\theta)\\}+\frac{1}{N}\sum_{j=1}^{n_{R}}\frac{m(x^{*}_{j};\theta)}{\pi^{R}_{j}}$
(2.18)
where $\pi^{B}(x^{*}_{i};\beta)$ is substituted using Eq. 2.8. We demonstrate
that this form of the AIPW estimator is identical to that defined by Kim and
Haziza (2014) in the non-response adjustment context under probability
surveys. Assuming that $y_{i}$ is fully observed for $i\in S_{R}$, let us
define the following HT-estimator for the population mean:
$\widehat{\overline{y}}_{U}=\frac{1}{N}\sum_{i=1}^{n_{R}}\frac{y_{i}}{\pi^{R}_{i}}$
(2.19)
Now, one can easily conclude that
$\displaystyle\widehat{\overline{y}}_{DR}$
$\displaystyle=\frac{1}{N}\sum_{i=1}^{n}\frac{1}{\pi^{R}_{i}}\left[Z_{i}\left(\frac{1-p_{i}(\beta_{1})}{p_{i}(\beta_{1})}\right)\\{y_{i}-m(x^{*}_{i};\theta)\\}+(1-Z_{i})m(x^{*}_{i};\theta)\right]$
(2.20)
$\displaystyle=\widehat{\overline{y}}_{U}+\frac{1}{N}\sum_{i=1}^{n}\frac{1}{\pi^{R}_{i}}\left[\frac{Z_{i}}{p_{i}(\beta_{1})}-1\right]\big{\\{}y_{i}-m(x^{*}_{i};\theta)\big{\\}}$
where $p_{i}(\beta_{1})=p(Z_{i}=1|x_{i}^{*};\beta_{1})$. The formula in 2.20
is similar to what is derived by Kim and Haziza (2014). Therefore, the rest of
the theoretical proof of asymptotic unbiasedness, i.e.
$\widehat{\overline{y}}_{DR}-\widehat{\overline{y}}_{U}=O_{p}(n_{B}^{-1/2})$,
in Kim and Haziza (2014) should hold for our modified AIPW estimator in 2.18
as well.
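As a numerical illustration, the AIPW estimate in Eq. 2.18 can be computed directly once the predictions and weights are in hand. The sketch below is illustrative only; the array names (`y_B`, `m_B`, and so on) are hypothetical, and in practice the inputs would come from the fitted QR and PM models.

```python
import numpy as np

def aipw_mean(y_B, m_B, pi_R_B, p_B, m_R, pi_R_R, N):
    """AIPW point estimate as in Eq. 2.18: the first term corrects the
    prediction-model residuals in the non-probability sample S_B, the
    second is an HT-type mean of the predictions over S_R."""
    # residual term over S_B, weighted by (1/pi^R_i) * (1 - p_i)/p_i
    term_B = np.sum((1.0 / pi_R_B) * ((1.0 - p_B) / p_B) * (y_B - m_B)) / N
    # prediction term over S_R, HT-weighted
    term_R = np.sum(m_R / pi_R_R) / N
    return term_B + term_R
```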
To preserve the DR property for both the point and variance estimator of
$\overline{y}_{DR}$, as suggested by Kim and Haziza (2014), one can solve the
following estimating equations simultaneously given $S$ to obtain the estimate
of $(\beta_{1},\theta)$. The aim is to cancel the first-order derivative terms
in the Taylor-series expansion of
$\widehat{\overline{y}}_{DR}-\widehat{\overline{y}}_{U}$ under QR and PM.
These estimating equations are given by
$\displaystyle\frac{\partial}{\partial\beta_{1}}\left[\widehat{\overline{y}}_{DR}-\widehat{\overline{y}}_{U}\right]$
$\displaystyle=\frac{1}{N}\sum_{i=1}^{n}\frac{Z_{i}}{\pi^{R}_{i}}\left[\frac{1}{p_{i}(\beta_{1})}-1\right]\{y_{i}-m(x^{*}_{i};\theta)\}x^{*}_{i}=0$
(2.21)
$\displaystyle\frac{\partial}{\partial\theta}\left[\widehat{\overline{y}}_{DR}-\widehat{\overline{y}}_{U}\right]$
$\displaystyle=\frac{1}{N}\sum_{i=1}^{n}\frac{1}{\pi^{R}_{i}}\left[\frac{Z_{i}}{p_{i}(\beta_{1})}-1\right]\dot{m}(x^{*}_{i};\theta)=0$
where $\dot{m}$ is the derivative of $m$ with respect to $\theta$. Under a
linear regression model, $\dot{m}(x^{*}_{i})=x^{*}_{i}$. Therefore, given the
same regularity conditions, ignorability in $S_{B}$, the logistic regression
model as well as the additional imposed assumption of $S_{B}\cap
S_{R}=\emptyset$, one can show that the proposed DR estimator is consistent
and approximately unbiased given that either the QR or PM model holds.
It is important to note that the system of equations in 2.21 may not have
unique solutions unless the dimension of covariates in QR and PM is identical.
Therefore, the AIPW estimator by Chen et al. (2020) may not be applicable
here, as our likelihood factorization suggests that conditioning on $d_{i}$ is
necessary at least for the PM. Furthermore, when $\pi^{R}_{i}$ is known for
$i\in S_{B}$, one can replace the $q$-dimensional $d_{i}$ with the
$1$-dimensional $w^{R}_{i}$ in modeling both QR and PM. Bang and Robins (2005)
show that estimators based on a linear-in-weight prediction model remain
consistent.
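The stacked system in 2.21 is straightforward to evaluate numerically. The sketch below assumes a linear PM and a logistic QR, with hypothetical input names; the returned vector could be passed to a root finder such as `scipy.optimize.root` to obtain $(\beta_{1},\theta)$. Where $Z_{i}=0$, the value of $y_{i}$ is irrelevant (its term is multiplied by $Z_{i}$), so any placeholder may be used.

```python
import numpy as np

def estimating_equations(beta1, theta, y, X, Z, pi_R, N):
    """Stacked estimating equations (2.21): zero at the DR-preserving
    estimates of (beta1, theta). Rows of X are x_i^* for all units in
    the combined sample S; y may hold any placeholder where Z_i = 0."""
    p = 1.0 / (1.0 + np.exp(-X @ beta1))   # p_i(beta1) under logistic QR
    resid = y - X @ theta                  # y_i - m(x_i^*; theta), linear PM
    eq_beta = (Z / pi_R * (1.0 / p - 1.0) * resid) @ X / N   # d/d(beta1)
    eq_theta = ((1.0 / pi_R) * (Z / p - 1.0)) @ X / N        # d/d(theta)
    return np.concatenate([eq_beta, eq_theta])
```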
### 2.5 Extension of the proposed method under a two-step Bayesian framework
A fully Bayesian approach specifies a model for the joint distribution of
selection indicator, $\delta^{B}_{i}$, and the outcome variable, $y_{i}$, for
$i\in U$ (McCandless et al., 2009; An, 2010). This requires multiply
generating synthetic populations and fitting the QR and PM models on each of
them repeatedly (Little and Zheng, 2007; Zangeneh and Little, 2015), which can
be computationally expensive under a Big Data setting. While joint modeling
may result in good frequentist properties (Little, 2004), feedback occurs
between the two models (Zigler et al., 2013). This can be controversial in the
sense that PS estimates should not be informed by the outcome model (Rubin,
2007). Here, we are interested in modeling the PS and the outcome separately
through the two-step framework proposed by Kaplan and Chen (2012). The first
step involves fitting Bayesian models to multiply impute the PS and the
outcome by randomly subsampling the posterior predictive draws, and Rubin’s
combining rules are utilized as the second step to obtain the final point and
interval estimates. This method is not only computationally efficient, as it
suffices to fit the models once on the combined sample, but it also cuts the
undesirable feedback between the models as they are fitted separately.
Bayesian modeling can be performed either parametrically or non-parametrically.
#### 2.5.1 Parametric Bayes
As the first step, we employ Bayesian Generalized Linear Models to handle
multiple imputations of $\pi^{B}_{i}$ and $y_{i}$ for $i\in S$, and
$\pi^{R}_{i}$ if it is unknown for $i\in S_{R}$. Under a standard Bayesian
framework, a set of independent prior distributions are assigned to the model
parameters, and conditional on the observed data, the associated posterior
distributions are simulated through an appropriate MCMC method, such as
Metropolis–Hastings algorithm. We propose the following steps:
$\displaystyle \text{Step 1:}\quad(\gamma^{T},\phi,\beta^{T},\theta^{T},\sigma)$ $\displaystyle\sim p(\gamma)p(\phi)p(\beta)p(\theta)p(\sigma)$
$\displaystyle \text{Step 2:}\quad\pi^{R}_{i}|x_{i},\gamma,\phi$ $\displaystyle\sim Beta(\phi[\mathrm{logit}^{-1}(\gamma^{T}x_{i})],\phi[1-\mathrm{logit}^{-1}(\gamma^{T}x_{i})])$
$\displaystyle \text{Step 3:}\quad Z_{i}|x_{i},\beta$ $\displaystyle\sim Bernoulli(\mathrm{logit}^{-1}\{\beta^{T}x_{i}\})$
$\displaystyle \text{Step 4:}\quad Y_{i}|x^{*}_{i},\theta,\sigma$ $\displaystyle\sim Normal(\theta^{T}x^{*}_{i},\sigma^{2})$
where $(\gamma^{T},\phi)$, $\beta^{T}$ and $(\theta^{T},\sigma)$ are the
parameters associated with modeling $\pi^{R}_{i}$ in a Beta regression
($Step2$), $Z_{i}$ in a binary logistic regression ($Step3$) and $Y_{i}$ in a
linear regression ($Step4$), respectively, and $p(.)$ denotes a prior density
function. Note that in situations where $\pi^{R}_{i}$ is calculable for $i\in
S_{B}$, $Step2$ should be skipped, and $x_{i}$ should be replaced by
$x^{*}_{i}$ for $Step3$. Standard weak or non-informative priors for the
regression parameter models can be used (Kaplan and Chen, 2012). We also note
that $Step2$, which will be required for the estimation of $\pi^{R}_{i}$ when
not provided directly or through the availability of $d_{i}$ in $S_{B}$,
relies on a reasonably strong association between the available $x_{i}$ and
$\pi^{R}_{i}$ in order to accurately estimate $\pi^{R}_{i}$. We explore the
effect of differing degrees of this association via simulation in Sections 3.2
and 3.3.
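Given a single posterior parameter draw, $Step2$-$Step4$ amount to simulating from a Beta, a Bernoulli and a Normal distribution, respectively. A minimal sketch, with a hypothetical design matrix `X` and parameter values standing in for one MCMC draw:

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_one_draw(X, gamma, phi, beta, theta, sigma):
    """One pass through Steps 2-4: imputes pi^R_i (Beta regression),
    Z_i (logistic regression) and Y_i (linear regression) given a single
    parameter draw (gamma, phi, beta, theta, sigma), which is assumed to
    come from an MCMC sample of the posterior."""
    mu = 1.0 / (1.0 + np.exp(-X @ gamma))                  # logit^{-1}(gamma' x)
    pi_R = rng.beta(phi * mu, phi * (1.0 - mu))            # Step 2
    Z = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta)))   # Step 3
    Y = rng.normal(X @ theta, sigma)                       # Step 4
    return pi_R, Z, Y
```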
Suppose
$\Theta^{(m)T}=\left[\gamma^{(m)T},\phi^{(m)},\beta^{(m)T},\theta^{(m)T},\sigma^{(m)}\right]$
is the $m$-th unit of an $M$-sized random sample from the MCMC draws
associated with the posterior distribution of the model parameters. Then,
given that $\pi^{R}_{i}$ is known for $i\in S_{B}$, one can obtain the m-th
draw of the proposed AIPW estimator as below:
${\overline{y}}^{(m)}_{DR}=\frac{1}{\widehat{N}_{B}}\sum_{i=1}^{n_{B}}\frac{y_{i}-\theta^{(m)T}x^{*}_{i}}{\pi_{i}^{R}\exp[\beta^{(m)T}x^{*}_{i}]}+\frac{1}{\widehat{N}_{R}}\sum_{j=1}^{n_{R}}\frac{\theta^{(m)T}x^{*}_{j}}{\pi^{R}_{j}}$
(2.22)
where $\theta^{(m)T}x^{*}_{j}$ corresponds to an imputation of $y_{j}$ for the
unobserved values in the probability sample. In situations where $\pi^{R}_{i}$
is unknown for $i\in S_{B}$, the $m$-th draw of the AIPW estimator is given by
${\overline{y}}^{(m)}_{DR}=\frac{1}{\widehat{N}_{B}}\sum_{i=1}^{n_{B}}\bigg\{\frac{1+\exp[\gamma^{(m)T}x_{i}]}{\exp[\gamma^{(m)T}x_{i}]}\bigg\}\bigg\{\frac{y_{i}-\theta^{(m)T}x^{*}_{i}}{\exp[\beta^{(m)T}x_{i}]}\bigg\}+\frac{1}{\widehat{N}_{R}}\sum_{j=1}^{n_{R}}\frac{\theta^{(m)T}x^{*}_{j}}{\pi^{R}_{j}}.$
(2.23)
Having ${\overline{y}}^{(m)}_{DR}$ for all $m=1,2,...,M$, then, Rubin’s
combining rule for the point estimate (Rubin, 2004) can be employed to obtain
the final AIPW estimator as below:
${\overline{y}}_{DR}=\frac{1}{M}\sum_{m=1}^{M}{\overline{y}}^{(m)}_{DR}.$
(2.24)
If at least one of the underlying models is correctly specified, we would
expect that this estimator is approximately unbiased. The variance estimation
under the two-step Bayesian method is discussed in Section 2.6.
#### 2.5.2 Non-parametric Bayes
Despite the prominent feature of double robustness, for a given
non-probability sample, neither QR nor PM has a known modeling structure in
practice. When both working models are invalid, the AIPW estimator will be
biased, and a non-robust estimator based on PM may produce a more efficient
estimate than the AIPW (Kang et al., 2007). To show the advantage of our
modified estimator in 2.18 over that proposed by Chen et al. (2020), we employ
Bayesian Additive Regression Trees (BART) as a predictive tool for multiply
imputing the $\pi^{B}_{i}$’s as well as the $y_{i}$’s in $S$. A brief
introduction to BART has been provided in Appendix 8.2.
Suppose BART approximates a continuous outcome variable through an implicit
function $f$ as below:
$y_{i}=f(x^{*}_{i})+\epsilon_{i}\quad\forall i\in S_{B}$ (2.25)
where $\epsilon_{i}\sim N(0,\sigma^{2})$. Accordingly, one can train BART in
$S_{B}$ and multiply impute $y_{i}$ for $i\in S_{R}$ using the simulated
posterior predictive distribution. Regarding QR, we consider two situations:
first, $\pi^{R}_{i}$ is known for $i\in S_{B}$. Under this circumstance, it
suffices to model $z_{i}$ on $x_{i}^{*}$ in $S$ to estimate $\pi^{B}_{i}$ for
$i\in S_{B}$. For a binary outcome variable, BART utilizes a data augmentation
technique that maps the $\{0,1\}$ values to $\mathbb{R}$ via a _probit_ link
function. Suppose
$\Phi^{-1}[p(Z_{i}=1|x^{*}_{i})]=h(x^{*}_{i})\quad\forall i\in S$
(2.26)
where $\Phi^{-1}$ is the inverse CDF of the standard normal distribution.
Hence, using the posterior predictive draws generated by BART in 2.26,
$p(Z_{i}=1|x^{*}_{i})$ and consequently $\pi^{B}_{i}$ can be imputed multiply
for $i\in S_{B}$. For a given imputation $m$ $(m=1,2,...,M)$, one can expand
the DR estimator in 2.18 as below:
${\overline{y}}^{(m)}_{DR}=\frac{1}{\widehat{N}_{B}}\sum_{i=1}^{n_{B}}\frac{1}{\pi_{i}^{R}}\bigg\{\frac{1-\Phi[h^{(m)}(x^{*}_{i})]}{\Phi[h^{(m)}(x^{*}_{i})]}\bigg\}\big\{y_{i}-f^{(m)}(x^{*}_{i})\big\}+\frac{1}{\widehat{N}_{R}}\sum_{j=1}^{n_{R}}\frac{f^{(m)}(x^{*}_{j})}{\pi^{R}_{j}}$
(2.27)
where $f^{(m)}(.)$ and $h^{(m)}(.)$ are the constructed sum-of-trees
associated with the $m$-th MCMC draw in 2.25 and 2.26, respectively, after
training BART on the combined sample.
Secondly, in situations where $\pi^{R}_{i}$ is not available for $i\in S_{B}$,
we suggest applying BART to multiply impute the missing $\pi^{R}_{i}$’s in
$S_{B}$. Since the outcome is continuous and bounded within $(0,1)$, a _logit_
transformation of the $\pi^{R}_{i}$’s can be used as the outcome variable in
BART to map the values to $\mathbb{R}$. Such a model is presented by
$\log\left(\frac{\pi^{R}_{i}}{1-\pi^{R}_{i}}\right)=k(x_{i})+\epsilon_{i}\quad\forall i\in S_{R}$ (2.28)
where $k$ is a sum-of-trees function approximated by BART. Under this
circumstance, ${\overline{y}}_{DR}$ based on the $m$-th draw from the
posterior distribution is given by
${\overline{y}}^{(m)}_{DR}=\frac{1}{\widehat{N}_{B}}\sum_{i=1}^{n_{B}}\bigg\{\frac{1+\exp[k^{(m)}(x_{i})]}{\exp[k^{(m)}(x_{i})]}\bigg\}\bigg\{\frac{1-\Phi[h^{(m)}(x_{i})]}{\Phi[h^{(m)}(x_{i})]}\bigg\}\big\{y_{i}-f^{(m)}(x^{*}_{i})\big\}+\frac{1}{\widehat{N}_{R}}\sum_{j=1}^{n_{R}}\frac{f^{(m)}(x^{*}_{j})}{\pi^{R}_{j}}.$
(2.29)
Having ${\overline{y}}^{(m)}_{DR}$ estimated for $m=1,2,...,M$, one can
eventually use Rubin’s combining rule (Rubin, 2004) to obtain the ultimate
point estimate as in 2.24.
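The logit-transform idea in 2.28 can be sketched as a transform, fit, back-transform pattern. In the sketch below an ordinary least-squares fit stands in for the BART sum-of-trees function $k(.)$ purely for illustration, since the point is the pattern rather than the learner; all array names are hypothetical.

```python
import numpy as np

def impute_pi_R(x_R, pi_R_known, x_B):
    """Sketch of Eq. 2.28: regress logit(pi^R) on x in S_R, then predict
    and back-transform for the units of S_B. A linear least-squares fit
    stands in for the BART sum-of-trees function k(.)."""
    eta = np.log(pi_R_known / (1.0 - pi_R_known))       # logit transform
    A = np.column_stack([np.ones_like(x_R), x_R])
    coef, *_ = np.linalg.lstsq(A, eta, rcond=None)      # stand-in for BART
    eta_B = np.column_stack([np.ones_like(x_B), x_B]) @ coef
    return 1.0 / (1.0 + np.exp(-eta_B))                 # back to (0, 1)
```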
### 2.6 Variance estimation
To obtain an unbiased variance estimate for the proposed DR estimator, one
needs to account for three sources of uncertainty: (i) the uncertainty due to
estimated pseudo-weights in $S_{B}$, (ii) the uncertainty due to the predicted
outcome in both $S_{B}$ and $S_{R}$, and (iii) the uncertainty due to sampling
itself. In the following, we consider two scenarios:
#### 2.6.1 Scenario I: $\pi^{R}_{i}$ is known for $i\in S_{B}$
In this scenario the derivation of the asymptotic variance estimator for
$\widehat{\overline{y}}_{DR}$ is straightforward and follows Chen et al.
(2020). It is given by
$\widehat{Var}(\widehat{\overline{y}}_{DR})=\widehat{V}_{1}+\widehat{V}_{2}-\widehat{B}(\widehat{V}_{2})$
(2.30)
where $V_{1}=Var(\widehat{\overline{y}}_{PM})$ under the sampling design of
$S_{R}$. $V_{2}$ is the variance of
$\widehat{\overline{y}}_{DR}-\widehat{\overline{y}}_{U}$ under the joint
sampling design of $S_{R}$ and the PS model. This quantity can be estimated
from $S_{B}$ as below:
$\widehat{V}_{2}=\frac{1}{N^{2}}\sum_{i=1}^{n_{B}}\left[\frac{1-\widehat{\pi}^{B}_{i}}{(\widehat{\pi}^{B}_{i})^{2}}\right]\{y_{i}-m(x^{*}_{i};\widehat{\theta})\}^{2}.$
(2.31)
Finally, $\widehat{B}(\widehat{V}_{2})$ corrects for the bias in $\widehat{V}_{2}$ under the PM,
and is given by
$\widehat{B}(\widehat{V}_{2})=\frac{1}{N^{2}}\sum_{i=1}^{n}\left[\frac{Z_{i}}{\widehat{\pi}^{B}_{i}}-\frac{1-Z_{i}}{\pi^{R}_{i}}\right]\widehat{\sigma}^{2}_{i}$
(2.32)
where $\widehat{\sigma}^{2}_{i}=\widehat{Var}(y_{i}|x_{i})$. Since the
quantity in 2.32 tends to zero asymptotically under the QR model, the derived
variance estimator in 2.30 is DR.
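Eqs. 2.30-2.32 combine three plug-in quantities. A minimal sketch, with $\widehat{V}_{1}$ supplied externally (it is the design-based variance of the PM estimator under $S_{R}$) and hypothetical input arrays:

```python
import numpy as np

def dr_variance_scenario1(y_B, m_B, pi_B_hat_B, sigma2, pi_B_hat_all,
                          pi_R_all, Z, V1, N):
    """Eqs. 2.30-2.32: V2 from the S_B residuals, its bias correction
    under the PM, and the final DR variance estimate. V1 (the design
    variance of the PM estimator under S_R) is supplied externally."""
    # Eq. 2.31: residual-based component from S_B
    V2 = np.sum((1.0 - pi_B_hat_B) / pi_B_hat_B**2 * (y_B - m_B)**2) / N**2
    # Eq. 2.32: bias correction over the combined sample
    B = np.sum((Z / pi_B_hat_all - (1 - Z) / pi_R_all) * sigma2) / N**2
    return V1 + V2 - B                                   # Eq. 2.30
```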
#### 2.6.2 Scenario II: $\pi_{i}^{R}$ is unknown for $i\in S_{B}$
To estimate the variance of $\widehat{\overline{y}}_{DR}$ in 2.18 under the
GLM, we employ the bootstrap repeated replication method proposed by Rao and
Wu (1988). For a given replication $b$ $(b=1,2,...,B)$, we draw replicated
bootstrap subsamples, $S_{R}^{(b)}$ and $S_{B}^{(b)}$, of sizes $n_{R}-1$ and
$n_{B}-1$ from $S_{R}$ and $S_{B}$, respectively. The sampling weights in
$S_{R}^{(b)}$ are updated as below:
$w^{(b)}_{i}=w_{i}\frac{n_{R}}{n_{R}-1}h_{i}\quad\forall i\in S^{(b)}_{R}$
(2.33)
where $h_{i}$ is the number of times the $i$-th unit has been repeated in
$S_{R}^{(b)}$. Let us assume $\widehat{\overline{y}}^{(b)}_{DR}$ is the DR
estimate based on the $b$-th combined bootstrap sample, $S^{(b)}$, using Eq.
2.7. The variance estimator is given by
$\widehat{Var}(\widehat{\overline{y}}_{DR})=\frac{1}{B}\sum_{b=1}^{B}\left[\widehat{\overline{y}}^{(b)}_{DR}-\overline{\overline{y}}_{DR}\right]^{2}$
(2.34)
where
$\overline{\overline{y}}_{DR}=\sum_{b=1}^{B}\widehat{\overline{y}}^{(b)}_{DR}/B$.
Note that when $S_{R}$ and $S_{B}$ are clustered, which is the case in our
application, bootstrap subsamples are selected from the primary sampling units
(PSU), and $n_{R}$ and $n_{B}$ are replaced by their respective PSU sizes.
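The Rao-Wu replication scheme in 2.33-2.34 can be sketched as below. Here `estimator` is a hypothetical user-supplied point estimator that takes the full $S_{R}$ data with replicate weights (units with zero repetition weight simply drop out) together with the bootstrap $S_{B}$ subsample.

```python
import numpy as np

rng = np.random.default_rng(1)

def rao_wu_variance(estimator, y_R, w_R, y_B, B=500):
    """Rao-Wu (1988) bootstrap sketch for Eqs. 2.33-2.34: resample n-1
    units with replacement from each sample, rescale the S_R weights by
    n_R/(n_R - 1) times the repetition count h_i, and take the variance
    of the replicate point estimates."""
    n_R, n_B = len(y_R), len(y_B)
    reps = np.empty(B)
    for b in range(B):
        idx_R = rng.integers(0, n_R, size=n_R - 1)     # S_R^(b)
        idx_B = rng.integers(0, n_B, size=n_B - 1)     # S_B^(b)
        h = np.bincount(idx_R, minlength=n_R)          # repetition counts h_i
        w_b = w_R * (n_R / (n_R - 1.0)) * h            # Eq. 2.33
        reps[b] = estimator(y_R, w_b, y_B[idx_B])      # replicate estimate
    return np.mean((reps - reps.mean()) ** 2)          # Eq. 2.34
```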
To estimate the variance of $\widehat{\overline{y}}_{DR}$ under a Bayesian
framework, whether parametric or non-parametric, we treat $y_{i}$ for $i\in
S_{R}$, and $\pi^{R}_{i}$ and $e_{i}$ for $i\in S_{B}$, as missing values in
Eq. 2.18 and multiply impute these quantities using the Markov chain Monte
Carlo (MCMC) sequence of the posterior predictive distribution generated by
BART. For $M$ randomly selected MCMC draws, one can estimate
$\widehat{\overline{y}}^{(m)}_{DR}$ for $m=1,2,...,M$ based on Eq. 2.18.
Following Rubin’s combining rule for variance estimation under multiple
imputation, the final variance estimate of $\widehat{\overline{y}}_{DR}$ is
given as below:
$\widehat{Var}(\widehat{\overline{y}}_{DR})=\overline{V}_{W}+\left(1+\frac{1}{M}\right)V_{B}$
(2.35)
where
$\overline{V}_{W}=\sum_{m=1}^{M}\widehat{Var}(\widehat{\overline{y}}^{(m)}_{DR})/M$,
$V_{B}=\sum_{m=1}^{M}(\widehat{\overline{y}}^{(m)}_{DR}-\overline{\overline{y}}_{DR})^{2}/(M-1)$
and
$\overline{\overline{y}}_{DR}=\sum_{m=1}^{M}\widehat{\overline{y}}^{(m)}_{DR}/M$.
We have shown in the Appendix 8.1 that the within-imputation component can be
approximated by
$\widehat{Var}(\widehat{\overline{y}}^{(m)}_{DR})\approx\frac{1}{\widehat{N}^{2}_{B}}\sum_{i=1}^{n_{B}}\frac{var(y_{i})}{\left(\widehat{\pi}^{B}_{i}\right)^{2}}+\frac{1}{\widehat{N}^{2}_{R}}var\left(\frac{1}{\pi^{R}_{i}}\right)\bigg\{\sum_{i=1}^{n_{R}}\left(\widehat{y}^{(m)}_{i}\right)^{2}+n_{R}\left(\frac{\widehat{t}_{R}}{\widehat{N}_{R}}\right)^{2}-2\sum_{i=1}^{n_{R}}\widehat{y}^{(m)}_{i}\bigg\}$
(2.36)
where $\widehat{t}_{R}=\sum_{i=1}^{n_{R}}\widehat{y}^{(m)}_{i}/\pi^{R}_{i}$. Note that
when $S_{R}$ or $S_{B}$ is clustered, under a Bayesian framework, it is
important to fit multilevel models to obtain unbiased variance (Zhou et al.,
2020). In addition, one needs to account for the intraclass correlation across
the sample units in $\widehat{Var}(\widehat{\overline{y}}^{(m)}_{DR})$ for
$m=1,2,...,M$. Further, one may use the extension of BART with random
intercept to properly specify the working models under a cluster sampling
design (Tan et al., 2016).
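Rubin's combining rules in 2.24 and 2.35 reduce to a few lines given the $M$ per-draw point and variance estimates:

```python
import numpy as np

def rubin_combine(point_draws, var_draws):
    """Rubin's combining rules: pooled point estimate (Eq. 2.24) and
    total variance (Eq. 2.35) from M multiply-imputed estimates."""
    M = len(point_draws)
    point = np.mean(point_draws)                          # Eq. 2.24
    V_W = np.mean(var_draws)                              # within-imputation
    V_B = np.sum((point_draws - point)**2) / (M - 1)      # between-imputation
    return point, V_W + (1.0 + 1.0 / M) * V_B             # Eq. 2.35
```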
## 3 Simulation Study
Three simulations are studied in this section to assess the performance of our
proposed methods and associated variance estimators in terms of bias magnitude
and other repeated sampling properties. In Simulation 1, we mimic the
simulation design in Chen et al. (2020) to compare the proposed methods. Here
the probability of selection in the probability sample is a fixed linear
combination of a subset of the covariates that govern selection into the non-
probability sample. In Simulation 2 we separate the design variable for the
probability sample and the selection covariate for the non-probability sample
in order to consider different associations between these values, as well as
misspecification of the functional form of the means to consider the
advantages of BART in modeling. Simulation 3 extends Simulation 2 to allow for
cluster sampling in the probability sample to better match the design of the
National Household Transportation Survey in the application.
### 3.1 Simulation I
The design of our first simulation is inspired by the one implemented in Chen
et al. (2020). For all three studies, the non-probability samples are given a
random selection mechanism with unequal probabilities, but it is later assumed
that these selection probabilities are unknown at the stage of analysis, and
the goal is to adjust for the selection bias using a parallel probability
sample whose sampling mechanism is known. We conduct the simulation under both
asymptotic frequentist and two-step Bayesian frameworks. Consider a finite
population of size $N=10^{6}$ with $z=\\{z_{1},z_{2},z_{3},z_{4}\\}$ being a
set of auxiliary variables generated as follows:
$z_{1}\sim Ber(p=0.5)\qquad z_{2}\sim U(0,2)\qquad z_{3}\sim Exp(\mu=1)\qquad z_{4}\sim\chi^{2}_{(4)}$ (3.1)
and $x=\\{x_{1},x_{2},x_{3},x_{4}\\}$ is defined as a function of $z$ as
below:
$x_{1}=z_{1}\qquad x_{2}=z_{2}+0.3z_{1}\qquad x_{3}=z_{3}+0.2(x_{1}+x_{2})\qquad x_{4}=z_{4}+0.1(x_{1}+x_{2}+x_{3})$ (3.2)
Given $x$, a continuous outcome variable $y$ is defined by
$y_{i}=2+x_{1i}+x_{2i}+x_{3i}+x_{4i}+\sigma\epsilon_{i}$ (3.3)
where $\epsilon_{i}\sim N(0,1)$, and $\sigma$ is defined such that the
correlation between $y_{i}$ and $\sum_{k=1}^{4}x_{ki}$ equals $\rho$, which
takes one of the values $\\{0.2,0.5,0.8\\}$. Further, associated with the
design of $S_{B}$, a set of selection probabilities are assigned to the
population units through the following logistic model:
$\log\left(\frac{\pi^{B}_{i}}{1-\pi^{B}_{i}}\right)=\gamma_{0}+0.1x_{1i}+0.2x_{2i}+0.1x_{3i}+0.2x_{4i}$
where $\gamma_{0}$ is determined such that $\sum_{i=1}^{N}\pi^{B}_{i}=n_{B}$.
For $S_{R}$, we assume $\pi^{R}_{i}\propto\gamma_{1}+z_{3i}$ where
$\gamma_{1}$ is obtained such that
$\max\{\pi^{R}_{i}\}/\min\{\pi^{R}_{i}\}=50$. Hence, $\pi^{R}_{i}$ is
assumed to be known for $i\in S_{B}$ as long as $z_{3}$ is observed in
$S_{B}$. Using these measures of size, we repeatedly draw pairs of samples of
sizes $n_{R}=100$ and $n_{B}=1,000$ associated with $S_{R}$ and $S_{B}$ from
$U$ through a Poisson sampling method. Note that units in both $S_{R}$ and
$S_{B}$ are independently selected, and $n_{R}\ll n_{B}$, which might be the
case in a Big Data setting. Extensions with $n_{B}=100$ and $n_{B}=10,000$ for
both frequentist and Bayesian methods are provided in Appendix 8.3.
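The population generation in Eqs. 3.1-3.3 can be sketched as below, with a smaller $N$ for speed and a placeholder $\sigma$ rather than the value calibrated to a target correlation $\rho$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10**5  # smaller than the paper's 10^6, for speed

# auxiliary variables z (Eq. 3.1)
z1 = rng.binomial(1, 0.5, N)
z2 = rng.uniform(0, 2, N)
z3 = rng.exponential(1.0, N)
z4 = rng.chisquare(4, N)

# derived covariates x (Eq. 3.2)
x1 = z1
x2 = z2 + 0.3 * z1
x3 = z3 + 0.2 * (x1 + x2)
x4 = z4 + 0.1 * (x1 + x2 + x3)

# outcome (Eq. 3.3); sigma here is a placeholder, whereas the paper
# calibrates it so that corr(y, x1 + x2 + x3 + x4) equals rho
sigma = 1.0
y = 2 + x1 + x2 + x3 + x4 + sigma * rng.normal(size=N)
```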
Once $S_{B}$ and $S_{R}$ are drawn from $U$, we assume that the
$\pi^{B}_{i}$’s for $i\in S_{B}$ and $y_{i}$’s for $i\in S_{R}$ are
unobserved. The simulation is then iterated $K=5,000$ times, where the
bias-adjusted mean, standard error (SE), and associated 95% confidence interval (CI)
for the mean of $y$ are estimated in each iteration. Under the frequentist
approach, the AIPW point estimates are obtained by simultaneously solving the
estimating equations in 2.21. In addition, the proposed two-step method is
used to derive the AIPW point estimates under the parametric Bayes. Also, to
estimate the variance, we use the DR asymptotic method proposed by Chen et al.
(2020), and the conditional variance formula in Eq. 2.35 under the frequentist
and Bayesian approaches, respectively. For the latter, we set flat priors to
the model parameters, and simulate the posteriors using $1,000$ MCMC draws
after omitting an additional $1,000$ draws as the burn-in period. We then get
a random sample of size $M=200$ from the posterior draws to obtain the point
and variance estimates.
To evaluate the repeated sampling properties of the competing methods, relative
bias (rBias), relative root mean square error (rMSE), the nominal coverage
rate of 95% CIs (crCI) and SE ratio (rSE) are calculated as below:
$\displaystyle rBias(\widehat{\overline{y}}_{DR})$ $\displaystyle=100\times\frac{1}{K}\sum_{k=1}^{K}\left(\widehat{\overline{y}}^{(k)}_{DR}-\overline{y}_{U}\right)/\overline{y}_{U}$ (3.4)
$\displaystyle rMSE(\widehat{\overline{y}}_{DR})$ $\displaystyle=100\times\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\widehat{\overline{y}}^{(k)}_{DR}-\overline{y}_{U}\right)^{2}}/\overline{y}_{U}$ (3.5)
$\displaystyle crCI(\widehat{\overline{y}}_{DR})$ $\displaystyle=100\times\frac{1}{K}\sum_{k=1}^{K}I\left(\big|\widehat{\overline{y}}^{(k)}_{DR}-\overline{y}_{U}\big|<z_{0.975}\sqrt{var(\widehat{\overline{y}}^{(k)}_{DR})}\right)$ (3.6)
$\displaystyle rSE(\widehat{\overline{y}}_{DR})$ $\displaystyle=\frac{1}{K}\sum_{k=1}^{K}\sqrt{var(\widehat{\overline{y}}^{(k)}_{DR})}\bigg/\sqrt{\frac{1}{K-1}\sum_{k=1}^{K}\left(\widehat{\overline{y}}^{(k)}_{DR}-\overline{\overline{y}}_{DR}\right)^{2}}$ (3.7)
where $\widehat{\overline{y}}^{(k)}_{DR}$ denotes the DR adjusted sample mean
from iteration $k$,
$\overline{\overline{y}}_{DR}=\sum_{k=1}^{K}\widehat{\overline{y}}^{(k)}_{DR}/K$,
$\overline{y}_{U}$ is the finite population true mean, and $var(.)$ represents
the variance estimate of the adjusted mean based on the sample. Finally, we
consider model misspecification to test the DR property by removing $x_{4}$
from the predictors of the working model.
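The four metrics in Eqs. 3.4-3.7 can be computed directly from the $K$ replicate estimates and their estimated variances:

```python
import numpy as np

def repeated_sampling_metrics(est, var, truth):
    """Eqs. 3.4-3.7 over K simulation replicates: relative bias, relative
    root-MSE, 95% CI coverage rate, and the SE ratio (mean estimated SE
    over the empirical SE of the replicate estimates)."""
    rBias = 100 * np.mean(est - truth) / truth
    rMSE = 100 * np.sqrt(np.mean((est - truth)**2)) / truth
    crCI = 100 * np.mean(np.abs(est - truth) < 1.959964 * np.sqrt(var))
    rSE = np.mean(np.sqrt(var)) / np.std(est, ddof=1)
    return rBias, rMSE, crCI, rSE
```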
Table 1: Comparing the performance of the bias adjustment methods and
associated asymptotic variance estimator under the frequentist approach in the
first simulation study for $\rho=\\{0.2,0.5,0.8\\}$
| $\rho=0.2$ | | $\rho=0.5$ | | $\rho=0.8$
---|---|---|---|---|---
Method | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE
Probability sample ($S_{R}$) | | | | |
Unweighted | 8.528 | 19.248 | 92.6 | 1.009 | | 8.647 | 11.065 | 77.4 | 1.018 | | 8.682 | 9.719 | 50.9 | 1.020
Fully weighted | -0.029 | 20.276 | 94.7 | 1.001 | | 0.006 | 8.035 | 95.1 | 1.010 | | 0.015 | 5.008 | 94.9 | 1.008
Non-probability sample ($S_{B}$) | | | | |
Unweighted | 31.742 | 32.230 | 0.0 | 1.009 | | 31.937 | 32.035 | 0.0 | 1.012 | | 31.996 | 32.049 | 0.0 | 1.013
Fully weighted | 0.127 | 6.587 | 95.4 | 1.013 | | 0.078 | 2.583 | 95.7 | 1.014 | | 0.061 | 1.554 | 95.4 | 1.012
Non-robust adjustment | | | | |
Model specification: True | | | | |
PAPW | -1.780 | 8.088 | 97.0 | 1.107 | | -1.906 | 4.734 | 95.7 | 1.103 | | -1.947 | 4.186 | 94.0 | 1.100
IPSW | -3.054 | 10.934 | 97.2 | 1.305 | | -3.134 | 8.145 | 95.2 | 1.173 | | -3.160 | 7.778 | 92.4 | 1.067
PM | 0.490 | 7.577 | 95.2 | 1.007 | | 0.190 | 4.668 | 94.6 | 0.991 | | 0.095 | 4.204 | 94.6 | 0.985
Model specification: False | | | | |
PAPW | 26.338 | 27.089 | 3.1 | 1.112 | | 26.434 | 26.618 | 0.0 | 1.123 | | 26.461 | 26.580 | 0.0 | 1.128
IPSW | 28.269 | 28.917 | 0.6 | 1.021 | | 28.474 | 28.648 | 0.0 | 1.018 | | 28.536 | 28.654 | 0.0 | 1.014
PM | 28.093 | 28.750 | 0.6 | 1.022 | | 28.315 | 28.494 | 0.0 | 1.022 | | 28.382 | 28.505 | 0.0 | 1.021
Doubly robust adjustment | | | | |
Model specification: QR–True, PM–True | | | | |
AIPW–PAPW | 0.238 | 8.070 | 95.2 | 1.017 | | 0.100 | 4.787 | 95.0 | 0.996 | | 0.056 | 4.235 | 94.6 | 0.987
AIPW–IPSW | 0.105 | 7.861 | 95.1 | 1.019 | | 0.053 | 4.737 | 94.8 | 0.996 | | 0.036 | 4.222 | 94.6 | 0.987
Model specification: QR–True, PM–False | | | | |
AIPW–PAPW | 0.311 | 8.197 | 95.4 | 1.021 | | 0.172 | 4.988 | 95.0 | 1.013 | | 0.127 | 4.460 | 95.2 | 1.011
AIPW–IPSW | 0.222 | 7.962 | 95.5 | 1.024 | | 0.170 | 4.901 | 95.4 | 1.019 | | 0.152 | 4.405 | 95.3 | 1.018
Model specification: QR–False, PM–True | | | | |
AIPW–PAPW | 0.877 | 13.362 | 96.9 | 1.028 | | 0.327 | 6.089 | 95.8 | 1.027 | | 0.154 | 4.523 | 95.2 | 1.006
AIPW–IPSW | 0.609 | 12.532 | 96.6 | 1.025 | | 0.232 | 5.842 | 95.5 | 1.022 | | 0.113 | 4.464 | 95.3 | 1.003
Model specification: QR–False, PM–False | | | | |
AIPW–PAPW | 28.301 | 28.995 | 1.0 | 1.024 | | 28.392 | 28.579 | 0.0 | 1.021 | | 28.419 | 28.546 | 0.0 | 1.018
AIPW–IPSW | 28.104 | 28.762 | 0.7 | 1.024 | | 28.313 | 28.493 | 0.0 | 1.023 | | 28.376 | 28.500 | 0.0 | 1.022
* PAPW: propensity adjusted probability weighting; IPSW: inverse propensity score weighting; QR: quasi-randomization; PM: prediction model; AIPW: augmented inverse propensity weighting.
Table 2: Comparing the performance of the bias adjustment methods and
associated variance estimator under the two-step parametric Bayesian approach
in the first simulation study for $\rho=\\{0.2,0.5,0.8\\}$
| $\rho=0.2$ | | $\rho=0.5$ | | $\rho=0.8$
---|---|---|---|---|---
Method | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE
Probability sample ($S_{R}$) | | | | |
Unweighted | 8.528 | 19.248 | 92.6 | 1.009 | | 8.647 | 11.065 | 77.4 | 1.018 | | 8.682 | 9.719 | 50.9 | 1.020
Fully weighted | -0.029 | 20.276 | 94.7 | 1.001 | | 0.006 | 8.035 | 95.1 | 1.010 | | 0.015 | 5.008 | 94.9 | 1.008
Non-probability sample ($S_{B}$) | | | | |
Unweighted | 31.620 | 32.106 | 0.0 | 1.014 | | 31.906 | 32.003 | 0.0 | 1.015 | | 31.993 | 32.045 | 0.0 | 1.017
Fully weighted | 0.026 | 6.615 | 95.3 | 1.010 | | 0.052 | 2.604 | 95.2 | 1.007 | | 0.059 | 1.564 | 95.2 | 1.006
Non-robust adjustment | | | | |
Model specification: True | | | | |
PAPW | -1.846 | 8.148 | 96.3 | 1.081 | | -1.890 | 4.749 | 96.9 | 1.163 | | -1.906 | 4.195 | 96.6 | 1.200
IPSW | 0.113 | 7.566 | 96.5 | 1.076 | | 0.117 | 4.302 | 97.7 | 1.140 | | 0.117 | 3.759 | 97.9 | 1.164
PM | 0.385 | 7.534 | 95.2 | 1.027 | | 0.151 | 4.644 | 95.1 | 1.001 | | 0.078 | 4.190 | 95.0 | 0.989
Model specification: False | | | | |
PAPW | 26.290 | 27.041 | 2.3 | 1.051 | | 26.499 | 26.687 | 0.0 | 1.071 | | 26.562 | 26.684 | 0.0 | 1.083
IPSW | 28.151 | 28.784 | 0.5 | 1.038 | | 28.446 | 28.612 | 0.0 | 1.025 | | 28.535 | 28.647 | 0.0 | 1.015
PM | 27.981 | 28.641 | 0.8 | 1.040 | | 28.291 | 28.472 | 0.0 | 1.025 | | 28.384 | 28.510 | 0.0 | 1.015
Doubly robust adjustment | | | | |
Model specification: QR–True, PM–True | | | | |
AIPW–PAPW | 0.115 | 8.093 | 96.9 | 1.097 | | 0.057 | 4.764 | 97.1 | 1.121 | | 0.037 | 4.219 | 97.2 | 1.130
AIPW–IPSW | 0.009 | 7.803 | 96.6 | 1.083 | | 0.019 | 4.704 | 96.7 | 1.106 | | 0.020 | 4.206 | 97.0 | 1.114
Model specification: QR–True, PM–False | | | | |
AIPW–PAPW | -0.016 | 7.930 | 97.2 | 1.108 | | -0.080 | 4.444 | 97.9 | 1.166 | | -0.098 | 3.842 | 98.1 | 1.193
AIPW–IPSW | -0.079 | 7.648 | 96.8 | 1.095 | | -0.074 | 4.411 | 97.7 | 1.151 | | -0.069 | 3.867 | 97.9 | 1.175
Model specification: QR–False, PM–True | | | | |
AIPW–PAPW | 0.557 | 7.693 | 96.4 | 1.086 | | 0.214 | 4.669 | 96.8 | 1.092 | | 0.105 | 4.195 | 96.6 | 1.090
AIPW–IPSW | 0.392 | 7.526 | 96.0 | 1.067 | | 0.155 | 4.637 | 96.3 | 1.077 | | 0.080 | 4.189 | 96.4 | 1.078
Model specification: QR–False, PM–False | | | | |
AIPW–PAPW | 28.167 | 28.864 | 1.4 | 1.096 | | 28.359 | 28.549 | 0.0 | 1.082 | | 28.416 | 28.548 | 0.0 | 1.068
AIPW–IPSW | 27.990 | 28.647 | 1.0 | 1.069 | | 28.289 | 28.471 | 0.0 | 1.059 | | 28.379 | 28.506 | 0.0 | 1.049
* PAPW: propensity adjusted probability weighting; IPSW: inverse propensity score weighting; QR: quasi-randomization; PM: prediction model; AIPW: augmented inverse propensity weighting.
Table 1 summarizes the results of the first simulation study under the
frequentist approach. As illustrated, unweighted estimates of the population
mean are biased in both $S_{R}$ and $S_{B}$. For the non-robust estimators,
when the working model is valid, it seems that PM outperforms QR consistently
in terms of bias correction across different $\rho$ values. While PAPW works
slightly better than IPSW with respect to bias, when the QR model is true, the
latter tends to overestimate the variance slightly according to the values of
rSE. In addition, the smaller value of rMSE indicates that PAPW is more
efficient than IPSW. For the PM, both crCI and rSE reflect accurate estimates
of the variance for all values of $\rho$. When the working model is incorrect,
point estimates associated with both QR and PM are biased, but the variance
remains unbiased. These findings hold across all three values of $\rho$.
For the DR methods, it is evident that estimates based on both PAPW and IPSW
remain unbiased when at least one of the PM or QR models holds. Also, the
values of crCI and rSE reveal that the asymptotic variance estimator is DR for
both methods. Comparing the rMSE values, the AIPW estimate based on IPSW is
slightly more efficient than the one based on PAPW. While the variance
estimates remain unbiased under the false-false model specification status,
point estimates are severely biased. Finally, the performance of both AIPW
estimators improves with respect to bias reduction especially when the QR
model is misspecified.
For the Bayesian approach, the simulation results are displayed in Table 2.
Note that we are no longer able to use the PMLE approach. Instead, we apply
the PAPP method assuming that $\pi^{R}_{i}$ is unknown for $i\in S_{B}$. As
illustrated, PAPP outperforms all the non-robust methods with respect to bias.
Surprisingly, the magnitude of bias is even smaller in the Bayesian PAPP than
the QR methods examined under the frequentist framework. In addition,
estimates under the Bayesian approach are slightly more efficient than those
obtained under the frequentist methods. While variance is approximately
unbiased for $\rho=0.2$, there is evidence that PM and QR increasingly
underestimate and overestimate the true variance, respectively, as the value
of $\rho$ increases. Regarding the DR methods, the Bayesian and frequentist
methods yield similar results. The DR property holds for all values of $\rho$,
when at least one of the working models is correctly specified.
### 3.2 Simulation II
In the previous simulation study, we violated the ignorability assumption in
order to misspecify the working model by dropping a key auxiliary variable.
Now, we focus on a situation where the models are misspecified with respect to
the functional form of their conditional means. To this end, we consider
non-linear associations and two-way interactions in constructing the outcome
variables as well as the selection probabilities. This also allows us to
examine the flexibility of BART as a non-parametric method when the true
functional form of the underlying models is unknown. In addition, to simulate
a more realistic situation, this time, two separate sets of auxiliary
variables are generated: $D$ associated with the design of $S_{R}$, and $X$
associated with the design of $S_{B}$. However, we allow the two variables to
be correlated through a bivariate Gaussian distribution as below:
$\begin{pmatrix}d\\ x\end{pmatrix}\sim MVN\left(\begin{pmatrix}0\\ 0\end{pmatrix},\begin{pmatrix}1&\rho\\ \rho&1\end{pmatrix}\right)$ (3.8)
Note that $\rho$ controls how strongly the sampling design of $S_{R}$ is
associated with that of $S_{B}$, which in turn controls the quality of our
assumption that $\pi^{R}_{i}$ can be well estimated by $x_{i}$ ($Step2$ in
Section 2.5.1). In addition, the values of $d_{i}$ can be either observed or
unobserved for $i\in S_{B}$. In this simulation, we set $\rho=0.2$, but later
we check other values ranging from $0$ to $0.9$ as well.
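As a concrete illustration, the correlated pair $(d,x)$ in Eq. 3.8 can be drawn without a multivariate-normal routine: if $d$ and an independent standard normal $z$ are combined as $x=\rho d+\sqrt{1-\rho^{2}}\,z$, the pair has the stated unit variances and correlation $\rho$. The sketch below is a minimal pure-Python stand-in (the helper names are ours, not the authors' code):

```python
# Minimal sketch: draw (d, x) with corr(d, x) = rho as in Eq. 3.8.
import math
import random

def draw_d_x(n, rho, seed=0):
    """Draw n correlated standard-normal pairs (d, x) with corr(d, x) = rho."""
    rng = random.Random(seed)
    d = [rng.gauss(0, 1) for _ in range(n)]
    z = [rng.gauss(0, 1) for _ in range(n)]
    # x = rho*d + sqrt(1 - rho^2)*z has unit variance and corr(d, x) = rho.
    x = [rho * di + math.sqrt(1 - rho**2) * zi for di, zi in zip(d, z)]
    return d, x

def corr(a, b):
    """Empirical Pearson correlation (population denominators)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    va = sum((u - ma) ** 2 for u in a) / n
    vb = sum((v - mb) ** 2 for v in b) / n
    return cov / math.sqrt(va * vb)

d, x = draw_d_x(100_000, rho=0.2)
```

With $n=100{,}000$ draws, the empirical correlation is close to the nominal $\rho=0.2$.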
To generate the outcome variable in $U$, we consider the following non-linear
model:
$y_{i}=2f_{k}(x_{i})-d^{2}_{i}+0.5x_{i}d_{i}+\sigma\epsilon_{i}$ (3.9)
where $\epsilon_{i}\sim N(0,1)$, and $\sigma$ is determined such that the
correlation between $y_{i}$ and $2f_{k}(x_{i})-d_{i}^{2}+0.5x_{i}d_{i}$ equals
$0.5$ for $i\in U$. The function $f_{k}(.)$ is assumed to take one of the
following forms:
$\mathrm{SIN}{:}\ f_{1}(x)=\sin(x)\qquad\mathrm{EXP}{:}\ f_{2}(x)=\exp(x/2)\qquad\mathrm{SQR}{:}\ f_{3}(x)=x^{2}/3$ (3.10)
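The calibration of $\sigma$ in Eq. 3.9 has a closed form: writing $s_{i}=2f_{k}(x_{i})-d_{i}^{2}+0.5x_{i}d_{i}$, independence of $\epsilon_{i}$ gives $\mathrm{corr}(y,s)=\mathrm{sd}(s)/\sqrt{\mathrm{var}(s)+\sigma^{2}}$, so a target correlation of $0.5$ implies $\sigma=\mathrm{sd}(s)\sqrt{3}$. A minimal sketch under the SIN scenario (function and variable names are ours; the population size is illustrative):

```python
# Sketch: choose sigma in Eq. 3.9 so corr(y, signal) = 0.5 over the population.
import math
import random

def calibrate_sigma(signal, target_corr=0.5):
    """sigma = sd(signal) * sqrt(1/target_corr^2 - 1); 0.5 gives sd * sqrt(3)."""
    n = len(signal)
    m = sum(signal) / n
    sd = math.sqrt(sum((s - m) ** 2 for s in signal) / n)
    return sd * math.sqrt(1.0 / target_corr**2 - 1.0)

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    va = sum((u - ma) ** 2 for u in a) / n
    vb = sum((v - mb) ** 2 for v in b) / n
    return cov / math.sqrt(va * vb)

rng = random.Random(1)
# Illustrative population using f_1(x) = sin(x) from Eq. 3.10 and rho = 0.2.
d = [rng.gauss(0, 1) for _ in range(50_000)]
x = [0.2 * di + math.sqrt(1 - 0.04) * rng.gauss(0, 1) for di in d]
signal = [2 * math.sin(xi) - di**2 + 0.5 * xi * di for xi, di in zip(x, d)]
sigma = calibrate_sigma(signal)
y = [s + sigma * rng.gauss(0, 1) for s in signal]
```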
We then consider an informative sampling strategy with unequal probabilities
of inclusion, where the selection mechanism of $S_{B}$ and $S_{R}$ depends on
$x$ and $d$, respectively. Thus, each $i\in U$ is assigned two values
corresponding to the probabilities of selection in $S_{R}$ and $S_{B}$ through
a logistic function as below:
$\displaystyle\pi^{R}(d_{i})=P(\delta^{R}_{i}=1|d_{i})=\frac{\exp\{\gamma_{0}+0.2d^{2}_{i}\}}{1+\exp\{\gamma_{0}+0.2d^{2}_{i}\}}$
$\displaystyle\pi_{k}^{A}(x_{i})=P_{k}(\delta^{A}_{i}=1|x_{i})=\frac{\exp\{\gamma_{1}+f_{k}(x_{i})\}}{1+\exp\{\gamma_{1}+f_{k}(x_{i})\}}$ (3.11)
where $\delta^{R}_{i}$ and $\delta^{A}_{i}$ are the indicators of being
selected in $S_{R}$ and $S_{B}$, respectively.
Associated with $S_{R}$ and $S_{B}$, independent samples of size $n_{R}=100$
and $n_{A}=1,000$ were selected randomly from $U$ with Poisson sampling at the
first stage and simple random sampling at the second stage. The sample size
per cluster, $n_{\alpha}$, was $1$ and $50$ for $S_{R}$ and $S_{B}$,
respectively. The model intercepts, $\gamma_{0}$ and $\gamma_{1}$ in Eq. 3.11,
are obtained such that $\sum_{i=1}^{N}\pi^{R}_{i}=n_{R}$ and
$\sum_{i=1}^{N}\pi^{A}_{i}=n_{A}$. We restrict this simulation to Bayesian
analysis based on the proposed PAPW and PAPP methods, focusing on how well the
non-parametric Bayes performs relative to the parametric Bayes when the true
structure of both underlying models is assumed to be unknown. The rest
of the simulation design is similar to that defined in Simulation I, except
for the way we specify a working model. This is done by including only the
main and linear effects of $X$ and $D$ in the PM model, and the main and
linear effect of $X$ in the QR model. BART’s performance is examined under the
assumption that the true functional form of both QR and PM models is unknown,
and thus, only main effects are included in BART.
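The intercepts $\gamma_{0}$ and $\gamma_{1}$ in Eq. 3.11 have no closed form, but since the expected sample size $\sum_{i}\pi_{i}$ is monotone increasing in the intercept, each can be recovered numerically. The sketch below uses a simple bisection for $\gamma_{0}$ (the paper does not specify a root-finding method; function names and the population size are ours):

```python
# Sketch: find gamma_0 so that the expected sample size sum_i pi_i equals n_R.
import math
import random

def expit(t):
    return 1.0 / (1.0 + math.exp(-t))

def calibrate_intercept(linpred, target_n, lo=-20.0, hi=20.0, iters=100):
    """Bisection: sum_i expit(gamma + linpred_i) is increasing in gamma."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if sum(expit(mid + l) for l in linpred) < target_n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

rng = random.Random(2)
N, n_R = 20_000, 100  # illustrative population and target sample sizes
d = [rng.gauss(0, 1) for _ in range(N)]
# Linear predictor of the S_R selection model in Eq. 3.11, without intercept.
linpred_R = [0.2 * di**2 for di in d]
gamma_0 = calibrate_intercept(linpred_R, n_R)
pi_R = [expit(gamma_0 + l) for l in linpred_R]
```

The same routine applied to $f_{k}(x_{i})$ yields $\gamma_{1}$.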
The findings of this simulation for the two-step Bayesian approach with
$1,000$ MCMC draws and $M=200$ are exhibited numerically in Table 3. Regarding
the non-robust methods, both QR and PM estimators show unbiased results across
the three defined functions, i.e. SIN, EXP and SQR, as long as the working GLM
is valid, with the minimum value of rBias associated with the PAPP method.
According to the rSE values, there is evidence that PAPW and PAPP overestimate
the variance, and PM underestimates the variance to some degree, especially
under the EXP and SQR scenarios. When the specified GLM is wrong,
point estimates are biased for both QR and PM methods across all three
functions. However, BART produces approximately unbiased results with smaller
values of rMSE than GLM. In general, the PM method outperforms the QR methods
under BART with respect to bias, but results based on the PAPP method are more
efficient. In addition, BART tends to overestimate the variance under both QR
and PM methods.
When it comes to the DR adjustment, Bayesian GLM produces unbiased results
across all the three defined functions if the working model of either QR or PM
holds. However, the variance is slightly underestimated for the SIN function
when the PM specified model is wrong, and it is overestimated for the EXP
function under all model-specification scenarios. As expected, point estimates
are biased when the GLM is misspecified for both QR and PM. However, BART
tends to produce unbiased estimates consistently across all three functions,
and the magnitude of both rBias and rMSE are smaller in the AIPW estimator
based on PAPP compared to the AIPW estimator based on PAPW. Finally, as in the
non-robust method, variance under BART is overestimated compared to the GLM.
Table 3: Comparing the performance of the bias adjustment methods and
associated variance estimator under the two-step parametric Bayesian approach
in the second simulation study for $\rho=0.2$
| SIN | | EXP | | SQR
---|---|---|---|---|---
Model-method | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE
Probability sample ($S_{R}$)
Unweighted | -17.210 | 23.109 | 80.0 | 0.999 | | -8.406 | 11.126 | 78.3 | 1.000 | | -17.302 | 20.563 | 65.8 | 1.002
Fully weighted | -0.623 | 17.027 | 94.4 | 0.987 | | -0.303 | 7.947 | 94.6 | 0.987 | | -0.675 | 13.219 | 94.0 | 0.975
Non-probability sample ($S_{B}$)
Unweighted | 33.063 | 33.379 | 0.0 | 1.003 | | 40.307 | 40.409 | 0.0 | 1.079 | | 49.356 | 49.570 | 0.0 | 1.016
Fully weighted | 0.019 | 6.010 | 95.1 | 1.006 | | 0.005 | 2.755 | 94.9 | 1.005 | | 0.009 | 3.948 | 95.0 | 0.992
Non-robust adjustment
Model specification: True
GLM–PAPW | -0.425 | 9.257 | 96.3 | 1.072 | | -0.185 | 4.262 | 98.7 | 1.257 | | -0.325 | 6.649 | 98.4 | 1.213
GLM–PAPP | 0.018 | 8.460 | 95.7 | 1.018 | | 0.040 | 3.870 | 98.6 | 1.238 | | -0.037 | 5.914 | 98.8 | 1.222
GLM–PM | -0.411 | 9.899 | 94.7 | 0.982 | | -0.371 | 4.504 | 94.4 | 0.988 | | -0.762 | 8.115 | 92.5 | 0.947
Model specification: False
GLM–PAPW | 7.180 | 11.635 | 86.4 | 1.027 | | 2.511 | 5.299 | 97.2 | 1.316 | | 52.170 | 52.559 | 0.0 | 1.102
GLM–PAPP | 7.647 | 11.265 | 78.0 | 0.954 | | 3.025 | 5.425 | 96.2 | 1.277 | | 53.095 | 53.397 | 0.0 | 1.122
BART–PAPW | 4.035 | 10.078 | 97.0 | 1.217 | | 2.811 | 5.129 | 98.4 | 1.472 | | 8.356 | 11.082 | 97.2 | 1.468
BART–PAPP | 1.098 | 8.530 | 96.7 | 1.121 | | 1.108 | 4.120 | 98.9 | 1.391 | | 4.482 | 7.479 | 98.0 | 1.401
GLM–PM | 5.870 | 10.542 | 87.9 | 0.972 | | -6.589 | 9.264 | 82.5 | 0.976 | | 48.993 | 49.409 | 0.0 | 0.994
BART–PM | 0.577 | 9.635 | 97.0 | 1.115 | | 0.087 | 4.501 | 97.5 | 1.155 | | 0.249 | 8.276 | 96.1 | 1.062
Doubly robust adjustment
Model specification: QR–True, PM–True
GLM–AIPW–PAPW | -0.450 | 9.930 | 95.8 | 1.023 | | -0.165 | 4.593 | 98.2 | 1.200 | | -0.458 | 8.116 | 96.5 | 1.089
GLM–AIPW–PAPP | -0.452 | 9.925 | 95.8 | 1.020 | | -0.162 | 4.592 | 98.1 | 1.193 | | -0.453 | 8.106 | 96.5 | 1.086
Model specification: QR–True, PM–False
GLM–AIPW–PAPW | -0.279 | 9.996 | 93.2 | 0.926 | | 0.310 | 5.697 | 98.8 | 1.303 | | -0.338 | 7.128 | 97.5 | 1.154
GLM–AIPW–PAPP | -0.134 | 9.418 | 94.1 | 0.961 | | 0.508 | 4.977 | 99.5 | 1.475 | | -0.275 | 7.376 | 97.6 | 1.152
Model specification: QR–False, PM–True
GLM–AIPW–PAPW | -0.411 | 10.098 | 96.1 | 1.024 | | -0.176 | 4.715 | 98.5 | 1.234 | | -0.771 | 8.122 | 95.5 | 1.057
GLM–AIPW–PAPP | -0.417 | 10.101 | 96.0 | 1.021 | | -0.173 | 4.705 | 98.4 | 1.229 | | -0.778 | 8.119 | 95.4 | 1.057
Model specification: QR–False, PM–False
GLM–AIPW–PAPW | 9.015 | 13.176 | 84.1 | 1.000 | | 6.735 | 8.693 | 94.1 | 1.456 | | 50.835 | 51.288 | 0.0 | 1.019
GLM–AIPW–PAPP | 9.191 | 12.717 | 84.9 | 1.082 | | 6.787 | 8.181 | 96.7 | 1.761 | | 51.667 | 52.131 | 0.0 | 1.047
BART–AIPW–PAPW | 0.425 | 10.071 | 97.9 | 1.184 | | 0.122 | 4.689 | 99.3 | 1.407 | | -0.259 | 8.349 | 98.0 | 1.231
BART–AIPW–PAPP | -0.144 | 9.794 | 97.8 | 1.184 | | -0.100 | 4.541 | 99.3 | 1.405 | | -0.245 | 8.329 | 97.7 | 1.203
* •
PAPW: propensity adjusted probability weighting; PAPP: propensity adjusted
probability prediction; QR: quasi-randomization; PM: prediction model; AIPW:
augmented inverse propensity weighting.
### 3.3 Simulation III
Since the non-probability sample in the application of this study is
clustered, we performed a third simulation study. To this end, the
hypothetical population is assumed to be clustered with $A=10^{3}$ clusters,
each of size $n_{\alpha}=10^{3}$ ($N=10^{6}$). Then, three cluster-level
covariates, $\{x_{1},x_{2},d\}$, are defined with the following
distributions:
$\begin{pmatrix}d_{\alpha}\\ x_{0\alpha}\\ x_{1\alpha}\end{pmatrix}\sim MVN\left(\begin{pmatrix}0\\ 0\\ 1\end{pmatrix},\begin{pmatrix}1&-\rho/2&\rho\\ -\rho/2&1&-\rho/2\\ \rho&-\rho/2&1\end{pmatrix}\right)\qquad x_{2\alpha}=I(x_{0\alpha}>0)$ (3.12)
where $d$ denotes a design variable in $S_{R}$, and $\{x_{1},x_{2}\}$
describes the selection mechanism in $S_{B}$. Primarily, we set $\rho=0.8$,
but later we check other values ranging from $0$ to $0.9$ as well. Note that
$\rho$ controls how strongly the sampling design of $S_{R}$ is associated with
that of $S_{B}$. Furthermore, we assume that both $d$ and $x$ are observed for
units of $S$.
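The cluster-level covariates of Eq. 3.12 can be drawn with a Cholesky factor of the stated covariance matrix, then thresholding $x_{0\alpha}$ to obtain $x_{2\alpha}$. A pure-Python sketch (helper names are ours, not the authors' code), shown for $\rho=0.8$:

```python
# Sketch: draw (d, x0, x1) ~ MVN of Eq. 3.12 via a hand-rolled Cholesky factor,
# then derive the binary covariate x2 = I(x0 > 0).
import math
import random

def cholesky(A):
    """Lower-triangular L with L @ L^T = A (Cholesky-Banachiewicz)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

rho = 0.8
cov = [[1.0, -rho / 2, rho], [-rho / 2, 1.0, -rho / 2], [rho, -rho / 2, 1.0]]
mean = [0.0, 0.0, 1.0]  # means of (d, x0, x1) as in Eq. 3.12
L = cholesky(cov)

def draw_cluster_covariates(rng):
    z = [rng.gauss(0, 1) for _ in range(3)]
    d, x0, x1 = (mean[i] + sum(L[i][k] * z[k] for k in range(i + 1))
                 for i in range(3))
    x2 = 1 if x0 > 0 else 0
    return d, x1, x2

rng = random.Random(3)
clusters = [draw_cluster_covariates(rng) for _ in range(1000)]
```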
Again, to be able to assess BART’s performance, we consider non-linear
associations with polynomial terms and two-way interactions when constructing
the outcome variables as well as the selection probabilities. Two outcome
variables are studied, one continuous ($y_{c}$) and one binary ($y_{b}$),
which both depend on $\\{x,d\\}$ as below:
$\displaystyle y^{c}_{\alpha i}|x_{\alpha},d_{\alpha}\sim N(\mu=1+0.5x_{1\alpha}^{2}+0.4x_{1\alpha}^{3}-0.3x_{2\alpha}-0.2x_{1\alpha}x_{2\alpha}-0.1d_{\alpha}+u_{\alpha},\ \sigma^{2}=1)$ (3.13)
$\displaystyle y^{b}_{\alpha i}|x_{\alpha},d_{\alpha}\sim Ber\left(p=\frac{\exp\{-1+0.1x_{1\alpha}^{2}+0.2x_{1\alpha}^{3}-0.3x_{2\alpha}-0.4x_{1\alpha}x_{2\alpha}-0.5d_{\alpha}+u_{\alpha}\}}{1+\exp\{-1+0.1x_{1\alpha}^{2}+0.2x_{1\alpha}^{3}-0.3x_{2\alpha}-0.4x_{1\alpha}x_{2\alpha}-0.5d_{\alpha}+u_{\alpha}\}}\right)$ (3.14)
where $u_{\alpha}\sim N(0,\sigma^{2}_{u})$, and $\sigma^{2}_{u}$ is determined
such that the intraclass correlation equals $0.2$ (Oman and Zucker, 2001;
Hunsberger et al., 2008). For each $i\in U$, we then consider the following
set of selection probabilities associated with the design of the $S_{R}$ and
$S_{B}$:
$\displaystyle\pi^{R}(d_{\alpha})=P(\delta^{R}_{\alpha}=1|d_{\alpha})=\frac{\exp\{\gamma_{0}+0.5d_{\alpha}\}}{1+\exp\{\gamma_{0}+0.5d_{\alpha}\}}$
$\displaystyle\pi^{B}(x_{\alpha})=P(\delta^{B}_{\alpha}=1|x_{\alpha})=\frac{\exp\{\gamma_{1}-0.1x_{1\alpha}+0.2x_{1\alpha}^{2}+0.3x_{2\alpha}-0.4x_{1\alpha}x_{2\alpha}\}}{1+\exp\{\gamma_{1}-0.1x_{1\alpha}+0.2x_{1\alpha}^{2}+0.3x_{2\alpha}-0.4x_{1\alpha}x_{2\alpha}\}}$ (3.15)
where $\delta^{R}_{i}$ and $\delta^{B}_{i}$ are the indicators of being
selected in $S_{R}$ and $S_{B}$, respectively. Associated with $S_{R}$ and
$S_{B}$, two-stage cluster samples of size $n_{R}=100$ and $n_{B}=10,000$ were
selected randomly from $U$ with Poisson sampling at the first stage and simple
random sampling at the second stage. The sample size per cluster,
$n_{\alpha}$, was $1$ and $50$ for $S_{R}$ and $S_{B}$, respectively. The
model intercepts, $\gamma_{0}$ and $\gamma_{1}$ in Eq. 3.15, are obtained such
that $\sum_{i=1}^{N}\pi^{R}_{i}=n_{R}$ and
$\sum_{i=1}^{N}\pi^{B}_{i}=n_{B}/n_{\alpha}$.
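The random-intercept scale in Eq. 3.13 follows directly from the target intraclass correlation: assuming the usual one-way decomposition $\mathrm{ICC}=\sigma_{u}^{2}/(\sigma_{u}^{2}+\sigma^{2})$, an ICC of $0.2$ with within-cluster variance $\sigma^{2}=1$ gives $\sigma_{u}=0.5$. A one-line sketch (function name is ours):

```python
# Sketch: solve ICC = sigma_u^2 / (sigma_u^2 + sigma^2) for sigma_u.
import math

def sigma_u_for_icc(icc, sigma=1.0):
    """Random-intercept SD that yields the target intraclass correlation."""
    return sigma * math.sqrt(icc / (1.0 - icc))

sigma_u = sigma_u_for_icc(0.2)  # 0.5 for the continuous outcome of Eq. 3.13
```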
The rest of the simulation design is similar to that defined in Simulation II,
except for the methods we use for point and variance estimation. In addition
to the situation where $\pi^{R}_{i}$ is known for $i\in S_{B}$, we consider a
situation where $\pi^{R}$ is unobserved for $i\in S_{B}$ and draw the
estimates based on PAPP. Furthermore, unlike Simulation I, DR estimates
are achieved by separately fitting the QR and PM models, and to get the
variance estimates, a bootstrap technique is applied with $B=200$ based on Rao
and Wu (1988). Finally, under BART, Rubin’s combining rules are employed to
derive the point and variance estimates based on the random draws of the
posterior predictive distribution. As in Simulation II, we consider different
scenarios of model specification. To misspecify a model, we include only the
main effects in the working model. Also, under BART, no interaction or
polynomial is included as input.
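For the BART-based estimates, the pooling described above can be sketched with the standard form of Rubin's combining rules: the pooled point estimate is the mean of the per-draw estimates, and the total variance adds the average within-draw variance $\overline{V}_{W}$ to the between-draw variance inflated by $(1+1/M)$. The inputs below are illustrative, not the paper's results:

```python
# Sketch of Rubin's combining rules over M posterior draws.
import statistics

def rubin_combine(theta_hats, within_vars):
    """theta_hats: per-draw point estimates; within_vars: per-draw variances."""
    M = len(theta_hats)
    theta_bar = statistics.fmean(theta_hats)           # pooled point estimate
    w_bar = statistics.fmean(within_vars)              # within-draw variance
    b = statistics.variance(theta_hats)                # between-draw variance
    total_var = w_bar + (1.0 + 1.0 / M) * b
    return theta_bar, total_var

# Illustrative draws only:
est, var = rubin_combine([1.0, 1.2, 0.8, 1.1, 0.9],
                         [0.04, 0.05, 0.04, 0.06, 0.05])
```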
The means of the synthesized $U$ for the outcome variables were
$\overline{y}^{c}_{U}=3.39$ and $\overline{y}^{b}_{U}=0.40$. Figure 2 compares
the bias magnitude and efficiency across the non-robust methods. As
illustrated, point estimates from both $S_{R}$ and $S_{B}$ are biased if the
true sampling weights are ignored. After adjusting, for both continuous and
binary outcomes, the bias is close to zero under both QR and PM methods when
the working model is correct. However, the lengths of the error bars reveal
that the proposed PAPW/PAPP method is more efficient than the IPSW. When only
main effects are included in the model, all adjusted estimates are biased
except for those based on BART. Note that BART cannot be applied under IPSW.
Further details about the simulation results for the non-robust methods are
displayed in Appendix 8.3. We see that IPSW tends to have slightly larger
magnitudes of rBias and rMSE for both $y^{c}$ and $y^{b}$. Also, the values of
rSE close to $1$ indicate that the Rao and Wu bootstrap method of variance
estimation performs well under both QR and PM approaches. However, $95\%$
coverage is achieved only when the working model is correct.
In Figure 3, we depict the results of the DR estimators under different
permutations of model specification. One can immediately infer that AIPW
produces unbiased results when either the PM or QR model holds. However, in
situations where the true underlying models for both PM and QR are unknown,
the point estimates based on BART remain unbiased under both the PAPW and
PAPP approaches. Furthermore, under the GLM, it is evident that AIPW estimates
based on PAPW/PAPP are slightly less biased and more efficient than those
based on IPSW when the PM is incorrect. Details of the numerical results,
including a comparison of BART with GLM when both working models are wrong,
can be found in Appendix 8.3. Results showing the performance of the
bootstrap variance estimator are provided in Figure 4. The crCI values are all
close to the correct value unless both working models are incorrectly
specified. When the models are incorrectly specified, the BART approach
yields correct variance estimation for the continuous outcome; variance is
underestimated and anticonservative for the binary outcome, although closer to
nominal coverage than competing methods. To conclude, we observe that when
neither the PM nor the QR model is known, BART based on PAPP produces unbiased
and efficient estimates with accurate variance.
As the final step, we replicate the simulation for different values of $\rho$
ranging from $0$ to $0.9$ to show how stably the competing methods perform in
terms of rBias and rMSE. Figure 5 depicts changes in the values of rBias and
rMSE for different adjustment methods as the value of $\rho$ increases.
Generally, it seems that the value of rMSE decreases for all competing methods
as $\rho$ increases, but for all values of $\rho$, PAPW and PAPP are less
biased than IPSW. It is only when $\rho=0$ for the continuous variable that
IPSW outperforms PAPW/PAPP in bias reduction. However, when $d$ is highly
correlated with $x$, there is also evidence of better performance by PAPP than
IPSW in terms of bias reduction. We believe this is mainly because the
stronger association between $x$ and $d$ implies that the additional ignorable
assumption under PAPP is better met, while this correlation causes a sort of
collinearity in IPSW leading to a loss of efficiency. The rest of the methods
did not show significant changes as the value of $\rho$ increases. Numerical
values associated with Figure 5 have been provided in Appendix 8.3.
Figure 2: Comparing the performance of the non-robust approaches for (a) the
continuous outcome ($Y_{c}$) and (b) the binary outcome ($Y_{b}$) when the
model is correctly specified. Error bars represent the 2.5% and 97.5%
percentiles of the empirical distribution of bias over the simulation
iterations. UW: unweighted; FW: fully weighted; PM: prediction model; PAPP:
propensity adjusted probability prediction; IPSW: inverse propensity score
weighting. Figure 3: Comparing the performance of the doubly robust estimators
under different model-specification scenarios for (a) the continuous outcome
($Y_{c}$) and (b) the binary outcome ($Y_{b}$). 95% CIs have been generated
based on the 2.5% and 97.5% percentiles of the empirical distribution of bias
over the simulation iterations. UW: unweighted; FW: fully weighted; PAPP:
propensity adjusted probability prediction; IPSW: inverse propensity score
weighting. Figure 4: Comparing the 95% CI coverage rates for the means of (a)
continuous outcome and (b) binary outcome and SE ratios for (c) continuous
outcome and (d) binary outcome across different DR methods under different
model specification scenarios. UW: unweighted; FW: fully weighted; PAPP:
propensity adjusted probability prediction; IPSW: inverse propensity score
weighting. Figure 5: Comparing the rBias for the means of (a) continuous
outcome and (b) binary outcome and rMSE for the means of (c) continuous
outcome and (d) binary outcome across different adjustment methods and
different values of $\rho$. UW: unweighted; FW: fully weighted; PAPP:
propensity adjusted probability prediction; IPSW: inverse propensity score
weighting.
## 4 Strategic Highway Research Program 2 Analysis
We briefly describe SHRP2, the non-probability sample, and the NHTS, the
probability sample, as well as the variables used for statistical adjustment.
### 4.1 Strategic Highway Research Program 2
SHRP2 is the largest naturalistic driving study conducted to date, with the
primary aim of assessing how people interact with their vehicle and traffic
conditions while driving (National Academy of Sciences, 2013). About $A=3,140$ drivers aged
$16-95$ years were recruited from six geographically dispersed sites across
the United States (Florida, Indiana, New York, North Carolina, Pennsylvania,
and Washington), and over five million trips and $50$ million driven miles
have been recorded. The average follow-up time per person was
$\overline{n}_{\alpha}=440$ days. A quasi-random approach was initially
employed to select samples by random cold calling from a pool of $17,000$ pre-
registered volunteers. However, because of the low success rate along with
budgetary constraints, the investigators later chose to pursue voluntary
recruitment. Sites were assigned one of three pre-determined sample sizes
according to their population density (Campbell, 2012). The youngest and
oldest age groups were oversampled because of the higher crash risk among
those subgroups. Thus, one can conclude that the selection mechanism in SHRP2
is a combination of convenience and quota sampling methods. Further
description of the study design and recruitment process can be found in Antin
et al. (2015).
SHRP2 data are collected in multiple stages. Selected participants are
initially asked to complete multiple assessment tests, including executive
function and cognition, visual perception, visual-cognitive, physical and
psychomotor capabilities, personality factors, sleep-related factors, general
medical condition, driving knowledge, etc. In addition, demographic
information such as age, gender, household income, education level, and
marital status as well as vehicle characteristics such as vehicle type, model
year, manufacturer, and annual mileage are gathered at the screening stage. A
trip in SHRP2 is defined as the time interval during which the vehicle is
operating. The in-vehicle sensors start recording kinematic information, the
driver’s behaviors, and traffic events continuously as soon as the vehicle is
switched on. Trip-related information such as average speed, duration,
distance, and GPS trajectory coordinates are obtained by aggregating the
sensor records at the trip level (Campbell, 2012).
### 4.2 National Household Travel Survey data
In the present study, we use data from the eighth round of the NHTS conducted
from March 2016 through May 2017 as the reference survey. The NHTS is a
nationally representative survey, repeated cross-sectionally approximately
every seven years. It is aimed at characterizing personal travel behaviors
among the civilian, non-institutionalized population of the United States. The
2017 NHTS was a mixed-mode survey, in which households were initially
recruited by mailing through an address-based sampling (ABS) technique. Within
the selected households, all eligible individuals aged $\geq 5$ years were
requested to report the trips they made on a randomly assigned weekday through
a web-based travel log. Proxy interviews were requested for younger household
members who were $\leq 15$ years old.
The overall sample size was 129,696, of which roughly 20% was used for
national representativity and the remaining 80% was regarded as add-ons for
the state-level analysis. The recruitment response rate was 30.4%, of which
51.4% reported their trips via the travel logs (Santos et al., 2011). In NHTS,
a travel day is defined from 4:00 AM of the assigned day to 3:59 AM of the
following day on a typical weekday. A trip is defined as that made by one
person using any mode of transportation. While trip distance was measured by
online geocoding, the rest of the trip-related information was based on self-
reporting. A total of 264,234 eligible individuals aged $\geq$5 took part in
the study, for which 923,572 trips were recorded (McGuckin and Fucci, 2018).
### 4.3 Auxiliary variables and analysis plan
Because of the critical role of auxiliary variables in maintaining the
ignorable assumption for the selection mechanism of the SHRP2 sample,
particular attention was paid to identify and build as many common variables
as possible in the combined sample that are expected to govern both selection
mechanism and outcome variables in SHRP2. However, since the SHRP2 sample is
gathered from a limited geographical area, in order to be able to generalize
the findings to the American population of drivers, we had to assume that no
other auxiliary variable apart from those investigated in this study defines
the distribution of the outcome variables. This assumption is in fact
embedded in the ignorable condition in the SHRP2 given the set of common
observed covariates. Three distinct sets of variables were considered: (i)
demographic information of the drivers, (ii) vehicle characteristics, and
(iii) day-level information. These variables and associated levels/ranges are
listed in Table 4.
Our focus was on inference at the day level, so SHRP2 data were aggregated. We
constructed several trip-related outcome variables such as daily frequency of
trips, daily total trip duration, daily total distance driven, mean daily trip
average speed, and mean daily start time of trips that were available in both
datasets, as well as daily maximum speed, daily frequency of brakes per mile,
and daily percentage of trip time with a full stop, which were available in
SHRP2 only. The final sample sizes of the complete day-level datasets were
$n_{B}=837,061$ and $n_{R}=133,582$ in SHRP2 and NHTS, respectively.
In order to make the two datasets more comparable, we filtered out all the
subjects in NHTS who were not drivers or were younger than $16$ years old or
used public transportation or transportation modes other than cars, SUVs,
vans, or light pickup trucks. One major structural difference between NHTS and
SHRP2 was that in the NHTS, participants’ trips were recorded for only one
randomly assigned weekday, while in SHRP2, individuals were followed up for
several months or years. Therefore, to properly account for the potential
intraclass correlation across sample units in SHRP2, we treated SHRP2
participants as clusters for variance estimation. For BART, we fitted random
intercept BART (Tan et al., 2016). In addition, since the $\pi^{R}_{i}$ were
not observed for units of SHRP2, we employed the PAPP and IPSW methods to
estimate pseudo-weights, so variance estimation under the GLM was based on the
Rao & Wu bootstrap method throughout the application section.
Table 4: List of auxiliary variables and associated levels/ranges that are used to adjust for selection bias in SHRP2 Auxiliary variables (scale) | Levels/range
---|---
Demographic information |
gender | (female, male)
age (yrs) | (16-24, 25-34, 35-44, 45-54, 55-64, 65-74, 75+)
race | (White, Black, other)
ethnicity | (Hispanic, non-Hispanic)
birth country | (citizen, alien)
education level | ($\leq$HS, HS completed, associate, graduate, post-graduate)
household income (\$) | (0-49k, 50-99k, 100-149k, 150k+)
household size | (1, 2, 3-5, 6-10, 10+)
job status | (part-time, full time)
home ownership | (owner, renter)
pop. size of resid. area ($\times 1,000$) | (0-49, 50-200, 200-500, 500+)
Vehicle characteristics |
age (yrs) | (0-4, 5-9, 10-14, 15-19, 20+)
type | (passenger car, Van, SUV, truck)
make | (American, European, Asian)
mileage ($\times$1,000 km) | (0-4, 5-9, 10-19, 20-49, 50+)
fuel type | (gas, other)
Day-level information |
weekend indicator of trip day | {0,1}
season of trip day | (winter, spring, summer, fall)
### 4.4 Results
According to Figure 7 of Appendix 8.4, one can visually infer that the largest
discrepancies between the sample distribution of auxiliary variables in SHRP2
and that in the population stem from participants’ age, race and population
size of residential area as well as vehicles’ age and vehicles’ type. The
youngest and eldest age groups have been oversampled, as have Whites and
non-Hispanics. In addition, we found that the proportion of urban dwellers is
higher in SHRP2 than that in the NHTS. In terms of vehicle characteristics,
SHRP2 participants tend to own passenger cars more than the population
average, whereas individuals with other vehicle types were underrepresented in
SHRP2.
As the first step of QR, we checked if there is any evidence of a lack of
common distributional support between the two studies for the auxiliary
variables. Figure 6a compares the kernel density of the estimated PS using
BART across the two samples. As illustrated, a notable lack of overlap appears
on the left tail of the PS distribution in SHRP2. However, owing to the huge
sample size in SHRP2, we believe this does not jeopardize the positivity
assumption. The available auxiliary variables are strong predictors of the
NHTS selection probabilities for SHRP2: the average pseudo-$R^{2}$ for BART
was $73\%$ in a $10$-fold cross-validation.
In Figure 6b, we compare the distribution of estimated pseudo-weights across
the QR methods. It seems that PAPP based on BART is the only method that does
not produce influential weights. Also, the highest variability in the
estimated pseudo-weights belonged to the PAPP method under GLM. As can be seen
in Figure 13 in Appendix 8.4, the largest values of area under the ROC curve
(AUC) and the largest values of (pseudo)-$R^{2}$ in the radar across different
trip-related outcome variables are associated with BART, versus classification
and regression trees (CART) or GLM when modeling $Z$ on $X$ and $Y$ on $X$,
respectively. Additionally, Figure 14 in Appendix 8.4 exhibits how pseudo-
weighting based on PAPP-BART improves the imbalance in the distribution of $X$
in SHRP2 with respect to the weighted distribution of NHTS.
Figure 6: Comparing the distribution of (a) estimated propensity scores
between SHRP2 and NHTS using BART and (b) estimated pseudo-weights in SHRP2
across the applied quasi-randomization methods
Figure 8 depicts the adjusted sample means for some trip-related measures that
were available in both SHRP2 and NHTS. The methods we compare here encompass
PAPP, IPSW and PM as the non-robust approaches, and AIPW with PAPP and AIPW
with IPSW as the DR approaches. Also, a comparison is made between GLM and
BART for all the methods except those involving IPSW. Our results suggest
that, as expected, the oversampling of younger and older drivers leads to
underestimating miles driven and length of trips, and overestimating the time
of the first trip of the day; other factors may impact these variables, as
well as the average speed of a given drive. For three of these four variables
(total trip duration, total distance driven, and start hour of daily trip),
there appeared to be improvement with respect to bias considering the NHTS
weighted estimates as the benchmark, although only trip duration appears to be
fully corrected. In Figure 7, we display the posterior predictive density of
mean daily total distance driven under PAPP, PM and AIPW-PAPP. Note that the
narrow variance associated with the PAPP approach is due to the fact that the
posterior predictive distribution under pseudo-weighting does not account for
the clustering effects in SHRP2. It is in fact $\overline{V}_{W}$ in Eq. 2.35
that captures this source of uncertainty in variance estimation.
Figure 7: The posterior predictive distributions of the adjusted sample mean
of daily total distance driven based on BART
Among the QR methods, we observed that the PAPP based on BART gives the most
accurate estimate with respect to bias for this variable. However, the
relatively narrow 95% CI associated with BART may indicate that BART does not
properly propagate the uncertainty in pseudo-weighting. Regarding the PM, it
seems BART performs as well as GLM, but with wider uncertainty. As a
consequence, the AIPW estimator performs the same in terms of bias across
different QR methods. The AIPW estimator based on IPSW, on the other hand,
seems to be more efficient than the ones based on PAPP. However, these
findings are not consistent across the outcome variables.
Results related to the adjusted means for some SHRP2-specific outcome
variables are summarized in Figure 9. These variables consist of (a) daily
maximum speed, (b) frequency of brakes per mile, and (c) percentage of trip
duration when the vehicle is fully stopped. For the daily maximum speed, we
take one further step and present the DR adjusted mean based on the IPSW-GLM
and PAPP-BART by some auxiliary variables in Figure 10. As illustrated, higher
levels of mean daily maximum speed are associated with males, age group 35-44
years, Blacks, high school graduates, Asian cars, and weekends. According to
the lengths of 95% CIs, one can see that the AIPW-PAPP-BART consistently
produces more efficient estimates than the AIPW-IPSW-GLM. Further numerical
details of these findings by the auxiliary variables have been provided in
Tables 11-17 in Appendix 8.4.
Figure 8: Evaluation of pseudo-weights by comparing weighted estimates of
daily frequency of trips between NHTS and SHRP2: (a) Mean daily total trip
duration, (b) Mean daily total distance driven, (c) Mean trip average speed,
and (d) Mean daily start hour of trips. The dashed line and surrounding
shadowed area represent weighted estimates and 95% CIs in NHTS, respectively.
UW: unweighted; PAPP: propensity adjusted probability prediction; IPSW:
inverse propensity score weighting; NA: not applicable.
Figure 9: Adjusted estimates of some SHRP2-specific outcomes: (a) Mean daily
maximum speed, (b) daily frequency of brakes per mile driven, and (c) daily
percentage of stop time. UW: unweighted; PAPP: propensity adjusted probability
prediction; IPSW: inverse propensity score weighting. Figure 10: Bias adjusted
estimates of mean daily maximum speed (MPH) driven by (a) gender, (b) age
groups, (c) race, (d) education, (e) vehicle manufacturer, and (f) weekend
indicator. UW: unweighted; PAPP: propensity adjusted probability prediction;
IPSW: inverse propensity score weighting; NA: not applicable.
## 5 Discussion
In this study, we proposed a doubly robust (DR) adjustment method for finite
population inference in non-probability samples when a well-designed
probability sample is available as a benchmark. Combining the ideas of pseudo-
weighting with prediction modeling, our method involved a modified version of
AIPW, which is DR in the sense that estimates are consistent if either
underlying model holds. More importantly, the proposed method permitted us to
apply a wider class of predictive tools, especially supervised algorithmic
methods. To better address model misspecification, our study employed BART to
multiply impute both pseudo-inclusion probabilities and the outcome variable.
We also proposed a method to estimate the variance of the DR estimator based
on the posterior predictive draws simulated by BART. In a simulation study, we
then assessed the repeated sampling properties of our proposed estimator.
Finally, we applied it to real Big Data from naturalistic driving studies with
the aim of mitigating the potential selection bias in the estimates of finite
population means.
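To make the structure of the estimator concrete, the following is a minimal sketch of one common AIPW form for combining a non-probability sample with a design-weighted reference sample (in the spirit of Chen et al., 2020). The function name and inputs are hypothetical, and the paper's modified estimator differs in how the pseudo-inclusion probabilities and predictions are obtained (via BART rather than GLMs):

```python
def aipw_mean(y_b, pi_hat_b, m_hat_b, d_r, m_hat_r):
    """One common AIPW combination (hypothetical sketch): inverse-propensity-
    weighted residuals from the non-probability sample S_B, plus
    design-weighted outcome predictions from the reference sample S_R."""
    # Estimate the population size from the reference-sample design weights.
    n_hat = sum(d_r)
    # Bias-correction term over S_B: residuals weighted by 1 / pi_hat_B.
    resid_term = sum((y - m) / p for y, p, m in zip(y_b, pi_hat_b, m_hat_b))
    # Prediction term over S_R: outcome-model predictions weighted by d_i.
    pred_term = sum(d * m for d, m in zip(d_r, m_hat_r))
    return (resid_term + pred_term) / n_hat

# If the outcome model fits S_B exactly (zero residuals), the estimate
# reduces to the weighted mean of the predictions over S_R.
est = aipw_mean(y_b=[2.0, 4.0], pi_hat_b=[0.5, 0.5], m_hat_b=[2.0, 4.0],
                d_r=[2.0, 2.0], m_hat_r=[3.0, 3.0])
```

The double robustness is visible in the two terms: if the outcome model is correct the residual term vanishes in expectation, and if the propensity model is correct the residual term corrects any prediction bias.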
Generally, the simulation findings revealed that our modified AIPW method
produces less biased estimates than its competitors, especially when
$n_{R}\ll n_{B}$. When at least one of the models, i.e. QR or PM, is correctly
specified, all the DR methods generated unbiased results, though our estimator
was substantially more efficient with narrower 95% CIs. When both working
models are invalid, however, our findings suggest that DR estimates based on
the GLM can be severely biased, whereas under BART the estimates remain
approximately unbiased even when the true model structure associated with both
QR and PM is unknown to the researcher. In contrast to the conventional IPSW
estimator, we found that the new proposed estimator produces more stable
results in terms of bias and efficiency across different sampling fractions
and various degrees of association between the sampling designs of $S_{R}$ and
$S_{B}$.
Generally, the results of the application suggest near total removal of bias
for only one of the four variables that can be estimated from the reference
survey (daily total distance driven). We believe this failure originates from
several sources. First and foremost, the bias observed in the final estimates
is very likely to be mixed with measurement error because we compared the
results of sensor data with self-reported data as a benchmark. Second, there
was evidence of departure from the positivity assumption in SHRP2. Studies
show that even a slight lack of common support in the distribution of
auxiliary variables may lead to inflated variance and aggravated bias (Hill
and Su, 2013). Part of this can be due to the fact that we attempted to
generalize the results to the general population of American drivers, while
SHRP2 data was restricted to six states. Another reason might be deviation
from the ignorability assumption: the associations between the auxiliary
variables and the outcome variables were relatively weak and varied across
the variables.
Our study was not without weaknesses. First, our approach assumes the ideal
situation where the $d_{i}$ are available in the non-probability sample, since
that is demanded by the general theory linking together the probability and
non-probability samples. In practice it can be difficult to fully meet this
requirement, and indeed in many practical settings it might be that only the
available subset of $x_{i}^{*}$ is required to fully model selection into the
non-probability sample and the outcome variable, or alternatively, that the
available components of $x_{i}^{*}$ will provide a much better approximation
to the true estimates than simply using the non-probability sample without
correction. Second, our adjustment method assumes that the two samples are
mutually exclusive. However, in many Big Data scenarios (though not the one we
consider), the sampling fraction may be non-trivial, so the two samples may
overlap substantially. In such a situation, it is important to check how
sensitive our proposed pseudo-weighting approach is to this assumption.
Extensions may be plausible to account for the duplicate units of the
population in the pooled sample. Third, our multiple imputation variance
estimator (Eq. 2.35) ignores covariance between $\overline{V}_{W}$ and $V_{B}$
induced by the weights. This covariance is typically negative and leads to
conservative inference, as seen in the modest overestimation of variance in
the BART estimates in Simulations 2 and 3. Use of a bootstrap procedure such
as that described in the simulation study of Chen et al. (2020) may be an
alternative, although impractical in our setting given the computational
demands of fitting the BART models to each bootstrap sample. Another drawback
is that the combined dataset may be subject to differential measurement error
in the variables. This issue is particularly acute in our SHRP2 analysis,
because the definition of a trip may not be identical between the two studies:
whereas trip measures in the SHRP2 are recorded by sensors, in the NHTS trip
measures are based on respondents' memory and estimation, as they are
self-reported.
Having such error-prone information either as the outcome or as an auxiliary
variable may lead to biased results. Finally, we were unable to use the
two-step Bayesian method under GLM in the application, because the SHRP2 data
were clustered: properly estimating the variance of the DR estimators would
have demanded Bayesian generalized linear mixed-effects models, which required
computational resources beyond our reach. This prompted us to apply resampling
techniques to the actual data instead of a full Bayesian method.
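The variance estimator criticized above has a Rubin-style multiple-imputation form, $T=\overline{V}_{W}+(1+1/M)V_{B}$, which ignores any covariance between the two components. A minimal sketch with hypothetical names (the paper's Eq. 2.35, based on BART posterior predictive draws, may differ in details):

```python
def mi_variance(point_estimates, within_variances):
    """Rubin-style combining rule T = Vbar_W + (1 + 1/M) * V_B over M draws
    (hypothetical sketch). Ignoring the covariance between Vbar_W and V_B,
    which is typically negative, makes the resulting inference conservative."""
    m = len(point_estimates)
    vbar_w = sum(within_variances) / m                       # average within-draw variance
    qbar = sum(point_estimates) / m                          # pooled point estimate
    v_b = sum((q - qbar) ** 2 for q in point_estimates) / (m - 1)  # between-draw variance
    return vbar_w + (1.0 + 1.0 / m) * v_b

t = mi_variance(point_estimates=[1.0, 2.0, 3.0],
                within_variances=[0.5, 0.5, 0.5])
```

With three draws spread one unit apart, the between-draw component dominates and is inflated by the finite-$M$ factor $(1+1/M)$.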
There are a number of potential future directions for our research. First, we
would like to extend the asymptotic variance estimator under PAPP to the case
where $\pi_{i}^{R}$ cannot be computed for $i\in S_{B}$. Alternatively, one may
be
interested in developing a fully model-based approach, in which a synthetic
population is created by undoing the sampling stages via a Bayesian bootstrap
method, and attempts are made to impute the outcome for non-sampled units of
the population (Dong et al., 2014; Zangeneh and Little, 2015; An and Little,
2008). The synthetic population idea makes it easier to incorporate the design
features of the reference survey into adjustments, especially when Bayesian
inference is of interest. While correcting for selection bias, one can adjust
for the potential measurement error in the outcome variables as well if there
exists a validation dataset where both mismeasured and error-free values of
the variables are observed (Kim and Tam, 2020). When combining data from
multiple sources, it is also likely that auxiliary variables are subject to
differential measurement error. Hong et al. (2017) propose a Bayesian approach
to adjust for a different type of measurement error in a causal inference
context. Also, in a Big Data setting, fitting models can be computationally
demanding. To address this issue, it might be worth expanding the divide-and-
recombine techniques for the proposed DR methods. Finally, as noted by a
reviewer, the basic structure of our problem (see Figure 1) approximates that
tackled by “data fusion” methods, developed primarily in the computer science
literature. While this literature does not appear to have directly addressed
issues around sample design, it may be a useful vein of research to mine for
future connections to non-probability sampling research.
## 6 Acknowledgement
The present study was part of the doctoral research of the first author of
this article at the Michigan Program in Survey and Data Science. The authors
would like to thank all the researchers and staff who have been involved in
collecting the data of both NHTS 2017 and SHRP2. Our gratitude also goes to
professors Katharine Abraham, Stanley Presser and Joseph Sedarski at the
University of Maryland who have improved the quality of this article with
their valuable comments. The findings and conclusions of this paper are those
of the authors and do not necessarily represent the views of the Virginia Tech
Transportation Institute (VTTI), SHRP2, the Transportation Research Board, or
the National Academy of Sciences.
## 7 Conflict of Interest
The authors declare that there was no conflict of interest in the current
research.
## References
* Abu-Nimeh et al. (2008) Abu-Nimeh, S., D. Nappa, X. Wang, and S. Nair (2008). Bayesian additive regression trees-based spam detection for enhanced email privacy. In Availability, Reliability and Security, 2008. ARES 08., pp. 1044–1051. IEEE.
* An and Little (2008) An, H. and R. J. Little (2008). Robust model-based inference for incomplete data via penalized spline propensity prediction. Communications in Statistics–Simulation and Computation 37(9), 1718–1731.
* An (2010) An, W. (2010). Bayesian propensity score estimators: Incorporating uncertainties in propensity scores into causal inference. Sociological Methodology 40(1), 151–189.
* Antin et al. (2015) Antin, J., K. Stulce, L. Eichelberger, and J. Hankey (2015). Naturalistic driving study: descriptive comparison of the study sample with national data. Technical report.
* Baker et al. (2013) Baker, R., J. M. Brick, N. A. Bates, M. Battaglia, M. P. Couper, J. Dever, K. J. Gile, and R. Tourangeau (2013). Summary report of the aapor task force on non-probability sampling. Journal of Survey Statistics and Methodology 1, 90–143.
* Bang and Robins (2005) Bang, H. and J. M. Robins (2005). Doubly robust estimation in missing data and causal inference models. Biometrics 61(4), 962–973.
* Beresewicz et al. (2018) Beresewicz, M., R. Lehtonen, F. Reis, L. Di Consiglio, and M. Karlberg (2018). An overview of methods for treating selectivity in big data sources.
* Brick (2015) Brick, J. M. (2015). Compositional model inference. JSM Proceedings (Survey Research Methods Section), 299–307.
* Campbell (2012) Campbell, K. L. (2012). The shrp 2 naturalistic driving study: Addressing driver performance and behavior in traffic safety. TR News (282), 30–35.
* Cao et al. (2009) Cao, W., A. A. Tsiatis, and M. Davidian (2009). Improving efficiency and robustness of the doubly robust estimator for a population mean with incomplete data. Biometrika 96(3), 723–734.
* Cassel et al. (1976) Cassel, C. M., C. E. Särndal, and J. H. Wretman (1976). Some results on generalized difference estimation and generalized regression estimation for finite populations. Biometrika 63(3), 615–620.
* Chen et al. (2020) Chen, Y., P. Li, and C. Wu (2020). Doubly robust inference with nonprobability survey samples. Journal of the American Statistical Association 115, 2011–2021.
* Chipman et al. (2007) Chipman, H. A., E. I. George, and R. E. McCulloch (2007). Bayesian ensemble learning. In Advances in neural information processing systems, pp. 265–272.
* Chipman et al. (2010) Chipman, H. A., E. I. George, R. E. McCulloch, et al. (2010). Bart: Bayesian additive regression trees. The Annals of Applied Statistics 4(1), 266–298.
* Couper (2013) Couper, M. (2013). Is the sky falling? New technology, changing media, and the future of surveys. Keynote presentation at the 5th European Survey Research Association Conference, Ljubljana, Slovenia.
* Daas et al. (2015) Daas, P. J., M. J. Puts, B. Buelens, and P. A. van den Hurk (2015). Big data as a source for official statistics. Journal of Official Statistics 31(2), 249–262.
* Dawid (1982) Dawid, A. P. (1982). The well-calibrated bayesian. Journal of the American Statistical Association 77(379), 605–610.
* Dong et al. (2014) Dong, Q., M. R. Elliott, and T. E. Raghunathan (2014). A nonparametric method to generate synthetic populations to adjust for complex sampling design features. Survey Methodology 40(1), 29.
* Dutwin and Buskirk (2017) Dutwin, D. and T. D. Buskirk (2017). Apples to oranges or gala versus golden delicious? comparing data quality of nonprobability internet samples to low response rate probability samples. Public Opinion Quarterly 81(S1), 213–239.
* Elliott and Valliant (2017) Elliott, M. R. and R. Valliant (2017). Inference for nonprobability samples. Statistical Science 32(2), 249–264.
* Ferrari and Cribari-Neto (2004) Ferrari, S. and F. Cribari-Neto (2004). Beta regression for modelling rates and proportions. Journal of Applied Statistics 31(7), 799–815.
* Franke et al. (2016) Franke, B., J.-F. Plante, R. Roscher, E.-s. A. Lee, C. Smyth, A. Hatefi, F. Chen, E. Gil, A. Schwing, A. Selvitella, et al. (2016). Statistical inference, learning and models in big data. International Statistical Review 84(3), 371–389.
* Gelman et al. (2007) Gelman, A. et al. (2007). Struggles with survey weighting and regression modeling. Statistical Science 22(2), 153–164.
* Green and Kern (2012) Green, D. P. and H. L. Kern (2012). Modeling heterogeneous treatment effects in survey experiments with bayesian additive regression trees. Public Opinion Quarterly 76(3), 491–511.
* Groves (2011) Groves, R. M. (2011). Three eras of survey research. Public Opinion Quarterly 75(5), 861–871.
* Guo et al. (2009) Guo, F., J. M. Hankey, et al. (2009). Modeling 100-car safety events: A case-based approach for analyzing naturalistic driving data. Technical report, Virginia Tech. Virginia Tech Transportation Institute.
* Han and Wang (2013) Han, P. and L. Wang (2013). Estimation with missing data: beyond double robustness. Biometrika 100(2), 417–430.
* Haziza and Rao (2006) Haziza, D. and J. N. Rao (2006). A nonresponse model approach to inference under imputation for missing survey data. Survey Methodology 32(1), 53.
* Hill and Su (2013) Hill, J. and Y.-S. Su (2013). Assessing lack of common support in causal inference using bayesian nonparametrics: Implications for evaluating the effect of breastfeeding on children’s cognitive outcomes. The Annals of Applied Statistics, 1386–1420.
* Hill et al. (2011) Hill, J., C. Weiss, and F. Zhai (2011). Challenges with propensity score strategies in a high-dimensional setting and a potential alternative. Multivariate Behavioral Research 46(3), 477–513.
* Hong et al. (2017) Hong, H., K. E. Rudolph, and E. A. Stuart (2017). Bayesian approach for addressing differential covariate measurement error in propensity score methods. Psychometrika 82(4), 1078–1096.
* Huisingh et al. (2019) Huisingh, C., C. Owsley, E. B. Levitan, M. R. Irvin, P. Maclennan, and G. McGwin (2019). Distracted driving and risk of crash or near-crash involvement among older drivers using naturalistic driving data with a case-crossover study design. The Journals of Gerontology. Series A, Biological Sciences and Medical Sciences 74, 550–555.
* Hunsberger et al. (2008) Hunsberger, S., B. I. Graubard, and E. L. Korn (2008). Testing logistic regression coefficients with clustered data and few positive outcomes. Statistics in Medicine 27(8), 1305–1324.
* Japec et al. (2015) Japec, L., F. Kreuter, M. Berg, P. Biemer, P. Decker, C. Lampe, J. Lane, C. O’Neil, and A. Usher (2015). Big data in survey research: Aapor task force report. Public Opinion Quarterly 79(4), 839–880.
* Johnson and Smith (2017) Johnson, T. P. and T. W. Smith (2017). Big data and survey research: Supplement or substitute? In Seeing Cities Through Big Data, pp. 113–125. Springer.
* Kang et al. (2007) Kang, J. D., J. L. Schafer, et al. (2007). Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical Science 22(4), 523–539.
* Kaplan and Chen (2012) Kaplan, D. and J. Chen (2012). A two-step bayesian approach for propensity score analysis: Simulations and case study. Psychometrika 77(3), 581–609.
* Kern et al. (2016) Kern, H. L., E. A. Stuart, J. Hill, and D. P. Green (2016). Assessing methods for generalizing experimental impact estimates to target populations. Journal of Research on Educational Effectiveness 9(1), 103–127.
* Kim and Haziza (2014) Kim, J. K. and D. Haziza (2014). Doubly robust inference with missing data in survey sampling. Statistica Sinica 24(1), 375–394.
* Kim and Park (2006) Kim, J. K. and H. Park (2006). Imputation using response probability. Canadian Journal of Statistics 34(1), 171–182.
* Kim et al. (2018) Kim, J. K., S. Park, Y. Chen, and C. Wu (2018). Combining non-probability and probability survey samples through mass imputation. arXiv preprint arXiv:1812.10694.
* Kim and Rao (2012) Kim, J. K. and J. N. Rao (2012). Combining data from two independent surveys: a model-assisted approach. Biometrika 99(1), 85–100.
* Kim and Tam (2020) Kim, J.-k. and S.-M. Tam (2020). Data integration by combining big data and survey sample data for finite population inference. arXiv preprint arXiv:2003.12156.
* Kim et al. (2018) Kim, J. K., Z. Wang, Z. Zhu, and N. B. Cruze (2018). Combining survey and non-survey data for improved sub-area prediction using a multi-level model. Journal of Agricultural, Biological and Environmental Statistics 23(2), 175–189.
* Kitchin (2015) Kitchin, R. (2015). The opportunities, challenges and risks of big data for official statistics. Statistical Journal of the IAOS 31(3), 471–481.
* Kott (1994) Kott, P. S. (1994). A note on handling nonresponse in sample surveys. Journal of the American Statistical Association 89(426), 693–696.
* Kott (2006) Kott, P. S. (2006). Using calibration weighting to adjust for nonresponse and coverage errors. Survey Methodology 32(2), 133–142.
* Kott and Chang (2010) Kott, P. S. and T. Chang (2010). Using calibration weighting to adjust for nonignorable unit nonresponse. Journal of the American Statistical Association 105(491), 1265–1275.
* Kreuter and Peng (2014) Kreuter, F. and R. D. Peng (2014). Extracting information from big data: Issues of measurement, inference and linkage. Privacy, Big Data, and the Public Good: Frameworks for Engagement, 257.
* Lane (2016) Lane, J. (2016). Big data for public policy: The quadruple helix. Journal of Policy Analysis and Management 35(3), 708–715.
* Lee (2006) Lee, S. (2006). Propensity score adjustment as a weighting scheme for volunteer panel web surveys. Journal of Official Statistics 22(2), 329.
* Lee and Valliant (2009) Lee, S. and R. Valliant (2009). Estimation for volunteer panel web surveys using propensity score adjustment and calibration adjustment. Sociological Methods & Research 37(3), 319–343.
* Lenis et al. (2018) Lenis, D., B. Ackerman, and E. A. Stuart (2018). Measuring model misspecification: Application to propensity score methods with complex survey data. Computational Statistics & Data Analysis.
* Little (2004) Little, R. J. (2004). To model or not to model? competing modes of inference for finite population sampling. Journal of the American Statistical Association 99(466), 546–556.
* Little and Zheng (2007) Little, R. J. and H. Zheng (2007). The bayesian approach to the analysis of finite population surveys. Bayesian Statistics 8(1), 1–20.
* Lohr and Raghunathan (2017) Lohr, S. L. and T. E. Raghunathan (2017). Combining survey data with other data sources. Statistical Science 32(2), 293–312.
* Mayer et al. (2020) Mayer, I., E. Sverdrup, T. Gauss, J.-D. Moyer, S. Wager, J. Josse, et al. (2020). Doubly robust treatment effect estimation with missing attributes. Annals of Applied Statistics 14(3), 1409–1431.
* McCandless et al. (2009) McCandless, L. C., P. Gustafson, and P. C. Austin (2009). Bayesian propensity score analysis for observational data. Statistics in medicine 28(1), 94–112.
* McConnell and Lindner (2019) McConnell, K. J. and S. Lindner (2019). Estimating treatment effects with machine learning. Health Services Research 54(6), 1273–1282.
* McGuckin and Fucci (2018) McGuckin, N. and A. Fucci (2018). Summary of travel trends: 2017 national household travel survey (report fhwa-pl-18-019). Washington, DC: Federal Highway Administration, US Department of Transportation.
* Meng et al. (2018) Meng, X.-L. et al. (2018). Statistical paradises and paradoxes in big data (i): Law of large populations, big data paradox, and the 2016 us presidential election. The Annals of Applied Statistics 12(2), 685–726.
* Mercer (2018) Mercer, A. W. (2018). Selection Bias in Nonprobability Surveys: A Causal Inference Approach. Ph. D. thesis.
* Miller (2017) Miller, P. V. (2017). Is there a future for surveys? Public Opinion Quarterly 81(S1), 205–212.
* Murdoch and Detsky (2013) Murdoch, T. B. and A. S. Detsky (2013). The inevitable application of big data to health care. Journal of the American Medical Association 309(13), 1351–1352.
* Transportation Research Board (2013) Transportation Research Board of the National Academies (2013). The 2nd Strategic Highway Research Program Naturalistic Driving Study Dataset.
* Oman and Zucker (2001) Oman, S. D. and D. M. Zucker (2001). Modelling and generating correlated binary variables. Biometrika 88(1), 287–290.
* Rafei et al. (2020) Rafei, A., C. A. Flannagan, and M. R. Elliott (2020). Big data for finite population inference: Applying quasi-random approaches to naturalistic driving data using bayesian additive regression trees. Journal of Survey Statistics and Methodology 8(1), 148–180.
* Rao (2015) Rao, J. N. (2015). Small-Area Estimation. New York: Wiley.
* Rao and Wu (1988) Rao, J. N. and C. Wu (1988). Resampling inference with complex survey data. Journal of the american statistical association 83(401), 231–241.
* Rivers (2007) Rivers, D. (2007). Sampling for web surveys. In Joint Statistical Meetings.
* Robins et al. (1994) Robins, J. M., A. Rotnitzky, and L. P. Zhao (1994). Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association 89(427), 846–866.
* Rosenbaum and Rubin (1983) Rosenbaum, P. R. and D. B. Rubin (1983). The central role of the propensity score in observational studies for causal effects. Biometrika 70(1), 41–55.
* Rubin (2004) Rubin, D. B. (2004). Multiple imputation for nonresponse in surveys. New York: Wiley.
* Rubin (2007) Rubin, D. B. (2007). The design versus the analysis of observational studies for causal effects: parallels with the design of randomized trials. Statistics in Medicine 26(1), 20–36.
* Saarela et al. (2016) Saarela, O., L. R. Belzile, and D. A. Stephens (2016). A bayesian view of doubly robust causal inference. Biometrika 103(3), 667–681.
* Santos et al. (2011) Santos, A., N. McGuckin, H. Y. Nakamoto, D. Gray, and S. Liss (2011). Summary of travel trends: 2009 national household travel survey. Technical report.
* Scharfstein et al. (1999) Scharfstein, D. O., A. Rotnitzky, and J. M. Robins (1999). Adjusting for nonignorable drop-out using semiparametric nonresponse models. Journal of the American Statistical Association 94(448), 1096–1120.
* Senthilkumar et al. (2018) Senthilkumar, S., B. K. Rai, A. A. Meshram, A. Gunasekaran, and S. Chandrakumarmangalam (2018). Big data in healthcare management: A review of literature. American Journal of Theoretical and Applied Business 4(2), 57–69.
* Smith (1983) Smith, T. (1983). On the validity of inferences from non-random samples. Journal of the Royal Statistical Society A146, 394–403.
* Spertus and Normand (2018) Spertus, J. V. and S.-L. T. Normand (2018). Bayesian propensity scores for high-dimensional causal inference: A comparison of drug-eluting to bare-metal coronary stents. Biometrical Journal 60, 721–733.
* Struijs et al. (2014) Struijs, P., B. Braaksma, and P. J. Daas (2014). Official statistics and big data. Big Data & Society 1(1), 1–6.
* Tan et al. (2017) Tan, Y. V., M. R. Elliott, and C. A. Flannagan (2017). Development of a real-time prediction model of driver behavior at intersections using kinematic time series data. Accident Analysis & Prevention 106, 428–436.
* Tan et al. (2016) Tan, Y. V., C. A. Flannagan, and M. R. Elliott (2016). Predicting human-driving behavior to help driverless vehicles drive: random intercept bayesian additive regression trees. arXiv preprint arXiv:1609.07464.
* Tan et al. (2019) Tan, Y. V., C. A. Flannagan, and M. R. Elliott (2019). “robust-squared” imputation models using bart. Journal of Survey Statistics and Methodology 7(4), 465–497.
* Tan (2006) Tan, Z. (2006). A distributional approach for causal inference using propensity scores. Journal of the American Statistical Association 101(476), 1619–1637.
* Valliant (2020) Valliant, R. (2020). Comparing alternatives for estimation from nonprobability samples. Journal of Survey Statistics and Methodology 8, 231–263.
* Valliant and Dever (2011) Valliant, R. and J. A. Dever (2011). Estimating propensity adjustments for volunteer web surveys. Sociological Methods & Research 40(1), 105–137.
* Wang et al. (2015) Wang, W., D. Rothschild, S. Goel, and A. Gelman (2015). Forecasting elections with non-representative polls. International Journal of Forecasting 31(3), 980–991.
* Wendling et al. (2018) Wendling, T., K. Jung, A. Callahan, A. Schuler, N. Shah, and B. Gallego (2018). Comparing methods for estimation of heterogeneous treatment effects using observational data from health care databases. Statistics in medicine.
* Wu and Sitter (2001) Wu, C. and R. R. Sitter (2001). A model-calibration approach to using complete auxiliary information from survey data. Journal of the American Statistical Association 96(453), 185–193.
* Yang and Kim (2018) Yang, S. and J. K. Kim (2018). Integration of survey data and big observational data for finite population inference using mass imputation. arXiv preprint arXiv:1807.02817.
* Yang et al. (2020) Yang, S., J. K. Kim, and R. Song (2020). Doubly robust inference when combining probability and non-probability samples with high dimensional data. Journal of the Royal Statistical Society B82, 445–465.
* Zangeneh and Little (2015) Zangeneh, S. Z. and R. J. Little (2015). Bayesian inference for the finite population total from a heteroscedastic probability proportional to size sample. Journal of Survey Statistics and Methodology 3(2), 162–192.
* Zhang and Little (2011) Zhang, G. and R. Little (2011). A comparative study of doubly robust estimators of the mean with missing data. Journal of Statistical Computation and Simulation 81(12), 2039–2058.
* Zhou et al. (2020) Zhou, Q., C. McNeal, L. A. Copeland, J. P. Zachariah, and J. J. Song (2020). Bayesian propensity score analysis for clustered observational data. Statistical Methods & Applications 29(2), 335–355.
* Zigler et al. (2013) Zigler, C. M., K. Watts, R. W. Yeh, Y. Wang, B. A. Coull, and F. Dominici (2013). Model feedback in bayesian propensity score estimation. Biometrics 69(1), 263–273.
## 8 Appendix
### 8.1 Theoretical proofs
Suppose there exists an infinite sequence of finite populations $U_{\nu}$ of
sizes $N_{\nu}$ with $\nu=1,2,...,\infty$. Corresponding to $U_{\nu}$ are a
non-probability sample $S_{B,\nu}$ and a probability sample $S_{R,\nu}$ with
$n_{B,\nu}$ and $n_{R,\nu}$ being the respective sample sizes. Also, let us
assume that $N_{\nu}{\to}\infty$, $n_{B,\nu}{\to}\infty$ and
$n_{R,\nu}{\to}\infty$ as $\nu{\to}\infty$, while
$n_{B,\nu}/N_{\nu}{\to}f_{B}$, and $n_{R,\nu}/N_{\nu}{\to}f_{R}$ with
$0<f_{R}<1$ and $0<f_{B}<1$. However, from now on, we suppress the subscript
$\nu$ for notational simplicity. In order to make unbiased
inference based on $S_{B}$, we consider the following conditions:
1. 1.
The set of observed auxiliary variables, $X$, fully governs the selection
mechanism in $S_{B}$. This is called an _ignorable_ condition, implying
$p(\delta^{B}_{i}=1|y_{i},x_{i})=p(\delta_{i}^{B}=1|x_{i})$ for $i\in U$.
2. 2.
$S_{B}$ in fact arises from a probability sampling mechanism, albeit an
unknown one. This means $p(\delta^{B}_{i}=1|x_{i})>0$ for all $i\in U$.
3. 3.
Units of $S_{R}$ and $S_{B}$ are selected independently from $U$ given the
observed auxiliary variables, $X^{*}$, i.e.
$\delta^{R}_{i}\rotatebox[origin={c}]{90.0}{$\models$}\delta^{B}_{j}|X^{*}$
for $i\neq j$.
4. 4.
The sampling fractions, $f_{R}$ and $f_{B}$, are small enough such that the
possible overlap between $S_{R}$ and $S_{B}$ is negligible, i.e. $S_{R}\cap
S_{B}=\emptyset$.
5. 5.
The true underlying models for $Y|X^{*}$, $\delta^{B}|X$, and $\delta^{R}|X$
are known.
In addition, to derive the asymptotic properties of the proposed
estimators, we consider the following regularity conditions according to Chen
et al. (2020):
1. 1.
For any given $x$, $\partial m(x;\theta)/\partial\theta$ exists and is
continuous with respect to $\theta$, and $|\partial
m(x;\theta)/\partial\theta|\leq h(x;\theta)$ for $\theta$ in a neighborhood
of the true value, and $\sum_{i=1}^{N}h(x_{i};\theta)=O(1)$.
2. 2.
For any given $x$,
$\partial^{2}m(x;\theta)/\partial\theta\partial\theta^{T}$ exists and is
continuous with respect to $\theta$, and
$\max_{j,l}|\partial^{2}m(x;\theta)/\partial\theta_{j}\partial\theta_{l}|\leq
k(x;\theta)$ for $\theta$ in a neighborhood of the true value, and
$\sum_{i=1}^{N}k(x_{i};\theta)=O(1)$.
3. 3.
For $u_{i}=\\{x_{i},y_{i},m(x_{i};\theta)\\}$, the finite population and the
sampling design in $S_{R}$ satisfy
$N^{-1}\sum_{i=1}^{n_{R}}u_{i}/\pi^{R}_{i}-N^{-1}\sum_{i=1}^{N}u_{i}=O_{p}(n_{R}^{-1/2})$.
4. 4.
There exist $c_{1}$ and $c_{2}$ such that $0<c_{1}\leq N\pi^{B}_{i}/n_{B}\leq
c_{2}$ and $0<c_{1}\leq N\pi^{R}_{i}/n_{R}\leq c_{2}$ for all $i\in U$.
5. 5.
The finite population and the propensity scores satisfy
$N^{-1}\sum_{i=1}^{N}y_{i}^{2}=O(1)$,
$N^{-1}\sum_{i=1}^{N}||x_{i}||^{3}=O(1)$, and
$N^{-1}\sum_{i=1}^{N}\pi^{B}_{i}(1-\pi^{B}_{i})x_{i}x_{i}^{T}$ is a positive
definite matrix.
Note that while we assume $\pi^{R}_{i}$ is calculable for $i\in S_{B}$
throughout the proofs, extensions can be provided for situations where
$\pi^{R}_{i}$ needs to be predicted for $i\in S_{B}$.
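The proofs below repeatedly use the PAPW identity $\pi^{B}_{i}=\pi^{R}_{i}\,p_{i}(\beta_{1})/\{1-p_{i}(\beta_{1})\}$, which converts the fitted probability $p_{i}(\beta_{1})$ that a pooled-sample unit belongs to $S_{B}$ into a pseudo-inclusion probability. A minimal numerical sketch (hypothetical function name):

```python
def papw_pseudo_inclusion(pi_r, p):
    """PAPW identity pi_B = pi_R * p / (1 - p), where p is the fitted
    probability that a pooled-sample unit came from S_B given x*
    (hypothetical sketch; in practice p comes from a logistic model
    or BART fitted on the pooled sample)."""
    return [pr * pp / (1.0 - pp) for pr, pp in zip(pi_r, p)]

# When p = 1/2 the odds p/(1-p) equal 1, so pi_B coincides with pi_R;
# when p = 2/3 the odds equal 2, so pi_B is twice pi_R.
pis = papw_pseudo_inclusion(pi_r=[0.1, 0.1], p=[0.5, 2.0 / 3.0])
```

The odds factor $p_i/(1-p_i)$ is exactly the term that appears when the first estimating equation is differentiated with respect to $\beta_1$ in Eq. 8.5.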
#### 8.1.1 Asymptotic properties of PAPW estimator
Since $\widehat{\beta}_{1}$ is the maximum likelihood estimate of $\beta_{1}$
in the logistic regression of $Z_{i}$ on $x^{*}_{i}$, it is clear that
$\widehat{\beta}_{1}\overset{p}{\to}\beta_{1}$. Two immediate results of this
are that $\widehat{\pi}^{B}_{i}\overset{p}{\to}\pi^{B}_{i}$ and
$E(\widehat{\pi}^{B}_{i}|x^{*}_{i})=\pi^{B}_{i}$, where $\widehat{\pi}^{B}_{i}$
is defined as in Eq. 2.8. Now, we prove the consistency and asymptotic
unbiasedness of the PAPW estimator in Eq. 2.16. To this end, we show that
$\widehat{\overline{y}}_{PAPW}-{\overline{y}}_{U}=O_{p}(n_{B}^{-1/2})$.
Consider the following set of estimating equations:
$\displaystyle\Phi_{n}(\eta)=\begin{bmatrix}n^{-1}\sum_{i=1}^{n}Z_{i}(y_{i}-\overline{y}_{U})/\pi^{B}_{i}\\\
n^{-1}\sum_{i=1}^{n}\\{Z_{i}-p_{i}(\beta_{1})\\}x^{*}_{i}\end{bmatrix}=\begin{bmatrix}N^{-1}\sum_{i=1}^{N}\delta^{B}_{i}(y_{i}-\overline{y}_{U})/\pi^{B}_{i}\\\
N^{-1}\sum_{i=1}^{N}\delta_{i}\\{Z_{i}-p_{i}(\beta_{1})\\}x^{*}_{i}\end{bmatrix}=0$
(8.1)
where $\eta=(\overline{y}_{U},\beta_{1})$.
In the following, we show that
$E_{\delta^{B}}[\Phi_{n}(\widehat{\eta})|x^{*}_{i}]=0$. We start with the
first component of $\Phi_{n}(\widehat{\eta})$:
$\displaystyle
E_{\delta^{B}}\left[\frac{1}{N}\sum_{i=1}^{N}\frac{\delta^{B}_{i}(y_{i}-\overline{y}_{U})}{\pi^{B}_{i}}\bigg{|}x^{*}_{i}\right]$
$\displaystyle=\frac{1}{N}\sum_{i=1}^{N}\frac{E_{\delta^{B}}(\delta^{B}_{i}|x^{*}_{i})(y_{i}-\overline{y}_{U})}{\pi^{B}_{i}}$
$\displaystyle=\frac{1}{N}\sum_{i=1}^{N}\frac{\pi^{B}_{i}(y_{i}-\overline{y}_{U})}{\pi^{B}_{i}}$
$\displaystyle=0$
Noting that
$E_{\delta^{B}}[\Phi_{n}(\widehat{\eta})]=E_{\delta}[E_{Z}\\{\Phi_{n}(\widehat{\eta})|\delta_{i}=1\\}]$,
for the second component, we have
$\displaystyle
E_{\delta^{B}}\left[\frac{1}{N}\sum_{i=1}^{N}\delta_{i}\\{Z_{i}-p_{i}(\beta_{1})\\}x^{*}_{i}\bigg{|}x^{*}_{i}\right]$
$\displaystyle=E_{\delta}\left[E_{Z}\bigg{\\{}\frac{1}{N}\sum_{i=1}^{N}\delta_{i}\\{Z_{i}-p_{i}(\beta_{1})\\}x^{*}_{i}\big{|}\delta_{i}=1,x^{*}_{i}\bigg{\\}}\right]$
(8.2)
$\displaystyle=E_{\delta}\left[\frac{1}{N}\sum_{i=1}^{N}\delta_{i}\\{E_{Z}(Z_{i}|\delta_{i}=1,x^{*}_{i})-p_{i}(\beta_{1})\\}x^{*}_{i}\right]$
$\displaystyle=E_{\delta}\left[\frac{1}{N}\sum_{i=1}^{N}\delta_{i}\\{p_{i}(\beta_{1})-p_{i}(\beta_{1})\\}x^{*}_{i}\right]$
$\displaystyle=0$
Now, we apply the first-order Taylor approximation to
$\Phi_{n}(\widehat{\eta})$ around $\eta_{1}$ as below:
$\widehat{\eta}-\eta_{1}=[E\\{\phi_{n}(\eta_{1})\\}]^{-1}\Phi_{n}(\eta_{1})+O_{p}(n_{B}^{-1/2})$
(8.3)
where $\phi_{n}(\eta)=\partial\Phi_{n}(\eta)/\partial\eta$.
$\frac{\partial}{\partial\overline{y}_{U}}\left[\frac{1}{N}\sum_{i=1}^{N}\delta^{B}_{i}\frac{(y_{i}-\overline{y}_{U})}{\pi^{B}_{i}}\right]=-\frac{1}{N}\sum_{i=1}^{N}\frac{\delta^{B}_{i}}{\pi^{B}_{i}}$
(8.4)
$\displaystyle\frac{\partial}{\partial\beta_{1}}\left[\frac{1}{N}\sum_{i=1}^{N}\delta^{B}_{i}\frac{(y_{i}-\overline{y}_{U})}{\pi^{B}_{i}}\right]$
$\displaystyle=\frac{\partial}{\partial\beta_{1}}\left[\frac{1}{N}\sum_{i=1}^{N}\frac{\delta^{B}_{i}}{\pi^{B}_{i}}\bigg{\\{}\frac{p_{i}(\beta_{1})}{1-p_{i}(\beta_{1})}\bigg{\\}}(y_{i}-\overline{y}_{U})\right]$
(8.5)
$\displaystyle=-\frac{1}{N}\sum_{i=1}^{N}\frac{\delta^{B}_{i}}{\pi^{B}_{i}}(y_{i}-\overline{y}_{U})x_{i}^{*T}$
$\frac{\partial}{\partial\beta_{1}}\left[\frac{1}{N}\sum_{i=1}^{N}\delta_{i}\big{\{}Z_{i}-p_{i}(\beta_{1})\big{\}}x^{*}_{i}\right]=-\frac{1}{N}\sum_{i=1}^{N}\delta_{i}p_{i}(\beta_{1})\left[1-p_{i}(\beta_{1})\right]x^{*}_{i}x_{i}^{*T}$
(8.6)
Therefore, we have
$\phi_{n}(\eta_{1})=\begin{pmatrix}-\frac{1}{N}\sum_{i=1}^{N}\frac{\delta^{B}_{i}}{\pi^{B}_{i}}&-\frac{1}{N}\sum_{i=1}^{N}\frac{\delta^{B}_{i}}{\pi^{B}_{i}}(y_{i}-\overline{y}_{U})x_{i}^{*T}\\\
0&-\frac{1}{N}\sum_{i=1}^{N}\delta_{i}p_{i}(\beta_{1})\left[1-p_{i}(\beta_{1})\right]x^{*}_{i}x_{i}^{*T}\end{pmatrix}$
(8.7)
Thus, it follows that
$\widehat{\overline{y}}_{PAPW}=\overline{y}_{U}+O_{p}(n_{B}^{-1/2})$.
Now, we turn to deriving the asymptotic variance estimator for
$\widehat{\overline{y}}_{PAPW}$. According to the sandwich formula, we have
$Var(\widehat{\eta})=\left[E\{\phi_{n}(\eta_{1})\}\right]^{-1}Var\big{\{}\Phi_{n}(\eta_{1})\big{\}}\left[E\{\phi_{n}(\eta_{1})\}^{T}\right]^{-1}+O_{p}(n_{B}^{-1})$
(8.8)
Given that
$P(\delta_{i}=1|x^{*}_{i})=\frac{p(\delta^{B}_{i}=1|x^{*}_{i})}{p(Z_{i}=1|x^{*}_{i})}=\frac{\pi^{R}_{i}}{1-p_{i}(\beta_{1})}$
(8.9)
it can be shown that
$E\big{\\{}\phi_{n}(\eta_{1})\big{\\}}=\begin{pmatrix}-1&-\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\overline{y}_{U})x_{i}^{*T}\\\
0&-\frac{1}{N}\sum_{i=1}^{N}\pi^{B}_{i}\left[1-p_{i}(\beta_{1})\right]x^{*}_{i}x_{i}^{*T}\end{pmatrix}$
(8.10)
And
$\left[E\big{\\{}\phi_{n}(\eta_{1})\big{\\}}\right]^{-1}=\begin{pmatrix}-1&b^{T}\\\
0&-\left[\frac{1}{N}\sum_{i=1}^{N}\pi^{B}_{i}\left[1-p_{i}(\beta_{1})\right]x^{*}_{i}x_{i}^{*T}\right]^{-1}\end{pmatrix}$
(8.11)
where
$b^{T}=\bigg{\\{}\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\overline{y}_{U})x_{i}^{*T}\bigg{\\}}\bigg{\\{}\frac{1}{N}\sum_{i=1}^{N}\pi^{B}_{i}\big{\\{}1-p_{i}(\beta_{1})\big{\\}}x^{*}_{i}x_{i}^{*T}\bigg{\\}}^{-1}$
(8.12)
Now, the goal is to calculate $Var\big{\{}\Phi_{n}(\eta_{1})\big{\}}$. We
know that
$\displaystyle
Var_{\delta^{B}}\left(\frac{1}{N}\sum_{i=1}^{N}\frac{\delta^{B}_{i}(y_{i}-\overline{y}_{U})}{\pi^{B}_{i}}\bigg{|}x_{i}\right)$
$\displaystyle=\frac{1}{N^{2}}\sum_{i=1}^{N}\frac{(y_{i}-\overline{y}_{U})^{2}}{(\pi^{B}_{i})^{2}}\pi^{B}_{i}(1-\pi^{B}_{i})$
(8.13)
$\displaystyle=\frac{1}{N^{2}}\sum_{i=1}^{N}\bigg{\{}\frac{1-\pi^{B}_{i}}{\pi^{B}_{i}}\bigg{\}}(y_{i}-\overline{y}_{U})^{2}$
$\displaystyle
Var_{\delta^{B}}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{i}\big{\{}Z_{i}-p_{i}(\beta_{1})\big{\}}x^{*}_{i}\bigg{|}x^{*}_{i}\right)$
$\displaystyle=E_{\delta}\left[Var_{Z}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{i}\big{\{}Z_{i}-p_{i}(\beta_{1})\big{\}}x^{*}_{i}\big{|}\delta_{i}=1,x^{*}_{i}\right)\right]$
(8.14)
$\displaystyle+Var_{\delta}\left[E_{Z}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{i}\big{\{}Z_{i}-p_{i}(\beta_{1})\big{\}}x^{*}_{i}\big{|}\delta_{i}=1,x^{*}_{i}\right)\right]$
$\displaystyle=\frac{1}{N^{2}}E_{\delta}\left(\sum_{i=1}^{N}\delta^{2}_{i}Var_{Z}(Z_{i})x^{*}_{i}x_{i}^{*T}\bigg{|}x^{*}_{i}\right)+0$
$\displaystyle=\frac{1}{N^{2}}\sum_{i=1}^{N}\pi^{R}_{i}p_{i}(\beta_{1})x^{*}_{i}x_{i}^{*T}$
$\displaystyle
Cov\left(\frac{1}{N}\sum_{i=1}^{N}\frac{\delta^{B}_{i}(y_{i}-\overline{y}_{U})}{\pi^{B}_{i}},\frac{1}{N}\sum_{i=1}^{N}\delta_{i}\big{\{}Z_{i}-p_{i}(\beta_{1})\big{\}}x^{*}_{i}\bigg{|}x^{*}_{i}\right)$
$\displaystyle=E_{\delta}\left[E_{Z}\left(\frac{1}{N^{2}}\sum_{i=1}^{N}\delta_{i}\frac{Z_{i}(y_{i}-\overline{y}_{U})}{\pi^{B}_{i}}x^{*}_{i}\bigg{|}\delta_{i}=1,x^{*}_{i}\right)\right]$
(8.15)
$\displaystyle=\frac{1}{N^{2}}\sum_{i=1}^{N}\big{\\{}1-p_{i}(\beta_{1})\big{\\}}(y_{i}-\overline{y}_{U})x^{*}_{i}$
Therefore, we have
$Var\big{\{}\Phi_{n}(\eta_{1})\big{\}}=\begin{pmatrix}\frac{1}{N^{2}}\sum_{i=1}^{N}\{(1-\pi^{B}_{i})/\pi^{B}_{i}\}(y_{i}-\overline{y}_{U})^{2}&\frac{1}{N^{2}}\sum_{i=1}^{N}\big{\{}1-p_{i}(\beta_{1})\big{\}}(y_{i}-\overline{y}_{U})x_{i}^{*T}\\
\frac{1}{N^{2}}\sum_{i=1}^{N}\big{\{}1-p_{i}(\beta_{1})\big{\}}(y_{i}-\overline{y}_{U})x^{*}_{i}&\frac{1}{N^{2}}\sum_{i=1}^{N}\pi^{B}_{i}\big{\{}1-p_{i}(\beta_{1})\big{\}}x^{*}_{i}x_{i}^{*T}\end{pmatrix}$
(8.16)
The final asymptotic variance estimator of $\widehat{\overline{y}}_{PAPW}$ is
given by
$Var\big{\\{}\widehat{\overline{y}}_{PAPW}\big{\\}}=\frac{1}{N^{2}}\sum_{i=1}^{N}\bigg{\\{}\frac{1-\pi^{B}_{i}}{\pi^{B}_{i}}\bigg{\\}}(y_{i}-\overline{y}_{U})^{2}-2\frac{b^{T}}{N^{2}}\sum_{i=1}^{N}\big{\\{}1-p_{i}(\beta_{1})\big{\\}}(y_{i}-\overline{y}_{U})x^{*}_{i}+b^{T}\left[\frac{1}{N^{2}}\sum_{i=1}^{N}\pi^{B}_{i}\big{\\{}1-p_{i}(\beta_{1})\big{\\}}x^{*}_{i}x_{i}^{*T}\right]b$
(8.17)
To obtain the variance estimate based on the observed samples $S_{B}$ and
$S_{R}$, we replace the population quantities with their sample-based
estimates:
$\widehat{Var}\big{\\{}\widehat{\overline{y}}_{PAPW}\big{\\}}=\frac{1}{N^{2}}\sum_{i=1}^{n_{B}}\big{\\{}1-\widehat{\pi}^{B}_{i}\big{\\}}\left(\frac{y_{i}-\overline{y}_{U}}{\widehat{\pi}^{B}_{i}}\right)^{2}-2\frac{\widehat{b}^{T}}{N^{2}}\sum_{i=1}^{n_{B}}\big{\\{}1-p_{i}(\widehat{\beta}_{1})\big{\\}}\left(\frac{y_{i}-\overline{y}_{U}}{\widehat{\pi}^{B}_{i}}\right)x^{*}_{i}+\widehat{b}^{T}\left[\frac{1}{N^{2}}\sum_{i=1}^{n}p_{i}(\widehat{\beta}_{1})x^{*}_{i}x_{i}^{*T}\right]\widehat{b}$
(8.18)
where
$\widehat{b}^{T}=\bigg{\\{}\frac{1}{N}\sum_{i=1}^{n_{B}}\left(\frac{y_{i}-\overline{y}_{U}}{\widehat{\pi}^{B}_{i}}\right)x_{i}^{*T}\bigg{\\}}\bigg{\\{}\frac{1}{N}\sum_{i=1}^{n}p_{i}(\widehat{\beta}_{1})x^{*}_{i}x_{i}^{*T}\bigg{\\}}^{-1}$
(8.19)
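Expressions (8.18)–(8.19) translate directly into a few lines of linear algebra. Below is a minimal NumPy sketch, assuming the estimated propensities $\widehat{\pi}^{B}_{i}$ and $p_{i}(\widehat{\beta}_{1})$ and the covariate matrices have already been computed; all variable names are hypothetical:

```python
import numpy as np

def var_papw(y, x_B, pihat_B, p_B, x_pool, p_pool, ybar, N):
    """Plug-in asymptotic variance of (8.18) with b-hat from (8.19).

    y, x_B, pihat_B, p_B : outcome, covariates, estimated inclusion
        probabilities and estimated p_i(beta1) for the n_B units of S_B.
    x_pool, p_pool       : covariates and estimated p_i(beta1) for the
        pooled sample of size n.
    """
    r = (y - ybar) / pihat_B                        # weighted residuals over S_B
    M = (x_pool * p_pool[:, None]).T @ x_pool / N   # (1/N) sum p_i x* x*^T
    u = (r[:, None] * x_B).sum(axis=0) / N          # (1/N) sum r_i x*^T
    b = np.linalg.solve(M.T, u)                     # b from (8.19): b^T = u^T M^{-1}
    t1 = np.sum((1 - pihat_B) * r**2) / N**2
    t2 = 2 * b @ (((1 - p_B) * r)[:, None] * x_B).sum(axis=0) / N**2
    t3 = b @ ((x_pool * p_pool[:, None]).T @ x_pool / N**2) @ b
    return t1 - t2 + t3
```

The three terms correspond one-to-one to the three summands of (8.18).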
#### 8.1.2 Proof of double robustness
As discussed in Section 2.4, a doubly robust estimator should be
consistent even if one of the two models is misspecified. To prove the double
robustness property of the AIPW estimator proposed here, let us initially
assume that $\widehat{\theta}\overset{p}{\to}\theta$ if the prediction model
(PM) is correctly specified, and $\widehat{\phi}\overset{p}{\to}\phi$ and
$\widehat{\beta}\overset{p}{\to}\beta$ if the pseudo-weighting model is
correctly specified. Given the true probabilities of selection in $S_{B}$, we
know that the Horvitz–Thompson (HT) estimator is design-unbiased for the
population total, i.e.
$\displaystyle E\left(\sum_{i=1}^{n_{B}}y_{i}/\pi^{B}_{i}\right)$
$\displaystyle=E\left(\sum_{i=1}^{N}\delta^{B}_{i}y_{i}/\pi^{B}_{i}\right)$
(8.20) $\displaystyle=\sum_{i=1}^{N}E(\delta^{B}_{i})y_{i}/\pi^{B}_{i}$
$\displaystyle=\sum_{i=1}^{N}\pi^{B}_{i}y_{i}/\pi^{B}_{i}$
$\displaystyle=\sum_{i=1}^{N}y_{i}$ $\displaystyle=y_{U}$
The same result holds for $S_{R}$. Therefore,
$\displaystyle E\left(\sum_{i=1}^{n_{B}}y_{i}/\pi^{B}_{i}\right)$
$\displaystyle=E\left(\sum_{i=1}^{n_{R}}y_{i}/\pi^{R}_{i}\right)$ (8.21)
$\displaystyle=y_{U}$
Now we have
$\displaystyle\widehat{y}_{DR}\overset{p}{\to}E(\widehat{y}_{DR})$
$\displaystyle=E\bigg{\\{}\sum_{i=1}^{n_{B}}\frac{(y_{i}-\widehat{y}_{i})}{\widehat{\pi}^{B}_{i}}+\sum_{i=1}^{n_{R}}\frac{\widehat{y}_{i}}{\pi^{R}_{i}}\bigg{\\}}$
(8.22)
$\displaystyle=E\bigg{\\{}\sum_{i=1}^{n_{B}}\frac{(y_{i}-\widehat{y}_{i})}{\widehat{\pi}^{B}_{i}}+\sum_{i=1}^{n_{B}}\frac{\widehat{y}_{i}}{\pi^{B}_{i}}\bigg{\\}}$
$\displaystyle=E\bigg{\{}\sum_{i=1}^{n_{B}}\bigg{(}\frac{y_{i}-\widehat{y}_{i}}{\widehat{\pi}^{B}_{i}}+\frac{\widehat{y}_{i}}{\pi^{B}_{i}}\bigg{)}\bigg{\}}$
$\displaystyle=E\bigg{\{}\sum_{i=1}^{n_{B}}\bigg{(}\frac{y_{i}}{\pi^{B}_{i}}+\frac{y_{i}-\widehat{y}_{i}}{\widehat{\pi}^{B}_{i}}-\frac{y_{i}-\widehat{y}_{i}}{\pi^{B}_{i}}\bigg{)}\bigg{\}}$
$\displaystyle\widehat{y}_{DR}\overset{p}{\to}E(\widehat{y}_{DR})$
$\displaystyle=y_{U}+E\bigg{\\{}\sum_{i=1}^{n_{B}}(y_{i}-\widehat{y}_{i})(\frac{1}{\widehat{\pi}^{B}_{i}}-\frac{1}{\pi^{B}_{i}})\bigg{\\}}$
(8.23)
$\displaystyle=y_{U}+E\bigg{\\{}\sum_{i=1}^{n_{B}}(y_{i}-\widehat{y}_{i})(\frac{\pi^{B}_{i}}{\widehat{\pi}^{B}_{i}}-1)\bigg{\\}}$
Under the ignorability assumption in $S_{B}$, we have
$Y\perp\!\!\!\perp\pi^{B}\,|\,X,\pi^{R}$. Hence
$\displaystyle\widehat{y}_{DR}\overset{p}{\to}E(\widehat{y}_{DR})$
$\displaystyle=y_{U}+E\bigg{\\{}\sum_{i=1}^{n_{B}}(y_{i}-\widehat{y}_{i})(\frac{\pi^{B}_{i}}{\widehat{\pi}^{B}_{i}}-1)\bigg{\\}}$
(8.24)
$\displaystyle=y_{U}+E\bigg{\\{}E\bigg{\\{}\sum_{i=1}^{n_{B}}(y_{i}-\widehat{y}_{i})(\frac{\pi^{B}_{i}}{\widehat{\pi}^{B}_{i}}-1)|x_{i},\pi^{R}_{i}\bigg{\\}}\bigg{\\}}$
$\displaystyle=y_{U}+E\bigg{\\{}\sum_{i=1}^{n_{B}}E(y_{i}-\widehat{y}_{i}|x_{i},\pi^{R}_{i})E(\frac{\pi^{B}_{i}}{\widehat{\pi}^{B}_{i}}-1|x_{i},\pi^{R}_{i})\bigg{\\}}$
If we assume the pseudo-weighting model is correctly specified, then we expect
$\widehat{\pi}^{B}_{i}\overset{p}{\to}\pi^{B}_{i}$ and
$\displaystyle
E\left(\frac{\pi^{B}_{i}}{\widehat{\pi}^{B}_{i}}-1|x_{i},\pi^{R}_{i}\right)\overset{p}{\to}\frac{\pi^{B}_{i}}{\pi^{B}_{i}}-1=0$
(8.25)
which implies that $\widehat{y}_{DR}\overset{p}{\to}y_{U}$ regardless of
whether the PM is correctly specified or not. In situations where the mean
model is correctly specified, we expect that
$\widehat{y}_{i}\overset{p}{\to}E(y_{i}|x_{i},\pi^{R}_{i})$. Hence
$\displaystyle
E\left(y_{i}-\widehat{y}_{i}|x_{i},\pi^{R}_{i}\right)\overset{p}{\to}E\left(y_{i}|x_{i},\pi^{R}_{i}\right)-E\left(y_{i}|x_{i},\pi^{R}_{i}\right)=0$
(8.26)
which means that $\widehat{y}_{DR}\overset{p}{\to}y_{U}$ even if the PW model
is incorrectly specified.
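The double-robustness argument above can be illustrated numerically. The sketch below (all names and the data-generating design are hypothetical, and the full population plays the role of a perfectly weighted $S_{R}$) deliberately misspecifies one component at a time and checks that the DR estimate stays near the population mean:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
x = rng.normal(size=N)
y = 1.0 + 2.0 * x + rng.normal(size=N)        # true outcome model: E(y|x) = 1 + 2x
pi_B = 1.0 / (1.0 + np.exp(-(x - 1.0)))       # true selection probabilities for S_B
in_B = rng.random(N) < pi_B                   # realised non-probability sample
mu = y.mean()                                 # target: population mean

def dr_mean(yhat, pihat):
    # DR estimator: sum_B (y - yhat)/pihat + sum over the reference, divided by N
    # (here the reference term is a census, i.e. a perfectly weighted S_R)
    return (((y - yhat) / pihat)[in_B].sum() + yhat.sum()) / N

est_pw_only = dr_mean(np.zeros(N), pi_B)                        # weights right, PM wrong
est_pm_only = dr_mean(1.0 + 2.0 * x, np.full(N, in_B.mean()))   # PM right, weights wrong
```

Both `est_pw_only` and `est_pm_only` land close to `mu`, while breaking both components at once would not.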
#### 8.1.3 Variance estimation under the Bayesian approach
As discussed in Section 2.6, in this study we use Rubin’s combining rule to
estimate the variance of the AIPW estimator under the two-step Bayesian
approach. The idea stems from the conditional variance formula, which involves
two parts: (1) the within-imputation variance and (2) the between-imputation
variance. The latter is straightforward and is obtained by taking the variance
of $\widehat{\overline{y}}^{(m)}_{DR}$ across the $M$ MCMC draws. The within-
imputation variance requires more attention, as one needs to account for the
intraclass correlations due to clustering and to use linearization techniques
when dealing with a ratio estimator.
This component is calculated conditional on the observed
$\widehat{y}^{(m)}_{i}$ for $i\in S$, $\widehat{\pi}^{B(m)}_{i}$ for $i\in
S_{B}$ and $\widehat{\pi}^{B}_{i}$:
$\displaystyle
var\left(\widehat{\overline{y}}^{(m)}_{DR}\big{|}\widehat{\pi}^{B(m)}_{i},\widehat{y}^{(m)}_{i}\right)$
$\displaystyle=var\left(\frac{1}{\widehat{N}_{B}}\sum_{i=1}^{n_{B}}\frac{\left(y_{i}-\widehat{y}^{(m)}_{i}\right)}{\widehat{\pi}^{B}_{i}}+\frac{1}{\widehat{N}_{R}}\sum_{i=1}^{n_{R}}\frac{\widehat{y}^{(m)}_{i}}{\pi^{R}_{i}}\bigg{|}\widehat{\pi}^{B(m)}_{i},\widehat{y}^{(m)}_{i}\right)$
(8.27)
$\displaystyle=\frac{1}{\widehat{N}^{2}_{B}}\sum_{i=1}^{n_{B}}\frac{var(y_{i})}{\left(\widehat{\pi}^{B}_{i}\right)^{2}}+var\left(\frac{1}{\widehat{N}_{R}}\sum_{i=1}^{n_{R}}\frac{\widehat{y}^{(m)}_{i}}{\pi^{R}_{i}}\bigg{|}\widehat{y}^{(m)}_{i}\right)$
For the first component, which equals
$var(y)\sum_{i=1}^{n_{B}}\left(\widehat{\pi}^{B}_{i}\right)^{-2}/\widehat{N}^{2}_{B}$,
it suffices to estimate the variance of $y$. The second component, however,
involves the variance of a ratio estimator, which requires linearization
techniques. Defining
$\widehat{t}_{R}=\sum_{i=1}^{n_{R}}\widehat{y}^{(m)}_{i}/\pi^{R}_{i}$, the
Taylor-series approximation of the variance is given by
$\displaystyle
var\left(\frac{1}{\widehat{N}_{R}}\sum_{i=1}^{n_{R}}\frac{\widehat{y}^{(m)}_{i}}{\pi^{R}_{i}}\bigg{|}\widehat{y}^{(m)}_{i}\right)$
$\displaystyle=var\left(\frac{\widehat{t}_{R}}{\widehat{N}_{R}}\bigg{|}\widehat{y}^{(m)}_{i}\right)$
(8.28)
$\displaystyle\approx\frac{1}{\widehat{N}^{2}_{R}}\bigg{\\{}var\left(\widehat{t}_{R}\big{|}\widehat{y}^{(m)}_{i}\right)+\left(\frac{\widehat{t}_{R}}{\widehat{N}_{R}}\right)^{2}var(\widehat{N}_{R})-2\left(\frac{\widehat{t}_{R}}{\widehat{N}_{R}}\right)cov\left(\widehat{t}_{R},\widehat{N}_{R}\big{|}\widehat{y}^{(m)}_{i}\right)\bigg{\\}}$
Conditional on $\widehat{y}^{(m)}_{i}$, we have
$var\left(\widehat{t}_{R}\big{|}\widehat{y}^{(m)}_{i}\right)=\sum_{i=1}^{n_{R}}\left(\widehat{y}^{(m)}_{i}\right)^{2}var\left(\frac{1}{\pi^{R}_{i}}\right)$
(8.29)
$cov\left(\widehat{t}_{R},\widehat{N}_{R}\big{|}\widehat{y}^{(m)}_{i}\right)=\sum_{i=1}^{n_{R}}\widehat{y}^{(m)}_{i}var\left(\frac{1}{\pi^{R}_{i}}\right)$
(8.30)
Therefore, the variance of the ratio estimator is approximated by
$var\left(\frac{1}{\widehat{N}_{R}}\sum_{i=1}^{n_{R}}\frac{\widehat{y}^{(m)}_{i}}{\pi^{R}_{i}}\bigg{|}\widehat{y}^{(m)}_{i}\right)\approx\frac{1}{\widehat{N}^{2}_{R}}var\left(\frac{1}{\pi^{R}_{i}}\right)\bigg{\\{}\sum_{i=1}^{n_{R}}\left(\widehat{y}^{(m)}_{i}\right)^{2}+n_{R}\left(\frac{\widehat{t}_{R}}{\widehat{N}_{R}}\right)^{2}-2\sum_{i=1}^{n_{R}}\widehat{y}^{(m)}_{i}\bigg{\\}}$
(8.31)
And the final within-imputation variance can be given by
$var\left(\widehat{\overline{y}}^{(m)}_{DR}\big{|}\widehat{\pi}^{B(m)}_{i},\widehat{y}^{(m)}_{i}\right)\approx\frac{1}{\widehat{N}^{2}_{B}}\sum_{i=1}^{n_{B}}\frac{var(y_{i})}{\left(\widehat{\pi}^{B}_{i}\right)^{2}}+\frac{1}{\widehat{N}^{2}_{R}}var\left(\frac{1}{\pi^{R}_{i}}\right)\bigg{\\{}\sum_{i=1}^{n_{R}}\left(\widehat{y}^{(m)}_{i}\right)^{2}+n_{R}\left(\frac{\widehat{t}_{R}}{\widehat{N}_{R}}\right)^{2}-2\sum_{i=1}^{n_{R}}\widehat{y}^{(m)}_{i}\bigg{\\}}$
(8.32)
Note that in situations where either $S_{R}$ or $S_{B}$ is a clustered sample,
the derivation of the within-imputation variance would remain the same, but
$y_{i}$, $\pi^{R}_{i}$, $\widehat{\pi}^{B(m)}_{i}$, and
$\widehat{y}^{(m)}_{i}$ will represent the total for cluster $i$, and $n_{R}$
and $n_{B}$ are the number of clusters in $S_{R}$ and $S_{B}$, respectively.
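Once the within-imputation variances (8.32) are available, Rubin’s combining rule itself is a one-line computation. A minimal sketch (function name hypothetical) combines the average within-imputation variance with the between-imputation variance across the $M$ draws:

```python
import numpy as np

def rubin_total_variance(point_estimates, within_vars):
    """Rubin's rule: T = W_bar + (1 + 1/M) * B."""
    M = len(point_estimates)
    W_bar = np.mean(within_vars)            # average within-imputation variance
    B = np.var(point_estimates, ddof=1)     # between-imputation variance
    return W_bar + (1.0 + 1.0 / M) * B
```

Here `point_estimates` holds the $M$ draws of $\widehat{\overline{y}}^{(m)}_{DR}$ and each entry of `within_vars` would come from (8.32).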
### 8.2 Bayesian Additive Regression Trees
BART is a flexible ensemble-of-trees method that can handle non-linear
relationships as well as multi-way interaction effects. The idea of BART is
based on a sum of trees, where each tree is sequentially modified on the basis
of the residuals from the other trees. In a tree-based method, the variation in
the response variable is explained by hierarchically splitting the sample into
more homogeneous subgroups (Green and Kern, 2012). As illustrated in Figure
11, a binary-structured tree consists of a root node, a set of interior nodes,
a set of terminal nodes associated with parameters, and decision rules that
link these nodes (Abu-Nimeh et al., 2008).
Figure 11: Example of a binary-structured tree model
#### 8.2.1 BART for continuous outcomes
Consider the model $y=f(x)+\epsilon$, where
$y\in\mathbb{R}$ is a continuous outcome, $x$ denotes an $n\times p$ matrix of
covariates, and $\epsilon\sim N(0,\sigma^{2})$ is the error term. BART
approximates the outcome as follows:
$y\approx\sum_{j=1}^{m}f(x,T_{j},M_{j})$ (8.33)
where $T_{j}$ is the $j$-th tree with $b_{j}$ terminal nodes and associated
parameters $M_{j}=(\mu_{1j},\mu_{2j},\ldots,\mu_{b_{j}j})^{T}$. BART is a
Bayesian approach, since it assigns prior distributions to $T$, $M$, and
$\sigma$ (Chipman et al., 2010; Tan et al., 2016). Assuming an independence
structure between trees, we can define the prior as follows:
$p[(T_{1},M_{1}),...,(T_{m},M_{m}),\sigma^{-2}]=[\prod_{j=1}^{m}p(T_{j},M_{j})]p(\sigma^{-2})$
(8.34)
Using the multiplication law of probability, the joint distribution of
$p(T_{j},M_{j})$ can be written as:
$\displaystyle p(T_{j},M_{j})$ $\displaystyle=p(M_{j}|T_{j})p(T_{j})$ (8.35)
$\displaystyle=\prod_{i=1}^{b_{j}}p(\mu_{ij}|T_{j})p(T_{j})$
where $i=1,...,b_{j}$ indexes the terminal nodes of tree $j$.
Therefore, the joint distribution in (8.34) can be factored as below:
$p\left[(T_{1},M_{1}),...,(T_{m},M_{m}),\sigma^{-2}\right]=\left[\prod_{j=1}^{m}\bigg{\\{}\prod_{i=1}^{b_{j}}p(\mu_{ij}|T_{j})\bigg{\\}}p(T_{j})\right]p(\sigma^{-2})$
(8.36)
As suggested by Chipman et al. (2007), the following distributions can be used
for $\mu_{ij}|T_{j}$ and $\sigma^{-2}$:
$\mu_{ij}|T_{j}\sim N(\mu_{\mu},\sigma_{\mu}^{2})$ (8.37) $\sigma^{-2}\sim
G(\frac{\nu}{2},\frac{\nu\lambda}{2})$ (8.38)
The prior for $T_{j}$ involves three components of the tree structure: the
depth of the tree, the decision rules, and the choice of covariate at a given
node. However, prior specification for $T_{j}$ depends on several factors, and
detailed discussions can be found in Chipman et al. (2010). Given the data,
these parameters are updated through a combination of “Bayesian backfitting”
and an MCMC Gibbs sampler. The trained trees are then summed to
approximate the outcome variable. Finally, $m$ is typically assumed to be
fixed, but it can be chosen by cross-validation.
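The sum-of-trees idea can be caricatured without any of the Bayesian machinery: repeatedly fit a very shallow tree (here a single-split “stump” on one covariate, implemented from scratch in NumPy) to the residuals of the current fit and add it to the ensemble. This is only a sketch of the mechanics; BART additionally places priors on the trees and parameters and explores them by MCMC:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=500)
y = np.sin(x) + 0.1 * rng.normal(size=500)   # toy non-linear regression problem

def fit_stump(x, r):
    # best single-split regression tree ("stump") minimising the SSE of r
    order = np.argsort(x)
    xs, rs = x[order], r[order]
    n = len(rs)
    cs, cs2 = np.cumsum(rs), np.cumsum(rs**2)
    k = np.arange(1, n)
    lmean = cs[:-1] / k
    rmean = (cs[-1] - cs[:-1]) / (n - k)
    sse = (cs2[:-1] - k * lmean**2) + (cs2[-1] - cs2[:-1] - (n - k) * rmean**2)
    sse[xs[1:] == xs[:-1]] = np.inf          # cannot split between tied x values
    i = int(np.argmin(sse))
    return (xs[i] + xs[i + 1]) / 2, lmean[i], rmean[i]

def predict_stump(stump, x):
    thr, lmean, rmean = stump
    return np.where(x <= thr, lmean, rmean)

# sequential residual fitting: each stump is fit to the residual of the others
stumps, resid = [], y.copy()
for _ in range(200):
    s = fit_stump(x, resid)
    stumps.append(s)
    resid -= predict_stump(s, x)
pred = y - resid                             # ensemble prediction = sum of stumps
```

After 200 stumps the training residuals are far smaller than the variance of $y$, illustrating why a sum of many weak trees can capture the smooth signal.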
#### 8.2.2 BART for binary outcomes
For a binary outcome, a $probit$ link function is usually employed, in the
sense that $y$ is an indicator variable dichotomizing a normally distributed
latent continuous outcome $y^{*}$ at a real value $c$, so that:
$y=\begin{cases}1&\quad y^{*}>c\\
0&\quad y^{*}\leq c\end{cases},\qquad y^{*}\sim N(0,1)$ (8.39)
Therefore, the new model will be given by:
$G(x)=\Phi^{-1}[p(y=1|x)]=\sum_{j=1}^{m}f(x,T_{j},M_{j})$ (8.40)
where $\Phi^{-1}[\cdot]$ is the inverse of the standard normal CDF. Since we
implicitly assumed $\sigma\equiv 1$, the only priors we need to specify are
$p(\mu_{ij}|T_{j})$ and $p(T_{j})$. In order to draw from the posterior
distribution of $T_{j}$ and $\mu_{ij}$, we need to generate the latent
continuous variable $y^{*}$ given $y_{k}$. Chipman et al. (2010) recommend
a data augmentation method based on the following algorithm:
$y_{k}^{*}=\begin{cases}\max\{N(G(x_{k}),1),0\}&\text{if }y_{k}=1\\
\min\{N(G(x_{k}),1),0\}&\text{if }y_{k}=0\end{cases}$ (8.41)
Since the structure of the priors is very similar to BART for continuous
outcomes (Tan et al., 2016), we update the estimates $G(x_{k})$ after drawing
samples from the $T_{j}$’s and $\mu_{ij}$’s. To apply BART in this research,
we utilize the ‘BayesTree’ and ‘BART’ packages in R.
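The truncated-normal draw in (8.41) can be sketched with SciPy’s `truncnorm` (whose bounds are specified on the standardised scale); the function name and inputs below are hypothetical:

```python
import numpy as np
from scipy.stats import truncnorm

def draw_latent(G, y):
    """One data-augmentation draw of y* given current fits G(x_k) and labels y.

    Draws from N(G, 1) truncated to (0, inf) when y = 1 and to (-inf, 0]
    when y = 0, as in the Albert-Chib style step (8.41).
    """
    # truncnorm takes bounds standardised by (bound - loc) / scale
    a = np.where(y == 1, -G, -np.inf)   # lower bound: 0 for y = 1
    b = np.where(y == 1, np.inf, -G)    # upper bound: 0 for y = 0
    return truncnorm.rvs(a, b, loc=G, scale=1.0)
```

By construction every draw is positive where $y_{k}=1$ and non-positive where $y_{k}=0$, regardless of the current fit $G(x_{k})$.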
### 8.3 Further extensions of the simulation study
#### 8.3.1 Simulation study I
This subsection provides additional results associated with Simulation I.
Table 5 and Table 6 summarize the findings of the simulation in Section 3.1
under the frequentist approach when $n_{B}=100$ and $n_{B}=10,000$. We report
the corresponding results under the two-step Bayesian approach in Table 7 and
Table 8, respectively.
Table 5: Comparing the performance of the bias adjustment methods and
associated asymptotic variance estimator under the frequentist approach in the
first simulation study for $n_{R}=100$ and $n_{B}=100$
| $\rho=0.3$ | | $\rho=0.5$ | | $\rho=0.8$
---|---|---|---|---|---
Method | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE
Probability sample ($S_{R}$) | | | | |
Unweighted | 8.528 | 19.248 | 92.6 | 1.009 | | 8.647 | 11.065 | 77.4 | 1.018 | | 8.682 | 9.719 | 50.9 | 1.02
Fully weighted | -0.029 | 20.276 | 94.7 | 1.001 | | 0.006 | 8.035 | 95.1 | 1.010 | | 0.015 | 5.008 | 94.9 | 1.008
Non-probability sample ($S_{B}$) | | | | |
Unweighted | 31.895 | 36.418 | 57.0 | 1.014 | | 32.213 | 33.2 | 1.740 | 1.008 | | 32.310 | 32.853 | 0.0 | 0.995
Fully weighted | 0.171 | 21.078 | 94.8 | 0.996 | | 0.247 | 8.265 | 94.9 | 0.999 | | 0.268 | 4.994 | 94.2 | 0.995
Non-robust adjustment | | | | |
Model specification: True | | | | |
PAPW | -1.192 | 23.466 | 95.2 | 1.018 | | -1.205 | 9.452 | 95.3 | 1.015 | | -1.211 | 5.982 | 95.8 | 1.007
IPSW | -2.917 | 26.505 | 97.3 | 1.386 | | -3.036 | 12.700 | 97.0 | 1.355 | | -3.075 | 9.470 | 97.0 | 1.308
PM | 0.372 | 20.989 | 94.6 | 0.994 | | 0.148 | 8.351 | 94.9 | 0.995 | | 0.077 | 5.160 | 95.0 | 0.992
Model specification: False | | | | |
PAPW | 27.140 | 33.436 | 75.6 | 1.059 | | 27.393 | 28.814 | 16.6 | 1.043 | | 27.470 | 28.276 | 2.5 | 1.025
IPSW | 28.372 | 33.972 | 67.9 | 1.012 | | 28.711 | 29.951 | 8.3 | 1.002 | | 28.815 | 29.515 | 0.5 | 0.99
PM | 28.199 | 33.790 | 68.4 | 1.011 | | 28.541 | 29.771 | 8.3 | 1.001 | | 28.645 | 29.337 | 0.3 | 0.988
Doubly robust adjustment | | | | |
Model specification: QR–True, PM–True | | | | |
AIPW–PAPW | -0.084 | 22.973 | 96.4 | 1.047 | | -0.014 | 8.996 | 96.2 | 1.038 | | 0.007 | 5.368 | 95.5 | 1.017
AIPW–IPSW | -0.184 | 22.449 | 96.3 | 1.046 | | -0.049 | 8.826 | 96.1 | 1.038 | | -0.009 | 5.314 | 95.9 | 1.016
Model specification: QR–True, PM–False | | | | |
AIPW–PAPW | -0.436 | 23.709 | 96.4 | 1.038 | | -0.286 | 9.866 | 96.6 | 1.062 | | -0.241 | 6.520 | 97.2 | 1.101
AIPW–IPSW | -0.427 | 23.083 | 96.4 | 1.039 | | -0.227 | 9.570 | 96.6 | 1.070 | | -0.166 | 6.298 | 97.5 | 1.119
Model specification: QR–False, PM–True | | | | |
AIPW–PAPW | -0.045 | 29.068 | 97.3 | 1.107 | | 0.011 | 11.113 | 96.9 | 1.097 | | 0.026 | 6.073 | 96.2 | 1.068
AIPW–IPSW | -0.194 | 28.208 | 97.5 | 1.104 | | -0.044 | 10.825 | 97.1 | 1.094 | | 0.001 | 5.974 | 96.5 | 1.062
Model specification: QR–False, PM–False | | | | |
AIPW–PAPW | 28.301 | 34.194 | 71.3 | 1.037 | | 28.570 | 29.868 | 10.9 | 1.028 | | 28.652 | 29.379 | 0.7 | 1.016
AIPW–IPSW | 28.178 | 33.806 | 70.4 | 1.035 | | 28.525 | 29.764 | 9.4 | 1.025 | | 28.631 | 29.326 | 0.5 | 1.013
* •
PAPW: propensity adjusted probability weighting; IPSW: Inverse propensity
score weighting; QR: quasi-randomization; PM: prediction model; AIPW:
augmented inverse propensity weighting. Fully weighted implies the weighted
means if the true sampling weights are known.
Table 6: Comparing the performance of the bias adjustment methods and
associated asymptotic variance estimator under the frequentist approach in the
first simulation study for $n_{R}=100$ and $n_{B}=10,000$
| $\rho=0.2$ | | $\rho=0.5$ | | $\rho=0.8$
---|---|---|---|---|---
Method | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE
Probability sample ($S_{R}$) | | | | |
Unweighted | 8.528 | 19.248 | 92.6 | 1.009 | | 8.647 | 11.065 | 77.4 | 1.018 | | 8.682 | 9.719 | 50.9 | 1.02
Fully weighted | -0.029 | 20.276 | 94.7 | 1.001 | | 0.006 | 8.035 | 95.1 | 1.010 | | 0.015 | 5.008 | 94.9 | 1.008
Non-probability sample ($S_{B}$) | | | | |
Unweighted | 30.014 | 30.066 | 0.0 | 1.008 | | 30.197 | 30.207 | 0.0 | 1.019 | | 30.252 | 30.257 | 0.0 | 1.033
Fully weighted | 0.032 | 2.083 | 95.3 | 1.005 | | 0.018 | 0.816 | 95.1 | 1.007 | | 0.012 | 0.490 | 95.1 | 1.007
Non-robust adjustment | | | | |
Model specification: True | | | | |
PAPW | -2.067 | 4.582 | 94.9 | 1.108 | | -2.145 | 4.120 | 92.8 | 1.107 | | -2.170 | 4.072 | 92.2 | 1.107
IPSW | -2.618 | 7.717 | 94.5 | 0.958 | | -2.673 | 7.334 | 91.1 | 0.923 | | -2.692 | 7.308 | 90.6 | 0.979
PM | 0.296 | 4.515 | 95.2 | 0.994 | | 0.121 | 4.134 | 94.8 | 0.986 | | 0.065 | 4.095 | 94.6 | 0.985
Model specification: False | | | | |
PAPW | 24.493 | 24.616 | 0.0 | 1.126 | | 24.592 | 24.651 | 0.0 | 1.153 | | 24.621 | 24.673 | 0.0 | 1.161
IPSW | 26.675 | 26.804 | 0.0 | 0.992 | | 26.871 | 26.949 | 0.0 | 0.970 | | 26.930 | 27.002 | 0.0 | 0.964
PM | 26.509 | 26.645 | 0.0 | 1.003 | | 26.717 | 26.800 | 0.0 | 0.989 | | 26.779 | 26.856 | 0.0 | 0.986
Doubly robust adjustment | | | | |
Model specification: QR–True, PM–True | | | | |
AIPW–PAPW | 0.180 | 4.633 | 95.1 | 0.994 | | 0.080 | 4.162 | 94.8 | 0.986 | | 0.047 | 4.104 | 94.7 | 0.985
AIPW–IPSW | 0.052 | 4.582 | 95.2 | 0.995 | | 0.035 | 4.152 | 94.6 | 0.987 | | 0.028 | 4.101 | 94.5 | 0.985
Model specification: QR–True, PM–False | | | | |
AIPW–PAPW | 0.262 | 4.719 | 95.1 | 1.000 | | 0.163 | 4.250 | 94.9 | 0.997 | | 0.130 | 4.191 | 94.7 | 0.996
AIPW–IPSW | 0.188 | 4.652 | 95.4 | 1.002 | | 0.171 | 4.225 | 95.0 | 0.998 | | 0.164 | 4.174 | 94.8 | 0.998
Model specification: QR–False, PM–True | | | | |
AIPW–PAPW | 1.376 | 8.569 | 94.5 | 0.953 | | 0.503 | 4.829 | 95.1 | 0.995 | | 0.231 | 4.215 | 95.2 | 0.992
AIPW–IPSW | 0.864 | 7.648 | 94.7 | 0.948 | | 0.322 | 4.643 | 95.3 | 0.990 | | 0.152 | 4.182 | 95.0 | 0.989
Model specification: QR–False, PM–False | | | | |
AIPW–PAPW | 26.696 | 26.835 | 0.0 | 0.998 | | 26.779 | 26.862 | 0.0 | 0.987 | | 26.803 | 26.880 | 0.0 | 0.985
AIPW–IPSW | 26.520 | 26.655 | 0.0 | 1.001 | | 26.718 | 26.801 | 0.0 | 0.989 | | 26.777 | 26.854 | 0.0 | 0.986
* •
PAPW: propensity adjusted probability weighting; IPSW: Inverse propensity
score weighting; QR: quasi-randomization; PM: prediction model; AIPW:
augmented inverse propensity weighting. Fully weighted implies the weighted
means if the true sampling weights are known.
Table 7: Comparing the performance of the bias adjustment methods and
associated variance estimator under the two-step Bayesian approach in the
first simulation study for $n_{R}=100$ and $n_{B}=100$
| $\rho=0.3$ | | $\rho=0.5$ | | $\rho=0.8$
---|---|---|---|---|---
Method | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE
Probability sample ($S_{R}$) | | | | |
Unweighted | 8.528 | 19.248 | 92.6 | 1.009 | | 8.647 | 11.065 | 77.4 | 1.018 | | 8.682 | 9.719 | 50.9 | 1.020
Fully weighted | -0.029 | 20.276 | 94.7 | 1.001 | | 0.006 | 8.035 | 95.1 | 1.010 | | 0.015 | 5.008 | 95.0 | 1.008
Non-probability sample ($S_{B}$) | | | | |
Unweighted | 32.238 | 36.815 | 56.3 | 1.003 | | 32.303 | 33.3 | 1.620 | 1.003 | | 32.322 | 32.865 | 0.0 | 0.996
Fully weighted | 0.494 | 21.398 | 94.3 | 0.981 | | 0.329 | 8.400 | 94.0 | 0.981 | | 0.276 | 5.057 | 93.6 | 0.979
Non-robust adjustment | | | | |
Model specification: True | | | | |
PAPW | -0.589 | 24.195 | 97.4 | 1.117 | | -0.755 | 9.795 | 99.0 | 1.326 | | -0.801 | 6.178 | 99.8 | 1.653
IPSW | 1.169 | 22.844 | 97.2 | 1.118 | | 1.016 | 9.163 | 98.6 | 1.345 | | 0.976 | 5.719 | 99.8 | 1.701
PM | 0.709 | 21.489 | 95.280 | 1.029 | | 0.272 | 8.545 | 95.580 | 1.020 | | 0.140 | 5.245 | 94.640 | 1.000
Model specification: False | | | | |
PAPW | 28.008 | 34.396 | 76.3 | 1.091 | | 28.027 | 29.477 | 19.6 | 1.116 | | 28.022 | 28.840 | 3.4 | 1.141
IPSW | 29.763 | 35.215 | 70.2 | 1.083 | | 29.827 | 31.032 | 10.0 | 1.106 | | 29.841 | 30.519 | 0.8 | 1.125
PM | 28.588 | 34.226 | 70.9 | 1.055 | | 28.658 | 29.895 | 10.6 | 1.050 | | 28.691 | 29.380 | 0.7 | 1.042
Doubly robust adjustment | | | | |
Model specification: QR–True, PM–True | | | | |
AIPW–PAPW | 0.320 | 23.802 | 97.8 | 1.154 | | 0.125 | 9.306 | 99.1 | 1.357 | | 0.067 | 5.493 | 99.9 | 1.731
AIPW–IPSW | 0.249 | 22.778 | 97.4 | 1.142 | | 0.099 | 8.976 | 99.1 | 1.339 | | 0.056 | 5.387 | 99.9 | 1.688
Model specification: QR–True, PM–False | | | | |
AIPW–PAPW | 0.304 | 23.858 | 97.7 | 1.156 | | 0.126 | 9.386 | 99.2 | 1.389 | | 0.065 | 5.661 | 99.9 | 1.781
AIPW–IPSW | 0.226 | 22.814 | 97.5 | 1.146 | | 0.096 | 9.041 | 99.1 | 1.376 | | 0.052 | 5.543 | 99.8 | 1.747
Model specification: QR–False, PM–True | | | | |
AIPW–PAPW | 0.881 | 22.077 | 96.8 | 1.126 | | 0.333 | 8.742 | 98.6 | 1.281 | | 0.153 | 5.303 | 99.8 | 1.558
AIPW–IPSW | 0.762 | 21.483 | 96.6 | 1.103 | | 0.290 | 8.554 | 98.4 | 1.251 | | 0.135 | 5.246 | 99.7 | 1.509
Model specification: QR–False, PM–False | | | | |
AIPW–PAPW | 28.659 | 34.756 | 77.6 | 1.135 | | 28.660 | 30.013 | 17.4 | 1.142 | | 28.649 | 29.399 | 2.1 | 1.151
AIPW–IPSW | 28.575 | 34.237 | 74.7 | 1.115 | | 28.656 | 29.903 | 13.7 | 1.124 | | 28.674 | 29.368 | 1.1 | 1.132
* •
PAPW: propensity adjusted probability weighting; IPSW: Inverse propensity
score weighting; QR: quasi-randomization; PM: prediction model; AIPW:
augmented inverse propensity weighting. Fully weighted implies the weighted
means if the true sampling weights are known.
Table 8: Comparing the performance of the bias adjustment methods and
associated variance estimator under the two-step Bayesian approach in the
first simulation study for $n_{R}=100$ and $n_{B}=10,000$
| $\rho=0.2$ | | $\rho=0.5$ | | $\rho=0.8$
---|---|---|---|---|---
Method | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE
Probability sample ($S_{R}$) | | | | |
Unweighted | 8.528 | 19.248 | 92.6 | 1.009 | | 8.647 | 11.065 | 77.4 | 1.018 | | 8.682 | 9.719 | 50.9 | 1.020
Fully weighted | -0.029 | 20.276 | 94.7 | 1.001 | | 0.006 | 8.035 | 95.1 | 1.010 | | 0.015 | 5.008 | 94.9 | 1.008
Non-probability sample ($S_{B}$) | | | | |
Unweighted | 30.014 | 30.066 | 0.0 | 1.008 | | 30.197 | 30.207 | 0.0 | 1.019 | | 30.252 | 30.257 | 0.0 | 1.033
Fully weighted | 0.032 | 2.083 | 95.3 | 1.005 | | 0.018 | 0.816 | 95.1 | 1.007 | | 0.012 | 0.490 | 95.1 | 1.007
Non-robust adjustment | | | | |
Model specification: True | | | | |
PAPW | -2.032 | 4.578 | 93.0 | 1.031 | | -2.106 | 4.111 | 90.9 | 1.032 | | -2.138 | 4.062 | 90.2 | 1.035
IPSW | -0.015 | 4.094 | 95.2 | 1.011 | | -0.036 | 3.605 | 95.1 | 1.004 | | -0.042 | 3.547 | 95.2 | 1.002
PM | 0.297 | 4.517 | 81.6 | 0.679 | | 0.120 | 4.136 | 75.3 | 0.579 | | 0.065 | 4.094 | 73.1 | 0.563
Model specification: False | | | | |
PAPW | 24.524 | 24.647 | 0.0 | 1.042 | | 24.618 | 24.678 | 0.0 | 1.062 | | 24.650 | 24.702 | 0.0 | 1.069
IPSW | 26.406 | 26.518 | 0.0 | 0.982 | | 26.602 | 26.662 | 0.0 | 0.940 | | 26.663 | 26.717 | 0.0 | 0.931
PM | 26.512 | 26.648 | 0.0 | 0.851 | | 26.715 | 26.798 | 0.0 | 0.728 | | 26.779 | 26.856 | 0.0 | 0.700
Doubly robust adjustment | | | | |
Model specification: QR–True, PM–True | | | | |
AIPW–PAPW | 0.178 | 4.635 | 84.7 | 0.721 | | 0.079 | 4.160 | 77.3 | 0.607 | | 0.047 | 4.103 | 75.7 | 0.588
AIPW–IPSW | 0.058 | 4.574 | 83.6 | 0.705 | | 0.036 | 4.149 | 77.0 | 0.601 | | 0.028 | 4.100 | 75.5 | 0.585
Model specification: QR–True, PM–False | | | | |
AIPW–PAPW | 0.151 | 4.273 | 94.5 | 0.971 | | 0.050 | 3.734 | 93.7 | 0.943 | | 0.025 | 3.660 | 93.9 | 0.941
AIPW–IPSW | 0.106 | 4.245 | 94.4 | 0.966 | | 0.083 | 3.767 | 93.7 | 0.945 | | 0.075 | 3.712 | 93.7 | 0.941
Model specification: QR–False, PM–True | | | | |
AIPW–PAPW | 0.496 | 4.566 | 83.7 | 0.709 | | 0.193 | 4.142 | 76.8 | 0.599 | | 0.096 | 4.096 | 75.2 | 0.581
AIPW–IPSW | 0.312 | 4.514 | 82.7 | 0.695 | | 0.127 | 4.133 | 76.7 | 0.595 | | 0.068 | 4.094 | 74.9 | 0.579
Model specification: QR–False, PM–False | | | | |
AIPW–PAPW | 26.709 | 26.849 | 0.0 | 0.893 | | 26.786 | 26.869 | 0.0 | 0.751 | | 26.808 | 26.885 | 0.0 | 0.717
AIPW–IPSW | 26.521 | 26.656 | 0.0 | 0.870 | | 26.718 | 26.800 | 0.0 | 0.740 | | 26.777 | 26.854 | 0.0 | 0.709
* •
PAPW: propensity adjusted probability weighting; IPSW: Inverse propensity
score weighting; QR: quasi-randomization; PM: prediction model; AIPW:
augmented inverse propensity weighting. Fully weighted implies the weighted
means if the true sampling weights are known.
#### 8.3.2 Simulation study III
Table 9 and Table 10 exhibits the numerical results associated with the plots
of Simulation III.
Table 9: Comparing the performance of the bias adjustment methods in the third
simulation study for $\rho=0.8$
| Continuous outcome ($Y_{c}$) | | Binary outcome ($Y_{b}$)
---|---|---|---
Model-method | rBias | rMSE | crCI | rSE | | rBias | rMSE | crCI | rSE
Probability sample ($S_{R}$)
Unweighted | 48.705 | 52.900 | 30.7 | 1.015 | | 11.304 | 16.881 | 88.2 | 1.022
Fully weighted | 0.080 | 15.400 | 96.2 | 1.025 | | 0.131 | 13.858 | 95.3 | 1.026
Non-probability sample ($S_{B}$)
Unweighted | 68.309 | 70.415 | 0.0 | 0.156 | | 21.763 | 22.794 | 0.5 | 0.181
Fully weighted | 0.137 | 7.581 | 95.7 | 1.023 | | 0.074 | 6.512 | 94.7 | 0.99
Non-robust adjustment
Model specification: True
GLM–PAPW | 0.448 | 10.994 | 94.7 | 1.036 | | 0.072 | 7.266 | 96.2 | 1.034
GLM–PAPP | 0.204 | 11.192 | 93.9 | 1.037 | | 0.080 | 7.188 | 96.2 | 1.031
GLM–IPSW | 0.839 | 18.138 | 96.0 | 1.275 | | -0.838 | 9.458 | 97.3 | 1.116
GLM–PM | 0.110 | 11.157 | 94.2 | 1.015 | | 0.055 | 7.401 | 94.4 | 0.995
Model specification: False
GLM–PAPW | 7.337 | 13.187 | 94.2 | 1.033 | | 5.115 | 8.502 | 90.4 | 1.02
GLM–PAPP | 6.762 | 13.546 | 94.2 | 1.032 | | 5.046 | 8.471 | 88.5 | 1.035
GLM–IPSW | 22.513 | 35.600 | 99.5 | 1.155 | | 9.390 | 13.098 | 89.5 | 1.099
BART–PAPW | 2.272 | 10.468 | 100.0 | 2.487 | | 1.633 | 7.391 | 99.5 | 1.436
BART–PAPP | 3.990 | 11.469 | 100.0 | 2.299 | | 0.313 | 7.243 | 99.3 | 1.342
GLM–PM | 37.071 | 42.523 | 53.0 | 1.006 | | 12.600 | 14.932 | 63.6 | 1.003
BART–PM | 0.286 | 11.581 | 92.7 | 0.996 | | 0.594 | 9.102 | 81.2 | 0.688
Doubly robust adjustment
Model specification: QR–True, PM–True
GLM–AIPW–PAPW | 0.307 | 11.186 | 95.0 | 1.019 | | 0.083 | 7.459 | 94.2 | 1.001
GLM–AIPW–PAPP | 0.295 | 11.187 | 94.5 | 1.019 | | 0.089 | 7.439 | 94.0 | 0.998
GLM–AIPW–IPSW | 0.372 | 11.193 | 95.8 | 1.037 | | 0.120 | 7.478 | 94.4 | 1.003
Model specification: QR–True, PM–False
GLM–AIPW–PAPW | 0.381 | 12.774 | 95.5 | 1.035 | | 0.047 | 7.487 | 96.2 | 1.04
GLM–AIPW–PAPP | 0.424 | 11.934 | 94.7 | 1.041 | | 0.155 | 7.275 | 96.0 | 1.032
GLM–AIPW–IPSW | -8.223 | 17.625 | 92.3 | 1.181 | | -2.842 | 9.086 | 95.2 | 1.047
Model specification: QR–False, PM–True
GLM–AIPW–PAPW | 0.127 | 11.177 | 94.7 | 1.020 | | 0.067 | 7.451 | 94.0 | 0.997
GLM–AIPW–PAPP | 0.122 | 11.172 | 94.7 | 1.019 | | 0.054 | 7.438 | 94.2 | 0.997
GLM–AIPW–IPSW | 0.117 | 11.167 | 94.8 | 1.020 | | 0.055 | 7.433 | 94.0 | 0.998
Model specification: QR–False, PM–False
GLM–AIPW–PAPW | 50.327 | 53.922 | 21.9 | 1.002 | | 15.651 | 17.552 | 50.3 | 1.007
GLM–AIPW–PAPP | 50.793 | 54.215 | 20.9 | 1.002 | | 15.834 | 17.605 | 47.8 | 1.003
GLM–AIPW–IPSW | 47.867 | 51.106 | 27.9 | 1.163 | | 15.112 | 16.884 | 53.8 | 1.051
BART–AIPW–PAPW | 0.276 | 11.593 | 94.4 | 1.035 | | 0.701 | 9.186 | 81.9 | 0.698
BART–AIPW–PAPP | 0.261 | 11.591 | 94.2 | 1.031 | | 0.682 | 9.155 | 81.7 | 0.697
* •
PAPW: propensity adjusted probability weighting; PAPP: propensity adjusted
probability prediction; IPSW: inverse propensity score weighting; QR: quasi-
randomization; PM: prediction model; AIPW: augmented inverse propensity
weighting.
Table 10: Comparing the values of rBias and rMSE for different methods across
different values of $\rho$.
| Continuous outcome ($Y_{c}$) | | Binary outcome ($Y_{b}$)
---|---|---|---
| Non-robust | Doubly robust | | Non-robust | Doubly robust
$\rho$ | PAPW | PAPP | IPSW | PM | PAPW | PAPP | IPSW | | PAPW | PAPP | IPSW | PM | PAPW | PAPP | IPSW
| rBias
0.0 | 0.545 | 0.870 | -6.791 | 0.259 | 0.447 | 0.443 | 0.511 | | -0.186 | 0.128 | -3.248 | -0.016 | -0.006 | -0.005 | 0.004
0.1 | -0.179 | 0.022 | -4.772 | -0.224 | -0.215 | -0.218 | -0.235 | | -0.537 | -0.220 | -2.345 | -0.399 | -0.464 | -0.475 | -0.489
0.2 | -0.195 | 0.493 | -4.406 | 0.048 | -0.161 | -0.160 | -0.250 | | -0.329 | 0.095 | -2.071 | -0.144 | -0.159 | -0.149 | -0.172
0.3 | 0.288 | 0.668 | -3.832 | 0.420 | 0.459 | 0.449 | 0.435 | | -0.244 | 0.069 | -1.980 | 0.160 | 0.108 | 0.114 | 0.111
0.4 | 0.212 | 0.361 | -2.332 | 0.425 | 0.237 | 0.254 | 0.233 | | -0.097 | 0.150 | -1.372 | -0.031 | 0.031 | 0.048 | 0.085
0.5 | 0.248 | 0.173 | -2.257 | 0.175 | 0.227 | 0.216 | 0.239 | | -0.169 | -0.067 | -1.817 | 0.117 | 0.068 | 0.065 | -0.010
0.6 | 0.286 | 0.516 | -1.010 | 0.411 | 0.404 | 0.404 | 0.420 | | 0.128 | 0.231 | -1.146 | 0.133 | 0.162 | 0.156 | 0.198
0.7 | 0.072 | -0.052 | -0.217 | -0.027 | 0.084 | 0.084 | 0.100 | | -0.019 | 0.062 | -1.029 | 0.021 | -0.001 | 0.009 | -0.001
0.8 | 0.538 | 0.527 | 0.652 | 0.623 | 0.509 | 0.517 | 0.498 | | 0.012 | 0.175 | -1.017 | 0.015 | 0.053 | 0.048 | 0.078
0.9 | 0.343 | 0.424 | 2.090 | 0.469 | 0.496 | 0.495 | 0.466 | | 0.079 | 0.122 | -0.932 | 0.155 | 0.158 | 0.144 | 0.164
| rMSE
0.0 | 13.916 | 14.724 | 20.949 | 14.934 | 14.964 | 14.934 | 14.994 | | 8.702 | 8.702 | 11.773 | 9.214 | 9.214 | 9.214 | 9.214
0.1 | 12.979 | 13.640 | 19.378 | 13.760 | 13.790 | 13.790 | 13.850 | | 8.443 | 8.188 | 10.746 | 8.699 | 8.699 | 8.699 | 8.699
0.2 | 12.297 | 13.220 | 18.877 | 13.161 | 13.220 | 13.220 | 13.250 | | 8.237 | 8.237 | 10.811 | 8.494 | 8.494 | 8.494 | 8.494
0.3 | 12.187 | 13.049 | 18.132 | 13.019 | 13.049 | 13.019 | 13.049 | | 7.859 | 7.859 | 9.887 | 8.113 | 8.113 | 8.113 | 8.366
0.4 | 11.823 | 12.392 | 17.330 | 12.452 | 12.511 | 12.511 | 12.511 | | 7.910 | 7.654 | 9.951 | 8.165 | 8.165 | 8.165 | 8.165
0.5 | 11.745 | 12.101 | 17.647 | 12.190 | 12.190 | 12.190 | 12.190 | | 7.576 | 7.576 | 9.849 | 7.829 | 7.829 | 7.829 | 7.829
0.6 | 11.691 | 12.145 | 18.748 | 12.085 | 12.115 | 12.115 | 12.115 | | 7.673 | 7.673 | 9.975 | 7.929 | 7.929 | 7.929 | 7.929
0.7 | 10.927 | 11.166 | 15.567 | 11.196 | 11.226 | 11.226 | 11.226 | | 7.332 | 7.080 | 8.849 | 7.332 | 7.585 | 7.332 | 7.585
0.8 | 10.769 | 10.918 | 17.888 | 10.918 | 10.918 | 10.918 | 10.948 | | 7.359 | 7.105 | 9.389 | 7.359 | 7.612 | 7.359 | 7.612
0.9 | 10.951 | 11.042 | 22.084 | 10.981 | 11.012 | 11.012 | 11.012 | | 6.935 | 6.935 | 8.990 | 7.192 | 7.192 | 7.192 | 7.192
* •
NOTE: GLM has been used for prediction, and the underlying models in each
method have been correctly specified.
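The doubly robust (AIPW) rows above combine a quasi-randomization weighting model with a prediction model. As a hedged sketch of the generic AIPW form for a non-probability sample, not the paper's exact estimator, with hypothetical argument names:

```python
def aipw_mean(y_b, w_b, yhat_b, yhat_r, w_r):
    """Doubly robust (AIPW-type) estimate of a population mean.

    y_b, w_b, yhat_b: outcomes, estimated propensity weights, and model
        predictions for the non-probability sample.
    yhat_r, w_r: model predictions and sampling weights for the
        reference probability sample.
    """
    # The weighted residual term corrects the prediction model using the
    # weighting model, and vice versa: if either model is correctly
    # specified, the combined estimator remains consistent.
    resid = sum(w * (y - yh) for w, y, yh in zip(w_b, y_b, yhat_b)) / sum(w_b)
    pred = sum(w * yh for w, yh in zip(w_r, yhat_r)) / sum(w_r)
    return resid + pred
```

This structure explains the "QR–False, PM–False" rows: only when both component models fail does the bias become large.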
### 8.4 Supplemental results of the application to the SHRP2 data
Table 11: Mean daily trip duration (min) and associated 95% CIs by different
covariates across DR adjustment methods
| | Unweighted | GLM-AIPW-PAPP | GLM-AIPW-PMLE | BART-AIPW-PAPP
---|---|---|---|---|---
Covariate | n | (95%CI) | (95%CI) | (95%CI) | (95%CI)
Total | 837,061 | 68.94 (67.955,69.925) | 71.603 (66.565,76.641) | 70.058 (67.902,72.214) | 69.582 (66.117,73.047)
Gender | | | | |
Male | 407,312 | 70.289 (68.809,71.77) | 72.411 (63.583,81.238) | 70.97 (67.971,73.97) | 70.61 (66.131,75.088)
Female | 429,749 | 67.662 (66.355,68.968) | 70.79 (67.353,74.226) | 69.107 (66.683,71.531) | 68.522 (64.432,72.611)
Age group | | | | |
16-24 | 311,106 | 70 (68.514,71.485) | 72.889 (69.435,76.342) | 72.318 (69.636,74.999) | 71.937 (66.79,77.085)
25-34 | 117,758 | 73.889 (71.099,76.679) | 72.669 (67.713,77.625) | 71.562 (67.688,75.435) | 72.511 (66.132,78.889)
35-44 | 61,908 | 75.4 (71.304,79.496) | 71.215 (64.668,77.762) | 75.72 (69.882,81.559) | 71.919 (63.874,79.964)
45-54 | 77,903 | 74.666 (71.734,77.599) | 71.803 (61.432,82.175) | 70.437 (66.525,74.349) | 73.237 (67.727,78.747)
55-64 | 63,891 | 70.823 (67.027,74.62) | 66.99 (60.85,73.13) | 67.054 (62.252,71.855) | 67.518 (60.885,74.152)
65-74 | 88,762 | 67.122 (64.13,70.113) | 84.262 (52.155,116.369) | 64.475 (59.374,69.576) | 64.286 (59.779,68.794)
75+ | 115,733 | 54.103 (51.965,56.241) | 49.358 (46.14,52.576) | 51.359 (47.896,54.822) | 51.442 (46.894,55.99)
Race | | | | |
White | 745,596 | 67.845 (66.833,68.858) | 71.687 (65.246,78.128) | 68.183 (65.836,70.529) | 67.861 (64.386,71.336)
Black | 43,109 | 86.294 (80.759,91.83) | 74.42 (66.374,82.466) | 81.587 (75.046,88.127) | 79.728 (68.019,91.437)
Asian | 26,265 | 68.723 (63.684,73.761) | 66.792 (58.089,75.495) | 66.777 (60.785,72.769) | 65.958 (53.748,78.169)
Other | 22,091 | 72.284 (66.895,77.674) | 79.723 (69.505,89.942) | 75.924 (69.729,82.118) | 75.314 (63.089,87.539)
Ethnicity | | | | |
Non-Hisp | 808,098 | 68.699 (67.697,69.701) | 71.999 (66.066,77.933) | 69.337 (67.166,71.507) | 68.555 (64.866,72.244)
Hispanic | 28,963 | 75.681 (70.488,80.873) | 72.068 (63.599,80.536) | 74.545 (69.145,79.944) | 75.449 (66.582,84.316)
Education | | | | |
<High school | 50,943 | 61.108 (58.134,64.083) | 67.647 (58.129,77.165) | 67.32 (61.588,73.051) | 68.246 (56.385,80.108)
HS completed | 78,045 | 69.025 (65.979,72.071) | 86.848 (58.569,115.128) | 69.752 (64.868,74.637) | 70.399 (61.472,79.325)
College | 237,206 | 68.997 (67.153,70.841) | 70.312 (64.184,76.44) | 70.712 (66.638,74.785) | 70.896 (65.722,76.069)
Graduate | 326,860 | 70.859 (69.188,72.529) | 71.314 (68.333,74.296) | 71.313 (69.073,73.554) | 69.984 (65.783,74.186)
Post-grad | 144,007 | 67.218 (64.984,69.451) | 64.26 (60.143,68.377) | 68.713 (64.864,72.562) | 66.496 (62.395,70.597)
HH income | | | | |
0-49 | 332,586 | 68.105 (66.553,69.658) | 75.441 (62.136,88.745) | 69.441 (65.872,73.009) | 69.049 (65.13,72.968)
50-99 | 309,387 | 69.755 (68.089,71.421) | 70.608 (63.639,77.578) | 70.359 (67.276,73.442) | 69.836 (66.552,73.12)
100-149 | 132,757 | 69.487 (66.999,71.975) | 68.685 (63.743,73.626) | 70.276 (66.911,73.642) | 69.55 (60.835,78.265)
150+ | 62,331 | 68.187 (65.109,71.264) | 69.772 (66.389,73.154) | 69.9 (66.158,73.643) | 70.352 (64.31,76.394)
HH size | | | | |
1 | 177,140 | 66.779 (64.452,69.106) | 80.258 (54.973,105.544) | 66.501 (62.817,70.186) | 67.607 (63.28,71.934)
2 | 286,106 | 67.608 (65.994,69.223) | 65.532 (61.489,69.574) | 66.781 (63.894,69.667) | 67.282 (63.371,71.193)
3 | 152,684 | 71.233 (68.836,73.631) | 72.398 (66.412,78.384) | 74.177 (69.507,78.848) | 71.127 (67.04,75.214)
4 | 143,442 | 70.161 (67.969,72.352) | 69.794 (65.273,74.315) | 69.944 (66.494,73.395) | 70.839 (65.417,76.261)
5+ | 77,689 | 72.012 (68.913,75.11) | 74.664 (64.68,84.648) | 76.567 (71.368,81.765) | 73.321 (68.479,78.163)
Urban size | | | | |
<50k | 34,987 | 67.602 (62.771,72.432) | 79.22 (59.18,99.26) | 65.75 (59.749,71.751) | 66.109 (57.069,75.149)
50-200k | 119,970 | 62.608 (60.337,64.879) | 65.759 (61.25,70.268) | 65.151 (62.164,68.138) | 67.211 (61.409,73.014)
200-500k | 44,578 | 68.576 (63.52,73.632) | 87.248 (73.018,101.477) | 68.884 (63.664,74.104) | 69.636 (61.746,77.526)
500-1000k | 276,629 | 68.017 (66.289,69.745) | 66.524 (61.364,71.685) | 68.123 (65.323,70.923) | 70.338 (64.971,75.704)
1000k+ | 360,897 | 71.928 (70.451,73.404) | 70.91 (67.926,73.894) | 73.567 (71.441,75.693) | 72.962 (68.493,77.43)
Vehicle make | | | | |
American | 290,228 | 66.507 (64.905,68.108) | 71.826 (59.917,83.734) | 68.256 (65.302,71.21) | 69.04 (63.968,74.113)
Asian | 528,810 | 70.265 (69,71.53) | 72.7 (69.653,75.747) | 71.602 (69.436,73.768) | 70.211 (66.415,74.007)
European | 18,023 | 69.261 (63.898,74.624) | 66.191 (59.703,72.679) | 71.403 (65.95,76.855) | 69.836 (60.506,79.166)
Vehicle type | | | | |
Car | 610,245 | 68.686 (67.539,69.834) | 73.853 (65.931,81.776) | 69.706 (67.4,72.012) | 70.236 (66.799,73.673)
Van | 27,866 | 69.2 (64.432,73.968) | 68.389 (61.064,75.714) | 73.096 (66.388,79.804) | 64.905 (54.298,75.512)
SUV | 158,202 | 68.993 (66.851,71.134) | 68.424 (62.145,74.704) | 69.291 (66.318,72.263) | 69.469 (64.351,74.587)
Pickup | 40,748 | 72.361 (66.713,78.008) | 69.934 (59.062,80.805) | 74.495 (64.949,84.04) | 70.256 (58.87,81.643)
Fuel type | | | | |
Gas/D | 761,292 | 68.637 (67.61,69.664) | 71.334 (66.221,76.446) | 69.895 (67.66,72.131) | 69.443 (65.954,72.931)
Other | 75,769 | 71.986 (68.598,75.373) | 82.674 (72.987,92.361) | 77.039 (72.37,81.708) | 75.696 (67.822,83.571)
Weekend | | | | |
Weekday | 712,411 | 67.671 (66.701,68.64) | 70.362 (65.734,74.991) | 68.72 (66.616,70.824) | 68.348 (64.806,71.89)
Weekend | 124,650 | 76.196 (75.001,77.392) | 78.646 (71.128,86.164) | 77.649 (75.099,80.199) | 76.577 (73.08,80.074)
Table 12: Mean daily trip distance (mile) and associated 95% CIs by different
covariates across DR adjustment methods
| | Unweighted | GLM-AIPW-PAPP | GLM-AIPW-PMLE | BART-AIPW-PAPP
---|---|---|---|---|---
Covariate | n | (95%CI) | (95%CI) | (95%CI) | (95%CI)
Total | 837,061 | 32.418 (31.823,33.013) | 33.76 (31.806,35.715) | 33.39 (32.22,34.56) | 32.926 (31.185,34.667)
Gender | | | | |
Male | 407,312 | 33.852 (32.963,34.741) | 35.51 (32.247,38.773) | 34.782 (33.146,36.418) | 34.358 (32.254,36.461)
Female | 429,749 | 31.06 (30.27,31.849) | 31.901 (29.932,33.871) | 31.947 (30.601,33.293) | 31.428 (28.965,33.89)
Age group | | | | |
16-24 | 311,106 | 32.828 (32,33.657) | 34.904 (33.085,36.723) | 34.864 (33.358,36.369) | 34.491 (32.804,36.178)
25-34 | 117,758 | 36.246 (34.603,37.888) | 36.546 (33.364,39.728) | 34.841 (32.742,36.94) | 35.324 (32.837,37.81)
35-44 | 61,908 | 35.958 (33.318,38.597) | 31.774 (28.585,34.962) | 35.067 (32.321,37.813) | 33.9 (30.173,37.627)
45-54 | 77,903 | 36.103 (34.231,37.976) | 35.721 (31.46,39.981) | 34.301 (31.843,36.759) | 34.578 (30.108,39.049)
55-64 | 63,891 | 35.037 (32.735,37.34) | 33.159 (29.352,36.966) | 33.138 (30.097,36.178) | 32.135 (29.896,34.375)
65-74 | 88,762 | 31.548 (29.552,33.544) | 33.875 (24.893,42.856) | 29.948 (26.934,32.962) | 29.028 (26.593,31.464)
75+ | 115,733 | 22.269 (21.044,23.493) | 20.184 (18.108,22.259) | 21.037 (19.304,22.77) | 21.421 (18.979,23.863)
Race | | | | |
White | 745,596 | 32.189 (31.554,32.824) | 33.173 (30.862,35.484) | 33.426 (32.11,34.743) | 32.85 (31.118,34.582)
Black | 43,109 | 37.275 (34.577,39.973) | 35.696 (31.364,40.029) | 34.187 (31.146,37.228) | 34.131 (28.834,39.427)
Asian | 26,265 | 30.638 (28.095,33.181) | 29.601 (25.527,33.675) | 28.311 (25.238,31.383) | 28.289 (22.62,33.958)
Other | 22,091 | 32.789 (29.699,35.879) | 37.919 (30.975,44.862) | 35.518 (30.553,40.484) | 35.058 (27.876,42.239)
Ethnicity | | | | |
Non-Hisp | 808,098 | 32.328 (31.723,32.933) | 32.852 (30.823,34.881) | 33.217 (32.085,34.349) | 32.362 (30.845,33.879)
Hispanic | 28,963 | 34.935 (31.713,38.158) | 36.882 (32.766,40.997) | 35.126 (31.226,39.027) | 36.344 (29.859,42.828)
Education | | | | |
<High school | 50,943 | 25.659 (23.905,27.412) | 28.248 (24.014,32.482) | 28.977 (25.986,31.967) | 30.351 (22.902,37.8)
HS completed | 78,045 | 32.04 (30.14,33.939) | 36.853 (29.19,44.515) | 33.596 (30.989,36.203) | 33.497 (30.336,36.658)
College | 237,206 | 31.848 (30.812,32.885) | 33.666 (30.509,36.824) | 33.038 (31.177,34.899) | 32.956 (30.622,35.29)
Graduate | 326,860 | 33.879 (32.85,34.908) | 35.56 (32.73,38.389) | 35.093 (33.48,36.706) | 34.091 (31.938,36.244)
Post-grad | 144,007 | 32.637 (31.235,34.039) | 29.935 (27.6,32.269) | 32.801 (30.877,34.725) | 31.414 (29.555,33.273)
HH income | | | | |
0-49 | 332,586 | 31.185 (30.273,32.097) | 32.845 (28.788,36.901) | 31.979 (30.333,33.626) | 31.538 (28.932,34.144)
50-99 | 309,387 | 33.024 (32.004,34.043) | 35.811 (32.755,38.866) | 33.907 (32.201,35.613) | 33.744 (32.011,35.476)
100-149 | 132,757 | 33.765 (32.235,35.295) | 33.354 (30.228,36.479) | 34.433 (32.472,36.394) | 33.532 (30.736,36.329)
150+ | 62,331 | 33.124 (31.231,35.017) | 31.693 (29.605,33.781) | 33.795 (31.147,36.442) | 33.428 (29.886,36.97)
HH size | | | | |
1 | 177,140 | 30.588 (29.231,31.945) | 34.322 (27.864,40.779) | 31.133 (28.899,33.366) | 30.768 (28.385,33.152)
2 | 286,106 | 32.415 (31.372,33.458) | 31.701 (29.362,34.039) | 32.742 (30.989,34.494) | 32.301 (30.95,33.651)
3 | 152,684 | 33.786 (32.452,35.12) | 34.54 (30.838,38.242) | 34.806 (32.647,36.966) | 34.421 (31.549,37.293)
4 | 143,442 | 32.95 (31.524,34.376) | 32.048 (29.257,34.84) | 33.04 (31.09,34.99) | 32.898 (30.012,35.784)
5+ | 77,689 | 32.934 (31.314,34.554) | 36.522 (32.397,40.647) | 36.383 (33.774,38.993) | 34.731 (32.103,37.359)
Urban size | | | | |
<50k | 34,987 | 36.147 (32.885,39.408) | 34.93 (28.343,41.518) | 34.077 (30.506,37.648) | 32.945 (30.263,35.628)
50-200k | 119,970 | 31.028 (29.388,32.668) | 32.379 (29.47,35.288) | 32.032 (29.692,34.372) | 32.636 (30.058,35.215)
200-500k | 44,578 | 36.416 (33.616,39.216) | 44.143 (36.054,52.231) | 35.585 (33.228,37.942) | 36.461 (32.151,40.771)
500-1000k | 276,629 | 31.973 (30.952,32.994) | 32.453 (27.672,37.234) | 31.005 (29.331,32.679) | 32.781 (28.04,37.522)
1000k+ | 360,897 | 32.366 (31.497,33.236) | 32.475 (30.577,34.373) | 32.371 (31.117,33.626) | 32.302 (30.134,34.471)
Vehicle make | | | | |
American | 290,228 | 30.948 (29.995,31.9) | 35.285 (31.317,39.254) | 32.784 (31.234,34.335) | 32.956 (30.909,35.004)
Asian | 528,810 | 33.249 (32.476,34.022) | 33.339 (31.554,35.124) | 34.022 (32.623,35.42) | 33.124 (31.156,35.092)
European | 18,023 | 31.719 (28.896,34.542) | 29.905 (26.439,33.37) | 32.979 (30.006,35.951) | 32.05 (27.154,36.946)
Vehicle type | | | | |
Car | 610,245 | 32.126 (31.428,32.823) | 33.619 (31.253,35.985) | 32.916 (31.691,34.141) | 32.518 (30.426,34.611)
Van | 27,866 | 31.212 (28.225,34.199) | 29.109 (24.54,33.679) | 32.682 (28.052,37.312) | 31.109 (24.519,37.699)
SUV | 158,202 | 32.848 (31.558,34.137) | 33.857 (30.466,37.249) | 33.15 (31.374,34.926) | 32.559 (29.986,35.133)
Pickup | 40,748 | 35.958 (32.813,39.103) | 35.086 (30.375,39.798) | 36.557 (32.917,40.197) | 36.352 (31.036,41.668)
Fuel type | | | | |
Gas/D | 761,292 | 32.121 (31.502,32.739) | 33.524 (31.522,35.526) | 33.18 (32.006,34.354) | 32.813 (31.021,34.605)
Other | 75,769 | 35.409 (33.302,37.515) | 43.864 (37.513,50.214) | 39.259 (35.942,42.576) | 37.388 (34.271,40.505)
Weekend | | | | |
Weekday | 712,411 | 31.895 (31.307,32.482) | 33.181 (31.327,35.036) | 32.817 (31.666,33.968) | 32.41 (30.68,34.14)
Weekend | 124,650 | 35.41 (34.689,36.132) | 37.037 (34.351,39.724) | 36.64 (35.09,38.19) | 35.853 (33.806,37.899)
Table 13: Mean daily average speed (MPH) of trips and associated 95% CIs by
different covariates across DR adjustment methods
| | Unweighted | GLM-AIPW-PAPP | GLM-AIPW-PMLE | BART-AIPW-PAPP
---|---|---|---|---|---
Covariate | n | (95%CI) | (95%CI) | (95%CI) | (95%CI)
Total | 837,061 | 25.03 (24.8,25.261) | 25.775 (24.277,27.274) | 25.562 (25.063,26.06) | 25.39 (24.309,26.471)
Gender | | | | |
Male | 407,312 | 25.474 (25.137,25.811) | 26.964 (24.667,29.261) | 26.199 (25.578,26.82) | 25.999 (24.877,27.12)
Female | 429,749 | 24.61 (24.297,24.923) | 24.463 (23.239,25.688) | 24.906 (24.32,25.492) | 24.76 (23.59,25.93)
Age group | | | | |
16-24 | 311,106 | 25.239 (24.893,25.586) | 25.876 (24.878,26.873) | 25.921 (25.287,26.555) | 25.831 (24.724,26.938)
25-34 | 117,758 | 26.951 (26.318,27.583) | 27.242 (26.062,28.422) | 26.571 (25.723,27.419) | 27.07 (26.055,28.085)
35-44 | 61,908 | 26.065 (25.183,26.947) | 25.045 (23.196,26.894) | 25.696 (24.675,26.716) | 25.516 (23.327,27.705)
45-54 | 77,903 | 26.527 (25.727,27.328) | 27.731 (22.197,33.265) | 26.412 (25.419,27.405) | 25.665 (23.242,28.088)
55-64 | 63,891 | 26.22 (25.471,26.969) | 26.075 (24.48,27.67) | 26.275 (25.152,27.398) | 25.525 (24.078,26.971)
65-74 | 88,762 | 23.956 (23.339,24.572) | 22.601 (20.487,24.716) | 23.618 (22.683,24.554) | 23.216 (22.2,24.232)
75+ | 115,733 | 21.12 (20.559,21.681) | 20.545 (19.251,21.838) | 21.317 (20.539,22.094) | 20.728 (19.29,22.167)
Race | | | | |
White | 745,596 | 25.109 (24.863,25.354) | 25.225 (24.255,26.196) | 26.086 (25.565,26.608) | 25.656 (24.95,26.363)
Black | 43,109 | 24.227 (23.188,25.265) | 27.223 (22.295,32.151) | 23.198 (22.202,24.194) | 23.433 (20.47,26.397)
Asian | 26,265 | 24.35 (23.278,25.423) | 25.203 (23.46,26.947) | 23.038 (21.674,24.402) | 24.508 (21.082,27.934)
Other | 22,091 | 24.76 (23.473,26.046) | 25.049 (23.008,27.091) | 24.68 (23.128,26.232) | 25.956 (22.168,29.744)
Ethnicity | | | | |
Non-Hisp | 808,098 | 25.039 (24.805,25.274) | 25.023 (24.156,25.89) | 25.674 (25.197,26.152) | 25.246 (24.437,26.055)
Hispanic | 28,963 | 24.777 (23.526,26.028) | 27.808 (24.042,31.574) | 25.233 (23.873,26.593) | 26.284 (22.881,29.688)
Education | | | | |
<High school | 50,943 | 23.16 (22.31,24.01) | 23.142 (21.364,24.919) | 23.825 (22.322,25.328) | 24.791 (21.638,27.943)
HS completed | 78,045 | 25.192 (24.438,25.945) | 24.851 (22.707,26.995) | 26.025 (25.019,27.031) | 25.165 (22.538,27.793)
College | 237,206 | 24.506 (24.09,24.921) | 25.888 (22.586,29.189) | 24.988 (24.184,25.793) | 24.7 (23.717,25.683)
Graduate | 326,860 | 25.426 (25.043,25.81) | 26.769 (25.764,27.775) | 26.363 (25.795,26.93) | 26.368 (25.077,27.659)
Post-grad | 144,007 | 25.569 (25.027,26.11) | 25.031 (23.955,26.107) | 25.446 (24.775,26.117) | 25.555 (24.399,26.711)
HH income | | | | |
0-49 | 332,586 | 24.333 (23.956,24.709) | 23.975 (22.766,25.183) | 24.659 (23.851,25.467) | 24.401 (22.918,25.884)
50-99 | 309,387 | 25.25 (24.878,25.623) | 27.547 (24.479,30.615) | 25.971 (25.316,26.627) | 25.744 (24.828,26.66)
100-149 | 132,757 | 25.963 (25.411,26.515) | 25.892 (24.611,27.174) | 26.281 (25.569,26.994) | 25.981 (24.275,27.687)
150+ | 62,331 | 22.937 (22.564,23.31) | 25.096 (21.747,28.444) | 23.419 (22.632,24.206) | 23.288 (22.044,24.533)
HH size | | | | |
1 | 177,140 | 23.837 (23.337,24.337) | 23.986 (22.176,25.797) | 24.355 (23.538,25.173) | 24.024 (22.597,25.452)
2 | 286,106 | 25.155 (24.746,25.563) | 25.606 (24.679,26.532) | 25.778 (25.128,26.428) | 25.55 (24.735,26.365)
3 | 152,684 | 25.77 (25.223,26.316) | 26.042 (24.785,27.3) | 25.645 (24.886,26.403) | 26.035 (24.807,27.262)
4 | 143,442 | 25.423 (24.895,25.952) | 25.025 (23.669,26.381) | 25.766 (25.015,26.517) | 25.622 (24.254,26.991)
5+ | 77,689 | 25.112 (24.48,25.745) | 27.472 (22.365,32.58) | 26.14 (24.981,27.3) | 25.155 (23.23,27.08)
Urban size | | | | |
<50k | 34,987 | 28.437 (27.061,29.813) | 25.595 (22.943,28.247) | 27.951 (26.354,29.548) | 27.097 (25.536,28.659)
50-200k | 119,970 | 24.455 (23.814,25.096) | 25.081 (23.851,26.31) | 24.784 (23.965,25.603) | 25.031 (24.155,25.907)
200-500k | 44,578 | 27.64 (26.634,28.645) | 27.073 (23.546,30.601) | 27.024 (26.049,27.999) | 26.931 (24.582,29.279)
500-1000k | 276,629 | 25.758 (25.355,26.162) | 26.189 (23.701,28.678) | 25.05 (24.289,25.812) | 25.513 (24.121,26.904)
1000k+ | 360,897 | 20.451 (18.941,21.961) | 23.557 (21.359,25.755) | 21.9 (20.41,23.389) | 23.488 (20.639,26.336)
Vehicle make | | | | |
American | 290,228 | 24.799 (24.402,25.195) | 27.212 (24.331,30.094) | 25.766 (25.047,26.485) | 25.353 (24.079,26.627)
Asian | 528,810 | 25.174 (24.884,25.464) | 24.771 (23.69,25.853) | 25.509 (25.004,26.015) | 25.464 (24.358,26.569)
European | 18,023 | 24.534 (23.553,25.514) | 24.974 (23.083,26.866) | 24.291 (22.942,25.64) | 25.307 (23.285,27.329)
Vehicle type | | | | |
Car | 610,245 | 24.893 (24.622,25.164) | 25.115 (24.327,25.904) | 25.313 (24.794,25.832) | 25.357 (24.467,26.247)
Van | 27,866 | 23.562 (22.539,24.586) | 23.064 (21.378,24.75) | 23.484 (22.376,24.591) | 23.527 (20.832,26.223)
SUV | 158,202 | 25.398 (24.87,25.925) | 26.495 (22.622,30.369) | 25.635 (24.887,26.384) | 25.008 (23.511,26.505)
Pickup | 40,748 | 26.43 (25.484,27.375) | 26.245 (23.43,29.059) | 26.628 (25.453,27.804) | 25.788 (23.842,27.733)
Fuel type | | | | |
Gas/D | 761,292 | 24.955 (24.711,25.199) | 25.727 (24.205,27.249) | 25.507 (25.005,26.01) | 25.361 (24.277,26.446)
Other | 75,769 | 25.784 (25.091,26.476) | 27.804 (26.114,29.493) | 27.052 (25.991,28.113) | 26.676 (24.691,28.66)
Weekend | | | | |
Weekday | 712,411 | 25.077 (24.847,25.308) | 25.744 (24.351,27.138) | 25.598 (25.1,26.096) | 25.425 (24.36,26.49)
Weekend | 124,650 | 24.76 (24.518,25.003) | 25.939 (23.843,28.034) | 25.356 (24.811,25.901) | 25.194 (23.987,26.401)
Table 14: Mean start time of the first daytrips and associated 95% CIs by
different covariates across DR adjustment methods
| | Unweighted | GLM-AIPW-PAPP | GLM-AIPW-PMLE | BART-AIPW-PAPP
---|---|---|---|---|---
Covariate | n | (95%CI) | (95%CI) | (95%CI) | (95%CI)
Total | 837,061 | 13.811 (13.763,13.859) | 13.564 (13.391,13.737) | 13.553 (13.427,13.68) | 13.5 (13.364,13.636)
Gender | | | | |
Male | 407,312 | 13.824 (13.751,13.898) | 13.556 (13.304,13.807) | 13.572 (13.418,13.725) | 13.486 (13.304,13.667)
Female | 429,749 | 13.799 (13.736,13.861) | 13.578 (13.362,13.793) | 13.533 (13.386,13.681) | 13.515 (13.389,13.64)
Age group | | | | |
16-24 | 311,106 | 14.411 (14.354,14.468) | 14.396 (14.254,14.537) | 14.351 (14.218,14.485) | 14.266 (14.13,14.402)
25-34 | 117,758 | 13.999 (13.891,14.106) | 13.864 (13.562,14.165) | 13.923 (13.734,14.112) | 13.843 (13.65,14.037)
35-44 | 61,908 | 13.57 (13.399,13.741) | 13.694 (13.178,14.211) | 13.467 (13.164,13.77) | 13.448 (13.117,13.78)
45-54 | 77,903 | 13.489 (13.368,13.61) | 13.414 (13.028,13.8) | 13.389 (13.187,13.592) | 13.335 (13.043,13.626)
55-64 | 63,891 | 13.344 (13.185,13.503) | 13.59 (13.248,13.933) | 13.279 (13.004,13.555) | 13.27 (12.902,13.637)
65-74 | 88,762 | 13.244 (13.091,13.397) | 12.529 (12.082,12.975) | 13.131 (12.874,13.388) | 13.026 (12.663,13.388)
75+ | 115,733 | 13.047 (12.9,13.193) | 13.412 (13.089,13.736) | 13.051 (12.815,13.287) | 13.216 (12.904,13.528)
Race | | | | |
White | 745,596 | 13.77 (13.719,13.821) | 13.522 (13.331,13.712) | 13.506 (13.372,13.64) | 13.46 (13.318,13.602)
Black | 43,109 | 14.065 (13.887,14.242) | 13.632 (13.387,13.876) | 13.695 (13.49,13.901) | 13.662 (13.21,14.114)
Asian | 26,265 | 14.351 (14.135,14.567) | 14.281 (13.405,15.157) | 14.051 (13.522,14.58) | 13.841 (13.51,14.172)
Other | 22,091 | 14.055 (13.786,14.324) | 13.349 (12.959,13.739) | 13.585 (13.254,13.917) | 13.492 (12.695,14.289)
Ethnicity | | | | |
Non-Hisp | 808,098 | 13.803 (13.754,13.851) | 13.563 (13.359,13.767) | 13.547 (13.419,13.675) | 13.49 (13.362,13.617)
Hispanic | 28,963 | 14.049 (13.81,14.288) | 13.602 (13.303,13.901) | 13.544 (13.278,13.81) | 13.558 (13.084,14.031)
Education | | | | |
<High school | 50,943 | 14.3 (14.177,14.424) | 13.601 (13.416,13.786) | 13.604 (13.394,13.814) | 13.513 (13.106,13.921)
HS completed | 78,045 | 13.895 (13.73,14.06) | 13.425 (12.957,13.893) | 13.509 (13.236,13.781) | 13.478 (13.072,13.884)
College | 237,206 | 14.003 (13.913,14.092) | 13.641 (13.36,13.922) | 13.611 (13.433,13.788) | 13.532 (13.339,13.725)
Graduate | 326,860 | 13.695 (13.617,13.774) | 13.68 (13.378,13.982) | 13.65 (13.5,13.799) | 13.558 (13.42,13.696)
Post-grad | 144,007 | 13.539 (13.431,13.648) | 13.307 (12.878,13.737) | 13.369 (13.197,13.542) | 13.399 (13.122,13.677)
HH income | | | | |
0-49 | 332,586 | 13.891 (13.809,13.973) | 13.62 (13.357,13.882) | 13.641 (13.465,13.817) | 13.612 (13.339,13.886)
50-99 | 309,387 | 13.745 (13.669,13.822) | 13.573 (13.341,13.805) | 13.55 (13.395,13.705) | 13.469 (13.323,13.615)
100-149 | 132,757 | 13.777 (13.671,13.882) | 13.383 (13.062,13.704) | 13.424 (13.243,13.605) | 13.415 (13.247,13.584)
150+ | 62,331 | 13.531 (13.437,13.625) | 13.201 (12.949,13.454) | 13.457 (13.277,13.636) | 13.342 (13.14,13.544)
HH size | | | | |
1 | 177,140 | 13.649 (13.533,13.765) | 13.337 (12.98,13.694) | 13.518 (13.349,13.688) | 13.489 (13.276,13.703)
2 | 286,106 | 13.6 (13.513,13.687) | 13.462 (13.164,13.761) | 13.469 (13.275,13.663) | 13.383 (13.215,13.551)
3 | 152,684 | 14.02 (13.918,14.122) | 13.718 (13.351,14.085) | 13.58 (13.395,13.765) | 13.5 (13.336,13.664)
4 | 143,442 | 14.033 (13.941,14.125) | 13.514 (13.189,13.838) | 13.64 (13.491,13.788) | 13.542 (13.362,13.723)
5+ | 77,689 | 14.138 (14.017,14.259) | 13.819 (13.433,14.206) | 13.581 (13.321,13.841) | 13.73 (13.474,13.985)
Urban size | | | | |
<50k | 34,987 | 13.52 (13.266,13.773) | 13.383 (12.795,13.972) | 13.337 (12.845,13.829) | 13.328 (13.031,13.625)
50-200k | 119,970 | 13.928 (13.794,14.062) | 13.747 (13.489,14.005) | 13.842 (13.672,14.011) | 13.698 (13.522,13.873)
200-500k | 44,578 | 13.918 (13.705,14.13) | 13.518 (12.933,14.103) | 13.817 (13.571,14.064) | 13.66 (13.276,14.045)
500-1000k | 276,629 | 13.759 (13.678,13.84) | 13.395 (13.213,13.576) | 13.564 (13.45,13.679) | 13.503 (13.305,13.7)
1000k+ | 360,897 | 14.286 (13.859,14.713) | 13.451 (12.893,14.01) | 13.654 (13.317,13.992) | 13.385 (12.683,14.087)
Vehicle make | | | | |
American | 290,228 | 13.8 (13.714,13.886) | 13.339 (13.067,13.61) | 13.455 (13.258,13.651) | 13.426 (13.221,13.631)
Asian | 528,810 | 13.799 (13.741,13.858) | 13.646 (13.485,13.808) | 13.627 (13.517,13.736) | 13.561 (13.43,13.692)
European | 18,023 | 14.337 (14.094,14.58) | 14.19 (13.492,14.888) | 13.684 (13.334,14.034) | 13.552 (13.171,13.933)
Vehicle type | | | | |
Car | 610,245 | 13.849 (13.792,13.906) | 13.649 (13.445,13.854) | 13.658 (13.53,13.787) | 13.611 (13.476,13.747)
Van | 27,866 | 13.588 (13.336,13.84) | 13.414 (13.064,13.764) | 13.472 (13.172,13.772) | 13.514 (13.168,13.861)
SUV | 158,202 | 13.773 (13.675,13.871) | 13.623 (13.279,13.966) | 13.497 (13.336,13.657) | 13.528 (13.264,13.793)
Pickup | 40,748 | 13.714 (13.536,13.893) | 13.725 (13.41,14.04) | 13.544 (13.305,13.783) | 13.502 (13.079,13.925)
Fuel type | | | | |
Gas/D | 761,292 | 13.841 (13.791,13.891) | 13.565 (13.389,13.741) | 13.556 (13.428,13.685) | 13.498 (13.36,13.636)
Other | 75,769 | 13.51 (13.338,13.683) | 13.525 (13.217,13.833) | 13.463 (13.254,13.672) | 13.56 (13.264,13.856)
Weekend | | | | |
Weekday | 712,411 | 13.824 (13.775,13.872) | 13.576 (13.408,13.744) | 13.558 (13.431,13.684) | 13.502 (13.364,13.64)
Weekend | 124,650 | 13.74 (13.685,13.794) | 13.496 (13.22,13.773) | 13.531 (13.397,13.664) | 13.486 (13.334,13.637)
Table 15: Mean daily maximum speed (MPH) and associated 95% CIs by different
covariates across DR adjustment methods
| | Unweighted | GLM-AIPW-PAPP | GLM-AIPW-PMLE | BART-AIPW-PAPP
---|---|---|---|---|---
Covariate | n | (95%CI) | (95%CI) | (95%CI) | (95%CI)
Total | 837,061 | 59.808 (59.467,60.149) | 61.547 (59.717,63.377) | 60.447 (59.833,61.062) | 59.947 (58.623,61.27)
Gender | | | | |
Male | 407,312 | 60.187 (59.706,60.669) | 62.687 (59.483,65.89) | 60.847 (60.045,61.649) | 60.677 (58.953,62.402)
Female | 429,749 | 59.448 (58.969,59.928) | 60.28 (59.195,61.366) | 60.023 (59.205,60.84) | 59.193 (58.034,60.353)
Age group | | | | |
16-24 | 311,106 | 61.475 (60.97,61.98) | 62.484 (61.235,63.733) | 62.212 (61.442,62.981) | 62.078 (60.788,63.368)
25-34 | 117,758 | 63.41 (62.567,64.253) | 62.907 (61.082,64.733) | 62.359 (61.171,63.546) | 62.373 (60.598,64.148)
35-44 | 61,908 | 62.617 (61.358,63.878) | 62.761 (60.791,64.731) | 63.986 (62.235,65.737) | 62.039 (59.911,64.166)
45-54 | 77,903 | 60.872 (59.853,61.89) | 64.943 (58.295,71.591) | 60.117 (58.688,61.545) | 59.738 (57.435,62.041)
55-64 | 63,891 | 59.478 (58.406,60.55) | 59.666 (57.225,62.107) | 59.611 (57.872,61.35) | 58.797 (56.332,61.262)
65-74 | 88,762 | 55.91 (55.068,56.753) | 56.915 (55.026,58.805) | 55.693 (54.645,56.742) | 55.5 (53.256,57.744)
75+ | 115,733 | 52.613 (51.8,53.426) | 52.602 (50.493,54.71) | 52.88 (51.871,53.889) | 52.262 (50.92,53.604)
Race | | | | |
White | 745,596 | 59.449 (59.092,59.806) | 60.312 (59.478,61.145) | 60.254 (59.581,60.929) | 59.586 (58.217,60.954)
Black | 43,109 | 64.628 (62.991,66.264) | 68.229 (60.656,75.802) | 62.22 (60.3,64.14) | 62.152 (58.85,65.453)
Asian | 26,265 | 61.08 (59.323,62.836) | 60.573 (58.414,62.733) | 59.081 (56.873,61.288) | 59.464 (55.818,63.11)
Other | 22,091 | 61.008 (59.099,62.917) | 60.986 (58.585,63.387) | 59.968 (58.45,61.485) | 60.988 (55.744,66.231)
Ethnicity | | | | |
Non-Hisp | 808,098 | 59.718 (59.37,60.066) | 60.221 (59.395,61.048) | 60.232 (59.595,60.869) | 59.528 (58.485,60.571)
Hispanic | 28,963 | 62.31 (60.707,63.914) | 66.308 (60.971,71.645) | 61.976 (60.418,63.533) | 62.437 (58.69,66.184)
Education | | | | |
<High school | 50,943 | 58.103 (56.954,59.251) | 59.199 (57.013,61.386) | 58.949 (57.325,60.572) | 60.162 (57.18,63.145)
HS completed | 78,045 | 59.865 (58.83,60.901) | 61.116 (59.657,62.576) | 60.812 (59.457,62.166) | 60.382 (57.673,63.092)
College | 237,206 | 59.874 (59.25,60.497) | 62.623 (58.482,66.764) | 59.982 (58.9,61.065) | 59.491 (57.762,61.219)
Graduate | 326,860 | 60.185 (59.62,60.751) | 61.474 (60.049,62.898) | 61.405 (60.379,62.43) | 60.718 (59.63,61.807)
Post-grad | 144,007 | 59.414 (58.566,60.262) | 59.809 (58.019,61.598) | 60.113 (58.801,61.426) | 59.221 (57.561,60.881)
HH income | | | | |
0-49 | 332,586 | 59.127 (58.575,59.68) | 59.271 (57.864,60.679) | 59.263 (58.317,60.208) | 58.757 (57.339,60.175)
50-99 | 309,387 | 60.031 (59.461,60.6) | 64.22 (60.289,68.151) | 60.94 (60.026,61.853) | 60.508 (58.903,62.113)
100-149 | 132,757 | 60.663 (59.901,61.425) | 60.409 (58.784,62.035) | 61.507 (60.192,62.822) | 60.611 (59.169,62.052)
150+ | 62,331 | 60.513 (59.305,61.721) | 61.386 (59.529,63.244) | 60.484 (58.966,62.004) | 60.549 (58.951,62.147)
HH size | | | | |
1 | 177,140 | 57.902 (57.123,58.682) | 58.973 (57.29,60.655) | 58.243 (57.246,59.24) | 57.958 (56.619,59.298)
2 | 286,106 | 59.02 (58.421,59.619) | 60.033 (58.585,61.48) | 59.371 (58.389,60.353) | 59.371 (57.722,61.019)
3 | 152,684 | 61.35 (60.569,62.132) | 60.399 (58.548,62.25) | 61.373 (59.998,62.748) | 60.541 (58.797,62.285)
4 | 143,442 | 61.214 (60.476,61.951) | 61.592 (60.031,63.152) | 61.488 (60.377,62.599) | 61.068 (59.616,62.52)
5+ | 77,689 | 61.428 (60.57,62.286) | 66.669 (60.605,72.733) | 62.759 (60.967,64.551) | 60.989 (58.922,63.057)
Urban size | | | | |
<50k | 34,987 | 60.422 (58.928,61.917) | 61.118 (58.986,63.251) | 59.964 (58.006,61.921) | 59.665 (57.238,62.093)
50-200k | 119,970 | 56.12 (55.22,57.021) | 57.621 (55.544,59.698) | 57.162 (56.071,58.254) | 57.313 (55.844,58.782)
200-500k | 44,578 | 62.847 (61.27,64.423) | 64.07 (60.75,67.39) | 62.507 (60.978,64.037) | 62.711 (60.338,65.086)
500-1000k | 276,629 | 60.193 (59.62,60.766) | 61.073 (57.067,65.078) | 59.886 (58.928,60.846) | 60.296 (58.82,61.773)
1000k+ | 360,897 | 60.303 (59.8,60.807) | 62.075 (59.415,64.736) | 60.647 (59.977,61.317) | 60.287 (59.099,61.475)
Vehicle make | | | | |
American | 290,228 | 59.36 (58.773,59.946) | 63.24 (59.63,66.85) | 60.218 (59.339,61.096) | 59.877 (58.058,61.695)
Asian | 528,810 | 60.013 (59.588,60.438) | 60.796 (59.891,61.701) | 60.751 (60.095,61.407) | 60.203 (59.036,61.371)
European | 18,023 | 61.016 (59.074,62.958) | 58.842 (56.171,61.514) | 58.984 (56.944,61.025) | 59.049 (55.175,62.922)
Vehicle type | | | | |
Car | 610,245 | 59.744 (59.338,60.149) | 60.92 (60.083,61.757) | 60.44 (59.765,61.115) | 60.119 (58.854,61.383)
Van | 27,866 | 57.722 (56.154,59.289) | 58.36 (55.812,60.907) | 58.674 (56.088,61.26) | 58.263 (55.586,60.94)
SUV | 158,202 | 60.093 (59.36,60.825) | 62.613 (57.431,67.795) | 60.444 (59.297,61.59) | 59.511 (57.678,61.345)
Pickup | 40,748 | 61.092 (59.557,62.627) | 62.97 (61.201,64.739) | 61.359 (59.346,63.371) | 60.707 (57.51,63.905)
Fuel type | | | | |
Gas/D | 761,292 | 59.878 (59.516,60.239) | 61.537 (59.67,63.404) | 60.473 (59.842,61.105) | 59.937 (58.565,61.309)
Other | 75,769 | 59.105 (58.131,60.079) | 61.685 (58.505,64.865) | 61.082 (59.975,62.189) | 60.645 (58.056,63.234)
Weekend | | | | |
Weekday | 712,411 | 59.684 (59.344,60.023) | 61.322 (59.663,62.982) | 60.295 (59.687,60.902) | 59.801 (58.483,61.119)
Weekend | 124,650 | 60.517 (60.151,60.883) | 62.809 (60.044,65.575) | 61.312 (60.601,62.023) | 60.768 (59.333,62.204)
Table 16: Mean daily frequency of brakes per driven mile and associated 95%
CIs by different covariates across DR adjustment methods
| | Unweighted | GLM-AIPW-PAPP | GLM-AIPW-PMLE | BART–AIPW-PAPP
---|---|---|---|---|---
Covariate | n | (95%CI) | (95%CI) | (95%CI) | (95%CI)
Total | 837,061 | 4.499 (4.387,4.611) | 4.356 (3.887,4.825) | 4.644 (4.345,4.942) | 4.426 (3.984,4.867)
Gender | | | | |
Male | 407,312 | 4.415 (4.247,4.583) | 3.835 (3.139,4.531) | 4.456 (4.129,4.784) | 4.345 (3.789,4.902)
Female | 429,749 | 4.579 (4.43,4.728) | 4.957 (4.518,5.396) | 4.825 (4.471,5.179) | 4.508 (4.04,4.977)
Age group | | | | |
16-24 | 311,106 | 4.283 (4.114,4.451) | 4.368 (3.735,5.001) | 4.417 (4.068,4.766) | 4.173 (3.777,4.57)
25-34 | 117,758 | 4.085 (3.819,4.351) | 4.201 (3.59,4.812) | 4.609 (4.084,5.133) | 3.984 (3.288,4.681)
35-44 | 61,908 | 4.422 (4.052,4.792) | 4.575 (3.779,5.371) | 4.643 (4.05,5.236) | 4.574 (3.905,5.243)
45-54 | 77,903 | 4.14 (3.846,4.435) | 3.764 (2.22,5.308) | 4.174 (3.719,4.628) | 3.997 (3.359,4.636)
55-64 | 63,891 | 4.565 (4.136,4.995) | 5.003 (4.102,5.904) | 4.589 (4.042,5.136) | 4.62 (3.365,5.875)
65-74 | 88,762 | 4.801 (4.41,5.193) | 3.522 (1.749,5.296) | 4.799 (4.195,5.402) | 4.596 (3.372,5.821)
75+ | 115,733 | 5.518 (5.147,5.888) | 6.902 (5.723,8.081) | 5.857 (5.25,6.464) | 6.44 (5.548,7.332)
Race | | | | |
White | 745,596 | 4.521 (4.401,4.641) | 4.518 (4.018,5.018) | 4.583 (4.272,4.895) | 4.402 (4,4.805)
Black | 43,109 | 4.366 (3.885,4.848) | 3.807 (2.117,5.496) | 5.074 (4.442,5.706) | 5.053 (4.117,5.988)
Asian | 26,265 | 4.256 (3.675,4.837) | 4.349 (3.393,5.304) | 5.222 (4.061,6.383) | 4.483 (3.405,5.561)
Other | 22,091 | 4.319 (3.732,4.907) | 4.197 (3.011,5.382) | 4.438 (3.574,5.302) | 3.705 (2.167,5.243)
Ethnicity | | | | |
Non-Hisp | 808,098 | 4.488 (4.375,4.601) | 4.56 (4.117,5.003) | 4.597 (4.323,4.871) | 4.442 (4.051,4.832)
Hispanic | 28,963 | 4.813 (4.037,5.588) | 3.842 (2.609,5.075) | 4.84 (3.782,5.898) | 4.3 (2.899,5.701)
Education | | | | |
<High school | 50,943 | 4.942 (4.522,5.362) | 4.779 (3.58,5.979) | 5.209 (4.526,5.893) | 5.204 (3.91,6.498)
HS completed | 78,045 | 4.163 (3.8,4.526) | 3.495 (1.656,5.335) | 4.241 (3.662,4.819) | 4.179 (3.275,5.084)
College | 237,206 | 4.561 (4.347,4.775) | 4.466 (3.609,5.323) | 4.713 (4.269,5.158) | 4.604 (4.062,5.145)
Graduate | 326,860 | 4.347 (4.165,4.528) | 4.174 (3.765,4.583) | 4.564 (4.201,4.927) | 4.17 (3.59,4.749)
Post-grad | 144,007 | 4.769 (4.509,5.029) | 5.031 (4.514,5.548) | 4.788 (4.387,5.19) | 4.541 (3.857,5.224)
HH income | | | | |
0-49 | 332,586 | 4.542 (4.353,4.731) | 4.639 (3.669,5.608) | 4.728 (4.325,5.131) | 4.752 (4.157,5.347)
50-99 | 309,387 | 4.386 (4.207,4.566) | 3.75 (2.847,4.653) | 4.482 (4.098,4.866) | 4.196 (3.682,4.71)
100-149 | 132,757 | 4.579 (4.32,4.838) | 4.8 (4.301,5.3) | 4.743 (4.228,5.258) | 4.564 (3.933,5.194)
150+ | 62,331 | 4.66 (4.265,5.056) | 4.662 (3.942,5.383) | 4.721 (4.213,5.229) | 4 (3.02,4.98)
HH size | | | | |
1 | 177,140 | 4.644 (4.38,4.908) | 4.13 (2.528,5.733) | 4.852 (4.438,5.265) | 4.658 (4.058,5.258)
2 | 286,106 | 4.674 (4.46,4.888) | 4.855 (4.259,5.451) | 4.782 (4.302,5.262) | 4.515 (3.92,5.111)
3 | 152,684 | 4.328 (4.097,4.558) | 4.425 (3.937,4.913) | 4.593 (4.123,5.064) | 4.321 (3.785,4.857)
4 | 143,442 | 4.294 (4.064,4.524) | 4.356 (3.762,4.95) | 4.475 (4.051,4.9) | 4.201 (3.71,4.692)
5+ | 77,689 | 4.241 (3.949,4.533) | 3.851 (2.313,5.388) | 4.396 (3.893,4.899) | 4.459 (4.017,4.901)
Urban size | | | | |
<50k | 34,987 | 4.051 (3.567,4.535) | 4.313 (2.858,5.768) | 4.293 (3.57,5.015) | 4.24 (3.746,4.734)
50-200k | 119,970 | 4.789 (4.49,5.089) | 4.921 (4.454,5.389) | 4.761 (4.265,5.257) | 4.696 (4.02,5.372)
200-500k | 44,578 | 4.241 (3.825,4.657) | 4.177 (3.147,5.206) | 4.567 (4.075,5.059) | 4.231 (3.049,5.413)
500-1000k | 276,629 | 3.969 (3.793,4.145) | 3.977 (3.342,4.612) | 4.21 (3.867,4.552) | 4.171 (3.549,4.793)
1000k+ | 360,897 | 4.884 (4.703,5.066) | 4.539 (3.985,5.093) | 4.948 (4.599,5.297) | 4.626 (4.025,5.227)
Vehicle make | | | | |
American | 290,228 | 4.762 (4.548,4.975) | 4.239 (3.381,5.097) | 5.007 (4.549,5.466) | 4.914 (4.347,5.482)
Asian | 528,810 | 4.392 (4.261,4.524) | 4.569 (4.244,4.894) | 4.451 (4.181,4.721) | 4.206 (3.78,4.632)
European | 18,023 | 3.401 (2.946,3.856) | 3.161 (2.359,3.963) | 3.664 (3.006,4.322) | 2.898 (0.958,4.837)
Vehicle type | | | | |
Car | 610,245 | 4.504 (4.368,4.641) | 4.281 (3.65,4.913) | 4.564 (4.225,4.903) | 4.248 (3.785,4.711)
Van | 27,866 | 4.435 (4.064,4.806) | 5.298 (4.287,6.31) | 4.855 (4.41,5.3) | 4.569 (3.345,5.792)
SUV | 158,202 | 4.351 (4.148,4.555) | 4.222 (3.078,5.365) | 4.56 (4.215,4.904) | 4.381 (3.73,5.032)
Pickup | 40,748 | 5.043 (4.391,5.696) | 4.611 (3.881,5.34) | 5.022 (4.255,5.789) | 5.226 (4.061,6.391)
Fuel type | | | | |
Gas/D | 761,292 | 4.435 (4.319,4.551) | 4.345 (3.865,4.825) | 4.649 (4.34,4.959) | 4.426 (3.992,4.859)
Other | 75,769 | 5.145 (4.737,5.553) | 4.718 (3.688,5.747) | 4.934 (4.308,5.561) | 4.388 (3.347,5.429)
Weekend | | | | |
Weekday | 712,411 | 4.492 (4.379,4.605) | 4.365 (3.91,4.82) | 4.639 (4.339,4.938) | 4.413 (3.963,4.864)
Weekend | 124,650 | 4.54 (4.427,4.654) | 4.305 (3.738,4.871) | 4.675 (4.374,4.975) | 4.497 (4.073,4.921)
Table 17: Mean daily percentage of stop time and associated 95% CIs by
different covariates across DR adjustment methods
| | Unweighted | GLM-AIPW-PAPP | GLM-AIPW-PMLE | BART–AIPW-PAPP
---|---|---|---|---|---
Covariate | n | (95%CI) | (95%CI) | (95%CI) | (95%CI)
Total | 837,061 | 25.518 (25.202,25.834) | 25.515 (24.043,26.987) | 24.949 (24.217,25.681) | 0.251 (0.242,0.26)
Gender | | | | |
Male | 407,312 | 24.618 (24.157,25.079) | 24.048 (21.863,26.234) | 24.06 (23.158,24.961) | 0.242 (0.231,0.252)
Female | 429,749 | 26.371 (25.945,26.797) | 27.107 (25.226,28.988) | 25.873 (24.968,26.779) | 0.261 (0.25,0.271)
Age group | | | | |
16-24 | 311,106 | 26.713 (26.221,27.204) | 26.551 (25.177,27.925) | 25.913 (25.109,26.716) | 0.258 (0.245,0.271)
25-34 | 117,758 | 25.199 (24.385,26.014) | 24.178 (22.653,25.704) | 25.013 (23.946,26.08) | 0.247 (0.232,0.262)
35-44 | 61,908 | 25.575 (24.528,26.621) | 27.6 (23.466,31.735) | 26.828 (25.017,28.639) | 0.265 (0.247,0.284)
45-54 | 77,903 | 23.406 (22.257,24.555) | 24.908 (20.144,29.672) | 22.926 (21.57,24.281) | 0.239 (0.22,0.258)
55-64 | 63,891 | 22.879 (21.906,23.852) | 22.949 (21.035,24.862) | 23.408 (21.72,25.095) | 0.235 (0.211,0.259)
65-74 | 88,762 | 24.425 (23.448,25.402) | 27.099 (23.644,30.554) | 24.739 (23.395,26.084) | 0.262 (0.246,0.279)
75+ | 115,733 | 26.315 (25.367,27.264) | 27.682 (25.755,29.609) | 26.185 (25.039,27.332) | 0.28 (0.264,0.296)
Race | | | | |
White | 745,596 | 25.216 (24.882,25.55) | 25.693 (24.172,27.213) | 24.071 (23.292,24.849) | 0.245 (0.237,0.253)
Black | 43,109 | 29.711 (28.421,31.001) | 27.666 (25.29,30.042) | 29.697 (28.071,31.324) | 0.294 (0.267,0.32)
Asian | 26,265 | 25.989 (24.582,27.396) | 23.631 (20.325,26.937) | 26.098 (24.013,28.184) | 0.252 (0.216,0.289)
Other | 22,091 | 26.955 (24.952,28.958) | 26.442 (23.616,29.269) | 26.484 (23.923,29.045) | 0.252 (0.21,0.294)
Ethnicity | | | | |
Non-Hisp | 808,098 | 25.438 (25.118,25.758) | 25.622 (24.061,27.183) | 24.532 (23.79,25.274) | 0.251 (0.242,0.26)
Hispanic | 28,963 | 27.746 (25.946,29.546) | 26.345 (24.033,28.657) | 27.102 (25.369,28.836) | 0.251 (0.224,0.277)
Education | | | | |
<High school | 50,943 | 27.86 (26.587,29.134) | 28.96 (26.302,31.618) | 27.223 (25.326,29.119) | 0.262 (0.233,0.292)
HS completed | 78,045 | 26.136 (25.022,27.249) | 28.155 (25.07,31.241) | 25.534 (24.179,26.889) | 0.263 (0.24,0.287)
College | 237,206 | 26.881 (26.288,27.474) | 27.043 (24.085,30.001) | 26.128 (24.88,27.377) | 0.264 (0.253,0.276)
Graduate | 326,860 | 24.96 (24.472,25.448) | 22.543 (21.102,23.983) | 23.625 (22.803,24.447) | 0.236 (0.218,0.254)
Post-grad | 144,007 | 23.375 (22.656,24.094) | 24.105 (22.558,25.651) | 23.845 (22.697,24.992) | 0.237 (0.219,0.254)
HH income | | | | |
0-49 | 332,586 | 26.578 (26.059,27.098) | 27.376 (25.379,29.374) | 26.485 (25.414,27.557) | 0.265 (0.254,0.276)
50-99 | 309,387 | 25.205 (24.717,25.694) | 24.47 (21.599,27.341) | 24.537 (23.514,25.559) | 0.246 (0.232,0.26)
100-149 | 132,757 | 24.218 (23.441,24.996) | 24.214 (22.662,25.766) | 23.707 (22.615,24.799) | 0.241 (0.226,0.256)
150+ | 62,331 | 24.176 (22.962,25.39) | 25.297 (20.664,29.931) | 23.96 (22.384,25.536) | 0.243 (0.223,0.264)
HH size | | | | |
1 | 177,140 | 26.133 (25.442,26.823) | 26.166 (22.88,29.452) | 25.409 (24.247,26.571) | 0.257 (0.247,0.266)
2 | 286,106 | 24.412 (23.871,24.953) | 24.289 (23.042,25.537) | 23.693 (22.808,24.578) | 0.244 (0.234,0.254)
3 | 152,684 | 25.507 (24.748,26.267) | 24.228 (21.958,26.498) | 25.177 (23.648,26.705) | 0.243 (0.231,0.256)
4 | 143,442 | 26.236 (25.522,26.951) | 26.843 (23.645,30.042) | 25.755 (24.608,26.901) | 0.259 (0.244,0.273)
5+ | 77,689 | 26.882 (25.884,27.88) | 27.356 (22.56,32.152) | 25.719 (24.458,26.98) | 0.263 (0.244,0.282)
Urban size | | | | |
<50k | 34,987 | 20.874 (19.097,22.651) | 26.022 (21.85,30.194) | 21.679 (19.496,23.862) | 0.228 (0.21,0.247)
50-200k | 119,970 | 23.798 (22.902,24.694) | 23.87 (22.364,25.376) | 24.287 (22.881,25.692) | 0.239 (0.222,0.257)
200-500k | 44,578 | 22.435 (21.355,23.515) | 24.487 (18.513,30.461) | 23.436 (22.035,24.837) | 0.236 (0.209,0.263)
500-1000k | 276,629 | 25.334 (24.785,25.882) | 25.695 (24.531,26.86) | 26.128 (25.326,26.93) | 0.263 (0.249,0.278)
1000k+ | 360,897 | 27.062 (26.615,27.508) | 26.883 (25.813,27.953) | 27.43 (26.73,28.131) | 0.271 (0.26,0.283)
Vehicle make | | | | |
American | 290,228 | 26.28 (25.728,26.831) | 24.962 (22.25,27.673) | 24.957 (23.906,26.007) | 0.255 (0.244,0.265)
Asian | 528,810 | 25.075 (24.682,25.468) | 26.283 (24.65,27.917) | 24.94 (24.125,25.755) | 0.249 (0.239,0.258)
European | 18,023 | 26.233 (24.82,27.646) | 23.294 (19.732,26.857) | 25.298 (22.99,27.607) | 0.243 (0.21,0.276)
Vehicle type | | | | |
Car | 610,245 | 25.632 (25.267,25.997) | 25.738 (24.14,27.335) | 25.054 (24.321,25.788) | 0.25 (0.24,0.261)
Van | 27,866 | 27.205 (25.523,28.887) | 28.585 (24.943,32.227) | 28.5 (25.898,31.103) | 0.277 (0.237,0.316)
SUV | 158,202 | 25.274 (24.518,26.031) | 25.441 (22.085,28.796) | 25.053 (23.917,26.189) | 0.254 (0.241,0.267)
Pickup | 40,748 | 23.596 (22.247,24.945) | 23.906 (20.704,27.107) | 23.839 (21.462,26.217) | 0.239 (0.211,0.267)
Fuel type | | | | |
Gas/D | 761,292 | 25.777 (25.444,26.11) | 25.638 (24.128,27.147) | 25.059 (24.318,25.801) | 0.252 (0.243,0.261)
Other | 75,769 | 22.913 (21.982,23.844) | 20.192 (18.427,21.956) | 22.057 (20.348,23.765) | 0.216 (0.195,0.237)
Weekend | | | | |
Weekday | 712,411 | 25.395 (25.079,25.712) | 25.465 (24.053,26.876) | 24.83 (24.105,25.554) | 0.25 (0.241,0.259)
Weekend | 124,650 | 26.218 (25.892,26.545) | 25.812 (23.822,27.803) | 25.623 (24.805,26.442) | 0.258 (0.248,0.268)
Figure 12: Comparing the distribution of common auxiliary variables in SHRP2
with weighted NHTS
Figure 13: Comparing the performance of BART vs GLM in both estimating
propensity scores and predicting some trip-related outcomes. The radar plot on
the right side displays the values of (pseudo-)$R^{2}$ between BART and GLM.
AUC: area under curve; CART: classification and regression trees
Figure 14: Comparing the distribution of common auxiliary variables in pseudo-
weighted SHRP2 (PAPP–BART) with weighted NHTS
# Learning Control of Quantum Systems
Daoyi Dong D. Dong is with the School of Engineering and Information
Technology, University of New South Wales, Canberra, ACT 2600, Australia
<EMAIL_ADDRESS>
###### Abstract
This paper provides a brief introduction to learning control of quantum
systems. In particular, the following aspects are outlined, including
gradient-based learning for optimal control of quantum systems, evolutionary
computation for learning control of quantum systems, learning-based quantum
robust control, and reinforcement learning for quantum control.
## I Introduction
Controlling quantum systems has become a central task in the development of
quantum technologies, and quantum control has witnessed rapid progress in the
last two decades; for an overview, see, e.g., the survey papers [1, 2, 3, 4,
5] or the monographs [6, 7]. The general goal of quantum control is to
actively manipulate and control the dynamics of quantum systems for achieving
given objectives [8, 9] (e.g., rapid state transfer, high-fidelity gate
operation). Two fundamental issues in quantum control are investigating the controllability of quantum systems and designing control laws to achieve the expected control system performance. Controllability is concerned with what
control targets can be achieved and the controllability of finite-dimensional
closed systems has been well addressed [7]. A few results on the
controllability of open quantum systems have also been presented. For control
law design, optimal control theory [1], Lyapunov control approaches [10],
learning control algorithms [2] and robust control methods [11] have been
developed in manipulating quantum systems for achieving various control
objectives.
Among various control design approaches, learning control is recognized as a
powerful method for many complex quantum control tasks and has achieved great
success in laser control of molecules and other applications since the
approach was presented in the seminal paper [12]. Many quantum control tasks
may be formulated as an optimization problem and a learning algorithm can be
employed to search for an optimal control field satisfying a desired
performance condition. Gradient algorithms have been demonstrated to be an
excellent candidate for numerically finding an optimal field and have achieved
successful applications in nuclear magnetic resonance (NMR) systems due to
their high efficiency [13]. In many other optimal control problems, the
gradient information may not be easy to obtain and some complex problems may
have local optima. In these situations, stochastic search algorithms often perform better at finding a good control field. The genetic algorithm
(GA) and differential evolution (DE) [11] have been widely used in the area of
quantum control of molecular systems and achieved great success [2]. Another
task in quantum control is to achieve robustness performance in quantum
systems. Gradient-based learning algorithms and stochastic search algorithms
may be smartly modified to search for robust control fields. Also, some other
machine learning algorithms such as reinforcement learning have found
successful applications in various tasks (e.g., quantum error correction
[14]).
## II Gradient-based learning for optimal control of quantum systems
Consider a finite-dimensional quantum control system where its state
$|\psi(t)\rangle$ (using the Dirac notation) is described by the following
Schrödinger equation (setting $\hbar=1$):
$\frac{d}{dt}|{\psi}(t)\rangle=-i[H_{0}+\sum_{m=1}^{M}u_{m}(t)H_{m}]|\psi(t)\rangle,\quad
t\in[0,T],$ (1)
where $H_{0}$ is the free Hamiltonian of the system and
$H_{c}(t)=\sum_{m=1}^{M}u_{m}(t)H_{m}$ is the control Hamiltonian at time $t$
that represents the interaction of the system with the external fields
$u_{m}(t)$. The $H_{m}$ are Hermitian operators through which the controls
couple to the system. The objective of quantum optimal control is to find
control fields $u_{m}(t)$ for maximizing a performance functional $\Phi$.
$\Phi$ may be a given functional of the state $|\psi\rangle$ and the controls, defined according to practical requirements. For example, the fidelity
$\Phi=|\langle\psi(T)|\psi_{f}\rangle|^{2}$ between the final state
$|\psi(T)\rangle$ and target state $|\psi_{f}\rangle$ or the expectation
$\Phi=|\langle\psi(T)|\hat{O}|\psi(T)\rangle|^{2}$ of an operator $\hat{O}$
may be defined as a performance index for a state transfer task. In order to
maximize the performance $\Phi$, we may employ the GRAPE (gradient ascent
pulse engineering) algorithm [13] or the Krotov method [15] to search for the
control field. For simplicity, we may discretize the time interval $[0,T]$ into $N$ equal steps
and during each step let the control fields $u_{m}$ be constant. The basic
idea in the GRAPE algorithm is that the control fields are iteratively updated
following the gradient direction of $\frac{\delta\Phi}{\delta u_{m}(k)}$ with
a learning rate $\eta$, i.e.,
$u_{m}(k+1)=u_{m}(k)+\eta\frac{\delta\Phi}{\delta u_{m}(k)}.$ (2)
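To make the update rule (2) concrete, the following sketch applies gradient ascent to a piecewise-constant control pulse for a single qubit. The two-level system, the parameter values, and the finite-difference gradient (a simple stand-in for the analytic GRAPE gradient of [13]) are assumptions chosen purely for illustration:

```python
import numpy as np

# Pauli matrices for an illustrative two-level system (not the setup of [13]).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H0 = 0.5 * sz          # free Hamiltonian
Hc = [sx]              # one control Hamiltonian (M = 1)
T, N = 1.0, 20         # horizon and number of piecewise-constant steps
dt = T / N

def propagate(u, psi0):
    """Evolve psi0 under the piecewise-constant controls u (shape (M, N))."""
    psi = psi0.copy()
    for k in range(N):
        H = H0 + sum(u[m, k] * Hc[m] for m in range(len(Hc)))
        # Matrix exponential of the Hermitian H via its eigendecomposition.
        w, V = np.linalg.eigh(H)
        psi = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T @ psi
    return psi

def fidelity(u, psi0, psif):
    """Performance index Phi = |<psi_f|psi(T)>|^2 for a state transfer task."""
    return abs(np.vdot(psif, propagate(u, psi0))) ** 2

psi0 = np.array([1, 0], dtype=complex)   # initial state |0>
psif = np.array([0, 1], dtype=complex)   # target state |1>
u = np.full((1, N), 0.1)                 # small nonzero initial trial field
eta, eps = 2.0, 1e-6
for _ in range(200):
    f0 = fidelity(u, psi0, psif)
    grad = np.zeros_like(u)
    for m in range(u.shape[0]):
        for k in range(N):
            up = u.copy()
            up[m, k] += eps
            grad[m, k] = (fidelity(up, psi0, psif) - f0) / eps
    u += eta * grad   # u(k+1) = u(k) + eta * dPhi/du, as in Eq. (2)
```

After the ascent loop, the fidelity of the state transfer is close to one for this simple system, illustrating why gradient methods are attractive when gradients are cheap to evaluate.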
The gradient-based learning method can also be extended to the optimal control
problem of unitary transformations (e.g., quantum gates) and open quantum
systems. For example, for a unitary transformation $U$, its evolution is
described by the following equation
$\dot{U}(t)=-i[H_{0}+\sum_{m=1}^{M}u_{m}(t)H_{m}]U(t),\ \ \ \ U(0)=I.$ (3)
Now the objective is to design the controls $u_{m}(t)$ to steer the unitary
$U(t)$ from $U(0)=I$ to a desired target $U_{F}$ with high fidelity. We may
define the performance function as $\Phi=|{\langle
U_{F}|e^{i\varphi}U(T)\rangle}|^{2}$ for an arbitrary phase factor $\varphi$.
Then we can calculate the gradient $\delta\Phi/\delta u_{m}(k)$, and the
optimal control field can be searched for by following the gradient [16]. When
we consider the optimal control problem of an open quantum system, its state
should be represented by a density matrix $\rho$ and its dynamics should be
described by a master equation. The dynamics of a Markovian open quantum
system can be described using the following master equation in the Lindblad
form as [6]
$\dot{\rho}(t)=-i[H_{0}+\sum_{m=1}^{M}u_{m}(t)H_{m},\rho(t)]+\sum_{k}\gamma_{k}\mathcal{D}[L_{k}]\rho(t),$
(4)
where the non-negative coefficients $\gamma_{k}$ specify the relevant
relaxation rates, $L_{k}$ are appropriate Lindblad operators and
$\mathcal{D}[L_{k}]\rho=(L_{k}\rho
L_{k}^{\dagger}-\frac{1}{2}L_{k}^{\dagger}L_{k}\rho-\frac{1}{2}\rho
L_{k}^{\dagger}L_{k}).$ The open GRAPE algorithm has also been developed to
calculate the gradient based on the master equation (see [17]).
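Before any gradient can be computed for the open-system case, Eq. (4) itself must be integrated. The following minimal sketch integrates the Lindblad equation for a single qubit with amplitude damping; the system, the decay rate, and the explicit RK4 integrator are illustrative assumptions, standing in for dedicated master-equation solvers:

```python
import numpy as np

def lindblad_rhs(rho, H, Ls, gammas):
    """Right-hand side of the Lindblad master equation (4)."""
    drho = -1j * (H @ rho - rho @ H)
    for g, L in zip(gammas, Ls):
        LdL = L.conj().T @ L
        drho += g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return drho

# Illustrative qubit: free Hamiltonian 0.5*sigma_z, amplitude damping gamma = 1.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # lowering operator |e> -> |g>
H, Ls, gammas = 0.5 * sz, [sm], [1.0]

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in the excited state
dt, steps = 0.01, 100                            # integrate to t = 1 with RK4
for _ in range(steps):
    k1 = lindblad_rhs(rho, H, Ls, gammas)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, Ls, gammas)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, Ls, gammas)
    k4 = lindblad_rhs(rho + dt * k3, H, Ls, gammas)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
```

For this dissipator the excited-state population decays as $e^{-\gamma t}$ while the trace of $\rho$ is preserved, which is a convenient consistency check on any such implementation.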
Using the basic idea of gradient-based learning control, some variants have
been developed for various requirements in quantum optimal control. For
example, a data-driven gradient optimization algorithm (d-GRAPE) has been
proposed to correct deterministic gate errors in high-precision quantum
control by jointly learning from a design model and the experimental data from
quantum tomography [18]. A gradient-based frequency-domain optimization
algorithm has been developed to solve the optimal control problem with
constraints in the frequency domain [19]. Existing results show that gradient-
based learning methods can usually achieve excellent performance for solving
optimal control problems when the system model is known and the dynamics can
be equivalently (or approximately) described using a closed quantum system.
This is also analyzed using quantum control landscape theory [20].
## III Evolutionary computation for learning control of quantum systems
Gradient algorithms are a powerful tool for numerically finding optimal controls owing to their high efficiency [13]. In many practical
applications, it may be difficult to obtain the gradient information or there
exist local optima in complex quantum control problems. For these situations,
a natural idea is to employ stochastic search algorithms to seek good
controls. Evolutionary computation, including GA and DE, has been widely used in
the area of quantum control. In these evolutionary computation methods,
crossover, mutation and selection operations are iteratively implemented to
search for good solutions (optimal controls) in a parameter space. For
example, a subspace-selective self-adaptive differential evolution (SUSSADE)
algorithm has been proposed to achieve a high-fidelity single-shot Toffoli
gate and single-shot three-qubit gates [21], [22]. Existing results showed
that DE with equally-mixed strategies can achieve improved performance for
quantum control problems [23]. Several promising evolutionary algorithms have
been investigated comparatively in [24] and it was found that DE usually
outperformed GA and particle swarm optimization for hard quantum control
problems.
The above introduction of gradient-based learning and evolutionary computation
mainly involves open-loop control strategies. Evolutionary computation has proven especially powerful when integrated into closed-loop control design. Closed-loop learning control, where each cycle of the
closed-loop is executed on a new sample, has achieved great successes in the
laser control of laboratory chemical reactions [2, 12]. A closed-loop learning
control procedure generally involves three components [1, 2]: (i) a trial
laser control input, (ii) the laboratory generation of the control that is
applied to the sample and subsequently observed for its impact, and (iii) a
learning algorithm to suggest the form of the next control input by
considering the prior experiments. The initial trial control input may be a
random input field or a well-designed laser pulse. A feature of a good closed-
loop learning control design is its insensitivity to the initial trials. A key
task is to develop a good learning algorithm for ensuring that the learning
process converges to achieve a predetermined objective. GA, DE and several
rapid convergence algorithms have been developed for this task [4, 12]. The
optimal control problem is usually formulated as solving an optimization
problem by maximizing a functional which is related to some variables such as
the control inputs, quantum states and control time but may have no analytical
form. In the learning process, the optimization problem is solved iteratively.
First, a trial input is applied to a sample to be controlled and the result is
observed. Second, a learning algorithm suggests a better control input based
on the prior experiments. Third, the “better” control input is applied to a
new sample. This process continues until the control objective is achieved or
the maximum permitted iteration number is reached. It is often feasible to
produce many identical-state samples for laboratory chemical molecules. If the
control objective is well chosen, the specified control inputs can be applied to the samples, and the learning algorithm is sufficiently effective at searching for good control inputs, then this process will converge and an optimal control pulse can be found [2].
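The closed-loop procedure can be sketched with a DE/rand/1/bin loop, where a simulated qubit-flip fidelity plays the role of the laboratory sample; the surrogate system, pulse discretization, and DE parameters are all illustrative assumptions, not the laboratory setup of [2, 12]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate "laboratory": fidelity of a qubit flip under a piecewise-constant
# pulse (a stand-in for applying a trial field to a fresh sample).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
N, dt = 10, 0.1

def fidelity(u):
    psi = np.array([1, 0], dtype=complex)
    for uk in u:
        H = 0.5 * sz + uk * sx
        w, V = np.linalg.eigh(H)
        psi = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T @ psi
    return abs(psi[1]) ** 2   # overlap with the target state |1>

# Minimal DE with mutation, binomial crossover, and greedy selection.
NP, F, CR = 20, 0.7, 0.9
pop = rng.uniform(-5, 5, size=(NP, N))        # random initial trial fields
fit = np.array([fidelity(x) for x in pop])
for _ in range(150):
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)                   # mutation
        mask = rng.random(N) < CR
        mask[rng.integers(N)] = True               # keep at least one mutant gene
        trial = np.where(mask, mutant, pop[i])     # binomial crossover
        f = fidelity(trial)                        # "apply to a new sample"
        if f > fit[i]:                             # selection
            pop[i], fit[i] = trial, f

best = fit.max()
```

Each generation mirrors the three components of closed-loop learning: propose a trial field, measure its impact on a fresh sample, and let the algorithm suggest the next trial from the prior results.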
## IV Learning-based quantum robust control
The robust control of quantum systems has been recognized as a key task in
developing practical quantum technology since the existence of noise and
uncertainties is unavoidable. Learning control is an effective candidate for
achieving robust performance in some quantum control problems [11]. We first
consider the control problem of inhomogeneous quantum ensembles. An
inhomogeneous quantum ensemble consists of many individual quantum systems
(e.g., atoms, molecules or spin systems) and the parameters describing the
system dynamics of these individual systems could have variations [25, 26]. An
example is that a spin ensemble in NMR may encounter large dispersion in the
strength of the applied radio frequency field and there also exist variations
in the natural frequencies of these spins [25]. Inhomogeneous quantum
ensembles have wide applications in many fields ranging from quantum memory to
magnetic-resonance imaging. Hence, it is highly desirable to design control
laws for an inhomogeneous ensemble to employ the same control inputs to steer
individual systems with different dynamics from a given initial state to a
target state.
A sampling-based learning control (SLC) method has been developed to achieve
high fidelity control of inhomogeneous quantum ensembles [26]. Consider an
inhomogeneous ensemble in which the Hamiltonian of each individual system has
the following form
$H_{\omega,\theta}(t)=\omega H_{0}+\sum_{m=1}^{M}\theta u_{m}(t)H_{m}.$ (5)
We assume that the parameters $\omega\in[1-\Omega,1+\Omega]$ and
$\theta\in[1-\Theta,1+\Theta]$, and the constants $\Omega\in[0,1]$ and
$\Theta\in[0,1]$ represent the bounds of the parameter dispersion. The
objective is to design the controls $\{u_{m}(t)\}$ to simultaneously steer the individual systems (with different $\omega$ and $\theta$) of the
quantum ensemble from an initial state $|\psi_{0}\rangle$ to the same target
state $|\psi_{f}\rangle$ with high fidelity. This task can be achieved using
the SLC method including two steps of “training” and “testing and evaluation”
[26]. In the training step, we select $N$ samples from the quantum ensemble
regarding the distribution (e.g., uniform distribution) of the inhomogeneity
parameters and then construct a generalized system as follows
$\frac{d}{dt}\left(\begin{array}{c}|\psi_{\omega_{1},\theta_{1}}(t)\rangle\\ |\psi_{\omega_{2},\theta_{2}}(t)\rangle\\ \vdots\\ |\psi_{\omega_{N},\theta_{N}}(t)\rangle\end{array}\right)=-i\left(\begin{array}{c}H_{\omega_{1},\theta_{1}}(t)|\psi_{\omega_{1},\theta_{1}}(t)\rangle\\ H_{\omega_{2},\theta_{2}}(t)|\psi_{\omega_{2},\theta_{2}}(t)\rangle\\ \vdots\\ H_{\omega_{N},\theta_{N}}(t)|\psi_{\omega_{N},\theta_{N}}(t)\rangle\end{array}\right)$ (6)
where
$H_{\omega_{n},\theta_{n}}=\omega_{n}H_{0}+\sum_{m}\theta_{n}u_{m}(t)H_{m}$
with $n=1,2,\dots,N$. The cost function for the generalized system is defined
by
$\Phi_{N}(u):=\frac{1}{N}\sum_{n=1}^{N}\Phi_{\omega_{n},\theta_{n}}(u).$ (7)
The task of the training step is to find a control strategy $u^{*}$ to
maximize the cost functional $\Phi_{N}(u)$. A gradient-based learning
algorithm (s-GRAPE) can be developed to complete this task. In the process of
testing and evaluation, a number of sampling individual systems are randomly
selected to evaluate the control performance. Results show that the SLC method is a promising approach for the control design of various inhomogeneous quantum ensembles
(including inhomogeneous open quantum ensembles).
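The two SLC steps can be sketched as follows: average the fidelity of Eq. (7) over a grid of training samples drawn from the inhomogeneity ranges of Eq. (5), maximize that average, then evaluate on randomly drawn ensemble members. The qubit model, the dispersion bounds, and the finite-difference ascent (a crude stand-in for the s-GRAPE algorithm of [26]) are assumptions for illustration:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
N, dt = 10, 0.1
psi0 = np.array([1, 0], dtype=complex)
psif = np.array([0, 1], dtype=complex)

def fid(u, omega, theta):
    """Fidelity for one ensemble member with H = omega*H0 + theta*u*Hc, Eq. (5)."""
    psi = psi0.copy()
    for uk in u:
        H = omega * 0.5 * sz + theta * uk * sx
        w, V = np.linalg.eigh(H)
        psi = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T @ psi
    return abs(np.vdot(psif, psi)) ** 2

# Training step: samples on a grid over the ranges (here Omega = Theta = 0.2).
samples = [(w_, t_) for w_ in (0.8, 1.0, 1.2) for t_ in (0.8, 1.0, 1.2)]

def Phi_N(u):
    return np.mean([fid(u, w_, t_) for w_, t_ in samples])   # Eq. (7)

# Finite-difference ascent on the averaged cost.
u = np.full(N, 0.5)
eta, eps = 2.0, 1e-6
E = np.eye(N)
for _ in range(300):
    base = Phi_N(u)
    g = np.array([(Phi_N(u + eps * E[k]) - base) / eps for k in range(N)])
    u += eta * g

# Testing and evaluation step: randomly drawn ensemble members.
rng = np.random.default_rng(1)
test_fids = [fid(u, rng.uniform(0.8, 1.2), rng.uniform(0.8, 1.2))
             for _ in range(50)]
```

A single pulse optimized this way achieves a high average fidelity across members whose parameters it never saw during training, which is the essence of the SLC idea.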
Besides inhomogeneous quantum ensembles, the SLC method is useful for robust
control of single quantum systems with various uncertainties. For example, Eq.
(5) can also correspond to the Hamiltonian of a quantum system with inaccurate
model parameter $\omega$ and uncertain multiplicative noise $\theta$. In order
to achieve robust control for such a quantum system, we may employ the SLC
method to search for robust control pulses [27, 28, 29]. The performance of the SLC approach can be further improved by exploring the richness and diversity
of samples. Inspired by deep learning, a batch-based gradient algorithm
(b-GRAPE) has been presented for efficiently seeking robust quantum controls,
and numerical results showed that b-GRAPE can achieve improved performance
over the SLC method for remarkably enhancing the control robustness while
maintaining high fidelity [30]. In other applications where we need to enhance
the robustness in closed-loop learning control, we may either use the Hessian
matrix information [31] or integrate the idea of SLC into the learning
algorithm in searching for robust control fields. For example, an improved DE
algorithm (called msMS_DE) has been proposed to search for robust
femtosecond laser pulses to control fragmentation of the molecule
$\text{CH}_{2}\text{BrI}$ [11]. In msMS_DE, multiple samples are used for
fitness evaluation and a mixed strategy is employed for the mutation
operation.
## V Reinforcement learning for quantum control
Reinforcement learning (RL) [32] is another important machine learning
approach and it addresses the problem of how an active agent can learn to
approximate an optimal strategy while interacting with its environment. It is
a model-free feedback-based approach and works well even when the system model
is unknown or with uncertainties. RL has been used for learning control of
quantum systems. For example, a fidelity-based probabilistic Q-learning
approach has been presented to naturally solve the balance problem between
exploration and exploitation and was applied to learning control of quantum
systems [33]. The authors in [34] showed that the performance of RL is
comparable to optimal control approaches in the task of finding short, high-fidelity protocols that drive the system from an initial state to a given target state in nonintegrable many-body quantum systems of interacting qubits. RL can also
help identify variational protocols with nearly optimal fidelity even in the
glassy phase. In [35], deep reinforcement learning is employed to
simultaneously optimize the speed and fidelity of quantum computation against
both leakage and stochastic control errors. A universal quantum control
framework was presented to improve the control robustness by adding control
noise into training environments for RL agents trained with trust-region policy optimization. By utilizing two-stage learning with teacher and student
networks and a reward quantifying the capability to recover the quantum
information stored in a quantum system, the authors in [14] showed how a
network-based “agent” in RL can discover good quantum-error-correction
strategies to protect qubits against noise.
## VI Conclusions
Machine learning has shown powerful capability in discovering high quality
controls to achieve optimal control and enhance robust performance for quantum
systems. On one hand, it is necessary to further develop or improve existing
machine learning algorithms to effectively solve complex quantum control
problems emerging from new quantum technologies. On the other hand, various
cutting-edge machine learning techniques should be able to find new
application opportunities in the area of quantum control.
## References
* [1] D. Dong, and I. R. Petersen, “Quantum control theory and applications: a survey,” IET Control Theory & Applications, vol. 4, no. 12, pp. 2651-2671, 2010.
* [2] H. Rabitz, R. De Vivie-Riedle, M. Motzkus, and K. Kompa, “Whither the future of controlling quantum phenomena?” Science, vol. 288, no. 5467, pp. 824-828, 2000.
* [3] S. J. Glaser, U. Boscain, T. Calarco, C. P. Koch, W. Köckenberger, R. Kosloff, I. Kuprov, B. Luy, S. Schirmer, T. Schulte-Herbrüggen, D. Sugny and F. K. Wilhelm, “Training Schrödinger’s cat: quantum optimal control,” The European Physical Journal D, vol. 69, p. 279, 2015.
* [4] C. Brif, R. Chakrabarti and H. Rabitz, “Control of quantum phenomena: past, present and future,” New Journal of Physics, Vol.12, p.075008, 2010.
* [5] C. Altafini and F. Ticozzi, “Modeling and control of quantum systems: an introduction,” IEEE Transactions on Automatic Control, vol.57, no.8, pp.1898-1917, 2012.
* [6] H. M. Wiseman and G. J. Milburn, “Quantum Measurement and Control,” Cambridge, England: Cambridge University Press, 2010.
* [7] D. D’Alessandro, Introduction to Quantum Control and Dynamics, Chapman & Hall/CRC, 2007.
* [8] A. Acín, I. Bloch, H. Buhrman, T. Calarco, C. Eichler, J. Eisert, _et al._ , “The quantum technologies roadmap: a European community view”, New Journal of Physics, vol. 20, p. 080201, 2018.
* [9] Y. Guo, C. C. Shu , D. Dong, and F. Nori, “Vanishing and revival of resonance Raman scattering”, Physical Review Letters, vol. 123, p. 223202, 2019.
* [10] S. Kuang, D. Dong, and I. R. Petersen, “Rapid Lyapunov control of finite-dimensional quantum systems” Automatica, vol. 81, pp.164-175, 2017.
* [11] D. Dong, X. Xing, H. Ma, C. Chen, Z. Liu and H. Rabitz, “Learning-based quantum robust control: algorithm, applications and experiments,” IEEE Transactions on Cybernetics, vol.50, pp.3581- 3593, 2020.
* [12] R. S. Judson, and H. Rabitz, “Teaching lasers to control molecules,” Physical Review Letters, vol. 68, pp. 1500-1503, 1992.
* [13] N. Khaneja, T. Reiss, C Kehlet, T. Schulte-Herbrüggen, and S. J. Glaser, “Optimal control of coupled spin dynamics: design of NMR pulse sequences by gradient ascent algorithms,” Journal of Magnetic Resonance, vol. 172, no. 2, pp. 296-305, 2005.
* [14] T. Fösel, P. Tighineanu, T. Weiss, and F. Marquardt, “Reinforcement learning with neural networks for quantum feedback”, Physical Review X, vol. 8, p. 031084, 2018.
* [15] G. Jäger, D. M. Reich, M. H. Goerz, C. P. Koch, and U. Hohenester, “Optimal quantum control of Bose-Einstein condensates in magnetic microtraps: Comparison of gradient-ascent-pulse-engineering and Krotov optimization schemes,” Physical Review A, vol. 90, p. 033628, 2014.
* [16] D. Dong, C. Wu, C. Chen, B. Qi, I. R. Petersen and F. Nori, “Learning robust pulses for generating universal quantum gates,” Scientific Reports, vol. 6, p. 36090, 2016.
* [17] T. Schulte-Herbrüggen, A. Spörl, N. Khaneja, & S. J. Glaser. “Optimal control for generating quantum gates in open dissipative systems”. J. Phys. B: At. Mol. Opt. Phys. vol. 44, p. 154013, 2011.
* [18] R. B. Wu, B. Chu, D.H. Owens, H. Rabitz, “Data-driven gradient algorithm for high-precision quantum control,” Physical Review A, vol. 97, p. 042122, 2018.
* [19] C. C. Shu, T. S. Ho, X. Xing and H. Rabitz, “Frequency domain quantum optimal control under multiple constraints,” Physical Review A, vol. 93, p. 033417, 2016.
* [20] R. Chakrabarti and H. Rabitz, “Quantum control landscapes,” International Reviews in Physical Chemistry, vol. 26, no. 4, pp. 671-735, 2007.
* [21] E. Zahedinejad, J. Ghosh, and B. C. Sanders, “High-fidelity single-shot Toffoli gate via quantum control,” Physical Review Letters, vol. 114, no. 20, p. 200502, 2015.
* [22] E. Zahedinejad, J. Ghosh, and B. C. Sanders, “Designing high-fidelity single-shot three-qubit gates: A machine-learning approach,” Physical Review Applied, vol. 6, p. 054005, 2016.
* [23] H. Ma, D. Dong, C.-C. Shu, Z. Zhu, and C. Chen, “Differential evolution with equally-mixed strategies for robust control of open quantum systems,” Control Theory and Technology, vol. 15, pp. 226-241, 2017.
* [24] E. Zahedinejad, S. Schirmer, and B. C. Sanders, “Evolutionary algorithms for hard quantum control,” Physical Review A, vol. 90, no. 3, p. 032310, 2014.
* [25] J. S. Li, and N. Khaneja, “Control of inhomogeneous quantum ensembles,” Physical Review A, vol. 73, no. 3, p. 030302, 2006.
* [26] C. Chen, D. Dong, R Long, I. R. Petersen and H. A. Rabitz, “Sampling-based learning control of inhomogeneous quantum ensembles,” Physical Review A, vol. 89, no. 2, p. 023402, 2014.
* [27] D. Dong, M. A. Mabrok, I. R. Petersen, B. Qi, C. Chen, and H. Rabitz, “Sampling-based learning control for quantum systems with uncertainties,” IEEE Transactions on Control Systems Technology, vol. 23, pp. 2155-2166, 2015.
* [28] D. Dong, C. Chen, B. Qi, I. R. Petersen and F. Nori, “Robust manipulation of superconducting qubits in the presence of fluctuations,” Scientific Reports, vol. 5, p. 7873, 2015.
* [29] C. Wu, B. Qi, C. Chen, and D. Dong, “Robust learning control design for quantum unitary transformations,” IEEE Transactions on Cybernetics, vol. 47, pp. 4405-4417, 2017.
* [30] R. Wu, H. Ding, D. Dong and X. Wang, “Learning robust and high-precision quantum controls,” Physical Review A, vol. 99, p. 042327, 2019.
* [31] X. Xing, R. Rey-de-Castro and H. Rabitz, “Assessment of optimal control mechanism complexity by experimental landscape Hessian analysis: fragmentation of CH2BrI,” New Journal of Physics, vol. 16, p. 125004, 2014.
* [32] R. Sutton and A. G. Barto, _Reinforcement Learning: An Introduction_. Cambridge, MA: MIT Press, 1998.
* [33] C. Chen, D. Dong, H.X. Li, J. Chu and T.J. Tarn, “Fidelity-based probabilistic Q-learning for control of quantum systems”, _IEEE Transactions on Neural Networks and Learning Systems_ , Vol. 25, pp.920-933, 2014.
* [34] M. Bukov, A. G. R. Day, D. Sels, P. Weinberg, A. Polkovnikov, and P. Mehta, “Reinforcement learning in different phases of quantum control”, Physical Review X, vol. 8, p. 031086, 2018.
* [35] M. Y. Niu, S. Boixo, V. N. Smelyanskiy, and H. Neven, “Universal quantum control through deep reinforcement learning”, npj Quantum Information, vol. 5, p. 33, 2019.
# Householder Dice: A Matrix-Free Algorithm for Simulating Dynamics on
Gaussian and Random Orthogonal Ensembles
Yue M. Lu Y. M. Lu is with the John A. Paulson School of Engineering and
Applied Sciences, Harvard University, Cambridge, MA 02138, USA (e-mail:
yuelu@seas.harvard.edu). The initial part of this work was done during his
sabbatical at the École normale supérieure (ENS) in Paris, France in Fall
2019. He thanks colleagues at the ENS for their hospitality and stimulating
discussions. This work was supported by the Harvard FAS Dean’s Fund for
Promising Scholarship, by the chaire CFM-ENS “Science des donnees”, and by the
US National Science Foundation under grants CCF-1718698 and CCF-1910410.
###### Abstract
This paper proposes a new algorithm, named Householder Dice (HD), for
simulating dynamics on dense random matrix ensembles with translation-
invariant properties. Examples include the Gaussian ensemble, the Haar-
distributed random orthogonal ensemble, and their complex-valued counterparts.
A “direct” approach to the simulation, where one first generates a dense
$n\times n$ matrix from the ensemble, requires at least $\mathcal{O}(n^{2})$
resource in space and time. The HD algorithm overcomes this
$\mathcal{O}(n^{2})$ bottleneck by using the principle of deferred decisions:
rather than fixing the entire random matrix in advance, it lets the randomness
unfold with the dynamics. At the heart of this matrix-free algorithm is an
adaptive and recursive construction of (random) Householder reflectors. These
orthogonal transformations exploit the group symmetry of the matrix ensembles,
while simultaneously maintaining the statistical correlations induced by the
dynamics. The memory and computation costs of the HD algorithm are
$\mathcal{O}(nT)$ and $\mathcal{O}(nT^{2})$, respectively, with $T$ being the
number of iterations. When $T\ll n$, which is nearly always the case in
practice, the new algorithm leads to significant reductions in runtime and
memory footprint. Numerical results demonstrate the promise of the HD
algorithm as a new computational tool in the study of high-dimensional random
systems.
## 1 Introduction
To do research involving large random systems, one must make a habit of
experimenting on the computer. Indeed, computer simulations help verify
theoretical results and provide new insights, not to mention that they can
also be incredibly fun. For many problems in statistical learning, random
matrix theory, and statistical physics, the simulations that one encounters
are often given as an iterative process in the form of
$\boldsymbol{x}_{t+1}=f_{t}(\boldsymbol{M}_{t}\boldsymbol{x}_{t},\boldsymbol{x}_{t},\boldsymbol{x}_{t-1},\ldots,\boldsymbol{x}_{t-d}),\qquad\text{for
}1\leq t\leq T.$ (1)
Here, $\boldsymbol{M}_{t}$ is either $\boldsymbol{Q}$ or
$\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}$, where $\boldsymbol{Q}$ is a random
matrix; $f_{t}(\cdot)$ denotes some general vector-valued function that maps
$\boldsymbol{M}_{t}\boldsymbol{x}_{t}$ and a few previous iteration vectors
$\left\\{\boldsymbol{x}_{t-i}\right\\}_{0\leq i\leq d}$ to the next one
$\boldsymbol{x}_{t+1}$; and $T$ is the total number of iterations.
With suitable definitions of the mappings $f_{t}(\cdot)$, the formulation in
(1) includes many well-known algorithms as its special cases. A classical
example is to use iterative methods [1] to compute the extremal
eigenvalues/eigenvectors of a (spiked) random matrix [2, 3]. Other examples
include approximate message passing on dense random graphs [4, 5, 6, 7, 8],
and gradient descent algorithms for solving learning and estimation problems
with random design [9, 10]. In this paper, we show that all of these
algorithms can be simulated by an efficient _matrix-free_ scheme, if the
random matrix $\boldsymbol{Q}$ is drawn from an ensemble with translation-
invariant properties. Examples of such ensembles include the i.i.d. Gaussian
(i.e. the rectangular Ginibre) ensemble, the Haar-distributed random
orthogonal ensemble, the Gaussian orthogonal ensemble, and their complex-
valued counterparts.
What is wrong with the standard way of simulating (1), where we first draw a
sample $\boldsymbol{Q}$ from the matrix ensemble and then carry through the
iterations? This direct approach is straightforward to implement, but it
cannot handle large dimensions. To see this, suppose that
$\boldsymbol{Q}\in\mathbb{R}^{m\times n}$ with $m\asymp n$. We shall also assume that the
computational cost of the nonlinear mapping $f_{t}(\cdot)$ is
$\mathcal{O}(n)$. It follows that, at each iteration of (1), most of the
computation is spent on the matrix-vector multiplication
$\boldsymbol{M}_{t}\boldsymbol{x}_{t}$, at a cost of $\mathcal{O}(n^{2})$
work. It is not at all obvious that one can do much better: merely generating
an $n\times n$ Gaussian matrix already requires $\mathcal{O}(n^{2})$ resource in
computation and storage. When $n$ is large, $n^{2}$ is huge. In practice, this
$\mathcal{O}(n^{2})$ bottleneck means that one cannot simulate (1) at a
dimension much larger than $n=10^{4}$ on a standard computer (in a reasonable
amount of time). However, there are many occasions, especially in the study of
high-dimensional random systems, where one does wish to simulate large random
matrices. A common workaround is to choose a moderate dimension (_e.g._ ,
$n=1000$), repeat the simulation over many independent trials, and then
average the results to reduce statistical fluctuations. In addition to having
to spend extra time on the repeated trials, this strategy can still suffer
from strong finite size effects, making it a poor approximation of the true
high-dimensional behavior of the underlying random systems. (An example is
given in Section 2.2 to illustrate this issue.)
In this paper, we propose a new algorithm, named _Householder Dice_ (HD), for
simulating the dynamics in (1) on the Gaussian, Haar, and other related random
matrix ensembles. Our new approach is _statistically-equivalent_ to the direct
approach discussed above, but the memory and computation costs of the HD
algorithm are $\mathcal{O}(nT)$ and $\mathcal{O}(nT^{2})$, respectively, where
$T$ is the number of iterations. In many problems, $T$ is much smaller than
$n$. Typically, $T=\mathcal{O}(\text{polylog}(n))$. In such cases, the new
algorithm leads to significant reductions in runtime and memory footprint. In
the numerical examples presented in Section 2, we show that the crossover
value of $n$ at which the HD algorithm outperforms the direct approach can be
as low as $500$. The speedup becomes orders of magnitude greater for $n\geq
10^{4}$. Moreover, the HD algorithm expands the limits of what could be done
on standard computers by making it tractable to perform dense random matrix
experiments in dimensions as large as $n=10^{7}$.
The basic idea of the HD algorithm follows the so-called principle of deferred
decisions [11]. Intuitively, each iteration of (1) only probes
$\boldsymbol{Q}$ in a one-dimensional space spanned by $\boldsymbol{x}_{t}$.
Thus, if the total number of iterations $T\ll n$, we only need to expose the
randomness of $\boldsymbol{Q}$ over a few low-dimensional subspaces. It is
then clearly wasteful to fix and store in memory the full matrix in advance.
The situation is analogous to that of simulating a simple random walk for $T$
steps. We can let the random choices gradually unfold with the progress of the
walk, fixing only the randomness that must be revealed at any given step. The
challenge in our problem though is that the dynamics in (1) can create a
complicated dependence structure between the random matrix $\boldsymbol{Q}$
and the iteration vectors
$\boldsymbol{x}_{t},\boldsymbol{x}_{t-1},\ldots,\boldsymbol{x}_{0}$.
Nevertheless, we show that this dependence structure can be exactly accounted
for by an adaptive and recursive construction of (random) Householder
reflectors [12, 13] which exploit the inherent group symmetry of the matrix
ensembles.
Using Householder reflectors to speed up random matrix experiments is not a
new idea. It is well-known [14, 15] that a Haar-distributed random orthogonal
matrix can be factorized as a product of Householder reflectors. This leads to
an efficient way of generating a random orthogonal matrix with
$\mathcal{O}(n^{2})$ operations (rather than the $\mathcal{O}(n^{3})$ cost
associated with a full QR decomposition on a Gaussian matrix). Householder
reflectors have also been applied to reduce a Gaussian matrix to a
particularly simple random bidiagonal form [16, 17]. This clever factorization
leads to an $\mathcal{O}(n^{2})$ algorithm for simulating the spectrum
densities of Gaussian and Wishart matrices. (Recall that a standard eigenvalue
decomposition on a dense matrix requires $\mathcal{O}(n^{3})$ work in
practice.) The proposed HD algorithm differs from the previous work in that it
is a truly _matrix-free_ construction. With the progress of the dynamics, it
gradually builds a recursive set of (random) Householder reflectors based on
the current iteration vector $\boldsymbol{x}_{t}$ and the history of the
iterations up to this point. This adaptive, “on-the-fly” construction is
essential for us to capture the correlation structures generated by the
dynamics without fixing the matrix in advance.
The rest of the paper is organized as follows. We first present in Section 2 a
few motivating examples to showcase the applications of the HD algorithm.
Section 3 contains the main technical results of this paper. After a brief
review of the basic properties of the Haar measure (on classical matrix
groups) and Householder reflectors, we present the construction of the
proposed algorithm for the Gaussian and random orthogonal ensembles. Theorems
1 and 2 establish the statistical equivalence of the HD algorithm and the
direct approach to simulating (1). Generalizations to complex-valued and other
related ensembles are discussed in Section 3.4. We conclude the paper in
Section 4.
## 2 Numerical Examples
Before delving into technical details, it is helpful to go through a few
motivating applications that show how the HD algorithm can significantly speed
up the simulation tasks. (All of the numerical experiments presented in this
section were done in Julia [18]; the source code implementing the HD
algorithm is available online at https://github.com/yuelusip/HouseholderDice.)
### 2.1 Lasso with Random Designs
In the first example, we consider the simulation of the lasso estimator widely
used in statistics and machine learning. The goal is to estimate a sparse
vector $\boldsymbol{\beta}^{\ast}\in\mathbb{R}^{n}$ from its noisy linear
observation given by
$\boldsymbol{y}=\boldsymbol{Q}\boldsymbol{\beta}^{\ast}+\boldsymbol{w},$
where $\boldsymbol{Q}\in\mathbb{R}^{m\times n}$ is a design (or covariate)
matrix, and $\boldsymbol{w}\sim\mathcal{N}(0,\sigma_{w}^{2}\boldsymbol{I})$
denotes the noise in $\boldsymbol{y}$. The lasso estimator is formulated as an
optimization problem
$\widehat{\boldsymbol{\beta}}=\underset{\boldsymbol{\beta}}{\arg\min}\
\frac{1}{2}\mathinner{\\!\left\lVert\boldsymbol{y}-\boldsymbol{Q}\boldsymbol{\beta}\right\rVert}^{2}+\lambda\mathinner{\\!\left\lVert\boldsymbol{\beta}\right\rVert}_{1},$
(2)
where $\widehat{\boldsymbol{\beta}}$ is an estimate of
$\boldsymbol{\beta}^{\ast}$ and $\lambda>0$ is a regularization parameter.
A popular method for solving (2) is the iterative soft-thresholding algorithm
(ISTA) [19]:
$\boldsymbol{x}_{t+1}=\eta_{\lambda\tau}[\boldsymbol{x}_{t}+\tau\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}(\boldsymbol{y}-\boldsymbol{Q}\boldsymbol{x}_{t})],\qquad
0\leq t<T,$ (3)
where $\tau>0$ denotes the step size and
$\eta_{\lambda\tau}(x)=\operatorname{sign}(x)\max\left\\{\mathinner{\\!\left\lvert
x\right\rvert}-\lambda\tau,0\right\\}$ is an element-wise soft-thresholding
operator. In many theoretical studies of lasso, one assumes that the design
matrix is random with i.i.d. normal entries, _i.e._
$Q_{ij}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\tfrac{1}{m})$. In this
case, ISTA is an iterative process on a Gaussian matrix $\boldsymbol{Q}$ and
its transpose. With some change of variables, we can rewrite (3) as a special
case of the general dynamics given in (1), with one iteration of (3) mapped to
two iterations of (1).
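To make the iteration concrete, here is a minimal NumPy sketch of the ISTA update (3). The function name `ista` and the zero initialization are our own illustrative choices (the paper's experiments were implemented in Julia); each step is one gradient move on the quadratic term followed by element-wise soft-thresholding.

```python
import numpy as np

def ista(Q, y, lam, tau, T):
    """Iterative soft-thresholding (ISTA) for the lasso (2), following Eq. (3).

    Q   : (m, n) design matrix
    y   : (m,)  observation vector
    lam : regularization parameter lambda
    tau : step size
    T   : number of iterations
    """
    def soft(x, thr):
        # soft-thresholding eta_thr(x) = sign(x) * max(|x| - thr, 0)
        return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

    x = np.zeros(Q.shape[1])  # zero initialization (our choice)
    for _ in range(T):
        x = soft(x + tau * Q.T @ (y - Q @ x), lam * tau)
    return x
```

With a dense design, each product `Q @ x` costs $\mathcal{O}(mn)$ work, which is precisely the bottleneck the HD algorithm removes.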
We simulate the ISTA dynamics using both the proposed HD algorithm and the
direct simulation approach that fixes the Gaussian matrix $\boldsymbol{Q}$ in
advance. In our experiments, the target sparse vector
$\boldsymbol{\beta}^{\ast}$ has i.i.d. entries drawn from the Bernoulli-
Gaussian prior
$\beta^{\ast}_{i}\sim\rho\delta(\beta)+(1-\rho)\frac{1}{\sqrt{2\pi\sigma_{s}^{2}}}\exp\Big{\\{}-\frac{\beta^{2}}{2\sigma_{s}^{2}}\Big{\\}},$
where $0<\rho<1$ and $\sigma_{s}>0$ are two constants. The design matrix
$\boldsymbol{Q}$ is of size $m\times n$ with $m=\lfloor n/2\rfloor$.
Figure 1: Simulating the ISTA dynamics (3) using two approaches: the standard
approach where the random matrix $\boldsymbol{Q}$ is generated in advance, and
the proposed HD algorithm. (a) The time-varying MSE averaged over $10^{5}$
independent trials, with the results from the two approaches matching. (b)
Runtime versus the matrix dimension $n$, shown in log-log scale. In all the
experiments, the parameters are set to $T=50$, $\lambda=2$, $\tau=0.3$,
$\rho=0.2$, $\sigma_{s}=2$ and $\sigma_{w}=0.1$.
Figure 1 shows the mean-squared error (MSE)
$e^{(t)}\overset{\text{def}}{=}\frac{1}{n}\mathinner{\\!\left\lVert\boldsymbol{x}_{t}-\boldsymbol{\beta}^{\ast}\right\rVert}^{2}$
at each iteration of the dynamics, obtained by averaging over $10^{5}$
independent trials. The dimension here is $n=1000$. The results from the HD
algorithm (the red circles in the figure) match those from the standard
approach (the blue line). This is expected, since Householder Dice is designed
to be statistically equivalent to the direct approach. However, the two
simulation approaches behave very differently in runtime and memory footprint,
as shown in Figure 1. When we increase the dimension $n$, the runtime of the
standard approach exhibits a quadratic growth rate $\mathcal{O}(n^{2})$,
whereas the runtime of the HD algorithm scales linearly with $n$. For
comparison, we also plot in the figure the runtime for merely generating an
i.i.d. Gaussian matrix $\boldsymbol{Q}$ of size $m\times n$.
For small dimensions ($250\leq n<500$), the HD algorithm takes slightly more
time than the direct approach, likely due to the additional overhead in
implementing the former. Starting from $n\geq 500$, it becomes the more
efficient choice. In fact, for $n\geq 2500$, the HD algorithm can simulate the
ISTA dynamics (for 50 iterations) in less time than it takes to generate the
Gaussian matrix. For dimensions beyond $n=10^{5}$, Householder Dice becomes
the only feasible method, as implementing the direct approach would require
more memory than available on the test computer (equipped with 32 GB of RAM).
Finally, for $n=10^{7}$, the runtime for the HD algorithm is 92 seconds,
whereas by extrapolation the direct approach would have taken $7.7\times
10^{6}$ seconds (approximately 89 days).
### 2.2 Spectral Method for Generalized Linear Models
In the second example, we consider a spectral method [20, 21, 22, 23] with
applications in signal estimation and exploratory data analysis. Let
$\boldsymbol{\xi}$ be an unknown vector in $\mathbb{R}^{n}$ and
$\left\\{\boldsymbol{a}_{i}\right\\}_{1\leq i\leq m}$ a set of sensing
vectors. We seek to estimate $\boldsymbol{\xi}$ from a number of generalized
linear measurements
$\left\\{y_{i}=f(\boldsymbol{a}_{i}^{\mkern-1.5mu\mathsf{T}}\boldsymbol{\xi})\right\\}_{1\leq
i\leq m}$, where $f(\cdot)$ is some function modeling the acquisition process.
The spectral method works as follows. Let
$\boldsymbol{D}\overset{\text{def}}{=}\frac{1}{m}\boldsymbol{A}\operatorname{diag}\left\\{y_{1},\ldots,y_{m}\right\\}\boldsymbol{A}^{\mkern-1.5mu\mathsf{T}},$
(4)
where
$\boldsymbol{A}=[\boldsymbol{a}_{1},\boldsymbol{a}_{2},\ldots,\boldsymbol{a}_{m}]$
is a matrix whose columns are the sensing vectors. Denote by
$\boldsymbol{x}_{1}$ a normalized eigenvector associated with the largest
eigenvalue of $\boldsymbol{D}$. This vector $\boldsymbol{x}_{1}$ is then our
estimate of $\boldsymbol{\xi}$, up to a scaling factor. The performance of the
spectral method is usually given in terms of the squared cosine similarity
$\rho(\boldsymbol{\xi},\boldsymbol{x}_{1})=\frac{(\boldsymbol{\xi}^{\mkern-1.5mu\mathsf{T}}\boldsymbol{x}_{1})^{2}}{\mathinner{\\!\left\lVert\boldsymbol{\xi}\right\rVert}^{2}\mathinner{\\!\left\lVert\boldsymbol{x}_{1}\right\rVert}^{2}}$.
Asymptotic limits of $\rho(\boldsymbol{\xi},\boldsymbol{x}_{1})$ have been
derived for the cases where $\boldsymbol{A}$ is an i.i.d. Gaussian matrix [22,
23] or a subsampled random orthogonal matrix [24]. In our experiment, we
consider the latter setting. Assume $m=\lfloor\alpha n\rfloor$ for some
$\alpha>1$. We can write
$\boldsymbol{A}=\begin{bmatrix}\boldsymbol{I}_{n}&\boldsymbol{0}_{n\times(m-n)}\end{bmatrix}\boldsymbol{Q},$
(5)
where $\boldsymbol{Q}\in\mathbb{R}^{m\times m}$ is a random orthogonal matrix
drawn from the Haar distribution.
Figure 2: Simulating the spectral method given in (4) and comparing the
empirical results against the asymptotic predictions given in [24]. The result
for $n=10^{3}$ shows strong statistical fluctuations. This can be reduced by
averaging over multiple independent trials, but the average curve still
suffers from strong finite size effects, especially near the phase transition
point. At $n=10^{5}$, the match between the empirical results and the
theoretical curve is nearly perfect in any (typical) trial.
We simulate the spectral method and compare its empirical performance with the
asymptotic limit given in [24]. In our experiment, the measurement model is
set to be
$y_{i}=\tanh\big{(}\mathinner{\\!\left\lvert\boldsymbol{a}_{i}^{\mkern-1.5mu\mathsf{T}}\boldsymbol{\xi}\right\rvert}\big{)}$.
We compute the leading eigenvector $\boldsymbol{x}_{1}$ by using the Krylov-
Schur algorithm [1], which involves the repeated multiplication of
$\boldsymbol{D}$ with some vectors. With the forms of $\boldsymbol{D}$ and
$\boldsymbol{A}$ given above, this algorithm can again be regarded as a
special case of the general dynamics in (1). We use the HD algorithm for the
simulation and show the results in Figure 2 for two different matrix
dimensions: $n=10^{3}$ and $n=10^{5}$. Observe that, at $n=10^{3}$, there are
still noticeable fluctuations of the actual performance of the spectral
method (shown as green dots in the figure) around the theoretical prediction (the
blue line). To get a better match, the standard practice is to run many
independent trials (2000 in our experiment) and average the results. This
gives us the green curve in the figure. Averaging can indeed reduce
statistical fluctuations, but there are still strong finite size effects,
especially near the phase transition point. This is a case where the
capability of the proposed HD algorithm to handle large matrices becomes
particularly attractive: when we increase the dimension to $n=10^{5}$, the
empirical results match the theoretical curve very closely in any (typical)
trial, with no need for averaging over repeated simulations. In terms of
runtime, it takes the HD algorithm less than 4 seconds on average to obtain an
extremal eigenvalue/eigenvector of $\boldsymbol{D}$ for $n=10^{5}$.
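The multiplication structure of this experiment can be illustrated with a short NumPy sketch. Plain power iteration stands in here for the Krylov-Schur solver used in the paper, and `spectral_estimate` is a name we introduce for illustration; the key point is that each product with $\boldsymbol{D}$ is applied as $\frac{1}{m}\boldsymbol{A}(\boldsymbol{y}\odot\boldsymbol{A}^{\mkern-1.5mu\mathsf{T}}\boldsymbol{x})$ without ever forming $\boldsymbol{D}$.

```python
import numpy as np

def spectral_estimate(A, y, iters=2000, seed=0):
    """Leading eigenvector of D = (1/m) A diag(y) A^T from Eq. (4),
    computed by plain power iteration with D applied matrix-free.
    (Power iteration is a simpler stand-in for Krylov-Schur; it finds
    the dominant eigenvector since y >= 0 makes D positive semidefinite.)"""
    m = A.shape[1]
    x = np.random.default_rng(seed).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = A @ (y * (A.T @ x)) / m   # D @ x without materializing D
        x /= np.linalg.norm(x)
    return x
```

In the paper's setting, $\boldsymbol{A}$ itself is a slice of a Haar matrix, so the inner products with $\boldsymbol{A}$ and $\boldsymbol{A}^{\mkern-1.5mu\mathsf{T}}$ are in turn handled by the HD machinery of Section 3 rather than by a stored matrix.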
## 3 Main Results
_Notation_ : In what follows, $\boldsymbol{e}_{i}$ denotes the $i$th natural
basis vector, and
$\boldsymbol{Z}_{i}\overset{\text{def}}{=}\boldsymbol{I}-\boldsymbol{e}_{i}\boldsymbol{e}_{i}^{\mkern-1.5mu\mathsf{T}}$.
For $i\leq j$, we use $\boldsymbol{Z}_{i\mathrel{\mathop{\ordinarycolon}}j}$
as a shorthand notation for $\prod_{i\leq k\leq j}\boldsymbol{Z}_{k}$. The
dimension of $\boldsymbol{Z}_{i}$ and
$\boldsymbol{Z}_{i\mathrel{\mathop{\ordinarycolon}}j}$ is either $m\times m$
or $n\times n$, which will be made clear from the context. For any
$\boldsymbol{v}\in\mathbb{R}^{n}$, the “slicing” operation that takes a subset
of $\boldsymbol{v}$ is denoted by
$\boldsymbol{v}[i\mathrel{\mathop{\ordinarycolon}}j]\overset{\text{def}}{=}[v_{i},v_{i+1},\ldots,v_{j}]^{\mkern-1.5mu\mathsf{T}},$
(6)
where $1\leq i\leq j\leq n$. We use
$\mathbb{O}(n)\overset{\text{def}}{=}\\{\boldsymbol{M}\in\mathbb{R}^{n\times
n}\mathrel{\mathop{\ordinarycolon}}\boldsymbol{M}\boldsymbol{M}^{\mkern-1.5mu\mathsf{T}}=\boldsymbol{I}_{n}\\}$
to denote the set of $n\times n$ orthogonal matrices, and
$\mathbb{U}(n)\overset{\text{def}}{=}\\{\boldsymbol{M}\in\mathbb{C}^{n\times
n}\mathrel{\mathop{\ordinarycolon}}\boldsymbol{M}\boldsymbol{M}^{\ast}=\boldsymbol{I}_{n}\\}$
its complex-valued counterpart. We will be mainly focusing on two real-valued
random matrix ensembles: $\text{Ginibre}({m,n})$ represents the ensemble of
$m\times n$ matrices with i.i.d. standard normal entries, and $\text{Haar}(n)$
represents the ensemble of random orthogonal matrices drawn from the Haar
measure on $\mathbb{O}(n)$. The generalizations to the complex-valued cases
and other closely related ensembles will be discussed in Section 3.4.
### 3.1 Preliminaries
The ensembles $\text{Ginibre}({m,n})$ and $\mathbb{O}(n)$ share an important
property: they are both _invariant_ with respect to multiplications by
orthogonal matrices. For example, for any $\boldsymbol{G}$ drawn from
$\text{Ginibre}({m,n})$, it is easy to verify that
$\boldsymbol{G}\sim\text{Ginibre}(m,n)\implies\boldsymbol{U}\boldsymbol{G}\boldsymbol{V}\sim\text{Ginibre}(m,n),$
(7)
where $\boldsymbol{U}\in\mathbb{O}(m),\boldsymbol{V}\in\mathbb{O}(n)$ are any
two deterministic or _random_ orthogonal matrices independent of
$\boldsymbol{G}$.
Translation-invariant properties similar to (7) are actually what defines the
Haar measure. We call a probability measure $\mu$ on $\mathbb{O}(n)$ a Haar
measure if
$\mu(\mathcal{A})=\mu(\boldsymbol{U}\circ\mathcal{A})=\mu(\mathcal{A}\circ\boldsymbol{U})$
(8)
for any measurable subset $\mathcal{A}\subset\mathbb{O}(n)$ and any fixed
$\boldsymbol{U}\in\mathbb{O}(n)$. Here,
$\boldsymbol{U}\circ\mathcal{A}\overset{\text{def}}{=}\left\\{\boldsymbol{U}\boldsymbol{V}\mathrel{\mathop{\ordinarycolon}}\boldsymbol{V}\in\mathcal{A}\right\\}$
and $\mathcal{A}\circ\boldsymbol{U}$ is defined similarly. The classical
Haar’s theorem [25, 26] shows that there is one, and only one, translation-
invariant probability measure in the sense of (8) on $\mathbb{O}(n)$. In fact,
the theorem holds in much greater generality. For example, it remains true for
any compact Lie group, which includes $\mathbb{O}(n)$ [and $\mathbb{U}(n)$] as
its special case.
An additional property of $\mathbb{O}(n)$, $\mathbb{U}(n)$ (and compact Lie
groups in general) is that left-invariance [the first equality in (8)] implies
right-invariance (the second equality), and vice versa. This then allows us to
have a simplified characterization of the Haar measure on $\mathbb{O}(n)$.
Specifically, to show that a random orthogonal matrix
$\boldsymbol{Q}\sim\text{Haar}(n)$, it is sufficient to verify that
$\boldsymbol{Q}\overset{d}{=}\boldsymbol{U}\boldsymbol{Q}$
for any fixed $\boldsymbol{U}\in\mathbb{O}(n)$, where $\overset{d}{=}$ means
that two random variables have the same distribution. We will use this
convenient characterization in Section 3.3, when we establish the statistical
equivalence between the proposed HD algorithm and the direct simulation of
(1).
Finally, we recall the construction of Householder reflectors [12, 13] from
numerical linear algebra, as they will play important roles in our subsequent
discussions. Given a vector $\boldsymbol{v}\in\mathbb{R}^{n}$, how can we
build an orthogonal matrix $\boldsymbol{H}$ such that
$\boldsymbol{H}\boldsymbol{v}=\mathinner{\\!\left\lVert\boldsymbol{v}\right\rVert}\boldsymbol{e}_{1}$?
This is exactly the problem addressed by Householder reflectors, defined here
as
$\boldsymbol{H}(\boldsymbol{v})\overset{\text{def}}{=}-\operatorname{sign}(v_{1})\Big{(}\boldsymbol{I}-2\,\frac{\boldsymbol{u}\boldsymbol{u}^{\mkern-1.5mu\mathsf{T}}}{\boldsymbol{u}^{\mkern-1.5mu\mathsf{T}}\boldsymbol{u}}\Big{)},$
(9)
where
$\boldsymbol{u}=\boldsymbol{v}+\operatorname{sign}(v_{1})\mathinner{\\!\left\lVert\boldsymbol{v}\right\rVert}\boldsymbol{e}_{1}$,
and $\operatorname{sign}(v_{1})=1$ if $v_{1}\geq 0$ and $-1$ otherwise. The
choice of the sign in (9) helps improve numerical stability (see [13, Lecture
10]).
By construction, $\boldsymbol{H}(\boldsymbol{v})$ is a symmetric matrix whose
eigenvalues are equal to $\pm 1$. It follows that
$\boldsymbol{H}(\boldsymbol{v})\in\mathbb{O}(n)$. Moreover, we can verify from
direct calculations that
$\boldsymbol{H}(\boldsymbol{v})\boldsymbol{e}_{1}=\boldsymbol{v}/\mathinner{\\!\left\lVert\boldsymbol{v}\right\rVert}\quad\text{and}\quad\boldsymbol{H}(\boldsymbol{v})\boldsymbol{v}=\mathinner{\\!\left\lVert\boldsymbol{v}\right\rVert}\boldsymbol{e}_{1}.$
(10)
Geometrically, $\boldsymbol{H}(\boldsymbol{v})$ represents a reflection across
the exterior (or interior) angle bisector of
$\boldsymbol{v}/\mathinner{\\!\left\lVert\boldsymbol{v}\right\rVert}$ and
$\boldsymbol{e}_{1}$. It is widely used in numerical linear algebra thanks to
its low memory/computational costs. The matrix
$\boldsymbol{H}(\boldsymbol{v})$ itself can be efficiently represented with
$\mathcal{O}(n)$ space, and matrix-vector multiplications involving
$\boldsymbol{H}(\boldsymbol{v})$ only require $\mathcal{O}(n)$ work.
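The $\mathcal{O}(n)$ application can be sketched as follows; `householder` is an illustrative helper name, not part of the paper's code, and the sketch assumes $\boldsymbol{v}\neq\boldsymbol{0}$.

```python
import numpy as np

def householder(v):
    """Return a function applying the reflector H(v) of Eq. (9) in O(n)
    time and O(n) memory, without forming the n x n matrix."""
    v = np.asarray(v, dtype=float)
    s = 1.0 if v[0] >= 0 else -1.0       # sign(v_1), as in Eq. (9)
    u = v.copy()
    u[0] += s * np.linalg.norm(v)        # u = v + sign(v_1) ||v|| e_1
    c = 2.0 / (u @ u)                    # assumes v is nonzero
    return lambda x: -s * (x - c * (u @ x) * u)
```

By Eq. (10), this map sends $\boldsymbol{v}$ to $\mathinner{\!\left\lVert\boldsymbol{v}\right\rVert}\boldsymbol{e}_{1}$ and $\boldsymbol{e}_{1}$ to $\boldsymbol{v}/\mathinner{\!\left\lVert\boldsymbol{v}\right\rVert}$.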
For any $\boldsymbol{p}\in\mathbb{R}^{n}$ and $1\leq k\leq n$, we define a
generalized Householder reflector as
$\boldsymbol{H}_{k}(\boldsymbol{p})\overset{\text{def}}{=}\begin{bmatrix}\boldsymbol{I}_{k-1}&\\\
&\boldsymbol{H}(\boldsymbol{p}[k\mathrel{\mathop{\ordinarycolon}}n])\end{bmatrix},$
(11)
where $\boldsymbol{H}(\cdot)$ is the reflector defined in (9), and
$\boldsymbol{p}[k\mathrel{\mathop{\ordinarycolon}}n]$ denotes a subvector
obtained by removing the first $k-1$ elements of $\boldsymbol{p}$. The
construction in (9) requires that the reflecting vector
$\boldsymbol{p}[k\mathrel{\mathop{\ordinarycolon}}n]$ be nonzero. In order for
(11) to be always well-defined, we set
$\boldsymbol{H}_{k}(\boldsymbol{p})=\boldsymbol{I}_{n}$ if
$\boldsymbol{p}[k\mathrel{\mathop{\ordinarycolon}}n]=\boldsymbol{0}$. Recall
the notation $\boldsymbol{Z}_{1\mathrel{\mathop{\ordinarycolon}}k}$ introduced
at the beginning of the section. It is easy to verify that
$\boldsymbol{Z}_{1\mathrel{\mathop{\ordinarycolon}}k}\boldsymbol{H}_{k}(\boldsymbol{p})\boldsymbol{p}=\boldsymbol{0},$
(12)
which means that the orthogonal transformation
$\boldsymbol{H}_{k}(\boldsymbol{p})$ can turn the last $n-k$ entries of
$\boldsymbol{p}$ to zero. We will use this property in the construction of the
HD algorithm.
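The generalized reflector and property (12) are easy to check numerically. The following dense toy version of $\boldsymbol{H}_{k}(\boldsymbol{p})$ is for illustration only (a practical implementation would store just the reflection vector); the function names are ours, and the inner reflector is any orthogonal matrix satisfying (10).

```python
import numpy as np

def householder_matrix(v):
    """Dense orthogonal H with H v = ||v|| e_1 and H e_1 = v/||v||,
    i.e. the properties in (10).  O(n^2) memory; for testing only."""
    n = len(v)
    u = v / np.linalg.norm(v) - np.eye(n)[0]
    uu = u @ u
    return np.eye(n) if uu == 0 else np.eye(n) - 2.0 * np.outer(u, u) / uu

def generalized_householder(p, k, n):
    """Dense H_k(p) of (11): identity on the first k-1 coordinates,
    a reflector on p[k:n] below (k is 1-based, as in the paper)."""
    H = np.eye(n)
    tail = p[k - 1:]
    if np.linalg.norm(tail) > 0:      # otherwise H_k(p) = I_n by convention
        H[k - 1:, k - 1:] = householder_matrix(tail)
    return H
```

Applying `generalized_householder(p, k, n)` to `p` leaves the first $k-1$ entries untouched and zeroes the last $n-k$, which is exactly the content of (12).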
### 3.2 Gaussian Random Matrices
We start by considering the case where the random matrix $\boldsymbol{Q}$ in the dynamics (1) has i.i.d. Gaussian entries, i.e., $\boldsymbol{Q}\sim\text{Ginibre}(m,n)$. In addition, we shall always assume that $\boldsymbol{Q}$ is independent of the initial condition $\{\boldsymbol{x}_{1},\boldsymbol{x}_{0},\ldots,\boldsymbol{x}_{1-d}\}$.
Suppose that the first step of (1) is in the form of
$\boldsymbol{x}_{2}=f_{1}(\boldsymbol{Q}\boldsymbol{x}_{1},\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{1-d})$,
i.e., $\boldsymbol{M}_{1}=\boldsymbol{Q}$. How do we simulate this step
without generating the entire Gaussian matrix $\boldsymbol{Q}$? This can be
achieved by a simple observation:
$\boldsymbol{Q}\overset{d}{=}\boldsymbol{g}_{1}\boldsymbol{e}_{1}^{\mathsf{T}}+\boldsymbol{G}_{1}\boldsymbol{Z}_{1}\overset{d}{=}(\boldsymbol{g}_{1}\boldsymbol{e}_{1}^{\mathsf{T}}+\boldsymbol{G}_{1}\boldsymbol{Z}_{1})\boldsymbol{R}_{1}\sim\text{Ginibre}(m,n),$
(13)
where
$\boldsymbol{Z}_{1}=\boldsymbol{I}-\boldsymbol{e}_{1}\boldsymbol{e}_{1}^{\mkern-1.5mu\mathsf{T}}$,
$\boldsymbol{R}_{1}\overset{\text{def}}{=}\boldsymbol{H}_{1}(\boldsymbol{x}_{1})$
is a (generalized) Householder reflector defined in (11),
$\boldsymbol{g}_{1}\sim\text{Ginibre}(m,1)$ is a Gaussian vector, and
$\boldsymbol{G}_{1}\sim\text{Ginibre}(m,n)$ is an independent Gaussian matrix.
Here and subsequently, whenever we generate new random vectors and matrices,
they are always independent of each other and of the $\sigma$-algebra
generated by all the other random variables constructed up to that point. For
example, $\boldsymbol{g}_{1}$ and $\boldsymbol{G}_{1}$ in (13) are understood
to be independent of the initial condition
$\left\\{\boldsymbol{x}_{1},\boldsymbol{x}_{0},\ldots,\boldsymbol{x}_{1-d}\right\\}$.
In (13), the first equality (in distribution) is obvious, and the second
equality is due to the translation invariance of the Ginibre ensemble. (Recall
(7) and the fact that $\boldsymbol{R}_{1}$ is an orthogonal matrix.)
The new representation
$\boldsymbol{Q}^{(1)}=(\boldsymbol{g}_{1}\boldsymbol{e}_{1}^{\mkern-1.5mu\mathsf{T}}+\boldsymbol{G}_{1}\boldsymbol{Z}_{1})\boldsymbol{R}_{1}$
(14)
looks like a rather convoluted way of writing an i.i.d. Gaussian matrix, but
it turns out to be the right choice for efficient simulations. To see this, we
use the property of the Householder reflector [see (10)] which gives us
$\boldsymbol{R}_{1}\boldsymbol{x}_{1}=\boldsymbol{H}_{1}(\boldsymbol{x}_{1})\boldsymbol{x}_{1}=\lVert\boldsymbol{x}_{1}\rVert\boldsymbol{e}_{1}$
and thus
$\boldsymbol{Z}_{1}\boldsymbol{R}_{1}\boldsymbol{x}_{1}=\boldsymbol{0}$. It
follows that
$\boldsymbol{Q}^{(1)}\boldsymbol{x}_{1}=\lVert\boldsymbol{x}_{1}\rVert\boldsymbol{g}_{1}.$
Thus, to simulate the first step of the dynamics, we only need to generate a
Gaussian vector $\boldsymbol{g}_{1}$. The more expensive Gaussian matrix
$\boldsymbol{G}_{1}$ does not need to be revealed (yet), as it is invisible to
$\boldsymbol{x}_{1}$.
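The algebra behind this one-step trick can be checked exactly at small dimensions: we build the representation (14) explicitly (including the matrix $\boldsymbol{G}_{1}$, which we generate here only to test the identity) and verify that it acts on $\boldsymbol{x}_{1}$ through $\boldsymbol{g}_{1}$ alone. The reflector construction is a standard one satisfying (10), not necessarily the exact formula (9).

```python
import numpy as np

def reflector(v):
    # Dense orthogonal H with H v = ||v|| e_1 and H e_1 = v/||v||, cf. (10).
    n = len(v)
    u = v / np.linalg.norm(v) - np.eye(n)[0]
    uu = u @ u
    return np.eye(n) if uu == 0 else np.eye(n) - 2.0 * np.outer(u, u) / uu

rng = np.random.default_rng(2)
m, n = 5, 4
x1 = rng.standard_normal(n)
g1 = rng.standard_normal(m)          # the only randomness that must be revealed
G1 = rng.standard_normal((m, n))     # generated here purely to test the algebra
e1 = np.eye(n)[0]
Z1 = np.eye(n) - np.outer(e1, e1)
R1 = reflector(x1)                   # R_1 = H_1(x_1)
Q1 = (np.outer(g1, e1) + G1 @ Z1) @ R1   # the representation (14)
assert np.allclose(Q1 @ x1, np.linalg.norm(x1) * g1)
```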
It is helpful to consider two more iterations to see how this idea can be
applied recursively. Suppose that the second iteration takes the form of
$\boldsymbol{x}_{3}=f_{2}(\boldsymbol{Q}\boldsymbol{x}_{2},\boldsymbol{x}_{2},\ldots,\boldsymbol{x}_{2-d})$.
In general, $\boldsymbol{x}_{2}$ will have a nonzero component in the space
orthogonal to $\boldsymbol{x}_{1}$, and thus the Gaussian matrix
$\boldsymbol{G}_{1}$ in (14) is no longer invisible to $\boldsymbol{x}_{2}$,
meaning that
$\boldsymbol{G}_{1}\boldsymbol{Z}_{1}\boldsymbol{R}_{1}\boldsymbol{x}_{2}\neq\boldsymbol{0}$.
However, we can use the trick in (13) again by writing
$\boldsymbol{G}_{1}\overset{d}{=}(\boldsymbol{g}_{2}\boldsymbol{e}_{2}^{\mkern-1.5mu\mathsf{T}}+\boldsymbol{G}_{2}\boldsymbol{Z}_{2})\boldsymbol{R}_{2}\sim\text{Ginibre}(m,n),$
(15)
where $\boldsymbol{g}_{2}\sim\text{Ginibre}(m,1)$,
$\boldsymbol{G}_{2}\sim\text{Ginibre}(m,n)$, and
$\boldsymbol{R}_{2}\overset{\text{def}}{=}\boldsymbol{H}_{2}(\boldsymbol{R}_{1}\boldsymbol{x}_{2})$
is again a generalized Householder reflector in (11). The subscript in
$\boldsymbol{H}_{2}$ should not be overlooked, as it signifies the precise way
the matrix is constructed. [Recall (11) for the notation convention we use.]
Observe that $\boldsymbol{R}_{2}$ commutes with $\boldsymbol{Z}_{1}$.
Substituting (15) into (14) then allows us to write
$\boldsymbol{Q}^{(2)}=\boldsymbol{u}_{1}\boldsymbol{v}_{1}^{\mathsf{T}}+\boldsymbol{u}_{2}\boldsymbol{v}_{2}^{\mathsf{T}}+\boldsymbol{G}_{2}\boldsymbol{Z}_{1:2}\boldsymbol{R}_{2}\boldsymbol{R}_{1}\sim\text{Ginibre}(m,n),$
(16)
where $\boldsymbol{u}_{1}\overset{\text{def}}{=}\boldsymbol{g}_{1}$,
$\boldsymbol{u}_{2}\overset{\text{def}}{=}\boldsymbol{g}_{2}$,
$\boldsymbol{v}_{1}\overset{\text{def}}{=}\boldsymbol{R}_{1}\boldsymbol{e}_{1}$,
and
$\boldsymbol{v}_{2}\overset{\text{def}}{=}\boldsymbol{R}_{1}\boldsymbol{R}_{2}\boldsymbol{e}_{2}$.
Just like what happens in (14), there is again no need to explicitly generate
the dense Gaussian matrix $\boldsymbol{G}_{2}$ in (16). To see this, we note
that
$\boldsymbol{Z}_{1:2}\boldsymbol{R}_{2}\boldsymbol{R}_{1}\boldsymbol{x}_{2}=\boldsymbol{Z}_{1:2}\boldsymbol{H}_{2}(\boldsymbol{R}_{1}\boldsymbol{x}_{2})\boldsymbol{R}_{1}\boldsymbol{x}_{2}=\boldsymbol{0}$,
where the second equality is due to (12). It follows that
$\boldsymbol{Q}^{(2)}\boldsymbol{x}_{2}=(\boldsymbol{v}_{1}^{\mkern-1.5mu\mathsf{T}}\boldsymbol{x}_{2})\boldsymbol{u}_{1}+(\boldsymbol{v}_{2}^{\mkern-1.5mu\mathsf{T}}\boldsymbol{x}_{2})\boldsymbol{u}_{2}.$
So far we have only been considering the case where we access $\boldsymbol{Q}$
from the right. For the third iteration, let us suppose that we access
$\boldsymbol{Q}$ from the left, i.e.,
$\boldsymbol{x}_{4}=f_{3}(\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}\boldsymbol{x}_{3},\boldsymbol{x}_{3},\ldots,\boldsymbol{x}_{3-d})$.
The idea is similar. Let
$\boldsymbol{G}_{2}=\boldsymbol{L}_{1}(\boldsymbol{e}_{1}\boldsymbol{g}_{3}^{\mkern-1.5mu\mathsf{T}}+\boldsymbol{Z}_{1}\boldsymbol{G}_{3})\sim\text{Ginibre}(m,n),$
(17)
where
$\boldsymbol{L}_{1}\overset{\text{def}}{=}\boldsymbol{H}_{1}(\boldsymbol{x}_{3})$,
$\boldsymbol{g}_{3}\sim\text{Ginibre}(n,1)$, and
$\boldsymbol{G}_{3}\sim\text{Ginibre}(m,n)$. Substituting (17) into (16) gives
us
$\boldsymbol{Q}^{(3)}=\textstyle\sum_{1\leq i\leq 3}\boldsymbol{u}_{i}\boldsymbol{v}_{i}^{\mathsf{T}}+\boldsymbol{L}_{1}\boldsymbol{Z}_{1}\boldsymbol{G}_{3}\boldsymbol{Z}_{1:2}\boldsymbol{R}_{2}\boldsymbol{R}_{1}\sim\text{Ginibre}(m,n),$
where $\boldsymbol{u}_{3}\overset{\text{def}}{=}\boldsymbol{L}_{1}\boldsymbol{e}_{1}$ and $\boldsymbol{v}_{3}\overset{\text{def}}{=}\boldsymbol{R}_{1}\boldsymbol{R}_{2}\boldsymbol{Z}_{1:2}\boldsymbol{g}_{3}$. Moreover, $[\boldsymbol{Q}^{(3)}]^{\mathsf{T}}\boldsymbol{x}_{3}=\sum_{i\leq 3}(\boldsymbol{u}_{i}^{\mathsf{T}}\boldsymbol{x}_{3})\boldsymbol{v}_{i}$.
The general idea should now be clear. Rather than fixing the entire Gaussian
matrix in advance, we let the random choices gradually unfold as the iteration
goes on, generating only the randomness that must be revealed at each step.
Continuing this process for $T$ steps, we reach the HD algorithm for the
Ginibre ensemble, summarized in Algorithm 1. Its memory and computational
costs can be determined as follows.
Algorithm 1 Simulating (1) on $\text{Ginibre}(m,n)$ using Householder Dice
1:The initial condition $\{\boldsymbol{x}_{1},\boldsymbol{x}_{0},\ldots,\boldsymbol{x}_{1-d}\}$, and the number of iterations $T\leq\min\{m,n\}$
2:Set $r=0$, $\ell=0$, $\boldsymbol{L}_{0}=\boldsymbol{I}_{m}$, and
$\boldsymbol{R}_{0}=\boldsymbol{I}_{n}$.
3:for $t=1,\ldots,T$ do
4: if $\boldsymbol{M}_{t}=\boldsymbol{Q}$ then
5: $r\leftarrow r+1$
6: Generate $\boldsymbol{g}_{t}\sim\text{Ginibre}(m,1)$
7: $\boldsymbol{R}_{r}=\boldsymbol{H}_{r}(\boldsymbol{R}_{r-1}\ldots\boldsymbol{R}_{1}\boldsymbol{R}_{0}\boldsymbol{x}_{t})$
8: $\boldsymbol{u}_{t}=\boldsymbol{L}_{0}\boldsymbol{L}_{1}\ldots\boldsymbol{L}_{\ell}\boldsymbol{Z}_{1:\ell}\boldsymbol{g}_{t}$
9: $\boldsymbol{v}_{t}=\boldsymbol{R}_{0}\boldsymbol{R}_{1}\ldots\boldsymbol{R}_{r}\boldsymbol{e}_{r}$
10: $\boldsymbol{y}_{t}=\sum_{i\leq t}(\boldsymbol{v}_{i}^{\mathsf{T}}\boldsymbol{x}_{t})\boldsymbol{u}_{i}$
11: else
12: $\ell\leftarrow\ell+1$
13: Generate $\boldsymbol{g}_{t}\sim\text{Ginibre}(n,1)$
14: $\boldsymbol{L}_{\ell}=\boldsymbol{H}_{\ell}(\boldsymbol{L}_{\ell-1}\ldots\boldsymbol{L}_{1}\boldsymbol{L}_{0}\boldsymbol{x}_{t})$
15: $\boldsymbol{u}_{t}=\boldsymbol{L}_{0}\boldsymbol{L}_{1}\ldots\boldsymbol{L}_{\ell}\boldsymbol{e}_{\ell}$
16: $\boldsymbol{v}_{t}=\boldsymbol{R}_{0}\boldsymbol{R}_{1}\ldots\boldsymbol{R}_{r}\boldsymbol{Z}_{1:r}\boldsymbol{g}_{t}$
17: $\boldsymbol{y}_{t}=\sum_{i\leq t}(\boldsymbol{u}_{i}^{\mathsf{T}}\boldsymbol{x}_{t})\boldsymbol{v}_{i}$
18: end if
19:
$\boldsymbol{x}_{t+1}=f_{t}(\boldsymbol{y}_{t},\boldsymbol{x}_{t},\boldsymbol{x}_{t-1},\ldots,\boldsymbol{x}_{t-d})$
20:end for
During its operation, Algorithm 1 keeps track of $2T$ vectors $\{\boldsymbol{u}_{t}\in\mathbb{R}^{m},\boldsymbol{v}_{t}\in\mathbb{R}^{n}\}_{t\leq T}$ and $T$ Householder reflectors
$\{\boldsymbol{L}_{i}\in\mathbb{R}^{m\times m}\}_{i\leq\ell_{T}}\text{ and }\{\boldsymbol{R}_{i}\in\mathbb{R}^{n\times n}\}_{i\leq r_{T}},$
where $\ell_{T}$ (resp. $r_{T}$) records the number of times we have used
$\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}$ (resp. $\boldsymbol{Q}$) in the $T$
iterations of the dynamics. Clearly, $r_{T}+\ell_{T}=T$. Thanks to the
structures of the Householder reflectors in (9), the total memory footprint of
Algorithm 1 is $\mathcal{O}((m+n)T)$. At each iteration, computations mainly
take place in lines 7–10 (or lines 14–17 if
$\boldsymbol{M}_{t}=\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}$). Since the
matrices used there are always products of Householder reflectors, these steps
require $\mathcal{O}((m+n)t)$ operations. As $t$ ranges from 1 to $T$, the
computational complexity of Algorithm 1 is thus $\mathcal{O}((m+n)T^{2})$.
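A minimal executable sketch of Algorithm 1 is given below, restricted for readability to square $\text{Ginibre}(n,n)$ and memory depth $d=0$; the interface and names are our own. For clarity it stores the reflectors as dense matrices, so it does not attain the $\mathcal{O}((m+n)T)$ memory bound; replacing the dense reflectors by implicit reflection vectors recovers it.

```python
import numpy as np

def _reflector(v):
    # Dense orthogonal H with H v = ||v|| e_1, per the properties in (10).
    n = len(v)
    u = v / np.linalg.norm(v) - np.eye(n)[0]
    uu = u @ u
    return np.eye(n) if uu == 0 else np.eye(n) - 2.0 * np.outer(u, u) / uu

def _H_k(p, k):
    # Generalized reflector H_k(p) of (11) (k is 1-based).
    n = len(p)
    H = np.eye(n)
    if np.linalg.norm(p[k - 1:]) > 0:
        H[k - 1:, k - 1:] = _reflector(p[k - 1:])
    return H

def _Z(x, k):
    # Z_{1:k} x: project out the first k coordinates.
    z = x.copy()
    z[:k] = 0.0
    return z

def householder_dice_ginibre(x1, sides, fs, rng):
    """Sketch of Algorithm 1 for Q ~ Ginibre(n, n), memory depth d = 0.
    sides[t] is 'Q' or 'QT' (the paper's M_t); fs[t] maps (y_t, x_t)
    to x_{t+1}.  Returns the trajectory and the reflectors."""
    n = len(x1)
    Ls, Rs, us, vs, xs = [], [], [], [], [x1]
    for t, side in enumerate(sides):
        x, g = xs[-1], rng.standard_normal(n)
        if side == 'Q':
            p = x
            for R in Rs:                      # p = R_{r-1} ... R_1 x_t
                p = R @ p
            Rs.append(_H_k(p, len(Rs) + 1))   # R_r = H_r(...)
            u = _Z(g, len(Ls))                # u_t = L_1 ... L_l Z_{1:l} g_t
            for L in reversed(Ls):
                u = L @ u
            v = np.eye(n)[len(Rs) - 1]        # v_t = R_1 ... R_r e_r
            for R in reversed(Rs):
                v = R @ v
            us.append(u)
            vs.append(v)
            y = sum((vi @ x) * ui for ui, vi in zip(us, vs))
        else:
            p = x
            for L in Ls:                      # p = L_{l-1} ... L_1 x_t
                p = L @ p
            Ls.append(_H_k(p, len(Ls) + 1))   # L_l = H_l(...)
            u = np.eye(n)[len(Ls) - 1]        # u_t = L_1 ... L_l e_l
            for L in reversed(Ls):
                u = L @ u
            v = _Z(g, len(Rs))                # v_t = R_1 ... R_r Z_{1:r} g_t
            for R in reversed(Rs):
                v = R @ v
            us.append(u)
            vs.append(v)
            y = sum((ui @ x) * vi for ui, vi in zip(us, vs))
        xs.append(fs[t](y, x))
    return xs, Ls, Rs
```

A useful correctness check is the "invisibility" invariant from (12): at each $\boldsymbol{Q}$-access, $\boldsymbol{Z}_{1:r}\boldsymbol{R}_{r}\ldots\boldsymbol{R}_{1}\boldsymbol{x}_{t}=\boldsymbol{0}$ holds exactly (up to rounding).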
###### Remark 1.
In line 7 and line 14, Algorithm 1 recursively constructs two products of
(generalized) Householder reflectors. Readers familiar with numerical linear
algebra will recognize that this process is essentially the Householder
algorithm for QR factorization [13, Lecture 10]. Special data structures have
been developed (see, e.g., [27]) to efficiently represent and operate on such
products of reflectors.
We can now exhibit the statistical equivalence of the HD algorithm and the
direct simulation approach.
###### Theorem 1.
Fix $T\leq\min\{m,n\}$, and let $\{\boldsymbol{x}_{t}:1-d\leq t\leq T+1\}$ be the sequence of vectors generated by Algorithm 1. Let $\{\widetilde{\boldsymbol{x}}_{t}:1-d\leq t\leq T+1\}$ be another sequence obtained by the direct approach to simulating (1), where we use the same initial condition (i.e. $\widetilde{\boldsymbol{x}}_{t}=\boldsymbol{x}_{t}$ for $1-d\leq t\leq 1$) but generate a full matrix $\boldsymbol{Q}\sim\text{Ginibre}(m,n)$ in advance. The joint probability distribution of $\{\boldsymbol{x}_{t}\}$ is identical to that of $\{\widetilde{\boldsymbol{x}}_{t}\}$.
###### Proof.
We start by describing the general structure of the algorithm. At the $t$-th
iteration, the algorithm keeps the following representation of the matrix
$\boldsymbol{Q}$:
$\boldsymbol{Q}^{(t)}=\textstyle\sum_{i\leq t}\boldsymbol{u}_{i}\boldsymbol{v}_{i}^{\mathsf{T}}+\underbrace{\boldsymbol{L}_{0}\boldsymbol{L}_{1}\ldots\boldsymbol{L}_{\ell_{t}}}_{\text{Householder}}\boldsymbol{Z}_{1:\ell_{t}}\boldsymbol{G}_{t}\boldsymbol{Z}_{1:r_{t}}\underbrace{\boldsymbol{R}_{r_{t}}\ldots\boldsymbol{R}_{1}\boldsymbol{R}_{0}}_{\text{Householder}},$
(18)
where $\boldsymbol{G}_{t}\sim\text{Ginibre}(m,n)$ is a Gaussian matrix
independent of the $\sigma$-algebra generated by all the other random
variables constructed up to this point, and $\ell_{t}$ (resp. $r_{t}$) denotes
the number of times we have used $\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}$
(resp. $\boldsymbol{Q}$) in the first $t$ iterations of the dynamics. To
lighten the notation, we will omit the subscripts in the remainder of the proof and simply write $\ell$ and $r$.
The vectors $\{\boldsymbol{u}_{i},\boldsymbol{v}_{i}\}$ and the Householder reflectors $\{\boldsymbol{L}_{i}\}$, $\{\boldsymbol{R}_{i}\}$ in (18) are constructed recursively, as
follows. We start with $\boldsymbol{L}_{0}=\boldsymbol{I}_{m}$ and
$\boldsymbol{R}_{0}=\boldsymbol{I}_{n}$. At the $t$-th iteration (for $1\leq
t\leq T$), if $\boldsymbol{M}_{t}=\boldsymbol{Q}$ (i.e. if we need to compute
$\boldsymbol{Q}\boldsymbol{x}_{t}$), we add a new Householder reflector
$\boldsymbol{R}_{r}=\boldsymbol{H}_{r}(\boldsymbol{R}_{r-1}\ldots\boldsymbol{R}_{1}\boldsymbol{R}_{0}\boldsymbol{x}_{t})$
(19)
and two new “basis” vectors
$\boldsymbol{u}_{t}=\boldsymbol{L}_{0}\boldsymbol{L}_{1}\ldots\boldsymbol{L}_{\ell}\boldsymbol{Z}_{1:\ell}\boldsymbol{g}_{t}\quad\text{and}\quad\boldsymbol{v}_{t}=\boldsymbol{R}_{0}\boldsymbol{R}_{1}\ldots\boldsymbol{R}_{r}\boldsymbol{e}_{r},$
where $\boldsymbol{g}_{t}\sim\text{Ginibre}(m,1)$. The procedure for the case
of $\boldsymbol{M}_{t}=\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}$ is completely
analogous: we add a new Householder reflector $\boldsymbol{L}_{\ell}$ (on the
left) and construct the basis vectors $\boldsymbol{u}_{t},\boldsymbol{v}_{t}$
accordingly.
It is important to note that the Gaussian matrix $\boldsymbol{G}_{t}$ in (18)
is never explicitly constructed in the algorithm. Assume without loss of
generality that $\boldsymbol{M}_{t}=\boldsymbol{Q}$. Let
$\boldsymbol{p}=\boldsymbol{R}_{r-1}\ldots\boldsymbol{R}_{1}\boldsymbol{R}_{0}\boldsymbol{x}_{t}$.
We then have
$\boldsymbol{Z}_{1:r}\boldsymbol{R}_{r}\ldots\boldsymbol{R}_{1}\boldsymbol{R}_{0}\boldsymbol{x}_{t}=\boldsymbol{Z}_{1:r}\boldsymbol{H}_{r}(\boldsymbol{p})\boldsymbol{p}=\boldsymbol{0},$
where the second equality is due to (12). Consequently, $\boldsymbol{G}_{t}$
remains invisible to $\boldsymbol{x}_{t}$, and
$\boldsymbol{Q}^{(t)}\boldsymbol{x}_{t}=\textstyle\sum_{i\leq
t}(\boldsymbol{v}_{i}^{\mkern-1.5mu\mathsf{T}}\boldsymbol{x}_{t})\boldsymbol{u}_{i}.$
To prove the assertion of the theorem, it suffices to show that, for all
$1\leq t\leq T$, $\boldsymbol{Q}^{(t)}$ has the correct distribution, namely
$\boldsymbol{Q}^{(t)}\sim\text{Ginibre}(m,n)$ and $\boldsymbol{Q}^{(t)}$ is
independent of the initial condition
$\left\\{\boldsymbol{x}_{1},\boldsymbol{x}_{0},\ldots,\boldsymbol{x}_{1-d}\right\\}$.
This is clearly true for $t=1$, based on our discussions around (14). Now
suppose that the condition on the distribution has been verified for
$\boldsymbol{Q}^{(t)}$ for some $t\geq 1$. To go to $t+1$, we rewrite the
Gaussian matrix $\boldsymbol{G}_{t}$ in (18) by using a decomposition similar
to (15). Specifically, if $\boldsymbol{M}_{t}=\boldsymbol{Q}$, we write
$\boldsymbol{G}_{t}\overset{d}{=}(\boldsymbol{g}_{t+1}\boldsymbol{e}_{r+1}^{\mkern-1.5mu\mathsf{T}}+\boldsymbol{G}_{t+1}\boldsymbol{Z}_{r+1})\boldsymbol{R}_{r+1}\sim\text{Ginibre}(m,n),$
(20)
where $\boldsymbol{g}_{t+1}\sim\text{Ginibre}(m,1)$,
$\boldsymbol{G}_{t+1}\sim\text{Ginibre}(m,n)$, and
$\boldsymbol{R}_{r+1}\overset{\text{def}}{=}\boldsymbol{H}_{r+1}(\boldsymbol{R}_{r}\ldots\boldsymbol{R}_{1}\boldsymbol{R}_{0}\boldsymbol{x}_{t+1})$.
(The decomposition for the case where
$\boldsymbol{M}_{t}=\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}$ is completely
analogous.)
That the new representation on the right-hand side of (20) has the same
distribution as $\boldsymbol{G}_{t}$ is due to the translation-invariant
property of the Ginibre ensemble [see (7)]. Substituting (20) into (18) allows
us to conclude that the matrix
$\textstyle\sum_{i\leq t}\boldsymbol{u}_{i}\boldsymbol{v}_{i}^{\mathsf{T}}+\boldsymbol{L}_{0}\ldots\boldsymbol{L}_{\ell}\boldsymbol{Z}_{1:\ell}(\boldsymbol{g}_{t+1}\boldsymbol{e}_{r+1}^{\mathsf{T}}+\boldsymbol{G}_{t+1}\boldsymbol{Z}_{r+1})\boldsymbol{R}_{r+1}\boldsymbol{Z}_{1:r}\boldsymbol{R}_{r}\ldots\boldsymbol{R}_{0}$
(21)
satisfies the required condition on its distribution. By construction,
$\boldsymbol{R}_{r+1}$ commutes with $\boldsymbol{Z}_{1:r}$. [Recall (11).] This
simple observation allows us to check that the matrix in (21) is exactly
$\boldsymbol{Q}^{(t+1)}$. By induction on $t$ from 1 to $T$, we then complete
the proof. ∎
### 3.3 Haar-Distributed Random Orthogonal Matrices
We now turn to the case where $\boldsymbol{Q}$ is a Haar-distributed random
orthogonal matrix. The construction of the HD algorithm relies on the
following factorization of the Haar measure on $\mathbb{O}(n)$.
###### Lemma 1.
Let $\boldsymbol{g}\sim\text{Ginibre}(n,1)$,
$\boldsymbol{Q}_{n-1}\sim\text{Haar}(n-1)$, and
$\boldsymbol{v}\in\mathbb{R}^{n}$, all of which are independent. Then
$\boldsymbol{H}_{1}(\boldsymbol{g})\begin{bmatrix}1&\\ &\boldsymbol{Q}_{n-1}\end{bmatrix}\boldsymbol{H}_{1}(\boldsymbol{v})\sim\text{Haar}(n).$
(22)
###### Proof.
Let $\boldsymbol{M}$ denote the left-hand side of (22). It is sufficient to
show that $\boldsymbol{M}\overset{d}{=}\boldsymbol{U}\boldsymbol{M}$ for any
fixed $\boldsymbol{U}\in\mathbb{O}(n)$. The statement of the lemma then
follows from the fact that the Haar measure is the unique (left) translation-
invariant measure on $\mathbb{O}(n)$.
For any nonzero vector $\boldsymbol{x}\in\mathbb{R}^{n}$, we denote by
$\boldsymbol{B}(\boldsymbol{x})\in\mathbb{R}^{n\times(n-1)}$ the submatrix
consisting of the last $n-1$ columns of $\boldsymbol{H}_{1}(\boldsymbol{x})$.
It is also useful to notice that the first column of $\boldsymbol{H}_{1}(\boldsymbol{x})$ is $\boldsymbol{x}/\lVert\boldsymbol{x}\rVert$. Thus, $\boldsymbol{H}_{1}(\boldsymbol{x})=\big[\frac{\boldsymbol{x}}{\lVert\boldsymbol{x}\rVert}\mid\boldsymbol{B}(\boldsymbol{x})\big]$.
The following observation is easy to verify. For any fixed
$\boldsymbol{U}\in\mathbb{O}(n)$, there exists some
$\boldsymbol{R}\in\mathbb{O}(n-1)$ such that
$\boldsymbol{U}\boldsymbol{B}(\boldsymbol{x})=\boldsymbol{B}(\boldsymbol{U}\boldsymbol{x})\boldsymbol{R}.$
Applying this to $\boldsymbol{B}(\boldsymbol{g})$ [in
$\boldsymbol{H}_{1}(\boldsymbol{g})$] then allows us to write
$\boldsymbol{U}\boldsymbol{M}=\boldsymbol{H}_{1}(\boldsymbol{U}\boldsymbol{g})\begin{bmatrix}1&\\ &\boldsymbol{R}\boldsymbol{Q}_{n-1}\end{bmatrix}\boldsymbol{H}_{1}(\boldsymbol{v}),$
where $\boldsymbol{R}$ is an orthogonal matrix independent of
$\boldsymbol{Q}_{n-1}$ and $\boldsymbol{v}$. Since the joint distribution of
$(\boldsymbol{U}\boldsymbol{g},\boldsymbol{R}\boldsymbol{Q}_{n-1},\boldsymbol{v})$
is equal to that of $(\boldsymbol{g},\boldsymbol{Q}_{n-1},\boldsymbol{v})$ in
(22), we must have $\boldsymbol{M}\overset{d}{=}\boldsymbol{U}\boldsymbol{M}$.
∎
The HD algorithm exploits the factorization in (22) to speed up the simulation
of Haar random matrices. Before presenting the algorithm in its full
generality, we first illustrate how it unfolds in the first two iterations of
(1). For simplicity, we assume that
$\boldsymbol{M}_{1}=\boldsymbol{M}_{2}=\boldsymbol{Q}$. For the first
iteration, we use (22) to write $\boldsymbol{Q}$ as
$\boldsymbol{Q}^{(1)}=\boldsymbol{L}_{1}\begin{bmatrix}1&\\ &\boldsymbol{Q}_{n-1}\end{bmatrix}\boldsymbol{R}_{1}\sim\text{Haar}(n),$ (23)
where $\boldsymbol{R}_{1}=\boldsymbol{H}_{1}(\boldsymbol{x}_{1})$,
$\boldsymbol{L}_{1}=\boldsymbol{H}_{1}(\boldsymbol{g}_{1})$,
$\boldsymbol{g}_{1}\sim\text{Ginibre}(n,1)$ and
$\boldsymbol{Q}_{n-1}\sim\text{Haar}(n-1)$. Using the property of Householder
reflectors given in (10), we have
$\boldsymbol{Q}^{(1)}\boldsymbol{x}_{1}=\lVert\boldsymbol{x}_{1}\rVert\boldsymbol{H}_{1}(\boldsymbol{g}_{1})\boldsymbol{e}_{1}=\frac{\lVert\boldsymbol{x}_{1}\rVert}{\lVert\boldsymbol{g}_{1}\rVert}\,\boldsymbol{g}_{1}.$
Notice that only a Gaussian vector $\boldsymbol{g}_{1}$ is needed here, and
that the matrix $\boldsymbol{Q}_{n-1}$ is invisible.
To simulate the second iteration, we apply the factorization (22) recursively
to write $\boldsymbol{Q}_{n-1}$ as
$\boldsymbol{Q}_{n-1}=\boldsymbol{H}_{1}(\boldsymbol{g}_{2}[2:n])\begin{bmatrix}1&\\ &\boldsymbol{Q}_{n-2}\end{bmatrix}\boldsymbol{H}_{1}(\boldsymbol{p}[2:n])\sim\text{Haar}(n-1),$
(24)
where $\boldsymbol{g}_{2}\sim\text{Ginibre}(n,1)$,
$\boldsymbol{Q}_{n-2}\sim\text{Haar}(n-2)$, and
$\boldsymbol{p}=\boldsymbol{R}_{1}\boldsymbol{x}_{2}$. Substituting (24) into
(23) then gives us
$\boldsymbol{Q}^{(2)}=\boldsymbol{L}_{1}\boldsymbol{L}_{2}\begin{bmatrix}\boldsymbol{I}_{2}&\\ &\boldsymbol{Q}_{n-2}\end{bmatrix}\boldsymbol{R}_{2}\boldsymbol{R}_{1},$ (25)
where $\boldsymbol{L}_{2}=\boldsymbol{H}_{2}(\boldsymbol{g}_{2})$ and
$\boldsymbol{R}_{2}=\boldsymbol{H}_{2}(\boldsymbol{p})$. By construction, the
vector $\boldsymbol{R}_{2}\boldsymbol{R}_{1}\boldsymbol{x}_{2}$ has nonzero
entries only in the first two coordinates. It follows that
$\boldsymbol{Q}^{(2)}\boldsymbol{x}_{2}=\boldsymbol{L}_{1}\boldsymbol{L}_{2}\boldsymbol{R}_{2}\boldsymbol{R}_{1}\boldsymbol{x}_{2},$
with $\boldsymbol{Q}_{n-2}$ in (25) remaining invisible.
Algorithm 2 Simulating (1) on $\text{Haar}(n)$ using Householder Dice
1:The initial condition $\{\boldsymbol{x}_{1},\boldsymbol{x}_{0},\ldots,\boldsymbol{x}_{1-d}\}$, and the number of iterations $T\leq n$
2:Set $\boldsymbol{L}_{0}=\boldsymbol{I}_{n}$ and $\boldsymbol{R}_{0}=\boldsymbol{I}_{n}$.
3:for $t=1,\ldots,T$ do
4: Generate $\boldsymbol{g}_{t}\sim\text{Ginibre}(n,1)$
5: if $\boldsymbol{M}_{t}=\boldsymbol{Q}$ then
6: $\boldsymbol{p}_{t}=\boldsymbol{R}_{t-1}\ldots\boldsymbol{R}_{1}\boldsymbol{R}_{0}\boldsymbol{x}_{t}$
7: $\boldsymbol{R}_{t}=\boldsymbol{H}_{t}(\boldsymbol{p}_{t})$
8: $\boldsymbol{L}_{t}=\boldsymbol{H}_{t}(\boldsymbol{g}_{t})$
9: $\boldsymbol{y}_{t}=\boldsymbol{L}_{1}\ldots\boldsymbol{L}_{t}\boldsymbol{R}_{t}\boldsymbol{p}_{t}$
10: else
11: $\boldsymbol{p}_{t}=\boldsymbol{L}_{t-1}\ldots\boldsymbol{L}_{1}\boldsymbol{L}_{0}\boldsymbol{x}_{t}$
12: $\boldsymbol{L}_{t}=\boldsymbol{H}_{t}(\boldsymbol{p}_{t})$
13: $\boldsymbol{R}_{t}=\boldsymbol{H}_{t}(\boldsymbol{g}_{t})$
14: $\boldsymbol{y}_{t}=\boldsymbol{R}_{1}\ldots\boldsymbol{R}_{t}\boldsymbol{L}_{t}\boldsymbol{p}_{t}$
15: end if
16:
$\boldsymbol{x}_{t+1}=f_{t}(\boldsymbol{y}_{t},\boldsymbol{x}_{t},\boldsymbol{x}_{t-1},\ldots,\boldsymbol{x}_{t-d})$
17:end for
Continuing this process, we see a simple pattern emerging. We summarize it in
Algorithm 2. In general, the algorithm recursively constructs two sequences of
Householder reflectors $\{\boldsymbol{L}_{t},\boldsymbol{R}_{t}\}_{t\leq T}$, starting from $\boldsymbol{L}_{0}=\boldsymbol{R}_{0}=\boldsymbol{I}_{n}$. At the $t$-th iteration, we first generate a new Gaussian vector $\boldsymbol{g}_{t}\sim\text{Ginibre}(n,1)$. If $\boldsymbol{M}_{t}=\boldsymbol{Q}$, we compute
$\boldsymbol{p}_{t}=\boldsymbol{R}_{t-1}\ldots\boldsymbol{R}_{1}\boldsymbol{R}_{0}\boldsymbol{x}_{t}$
(26)
and add two reflectors
$\boldsymbol{R}_{t}=\boldsymbol{H}_{t}(\boldsymbol{p}_{t})$ and
$\boldsymbol{L}_{t}=\boldsymbol{H}_{t}(\boldsymbol{g}_{t})$. The algorithm
then proceeds to the next iteration by letting
$\boldsymbol{x}_{t+1}=f_{t}(\boldsymbol{y}_{t},\boldsymbol{x}_{t},\ldots,\boldsymbol{x}_{t-d})$,
where
$\boldsymbol{y}_{t}=\boldsymbol{L}_{1}\ldots\boldsymbol{L}_{t}\boldsymbol{R}_{t}\boldsymbol{p}_{t}$.
The steps the algorithm takes if $\boldsymbol{M}_{t}=\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}$ are exactly symmetric,
with the roles of $\left\\{\boldsymbol{R}_{i}\right\\}$ and
$\left\\{\boldsymbol{L}_{i}\right\\}$ switched. The computational and memory
complexity of Algorithm 2 is similar to that of Algorithm 1. Specifically, the
Householder reflectors can be efficiently represented by the corresponding
reflection vectors, at a cost of $\mathcal{O}(nT)$ space. At each iteration,
the matrix-vector multiplications in lines 6, 9, 11 and 14 can all be
implemented in $\mathcal{O}(nt)$ operations (thanks to the Householder
structure). Therefore, the total computational complexity is
$\mathcal{O}(nT^{2})$.
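Algorithm 2 is short enough to sketch directly; as before, this is a dense illustrative version with our own interface (Haar(n), memory depth $d=0$), using any reflector construction satisfying (10) and (11), and it trades the stated memory bound for readability.

```python
import numpy as np

def _reflector(v):
    # Dense orthogonal H with H v = ||v|| e_1, per the properties in (10).
    n = len(v)
    u = v / np.linalg.norm(v) - np.eye(n)[0]
    uu = u @ u
    return np.eye(n) if uu == 0 else np.eye(n) - 2.0 * np.outer(u, u) / uu

def _H_k(p, k):
    # Generalized reflector H_k(p) of (11), 1-based k.
    n = len(p)
    H = np.eye(n)
    if np.linalg.norm(p[k - 1:]) > 0:
        H[k - 1:, k - 1:] = _reflector(p[k - 1:])
    return H

def householder_dice_haar(x1, sides, fs, rng):
    """Sketch of Algorithm 2 for Q ~ Haar(n), memory depth d = 0.
    sides[t] is 'Q' or 'QT' (the paper's M_t); fs[t] maps (y_t, x_t)
    to x_{t+1}."""
    n = len(x1)
    Ls, Rs, xs = [], [], [x1]
    for t, side in enumerate(sides, start=1):
        x, g = xs[-1], rng.standard_normal(n)
        access, other = (Rs, Ls) if side == 'Q' else (Ls, Rs)
        p = x
        for M in access:             # p_t (lines 6 / 11)
            p = M @ p
        access.append(_H_k(p, t))    # reflector on the access side (lines 7 / 12)
        other.append(_H_k(g, t))     # fresh random reflector (lines 8 / 13)
        y = access[-1] @ p           # y_t (lines 9 / 14)
        for M in reversed(other):
            y = M @ y
        xs.append(fs[t - 1](y, x))
    return xs
```

Since every $\boldsymbol{y}_{t}$ is an orthogonal transformation of $\boldsymbol{x}_{t}$, the simulation must preserve norms step by step, which gives a cheap exact consistency check.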
Finally, we establish the statistical “correctness” of Algorithm 2 in the
following theorem.
###### Theorem 2.
Fix $T\leq n$, and let $\{\boldsymbol{x}_{t}:1-d\leq t\leq T+1\}$ be the sequence of vectors generated by Algorithm 2. Let $\{\widetilde{\boldsymbol{x}}_{t}:1-d\leq t\leq T+1\}$ be another sequence obtained by the direct approach to simulating (1), where we use the same initial condition (i.e. $\widetilde{\boldsymbol{x}}_{t}=\boldsymbol{x}_{t}$ for $1-d\leq t\leq 1$) but generate a random orthogonal matrix $\boldsymbol{Q}\sim\text{Haar}(n)$ in advance. The joint probability distribution of $\{\boldsymbol{x}_{t}\}$ is identical to that of $\{\widetilde{\boldsymbol{x}}_{t}\}$.
###### Proof.
The proof is very similar to that of Theorem 1. At the $t$-th iteration, the
algorithm has constructed a representation of the random orthogonal matrix
$\boldsymbol{Q}$ as
$\boldsymbol{Q}^{(t)}=\boldsymbol{L}_{1}\boldsymbol{L}_{2}\ldots\boldsymbol{L}_{t}\begin{bmatrix}\boldsymbol{I}_{t}&\\ &\boldsymbol{Q}_{n-t}\end{bmatrix}\boldsymbol{R}_{t}\ldots\boldsymbol{R}_{2}\boldsymbol{R}_{1},$
(27)
where $\left\\{\boldsymbol{L}_{i},\boldsymbol{R}_{i}\right\\}_{i\leq t}$ is a
collection of Householder reflectors, and
$\boldsymbol{Q}_{n-t}\sim\text{Haar}(n-t)$ is an $(n-t)\times(n-t)$ random
orthogonal matrix independent of the $\sigma$-algebra generated by all the
other random variables constructed up to this point. We shall have established
the theorem if we prove the following two claims for $1\leq t\leq T$: (a)
$\boldsymbol{Q}^{(t)}\sim\text{Haar}(n)$ and $\boldsymbol{Q}^{(t)}$ is
independent of the initial condition
$\{\boldsymbol{x}_{s}\}_{1-d\leq s\leq 1}$; (b) If
$\boldsymbol{M}_{t}=\boldsymbol{Q}$ in (1), then
$\boldsymbol{Q}^{(t)}\boldsymbol{x}_{t}=\boldsymbol{L}_{1}\ldots\boldsymbol{L}_{t}\boldsymbol{R}_{t}\boldsymbol{p}_{t},$
(28)
where $\boldsymbol{p}_{t}$ is as defined in (26). If
$\boldsymbol{M}_{t}=\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}$, then
$[\boldsymbol{Q}^{(t)}]^{\mkern-1.5mu\mathsf{T}}\boldsymbol{x}_{t}=\boldsymbol{R}_{1}\boldsymbol{R}_{2}\ldots\boldsymbol{R}_{t}\boldsymbol{L}_{t}\ldots\boldsymbol{L}_{2}\boldsymbol{L}_{1}\boldsymbol{x}_{t}$.
Claim (a) can be proved by induction. We have already established its
correctness for $t=1$. [See (23).] To propagate the claim from iteration $t$
to $t+1$, we simply apply Lemma 1 to rewrite $\boldsymbol{Q}_{n-t}$ in (27) as
$\boldsymbol{Q}_{n-t}\overset{d}{=}\boldsymbol{H}_{1}(\boldsymbol{g}_{t+1}[t+1:n])\begin{bmatrix}1&\\ &\boldsymbol{Q}_{n-t-1}\end{bmatrix}\boldsymbol{H}_{1}(\boldsymbol{p}_{t+1}[t+1:n])\sim\text{Haar}(n-t),$
where $\boldsymbol{g}_{t+1}\sim\text{Ginibre}(n,1)$,
$\boldsymbol{Q}_{n-t-1}\sim\text{Haar}(n-t-1)$, and
$\boldsymbol{p}_{t+1}=\boldsymbol{R}_{t}\ldots\boldsymbol{R}_{2}\boldsymbol{R}_{1}\boldsymbol{x}_{t+1}$.
(This is for the case of $\boldsymbol{M}_{t+1}=\boldsymbol{Q}$, but the
treatment for the case of
$\boldsymbol{M}_{t+1}=\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}$ is completely
analogous.) Substituting this equivalent representation into (27) gives us
$\boldsymbol{Q}^{(t+1)}$.
To establish Claim (b), we again assume without loss of generality that
$\boldsymbol{M}_{t}=\boldsymbol{Q}$. By the definition of $\boldsymbol{p}_{t}$
in (26) and that of $\boldsymbol{R}_{t}$, we have
$\boldsymbol{Q}^{(t)}\boldsymbol{x}_{t}=\boldsymbol{L}_{1}\boldsymbol{L}_{2}\ldots\boldsymbol{L}_{t}\begin{bmatrix}\boldsymbol{I}_{t}&\\ &\boldsymbol{Q}_{n-t}\end{bmatrix}\boldsymbol{H}_{t}(\boldsymbol{p}_{t})\boldsymbol{p}_{t}.$
Using (12), we can then verify the expression given in (28). ∎
### 3.4 Other Random Matrix Ensembles
The Gaussian and Haar ensembles studied above can serve as building blocks for
simulating other related random matrix ensembles. For example, consider the
classical Gaussian orthogonal ensemble (GOE). A symmetric $n\times n$ matrix $\boldsymbol{G}$ is drawn from $\text{GOE}(n)$ if $\{G_{ij}\}_{1\leq i\leq j\leq n}$ are independent random variables, with $G_{ii}\sim\mathcal{N}(0,2)$ and $G_{ij}\sim\mathcal{N}(0,1)$ for $i<j$. Clearly,
$\boldsymbol{Q}\sim\text{Ginibre}(n,n)\implies\frac{1}{\sqrt{2}}(\boldsymbol{Q}+\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}})\sim\text{GOE}(n).$
It follows that a single matrix-vector multiplication involving
$\boldsymbol{G}\sim\text{GOE}(n)$ can be simulated via two matrix-vector
multiplications involving a nonsymmetric Gaussian matrix, i.e.,
$\boldsymbol{y}=\boldsymbol{G}\boldsymbol{x}\implies\widehat{\boldsymbol{y}}=\boldsymbol{Q}\boldsymbol{x}\text{
and
}\boldsymbol{y}=(\boldsymbol{Q}^{\mkern-1.5mu\mathsf{T}}\boldsymbol{x}+\widehat{\boldsymbol{y}})/\sqrt{2}.$
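This reduction is a pure algebraic identity and can be verified directly. The check below uses an explicitly generated $\boldsymbol{Q}$ only to confirm the identity; in the HD setting, the two matrix-vector products $\boldsymbol{Q}\boldsymbol{x}$ and $\boldsymbol{Q}^{\mathsf{T}}\boldsymbol{x}$ would each be served by Algorithm 1 instead.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
Q = rng.standard_normal((n, n))     # Ginibre(n, n)
G = (Q + Q.T) / np.sqrt(2)          # distributed as GOE(n)
x = rng.standard_normal(n)

# one GOE matvec via two Ginibre matvecs, as in the displayed reduction
y_hat = Q @ x
y = (Q.T @ x + y_hat) / np.sqrt(2)

assert np.allclose(y, G @ x)
assert np.allclose(G, G.T)          # symmetry of the GOE sample
```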
As a second example, we consider random matrices in the form of
$\boldsymbol{Q}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V},$ (29)
where $\boldsymbol{U}\sim\text{Haar}(m)$ and
$\boldsymbol{V}\sim\text{Haar}(n)$ are two independent random orthogonal
matrices, and $\boldsymbol{\Sigma}\in\mathbb{R}^{m\times n}$ is a rectangular
diagonal matrix independent of $\boldsymbol{U},\boldsymbol{V}$. Matrices like
these often appear in the study of free probability theory [28]. They are also
used as a convenient model for matrices whose singular vectors are _generic_
[6, 7, 8]. Strictly speaking, Theorem 2 only applies to the case where the
dynamics operates on a single random orthogonal matrix. However, it is obvious
from the proof that the idea applies to more general dynamics involving a
finite number of independent random orthogonal matrices. Thus, Algorithm 2 can
be easily adapted to handle the matrix ensemble given in (29).
Finally, we note that the constructions of the HD algorithm can be generalized
to the complex-valued cases, with the random matrices drawn from the complex
Ginibre ensemble, the Haar ensemble on the unitary group $\mathbb{U}(n)$, and
the Gaussian unitary ensemble, respectively. We avoid repetitions, as most
changes in such generalizations are straightforward (such as replacing
$\boldsymbol{M}^{\mkern-1.5mu\mathsf{T}}$ by $\boldsymbol{M}^{\ast}$). In what
follows, we only present the formula for a complex version of the Householder
reflector, as it might be less well-known.
Let $\boldsymbol{v}\in\mathbb{C}^{n}$ be a nonzero vector. Write $v_{1}/\lVert\boldsymbol{v}\rVert=re^{i\theta}$, where $r$ is a nonnegative real number. (When $v_{1}=0$, we have $r=0$ and set $\theta=0$.) We define a unitary reflector [29, pp. 48–49] as
$\boldsymbol{H}(\boldsymbol{v})=(-e^{-i\theta})\Big[\boldsymbol{I}_{n}-\frac{(\boldsymbol{v}/\lVert\boldsymbol{v}\rVert+e^{i\theta}\boldsymbol{e}_{1})(\boldsymbol{v}/\lVert\boldsymbol{v}\rVert+e^{i\theta}\boldsymbol{e}_{1})^{\ast}}{1+r}\Big].$
(30)
It is easy to check that $\boldsymbol{H}(\boldsymbol{v})$ is a unitary matrix
such that
$\boldsymbol{H}(\boldsymbol{v})\boldsymbol{v}=\mathinner{\\!\left\lVert\boldsymbol{v}\right\rVert}\boldsymbol{e}_{1}$
and
$\boldsymbol{H}^{\ast}(\boldsymbol{v})\boldsymbol{e}_{1}=\boldsymbol{v}/\mathinner{\\!\left\lVert\boldsymbol{v}\right\rVert}$.
Moreover, if $\boldsymbol{v}$ is real, then (30) reduces to the Householder
reflector given in (9).
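The defining properties of the reflector (30) can be checked numerically. A minimal sketch (ours, using NumPy; not part of the paper's code):

```python
import numpy as np

def unitary_reflector(v):
    """Unitary (complex Householder) reflector H(v) from (30), satisfying
    H(v) v = ||v|| e1 and H(v)* e1 = v / ||v||."""
    v = np.asarray(v, dtype=complex)
    nrm = np.linalg.norm(v)
    u = v / nrm
    # Write u[0] = r * exp(i*theta), with r >= 0 (theta = 0 when u[0] = 0).
    r = abs(u[0])
    phase = u[0] / r if r > 0 else 1.0       # exp(i*theta)
    e1 = np.zeros(len(v), dtype=complex)
    e1[0] = 1.0
    w = u + phase * e1
    # H = (-exp(-i*theta)) [ I - w w* / (1 + r) ]
    return -np.conj(phase) * (np.eye(len(v)) - np.outer(w, w.conj()) / (1 + r))

rng = np.random.default_rng(1)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
H = unitary_reflector(v)
print(np.allclose(H @ v, np.linalg.norm(v) * np.eye(4)[:, 0]))  # True
print(np.allclose(H.conj().T @ H, np.eye(4)))                   # True
```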
## 4 Conclusion
We proposed a new algorithm called Householder Dice for simulating dynamics on
several dense random matrix ensembles with translation-invariant properties.
Rather than fixing the entire random matrix in advance, the new algorithm is
matrix-free, generating only the randomness that must be revealed at any given
step of the dynamics. The name of the algorithm highlights the central role
played by an adaptive and recursive construction of (random) Householder
reflectors. These orthogonal transformations exploit the group symmetry of the
matrix ensembles, while simultaneously maintaining the statistical
correlations induced by the dynamics. Numerical results demonstrate the
promise of the HD algorithm as a new computational tool in the study of high-
dimensional random systems.
## References
* [1] G. W. Stewart, “A Krylov–Schur algorithm for large eigenproblems,” SIAM J. Matrix Anal. Appl., vol. 23, no. 3, pp. 601–614, 2002.
* [2] J. Baik, G. B. Arous, and S. Péché, “Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices,” Ann. Probab., vol. 33, pp. 1643–1697, Sept. 2005.
* [3] F. Benaych-Georges and R. R. Nadakuditi, “The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices,” Adv. Math., vol. 227, pp. 494–521, May 2011.
* [4] E. Bolthausen, “An iterative construction of solutions of the TAP equations for the Sherrington–Kirkpatrick model,” Commun. Math. Phys., no. 325, pp. 333–366, 2014.
* [5] M. Bayati and A. Montanari, “The dynamics of message passing on dense graphs, with applications to compressed sensing,” IEEE Trans. Inf. Theory, vol. 57, pp. 764 –785, Feb. 2011.
* [6] M. Opper, B. Cakmak, and O. Winther, “A theory of solving TAP equations for Ising models with general invariant random matrices,” J. Phys. A, vol. 49, no. 11, p. 114002, 2016.
* [7] S. Rangan, P. Schniter, and A. K. Fletcher, “Vector approximate message passing,” IEEE Trans. Inf. Theory, vol. 65, pp. 6664–6684, Oct 2019.
* [8] Z. Fan, “Approximate message passing algorithms for rotationally invariant matrices,” arXiv:2008.11892, 2020.
* [9] E. J. Candes, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: Theory and algorithms,” IEEE Trans. Inf. Theory, vol. 61, no. 4, pp. 1985–2007, 2015.
* [10] S. Goldt, M. S. Advani, A. M. Saxe, F. Krzakala, and L. Zdeborová, “Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup,” in Advances in Neural Information Processing Systems 32, 2019.
* [11] M. Mitzenmacher and E. Upfal, Probability and computing: Randomized algorithms and probabilistic analysis. Cambridge University Press, 2005.
* [12] A. S. Householder, “Unitary triangularization of a nonsymmetric matrix,” J. ACM, vol. 5, no. 4, pp. 339–342, 1958.
* [13] L. N. Trefethen and D. Bau III., Numerical Linear Algebra. Philadelphia, PA: SIAM, 1997.
* [14] G. W. Stewart, “The efficient generation of random orthogonal matrices with an application to condition numbers,” SIAM J. Numer. Anal., vol. 17, no. 3, pp. 403–425, 1980.
* [15] F. Mezzadri, “How to generate random matrices from the classical compact groups,” Notices of the AMS, vol. 54, pp. 592–604, May 2007.
* [16] J. W. Silverstein, “The smallest eigenvalue of a large dimensional Wishart matrix,” Ann. Probab., vol. 13, no. 4, pp. 1364–1368, 1985.
* [17] A. Edelman, Eigenvalues and Condition Numbers of Random Matrices. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, May 1989.
* [18] J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah, “Julia: A fresh approach to numerical computing,” SIAM Rev., vol. 59, no. 1, pp. 65–98, 2017.
* [19] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci., vol. 2, no. 1, pp. 183–202, 2009.
* [20] K.-C. Li, “On principal hessian directions for data visualization and dimension reduction: Another application of Stein’s lemma,” J. Am. Stat. Assoc, vol. 87, no. 420, pp. 1025–1039, 1992.
* [21] P. Netrapalli, P. Jain, and S. Sanghavi, “Phase retrieval using alternating minimization,” in Advances in Neural Information Processing Systems, pp. 2796–2804, 2013.
* [22] Y. M. Lu and G. Li, “Phase transitions of spectral initialization for high-dimensional nonconvex estimation,” Information and Inference, vol. 9, pp. 507–541, September 2020.
* [23] M. Mondelli and A. Montanari, “Fundamental limits of weak recovery with applications to phase retrieval,” in Proceedings of Machine Learning Research, vol. 75, 2018.
* [24] R. Dudeja, M. Bakhshizadeh, J. Ma, and A. Maleki, “Analysis of spectral methods for phase retrieval with random orthogonal matrices,” IEEE Trans. Inf. Theory, vol. 66, pp. 5182–5203, Aug 2020.
* [25] A. Haar, “Der massbegriff in der theorie der kontinuierlichen gruppen,” Ann. Math., vol. 34, pp. 147–169, January 1933.
* [26] E. S. Meckes, The Random Matrix Theory of the Classical Compact Groups. Cambridge, UK: Cambridge University Press, 2019.
* [27] R. Schreiber and C. Van Loan, “A storage-efficient WY representation for products of Householder transformations,” SIAM J. Sci. Stat. Comput., vol. 10, January 1989.
* [28] J. A. Mingo and R. Speicher, Free Probability and Random Matrices. New York, NY: Springer Science & Business Media, 2017.
* [29] J. H. Wilkinson, The Algebraic Eigenvalue Problem. Oxford, UK: Clarendon Press, Apr. 1988.
# Role of interface morphology on the martensitic transformation in pure Fe
Pawan Kumar Tripathi (Department of Materials Science and Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India; International College of Semiconductor Technology, National Chiao Tung University, Taiwan), Shivraj Karewar (Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands), Yu-Chieh Lo (Department of Material Science and Engineering, National Chiao Tung University, Taiwan), Somnath Bhowmick<EMAIL_ADDRESS>(Department of Materials Science and Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India)
###### Abstract
Using classical molecular dynamics simulations, we study the austenite to
ferrite phase transformation in iron, focusing on the role of interface
morphology. We compare two different morphologies: a flat interface, in which
the two phases are joined according to the Nishiyama-Wasserman orientation
relationship, and a ledged one, having steps similar to a vicinal surface. We
identify the atomic displacements along a misfit dislocation network at the
interface leading to the phase transformation. In the case of the ledged
interface, stacking faults are nucleated at the steps; these hinder the
interface motion, leading to a lower mobility of the inter-phase boundary than
that of the flat interface. Interestingly, we also find that the temperature
dependence of the interface mobility shows opposite trends for the flat and
ledged boundaries. We believe that our study presents a unified and
comprehensive view of the martensitic transformation in iron with different
interface morphologies, which is lacking at present, as flat and ledged
interfaces are treated separately in the existing literature.
## I Introduction
The high-temperature austenite ($\gamma$-phase, face centered cubic or FCC) to
low-temperature ferrite ($\alpha$-phase, body centered cubic or BCC) phase
transformation in iron is crucial, as it governs the microstructure, and
subsequently various material properties, of different types of
steels.Moritani _et al._ (2002); Caballero and Bhadeshia (2004); Raabe _et
al._ (2013); Wang _et al._ (2014a); Toji _et al._ (2015) As iron is quenched,
the transformation from the high-temperature to the low-temperature phase is
known to take place via a diffusionless process, which can be further
subdivided into two categories: martensitic and massive. In the case of
martensitic transformation, a definite orientation relationship (OR) between
the parent and product phases is required to facilitate the coordinated
movement of atoms (at the velocity of sound Bunshah and Mehl (1953)), giving
rise to the name military transformation. Massive transformation, on the other
hand, is civilian in nature, as the atoms move individually and there is no
restriction on the OR between the parent and product phases. During the
transformation, the motion of the interface depends on temperature and cooling
rate; it is also affected by various factors such as interface morphology,
grain boundaries, alloying elements, and precipitates.Moritani _et al._
(2002); Caballero and Bhadeshia (2004); Raabe _et al._ (2013); Wang _et al._
(2014a); Toji _et al._ (2015)
During the phase transition, certain transformation paths are followed,
depending on the ORs between the parent and product phases. In 1930, Kurdjumov
and Sachs (KS) Kurdjumow and Sachs (1930) identified an OR in mild steel using
X-ray diffraction as
$\\{111\\}_{fcc}||\\{011\\}_{bcc};[\bar{1}01]_{fcc}||[\bar{1}\bar{1}1]_{bcc}$,
and later, Nishiyama and Wasserman (NW) Nishiyama (1934) found a slightly
distinct OR,
$\\{111\\}_{fcc}||\\{110\\}_{bcc};[\bar{1}\bar{1}2]_{fcc}||[0\bar{1}1]_{bcc}$,
in Fe-30$\%$Ni alloys. Based on the crystal symmetry of the two phases, 24 and
12 orientational variants are possible in the KSKurdjumow and Sachs (1930) and
NWNishiyama (1934) ORs, respectively. Several other ORs, such as BainBain and
Dunkirk (1924), PitschPitsch (1962), and Greninger and TroianoGreninger and
Troiano (1949), have also been discussed in the literature.
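The relative rotation between the KS and NW ORs can be checked numerically: both relations share the parallel close-packed planes, and the in-plane directions differ by 30° on the fcc side and by about 35.26° on the bcc side, leaving a relative rotation of roughly 5.26° about the common plane normal. A short sketch (our own illustration, using NumPy):

```python
import numpy as np

def angle_deg(a, b):
    """Angle between two crystallographic directions, in degrees."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# In-plane directions of the KS and NW ORs (within the parallel
# {111}_fcc || {110}_bcc planes):
ks_fcc, ks_bcc = [-1, 0, 1], [-1, -1, 1]   # KS: [-101]_fcc || [-1-11]_bcc
nw_fcc, nw_bcc = [-1, -1, 2], [0, -1, 1]   # NW: [-1-12]_fcc || [0-11]_bcc

fcc_angle = angle_deg(ks_fcc, nw_fcc)   # 30.00 deg on the fcc side
bcc_angle = angle_deg(ks_bcc, nw_bcc)   # 35.26 deg on the bcc side
print(round(fcc_angle, 2), round(bcc_angle, 2), round(bcc_angle - fcc_angle, 2))
```

The last printed value is the familiar 5.26° separating the two ORs, which also appears below as the rotation range in the MD literature.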
Our work is concerned with the austenite to ferrite phase transition in iron,
where we consider the inter-phase interface to be formed according to the NW
OR or some derivative of it. For this particular OR, the inter-phase boundary
is classified as a semi-coherent interface, having an array of dislocations
(termed misfit dislocations), which help to partially reduce the misfit strain
originating from the lattice mismatch across the interface. It is now widely
accepted that experimentally observed ORs do not correspond exactly to NW (or
KS). The deviations can appear in the form of ledges or disconnections, which
exhibit both step and dislocation character, and higher-index habit planes are
observed at the inter-phase boundary.Shiflet and Merwe (1994); Shiflet and van
der Merwe (1994)
Mainly because of the importance of steel as a structural material, the
austenite to ferrite phase transition in iron is a widely researched topic.
Various aspects of the $\alpha-\gamma$ transformation have been investigated
in pure ironLee _et
al._ (2013); Bos _et al._ (2006); Tateyama _et al._ (2008); Sandoval _et
al._ (2009); Sandoval and Urbassek (2009a); Song and Hoyt (2012, 2013, 2018,
2015); Tripathi _et al._ (2018); Wang and Urbassek (2013a); Sinclair and
Hoagland (2008), iron alloys Wang _et al._ (2014b, c), thin filmsWang and
Urbassek (2013b), single crystals Karewar _et al._ (2018, 2019), bicrystals
Karewar _et al._ (2020), and nanowiresSandoval and Urbassek (2009b, c).
Several studies, based on molecular dynamics (MD) simulations, cover kinetics
and atomic mechanisms during the transformation, for flatBos _et al._ (2006);
Wang and Urbassek (2013a); Sandoval and Urbassek (2009c), as well as ledged
interfaces Song and Hoyt (2012, 2013); Tripathi _et al._ (2018). Bos et al.
Bos _et al._ (2006) performed MD simulations for several possible ORs and
concluded that martensitic transformation takes place with the aid of glissile
screw dislocation networks nucleated at the interface during the incubation
time. Tateyama et al.Tateyama _et al._ (2008) found planar and needle-like
growth in the case of the NW and KS ORs, respectively. They also reported a
decrease in interface velocity when the parent and product phases were rotated
in the range of $0-5.26^{\circ}$ from the NW to the KS OR. Wang and Urbassek
Wang and Urbassek
(2013a) conducted MD simulations to uncover the pressure and temperature
dependence of the FCC-BCC transformation using Meyer-Entel potential and
identified the phonon softening, leading to the phase transition. Song and
Hoyt Song and Hoyt (2013) investigated an inter-phase boundary obeying NW OR,
along with structural disconnections (sessile in nature) and reported the
nucleation and movement of a secondary set of glissile disconnections at the
terrace plane, leading to the FCC-BCC transformation. Ou et al. Ou _et al._
(2016) investigated three semicoherent interfaces joined by NW, KS, and Nagano
ORs and found martensitic transformations in the coherent areas of the
interface (having low potential energy), while some diffusional jumps were
also observed in the incoherent areas (having high potential energy). Karewar
et al. Karewar _et al._ (2018, 2019) conducted an MD simulation of martensitic
transformation in single-crystal Fe and Fe-C systems with different planar
defects such as twin boundaries and stacking faults (SFs). They observed
several well-known transformation mechanisms (such as NW, KS, Burgers path
Burgers (1934), and BB/OC model Bogers and Burgers (1964); Olson and Cohen
(1972)) depending on the type of the defect structure present in the
simulation system. Tripathi et al. Tripathi _et al._ (2018) reported that the
ledges or disconnections at the interface are the preferential nucleation
sites for the BCC phase and that the interface movement is guided by the
growth of the BCC phase along the ledge vector. Maresca et al. Maresca and
Curtin (2017)
also studied the motion of the ledged FCC/BCC interface. A recent MD study by
Karewar et al. Karewar _et al._ (2020) discussed the role of misfit
dislocations on the martensitic transformation in case of a flat interface.
The authors observed that the screw dislocations within the interface plane
govern the atomic displacements, leading to the phase transformation.
The phenomenological theory of martensitic transformation Bhadeshia and Wayman
(2014) provides the orientation relationship between the parent and product
phases but does not elaborate on the atomistic displacements during the
transformation. Similarly, most of the MD studies mentioned in the previous
paragraphBos _et al._ (2006); Tateyama _et al._ (2008); Sandoval _et al._
(2009); Sandoval and Urbassek (2009a); Song and Hoyt (2012, 2013, 2018, 2015);
Tripathi _et al._ (2018); Wang and Urbassek (2013a); Sinclair and Hoagland
(2008); Wang _et al._ (2014b, c); Wang and Urbassek (2013b); Sandoval and
Urbassek (2009b, c); Wang and Urbassek (2013a); Karewar _et al._ (2020);
Maresca and Curtin (2017); Sandoval and Urbassek (2009b); Ou _et al._ (2016)
do not explicitly discuss the movements of individual atoms or groups of atoms
during the transformation. Further, all these studies focus on the
transformation in the presence of either flat or ledged interfaces, lacking a
clear qualitative or quantitative comparison among different interface
morphologies. Moreover, there exist conflicting reports on the mobility of the
interface, with some studies claiming the flat interface to be immobile,Song
and Hoyt (2012, 2013, 2018); Tripathi _et al._ (2018) contradicting others.Bos
_et al._ (2006); Tateyama _et al._ (2008); Ou _et al._ (2016) In order to
bridge this gap, one needs to compare the transformation in flat vs. ledged
interfaces using identical simulation parameters (boundary conditions,
interatomic potential, thermostat, barostat, etc.). In this work, we attempt
to resolve
this issue by studying atomic mechanisms during the austenite to ferrite phase
transformation in iron using classical MD simulations. We capture several on-
the-fly pictures of atomic displacements during the martensitic
transformation. By doing this, we are able to provide a comprehensive
description of the martensitic transformation in iron; specifically, the
effect of morphology, by comparing flat vs. ledged interfaces, which has not
been done so far (to the best of our knowledge). We identify two distinct
types of atomic motions leading to the phase transformation: one set of
movements (mainly along the misfit dislocation network present at the
interface) is confined to the interface planes, while the other also has an
out-of-plane component (the latter observed exclusively in the case of the
ledged interface). Interestingly, the temperature dependence of the interface
mobility is found to show opposite trends for the flat and ledged interfaces.
## II Simulation Details
Atomic trajectories are obtained using classical MD simulations, as
implemented in Large-scale Atomic/Molecular Massively Parallel Simulator
(LAMMPS).lam (1995) An embedded atom method (EAM) based potential, developed
by Ackland et al.Ackland _et al._ (1997), is used to describe the inter-
atomic interactions among the iron atoms. Several materials characteristics,
like lattice parameter, cohesive energy, vacancy formation energy, and elastic
constants, predicted by this potential are known to be in good agreement with
the experimental data, as well as density functional theory (DFT) based
predictions.Tripathi _et al._ (2018) However, this empirical potential
overestimates the melting point of FeSun _et al._ (2004) and also fails to
correctly predict the austenite-ferrite transition temperature of 1185 K.
Despite these limitations, the Ackland potential has been used in several MD
studies of iron, including the FCC-to-BCC phase transformation.Song and Hoyt
(2012, 2013); Abe _et al._ (2016); Terentyev _et al._ (2016)
Figure 1: (a) Sandwich structure of the BCC-FCC-BCC simulation system used in this study. (b) Close-up view near the flat interface. (c) Close-up view at the ledged interface showing the atomic steps or disconnections. The atoms are color-coded as per a-CNA; blue: body centered cubic (BCC), green: face centered cubic (FCC), red: hexagonal close packed (HCP), and grey: unidentified atoms, not belonging to any of the three. The same atomic color coding is followed in the rest of the figures.

Table 1: Details of crystallographic orientations parallel to the $x$, $y$, and $z$ directions of the simulation boxes used in this work.

Phase | Direction | Orientation | Dimensions (Å) | Tilt angle | No. of atoms
---|---|---|---|---|---
BCC (Flat) | x y z | $[110]$ $[\bar{1}10]$ $[001]$ | 20.39 260.97 77.85 | - | 69120
BCC (Ledged) | x y z | $[110]$ $[\bar{1}10]$ $[001]$ | 20.39 240.58 77.85 | - | 63720
FCC (Flat) | x y z | $[111]$ $[11\bar{2}]$ $[\bar{1}10]$ | 261.17 261.71 78.15 | 0∘ | 351480
FCC (Ledged) | x y z | $[776]$ $[33\bar{7}]$ $[\bar{1}10]$ | 213.24 241.25 78.15 | 4.04∘ | 321600
A sandwich structure of the BCC-FCC-BCC sequence is created, such that the
$x$-axis is normal to the phase boundary, which is aligned parallel to the
$yz$ plane [Figure 1(a)]. The crystallographic orientation of $x$, $y$, and
$z$ direction of the BCC and FCC regions, and their dimensions (at T=600 K)
are listed in Table 1. After creating the two regions separately, they are
joined together to get the final simulation box of BCC-FCC-BCC sandwich shape
[Fig. 1(a)]. The spacing between the BCC and FCC phases is taken as the
average of the inter-planar spacings of the two phases. The $y$ and $z$
dimensions of the BCC and FCC phases are chosen very carefully, such that the
mismatch of the dimensions at the $yz$ cross-section is minimal (less than
0.5%). A large mismatch leads to high stresses at the phase boundary, which
may affect the transformation behavior. The sandwich shaped simulation box is
periodic in each direction, without any free surfaces.
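As a quick sanity check (our own, using the flat-interface box dimensions listed in Table 1), the $yz$ cross-section mismatch between the FCC and BCC blocks can be verified directly:

```python
# Cross-section mismatch between the FCC and BCC blocks, using the box
# dimensions from Table 1 (flat interface). Values in Angstrom.
bcc_y, bcc_z = 260.97, 77.85
fcc_y, fcc_z = 261.71, 78.15

mismatch_y = abs(fcc_y - bcc_y) / fcc_y * 100   # percent
mismatch_z = abs(fcc_z - bcc_z) / fcc_z * 100
print(f"y mismatch: {mismatch_y:.2f}%  z mismatch: {mismatch_z:.2f}%")
# Both values fall below the 0.5% threshold quoted in the text.
```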
Two types of $\gamma-\alpha$ interfaces, namely flat [Figure 1 (b)] and ledged
[Figure 1 (c)], are created. The flat interface is created using ideal NW OR,
such that
$\\{111\\}_{fcc}||\\{110\\}_{bcc};[\bar{1}10]_{fcc}||[001]_{bcc}$.
In case of the ledged interface, the FCC phase is rotated about the
$z$-direction (parallel to the $[1\bar{1}0]$ direction of FCC) by an angle of
$4.04^{\circ}$, with respect to the ideal NW OR [Table 1]. As a result of
this, $x$ and $y$ axis becomes parallel to the crystallographic directions
$[776]$ and $[33\bar{7}]$, respectively, in the FCC phase. Because of the
rotation, the ledged interface contains steps of one-atom height on the FCC
side of the interface [Fig. 1(c)], similar to a vicinal surface. A detailed
analysis of the phase boundary is given in our previous work Tripathi _et
al._ (2018), while the important features relevant for the current work are
shown in Fig. S1 in the Supporting Information. According to adaptive common
neighbor analysis (a-CNA)Stukowski (2012), most of the atoms present at the
phase boundary belong to neither the parent FCC nor the product BCC phase, but
are marked as unidentified, which happens because of the lattice parameter
mismatch between the two phases. After creating the sandwich shaped simulation
box, the BCC and FCC phases are individually equilibrated, first using the NVT
ensemble for 100 ps to bring all the atoms into thermal equilibrium.
Subsequently, the NPxT ensemble is used for volume equilibration for another
100 ps, keeping the $y$ and $z$ dimensions fixed. This reduces the stresses in
the simulation system to less than $\pm$20 MPa. After equilibrating the BCC
and FCC phases independently, the phase transformation dynamics are executed
using the NPT ensemble for 5-30 ns (depending on interface type and
temperature). During the phase transformation, each dimension is allowed to
equilibrate independently to reduce the transformation stresses.
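The quoted $4.04^{\circ}$ rotation is consistent with the crystallographic directions themselves: both ledged axes remain perpendicular to the $[1\bar{1}0]$ rotation axis and are tilted from the ideal NW axes by the same small angle. A short check (our own, using NumPy):

```python
import numpy as np

def angle_deg(a, b):
    """Angle between two crystallographic directions, in degrees."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

axis = np.array([1, -1, 0])             # rotation axis, [1-10]_fcc
x_flat, x_ledged = [1, 1, 1], [7, 7, 6]     # x axes: flat vs. ledged FCC
y_flat, y_ledged = [1, 1, -2], [3, 3, -7]   # y axes: flat vs. ledged FCC

# Both ledged axes stay perpendicular to the rotation axis...
print(np.dot(axis, x_ledged), np.dot(axis, y_ledged))        # 0 0
# ...and are tilted from the ideal NW axes by the same small angle.
print(round(angle_deg(x_flat, x_ledged), 2),
      round(angle_deg(y_flat, y_ledged), 2))                 # 4.04 4.04
```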
Figure 2: The misfit dislocation structure at the flat interface analyzed by
NTA at 100 K. The image is taken after the entire simulation system is
equilibrated and before the transformation starts. (a) Misfit dislocation line
directions (red arrows), (b) Burgers vectors (black arrows), and their
enlarged views are shown in (c,d). Incoherent regions of the interface are
formed along the dashed lines (making a diamond-like pattern) and
particularly within the octagon. Atoms in these regions have high potential
energy, as shown in Figure S2 of the Supporting Information.
Four different values of temperature (600 K, 800 K, 1000 K, and 1200 K) are
chosen to study the transformation kinetics in the flat and ledged interface.
The thermal vibrations of the atoms at high temperatures make it difficult to
identify the atomic displacements responsible for FCC-to-BCC transformation.
Therefore, atomic positions averaged over 1 ps are used for the purpose of
visualization and data analysis. The atomic displacements are further scaled
by a factor of 1.75 for better visibility. Detailed analysis of the interface
(in terms of misfit dislocations) is carried out at an even lower temperature
(100 K) than that used for the transformation dynamics (600 to 1200 K). We use
Nye tensor
analysis in the assigned mode (NTA),Hartley and Mishin (2005); Nye (1953) as
implemented in AADIS (atomistic analyzer for dislocation character and
distribution) code, Yao and Zhang (2020) to study the misfit dislocation
structure at the interfaces. The analysis helps us to characterize the
dislocation line directions and Burgers vectors of the misfit dislocations.
Figure 3: The misfit dislocation structure at the ledged interface analyzed by
NTA. (a) Dislocation line directions, (b) Burgers vector directions, and a
close-up view (c-d) in YZ plane. (e-f) A further close-up view (in XY plane
and within the rectangular area marked in (c-d)) of one of the ledges; also
showing an out-of-plane component of the Burgers vector $b_{5}$. Such
out-of-plane components are not observed in the case of the flat interface.
## III Results
### III.1 Structure of the interfaces
Before studying the transformation, we first analyze the atomic structure of
the interface using NTA, which reveals the presence of misfit dislocations at
the BCC-FCC interface. In case of the flat interface, the dislocation line
directions (red line) and the Burgers vectors (black line) are separately
shown in Figures 2(a) and (b), respectively. Interestingly, the dislocation
network forms a diamond-shaped pattern, where the dislocations are lined along
the edges of the diamond, while the area inside forms the coherent region of
the interface. For a better visibility, a close-up view of the interface is
shown in Figures 2(c-d). As seen in this figure, the dislocation line
directions and the Burgers vectors are parallel to each other, which indicates
that the misfit dislocations present at the interface are screw dislocations,
with Burgers vectors along $b=<111>_{bcc}$. The dislocation line directions
and Burgers vectors are parallel to the interface plane ($yz$ plane), with no
out-of-plane component. Such a screw dislocation
network at the FCC/BCC interface has previously been postulated by Howe et
al.Howe _et al._ (2009) Further analysis reveals that the flat interface
consists of areas with low and high potential energy, as shown in Figure S2 in
the Supporting Information. Comparing Figure 2 with Figure S2, it can be seen
that low potential energy areas lie within the diamond, showing a coherent
FCC-BCC interface in these regions. On the other hand, areas with misfit
dislocations (along the edges of the diamond and in particular, the octagon
marked in Figure 2) have higher potential energy, resulting from incoherent
stacking of the FCC and BCC phase.
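The screw character inferred above follows directly from the angle between the dislocation line direction and its Burgers vector. A minimal sketch of this classification (ours; the angular thresholds are illustrative choices, not from the paper):

```python
import numpy as np

def dislocation_character(line_dir, burgers):
    """Classify a dislocation by the angle between its line direction
    and Burgers vector: ~0 deg -> screw, ~90 deg -> edge, else mixed."""
    t = np.asarray(line_dir, float)
    b = np.asarray(burgers, float)
    c = abs(np.dot(t, b)) / (np.linalg.norm(t) * np.linalg.norm(b))
    ang = np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))
    if ang < 10:
        return "screw"
    if ang > 80:
        return "edge"
    return "mixed"

# A misfit dislocation with line direction and Burgers vector both along
# <111>_bcc, as found at the flat interface, is pure screw:
print(dislocation_character([1, 1, 1], [0.5, 0.5, 0.5]))  # screw
print(dislocation_character([1, 1, 1], [1, -1, 0]))       # edge
```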
Figure 3(a-b) shows the misfit dislocation network structure at the ledged
interface. Similar to the case of the flat inter-phase boundary, a diamond-
shaped network of screw dislocations is also formed at the ledged interface.
However, due to the presence of the ledges, the pattern is not as regular as
that observed for the flat interface. Figure 3(c-f) presents an enlarged view
of the NTA at one of the diamond regions for better visibility, clearly
showing that the dislocation line directions along the edges of the diamond
are parallel to the interface ($yz$ plane), having no out-of-plane component.
However, an important distinction (compared to the flat boundary) arises near
the ledges, where an additional set of dislocations appears, with Burgers
vectors also having an out-of-plane component. As shown in
Figure 3(d,f), unlike the dislocation lines, the Burgers vector
$b_{5}=<110>_{fcc}$ on the $\\{111\\}$ plane has a noticeable $x$-component. The
SFs are nucleated from this region when the transformation starts (to be
discussed later in subsection III.3). Atoms located near the ledges are found
to have the highest potential energy, as shown in Figure S3, Supporting
Information.
Figure 4: The FCC-to-BCC transformation at 600 K for the simulation system
with the flat interface. Atoms with FCC crystal structure are not shown for
better visibility. (a) snapshot after equilibration at 0 ns; various stages
during the transformation at (b) 0.4 ns, (c) 0.85 ns; and after the
transformation is over at (d) 1.2 ns. Figure 5: Atomic displacements observed
during the FCC-to-BCC transformation in the presence of a flat interface at
600 K. (a-b) Snapshots at 0 ps, just after equilibration and before the
transformation starts. Snapshots at 10 ps after the transformation starts,
illustrated for the (c) $XY$ plane and (d-f) $YZ$ plane, showing displacements
causing the phase transformation to be mainly localized in the $YZ$ plane. The
red dashed-diamonds are guide to the eye for the original positions of the
misfit dislocation networks.
### III.2 Transformation mechanisms: Flat interface
Figure 4 shows the overview of the transformation for the flat interface at
600 K. The structure at 0 ns (just after equilibration) is shown in figure
4(a), which is the starting configuration for the transformation simulations.
Two intermediate states at 0.4 and 0.85 ns are shown in figures 4(b-c) and the
transformation is completed at 1.2 ns [Figure 4(d)]. From the figures, it is
clear that the FCC-to-BCC transformation happens only at the interface;
nucleation of the BCC phase inside the bulk FCC phase is not observed in any
of our simulations.
Since the transformation happens at the phase boundary, we further investigate
the atomic displacements (leading to the FCC-to-BCC transformation) near the
interface. Figure 5(a) shows the configuration just after equilibration (0
ns), and a zoomed-in view of the area marked by the rectangle is shown in
subfigure (b). Atomic displacements at one of the FCC monolayers (at the
interface) are captured during the transformation at 10 ps [Figure 5(c)]. As
shown in the figure, all the atomic displacements are parallel to the $YZ$
plane (blue arrows showing the atomic movements vertically upwards or
downwards), having no out-of-plane component.
Having established that the atomic displacements responsible for the phase
transition are confined to the interface planes, we now analyze atomic motions
in these planes. The zoomed-in views of three monolayers (originally FCC
$\\{111\\}$ planes) adjacent to the interface are illustrated in Figures
5(d-f) in $YZ$ projection (parallel to the interface). Coordinated movements
of several groups of atoms (confined to the interface planes) can be observed
in all three monolayers. Moreover, the atomic displacements are found to take
place mainly in the regions where the misfit dislocation networks were located
in the beginning. The same type of atomic displacements observed in the
diamond region [Figure 5(d-f)] is also found over the entire monolayer at the
interface, as well as in a few adjacent layers in the FCC region. The atomic
displacements at the interface plane create an atomic drag that propels the
atoms on the neighboring atomic layers to move in a similar fashion. This
interface governed transformation propagates through the entire FCC phase
layer-by-layer until the transition is complete, as shown in the Supplementary
Information Figure S4.
In the case of a flat interface, the transformation is aided by the screw
dislocations at the interface. These screw dislocations (having high potential energy, PE) move
on the $\\{111\\}<110>_{fcc}$ or $\\{110\\}<111>_{bcc}$ slip system, as shown
by red arrows and red ellipses in Figure 5(d-f), thereby reducing the PE and
facilitating the phase transformation. This mechanism is the same as the
Kurdjumov-Sachs (KS) FCC-to-BCC transformation mechanism Kurdjumow and Sachs
(1930). The corresponding FCC-to-BCC orientation relationship during the
transformation can be written as,
$(111)_{fcc}||(110)_{bcc},[10\bar{1}]_{fcc}||[11\bar{1}]_{bcc}$
$(111)_{fcc}||(110)_{bcc},[\bar{1}0\bar{1}]_{fcc}||[\bar{1}1\bar{1}]_{bcc}$
$(111)_{fcc}||(110)_{bcc},[101]_{fcc}||[111]_{bcc}$
$(111)_{fcc}||(110)_{bcc},[\bar{1}01]_{fcc}||[\bar{1}11]_{bcc}$.
On the other hand, within the red circles, atomic movements as per the
Burgers-Bogers-Olson-Cohen (BB/OC) method Bogers and Burgers (1964); Olson and
Cohen (1972) are observed [Figure 5(d-f)]. As per the BB/OC method, the BCC
phase nucleates as a result of the two shears, $T/3$ and $3T/8$, where
$T=a/6<112>$ is the Burgers vector of the Shockley partial for the FCC twin
shear. During the transformation, regions of HCP stacking are observed, as
shown in the Supplementary Information Figures S2 and S4; this is the
metastable structure after the first atomic shear $T/3$. The atoms in this
metastable state move further according to the second shear ($3T/8$),
completing the transformation.
schematic of the BB/OC mechanism and similar transformation pathways are
observed in the present simulations. The KS and BB/OC mechanisms were also
reported in previous MD studies of phase transformations in single crystal and
bicrystal iron Karewar _et al._ (2020); Ou _et al._ (2016); Karewar _et
al._ (2018, 2019).
Figure 6: The FCC-to-BCC transformation at 600 K for the ledged interface.
Atoms with FCC crystal structure are not shown for better visibility. (a)
snapshot after equilibration at 0 ns; various stages during the transformation
at (b) 5 ns, (c) 23 ns; and after the transformation is over at (d) 26 ns.
Figure 7: Atomic displacements observed during the FCC-to-BCC transformation
in the presence of the ledged interface at 600 K. A few layers close to the
interface are shown at (a) 0 ns, before the transformation starts and (b) 1.5
ns, in $XY$ projection. The close-up views of the highlighted monolayers are
shown in (c-e) $YZ$ and (f-h) $XZ$ projection.
### III.3 Transformation mechanisms: Ledged interface
Figure 6(a-d) shows the evolution of the martensitic transformation in the
presence of the ledged interface at 600 K. The equilibrated structure of the
simulation box at 0 ns is shown in figure 6(a). Two intermediate states are
shown at 5 ns and 23 ns [Figures 6(b-c)] and the transformation is completed
at 26 ns [Figure 6(d)]. In addition to the transformation mechanisms found at
the flat interface, an additional feature is observed in the case of the
ledged inter-phase boundary. During the transformation, stacking faults (SFs)
are nucleated from the atomic ledges [Figure 6(b-c)]. The nucleation of SFs at
the atomic ledges has been observed previously in pure Fe Song and Hoyt (2012,
2013); Tripathi _et al._ (2018). These SFs play an important role in the
FCC-to-BCC phase transformation, which will be described in detail in the
following discussion.
The atomic displacements inducing the FCC-to-BCC transformation are shown in
Figure 7. The close-up view of the interface at 0 ns (just before the
transformation starts), which includes a few layers from both the FCC and BCC
phases, is
shown in Figure 7(a). One can clearly see the atoms with HCP stacking near the
ledges, where SFs are nucleated. Figure 7(b) shows the close-up view of the
atomic displacements near the interface on a monolayer, in the same $XY$
projection plane as in sub-figure (a). The interface in this figure is
perpendicular to the $x$ direction; and thus, it can be concluded that, in the
presence of the ledged inter-phase boundary, out-of-the interface plane atomic
displacements (marked by the blue rectangle) also take place, in addition to
the in-plane atomic displacements (similar to the case of the flat interface
planes), leading to the phase transformation.
To analyze the in-plane and out-of-the interface plane atomic displacements in
ledged interfaces, we investigate the two highlighted monolayers (within the
dashed rectangles), as shown in Figure 7(a). For these monolayers, the atomic
displacements (in a plane perpendicular to the arrows) are shown in sub-
figures (c-e) and (f-h). Sub-figures 7(c-e) show the atomic displacements on
$\{111\}_{fcc}$ plane. The atomic displacements in this monolayer lie within
the interface plane and there is no out-of-plane component. Similar to the
flat interface [Figure 5(d-f)], the atomic displacements on this monolayer
[Figure 7(c-e)] follow two mechanisms: (i) the KS mechanism, shown in the red
ellipses, and (ii) the BB/OC mechanism, shown within the red circles.
Sub-figures 7(f-h) show the monolayers parallel to the stacking fault
nucleated from the ledges. The atomic displacements on this layer follow the
HCP-to-BCC Burgers path on the $\{1\bar{1}00\}<11\bar{2}0>_{hcp}$ slip
system shown within the black ellipses. Figure S6 [Supporting Information]
further shows the similarity between the atomic displacements as per the
Burgers path and the displacements observed in the present simulations. This
type of out-of-the interface plane displacement is mainly observed on the
atomic layers at and near the SFs. Therefore, the SFs nucleated from the
ledges affect the martensitic transformation by generating an additional out-
of-the interface plane atomic displacement component.
## IV Discussion
Table 2: Gibbs free energy change (Eq. 1 in the Supporting Information) and interface velocities (calculated using Eq. 2 in the Supporting Information) at different orientations and temperatures. Each value is an average over eight independent simulations starting with different initial velocities, for each temperature and orientation.

Temperature (K) | $\Delta(G)_{\gamma-\alpha}$ (kJ/mole) | Flat interface velocity (m/s) | Ledged interface velocity (m/s)
---|---|---|---
600 | 2.20 | 13.46$\pm$0.95 | 0.21$\pm$0.05
800 | 2.06 | 10.33$\pm$0.48 | 0.78$\pm$0.04
1000 | 1.91 | 7.66$\pm$0.31 | 1.37$\pm$0.06
1200 | 1.76 | 5.69$\pm$1.23 | 2.06$\pm$0.32
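The opposite temperature trends of the two interface types can be read directly off the table; a small sanity check on the transcribed mean values (uncertainties omitted):

```python
# Data transcribed from Table 2 (means only).
T       = [600, 800, 1000, 1200]        # temperature, K
dG      = [2.20, 2.06, 1.91, 1.76]      # Gibbs free energy change, kJ/mole
v_flat  = [13.46, 10.33, 7.66, 5.69]    # flat interface velocity, m/s
v_ledge = [0.21, 0.78, 1.37, 2.06]      # ledged interface velocity, m/s

# The driving force dG grows as T drops, and the flat-interface velocity
# tracks it; the ledged-interface velocity instead grows with T.
assert all(a > b for a, b in zip(dG, dG[1:]))            # dG decreases with T
assert all(a > b for a, b in zip(v_flat, v_flat[1:]))    # flat: faster when colder
assert all(a < b for a, b in zip(v_ledge, v_ledge[1:]))  # ledged: faster when hotter
```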
As seen in the previous section, in the case of the flat interface, FCC-to-BCC
transformation takes place by only in-plane (of the interface) atomic shear
component. On the other hand, an additional set of out-of-the interface plane
displacements are observed in the presence of the ledged interface. The two
types of displacements in presence of the ledged interface (in-plane and out-
of interface plane) compete with each other during the transformation. The
out-of-plane atomic displacements caused by the SFs create a barrier to the
in-plane atomic movements (due to the screw dislocations) and thereby slow
down the transformation rate in systems having ledged interfaces, compared to
the ones with flat interfaces.
Figure 8: Potential energy profile of atoms in (a) flat NW, and (b) rotated NW
i.e. ledged interface as a function of time. Lines become horizontal once the
transformations are over. Figure 9: The fraction of atoms with HCP stacking
during transformation for the flat and ledged interfaces at 600 K. The inset
shows an enlarged view near the origin of the graph.
In Figure 8, it is also observed that for the systems with flat interfaces,
the transformation rate increases with a decrease in temperature. On the
contrary, for systems with the ledged interface, the transformation slows down
with decreasing temperature. Such contrasting behavior in the presence of the
two different types of interfaces needs thorough analysis and is beyond the
scope of the present work. Our preliminary analysis suggests that for the flat
interfaces, the transformation is mainly driven by the reduction of the Gibbs
free energy $\Delta(G)_{\gamma-\alpha}$. Using an approach similar to that used
previously by Tripathi et al. Tripathi _et al._ (2018), thermodynamic
integration is carried out at different temperatures to obtain the values of
$\Delta(G)_{\gamma-\alpha}$, as shown in Table 2. It
can be seen that the value of $\Delta(G)_{\gamma-\alpha}$ increases with
decreasing temperature. The increase in the driving force at lower
temperatures suggests faster kinetics in the case of systems with flat
interfaces.
The transformation in the systems with the ledged interface is mainly
controlled by the defect mobility i.e. SF movement in the simulation system.
While the calculation of the SF mobility at different temperatures is beyond
the scope of this paper, our results suggest that the mobility of the SFs
affects the velocity of the interface. A lower velocity of the SFs results in
a lower velocity of the interface, and thereby slower transformation kinetics,
and vice versa. The interface velocities, calculated using the approach
reported previously in the literature Song and Hoyt (2012); Tripathi _et al._
(2018), are given in Table 2. The velocity of the ledged interface
increases with an increase in temperature from 600 K to 1200 K, which is
possibly due to the higher SF mobility at a higher temperature.
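A common way to extract such interface velocities (as in the cited works) is a least-squares fit of interface position against time. A minimal sketch with synthetic data (the positions below are illustrative, not simulation output):

```python
import numpy as np

def interface_velocity(t, x):
    """Least-squares slope of interface position x(t); returns the velocity."""
    slope, _intercept = np.polyfit(t, x, 1)
    return slope

# Synthetic example: an interface advancing at 13.5 m/s with small noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0e-9, 50)                  # time, s
x = 13.5 * t + rng.normal(0.0, 1e-11, t.size)     # position, m
v = interface_velocity(t, x)
assert abs(v - 13.5) < 1.0                        # recovered within noise
```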
Since the presence of SFs significantly impacts the transformation rate,
further analysis is carried out in terms of the fraction of SFs (or fraction
of atoms with HCP stacking) present in the simulation systems. The fraction of
atoms with HCP stacking in systems with different morphologies is compared in
Figure 9 at 600 K. As seen here, the fraction of atoms with HCP stacking is
much higher in the presence of a ledged interface than that of a flat
interface. A similar trend is observed at higher temperatures as well. This is
because of the nucleation and propagation of the SFs from the atomic ledges.
Moreover, the fraction of atoms having HCP stacking remains almost unchanged
during transformation in systems with the flat interface; on the other hand,
it increases during transformation (at 20 ns) in the presence of a ledged
interface in the system. This increase in the fraction of SFs happens when the
two interfaces, from left and right, come closer and the SFs propagating from
them merge [Figure 6(c)]. As expected, no atoms with HCP stacking are found
once the FCC-to-BCC transformation is completed.
## V Conclusions
In summary, using classical molecular dynamics simulations and an embedded
atom method type interatomic potential, we illustrate the effect of the
morphology of the FCC/BCC interface on the martensitic transformation in iron.
We compare two types of morphologies: a flat one, where the phases are joined
according to NW OR and a ledged one, where the FCC phase is rotated by
$4.04^{\circ}$ with respect to the ideal NW OR. In the case of the flat
morphology, we find misfit screw dislocations with their Burgers vector
component lying within the interface plane. We identify the atomic
displacements, parallel to the interface plane and taking place along the
misfit dislocation network, leading to the phase transition, following the KS
and BB/OC FCC-to-BCC transformation mechanism. In addition to the in-plane
(parallel to the interface) movements, we also find out-of-the interface plane
motion, contributing to the phase transition in the case of the ledged interface. We
find some SFs to be nucleated at the ledges, causing the atomic shear as per
the HCP-to-BCC Burgers path, leading to transformation from the parent FCC to
product BCC phase, via the intermediate HCP phase (appearing as SFs). We find
the mobility of the inter-phase boundary to be lower in the case of the ledged
interface, as SFs hinder the interface motion. Interestingly, the
transformation kinetics shows opposite trends as a function of temperature in
the case of the flat and ledged interfaces. In the presence of flat interfaces,
the kinetics is controlled by the driving force for the transformation (i.e.
$\Delta G_{\gamma-\alpha}$). In the presence of the ledged interface, the
mobility of the defects (SFs) controls the velocity of the interfaces, which
in turn controls the transformation kinetics. We believe that our study
will provide a better understanding of austenite to ferrite phase
transformation in iron and help to bridge the existing gaps in the literature.
## VI Acknowledgements
S. B. acknowledges funding from SERB (CRG/2019/006961). Authors acknowledge
HPC IITK and NCHC NCTU Taiwan for providing computational facilities.
Author contribution: P. K. T. and S. K. contributed equally to this work.
## References
* Moritani _et al._ (2002) T. Moritani, N. Miyajima, T. Furuhara, and T. Maki, “Comparison of interphase boundary structure between bainite and martensite in steel,” Scripta Materialia 47, 193 – 199 (2002)
* Caballero and Bhadeshia (2004) F.G. Caballero and H.K.D.H. Bhadeshia, “Very strong bainite,” Current Opinion in Solid State and Materials Science 8, 251 – 257 (2004)
* Raabe _et al._ (2013) D. Raabe, S. Sandlöbes, J. Millán, D. Ponge, H. Assadi, M. Herbig, and P.-P. Choi, “Segregation engineering enables nanoscale martensite to austenite phase transformation at grain boundaries: A pathway to ductile martensite,” Acta Materialia 61, 6132 – 6152 (2013)
* Wang _et al._ (2014a) M-M. Wang, C.C. Tasan, D. Ponge, A. Kostka, and D. Raabe, “Smaller is less stable: Size effects on twinning vs. transformation of reverted austenite in trip-maraging steels,” Acta Materialia 79, 268 – 281 (2014a)
* Toji _et al._ (2015) Yuki Toji, Goro Miyamoto, and Dierk Raabe, “Carbon partitioning during quenching and partitioning heat treatment accompanied by carbide precipitation,” Acta Materialia 86, 137 – 147 (2015)
* Bunshah and Mehl (1953) RF Bunshah and RF Mehl, “Rate of propagation of martensite,” Trans. Aime 197, 1251–1258 (1953)
* Kurdjumow and Sachs (1930) G. Kurdjumow and G. Sachs, “Uber den Mechanismus der stahlhärtung,” Zeitschrift für Physik 64, 325–343 (1930)
* Nishiyama (1934) Z. Nishiyama, “Mechanism of transformation from face-centred to body-centred cubic lattice,” Sci Rep Tohoku Imp Univ. 23, 637–664 (1934)
* Bain and Dunkirk (1924) Edgar C Bain and NY Dunkirk, “The nature of martensite,” trans. AIME 70, 25–47 (1924)
* Pitsch (1962) W. Pitsch, “Der orientierungszusammenhang zwischen zementit und austenit,” Acta Metallurgica 10, 897 – 900 (1962)
* Greninger and Troiano (1949) Alden B Greninger and Alexander R Troiano, “The mechanism of martensite formation,” JOM 1, 590–598 (1949)
* Shiflet and Merwe (1994) G. J. Shiflet and J. H. Merwe, “The role of structural ledges as misfit- compensating defects: fcc-bcc interphase boundaries,” Metallurgical and Materials Transactions A 25, 1895–1903 (1994)
* Shiflet and van der Merwe (1994) G.J. Shiflet and J.H. van der Merwe, “The role of structural ledges at phase boundaries—ii. f.c.c.-b.c.c. interfaces in nishiyama-wasserman orientation,” Acta Metallurgica et Materialia 42, 1189 – 1198 (1994)
* Lee _et al._ (2013) Tae-Ho Lee, Heon-Young Ha, Jun-Yun Kang, Joonoh Moon, Chang-Hoon Lee, and Seong-Jun Park, “An intersecting-shear model for strain-induced martensitic transformation,” Acta Materialia 61, 7399 – 7410 (2013)
* Bos _et al._ (2006) C. Bos, J. Sietsma, and B. J. Thijsse, “Molecular dynamics simulation of interface dynamics during the fcc-bcc transformation of a martensitic nature,” Phys. Rev. B 73, 104117 (2006)
* Tateyama _et al._ (2008) Shinji Tateyama, Yasushi Shibuta, and Toshio Suzuki, “A molecular dynamics study of the fcc–bcc phase transformation kinetics of iron,” Scripta Materialia 59, 971 – 974 (2008)
* Sandoval _et al._ (2009) Luis Sandoval, Herbert M. Urbassek, and Peter Entel, “Solid-solid phase transitions and phonon softening in an embedded-atom method model for iron,” Phys. Rev. B 80, 214108 (2009)
* Sandoval and Urbassek (2009a) Luis Sandoval and Herbert M. Urbassek, “Transformation pathways in the solid-solid phase transitions of iron nanowires,” Applied Physics Letters 95, 191909 (2009a), https://doi.org/10.1063/1.3258002
* Song and Hoyt (2012) H. Song and J.J. Hoyt, “A molecular dynamics simulation study of the velocities, mobility and activation energy of an austenite–ferrite interface in pure fe,” Acta Materialia 60, 4328 – 4335 (2012)
* Song and Hoyt (2013) H. Song and J.J. Hoyt, “An atomistic simulation study of the migration of an austenite–ferrite interface in pure fe,” Acta Materialia 61, 1189 – 1196 (2013)
* Song and Hoyt (2018) H. Song and J. J. Hoyt, “A molecular dynamics study of the nucleus interface structure and orientation relationships during the austenite-to-ferrite transformation in pure fe,” Canadian Metallurgical Quarterly 57, 12–19 (2018)
* Song and Hoyt (2015) H Song and J J Hoyt, “An atomistic simulation study of the crystallographic orientation relationships during the austenite to ferrite transformation in pure fe,” Modelling and Simulation in Materials Science and Engineering 23, 085012 (2015)
* Tripathi _et al._ (2018) Pawan Kumar Tripathi, Sumit Kumar Maurya, and Somnath Bhowmick, “Role of disconnections in mobility of the austenite-ferrite interphase boundary in fe,” Phys. Rev. Materials 2, 113403 (2018)
* Wang and Urbassek (2013a) B. Wang and H. M. Urbassek, “Phase transitions in an fe system containing a bcc/fcc phase boundary: An atomistic study,” Phys. Rev. B 87, 104108 (2013a)
* Sinclair and Hoagland (2008) C.W. Sinclair and R.G. Hoagland, “A molecular dynamics study of the fcc→bcc transformation at fault intersections,” Acta Materialia 56, 4160 – 4171 (2008)
* Wang _et al._ (2014b) Binjun Wang, Emilia Sak-Saracino, Nina Gunkelmann, and Herbert M. Urbassek, “Molecular-dynamics study of the α↔γ phase transition in fe–c,” Computational Materials Science 82, 399 – 404 (2014b)
* Wang _et al._ (2014c) Binjun Wang, Emilia Sak-Saracino, Luis Sandoval, and Herbert M Urbassek, “Martensitic and austenitic phase transformations in fe–c nanowires,” Modelling and Simulation in Materials Science and Engineering 22, 045003 (2014c)
* Wang and Urbassek (2013b) Binjun Wang and Herbert M Urbassek, “Computer simulation of strain-induced phase transformations in thin fe films,” Modelling and Simulation in Materials Science and Engineering 21, 085007 (2013b)
* Karewar _et al._ (2018) S. Karewar, J. Sietsma, and M.J. Santofimia, “Effect of pre-existing defects in the parent fcc phase on atomistic mechanisms during the martensitic transformation in pure fe: A molecular dynamics study,” Acta Materialia 142, 71 – 81 (2018)
* Karewar _et al._ (2019) Shivraj Karewar, Jilt Sietsma, and Maria J. Santofimia, “Effect of c on the martensitic transformation in fe-c alloys in the presence of pre-existing defects: A molecular dynamics study,” Crystals 9 (2019), 10.3390/cryst9020099
* Karewar _et al._ (2020) S. Karewar, A. Elzas, J. Sietsma, and M. J. Santofimia, “An atomistic perspective of martensite twinning in iron,” (2020), arXiv:2001.11053 [cond-mat.mtrl-sci]
* Sandoval and Urbassek (2009b) Luis Sandoval and Herbert M. Urbassek, “Finite-size effects in fe-nanowire solid−solid phase transitions: A molecular dynamics approach,” Nano Letters 9, 2290–2294 (2009b), pMID: 19438190, https://doi.org/10.1021/nl9004767
* Sandoval and Urbassek (2009c) Luis Sandoval and Herbert M Urbassek, “Solid–solid phase transitions in fe nanowires induced by axial strain,” Nanotechnology 20, 325704 (2009c)
* Ou _et al._ (2016) X Ou, J Sietsma, and M J Santofimia, “Molecular dynamics simulations of the mechanisms controlling the propagation of bcc/fcc semi-coherent interfaces in iron,” Modelling and Simulation in Materials Science and Engineering 24, 055019 (2016)
* Burgers (1934) W.G. Burgers, “On the process of transition of the cubic-body-centered modification into the hexagonal-close-packed modification of zirconium,” Physica 1, 561 – 586 (1934)
* Bogers and Burgers (1964) A.J Bogers and W.G Burgers, “Partial dislocations on the 110 planes in the b.c.c. lattice and the transition of the f.c.c. into the b.c.c. lattice,” Acta Metallurgica 12, 255 – 261 (1964)
* Olson and Cohen (1972) G.B. Olson and Morris Cohen, “A mechanism for the strain-induced nucleation of martensitic transformations,” Journal of the Less Common Metals 28, 107 – 118 (1972)
* Maresca and Curtin (2017) F. Maresca and W.A. Curtin, “The austenite/lath martensite interface in steels: Structure, athermal motion, and in-situ transformation strain revealed by simulation and theory,” Acta Materialia 134, 302 – 323 (2017)
* Bhadeshia and Wayman (2014) H.K.D.H. Bhadeshia and C.M. Wayman, “9 - phase transformations: Nondiffusive,” in _Physical Metallurgy (Fifth Edition)_, edited by David E. Laughlin and Kazuhiro Hono (Elsevier, Oxford, 2014) fifth edition ed., pp. 1021 – 1072
* Plimpton (1995) S. Plimpton, “Fast parallel algorithms for short-range molecular dynamics,” J. Comput. Phys. 117, 1 (1995), http://lammps.sandia.gov
* Ackland _et al._ (1997) G. J. Ackland, D. J. Bacon, A. F. Calder, and T. Harry, “Computer simulation of point defect properties in dilute fe-cu alloy using a many-body interatomic potential,” Philosophical Magazine A 75, 713–732 (1997)
* Sun _et al._ (2004) D. Y. Sun, M. Asta, J. J. Hoyt, M. I. Mendelev, and D. J. Srolovitz, “Crystal-melt interfacial free energies in metals: fcc versus bcc,” Phys. Rev. B 69, 020102 (2004)
* Abe _et al._ (2016) Y. Abe, T. Tsuru, S. Shi, N. Oono, and S. Ukai, “Effect of the dilation caused by helium bubbles on edge dislocation motion in $\alpha$-iron: molecular dynamics simulation,” Journal of Nuclear Science and Technology 53, 1528–1534 (2016)
* Terentyev _et al._ (2016) D. Terentyev, A. Bakaev, D. Van Neck, and E. E. Zhurkin, “Glide of dislocations in $<111>\\{321\\}$ slip system: an atomistic study,” Philosophical Magazine 96, 71–83 (2016)
* Stukowski (2012) Alexander Stukowski, “Structure identification methods for atomistic simulations of crystalline materials,” Modelling and Simulation in Materials Science and Engineering 20, 045021 (2012)
* Hartley and Mishin (2005) C.S. Hartley and Y. Mishin, “Characterization and visualization of the lattice misfit associated with dislocation cores,” Acta Materialia 53, 1313 – 1321 (2005)
* Nye (1953) J.F Nye, “Some geometrical relations in dislocated crystals,” Acta Metallurgica 1, 153 – 162 (1953)
* Yao and Zhang (2020) B.N. Yao and R.F. Zhang, “AADIS: An atomistic analyzer for dislocation character and distribution,” Computer Physics Communications 247, 106857 (2020)
* Howe _et al._ (2009) J.M. Howe, R.C. Pond, and J.P. Hirth, “The role of disconnections in phase transformations,” Progress in Materials Science 54, 792 – 838 (2009)
# Self-similar curve shortening flow in hyperbolic 2-space
Eric Woolgar Dept of Mathematical and Statistical Sciences, and Theoretical
Physics Institute, University of Alberta, Edmonton, AB, Canada T6G 2G1.
ewoolgar(at)ualberta.ca and Ran Xie School of Mathematics and Statistics,
Xi’an Jiaotong University, Xi’an, PR China dhsieh29(at)gmail.com
###### Abstract.
We find and classify self-similar solutions of the curve shortening flow in
standard hyperbolic 2-space. Together with earlier work of Halldórsson on
curve shortening flow in the plane and Santos dos Reis and Tenenblat in the
2-sphere, this completes the classification of self-similar curve shortening
flows in the constant curvature model spaces in 2-dimensions.
## 1\. Introduction
Consider a smooth map $X:I\times[0,T)\to M:(x,t)\mapsto X(x,t)=:X_{t}(x)$, for
$I$ a connected interval of the real line and $T\in(0,\infty]$. We will take
$M\subset{\mathbb{R}}^{3}$ to be a surface, though the flow can also be
contemplated for $M$ a manifold with arbitrary dimension. Then a curve
shortening flow is a solution of the differential equation
(1.1) $\frac{\partial X}{\partial t}=\kappa_{g}N\ ,$
where $\kappa_{g}$ is the geodesic curvature of the curve $X_{t}(x)$ (for
fixed $t$) and $N$ is the principal normal to the curve, with the signs chosen
so that $\kappa_{g}N$ points to the concave side of the curve.
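A quick numerical illustration of this flow in the plane (our own sketch, not part of the paper): a circle of radius $R_{0}$ shrinks under the flow with $R(t)=\sqrt{R_{0}^{2}-2t}$, and a naive finite-difference integration, approximating $\kappa N$ by the second difference of $X$ with respect to chord length, reproduces this:

```python
import numpy as np

# Discretize a unit circle and evolve dX/dt = kappa*N, approximated by the
# second difference of X with chord-length spacing ds.
n, dt, steps = 200, 1.5e-4, 2000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])

for _ in range(steps):
    ds = np.linalg.norm(X - np.roll(X, 1, axis=0), axis=1).mean()  # mean chord
    X = X + dt * (np.roll(X, 1, axis=0) - 2.0 * X + np.roll(X, -1, axis=0)) / ds**2

R_num = np.linalg.norm(X, axis=1).mean()
R_exact = np.sqrt(1.0 - 2.0 * dt * steps)   # R(t) = sqrt(R0^2 - 2t) at t = 0.3
assert abs(R_num - R_exact) < 1e-2
```

The time step is chosen below the explicit stability limit $dt \lesssim ds^{2}/2$, which tightens as the circle shrinks.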
The theory of the curve shortening flow, often denoted simply as CSF, was
developed in a number of papers in the 1980s, among them Gage [2, 3], Gage and
Hamilton [5], and Grayson [6]. There is now an extensive literature, including
at least one book [1].
Geodesics are trivial solutions of this flow since they have $\kappa_{g}=0$.
The next simplest solutions are curves that evolve self-similarly, sometimes
called soliton solutions. This class includes geodesics but also includes
nontrivial examples. The well-known _grim reaper_ or _paperclip_, whose trace
at fixed $t$ is the graph $y(x)=C+\log\sec x$, is an example of a non-geodesic
self-similar solution of CSF in ${\mathbb{R}}^{2}$. In a thesis in 2013,
Halldórsson [7, 8] classified the self-similar solutions in
${\mathbb{R}}^{2}$. Self-similar solutions on the 2-sphere ${\mathbb{S}}^{2}$
were classified by Santos dos Reis and Tenenblat [12] in 2019. In the present
paper, we study self-similar curve shortening flows in the remaining complete
constant curvature surface, standard hyperbolic 2-space ${\mathbb{H}}^{2}$.
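The grim reaper's soliton property can be verified directly: under unit-speed vertical translation, the flow equation requires the curvature to equal the vertical component of the unit normal, and for $y=\log\sec x$ both reduce to $\cos x$. A short numerical check (a sketch, not from the paper):

```python
import numpy as np

# For y(x) = log(sec x), translation with unit vertical speed solves CSF
# iff kappa(x) = <e_y, N(x)> at every point of the graph.
x = np.linspace(-1.4, 1.4, 201)            # stay inside (-pi/2, pi/2)
yp  = np.tan(x)                            # y'
ypp = 1.0 / np.cos(x) ** 2                 # y'' = sec^2 x
kappa = ypp / (1.0 + yp ** 2) ** 1.5       # curvature of the graph
N_y = 1.0 / np.sqrt(1.0 + yp ** 2)         # vertical component of unit normal
assert np.allclose(kappa, N_y)             # both equal cos x
```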
###### Definition 1.1.
Two unit speed curves $\gamma,{\tilde{\gamma}}:I\to(M,g)$ are _congruent_ if
there is an isometry $\varphi$ of $(M,g)$ such that
$\varphi\circ\gamma={\tilde{\gamma}}$. A curve shortening flow _evolves by
isometries_ if $\gamma_{t}(s)$ is a unit speed curve for each $t\in J$, $0\in
J\subset{\mathbb{R}}$, and there is a one-parameter family of isometries
$\varphi_{t}$ such that $\varphi_{t}\circ\gamma=\gamma_{t}$; i.e., each
$\gamma_{t}$ is congruent to every other one. A curve that evolves by
isometries is also said to be _self-similar_ or a _soliton_.
We will further take solitons to be _inextendible_, meaning that the soliton
curve admits a unit speed parametrization $X(s)$ on an open connected interval
$I\subset{\mathbb{R}}$ such that $X(s)$ cannot be defined as a unit speed curve on
any open connected interval $J\supset I$ containing the closure of $I$ as a
proper subset.
Note that self-similarity of curves only makes sense on surfaces that have
families of sufficiently smooth isometries. By the fundamental theorem of
curves in surfaces, when $M$ is a surface we can determine whether the curves
$\gamma$ and ${\tilde{\gamma}}$ are congruent by comparing their curvatures as
functions of arclength. Using this fact, a consequence of [4, Proposition 4.2]
is that closed curves on surfaces cannot evolve by isometries under CSF.
However they can evolve by the composition of isometries and time-dependent
rescalings; often the terms “self-similar” and “scaling soliton” are employed
to describe curves that evolve in this more general sense, but this paper is
concerned with solitons that do not rescale in time.
This brings us to our main theorem.
###### Theorem 1.2.
Let ${\mathbb{M}}^{3}$ be ${\mathbb{R}}^{3}$ equipped with the quadratic form
(Minkowski metric) $\eta=\operatorname{diag}(1,1,-1)$ and denote the Minkowski
inner product by $\langle U,W\rangle_{\eta}=\eta(U,W)$ for
$U,W\in{\mathbb{M}}^{3}$. For each
${\tilde{v}}\in{\mathbb{M}}^{3}\setminus\{0\}$ there is a $2$-parameter
family of nontrivial solutions $X$ of CSF evolving by isometries in standard
hyperbolic $2$-space ${\mathbb{H}}^{2}$. These soliton curves are complete,
unbounded, and properly embedded, and asymptote either to a horocycle or to a
geodesic. The soliton cannot asymptote to a geodesic
* (i)
at both ends, or
* (ii)
if ${\tilde{v}}$ is timelike, or
* (iii)
if $\mu(s)=\left\langle X(s),{\tilde{v}}\right\rangle_{\eta}$ has a critical
point (and it can have at most one critical point), or
* (iv)
if $\mu(s)$ has a zero (and it can have at most one zero).
While the results of this paper can be viewed as a natural outgrowth of the
work reported in [7, 8] and [12], there are important independent reasons to
study the curve shortening flow and its solitons in hyperbolic space. There
are two generalizations of this problem to higher dimensions which are
important for physics. The first generalization is the mean curvature flow of
codimension one objects (hypersurfaces) flowing in standard hyperbolic
$n$-space. The fixed points are minimal surfaces bounded by a curve on the
boundary at conformal infinity. These surfaces play an important role in the
AdS/CFT correspondence. The surface areas of the minimal surfaces are
proportional to the entanglement entropy of the region $R$ enclosed by the
curve on the conformal boundary. This entropy is a property of quantum states
of a conformal field theory defined on the conformal boundary, and encodes
uncertainty in measurements of those states within $R$ due to correlations
which extend beyond $R$. One way to construct these minimal surfaces is as
limits of convergent mean curvature flows. But these mean curvature flows
could instead approach “generalized fixed points”, the self-similar solutions,
as limits. The problem of self-similar curve shortening flows in
${\mathbb{H}}^{2}$ is the obvious first step to complete before addressing the
higher dimensional mean curvature flow version that arises in AdS/CFT physics.
Our result that self-similar curves asymptote to geodesics only in limited
cases, and never at both ends, may suggest that solitons of the higher
dimensional problem would not meet the boundary at infinity orthogonally, and
perhaps not even transversally.
The second application arises by considering dimension one flows in a
spacetime of dimension $n+1$ with metric $g=-dt^{2}+a^{2}(t)h$, for $h$ a
metric on a Riemannian $n$-manifold. This situation is of interest in
cosmology, particularly when $h$ is any constant curvature metric. So-called
_cosmic strings_ in this spacetime can be modelled as solutions of the wavemap
equation for an embedded timelike 2-surface $X:(u,w)\mapsto(X^{0},X^{i})$,
$i=1,\dots,n$, in spacetime. In this setting this becomes
(1.2) $\Box_{\eta}X^{i}+2H(t)\eta^{ab}\partial_{a}X^{i}\partial_{b}X^{0}=0\ ,$
where $H(t):=\frac{a^{\prime}(t)}{a(t)}$, $\eta$ is the induced Lorentzian
metric on the 2-surface, and $\Box_{\eta}$ is the $\eta$-trace of the Hessian
(regarded as operating on an $n$-tuple of functions $X^{i}$). If one
takes the $w$ parameter to be the $t$-coordinate and the $u$ parameter to be
an arclength $s$ along the curves $X_{t}(s)=X(s,t)$, then $X^{0}=t$ and one
obtains [9, Equation 15 with $\Delta=\frac{\partial^{2}}{\partial s^{2}}$ and
assuming the second time derivative term is small enough to ignore]
(1.3) $\frac{\partial^{2}X^{i}}{\partial s^{2}}=2H(t)\frac{\partial
X^{i}}{\partial t}\ .$
For inflationary cosmology models, $H(t)$ is constant, and then (1.3) is a
curve shortening flow. Therefore, for this problem we increase the dimension
of the ambient manifold, which should now be any constant curvature space, and
study curve shortening flows in it.
The manuscript is organized as follows. In Section 2, we first review the
model of ${\mathbb{H}}^{2}$ which represents it as the $z\geq 1$ sheet of the
hyperboloid $x^{2}+y^{2}-z^{2}=-1$ in the Minkowski spacetime
${\mathbb{M}}^{3}$. This model permits us straightforwardly to adapt ideas of
[8, 12] to our setting, which we do in Sections 2.2–2.4. Specifically, we
formulate the problem as an autonomous system of ordinary differential
equations. In Section 3, we discuss certain special solutions such as
geodesics and horocycles. Horocycles were discussed by Grayson [6] and
arguably are hyperbolic space analogues of the grim reaper self-similar flow
in ${\mathbb{R}}^{2}$. For completeness, we briefly discuss hypercycles, which
are scaling solitons (so they evolve by a composition of isometries and
rescalings). In Section 4 we analyze the autonomous system from Section 2 and
prove a series of lemmata leading to the proof of Theorem 1.2.
After this paper appeared in preprint form, we learned of the beautiful thesis
[10] which independently derived similar results in great detail. This work is
now described in [11].
### 1.1. Acknowledgements
We are indebted to K Tenenblat for comments on the preprint version of this
paper, which helped us to improve our presentation (especially, to correct an
error in Proposition 2.3), and for bringing reference [10] to our attention.
EW is grateful to the organizers and audience of the Clark University
Geometric Analysis seminar of 13 Nov 2020, where these results were presented,
for their interest and insightful comments. We thank Michael Yi Li and
Yingfei Yi for organizing the 2019 International Undergraduate Summer
Enrichment Programme (IUSEP) at the University of Alberta, during which this
work was begun. EW is supported by NSERC Discovery Grant RGPIN–2017–04896.
## 2\. The hyperboloidal model of hyperbolic space
### 2.1. Elementary properties
We review some basic facts of the hyperboloidal model, which represents
hyperbolic 2-space ${\mathbb{H}}^{2}$ as the $z\geq 1$ sheet of the
hyperboloid $x^{2}+y^{2}-z^{2}=-1$ in Minkowski $3$-space
${\mathbb{M}}^{3}:=\left({\mathbb{R}}^{3},\eta\right)$ where $\eta$ is the
quadratic form $\operatorname{diag}(1,1,-1)$. We will adopt the convention
that our coordinates are enumerated as $x^{i}=(x^{1},x^{2},x^{3})=(x,y,z)$
with $\eta(\partial_{x},\partial_{x})=\eta(\partial_{y},\partial_{y})=+1$ so
that $\partial_{x}$ and $\partial_{y}$ are _spacelike_ and
$\eta(\partial_{z},\partial_{z})=-1$ so that $\partial_{z}$ is _timelike_ in
the terminology common in physics.
The hyperboloid modelling ${\mathbb{H}}^{2}$ is a _spacelike hypersurface_ in
${\mathbb{M}}^{3}$; i.e., any vector tangent to this surface is spacelike.
However, any vector $X$ from the origin to a point on ${\mathbb{H}}^{2}$ is
timelike and future-directed ($\eta(X,\partial_{z})<0$), and has
$\eta(X,X)=-1$. In fact, any such vector is a future-directed timelike unit
normal field for the surface.
The group that preserves the quadratic form $\eta$ is the orthogonal group
$\operatorname{O}(2,1)$. Its proper orthochronous subgroup $G$, which preserves
spatial orientation and time orientation, also preserves the hyperboloid sheet
$z=\sqrt{1+x^{2}+y^{2}}$ and the orientation of bases for its tangent spaces
(as well as preserving the choice of future and past). This subgroup lies in
the connected component of the identity in the isometry group of
${\mathbb{H}}^{2}$. The one-parameter subgroups of $G$ can be classified as
compositions of _boosts_ in the $x$-direction
(2.1) $A_{1}(\zeta)=\left[\begin{array}[]{ccc}\cosh\zeta&0&-\sinh\zeta\\\
0&1&0\\\ -\sinh\zeta&0&\cosh\zeta\end{array}\right]\ ,$
boosts in the $y$-direction
(2.2) $A_{2}(\xi)=\left[\begin{array}[]{ccc}1&0&0\\\ 0&\cosh\xi&-\sinh\xi\\\
0&-\sinh\xi&\cosh\xi\end{array}\right]\ ,$
and rotations about the $z$-axis
(2.3) $A_{3}(\theta)=\left[\begin{array}[]{ccc}\cos\theta&-\sin\theta&0\\\
\sin\theta&\cos\theta&0\\\ 0&0&1\end{array}\right]\ .$
The corresponding Lie algebra is spanned by
(2.4) $A_{1}^{\prime}(0)=\left[\begin{array}[]{ccc}0&0&-1\\\ 0&0&0\\\
-1&0&0\end{array}\right]\ ,\
A_{2}^{\prime}(0)=\left[\begin{array}[]{ccc}0&0&0\\\ 0&0&-1\\\
0&-1&0\end{array}\right]\ ,\
A_{3}^{\prime}(0)=\left[\begin{array}[]{ccc}0&-1&0\\\ 1&0&0\\\
0&0&0\end{array}\right]\ .$
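As a quick numerical sanity check (an illustration of ours, not part of the argument), one can verify that the families (2.1)–(2.3) all preserve $\eta$, i.e. $A^{T}\eta A=\eta$, and hence so do their products:

```python
import numpy as np

# Check that the one-parameter subgroups (2.1)-(2.3) preserve
# the quadratic form eta = diag(1, 1, -1).
eta = np.diag([1.0, 1.0, -1.0])

def A1(z):  # boost in the x-direction, eq. (2.1)
    c, s = np.cosh(z), np.sinh(z)
    return np.array([[c, 0, -s], [0, 1, 0], [-s, 0, c]])

def A2(x):  # boost in the y-direction, eq. (2.2)
    c, s = np.cosh(x), np.sinh(x)
    return np.array([[1, 0, 0], [0, c, -s], [0, -s, c]])

def A3(t):  # rotation about the z-axis, eq. (2.3)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

for A in (A1(0.7), A2(-1.3), A3(2.1), A1(0.5) @ A2(0.4) @ A3(0.3)):
    assert np.allclose(A.T @ eta @ A, eta)  # A preserves eta
```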
The boosts in the $x$-direction preserve $\partial_{y}$ and each of the two
null planes in which it lies. Likewise, the boosts in the $y$-direction
preserve $\partial_{x}$ and the two null planes in which it lies. Using this,
one can see that a general $G$-transformation mapping any orthonormal basis of
${\mathbb{M}}^{3}$ to any other (preserving orientation and time orientation)
can be constructed from a product $A_{1}(\zeta)A_{2}(\xi)A_{3}(\theta)$, whose
parameters $\zeta$, $\xi$, and $\theta$ play the role of “Euler angles”. We
will sketch the argument. Consider two $\eta$-orthonormal basis (ONB) sets
$\\{e_{i}\\}$ and $\\{{\tilde{e}}_{i}\\}$, where $\\{e_{i}\\}$ is the
coordinate basis defined by the $x^{i}$ coordinates, with $e_{3}=\partial_{z}$
future-timelike and $\\{e_{1},e_{2}\\}$ right-handed (from here on, we simply
say an _oriented basis_). Likewise, ${\tilde{e}}_{3}$ will be future-timelike
as well. The span of $\\{{\tilde{e}}_{1},{\tilde{e}}_{3}\\}$ is a timelike
plane $\Pi$. It’s a simple matter to find a (normalized spacelike) vector,
call it $e_{1}^{\prime}$, that lies in $\Pi$ and is orthogonal to $e_{3}$.
It’s also a simple exercise in linear algebra to find a rotation
$A_{3}(\theta)$ about $e_{3}$ such that $A_{3}(\theta)e_{1}=e_{1}^{\prime}$.
This obviously leaves $e_{3}$ invariant, but maps $e_{2}$ to
$e_{2}^{\prime}=A_{3}(\theta)e_{2}$. Now apply a boost in the plane spanned by
$e_{2}^{\prime}$ and $e_{3}$. Such a boost $A_{2}(\xi)$ will leave
$e_{1}^{\prime}$ invariant, but we can choose $\xi$ such that
$e_{3}^{\prime}:=A_{2}(\xi)e_{3}$, which remains timelike under a boost, lies
in $\Pi$. This boost, incidentally, acts on $e_{2}^{\prime}$ to produce
$e_{2}^{\prime\prime}:=A_{2}(\xi)e_{2}^{\prime}$. The plane $\Pi$ is now a
coordinate plane for the ONB
$\\{e_{1}^{\prime},e_{2}^{\prime\prime},e_{3}^{\prime}\\}$, with
$\Pi=\operatorname{Span}\\{e_{1}^{\prime},e_{3}^{\prime}\\}=\operatorname{Span}\\{{\tilde{e}}_{1},{\tilde{e}}_{3}\\}$,
and $e_{2}^{\prime\prime}$ is normal (i.e., $\eta$-normal) to this plane. A
final boost $A_{1}(\zeta)$ in the plane $\Pi$ preserves $e_{2}^{\prime\prime}$
and can be chosen such that
${\tilde{e}}_{1}=e_{1}^{\prime\prime}:=A_{1}(\zeta)e_{1}^{\prime}$. Since
$e_{3}^{\prime\prime}:=A_{1}(\zeta)e_{3}^{\prime}$ must be orthogonal to
$e_{1}^{\prime\prime}$, it follows that
$\\{e_{1}^{\prime\prime},e_{2}^{\prime\prime},e_{3}^{\prime\prime}\\}=\\{{\tilde{e}}_{1},{\tilde{e}}_{2},{\tilde{e}}_{3}\\}$.
Then a curve of isometries may be written as
(2.5) $A(t)=A_{1}(\zeta(t))A_{2}(\xi(t))A_{3}(\theta(t))\ .$
If this curve passes through the identity isometry at $t=0$, we may write its
tangent there as
(2.6)
$\begin{split}{\dot{A}}(0)=&\,{\dot{A}}_{1}(0){\dot{\zeta}}(0)+{\dot{A}}_{2}(0){\dot{\xi}}(0)+{\dot{A}}_{3}(0){\dot{\theta}}(0)\\\
=&\,\left[\begin{array}[]{ccc}0&0&-1\\\ 0&0&0\\\
-1&0&0\end{array}\right]{\dot{\zeta}}(0)+\left[\begin{array}[]{ccc}0&0&0\\\
0&0&-1\\\
0&-1&0\end{array}\right]{\dot{\xi}}(0)+\left[\begin{array}[]{ccc}0&-1&0\\\
1&0&0\\\ 0&0&0\end{array}\right]{\dot{\theta}}(0)\\\
=&\,\left[\begin{array}[]{ccc}0&-{\dot{\theta}}(0)&-{\dot{\xi}}(0)\\\
{\dot{\theta}}(0)&0&-{\dot{\zeta}}(0)\\\
-{\dot{\xi}}(0)&-{\dot{\zeta}}(0)&0\end{array}\right]\ .\end{split}$
For any curve of isometries containing the identity at $t=0$, differentiation
at $t=t_{0}$ can be accomplished using that
$A(t_{0}+\epsilon)=A(t_{0}+\epsilon)A^{-1}(t_{0})A(t_{0})=:B(\epsilon)A(t_{0})$
where $B(\epsilon):=A(t_{0}+\epsilon)A^{-1}(t_{0})$ so
$B(0)=\operatorname{id}$. But $B(\epsilon)$ can be written as a product
$B(\epsilon)=A_{1}(\zeta(\epsilon))A_{2}(\xi(\epsilon))A_{3}(\theta(\epsilon))$,
and so
(2.7)
${\dot{A}}(t_{0})=\frac{d}{d\epsilon}\bigg{|}_{\epsilon=0}B(\epsilon)A(t_{0})={\dot{B}}(0)A(t_{0})\
,$
with ${\dot{B}}$ given by the right-hand side of (2.6).
### 2.2. Curves in the hyperboloid model
Given a hypersurface $S$ in a Riemannian manifold and vectors $U$ and $V$
tangent to it, a standard formula in Riemannian geometry gives
(2.8) $\nabla_{U}V=D_{U}V+K(U,V)\ ,$
where $\nabla$ is the connection on the ambient manifold, $D$ is the
connection induced on the hypersurface, and $K$ is the vector-valued second
fundamental form of the hypersurface. In the case of present interest,
$\nabla$ is the Levi-Civita connection of the Minkowski metric
$\eta=\langle\cdot,\cdot\rangle_{\eta}$. Since the vector $X$ from the origin
to the unit hyperboloid is a unit future-timelike vector orthogonal to the
hyperboloid, we may write a general vector $V$ as
(2.9) $V=P(V)-\langle X,V\rangle_{\eta}X\ ,$
where $P$ is orthogonal projection to the tangent space of the hyperboloid.
The negative sign in the second term arises because $X$ is timelike. The
second fundamental form can then be written as
(2.10) $K=P(\nabla X_{\flat})X\ ,$
where $X_{\flat}:=\langle\cdot,X\rangle_{\eta}$ is the 1-form metric-dual to
$X$. If we write the first fundamental form of the hyperboloid as $h$ then the
hyperboloid is umbilic in ${\mathbb{M}}^{3}$ such that $K=Xh$ (so that
$\langle X,K(V,V)\rangle_{\eta}=-1$ for any unit vector $V$ tangent to the
hyperboloid).
Now consider a (smooth) unit-speed curve $X(s)$ on the unit hyperboloid. The
unit tangent vector is $T(s):=\frac{dX}{ds}$. It follows from the above
formulas that
(2.11) $\nabla_{T}T=D_{T}T+K(T,T)=D_{T}T+X\ .$
Since $T$ is a unit vector, $\nabla_{T}T$ lies in its orthogonal complement,
from which we see that $D_{T}T$ does as well. But $D_{T}T$ must be tangent to
the hyperboloid and so orthogonal to $X$, so we write
(2.12) $D_{T}T=\kappa_{g}N\ ,$
where $N$ is called the _principal normal vector_ to the curve $s\mapsto X(s)$
and $\kappa_{g}$ is the _geodesic curvature_. The sign choices are such that
$\\{T,N,X\\}$ is an orthonormal basis oriented such that $X$ is
future-pointing and the vector $\kappa_{g}N$ points to the concave side of the
curve, viewed as a curve in the hyperboloid, whenever $\kappa_{g}$ is not
zero. Now denote $\eta(\nabla_{T}T,\nabla_{T}T)=:\epsilon\kappa^{2}$ where
$\epsilon=1$ if $\nabla_{T}T$ is spacelike, $0$ if it’s null, and $-1$ if it’s
timelike. Then from (2.11) we can relate the curvature $\kappa$ of $X$ as a
space curve in $({\mathbb{M}}^{3},\eta)$ to its geodesic curvature
$\kappa_{g}$ in the hyperboloid by
(2.13) $\kappa_{g}^{2}=\epsilon\kappa^{2}+1\ .$
Note that $\langle T,\nabla_{T}N\rangle_{\eta}=-\langle
N,\nabla_{T}T\rangle_{\eta}=-\langle N,D_{T}T\rangle_{\eta}=-\kappa_{g}$. By
similar reasoning, we see that $\langle X,\nabla_{T}N\rangle_{\eta}=0$, and of
course since $N$ is a unit vector then $\langle
N,\nabla_{T}N\rangle_{\eta}=0$. Thus $\nabla_{T}N=-\kappa_{g}T$. Collecting
our results, we have that the $\\{T,N,X\\}$ basis evolves along the curve
according to the _Frenet-Serret equations_ in ${\mathbb{H}}^{2}$, which are
(2.14) $\begin{split}\nabla_{T}T=&\,\kappa_{g}N+X\ ,\\\
\nabla_{T}N=&\,-\kappa_{g}T\ ,\\\ \nabla_{T}X=&\,T\ .\end{split}$
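As an illustration (ours, with a chosen example rather than anything from the text), the relations (2.14) can be checked for the unit-speed geodesic $X(s)=(\sinh s,0,\cosh s)$, for which $\kappa_{g}=0$ and hence $\nabla_{T}T=X$:

```python
import numpy as np

# Frenet-Serret check (2.14) on the geodesic X(s) = (sinh s, 0, cosh s):
# flat differentiation in Minkowski space gives nabla_T T = X'' = X,
# i.e. kappa_g = 0, and nabla_T X = X' = T.
eta = np.diag([1.0, 1.0, -1.0])
s = 0.8
X   = np.array([np.sinh(s), 0.0, np.cosh(s)])
T   = np.array([np.cosh(s), 0.0, np.sinh(s)])   # X'(s)
Xpp = np.array([np.sinh(s), 0.0, np.cosh(s)])   # X''(s) = nabla_T T

assert np.isclose(X @ eta @ X, -1.0)   # lies on the hyperboloid
assert np.isclose(T @ eta @ T,  1.0)   # unit speed (spacelike tangent)
assert np.isclose(T @ eta @ X,  0.0)   # T orthogonal to the normal X
assert np.allclose(Xpp, X)             # nabla_T T = kappa_g N + X with kappa_g = 0
```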
### 2.3. Curves and axes
Here we follow closely the work of [12] for the ${\mathbb{S}}^{2}$ case,
making changes as necessary. Choose an arbitrary vector
${\tilde{v}}\in{\mathbb{M}}^{3}\backslash\\{0\\}$, which can be either
timelike, spacelike, or null. It is convenient to define ${\tilde{v}}=av$
where $a=\sqrt{|\eta({\tilde{v}},{\tilde{v}})|}$ when $v$ is not null. For
presentation purposes, we will introduce the scale factor even when
${\tilde{v}}$ is null, but in that case $a\neq 0$ will not be determined and
$v$, being null, cannot be normalized. Then
$\eta(v,v)=\epsilon=0,\pm 1$ depending on whether $v$ is
null ($0$), spacelike ($+1$), or timelike ($-1$).
Fix a unit speed curve $X:I\to{\mathbb{M}}^{3}$ on the unit hyperboloid, and
define the functions
(2.15) $\begin{split}\tau(s):=&\,\langle T(s),v\rangle_{\eta}\ ,\\\
\nu(s):=&\,\langle N(s),v\rangle_{\eta}\ ,\\\ \mu(s):=&\,\langle
X(s),v\rangle_{\eta}\ .\end{split}$
Then we can write
(2.16) $v=\tau(s)T(s)+\nu(s)N(s)-\mu(s)X(s)$
and
(2.17) $\epsilon=\eta(v,v)=\tau^{2}(s)+\nu^{2}(s)-\mu^{2}(s)\ .$
The next result shows that if the geodesic curvature of the curve $X(s)$ takes
the form $\kappa_{g}(s)=a\tau(s)$, a choice corresponding to a flow by
isometries under CSF as we will see in the next subsection, then $\tau$,
$\nu$, and $\mu$ obey a certain autonomous system of differential equations
along $X(s)$. The analogous result for curves in ${\mathbb{S}}^{2}$ is found
in [12, Proposition 3.1].
###### Proposition 2.1.
Along any smooth unit speed curve $X(s)$ on the hyperboloid we have that
$\kappa_{g}(s)=a\tau(s)$ if and only if
(2.18) $\begin{cases}\tau^{\prime}(s)=a\tau(s)\nu(s)+\mu(s),\\\
\nu^{\prime}(s)=-a\tau^{2}(s),\\\ \mu^{\prime}(s)=\tau(s).\end{cases}$
###### Proof.
Equations (2.14) imply that
(2.19) $\begin{split}\tau^{\prime}(s)=&\,\kappa_{g}(s)\nu(s)+\mu(s)\ ,\\\
\nu^{\prime}(s)=&\,-\kappa_{g}(s)\tau(s)\ ,\\\ \mu^{\prime}(s)=&\,\tau(s)\
.\end{split}$
To prove the forward implication, simply substitute $\kappa_{g}(s)=a\tau(s)$
into (2.19).
To prove the reverse implication, note that we can combine equations (2.18)
and (2.19) to write that $\left(a\tau-\kappa_{g}\right)\nu=0$ and
$\left(a\tau-\kappa_{g}\right)\tau=0$. But if $\tau$ and $\nu$ both vanish,
then $v$ must be orthogonal to both $T$ and $N$ and hence parallel to $X$, so
$v$ is normal to the hyperboloid. But $v$ is a constant vector (i.e., it is
parallel in $({\mathbb{M}}^{3},\eta)$), so this can only happen at isolated
points along a nontrivial curve $X$, so it must instead be that
$a\tau-\kappa_{g}=0$. ∎
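The system (2.18) preserves the quantity $\tau^{2}+\nu^{2}-\mu^{2}=\eta(v,v)$ of (2.17), as one sees by differentiating and substituting (2.18). A short numerical integration (our own sketch, using a standard fourth-order Runge–Kutta step and arbitrarily chosen initial data) confirms this:

```python
import numpy as np

def F(a, phi):
    # Right-hand side of the autonomous system (2.18)
    tau, nu, mu = phi
    return np.array([a * tau * nu + mu, -a * tau**2, tau])

def rk4(a, phi0, ds, n):
    # Classical fourth-order Runge-Kutta integration of (2.18)
    phi = np.array(phi0, dtype=float)
    for _ in range(n):
        k1 = F(a, phi)
        k2 = F(a, phi + 0.5 * ds * k1)
        k3 = F(a, phi + 0.5 * ds * k2)
        k4 = F(a, phi + ds * k3)
        phi = phi + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return phi

a, phi0 = 1.0, (0.3, 0.5, -0.2)                 # arbitrary test data
eps0 = phi0[0]**2 + phi0[1]**2 - phi0[2]**2      # eta(v, v), eq. (2.17)
tau, nu, mu = rk4(a, phi0, 1e-3, 2000)           # integrate to s = 2
assert np.isclose(tau**2 + nu**2 - mu**2, eps0, atol=1e-8)  # conserved
```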
Next we show that solutions of the system (2.18) are always realized by actual
curves.
###### Proposition 2.2.
Given a solution $(\tau(s),\nu(s),\mu(s))$ of (2.18) obeying initial
conditions $(\tau_{0},\nu_{0},\mu_{0})=(\tau(0),\nu(0),\mu(0))$, there is a
smooth unit speed curve $X:I\to{\mathbb{M}}^{3}$ on the unit hyperboloid
satisfying (2.15).
###### Proof.
Choose a point $X_{0}$ on the hyperboloid and two vectors $T_{0}$ and $N_{0}$
such that $\\{T_{0},N_{0},X_{0}\\}$ is an oriented orthonormal frame. Then
define $v$, normalized as above (if non-null), by
$v=\tau_{0}T_{0}+\nu_{0}N_{0}-\mu_{0}X_{0}$. Now, given $\tau(s)$ for $s\in I$
and given $a>0$, define a function $k:I\to{\mathbb{R}}:s\mapsto a\tau(s)$. By
the fundamental theorem for curves in ${\mathbb{H}}^{2}$, there is a unique
curve $X:I\to{\mathbb{M}}^{3}$ such that $X(0)=X_{0}$,
$X^{\prime}(0)=:T(0)=T_{0}$, and $N(0)=N_{0}$, whose curvature is $k(s)$. This
curve must satisfy equations (2.14) with $\kappa_{g}(s)=k(s)$. Then equations
(2.15) hold for any constant vector $v$, and so hold for
$v=\tau_{0}T_{0}+\nu_{0}N_{0}-\mu_{0}X_{0}$. ∎
### 2.4. CSF and flow by isometries
Having defined a moving basis along an arbitrary curve in the hyperboloid, we
can now define the _curve shortening flow_ in the hyperboloid to be a flow
$X_{t}(s)=X(t,s)$ of smooth curves
$X_{t}(s)=\left(x_{t}(s),y_{t}(s),z_{t}(s)\right)$ with principal normal
$N_{t}$, for $t\in J\subset{\mathbb{R}}$ some connected interval about $t=0$,
such that
(2.20) $\begin{split}\left\langle\frac{\partial X_{t}}{\partial
t},N_{t}\right\rangle_{\eta}=&\,\kappa_{g}\ ,\\\ \left\langle\frac{\partial
X_{t}}{\partial t},X_{t}\right\rangle_{\eta}=&\,0\ ,\\\
\left|X_{0}\right|_{\eta}^{2}=x_{0}^{2}+y_{0}^{2}-z_{0}^{2}=&\,-1\ ,\
z_{0}\geq 1\ .\end{split}$
The last two equations are equivalent to the conditions $\left\langle
X_{t},X_{t}\right\rangle_{\eta}=-1$ and
$X_{0}=\left(x_{0},y_{0},z_{0}\right)$.
We are interested in those solutions of (2.20) that are of the form
(2.21) $X_{t}=A(t)X_{0}\ ,\ t\in J\ ,\ A(0)=\operatorname{id}\ ,$
where $A(t):=A_{1}(\zeta(t))A_{2}(\xi(t))A_{3}(\theta(t))$ with
$\zeta(0)=\xi(0)=\theta(0)=0$, with the $A_{i}$ defined in equations
(2.1)–(2.3). Then $t\mapsto X_{t}$ is called a _flow by isometries_. Motion by
an isometry preserves the curvature, so $\kappa_{g}(s,t)=\kappa_{g}(s)$ (i.e.,
it is independent of $t$).
As we saw in Proposition 2.1, if there is a (normalized or null) vector $v$
such that $a\langle T,v\rangle_{\eta}=\kappa_{g}$, then a system of three
scalar equations governs the moving frame. The next result shows that if a
curve shortening flow is also a flow by isometries, there is always such a $v$
which is constant with respect to the flow parameter $t$. For convenience, we
work with the unrescaled vector ${\tilde{v}}=av$ ($a(s)$ is used for another
purpose in the proof). We follow the proof of the analogous result for curves
in ${\mathbb{S}}^{2}$ [12, Theorem 2.2].
###### Proposition 2.3.
Let $X(s,t)=:X_{t}(s)$ be a smooth function of two variables such that
$X_{t}(s)$ is a regular unit speed curve for each $t\in J$, lying in the
hyperboloid $z=\sqrt{1+x^{2}+y^{2}}$ in ${\mathbb{M}}^{3}$ and evolving by
isometries as in (2.21). Then (2.20) holds if and only if there is a
${\tilde{v}}\in{\mathbb{M}}^{3}\backslash\\{0\\}$ such that
(2.22) $\left\langle T(s),{\tilde{v}}\right\rangle_{\eta}=\kappa_{g}(s)$
for all $s\in I$, where $T(s)=\frac{dX_{t_{0}}}{ds}=:X_{t_{0}}^{\prime}(s)$
and $\kappa_{g}(s)$ is the geodesic curvature of $X_{t_{0}}(s)=:X(s)$.
###### Proof.
Let $X_{t}(s)=A(t)X_{0}$, so that $X$ flows by isometries. Then at any
$t=t_{0}\in J$, we compute
(2.23) $\frac{\partial}{\partial
t}\bigg{|}_{t_{0}}X_{t}(s)=\left[\begin{array}[]{ccc}0&-{\dot{\theta}}&-{\dot{\xi}}\\\
{\dot{\theta}}&0&-{\dot{\zeta}}\\\
-{\dot{\xi}}&-{\dot{\zeta}}&0\end{array}\right]\left[\begin{array}[]{c}x_{t_{0}}(s)\\\
y_{t_{0}}(s)\\\
z_{t_{0}}(s)\end{array}\right]=\left[\begin{array}[]{c}-{\dot{\theta}}y_{t_{0}}-{\dot{\xi}}z_{t_{0}}\\\
{\dot{\theta}}x_{t_{0}}-{\dot{\zeta}}z_{t_{0}}\\\
-{\dot{\xi}}x_{t_{0}}-{\dot{\zeta}}y_{t_{0}}\end{array}\right]\ ,$
using (2.7) and (2.6). Let $T_{t_{0}}(s):=X_{t}^{\prime}(s)\big{|}_{t=t_{0}}$
be the unit tangent vector to $X_{t_{0}}(s)$ and let $N_{t_{0}}(s)$ be the
unit normal vector to $X_{t_{0}}(s)$ in the tangent space to the hyperboloid,
defined so that $\\{T_{t_{0}},N_{t_{0}},-X_{t_{0}}\\}$ is an oriented
orthonormal basis. Writing $N_{t_{0}}(s)=(a(s),b(s),c(s))$, we have
(2.24) $\begin{split}\left\langle
N_{t_{0}},X_{t_{0}}\right\rangle_{\eta}=&\,a(s)x_{t_{0}}(s)+b(s)y_{t_{0}}(s)-c(s)z_{t_{0}}(s)=0\,\\\
\left\langle
N_{t_{0}},T_{t_{0}}\right\rangle_{\eta}=&\,a(s)x_{t_{0}}^{\prime}(s)+b(s)y_{t_{0}}^{\prime}(s)-c(s)z_{t_{0}}^{\prime}(s)=0\
,\end{split}$
so that at each $s$ along $X_{t_{0}}$ we have
(2.25)
$N_{t_{0}}=(a(s),b(s),c(s))=\left(y_{t_{0}}^{\prime}z_{t_{0}}-y_{t_{0}}z_{t_{0}}^{\prime},z_{t_{0}}^{\prime}x_{t_{0}}-z_{t_{0}}x_{t_{0}}^{\prime},-x_{t_{0}}^{\prime}y_{t_{0}}+x_{t_{0}}y_{t_{0}}^{\prime}\right)\
.$
It’s not difficult to check that this vector has norm $1$ and has the correct
sign. Then
(2.26) $\begin{split}\left\langle N_{t_{0}},\frac{\partial X_{t}}{\partial
t}\bigg{|}_{t=t_{0}}\right\rangle_{\eta}=&\,\left[\begin{array}[]{ccc}a(s)&b(s)&c(s)\end{array}\right]\left[\begin{array}[]{ccc}1&0&0\\\
0&1&0\\\
0&0&-1\end{array}\right]\left[\begin{array}[]{c}-{\dot{\theta}}y-{\dot{\xi}}z\\\
{\dot{\theta}}x-{\dot{\zeta}}z\\\
-{\dot{\xi}}x-{\dot{\zeta}}y\end{array}\right]\\\
=&\,{\dot{\zeta}}\left(xyy^{\prime}-xzz^{\prime}-x^{\prime}y^{2}+x^{\prime}z^{2}\right)+{\dot{\xi}}\left(-xx^{\prime}y+zz^{\prime}y+x^{2}y^{\prime}-z^{2}y^{\prime}\right)\\\
&\,+{\dot{\theta}}\left(-xx^{\prime}z-yy^{\prime}z+x^{2}z^{\prime}+y^{2}z^{\prime}\right)\end{split}$
where we’ve removed the $t_{0}$ subscripts on the right to lessen the clutter.
Using that $x^{2}+y^{2}-z^{2}=-1$ and, therefore, that
$xx^{\prime}+yy^{\prime}=zz^{\prime}$, this last line simplifies and implies
that, for any curve moving by isometries described by
$A(\zeta(t),\xi(t),\theta(t))$, we have
(2.27) $\left\langle N_{t_{0}},\frac{\partial X_{t}}{\partial
t}\bigg{|}_{t=t_{0}}\right\rangle_{\eta}={\dot{\zeta}}x^{\prime}-{\dot{\xi}}y^{\prime}-{\dot{\theta}}z^{\prime}=\langle
T_{t_{0}},{\tilde{v}}\rangle_{\eta}\ ,$
where ${\tilde{v}}:=\left({\dot{\zeta}},-{\dot{\xi}},{\dot{\theta}}\right)$.
(Note that ${\tilde{v}}$ can be either timelike, spacelike, or null.)
To prove necessity (“only if”), by (2.20) we have $\left\langle\frac{\partial
X_{t_{0}}}{\partial t},N_{t_{0}}\right\rangle_{\eta}=\kappa_{g}(s)$, so
(2.28) $\left\langle
T_{t_{0}}(s),{\tilde{v}}\right\rangle_{\eta}\equiv\left\langle
T(s),{\tilde{v}}\right\rangle_{\eta}=\kappa_{g}(s)\ .$
To prove sufficiency, we note that one can by direct computation prove that if
$A_{i}(t)$ is any of the matrices (2.1)–(2.3) then in matrix notation
$(A_{i}^{\prime}(t))^{T}\eta A_{i}(t)=(A_{i}^{\prime}(0))^{T}\eta A_{i}(0)$
where $M^{T}$ denotes the transpose of $M$ (note that
$A_{i}(0)=\operatorname{id}$). For a flow of the form $X(s,t)=A_{i}(t)X(s)$,
we have $N(s,t)=A_{i}(t)N(s)$, so
(2.29) $\begin{split}\kappa_{g}(s,t)=&\,\left\langle\frac{\partial X}{\partial
t}(s,t),N(s,t)\right\rangle=X^{T}(s)\left(A_{i}^{\prime}(t)\right)^{T}\eta
A_{i}(t)N(s)=X^{T}(s)(A_{i}^{\prime}(0))^{T}\eta A_{i}(0)N(s)\\\
=&\,\kappa_{g}(s,0)\ .\end{split}$
Thus, any of the families of isometries listed in (2.1)–(2.3) produce curve
shortening flows such that $\kappa_{g}(s)$ is independent of $t$. So if there
is a vector $v$ and a unit speed curve $X(s)$ with curvature function
$\kappa_{g}(s)$ such that $\langle X^{\prime}(s),v\rangle=\kappa_{g}(s)$, then
a flow by any of these families will be a curve shortening flow with
$\kappa_{g}(s,t)=\kappa_{g}(s,0)\equiv\kappa_{g}(s)$. ∎
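The identity $(A_{i}^{\prime}(t))^{T}\eta A_{i}(t)=(A_{i}^{\prime}(0))^{T}\eta A_{i}(0)$ used in the sufficiency argument can also be confirmed numerically; the following sketch (ours, not part of the proof) checks it for the boost family (2.1):

```python
import numpy as np

# (A1'(t))^T eta A1(t) is independent of t for the x-boosts (2.1).
eta = np.diag([1.0, 1.0, -1.0])

def A1(z):
    c, s = np.cosh(z), np.sinh(z)
    return np.array([[c, 0, -s], [0, 1, 0], [-s, 0, c]])

def dA1(z):
    # Derivative of A1 with respect to its parameter
    c, s = np.cosh(z), np.sinh(z)
    return np.array([[s, 0, -c], [0, 0, 0], [-c, 0, s]])

for t in (0.0, 0.7, -1.9):
    assert np.allclose(dA1(t).T @ eta @ A1(t), dA1(0).T @ eta @ A1(0))
```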
###### Remark 2.4.
It is always possible to reduce a flow satisfying (2.20) to the form $\frac{\partial
X_{t}}{\partial t}=\kappa_{g}N_{t}$ by a reparametrization
$u\mapsto\varphi(u,t)$ of the curve $X(u,t)$ (e.g., [1, Proposition 1.1]).
## 3\. Examples
### 3.1. Geodesics
These are solutions of (2.18) with $\tau\equiv 0$, so $\mu\equiv 0$ as well,
and $\nu$ is constant. They are fixed points of the system (3.1) below. Since
$\mu=0$, geodesics are plane curves lying in the plane through the origin
orthogonal to $v$.
We may view the system (2.18) as an autonomous third-order system
(3.1) $\Phi^{\prime}=F_{a}(\Phi)\ ,$
where $\Phi=(\tau,\nu,\mu)$, $a>0$ is a constant, and
$F_{a}:{\mathbb{R}}^{3}\to{\mathbb{R}}^{3}:(\tau,\nu,\mu)\mapsto(a\tau\nu+\mu,-a\tau^{2},\tau)$.
We now show that at fixed points, the differential of $F_{a}$ has real
eigenvalues of all three signs (positive, negative, and zero), so geodesics
are unstable fixed points but attract in one direction.
###### Lemma 3.1.
For $F_{a}$ as above, we have $F_{a}=0\Leftrightarrow(\tau,\nu,\mu)=(0,\pm
1,0)=\pm e_{2}$, and the corresponding curves are geodesics. The eigenvalues
of $dF_{a}|_{e_{2}}$ are
(3.2) $\lambda=\ 0,\ \frac{a+\sqrt{a^{2}+4}}{2}>0,\
\frac{a-\sqrt{a^{2}+4}}{2}<0\ .$
The eigenvalues of $dF_{a}|_{-e_{2}}$ are the negatives of the above
eigenvalues.
Specifically, when $a=1$ and $(\tau,\nu,\mu)=(0,1,0)$ (respectively,
$(\tau,\nu,\mu)=(0,-1,0)$), then
$\lambda=0,\frac{1+\sqrt{5}}{2},\frac{1-\sqrt{5}}{2}$ (respectively,
$\lambda=0,\frac{-1+\sqrt{5}}{2},\frac{-1-\sqrt{5}}{2}$).
###### Proof.
The fixed points at $\pm e_{2}$ are obvious. Then since $\mu=0$ and $\tau=0$,
we have from (2.15) that $v$ is normal to a timelike plane containing $X$ and
$T$. The curve $X(s)$ is then the intersection curve of this timelike plane,
which contains the origin, and the hyperboloid. It is therefore a hyperbolic
great circle, and thus a geodesic (this can also be seen from (2.22)).
The differential of $F_{a}$ is given by
(3.3) $dF_{a}=\begin{pmatrix}a\nu&a\tau&1\\\ -2a\tau&0&0\\\
1&0&0\end{pmatrix}\ .$
Hence, at $\pm e_{2}=(0,\pm 1,0)$ we have
(3.4) $dF_{a}|_{\pm e_{2}}=\begin{pmatrix}\pm a&0&1\\\ 0&0&0\\\
1&0&0\end{pmatrix}\ .$
Therefore, $\lambda$ is an eigenvalue of $dF_{a}|_{\pm e_{2}}$ iff
$\lambda^{2}(\pm a-\lambda)+\lambda=-\lambda\left(\lambda^{2}\mp
a\lambda-1\right)=0$. ∎
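The eigenvalues in Lemma 3.1 can be confirmed numerically (our illustration, taking $a=1$):

```python
import numpy as np

# Eigenvalues of the Jacobian (3.3) at the fixed points (0, +-1, 0) of (3.1).
def dF(a, tau, nu, mu):
    # Jacobian of F_a(tau, nu, mu) = (a*tau*nu + mu, -a*tau**2, tau)
    return np.array([[a * nu, a * tau, 1.0],
                     [-2 * a * tau, 0.0, 0.0],
                     [1.0, 0.0, 0.0]])

a = 1.0
lam_plus = np.sort(np.linalg.eigvals(dF(a, 0.0, 1.0, 0.0)).real)
expected = np.sort([0.0, (a + np.sqrt(a**2 + 4)) / 2, (a - np.sqrt(a**2 + 4)) / 2])
assert np.allclose(lam_plus, expected)               # eigenvalues at +e2

lam_minus = np.sort(np.linalg.eigvals(dF(a, 0.0, -1.0, 0.0)).real)
assert np.allclose(lam_minus, np.sort(-expected))    # negatives at -e2
```

For $a=1$ this recovers $0$ and the roots $\frac{1\pm\sqrt{5}}{2}$ of $\lambda^{2}-\lambda-1=0$.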
### 3.2. Horocycles
In the Poincaré disk model, horocycles are Euclidean circles tangent to the
boundary of the Poincaré disk at one point and otherwise lying within the
disk. If the point of tangency is $(1,0)$ then the horocycle with centre
$(\varpi,0)$ can be written in parametrized form as
(3.5) $u(\rho)=\varpi+(1-\varpi)\cos\frac{\rho}{(1-\varpi)}\quad,\quad
v(\rho)=(1-\varpi)\sin\frac{\rho}{(1-\varpi)}\ ,$
where $\rho$ is a parameter. By inverting the stereographic projection
$(u,v)=\left(\frac{x}{1+z},\frac{y}{1+z}\right)$ where
$z=\sqrt{1+x^{2}+y^{2}}$, we obtain a parametric description of this horocycle
in the hyperboloidal model
(3.6) $x=\frac{\varpi^{2}s^{2}+2\varpi-1}{2\varpi(1-\varpi)}\quad,\quad
y=s\quad,\quad z=\frac{\varpi^{2}s^{2}+1}{2\varpi(1-\varpi)}-1\ .$
Here $s$ is an arclength parameter. It is straightforward to compute the
Frenet frame
(3.7) $\begin{split}T=&\,\left(\frac{\varpi s}{(1-\varpi)},1,\frac{\varpi
s}{(1-\varpi)}\right)\quad,\quad
N=\left(\frac{\varpi^{2}s^{2}-2\varpi^{2}+2\varpi-1}{2\varpi(1-\varpi)},s,\frac{\varpi^{2}s^{2}-2\varpi+1}{2\varpi(1-\varpi)}\right)\
,\\\
X=&\,\left(\frac{\varpi^{2}s^{2}+2\varpi-1}{2\varpi(1-\varpi)},s,\frac{\varpi^{2}s^{2}+1}{2\varpi(1-\varpi)}-1\right)\
.\end{split}$
Choosing $v=(0,1,0)$ we obtain
(3.8) $\tau=1\ ,\ \nu=-\mu=-s\ .$
Entering these values into the system (2.18), we see that the system is
satisfied if $a=1$. Then the relation $\kappa_{g}=a\tau$ yields
$\kappa_{g}=1$, as it must for horocycles.
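These properties of the lifted horocycle (3.6) are easy to confirm numerically (our own check, with the arbitrarily chosen value $\varpi=0.3$):

```python
import numpy as np

# The lift (3.6) of a horocycle lies on the unit hyperboloid, is unit
# speed, and has tau = <T, v>_eta = 1 for v = (0, 1, 0), so that
# kappa_g = a*tau = 1 with a = 1, as in (3.8).
eta = np.diag([1.0, 1.0, -1.0])
w = 0.3   # the centre parameter varpi, with 0 < w < 1

def X(s):
    d = 2 * w * (1 - w)
    return np.array([(w**2 * s**2 + 2 * w - 1) / d,
                     s,
                     (w**2 * s**2 + 1) / d - 1])

s, h = 0.5, 1e-6
T = (X(s + h) - X(s - h)) / (2 * h)   # numerical tangent vector
v = np.array([0.0, 1.0, 0.0])

assert np.isclose(X(s) @ eta @ X(s), -1.0)        # on the hyperboloid
assert np.isclose(T @ eta @ T, 1.0, atol=1e-6)    # unit speed
assert np.isclose(T @ eta @ v, 1.0, atol=1e-6)    # tau = 1
```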
As discussed by Grayson [6], under the curve shortening flow horocycles remain
horocycles, and so evolve self-similarly, with curvature $\kappa_{g}=1$
throughout the flow. In the next section, we will seek all solutions of the
system (2.18) corresponding to self-similar evolutions with $v=(0,1,0)$. This
will be the case of curves which evolve purely by a boost in
${\mathbb{M}}^{3}$.
### 3.3. Hypercycles
In the Poincaré disk model, hypercycles are arcs of Euclidean circles that end
when they meet the boundary of the Poincaré disk transversally. Under CSF they
do not evolve by isometries, but do evolve under the composition of isometries
and rescalings.
Applying a rotation if necessary, we can place the centre of the Euclidean
circle along one of the axes, say at $(\varpi,0)$ where $\varpi>0$. Let the
Euclidean radius of the circle be $c<1+\varpi$. We can parametrize the
hypercycle as we would any such Euclidean circle, say as
(3.9) $u(t)=\varpi+c\cos\frac{t}{c}\quad,\quad v(t)=c\sin\frac{t}{c}\ ,$
with the domain of $t$ chosen so that $u^{2}+v^{2}<1$.
In the hyperboloidal model, by inverting the stereographic projection to lift the above
parametrized curve to the unit hyperboloid we obtain
(3.10)
$\begin{split}x(t)=&\,\frac{2\left(\varpi+c\cos\frac{t}{c}\right)}{1-c^{2}-\varpi^{2}-2c\varpi\cos\frac{t}{c}}\quad,\quad
y(t)=\frac{2c\sin\frac{t}{c}}{1-c^{2}-\varpi^{2}-2c\varpi\cos\frac{t}{c}}\
,\\\
z(t)=&\,\frac{1+c^{2}+\varpi^{2}+2c\varpi\cos\frac{t}{c}}{1-c^{2}-\varpi^{2}-2c\varpi\cos\frac{t}{c}}\
.\end{split}$
These curves are curves of intersection of the hyperboloid
$z=\sqrt{1+x^{2}+y^{2}}$ in ${\mathbb{M}}^{3}$ with the planes
(3.11)
$2\varpi\left(x+\frac{1}{\varpi}\right)+\left(c^{2}-\varpi^{2}-1\right)(z+1)=0\
,$
for $c+\varpi>1$. These planes are timelike.
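Both facts can be verified numerically; the following sketch (ours, with the arbitrarily chosen values $\varpi=0.5$, $c=0.8$) checks that the lift (3.10) lies on the unit hyperboloid and in the plane (3.11):

```python
import numpy as np

# The lift (3.10) of a hypercycle lies on the unit hyperboloid and in
# the timelike plane (3.11).
w, c = 0.5, 0.8          # varpi and Euclidean radius, with c + w > 1
t = c * (np.pi + 0.5)    # parameter chosen so that u^2 + v^2 < 1

g = np.cos(t / c)
D = 1 - c**2 - w**2 - 2 * c * w * g
x = 2 * (w + c * g) / D
y = 2 * c * np.sin(t / c) / D
z = (1 + c**2 + w**2 + 2 * c * w * g) / D

assert np.isclose(x**2 + y**2 - z**2, -1.0)   # on the hyperboloid
assert np.isclose(2 * w * (x + 1 / w) + (c**2 - w**2 - 1) * (z + 1), 0.0)  # plane (3.11)
```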
In his seminal paper, Grayson computes the evolution of hypercycles in
${\mathbb{H}}^{2}$ under CSF [6, p 76]. The curve remains a hypercycle, but
the curvature evolves as
(3.12) $\kappa_{g}(s,t)=\kappa_{g}(t)=\frac{1}{\sqrt{1-Ae^{2t}}}\ .$
## 4\. Curve shortening flows evolving by isometries
### 4.1. Properties of solutions
From the system of equations (2.18), we can deduce the following lemmata. We
exclude the case of $\mu$ identically zero: in that case the curve $X(s)$ is
orthogonal to the constant vector $v$, and $X$ is a geodesic. We also
take $X(s)$ to be defined for all $s\in{\mathbb{R}}$. Note that by equations
(2.15) and (2.22) $\tau$, $\nu$, $\mu$, and $\kappa$ will be bounded on the
intersection of $X$ with any compact subset of ${\mathbb{H}}^{2}$, so
inextendible curves trapped in a bounded set have infinite arclength. For
those that escape any bounded set, the distance from any chosen
$p\in{\mathbb{H}}^{2}$ increases without bound, so the arclength parameter is
unbounded in this case as well.
###### Lemma 4.1 (Behaviour of $\mu$).
Let $(\tau(s),\nu(s),\mu(s))$ be a solution of the system (2.18) and let
$s_{0}$ be a critical point of $\mu$. If $\mu$ is not identically zero then
$s_{0}$ is the unique critical point of $\mu$. If $\mu(s_{0})>0$ it is a
global minimum. If $\mu(s_{0})<0$ it is a global maximum. Furthermore, $\mu$
has at most one zero. If $\mu$ converges as $s\to\infty$, then in fact
$\lim_{s\to\infty}\mu(s)=0$. Likewise, if $\mu$ converges as $s\to-\infty$,
then $\lim_{s\to-\infty}\mu(s)=0$. Otherwise, $\mu$ diverges to $\pm\infty$.
###### Proof.
We have from (2.18) that
(4.1)
$\mu^{\prime\prime}(s)=\tau^{\prime}(s)=a\tau(s)\nu(s)+\mu(s)=a\mu^{\prime}(s)\nu(s)+\mu(s)\
.$
Now at a critical point we have $\mu^{\prime}(s_{0})=\tau(s_{0})=0$. Since by
(2.17) $\nu$ cannot diverge at $s_{0}$ then from (4.1) we see that
$\mu^{\prime\prime}(s_{0})=\mu(s_{0})$. If $\mu(s_{0})>0$ then the second
derivative test implies that it must be a local minimum. Now say there is
another critical point at $s_{1}$, and that no critical point lies between
$s_{0}$ and $s_{1}$. Then necessarily $\mu(s_{1})>\mu(s_{0})>0$, so
$\mu(s_{1})$ is also a local minimum and so there must be a local maximum
between these critical points, which is a contradiction. Hence $s_{0}$ is the
unique critical point, and therefore a global minimum. The dual statement for
the case of $\mu(s_{0})<0$ now follows in precisely similar fashion. In either
of these cases, $\mu$ has no zero.
If $\mu(s_{0})=0$ but $\mu(s)$ is not identically zero, then either (i)
$\mu(s)<0$ for $s_{0}<s<s_{0}+\delta$ (for some $\delta>0$) or (ii) $\mu(s)>0$
for $s_{0}<s<s_{0}+\delta$. In case (i), then by the above argument the next
critical point at some $s_{1}>s_{0}$ must have $\mu^{\prime\prime}(s_{1})<0$
and would therefore be a local maximum, a contradiction; in case (ii) it must
have $\mu^{\prime\prime}(s_{1})>0$, again a contradiction. So there is no
critical point at $s_{1}>s_{0}$. A similar argument shows that there is no
critical point at $s_{1}<s_{0}$ either. Therefore, if $\mu(s_{0})=0$ then either
$s_{0}$ is the unique critical point, or $\mu$ has no critical points at all
in this case, and in either situation $s_{0}$ is then the unique zero of
$\mu$.
Since $\mu$ has at most one critical point $s_{0}$, on the domain $s<s_{0}$
the function $\mu$ is monotonic and $\tau$ has a fixed sign. The same statements
are true on the domain $s>s_{0}$, and for all $s$ if $\mu$ has no critical
point. Then if $|\mu|$ does not diverge to $\infty$, $\mu$ will converge. By
way of contradiction, assume that $\mu$ converges to a constant $c\neq 0$.
Then from the third equation in (2.18), recalling that $\tau$ has a fixed sign, we
have $\tau\to 0$. But then $\eta(v,v)=\tau^{2}+\nu^{2}-\mu^{2}$, so
$\tau^{2}+\nu^{2}$ is bounded, which implies that $\nu^{2}$ cannot diverge to
$\infty$. Then the first equation in (2.18) yields $\tau^{\prime}\to c$. But
then $\tau$ would diverge, a contradiction, unless $c=0$. ∎
###### Lemma 4.2 (Behaviour of $\nu$).
The value $s_{0}$ is a critical point of $\nu$ if and only if $s_{0}$ is a
critical point of $\mu$, and then $s_{0}$ is a point of inflection for $\nu$,
which is therefore monotonic. Either $\nu$ is bounded and thus converges or it
diverges to $+\infty$ as $s\to-\infty$ or to $-\infty$ as $s\to+\infty$.
###### Proof.
Since by (2.18) we have $\nu^{\prime}=-a\tau^{2}=-a(\mu^{\prime})^{2}$, then
it is obvious that $s_{0}$ is a critical point of $\nu$ if and only if $s_{0}$
is a critical point of $\mu$, and that $\nu(s)$ is monotonic so any critical
point is an inflection point. It also follows from monotonicity that if $\nu$
is bounded it converges as $s\to\pm\infty$, and since it decreases
monotonically, if it is not bounded it diverges as claimed. ∎
Notice that we have both a monotone quantity $\nu$ and a conserved quantity
$\eta(v,v)=\tau^{2}+\nu^{2}-\mu^{2}$.
###### Lemma 4.3 (Linear growth).
Let $(\tau(s),\nu(s),\mu(s))$ be a solution of the system (2.18) such that
$\mu$ diverges as $s\to\infty$. Then
$\tau(s),\nu(s),\mu(s)\in{\mathcal{O}}(s)$; i.e., they are each bounded above
in magnitude by $Cs$ for some constant $C>0$. This is also true as
$s\to-\infty$.
###### Proof.
Since $x\mapsto x^{2}$ is a convex function, Jensen’s inequality yields for
$s>s_{0}$ that
(4.2)
$\left(\frac{1}{(s-s_{0})}\int\limits_{s_{0}}^{s}\tau(r)dr\right)^{2}\leq\frac{1}{(s-s_{0})}\int\limits_{s_{0}}^{s}\tau^{2}(r)dr\
.$
Since $\mu^{\prime}=\tau$ and $\nu^{\prime}=-a\tau^{2}$, the above inequality
implies that
(4.3)
$\begin{split}&\,\frac{\left(\mu(s)-\mu(s_{0})\right)^{2}}{\left(s-s_{0}\right)^{2}}\leq-\frac{\left(\nu(s)-\nu(s_{0})\right)}{a\left(s-s_{0}\right)}\\\
\implies&\,\mu(s)\leq\mu(s_{0})+\sqrt{\frac{1}{a}\left(s-s_{0}\right)\left(\nu(s_{0})-\nu(s)\right)}\
.\end{split}$
In the last line, we recall that $\nu$ is a decreasing function. Writing
$\mu_{0}:=\mu(s_{0})$ and $\nu_{0}:=\nu(s_{0})$, then
(4.4)
$\eta(v,v)\geq\tau^{2}+\nu\left[\nu+\frac{(s-s_{0})}{a}\right]-2\mu_{0}\sqrt{\frac{1}{a}\left(s-s_{0}\right)\left(\nu_{0}-\nu\right)}-\frac{(s-s_{0})}{a}\nu_{0}-\mu_{0}^{2}\
,$
so that
(4.5)
$\frac{\eta(v,v)+\mu_{0}^{2}}{(s-s_{0})^{2}}+\frac{\nu_{0}}{a(s-s_{0})}+2\mu_{0}\sqrt{\frac{\left(\nu_{0}-\nu\right)}{a(s-s_{0})^{3}}}\geq\frac{\tau^{2}}{(s-s_{0})^{2}}+\frac{\nu}{(s-s_{0})}\left[\frac{\nu}{(s-s_{0})}+\frac{1}{a}\right]\
.$
If there is a sequence of values $s_{i}\to\pm\infty$ such that either
$|\tau(s_{i})|$ or $|\nu(s_{i})|$ (or both) grows faster than linearly, then
this inequality cannot hold. But then by (4.3), $|\mu|$ also cannot grow
faster than linearly. ∎
###### Lemma 4.4 (Bounded curvature).
The curvature and $\tau$ are bounded.
###### Proof.
Since $\kappa_{g}=a\tau$, the curvature $\kappa_{g}$ is bounded iff $\tau$ is.
Consider first $s\to\infty$. If $\nu$ diverges, then necessarily
$\nu\to-\infty$ as $s\to\infty$, so take $s_{1}$ such that $a\nu(s_{1})+2<0$.
Since $\nu$ is monotonic, then $a\nu(s)+2<0$ for all $s\geq s_{1}.$ Since
$\mu$ has at most one critical point and therefore $\tau$ has at most one
zero, by increasing $s_{1}$ if necessary we can take $\tau(s)>0$ and
$\mu(s)>0$ for $s\geq s_{1}$. (We will deal with other cases at the end.)
Under these circumstances, we have from equations (2.18) and (2.17) that
(4.6)
$\begin{split}\tau^{\prime}=&\,a\nu\tau+\mu=a\nu\tau+\sqrt{\tau^{2}+\nu^{2}-\eta(v,v)}\leq
a\nu\tau+\sqrt{\tau^{2}+\nu^{2}+\left|\eta(v,v)\right|}\\\
\leq&\,a\nu\tau+2\sqrt{\tau^{2}+\nu^{2}}\ ,\end{split}$
where we may have to increase $s_{1}$ so that the last inequality holds
(indeed, we have $\tau^{\prime}\leq a\nu\tau+\sqrt{\tau^{2}+\nu^{2}}$ if $v$
is null or spacelike). Furthermore, since $\nu<0$ and $\tau>0$, then
$\sqrt{\tau^{2}+\nu^{2}}<\sqrt{\tau^{2}+\nu^{2}-2\tau\nu}=\tau-\nu$, so we get
from (4.6) that
(4.7) $\tau^{\prime}<a\nu\tau+2\tau-2\nu=(a\nu+2)\tau-2\nu\ .$
But $a\nu+2<0$, and so whenever $\tau>\frac{2\nu}{a\nu+2}$ equation (4.7) will
yield $\tau^{\prime}<2\nu-2\nu=0$. Rewriting
(4.8)
$\frac{2\nu}{a\nu+2}=\frac{2}{a}-\frac{4}{a(a\nu(s)+2)}\leq\frac{2}{a}-\frac{4}{a(a\nu(s_{1})+2)}=\frac{2\nu(s_{1})}{a\nu(s_{1})+2}$
using the monotonicity of $\nu$, we therefore obtain for $s\geq s_{1}$ that
(4.9)
$0<\tau(s)\leq\max\left\{\tau(s_{1}),\frac{2\nu(s_{1})}{a\nu(s_{1})+2}\right\}\,.$
Next we deal briefly with the case where $\tau(s_{1})<0$ and $\mu(s_{1})<0$.
Then subsequently $\tau(s)<0$ and $\mu(s)<0$ for all $s>s_{1}$. As above,
$\nu(s)<0$ and $\nu\to-\infty$. Repeating the calculations of equations (4.6)
and (4.7) with the appropriate sign changes, we now have
(4.10)
$\begin{split}\tau^{\prime}=&\,a\nu\tau-\sqrt{\tau^{2}+\nu^{2}-\eta(v,v)}\geq a\nu\tau-\sqrt{\tau^{2}+\nu^{2}+\left|\eta(v,v)\right|}\\
\geq&\,a\nu\tau-2\sqrt{\tau^{2}+\nu^{2}}\geq a\nu\tau+2(\tau+\nu)\\
=&\,(a\nu+2)\tau+2\nu\ ,\end{split}$
where in the last inequality we used $\sqrt{\tau^{2}+\nu^{2}}\leq-\tau-\nu$, since both $\tau(s)<0$ and $\nu(s)<0$. Since $a\nu(s)+2<0$ for $s\geq s_{1}$ (increasing $s_{1}$ if necessary), we have $\tau^{\prime}>0$ whenever $\tau<-\frac{2\nu}{a\nu+2}<0$. Arguing as in (4.8), the monotonicity of $\nu$ gives $-\frac{2\nu(s)}{a\nu(s)+2}\geq-\frac{2\nu(s_{1})}{a\nu(s_{1})+2}$ for $s\geq s_{1}$, and therefore
$0>\tau(s)\geq\min\left\{\tau(s_{1}),-\frac{2\nu(s_{1})}{a\nu(s_{1})+2}\right\}\,.$
Hence $\tau$ is bounded in this case as well.
To treat the limit $s\to-\infty$, note that we can replace $s$ by $r:=-s$ in
the derivatives in (2.18) and then treat the limit $r\to\infty$. If we also
replace $\mu\mapsto-\mu$ and $\nu\mapsto-\nu$, we recover the same system as
(2.18). We will again obtain that $\tau$ is bounded as $r\to\infty$ and
therefore as $s\to-\infty$. ∎
###### Lemma 4.5 (Convergence).
If either $\mu$ or $\nu$ is bounded, then all three functions $\tau$, $\nu$,
and $\mu$ converge, with $\mu\to 0$, $\tau\to 0$, and
$\nu\to\pm\sqrt{\eta(v,v)}$. This can occur only when $v$ is achronal (i.e.,
spacelike or null). If $\mu$ and $\nu$ are not bounded, then
$\tau\to\pm\frac{1}{a}$.
###### Proof.
If $\mu$ is bounded (and thus converges to zero by Lemma 4.1), then from
(2.17) it is clear that $\tau$ and $\nu$ are bounded. Since $\nu$ is
monotonic, it must converge, and then by (2.17) so must $\tau$.
If instead $\nu$ is bounded, then since it is monotonic it must converge, and, as argued in the proof of the previous lemma, we have $\tau\to 0$. Then by
(2.17) $\mu$ converges as well. By Lemma 4.1, then $\mu\to 0$. Hence (2.17)
yields $\nu^{2}\to\eta(v,v)$, which requires that $\eta(v,v)\geq 0$ so $v$
must be achronal.
If instead neither $\mu$ nor $\nu$ are bounded, then write (2.17) as
(4.11) $(\nu+\mu)(\nu-\mu)=\tau^{2}+\eta(v,v)\ .$
Since $\tau$ is bounded, so is the product $(\nu+\mu)(\nu-\mu)$. But neither
$\mu$ nor $\nu$ is bounded so one factor in this product diverges and so the
other factor must converge to zero. Then $\lim_{s\to\infty}\frac{\mu}{\nu}=\pm
1$.
Because $\tau$ is bounded, $\limsup_{s\to\infty}\tau(s)$ and
$\liminf_{s\to\infty}\tau(s)$ exist. Now
$\limsup_{s\to\infty}\tau(s)=\lim_{\alpha\to\infty}\tau(s_{\alpha})$ where
each $\tau(s_{\alpha})$ is a local maximum of $\tau$. At each local maximum,
we have by the first equation in (2.18) that
$\tau(s_{\alpha})=-\frac{\mu(s_{\alpha})}{a\nu(s_{\alpha})}$, so
$\limsup_{s\to\infty}\tau=-\limsup_{s\to\infty}\frac{\mu(s_{\alpha})}{a\nu(s_{\alpha})}=\mp\frac{1}{a}$
using the result of the previous paragraph. But we can replace the local
maxima $\tau(s_{\alpha})$ by local minima, say $\tau(s_{\beta})$ and obtain
$\liminf_{s\to\infty}\tau=\mp\frac{1}{a}$ (with the same choice of sign).
Hence $\lim_{s\to\infty}\tau=\mp\frac{1}{a}$. ∎
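The qualitative picture of Lemmata 4.4 and 4.5 can also be observed numerically. The sketch below (ours, not part of the argument) integrates the first-order system used in the proofs above, namely $\mu^{\prime}=\tau$, $\nu^{\prime}=-a\tau^{2}$, $\tau^{\prime}=a\nu\tau+\mu$, with a classical Runge-Kutta scheme; the quantity $\tau^{2}+\nu^{2}-\mu^{2}=\eta(v,v)$ is conserved along solutions, and for initial data with $\mu$ and $\nu$ divergent one sees $\tau\to\pm\frac{1}{a}$. The value $a=1$, the initial data and the step size are illustrative choices.

```python
# Numerical sketch (ours, not part of the proof): integrate the system
#   mu' = tau,  nu' = -a*tau**2,  tau' = a*nu*tau + mu
# and watch the conserved quantity tau**2 + nu**2 - mu**2 = eta(v,v)
# together with the limiting behaviour of tau.

def rhs(a, state):
    tau, nu, mu = state
    return (a*nu*tau + mu, -a*tau**2, tau)

def rk4(a, state, s_end, h=1e-3):
    """Classical 4th-order Runge-Kutta from s = 0 to s = s_end."""
    for _ in range(int(s_end/h)):
        k1 = rhs(a, state)
        k2 = rhs(a, tuple(y + 0.5*h*k for y, k in zip(state, k1)))
        k3 = rhs(a, tuple(y + 0.5*h*k for y, k in zip(state, k2)))
        k4 = rhs(a, tuple(y + h*k for y, k in zip(state, k3)))
        state = tuple(y + (h/6)*(d1 + 2*d2 + 2*d3 + d4)
                      for y, d1, d2, d3, d4 in zip(state, k1, k2, k3, k4))
    return state

def eta(state):
    # the constraint tau^2 + nu^2 - mu^2, constant along solutions
    tau, nu, mu = state
    return tau**2 + nu**2 - mu**2

a = 1.0
start = (0.5, 1.0, 0.3)   # illustrative data with eta(v,v) = 1.16 > 0 (spacelike v)
end = rk4(a, start, 20.0)
```

For these data $\nu$ decreases without bound, $\mu$ grows, and $\tau$ settles near $1/a$, as the lemmata predict.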
### 4.2. Spacelike ${\tilde{v}}$
Each spacelike vector ${\tilde{v}}\in{\mathbb{M}}^{3}$ is contained in a
$1$-parameter family of planes, exactly two of which are null. Boosts in the
spacelike directions orthogonal to each such plane preserve the planes and
preserve $\pm{\tilde{v}}$, but do not preserve any spacelike vector in these
planes.
We can apply a constant boost to the standard basis of ${\mathbb{M}}^{3}$ so
that an arbitrary spacelike ${\tilde{v}}$ has the form
${\tilde{v}}=({\tilde{v}}_{1},{\tilde{v}}_{2},0)$ in the transformed basis. A
subsequent constant rotation of the basis brings ${\tilde{v}}$ to the form
${\tilde{v}}=a(0,1,0)$ for some $a>0$. As before, we will normalize and write
$v={\tilde{v}}/a$. Then $v$ lies in the intersection of the null plane
$\operatorname{Span}\{(0,1,0),(1,0,1)\}$ with the vertical $x=0$ plane,
while $\mu(s)$ as defined by (2.15) is simply the $y$-coordinate of $X(s)$.
The curve of intersection of the $x=0$ plane and the unit hyperboloid is the
geodesic in ${\mathbb{M}}^{3}$ whose trace is the graph of $z=\sqrt{1+y^{2}}$,
$x=0$. The intersection of the unit hyperboloid and the plane $y=0$ orthogonal
to $v$ is also a geodesic, which we denote by $\Gamma_{v}$. We will label the
$y>0$ half-space (into which $v$ points) in ${\mathbb{M}}^{3}$ by $H_{+}$ and
the $y<0$ half-space by $H_{-}$.
###### Lemma 4.6.
Let $X(s)$ be a non-geodesic curve evolving by isometries under CSF with
spacelike ${\tilde{v}}$ and let
$v={\tilde{v}}/\sqrt{\eta({\tilde{v}},{\tilde{v}})}$. Then exactly one of the
following holds:
* (i)
$\mu(s)$ has exactly one critical point $s_{0}$, a minimum, $X(s)$ lies
entirely in $H_{+}$, and $X(s)$ extends to infinity at both ends. See Figure
1.
* (ii)
$\mu(s)$ has exactly one critical point $s_{0}$, a maximum, $X(s)$ lies
entirely in $H_{-}$, and $X(s)$ extends to infinity at both ends. See Figure
2.
* (iii)
$\mu(s)$ has no critical point, and either converges to $\Gamma_{v}$ at one
end while extending to infinity at the other or intersects $\Gamma_{v}$ at one
point and extends to infinity at both ends. See Figures 3 and 4.
###### Proof.
If $\mu$ has a critical point in $H_{+}$, by Lemma 4.1 it is a global minimum.
Then $\mu(s)\geq\mu(s_{0})$ for all $s$, so the curve must remain in $H_{+}$.
Likewise if $\mu$ has a critical point in $H_{-}$, it is a global maximum, and
so the curve must remain in $H_{-}$. In either case, $\mu$ is bounded away
from zero so by Lemma 4.1 $\mu\to\infty$ at both ends $s\to\pm\infty$ if the
curve is in $H_{+}$, and $\mu\to-\infty$ at both ends $s\to\pm\infty$ if the
curve is in $H_{-}$.
If a critical point $s_{0}$ of $\mu$ were to lie on $\Gamma_{v}$, then
$\mu(s_{0})=\mu^{\prime}(s_{0})=0$. Then $\tau(s_{0})=0$ and $\nu(s_{0})=\pm
1$. (We can further deduce that $\tau^{\prime}(s_{0})=0$ and
$\mu^{\prime\prime}(s_{0})=0$.) For these initial data, the unique maximal
solution of the system (2.18) has $\tau(s)=0$ for all $s$, and thus
$\kappa_{g}(s)=a\tau(s)=0$ for all $s$, so the corresponding curve is a
geodesic, contrary to assumption. This possibility therefore does not occur.
If a curve has no critical point of $\mu$ then $\mu$ is monotonic so $\mu$ has
at most one zero. Then the curve meets $\Gamma_{v}$ in at most one point. If
it does not meet $\Gamma_{v}$, then it must approach $\Gamma_{v}$ in the
limit, since it cannot have a limit for $\mu$ other than $\mu\to 0$. ∎
Figure 1. Curve in the Poincaré disk model, evolving by isometry with
spacelike ${\tilde{v}}$, as in Lemma 4.6.(i). The valley in the graph of
$\tau$ indicates the maximum of $|\kappa_{g}|$.
Figure 2. Curve in the Poincaré disk model evolving by isometries with
spacelike ${\tilde{v}}$, as in Lemma 4.6.(ii).
Figure 3. Curve in the Poincaré disk model evolving by isometries with
spacelike ${\tilde{v}}$ and converging to a geodesic at one end, as in Lemma
4.6.(iii).
Figure 4. Curve in the Poincaré disk model of hyperbolic space that evolves by
isometry with spacelike $v$ corresponding to Lemma 4.6.(iii), such that $\mu$
has a zero and diverges at both ends.
### 4.3. Timelike ${\tilde{v}}$
For a timelike ${\tilde{v}}$, one can apply a constant boost to the axes of
the standard basis of ${\mathbb{M}}^{3}$ so that in the boosted basis
${\tilde{v}}$ takes the form
${\tilde{v}}=(0,0,{\dot{\theta}})={\dot{\theta}}(0,0,1)={\dot{\theta}}v$. In
this case, $\mu$ is the negative of the $z$-coordinate of the curve. We’ve
chosen the sign so that $v$ is future-timelike, though ${\tilde{v}}$ can be
either future- or past-timelike depending on the sign of ${\dot{\theta}}$.
###### Lemma 4.7.
Let $X(s)$ be a non-geodesic curve evolving by isometries under CSF with
future-timelike $v$. Then there is exactly one critical point of $\mu(s)$
along $X(s)$, a global maximum, and $\mu\to-\infty$ as $s\to\pm\infty$ along
$X(s)$. See Figures 5 and 6.
###### Proof.
Since both $X$ and $v$ are future-timelike and nonzero, then $\mu(s)\leq c<0$ for some constant $c<0$ and all $s$, so by Lemma 4.1 $\mu$ has a negative global
maximum and no other critical point. Since $\mu(s)$ cannot converge to $0$
along $X(s)$, it cannot converge at all and so must diverge to $-\infty$ as
$s\to\pm\infty$. ∎
Since $\mu$ is proportional to $-z$ along a soliton with future-timelike $v$, then $\mu\to-\infty$ implies that the curve $X(s)$ extends to infinity along the unit hyperboloid in ${\mathbb{M}}^{3}$.
Figure 5. Curve in the Poincaré disk model evolving by isometries with
timelike ${\tilde{v}}$.
Figure 6. Curve in the Poincaré disk model evolving by isometries with
timelike ${\tilde{v}}$, which passes (nearly) through the origin, where $\mu$
is maximized.
### 4.4. Null ${\tilde{v}}$
The remaining possibility is that ${\tilde{v}}$ is null. Applying a constant
rotation to the axes if necessary, we can take ${\tilde{v}}=a(0,1,1)=:av$, so
$\mu=y(s)-z(s)<0$ along $X(s)=(x(s),y(s),z(s))$, since $y<z$ at any point on the unit hyperboloid $z=\sqrt{1+x^{2}+y^{2}}$ in ${\mathbb{M}}^{3}$.
###### Lemma 4.8.
Let $X(s)$ be a non-geodesic curve evolving by isometries under CSF with
future-null $v$. Then there is at most one critical point of $\mu(s)$ along
the curve $X(s)=(x(s),y(s),z(s))$, a global maximum, and $z(s)\to\infty$ at
both ends of its domain. See Figure 7.
###### Proof.
Since $y<z$ at each point of the hyperboloid $x^{2}+y^{2}-z^{2}=-1$, $z>0$,
then $\mu(s)=y(s)-z(s)<0$ so by Lemma 4.1 there is at most one critical point
$s_{0}$ of $\mu$, and it must be a global maximum. When this occurs, then
$\mu(s)\leq\mu(s_{0})<0$ so $\mu(s)$ cannot converge to zero, and so
$\mu\to-\infty$ and $z\to\infty$ as $s\to\pm\infty$.
On the other hand, if there is no such maximum then $\mu$ converges to zero
from below as $s\to\infty$ (or as $s\to-\infty$, but not both). But for
future-null $v$, $\left\langle v,X\right\rangle\equiv\mu\to 0$ implies that
$y-z\to 0$ along the curve. Since $x^{2}+(y-z)(y+z)=-1$ and $z>0$, this can only happen if $y,z\to+\infty$. ∎
For past-null $v$, we take $v\mapsto-v$ so $\mu\mapsto-\mu$ and the global
maximum in the lemma becomes a global minimum.
Figure 7. Curve in the Poincaré disk model evolving by isometries with null
${\tilde{v}}$.
### 4.5. Proof of the main theorem
First we give a brief proof of the following simple property.
###### Lemma 4.9.
Curves evolving by isometries in ${\mathbb{H}}^{2}$ have no self-intersections, and are properly embedded.
###### Proof.
By way of contradiction, say that a curve $X(s)$ self-intersects at two
parameter values $s_{2}>s_{1}$ with no self-intersections in between. At the
self-intersection, the ingoing and outgoing tangents make angle $\alpha$. The
region bounded by $X(s)$ for $s_{1}\leq s\leq s_{2}$ has area $A>0$. Since the
Gauss curvature of ${\mathbb{H}}^{2}$ is $-1$, Gauss-Bonnet yields
(4.12) $\oint\limits_{s_{1}}^{s_{2}}\kappa_{g}ds=A+\pi+\alpha>\pi\ .$
But at a point of self-intersection we must have $\mu(s_{1})=\mu(s_{2})$, and
so
(4.13)
$0=\mu(s_{2})-\mu(s_{1})=\oint\limits_{s_{1}}^{s_{2}}\mu^{\prime}(s)ds=\oint\limits_{s_{1}}^{s_{2}}\tau
ds=\frac{1}{a}\oint\limits_{s_{1}}^{s_{2}}\kappa_{g}ds\ .$
Comparing (4.12) and (4.13), we arrive at a contradiction, so there are no
self-intersections. Then since every curve in Lemmata 4.6–4.8 escapes any
bounded set, there are no accumulation points either, so the curves are
properly embedded. ∎
The main theorem now follows easily from the above results.
###### Proof of Theorem 1.2.
Choose a vector ${\tilde{v}}$. If it is not null, write ${\tilde{v}}=av$ such
that $v$ is normalized and define the variables $\tau$, $\nu$, and $\mu$ along
smooth curves $X(s)$ using (2.15). If we require that the system (2.18) holds
then from Proposition 2.2, each solution of (2.18) determines a curve $X(s)$.
Solutions of (2.18) are parametrized by the initial data
$(\tau_{0},\nu_{0},\mu_{0})$ for (2.18), subject to the constraint (2.20) at
$s=0$. Hence we obtain $2$-parameter families of solutions of (2.18) and of
the associated curves $X(s)$. The solution curves have domain
$s\in{\mathbb{R}}$, for $s$ a unit speed parameter, and $\mu=\langle
X,v\rangle$ is divergent at least at one end, so the solutions are complete
and noncompact.
If $\mu$ converges at one end, then by Lemma 4.1 we have $\mu\to 0$ while by
Lemma 4.5 we also have $\tau\to 0$ and thus by Proposition 2.3 then
$\kappa_{g}\to 0$. In this case $X(s)$ approaches a geodesic at this end.
Lemmata 4.6–4.8 give conditions under which this situation does not arise.
If $\mu$ diverges, invoking Lemma 4.5 for $s\to\infty$, or for $s\to-\infty$ as the case may be, we have $\tau\to\pm\frac{1}{a}$, so the curvature obeys $\kappa_{g}=a\tau\to\pm 1\neq 0$, and $X$ approaches a horocycle.
Lemma 4.9 establishes that $X$ is properly embedded. ∎
# Darboux Transformations for orthogonal differential systems and differential
Galois Theory
Primitivo ACOSTA-HUMÁNEZ
Instituto de Matemática
Universidad Autónoma de Santo Domingo
Dominican Republic
<EMAIL_ADDRESS>
&Moulay BARKATOU
Institute XLim
Université de Limoges & CNRS
Limoges, France
<EMAIL_ADDRESS>
Raquel SÁNCHEZ-CAUCE
Department of Artificial Intelligence
Universidad Nacional de Educación a Distancia (UNED)
Madrid, Spain
<EMAIL_ADDRESS>
&Jacques-Arthur WEIL
Institute XLim
Université de Limoges & CNRS
Limoges, France
<EMAIL_ADDRESS>
###### Abstract
Darboux developed an algebraic mechanism to construct an infinite chain of “integrable” second order differential equations, together with their solutions. After a surprisingly long time, Darboux’s results found important applications in the analytic context, for instance in quantum mechanics, where they provide a convenient framework for Supersymmetric Quantum Mechanics. Today there are many papers on the use of Darboux transformations in various contexts, not only in mathematical physics. In this paper, we develop a generalization of the Darboux transformations for tensor product constructions on linear differential equations or systems. Moreover, we provide explicit Darboux transformations for $\mathrm{sym}^{2}(\mathrm{SL}(2,\mathbb{C}))$ systems and, as a consequence, also for $\mathfrak{so}(3,C_{K})$ systems, to construct an infinite chain of integrable (in the Galois sense) linear differential systems. We introduce SUSY toy models for these tensor products, giving as an illustration the analysis of some shape invariant potentials.
_Keywords_ Darboux transformations $\cdot$ differential Galois group $\cdot$
differential Galois Theory $\cdot$ Frenet-Serret formulas $\cdot$ orthogonal
differential systems $\cdot$ rigid solid problem $\cdot$ Schrödinger equation
$\cdot$ shape invariant potentials $\cdot$ supersymmetric quantum mechanics
$\cdot$ symmetric power $\cdot$ tensor product.
MSC 2010. 12H05; 35Q40; 81Q60
## Introduction
In this paper we study, from a Galoisian point of view, a combination of two
results of Darboux: the first one is the celebrated Darboux transformation
[15], see also [17, 18], and the second one, from [18, 19], shows how to
express solutions of a system with an orthogonal matrix using solutions of a
second order differential equation. This combination will allow us to
generalize Darboux transformation to higher order systems.
The first result, the Darboux transformation, has been studied from different
points of view for ordinary differential equations as well as for partial
differential equations and as application in physics (supersymmetric quantum
mechanics). The starting point was Darboux’s result given in [15], see also
[17, 18], followed by plenty of papers and books: for example, we refer to the
seminal works of Witten (supersymmetric quantum mechanics [33]) and
Gendenshteïn (shape invariant potentials [23] ). A matrix formalism for
Darboux transformations for Schrödinger equation was developed in the
beginning of the twenty first century by Pecheritsin, Pupasov and Samsonov,
see [29]. Recently, a Galoisian approach to Darboux transformations and shape
invariant potentials has been proposed in [1, 2, 4], where it was proved that
the Darboux transformation preserves the galoisian structure of the
differential equation (the Darboux transformation is _isogaloisian_). A
similar approach was presented in [25, 26, 27] in the context of integrable
systems. There, the authors studied the behavior of the galoisian structure of
some families of linear systems with respect to Darboux transformations. An
extension of the Darboux transformation for polynomial vector fields has been
developed by the first author and Pantazi in [5]. They give a mechanism to
construct an infinite chain of Darboux-integrable polynomial vector fields using the same philosophy as Darboux in [15, 18]. In the formalism of supersymmetric quantum mechanics, the shape invariance property plays a similar role. An important feature of the Darboux transformation is that it preserves algebraic and analytic conditions; thus, any generalization of the Darboux transformation should preserve conditions of this kind.
The second result of Darboux used here concerns the transformation of three
dimensional linear differential systems $X^{\prime}=-AX$, where $A$ is a skew-
symmetric matrix ($A\in\mathfrak{so}(3,\mathbb{C})$) so that it can be reduced
to solving a Riccati equation, see [18, 19]. This reduction has been used for
example to study the integrability of dynamical systems, see [3, 21]. We will
show how to understand and systematize this little miracle by viewing
$\mathfrak{so}(3,\mathbb{C})$ as a tensor product of
$\mathrm{SL}(2,\mathbb{C})$. To achieve this, we introduce gauge
transformations and the tensor constructions from representation theory: see
[6, 8, 7] for an introduction to these objects in the context of integrability
for dynamical systems.
In this mostly self-contained paper, we present a methodology to extend
Darboux transformations to classes of higher order systems. Our approach
maintains the original "shape invariance" which is the core of Darboux
transformation. We use linear differential equations tools such as gauge
transformations, differential Galois theory and tensor constructions.
The Darboux transformation may be viewed as a gauge transformation, an idea
present in [4, 1, 2]; we give here (Proposition 1) a matrix factorization of
this gauge transformation which in turn gives a simpler expression for the
solutions of a Darboux transformation.
This allows us to extend Darboux transformations to systems whose Galois group
is represented as a symmetric power of $\mathrm{SL}(2,\mathbb{C})$. We then
extend Darboux transformation to $\mathfrak{so}(3,\mathbb{C})$-systems which
allows us to construct infinite chains of integrable (in Galois sense) linear
differential systems. As an application of this approach, we study shape
invariant potentials in Supersymmetric Quantum Mechanics for Schrödinger
equations and we apply this formalism to recover some results from other
authors (Fedorov, Maciejewski, Przybylska and others) related to
$\mathfrak{so}(3,\mathbb{C})$ systems, in particular, the rigid solid problem
and Frenet-Serret formulas in section 3.2.
## 1 Preliminaries
This section contains a brief summary that will be used along this paper. We
present the original results of Darboux as well as a preliminary material
concerning invariants and symmetric powers.
### 1.1 Differential Galois Groups
Differential Galois theory, also known as Picard-Vessiot theory, is analogous
to the classical Galois theory for polynomials; it describes algebraic
relations that may exist between solutions of linear differential equations
and their derivatives, see [14, 30]. A differential field $K$, depending on a
variable $x$, is a field equipped with a derivation
$\partial_{x}=\,^{\prime}\,$. We denote by $C_{K}$ the field of constants of
$K$, the set of $c\in K$ such that $c^{\prime}=0$. Along this paper, we
consider differential equations or systems whose coefficients belong to a
differential field $K$ whose constant field $C_{K}$ is algebraically closed
and of characteristic zero. For simplicity, we will explain this Galois theory
on operators of order two but it applies similarly to linear differential
equations or systems of any order.
Consider the differential operator
$\mathcal{L}:=\partial_{x}^{2}+p\partial_{x}+q,\quad p,q\in K.$ (1)
Let $\{y_{1},y_{2}\}$ denote a basis of solutions of $\mathcal{L}y=0$. We let $F:=K(y_{1},y_{2},y_{1}^{\prime},y_{2}^{\prime})$ be the smallest differential extension of $K$ containing these solutions of $\mathcal{L}y=0$ and such that $C_{K}=C_{F}$. The differential extension $F$ is called a
_Picard-Vessiot extension_ of $K$ associated to $\mathcal{L}y=0$. The
differential automorphisms of $F$ are the automorphisms that commute with the derivation. The differential Galois group of $\mathcal{L}y=0$, denoted by $\mathrm{DGal}(F/K)$, is the group of differential automorphisms of $F$ which leave invariant each element of $K$.
Let $\sigma\in\mathrm{DGal}(F/K)$. Then, $\{\sigma(y_{1}),\sigma(y_{2})\}$
is another basis of solutions of $\mathcal{L}y=0$. It follows that
$\sigma(y_{1},y_{2})=(y_{1},y_{2})M_{\sigma}$. This $M_{\sigma}$ is the matrix
of the automorphism $\sigma$. We see that $\mathrm{DGal}(F/K)$ is a group of
matrices and it is actually a linear algebraic group.
$\mathrm{DGal}(F/K)\subseteq\mathrm{GL}(2,C_{K}).$
The Wronskian $W$ of the solutions $y_{1}$ and $y_{2}$ satisfies the
differential equation $W^{\prime}+pW=0$. Thus, $W=\exp(\int(-p)dx)$. We find
that $W\in K$ if and only if $p=\frac{w^{\prime}}{w}=(\ln w)^{\prime}$ for
some $w\in K$. In this case, we have $\sigma(W)=W$ (because $W\in K$). As
$\sigma(W)=\det(M_{\sigma})W$ with $W\neq 0$, we obtain $\det(M_{\sigma})=1$,
that is:
$\mathrm{DGal}(F/K)\subseteq\mathrm{SL}(2,C_{K})\;\Longleftrightarrow\;p=\frac{w^{\prime}}{w}=(\ln
w)^{\prime},\quad w\in K.$
We say that an algebraic group $G$ is _virtually solvable_ when the connected
identity component of $G$, denoted by $G^{\circ}$, is a solvable group. In
this paper, we say that $\mathcal{L}y=0$ is _integrable_ whenever
$\mathrm{DGal}(F/K)$ is virtually solvable. This corresponds to cases when one
can compute formulas for the solutions.
Given a non-zero solution $y$ and some non-zero function $c$, the standard
change of variables (see e.g. the book of Ince [24])
$u=-\frac{1}{c}\cdot\frac{y^{\prime}}{y}$
changes our second order linear differential equation to the first order (non
linear) Riccati equation
$(R)\;:\quad u^{\prime}=a+bu+cu^{2},\;\textrm{ where
}\;b=-p-\frac{c^{\prime}}{c}\textrm{ and }a=\frac{1}{c}q.$
Differential Galois theory shows that the equation $\mathcal{L}(y)=0$ is
integrable if and only if the Riccati equation $(R)$ has an algebraic
solution. Similar statements (although more technical) can be obtained for
higher order equations, see [30].
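The change of variables above is a routine computation, and it can be verified symbolically. The sketch below (ours; the symbol names are not from the paper) uses sympy to impose $y^{\prime\prime}+py^{\prime}+qy=0$ and check that $u=-\frac{1}{c}\frac{y^{\prime}}{y}$ satisfies the Riccati equation $(R)$ with $a=\frac{q}{c}$ and $b=-p-\frac{c^{\prime}}{c}$:

```python
import sympy as sp

x = sp.symbols('x')
y, p, q, c = [sp.Function(n)(x) for n in ('y', 'p', 'q', 'c')]

u = -sp.diff(y, x)/(c*y)               # u = -(1/c) * y'/y

# impose the second order equation: y'' = -p*y' - q*y
ode = {sp.Derivative(y, x, 2): -p*sp.diff(y, x) - q*y}

a = q/c                                # coefficients of the Riccati equation (R)
b = -p - sp.diff(c, x)/c
residual = sp.simplify(sp.diff(u, x).subs(ode) - (a + b*u + c*u**2))
```

Here `residual` simplifies to zero, confirming the stated coefficients of $(R)$.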
A scalar differential equation such as $\mathcal{L}y=0$ is equivalent to its
companion linear differential system $[A]$:
$[A]:\;X^{\prime}=-AX,\quad\text{ where }A=\begin{pmatrix}0&-1\\\
q&p\end{pmatrix}{\text{ and }X=\begin{pmatrix}y\\\ y^{\prime}\end{pmatrix}.}$
(2)
Differential Galois theory applies naturally to linear differential systems as
well (solution spaces are vector spaces, the groups act on these vector
spaces).
Given a linear differential system $[A]:\;X^{\prime}=-AX$, a _gauge
transformation_ is a linear change of variables $X=PY$ with $P\in GL(n,K)$. It
transforms the system $[A]$ into a linear differential system
$Y^{\prime}=-P[A]Y$ with
$P[A]:=P^{-1}AP+P^{-1}P^{\prime}.$
We say that two linear differential systems $[A]$ and $[B]$ are _equivalent_
over $K$ when there exists a gauge transformation $P\in GL(n,K)$ such that
$B=P[A]$. Equivalent differential systems share the same differential Galois
group.
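The formula for $P[A]$ follows from the product rule: if $X=PY$ and $X^{\prime}=-AX$, then $Y^{\prime}=(P^{-1}X)^{\prime}=-P^{-1}P^{\prime}P^{-1}X-P^{-1}AX=-(P^{-1}AP+P^{-1}P^{\prime})Y$. This computation can be checked symbolically on generic $2\times 2$ matrices (a sketch with sympy; the entry names are ours):

```python
import sympy as sp

x = sp.symbols('x')

def fmat(name):
    # generic 2x2 matrix whose entries are undetermined functions of x
    return sp.Matrix(2, 2, lambda i, j: sp.Function(f'{name}{i}{j}')(x))

A, P = fmat('a'), fmat('p')
Y = sp.Matrix([sp.Function('y0')(x), sp.Function('y1')(x)])

PA = P.inv()*A*P + P.inv()*P.diff(x)   # the gauge transform P[A]

# impose Y' = -P[A] Y and check that X = P Y satisfies X' = -A X
subs = dict(zip(list(Y.diff(x)), list(-PA*Y)))
X = P*Y
residual = sp.simplify(X.diff(x).subs(subs) + A*X)
```

The residual simplifies to the zero vector, so $X=PY$ indeed satisfies $[A]$ exactly when $Y$ satisfies $[P[A]]$.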
Given any linear differential system $[A]:X^{\prime}=-AX$, the cyclic vector
method (see [9] for a simple constructive process) allows one to construct an
equivalent differential system in companion form. So we may go from operator
to system and vice-versa without altering the theory. In what follows, we will
start from a Darboux transformation on operators, recast it as a
transformation on systems and this will allow us to use the machinery on
systems to build higher order Darboux transformations.
### 1.2 Darboux Transformation
In [15], Darboux proposes a transformation which, given a family of linear differential equations, produces a new family of differential equations with a similar shape and similar properties. This transformation has proved to be powerful, for example, in the study of Schrödinger equations. We recast it here
in modern language. This proposition appears in Darboux’s note [16] and then
in his book [15]. Ince mentions it in [24] page 132.
###### Theorem 1 (Darboux, [16])
Consider the family of differential equations $\mathcal{L}(y)=m\;ry$:
$y^{\prime\prime}+py^{\prime}+(q-mr)y=0,$ (3)
where $p,q,r$ are functions ($r\neq 0$ by hypothesis) and $m$ is a constant
parameter. Given a non-zero value for $m$, we let $y_{m}$ denote a general
solution of (3). Suppose that we know a non-zero solution $y_{0}$ of equation
(3) for $m=0$. Let $\tilde{y}_{m}$ be a function defined by
$\tilde{y}_{m}=\frac{1}{\sqrt{r}}\left({y_{m}^{\prime}-\theta_{0}y_{m}}\right)\;\textrm{
with }\;\theta_{0}:=\frac{y_{0}^{\prime}}{y_{0}}.$ (4)
Then, for $m\neq 0$, $\tilde{y}_{m}$ is a general solution of the new
differential equation
$\tilde{y}^{\prime\prime}+p\tilde{y}^{\prime}+(\tilde{q}-mr)\tilde{y}=0,\quad\textrm{
with }\tilde{q}=q+q_{0}$ (5)
where, letting $\hat{r}:=\frac{r^{\prime}}{2r}$, the new part $q_{0}$ is given
by
$q_{0}:=2\,\theta_{{0}}^{\prime}+\hat{r}^{\prime}+p^{\prime}-\hat{r}\left(\hat{r}+2\;p-4\;\theta_{0}\right)$
(6)
The new $\tilde{q}$ is given by Darboux [16] in the compact expression
$\tilde{q}=y_{0}\sqrt{r}\left(\frac{p}{y_{0}\sqrt{r}}-\left(\frac{1}{y_{0}\sqrt{r}}\right)^{\prime}\right)^{\prime}.$
This transformation has been made famous by its applications to Schrödinger
equations where $p=0$ and $r=1$. In this case, the formula is much simpler:
$\tilde{q}=q+2\theta_{0}^{\prime}$.
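As a minimal concrete illustration of this Schrödinger case (the example is our choice, not Darboux's): take $q=0$, so $y_{0}=x$ solves the $m=0$ equation, $\theta_{0}=1/x$, and $\tilde{q}=2\theta_{0}^{\prime}=-2/x^{2}$. The following sympy sketch checks that the transformed function satisfies the transformed equation:

```python
import sympy as sp

x, k = sp.symbols('x k')
m = k**2

q = sp.Integer(0)            # potential term of the family y'' + (q - m) y = 0
y0 = x                       # nonzero solution of the m = 0 equation y'' + q*y = 0
theta0 = sp.diff(y0, x)/y0   # theta_0 = y0'/y0 = 1/x

ym = sp.exp(k*x)             # a solution of y'' + (q - m) y = 0 for m = k**2
check_old = sp.simplify(sp.diff(ym, x, 2) + (q - m)*ym)

yt = sp.diff(ym, x) - theta0*ym    # Darboux transform (4), with r = 1
qt = q + 2*sp.diff(theta0, x)      # transformed coefficient: q + 2*theta_0'
check_new = sp.simplify(sp.diff(yt, x, 2) + (qt - m)*yt)
```

Both residuals simplify to zero: $\tilde{y}_{m}=(k-\frac{1}{x})e^{kx}$ solves $\tilde{y}^{\prime\prime}+(\tilde{q}-m)\tilde{y}=0$ with $\tilde{q}=-2/x^{2}$.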
###### Definition 1
The map (4) transforming the family of equations (3) to the family (5) is
called a _Darboux transformation_.
###### Remark 1
A starting point in the philosophy of the Darboux transformation is that there
is a sort of _covariance_ , which can be seen in Theorem 1. Both equations
have the same structure. Their only difference is that $q$ is changed into
$\tilde{q}$. Darboux presented in [15, 18] the particular case for $r=1$ and
$p=0$, which today is known as Darboux transformation, but it is really a
corollary of the general Darboux transformation given in Theorem 1 .
Consider the differential field $K$ and the family of differential operators
${\mathcal{L}}_{m}:=\partial_{x}^{2}+p\partial_{x}+(q-mr),\quad p,q,r\in
K\textrm{ with }\underline{r\neq 0}.$ (7)
Here, $m$ is a constant parameter. When $m$ is given any value, we let $F_{m}$
be a Picard-Vessiot extension of $K$ corresponding to the equation
${\mathcal{L}}_{m}y=0$. Without loss of generality, we assume from now on that
there exists $w\in K$ such that $p=w^{\prime}/w=(\ln w)^{\prime}$. Therefore,
the differential Galois groups $\mathrm{DGal}(F_{m}/K)$ of
${\mathcal{L}}_{m}y=0$ are subgroups of $\mathrm{SL}(2,C_{K})$.
Following Acosta-Humánez and co-authors in [4], see also [1, 2], we denote by
$\Lambda$ the set of values of $m$ for which ${\mathcal{L}}_{m}y=0$ is
integrable (over $K$), the so-called _algebraic spectrum_. If we have a family
of parameters for which the equations ${\mathcal{L}}_{m}$ are integrable (over
$K$), Darboux transformation will construct a new family with the same shape
while preserving the integrability properties.
After performing a Darboux transformation, our new family
$\widetilde{\mathcal{L}}_{m}$ of differential equations has a new coefficient field
$\widetilde{K}:=K(\theta_{0})$ (with notations of Theorem 1). Note that the
Darboux transformation itself is defined over the bigger field
$\widetilde{K}(\sqrt{r})$.
We say that the Darboux transformation is _isogaloisian_ when the differential
Galois group is preserved (see [4, 1, 2]), i.e.
$\mathrm{DGal}(\widetilde{F}_{m}/{\widetilde{K}})=\mathrm{DGal}(F_{m}/K)$; it
is _strong isogaloisian_ when $\widetilde{K}=K$ (i.e. when $\theta_{0}\in K$).
Results on the isogaloisian character of the Darboux transformation appear in
[4], see also [1, 2]; they will reappear in the next section as a consequence
of the view of the Darboux transformation as a gauge transformation. When
$\theta_{0}$ is algebraic over $K$, then Darboux transformation is virtually
isogaloisian; in this case, for any value $m$ such that ${\mathcal{L}}_{m}$ is
integrable, its Darboux transformation $\widetilde{\mathcal{L}}_{m}$ is also
integrable (see [4]). We see that the Darboux transformation transforms a
family of integrable equations into another integrable family with the same
shape.
We now turn to a rather different result, also due to Darboux, which allows
one to solve third order orthogonal systems using only solutions of a first
order Riccati equation, see [19, part I, chap. II] for details and proofs. A
third-order orthogonal system is one of the form
$\begin{pmatrix}\alpha\\ \beta\\ \gamma\end{pmatrix}^{\prime}=\begin{pmatrix}0&h&-g\\ -h&0&f\\ g&-f&0\end{pmatrix}\cdot\begin{pmatrix}\alpha\\ \beta\\ \gamma\end{pmatrix}.$
A simple calculation shows that $\alpha^{2}+\beta^{2}+\gamma^{2}$ is always
constant for such a system.
###### Theorem 2 (Darboux, [19], chap. II, pages 30-31)
Consider a differential system
$\begin{pmatrix}\alpha\\ \beta\\ \gamma\end{pmatrix}^{\prime}=\begin{pmatrix}0&h&-g\\ -h&0&f\\ g&-f&0\end{pmatrix}\cdot\begin{pmatrix}\alpha\\ \beta\\ \gamma\end{pmatrix}.$ (8)
A solution $(\alpha,\beta,\gamma)$ such that
$\alpha^{2}+\beta^{2}+\gamma^{2}=1$ can be parametrized by
$\alpha=\dfrac{1-uv}{u-v},\qquad\beta=i\dfrac{1+uv}{u-v}\qquad\text{ and
}\qquad\gamma=\dfrac{u+v}{u-v},$ (9)
where $u$ and $v$ are distinct solutions of the same Riccati equation
$\theta^{\prime}=\omega_{0}+\mu\theta+{\omega_{1}}\theta^{2},\quad\textrm{
with
}\quad\omega_{0}=\dfrac{g-if}{2},\quad\omega_{1}=\dfrac{g+if}{2},\quad\mu=-ih.$
(10)
Furthermore,
$u=\dfrac{\alpha+i\beta}{1-\gamma}=\dfrac{1+\gamma}{\alpha-i\beta}\qquad\text{
and }\qquad
v=-\dfrac{1-\gamma}{\alpha-i\beta}=-\dfrac{\alpha+i\beta}{1+\gamma}.$ (11)
###### Remark 2
As we recalled, solutions of a first order Riccati equation are logarithmic
derivatives of solutions of a second order linear differential equation. So,
this result shows that one can solve third order orthogonal systems using
solutions of second order linear differential equations. Namely, performing
the change of variable $u=-\frac{1}{\omega_{1}}\frac{y^{\prime}}{y}$, we see
that $y$ is a solution of
$y^{\prime\prime}-\left(\mu+\frac{\omega_{1}^{\prime}}{\omega_{1}}\right)y^{\prime}+\omega_{1}\omega_{0}\;y=0.$
(12)
So, the parametrization given by (9) can be restated as
$\begin{pmatrix}\alpha\\ \beta\\ \gamma\end{pmatrix}=\frac{1}{w}\begin{pmatrix}-\omega_{1}&0&\frac{1}{\omega_{1}}\\ -i\omega_{1}&0&\frac{-i}{\omega_{1}}\\ 0&1&0\end{pmatrix}\begin{pmatrix}y_{1}y_{2}\\ y_{1}^{\prime}y_{2}+y_{1}y_{2}^{\prime}\\ y_{1}^{\prime}y_{2}^{\prime}\end{pmatrix}$ (13)
where $y_{1},y_{2}$ are linearly independent solutions of the second order
linear differential equation (12) and
$w:=y_{1}^{\prime}y_{2}-y_{1}y_{2}^{\prime}$ is their Wronskian. This gives an
explicit solution for an orthogonal system in terms of solutions of second
order equations. This will be further explored in the next section.
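The content of Theorem 2 is a purely algebraic identity, so it can be verified by direct symbolic computation. The following sympy sketch (an illustrative check added here, not part of the original argument) substitutes the Riccati equation (10) for $u^{\prime}$ and $v^{\prime}$ and confirms that the parametrization (9) satisfies system (8) together with the normalization $\alpha^{2}+\beta^{2}+\gamma^{2}=1$:

```python
import sympy as sp

x = sp.symbols('x')
f, g, h = (sp.Function(n)(x) for n in ('f', 'g', 'h'))
u, v = sp.Function('u')(x), sp.Function('v')(x)
I = sp.I

# coefficients of the Riccati equation (10)
w0, w1, mu = (g - I*f)/2, (g + I*f)/2, -I*h
# replace u', v' using theta' = w0 + mu*theta + w1*theta**2
riccati = {sp.Derivative(u, x): w0 + mu*u + w1*u**2,
           sp.Derivative(v, x): w0 + mu*v + w1*v**2}

# parametrization (9)
alpha = (1 - u*v)/(u - v)
beta = I*(1 + u*v)/(u - v)
gamma = (u + v)/(u - v)

# residuals of the orthogonal system (8)
res = [sp.diff(alpha, x).subs(riccati) - (h*beta - g*gamma),
       sp.diff(beta, x).subs(riccati) - (-h*alpha + f*gamma),
       sp.diff(gamma, x).subs(riccati) - (g*alpha - f*beta)]
assert all(sp.simplify(e) == 0 for e in res)
assert sp.simplify(alpha**2 + beta**2 + gamma**2 - 1) == 0
```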
In the study of the rigid solid, see for example Fedorov et al. [21, 22],
orthogonal systems such as (8) are traditionally written in a more compact
way, using the cross-product $\times$:
$Z^{\prime}=Z\times\Omega,\quad
Z=(\alpha,\beta,\gamma)^{T},\quad\Omega=(f,g,h)^{T},\quad f,g,h\in K.$ (14)
From now on, we work with either equation (14) or equation (8): they are the
same equation, although presented differently. In what follows, the
terminology _orthogonal systems_ will refer to systems of the form (14) or
(8).
### 1.3 Tensor Constructions, Invariants and Symmetric Powers
In differential Galois theory, one classically translates linear algebra
constructions on the solution space (tensor product, symmetric powers, etc.)
into constructions on differential systems. This makes it possible to measure
properties of solutions by looking for rational function solutions in these tensor
constructions (see chapters 3 and 4 in [30]). We review here the construction
of symmetric powers for later use.
Let $V$ denote a vector space over $C_{K}$ of dimension $n$. We fix a basis
$\cal{B}$ of $V$ and consider $g\in\mathrm{End}(V)$. Let
$M=(m_{i,j})\in{\mathcal{M}}_{n}(C_{K})$ denote the $n\times n$ matrix of the
endomorphism $g$ in that basis $\cal{B}$. We define a linear action of $g$ on
the variables $X_{j}$ by $g(X_{j}):=\sum_{i=1}^{n}m_{i,j}X_{i}$ so $g$ acts on
the indeterminates $X_{i}$ as if they were the basis $\cal{B}$.
Consider a homogeneous polynomial $P\in K[X_{1},\ldots,X_{n}]_{m}$ of degree
$m$. We may identify the $m$-th symmetric power of $V$ with the linear span of
all monomials $\{X_{1}^{m},X_{1}^{m-1}X_{2},\ldots,X_{n}^{m}\}$ of degree
$m$. This way, our polynomial $P$ may be identified with its vector $v_{P}$ of
coefficients on the monomial basis
$\{X_{1}^{m},X_{1}^{m-1}X_{2},\ldots,X_{n}^{m}\}$. Using the relations
$g(X_{j}):=\sum_{i=1}^{n}m_{i,j}X_{i}$, we can define an action of $g$ on $P$
by linear substitution $g(P):=P(g(X_{1}),\ldots,g(X_{n}))$. This action by
substitution translates into a natural action of $g$ on the coefficient vector
$v_{P}$ by $g(v_{P}):=v_{g(P)}$. Using this action, we define the _$m$-th
symmetric power in the sense of groups_ $\mathrm{Sym}^{m}(M)$ of $M$ as the
matrix of the linear map $v_{P}\mapsto v_{g(P)}$: it is defined by the
relation
$v_{g(P)}=\mathrm{Sym}^{m}(M)\cdot v_{P}.$
The map $M\mapsto\mathrm{Sym}^{m}(M)$ is a group morphism. Given a group
$G\subset GL(V)$, an invariant of $G$ (in $\mathrm{Sym}(V)$) is a polynomial
$P$ such that $\forall g\in G$, $g(P)=P$.
Similarly, one can associate to the matrix $M$ the derivation
$D_{M}=\sum_{j=1}^{n}\left(\sum_{i=1}^{n}m_{i,j}X_{i}\right)\frac{\partial}{\partial
X_{j}}.$
Then, we define the _$m$-th symmetric power in the sense of Lie algebras_
$\mathfrak{s}\mathrm{y}\mathfrak{m}^{m}(M)$ of $M$ as the matrix of the linear
map $v_{P}\mapsto v_{D_{M}(P)}$: it is defined by the relation
$v_{D_{M}(P)}=\mathfrak{s}\mathrm{y}\mathfrak{m}^{m}(M)\cdot v_{P}.$
The map $M\mapsto\mathfrak{s}\mathrm{y}\mathfrak{m}^{m}(M)$ is a Lie algebra
morphism. Given a Lie algebra $\mathfrak{g}$, an invariant of $\mathfrak{g}$
is a polynomial $P$ such that, $\forall D\in\mathfrak{g}$, $D(P)=0$. We have
the equivalence: $P$ is an invariant of $G^{\circ}$ if and only if $P$ is an
invariant of its Lie algebra $Lie(G)$.
Take a linear differential system $[A]:X^{\prime}=-AX$. Its _$m$-th symmetric
power system_ is $[\mathfrak{s}\mathrm{y}\mathfrak{m}^{m}(A)]$. If $X$ is a
solution matrix of $[A]$ then $\mathrm{Sym}^{m}(X)$ is a solution matrix of
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{m}(A)]$:
$\mathrm{Sym}^{m}(X)^{\prime}=-\mathfrak{s}\mathrm{y}\mathfrak{m}^{m}(A)\mathrm{Sym}^{m}(X)$,
see e.g. [6, 30].
###### Example 1
We make this explicit for a construction used later in the paper.
Consider a system $X^{\prime}=-AX$ with
$A=\begin{pmatrix}0&-1\\\ q&p\end{pmatrix}\textrm{ where
}\;p=\frac{w^{\prime}}{w}\textrm{ with }w\in K.$
It is in companion form so it admits a fundamental solution matrix of the form
$\mathbf{X}=\begin{pmatrix}y_{1}&y_{2}\\\
y_{1}^{\prime}&y_{2}^{\prime}\end{pmatrix}.$ (15)
The second symmetric power system is
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A)]:Y^{\prime}=-\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A)Y$
with
$\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A)=\begin{pmatrix}0&-1&0\\\ 2q&p&-2\\\
0&q&2p\end{pmatrix}.$
It admits the fundamental solution matrix
$\mathbf{Y}=\mathrm{Sym}^{2}(\mathbf{X})=\begin{pmatrix}y_{1}^{2}&y_{1}y_{2}&y_{2}^{2}\\\\[3.0pt]
2y_{1}y_{1}^{\prime}&y_{1}^{\prime}y_{2}+y_{1}y_{2}^{\prime}&2y_{2}y_{2}^{\prime}\\\\[3.0pt]
(y_{1}^{\prime})^{2}&y_{1}^{\prime}y_{2}^{\prime}&(y_{2}^{\prime})^{2}\end{pmatrix}.$
(16)
Now, let $\sigma\in\mathrm{DGal}(F_{0}/K)$ be an automorphism of the
differential Galois group of $[A]$ with matrix representation
$M_{\sigma}=\begin{pmatrix}\lambda_{11}&\lambda_{12}\\\
\lambda_{21}&\lambda_{22}\end{pmatrix}\in\mathrm{DGal}(F_{0}/K),\qquad\lambda_{11}\lambda_{22}-\lambda_{12}\lambda_{21}=1.$
This means that $\sigma(\mathbf{X})=\mathbf{X}\cdot M_{\sigma}$. As
$\mathrm{Sym}(\bullet)$ is a group morphism, we have
$\sigma(\mathbf{Y})=\sigma(\mathrm{Sym}^{2}(\mathbf{X}))=\mathrm{Sym}^{2}(\sigma(\mathbf{X}))=\mathrm{Sym}^{2}(\mathbf{X}\cdot
M_{\sigma})=\mathbf{Y}\cdot\mathrm{Sym}^{2}(M_{\sigma}).$
So the matrix of $\sigma$ acting on the solution $\mathbf{Y}$ of
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A)]$ is
$\mathrm{Sym}^{2}(M_{\sigma})$ (computed as in formula (16)). By a slight
abuse of notation, we will denote the set of such matrices by
$\mathrm{Sym}^{2}(\mathrm{DGal}(F_{0}/K))\subseteq SL(3,C_{K})$.
Note that, using the cyclic vector $(1,0,0)^{T}$, this symmetric power system
can be written as the traditional third order differential operator known as
the second symmetric power of $\mathcal{L}$:
$\textrm{sym}^{2}(\mathcal{L}):=\partial_{x}^{3}+3p\partial_{x}^{2}+(4q+p^{\prime}+2p^{2})\partial_{x}+2(q^{\prime}+2pq).$
(17)
Last, we look at gauge transformations $P$. The change $X=PY$ transforms the
system $[A]$ into $P[A]$. Then, as $Sym^{m}(\bullet)$ is a group morphism,
$Sym^{m}(X)=Sym^{m}(P)Sym^{m}(Y)$ which shows that
$\mathfrak{s}\mathrm{y}\mathfrak{m}^{m}(P[A])=Sym^{m}(P)[\mathfrak{s}\mathrm{y}\mathfrak{m}^{m}(A)]$.
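The symmetric power constructions above are straightforward to implement. As an illustration (our own sketch, using the monomial basis $\{X_{1}^{2},X_{1}X_{2},X_{2}^{2}\}$), the following sympy code computes $\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A)$ for the companion matrix of Example 1 directly from the derivation $D_{A}$ and recovers the $3\times 3$ matrix displayed there:

```python
import sympy as sp

p, q = sp.symbols('p q')
X1, X2 = sp.symbols('X1 X2')
A = sp.Matrix([[0, -1], [q, p]])
gens = [X1, X2]

def D(M, P):
    # derivation D_M = sum_j (sum_i m_{i,j} X_i) d/dX_j applied to P
    return sp.expand(sum(sum(M[i, j]*gens[i] for i in range(2))*sp.diff(P, gens[j])
                         for j in range(2)))

basis = [X1**2, X1*X2, X2**2]        # monomial basis of the symmetric square
degs = [(2, 0), (1, 1), (0, 2)]      # (degree in X1, degree in X2) of each basis monomial

sym2A = sp.zeros(3, 3)
for j, mon in enumerate(basis):
    img = D(A, mon)                  # image of the j-th basis monomial
    for i, (d1, d2) in enumerate(degs):
        sym2A[i, j] = img.coeff(X1, d1).coeff(X2, d2)

# the matrix displayed in Example 1
assert sym2A == sp.Matrix([[0, -1, 0], [2*q, p, -2], [0, q, 2*p]])
```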
## 2 Darboux transformations for third order orthogonal systems
### 2.1 Matrix Formalism, Darboux Transformation as a Gauge Transformation
We write the differential equation ${\mathcal{L}}_{m}y=0$, equation (7), in
its companion system form:
$[A_{m}]:X^{\prime}=-(A_{0}+mN)X,\quad X=\begin{pmatrix}y\\\
y^{\prime}\end{pmatrix},\quad A_{0}=\begin{pmatrix}0&-1\\\
q&p\end{pmatrix},\quad N=\begin{pmatrix}0&0\\\ -r&0\end{pmatrix}.$ (18)
Note that $N^{2}$ is the null matrix. The Darboux transformation transforms
the family ${\mathcal{L}}_{m}y=0$ into a family
$\widetilde{\mathcal{L}}_{m}y=0$ whose companion form is now
$[\widetilde{A}_{m}]:\widetilde{X}^{\prime}=-(\widetilde{A}_{0}+mN)\widetilde{X},\quad\widetilde{X}=\begin{pmatrix}\tilde{y}\\\
\tilde{y}^{\prime}\end{pmatrix},\quad\widetilde{A}_{0}=\begin{pmatrix}0&-1\\\
\tilde{q}&p\end{pmatrix},$ (19)
where $\tilde{y}$ and $\tilde{q}$ have the explicit form expressed in Theorem
1. We denote by $\widetilde{K}=K(\theta_{0})$ the field of coefficients of
system (19).
###### Proposition 1
The Darboux transformation given in Theorem 1 is equivalent to a gauge
transformation between the above families of systems $[\tilde{A}_{m}]$ and
$[A_{m}]$, whose matrix $P_{m}$ is given by
$P_{m}:=\frac{1}{\sqrt{r}}\begin{pmatrix}-\theta_{0}&1\\\
mr-\theta_{0}\rho&\rho\end{pmatrix}=L_{m}.R\textrm{ where
}L_{m}:=\begin{pmatrix}0&1\\\
mr&\rho\end{pmatrix},\;R:=\frac{1}{\sqrt{r}}\begin{pmatrix}1&0\\\
-\theta_{0}&1\end{pmatrix},$ (20)
with $\theta_{0}=\frac{y_{0}^{\prime}}{y_{0}}$, $p=\frac{w^{\prime}}{w}$ and
$\rho:=-\theta_{0}-p-\frac{r^{\prime}}{2r}$. When $\theta_{0}$ is algebraic
over $K$, this gauge transformation preserves the identity component of the
differential Galois group. Moreover, whenever $\theta_{0}\in K$, this Darboux
transformation preserves the differential Galois group.
###### Remark 3
Note that, in this factorization, the first matrix $L_{m}$ contains the
dependence on $m$ and the second matrix $R$ only depends on the known solution
$y_{0}$.
Proof. Let $Y:=P_{m}X$ with $Y=(z_{1},z_{2})^{T}$ and $X$ given in (18). The
first line is $z_{1}=\frac{1}{\sqrt{r}}(-\theta_{0}y+y^{\prime})$ so we
recognize the Darboux transformation from Theorem 1 and we have
$z_{1}=\tilde{y}$. Now we would like $z_{2}$ to be $\tilde{y}^{\prime}$. So we
differentiate $\frac{1}{\sqrt{r}}(-\theta_{0}y+y^{\prime})$ modulo the
relation ${\mathcal{L}}_{m}y=0$: this gives us $\tilde{y}^{\prime}$ as a
linear combination of $y$ and $y^{\prime}$ and we find the expression of
$P_{m}$ giving $\tilde{X}=P_{m}X$.
The new coefficient field is $\widetilde{K}:=K(\theta_{0})$. When $\theta_{0}$
is an algebraic function over $K$, $\widetilde{K}$ is an algebraic extension
of the differential field $K$ and the Picard-Vessiot extension
$\widetilde{F}_{m}$ of equation $\widetilde{\mathcal{L}}_{m}u=0$ is an
algebraic extension of the Picard-Vessiot extension $F_{m}$ of
${\mathcal{L}}_{m}y=0$. Thus,
$(\mathrm{DGal}(F_{m}/K))^{\circ}=(\mathrm{DGal}(\widetilde{F}_{m}/K))^{\circ}$.
Finally, if $\theta_{0}$ belongs to $K$, then $\widetilde{K}=K$ and
$\widetilde{F}_{m}=F_{m}$, which implies that the Darboux transformation
preserves the Galois groups. $\blacksquare$
###### Remark 4
This matrix factorization has an interesting interpretation. The gauge
transformation is $\widetilde{X}=P_{m}X$. Now, it is easily seen that
$RX=\begin{pmatrix}\frac{1}{\sqrt{r}}y\\\ \tilde{y}\end{pmatrix}.$
Indeed, see Theorem 1, the second row is the Darboux transformation. The
matrix factorization $P_{m}=L_{m}R$, combined with this relation, thus shows
that
$\begin{pmatrix}\tilde{y}\\\
\tilde{y}^{\prime}\end{pmatrix}=L_{m}\begin{pmatrix}\frac{1}{\sqrt{r}}y\\\
\tilde{y}\end{pmatrix}.$
The first row is an obvious identity. The second row provides us with a
notably simple first order link between solutions $y$ of $\mathcal{L}_{m}$ and
solutions $\tilde{y}$ of its Darboux transformation
$\widetilde{\mathcal{L}}_{m}$:
$\tilde{y}^{\prime}-\rho\tilde{y}=m\sqrt{r}\;y.$ (21)
Note that $\rho$ does not depend on the parameter: $m$ appears only in the
right hand side.
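Relation (21) only uses ${\mathcal{L}}_{m}y=0$ and the Riccati equation $\theta_{0}^{\prime}=-q-p\theta_{0}-\theta_{0}^{2}$ satisfied by $\theta_{0}=y_{0}^{\prime}/y_{0}$ (because ${\mathcal{L}}_{0}y_{0}=0$), so it can be checked symbolically. A sympy sketch of this verification (added here for illustration):

```python
import sympy as sp

x, m = sp.symbols('x m')
p, q, r = (sp.Function(n)(x) for n in ('p', 'q', 'r'))
y, th = sp.Function('y')(x), sp.Function('theta0')(x)

# L_m y = 0 gives y'' = -p y' - (q - m r) y; theta0 = y0'/y0 satisfies
# the Riccati equation theta0' = -q - p*theta0 - theta0**2 (since L_0 y0 = 0)
rules = {sp.Derivative(y, x, 2): -p*sp.diff(y, x) - (q - m*r)*y,
         sp.Derivative(th, x): -q - p*th - th**2}

yt = (sp.diff(y, x) - th*y)/sp.sqrt(r)          # Darboux transform of y
rho = -th - p - sp.diff(r, x)/(2*r)

lhs = (sp.diff(yt, x) - rho*yt).subs(rules)
assert sp.simplify(lhs - m*sp.sqrt(r)*y) == 0   # relation (21)
```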
### 2.2 Systems in
$\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(\mathrm{SL}(2,C_{K}))$ and
$\mathfrak{so}(3,C_{K})$
Now we use the method of section 1.3 to build linear differential systems in
$\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(\mathrm{SL}(2,C_{K}))$ and give their
relations to systems in $\mathfrak{so}(3,C_{K})$.
Consider the linear differential system $[A_{0}]$ as in equation (18) for
$m=0$, where $p=w^{\prime}/w$ and $w,q\in K$. We recall that its second
symmetric power system is given by the linear differential system
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A_{0})]:Y^{\prime}=-S_{2}Y,$ (22)
for
$Y:=\mathrm{Sym}^{2}(X)=\begin{pmatrix}y^{2}\\ 2yy^{\prime}\\ (y^{\prime})^{2}\end{pmatrix}\quad\textrm{ and }\quad S_{2}:=\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A_{0})=\begin{pmatrix}0&-1&0\\ 2q&p&-2\\ 0&q&2p\end{pmatrix}.$
We recall that we showed, in Example 1, how to build a solution matrix and a
representation of the Galois group in this construction. We record this for
further use in the following classical lemma.
###### Lemma 1
Let $\mathbf{X}$ be a fundamental matrix of system (18) and
$\mathrm{DGal}(F_{m}/K)$ be its differential Galois group. Then,
$\mathbf{Y}=\mathrm{Sym}^{2}(\mathbf{X})$ is a fundamental matrix for system
(22) and $\mathrm{Sym}^{2}(\mathrm{DGal}(F_{m}/K))$ is a representation of its
differential Galois group.
This situation can be illustrated by the following diagram:
$\begin{array}{ccc}[A]:X^{\prime}=-AX&\xrightarrow{\;\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}\;}&[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A)]:Y^{\prime}=-\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A)Y\\ \mathrm{DGal}(F_{0}/K)&\xrightarrow{\;\mathrm{Sym}^{2}\;}&\mathrm{Sym}^{2}(\mathrm{DGal}(F_{0}/K))\end{array}$
(23)
The next result provides us with a gauge transformation to go from an
$\mathfrak{so}(3,C_{K})$ system to a second symmetric power system of the form
(22). This enables us to express equation $\mathcal{L}_{0}y=0$ as a linear
differential system in $\mathfrak{so}(3,C_{K})$ written in the form
$Z^{\prime}=Z\times\Omega$ (equation (14)). As a consequence of this result,
we can extend the previous reasoning on differential Galois groups to
$\mathfrak{so}(3,C_{K})$ systems, as we will show next.
###### Lemma 2
Let $Q$ be the gauge matrix given by
$Q=\begin{pmatrix}1&0&-1\\ i&0&i\\ 0&-1&0\end{pmatrix}.$ (24)
The gauge transformation $Z=QY$ transforms the $\mathfrak{so}(3,C_{K})$ system
$Z^{\prime}=Z\times\Omega$, where $\Omega=(f,g,h)^{T}$ into the system
$Y^{\prime}=\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(C)Y$ with
$C:=\frac{1}{2}\begin{pmatrix}ih&g+if\\ -(g-if)&-ih\end{pmatrix}.$
Proof. We have
$\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(C)=\begin{pmatrix}ih&\frac{1}{2}(g+if)&0\\ -(g-if)&0&g+if\\ 0&-\frac{1}{2}(g-if)&-ih\end{pmatrix}.$
As $Q$ is constant, the effect of the gauge transformation is just conjugation
by $Q$. We have
$Q\cdot\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(C)=\begin{pmatrix}ih&g&ih\\ -h&-f&h\\ g-if&0&-g-if\end{pmatrix}\;\textrm{ and }\;Q\cdot\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(C)\cdot Q^{-1}=\begin{pmatrix}0&h&-g\\ -h&0&f\\ g&-f&0\end{pmatrix}.$
$\blacksquare$
This shows that, given an orthogonal system, we have an explicit gauge
transformation formula to view it as a symmetric square of a second order
system.
###### Remark 5
Using an additional gauge transformation, one can transform the matrix $C$ of
the above Lemma 2 into a companion form. This allows one to recover the
formulas from Remark 2 and reprove Theorem 2.
Conversely, if we start from
$\mathcal{L}_{0}y:=y^{\prime\prime}+py^{\prime}+qy=0$, reversing the above
process will produce an equivalent orthogonal system
$Z^{\prime}=Z\times\Omega$ with
$\Omega=(i(q-1),q+1,i\;p)^{T}=:(f,g,h)^{T}.$
D. Blázquez-Sanz and J.J. Morales-Ruiz in [13], see also [12], have also given
such a classical isomorphism between $\mathfrak{so}(3,C_{K})$ and
$\mathfrak{sl}(2,C_{K})$, see Proposition 6.7 of [13]:
$\begin{pmatrix}&1&\\\ -1&&\\\
&&0\end{pmatrix}\mapsto\begin{pmatrix}\frac{i}{2}&0\\\
0&-\frac{i}{2}\end{pmatrix},\quad\begin{pmatrix}&&1\\\ &0&\\\
-1&&\end{pmatrix}\mapsto\begin{pmatrix}0&\frac{1}{2}\\\
-\frac{1}{2}&0\end{pmatrix},\quad\begin{pmatrix}0&&\\\ &&1\\\
&-1&\end{pmatrix}\mapsto\begin{pmatrix}0&-\frac{i}{2}\\\
-\frac{i}{2}&0\end{pmatrix}.$
###### Corollary 1
A fundamental matrix for system $Z^{\prime}=Z\times\Omega$ is
$\mathbf{Z}:=Q\mathbf{Y}=w\begin{pmatrix}y_{1}^{2}-(y_{1}^{\prime})^{2}&y_{1}y_{2}-y_{1}^{\prime}y_{2}^{\prime}&y_{2}^{2}-(y_{2}^{\prime})^{2}\\\\[3.0pt]
i(y_{1}^{2}+(y_{1}^{\prime})^{2})&i(y_{1}y_{2}+y_{1}^{\prime}y_{2}^{\prime})&i(y_{2}^{2}+(y_{2}^{\prime})^{2})\\\\[3.0pt]
-2\,y_{1}y_{1}^{\prime}&-y_{1}y_{2}^{\prime}-y_{1}^{\prime}y_{2}&-2\,y_{2}y_{2}^{\prime}\end{pmatrix},$
(25)
for $Q$ and $\mathbf{Y}$ defined by (24) and (16) respectively.
Finally, we can compute the differential Galois group of system (14).
###### Corollary 2
Using the fundamental matrix (25), the matrices in the differential Galois
group of the $\mathfrak{so}(3)$ system $Z^{\prime}=Z\times\Omega$ are the
matrices $\mathrm{Sym}^{2}(M_{\sigma})$ of Lemma 1.
Proof. Let $\sigma$ be in the Galois group. It acts on $\mathbf{Y}$ via
$\sigma(\mathbf{Y})=\mathbf{Y}\cdot\mathrm{Sym}^{2}(M_{\sigma})$ (Lemma 1).
Now, $\sigma(Q)=Q$ because $w\in K$. So, we have
$\sigma(\mathbf{Z})=\sigma(Q\mathbf{Y})=Q\cdot\mathbf{Y}\cdot\mathrm{Sym}^{2}(M_{\sigma})=\mathbf{Z}\cdot\mathrm{Sym}^{2}(M_{\sigma})$.
$\blacksquare$
Now, we can complete diagram (23) by adding the action on the
$\mathfrak{so}(3,C_{K})$ system:
$\begin{array}{ccccc}[A_{0}]:X^{\prime}=-A_{0}X&\xrightarrow{\;\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}\;}&[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A_{0})]:Y^{\prime}=-S_{2}Y&\xrightarrow{\;Q\;}&[\Omega]:Z^{\prime}=-\Omega Z\\ \mathrm{DGal}(L_{0}/K)&\xrightarrow{\;\mathrm{Sym}^{2}\;}&\mathrm{Sym}^{2}(\mathrm{DGal}(L_{0}/K))&\longrightarrow&\mathrm{Sym}^{2}(\mathrm{DGal}(L_{0}/K))\end{array}$
(26)
Finally, we record an easy corollary.
###### Corollary 3
Consider the orthogonal system $[\Omega]:Z^{\prime}=Z\times\Omega$ with
$Z=(\alpha,\beta,\gamma)^{T}$ and the equivalent second symmetric power system
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A_{0})]:Y^{\prime}=-\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A_{0})Y$
with $Y=(z_{1},z_{2},z_{3})^{T}$. The system $[\Omega]$ admits the first
integral $\alpha^{2}+\beta^{2}+\gamma^{2}$ and
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A_{0})]$ admits the first integral
$w^{2}(4z_{1}z_{3}-z_{2}^{2})$.
Proof. The first part is well known and is proved by Darboux in [19], see part
I, chap. I, page 8 and chap. II, page 28. The second part follows from the
application of the gauge transformation of Lemma 2: it transforms the first
integral $\alpha^{2}+\beta^{2}+\gamma^{2}$ of $[\Omega]$ into
$w^{2}(4z_{1}z_{3}-z_{2}^{2})$: this is still a constant of motion and hence a
first integral of $[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A_{0})]$.
$\blacksquare$
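The second first integral of Corollary 3 can also be checked directly: differentiating $w^{2}(4z_{1}z_{3}-z_{2}^{2})$ along the flow of $Y^{\prime}=-S_{2}Y$, with $p=w^{\prime}/w$, gives zero. A sympy sketch of this computation (added for illustration):

```python
import sympy as sp

x = sp.symbols('x')
q, w = sp.Function('q')(x), sp.Function('w')(x)
z1, z2, z3 = (sp.Function(n)(x) for n in ('z1', 'z2', 'z3'))
p = sp.diff(w, x)/w                                 # p = w'/w

S2 = sp.Matrix([[0, -1, 0], [2*q, p, -2], [0, q, 2*p]])
Z = sp.Matrix([z1, z2, z3])
# substitution rules encoding the flow Y' = -S2*Y
flow = dict(zip([sp.Derivative(z, x) for z in (z1, z2, z3)], list(-S2*Z)))

I1 = w**2*(4*z1*z3 - z2**2)                         # candidate first integral
assert sp.simplify(sp.diff(I1, x).subs(flow)) == 0  # constant along the flow
```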
### 2.3 Darboux Transformations in $\mathrm{Sym}^{2}(\mathrm{SL}(2,C_{K}))$
and $\mathfrak{so}(3,C_{K})$
The aim of this subsection is to construct Darboux transformations for third
order orthogonal systems using diagram (26). Our construction will ensure that
these Darboux transformations will preserve the identity component of the
differential Galois group of each equation.
In order to construct the Darboux transformations for the second symmetric
power system coming from the general second order linear differential equation
${\mathcal{L}}_{m}y=0$, we extend Proposition 1. We present two ways to extend
it. As in the previous subsection, this will allow us to obtain Darboux
transformations for the $\mathfrak{so}(3,C_{K})$ system.
For the first one, consider the linear differential system
$[A_{m}]:X^{\prime}=-(A_{0}+mN)X$ given by equation (18). Its second symmetric
power system is given by the linear differential system
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A_{m})]:Y^{\prime}=-(S_{2}+mN_{2})Y,$
(27)
where $Y=\mathrm{Sym}^{2}(X)=(y^{2},2yy^{\prime},(y^{\prime})^{2})^{T}$ and
$S_{2}+mN_{2}=\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A_{0}+mN)$ are given by
$S_{2}:=\begin{pmatrix}0&-1&0\\\ 2q&p&-2\\\ 0&q&2p\end{pmatrix}\quad\text{ and
}\quad N_{2}:=\begin{pmatrix}0&0&0\\\ -2r&0&0\\\ 0&-r&0\end{pmatrix}.$
A fundamental matrix for this system is given by matrix (16), where
$\{y_{1},y_{2}\}$ is a basis of solutions of equation (7).
Recall that, after applying the gauge transformation (20), system (18) is
transformed into the linear differential system
$[\widetilde{A}_{m}]:\widetilde{X}^{\prime}=-(\widetilde{A}_{0}+mN)\widetilde{X}$,
defined by equation (19), whose second symmetric power system is given by the
linear differential system
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(\widetilde{A}_{m})]:\widetilde{Y}^{\prime}=-(\widetilde{S}_{2}+mN_{2})\widetilde{Y},$
(28)
where
$\widetilde{Y}=\mathrm{Sym}^{2}(\widetilde{X})=(\tilde{y}^{2},2\tilde{y}\tilde{y}^{\prime},(\tilde{y}^{\prime})^{2})^{T}$
and
$\widetilde{S}_{2}+mN_{2}=\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(\widetilde{A}_{0}+mN)$,
for
$\widetilde{S}_{2}:=\begin{pmatrix}0&-1&0\\\ 2\tilde{q}&p&-2\\\
0&\tilde{q}&2p\end{pmatrix},$
where $\tilde{y}$ and $\tilde{q}$ have the explicit form expressed in Theorem
1. Thus, since
$\widetilde{Y}=\mathrm{Sym}^{2}(\widetilde{X})=\mathrm{Sym}^{2}(P_{m})\cdot\mathrm{Sym}^{2}(X)=\mathrm{Sym}^{2}(P_{m})\cdot
Y$, the gauge transformation (20) also induces a transformation in the second
symmetric power systems which sends system (27) to system (28). The following
result formalizes this idea.
###### Proposition 2 (First Darboux Transformation for
$\mathrm{Sym}^{2}(\mathrm{SL}(2,C_{K}))$)
Let $P_{1,m}:=\mathrm{Sym}^{2}(P_{m})$. We have
$P_{1,m}=\dfrac{1}{r}\begin{pmatrix}{\theta_{0}}^{2}&-\theta_{0}&1\\ -2\,\theta_{0}\nu&\nu-\theta_{0}\rho&2\,\rho\\ \nu^{2}&\rho\,\nu&{\rho}^{2}\end{pmatrix}=\frac{1}{r}\begin{pmatrix}0&0&1\\ 0&mr&2\,\rho\\ {m}^{2}{r}^{2}&\rho\,mr&{\rho}^{2}\end{pmatrix}\cdot\begin{pmatrix}1&0&0\\ -2\,\theta_{0}&1&0\\ {\theta_{0}}^{2}&-\theta_{0}&1\end{pmatrix},$ (29)
where $P_{m}$ is defined by expression (20),
$\theta_{0}=\frac{y_{0}^{\prime}}{y_{0}}$,
$\rho=-\theta_{0}-p-\frac{r^{\prime}}{2r}$ and $\nu=mr-\theta_{0}\rho$.
Then, $P_{1,m}$ is a gauge transformation which sends system
$\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}([A_{m}])$ to system
$\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}([\widetilde{A}_{m}])$.
Proof. Since matrix $P_{1,m}$ is the second symmetric power matrix of matrix
$P_{m}$, defined by (20), given $Y$ and $\widetilde{Y}$ solutions of equations
(27) and (28) respectively, it satisfies
$\widetilde{Y}=\mathrm{Sym}^{2}(P_{m})Y$. Proposition 1 shows that
$P_{m}=L_{m}.R$. As $\mathrm{Sym}(\bullet)$ is a group morphism, we have
$\mathrm{Sym}^{2}(P_{m})=\mathrm{Sym}^{2}(L_{m})\mathrm{Sym}^{2}(R)$, which
gives the matrix factorization and the result. $\blacksquare$
As systems $\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}([A_{m}])$ and
$\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}([\widetilde{A}_{m}])$ have the same
shape, we may say that $P_{1,m}$ is a _Darboux transformation_ from
$\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}([A_{m}])$ to
$\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}([\widetilde{A}_{m}])$. This new
Darboux transformation is induced by the original Darboux transformation for
second order systems. We recover the matrix factorization from our expression
of the Darboux transformation as a gauge transformation in Proposition 1.
There is another way to build a Darboux transformation for second symmetric
power systems: transform the first system
$[A_{m}]:X^{\prime}=-(A_{0}+mN)X$, given by equation (18), into a system in
$\mathfrak{sl}(2,K)$ and carry out the whole previous process with the
resulting system. In this way, we ensure that the differential Galois group of
equation (18) lies in $SL(2,C_{K})$. In order to obtain an
$\mathfrak{sl}(2,K)$ system, we consider the gauge change:
$X_{1}:=\Delta X,\qquad\Delta:=\begin{pmatrix}1&0\\\ 0&w\end{pmatrix}.$ (30)
Thus, $X_{1}=(y,{w}y^{\prime})^{T}$ and the resulting system is the
$\mathfrak{sl}(2,K)$ system
$[B_{m}]:X_{1}^{\prime}=-(B_{0}+mN_{1})X_{1},$ (31)
with $B_{0}+mN_{1}\in\mathfrak{sl}(2,K)$, given by
$B_{0}:=\begin{pmatrix}0&-\frac{1}{w}\\ wq&0\end{pmatrix}\quad\text{ and }\quad N_{1}:=\begin{pmatrix}0&0\\ -wr&0\end{pmatrix}.$
By an elimination process (cyclic vector), we can show that this system is
still equivalent to the second order linear differential equation (7). A
fundamental matrix for this system is given by:
$\mathbf{X}_{1}=\begin{pmatrix}y_{1}&y_{2}\\\
wy_{1}^{\prime}&wy_{2}^{\prime}\end{pmatrix},$ (32)
where $\{y_{1},y_{2}\}$ is a basis of solutions of equation (7).
The Darboux transformation $P_{m}$, given by (20), for system (18) induces the
Darboux transformation $\Delta P_{m}\Delta^{-1}$ for system (31).
Now, we consider the second symmetric power system of system (31). This system
is given by
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(B_{m})]:Y_{1}^{\prime}=-(\widehat{S}_{2}+m\widehat{N}_{2})Y_{1},$
(33)
where $Y_{1}:=\mathrm{Sym}^{2}(X_{1})=(y^{2},2wyy^{\prime},w^{2}(y^{\prime})^{2})^{T}$
and
$\widehat{S}_{2}+m\widehat{N}_{2}=\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(B_{0}+mN_{1})\in\mathfrak{sl}(3,K)$,
for
$\widehat{S}_{2}:=\begin{pmatrix}0&-\frac{1}{w}&0\\ 2\,wq&0&-\frac{2}{w}\\ 0&wq&0\end{pmatrix}\quad\text{ and }\quad\widehat{N}_{2}:=\begin{pmatrix}0&0&0\\ -2\,wr&0&0\\ 0&-wr&0\end{pmatrix}.$
From the above, we can easily compute the expression of a fundamental matrix
for system (33):
$\mathbf{Y}_{1}=\mathrm{Sym}^{2}(\mathbf{X}_{1})=\begin{pmatrix}y_{1}^{2}&y_{1}y_{2}&y_{2}^{2}\\ 2wy_{1}y_{1}^{\prime}&w(y_{1}y_{2}^{\prime}+y_{1}^{\prime}y_{2})&2wy_{2}y_{2}^{\prime}\\ w^{2}(y_{1}^{\prime})^{2}&w^{2}y_{1}^{\prime}y_{2}^{\prime}&w^{2}(y_{2}^{\prime})^{2}\end{pmatrix},$
(34)
where $\mathbf{X}_{1}$ is given by (32).
Next, we find the second expression for the Darboux transformation for systems
that can be written as a second symmetric power.
###### Corollary 4 (Second Darboux Transformation for
$\mathrm{Sym}^{2}(\mathrm{SL}(2,C_{K}))$)
Consider the system
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(B_{m})]:Y_{1}^{\prime}=-(\widehat{S}_{2}+m\widehat{N}_{2})Y_{1}$
given by (33).
Let $P_{2,m}:=\mathrm{Sym}^{2}(\Delta P_{m}\Delta^{-1})$. We have
$P_{2,m}=\dfrac{1}{r}\begin{pmatrix}{\theta_{0}}^{2}&-\frac{\theta_{0}}{w}&\frac{1}{w^{2}}\\ -2\,w\,\theta_{0}\nu&\nu-\rho\,\theta_{0}&\frac{2\,\rho}{w}\\ {w}^{2}\nu^{2}&w\,\rho\,\nu&{\rho}^{2}\end{pmatrix}=\frac{1}{r}\begin{pmatrix}0&0&1\\ 0&wmr&2\,w\rho\\ {w}^{2}{m}^{2}{r}^{2}&{w}^{2}\rho\,mr&{w}^{2}{\rho}^{2}\end{pmatrix}\cdot\begin{pmatrix}1&0&0\\ -2\,\theta_{0}&\frac{1}{w}&0\\ {\theta_{0}}^{2}&-\frac{\theta_{0}}{w}&\frac{1}{w^{2}}\end{pmatrix}$ (35)
where matrices $P_{m}$ and $\Delta$ are defined by (20) and (30) respectively,
$\theta_{0}=\frac{y_{0}^{\prime}}{y_{0}}$,
$\rho=-\theta_{0}-p-\frac{r^{\prime}}{2r}$ and $\nu=mr-\rho\theta_{0}$. Then,
$P_{2,m}$ is a Darboux transformation for system
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(B_{m})]$.
Proof. As we have seen, a Darboux transformation for the $\mathfrak{sl}(2,K)$
system (31) is given by $\widetilde{X}_{1}=\Delta P_{m}\Delta^{-1}X_{1}$,
where $\Delta$ and $P_{m}$ are defined by (30) and (20) respectively. The
transformed system is the $\mathfrak{sl}(2,K)$ system
$[\widetilde{B}_{m}]:=\widetilde{X}_{1}^{\prime}=-(\widetilde{B}_{0}+mN_{1})\widetilde{X}_{1},$
(36)
with $\widetilde{B}_{0}+mN_{1}\in\mathfrak{sl}(2,K)$, for
$\widetilde{X}_{1}=\begin{pmatrix}\tilde{y}_{1}\\ \tilde{y}_{2}\end{pmatrix}\quad\text{ and }\quad\widetilde{B}_{0}:=\begin{pmatrix}0&-\frac{1}{w}\\ w\tilde{q}&0\end{pmatrix}.$
Its corresponding second symmetric power system is
$[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(\widetilde{B}_{m})]:\widetilde{Y}_{1}^{\prime}=-(\widetilde{\widehat{S}}_{2}+m\widehat{N}_{2})\widetilde{Y}_{1},$
(37)
where
$\widetilde{Y}_{1}:=\mathrm{Sym}^{2}(\widetilde{X}_{1})=(\tilde{y}_{1}^{2},2\tilde{y}_{1}\tilde{y}_{2},\tilde{y}_{2}^{2})^{T}$
and
$\widetilde{\widehat{S}}_{2}+m\widehat{N}_{2}=\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(\widetilde{B}_{0}+mN_{1})\in\mathfrak{sl}(3,K)$,
for
$\widetilde{\widehat{S}}_{2}:=\begin{pmatrix}0&-\frac{1}{w}&0\\ 2\,w\tilde{q}&0&-\frac{2}{w}\\ 0&w\tilde{q}&0\end{pmatrix},$
where $\tilde{q}$ is the "new potential" obtained by the Darboux
transformation in Theorem 1.
Now, consider the second symmetric power system (33). Since
$\widetilde{Y}_{1}=\mathrm{Sym}^{2}(\widetilde{X}_{1})=\mathrm{Sym}^{2}(\Delta
P_{m}\Delta^{-1})\cdot\mathrm{Sym}^{2}(X_{1})=\mathrm{Sym}^{2}(\Delta
P_{m}\Delta^{-1})\cdot Y_{1},$
we find that matrix $P_{2,m}:=\mathrm{Sym}^{2}(\Delta P_{m}\Delta^{-1})$ is a
Darboux transformation which sends system (33) into system (37). We had the
matrix factorization $P_{m}=L_{m}R$ (Proposition 1). So, as
$\mathrm{Sym}(\bullet)$ is a group morphism,
$\mathrm{Sym}^{2}(\Delta P_{m}\Delta^{-1})=\mathrm{Sym}^{2}\left(\Delta
L_{m}\right)\cdot\mathrm{Sym}^{2}\left(R\Delta^{-1}\right)$
and this gives us the desired matrix factorization above. $\blacksquare$
We recall that if $w=1$, the Darboux transformations $P_{1,m}$ and $P_{2,m}$
coincide. This is useful in applications to non-relativistic one-dimensional
quantum mechanics, where $r=1$ as well; see Section 3.1.
Having defined the Darboux transformations for second symmetric power systems,
we can now state the Darboux transformation for $\mathfrak{so}(3,C_{K})$
systems. Since we have found two Darboux transformations for second symmetric
power systems, we obtain two Darboux transformations for
$\mathfrak{so}(3,C_{K})$ systems as well: one using Proposition 2 and another
using Corollary 4.
Applying Lemma 2, we can transform the second symmetric power system (27) into
the $\mathfrak{so}(3,C_{K})$ system
$[\Omega_{m}]:Z^{\prime}=-(\Omega_{0}+mN_{3})Z,$ (38)
where $Z=QY$ and $-(\Omega_{0}+mN_{3})=Q^{\prime}Q^{-1}-Q(S_{2}+mN_{2})Q^{-1}$
are given by
$Z=\begin{pmatrix}\alpha\\ \beta\\ \gamma\end{pmatrix},\quad\Omega_{0}=\begin{pmatrix}0&-ip&-(q+1)\\ ip&0&i(q-1)\\ q+1&-i(q-1)&0\end{pmatrix}\quad\text{and}\quad N_{3}=r\begin{pmatrix}0&0&-1\\ 0&0&i\\ 1&-i&0\end{pmatrix}.$
A fundamental matrix for this system is given by matrix (25), where
$\{y_{1},y_{2}\}$ is a basis of solutions of equation (7).
After performing the Darboux transformation (29), system (27) is transformed
into system (28), which, again by Lemma 2, can be transformed into the
$\mathfrak{so}(3,C_{K})$ system
$[\widetilde{\Omega}_{m}]:\widetilde{Z}^{\prime}=-(\widetilde{\Omega}_{0}+mN_{3})\widetilde{Z},$
(39)
where $\widetilde{Z}=Q\widetilde{Y}$ and
$-(\widetilde{\Omega}_{0}+mN_{3})=Q^{\prime}Q^{-1}-Q(\widetilde{S}_{2}+mN_{2})Q^{-1}$
for
$\widetilde{Z}=\begin{pmatrix}\widetilde{\alpha}\\ \widetilde{\beta}\\ \widetilde{\gamma}\end{pmatrix}\quad\text{and}\quad\widetilde{\Omega}_{0}=\begin{pmatrix}0&-ip&-(\tilde{q}+1)\\ ip&0&i(\tilde{q}-1)\\ \tilde{q}+1&-i(\tilde{q}-1)&0\end{pmatrix},$
for $\tilde{q}$ as in Theorem 1. Thus, the Darboux transformation (29) also
induces a transformation in the corresponding $\mathfrak{so}(3,C_{K})$ systems
which sends system (38) to system (39).
The following result shows that this induced transformation is indeed a
Darboux transformation for $\mathfrak{so}(3,C_{K})$ systems.
###### Proposition 3 (First Darboux Transformation for
$\mathfrak{so}(3,C_{K})$)
Consider the systems $[\Omega_{m}]:Z^{\prime}=-(\Omega_{0}+mN_{3})Z$ and
$[\widetilde{\Omega}_{m}]:\widetilde{Z}^{\prime}=-(\widetilde{\Omega}_{0}+mN_{3})\widetilde{Z}$
given by (38) and (39) respectively. Let $T_{1,m}$ be the matrix defined by
$T_{1,m}=QP_{1,m}Q^{-1}=\frac{1}{2r}\begin{pmatrix}-\nu^{2}+\rho^{2}+\theta_{0}^{2}-1&i\left(\nu^{2}+\rho^{2}-\theta_{0}^{2}-1\right)&2(\nu\rho+\theta_{0})\\ i\left(\nu^{2}-\rho^{2}+\theta_{0}^{2}-1\right)&\nu^{2}+\rho^{2}+\theta_{0}^{2}+1&2i\left(\theta_{0}-\nu\rho\right)\\ 2(\nu\theta_{0}+\rho)&-2i\left(\nu\theta_{0}-\rho\right)&2(\nu-\theta_{0}\rho)\end{pmatrix},$
(40)
where matrix $Q$ is defined by (24), matrix $P_{1,m}$ is defined by expression
(29) and $\nu=mr-\theta_{0}\rho$.
Then, $T_{1,m}$ is a Darboux transformation, i.e., a virtually strong
isogaloisian gauge transformation, which sends system
$[\Omega_{m}]:Z^{\prime}=-(\Omega_{0}+mN_{3})Z$ to a system
$[\widetilde{\Omega}_{m}]:\widetilde{Z}^{\prime}=-(\widetilde{\Omega}_{0}+m{N}_{3})\widetilde{Z}$
of the same shape.
###### Remark 6
As in the previous results, the matrix $T_{1,m}$ can be factored into an
$m$-dependent part and an $m$-independent part:
$T_{1,m}=\begin{pmatrix}-m^{2}r^{2}&-\rho\,mr&-\rho^{2}+1\\ im^{2}r^{2}&i\rho\,mr&i+i\rho^{2}\\ 0&-mr&-2\rho\end{pmatrix}\cdot\begin{pmatrix}\frac{1}{2}&-\frac{i}{2}&0\\ -\theta_{0}&i\theta_{0}&-1\\ \frac{1}{2}(\theta_{0}^{2}-1)&-\frac{i}{2}(\theta_{0}^{2}+1)&\theta_{0}\end{pmatrix}$
where the $m$-dependent matrix is $Q\,\mathrm{Sym}^{2}\left(\Delta
L_{m}\right)$ and the matrix on the right is
$\mathrm{Sym}^{2}\left(R\Delta^{-1}\right)Q^{-1}$.
Proof. The proof follows from the application of Lemma 2 and Theorem 2. Given
$Y$ and $\widetilde{Y}$ solutions of equations (27) and (28) respectively, and
$Z$ and $\widetilde{Z}$ solutions of (38) and (39) respectively, by Lemma 2,
we have that $Z=QY$ and $\widetilde{Z}=Q\widetilde{Y}$. On the other hand, by
Theorem 2, we know that $\widetilde{Y}=P_{1,m}Y$. Thus, we get the gauge
transformation $\widetilde{Z}=(QP_{1,m}Q^{-1})Z=T_{1,m}Z$. From this, we
immediately obtain the gauge transformation for the coefficient matrix:
$-(\widetilde{\Omega}_{0}+mN_{3})=T^{\prime}_{1,m}T^{-1}_{1,m}-T_{1,m}(\Omega_{0}+mN_{3})T^{-1}_{1,m}.$
The rest of the proposition is proved following the same argument as in
Proposition 1. $\blacksquare$
Propositions 2 and 3 can be summarized in the following commutative diagram:
$\begin{array}{ccc}
[A_{m}]:X^{\prime}=-(A_{0}+mN)X & \xrightarrow{\ P_{m}\ } & [\widetilde{A}_{m}]:\widetilde{X}^{\prime}=-(\widetilde{A}_{0}+mN)\widetilde{X}\\[6pt]
\big\downarrow\,{\scriptstyle\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}} & & \big\downarrow\,{\scriptstyle\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}}\\[6pt]
[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A_{m})]:Y^{\prime}=-(S_{2}+mN_{2})Y & \xrightarrow{\ P_{1,m}\ } & [\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(\widetilde{A}_{m})]:\widetilde{Y}^{\prime}=-(\widetilde{S}_{2}+mN_{2})\widetilde{Y}\\[6pt]
\big\downarrow\,{\scriptstyle Q} & & \big\downarrow\,{\scriptstyle Q}\\[6pt]
[\Omega_{m}]:Z^{\prime}=-(\Omega_{0}+mN_{3})Z & \xrightarrow{\ T_{1,m}\ } & [\widetilde{\Omega}_{m}]:\widetilde{Z}^{\prime}=-(\widetilde{\Omega}_{0}+mN_{3})\widetilde{Z}
\end{array}$
(41)
We end this section by establishing a second Darboux transformation for
$\mathfrak{so}(3,C_{K})$ systems. For that, we transform the
$\mathrm{Sym}^{2}(SL(2,K))$ system (33) into an $SO(3,C_{K})$ system. Consider
the matrix
$S=\begin{pmatrix}1&0&1\\ 0&i&0\\ i&0&-i\end{pmatrix}$
and the gauge change $Z_{1}=S\cdot Y_{1}$. Then, the system
$[\widehat{\Omega}_{m}]:Z_{1}^{\prime}=-(\widehat{\Omega}_{0}+m\widehat{N}_{3})Z_{1},$
(42)
where
$\widehat{\Omega}_{0}=\begin{pmatrix}0&i\left(\frac{1}{w}-wq\right)&0\\ -i\left(\frac{1}{w}-wq\right)&0&\frac{1}{w}+wq\\ 0&-\left(\frac{1}{w}+wq\right)&0\end{pmatrix}\quad\text{and}\quad\widehat{N}_{3}=wr\begin{pmatrix}0&i&0\\ -i&0&-1\\ 0&1&0\end{pmatrix},$ (43)
is an $\mathfrak{so}(3,C_{K})$ system corresponding to the linear differential
equation (18). A fundamental matrix for this system is:
$\mathbf{Z}_{1}=S\cdot\mathbf{Y}_{1}=\begin{pmatrix}y_{1}^{2}+w^{2}(y_{1}^{\prime})^{2}&y_{1}y_{2}+w^{2}y_{1}^{\prime}y_{2}^{\prime}&y_{2}^{2}+w^{2}(y_{2}^{\prime})^{2}\\ 2iwy_{1}y_{1}^{\prime}&iw(y_{1}y_{2}^{\prime}+y_{1}^{\prime}y_{2})&2iwy_{2}y_{2}^{\prime}\\ i(y_{1}^{2}-w^{2}(y_{1}^{\prime})^{2})&i(y_{1}y_{2}-w^{2}y_{1}^{\prime}y_{2}^{\prime})&i(y_{2}^{2}-w^{2}(y_{2}^{\prime})^{2})\end{pmatrix},$
(44)
where $\mathbf{Y}_{1}$ is given by (34) and $\{y_{1},y_{2}\}$ is a basis of
solutions of equation (7).
Then, the second expression for the Darboux transformation for
$\mathfrak{so}(3,C_{K})$ systems is given by:
###### Corollary 5 (Second Darboux Transformation for
$\mathfrak{so}(3,C_{K})$)
Consider the system
$[\widehat{\Omega}_{m}]:Z_{1}^{\prime}=-(\widehat{\Omega}_{0}+m\widehat{N}_{3})Z_{1}$
given by (42). Let $T_{2,m}$ be the matrix
$T_{2,m}=SP_{2,m}S^{-1}=\dfrac{1}{2r}\begin{pmatrix}w^{2}+\rho^{2}+\theta_{0}^{2}+\frac{\nu^{2}}{w^{2}}&2i\left(w\theta_{0}-\frac{\nu\rho}{w}\right)&i\left(w^{2}+\rho^{2}-\theta_{0}^{2}-\frac{\nu^{2}}{w^{2}}\right)\\ 2i\left(w\rho-\frac{\nu\theta_{0}}{w}\right)&2(\nu-\rho\theta_{0})&-2\left(w\rho+\frac{\nu\theta_{0}}{w}\right)\\ i\left(w^{2}-\rho^{2}+\theta_{0}^{2}-\frac{\nu^{2}}{w^{2}}\right)&-2\left(w\theta_{0}+\frac{\nu\rho}{w}\right)&-w^{2}+\rho^{2}+\theta_{0}^{2}-\frac{\nu^{2}}{w^{2}}\end{pmatrix},$
(45)
where matrix $P_{2,m}$ is defined by expression (35) and
$\nu=mr-\theta_{0}\rho$. Then, the gauge transformation $T_{2,m}$ is a Darboux
transformation for system $[\widehat{\Omega}_{m}]$.
Proof. Since $\widetilde{Y}_{1}=P_{2,m}\cdot Y_{1}$ by Corollary 4, it follows
that:
$T_{2,m}=S\cdot P_{2,m}\cdot S^{-1}.$
By construction, the image of $\widehat{\Omega}_{0}+m\widehat{N}_{3}$ by this
gauge transformation is $\widetilde{\widehat{\Omega}}_{0}+m\widehat{N}_{3}$,
where $\widetilde{\widehat{\Omega}}_{0}$ is obtained from
$\widehat{\Omega}_{0}$ by replacing $q$ with the function $\tilde{q}$ obtained in (5)
by the Darboux transformation:
$\widetilde{\widehat{\Omega}}_{0}=\begin{pmatrix}0&i\left(\frac{1}{w}-w\tilde{q}\right)&0\\ -i\left(\frac{1}{w}-w\tilde{q}\right)&0&\frac{1}{w}+w\tilde{q}\\ 0&-\left(\frac{1}{w}+w\tilde{q}\right)&0\end{pmatrix}.$
The rest of the corollary is proved following the same argument as in
Proposition 1. $\blacksquare$
As in all the above results, the transformation can be factored into an
$m$-dependent part and an $m$-independent part. The formula is not as compact
as the previous ones, but it is easily found with a computer algebra system
once one is equipped with this paper's methodology and results.
We note that Darboux transformations $T_{1,m}$ and $T_{2,m}$ are not
equivalent when $w\neq 1$ because matrices $Q$ and $S$ are then different.
Corollaries 4 and 5 can be summarized in the following commutative diagram:
$\begin{array}{ccc}
[A_{m}]:X^{\prime}=-(A_{0}+mN)X & \xrightarrow{\ P_{m}\ } & [\widetilde{A}_{m}]:\widetilde{X}^{\prime}=-(\widetilde{A}_{0}+mN)\widetilde{X}\\[6pt]
\big\downarrow\,{\scriptstyle\Delta} & & \big\downarrow\,{\scriptstyle\Delta}\\[6pt]
[B_{m}]:X_{1}^{\prime}=-(B_{0}+mN_{1})X_{1} & \xrightarrow{\ \Delta P_{m}\Delta^{-1}\ } & [\widetilde{B}_{m}]:\widetilde{X}_{1}^{\prime}=-(\widetilde{B}_{0}+mN_{1})\widetilde{X}_{1}\\[6pt]
\big\downarrow\,{\scriptstyle\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}} & & \big\downarrow\,{\scriptstyle\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}}\\[6pt]
[\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(B_{m})]:Y_{1}^{\prime}=-(\widehat{S}_{2}+m\widehat{N}_{2})Y_{1} & \xrightarrow{\ P_{2,m}\ } & [\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(\widetilde{B}_{m})]:\widetilde{Y}_{1}^{\prime}=-(\widetilde{\widehat{S}}_{2}+m\widehat{N}_{2})\widetilde{Y}_{1}\\[6pt]
\big\downarrow\,{\scriptstyle S} & & \big\downarrow\,{\scriptstyle S}\\[6pt]
[\widehat{\Omega}_{m}]:Z_{1}^{\prime}=-(\widehat{\Omega}_{0}+m\widehat{N}_{3})Z_{1} & \xrightarrow{\ T_{2,m}\ } & [\widetilde{\widehat{\Omega}}_{m}]:\widetilde{Z}_{1}^{\prime}=-(\widetilde{\widehat{\Omega}}_{0}+m\widehat{N}_{3})\widetilde{Z}_{1}
\end{array}$
(46)
### 2.4 Extension to General Differential Systems with an Orthogonal Galois
Group
The results can be extended to the general case of a linear differential
system $Y^{\prime}=-AY$ whose differential Galois group lies in the special
orthogonal group $SO(3,C_{K})$. This can be tested as follows: the system
should be irreducible (it has no hyperexponential solution), the trace
of $A$ should be a logarithmic derivative (i.e., the equation
$y^{\prime}=-\mathrm{Tr}(A)y$ has a rational solution), and the second symmetric power
system $Z^{\prime}=-\mathfrak{s}\mathrm{y}\mathfrak{m}^{2}(A)Z$ should have a
rational solution, corresponding to the quadratic invariant of the special
orthogonal group.
In general, the matrix of such a differential system is not in
$\mathfrak{so}(3,C_{K})$, so the results of the previous section do not
apply directly. However, using the constructive theory of reduced forms from
[10, 11], we may find a gauge transformation matrix $P$ such that, letting
$Y=PZ$, the new unknown $Z$ satisfies a system $Z^{\prime}=-BZ$ with
$B\in\mathfrak{so}(3,K)$. Then the results of the previous sections apply: we
may solve using solutions of second order equations and construct families of
equations of similar shapes via Darboux transformations.
Another version of such a process also appears in Singer’s work [31] and
several subsequent works on solving linear differential equations in terms of
lower order equations.
Note that this observation already appears in the book of Darboux [19], pages
28-29: he shows (in old language) how to identify third order linear
differential systems with an orthogonal Galois group using a first integral;
he then shows how to transform such a system into the orthogonal form treated
in this paper and calls it the _type_ or _la forme réduite_ of the class of
third order systems admitting a quadratic first integral.
## 3 Applications
In this section, to motivate the results of this paper, we present some
examples coming from supersymmetric quantum mechanics and differential
geometry.
### 3.1 Supersymmetric Quantum Mechanics
The Schrödinger equation for the stationary and non-relativistic case is given
by
$H\psi=\lambda\psi,\quad H=-\partial_{x}^{2}+V(x),$
where $\lambda$ is called the _energy_, $V$ is called the _potential_ and
$\psi$ is called the _wave function_. Supersymmetric quantum mechanics in
Witten's formalism was introduced by Witten in [33, §6] as a toy model.
He introduced the _supersymmetric partner Hamiltonians_ $H_{\pm}$ as
follows:
$H_{\pm}=-\partial_{x}^{2}+V_{\pm}(x),\quad V_{\pm}=W^{2}\pm W^{\prime},$
where $V_{\pm}$ are called the _supersymmetric partner potentials_ and $W$ is
called the _superpotential_, which satisfies
$W=-\dfrac{\psi_{0}^{\prime}}{\psi_{0}},\quad
H_{-}\psi_{0}=\lambda_{0}\psi_{0},$
where $\psi_{0}$ is called the _ground state_ and $\lambda_{0}$ is a specific
value of the energy $\lambda$.
We can go from $H_{-}\psi=\lambda\psi$ to
$H_{+}\widetilde{\psi}=\lambda\widetilde{\psi}$ through a Darboux
transformation, with the identifications $\theta=\psi_{0}$, $m=-\lambda$, $y=\psi$,
$u=\widetilde{\psi}$, $p=0$, $q=-V$, and $r=1$.
Gendenshtein introduced in [23] what today are called _shape invariant_
potentials, that is, potentials with the _shape invariance property_: the
potential $V=V_{-}=V_{-}(x;a)$ has the shape invariance property if and only
if its supersymmetric partner potential $V_{+}=V_{+}(x;a)$ can be written as
$V_{+}(x;a)=V_{-}(x;a_{1})+R(a_{1})$, where $a$ is a set of parameters and
$a_{1}=f(a)$. In other words, the supersymmetric partner potentials differ
only in their parameters. Applying this procedure systematically, one can obtain the
spectrum as the values of the energy $\lambda$ such that
$\lambda=\sum_{k=1}^{n}R(a_{k}).$
Moreover, $H_{-}=A^{\dagger}A$ and $H_{+}=AA^{\dagger}$, where
$A^{\dagger}=-\partial_{x}+W$ and $A=\partial_{x}+W$ are called the _ladder
(raising and lowering) operators_, see [20]. We can rewrite the starting
potential $V$ as $V_{-}-\lambda_{0}$ to apply Darboux transformations. Thus,
we can obtain the rest of the wave functions by applying the raising operator:
$\psi_{1}=A(x;a_{0})^{\dagger}\psi_{0}$, and in general
$\psi_{k}=A(x;a_{k-1})^{\dagger}\psi_{k-1}$, where $a_{0}=a$ and
$a_{1}=f(a_{0})$. The first examples of rational shape invariant potentials
correspond to the harmonic oscillator and Coulomb potentials, in the
one-dimensional and three-dimensional cases.
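The Darboux correspondence between the partner Hamiltonians rests on the intertwining relation $H_{+}A=AH_{-}$, which follows from the factorizations $H_{-}=A^{\dagger}A$ and $H_{+}=AA^{\dagger}$. A minimal sympy check (our illustration, with a generic superpotential $W$ and test function $f$, neither taken from the paper):

```python
# Sketch (our illustration): verify the intertwining H_+ A = A H_- for
# H_pm = -d^2/dx^2 + W^2 +- W' and A = d/dx + W, with W, f arbitrary.
import sympy as sp

x = sp.symbols('x')
W = sp.Function('W')(x)
f = sp.Function('f')(x)

A  = lambda g: sp.diff(g, x) + W * g                            # lowering operator
Hm = lambda g: -sp.diff(g, x, 2) + (W**2 - sp.diff(W, x)) * g   # H_-
Hp = lambda g: -sp.diff(g, x, 2) + (W**2 + sp.diff(W, x)) * g   # H_+

# If H_- f = lambda f, then H_+ (A f) = A (H_- f) = lambda (A f).
assert sp.expand(Hp(A(f)) - A(Hm(f))) == 0
```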
In the following, we combine this theoretical background regarding the
Schrödinger equation and Supersymmetric Quantum Mechanics with our results
from the previous section.
#### 3.1.1 Darboux Transformation in Matrix Form for Supersymmetric Quantum
Mechanics
We apply our previous results to a matrix form of the Schrödinger equation. We
start by introducing the following $2\times 2$ matrix Schrödinger operators,
with supersymmetric partner potentials, related to the systems (18) and (19)
as follows.
$\mathcal{H}_{\pm}=-\partial_{x}+\mathbf{V}_{\pm},\quad\mathbf{V}_{\pm}=\begin{pmatrix}0&1\\\
V_{\pm}&0\end{pmatrix},\quad\mathbf{V}_{-}=-A_{0},\quad\mathbf{V}_{+}=-\widetilde{A}_{0},$
where
$\mathcal{H}_{-}\Psi=\mathfrak{E}_{\lambda}\Psi,\quad
\mathcal{H}_{+}\widetilde{\Psi}=\mathfrak{E}_{\lambda}\widetilde{\Psi},\quad\Psi=\begin{pmatrix}\psi\\ \psi^{\prime}\end{pmatrix},\quad\widetilde{\Psi}=\begin{pmatrix}\widetilde{\psi}\\ \widetilde{\psi}^{\prime}\end{pmatrix},\quad\mathfrak{E}_{\lambda}=\lambda(-N),\quad-N=\begin{pmatrix}0&0\\ 1&0\end{pmatrix}.$
According to Proposition 1, the relevant Darboux transformation in this
$2\times 2$ matrix formalism is given by
$\widetilde{\Psi}=P_{\lambda}\Psi,\quad P_{\lambda}=\begin{pmatrix}W&1\\\
W^{2}-\lambda&W\end{pmatrix}=\begin{pmatrix}0&1\\\
-\lambda&W\end{pmatrix}\cdot\begin{pmatrix}1&0\\\ W&1\end{pmatrix}.$
and the matrix supersymmetric partner potentials $\mathbf{V}_{\pm}$ depend on
the scalar supersymmetric partner potentials $V_{\pm}$ according to Witten's
original formalism, i.e., $V_{+}=V_{-}+2W^{\prime}$, which leads us to
$\mathbf{V}_{+}=\mathbf{V}_{-}+2W^{\prime}(-N)$.
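As a consistency check (our own sketch, not part of the original text), one can verify symbolically that $P_{\lambda}$ is a gauge transformation between the first-order systems $\Psi^{\prime}=(\mathbf{V}_{-}+\lambda N)\Psi$ and $\widetilde{\Psi}^{\prime}=(\mathbf{V}_{+}+\lambda N)\widetilde{\Psi}$, i.e. $P_{\lambda}^{\prime}+P_{\lambda}(\mathbf{V}_{-}+\lambda N)=(\mathbf{V}_{+}+\lambda N)P_{\lambda}$ with $V_{\pm}=W^{2}\pm W^{\prime}$:

```python
# Sketch (our illustration): check that P_lambda is a gauge transformation
# between the 2x2 systems, i.e. P' + P(V_- + lam*N) = (V_+ + lam*N)P,
# for V_pm = W^2 +- W' with a generic superpotential W.
import sympy as sp

x, lam = sp.symbols('x lambda')
W  = sp.Function('W')(x)
Wp = sp.diff(W, x)

P  = sp.Matrix([[W, 1], [W**2 - lam, W]])
N  = sp.Matrix([[0, 0], [-1, 0]])
Vm = sp.Matrix([[0, 1], [W**2 - Wp, 0]])   # V_- = W^2 - W'
Vp = sp.Matrix([[0, 1], [W**2 + Wp, 0]])   # V_+ = W^2 + W'

lhs = P.diff(x) + P * (Vm + lam * N)
rhs = (Vp + lam * N) * P
assert sp.expand(lhs - rhs) == sp.zeros(2, 2)
```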
Now we present the shape invariance property for this $2\times 2$ matrix
formalism of the Schrödinger equation. Consider the parametric
supersymmetric partner potentials and the matrix $\mathbf{R}(a_{1})$ given by
$\mathbf{V}_{\pm}(x;a)=\begin{pmatrix}0&1\\\
V_{\pm}(x;a)&0\end{pmatrix},\quad\mathbf{R}(a_{1})=R(a_{1})(-N),$
where, as in the classical case, $a$ is a set of parameters and $a_{1}=f(a)$.
The potential $\mathbf{V}=\mathbf{V}_{-}=\mathbf{V}_{-}(x;a)$ has the shape
invariance property if and only if its supersymmetric partner potential
satisfies
$\mathbf{V}_{+}=\mathbf{V}_{+}(x;a)=\mathbf{V}_{-}(x;a_{1})+\mathbf{R}(a_{1}).$
We see again that the supersymmetric partner potentials differ only in their
parameters. Applying this procedure systematically, we can obtain the spectrum
as the values of the energy $\mathfrak{E}_{\lambda}(a)$, where
$\mathfrak{E}_{\lambda}(1)=\mathfrak{E}_{\lambda}$, such that
$\mathfrak{E}_{\lambda}(a)=\sum_{k=1}^{n}\mathbf{R}(a_{k}).$
We also present the ladder operators for this $2\times 2$ matrix formalism of
the Schrödinger equation:
$\mathbf{A}=\begin{pmatrix}A&0\\\
WA&0\end{pmatrix},\quad\mathbf{A}^{\dagger}=\begin{pmatrix}A^{\dagger}&0\\\
2W^{\prime}-WA^{\dagger}&0\end{pmatrix}.$
We illustrate this formalism with the 1D-harmonic oscillator, which is a
classical rational shape invariant potential. The superpotential for harmonic
oscillator is $W=x$, thus the supersymmetric partner potentials are given by
$\mathbf{V}_{-}=\begin{pmatrix}0&1\\\
x^{2}-1&0\end{pmatrix},\quad\mathbf{V}_{+}=\begin{pmatrix}0&1\\\
x^{2}+1&0\end{pmatrix}=\mathbf{V}_{-}+\begin{pmatrix}0&0\\\ 2&0\end{pmatrix}.$
Therefore, introducing a multiplicative parameter $a$ in $\mathbf{V}_{-}$,
such that $\mathbf{V}_{-}(x;a)=\mathbf{V}_{-}(x)$ for $a=1$, we obtain
$f(a_{1})=2a,\quad\mathbf{R}(a_{1})=\begin{pmatrix}0&0\\\
2a&0\end{pmatrix},\quad\mathfrak{E}_{\lambda}(a)=\sum_{k=1}^{n}\mathbf{R}(a_{k})=\begin{pmatrix}0&0\\\
2na&0\end{pmatrix}.$
Thus, for $a=1$ the spectrum of $\mathcal{H}_{-}$ is
$\mathrm{Spec}(\mathcal{H}_{-})=\{\mathfrak{E}_{\lambda}:\ \lambda\in 2\mathbb{Z}_{+}\}.$
For instance, we have
$\widetilde{\Psi}=P_{\lambda}\Psi=\begin{pmatrix}x&1\\\
x^{2}-\lambda&x\end{pmatrix}\begin{pmatrix}H_{\frac{\lambda}{2}}(x)\\\
H^{\prime}_{\frac{\lambda}{2}}(x)-xH_{\frac{\lambda}{2}}(x)\end{pmatrix}\exp\left(-\frac{x^{2}}{2}\right),$
where $H_{\frac{\lambda}{2}}$ denotes the Hermite polynomial of degree
$\frac{\lambda}{2}$. The ladder operators are given respectively by
$\mathbf{A}=\begin{pmatrix}A&0\\\
xA&0\end{pmatrix},\quad\mathbf{A}^{\dagger}=\begin{pmatrix}A^{\dagger}&0\\\
2-xA^{\dagger}&0\end{pmatrix},$
where $A=\partial_{x}+x$ and $A^{\dagger}=-\partial_{x}+x$. Using these ladder
operators we can obtain
$\Psi_{n}=\mathbf{A}^{\dagger}\Psi_{n-1}(x),\quad\textrm{ where }\ \lambda=2n\
\textrm{ and }\ \Psi_{0}=\begin{pmatrix}\exp(-\frac{x^{2}}{2})\\\
-x\exp(-\frac{x^{2}}{2})\end{pmatrix}.$
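The spectrum claim can be spot-checked symbolically. The sketch below (our illustration) verifies the scalar equation behind $\mathcal{H}_{-}\Psi=\mathfrak{E}_{\lambda}\Psi$, namely $-\psi^{\prime\prime}+(x^{2}-1)\psi=\lambda\psi$ for $\psi=H_{n}(x)e^{-x^{2}/2}$ and $\lambda=2n$:

```python
# Sketch (our illustration): for lambda = 2n, psi = H_n(x)*exp(-x^2/2)
# solves -psi'' + (x^2 - 1)psi = lambda*psi, the scalar equation behind
# the 2x2 eigenvalue problem for the 1D harmonic oscillator.
import sympy as sp

x = sp.symbols('x')
for n in range(5):
    psi = sp.hermite(n, x) * sp.exp(-x**2 / 2)
    residual = -sp.diff(psi, x, 2) + (x**2 - 1) * psi - 2 * n * psi
    assert sp.simplify(residual) == 0
```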
#### 3.1.2 Second Symmetric Power Approach for Supersymmetric Quantum
Mechanics
The following results correspond to the second symmetric power of the
Schrödinger equation in the previous matrix formalism. Thus, we obtain the
following $3\times 3$ matrix Schrödinger operators, with supersymmetric
partner potentials according to equations (27), (28), (33) and (37):
$\mathcal{H}_{\pm}=-\partial_{x}+\mathbf{V}_{\pm},\quad\mathbf{V}_{-}=-S=-\widehat{S},\quad\mathbf{V}_{+}=-\widetilde{S}=-\widetilde{\widehat{S}},\quad\mathbf{V}_{\pm}=\begin{pmatrix}0&1&0\\\
2V_{\pm}&0&2\\\ 0&V_{\pm}&0\end{pmatrix},$
where
$\mathcal{H}_{-}\Psi=\mathfrak{E}_{\lambda}\Psi,\quad
\mathcal{H}_{+}\widetilde{\Psi}=\mathfrak{E}_{\lambda}\widetilde{\Psi},\quad\Psi=\begin{pmatrix}\psi^{2}\\ 2\psi\psi^{\prime}\\ (\psi^{\prime})^{2}\end{pmatrix},\quad\widetilde{\Psi}=\begin{pmatrix}\widetilde{\psi}^{2}\\ 2\widetilde{\psi}\widetilde{\psi}^{\prime}\\ (\widetilde{\psi}^{\prime})^{2}\end{pmatrix},\quad\mathfrak{E}_{\lambda}=\lambda(-N_{1}),$
and
$-N_{1}=-N_{2}=\begin{pmatrix}0&0&0\\\ 2&0&0\\\ 0&1&0\end{pmatrix}.$
Using (29), our generalized Darboux transformation in this $3\times 3$ matrix
formalism is given by
$\widetilde{\Psi}=P_{\lambda}\Psi,\quad
P_{\lambda}=P_{1,\lambda}=P_{2,\lambda}=\begin{pmatrix}W^{2}&W&1\\\
2W(W^{2}-\lambda)&2W^{2}-\lambda&2W\\\
(W^{2}-\lambda)^{2}&W(W^{2}-\lambda)&W^{2}\end{pmatrix}$
with the factorization into a $\lambda$-dependent part and a part with only
$W$:
$P_{\lambda}=\begin{pmatrix}0&0&1\\\ 0&-\lambda&2W\\\ \lambda^{2}&-\lambda
W&W^{2}\end{pmatrix}\cdot\begin{pmatrix}1&0&0\\\ 2W&1&0\\\
W^{2}&W&1\end{pmatrix}.$
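The factorization can be verified by a direct multiplication; a short sympy check (our illustration) confirms that the product of the two factors reproduces $P_{\lambda}$:

```python
# Sketch (our illustration): multiply the lambda-dependent factor by the
# W-only factor and confirm the product is the 3x3 Darboux matrix P_lambda.
import sympy as sp

W, lam = sp.symbols('W lambda')

P = sp.Matrix([[W**2,             W,              1],
               [2*W*(W**2 - lam), 2*W**2 - lam,   2*W],
               [(W**2 - lam)**2,  W*(W**2 - lam), W**2]])

left  = sp.Matrix([[0, 0, 1],
                   [0, -lam, 2*W],
                   [lam**2, -lam*W, W**2]])
right = sp.Matrix([[1, 0, 0],
                   [2*W, 1, 0],
                   [W**2, W, 1]])

assert sp.expand(left * right - P) == sp.zeros(3, 3)
```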
As in the previous case, the matrix supersymmetric partner potentials
$\mathbf{V}_{\pm}$ depend on the scalar supersymmetric partner potentials
$V_{\pm}$ according to Witten's original formalism, i.e.,
$V_{+}=V_{-}+2W^{\prime}$, which leads us to
$\mathbf{V}_{+}=\mathbf{V}_{-}+2W^{\prime}(-N_{1}).$
Now we present the shape invariance property for this $3\times 3$ matrix
formalism of the Schrödinger equation. Consider the parametric
supersymmetric partner potentials and the matrix $\mathbf{R}(a_{1})$ given by
$\mathbf{V}_{\pm}(x;a)=\begin{pmatrix}0&1&0\\ 2V_{\pm}(x;a)&0&2\\ 0&V_{\pm}(x;a)&0\end{pmatrix},\quad\mathbf{R}(a_{1})=R(a_{1})(-N_{1}),$
where, as in the previous case, $a$ is a set of parameters and $a_{1}=f(a)$.
The potential $\mathbf{V}=\mathbf{V}_{-}=\mathbf{V}_{-}(x;a)$ has the shape
invariance property if and only if its supersymmetric partner potential can be
written as
$\mathbf{V}_{+}=\mathbf{V}_{+}(x;a)=\mathbf{V}_{-}(x;a_{1})+\mathbf{R}(a_{1}).$
We see again that the supersymmetric partner potentials differ only in their
parameters. Applying this procedure systematically, we can obtain the spectrum
as the values of the energy $\mathfrak{E}_{\lambda}(a)$ given by
$\mathfrak{E}_{\lambda}(1)=\mathfrak{E}_{\lambda}$ and
$\mathfrak{E}_{\lambda}(a)=\sum_{k=1}^{n}\mathbf{R}(a_{k}).$
The ladder operators for this $3\times 3$ matrix formalism of Schrödinger
equation are:
$\mathbf{A}=\begin{pmatrix}WA&0&1\\\ 2W^{2}A&0&2W\\\
W^{3}A&0&W^{2}\end{pmatrix},\quad\mathbf{A}^{\dagger}=\begin{pmatrix}WA^{\dagger}&0&1\\\
-2W^{2}A^{\dagger}&0&-2W\\\ W^{3}A^{\dagger}&0&W^{2}\end{pmatrix}.$
We illustrate this formalism with the 1D-harmonic oscillator, which is a
classical rational shape invariant potential. The superpotential for harmonic
oscillator is $W=x$, thus the supersymmetric partner potentials are given by
$\mathbf{V}_{-}=\begin{pmatrix}0&1&0\\\ 2x^{2}-2&0&2\\\
0&x^{2}-1&0\end{pmatrix},\quad\mathbf{V}_{+}=\begin{pmatrix}0&1&0\\\
2x^{2}+2&0&2\\\
0&x^{2}+1&0\end{pmatrix}=\mathbf{V}_{-}+\begin{pmatrix}0&0&0\\\ 4&0&0\\\
0&2&0\end{pmatrix}.$
Therefore, introducing a multiplicative parameter $a$ in $\mathbf{V}_{-}$,
such that $\mathbf{V}_{-}(x;a)=\mathbf{V}_{-}(x)$ for $a=1$, we obtain
$f(a_{1})=2a,\quad\mathbf{R}(a_{1})=\begin{pmatrix}0&0&0\\\ 4a&0&0\\\
0&2a&0\end{pmatrix},\quad\mathfrak{E}_{\lambda}(a)=\sum_{k=1}^{n}\mathbf{R}(a_{k})=\begin{pmatrix}0&0&0\\\
4na&0&0\\\ 0&2na&0\end{pmatrix}.$
Thus, for $a=1$ the spectrum of $\mathcal{H}_{-}$ is
$\mathrm{Spec}(\mathcal{H}_{-})=\{\mathfrak{E}_{\lambda}:\ \lambda\in 2\mathbb{Z}_{+}\}.$
For instance, we have
$\widetilde{\Psi}=P_{\lambda}\Psi=\begin{pmatrix}x^{2}&x&1\\\ 2x^{3}-2\lambda
x&2x^{2}-\lambda&2x\\\ x^{4}-2\lambda x^{2}+\lambda^{2}&x^{3}-\lambda
x&x^{2}\end{pmatrix}\begin{pmatrix}H^{2}_{\frac{\lambda}{2}}(x)\\\
\left(H^{2}\right)^{\prime}_{\frac{\lambda}{2}}(x)-2xH^{2}_{\frac{\lambda}{2}}(x)\\\
\left(H^{\prime}_{\frac{\lambda}{2}}(x)-xH_{\frac{\lambda}{2}}(x)\right)^{2}\end{pmatrix}\exp\left(-x^{2}\right),$
where $H_{\frac{\lambda}{2}}$ denotes the Hermite polynomial of degree
$\frac{\lambda}{2}$. The ladder operators are given respectively by
$\mathbf{A}=\begin{pmatrix}xA&0&1\\\ 2x^{2}A&0&2x\\\
x^{3}A&0&x^{2}\end{pmatrix},\quad\mathbf{A}^{\dagger}=\begin{pmatrix}xA^{\dagger}&0&1\\\
-2x^{2}A^{\dagger}&0&-2x\\\ x^{3}A^{\dagger}&0&x^{2}\end{pmatrix}.$
### 3.2 Some $\mathfrak{so}(3,C_{K})$ Systems
In this section, we revisit, from the point of view developed in this article,
two well-known problems which are naturally expressed as
$\mathfrak{so}(3,C_{K})$ systems.
#### 3.2.1 Frenet-Serret Formulas
Given a non-degenerate curve in space, denote by $T$ the unit tangent
vector to the curve, by $N$ the unit normal vector and by $B=T\times N$ the
unit binormal vector. Then, the Frenet-Serret formulas can be written as
the following $\mathfrak{so}(3,C_{K})$ system:
$\begin{pmatrix}T\\\ N\\\
B\end{pmatrix}^{\prime}=-\begin{pmatrix}0&-\kappa&0\\\ \kappa&0&-\tau\\\
0&\tau&0\end{pmatrix}\cdot\begin{pmatrix}T\\\ N\\\
B\end{pmatrix}=-\Omega\cdot\begin{pmatrix}T\\\ N\\\ B\end{pmatrix},$ (47)
where $^{\prime}$ denotes the derivative with respect to arclength, $\kappa$ is the
curvature of the curve and $\tau$ is its torsion; see [32, Chap. 1] for more
details.
In order to apply our previous formalism to this $\mathfrak{so}(3,C_{K})$
system, we have two possibilities: we can use system (38) or system (42).
In the first case, the identification of matrix $\Omega$ with matrix
$\Omega_{0}$ given by Lemma 2 yields the degenerate situation: $\kappa=-ip$,
$\tau=i(q-1)$ and $0=q+1$, hence, $q=-1$ and $\tau=-2i$. The second order
linear differential equation associated to this system is:
$\mathcal{L}y=y^{\prime\prime}+i\kappa y^{\prime}-y=0.$ (48)
Since $p=\frac{w^{\prime}}{w}=i\kappa$, we find that $w=e^{(i\int\kappa dx)}$.
As an immediate application of $\mathfrak{so}(3,C_{K})$ systems to the
Frenet-Serret formulas, we obtain directly a fundamental matrix of solutions
through Corollary 1:
$\mathbf{Z}=e^{(i\int\kappa dx)}\cdot\begin{pmatrix}y_{1}^{2}-(y_{1}^{\prime})^{2}&y_{1}y_{2}-y_{1}^{\prime}y_{2}^{\prime}&y_{2}^{2}-(y_{2}^{\prime})^{2}\\ i(y_{1}^{2}+(y_{1}^{\prime})^{2})&i(y_{1}y_{2}+y_{1}^{\prime}y_{2}^{\prime})&i(y_{2}^{2}+(y_{2}^{\prime})^{2})\\ -2y_{1}y_{1}^{\prime}&-y_{1}y_{2}^{\prime}-y_{1}^{\prime}y_{2}&-2y_{2}y_{2}^{\prime}\end{pmatrix},$
where $\{y_{1},y_{2}\}$ is a basis of solutions of equation (48).
Notice that this framework is only valid for curves with torsion $\tau=-2i$.
However, we can avoid this restriction by using the second approach, given by
equation (42). In this case, the gauge change given by $S$ transforms the
matrix $\Omega_{0}$ into a matrix with the same structure as $\Omega$, namely
$\widehat{\Omega}_{0}$. Identifying the entries of both matrices, we obtain:
$\kappa=-i(1/w-wq)$ and $\tau=-(1/w+wq)$. Thus, $w=\frac{2}{i\kappa-\tau}$ and
$q=\frac{\kappa^{2}+\tau^{2}}{4}$ and the second order linear differential
equation associated to the system in this case is:
$\mathcal{L}y=y^{\prime\prime}-\dfrac{(i\kappa-\tau)^{\prime}}{i\kappa-\tau}y^{\prime}+\dfrac{\kappa^{2}+\tau^{2}}{4}y=0.$
(49)
Under these assumptions, the fundamental matrix for this system given by (44)
becomes:
$\mathbf{Z}_{1}=\begin{pmatrix}y_{1}^{2}+\dfrac{4(y_{1}^{\prime})^{2}}{(i\kappa-\tau)^{2}}&y_{1}y_{2}+\dfrac{4y_{1}^{\prime}y_{2}^{\prime}}{(i\kappa-\tau)^{2}}&y_{2}^{2}+\dfrac{4(y_{2}^{\prime})^{2}}{(i\kappa-\tau)^{2}}\\[9pt] \dfrac{4iy_{1}y_{1}^{\prime}}{i\kappa-\tau}&\dfrac{2i(y_{1}y_{2}^{\prime}+y_{1}^{\prime}y_{2})}{i\kappa-\tau}&\dfrac{4iy_{2}y_{2}^{\prime}}{i\kappa-\tau}\\[9pt] i\left(y_{1}^{2}-\dfrac{4(y_{1}^{\prime})^{2}}{(i\kappa-\tau)^{2}}\right)&i\left(y_{1}y_{2}-\dfrac{4y_{1}^{\prime}y_{2}^{\prime}}{(i\kappa-\tau)^{2}}\right)&i\left(y_{2}^{2}-\dfrac{4(y_{2}^{\prime})^{2}}{(i\kappa-\tau)^{2}}\right)\end{pmatrix},$
for $\{y_{1},y_{2}\}$ a basis of solutions of equation (49).
Finally, we consider the following perturbation of the Frenet-Serret system,
according to the second approach for $\mathfrak{so}(3,C_{K})$ systems (see
equation (42)):
$\begin{pmatrix}T\\\ N\\\
B\end{pmatrix}^{\prime}=-\left(\begin{pmatrix}0&-\kappa&0\\\ \kappa&0&-\tau\\\
0&\tau&0\end{pmatrix}+\frac{2m}{i\kappa-\tau}\begin{pmatrix}0&i&0\\\
-i&0&-1\\\ 0&1&0\end{pmatrix}\right)\cdot\begin{pmatrix}T\\\ N\\\
B\end{pmatrix}.$
Following the philosophy of Darboux, we can construct an infinite chain of such
perturbed Frenet-Serret systems by applying Corollary 5 (we restrict ourselves
to the case $r=1$, since it is the usual situation in applications; see, for
instance, the classical book of Matveev and Salle [28]). The Darboux
transformation is given by:
$T_{2,m}=\dfrac{1}{2}\begin{pmatrix}\dfrac{4}{\eta^{2}}+\rho^{2}+\theta_{0}^{2}+\dfrac{\nu^{2}\eta^{2}}{4}&i\left(\dfrac{4\theta_{0}}{\eta}-\nu\rho\eta\right)&i\left(\dfrac{4}{\eta^{2}}+\rho^{2}-\theta_{0}^{2}-\dfrac{\nu^{2}\eta^{2}}{4}\right)\\[10pt] i\left(\dfrac{4\rho}{\eta}-\nu\theta_{0}\eta\right)&2(\nu-\rho\theta_{0})&-\dfrac{4\rho}{\eta}-\nu\theta_{0}\eta\\[10pt] i\left(\dfrac{4}{\eta^{2}}-\rho^{2}+\theta_{0}^{2}-\dfrac{\nu^{2}\eta^{2}}{4}\right)&-\dfrac{4\theta_{0}}{\eta}-\nu\rho\eta&\rho^{2}+\theta_{0}^{2}-\dfrac{4}{\eta^{2}}-\dfrac{\nu^{2}\eta^{2}}{4}\end{pmatrix},$
for $\theta_{0}=\frac{y^{\prime}}{y}$,
$\rho=-\theta_{0}+\frac{\eta^{\prime}}{\eta}$, $\nu=m-\theta_{0}\rho$ and
$\eta=i\kappa-\tau$, where $y$ is a solution of equation (49). As before, this
matrix can be factored into an $m$-dependent part and an $m$-independent part.
#### 3.2.2 Rigid Solid Problem
A rigid solid consists of a set of points in space preserving the distances
among them under the action of applied forces. The transformations
allowed for a rigid solid are translations and rotations.
The Poisson equation describes the motion of the rigid body in space:
$\gamma^{\prime}=\gamma\times\omega,$
where $\gamma=(\gamma_{1},\gamma_{2},\gamma_{3})^{T}$ is a unit vector fixed
in space, $\omega=(\omega_{1},\omega_{2},\omega_{3})^{T}$ the angular velocity
vector and $\,{}^{\prime}=\partial_{t}$. See [3, 21, 22] for more details.
We follow the constraints and notation of [21]. Hence, we take $\omega_{3}=0$
and restrict ourselves to rigid transformations in the plane. In this case,
the Poisson equation can be rewritten as the $\mathfrak{so}(3,C_{K})$ system:
$\begin{pmatrix}\gamma_{1}\\ \gamma_{2}\\ \gamma_{3}\end{pmatrix}^{\prime}=-\begin{pmatrix}0&0&\omega_{2}\\ 0&0&-\omega_{1}\\ -\omega_{2}&\omega_{1}&0\end{pmatrix}\cdot\begin{pmatrix}\gamma_{1}\\ \gamma_{2}\\ \gamma_{3}\end{pmatrix}=-A\cdot\begin{pmatrix}\gamma_{1}\\ \gamma_{2}\\ \gamma_{3}\end{pmatrix}.$ (50)
Fedorov et al. [21] studied this system for a general matrix $A$. Acosta-Humánez et al. [3] considered a particular case with $\omega_{1}=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}$ and $\omega_{2}=\frac{2\sqrt{2}}{e^{x}+e^{-x}}$. In this work, we restrict the allowed rigid transformations to the coupled case $i\omega_{1}+\omega_{2}=2$. For that, we consider the $\mathfrak{so}(3,C_{K})$
system given in Lemma 2 and apply the formalism developed for the
$\mathfrak{so}(3,C_{K})$ system (38). The identification of matrix $A$ with
matrix $\Omega$ leads to: $p=0$, $\omega_{1}=i(q-1)$, $\omega_{2}=q+1$, hence,
$q=1-i\omega_{1}=\omega_{2}-1$. This yields the announced coupled situation
$i\omega_{1}+\omega_{2}=2$.
The second order linear differential equation associated to this problem is:
$\mathcal{L}y=y^{\prime\prime}+(1-i\omega_{1})y=y^{\prime\prime}+(\omega_{2}-1)y=0.$
(51)
Since $p=\frac{w^{\prime}}{w}=0$, we find that $w\in C_{K}$. Without loss of generality, we can assume that $w=1$. Applying Corollary 1 we obtain a
fundamental matrix of solutions for the rigid solid problem:
$\mathbf{Z}=\begin{pmatrix}y_{1}^{2}-(y_{1}^{\prime})^{2}&y_{1}y_{2}-y_{1}^{\prime}y_{2}^{\prime}&y_{2}^{2}-(y_{2}^{\prime})^{2}\\[3pt] i(y_{1}^{2}+(y_{1}^{\prime})^{2})&i(y_{1}y_{2}+y_{1}^{\prime}y_{2}^{\prime})&i(y_{2}^{2}+(y_{2}^{\prime})^{2})\\[3pt] -2\,y_{1}y_{1}^{\prime}&-y_{1}y_{2}^{\prime}-y_{1}^{\prime}y_{2}&-2\,y_{2}y_{2}^{\prime}\end{pmatrix},$
where $\{y_{1},y_{2}\}$ is a basis of solutions of equation (51).
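The claim that $\mathbf{Z}$ is a fundamental matrix can be checked column by column: each column must satisfy $\gamma^{\prime}=-A\gamma$ once $y^{\prime\prime}=-qy$ is imposed, with $q=1-i\omega_{1}=\omega_{2}-1$, so $\omega_{1}=i(q-1)$ and $\omega_{2}=q+1$. A small symbolic sketch of ours (SymPy; constant $q$ for concreteness, first column only):

```python
import sympy as sp

# Check that the first column of Z solves the system (50) in the
# coupled case i*omega_1 + omega_2 = 2, i.e. omega_1 = i(q - 1),
# omega_2 = q + 1, with y a solution of y'' + q*y = 0 (equation (51)).
x, q = sp.symbols('x q')
y = sp.Function('y')(x)
yp = sp.diff(y, x)

w1, w2 = sp.I * (q - 1), q + 1
A = sp.Matrix([[0, 0, w2], [0, 0, -w1], [-w2, w1, 0]])

# first column of Z, built from a single solution y of (51)
z = sp.Matrix([y**2 - yp**2, sp.I * (y**2 + yp**2), -2 * y * yp])

residual = sp.diff(z, x) + A * z                          # gamma' = -A gamma
residual = residual.subs(sp.Derivative(y, (x, 2)), -q * y)  # impose (51)
print(residual.applyfunc(sp.expand))  # Matrix([[0], [0], [0]])
```

The same computation with the second column (built from two solutions $y_{1},y_{2}$) goes through identically.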
Next, we consider the second approach for $\mathfrak{so}(3,C_{K})$ systems,
given by equation (42). The identification of matrices $A$ and
$\widehat{\Omega}_{0}$ leads to the degenerate situation:
$\omega_{1}=-(1/w+wq)$, $\omega_{2}=0$ and $1/w-wq=0$. Thus, $w=-2/\omega_{1}$
and $q=\omega_{1}^{2}/4$. This case also produces a coupled situation, this
time between $w$ and $q$: $q=1/w^{2}$. The fact that $\omega_{2}=0$ means that
we are only considering rigid transformations in the line, which is a
simplification of the problem. The second order linear differential equation
associated to the system in this case is:
$\mathcal{L}y=y^{\prime\prime}-\dfrac{\omega_{1}^{\prime}}{\omega_{1}}y^{\prime}+\dfrac{\omega_{1}^{2}}{4}y=0.$
(52)
Under these assumptions, a fundamental matrix for this system, given by (44),
becomes:
$\mathbf{Z}_{1}=\begin{pmatrix}y_{1}^{2}+\dfrac{4(y_{1}^{\prime})^{2}}{\omega_{1}^{2}}&y_{1}y_{2}+\dfrac{4y_{1}^{\prime}y_{2}^{\prime}}{\omega_{1}^{2}}&y_{2}^{2}+\dfrac{4(y_{2}^{\prime})^{2}}{\omega_{1}^{2}}\\[10pt] -\dfrac{4iy_{1}y_{1}^{\prime}}{\omega_{1}}&-\dfrac{2i(y_{1}y_{2}^{\prime}+y_{1}^{\prime}y_{2})}{\omega_{1}}&-\dfrac{4iy_{2}y_{2}^{\prime}}{\omega_{1}}\\[10pt] i\left(y_{1}^{2}-\dfrac{4(y_{1}^{\prime})^{2}}{\omega_{1}^{2}}\right)&i\left(y_{1}y_{2}-\dfrac{4y_{1}^{\prime}y_{2}^{\prime}}{\omega_{1}^{2}}\right)&i\left(y_{2}^{2}-\dfrac{4(y_{2}^{\prime})^{2}}{\omega_{1}^{2}}\right)\end{pmatrix},$
for $\{y_{1},y_{2}\}$ a basis of solutions of equation (52).
Finally, we consider the following perturbation for the rigid solid system,
according to the first approach for $\mathfrak{so}(3,C_{K})$ systems (see
equation (38)):
$\begin{pmatrix}\gamma_{1}\\ \gamma_{2}\\ \gamma_{3}\end{pmatrix}^{\prime}=-\left(\begin{pmatrix}0&0&\omega_{2}\\ 0&0&-\omega_{1}\\ -\omega_{2}&\omega_{1}&0\end{pmatrix}+m\begin{pmatrix}0&0&-1\\ 0&0&i\\ 1&-i&0\end{pmatrix}\right)\cdot\begin{pmatrix}\gamma_{1}\\ \gamma_{2}\\ \gamma_{3}\end{pmatrix}.$
Following Darboux's philosophy, in the same vein as for the Frenet-Serret system, we can construct an infinite chain of such perturbed Poisson
equations for the rigid body problem by applying Proposition 3 (again, we
restrict ourselves to the case $r=1$). The Darboux transformation is given by:
$T_{1,m}=\frac{1}{2}\begin{pmatrix}-\nu^{2}+2\theta_{0}^{2}-1&i\left(\nu^{2}-1\right)&2\theta_{0}\left(1-\nu\right)\\[3pt] i\left(\nu^{2}-1\right)&\nu^{2}+2\theta_{0}^{2}+1&2i\theta_{0}\left(1+\nu\right)\\[3pt] 2\theta_{0}\left(\nu-1\right)&-2i\theta_{0}\left(\nu+1\right)&2\left(\nu+\theta_{0}^{2}\right)\end{pmatrix},$
for $\theta_{0}=\frac{y^{\prime}}{y}$ and $\nu=m+\theta_{0}^{2}$, where $y$ is
a solution of equation (51). This transformation factors as
$T_{1,m}=\begin{pmatrix}-m^{2}&\theta_{0}m&-\theta_{0}^{2}+1\\ im^{2}&-i\theta_{0}m&i+i\theta_{0}^{2}\\ 0&-m&2\theta_{0}\end{pmatrix}\cdot\begin{pmatrix}\frac{1}{2}&-\frac{i}{2}&0\\ -\theta_{0}&i\theta_{0}&-1\\ \frac{1}{2}(\theta_{0}^{2}-1)&-\frac{i}{2}(\theta_{0}^{2}+1)&\theta_{0}\end{pmatrix}.$
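This factorization can be verified by direct multiplication; a short symbolic check of ours (SymPy), with $\nu=m+\theta_{0}^{2}$:

```python
import sympy as sp

# Verify that the two matrix factors above multiply to T_{1,m}.
m, th = sp.symbols('m theta_0')
nu = m + th**2

T = sp.Rational(1, 2) * sp.Matrix([
    [-nu**2 + 2*th**2 - 1, sp.I*(nu**2 - 1),    2*th*(1 - nu)],
    [sp.I*(nu**2 - 1),     nu**2 + 2*th**2 + 1, 2*sp.I*th*(1 + nu)],
    [2*th*(nu - 1),        -2*sp.I*th*(nu + 1), 2*(nu + th**2)]])

M1 = sp.Matrix([[-m**2,     th*m,       -th**2 + 1],
                [sp.I*m**2, -sp.I*th*m, sp.I + sp.I*th**2],
                [0,         -m,         2*th]])
M2 = sp.Matrix([[sp.Rational(1, 2), -sp.I/2,             0],
                [-th,               sp.I*th,             -1],
                [(th**2 - 1)/2,     -sp.I*(th**2 + 1)/2, th]])

print((M1*M2 - T).applyfunc(sp.expand))  # zero matrix
```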
## Final Remarks
In this paper, we have shown how, using tensor construction on $SL(2,C_{K})$,
we can define Darboux transformations for higher order linear differential
systems such as $\mathrm{Sym}^{2}(SL(2,C_{K}))$-systems or $SO(3,C_{K})$
systems, as summarized in the diagrams (41) and (46).
Our tool to achieve this is the observation that Darboux transformations can
be viewed as gauge transformations and hence may be extended using the tools
of Tannakian constructions.
We present an approach to solve $\mathrm{Sym}^{2}(\mathrm{SL}(2,C_{K}))$
systems together with two Darboux transformations for these kind of systems.
From that, in a natural way, we fall into $\mathfrak{so}(3,C_{K})$ systems.
For that, we make explicit a direct method to transform an
$\mathfrak{so}(3,C_{K})$ system into a construction on a second order linear
differential equation by means of the isomorphism between the Lie algebras
$\mathfrak{so}(3,C_{K})$ and $\mathfrak{sl}(2,C_{K})$. This allows us to
explicitly compute fundamental matrices and “generalized” Darboux
transformations for $\mathrm{Sym}^{2}(\mathrm{SL}(2,C_{K}))$ systems as well
as for $\mathfrak{so}(3,C_{K})$ systems in terms of the solutions of the
initial second order linear differential equation. The Darboux transformations
obtained in this form come up in a natural way.
These constructions are applied to toy formalisms for Supersymmetric Quantum Mechanics in the non-relativistic case. We construct Schrödinger-like systems of equations following these approaches. Some well-known $\mathfrak{so}(3,C_{K})$
systems such as Frenet-Serret formulas and the rigid solid problem are also
included in these constructions.
Finally, we notice that the two approaches for $\mathfrak{so}(3,C_{K})$ systems developed in this article are not strictly equivalent, since they produce different results. This can be seen in Section 3.2, where two applications to $\mathfrak{so}(3,C_{K})$ systems are shown. In the first one, the Frenet-
Serret formulas, the first approach leads to a degenerate situation for curves
with torsion $\tau=-2i$, whilst the second approach allows us to deal with any
curve. However, in the rigid solid application, it is the other way around:
the first approach produces a situation that, despite being a coupled case, is
more general than the one given by the second approach, which restricts to
rigid transformations in the line.
The philosophy developed in this work allows one to construct Darboux transformations for differential systems of order higher than two, namely those which can be obtained from a tensor construction on $\mathrm{SL}(2,C_{K})$; for any such system, the methodology exposed here yields solutions in terms of solutions of second order equations and extends the Darboux transformations to these families.
### Acknowledgements
The first author thanks the hospitality of XLim and suggestions of J.J.
Morales-Ruiz during the initial stage of this work. The third author thanks
Autonomous University of Madrid for the financial support for a research stay
at XLim, where she started to work on this article. She also thanks the
hospitality of XLim and the support of J.J. Morales-Ruiz to participate in
this work.
This work was partially supported by the grant TIN2016-77206-R from the
Spanish Government, co-financed by the European Regional Development Fund. The
third author received a postdoctoral grant (PEJD-2018-POST/TIC-9490) from
Universidad Nacional de Educación a Distancia (UNED), co-financed by the
Regional Government of Madrid and the Youth Employment Initiative (YEI) of the
European Social Fund.
## References
* [1] P. B. Acosta-Humánez. Galoisian Approach to Supersymmetric Quantum Mechanics. PhD thesis, Technical Univ. of Catalonia, `(http://arxiv.org/pdf/1008.3445.pdf)`, 2009.
* [2] P. B. Acosta-Humánez. Galoisian approach to Supersymmetric Quantum Mechanics. The integrability of the Schrödinger equation by means of differential Galois theory. VDM Verlag, Berlin, 2010.
* [3] P. B. Acosta-Humánez, M. Jiménez, and J. Ospino. Galoisian and numerical approach of three dimensional linear differential systems with skew symmetric matrices defined in a non-constant differential field. 2018.
* [4] P. B. Acosta-Humánez, J. J. Morales-Ruiz, and J.-A. Weil. Galoisian approach to integrability of Schrödinger equation. Reports on Mathematical Physics, 67:305–374, 2011.
* [5] P. B. Acosta-Humánez and Ch. Pantazi. Darboux integrals for planar vector fields via Darboux transformations. SIGMA, 8(043):26 pages, 2012.
* [6] A. Aparicio Monforte, É. Compoint, and J.-A. Weil. A characterization of reduced forms of linear differential systems. Journal of Pure and Applied Algebra, 217(8):1504–1516, March 2013.
* [7] A. Aparicio-Monforte, T. Dreyfus, and J.-A. Weil. Liouville integrability: an effective Morales–Ramis–Simó theorem. Journal of Symbolic Computation, 74:537–560, 2016.
* [8] A. Aparicio-Monforte and J.-A. Weil. A reduction method for higher order variational equations of Hamiltonian systems. Symmetries and related topics in differential and difference equations, 549:1–15, 2011.
* [9] M. A. Barkatou. An algorithm for computing a companion block diagonal form for a system of linear differential equations. Appl. Algebra Engrg. Comm. Comput., 4(3):185–195, 1993.
* [10] M. A. Barkatou, T. Cluzeau, L. Di Vizio, and J.-A. Weil. Computing the Lie algebra of the differential Galois group of a linear differential system. In Proceedings of the 40th International Symposium on Symbolic and Algebraic Computation, ISSAC ’16, New York, NY, USA, 2016. ACM.
* [11] M. A. Barkatou, T. Cluzeau, L. Di Vizio, and J.-A. Weil. Reduced forms of linear differential systems and the intrinsic Galois-Lie algebra of Katz. SIGMA Symmetry Integrability Geom. Methods Appl., 16:Paper No. 054, 13, 2020.
* [12] D. Blázquez-Sanz. Differential Galois Theory and Lie-Vessiot Systems. VDM Verlag, 2008.
* [13] D. Blázquez-Sanz and J. J. Morales-Ruiz. Differential Galois Theory of algebraic Lie–Vessiot systems. Amer. Math. Soc., Providence, RI, 509:1–58, 2010.
* [14] T. Crespo and Z. Hajto. Introduction to Differential Galois Theory. Cracow University of Technology Publishers, 2007. Monograph with an appendix by Juan J. Morales-Ruiz.
* [15] G. Darboux. Sur une proposition relative aux équations linéaires. Comptes Rendus Acad. Sci., 94:1456–1459, 1882.
* [16] G. Darboux. Sur une proposition relative aux équations linéaires. C. R. Acad. Sci., Paris, 94:1456–1459, 1882.
* [17] G. Darboux. Sur la représentation sphérique des surfaces. Ann. Sci. É. N. S. $3^{e}$ série, 5:79–96, 1888.
* [18] G. Darboux. Théorie des Surfaces, II. Gauthier-Villars, Paris, 1889.
* [19] G. Darboux. Leçons sur la théorie générale des surfaces. I, II. Les Grands Classiques Gauthier-Villars. Éditions Jacques Gabay, Sceaux, Reprint of the second (1914) edition (I) and the second (1915) edition (II), 1993.
* [20] R. Dutt, A. Khare, and U. Sukhatme. Supersymmetry, shape invariance, and exactly solvable potentials. American Journal of Physics, 56(2):163–168, 1988.
* [21] Y. N. Fedorov, A. J. Maciejewski, and M. Przybylska. The Poisson equations in the nonholonomic Suslov problem: integrability, meromorphic and hypergeometric solutions. Nonlinearity, 22(9):2231–2259, 2009.
* [22] Y. N. Fedorov, A. J. Maciejewski, and M. Przybylska. The generalized Euler–Poinsot rigid body equations: explicit elliptic solutions. J. Phys. A, 46:26 pages, 2013.
* [23] L. Gendenshteïn. Derivation of the exact spectra of the Schrödinger equation by means of supersymmetry. JETP Lett., 38:356–359, 1983.
* [24] E. L. Ince. Ordinary Differential Equations. Dover publications, 1927.
* [25] S. Jiménez, J. J. Morales-Ruiz, R. Sánchez-Cauce, and M. A. Zurro. Differential Galois Theory and Darboux transformations for integrable systems. Journal of Geometry and Physics, 115:75–88, 2017.
* [26] S. Jiménez, J. J. Morales-Ruiz, R. Sánchez-Cauce, and M. A. Zurro. A computational approach to KdV rational solitons and their differential Galois groups. Monografías de la Real Academia de Ciencias. Zaragoza, 43:107–110, 2018.
* [27] S. Jiménez, J. J. Morales-Ruiz, R. Sánchez-Cauce, and M. A. Zurro. Rational KdV potentials and Differential Galois Theory. SIGMA, 15(047):40 pages, 2019.
* [28] V. B. Matveev and M. A. Salle. Darboux Transformations and Solitons. Springer Series in Nonlinear Dynamics. Springer-Verlag, Berlin, 1991.
* [29] A. Pecheritsin and B. Samsonov. Chains of Darboux transformations for the matrix Schrödinger equation. J. Phys. A, 37:239–250, 2004.
* [30] M. F. Singer and M. van der Put. Galois Theory of Linear Differential Equations, volume 328 of Grundlehren der mathematischen Wissenschaften. Springer Verlag, 2003.
* [31] Michael F. Singer. Algebraic relations among solutions of linear differential equations: Fano’s theorem. Amer. J. Math., 110(1):115–143, 1988.
* [32] M. Spivak. A Comprehensive Introduction to Differential Geometry, vol. 2. Publish or Perish, 1999.
* [33] E. Witten. Dynamical breaking of supersymmetry. Nucl.Phys. B, 185:513–554, 1981.
# Softening and residual loss modulus of jammed grains under oscillatory
shear in an absorbing state
Michio Otsuki, Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531, Japan
Hisao Hayakawa, Yukawa Institute for Theoretical Physics, Kyoto University, Kitashirakawaoiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan
###### Abstract
From a theoretical study of the mechanical response of jammed materials
comprising frictionless and overdamped particles under oscillatory shear, we
find that the material becomes soft, and the loss modulus remains non-zero
even in an absorbing state where any irreversible plastic deformation does not
exist. The trajectories of the particles in this region exhibit hysteresis
loops. We succeed in clarifying the origin of the softening of the material
and the residual loss modulus with the aid of Fourier analysis. We also
clarify the roles of the yielding point in the softening to distinguish the
plastic deformation from reversible deformation in the absorbing state.
Introduction— The mechanical response of jammed disordered materials, such as
granular materials, foams, emulsions, and colloidal suspensions, garners much
attention Hecke ; Behringer . For vanishingly small strain, the shear stress
$\sigma$ is proportional to the shear strain $\gamma$, which is characterized
by the shear modulus satisfying a critical scaling law near the jamming point
$\phi_{J}$ OHern02 ; Tighe11 ; Otsuki17 . However, the region of the linear
response is quite narrow near $\phi_{J}$ Coulais ; Otsuki14 . Hence, revealing
the nonlinear response is essential for understanding the dynamics of
disordered materials.
In crystalline materials, the nonlinear response originates from yielding
associated with irreversible plastic deformation. Yielding also takes place in
disordered materials when the strain is sufficiently large Nagamanasa ;
Knowlton ; Kawasaki16 ; Leishangthem ; Clark ; Boschan19 . The yielding
transition attracts much attention among researchers as an example of the
reversible-irreversible transition Hinrichsen ; Henkel ; Pine ; Corte . When
plastic deformation causes rearrangements of contact networks, the mechanical
response becomes nonlinear. It had been believed that plastic deformation is
always necessary for the nonlinear response. Contrary to this expectation, recent studies have revealed that plastic deformation is not always necessary for the nonlinear response Boschan ; Nakayama ; Kawasaki20 ; Bohy ; Ishima . Under
steady shear, $\sigma$ becomes hypoelastic before the yielding Boschan ;
Kawasaki20 , and the storage modulus in the steady state after applying a
sufficient number of cyclic shears decreases as the strain amplitude increases
without any irreversible plastic deformation Bohy . The decrease of the
storage modulus is called softening.
It is known that plastic deformation causes dissipation characterized by the
loss modulus Bohy ; Ishima . It is natural to expect that the loss modulus vanishes under quasi-static strain without any plastic deformation. However, this naive picture requires a careful check, because the loss modulus might be related to the softening observed without plastic deformation.
The mechanical response should be related to the motion of particles
constituting the disordered materials. This suggests that the trajectories of
particles provide information on the softening of the materials. Several
studies have reported that the trajectories of dense particles form closed
loops under oscillatory shear below the yielding point, associated with reversible contact changes in which some contacts between particles cyclically open and close Lundberg ; Schreck ; Keim13 ; Keim14 ; Regev13 ;
Regev15 ; Priezjev ; Lavrentovich ; Nagasawa ; Das ; Deen ; Khirallah . The
formation of closed loops means that the system is reduced to an absorbing
state after some time has passed. A previous study numerically showed that the
softening in the absorbing state becomes significant when there are closed
loops associated with many contact changes. However, the quantitative
relationship remains unclear Bohy .
In this study, we numerically investigate jammed materials comprising $N$
frictionless and overdamped particles under oscillatory shear to clarify the
origin of the softening. For this purpose, we focus on the roles of the
trajectories to clarify the relationship between the softening in the
absorbing state and the softening in the plastic regime. We find that the
shear modulus exhibits softening, and the loss modulus remains non-zero even
in the absorbing state below the yielding point. The trajectory of a test
particle forms a nontrivial loop in this region. With the aid of Fourier
analysis, we investigate the geometric structure of the trajectories and
reveal the role of Fourier components for the storage and loss moduli. We also
present the theoretical expressions for the storage and loss moduli, whose
quantitative validities are numerically confirmed.
Setup— Let us consider a jammed two-dimensional system consisting of
frictionless particles under oscillatory shear. The particles are driven by
the overdamped equation with Stokes’ drag under Lees–Edwards boundary
conditions Evans , where the equation of motion is given by
$\zeta\left\{\frac{d}{dt}{\bm{r}}_{i}-\dot{\gamma}(t)y_{i}\bm{e}_{x}\right\}=-\sum_{j\neq i}\frac{\partial}{\partial\bm{r}_{i}}U(r_{ij})$ (1)
with the position ${\bm{r}}_{i}=(x_{i},y_{i})$ of particle $i$. Here, $\zeta$
and $\dot{\gamma}(t)$ are the drag coefficient and strain rate, respectively.
The interaction potential $U(r_{ij})$ is assumed to be
$U(r_{ij})=\frac{k}{2}(d_{ij}-r_{ij})^{2}\Theta(d_{ij}-r_{ij}),$ (2)
where $\Theta(x)$, $k$, $d_{ij}$, and
$r_{ij}=|\bm{r}_{ij}|=|\bm{r}_{i}-\bm{r}_{j}|$ are the Heaviside step function
satisfying $\Theta(x)=1$ for $x\geq 0$ and $\Theta(x)=0$ otherwise, the spring
constant, the average diameter of particles $i$ and $j$, and the distance
between particles $i$ and $j$, respectively. The system is bidisperse and
consists of an equal number of particles with diameters $d_{0}$ and
$d_{0}/1.4$. We have verified that particles with inertia and damping at
contact, which corresponds to the model in Ref. Bohy , exhibit almost
identical behavior in our system Supple .
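To make the model concrete, here is a minimal sketch of ours (not the authors' code) of the right-hand side of Eq. (1) with the harmonic contacts of Eq. (2); the $O(N^{2})$ pair loop and the plain periodic wrap, which omits the Lees-Edwards shear offset, are simplifications:

```python
import numpy as np

def overdamped_velocity(pos, diam, gamma_dot, L, k=1.0, zeta=1.0):
    """Right-hand side of Eq. (1) solved for dr_i/dt: affine advection
    gamma_dot * y_i * e_x plus the harmonic contact forces of Eq. (2)
    divided by zeta. pos: (N, 2) positions, diam: (N,) diameters."""
    n = pos.shape[0]
    force = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            rij -= L * np.round(rij / L)        # nearest periodic image
            r = np.linalg.norm(rij)
            dij = 0.5 * (diam[i] + diam[j])     # average diameter of i and j
            if r < dij:                         # overlap -> harmonic repulsion
                f = k * (dij - r) * rij / r     # -dU/dr_i for Eq. (2)
                force[i] += f
                force[j] -= f
    return gamma_dot * pos[:, 1, None] * np.array([1.0, 0.0]) + force / zeta
```

A production code would use neighbour lists and proper Lees-Edwards images; this sketch only illustrates the structure of the equation of motion.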
We prepare the initial state with a given packing fraction $\phi$ by slowly
compressing the system from a state below the jamming point $\phi_{\rm
J}\simeq 0.841$ Otsuki17 . The oscillatory shear strain is applied for $n_{c}$
cycles as
$\gamma(\theta)=\gamma_{0}\sin\theta$ (3)
with the phase $\theta=\omega t$, where $\gamma_{0}$ and $\omega$ are the
strain amplitude and angular frequency, respectively. Note that the shear rate
satisfies $\dot{\gamma}(t)=(d\theta/dt)(d/d\theta)\gamma(\theta)$. In the last
cycle, we measure the storage and loss moduli $G^{\prime}$ and
$G^{\prime\prime}$, respectively, given by Doi
$\displaystyle G^{\prime}$ $\displaystyle=$
$\displaystyle\frac{1}{\pi}\int_{0}^{2\pi}\ d\theta\
\frac{\left\langle\sigma(\theta)\right\rangle\sin\theta}{\gamma_{0}},$ (4)
$\displaystyle G^{\prime\prime}$ $\displaystyle=$
$\displaystyle\frac{1}{\pi}\int_{0}^{2\pi}\ d\theta\
\frac{\left\langle\sigma(\theta)\right\rangle\cos\theta}{\gamma_{0}},$ (5)
with shear stress
$\sigma=\frac{1}{L^{2}}\sum_{i}\sum_{j>i}\frac{x_{ij}y_{ij}}{r_{ij}}U^{\prime}(r_{ij}),$
(6)
where $x_{ij}=x_{i}-x_{j}$, $y_{ij}=y_{i}-y_{j}$, $\langle\cdot\rangle$
represents the ensemble average, and $L$ is the linear system size. See Ref.
Supple for the stress-strain curves in our system. We have verified that
$G^{\prime}$ and $G^{\prime\prime}$ are independent of $N$ and $n_{c}$ for
$N\geq 1000$ and $n_{c}\geq 20$ Supple . We use $N=1000$ and $n_{c}=20$ in our
numerical analysis. We adopt the Euler method using the time step $\Delta
t=0.05\tau_{0}$ with $\tau_{0}=\zeta/k$.
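Given the sampled stress over the last cycle, Eqs. (4)-(5) are plain Fourier quadratures. A sketch of ours with synthetic linear viscoelastic data (all names hypothetical), which recovers the prescribed moduli:

```python
import numpy as np

def moduli(theta, sigma, gamma0):
    """Storage and loss moduli from Eqs. (4)-(5), with the stress sigma
    sampled on a uniform theta grid over one cycle (trapezoidal rule)."""
    d = theta[1] - theta[0]
    def trap(f):
        return d * (f.sum() - 0.5 * (f[0] + f[-1]))
    Gp = trap(sigma * np.sin(theta)) / (np.pi * gamma0)
    Gpp = trap(sigma * np.cos(theta)) / (np.pi * gamma0)
    return Gp, Gpp

# sanity check with a synthetic linear viscoelastic stress
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
gamma0, G1, G2 = 0.01, 3.0, 0.5
sigma = gamma0 * (G1 * np.sin(theta) + G2 * np.cos(theta))
Gp, Gpp = moduli(theta, sigma, gamma0)  # recovers G1 and G2
```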
Closed Trajectories— As the number of cycles increases, the system reaches a
statistically steady state through a transient regime as shown in Ref. Supple
. Figure 1 displays typical non-affine trajectories of a particle
$\tilde{\bm{r}}_{i}(\theta)=\bm{r}_{i}(\theta)-\gamma(\theta)y_{i}(\theta)\bm{e}_{x}$
(7)
in the last two cycles with $\phi=0.87$ and $\omega=10^{-4}\tau_{0}^{-1}$ in
the steady state. In Fig.1(a) ($\gamma_{0}=0.02$), the trajectories are
closed, and the particle returns to its original position after every cycle.
This indicates that irreversible plastic deformation does not occur, at least
in the last two cycles. The closed trajectories form nontrivial loops, which
differ from ellipses or lines observed for small $\gamma_{0}$ as shown in Ref.
Supple . In Fig. 1(b) ($\gamma_{0}=0.1$), the particle moves away from its
original position after a cycle, which is a characteristic behavior of plastic
deformation. Here, we define the absorbing state where the displacement of
each particle after several cycles is smaller than $d_{c}=10^{-4}d_{0}$ in the
statistically steady state. We also define the plastic state where the
displacement after several cycles exceeds $d_{c}$. It should be noted that
some rare samples exhibit trajectories where particles return to their
original positions after more than one cycle Regev13 ; Regev15 ; Lavrentovich
; Nagasawa ; Khirallah . However, our theoretical results shown below are
unchanged even if such samples exist Supple .
Figure 1: Non-affine particle trajectories in the last two cycles for
$\gamma_{0}=0.02$ (a) and $0.1$ (b) with $\omega=10^{-4}\tau_{0}^{-1}$ and
$\phi=0.87$, which corresponds to $\phi-\phi_{J}=0.029$. The circles represent
the trajectory in the last cycle. The line represents the trajectory in the
second to the last cycle.
Shear Modulus— We plot the storage modulus $G^{\prime}$ against the strain
amplitude $\gamma_{0}$ for $\omega=10^{-4}\tau_{0}^{-1}$ with
$\phi=0.870,0.860,0.850,$ and $0.845$ in Fig. 2. The yielding points to
distinguish the absorbing state from the plastic state for various $\phi$ are
shown by open pentagons Supple . The storage modulus $G^{\prime}$ decreases as
$\gamma_{0}$ increases, but the yielding point is not identical to the point
where $G^{\prime}$ starts to decrease. We call the decrease for $\gamma_{0}<\gamma_{c}$, where $\gamma_{c}$ is the yielding strain amplitude, the softening in the absorbing state (SAS). We also call the decrease for $\gamma_{0}>\gamma_{c}$
the softening in the plastic state (SPS). It is remarkable that SAS is
continuously connected to SPS, while a shoulder in $G^{\prime}$ appears in SPS
for $0.04\leq\gamma_{0}\leq 0.1$ with $\phi=0.845$. In the inset of Fig. 2, we
demonstrate that $G^{\prime}$ and $\gamma_{0}$ can be scaled by
$\sqrt{\phi-\phi_{J}}$ and $\phi-\phi_{J}$, respectively, as indicated in
Refs. OHern02 ; Bohy . We have confirmed that $G^{\prime}$ is independent of
$\omega$ for $\omega\leq 10^{-3}\tau_{0}^{-1}$.
Figure 2: Storage modulus $G^{\prime}$ obtained in our simulation (filled
symbols) against $\gamma_{0}$ for $\omega=10^{-4}\tau_{0}^{-1}$ with
$\phi=0.870,0.860,0.850,$ and $0.845$, which corresponds to
$\phi-\phi_{J}=0.029,0.019,0.009$, and $0.004$, respectively. The legends
represent the packing fraction $\phi$. The data in the absorbing (plastic)
state obtained in our simulation are shown in larger (smaller) filled symbols.
The open pentagons represent the yielding strain amplitude $\gamma_{c}$, while
other open symbols represent the theoretical expression using $G^{\prime}_{\rm
T}$ in Eq. (14). (Inset) Scaled storage modulus
$\tilde{G}^{\prime}=G^{\prime}/\sqrt{\phi-\phi_{J}}$ obtained in our
simulation (filled symbols) and its theoretical expression using
$G^{\prime}_{\rm T}$ (open symbols) in Eq. (14) against scaled strain
amplitude $\tilde{\gamma}_{0}=\gamma_{0}/(\phi-\phi_{J})$ in the absorbing
state.
Figure 3(a) displays the loss modulus $G^{\prime\prime}$ in the absorbing
state against $\gamma_{0}$ for $\omega=10^{-4}\tau_{0}^{-1}$ with
$\phi=0.870,0.860,0.850,$ and $0.845$, in which $G^{\prime\prime}$ does not
strongly depend on $\phi$ and $\gamma_{0}$. See Ref. Supple for
$G^{\prime\prime}$ in the plastic state. In Fig. 3(b), we plot the loss
modulus $G^{\prime\prime}$ in the absorbing state against $\omega$ for
$\phi=0.87$ with $\gamma_{0}=0.01$. Remarkably, $G^{\prime\prime}$ in Fig. 3(b)
seems to converge to a non-zero value in the limit $\omega\to 0$, which
contrasts with the behavior of the Kelvin–Voigt model (i.e.,
$G^{\prime\prime}\propto\omega$ Meyers ). This behavior indicates that
dissipation remains even in the quasi-static limit in the absorbing state.
Note that $G^{\prime\prime}\propto\omega$ is recovered when we adopt a
sufficiently small $\gamma_{0}$ Supple .
Figure 3: (a) Loss modulus $G^{\prime\prime}$ in the absorbing state obtained
in our simulation (filled symbols) and its theoretical expression
$G^{\prime\prime}_{\rm T}$ (open symbols) in Eq. (15) against $\gamma_{0}$ for
$\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870,0.860,0.850,$ and $0.845$,
which corresponds to $\phi-\phi_{J}=0.029,0.019,0.009$, and $0.004$,
respectively. (b) Loss modulus $G^{\prime\prime}$ against $\omega\tau_{0}$ for
$\phi=0.87$ with $\gamma_{0}=0.01$.
Fourier Analysis— In the absorbing state, the non-affine trajectory
$\tilde{\bm{r}}_{i}(\theta)$ of particle $i$ can be expressed in a Fourier
series as
$\tilde{\bm{r}}_{i}(\theta)=\bm{R}_{i}+\sum_{n=1}^{\infty}\left(\bm{a}_{i}^{(n)}\sin
n\theta+\bm{b}_{i}^{(n)}\cos n\theta\right)$ (8)
with the center of the trajectory
$\displaystyle\bm{R}_{i}$ $\displaystyle=$
$\displaystyle(X_{i},Y_{i})=\frac{1}{2\pi}\int_{0}^{2\pi}\ d\theta\
\tilde{\bm{r}}_{i}(\theta),$ (9)
and the Fourier coefficients
$\displaystyle\bm{a}_{i}^{(n)}$ $\displaystyle=$
$\displaystyle\frac{1}{\pi}\int_{0}^{2\pi}\ d\theta\ \sin n\theta\
\tilde{\bm{r}}_{i}(\theta),$ (10) $\displaystyle\bm{b}_{i}^{(n)}$
$\displaystyle=$ $\displaystyle\frac{1}{\pi}\int_{0}^{2\pi}\ d\theta\ \cos
n\theta\ \tilde{\bm{r}}_{i}(\theta).$ (11)
If $\bm{a}_{i}^{(n)}=\bm{b}_{i}^{(n)}=\bm{0}$ for all $n$, the particle motion
is affine. When only $\bm{a}_{i}^{(1)}$ is non-zero, the non-affine trajectory
is a straight line, as shown in Fig. 4(a). In contrast, the trajectory
exhibits an ellipse when $\bm{b}_{i}^{(1)}$ is also non-zero, as shown in Fig.
4(b). A nontrivial trajectory, as shown in Fig. 1(a), contains modes with
$n\geq 2$. See Ref. Supple for the relationship between the trajectories and
the Fourier coefficients.
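The quadratures (9)-(11) are easy to evaluate on a sampled trajectory. A sketch of ours (names hypothetical) that recovers the coefficients of an elliptic loop of the type shown in Fig. 4(b):

```python
import numpy as np

def fourier_coeffs(theta, traj, nmax=4):
    """Centre R_i (Eq. (9)) and Fourier coefficients a_i^(n), b_i^(n)
    (Eqs. (10)-(11)) of a closed non-affine trajectory, sampled on a
    uniform theta grid over [0, 2*pi) (rectangle rule)."""
    d = theta[1] - theta[0]
    R = traj.sum(axis=0) * d / (2.0 * np.pi)
    a = [(np.sin(n * theta)[:, None] * traj).sum(axis=0) * d / np.pi
         for n in range(1, nmax + 1)]
    b = [(np.cos(n * theta)[:, None] * traj).sum(axis=0) * d / np.pi
         for n in range(1, nmax + 1)]
    return R, a, b

# an elliptic trajectory: only the n = 1 sine and cosine modes are non-zero
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
traj = (np.array([0.3, 0.1]) * np.sin(theta)[:, None]
        + np.array([0.05, -0.02]) * np.cos(theta)[:, None])
R, a, b = fourier_coeffs(theta, traj)  # a[0] ~ (0.3, 0.1), b[0] ~ (0.05, -0.02)
```

A nontrivial loop as in Fig. 1(a) would, in addition, yield non-zero modes with $n\geq 2$.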
Figure 4: Schematics of the non-affine trajectory when only
$\bm{a}_{i}^{(1)}$ is non-zero (a) and only $\bm{a}_{i}^{(1)}$ and
$\bm{b}_{i}^{(1)}$ are non-zero (b).
In Fig. 5(a), we plot the magnitudes of the Fourier components
$a^{(n)}=\sum_{i}\left\langle\left|\bm{a}_{i}^{(n)}\right|\right\rangle/N,\ \
b^{(n)}=\sum_{i}\left\langle\left|\bm{b}_{i}^{(n)}\right|\right\rangle/N$ (12)
obtained from our numerical data using Eqs. (10) and (11) against $n$ for
$\phi=0.87$ and $\gamma_{0}=0.01$ with $\omega\tau_{0}=10^{-4}$ and $10^{-5}$.
The Fourier components do not strongly depend on $\omega$, which indicates
that the nontrivial loops do not disappear in the limit $\omega\to 0$. For
different $\phi>\phi_{J}$ and $\gamma_{0}\geq 10^{-3}$, we have confirmed that
$a^{(1)}$ is always the largest Comment , the other modes are non-zero to make
loops with non-zero areas, and the Fourier components are independent of
$\omega$. In Fig. 5(b), we plot $a^{(n)}/\gamma_{0}$ and $b^{(n)}/\gamma_{0}$
against $\gamma_{0}$ for $\phi=0.87$ and $\omega\tau_{0}=10^{-4}$ with $n=1$,
where $a^{(n)}/\gamma_{0}$ and $b^{(n)}/\gamma_{0}$ are almost independent of
$\gamma_{0}$. This behavior is consistent with that for the number of contact
changes Supple .
Figure 5: (a) Magnitudes of Fourier coefficients $a^{(n)}$ and $b^{(n)}$
against $n$ for $\phi=0.87$ and $\gamma_{0}=0.02$ with
$\omega\tau_{0}=10^{-4}$ (filled symbols) and $10^{-5}$ (open symbols). (b)
Magnitudes of the Fourier coefficients $a^{(n)}$ and $b^{(n)}$ normalized by
$\gamma_{0}$ against $\gamma_{0}$ for $\phi=0.87$ and $\omega\tau_{0}=10^{-4}$
with $n=1$. $\phi=0.87$ corresponds to $\phi-\phi_{J}=0.029$.
Theoretical Analysis— Now, let us reproduce the numerical results by a simple
analytic calculation. Substituting Eq. (8) into Eq. (7),
${\bm{r}}_{ij}(\theta)$ is given by
$\displaystyle{\bm{r}}_{ij}(\theta)$ $\displaystyle=$
$\displaystyle\bm{R}_{ij}+\gamma_{0}Y_{ij}\sin\theta\bm{e}_{x}$ (13)
$\displaystyle+\sum_{n=1}^{\infty}\left(\bm{a}_{ij}^{(n)}\sin n\theta+\bm{b}_{ij}^{(n)}\cos n\theta\right).$
Here, we define $\bm{a}^{(n)}_{ij}=\bm{a}^{(n)}_{i}-\bm{a}^{(n)}_{j}$,
$\bm{b}^{(n)}_{ij}=\bm{b}^{(n)}_{i}-\bm{b}^{(n)}_{j}$, and
$\bm{R}_{ij}=(X_{ij},Y_{ij})=\bm{R}_{i}-\bm{R}_{j}$. Substituting Eq. (13)
into Eq. (4) with Eq. (6) and neglecting the terms of $O(\gamma_{0})$, we
obtain the expression $G^{\prime}_{\rm T}$ of the storage modulus in SAS as
Supple
$\displaystyle G^{\prime}_{\rm T}$ $\displaystyle=$
$\displaystyle-\frac{1}{L^{2}}\sum_{i,j}\left\langle\frac{X_{ij}^{2}Y_{ij}^{2}}{R_{ij}}\Psi^{\prime}(R_{ij})\right\rangle$
(14) $\displaystyle-\frac{1}{L^{2}}\sum_{i,j}\left\langle
Y_{ij}^{2}\Psi(R_{ij})\right\rangle$
$\displaystyle-\frac{1}{L^{2}}\sum_{i,j}\left\langle\left(\frac{a_{ij,x}^{(1)}}{\gamma_{0}}Y_{ij}+X_{ij}\frac{a_{ij,y}^{(1)}}{\gamma_{0}}\right)\Psi(R_{ij})\right\rangle$
$\displaystyle-\frac{1}{L^{2}}\sum_{i,j}\left\langle
X_{ij}Y_{ij}\Psi^{\prime}(R_{ij})\frac{\bm{R}_{ij}\cdot\bm{a}_{ij}^{(1)}}{\gamma_{0}R_{ij}}\right\rangle,$
where $\Psi(r)=-U^{\prime}(r)/r$. Here, we have assumed
$|a_{i}^{(n)}|\sim|b_{i}^{(n)}|\sim\gamma_{0}$ and $\gamma_{0}\ll 1$. In the
expression of Eq. (14), only the first harmonic contribution from
$\bm{a}_{i}^{(1)}$ can survive because of Eq. (4). Note that $\bm{R}_{i}$ and
$\bm{a}^{(1)}_{i}$ cannot be determined within the theory but are determined
by our simulation data. In Fig. 2, we plot the theoretical prediction
$G^{\prime}_{\rm T}$ as open symbols. The theoretical prediction
quantitatively reproduces the numerical results except for large $\gamma_{0}$, which is beyond the scope of our theory. The first and second terms on the
right-hand side (RHS) of Eq. (14) represent the contributions from the affine
transformation depending only on $\bm{R}_{i}$, while the third and fourth
terms including $\bm{a}^{(1)}_{ij}/\gamma_{0}$ indicate the contributions from
the non-affine trajectories. As shown in Ref. Supple , the contributions from
the non-affine trajectories are almost independent of $\gamma_{0}$, which is
consistent with the behavior of $a^{(1)}/\gamma_{0}$ shown in Fig. 5(b).
Numerical evaluation in Ref. Supple reveals that SAS is dominated by the
first term on RHS of Eq. (14) through the change of $\bm{R}_{i}$. The center
of the non-affine trajectories $\bm{R}_{i}$ is changed by the rearrangement of
the configuration during the transient to the absorbing state, which is
consistent with the memory formation of dense particles during oscillatory
shear Fiocco ; Paulsen ; Adhikari .
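The pair sum in Eq. (14) is straightforward to evaluate once the cycle-averaged centers $\bm{R}_{ij}$ and the first-harmonic coefficients $\bm{a}^{(1)}_{ij}$ have been measured from the simulation. The following sketch is illustrative only: the pair list, $\Psi$, and $\Psi^{\prime}$ are placeholder inputs, not the actual contact potential of Eq. (2).

```python
import numpy as np

def storage_modulus_T(Rij, aij1, gamma0, Psi, dPsi, L):
    # Direct evaluation of the four pair-sum terms of Eq. (14).
    # Rij: (P, 2) array of pair separations (X_ij, Y_ij);
    # aij1: (P, 2) array of first-harmonic coefficients a^(1)_ij;
    # Psi, dPsi: callables for Psi(R) = -U'(R)/R and its derivative.
    X, Y = Rij[:, 0], Rij[:, 1]
    R = np.hypot(X, Y)
    ax, ay = aij1[:, 0] / gamma0, aij1[:, 1] / gamma0
    t1 = -np.sum(X**2 * Y**2 / R * dPsi(R)) / L**2      # affine, first term
    t2 = -np.sum(Y**2 * Psi(R)) / L**2                  # affine, second term
    t3 = -np.sum((ax * Y + X * ay) * Psi(R)) / L**2     # non-affine, third term
    t4 = -np.sum(X * Y * dPsi(R) * (X * ax + Y * ay) / R) / L**2  # fourth term
    return t1 + t2 + t3 + t4
```

With `aij1 = 0` only the affine terms survive, which is how the decomposition of Sec. XII isolates the dominant contribution.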
The theoretical expression $G^{\prime\prime}_{\rm T}$ of the loss modulus in
SAS is given by Supple
$\displaystyle G_{\rm T}^{\prime\prime}$ $\displaystyle=$
$\displaystyle-\frac{1}{L^{2}}\sum_{i,j}\left\langle\left(\frac{b_{ij,x}^{(1)}}{\gamma_{0}}Y_{ij}+X_{ij}\frac{b_{ij,y}^{(1)}}{\gamma_{0}}\right)\Psi(R_{ij})\right\rangle$
(15) $\displaystyle-\frac{1}{L^{2}}\sum_{i,j}\left\langle
X_{ij}Y_{ij}\Psi^{\prime}(R_{ij})R_{ij}\frac{\bm{R}_{ij}\cdot\bm{b}_{ij}^{(1)}}{\gamma_{0}R_{ij}^{2}}\right\rangle,$
where we have used the same assumptions as in the derivation of Eq. (14).
Similar to the case of $G^{\prime}_{\rm T}$, only the contribution of the
first harmonics $\bm{b}_{i}^{(1)}$ in the expression of Eq. (8) survives
because of Eq. (5). Note that $\bm{b}_{i}^{(1)}$ cannot be determined within
the theory but is evaluated from the simulation data. The loss modulus depends
only on the non-affine contribution including ${\bm{b}}_{i}^{(1)}$. The amplitude $b^{(1)}$
remains non-zero in the limit $\omega\to 0$, which leads to the residual loss
modulus as in Fig. 3(b). We plot the theoretical expression
$G^{\prime\prime}_{\rm T}$ using the open symbols in Fig. 3(a).
$G^{\prime\prime}_{\rm T}$ also reproduces the numerical results except for
large $\gamma_{0}$. Thus, our theory reveals the quantitative relationship
between the loss modulus and closed trajectories, which was suggested in Ref.
Keim14 .
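The coefficients $\bm{a}_{i}^{(n)}$ and $\bm{b}_{i}^{(n)}$ entering Eqs. (14) and (15) are obtained by projecting the sampled closed trajectory onto $\sin n\theta$ and $\cos n\theta$ (the $M=1$ case of Eqs. (S10) and (S11) in the Supplemental Material). A minimal sketch, assuming the trajectory is sampled on a uniform phase grid over one period:

```python
import numpy as np

def fourier_coeffs(theta, traj, nmax):
    # theta: uniform phase grid over [0, 2*pi), endpoint excluded;
    # traj: (len(theta), 2) non-affine positions of one particle.
    # Returns the cycle-averaged center R_i and the coefficient
    # vectors a^(n), b^(n) of the Fourier series Eq. (8).
    dtheta = theta[1] - theta[0]
    R = traj.mean(axis=0)  # cycle average
    a = np.array([(traj * np.sin(n * theta)[:, None]).sum(axis=0) * dtheta / np.pi
                  for n in range(1, nmax + 1)])
    b = np.array([(traj * np.cos(n * theta)[:, None]).sum(axis=0) * dtheta / np.pi
                  for n in range(1, nmax + 1)])
    return R, a, b
```

On a uniform grid these discrete projections are exact for the resolved modes, which makes the comparison between theory and simulation insensitive to the quadrature.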
Conclusion— We numerically studied the mechanical response of jammed materials
consisting of frictionless and overdamped particles under oscillatory shear.
The shear modulus exhibits SAS and the residual loss modulus exists in the
quasi-static limit in the absorbing state. Through Fourier analysis of the
closed trajectories, the theoretical expressions for the storage and loss
moduli quantitatively agree with the numerical results.
Reference Tighe11 reported that the loss modulus vanishes in the absorbing
jammed states in the limit $\omega\to 0$, which is inconsistent with our
result. It is noteworthy that Ref. Tighe11 did not consider any transient
state associated with contact changes before the system reaches the absorbing
state. Since the loss modulus is expected to be given by the generalized
Green-Kubo formula Chong ; Hayakawa , the origin of the residual loss modulus
might be plastic events in the transient dynamics.
Recent studies of large amplitude oscillatory shear (LAOS) reveal that there
are contributions from higher harmonics in the mechanical response of
nonlinear viscoelastic materials Wagner ; Hyun . We calculate nonlinear
viscoelastic moduli $G^{\prime}_{n}$ and $G^{\prime\prime}_{n}$ with $n\geq 2$
and confirm that such higher order moduli are negligible in our system as
shown in Ref. Supple .
In this Letter, we focus only on the nonlinear response of disordered
frictionless particles. However, frictional grains also exhibit SAS depending
on the friction coefficient Otsuki21 . Therefore, an extension of our theory
to these systems is left for future work.
###### Acknowledgements.
The authors thank K. Saitoh, D. Ishima, T. Kawasaki, K. Miyazaki, and K.
Takeuchi for fruitful discussions. This work was supported by JSPS KAKENHI
Grants No. JP16H04025 and No. JP19K03670 and ISHIZUE 2020 of the Kyoto
University Research Development Program.
## References
* (1) M. van Hecke, J. Phys. Condens. Matter 22, 033101 (2009).
* (2) R. P. Behringer and B. Chakraborty, Rep. Prog. Phys. 82, 012601 (2019).
* (3) C. S. O’Hern, S. A. Langer, A. J. Liu, and S. R. Nagel, Phys. Rev. Lett. 88, 075507 (2002).
* (4) B. P. Tighe, Phys. Rev. Lett. 107, 158303 (2011).
* (5) M. Otsuki and H. Hayakawa, Phys. Rev. E 95, 062902 (2017).
* (6) C. Coulais, A. Seguin, and O. Dauchot, Phys. Rev. Lett. 113, 198001 (2014).
* (7) M. Otsuki and H. Hayakawa, Phys. Rev. E 90, 042202 (2014).
* (8) K. Hima Nagamanasa, S. Gokhale, A. K. Sood, and R. Ganapathy, Phys. Rev. E 89, 062308 (2014).
* (9) E. D. Knowlton, D. J. Pine, and L. Cipelletti, Soft Matter 10, 6931 (2014).
* (10) T. Kawasaki and L. Berthier, Phys. Rev. E 94, 022615 (2016).
* (11) P. Leishangthem, A. D. S. Parmar, and S. Sastry, Nat. Commun. 8, 14653 (2017).
* (12) A. H. Clark, J. D. Thompson, M. D. Shattuck, N. T. Ouellette, and C. S. O’Hern, Phys. Rev. E 97, 062901 (2018).
* (13) J. Boschan, S. Luding, and B. P. Tighe, Granul. Matter 21, 58 (2019).
* (14) H. Hinrichsen, Adv. Phys. 49, 815 (2000).
* (15) M. Henkel, H. Hinrichsen, and S. Lubeck, Non-equilibrium Phase Transition I: Absorbing Phase Transitions (Springer, Heidelberg, 2008).
* (16) D. J. Pine, J. P. Collub, J. F. Brady, and A. M. Leshansky, Nature (London) 438, 997 (2005).
* (17) L. Corté, P. M. Chaikin, J. P. Gollub, and D. J. Pine, Nature Phys. 4, 420 (2008).
* (18) J. Boschan, D. Vågberg, E. Somfai, and B. P. Tighe, Soft Matter 12, 5450 (2016).
* (19) D. Nakayama, H. Yoshino, and F. Zamponi, J. Stat. Mech. 2016, 104001 (2016).
* (20) T. Kawasaki and K. Miyazaki, arXiv:2003.10716.
* (21) S. Dagois-Bohy, E. Somfai, B. P. Tighe, and M. van Hecke, Soft Matter 13, 9036 (2017).
* (22) D. Ishima and H. Hayakawa, Phys. Rev. E 101, 042902 (2020).
* (23) M. Lundberg, K. Krishan, N. Xu, C. S. O’Hern, and M. Dennin, Phys. Rev. E 77, 041505 (2008).
* (24) C. F. Schreck, R. S. Hoy, M. D. Shattuck, and C. S. O’Hern, Phys. Rev. E 88, 052205 (2013).
* (25) N. C. Keim and P. E. Arratia, Soft Matter 9, 6222 (2013).
* (26) N. C. Keim and P. E. Arratia, Phys. Rev. Lett. 112, 028302 (2014).
* (27) I. Regev, T. Lookman, and C. Reichhardt, Phys. Rev. E 88, 062401 (2013).
* (28) I. Regev, J. Weber, C. Reichhardt, K. A. Dahmen, and T. Lookman, Nat. Commun. 6, 8805 (2015).
* (29) N. V. Priezjev, Phys. Rev. E 93, 013001 (2016).
* (30) M. O. Lavrentovich, A. J. Liu, and S. R. Nagel, Phys. Rev. E 96, 020101(R) (2017).
* (31) K. Nagasawa, K. Miyazaki and T. Kawasaki, Soft Matter 15, 7557 (2019).
* (32) P. Das, H. A. Vinutha, and S. Sastry, Proc. Natl. Acad. Sci. USA 117, 10203 (2020).
* (33) K. Khirallah, B. Tyukodi, D. Vandembroucq, and C. E. Maloney, Phys. Rev. Lett. 126, 218005 (2021).
* (34) M. S. van Deen, J. Simon, Z. Zeravcic, S. Dagois-Bohy, B. P. Tighe, and M. van Hecke, Phys. Rev. E 90, 020202(R) (2014).
* (35) D. J. Evans and G. P. Morriss, Statistical Mechanics of Nonequilibrium Liquids 2nd ed. (Cambridge University Press, Cambridge, 2008).
* (36) See Supplemental Material.
* (37) M. Doi and S. F. Edwards, The Theory of Polymer Dynamics (Oxford University Press, Oxford, 1986).
* (38) M. Meyers and K. Chawla, Mechanical Behavior of Materials (Cambridge University Press, Cambridge, 2008).
* (39) We suppose $a^{(1)}$ is the largest because the mode proportional to $a^{(1)}$ is synchronized with the external oscillation $\sin\theta$.
* (40) D. Fiocco, G. Foffi, and S. Sastry, Phys. Rev. Lett. 112, 025702 (2014).
* (41) J. D. Paulsen, N. C. Keim, and S. R. Nagel, Phys. Rev. Lett. 113, 068301 (2014).
* (42) M. Adhikari and S. Sastry, Eur. Phys. J. E 41, 105 (2018).
* (43) S-H. Chong, M. Otsuki, and H. Hayakawa, Phys. Rev. E 81, 041130 (2010).
* (44) H. Hayakawa and M. Otsuki, Phys. Rev. E 88, 032117 (2013).
* (45) M. H. Wagner , V. H. Rolón-Garrido, K. Hyun, and M. Wilhelm, J. Rheol. 55, 495 (2011).
* (46) K. Hyun, M. Wilhelm, C. O. Klein, K. S. Cho, J. G. Nam, K. H. Ahn, S. J. Lee, R. H. Ewoldt, and G. H. McKinley, Prog. Polym. Sci. 36, 1697 (2011).
* (47) M. Otsuki and H. Hayakawa, Eur. Phys. J. E 44, 70 (2021).
Supplemental Material:
This Supplemental Material provides some details that are not written in the
main text. The results for underdamped frictionless granular particles without
background friction are presented in Sec. I. In Sec. II, we show the
dependence of $G^{\prime}$ and $G^{\prime\prime}$ on the number of particles
$N$ and the number of cycles $n_{c}$. In Sec. III, we present the time
evolution of the displacements of particles before reaching the absorbing
state and the evaluation of the yielding strain amplitude $\gamma_{c}$. In
Sec. IV, we illustrate the time evolution of the stress-strain curves in the
absorbing and plastic states. In Sec. V, we show how particle trajectories
depend on $\gamma_{0}$ and $\omega$. In Sec. VI, we show that trajectories in
the absorbing state with longer periods do not affect our theoretical results
based on the absorbing trajectories whose periods are identical to the period
of the external oscillation. In Sec. VII, we present the loss modulus in the
absorbing and plastic states. In Sec. VIII, we demonstrate how the naive
result of the Kelvin–Voigt model can be recovered for sufficiently small
strain amplitude. In Sec. IX, we illustrate the relation between the Fourier
coefficients and the shape of particle trajectories. In Sec. X, we show the
number of contact changes during the last cycle in the absorbing state. In
Sec. XI, we derive Eqs. (14) and (15) in the main text. In Sec. XII, we
decompose the storage and loss moduli into several components, and clarify
what components are dominant contributions for the storage and loss moduli. In
Sec. XIII, we show the nonlinear viscoelastic moduli in our system to clarify
the roles of higher harmonics.
## I Underdamped granular particles
In this section, we show that our results are qualitatively unchanged in
underdamped frictionless granular particles without background friction. Here,
we use the SLLOD equation given by [34]
$\displaystyle\frac{d}{dt}{\bm{r}}_{i}$ $\displaystyle=$
$\displaystyle\dot{\gamma}(t)y_{i}\bm{e}_{x}+\bm{p}_{i}/m,$ (S1)
$\displaystyle\frac{d}{dt}{\bm{p}}_{i}$ $\displaystyle=$
$\displaystyle-\dot{\gamma}(t)p_{i,y}\bm{e}_{x}+\bm{F}_{i}$ (S2)
under the Lees–Edwards boundary condition, where
$\bm{p}_{i}=m\left(\dot{\bm{r}}_{i}-\dot{\gamma}(t)y_{i}\bm{e}_{x}\right)$ and
$\bm{F}_{i}=-\sum_{j\neq
i}\frac{\partial}{\partial\bm{r}_{i}}U(r_{ij})-\sum_{j\neq i}\eta
v^{\rm(n)}_{ij}\Theta(d_{ij}-r_{ij})\frac{\bm{r}_{ij}}{r_{ij}}$ (S3)
with mass $m$, the interaction potential $U(r_{ij})$ given by Eq. (2), the
viscous constant $\eta$, and the normal velocity
$\displaystyle
v^{\rm(n)}_{ij}=\left\\{\frac{d}{dt}\bm{r}_{i}-\frac{d}{dt}\bm{r}_{j}\right\\}\cdot\frac{\bm{r}_{ij}}{r_{ij}}.$
(S4)
We adopt $\eta=\sqrt{mk}$. This model corresponds to frictionless granular
particles with the restitution coefficient $e=0.043$. We adopt the leapfrog
algorithm with the time step $\Delta t=0.05t_{0}$, where $t_{0}=\sqrt{m/k}$ is
the characteristic time.
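The structure of the SLLOD update of Eqs. (S1) and (S2) can be sketched as below. This is a simplified semi-implicit step rather than the paper's leapfrog scheme, and the harmonic contact force is an assumed stand-in for Eq. (2), which is not reproduced in this excerpt.

```python
import numpy as np

def pair_force(r, v, d=1.0, k=1.0, eta=1.0):
    # Force of Eq. (S3) for a single pair, assuming a harmonic repulsive
    # potential U(r) = k(d - r)^2 / 2 for r < d (placeholder for Eq. (2)),
    # plus normal viscous damping with the relative normal velocity of Eq. (S4).
    rij = r[0] - r[1]
    dist = np.linalg.norm(rij)
    f = np.zeros_like(r)
    if dist < d:
        n = rij / dist
        vn = np.dot(v[0] - v[1], n)            # normal relative velocity
        fi = k * (d - dist) * n - eta * vn * n  # elastic + dissipative parts
        f[0], f[1] = fi, -fi
    return f

def sllod_step(r, p, m, dt, gammadot):
    # One semi-implicit update of Eqs. (S1)-(S2); the paper's leapfrog
    # staggers p by half a step, which is omitted here for brevity.
    ex = np.array([1.0, 0.0])
    f = pair_force(r, p / m)
    p = p + dt * (f - gammadot * p[:, 1][:, None] * ex)   # Eq. (S2)
    r = r + dt * (p / m + gammadot * r[:, 1][:, None] * ex)  # Eq. (S1)
    return r, p
```

With $\dot{\gamma}=0$ an initially overlapping pair at rest relaxes outward until contact is lost, consistent with the strongly dissipative choice $\eta=\sqrt{mk}$.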
In Fig. S1, we plot non-affine trajectories in the last two cycles for a
particle with $\gamma_{0}=0.02$, $\phi=0.87$, and
$\omega=10^{-4}\tau_{0}^{-1}$ in the absorbing state. The trajectories are
closed, but there is a region where the position of the particle depends on
the number of cycles $n_{c}$. The behavior in Fig. S1 is typical of the
inertia effect in the underdamped system.
Figure S1: Non-affine trajectories in the last two cycles for underdamped
particles with $\gamma_{0}=0.02$, $\phi=0.87$, and
$\omega=10^{-4}\tau_{0}^{-1}$ in the absorbing state. The circles represent
the trajectory in the last cycle. The line represents the trajectory in the
second to the last cycle.
Figure S2 plots the magnitude of the Fourier coefficients $a^{(n)}$ and
$b^{(n)}$ against $n$ for $\phi=0.87$ and $\gamma_{0}=0.02$ with
$\omega\tau_{0}=10^{-4}$. As in the overdamped system, $a^{(1)}$ takes the
largest value, and the other components are non-zero. In the inset of Fig. S2,
we show $a^{(1)}$ and $b^{(1)}$ against $\gamma_{0}$ for $\phi=0.87$ with
$\omega\tau_{0}=10^{-4}$; both are proportional to $\gamma_{0}$.
Figure S2: Magnitudes of Fourier coefficients $a^{(n)}$ and $b^{(n)}$ of the
underdamped particles against $n$ for $\phi=0.87$ and $\gamma_{0}=0.02$ with
$\omega\tau_{0}=10^{-4}$. (Inset) Magnitudes of the Fourier coefficients
$a^{(n)}$ and $b^{(n)}$ against $\gamma_{0}$ for $\phi=0.87$ and
$\omega\tau_{0}=10^{-4}$ with $n=1$. The dashed line represents
$a^{(n)},b^{(n)}\propto\gamma_{0}$.
In Fig. S3, we plot the scaled storage modulus $G^{\prime}$ of the underdamped
particles in the absorbing state against the scaled amplitude $\gamma_{0}$ for
$\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870$ and $0.860$. The storage
modulus exhibits SAS. The corresponding theoretical expression
$G^{\prime}_{\rm T}$ in Eq. (14) is also presented as open symbols in Fig. S3,
and it quantitatively reproduces the numerical results except for the region of
quite large $\gamma_{0}$.
Figure S3: Scaled storage modulus $G^{\prime}/\sqrt{\phi-\phi_{J}}$ of
underdamped particles (filled symbols) and its theoretical expression using
$G^{\prime}_{\rm T}$ (open symbols) in Eq. (14) against scaled
$\gamma_{0}/(\phi-\phi_{J})$ in the absorbing state scaled by the distance
$\phi-\phi_{J}$ from the jamming point for $\omega=10^{-4}\tau_{0}^{-1}$ with
$\phi=0.870$ and $0.860$.
In Fig. S4(a), we present the loss modulus $G^{\prime\prime}$ obtained in our
simulation and its theoretical expression $G^{\prime\prime}_{\rm T}$ in Eq.
(15) against $\gamma_{0}$ for $\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870$
and $0.860$ in the underdamped system. The loss modulus $G^{\prime\prime}$
does not strongly depend on $\phi$ and $\gamma_{0}$, and the theoretical
expression $G^{\prime\prime}_{\rm T}$ reproduces the numerical results. In
Fig. S4(b), we plot the loss modulus $G^{\prime\prime}$ against $\omega$ for
$\phi=0.87$ with $\gamma_{0}=0.01$. The loss modulus seems to converge to a
non-zero value in the limit $\omega\to 0$.
Figure S4: (a) Loss modulus $G^{\prime\prime}$ of underdamped particles
obtained in our simulation (filled symbols) and its theoretical expression
$G^{\prime\prime}_{\rm T}$ (open symbols) in Eq. (15) against $\gamma_{0}$ for
$\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870$ and $0.860$. (b) Loss modulus
$G^{\prime\prime}$ against $\omega$ for $\phi=0.87$ with $\gamma_{0}=0.01$.
The results in this section are consistent with those in the main text for the
overdamped system. This indicates that the results presented in the main text
are universal for jammed disordered materials.
## II Dependence of $G^{\prime}$ and $G^{\prime\prime}$ on $N$ and $n_{c}$
In this section, we show the dependence of $G^{\prime}$ and $G^{\prime\prime}$
on the numbers of particles $N$ and cycles $n_{c}$ for the overdamped dynamics
discussed in the main text. In Figs. S5(a) and (b), we plot $G^{\prime}$ and
$G^{\prime\prime}$ against $\gamma_{0}$ for $\omega=10^{-4}\tau_{0}^{-1}$,
$\phi=0.870$, and $n_{c}=20$ with $N=1000$ and $4000$, respectively. The shear
moduli $G^{\prime}$ and $G^{\prime\prime}$ for $N=1000$ and $4000$ are
consistent within error bars.
---
Figure S5: (a) Storage modulus $G^{\prime}$ against $\gamma_{0}$ for
$\omega=10^{-4}\tau_{0}^{-1}$, $\phi=0.870$, and $n_{c}=20$ with $N=1000$ and
$4000$. (b) Loss modulus $G^{\prime\prime}$ against $\gamma_{0}$ for
$\omega=10^{-4}\tau_{0}^{-1}$, $\phi=0.870$, and $n_{c}=20$ with $N=1000$ and
$4000$.
Figures S6(a) and (b) show $G^{\prime}$ and $G^{\prime\prime}$ against $n_{c}$
for $\omega=10^{-4}\tau_{0}^{-1}$, $\phi=0.870$, and $N=1000$ with
$\gamma_{0}=0.02,0.04$, and $0.08$, respectively. The shear moduli
$G^{\prime}$ and $G^{\prime\prime}$ reach statistical steady states for
$n_{c}\geq 20$ within error bars.
---
Figure S6: (a) Storage modulus $G^{\prime}$ against $n_{c}$ for
$\omega=10^{-4}\tau_{0}^{-1}$, $\phi=0.870$, and $N=1000$ with
$\gamma_{0}=0.02,0.04$, and $0.08$. (b) Loss modulus $G^{\prime\prime}$
against $n_{c}$ for $\omega=10^{-4}\tau_{0}^{-1}$, $\phi=0.870$, and $N=1000$
with $\gamma_{0}=0.02,0.04$, and $0.08$.
## III Particle displacement and yielding strain amplitude
In this section, we show the time evolution of the displacements of particles
before reaching the absorbing state and the evaluation of the yielding strain
amplitude $\gamma_{c}$. Here, we introduce the particle displacement between
$n_{c}$-th and $(n_{c}-m)$-th cycles as
$dr_{m}(n_{c})=\sum_{i=1}^{N}\left|\bm{r}_{i}(n_{c}T)-\bm{r}_{i}((n_{c}-m)T)\right|/N$
(S5)
with the period $T=2\pi/\omega$. We define $dr(n_{c})$ as the minimum value of
$dr_{m}(n_{c})$ for $m$. In Fig. S7, we plot $dr(n_{c})$ against $n_{c}$ for
$\omega=10^{-4}\tau_{0}^{-1}$ and $\phi=0.860$ with $\gamma_{0}=0.08,0.04$,
and $0.02$. For $\gamma_{0}=0.08$, $dr(n_{c})$ remains non-zero, while it
approaches $0$ after a transient for $\gamma_{0}=0.04$ and $0.02$. It is
noteworthy that $dr(n_{c})$ reaches a steady state only for $n_{c}>50$ in the
case of $\gamma_{0}=0.04$, which is much larger than the $n_{c}$ needed for
the shear moduli to reach a steady state in Fig. S6.
Figure S7: Displacements of particles $dr(n_{c})$ against $n_{c}$ for
$\omega=10^{-4}\tau_{0}^{-1}$ and $\phi=0.860$ with $\gamma_{0}=0.08,0.04$,
and $0.02$.
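The displacement measure of Eq. (S5) and its minimum over $m$ can be computed directly from end-of-cycle snapshots. A minimal sketch, assuming positions are stored once per cycle:

```python
import numpy as np

def dr_min(r_cycles):
    # r_cycles[n]: particle positions (N, 2) at the end of cycle n.
    # Returns dr(n_c) = min over m of dr_m(n_c) in Eq. (S5), so an
    # absorbing state with period M gives exactly zero at m = M.
    last = r_cycles[-1]
    return min(np.mean(np.linalg.norm(last - prev, axis=1))
               for prev in r_cycles[:-1])
```

Taking the minimum over $m$ is what makes the criterion insensitive to absorbing trajectories with periods longer than one cycle (Sec. VI).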
In Fig. S8, we plot $dr(n_{c})$ against $\gamma_{0}$ at $n_{c}=100$ for
$\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870,0.860,0.850$ and $0.845$. For
all $\phi$, $dr(n_{c})$ changes from $0$ to non-zero values as $\gamma_{0}$
increases. We call the state with $dr(n_{c})<d_{c}$ at smaller $\gamma_{0}$
the absorbing state, and the state with $dr(n_{c})>d_{c}$ at larger
$\gamma_{0}$ the plastic state. The yielding strain amplitude $\gamma_{c}$ is
defined as the boundary between these states. From Fig. S8, we estimate
$0.04<\gamma_{c}<0.08$ for $\phi=0.870$, $0.04<\gamma_{c}<0.05$ for
$\phi=0.860$, $0.02<\gamma_{c}<0.03$ for $\phi=0.850$, and
$0.01<\gamma_{c}<0.02$ for $\phi=0.845$.
Figure S8: Displacement of particles $dr(n_{c})$ against $\gamma_{0}$ at
$n_{c}=100$ for $\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870,0.860,0.850$
and $0.845$.
## IV Stress-strain curve
In this section, we present typical stress-strain curves in the absorbing and
plastic states, including their time evolution. Figure S9 displays the shear
stress $\sigma$ against the strain $\gamma$ with $\gamma_{0}=0.02$ for
different $n_{c}$. For $n_{c}\leq 5$, the stress-strain curves are not
convergent, which indicates that the system is in a transient state. For
$n_{c}=6$ and $7$, the stress-strain curves become identical in the absorbing
state. We plot the shear stress $\sigma$ against the strain $\gamma$ for
$\gamma_{0}=0.1$ in Fig. S10. The stress-strain curves differ for every
$n_{c}$ because the system is in the plastic state.
Figure S9: Plots of shear stress $\sigma$ against $\gamma$ for
$\gamma_{0}=0.02$, $\omega=10^{-4}\tau_{0}^{-1}$, and $\phi=0.87$
corresponding to $\phi-\phi_{J}=0.029$ with various $n_{c}$. Figure S10:
Plots of shear stress $\sigma$ against $\gamma$ for $\gamma_{0}=0.1$,
$\omega=10^{-4}\tau_{0}^{-1}$ and $\phi=0.87$ corresponding to
$\phi-\phi_{J}=0.029$ with various $n_{c}$.
## V Dependence of trajectories on $\gamma_{0}$ and $\omega$
In this section, we present how particle trajectories depend on $\gamma_{0}$
and $\omega$. In Fig. S11, we plot the non-affine particle trajectories in the
last cycle for $\omega=10^{-3}\tau_{0}^{-1}$ and $10^{-5}\tau_{0}^{-1}$ with
$\phi=0.87$ and $\gamma_{0}=0.01$. Let us introduce
${\bm{r}}^{\prime}_{i}=(x^{\prime}_{i},y^{\prime}_{i})=\tilde{\bm{r}}_{i}-{\bm{R}}_{i}.$
(S6)
The trajectories form nontrivial loops, which persist for smaller $\omega$.
---
Figure S11: Non-affine particle trajectories in the last cycle for
$\omega=10^{-3}\tau_{0}^{-1}$ (a) and $10^{-5}\tau_{0}^{-1}$ (b) with
$\phi=0.87$ and $\gamma_{0}=0.01$.
Figure S12 represents the non-affine particle trajectories in the last cycle
for $\omega=10^{-3}\tau_{0}^{-1}$ and $10^{-5}\tau_{0}^{-1}$ with $\phi=0.87$
and $\gamma_{0}=1.0\times 10^{-7}$. In Fig. S12(a), the trajectory with
$\omega=10^{-3}\tau_{0}^{-1}$ forms an ellipse, but the trajectory becomes a
straight line for $\omega=10^{-5}\tau_{0}^{-1}$ in Fig. S12(b).
---
Figure S12: Non-affine particle trajectories in the last cycle for
$\omega=10^{-3}\tau_{0}^{-1}$ (a) and $10^{-5}\tau_{0}^{-1}$ (b) with
$\phi=0.87$ and $\gamma_{0}=1.0\times 10^{-7}$.
## VI Effect of trajectories with longer periods
In this section, we discuss the effect of closed trajectories with periods
longer than $2\pi$. As indicated by Refs. [27, 28, 30, 31, 33], some samples
exhibit non-trivial absorbing trajectories where particles return to their
original positions after more than one cycle of oscillatory shear. In these
samples, the non-affine trajectory of a particle $\tilde{\bm{r}}_{i}(\theta)$
satisfies
$\tilde{\bm{r}}_{i}(\theta)=\tilde{\bm{r}}_{i}(\theta+2M\pi)$ (S7)
with $M=2,3,4,\cdots$. In this case, $\tilde{\bm{r}}_{i}(\theta)$ for
$0\leq\theta<2M\pi$ is expressed in the Fourier series as
$\tilde{\bm{r}}_{i}(\theta)=\bm{R}^{\prime}_{i}+\sum_{m=1}^{\infty}\left(\bm{A}_{i}^{(m)}\sin\frac{m\theta}{M}+\bm{B}_{i}^{(m)}\cos\frac{m\theta}{M}\right)$
(S8)
with
$\displaystyle\bm{R}^{\prime}_{i}$ $\displaystyle=$
$\displaystyle\frac{1}{2M\pi}\int_{0}^{2M\pi}\ d\theta\
\tilde{\bm{r}}_{i}(\theta),$ (S9)
and the Fourier coefficients
$\displaystyle\bm{A}_{i}^{(m)}$ $\displaystyle=$
$\displaystyle\frac{1}{M\pi}\int_{0}^{2M\pi}\ d\theta\ \sin\frac{m\theta}{M}\
\tilde{\bm{r}}_{i}(\theta),$ (S10) $\displaystyle\bm{B}_{i}^{(m)}$
$\displaystyle=$ $\displaystyle\frac{1}{M\pi}\int_{0}^{2M\pi}\ d\theta\
\cos\frac{m\theta}{M}\ \tilde{\bm{r}}_{i}(\theta).$ (S11)
However, in Eqs. (4) and (5), we need $\tilde{\bm{r}}_{i}(\theta)$ only for
$0\leq\theta<2\pi$ to calculate $G^{\prime}$ and $G^{\prime\prime}$. When
$\tilde{\bm{r}}_{i}(\theta)$ is restricted to $0\leq\theta<2\pi$, we can use Eq. (8)
with the Fourier coefficient given by Eqs. (10) and (11) as an expression of
the trajectory, and we obtain the theoretical expressions Eqs. (14) and (15)
even in this case. It should be noted that samples in the absorbing state with
longer periods are rare, and the probability that such a trajectory emerges is
smaller than $0.01$ for sufficiently packed systems above the jamming point,
as shown in Ref. [30]. Therefore, we can ignore the effect of these rare samples.
## VII Loss modulus
In Fig. S13, we plot the loss modulus $G^{\prime\prime}$ against $\gamma_{0}$
for $\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870$ and $0.860$ including the
data in the absorbing and plastic states. This figure corresponds to Fig. 3(a)
in the main text, but Fig. S13 contains the data for a wide range of
$\gamma_{0}$. The previous studies [21, 22] reported that the loss modulus has
a peak around the yield strain for an underdamped system, but the peak of
$G^{\prime\prime}$ is not clearly visible in our overdamped system.
Figure S13: Loss modulus $G^{\prime\prime}$ against $\gamma_{0}$ for
$\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870$ and $0.860$. The larger
(smaller) filled symbols represent the data in the absorbing (plastic) state.
The open pentagons represent the yield strain amplitude $\gamma_{c}$.
## VIII Shear modulus for small $\gamma_{0}$
In this section, we demonstrate that $G^{\prime}$ and $G^{\prime\prime}$ obey
the Kelvin–Voigt model for a sufficiently small $\gamma_{0}$. Figure S14 is a
set of plots of $G^{\prime}$ and $G^{\prime\prime}$ against $\omega\tau_{0}$
for $\phi=0.870$ and $\gamma_{0}=1.0\times 10^{-7}$, where $G^{\prime}$ is
almost independent of $\omega$ and $G^{\prime\prime}$ is proportional to
$\omega$. This behavior is consistent with that of the Kelvin–Voigt model.
Figure S14: Plots of $G^{\prime}$ and $G^{\prime\prime}$ against $\omega$ for
$\phi=0.870$ and $\gamma_{0}=1.0\times 10^{-7}$. The dashed line represents
$G^{\prime\prime}\propto\omega$.
## IX Relationship between closed trajectories and the Fourier coefficients
In this section, we present how the trajectory of a particle depends on the
Fourier coefficients. Figure S15 compares the trajectory of a particle
corresponding to Fig. 1(a) with its approximate trajectory using Eq. (8) with
some restricted modes, where we estimate the coefficients using the true
trajectory. In Fig. S15 (a), we plot the approximate trajectory (blue filled
circles) using only $\bm{a}_{i}^{\rm(1)}$, where we set the other coefficients
to $0$. The approximate trajectory (blue filled circles) is a straight line.
Figure S15 (b) shows the approximate trajectory using $\bm{a}_{i}^{\rm(1)}$
and $\bm{b}_{i}^{\rm(1)}$, where the trajectory becomes an ellipse. As we
increase the number of modes, the approximate trajectory approaches the true
trajectory, as shown in Figs. S15 (c) and (d).
Figure S15: Trajectory shown in Fig. 1(a) of the main text and its approximate
trajectories with some restricted modes. The red solid lines represent the
original data, and the blue filled circles represent the approximate
trajectory using (a) $\bm{a}_{i}^{\rm(1)}$, (b) $\bm{a}_{i}^{\rm(1)}$ and
$\bm{b}_{i}^{\rm(1)}$, (c) $\bm{a}_{i}^{\rm(n)}$ and $\bm{b}_{i}^{\rm(n)}$
with $n=1$ and $2$, and (d) $\bm{a}_{i}^{\rm(n)}$ and $\bm{b}_{i}^{\rm(n)}$
with $n=1,2,$ and $3$.
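The mode-restricted reconstructions in Fig. S15 amount to evaluating partial sums of Eq. (8). A sketch with a hypothetical helper `truncated_traj` that keeps only the first `nmodes` harmonics:

```python
import numpy as np

def truncated_traj(theta, R, a, b, nmodes):
    # Partial sum of the Fourier series Eq. (8), keeping modes n = 1..nmodes;
    # a[n-1] and b[n-1] are the 2D coefficient vectors of mode n.
    out = np.tile(np.asarray(R, dtype=float), (len(theta), 1))
    for n in range(1, nmodes + 1):
        out += np.outer(np.sin(n * theta), a[n - 1])
        out += np.outer(np.cos(n * theta), b[n - 1])
    return out
```

Keeping only $\bm{a}^{(1)}$ gives a straight segment, adding $\bm{b}^{(1)}$ gives an ellipse, and higher modes deform the loop toward the true trajectory, exactly as in panels (a)-(d).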
## X Cyclic contact changes
In this section, we show the number of contact changes during the last cycle
in the absorbing state. References [21, 23, 25, 26] demonstrate that the
nontrivial loops originate from contacts that cyclically open and close. Here,
we define $N_{\rm cc}$ as the number of events where the same contact opens
and closes again during the last cycle. In Fig. S16, we present $N_{\rm cc}$
during the last cycle for $\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870$
against $\gamma_{0}$ in the absorbing state. The number of cyclic contact
changes $N_{\rm cc}$ is nearly proportional to $\gamma_{0}$. This dependence
is consistent with the behaviors of $a^{(n)}$ and $b^{(n)}$ of the Fourier
components, which are almost proportional to $\gamma_{0}$.
Figure S16: The number of cyclic contact changes $N_{\rm cc}$ during the last
cycle for $\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870$ against
$\gamma_{0}$. The solid line represents $N_{\rm cc}\sim\gamma_{0}$.
## XI Details of the theoretical analysis
In this section, we derive Eqs. (14) and (15) in the main text by assuming
$|a_{i}^{(n)}|\sim|b_{i}^{(n)}|\sim\gamma_{0}$ and $\gamma_{0}\ll 1$. From Eq.
(13) in the main text, $x_{ij}(\theta)$ and $y_{ij}(\theta)$ are given by
$\displaystyle x_{ij}=X_{ij}+\gamma_{0}\sin\theta
Y_{ij}+\sum_{n=1}^{\infty}\left(a_{ij,x}^{(n)}\sin n\theta+b_{ij,x}^{(n)}\cos
n\theta\right),$ (S12) $\displaystyle
y_{ij}=Y_{ij}+\sum_{n=1}^{\infty}\left(a_{ij,y}^{(n)}\sin
n\theta+b_{ij,y}^{(n)}\cos n\theta\right),$ (S13)
where
$\bm{a}^{(n)}_{ij}=(a_{ij,x}^{(n)},a_{ij,y}^{(n)})=\bm{a}^{(n)}_{i}-\bm{a}^{(n)}_{j}$,
$\bm{b}^{(n)}_{ij}=(b_{ij,x}^{(n)},b_{ij,y}^{(n)})=\bm{b}^{(n)}_{i}-\bm{b}^{(n)}_{j}$.
Using these equations and neglecting the terms of $O(\gamma_{0}^{2})$,
$|\bm{r}_{ij}(\theta)|^{2}=x_{ij}^{2}+y_{ij}^{2}$ is given by
$\displaystyle|\bm{r}_{ij}(\theta)|^{2}\simeq
R_{ij}^{2}\left\\{1+2E_{ij}(\theta)\right\\}$ (S14)
with
$\displaystyle E_{ij}(\theta)$ $\displaystyle=$
$\displaystyle\sum_{n=1}^{\infty}\frac{\bm{R}_{ij}\cdot\bm{a}_{ij}^{(n)}}{R_{ij}^{2}}\sin
n\theta+\sum_{n=1}^{\infty}\frac{\bm{R}_{ij}\cdot\bm{b}_{ij}^{(n)}}{R_{ij}^{2}}\cos
n\theta$ (S15)
$\displaystyle+\gamma_{0}\frac{X_{ij}Y_{ij}}{R_{ij}^{2}}\sin\theta.$
From Eq. (S14), $r_{ij}(\theta)$ is approximately obtained as
$\displaystyle r_{ij}(\theta)\simeq R_{ij}\left\\{1+E_{ij}(\theta)\right\\}$
(S16)
up to $O(\gamma_{0})$. Using this equation, we expand
$\Psi(r_{ij}(\theta))$, with $\Psi(r)=-U^{\prime}(r)/r$, up to $O(\gamma_{0})$ as
$\displaystyle\Psi(r_{ij}(\theta))$ $\displaystyle\simeq$
$\displaystyle\Psi(R_{ij})+\Psi^{\prime}(R_{ij})R_{ij}E_{ij}(\theta).$ (S17)
Substituting Eqs. (S12)–(S17) into Eq. (6), we obtain
$\displaystyle\sigma(\theta)$ $\displaystyle=$
$\displaystyle-\frac{1}{L^{2}}\sum_{(i,j)}\left\\{\Psi(R_{ij})+\Psi^{\prime}(R_{ij})R_{ij}E_{ij}(\theta)\right\\}$
(S18) $\displaystyle\times\left\\{X_{ij}+\gamma_{0}\sin\theta
Y_{ij}+\sum_{n=1}^{\infty}\left(a_{ij,x}^{(n)}\sin n\theta+b_{ij,x}^{(n)}\cos
n\theta\right)\right\\}$
$\displaystyle\times\left\\{Y_{ij}+\sum_{n=1}^{\infty}\left(a_{ij,y}^{(n)}\sin
n\theta+b_{ij,y}^{(n)}\cos n\theta\right)\right\\}.$
Here, we abbreviate $\displaystyle\sum_{i}\sum_{j>i}$ as
$\displaystyle\sum_{(i,j)}$. Neglecting the terms of $O(\gamma_{0}^{2})$,
$\sigma(\theta)$ is approximated as
$\displaystyle\sigma(\theta)$ $\displaystyle\simeq$
$\displaystyle-\frac{1}{L^{2}}\sum_{(i,j)}X_{ij}Y_{ij}\Psi(R_{ij})-\frac{1}{L^{2}}\sum_{(i,j)}\gamma_{0}\sin\theta
Y_{ij}^{2}\Psi(R_{ij})$ (S19)
$\displaystyle-\frac{1}{L^{2}}\sum_{(i,j)}\sum_{n=1}^{\infty}\left(a_{ij,x}^{(n)}\sin
n\theta+b_{ij,x}^{(n)}\cos n\theta\right)Y_{ij}\Psi(R_{ij})$
$\displaystyle-\frac{1}{L^{2}}\sum_{(i,j)}\sum_{n=1}^{\infty}X_{ij}\left(a_{ij,y}^{(n)}\sin
n\theta+b_{ij,y}^{(n)}\cos n\theta\right)\Psi(R_{ij})$
$\displaystyle-\frac{1}{L^{2}}\sum_{(i,j)}X_{ij}Y_{ij}\Psi^{\prime}(R_{ij})R_{ij}E_{ij}(\theta).$
By substituting this equation into Eqs. (4) and (5) and using
$\displaystyle\frac{1}{\pi}\int_{0}^{2\pi}\ d\theta\ \sin m\theta\sin
n\theta=\delta_{mn},$ (S20) $\displaystyle\frac{1}{\pi}\int_{0}^{2\pi}\
d\theta\ \sin m\theta\cos n\theta=0,$ (S21)
we obtain Eqs. (14) and (15) in the main text.
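The orthogonality relations (S20) and (S21) used in this last step are easy to verify numerically; a sketch using a trapezoidal quadrature:

```python
import numpy as np

def proj(f, g, npts=20001):
    # (1/pi) * integral over [0, 2*pi] of f(theta) * g(theta) dtheta,
    # evaluated with the trapezoidal rule.
    theta = np.linspace(0.0, 2.0 * np.pi, npts)
    y = f(theta) * g(theta)
    dtheta = theta[1] - theta[0]
    return np.sum(0.5 * (y[1:] + y[:-1])) * dtheta / np.pi

# Eq. (S20): (1/pi) int sin(m t) sin(n t) dt = delta_mn;
# Eq. (S21): the sin-cos overlap vanishes.
```

These projections are exactly what picks out the $n=1$ terms of $\sigma(\theta)$ in Eqs. (4) and (5), yielding Eqs. (14) and (15).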
## XII Components of shear moduli
In this section, we clarify what terms of the theoretical expressions
$G^{\prime}_{\rm T}$ and $G^{\prime\prime}_{\rm T}$ in the absorbing state in
Eqs. (14) and (15) are dominant. Here, $G^{\prime}_{\rm T}$ consists of four
terms as
$\displaystyle G^{\prime}_{\rm T}$ $\displaystyle=$ $\displaystyle
G^{\prime}_{{\rm T},1}+G^{\prime}_{{\rm T},2}+G^{\prime}_{{\rm
T},3}+G^{\prime}_{{\rm T},4}$ (S22)
with
$\displaystyle G^{\prime}_{{\rm T},1}$
$\displaystyle=-\frac{1}{L^{2}}\sum_{i,j}\left\langle\frac{X_{ij}^{2}Y_{ij}^{2}}{R_{ij}}\Psi^{\prime}(R_{ij})\right\rangle,$
(S23) $\displaystyle G^{\prime}_{{\rm T},2}$
$\displaystyle=-\frac{1}{L^{2}}\sum_{i,j}\left\langle
Y_{ij}^{2}\Psi(R_{ij})\right\rangle,$ (S24) $\displaystyle G^{\prime}_{{\rm
T},3}$
$\displaystyle=-\frac{1}{L^{2}}\sum_{i,j}\left\langle\left(\frac{a_{ij,x}^{(1)}}{\gamma_{0}}Y_{ij}+X_{ij}\frac{a_{ij,y}^{(1)}}{\gamma_{0}}\right)\Psi(R_{ij})\right\rangle,$
(S25) $\displaystyle G^{\prime}_{{\rm T},4}$
$\displaystyle=-\left\langle\frac{1}{L^{2}}\sum_{i,j}X_{ij}Y_{ij}\Psi^{\prime}(R_{ij})\frac{\bm{R}_{ij}\cdot\bm{a}_{ij}^{(1)}}{\gamma_{0}R_{ij}}\right\rangle,$
(S26)
where $G^{\prime}_{{\rm T},1}$ and $G^{\prime}_{{\rm T},2}$ represent the
contributions from the affine motion, while $G^{\prime}_{{\rm T},3}$ and
$G^{\prime}_{{\rm T},4}$ are the contributions from the non-affine motion. In
Fig. S17, we show $G^{\prime}_{{\rm T},n}$ in the
absorbing state for $\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870$. We find
that $G^{\prime}_{{\rm T},1}$ and $G^{\prime}_{{\rm T},4}$ are dominant.
$G^{\prime}_{{\rm T},1}$ decreases with $\gamma_{0}$, while the other $G_{{\rm
T},n}^{\prime}$ with $n=2,3,4$ are almost independent of $\gamma_{0}$. This
indicates that SAS results from the behavior of $G^{\prime}_{{\rm T},1}$.
Figure S17: $G^{\prime}_{{\rm T},n}$ with $n=1,2,3$ and $4$ against
$\gamma_{0}$ in the absorbing state for $\omega=10^{-4}\tau_{0}^{-1}$ with
$\phi=0.870$. The horizontal lines represent $G^{\prime}_{{\rm T},n}$ in the
limit $\gamma_{0}\to 0$, which is estimated at $\gamma_{0}=0.001$.
On the other hand, the loss modulus $G^{\prime\prime}_{\rm T}$ consists of two
terms as
$\displaystyle G_{\rm T}^{\prime\prime}$ $\displaystyle=$ $\displaystyle
G_{{\rm T},1}^{\prime\prime}+G_{{\rm T},2}^{\prime\prime}$ (S27)
with
$\displaystyle G_{{\rm T},1}^{\prime\prime}$
$\displaystyle=-\frac{1}{L^{2}}\sum_{i,j}\left\langle\left(\frac{b_{ij,x}^{(1)}}{\gamma_{0}}Y_{ij}+X_{ij}\frac{b_{ij,y}^{(1)}}{\gamma_{0}}\right)\Psi(R_{ij})\right\rangle,$
(S28) $\displaystyle G_{{\rm T},2}^{\prime\prime}$
$\displaystyle=-\frac{1}{L^{2}}\sum_{i,j}\left\langle
X_{ij}Y_{ij}\Psi^{\prime}(R_{ij})R_{ij}\frac{\bm{R}_{ij}\cdot\bm{b}_{ij}^{(1)}}{\gamma_{0}R_{ij}^{2}}\right\rangle.$
(S29)
In Fig. S18, we show $G^{\prime\prime}_{{\rm T},n}$ with $n=1$ and $2$ in the
absorbing state for $\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870$. The
result shows that $G^{\prime\prime}_{{\rm T},1}$ is dominant and almost
independent of $\gamma_{0}$. $G^{\prime\prime}_{{\rm T},2}$ depends on
$\gamma_{0}$, but it is much smaller than $G^{\prime\prime}_{{\rm T},1}$ for
$\gamma_{0}<0.1$.
Figure S18: $G^{\prime\prime}_{{\rm T},n}$ with $n=1$ and $2$ against $\gamma_{0}$
in the absorbing state for $\omega=10^{-4}\tau_{0}^{-1}$ with $\phi=0.870$.
## XIII Non-linear viscoelastic moduli
In this section, we examine the non-linear viscoelastic moduli in our system.
The nonlinear elastic response is generally characterized by nonlinear
viscoelastic moduli $G^{\prime}_{n}$ and $G^{\prime\prime}_{n}$ satisfying
[41, 42]
$\sigma(t)=\gamma_{0}\sum_{n=1}^{\infty}\\{G^{\prime}_{n}\sin(n\omega t)+G^{\prime\prime}_{n}\cos(n\omega t)\\}.$ (S30)
The storage and loss moduli are, respectively, given by
$G^{\prime}=G^{\prime}_{1}$ and $G^{\prime\prime}=G^{\prime\prime}_{1}$.
$G^{\prime}_{n}$ and $G^{\prime\prime}_{n}$ for $n\geq 2$ represent higher
harmonics. In Figs. S19 and S20, we plot $G^{\prime}_{n}$ and
$G^{\prime\prime}_{n}$ in the absorbing state for
$\omega=10^{-4}\tau_{0}^{-1}$ and $\phi=0.870$ with $n=1,2$, and $3$,
respectively. These figures indicate that the higher harmonics are negligible
in our system.
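Equation (S30) defines the moduli as Fourier coefficients of the stress over one period, so they can be extracted numerically by projection, $G^{\prime}_{n}=\frac{\omega}{\pi\gamma_{0}}\int_{0}^{2\pi/\omega}\sigma(t)\sin(n\omega t)\,dt$ and similarly with $\cos(n\omega t)$ for $G^{\prime\prime}_{n}$. A minimal sketch with a synthetic stress signal (the modulus values below are illustrative, not taken from the simulations):

```python
import numpy as np

def _trapz(f, t):
    """Trapezoidal rule on a (possibly non-uniform) grid."""
    return np.sum((f[:-1] + f[1:]) * np.diff(t)) / 2.0

def harmonic_moduli(sigma, t, gamma0, omega, n_max=3):
    """Extract G'_n and G''_n from one period of stress data by Fourier projection."""
    Gp = [omega / (np.pi * gamma0) * _trapz(sigma * np.sin(n * omega * t), t)
          for n in range(1, n_max + 1)]
    Gpp = [omega / (np.pi * gamma0) * _trapz(sigma * np.cos(n * omega * t), t)
           for n in range(1, n_max + 1)]
    return np.array(Gp), np.array(Gpp)

# Synthetic stress with known moduli (illustrative values).
gamma0, omega = 0.05, 2.0
t = np.linspace(0.0, 2 * np.pi / omega, 20001)
sigma = gamma0 * (1.5 * np.sin(omega * t) + 0.3 * np.cos(omega * t)
                  + 0.02 * np.sin(3 * omega * t))
Gp, Gpp = harmonic_moduli(sigma, t, gamma0, omega)
# Gp ≈ [1.5, 0, 0.02], Gpp ≈ [0.3, 0, 0]
```

A dominant $n=1$ pair with negligible $n\geq 2$ harmonics, as found here, is exactly the situation reported in Figs. S19 and S20.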
Figure S19: $G_{n}^{\prime}$ against $\gamma_{0}$ in the absorbing state for
$\omega=10^{-4}\tau_{0}^{-1}$ and $\phi=0.870$ with $n=1,2$, and $3$. Figure
S20: $G_{n}^{\prime\prime}$ against $\gamma_{0}$ in the absorbing state for
$\omega=10^{-4}\tau_{0}^{-1}$ and $\phi=0.870$ with $n=1,2$, and $3$.
|
# Indices of equilibrium points of linear control systems with saturated state
feedback
Xiao-Song Yang (1. School of Mathematics and Statistics, Huazhong University
of Science and Technology, Wuhan 430074, Hubei, People’s Republic of China; 2.
Hubei Key Laboratory of Engineering Modeling and Science Computing, Huazhong
University of Science and Technology, Wuhan 430074, Hubei, People’s Republic
of China)<EMAIL_ADDRESS>and Weisheng Huang (School of Mathematics
and Statistics, Huazhong University of Science and Technology, Wuhan 430074,
Hubei, People’s Republic of China)<EMAIL_ADDRESS>
###### Abstract
In this paper we investigate some properties of equilibrium points in
n-dimensional linear control systems with saturated state feedback. We provide
an index formula for equilibrium points and discuss its relation to boundaries
of attraction basins in feedback systems with single input. In addition, we
also touch upon convexity of attraction basin.
###### keywords:
Index, Equilibrium point, Saturated state feedback, Linear control system
## 1 Introduction
The attraction basin of an attractor is the set of points from which
trajectories approach the attractor as time goes to infinity. Clearly, the
attraction basin is a central research focus in dynamical system theory and
control theory because of its practical significance as well as the
theoretical challenge it poses.
One topic of much importance is the structure of the boundary of the
attraction basin of a locally stable equilibrium point. To some extent, the
structure of the basin boundary provides a measure of how a trajectory in the
attraction basin approaches the equilibrium point. For example, a fractal
boundary may give rise to transient chaotic behaviour for trajectories near
the boundary, so a perturbed state may go through a long period of irregular
motion before settling at the stable equilibrium state.
For linear systems with saturated state feedback, the topic on the structure
of the basin boundary has received much attention in recent decades Hu and
Lin, (2001); Kapila and Grigoriadis, (2002); Tarbouriech et al., (2011);
Corradini et al., (2012); Li and Lin, (2018); Rodolfo et al., (1997); Hu and
Lin, (2005).
Among these is the elegant and complete treatment in Hu and Lin, (2001) of the
boundary of the attraction basin (the domain of stability, as it is termed in
Hu and Lin, (2001)) and of the convexity of the attraction basin of the origin
in the two dimensional situation. It is now well known that in that case the
boundary of the attraction basin is a convex differentiable closed curve,
which is a limit cycle of the closed loop system. In higher dimensional cases,
even though there are many publications on estimation of the attraction basin
in linear control systems with stabilizing saturated state feedback, no such
satisfactory results have been obtained so far for anti-stable linear systems,
to the best knowledge of the authors. Even in the 3-dimensional case, studying
the dynamics of a nonlinear system is a formidable task, and the same is true
of studying the boundaries of attraction basins of attractors.
Since the existence and distribution of equilibrium points affect the
estimation of the attraction basin, we investigate properties of equilibrium
points in n-dimensional linear control systems with saturated state feedback,
focusing mainly on the single input case. In addition, we also touch upon
convexity of the attraction basin.
## 2 Indices of differentiable maps
Consider a differentiable map $f:\mathbb{R}^{n}\to\mathbb{R}^{n}$. Suppose
that there is an $r>0$ such that $f(x)\neq 0$ for all $x\in S^{n-1}(r)$, where
$S^{n-1}(r)=\\{x\in\mathbb{R}^{n}|\,\lVert x\rVert=r\\}.$
Define the sphere map $\bar{f}:S^{n-1}(r)\to S^{n-1}(1)$ as
$\bar{f}(x)=\frac{f(x)}{\lVert f(x)\rVert},\;x\in S^{n-1}(r).$
###### Definition 2.1.
The index of $f$, denoted by $\text{ind}\,f_{B(r)}$, on $B(r)$ is defined as
the topological degree of $\bar{f}$ where
$B(r)=\\{x\in\mathbb{R}^{n}|\,\lVert x\rVert\leq r\\}.$
For the degree of differentiable maps, see Milnor, (1965); for the degree of
continuous maps, the reader is referred to Brown, (1993).
In the following we give a simple result which is useful for the arguments in
section 3.
###### Proposition 2.2.
Consider a differentiable map $F:\mathbb{R}^{n}\to\mathbb{R}^{n}$,
$F(x)=f(x)+g(x),\;x\in\mathbb{R}^{n}.$
Suppose that there is an $r>0$ such that
$\lVert f(x)\rVert>\lVert g(x)\rVert,\;x\in S^{n-1}(r).$
Then
$\textnormal{ind}\,F_{B(r)}=\textnormal{ind}\,f_{B(r)}.$
The proof of this statement is an exercise in differential topology. We
provide a proof for the reader’s convenience.
###### Proof 2.3.
Consider the homotopy $\hat{F}:[0,1]\times S^{n-1}(r)\to S^{n-1}(1)$,
$\hat{F}(t,x)=\frac{f(x)+tg(x)}{\lVert f(x)+tg(x)\rVert},\;(t,x)\in[0,1]\times
S^{n-1}(r).$
Then $\hat{F}$ is well defined, since $\lVert f(x)+tg(x)\rVert\geq\lVert f(x)\rVert-\lVert g(x)\rVert>0$ on $S^{n-1}(r)$, and it is a homotopy between $\bar{f}$ and $\bar{F}$:
$\hat{F}(0,x)=\bar{f}(x)=\frac{f(x)}{\lVert
f(x)\rVert},\;\hat{F}(1,x)=\bar{F}(x).$
Since the index is homotopy invariant,
$\textnormal{ind}\,F_{B(r)}=\textnormal{ind}\,f_{B(r)}$.
In particular, we have the following fact.
###### Proposition 2.4.
Let $A$ be a nonsingular matrix. Suppose $g:\mathbb{R}^{n}\to\mathbb{R}^{n}$
is bounded, then there is an $r>0$ such that the map $Ax+g(x)$ restricted to
$B(r)$, has index $(-1)^{m}$, where $m$ is the number of eigenvalues of $A$
with negative real parts.
This fact is obvious, because for the map $\bar{A}=\frac{Ax}{\lVert
Ax\rVert}$, one has ind $\bar{A}_{B(r)}=\text{sign}(\det A)$ by arguments in
Milnor, (1965).
Let $f:\mathbb{R}^{n}\to\mathbb{R}^{n}$ be a continuous map with only
isolated zero points. Let $\bar{x}$ be a zero point of $f$; the index of $f$
at $\bar{x}$ is defined as
$\textnormal{ind}\,f(\bar{x})=\text{degree
of}\;\bar{f}:S^{n-1}(\bar{x},\delta)\to S^{n-1}(1)$
where
$S^{n-1}(\bar{x},\delta)=\\{x\in\mathbb{R}^{n}|\,\lVert
x-\bar{x}\rVert=\delta\\}$
and
$\bar{f}(x)=\frac{f(x)}{\lVert f(x)\rVert},\;x\in S^{n-1}(\bar{x},\delta).$
We have the following theorem.
###### Theorem 2.5.
Suppose $f:B(r)\to\mathbb{R}^{n}$ has the following properties:
1. 1)
$\lVert f(x)\rVert\neq 0$ for all $x\in S^{n-1}(r)$.
2. 2)
Every zero point of $f$ is isolated.
Denote by $E$ the set of zero points in $B(r)$, then
$\sum_{x\in E}\textnormal{ind}_{f}\,(\bar{x})=\textnormal{ind}\,f_{B(r)}.$
The proof is an elementary exercise in differential topology. For the
reader’s convenience, we give a proof for differentiable maps, since
continuous maps can be approximated by differentiable maps Brown, (1993). To
prove this theorem, we need the following lemma, which is adapted from
Milnor, (1965).
###### Lemma 2.6.
Let $M$ be a compact oriented manifold, and $K=\partial M$ be the boundary of
$M$. $N$ is a connected differentiable manifold. Suppose
$\text{dim}K=\text{dim}N$. If a map $f:K\mapsto N$ can be extended to a
differentiable map $F:M\mapsto N$, then for every regular value $y\in N$, the
degree satisfies
$\deg(f,y)=0.$
The proof of Theorem 2.5:
Since $B(r)$ is compact, it follows from 2) that $E$ contains a finite number
of zero points. By 1), $E\subset\text{int}\,B(r)$. For each $x\in E$, let
$B_{x}\subset B(r)$ be a small open ball centered at $x$, so that
$\bar{B}=B(r)-\bigcup_{x\in E}B_{x}$ is a manifold with boundary
$\partial\bar{B}=S^{n-1}(r)\cup\bigcup_{x\in E}\partial B_{x}.$
Now consider the map $\bar{f}:\partial\bar{B}\mapsto S^{n-1}(1)$
$\bar{f}=\frac{f(x)}{\lVert f(x)\rVert},\;x\in\partial\bar{B}.$
For a regular value $y$ of $\bar{f}$, it follows from the above lemma that
$\deg(\bar{f},y)=0.$
Since the degree of $\bar{f}$ is independent of the choice of regular value,
$\deg(\bar{f})=0.$
On the other hand, by the properties of degree,
$\displaystyle\deg(\bar{f})$
$\displaystyle=\deg(\bar{f})|_{S^{n-1}(r)}-\sum_{x\in
E}\deg(\bar{f})|_{\partial B_{x}}$
$\displaystyle=\text{ind}f_{B(r)}-\sum_{x\in E}\text{ind}f_{B_{x}}.$
Therefore
$\text{ind}f_{B(r)}=\sum_{x\in E}\text{ind}f_{B_{x}}.$
The minus in $-\sum_{x\in E}\text{ind}f_{B_{x}}$ is due to the fact that the
orientation of $\partial B_{x}$ is opposite to that of $S^{n-1}(r)$ (see
Milnor, (1965) for a discussion).
## 3 The indices of linear control systems with saturated state feedback
Consider the control system of the form
$\dot{x}=Ax+Bu,\,x\in\mathbb{R}^{n},\,u\in\mathbb{R}^{m},\,\lVert u\rVert\leq
M,M>0,$
where $\lVert u\rVert=\max_{i}\lvert u_{i}\rvert,\;u=(u_{1},\cdots,u_{m})$.
In this paper we are interested in the following problems. Assume that
$A$ is anti-stable, i.e., every eigenvalue of $A$ has positive real part, and
that the pair $(A,B)$ is controllable.
Define sat: $\mathbb{R}\to\mathbb{R}$ as
$\text{sat}(s)=\text{sign}(s)\min\\{M,\,\lvert s\rvert\\}$, and for
$u\in\mathbb{R}^{m}$,
$\text{sat}(u)=(\text{sat}(u_{1}),\,\cdots,\,\text{sat}(u_{m}))^{T}.$
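In code, the saturation map is a direct transcription of the definition (the bound $M$ is passed as a parameter, since no specific value is fixed here):

```python
import numpy as np

def sat(u, M):
    """Componentwise saturation: sign(s) * min(M, |s|)."""
    u = np.asarray(u, dtype=float)
    return np.sign(u) * np.minimum(M, np.abs(u))
```

For example, with $M=1$, inputs inside $[-1,1]$ pass through unchanged while larger inputs are clipped to $\pm 1$.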
Under the above assumptions, it is easy to see that there is a stabilizing
state feedback $u=Kx$ such that the closed loop system
$\dot{x}=Ax+B\,\text{sat}(Kx)$ (1)
has the origin as an asymptotically stable equilibrium.
Since $A$ is anti-stable, the attraction basin is bounded, and the boundary of
the attraction basin is of much interest from both theoretical and practical
points of view. On the other hand, the locations of the other (unstable)
equilibrium points are also of interest, because the attraction basin cannot
be large enough to contain any equilibrium point other than the origin.
As noted in Hu and Lin, (2001), system (1) may have $3^{m}$ "potential"
equilibrium points; however, only some of them are true equilibrium points.
Write $K=(k_{1},\cdots,k_{m})^{T}$, where each $k_{i}$ is a row vector with
$n$ entries.
###### Definition 3.1.
Consider the equation
$Ax+B\text{sat}(Kx)=0.$ (2)
A zero point of (2) is said to be in general position if it is not on the
plane $k_{i}x=\pm M$ for any $i\in\\{1,\cdots,m\\}$.
Clearly, in the generic case, each zero point of (2) is in general position.
For convenience, we consider the control system with single input.
###### Theorem 3.2.
For control system with single input
$\dot{x}=Ax+bu,\;x\in\mathbb{R}^{n},$ (3)
let $u=kx$ be a stabilizing state feedback for (3), then generically the
system
$\dot{x}=Ax+b\,\text{sat}(kx),$ (4)
has a unique equilibrium point, the origin, if $n$ is an even number, and has
three equilibrium points if $n$ is an odd number.
###### Proof 3.3.
In the generic case no equilibrium point lies on the hyperplanes $kx=\pm M$.
The index of the origin is $(-1)^{n}$, and every other equilibrium point has
index 1, because these equilibrium points lie in the saturated region, where
the right-hand side of (4) is $Ax\pm bM$ and $\det A>0$ since $A$ is
anti-stable.
Now for $r$ sufficiently large, it follows from Proposition 2.4 that
$\textnormal{ind}(Ax+b\,\text{sat}(kx))_{B(r)}=\textnormal{ind}(Ax)_{B(r)}=1.$
Thus by Theorem 2.5,
$\sum_{x\in E}\textnormal{ind}_{f}\,(x)=1,\;f=Ax+b\,\text{sat}(kx).$
If $n$ is even, the origin has index 1, so no other equilibrium points (each
of index 1) can exist; if $n$ is odd, the origin has index $-1$, so exactly
two further equilibrium points of index 1 are required. This gives the
conclusions of the theorem.
## 4 Further discussions on properties of attraction basin boundary
Since our concern with the equilibrium points of the closed loop stabilized
system is how to characterize the boundary of the attraction basin of the
origin, we give a brief discussion of the boundary in this section.
In view of the differential topology theory, the following is obvious.
###### Proposition 4.1.
For the stabilized closed loop system (4), if the attraction basin of the
origin is homeomorphic to $S^{n-1}(1)$, then the equilibrium points other than
the origin all lie on the boundary if $n$ is odd, and no equilibrium point
lies on the boundary if $n$ is even.
An interesting question is whether the boundary of the attraction basin is
convex if it is homeomorphic to $S^{n-1}(1)$. It is well known that the null
controllability region is convex if the input set is convex, and in the two
dimensional case, it has been shown by Hu and Lin Hu and Lin, (2001) that the
attraction basin of the origin is bounded by a limit cycle; they also provided
an elegant proof of convexity of the limit cycle. All these results give rise
to the expectation that the boundary of the attraction basin should be convex
whenever it is homeomorphic to the sphere, even in the higher dimensional
case. Unfortunately, this is refuted by the following example.
Consider the closed loop system (4) with
$A=\begin{bmatrix}1&-3&0\\\ 3&1&0\\\
0&0&4\end{bmatrix},\;b=\begin{bmatrix}1\\\ 2\\\
4\end{bmatrix},\;k=[\frac{7}{3},-\frac{4}{3},-\frac{35}{12}].$ (5)
The eigenvalues of $A+bk$ are -1,-2 and -3, hence the origin is asymptotically
stable.
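These claims are easy to verify numerically; the snippet below also locates the two saturated equilibria predicted by Theorem 3.2 for $n=3$ (odd), taking the saturation bound $M=1$ as an illustrative assumption, since the example does not fix $M$:

```python
import numpy as np

A = np.array([[1.0, -3.0, 0.0], [3.0, 1.0, 0.0], [0.0, 0.0, 4.0]])
b = np.array([1.0, 2.0, 4.0])
k = np.array([7/3, -4/3, -35/12])

# Closed-loop linear part in the unsaturated region: eigenvalues -1, -2, -3.
eigs = np.sort(np.linalg.eigvals(A + np.outer(b, k)).real)

# A is anti-stable: its eigenvalues (1 ± 3i and 4) all have positive real part.
assert np.all(np.linalg.eigvals(A).real > 0)

# Saturated equilibria solve A x ± b M = 0, i.e. x = ∓ M A⁻¹ b (M = 1 assumed),
# and are consistent only if the feedback is actually saturated there.
M = 1.0
x_plus = -M * np.linalg.solve(A, b)   # equilibrium with sat(kx) = +M
assert k @ x_plus >= M                # consistency: |kx| ≥ M at the equilibrium
```

Together with the origin and the symmetric point $-x_{+}$, this gives the three equilibrium points expected for odd $n$.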
The attraction basin $D$ of the origin can be obtained by numerical
simulation, as shown in Figure 1. The boundary of $D$ is divided into two
parts by a periodic orbit $\Gamma$, one of which is colored and the other is
transparent. These two parts are symmetric about the origin.
Figure 1: The boundary of the attraction basin of the origin of system (4)
with parameters (5).
In particular, let
$\displaystyle p_{1}$ $\displaystyle=(-1.080860,-0.487008,-0.804244),$
$\displaystyle p_{2}$ $\displaystyle=(0.514148,-0.183494,0.797384),$
$\displaystyle p_{3}$ $\displaystyle=(-0.283356,-0.335251,-0.003430),$
where $p_{3}$ is the midpoint of $p_{1}$ and $p_{2}$, i.e.,
$p_{3}=(p_{1}+p_{2})/2$.
By numerical calculation, we find that $p_{1},p_{2}\in D$, but $p_{3}\not\in
D$. Three different trajectories starting from $p_{1},p_{2}$ and $p_{3}$
respectively are shown in Figure 2.
Figure 2: The trajectories of system (4) with parameters (5).
This counter-example shows that the attraction basin of the origin of the
three-dimensional system (4) can be non-convex, which is not possible in the
two-dimensional case.
## 5 Acknowledgement
This work is partially supported by National Natural Science Foundation of
China (51979116).
## References
* Brown, (1993) Brown, R. F. (1993). A Topological Introduction to Nonlinear Analysis. Boston: Birkhäuser.
* Corradini et al., (2012) Corradini, M. L., Cristofaro, A., Giannoni, F., and Orlando, G. (2012). Control Systems with Saturating Inputs: Analysis Tools and Advanced Design, volume 424. Springer London, Limited.
* Hu and Lin, (2001) Hu, T. and Lin, Z. (2001). Control systems with actuator saturation: analysis and design. Springer Science & Business Media.
* Hu and Lin, (2005) Hu, T. and Lin, Z. (2005). Convex analysis of invariant sets for a class of nonlinear systems. Systems & Control Letters, 54(8):729–737.
* Kapila and Grigoriadis, (2002) Kapila, V. and Grigoriadis, K. M. (2002). Actuator Saturation Control. Marcel Dekker, Inc.
* Li and Lin, (2018) Li, Y. and Lin, Z. (2018). Stability and Performance of Control Systems with Actuator Saturation. Birkhäuser Basel.
* Milnor, (1965) Milnor, J. W. (1965). Topology from the Differentiable Viewpoint. University Press of Virginia.
* Rodolfo et al., (1997) Suárez, R., Álvarez-Ramírez, J., and Solís-Daun, J. (1997). Linear systems with bounded inputs: global stabilization with eigenvalue placement. International Journal of Robust and Nonlinear Control, 7(9):835–845.
* Tarbouriech et al., (2011) Tarbouriech, S., Garcia, G., Silva, J. M. G. D., and Queinnec, I. (2011). Stability and Stabilization of Linear Systems with Saturating Actuators. Springer London.
|
# Tunneling Under the Influence of Quantum Gravity in Black Rings
Riasat Ali<EMAIL_ADDRESS>Department of Mathematics, GC University
Faisalabad Layyah Campus, Layyah-31200, Pakistan Kazuharu Bamba
<EMAIL_ADDRESS>Division of Human Support System, Faculty of
Symbiotic Systems Science, Fukushima University, Fukushima 960-1296, Japan
Muhammad Asgher<EMAIL_ADDRESS>Department of Mathematics, The Islamia
University of Bahawalpur, Bahawalpur-63100, Pakistan Syed Asif Ali Shah
<EMAIL_ADDRESS>Department of Mathematics and Statistics, The
University of Lahore 1-Km Raiwind Road, Sultan Town Lahore 54000, Pakistan
###### Abstract
We explore the Lagrangian equation in the background of the generalized
uncertainty principle. The tunneling radiation through the black ring horizon
is observed. We investigate the tunneling radiation through the Hamilton-
Jacobi method for solutions of Einstein-Maxwell-dilaton gravity theory. The
radiation of black rings without back reaction and self-interaction of
particles is studied. Furthermore, we consider the quantum gravity effect on
the stability of black rings.
Neutral black ring; Dipole black ring; Spin-1 particles; Lagrangian gravity
equation; Hawking radiation; Stability of rings
## I Introduction
The five-dimensional rotating black ring solution was analyzed in 1 . It was
observed that the angular momentum and mass of a black ring can be the same as
those of a black hole. Many researchers have analyzed the metric information
for the well-known black rings 2 ; 3 ; 4 and studied black rings with
$S^{2}\times S^{1}$ horizon topology. The nature of the black ring space-time
for the nonlinear $\sigma$-model and the normal horizon was studied in 5 .
The evaporation of black rings via tunneling radiation is one of the
significant phenomena. The tunneling method for Dirac spin up/down particles
in the five-dimensional neutral black ring (NBR) space-time has been studied
6 . That investigation is based on the tunneling picture, in which the
imaginary part of the action of a particle crossing the black ring horizon is
computed. The NBR has horizon topology $S^{1}\times S^{2}$. In the absence of
a balancing centrifugal force, the NBR solution necessarily contains a conical
singularity supporting the ring against its self-gravity. The tunneling
radiation of different types of particles from five-dimensional black rings
has been investigated by applying the Hamilton-Jacobi method and the WKB
approximation 7 , and further study of the quantum tunneling radiation of
black rings was suggested there. The tunneling rate is usually obtained by
computing the imaginary part of the particle action; in this way the tunneling
radiation of particles and the corresponding Hawking temperatures of black
rings have been obtained.
The tunneling radiation of boson particles of different spins has been
studied by applying the Hamilton-Jacobi method to the Lagrangian field
equation, and the tunneling probability has been evaluated for well-known
black holes 8 ; 9 ; 10 ; 11 . Also, the WKB approximation has been applied to
a general black hole and the corresponding temperature computed.
The generalized uncertainty principle (GUP) plays a significant role in
analyzing quantum gravitational effects. To take the gravity effects into
account, the tunneling radiation and the Lagrangian equation are modified by
including GUP corrections. Black hole and rotating black ring thermodynamics
have furthermore been studied with GUP effects 12 ; 13 ; 14 ; 15 ; 16 ; 16b .
The paper is organized as follows: In Sec. II, we analyze the metric of the
NBR and study the tunneling radiation from the NBR. Section III provides the
tunneling radiation of the dipole black ring (DBR) using the same method as in
Sec. II. Section IV contains the graphical behavior of the temperature and
analyzes the GUP effect on it. In Sec. V we present the discussion and
conclusions.
## II Neutral Black Ring
Quantum gravity effects, caused by particles of different spins, become
important at very high densities and are relevant near the singularities of
black rings. Owing to quantum gravity, particle production may occur, and
every black ring may produce radiation inside and outside its horizon. Liu 17
analyzed the first law of thermodynamics for the black ring and related the
emission probability to the black ring entropy.
In this section, we discuss the boson tunneling radiation for the
five-dimensional NBR space-time. We study the Lagrangian equation with GUP
corrections by applying the WKB approximation. The study of black ring
thermodynamics has implications for gravitational physics. The metric of the
NBR 6 is given by
$\displaystyle ds^{2}$ $\displaystyle=$
$\displaystyle-\frac{G(y)}{G(x)}[dt-C(\lambda,\nu)R\frac{1+y}{G(y)}d\psi]^{2}+\frac{R^{2}}{(y-x)^{2}}G(x)$
(1)
$\displaystyle\left[-\frac{F(y)}{G(y)}d\psi^{2}-\frac{1}{F(y)}dy^{2}+\frac{1}{F(x)}dx^{2}+\frac{F(x)}{G(x)}d\phi^{2}\right],$
and
$\displaystyle G(\xi)$ $\displaystyle=$
$\displaystyle\lambda\xi+1,~{}~{}~{}F(\xi)=(1+\nu\xi)(1-\xi^{2}),$
$\displaystyle C(\lambda,\nu)$ $\displaystyle=$
$\displaystyle\sqrt{(\lambda-\nu)\lambda\frac{1+\lambda}{1-\lambda}}.$
Here $\nu$ and $\lambda$ are dimensionless parameters taking values in the
range $1>\lambda\geq\nu>0$. To avoid a conical singularity at $x=1$, $\lambda$
and $\nu$ must be related by $\lambda=\frac{2\nu}{\nu^{2}+1}.$ The coordinates
$\psi$ and $\phi$ parameterize the two cycles of the black ring, while $x$ and
$y$ take values in the ranges $-1\leq x\leq 1$ and $-\infty<y\leq-1$. The
horizon is located at $y=y_{h}=-\frac{1}{\nu}$. The parameter $R$ is a fixed
length scale. The black ring mass is $M=\frac{3\pi\lambda
R^{2}}{2(1-\nu)^{2}}$. In addition, the three space-time coordinates $t$,
$\phi$ and $\psi$ are associated with Killing vectors, and the 5D space-time
coordinates are taken as $x^{\mu}=(t,\phi,y,x,\psi)$. Next, we analyze boson
particles tunneling from the NBR. The metric (1) can be rewritten as
$ds^{2}=Udt^{2}+Vd\phi^{2}+Wdy^{2}+Xdx^{2}+Yd\psi^{2}+2Zdtd\psi,$ (2)
where
$\displaystyle U$ $\displaystyle=$
$\displaystyle-\frac{G(y)}{G(x)},~{}~{}~{}~{}W=\frac{G(x)R^{2}}{(y-x)^{2}F(y)},~{}~{}~{}V=\frac{F(x)R^{2}}{(x-y)^{2}},$
$\displaystyle Y$ $\displaystyle=$
$\displaystyle-\frac{C^{2}(\lambda,\nu)(1+y)^{2}R^{2}}{G(x)G(y)}-\frac{F(y)G(x)R^{2}}{(y-x)^{2}G(y)},$
$\displaystyle X$ $\displaystyle=$
$\displaystyle\frac{G(x)R^{2}}{(y-x)^{2}F(x)},~{}~{}~{}Z=\frac{C(\lambda,\nu)(1+y)R}{G(x)}.$
Now we focus on analyzing boson particles tunneling from the NBR. In curved
space-time, uncharged boson particles satisfy the following Lagrangian
gravity equation 12 ; 13
$\displaystyle\partial_{\mu}(\sqrt{-g}\chi^{\nu\mu})+\sqrt{-g}\frac{m^{2}}{\hbar^{2}}\chi^{\nu}+\beta\hbar^{2}\partial_{0}\partial_{0}\partial_{0}(\sqrt{-g}g^{00}\chi^{0\nu})$
(3) $\displaystyle-$
$\displaystyle\beta\hbar^{2}\partial_{i}\partial_{i}\partial_{i}(\sqrt{-g}g^{ii}\chi^{i\nu})=0,$
and
$\displaystyle\chi_{\nu\mu}$ $\displaystyle=$
$\displaystyle(1-\beta{\hbar^{2}\partial_{\nu}^{2}})\partial_{\nu}\chi_{\mu}-(1-\beta{\hbar^{2}\partial_{\mu}^{2}})\partial_{\mu}\chi_{\nu}.$
Here $\beta$, $m$ and $\chi_{\nu\mu}$ are the quantum gravity parameter, the
particle mass and the anti-symmetric tensor, respectively, and $\partial_{0}$
and $\partial_{i}$ denote the partial derivatives with respect to $t$ and to
$i=(\phi,y,x,\psi)$, respectively.
The $\chi$ components are calculated as
$\displaystyle\chi^{0}=\frac{V}{UV-Z^{2}}\chi_{0}-\frac{Z}{UV-Z^{2}}\chi_{1},~{}~{}\chi^{1}=\frac{-Z}{UV-Z^{2}}\chi_{0}-\frac{U}{UV-Z^{2}}\chi_{1},$
$\displaystyle\chi^{2}=\frac{1}{W}\chi_{2},~{}~{}\chi^{3}=\frac{1}{X}\chi_{3},~{}~{}\chi^{4}=\frac{1}{Y}\chi_{4},~{}~{}\chi^{01}=\frac{Z^{2}\chi_{10}+UV\chi_{01}}{(UV-Z^{2})^{2}},$
$\displaystyle\chi^{02}=\frac{V\chi_{02}-Z\chi_{12}}{W(UV-Z^{2})},~{}~{}\chi^{03}=\frac{-Z\chi_{03}+U\chi_{13}}{X(UV-Z^{2})},~{}~{}\chi^{04}=\frac{-Z^{2}\chi_{04}+U\chi_{14}}{Y(UV-Z^{2})},$
$\displaystyle\chi^{12}=-\frac{Z\chi_{02}+U\chi_{12}}{W(UV-Z^{2})},~{}~{}\chi^{13}=\frac{-Z\chi_{03}+U\chi_{13}}{X(UV-Z^{2})},~{}~{}\chi^{14}=\frac{-Z\chi_{04}+U\chi_{14}}{Y(UV-Z^{2})},$
$\displaystyle\chi^{23}=\frac{1}{WX}\chi_{23},~{}~{}\chi^{24}=\frac{1}{WY}\chi_{24},~{}~{}\chi^{34}=\frac{1}{XY}\chi_{34}.$
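The factors $1/(UV-Z^{2})$ in the components above arise from inverting the $2\times 2$ block that couples the two mixed directions of the metric. For a symmetric block $\begin{pmatrix}U&Z\\Z&V\end{pmatrix}$ the inverse is $\frac{1}{UV-Z^{2}}\begin{pmatrix}V&-Z\\-Z&U\end{pmatrix}$, which can be confirmed numerically (the values of $U$, $Z$, $V$ below are arbitrary):

```python
import numpy as np

U, Z, V = -1.3, 0.4, 2.1        # illustrative values for the coupled block
block = np.array([[U, Z],
                  [Z, V]])
det = U * V - Z**2              # the combination appearing in the χ components
inv_closed_form = np.array([[V, -Z],
                            [-Z, U]]) / det

# The closed form agrees with the numerical inverse of the block.
assert np.allclose(np.linalg.inv(block), inv_closed_form)
```

The diagonal entries of the metric ($W$, $X$, and the remaining ones) invert componentwise, which gives the simple factors $1/W$, $1/X$, $1/Y$ in the other components.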
The WKB approximation is 18
$\chi_{\nu}=c_{\nu}\exp[\frac{i}{\hbar}\Theta_{0}(t,\phi,y,x,\psi)+\Sigma\hbar^{n}\Theta_{n}(t,\phi,y,x,\psi)].$
Here, $\Theta_{0}$ and $\Theta_{n}$ are arbitrary functions and $c_{\nu}$ is a
constant. Neglecting the higher order terms $n=1,2,3,\ldots$ and applying
Eq. (3), we obtain the field equations; applying the separation of variables
technique 6 , we can take
$\Theta_{0}=-Et+j\phi+I(x,y)+L\psi+K.$ (4)
Here, $E$ and $K$ are the particle energy and a complex constant, while $j$
and $L$ denote the angular momenta of the particle associated with the
directions $\phi$ and $\psi$, respectively. From the set of field equations we
obtain a matrix equation
$G(c_{0},c_{1},c_{2},c_{3},c_{4})^{T}=0,$
where $G$ is a $5\times 5$ matrix whose elements are given as follows:
$\displaystyle G_{00}$ $\displaystyle=$ $\displaystyle
Z^{2}\tilde{U}[J_{1}^{2}+\beta J_{1}^{4}]-UV\tilde{U}[J_{1}^{2}+\beta
J_{1}^{4}]-\frac{V}{W}[I_{1}^{2}+\beta I_{1}^{4}]+\frac{Z}{X}[I_{2}^{2}+\beta
I_{2}^{4}]$ $\displaystyle+$ $\displaystyle\frac{Z}{Y}[L_{1}^{2}+\beta
L_{1}^{4}]-m^{2}V,$ $\displaystyle G_{01}$ $\displaystyle=$ $\displaystyle
Z^{2}\tilde{U}[J_{1}+\beta J_{1}^{3}]E-UV\tilde{U}[J_{1}+\beta
J_{1}^{3}]E+\frac{Z}{W}[I_{1}^{2}+\beta I_{1}^{4}]+\frac{U}{X}[I_{2}^{2}+\beta
I_{2}^{4}]$ $\displaystyle-$ $\displaystyle\frac{U}{Y}[L_{1}^{2}+\beta
L_{1}^{4}]-m^{2}Z,$ $\displaystyle G_{02}$ $\displaystyle=$
$\displaystyle-\frac{V}{W}[E+\beta
E^{3}]I_{1}-\frac{Z\tilde{U}}{W}[J_{1}+\beta J_{1}^{3}]I_{1},$ $\displaystyle
G_{03}$ $\displaystyle=$ $\displaystyle\frac{V}{X}[E+\beta
E^{3}]I_{2}+\frac{U\tilde{U}}{X}[J_{1}+\beta J_{1}^{3}]I_{2},$ $\displaystyle
G_{04}$ $\displaystyle=$ $\displaystyle\frac{Z}{Y}[E+\beta
E^{3}]L_{1}+\frac{U\tilde{U}}{Y}[J_{1}+\beta J_{1}^{3}]L_{1},$ $\displaystyle
G_{10}$ $\displaystyle=$ $\displaystyle Z^{2}\tilde{U}[J_{1}+\beta
J_{1}^{3}]E-UV\tilde{U}[J_{1}+\beta J_{1}^{3}]E+\frac{Z}{W}[I_{1}^{2}+\beta
I_{1}^{4}]+\frac{Z}{X}[I_{2}^{2}+\beta I_{2}^{4}]$ $\displaystyle+$
$\displaystyle\frac{Z}{Y}[L_{1}^{2}+\beta L_{1}^{4}]+m^{2}Z,$ $\displaystyle
G_{11}$ $\displaystyle=$ $\displaystyle Z^{2}\tilde{U}[E^{2}+\beta
E^{4}]-UV\tilde{U}[E^{2}+\beta E^{4}]-\frac{U}{W}[I_{1}^{2}+\beta
I_{1}^{4}]-\frac{U}{X}[I_{2}^{2}+\beta I_{2}^{4}]I_{2}$ $\displaystyle-$
$\displaystyle\frac{U}{Y}[L_{1}^{2}+\beta L_{1}^{4}]+m^{2}U,$ $\displaystyle
G_{12}$ $\displaystyle=$ $\displaystyle\frac{Z}{W}[E+\beta
E^{3}]I_{1}+\frac{U\tilde{U}}{W}[J_{1}+\beta J_{1}^{3}]I_{1},$ $\displaystyle
G_{13}$ $\displaystyle=$ $\displaystyle\frac{Z}{X}[E+\beta
E^{3}]I_{1}+\frac{U\tilde{U}}{X}[J_{1}+\beta J_{1}^{3}]I_{2},$ $\displaystyle
G_{14}$ $\displaystyle=$ $\displaystyle\frac{Z}{Y}[E+\beta
E^{3}]L_{1}+\frac{U\tilde{U}}{Y}[J_{1}+\beta J_{1}^{3}]L_{1},$ $\displaystyle
G_{20}$ $\displaystyle=$ $\displaystyle-V\tilde{U}[I_{1}+\beta
I_{1}^{3}]E-Z\tilde{U}[I_{1}+\beta I_{1}^{3}]J_{1},$ $\displaystyle G_{21}$
$\displaystyle=$ $\displaystyle Z\tilde{U}[I_{1}+\beta
I_{1}^{3}]E+U\tilde{U}[I_{1}+\beta I_{1}^{3}]J_{1},$ $\displaystyle G_{22}$
$\displaystyle=$ $\displaystyle-V\tilde{U^{2}}[E^{2}+\beta
E^{4}]-Z\tilde{U^{2}}[J_{1}+\beta J_{1}^{3}]E-U\tilde{U}[J_{1}^{2}+\beta
J_{1}^{4}]-m^{2}$ $\displaystyle-$ $\displaystyle\frac{1}{X}[I_{2}^{2}+\beta
I_{2}^{4}]-\frac{1}{Y}[L_{1}^{2}+\beta L_{1}^{4}],$ $\displaystyle G_{23}$
$\displaystyle=$ $\displaystyle\frac{1}{X}[I_{1}+\beta
I_{1}^{3}]I_{2},~{}~{}G_{24}=\frac{1}{Y}[I_{1}+\beta I_{1}^{3}]L_{1},$
$\displaystyle G_{30}$ $\displaystyle=$ $\displaystyle-V\tilde{U}[I_{2}+\beta
I_{2}^{3}]E-Z\tilde{U}[I_{2}+\beta I_{2}^{3}]J_{1},$ $\displaystyle G_{31}$
$\displaystyle=$ $\displaystyle Z\tilde{U}[I_{2}+\beta
I_{2}^{3}]E+U\tilde{U}[I_{2}+\beta I_{2}^{3}]J_{1},$ $\displaystyle G_{32}$
$\displaystyle=$ $\displaystyle\frac{1}{W}[I_{2}+\beta I_{2}^{3}]I_{1},$
$\displaystyle G_{33}$ $\displaystyle=$ $\displaystyle-V\tilde{U}[E^{2}+\beta
E^{4}]-Z\tilde{U}[J_{2}+\beta J_{1}^{4}]-\frac{1}{W}[I_{1}^{2}+\beta
I_{1}^{4}]-\frac{1}{Y}[L_{1}^{2}+\beta L_{1}^{4}]-m^{2},$ $\displaystyle
G_{34}$ $\displaystyle=$ $\displaystyle\frac{1}{Y}[I_{2}+\beta
I_{2}^{3}]L_{1},$ $\displaystyle G_{40}$ $\displaystyle=$
$\displaystyle-V\tilde{U}[L_{1}+\beta L_{1}^{3}]E-Z\tilde{U}[L_{1}+\beta
L_{1}^{3}]J_{1},$ $\displaystyle G_{41}$ $\displaystyle=$ $\displaystyle
Z\tilde{U}[L_{1}+\beta L_{1}^{3}]E+U\tilde{U}[L_{1}+\beta L_{1}^{3}]J_{1},$
$\displaystyle G_{42}$ $\displaystyle=$ $\displaystyle\frac{1}{W}[L_{1}+\beta
L_{1}^{3}]I_{1},~{}~{}G_{43}=\frac{1}{X}[L_{1}+\beta L_{1}^{3}]I_{2},$
$\displaystyle G_{44}$ $\displaystyle=$ $\displaystyle-V\tilde{U}[E^{2}+\beta
E^{4}]-Z\tilde{U}[J_{1}+\beta J_{1}^{3}]-Z\tilde{U}[E+\beta
E^{3}]J_{1}-U\tilde{U}[J_{1}^{2}+\beta J_{1}^{4}]$
$\displaystyle-\frac{1}{W}[I_{1}^{2}+\beta
I_{1}^{4}]-\frac{1}{X}[I_{2}^{2}+\beta I_{2}^{4}]-m^{2},$
where $\tilde{U}=\frac{1}{UV-Z^{2}},~{}J_{1}=\partial_{\phi}\Theta_{0}$,
$I_{1}=\partial_{x}\Theta_{0}$, $I_{2}=\partial_{y}\Theta_{0}$ and
$L_{1}=\partial_{\psi}\Theta_{0}$. For a non-trivial solution, the determinant
of $G$ must vanish, and we get
$ImI_{\pm}=\pm\int\sqrt{\frac{E^{2}+X_{1}[1+\beta\frac{X_{2}}{X_{1}}]}{-\frac{UV-Z^{2}}{VX}}}dy,$
(5)
where the positive and negative signs denote outgoing and ingoing boson
particles, respectively, while the functions $X_{1}$ and $X_{2}$
are defined as
$\displaystyle X_{1}$ $\displaystyle=$ $\displaystyle-ZXE\tilde{U}J_{1}-U\tilde{U}XJ_{1}^{2}-\frac{X}{Y}L_{1}^{2}-m^{2}X,$
$\displaystyle X_{2}$ $\displaystyle=$ $\displaystyle-VXE^{4}\tilde{U}-ZXE\tilde{U}J_{1}^{3}-U\tilde{U}XJ_{1}^{4}-I_{2}^{4}-\frac{X}{Y}L_{1}^{4}.$
Expanding the functions $A(y)$ and $B(y)$ in Taylor series near the NBR
horizon, we get
$A(y_{+})\simeq\acute{A}(y_{+})(y-y_{+}),~{}~{}~{}~{}B(y_{+})\simeq\acute{B}(y_{+})(y-y_{+}).$
(6)
Applying the Eq. (6) in Eq. (5), one can take that the resulting solution has
pole at $y=y_{+}$. For the computation of the temperature by applying
tunneling phenomenon, it is assumed that to take the singularity by particular
contour around the pole. In our investigation, the coordinates of the NBR
metric, the tunnel of outgoing boson particles can be found by assuming an
infinitesimal semi-circle under the pole $y=y_{+}$, as the incoming boson
particles this contour is assumed over the pole. Since applying Eqs. (5) and
(6), integrating the lead field equation around the NBR pole, we have
$ImI_{\pm}=\pm\frac{\pi E}{2\kappa(y_{+})}[1+\Xi\beta].$ (7)
The surface gravity of NBR is given as
$\kappa(y_{+})=\frac{R(\lambda
x+1)\sqrt{\lambda[a^{2}(-2y+\nu-3y^{2})+2ab]}}{2(\lambda y+1)a^{2}b}.$ (8)
Here $a=x-y$ and $b=y^{2}-\nu y+\nu y^{3}-1$. Since boson particles carry spin-1, measuring the spin along the $y$ direction gives two kinds of particles: a spin-up kind moving along the $y$ direction and a spin-down kind moving in the opposite direction, with
$(I_{+}(y)=-I_{-}(y))$.
The tunneling probability of NBR is given as
$\displaystyle\Gamma$ $\displaystyle=$
$\displaystyle\frac{\Gamma_{emission}}{\Gamma_{absorption}}=\frac{\exp(-2ImI_{+}(x)-2ImI_{+}(y)-2Im\Theta)}{\exp(-2ImI_{-}(x)-2ImI_{-}(y)-2Im\Theta)}=\exp{\left(-4ImI_{+}(y)\right)}$
(9) $\displaystyle=$ $\displaystyle\exp\left(-\frac{4\pi(\lambda
y+1)a^{2}bE[1+\Xi\beta]}{R(\lambda
x+1)\sqrt{\lambda[a^{2}(-2y+\nu-3y^{2})+2ab]}}\right).$
Note that Eq. (7) holds near the horizon of the NBR along the $y$ direction.
By equating Eq. (9) with the Boltzmann factor $\exp[-\beta_{1}E]$, one can deduce the temperature $T_{H}=\frac{1}{\beta_{1}}$ at the outer NBR horizon. From this we obtain the NBR temperature
$T_{H}=\frac{R(\lambda x+1)\sqrt{\lambda[a^{2}(-2y+\nu-3y^{2})+2ab]}}{4\pi(\lambda y+1)a^{2}b}[1+\beta\Xi]^{-1}.$ (10)
The Hawking temperature of the NBR depends on the parameters $\nu$, $R$, $\beta$ and $\lambda$. The temperature at which boson particles tunnel through the horizon differs from the temperature at which charged boson particles tunnel through the NBR horizon. Note that the resulting Hawking temperature (10) is derived for spin-up boson particles only. The spin-down case fully corresponds to the spin-up solution but with the opposite direction, which means that spin-down and spin-up boson particles are radiated at the same rate.
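As a numerical illustration (not part of the original derivation), the surface gravity of Eq. (8) can be evaluated directly; here the standard normalization $T_{H}=\kappa/2\pi$ modified by the GUP factor $[1+\beta\Xi]^{-1}$ is assumed, so only the dependence on $\beta$ should be read quantitatively:

```python
import math

# Parameters taken from Fig. 1 (illustrative values).
R, LAM, NU, X, XI = 1.0, 0.9, 0.75, -0.5, 1.0

def surface_gravity(y, R=R, lam=LAM, nu=NU, x=X):
    """Surface gravity kappa(y_+) of the NBR, Eq. (8)."""
    a = x - y
    b = y**2 - nu * y + nu * y**3 - 1.0
    root = math.sqrt(lam * (a**2 * (-2 * y + nu - 3 * y**2) + 2 * a * b))
    return R * (lam * x + 1) * root / (2 * (lam * y + 1) * a**2 * b)

def hawking_temperature(y, beta, Xi=XI):
    """GUP-corrected temperature, assumed here as kappa/(2*pi*(1+beta*Xi));
    the overall normalization is the standard convention, the beta factor
    follows Eq. (10)."""
    return surface_gravity(y) / (2 * math.pi * (1 + beta * Xi))
```

For the Fig. 1 parameters, the quantum gravity parameter enters only through the overall factor $1/(1+\beta\Xi)$, so increasing $\beta$ from 0 to 100 (with $\Xi=1$) suppresses the temperature by a factor of 101 at every $y$.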
## III Dipole Black Ring
In this section, we analyze the Hawking temperature of boson particles tunneling from the DBR. The DBR leads to the same action as the NBR, since the two rings exhibit physically similar behavior. The DBR is an important physical object in gravity theory. The five-dimensional DBR was constructed in [6] and its metric takes the form
$\displaystyle ds^{2}$ $\displaystyle=$
$\displaystyle-\frac{G(y)}{G(x)}\left(\frac{K(x)}{K(y)}\right)^{\frac{N}{3}}\left[dt-C(\lambda,\nu)R\frac{y+1}{G(y)}d\psi\right]^{2}$
(11)
$\displaystyle+\frac{R^{2}}{(x-y)^{2}}G(x)\left(K(x)K^{2}(y)\right)^{\frac{N}{3}}$
$\displaystyle\times\left[\frac{F(y)}{G(y)K^{N}(y)}d\psi^{2}-\frac{dy^{2}}{F(y)}+\frac{dx^{2}}{F(x)}+\frac{F(x)}{G(x)K^{N}(x)}d\phi^{2}\right],$
where $C(\lambda,\nu)$, $F(\xi)$ and $G(\xi)$ are the same as for the NBR and $K(\xi)=1-\xi\mu$ with $0\leq\mu<1$. The dilaton coupling constant is related to the dimensionless constant $N$ as $\alpha^{2}=(\frac{4}{N}-\frac{4}{3})$ with $0<N\leq 3$. The event horizon of the DBR is located at $y=y_{K}=-\frac{1}{\nu}$. We analyze the tunneling and temperature of boson particles at the horizon of the DBR.
The tunneling rate of a boson particle at the DBR horizon is given as
$\displaystyle\acute{\Gamma}$ $\displaystyle=$
$\displaystyle\exp\left(-4\pi\frac{E[1+\Xi\beta]}{\sqrt{fd(g+h)(e+4ab-2al)}}\right).$
(12)
Here, $a=x-y$, $b=y^{2}-\nu y+\nu y^{3}-1$, $d=R^{2}(\lambda x+1)[(1-\nu
x)(1-\nu y)]^{\frac{N}{3}}$, $e=\frac{-2N\nu}{3(1-\nu y)}$, $f=(\frac{1-\nu
x}{1-\nu y})^{\frac{N}{3}},$ $g=\frac{N\nu(\lambda y+1)}{3(\lambda x+1)(1-\nu
y)}$ and $h=\frac{\lambda}{\lambda x+1}.$
Now we compute the Hawking temperature,
$\displaystyle\acute{T}_{H}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{fd(g+h)(e+4ab-2al)}}{4\pi}[1+\Xi\beta]^{-1}.$ (13)
This solution has been computed by applying the Hamilton-Jacobi method above, and the boson particles leaving the DBR horizon are treated only for the vector case. We consider only spin-up boson particles; applying the same method yields the analogous solution for spin-down boson particles moving in the opposite direction.
## IV Gravitational Effect on Temperature
In this section, we study the physical importance of quantum gravity and analyze the impact of the quantum gravity parameter on the stability and instability of the NBR and DBR.
### IV.1 $T_{H}$ versus $y$
In this subsection, we take the quantum gravity parameter in the range $100\leq\beta\leq 300$.
* •
Figure 1 shows the graph of $T_{H}$ versus $y$ for varying quantum gravity parameter and fixed values of the other parameters.
(i) We find that the Hawking temperature shows only a small variation as the gravity parameter decreases.
(ii) As $\beta$ undergoes a gradually larger change, the behavior indicates the unstable condition of the NBR.
Thus, for small values of $\beta$ the NBR is stable, but as $\beta$ increases the NBR exhibits unstable behavior.
* •
Figure 2 shows the behavior of the temperature for varying gravity parameter.
(i) The Hawking temperature decreases steadily for small values of $\beta$.
(ii) As the gravity parameter increases, the Hawking temperature gradually increases and eventually tends to infinity.
Thus, in the range $0\leq y\leq 2.7$ the DBR is stable and the Hawking temperature remains constant, while in the range $2.7<y<\infty$ the DBR is unstable. We observe that the Hawking temperature decreases for $\beta<100$ and increases for $\beta>100$, while for $\beta=100$ the temperature remains constant over a particular range of $y$.
Figure 1: $T_{H}$ versus $y$ for
$R=1,~{}~{}\lambda=0.9,~{}~{}\nu=0.75,~{}~{}x=-0.5$ and $\Xi=1.$
Figure 2: $T^{\prime}_{H}$ versus $y$ for
$R=1,~{}~{}N=3,~{}~{}\lambda=0.9,~{}~{}\nu=0.75,~{}~{}x=-0.5$ and $\Xi=1.$
## V Conclusions
The tunneling spectra of Dirac and boson particles for black ring space-times have been evaluated in different coordinate systems [6, 7]. The coordinate system in the rotating frame does not affect the value of $T_{H}$. We have evaluated boson particle tunneling from the black rings using the Lagrangian gravity equation with the WKB approximation in the Hamilton-Jacobi method.
The WKB approximation tells us that the tunneling probability for the classically forbidden trajectory from the inner to the outer horizon is associated with the imaginary part of the action of the emitted boson particles across the black ring horizon. First, we analyzed the imaginary part of the action, since the tunneling rate is proportional to it. Moreover, we compared the Hawking temperature values obtained by invoking the Boltzmann factor in both cases and analyzed the spectra in general. We considered boson particles with both spin-up and spin-down along the $y$-direction. Tunneling of radiation was studied assuming conservation of energy and charge, including quantum gravity effects.
For the 5D black ring, the temperature $T_{H}$ given by Eq. (10) depends on $\lambda$, $x$, $y$, $a$, $b$, $R$, $\nu$ and $\beta$, and is similar to $\acute{T}_{H}$ as given in Eq. (13) when $N=3$ and $\mu=0$. The modified temperatures are associated with the quantum gravity parameter and the geometry of the black ring. In the absence of quantum gravity, the standard tunneling rate and temperature are recovered. We have found that the Hawking temperature rises as the horizon shrinks in Fig. 1 (which is physical), while it increases as the horizon grows in Fig. 2 (which is non-physical).
## Acknowledgments
The work of KB was supported in part by the JSPS KAKENHI Grant Number JP
25800136 and Competitive Research Funds for Fukushima University Faculty
(19RI017).
## References
* (1) R. Emparan, H. S. Reall, Phys. Rev. Lett. 88(2002)101101.
* (2) H. Elvang, Phys. Rev. D 68(2003)124016.
* (3) H. Elvang, JHEP 11(2003)035.
* (4) Y. Morisawa, S. Tomizawa, Y. Yasui, Phys. Rev. D 77(2008)064019.
* (5) M. Rogatko, Phys. Rev. D 77(2008)124037.
* (6) Q. Q. Jiang, Phys. Rev. D 78(2008)044009.
* (7) W. Javed, R. Ali, G. Abbas, Can. J. Phys. 97(2019)176.
* (8) X. Q. Li and G. R. Chen, Phys. Lett. B 751(2015)34.
* (9) W. Javed, G. Abbas, R. Ali, Eur. Phys. J. C 77(2017)296.
* (10) A. Övgün, W. Javed, R. Ali, Advances in High Energy Physics 2018(2018)11.
* (11) W. Javed, R. Ali, R. Babar, A. Övgün, Eur. Phys. J. Plus 134(2019)511.
* (12) R. Ali, K. Bamba, S. A. A. Shah, Symmetry 631(2019)11.
* (13) W. Javed, R. Ali, R. Babar, A. Övgün, Chinese Physics C 44, No. 1(2020)015104.
* (14) W. Javed, R. Babar, A. Övgün, Mod. Phys. Lett. A 34(2019)1950057.
* (15) R. Babar, W. Javed, A. Övgün, Mod. Phys. Lett. A 35, No. 13(2020)2050104.
* (16) R. Ali, K. Bamba, M. Asgher, M. F. Malik, S. A. A. Shah, Symmetry 1165(2020)12.
* (17) R. Ali, M. Asgher and M. F. Malik, Mod. Phys. Lett. A 35, No. 27(2020)2050225.
* (18) Z. Liu, Commun. Theor. Phys. 47(2007)835.
* (19) T. Shivalingaswamy, B. A. Kagali, European J. Physics Education 2(2011)1309.
PandaX-II Collaboration
# Search for Light Dark Matter-Electron Scatterings in the PandaX-II
Experiment
Chen Cheng School of Physics, Sun Yat-Sen University, Guangzhou 510275, China
Pengwei Xie Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai,
200240, China Abdusalam Abdukerim Wei Chen School of Physics and Astronomy,
Shanghai Jiao Tong University, Key Laboratory for Particle Astrophysics and
Cosmology (MOE), Shanghai Key Laboratory for Particle Physics and Cosmology,
Shanghai 200240, China Xun Chen School of Physics and Astronomy, Shanghai
Jiao Tong University, Key Laboratory for Particle Astrophysics and Cosmology
(MOE), Shanghai Key Laboratory for Particle Physics and Cosmology, Shanghai
200240, China Shanghai Jiao Tong University Sichuan Research Institute,
Chengdu 610213, China Yunhua Chen Yalong River Hydropower Development
Company, Ltd., 288 Shuanglin Road, Chengdu 610051, China Xiangyi Cui Tsung-
Dao Lee Institute, Shanghai Jiao Tong University, Shanghai, 200240, China
Yingjie Fan School of Physics, Nankai University, Tianjin 300071, China
Deqing Fang Changbo Fu Key Laboratory of Nuclear Physics and Ion-beam
Application (MOE), Institute of Modern Physics, Fudan University, Shanghai
200433, China Mengting Fu School of Physics, Peking University, Beijing
100871, China Lisheng Geng School of Physics, Beihang University, Beijing
100191, China International Research Center for Nuclei and Particles in the
Cosmos & Beijing Key Laboratory of Advanced Nuclear Materials and Physics,
Beihang University, Beijing 100191, China Karl Giboni Linhui Gu School of
Physics and Astronomy, Shanghai Jiao Tong University, Key Laboratory for
Particle Astrophysics and Cosmology (MOE), Shanghai Key Laboratory for
Particle Physics and Cosmology, Shanghai 200240, China Xuyuan Guo Yalong
River Hydropower Development Company, Ltd., 288 Shuanglin Road, Chengdu
610051, China Ke Han School of Physics and Astronomy, Shanghai Jiao Tong
University, Key Laboratory for Particle Astrophysics and Cosmology (MOE),
Shanghai Key Laboratory for Particle Physics and Cosmology, Shanghai 200240,
China Changda He School of Physics and Astronomy, Shanghai Jiao Tong
University, Key Laboratory for Particle Astrophysics and Cosmology (MOE),
Shanghai Key Laboratory for Particle Physics and Cosmology, Shanghai 200240,
China Shengming He Yalong River Hydropower Development Company, Ltd., 288
Shuanglin Road, Chengdu 610051, China Di Huang School of Physics and
Astronomy, Shanghai Jiao Tong University, Key Laboratory for Particle
Astrophysics and Cosmology (MOE), Shanghai Key Laboratory for Particle Physics
and Cosmology, Shanghai 200240, China Yan Huang Yalong River Hydropower
Development Company, Ltd., 288 Shuanglin Road, Chengdu 610051, China Yanlin
Huang School of Medical Instrument and Food Engineering, University of
Shanghai for Science and Technology, Shanghai 200093, China Zhou Huang
School of Physics and Astronomy, Shanghai Jiao Tong University, Key Laboratory
for Particle Astrophysics and Cosmology (MOE), Shanghai Key Laboratory for
Particle Physics and Cosmology, Shanghai 200240, China Xiangdong Ji
Department of Physics, University of Maryland, College Park, Maryland 20742,
USA Yonglin Ju School of Mechanical Engineering, Shanghai Jiao Tong
University, Shanghai 200240, China Shuaijie Li Tsung-Dao Lee Institute,
Shanghai Jiao Tong University, Shanghai, 200240, China Qing Lin State Key
Laboratory of Particle Detection and Electronics, University of Science and
Technology of China, Hefei 230026, China Department of Modern Physics,
University of Science and Technology of China, Hefei 230026, China Huaxuan
Liu School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai
200240, China Jianglai Liu<EMAIL_ADDRESS>School of Physics and
Astronomy, Shanghai Jiao Tong University, Key Laboratory for Particle
Astrophysics and Cosmology (MOE), Shanghai Key Laboratory for Particle Physics
and Cosmology, Shanghai 200240, China Tsung-Dao Lee Institute, Shanghai Jiao
Tong University, Shanghai, 200240, China Shanghai Jiao Tong University
Sichuan Research Institute, Chengdu 610213, China Liqiang Liu Yalong River
Hydropower Development Company, Ltd., 288 Shuanglin Road, Chengdu 610051,
China Xiaoying Lu Research Center for Particle Science and Technology,
Institute of Frontier and Interdisciplinary Science, Shandong University,
Qingdao 266237, Shandong, China Key Laboratory of Particle Physics and
Particle Irradiation of Ministry of Education, Shandong University, Qingdao
266237, Shandong, China Wenbo Ma School of Physics and Astronomy, Shanghai
Jiao Tong University, Key Laboratory for Particle Astrophysics and Cosmology
(MOE), Shanghai Key Laboratory for Particle Physics and Cosmology, Shanghai
200240, China Yugang Ma Key Laboratory of Nuclear Physics and Ion-beam
Application (MOE), Institute of Modern Physics, Fudan University, Shanghai
200433, China Yajun Mao School of Physics, Peking University, Beijing
100871, China Yue Meng<EMAIL_ADDRESS>School of Physics and Astronomy,
Shanghai Jiao Tong University, Key Laboratory for Particle Astrophysics and
Cosmology (MOE), Shanghai Key Laboratory for Particle Physics and Cosmology,
Shanghai 200240, China Shanghai Jiao Tong University Sichuan Research
Institute, Chengdu 610213, China Parinya Namwongsa School of Physics and
Astronomy, Shanghai Jiao Tong University, Key Laboratory for Particle
Astrophysics and Cosmology (MOE), Shanghai Key Laboratory for Particle Physics
and Cosmology, Shanghai 200240, China Kaixiang Ni School of Physics and
Astronomy, Shanghai Jiao Tong University, Key Laboratory for Particle
Astrophysics and Cosmology (MOE), Shanghai Key Laboratory for Particle Physics
and Cosmology, Shanghai 200240, China Jinhua Ning Yalong River Hydropower
Development Company, Ltd., 288 Shuanglin Road, Chengdu 610051, China Xuyang
Ning School of Physics and Astronomy, Shanghai Jiao Tong University, Key
Laboratory for Particle Astrophysics and Cosmology (MOE), Shanghai Key
Laboratory for Particle Physics and Cosmology, Shanghai 200240, China
Xiangxiang Ren Research Center for Particle Science and Technology, Institute
of Frontier and Interdisciplinary Science, Shandong University, Qingdao
266237, Shandong, China Key Laboratory of Particle Physics and Particle
Irradiation of Ministry of Education, Shandong University, Qingdao 266237,
Shandong, China Nasir Shaheed Research Center for Particle Science and
Technology, Institute of Frontier and Interdisciplinary Science, Shandong
University, Qingdao 266237, Shandong, China Key Laboratory of Particle
Physics and Particle Irradiation of Ministry of Education, Shandong
University, Qingdao 266237, Shandong, China Changsong Shang Yalong River
Hydropower Development Company, Ltd., 288 Shuanglin Road, Chengdu 610051,
China Guofang Shen School of Physics, Beihang University, Beijing 100191,
China Lin Si School of Physics and Astronomy, Shanghai Jiao Tong University,
Key Laboratory for Particle Astrophysics and Cosmology (MOE), Shanghai Key
Laboratory for Particle Physics and Cosmology, Shanghai 200240, China Andi
Tan Department of Physics, University of Maryland, College Park, Maryland
20742, USA Anqing Wang Research Center for Particle Science and Technology,
Institute of Frontier and Interdisciplinary Science, Shandong University,
Qingdao 266237, Shandong, China Key Laboratory of Particle Physics and
Particle Irradiation of Ministry of Education, Shandong University, Qingdao
266237, Shandong, China Hongwei Wang Shanghai Advanced Research Institute,
Chinese Academy of Sciences, Shanghai 201210, China Meng Wang Research
Center for Particle Science and Technology, Institute of Frontier and
Interdisciplinary Science, Shandong University, Qingdao 266237, Shandong,
China Key Laboratory of Particle Physics and Particle Irradiation of Ministry
of Education, Shandong University, Qingdao 266237, Shandong, China Qiuhong
Wang Key Laboratory of Nuclear Physics and Ion-beam Application (MOE),
Institute of Modern Physics, Fudan University, Shanghai 200433, China Siguang
Wang School of Physics, Peking University, Beijing 100871, China Wei Wang
School of Physics, Sun Yat-Sen University, Guangzhou 510275, China Xiuli Wang
School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai
200240, China Zhou Wang School of Physics and Astronomy, Shanghai Jiao Tong
University, Key Laboratory for Particle Astrophysics and Cosmology (MOE),
Shanghai Key Laboratory for Particle Physics and Cosmology, Shanghai 200240,
China Shanghai Jiao Tong University Sichuan Research Institute, Chengdu
610213, China Mengmeng Wu School of Physics, Sun Yat-Sen University,
Guangzhou 510275, China Shiyong Wu Yalong River Hydropower Development
Company, Ltd., 288 Shuanglin Road, Chengdu 610051, China Weihao Wu Jingkai
Xia School of Physics and Astronomy, Shanghai Jiao Tong University, Key
Laboratory for Particle Astrophysics and Cosmology (MOE), Shanghai Key
Laboratory for Particle Physics and Cosmology, Shanghai 200240, China
Mengjiao Xiao Department of Physics, University of Maryland, College Park,
Maryland 20742, USA Xiang Xiao School of Physics, Sun Yat-Sen University,
Guangzhou 510275, China Binbin Yan School of Physics and Astronomy, Shanghai
Jiao Tong University, Key Laboratory for Particle Astrophysics and Cosmology
(MOE), Shanghai Key Laboratory for Particle Physics and Cosmology, Shanghai
200240, China Jijun Yang Yong Yang School of Physics and Astronomy,
Shanghai Jiao Tong University, Key Laboratory for Particle Astrophysics and
Cosmology (MOE), Shanghai Key Laboratory for Particle Physics and Cosmology,
Shanghai 200240, China Chunxu Yu School of Physics, Nankai University,
Tianjin 300071, China Jumin Yuan Research Center for Particle Science and
Technology, Institute of Frontier and Interdisciplinary Science, Shandong
University, Qingdao 266237, Shandong, China Key Laboratory of Particle
Physics and Particle Irradiation of Ministry of Education, Shandong
University, Qingdao 266237, Shandong, China Ying Yuan School of Physics and
Astronomy, Shanghai Jiao Tong University, Key Laboratory for Particle
Astrophysics and Cosmology (MOE), Shanghai Key Laboratory for Particle Physics
and Cosmology, Shanghai 200240, China Xinning Zeng School of Physics and
Astronomy, Shanghai Jiao Tong University, Key Laboratory for Particle
Astrophysics and Cosmology (MOE), Shanghai Key Laboratory for Particle Physics
and Cosmology, Shanghai 200240, China Dan Zhang Department of Physics,
University of Maryland, College Park, Maryland 20742, USA Tao Zhang Li Zhao
School of Physics and Astronomy, Shanghai Jiao Tong University, Key Laboratory
for Particle Astrophysics and Cosmology (MOE), Shanghai Key Laboratory for
Particle Physics and Cosmology, Shanghai 200240, China Qibin Zheng School of
Medical Instrument and Food Engineering, University of Shanghai for Science
and Technology, Shanghai 200093, China Jifang Zhou Yalong River Hydropower
Development Company, Ltd., 288 Shuanglin Road, Chengdu 610051, China Ning
Zhou School of Physics and Astronomy, Shanghai Jiao Tong University, Key
Laboratory for Particle Astrophysics and Cosmology (MOE), Shanghai Key
Laboratory for Particle Physics and Cosmology, Shanghai 200240, China
Xiaopeng Zhou School of Physics, Beihang University, Beijing 100191, China
###### Abstract
We report constraints on light dark matter through its interactions with shell
electrons in the PandaX-II liquid xenon detector with a total 46.9
tonne$\cdot$day exposure. To effectively search for these very low energy
electron recoils, ionization-only signals are selected from the data. In total, 1821 candidates are identified within the ionization signal range of 50 to 75 photoelectrons, corresponding to a mean electronic recoil energy from 0.08 to 0.15 keV. The 90% C.L. exclusion limit on the scattering cross section between dark matter and electrons is calculated with systematic uncertainties properly taken into account. Under the assumption of a point interaction, we provide the world’s most stringent limit within the dark matter mass range from 15 to 30 $\rm{MeV/c^{2}}$, with the corresponding cross section from $2.5\times 10^{-37}$ to $3.1\times 10^{-38}$ $\rm{cm^{2}}$.
Figure 1: Event rate of the US2 signals with charge range from 50 to 200 PE.
The existence of dark matter (DM) is established by overwhelming evidence from cosmological and astronomical observations [1]. Possible interactions between DM particles and baryonic matter have been searched for in underground laboratories using ultra-low background detectors that directly detect recoil signals [2, 3]. DM within a mass range from $\rm{GeV/c^{2}}$ to $\rm{TeV/c^{2}}$ has mostly been searched for via its elastic scattering off atomic nuclei [4, 5, 6, 7, 8, 9, 10, 11, 12]. Scattering of DM in this mass range with electrons, while still possible, is difficult to probe kinematically, as the energy of electron recoils (ERs) is suppressed by the smallness of the electron mass. For sub-GeV light DM, on the other hand, the nuclear recoil energy becomes much more difficult to detect with conventional detection techniques. It was realized that such light DM can scatter with shell electrons, which may subsequently produce sufficiently large ionization signals in the detector [13]. Such DM-electron scattering opens up a new experimental paradigm, which has since been pursued by a number of groups [14, 15, 16, 17, 18, 19].
The PandaX-II experiment [4, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29], located
in the China Jinping Underground Laboratory (CJPL), utilizes a dual-phase Time
Projection Chamber (TPC), which contains an active cylindrical target with 580
$\rm{kg}$ of liquid xenon (LXe). DM scattering produces prompt scintillation
photons ($S1$) in the liquid region, and ionized electron signal ($S2$)
through delayed electroluminescence in the gaseous region. Both $S1$ and $S2$
can be detected by two arrays of Hamamatsu R11410-20 photomultiplier tubes
(PMTs) located at the top and bottom of the cylindrical volume, with a
corresponding photon detection efficiency and electron extraction efficiency
around 10% and 50%, respectively [28]. Conventional analyses searched for recoil energies of keV and above by requiring correlated $S1$ and $S2$ signals. In this work, we search for light DM through its scattering off shell electrons, which would produce energy depositions on the order of 100 eV [13]. In this regime, $S1$ would be nearly invisible, but unpaired $S2$ (US2) signals can still be used to probe these scatterings effectively.
Figure 2: Data quality cut efficiency (red), trigger efficiency (blue) and combined efficiency (black) vs. detected ionization signal in Run 9 (top) and Runs 10/11 (bottom). Details about the uncertainties are described in [30]. The quality cut efficiencies (dashed) from 241Am-Be and 220Rn calibration runs are used to validate those from tritium. The upper axes
correspond to mean ER energy computed using the constant model ($f_{e}=0.83$
and $E=13.7~{}{\rm eV}\times S2/{\rm SEG/EEE}/f_{e}$, see Fig. 3).
All DM search runs (Runs 9, 10, and 11), and calibration runs with tritium,
220Rn, and 241Am-Be sources are used in this analysis. Most $S2$ related data
quality cuts are inherited from previous analysis [28], including a $S2$
isolation time cut of 0.02 s [30], a position reconstruction quality cut to
suppress single electron pileup due to multiple scatterings, and a gate event
cut which searches for sub-threshold charge in a 1.5-3.5 $\rm{\mu s}$ window preceding a US2. Three cuts are tightened: the full-width-10%-maxima, the rising edge (defined as the ratio of the charge in the
first 1 ${\rm{\mu}}$s to the total), and the top/bottom charge ratio of the
US2s, based on the distribution of the ER calibration data [30]. Although the
US2 data are not intentionally blinded, which allows a confirmation of the
validity of the basic data quality cuts from previous analysis, cuts that are
most relevant to the selection efficiency (S2 width, rising edge, and
top/bottom ratio) are developed entirely based on multiple sets of calibration
data. Events are further selected with a more conservative radial cut so
events within 15 cm from the wall are removed to avoid un-paired field cage
surface events, leading to a 117.0 $\pm$ 6.6 kg fiducial mass of LXe. The
uncertainty is dominated by the 4.2 mm radial position resolution, estimated
using different reconstruction algorithms. The event distribution within the
fiducial volume (FV) is mostly uniform [30]. The radial cut efficiency, since it has little dependence on the charge and character of the US2s, is factored into the FV and exposure. The time evolution of the surviving candidates in the
three DM search runs with charge range from 50 to 200 photoelectron (PE) is
shown in Fig. 1, and appears to be reasonably stable within a total time span
of two years.
The combined event selection efficiency for US2 events is a product of the
trigger efficiency and data quality cut efficiency. PandaX-II utilizes an
FPGA-based realtime trigger system, with its efficiency directly measured from
events with multiple $S2$s [31]. The uncertainty of the trigger efficiency is
estimated to be 12.0% (fractional unless otherwise specified) by comparing the efficiencies for S2s with different widths. The data quality
cut efficiency is obtained from the tritium calibration run, given that the distributions of low-energy electron recoils are closest to those of the desired DM-electron scattering events. The overall efficiency is a product of the individual cut efficiencies, each estimated by the ratio of surviving events with all cuts applied to those with all but this cut, also known as the “N/(N-1)”
approach. An alternative approach (“N/all”), which takes the ratio of all-
cuts-applied to those with only the inherited basic data quality cuts, yields
good agreement, indicating that the correlation between cuts can be ignored.
For comparison, the data quality efficiency curves obtained using 241Am-Be
(Runs 9/10/11) and 220Rn (Runs 10/11) data are overlaid. The residual
systematic uncertainty of the data quality cuts is estimated to be 13.6%,
consisting of the effect of non-source contamination in the calibration
(10.0%), the dependence of the selection efficiency on the event energy and position distributions, and thereby on the shapes of the US2s (9.2%), and the S2 identification
efficiency (1.5%) [30]. The trigger, data quality, and the combined
efficiencies are shown in Fig. 2 for US2 events with charge range between 20
to 200 PE.
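The “N/(N-1)” cut-efficiency estimate described above can be sketched with a toy example (synthetic events and made-up cuts for illustration, not PandaX-II data):

```python
def cut_efficiency(events, cuts, i):
    """'N/(N-1)' estimate: events passing all cuts divided by events
    passing all cuts except cut i."""
    n_all = sum(1 for e in events if all(c(e) for c in cuts))
    n_but_i = sum(1 for e in events
                  if all(c(e) for j, c in enumerate(cuts) if j != i))
    return n_all / n_but_i

# Toy events on a 10x10 grid with two independent, hypothetical cuts.
events = [(a, b) for a in range(10) for b in range(10)]
cuts = [lambda e: e[0] < 8,   # stand-in "width" cut, true efficiency 0.8
        lambda e: e[1] < 5]   # stand-in "rising edge" cut, true efficiency 0.5

effs = [cut_efficiency(events, cuts, i) for i in range(len(cuts))]
# For independent cuts, the product of the N/(N-1) efficiencies reproduces
# the overall efficiency N(all cuts)/N(total), as the good agreement between
# the "N/(N-1)" and "N/all" approaches in the text suggests.
```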
Figure 3: Charge yield vs. electron recoil energy for three DM search runs
for NEST 2.0 [32, 33], the constant model [34], and the PandaX-II model [35],
with uncertainty bands obtained from original publications. The lowest energy
measurement at 186 eV [36] is also shown. The shaded vertical band indicates
the mean energy range corresponding to the ROI (50-75 PE).
As mentioned in the introduction, the light DM-electron scatterings produce
sub-keV recoil energy, therefore knowledge of the photon and ionization
productions in LXe in this energy regime is required. Three independent signal
response models are compared, all under the standard $W$ value of 13.7 eV to
produce either a photon or electron [37]: 1) the Noble Element Simulation
Technique (NEST 2.0) model [33, 32], 2) the constant model in which the
fraction of ionization to total quanta is $f_{e}=0.83$ [34] with no energy
dependence and without recombination effects, and 3) the PandaX-II model,
obtained by fitting the tritium calibration data with correlated $S1$ and $S2$
but with lower energy truncated to 0.9 keV due to threshold [35]. The charge
yield vs. recoil energy is shown in Fig. 3 for Run 9 and Runs 10/11 under a
drift field of 400 and 318 V/cm, respectively. In general, the constant model
predicts smaller charge yield in comparison with NEST 2.0. For Run 9, the
PandaX-II model agrees with the other two models within 1$\sigma$ at 0.9 keV. On
the other hand, for Runs 10/11, the PandaX-II model agrees with the constant
model, but has slight tension with NEST 2.0. Therefore, the constant model is
selected as the nominal model in this analysis to conservatively estimate the
number of primary ionized electrons, as well as to be consistent with other analyses. One should keep in mind, however, that the lowest-energy measurement of the charge yield in a liquid xenon detector was only recently made, at 186 eV and 180 V/cm [36]. The spectrum of detected ionization signals, i.e. US2 events
in PE, can then be predicted based on the measured detector parameters [28],
listed in Table 1 for convenience, and the efficiencies in Fig. 2. The
electron lifetime, i.e. the attenuation of ionized electrons due to
electronegative impurities, is incorporated in the DM-electron signal model,
by randomly distributing vertices in the 60 cm drift length, and then by
applying the measured variations of the electron lifetime throughout the data
taking.
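The electron-lifetime treatment described above (vertices distributed randomly along the drift, with ionized electrons attenuated by electronegative impurities) can be sketched with a small Monte Carlo; the 400 $\rm{\mu s}$ lifetime and 350 $\rm{\mu s}$ maximum drift time below are hypothetical placeholders, not measured PandaX-II values:

```python
import math
import random

def mean_electron_survival(tau_us, t_max_us, n=100_000, seed=1):
    """Draw vertices uniformly in drift time [0, t_max] and average the
    survival fraction exp(-t/tau) of the ionized electrons."""
    rng = random.Random(seed)
    total = sum(math.exp(-rng.uniform(0.0, t_max_us) / tau_us)
                for _ in range(n))
    return total / n

# Hypothetical numbers for illustration only.
TAU_US, T_MAX_US = 400.0, 350.0
mc = mean_electron_survival(TAU_US, T_MAX_US)
# Analytic cross-check of the same average: (tau/t_max)*(1 - exp(-t_max/tau)).
analytic = TAU_US / T_MAX_US * (1.0 - math.exp(-T_MAX_US / TAU_US))
```

In the actual analysis the lifetime varies throughout the data taking, so the measured time series of lifetimes would replace the single `TAU_US` value here.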
Table 1: The PandaX-II detector parameters, including electron extraction efficiency (EEE), single electron gain (SEG) and its measured resolution ($\sigma_{\rm SE}$) [28]. They are used to estimate the relation between ER energy and detected ionization electrons. | Run 9 | Run 10 | Run 11
---|---|---|---
EEE (%) | $46.4\pm 1.4$ | $50.8\pm 2.1$ | $47.5\pm 2.0$
SEG (PE) | $24.4\pm 0.4$ | $23.7\pm 0.8$ | $23.5\pm 0.8$
$\sigma_{\rm SE}$ (PE) | 8.3 | 7.8 | 8.1
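The conversion from US2 charge to mean ER energy under the constant model, $E = 13.7~{\rm eV}\times S2/{\rm SEG/EEE}/f_{e}$ (see the caption of Fig. 2), can be sketched as follows, using the Run 9 parameters from Table 1; this is only the bare formula, without the detector-response details folded into the quoted ROI energies:

```python
W_EV = 13.7   # eV per quantum (photon or electron), standard W value
F_E = 0.83    # ionization fraction f_e in the constant model

def mean_er_energy_kev(s2_pe, seg_pe, eee):
    """Mean ER energy (keV) for a US2 of s2_pe photoelectrons:
    n_electrons = S2/SEG/EEE, total quanta = n_electrons/f_e."""
    n_electrons = s2_pe / seg_pe / eee
    return W_EV * n_electrons / F_E / 1000.0

# Run 9 parameters from Table 1: SEG = 24.4 PE, EEE = 46.4%.
e_low = mean_er_energy_kev(50, 24.4, 0.464)
e_high = mean_er_energy_kev(75, 24.4, 0.464)
```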
Table 2: The number of US2 candidates, exposure, and known ER background
events for the three DM search runs. The span 1 and span 2 of the Run 11 are
listed separately due to the different background rates. The ROI is chosen as 50 PE to 75 PE, corresponding to a mean ER energy between 0.08 and 0.15 keV.
The flat ER background includes 85Kr, 222Rn, 220Rn, material ER, solar
neutrino and 136Xe [28].
| Run 9 | Run 10 | Run 11 span 1 | Run 11 span 2 | Total
---|---|---|---|---|---
Exposure $[\rm tonne\cdot\rm day]$ | 9.3 | 9.0 | 28.6 | 46.9
DM-electron candidates [events] | 287 | 340 | 1194 | 1821
Flat ER background [events] | 0.8 | 0.2 | 0.3 | 0.6 | 1.8
Tritium background [events] | 0 | 0.1 | 0.2 | 0.3 | 0.6
Figure 4: Detected ionization signals (US2, black histograms), and expected
signals from DM-electron scatterings with $F_{\rm DM}$=1 (upper) and
$\alpha^{2}m^{2}_{e}/q^{2}$ (lower), with the blue (red) histogram corresponding to a DM mass of 20 $\rm{MeV/c^{2}}$ (200 $\rm{MeV/c^{2}}$). The gray shadow
shows the ROI of this analysis. The excess in the data peaking at $\sim$25 PE
are single electron events, likely due to stray electrons in LXe. Figure 5:
$90\%$ C.L. upper limits (solid: the constant model, dashed: NEST 2.0) on DM-
electron scattering cross section from PandaX-II data for $F_{\rm DM}=1$
(upper) and $F_{\rm DM}=\alpha^{2}m^{2}_{e}/q^{2}$ (lower). Results from
XENON1T [15], XENON10 [38], DarkSide-50 [16], SENSEI [18], DAMIC [17],
EDELWEISS [39] are also shown for comparison.
The predicted rates and recoil energy spectra of DM-electron scatterings in the LXe target are calculated following the procedures in Refs. [40, 13]. It has recently been pointed out that such calculations contain non-negligible theoretical uncertainties due to relativistic effects for inner-shell ionization and the final-state free-electron wave function [41]. We nevertheless adopt the procedures of Refs. [13, 40], so that our results can be directly compared to previous work. Electron shell corrections are applied in the signal model and the
minimum of the additional quanta range is taken (e.g. three additional quanta
for $4s^{2}$ shell from Table II of Ref. [38]) to conservatively estimate the
ionization. Two benchmark interaction models, the point-like interaction with
form-factor $F_{\rm DM}=1$, and light mediator with $F_{\rm
DM}=\alpha^{2}m^{2}_{e}/q^{2}$, are considered, with their corresponding
ionization cross sections from xenon atoms computed. The US2 candidates from
the data are overlaid with the predicted ionization spectra for the two models
with DM masses of 20 and 200 ${\rm MeV/c^{2}}$, respectively, in Fig. 4. The
region-of-interest (ROI) is chosen to be 50 to 75 PE. The lower cut is set to
keep at least 50% trigger efficiency, and the upper cut is set to enclose most of the high-energy tail for the benchmark DM mass of 20 $\rm MeV/c^{2}$. The numbers of candidates are summarized in Table 2 for the three DM search runs; they are significantly higher than the dominant known ER backgrounds, namely the approximately flat spectrum at low energy (flat ER) and the tritium contribution [28]. We note that in comparison to a standard $S1$-$S2$
analysis, the US2 analysis reported here reduces the energy threshold
significantly, but is more vulnerable to un-modeled background contamination
due to the absence of $S1$. To be conservative, we assume that all detected
candidates are DM-electron scattering events in the ROI. The 90% C.L. upper
limit is derived using a cut-and-count approach via toy Monte Carlo,
incorporating the statistical uncertainty and the systematic uncertainties due
to the efficiency (Fig. 2), detector responses (Table 1), and the FV. The
exclusion curves of DM-electron scattering cross section under the two
benchmark interaction models are shown in Fig. 5, assuming both the constant
and NEST 2.0 signal response models. For comparison, results from earlier
experiments are also overlaid in Fig. 5. Due to the achieved 50 PE analysis
threshold ($\sim$0.08 keV), the most stringent exclusion limit for DM-electron interactions is given for the point-like interaction within the DM mass range from 15 to 30 $\rm{MeV/c^{2}}$, with the corresponding cross section ranging from
$2.5\times 10^{-37}$ to $3.1\times 10^{-38}$ cm$^{2}$. At 25 $\rm{MeV/c^{2}}$, our result is a few times more constraining than those from XENON10 and XENON1T, which used the same xenon target and the ionization-only channel [15, 38]. An alternative choice of the NEST 2.0 model would increase the ionization yield relative to the constant model at a given energy, leading to a more constraining limit. In the near future, the PandaX-4T experiment will be in operation. With larger exposure, lower background and a lower threshold using triggerless readout [42], PandaX-4T will provide a more sensitive search for DM-electron scatterings.
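The cut-and-count limit in the full analysis is derived with toy Monte Carlo including the systematic uncertainties; as a simplified, statistics-only illustration of the counting part (a sketch, not the analysis code), the classical 90% C.L. Poisson upper limit on the signal count, treating all observed events as signal, can be computed by bisection on the Poisson CDF:

```python
import math

def poisson_cdf(n, mu):
    # P(X <= n) for X ~ Poisson(mu), with terms computed in log space
    return sum(math.exp(k*math.log(mu) - mu - math.lgamma(k + 1))
               for k in range(n + 1))

def poisson_upper_limit(n_obs, cl=0.90):
    """Classical upper limit s on a Poisson mean given n_obs observed
    counts: the value of s for which P(X <= n_obs | s) = 1 - cl."""
    lo = float(n_obs)                                  # CDF here is ~0.5
    hi = n_obs + 10.0*math.sqrt(n_obs + 1.0) + 10.0    # CDF here is tiny
    for _ in range(100):
        mid = 0.5*(lo + hi)
        if poisson_cdf(n_obs, mid) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

# Sanity check: zero observed events give the familiar ln(10) ~ 2.30
print(poisson_upper_limit(0))
# Statistics-only limit treating all 1821 US2 candidates as signal
print(poisson_upper_limit(1821))
```

For large counts this approaches $n_{\rm obs} + 1.28\sqrt{n_{\rm obs}}$; the published limit additionally folds in the efficiency, detector response and FV systematics via the toy Monte Carlo.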
This project is supported in part by the Office of Science and Technology,
Shanghai Municipal Government (grant No. 18JC1410200), a grant from the
Ministry of Science and Technology of China (No. 2016YFA0400301), grants from
National Science Foundation of China (Nos. 12005131, 11905128, 12090061,
11775141), and a grant from Sichuan Science and Technology Program
(No. 2020YFSY0057). We thank the support from the Double First-Class Plan of Shanghai Jiao Tong University. We also thank the sponsorship from the Chinese
Academy of Sciences Center for Excellence in Particle Physics (CCEPP), Hongwen
Foundation in Hong Kong, and Tencent Foundation in China. Finally, we thank
the CJPL administration and the Yalong River Hydropower Development Company
Ltd. for indispensable logistical support and other help. We thank Rouven
Essig for the helpful email discussions about the theoretical spectrum at the
beginning of this project. We also thank Matthew Szydagis for the helpful
email communication about the NEST 2.0 model.
## References
* Bertone _et al._ [2005] G. Bertone, D. Hooper, and J. Silk, Physics Reports 405, 279 (2005).
* Liu _et al._ [2017] J. Liu, X. Chen, and X. Ji, Nature Phys. 13, 212 (2017).
* Undagoitia and Rauch [2015] T. M. Undagoitia and L. Rauch, Journal of Physics G: Nuclear and Particle Physics 43, 013001 (2015).
* Tan _et al._ [2016] A. Tan _et al._ (PandaX), Phys. Rev. D 93, 122009 (2016).
* Aprile _et al._ [2018] E. Aprile _et al._ (XENON), Phys. Rev. Lett. 121, 111302 (2018).
* Akerib _et al._ [2017a] D. Akerib _et al._ (LUX), Phys. Rev. Lett. 118, 021303 (2017a).
* Agnes _et al._ [2018a] P. Agnes _et al._ (DarkSide), Phys. Rev. D 98, 102006 (2018a).
* Ajaj _et al._ [2019] R. Ajaj _et al._ (DEAP), Phys. Rev. D 100, 022004 (2019).
* Agnese _et al._ [2014] R. Agnese _et al._ (SuperCDMS), Phys. Rev. Lett. 112, 041302 (2014).
* Jiang _et al._ [2018] H. Jiang _et al._ (CDEX), Phys. Rev. Lett. 120, 241301 (2018).
* Abdelhameed _et al._ [2019] A. Abdelhameed _et al._ (CRESST), Phys. Rev. D 100, 102002 (2019).
* Amole _et al._ [2017] C. Amole _et al._ (PICO), Phys. Rev. Lett. 118, 251301 (2017).
* Essig _et al._ [2012a] R. Essig, J. Mardon, and T. Volansky, Phys. Rev. D 85, 076007 (2012a).
* Essig _et al._ [2012b] R. Essig, A. Manalaysay, J. Mardon, P. Sorensen, and T. Volansky, Phys. Rev. Lett. 109, 021301 (2012b).
* Aprile _et al._ [2019] E. Aprile _et al._ (XENON), Phys. Rev. Lett. 123, 251801 (2019).
* Agnes _et al._ [2018b] P. Agnes _et al._ (The DarkSide Collaboration), Phys. Rev. Lett. 121, 111303 (2018b).
* Aguilar-Arevalo _et al._ [2019] A. Aguilar-Arevalo _et al._ (DAMIC), Phys. Rev. Lett. 123, 181802 (2019).
* Barak _et al._ [2020] L. Barak, I. M. Bloch, M. Cababie, G. Cancelo, L. Chaplinsky, F. Chierchie, _et al._ (SENSEI), Phys. Rev. Lett. 125, 171802 (2020).
* Emken _et al._ [2019] T. Emken, R. Essig, C. Kouvaris, and M. Sholapurkar, Journal of Cosmology and Astroparticle Physics 2019 (09), 070.
* Fu _et al._ [2017a] C. Fu _et al._ (PandaX-II), Phys. Rev. Lett. 118, 071301 (2017a), [Erratum: Phys.Rev.Lett. 120, 049902 (2018)].
* Fu _et al._ [2017b] C. Fu _et al._ (PandaX), Phys. Rev. Lett. 119, 181806 (2017b).
* Chen _et al._ [2017] X. Chen, A. Abdukerim, W. Chen, Y. Chen, X. Cui, D. Fang, _et al._ (PandaX-II), Phys. Rev. D 96, 102007 (2017).
* Cui _et al._ [2017] X. Cui _et al._ (PandaX-II), Phys. Rev. Lett. 119, 181302 (2017).
* Xia _et al._ [2019] J. Xia _et al._ (PandaX-II), Phys. Lett. B 792, 193 (2019).
* Ren _et al._ [2018] X. Ren _et al._ (PandaX-II), Phys. Rev. Lett. 121, 021304 (2018).
* Wang _et al._ [2020a] Q. Wang _et al._ (PandaX-II), Sci. China Phys. Mech. Astron. 63, 231011 (2020a).
* Ni _et al._ [2019] K. Ni _et al._ (PandaX-II), Chin. Phys. C 43, 113001 (2019).
* Wang _et al._ [2020b] Q. Wang _et al._ (PandaX-II), Chin. Phys. C 44, 125001 (2020b).
* Zhou _et al._ [2021] X. Zhou _et al._ (PandaX-II), Chinese Physics Letters 38, 011301 (2021).
* [30] Supplementary materials.
* Wu _et al._ [2017] Q. Wu _et al._ , JINST 12 (08), T08004.
* Szydagis _et al._ [2021] M. Szydagis, C. Levy, G. M. Blockinger, A. Kamaha, N. Parveen, and G. R. C. Rischbieter, Phys. Rev. D 103, 012002 (2021).
* [33] Noble Element Simulation Technique, http://nest.physics.ucdavis.edu/download/calculator.
* Aprile _et al._ [2007] E. Aprile, K. L. Giboni, P. Majewski, K. Ni, and M. Yamashita, Phys. Rev. B 76, 014115 (2007).
* Yan _et al._ [2021] B. Yan _et al._ , Determination of responses of liquid xenon to low energy electron and nuclear recoils using the PandaX-II detector (2021), arXiv:2102.09158 [physics.ins-det] .
* Akerib _et al._ [2017b] D. S. Akerib _et al._ , Phys. Rev. D 96, 112011 (2017b).
* Szydagis _et al._ [2011] M. Szydagis, N. Barry, K. Kazkaz, J. Mock, D. Stolp, M. Sweany, M. Tripathi, S. Uvarov, N. Walsh, and M. Woods, JINST 6, P10002, 1106.1613 .
* Essig _et al._ [2017] R. Essig, T. Volansky, and T.-T. Yu, Phys. Rev. D 96, 043017 (2017).
* Arnaud _et al._ [2020] Q. Arnaud _et al._ (EDELWEISS), Phys. Rev. Lett. 125, 141301 (2020).
* [40] Direct Detection of sub-GeV Dark Matter, http://ddldm.physics.sunysb.edu/ddlDM/.
* Pandey _et al._ [2020] M. K. Pandey, L. Singh, C.-P. Wu, J.-W. Chen, H.-C. Chi, C.-C. Hsieh, C.-P. Liu, and H. T. Wong, Phys. Rev. D 102, 123025 (2020).
* Zhang _et al._ [2019] H. Zhang _et al._ (PandaX), Sci. China Phys. Mech. Astron. 62, 31011 (2019).
USTC-ICTS/PCFT-21-04
Free BMN Correlators With More Stringy Modes
Bao-ning<EMAIL_ADDRESS>, Min-xin<EMAIL_ADDRESS>
Interdisciplinary Center for Theoretical Study,
University of Science and Technology of China, Hefei, Anhui 230026, China
Peng Huanwu Center for Fundamental Theory,
Hefei, Anhui 230026, China
In the type IIB maximally supersymmetric pp-wave background, stringy excited
modes are described by BMN (Berenstein-Maldacena-Nastase) operators in the
dual $\mathcal{N}=4$ super-Yang-Mills theory. In this paper, we continue the
studies of higher genus free BMN correlators with more stringy modes, mostly
focusing on the case of genus one and four stringy modes in different
transverse directions. Surprisingly, we find that the non-negativity of torus
two-point functions, which is a consequence of a previously proposed
probability interpretation and has been verified in the cases with two and
three stringy modes, is no longer true for the case of four or more stringy
modes. Nevertheless, the factorization formula, which is also a proposed
holographic dictionary relating the torus two-point function to a string
diagram calculation, is still valid. We also check the correspondence of
planar three-point functions with the Green-Schwarz string vertex for many string
modes. We discuss some issues in the case of multiple stringy modes in the
same transverse direction. Our calculations provide some new perspectives on
pp-wave holography.
###### Contents
1 Introduction
2 The reality of higher genus two-point functions with more stringy modes
3 The calculations of torus two-point functions
  3.1 Some standard integrals
  3.2 Five modes: the generic case
  3.3 Four modes
4 The factorization formula
  4.1 Planar three-point functions
  4.2 One-loop string diagram calculations
5 Comparison with light cone string field cubic vertex
6 Some issues with multiple string modes in the same transverse direction
7 Conclusion
A Some calculational details of the one-loop string integrals
  A.1 $S_{1}$ contribution
  A.2 $S_{2}$ contribution
  A.3 $S_{3}$ contribution
## 1 Introduction
The AdS/CFT correspondence [1, 2, 3] is a deep idea which relates two
seemingly totally different theories, namely string theory or supergravity on
AdS background and the $\mathcal{N}=4$ $SU(N)$ super-Yang-Mills theory.
Although the correspondence has found flourishing applications in many topics,
the precise quantitative tests of the holographic dictionary are mostly
restricted to supersymmetry protected quantities in the supergravity
approximation, such as the spectrum and correlation functions of BPS
operators. Without an alternative effective method to handle string theory in
the deeply stringy regime, a common perspective is to simply take the super-
Yang-Mills theory as a non-perturbative definition of AdS string theory at any
finite coupling and energy scale, assumed to be valid unless convincingly contradicted.
A particularly interesting avenue for progress in the precise tests of the
holographic correspondence in the stringy regime is to take a Penrose limit
[4] of the type IIB $AdS_{5}\times S^{5}$ background. The geometry becomes a
pp-wave background [5], which also has maximal supersymmetry,
$\displaystyle
ds^{2}=-4dx^{+}dx^{-}-\mu^{2}(\vec{r}^{~{}2}+\vec{y}^{~{}2})(dx^{+})^{2}+d\vec{r}^{~{}2}+d\vec{y}^{~{}2},$
(1.1)
where $x^{+},x^{-}$ are light cone coordinates, $\vec{r},\vec{y}$ are
4-vectors, and the parameter $\mu$ is proportional to spacetime curvature as
well as the Ramond-Ramond flux $F_{+1234}=F_{+5678}\sim\mu$. The free string
spectrum can be solved in the light cone gauge using Green-Schwarz formalism
similar to the flat space [6]. Berenstein, Maldacena and Nastase (BMN)
proposed the holographic dual operators in the gauge theory for the stringy
states, a type of near-BPS operators known as the BMN operators, and it was
shown that the free string spectrum is reproduced by the planar conformal
dimensions of these BMN operators [7]. On the field theory side, one takes a
large R-charge limit, previously considered in the context of giant gravitons,
or D-branes in the AdS space in [8, 9, 10, 11], and in much subsequent literature, e.g. [12, 13, 14, 15]. The calculations on the field theory side
are perturbative in the large R-charge limit, so the original strong-weak
AdS/CFT duality becomes precisely testable in this setting.
The Penrose limit provides a new twist to the holography story. In the
celebrated AdS/CFT holographic dictionary in [3], the CFT lives at the
boundary of a bulk AdS space and its local operators couple to the boundary
configurations of the AdS bulk fields. However, although the pp-wave
background (1.1) comes from a Penrose limit of the AdS space, the geometry is
rather different. As such, it is not clear how to directly apply the standard
AdS holographic dictionary, particularly in the situations with finite string
interactions. Our approach in some previous papers [16, 17, 18, 19, 20] is to
consider another corner of the parameter space in the BMN limit, focusing on
the free gauge theory. In this case, the string theory side becomes infinitely
curved $\mu\sim\infty$, and strings are effectively infinitely long and
tensionless, but can still have finite string interactions. Most
interestingly, since the string spectrum is completely degenerate, the
tensionless string can jump from one excited state to another without energy
cost through a quantum unitary transition. It turns out that in this case the
effective string coupling constant should be identified with a finite genus
counting parameter $g:=\frac{J^{2}}{N}$, where $J$ is the large R-charge and
scales like $J\sim\sqrt{N}\sim\infty$ in the BMN limit. Some higher genus BMN
correlators were first computed in [21, 22].
Since the full-fledged holographic dictionary is no longer available in the
pp-wave background, our pragmatic approach is to try to compute the physical
quantities on both sides of the correspondence and find potential non-trivial
agreements. In this sense, a mismatch with naive expectation is not
necessarily a contradiction of the holographic principle. Instead, one should
focus on finding aspects where the calculations from both sides do match, and try to give physical derivations or proofs of such mathematical coincidences.
Besides the free string spectrum originally considered in [7], some more tests
of the pp-wave holography are immediately clear. For example, the free planar three-point functions of BMN operators should correspond to the Green-Schwarz
light cone string field cubic vertex [23, 24] in the infinitely curved pp-wave
background [16, 25]. In the papers [17, 18], we further proposed a
factorization formula, where the free higher genus BMN correlators are
holographically related to string loop diagram calculations by pasting
together the cubic string vertices without propagators. More recently, we proposed a probability interpretation of the BMN two-point functions [19]. This
also provides yet another interesting new entry of the pp-wave holographic
dictionary that the BMN two-point function does not naively correspond to a
quantum transition amplitude on the string theory side, but rather to its squared norm. A consequence of the probability interpretation is the non-negativity
of BMN two-point functions, which can be demonstrated for BMN operators with
two stringy modes at any genus, or three stringy modes at genus one. In this
paper, we further test the non-negativity conjecture for BMN operators with
four and five stringy modes at genus one. Surprisingly, it turns out that this no longer holds. Of course, as mentioned earlier, this is not necessarily a contradiction of the holographic principle according to our philosophy, but rather provides a new perspective on the limitations of our probability interpretation.
Motivated by the results, we further check the factorization formula for the
case of four stringy modes and confirm that it is still valid. We also check
that the correspondence of planar three-point functions with Green-Schwarz
string vertex is robust in the case of many string modes. Our mixed test
results for this case shall motivate potential physical explanations which
might shed new light on the still mysterious holographic principle.
In some potentially related interesting recent developments, Gaberdiel, Gopakumar and collaborators study string theory on an $AdS_{3}$ background, dual to a
symmetric product CFT [26, 27, 28], with ideas dating back to some early
papers e.g. [29]. Although the technical details are rather different, there
appear to be some common features with our work, namely that the strings are
tensionless and the dual CFT is free. To our knowledge, in various special
situations where the higher genus string amplitudes can be systematically computed, our setting by far most resembles the usual critical superstring theory on flat spacetime, with of course certain notable simplifications: in our case there is no continuous light cone or transverse momentum, due to the infinite curvature and Ramond-Ramond flux in the background.
The paper is organized as follows. In Sec. 2 we review some notations
and previous results, with an emphasis on the real and symmetric properties of
the two-point functions. In Sec. 3 we calculate the torus two-point functions
of BMN operators with four string modes using the notation of some standard integrals. We also compute the case of five string modes in the generic situation where the mode numbers have no degeneracy. In both cases we discover that the results are not always non-negative. In Sec. 4 we perform the one-loop string
calculations and confirm that the factorization formula for the case of four
string modes is still valid. In Sec. 5 we check the correspondence of planar
three-point functions with the Green-Schwarz string vertex for many string modes.
In Sec. 6 we consider the situations of multiple string modes in the same
transverse direction. We conclude with some discussions in Sec. 7.
## 2 The reality of higher genus two-point functions with more stringy modes
Let us first introduce some notations for the higher genus two-point
functions, and review some previous results. The integral formula is naively
complex and we perform a more careful analysis of its reality property. The
string vacuum state in the pp-wave geometry is described by a dual vacuum BMN
operator with large R-charge $O^{J}=\textrm{Tr}(Z^{J})$, where
$Z=\frac{1}{\sqrt{2}}(\phi^{5}+i\phi^{6})$ is a complex scalar field in the
$SU(N)$ adjoint representation, constructed from two of the six real scalar
fields in the $\mathcal{N}=4$ $SU(N)$ super-Yang-Mills theory. We take the BMN
limit $J\sim\sqrt{N}\sim\infty$ with $g:=\frac{J^{2}}{N}$ finite, and focus on
free gauge theory. As in the previous papers, our notation omits the universal
spacetime factors in the correlators.
The stringy states with bosonic excited modes in the eight transverse
directions are constructed by inserting the four remaining real scalars
$\phi^{I}$ and four covariant derivatives $D_{I}$ where $I=1,2,3,4$ into the
string of $Z$’s with phases. For example, the BMN operators up to four scalar
oscillator modes are the following:
$\displaystyle
O^{J}=\frac{1}{\sqrt{JN^{J}}}\textrm{Tr}Z^{J},~{}~{}~{}~{}O^{J}_{0}=\frac{1}{\sqrt{N^{J+1}}}\textrm{Tr}(\phi^{I}Z^{J}),$
(2.1) $\displaystyle
O^{J}_{-m,m}=\frac{1}{\sqrt{JN^{J+2}}}\sum_{l=0}^{J-1}e^{\frac{2\pi
iml}{J}}\textrm{Tr}(\phi^{I_{1}}Z^{l}\phi^{I_{2}}Z^{J-l}),$
O^{J}_{(m_{1},m_{2},m_{3})}=\frac{1}{\sqrt{N^{J+3}}J}\sum_{l_{1},l_{2}=0}^{J}e^{\frac{2\pi
im_{2}l_{1}}{J}}e^{\frac{2\pi
im_{3}l_{2}}{J}}\textrm{Tr}(\phi^{I_{1}}Z^{l_{1}}\phi^{I_{2}}Z^{l_{2}-l_{1}}\phi^{I_{3}}Z^{J-l_{2}}).$
$\displaystyle O^{J}_{(m_{1},m_{2},m_{3},m_{4})}$
$\displaystyle=\frac{1}{\sqrt{N^{J+4}}J^{\frac{3}{2}}}\sum_{l_{1},l_{2},l_{3}=0}^{J}e^{\frac{2\pi
im_{2}l_{1}}{J}}e^{\frac{2\pi im_{3}l_{2}}{J}}e^{\frac{2\pi
im_{4}l_{3}}{J}}\textrm{Tr}(\phi^{1}Z^{l_{1}}\phi^{2}Z^{l_{2}-l_{1}}\phi^{3}Z^{l_{3}-l_{2}}\phi^{4}Z^{J-l_{3}}).$
Here one can use the cyclicity of the trace to move one scalar to the starting position for convenience; the mode numbers satisfy $\sum_{i}m_{i}=0$ in the cases of three and four modes. The operators are properly normalized to be orthonormal
at the genus zero or planar level. The convention is that the first operator
$O^{J}$ corresponds to the closed string vacuum state, and the positive and
negative modes in the other operators represent the left and right moving
stringy excited modes, while the zero modes are supergravity modes
representing discretized momenta in the corresponding transverse direction. The construction ensures that only operators satisfying the closed string level matching condition are non-vanishing. As a consequence, the stringy excited states have at least two oscillator modes with opposite signs. Analogously, we can add more stringy modes and denote the properly normalized BMN operator $O^{J}_{m_{1},m_{2},\cdots,m_{k}}$ with the closed string level matching condition $\sum_{i=1}^{k}m_{i}=0$. Unless otherwise specified, we use this notation to denote $k$ different string modes.
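The level matching constraint can be seen directly from the phase sums: shifting all insertion positions by one unit is a cyclic rotation of the trace, which multiplies the operator by $e^{2\pi i\sum_{i}m_{i}/J}$, so the operator vanishes unless $\sum_{i}m_{i}\equiv 0 \pmod J$. A minimal numerical illustration (the values of $J$ and $m$ below are hypothetical):

```python
import cmath

def phase_sum(J, m):
    # sum_{l=0}^{J-1} e^{2 pi i m l / J}: equals J if m = 0 mod J, else 0
    return sum(cmath.exp(2j * cmath.pi * m * l / J) for l in range(J))

J = 50
print(abs(phase_sum(J, 0)))   # total mode number 0: the operator survives
print(abs(phase_sum(J, 3)))   # ~0: violating level matching kills the operator
```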
The free two-point functions at higher genus $h\geq 1$ are computed by
dividing the string of $Z$’s into up to $n\leq 4h$ segments, Wick contracted
according to a permutation of $(1,2,\cdots,n)$. We only consider cyclically
inequivalent permutations where no two neighboring numbers are consecutive.
The contributions of such Feynman diagrams of genus $h$ are proportional to
$\frac{J^{n}}{N^{2h}}$. So the dominant contributions come from those of the
maximal number of segments $n=4h$ and we can neglect the other cases $n<4h$
which are suppressed in the large R-charge limit. Furthermore, in the BMN
limit, the contributions are proportional to $\frac{J^{4h}}{N^{2h}}=g^{2h}$,
confirming the finite parameter $g$ as the genus counting parameter and therefore the effective string coupling constant, given our restriction to free gauge theory. We should note that a generic permutation of $(1,2,\cdots,4h)$ can give a Feynman diagram with genus higher than $h$. A useful rule to select genus
$h$ permutations is to generate them by string diagrams with $h$ loops [18].
It is known that there are $\frac{(4h-1)!!}{2h+1}$ such genus $h$ permutations
[30]. For example, at genus one there is only one such permutation, which can be
generated by a one-loop string process
$(1234)\rightarrow(12)(34)\rightarrow(2143)$. The field theory torus diagram
is depicted in Figure 1. Please note that we denote the genus as $h$ because
the usual symbol $g$ has been used as the effective string coupling.
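The counting formula is easy to check for small genus; for instance $h=1$ gives the single torus permutation $(2143)$, and $h=2$ gives the 21 permutations listed in (2.3). A minimal sketch:

```python
def double_factorial(n):
    # n!! for odd positive n: n * (n-2) * ... * 1
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def genus_permutation_count(h):
    # the number (4h-1)!!/(2h+1) of genus-h permutations of (1, ..., 4h)
    return double_factorial(4*h - 1) // (2*h + 1)

print([genus_permutation_count(h) for h in (1, 2, 3)])  # [1, 21, 1485]
```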
Figure 1: The torus diagram.
Once the string of $Z$’s is Wick contracted with $\bar{Z}$’s, we can add
scalar insertions and contract them along the lines of $Z$’s to preserve the
genus of the Feynman diagram. In the BMN limit each scalar insertion gives an
integral with the corresponding phases. For example, the free torus two-point
function can be written as
$\displaystyle\langle\bar{O}^{J}_{(m_{1},m_{2},\cdots,m_{k})}O^{J}_{(n_{1},n_{2},\cdots,n_{k})}\rangle_{\textrm{torus}}$
(2.2)
$\displaystyle=\frac{g^{2}}{4}\int_{0}^{1}dx_{1}dx_{2}dx_{3}dx_{4}\delta(x_{1}+x_{2}+x_{3}+x_{4}-1)\times$
$\displaystyle\prod_{i=1}^{k}(\int_{0}^{x_{1}}+e^{2\pi
in_{i}(x_{3}+x_{4})}\int_{x_{1}}^{x_{1}+x_{2}}+e^{2\pi
in_{i}(x_{4}-x_{2})}\int_{x_{1}+x_{2}}^{1-x_{4}}+e^{-2\pi
in_{i}(x_{2}+x_{3})}\int_{1-x_{4}}^{1})dy_{i}e^{2\pi i(n_{i}-m_{i})y_{i}}$
$\displaystyle=g^{2}\int_{0}^{1}dx_{1}dx_{2}dx_{3}dx_{4}\delta(x_{1}+x_{2}+x_{3}+x_{4}-1)\int_{0}^{x_{1}}dy_{k}e^{2\pi
i(n_{k}-m_{k})y_{k}}\times$
$\displaystyle\prod_{i=1}^{k-1}(\int_{0}^{x_{1}}+e^{2\pi
in_{i}(x_{3}+x_{4})}\int_{x_{1}}^{x_{1}+x_{2}}+e^{2\pi
in_{i}(x_{4}-x_{2})}\int_{x_{1}+x_{2}}^{1-x_{4}}+e^{-2\pi
in_{i}(x_{2}+x_{3})}\int_{1-x_{4}}^{1})dy_{i}e^{2\pi i(n_{i}-m_{i})y_{i}},$
where in the second equality we use the cyclicity to put one string mode into one of the four segments, which all give the same contribution since there is only one cyclically inequivalent permutation for the torus diagram. This breaks the symmetry between different modes but is sometimes convenient for calculations.
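The integral formula can also be evaluated numerically. The following sketch (our own illustration, not part of the original calculation) integrates the first line of (2.2) by Gauss-Legendre quadrature over the simplex, and for three modes with all $m_{i}\neq n_{i}$ it reproduces the generic closed form $\frac{g^{2}}{32\pi^{4}}\sum_{i}(m_{i}-n_{i})^{2}/\prod_{i}(m_{i}-n_{i})^{2}$ of [18], quoted below as (2.9):

```python
import cmath
import math
import numpy as np

TWO_PI_I = 2j * math.pi

def seg(a, b, w):
    # ∫_a^b e^{2πi w y} dy, with the w = 0 case handled separately
    if w == 0:
        return b - a
    return (cmath.exp(TWO_PI_I*w*b) - cmath.exp(TWO_PI_I*w*a)) / (TWO_PI_I*w)

def mode_factor(x1, x2, x3, x4, m, n):
    # the bracketed y_i-integral in the first line of (2.2) for one mode
    w = n - m
    return (seg(0.0, x1, w)
            + cmath.exp(TWO_PI_I*n*(x3 + x4)) * seg(x1, x1 + x2, w)
            + cmath.exp(TWO_PI_I*n*(x4 - x2)) * seg(x1 + x2, 1.0 - x4, w)
            + cmath.exp(-TWO_PI_I*n*(x2 + x3)) * seg(1.0 - x4, 1.0, w))

def torus_two_point(ms, ns, g=1.0, npts=32):
    # Gauss-Legendre quadrature over the simplex x1+x2+x3+x4 = 1 via
    # x1 = u1, x2 = (1-u1)u2, x3 = (1-u1)(1-u2)u3, Jacobian (1-u1)^2 (1-u2)
    u, wq = np.polynomial.legendre.leggauss(npts)
    u, wq = 0.5*(u + 1.0), 0.5*wq   # map nodes and weights to [0, 1]
    total = 0.0
    for w1, u1 in zip(wq, u):
        for w2, u2 in zip(wq, u):
            jac12 = w1 * w2 * (1 - u1)**2 * (1 - u2)
            for w3, u3 in zip(wq, u):
                x1 = u1
                x2 = (1 - u1)*u2
                x3 = (1 - u1)*(1 - u2)*u3
                x4 = 1.0 - x1 - x2 - x3
                val = 1.0 + 0j
                for m, n in zip(ms, ns):
                    val *= mode_factor(x1, x2, x3, x4, m, n)
                total += jac12 * w3 * val.real
    return g**2/4 * total

# a generic three-mode example: m = (1, 2, -3), n = (2, 4, -6)
m, n = (1, 2, -3), (2, 4, -6)
num = torus_two_point(m, n)
exact = (1.0/(32*math.pi**4)) * 14.0/36.0   # the generic closed form
print(num, exact)
```

The same routine also exhibits the reality and symmetry properties discussed next, since swapping the two sets of modes leaves the numerical value unchanged.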
By definition the correlator is invariant under complex conjugation combined with an exchange of the two sets of modes. We can perform a more careful analysis. We take the complex conjugate in the first formula in (2.2), and change the integration variables $y_{i}\rightarrow 1-y_{i},i=1,2,\cdots,k$ and $\{x_{1},x_{2},x_{3},x_{4}\}\rightarrow\{x_{4},x_{3},x_{2},x_{1}\}$. After a simple calculation, also using the closed string level matching condition, one can check that the formula remains the same. So the torus two-point function is purely real and symmetric.
The analysis for higher genus $h\geq 2$ is somewhat more complicated. When the $k$-th string mode runs through the $4h$ segments as in the first equality in (2.2), the terms generically correspond to multiple cyclically inequivalent permutations if we fix the $k$-th string mode in the first segment as in the second equality in (2.2). To illustrate, we consider the case of genus two, which has 21 cyclically inequivalent permutations. These 21 permutations are divided into 4 groups
$\displaystyle
1.~{}(14732865),(17548362),(18643725),(14875326),(15837642),(18472653),$ (2.3)
$\displaystyle~{}~{}~{}(15428736),(17625843),$ $\displaystyle
2.~{}(15387426),(15842763),(16528473),(17362854),(17438625),(14863275),$
$\displaystyle~{}~{}~{}(16483752),(18537264),$ $\displaystyle
3.~{}(14325876),(14765832),(18365472),(18725436),$ $\displaystyle
4.~{}(16385274).$
Here we have used the cyclicity to always put 1 in the first place in the permutations. Each group of permutations is generated by running a particular string mode through the $8$ segments. A convenient rule is to start with a permutation, subtract 1 from each element (with 1 replaced by 8), then cyclically move 1 to the first position. One can repeat this operation until the original permutation reappears. The permutations of each group have the
same multiplicity with respect to the string diagrams in the factorization
formulas [18]. The contribution of each group to the genus 2 two-point
function is real. However, the contribution of each individual permutation may be complex if we fix one particular string mode, e.g. the $k$-th, to be in the first segment as in the second equality in (2.2). The permutations can then be further classified into some permutations whose
contributions are real, and some other pairs where each pair consists of two
permutations whose contributions are complex conjugate to each other. In our
case, there are 5 self-conjugate permutations and 8 pairs of conjugate permutations, as follows:
$\displaystyle{\rm
Self~{}conjugate}:~{}~{}(18365472),(14325876),(14875326),(17625843),(16385274),$
(2.4) $\displaystyle{\rm
Conjugate~{}pairs}:~{}~{}~{}\{(14765832),(18725436)\},~{}~{}\{(15837642),(18643725)\},$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\{(16483752),(18537264)\},\{(17548362),(18472653)\},$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\{(14863275),(15387426)\},\{(14732865),(15428736)\},$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\{(17438625),(15842763)\},\{(17362854),(16528473)\}.$
The conjugate pairs always fall into the same group in (2.3). We can similarly perform the change of integration variables $y_{i}\rightarrow 1-y_{i},i=1,2,\cdots,k$ and reverse the order of the 8 segments as in the torus case to show that the two contributions of each pair in (2.4) are complex conjugate to each other. This also provides a convenient rule for computing the conjugate permutation. One fixes 1 in the first position, then replaces the 7 remaining numbers by $a\rightarrow 10-a$ and reverses their order, i.e. the conjugate of the permutation $(1,a_{1},a_{2},\cdots,a_{7})$ is simply $(1,10-a_{7},10-a_{6},\cdots,10-a_{1})$.
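Both rules are mechanical and can be checked by a short script (a sketch of ours; the permutation strings of (2.3)-(2.4) are written as digit tuples here). The subtract-and-rotate operation generates group 1 of (2.3) as a single orbit of length 8, and the conjugation map reproduces the self-conjugate permutations and conjugate pairs of (2.4):

```python
def step(perm):
    # subtract 1 from each element (with 1 replaced by 8), rotate 1 to front
    shifted = [8 if a == 1 else a - 1 for a in perm]
    i = shifted.index(1)
    return tuple(shifted[i:] + shifted[:i])

def orbit(perm):
    seen, p = [], tuple(perm)
    while p not in seen:
        seen.append(p)
        p = step(p)
    return seen

def conjugate(perm):
    # conjugate of (1, a1, ..., a7) is (1, 10-a7, ..., 10-a1)
    return (1,) + tuple(10 - a for a in reversed(perm[1:]))

group1 = orbit((1, 4, 7, 3, 2, 8, 6, 5))        # start from (14732865)
print(len(group1))                               # 8: the first group in (2.3)
print(conjugate((1, 8, 3, 6, 5, 4, 7, 2)))       # (18365472) is self-conjugate
print(conjugate((1, 4, 7, 6, 5, 8, 3, 2)))       # pairs with (18725436)
```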
Since a conjugate pair of permutations should have the same genus in our case, we expect this argument to work similarly at higher genus, so that the two-point functions are always real and symmetric:
$\displaystyle\langle\bar{O}^{J}_{(m_{1},m_{2},\cdots,m_{k})}O^{J}_{(n_{1},n_{2},\cdots,n_{k})}\rangle_{h}^{*}=\langle\bar{O}^{J}_{(m_{1},m_{2},\cdots,m_{k})}O^{J}_{(n_{1},n_{2},\cdots,n_{k})}\rangle_{h},$
(2.5)
$\displaystyle\langle\bar{O}^{J}_{(m_{1},m_{2},\cdots,m_{k})}O^{J}_{(n_{1},n_{2},\cdots,n_{k})}\rangle_{h}=\langle\bar{O}^{J}_{(n_{1},n_{2},\cdots,n_{k})}O^{J}_{(m_{1},m_{2},\cdots,m_{k})}\rangle_{h}.$
An interesting formula, as discussed in [19], is obtained by summing over one set of string modes:
$\sum_{\sum_{i=1}^{k}n_{i}=0}\langle\bar{O}^{J}_{(m_{1},m_{2},\cdots,m_{k})}O^{J}_{(n_{1},n_{2},\cdots,n_{k})}\rangle_{h}=\frac{(4h-1)!!}{(2h+1)(4h)!}g^{2h}.$
(2.6)
This can also be easily derived using the Poisson resummation formula $\sum_{n=-\infty}^{\infty}e^{2\pi ixn}=\sum_{p=-\infty}^{\infty}\delta(x-p)$. The resulting integral after the summation can be easily performed due to the delta function constraint, and is independent of the remaining set of string modes. One can thus sum up all the genus contributions with a proper normalization
by the all-genera formula of vacuum correlator
$p_{(m_{1},m_{2},\cdots,m_{k}),(n_{1},n_{2},\cdots,n_{k})}=\frac{g}{2\sinh(\frac{g}{2})}\sum_{h=0}^{\infty}\langle\bar{O}^{J}_{(m_{1},m_{2},\cdots,m_{k})}O^{J}_{(n_{1},n_{2},\cdots,n_{k})}\rangle_{h}.$
(2.7)
Then the matrix element is real and looks like a probability distribution:
$\sum_{\sum_{i=1}^{k}n_{i}=0}p_{(m_{1},m_{2},\cdots,m_{k}),(n_{1},n_{2},\cdots,n_{k})}=1.$
(2.8)
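The consistency of the normalization (2.7)-(2.8) with the sum rule (2.6) follows from a short identity: using $(4h)!=(4h)!!\,(4h-1)!!$ and $(4h)!!=2^{2h}(2h)!$,

```latex
\sum_{h=0}^{\infty}\frac{(4h-1)!!}{(2h+1)(4h)!}\,g^{2h}
=\sum_{h=0}^{\infty}\frac{g^{2h}}{(2h+1)\,2^{2h}\,(2h)!}
=\frac{2}{g}\sum_{h=0}^{\infty}\frac{(g/2)^{2h+1}}{(2h+1)!}
=\frac{2\sinh(g/2)}{g},
```

so the prefactor $\frac{g}{2\sinh(g/2)}$ in (2.7) is exactly what is needed for (2.8) to hold.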
To interpret the matrix element (2.7) as a probability, it also needs to be non-negative. In order to keep the nice normalization relations (2.6, 2.8), we cannot simply add non-uniform phase factors to the BMN operators, so the signs of the two-point functions cannot be trivially changed and have physical relevance. Since the string coupling constant $g$ can be arbitrary, one may expect that each correlator in the sum should be non-negative if such an interpretation is valid. For the case of two string modes $k=2$, it can be easily shown that the correlators are indeed non-negative since the two string modes have opposite signs [19]. For the case of three string modes, one can also explicitly check that the torus two-point function is always non-negative. For example, the torus two-point function for the generic case with no degeneracy in the mode numbers is [18]
$\displaystyle\langle\bar{O}^{J}_{(m_{1},m_{2},m_{3})}O^{J}_{(n_{1},n_{2},n_{3})}\rangle_{\textrm{torus}}=\frac{g^{2}}{32\pi^{4}}\frac{\sum_{i=1}^{3}(m_{i}-n_{i})^{2}}{\prod_{i=1}^{3}(m_{i}-n_{i})^{2}},$
(2.9)
which is manifestly positive. These properties strongly suggest a new entry of
the pp-wave holographic dictionary
$p_{(m_{1},m_{2},\cdots,m_{k}),(n_{1},n_{2},\cdots,n_{k})}=|\langle
m_{1},m_{2},\cdots,m_{k}|\hat{U}(g)|n_{1},n_{2},\cdots,n_{k}\rangle|^{2},~{}~{}~{}k=2,3,$
(2.10)
where the states $|n_{1},n_{2},\cdots,n_{k}\rangle$ denote the orthonormal BMN
states of free string theory, while the operator $\hat{U}(g)$ describes the
quantum unitary transition between the tensionless strings. As discussed in
[19], the higher point functions are vanishing in the BMN limit and are
regarded as virtual processes, so a single string cannot actually decay into
multi-strings through a finite physical process. In this sense the single
strings form a complete Hilbert space
$\sum_{\sum_{i=1}^{k}n_{i}=0}|n_{1},n_{2},\cdots,n_{k}\rangle\langle
n_{1},n_{2},\cdots,n_{k}|=I$.
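Since (2.9) is a simple closed expression, its manifest positivity can be verified directly. Below is a minimal numerical sketch; the function name `torus_3mode` and the sample mode numbers are our own choices, not from the text.

```python
import math

def torus_3mode(m, n, g=1.0):
    """Generic three-mode torus two-point function, formula (2.9).

    Assumes no degeneracy: m_i != n_i for all i.
    """
    d = [mi - ni for mi, ni in zip(m, n)]
    return g**2 / (32 * math.pi**4) * sum(di**2 for di in d) / math.prod(di**2 for di in d)

# Sample level-matched mode numbers (both triples sum to zero).
m, n = (2, 3, -5), (1, -4, 3)
assert sum(m) == sum(n) == 0
print(torus_3mode(m, n) > 0)  # → True  (manifestly positive for any generic modes)
```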
In Section 3 we will calculate torus two-point functions with more than 3
string modes. It turns out that the results are not always non-negative.
Therefore we currently restrict the probability interpretation and the
proposed holographic dictionary (2.10) to situations with no more than 3
(different) string modes.
## 3 The calculations of torus two-point functions
In this section we provide some details of the calculations of the integral
formula for the free torus two-point functions for $k=4,5$ string modes,
generalizing previous results. First we introduce some standard integrals. For
the case of $k=5$ we only compute a generic case, where no degeneracy of
string modes appears in the standard integrals. It turns out that this is
actually simpler than the case of four modes, so we consider it first. We then
study the case of $k=4$ modes in more detail, and provide the universal result
in terms of the standard integrals, which is valid for all cases including
degenerate ones.
### 3.1 Some standard integrals
The following standard integral, which appeared in e.g. [22, 18], is very
useful for calculating higher genus correlators,
$\displaystyle I(u_{1},u_{2},\cdots,u_{r})\equiv\int_{0}^{1}dx_{1}\cdots
dx_{r}\delta(x_{1}+\cdots+x_{r}-1)e^{2\pi i(u_{1}x_{1}+\cdots+u_{r}x_{r})}.$
(3.1)
It is clear that the integral is unchanged if we add the same integer to all
the arguments. If some of the $u_{i}$'s are identical, we use the following
notation
$\displaystyle I_{(a_{1},\cdots,a_{r})}(u_{1},u_{2},\cdots,u_{r})\equiv
I(u_{1},\cdots,u_{1},u_{2},\cdots,u_{2},\cdots,u_{r},\cdots,u_{r}),$ (3.2)
where $a_{i}$’s are integers representing the numbers of the $u_{i}$’s in the
right hand side, and for $a_{i}=0$ we can just eliminate the corresponding
argument. The integral can be calculated by the following recursion relation
$\displaystyle 2\pi
i(u_{i}-u_{j})I_{(a_{1},\cdots,a_{r})}(u_{1},u_{2},\cdots,u_{r})$ (3.3)
$\displaystyle=$ $\displaystyle
I_{(a_{1},\cdots,a_{j}-1,\cdots,a_{r})}(u_{1},u_{2},\cdots,u_{r})-I_{(a_{1},\cdots,a_{i}-1,\cdots,a_{r})}(u_{1},u_{2},\cdots,u_{r}),$
If $u_{i}\neq u_{j}$ then this equation can be used to reduce the number of
arguments, but the relation is also valid and both sides are zero when
$u_{i}=u_{j}$. From the recursion relation one can obtain the formulas for the
integral
$\displaystyle I(u_{1},u_{2},\cdots u_{r})$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{r}e^{2\pi iu_{i}}\prod_{j\neq i}\frac{1}{2\pi
i(u_{i}-u_{j})},$ (3.4) $\displaystyle
I_{(a_{1}+1,\cdots,a_{r}+1)}(u_{1},\cdots,u_{r})$ $\displaystyle=$
$\displaystyle\prod_{i=1}^{r}\frac{(\partial/\partial u_{i})^{a_{i}}}{(2\pi
i)^{a_{i}}a_{i}!}I(u_{1},\cdots,u_{r}),$ (3.5)
where the $u_{i}$'s are distinct. We note that we have used the symbol $i$ for
both the imaginary unit and the product index; the two are easy to distinguish
and should not cause confusion. In our calculations the arguments $u_{i}$ will
always be integers, so the exponential factors reduce simply to 1 in the final
results.
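The properties above are easy to verify numerically. The following sketch (function names ours) implements the closed form (3.4) for pairwise-distinct arguments, checks it against a direct evaluation of (3.1) for $r=2$, verifies the vanishing for distinct integer arguments, and reproduces the $I_{(2,2)}$ value quoted in (3.7) via the derivative formula (3.5), with the derivatives taken by finite differences.

```python
import cmath, math

def I_closed(us):
    """Closed-form standard integral (3.4); valid for pairwise-distinct arguments."""
    total = 0j
    for i, ui in enumerate(us):
        term = cmath.exp(2j * math.pi * ui)
        for j, uj in enumerate(us):
            if j != i:
                term /= 2j * math.pi * (ui - uj)
        total += term
    return total

# (a) For r = 2 the delta function leaves a 1-dimensional integral,
#     I(u1, u2) = int_0^1 exp(2 pi i (u1 t + u2 (1-t))) dt, here a midpoint Riemann sum.
u1, u2 = 0.7, -1.3
N = 20000
riemann = sum(cmath.exp(2j * math.pi * (u1 * (j + 0.5) / N + u2 * (1 - (j + 0.5) / N)))
              for j in range(N)) / N
assert abs(riemann - I_closed([u1, u2])) < 1e-6

# (b) For pairwise-distinct *integer* arguments the exponentials are all 1 and the
#     sum vanishes for r >= 2 (a divided-difference identity); cf. I(u1,u2,u3,u4)=0 in (3.7).
assert abs(I_closed([3, 1, -2, 5])) < 1e-12

# (c) I_{(2,2)}(u1, u2) from the derivative formula (3.5), derivatives by finite
#     differences, against the value -1/(2 pi^2 (u1 - u2)^2) quoted in (3.7).
def I22_fd(u1, u2, h=1e-4):
    f = lambda a, b: I_closed([a, b])
    mixed = (f(u1 + h, u2 + h) - f(u1 + h, u2 - h)
             - f(u1 - h, u2 + h) + f(u1 - h, u2 - h)) / (4 * h * h)
    return mixed / (2j * math.pi)**2

assert abs(I22_fd(3, 1) - (-1 / (8 * math.pi**2))) < 1e-6
```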
### 3.2 Five modes: the generic case
For the case of five distinct string modes, at least one of the modes is a
covariant derivative $D_{I}$. The structure of the free field two-point
functions is the same as in the case of scalar field insertions, as also
implied by supersymmetry, so we can simply apply the integral formula for the
free torus two-point functions regardless of the type of string modes.
Using the reality of the integral formula for the torus two-point functions,
it turns out that the calculations are especially simple in the generic
situation with no degeneracy. First we assume $m_{i}\neq n_{i}$ for all $i$.
Then the integrals over the $y_{i}$'s can be performed,
$\displaystyle~{}~{}\langle\bar{O}^{J}_{(m_{1},m_{2},\cdots,m_{5})}O^{J}_{(n_{1},n_{2},\cdots,n_{5})}\rangle_{\textrm{torus}}$
(3.6) $\displaystyle=\frac{g^{2}}{(2\pi
i)^{5}\prod_{i=1}^{5}(n_{i}-m_{i})}\int_{0}^{1}dx_{1}dx_{2}dx_{3}dx_{4}\delta(x_{1}+x_{2}+x_{3}+x_{4}-1)(e^{2\pi
i(n_{5}-m_{5})x_{1}}-1)$ $\displaystyle\times\prod_{i=1}^{4}[e^{2\pi
i(n_{i}-m_{i})x_{1}}-1+e^{-2\pi im_{i}(x_{1}+x_{2})}-e^{2\pi
i(-m_{i}x_{1}-n_{i}x_{2})}+e^{2\pi i(-n_{i}x_{2}+m_{i}x_{4})}$
$\displaystyle-e^{2\pi i[(n_{i}-m_{i})x_{1}-m_{i}x_{2}+n_{i}x_{4}]}+e^{-2\pi
in_{i}(x_{2}+x_{3})}-e^{2\pi i(n_{i}x_{1}+m_{i}x_{4})}].$
So the calculation reduces to some standard 4-dimensional integrals. Due to
the overall factor of $i^{5}$ and the reality of the result, we only need to
compute the imaginary parts of the integrals. This is quite simple in the
4-dimensional case. For example, supposing the integers satisfy $u_{i}\neq
u_{j}$ for any $i\neq j$, some results for the standard integrals are
$\displaystyle I(u_{1},u_{2},u_{3},u_{4})=0,$ (3.7) $\displaystyle
I_{(2,1,1)}(u_{1},u_{2},u_{3})=-\frac{1}{4\pi^{2}(u_{1}-u_{2})(u_{1}-u_{3})},$
$\displaystyle I_{(2,2)}(u_{1},u_{2})=-\frac{1}{2\pi^{2}(u_{1}-u_{2})^{2}},$
$\displaystyle
I_{(3,1)}(u_{1},u_{2})=\frac{1}{4\pi^{2}(u_{1}-u_{2})^{2}}-\frac{i}{4\pi(u_{1}-u_{2})},$
$\displaystyle I_{(4)}(u_{1})=\frac{1}{6}.$
We see that most results are real and the only imaginary contribution appears
in $I_{(3,1)}$. Assuming no further degeneracy among the mode numbers, we can
check that the only contributions of the $I_{(3,1)}$ type come from taking
$l=1,2,3,4$ factor(s) of $e^{2\pi i(n_{i}-m_{i})x_{1}}$ and $5-l$ factor(s) of
$-1$ in the integrand in (3.6). The result is quite simple
$\displaystyle~{}~{}\langle\bar{O}^{J}_{(m_{1},m_{2},\cdots,m_{5})}O^{J}_{(n_{1},n_{2},\cdots,n_{5})}\rangle_{\textrm{torus}}$
(3.8)
$\displaystyle=\frac{g^{2}}{(2\pi)^{6}\prod_{i=1}^{5}(n_{i}-m_{i})}[\sum_{i=1}^{5}\frac{1}{n_{i}-m_{i}}-\sum_{i=1}^{4}\sum_{j=i+1}^{5}\frac{1}{n_{i}-m_{i}+n_{j}-m_{j}}].$
We can compute the result for some random mode numbers, and find that it can
be either positive or negative.
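The sign indefiniteness of (3.8) can be exhibited explicitly. Here is a small sketch (the function name and the sample mode numbers are our own) evaluating (3.8) for two level-matched choices of mode numbers, one giving a positive and one a negative correlator.

```python
import math
from itertools import combinations

def torus_5mode(m, n, g=1.0):
    """Generic five-mode torus two-point function, formula (3.8).

    Assumes n_i != m_i for all i and n_i - m_i + n_j - m_j != 0 for all pairs.
    """
    d = [ni - mi for mi, ni in zip(m, n)]
    bracket = sum(1 / di for di in d) - sum(1 / (di + dj) for di, dj in combinations(d, 2))
    return g**2 / ((2 * math.pi)**6 * math.prod(d)) * bracket

# Two level-matched examples (all mode-number sums vanish); only n - m enters (3.8).
m = (1, -1, 2, 3, -5)
n_pos = tuple(mi + di for mi, di in zip(m, (1, 3, 5, -2, -7)))    # sums to 0
n_neg = tuple(mi + di for mi, di in zip(m, (1, 2, 3, 5, -11)))    # sums to 0
assert sum(m) == sum(n_pos) == sum(n_neg) == 0
print(torus_5mode(m, n_pos) > 0, torus_5mode(m, n_neg) < 0)  # → True True
```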
The calculations actually work similarly for any odd number of string modes,
providing a simpler method for obtaining the result for the case of three
generic string modes with no degeneracy in (2.9).
### 3.3 Four modes
We use the second equality in the integral formula for the free torus
two-point functions, which fixes the 4th string mode in the first segment,
namely $0<y_{4}<x_{1}$. It turns out that there are 20 cases for the positions
of the remaining string modes $y_{1,2,3}$, up to some permutation symmetries.
We write the 8-dimensional integrals in the standard form and list these 20
cases in the following.
1.
The variables $0<y_{1},y_{2},y_{3},y_{4}<x_{1}$. The permutations of indices
$1,2,3,4$ give $4!=24$ such integrals. Without loss of generality we consider
$0<y_{1}<y_{2}<y_{3}<y_{4}<x_{1}$. After dissecting the integral, the
contribution is
$I_{(1,1,1,5)}(n_{4}-m_{4},n_{4}-m_{4}+n_{3}-m_{3},-n_{1}+m_{1},0).$ (3.9)
2.
The variables $0<y_{2}<y_{3}<y_{4}<x_{1}<y_{1}<x_{1}+x_{2}$. There are
$3!\cdot 3=18$ similar integrals by counting the choice of $y_{1}$ and
permutations of indices $2,3,4$. The contribution is
$I_{(2,2,2,1,1)}(-m_{1},-n_{1},0,-m_{1}+n_{4}-m_{4},-n_{1}-n_{2}+m_{2}).$
(3.10)
3.
The variables $0<y_{2}<y_{3}<y_{4}<x_{1}<x_{1}+x_{2}<y_{1}<x_{1}+x_{2}+x_{3}$.
There are also $3!\cdot 3=18$ similar integrals by counting the choice of
$y_{1}$ and permutations of indices $2,3,4$. The contribution is
$I_{(2,2,1,1,1,1)}(n_{1}-m_{1},0,-m_{1},n_{1},m_{2}-n_{2},n_{1}-m_{1}+n_{4}-m_{4}).$
(3.11)
4.
The variables $0<y_{2}<y_{3}<y_{4}<x_{1}<x_{1}+x_{2}+x_{3}<y_{1}<1$. There are
also $3!\cdot 3=18$ similar integrals by counting the choice of $y_{1}$ and
permutations of indices $2,3,4$. The contribution is
$I_{(2,2,2,1,1)}(m_{1},n_{1},0,n_{1}+n_{4}-m_{4},m_{1}+m_{2}-n_{2}).$ (3.12)
5.
The variables $0<y_{3}<y_{4}<x_{1}<y_{1}<y_{2}<x_{1}+x_{2}$. There are $3\cdot
2\cdot 2=12$ similar integrals by counting the choice of $y_{3}$ and exchange
of indices between $3,4$ and between $1,2$. The contribution is
$I_{(2,2,2,1,1)}(-m_{1}-m_{2},-n_{1}-n_{2},0,-n_{1}-m_{2},m_{3}+n_{4}).$
(3.13)
6.
The variables $0<y_{3}<y_{4}<x_{1}<x_{1}+x_{2}<y_{1}<y_{2}<x_{1}+x_{2}+x_{3}$.
There are also $3\cdot 2\cdot 2=12$ similar integrals by counting the choice
of $y_{3}$ and exchange of indices between $3,4$ and between $1,2$. The
contribution is
$I_{(2,2,1,1,1,1)}(n_{1}+n_{2}-m_{1}-m_{2},0,n_{1}+n_{2},-m_{1}-m_{2},n_{2}-m_{2},-n_{3}+m_{3}).$
(3.14)
7.
The variables $0<y_{3}<y_{4}<x_{1}<x_{1}+x_{2}+x_{3}<y_{1}<y_{2}<1$. There are
also $3\cdot 2\cdot 2=12$ similar integrals by counting the choice of $y_{3}$
and exchange of indices between $3,4$ and between $1,2$. The contribution is
$I_{(2,2,2,1,1)}(n_{1}+n_{2},m_{1}+m_{2},0,m_{1}+n_{2},-n_{3}-m_{4}).$ (3.15)
8.
The variables $0<y_{3}<y_{4}<x_{1}<y_{1}<x_{1}+x_{2}<y_{2}<x_{1}+x_{2}+x_{3}$.
There are also $3\cdot 2\cdot 2=12$ similar integrals by counting the choice
of $y_{3}$ and exchange of indices between $3,4$ and between $1,2$. The
contribution is
$I(m_{2},n_{2},-m_{1},-n_{1},m_{2}+n_{2},-m_{1}+n_{2},m_{2}-n_{1},m_{2}+n_{2}+m_{3}+n_{4}).$
(3.16)
9.
The variables
$0<y_{3}<y_{4}<x_{1}<y_{1}<x_{1}+x_{2}<x_{1}+x_{2}+x_{3}<y_{2}<1$. There are
also $3\cdot 2\cdot 2=12$ similar integrals by counting the choice of $y_{3}$
and exchange of indices between $3,4$ and between $1,2$. The contribution is
$I(m_{2},n_{2},-m_{1},-n_{1},0,m_{2}-n_{1},n_{2}-m_{1},n_{2}-m_{1}+n_{4}-m_{4}).$
(3.17)
10.
The variables
$0<y_{3}<y_{4}<x_{1}<x_{1}+x_{2}<y_{1}<x_{1}+x_{2}+x_{3}<y_{2}<1$. There are
also $3\cdot 2\cdot 2=12$ similar integrals by counting the choice of $y_{3}$
and exchange of indices between $3,4$ and between $1,2$. The contribution is
$I(m_{2},n_{2},-m_{1},-n_{1},-m_{1}-n_{1},m_{2}-n_{1},n_{2}-m_{1},m_{2}+m_{3}+n_{2}+n_{4}).$
(3.18)
11.
The variables $0<y_{4}<x_{1}<y_{1}<y_{2}<y_{3}<x_{1}+x_{2}$. There are $3!=6$
similar integrals by counting the permutations of indices $1,2,3$. The
contribution is
$I_{(2,2,2,1,1)}(m_{4},n_{4},0,n_{3}+n_{4}-m_{3},m_{1}+m_{4}-n_{1}).$ (3.19)
12.
The variables $0<y_{4}<x_{1}<x_{1}+x_{2}<y_{1}<y_{2}<y_{3}<x_{1}+x_{2}+x_{3}$.
There are also $3!=6$ similar integrals by counting the permutations of
indices $1,2,3$. The contribution is
$I_{(2,2,1,1,1,1)}(n_{4}-m_{4},0,n_{4},-m_{4},-n_{1}+m_{1},n_{3}+n_{4}-m_{3}-m_{4}).$
(3.20)
13.
The variables $0<y_{4}<x_{1}<x_{1}+x_{2}+x_{3}<y_{1}<y_{2}<y_{3}<1$. There are
also $3!=6$ similar integrals by counting the permutations of indices $1,2,3$.
The contribution is
$I_{(2,2,2,1,1)}(-m_{4},-n_{4},0,-m_{4}-m_{3}+n_{3},-n_{4}+m_{1}-n_{1}).$
(3.21)
14.
The variables $0<y_{4}<x_{1}<y_{1}<y_{2}<x_{1}+x_{2}<y_{3}<x_{1}+x_{2}+x_{3}$.
There are also $3!=6$ similar integrals by counting the permutations of
indices $1,2,3$. The contribution is
$I(m_{4},n_{4},-m_{3},-n_{3},0,m_{4}-n_{3},n_{4}-m_{3},m_{1}+m_{4}+n_{2}+n_{4}).$
(3.22)
15.
The variables
$0<y_{4}<x_{1}<y_{1}<y_{2}<x_{1}+x_{2}<x_{1}+x_{2}+x_{3}<y_{3}<1$. There are
also $3!=6$ similar integrals by counting the permutations of indices $1,2,3$.
The contribution is
$I(m_{3},n_{3},0,m_{3}+m_{4},n_{3}+n_{4},-n_{1}-m_{2},m_{3}+m_{4}+n_{3}).$
(3.23)
16.
The variables $0<y_{4}<x_{1}<y_{3}<x_{1}+x_{2}<y_{1}<y_{2}<x_{1}+x_{2}+x_{3}$.
There are also $3!=6$ similar integrals by counting the permutations of
indices $1,2,3$. The contribution is
$I(m_{4},n_{4},0,m_{3}+m_{4},n_{3}+n_{4},-n_{1}-m_{2},m_{3}+m_{4}+n_{4},n_{3}+n_{4}+m_{4}).$
(3.24)
17.
The variables
$0<y_{4}<x_{1}<x_{1}+x_{2}<y_{1}<y_{2}<x_{1}+x_{2}+x_{3}<y_{3}<1$. There are
also $3!=6$ similar integrals by counting the permutations of indices $1,2,3$.
The contribution is
$I(-m_{4},-n_{4},0,m_{1}+m_{2},n_{1}+n_{2},n_{2}+m_{1},m_{1}+m_{2}-n_{4},n_{1}+n_{2}-m_{4}).$
(3.25)
18.
The variables
$0<y_{4}<x_{1}<y_{3}<x_{1}+x_{2}<x_{1}+x_{2}+x_{3}<y_{1}<y_{2}<1$. There are
also $3!=6$ similar integrals by counting the permutations of indices $1,2,3$.
The contribution is
$I(-m_{3},-n_{3},0,m_{1}+m_{2},n_{1}+n_{2},m_{1}+n_{2},n_{1}+n_{2}-m_{3},m_{1}+m_{2}-n_{3}).$
(3.26)
19.
The variables
$0<y_{4}<x_{1}<x_{1}+x_{2}<y_{3}<x_{1}+x_{2}+x_{3}<y_{1}<y_{2}<1$. There are
also $3!=6$ similar integrals by counting the permutations of indices $1,2,3$.
The contribution is
$I(m_{3},n_{3},-m_{4},-n_{4},0,n_{3}-m_{4},m_{3}-n_{4},m_{1}+m_{3}+n_{2}+n_{3}).$
(3.27)
20.
The variables
$0<y_{4}<x_{1}<y_{1}<x_{1}+x_{2}<y_{2}<x_{1}+x_{2}+x_{3}<y_{3}<1$. There are
also $3!=6$ similar integrals by counting the permutations of indices $1,2,3$.
The contribution is
$I(m_{2},n_{2},-m_{1},-n_{1},m_{2}+m_{3}-n_{1},m_{2}+m_{3}+n_{2},n_{2}+n_{3}+m_{2},n_{2}+n_{3}-m_{1}).$
(3.28)
The case counts above total $24+3\cdot 18+6\cdot 12+10\cdot 6=210$ integrals.
The 20 cases can be organized into 10 types of integrals, so that the total
contribution can be written more succinctly as
$\displaystyle~{}~{}\langle\bar{O}^{J}_{(m_{1},m_{2},m_{3},m_{4})}O^{J}_{(n_{1},n_{2},n_{3},n_{4})}\rangle_{{\rm
torus}}$ (3.29)
$\displaystyle=g^{2}\sum_{(i,j,k,l)}[I_{(1,1,1,5)}(n_{i}-m_{i},n_{i}-m_{i}+n_{j}-m_{j},-n_{k}+m_{k},0)$
$\displaystyle+I_{(2,2,2,1,1)}(-m_{i},-n_{i},0,-m_{i}+n_{j}-m_{j},-n_{i}-n_{k}+m_{k})$
$\displaystyle+I_{(2,2,2,1,1)}(m_{i},n_{i},0,m_{i}-n_{j}+m_{j},n_{i}+n_{k}-m_{k})$
$\displaystyle+I_{(2,2,2,1,1)}(m_{i}+m_{j},n_{i}+n_{j},0,m_{i}+n_{j},-m_{k}-n_{l})$
$\displaystyle+I_{(2,2,1,1,1,1)}(n_{i}-m_{i},0,n_{i},-m_{i},-n_{j}+m_{j},n_{i}-m_{i}+n_{k}-m_{k})$
$\displaystyle+I(m_{i},n_{i},-m_{j},-n_{j},0,m_{i}-n_{j},n_{i}-m_{j},n_{i}-m_{j}+n_{k}-m_{k})$
$\displaystyle+I(m_{i},n_{i},-m_{j},-n_{j},-m_{j}-n_{j},m_{i}-n_{j},n_{i}-m_{j},m_{i}+n_{i}+m_{k}+n_{l})$
$\displaystyle+I(m_{i},n_{i},-m_{j},-n_{j},m_{i}+n_{i},m_{i}-n_{j},n_{i}-m_{j},m_{i}+n_{i}+m_{k}+n_{l})]$
$\displaystyle+g^{2}\sum_{(i,j)\leftrightarrow(k,l)}I_{(2,2,1,1,1,1)}(n_{i}+n_{j}-m_{i}-m_{j},0,n_{i}+n_{j},-m_{i}-m_{j},n_{i}-m_{i},-n_{k}+m_{k})$
$\displaystyle+g^{2}\sum_{(i,j,k)}I(m_{i},n_{i},-m_{j},-n_{j},m_{i}+n_{i}+m_{k},m_{i}+n_{i}+n_{k},m_{i}+m_{k}-n_{j},n_{i}+n_{k}-m_{j})$
$\displaystyle\equiv g^{2}\sum_{k=1}^{10}I_{k}.$
We now explain the notation. For later convenience we denote the 10 types of
integrals by $I_{k},k=1,2,\cdots,10$, in the order written in the above
equation; these labels should not be confused with the labels of the
transverse space directions in the pp-wave geometry. The first 8 types of
integrals are summed over the 24 permutations $(i,j,k,l)$ of $1234$. The
notation $(i,j)\leftrightarrow(k,l)$ in $I_{9}$ indicates that we sum only
once if two permutations are related by exchanging
$(i,j)\leftrightarrow(k,l)$; this can be achieved e.g. by specifying
$1\in\{i,k\}$ in the permutations. The last integral $I_{10}$ is summed over
the 6 permutations $(i,j,k)$ of $123$.
Of the 10 types of integrals, $I_{1}$ comes from case 1 in the above
enumeration, $I_{2}$ from combining cases 2 and 13, $I_{3}$ from combining
cases 4 and 11, $I_{4}$ from combining cases 5 and 7, $I_{5}$ from combining
cases 3 and 12, $I_{6}$ from combining cases 9, 14 and 19, $I_{7}$ from
combining cases 10, 15 and 16, $I_{8}$ from combining cases 8, 17 and 18,
$I_{9}$ from case 6, and $I_{10}$ from case 20. Although the last two
integrals $I_{9},I_{10}$ are not summed over the full 24 permutations of
$1234$, it is easy to show that they are also permutation symmetric, using the
closed string level matching conditions and the invariance of the standard
integral under a shift of all arguments by an integer.
We can perform the calculations using a computer program. The calculations are
straightforward for a given set of mode numbers. An expression for the generic
case, where there is no further degeneracy in the arguments in (3.29), can be
obtained, but it is too long to write down here. We can check some special
cases. For example, when the two modes $m_{4}=n_{4}=0$, this reduces to the
case of three string modes considered in [18]. Another special case is
$m_{i}=0$ and $n_{i}\neq 0$, for which the result vanishes identically,
consistent with the conservation of discrete momentum in the transverse
direction.
The total contribution (3.29) is always real, although each individual
integral can be complex. Computing the results for some random mode numbers,
we find that the result can be either positive or negative. We can offer some
potentially helpful empirical observations about the signs of the torus
two-point functions, although for now there seems to be no particularly strong
motivation for a thorough rigorous analysis. In the following we assume that
all $m_{i},n_{i},i=1,2,3,4$ are non-zero.
1.
If two pairs of mode numbers are the same, e.g. $m_{i}=n_{i},i=1,2$, then the
torus two-point function is most likely positive, though there are some
exceptions. For example, in the case
$(m_{i},n_{i})=(-10,-10),(-10,-10),(1,10),(19,10),i=1,2,3,4$, the torus
two-point function is negative. If all mode numbers are the same, i.e.
$m_{i}=n_{i},i=1,2,3,4$, we have found no example of a negative torus
two-point function.
2.
For $m_{i}\neq n_{i},i=1,2,3,4$, the sign of the torus two-point function is
most likely the same as that of $\prod_{i=1}^{4}(m_{i}-n_{i})$. There are
again some exceptions. For example, in the case
$(m_{i},n_{i})=(8,5),(2,-6),(-5,-6),(-5,7),i=1,2,3,4$, the torus two-point
function is positive. This phenomenon can be explained using the method
applied to the case of five modes in (3.6). In the case of four modes we now
need to pick up the real parts of the integrals in (3.7). Only two terms give
the last integral with completely degenerate arguments, so the result can be
roughly written as
$\displaystyle~{}~{}\langle\bar{O}^{J}_{(m_{1},m_{2},m_{3},m_{4})}O^{J}_{(n_{1},n_{2},n_{3},n_{4})}\rangle_{\textrm{torus}}$
(3.30)
$\displaystyle=\frac{g^{2}}{(2\pi)^{4}\prod_{i=1}^{4}(m_{i}-n_{i})}(\frac{1}{3}+\cdots),$
where the $\cdots$ denotes some correction terms, which are inverse squares of
non-zero integers built from the mode numbers and also suppressed by a factor
of $2\pi^{2}$, so their absolute values are most likely small compared to
$\frac{1}{3}$.
## 4 The factorization formula
Since we have now found a new phenomenon, namely that the BMN torus two-point
functions with more than three string modes can be negative, it is worthwhile
to test the other proposals for the holographic dictionary, in particular the
factorization formulas in [17, 18]. This also serves as a check of the
somewhat complicated calculations in Section 3. In this section we focus on
the case of four string modes.
### 4.1 Planar three-point functions
Figure 2: The planar three-point diagrams.
First we shall calculate the relevant free planar three-point functions, which
correspond to the string vertices. There are 3 ways to distribute the 4 scalar
insertions as the long string is cut into two short strings with $J_{1}\equiv
xJ$ and $J_{2}\equiv(1-x)J$ numbers of $Z$'s ($0<x<1$). The field theory
diagrams are depicted in Figure 2. We integrate over the positions of the
insertions in the BMN operators and compute the results
$\displaystyle\langle\bar{O}^{J}_{m_{1},m_{2},m_{3},m_{4}}O^{J_{1}}_{k_{1},k_{2},k_{3},k_{4}}O^{J_{2}}\rangle$
(4.1)
$\displaystyle=\frac{g}{\sqrt{J}}\frac{(1-x)^{\frac{1}{2}}}{x^{\frac{3}{2}}}\int_{0}^{x}\prod_{i=1}^{4}dy_{i}e^{-2\pi
i(m_{i}-\frac{k_{i}}{x})y_{i}}=\frac{g}{\sqrt{J}}x^{\frac{5}{2}}(1-x)^{\frac{1}{2}}\prod_{i=1}^{4}\frac{\sin(\pi
m_{i}x)}{\pi(m_{i}x-k_{i})},$
$\displaystyle\langle\bar{O}^{J}_{m_{1},m_{2},m_{3},m_{4}}O^{J_{1}}_{k_{1},k_{2},k_{3}}O^{J_{2}}_{0}\rangle$
$\displaystyle=\frac{g}{\sqrt{J}x}[\int_{0}^{x}\prod_{i=1}^{3}dy_{i}e^{-2\pi
i(m_{i}-\frac{k_{i}}{x})y_{i}}][\int_{x}^{1}dy_{4}e^{-2\pi
im_{4}y_{4}}]=-\frac{gx^{2}}{\sqrt{J}}[\prod_{i=1}^{3}\frac{\sin(\pi
m_{i}x)}{\pi(m_{i}x-k_{i})}]\frac{\sin(\pi m_{4}x)}{\pi m_{4}},$
$\displaystyle\langle\bar{O}^{J}_{m_{1},m_{2},m_{3},m_{4}}O^{J_{1}}_{-k,k}O^{J_{2}}_{-l,l}\rangle$
$\displaystyle=g\frac{[\int_{0}^{x}dy_{1}dy_{2}e^{-2\pi
i(m_{1}+\frac{k}{x})y_{1}}e^{-2\pi
i(m_{2}-\frac{k}{x})y_{2}}][\int_{x}^{1}dy_{3}dy_{4}e^{-2\pi
i(m_{3}y_{3}+\frac{l(y_{3}-x)}{1-x})}e^{-2\pi
i(m_{4}y_{4}-\frac{l(y_{4}-x)}{1-x})}]}{\sqrt{Jx(1-x)}}$
$\displaystyle=\frac{g}{\sqrt{J}}[x(1-x)]^{\frac{3}{2}}\frac{\prod_{i=1}^{4}\sin(\pi
m_{i}x)}{\pi^{4}(m_{1}x+k)(m_{2}x-k)(m_{3}(1-x)+l)(m_{4}(1-x)-l)}.$
For simplicity of notation, we do not label the specific string modes in the
operators, with the implicit understanding that the string modes appearing in
the same order in the $\bar{O}$ and $O$ operators are the same. We note that
the integral formulas are valid for any mode numbers, but the integrated
results in the above equation may not be valid in some special cases where a
denominator vanishes, e.g. some $m_{i}=k_{i}=0$; in those cases one needs to
do the integral separately. As in the cases with fewer string modes, the
three-point functions are always suppressed by a factor of $1/\sqrt{J}$, so
they are vanishing or “virtual” in the BMN limit $J\to\infty$.
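The first line of (4.1) can be checked numerically by evaluating the insertion-position integrals directly; below is a sketch (the function names, sample modes, and the value of $x$ are our own choices) for the diagram with all four modes on the $J_{1}=xJ$ string.

```python
import cmath, math

def mode_integral(m, k, x, N=20000):
    """One insertion-position integral, int_0^x exp(-2 pi i (m - k/x) y) dy (midpoint rule)."""
    a = m - k / x
    h = x / N
    return sum(cmath.exp(-2j * math.pi * a * (j + 0.5) * h) for j in range(N)) * h

def three_point_closed(m, k, x, g=1.0, J=1.0):
    """Closed form, first line of (4.1): all four modes carried by the J1 = xJ string."""
    val = g / math.sqrt(J) * x**2.5 * math.sqrt(1 - x)
    for mi, ki in zip(m, k):
        val *= math.sin(math.pi * mi * x) / (math.pi * (mi * x - ki))
    return val

m, k, x = (3, -1, 2, -4), (1, 0, 2, -3), 0.37
assert sum(m) == 0 and sum(k) == 0       # closed string level matching on both operators

num = math.sqrt(1 - x) / x**1.5          # prefactor of the integral representation (g = J = 1)
for mi, ki in zip(m, k):
    num *= mode_integral(mi, ki, x)

# With level matching the phases cancel, so the product of integrals is real
# and reproduces the closed form.
assert abs(num - three_point_closed(m, k, x)) < 1e-6
```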
### 4.2 One-loop string diagram calculations
Figure 3: The one-loop string diagrams.
There are also 3 one-loop string processes corresponding to the torus
two-point functions, depicted in Figure 3. In the BMN limit, the sum over the
operator length becomes an integral, $\sum_{J_{1}=1}^{J-1}\to
J\int_{0}^{1}dx$. We denote the contributions $S_{1},S_{2},S_{3}$ as follows
$\displaystyle
S_{1}=J\int_{0}^{1}dx\sum_{\sum_{i}k_{i}=0}\langle\bar{O}^{J}_{m_{1},m_{2},m_{3},m_{4}}O^{J_{1}}_{k_{1},k_{2},k_{3},k_{4}}O^{J_{2}}\rangle\langle\bar{O}^{J_{1}}_{k_{1},k_{2},k_{3},k_{4}}\bar{O}^{J_{2}}O^{J}_{n_{1},n_{2},n_{3},n_{4}}\rangle,$
(4.2) $\displaystyle
S_{2}=J\int_{0}^{1}dx\sum_{i_{4}=1}^{4}\sum_{\sum_{i}k_{i}=0}\langle\bar{O}^{J}_{m_{i_{1}},m_{i_{2}},m_{i_{3}},m_{i_{4}}}O^{J_{1}}_{k_{1},k_{2},k_{3}}O^{J_{2}}_{0}\rangle\langle\bar{O}^{J_{1}}_{k_{1},k_{2},k_{3}}\bar{O}^{J_{2}}_{0}O^{J}_{n_{i_{1}},n_{i_{2}},n_{i_{3}},n_{i_{4}}}\rangle,$
$\displaystyle
S_{3}=J\int_{0}^{1}dx\sum_{i_{2}=2}^{4}\sum_{k,l=-\infty}^{+\infty}\langle\bar{O}^{J}_{m_{1},m_{i_{2}},m_{i_{3}},m_{i_{4}}}O^{J_{1}}_{-k,k}O^{J_{2}}_{-l,l}\rangle\langle\bar{O}^{J_{1}}_{-k,k}\bar{O}^{J_{2}}_{-l,l}O^{J}_{n_{1},n_{i_{2}},n_{i_{3}},n_{i_{4}}}\rangle,$
where $(i_{1},i_{2},i_{3},i_{4})$ in $S_{2}$ is a cyclic permutation of
$(1234)$ and $(i_{2},i_{3},i_{4})$ in $S_{3}$ is a cyclic permutation of
$(234)$.
There are two methods for computing the equations in (4.2). The first method
is to directly sum over the integrated results in (4.1) and then perform the
$x$ integral. One needs to use some summation formulas, which is mostly
straightforward, but this method must treat degenerate special cases
separately. The second method, discussed in [18] for the case of two string
modes, is to directly use the integral formulas for the planar three-point
functions and sum over the intermediate modes first. This can be done using
the Poisson summation formula. The resulting integrals with delta function
constraints can then be converted to the standard integrals in Sec. 3.1. The
second method works universally for any string mode numbers without the need
to treat degenerate cases separately, but the careful dissections of the
integral domains are also quite complicated.
Since the sums and integrals here are always convergent, the two methods are
equivalent. The first method was used in the case of three string modes in
[18]. However, in our current case of four string modes, the calculations
become too complicated to carry out analytically in some steps. In any case,
the explicit results are too long and complicated to provide useful physical
insight, so it is better to use the second method and write the contributions
in terms of the standard integrals in Sec. 3.1. We also use the first method
to perform some complementary numerical tests of the results.
Summing over the intermediate string modes, we find the integral formulas for
string loop diagram contributions
$\displaystyle
S_{1}=g^{2}\int_{0}^{1}(1-x)dx[\int_{0}^{x}\prod_{i=1}^{4}dy_{i}dy^{\prime}_{i}e^{2\pi
i(n_{i}y^{\prime}_{i}-m_{i}y_{i})}][\prod_{i=1}^{3}\sum_{p_{i}=-\infty}^{+\infty}\delta(y_{i}^{\prime}-y_{i}+y_{4}-y^{\prime}_{4}-p_{i}x)],$
(4.3) $\displaystyle
S_{2}=g^{2}\int_{0}^{1}dx[\int_{0}^{x}\prod_{i=1}^{3}dy_{i}dy^{\prime}_{i}][\int_{x}^{1}dy_{4}dy^{\prime}_{4}][\prod_{i=1}^{2}\sum_{p_{i}=-\infty}^{+\infty}\delta(y_{i}^{\prime}-y_{i}+y_{3}-y^{\prime}_{3}-p_{i}x)]$
$\displaystyle~{}~{}~{}~{}~{}\cdot e^{2\pi
i\sum_{i=1}^{4}(n_{i}y^{\prime}_{i}-m_{i}y_{i})}+\sum_{i=1}^{3}(i\leftrightarrow
4),$ $\displaystyle
S_{3}=g^{2}\int_{0}^{1}dx[\int_{0}^{x}\prod_{i=1}^{2}dy_{i}dy^{\prime}_{i}][\int_{x}^{1}\prod_{i=3}^{4}dy_{i}dy^{\prime}_{i}][\sum_{p_{1},p_{2}=-\infty}^{+\infty}\delta(y_{1}^{\prime}-y_{1}+y_{2}-y^{\prime}_{2}-p_{1}x)$
$\displaystyle~{}~{}~{}~{}~{}~{}\cdot\delta(y_{3}^{\prime}-y_{3}+y_{4}-y^{\prime}_{4}-p_{2}(1-x))]e^{2\pi
i\sum_{i=1}^{4}(n_{i}y^{\prime}_{i}-m_{i}y_{i})}+\sum_{i=3}^{4}(i\leftrightarrow
2),$
where the contributions in $S_{2},S_{3}$ have some extra terms, which are
simply related to the explicit expressions by permutations of indices. After
carefully dissecting the integral formulas, with details explained in Appendix
A, we find that the contributions can be transformed into the 10 types of
integrals in (3.29). Specifically, the results are
$\displaystyle S_{1}=g^{2}(2I_{1}+I_{2}+I_{3}+I_{4}),$ (4.4) $\displaystyle
S_{2}=g^{2}(I_{2}+I_{3}+2I_{5}+2I_{6}+I_{7}+I_{8}),$ $\displaystyle
S_{3}=g^{2}(I_{4}+I_{7}+I_{8}+2I_{9}+2I_{10}).$
So we confirm an entry of the holographic dictionary of factorization formulas
$2\langle\bar{O}^{J}_{(m_{1},m_{2},m_{3},m_{4})}O^{J}_{(n_{1},n_{2},n_{3},n_{4})}\rangle_{{\rm
torus}}=S_{1}+S_{2}+S_{3}=2g^{2}\sum_{i=1}^{10}I_{i}.$ (4.5)
## 5 Comparison with light cone string field cubic vertex
In this section we compare the planar three-point functions with many string
modes to the Green-Schwarz light cone string field cubic vertex, generalizing
the earlier work on the case of two string modes [16]. The bosonic part of the
Green-Schwarz cubic string field vertex can be described by
$|V\rangle=E_{a}|0\rangle$, where the operator
by $|V\rangle=E_{a}|0\rangle$, where the operator
$E_{a}\sim\exp[\sum_{r=1}^{2}\sum_{I=1}^{8}\sum_{m,n=-\infty}^{\infty}a_{m(3)}^{I\dagger}N^{3,r}_{m,n}a_{n(r)}^{I\dagger}].$
(5.1)
We provide some explanations of the notation. The index $r=1,2,3$ labels the
three strings, with the convention that the $r=3$ string has the largest light
cone width, which is the sum of those of $r=1,2$. The bosonic operator
$a^{I\dagger}_{m(r)}$ creates a mode of the $r$-th string in the $I$-th
transverse direction with BMN mode number $m$. The Neumann matrix encodes the
string interactions. Its elements $N^{3,3}_{m,n}=0$ for any $m,n$, and it has
the symmetry $N^{r,s}_{m,n}=N^{s,r}_{n,m}$. Since the number of string modes
in the 3rd long string is the sum of those of the two $r=1,2$ short strings,
we only need to include the $N^{3,r}_{m,n}$ type of matrix elements. This
corresponds to the calculations of the free planar BMN three-point functions,
where the string modes are contracted between the long string and the two
short strings.
The Neumann matrix elements were computed in the pp-wave background [25] and
become much simpler in the infinite curvature limit [16]. We denote the light
cone widths of the two short strings as $x$ and $1-x$, corresponding to the
relative lengths of the two short operators in the free planar three-point
function. The relevant matrix elements in the infinite curvature limit are
$\displaystyle N^{3,1}_{0,0}=\sqrt{x},~{}~{}~{}~{}N^{3,2}_{0,0}=\sqrt{1-x},$
(5.2) $\displaystyle N^{3,1}_{m,n}=\frac{\sqrt{x}\sin(\pi
mx)}{\pi(mx-n)},~{}~{}~{}~{}N^{3,2}_{m,n}=-\frac{\sqrt{1-x}\sin(\pi
mx)}{\pi[m(1-x)-n]},~{}~{}~{}~{}{\rm for}~{}(m,n)\neq(0,0).$
We should note that we use a convention for the basis of the bosonic creation
operators that differs from the literature [25, 16]. Due to the different
conventions, there are also some sign differences in the Neumann matrix
elements relative to the literature. The current convention is most convenient
from the field theory perspective.
In the study of superstring field theories in flat space [23, 24], besides the
cubic vertex, there are other important physical quantities, such as the
prefactor and the higher order contact interactions. This is further studied
in the pp-wave background and the dual field theory in e.g. [31, 32]. With our
specialization to the infinite curvature limit, the tensionless strings do not
have an effective action description. So in our case it seems that at tree
level, the cubic vertex $|V\rangle$ is the only remaining relevant finite
physical object to consider.
Suppose three BMN operators $O_{1},O_{2},O_{3}$ correspond to three string
states $|1\rangle,|2\rangle,|3\rangle$, then the planar three-point functions
are related to the string vertex
$\frac{\langle\bar{O}_{3}O_{1}O_{2}\rangle}{\langle\bar{O}^{J}O^{xJ}O^{(1-x)J}\rangle}=\frac{\langle
1|\langle 2|\langle 3|V\rangle}{\langle 0|V\rangle},$ (5.3)
where $|0\rangle$ is the string vacuum state, and the normalization factor of
BMN vacuum correlator is simply
$\langle\bar{O}^{J}O^{xJ}O^{(1-x)J}\rangle=\frac{g\sqrt{x(1-x)}}{\sqrt{J}}.$
(5.4)
The right hand side of (5.3) can be computed by expanding the bosonic operator
(5.1) to the appropriate order and extracting the relevant Neumann matrix
elements.
For example, for the case of four string modes, the BMN operators correspond
to the string states as
$\displaystyle O^{J}_{m_{1},m_{2},m_{3},m_{4}}\Longleftrightarrow
a^{I_{1}\dagger}_{m_{1}(3)}a^{I_{2}\dagger}_{m_{2}(3)}a^{I_{3}\dagger}_{m_{3}(3)}a^{I_{4}\dagger}_{m_{4}(3)}|0\rangle,$
(5.5) $\displaystyle O^{J_{1}}_{k_{1},k_{2},k_{3},k_{4}}\Longleftrightarrow
a^{I_{1}\dagger}_{k_{1}(1)}a^{I_{2}\dagger}_{k_{2}(1)}a^{I_{3}\dagger}_{k_{3}(1)}a^{I_{4}\dagger}_{k_{4}(1)}|0\rangle,$
$\displaystyle O^{J_{1}}_{k_{1},k_{2},k_{3}}\Longleftrightarrow
a^{I_{1}\dagger}_{k_{1}(1)}a^{I_{2}\dagger}_{k_{2}(1)}a^{I_{3}\dagger}_{k_{3}(1)}|0\rangle,$
$\displaystyle O^{J_{1}}_{-k,k}\Longleftrightarrow
a^{I_{1}\dagger}_{-k(1)}a^{I_{2}\dagger}_{k(1)}|0\rangle,~{}~{}~{}O^{J_{2}}_{-l,l}\Longleftrightarrow
a^{I_{3}\dagger}_{-l(2)}a^{I_{4}\dagger}_{l(2)}|0\rangle,$ $\displaystyle
O^{J_{2}}\Longleftrightarrow|0\rangle,~{}~{}~{}O^{J_{2}}_{0}\Longleftrightarrow
a^{I_{4}\dagger}_{0(2)}|0\rangle.$
We can expand the exponential operator (5.1) to 4th order and compute the
planar three-point functions with the usual commutation relations of the
creation and annihilation operators. The only non-vanishing contributions come
from the 4th order, which provides the same number of creation operators as
annihilation operators. The results are
$\displaystyle\langle\bar{O}^{J}_{m_{1},m_{2},m_{3},m_{4}}O^{J_{1}}_{k_{1},k_{2},k_{3},k_{4}}O^{J_{2}}\rangle=\frac{g\sqrt{x(1-x)}}{\sqrt{J}}\prod_{i=1}^{4}N^{3,1}_{m_{i},k_{i}},$
(5.6)
$\displaystyle\langle\bar{O}^{J}_{m_{1},m_{2},m_{3},m_{4}}O^{J_{1}}_{k_{1},k_{2},k_{3}}O^{J_{2}}_{0}\rangle=\frac{g\sqrt{x(1-x)}}{\sqrt{J}}(\prod_{i=1}^{3}N^{3,1}_{m_{i},k_{i}})N^{3,2}_{m_{4},0},$
$\displaystyle\langle\bar{O}^{J}_{m_{1},m_{2},m_{3},m_{4}}O^{J_{1}}_{-k,k}O^{J_{2}}_{-l,l}\rangle=\frac{g\sqrt{x(1-x)}}{\sqrt{J}}N^{3,1}_{m_{1},-k}N^{3,1}_{m_{2},k}N^{3,2}_{m_{3},-l}N^{3,2}_{m_{4},l}.$
This agrees with the field theory results (4.1), using the Neumann matrix
elements in (5.2). One can also check the case of three string modes
previously computed in [18], as well as various degenerate cases. Of course,
as we mentioned, the planar three-point functions are vanishing in the BMN
limit $J\to\infty$ and are regarded as virtual processes, but their ratios
with the vacuum correlator are finite and meaningfully related to the Neumann
matrix elements.
It is not difficult to see that the Neumann matrix elements in (LABEL:Neumann)
simply correspond to integrations over the positions of the relevant string
modes, with phases, in the BMN operators with proper normalization, using the
closed string level matching condition on the long 3rd string to cancel an
overall phase. We infer that although the physical setting has at most eight
string modes in distinct directions, the mathematical structure of the
holographic dictionary (5.3) is quite robust and survives even in a
hypothetical situation with any number of different string modes, i.e. it is
not just valid for $I=1,2,\cdots,8$.
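The correspondence between Neumann matrix elements and phase integrals can be illustrated numerically. The sketch below (with hypothetical helper names; it reproduces only the phase-integral structure, not the normalization fixed in (LABEL:Neumann)) checks that the discrete phase sum over the position of a single insertion converges, in the BMN limit, to the continuum integral $\int_0^x dy\,e^{2\pi i y(k/x-m)}$:

```python
import cmath
import math

def discrete_overlap(m, k, J, J1):
    """Discrete phase sum over the position l of one insertion: mode k in a
    word of length J1 against mode m in the full word of length J."""
    return sum(cmath.exp(2j * math.pi * l * (k / J1 - m / J))
               for l in range(J1)) / J

def continuum_overlap(m, k, x):
    """BMN limit: integral_0^x exp(2*pi*i*y*(k/x - m)) dy in closed form."""
    if abs(k - m * x) < 1e-12:          # degenerate case k = m*x
        return x
    return x * (cmath.exp(-2j * math.pi * m * x) - 1) / (2j * math.pi * (k - m * x))

J, x = 4000, 0.25                        # x = J1/J held fixed in the BMN limit
J1 = int(x * J)
for m, k in [(2, 1), (3, 2), (5, -1)]:
    assert abs(discrete_overlap(m, k, J, J1) - continuum_overlap(m, k, x)) < 1e-2
```

The magnitude of the closed form is $x\sin(\pi m x)/(\pi|k-mx|)$, the familiar overlap shape of BMN three-point coefficients.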
The analysis here also provides another perspective on the reality condition
of higher genus two-point functions discussed in Sec. 2. Since the Neumann
matrix elements are all real, the planar three-point functions are also always
real. If the factorization formulas e.g. (4.5) are correct, the higher genus
two-point functions can be computed from string diagrams and must be real.
## 6 Some issues with multiple string modes in the same transverse direction
Since we have now studied BMN operators with many string modes, it is
appropriate to consider the situation of multiple modes in the same transverse
direction. To our knowledge, this situation has not been much discussed in the
literature. Naively, the corresponding BMN operators can be similarly
constructed, with the same scalar field (or covariant derivative) inserted
into the string of $Z$’s via multiple phase sums, with a possibly different
normalization discussed below.
For simplicity we consider BMN operators with only insertions of the 4
remaining real scalar fields. First we introduce some notation: if multiple string modes
correspond to the same direction, we use a square bracket to enclose the mode
numbers. For example, the BMN operator with two identical scalar fields is
denoted $O^{J}_{[-m,m]}$, and the BMN operator with three string modes where
two of them have the same direction is denoted
$O^{J}_{([m_{1},m_{2}],m_{3})}$. The closed string level matching condition is
still the same: all mode numbers should sum to zero. Since the scalar
fields in the square bracket are exchangeable, e.g. the operators
$O^{J}_{([m_{1},m_{2}],m_{3})}$ and $O^{J}_{([m_{2},m_{1}],m_{3})}$ are the
same, we can choose to order the mode numbers in the square bracket, e.g. in a
non-decreasing order.
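This bracket notation can be made precise with a small canonicalization rule (a purely illustrative sketch; `canonical_label` and `level_matched` are hypothetical helpers, not from the text):

```python
def canonical_label(groups):
    """Canonical form of a BMN mode label: each inner tuple collects the mode
    numbers of string modes in the same transverse direction (the square
    bracket); the order inside a bracket is immaterial, so sort it
    non-decreasingly."""
    return tuple(tuple(sorted(group)) for group in groups)

def level_matched(groups):
    """Closed string level matching: all mode numbers sum to zero."""
    return sum(m for group in groups for m in group) == 0

# O^J_([m1,m2],m3): the two modes in the bracket are exchangeable.
assert canonical_label([(2, -5), (3,)]) == canonical_label([(-5, 2), (3,)])
assert level_matched([(2, -5), (3,)])
```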
However, this brings a subtle issue. We recall that the chiral primary
operators with lowest dimension in a short multiplet of $\mathcal{N}=4$ super-
Yang-Mills theory are constructed from the 6 real scalars in the $SO(6)$
symmetric traceless representation; see e.g. the reviews [33, 34]. They are BPS
operators whose conformal dimensions are protected by supersymmetry. When a
real scalar appears multiple times, an operator may no longer be chiral
primary. For example, the operator $\textrm{Tr}((\phi^{I})^{2})$, known as the
Konishi operator, is not a chiral primary operator, since it is not traceless
under $SO(6)$. The conformal dimension of this operator would grow at least
as $(g_{YM}^{2}N)^{\frac{1}{4}}$ at strong coupling. On the other hand, the
BMN vacuum operator $\textrm{Tr}(Z^{J})$ is a chiral primary operator, since
a power of the complex scalar $Z$ is automatically traceless under $SO(6)$.
In the original calculations of planar anomalous conformal dimensions of the
BMN operator $O^{J}_{-m,m}$ [7], one used the fact that for $m=0$, the
operator $O^{J}_{0,0}$ is a chiral primary operator whose conformal dimension
is not corrected by gauge interactions. So one only needs to compute the mode
number $m$-dependent part which is perturbative in an effective gauge coupling
constant $\lambda^{\prime}\equiv\frac{g_{YM}^{2}N}{J^{2}}$, a small parameter
in the BMN limit. In this sense the BMN operators of distinct scalar field
insertions with non-zero modes are “near BPS” operators. As mentioned, if we
put two identical real scalars into the string of $Z$’s, the zero mode
operator, namely $O^{J}_{[0,0]}$, is no longer a chiral primary operator.
There may be large (field theory) quantum corrections to the $m$-independent
part of its conformal dimension, so in this case the calculation of the
planar conformal dimension is no longer reliable. We are not aware of a
simple natural fix which also matches the expectations from the string theory
side.
In any case, we may hope that by restricting ourselves to free gauge theory,
this issue with large quantum gauge corrections does not cause problems. We
shall retest our earlier results for the cases involving BMN operators with
multiple identical scalar fields. We find that the comparisons with the
Green-Schwarz light cone string field cubic vertex [16] and the factorization
formula [17, 18] still go through smoothly. However, the probability
interpretation [19] begins to encounter an issue in the case of three scalar
field insertions with two of them identical.
First we consider the case of two identical scalar fields. The BMN operators
are
$\displaystyle
O^{J}_{[-m,m]}=\frac{1}{\sqrt{JN^{J+2}}}\sum_{l=0}^{J-1}e^{\frac{2\pi
iml}{J}}Tr(\phi^{I}Z^{l}\phi^{I}Z^{J-l}),~{}~{}~{}~{}m>0,$ (6.1)
$\displaystyle
O^{J}_{[0,0]}=\frac{1}{\sqrt{2JN^{J+2}}}\sum_{l=0}^{J-1}Tr(\phi^{I}Z^{l}\phi^{I}Z^{J-l}),$
where $\phi^{I}$ is any one of the 4 remaining real scalars. We only need to
consider $m\geq 0$ since negative $m$ gives the same operator. The zero mode
has an extra normalization factor $\sqrt{2}$ to keep the operators
orthonormal at the planar level.
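The role of the extra $\sqrt{2}$ can be made concrete: with identical scalars there are two planar Wick pairings, producing the two phase sums in the sketch below (illustrative Python, up to overall normalization; `planar_norm` is a hypothetical helper). Only at zero mode do both pairings survive:

```python
import cmath
import math

def planar_norm(m, n, J=60):
    """Planar two-point function of O^J_[-m,m] and O^J_[-n,n], up to overall
    normalization: the two Wick pairings of the identical scalars give two
    phase sums (a sketch of the contraction counting, not the full field
    theory computation)."""
    s1 = sum(cmath.exp(2j * math.pi * (m - n) * l / J) for l in range(J)) / J
    s2 = sum(cmath.exp(2j * math.pi * (m + n) * l / J) for l in range(J)) / J
    return (s1 + s2).real

assert abs(planar_norm(3, 3) - 1.0) < 1e-9   # m = n > 0: one pairing survives
assert abs(planar_norm(3, 2) - 0.0) < 1e-9   # m != n: orthogonal
assert abs(planar_norm(0, 0) - 2.0) < 1e-9   # zero mode: both pairings survive,
                                             # hence the extra 1/sqrt(2) in (6.1)
```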
We consider the comparison with string field vertex in Sec. 5. As an example
we compute the vertex amplitude with three string states
$a^{I_{1}\dagger}_{0(1)}|0\rangle,a^{I_{2}\dagger}_{0(2)}|0\rangle,a^{I_{1}\dagger}_{-m(3)}a^{I_{2}\dagger}_{m(3)}|0\rangle$.
We have an extra contribution if the directions are the same $I_{1}=I_{2}$,
namely,
$\frac{\langle
0|a^{I_{1}}_{0(1)}a^{I_{2}}_{0(2)}a^{I_{1}}_{-m(3)}a^{I_{2}}_{m(3)}|V\rangle}{\langle
0|V\rangle}=\begin{cases}N^{3,1}_{-m,0}N^{3,2}_{m,0},&I_{1}\neq I_{2}\\
N^{3,1}_{-m,0}N^{3,2}_{m,0}+N^{3,1}_{m,0}N^{3,2}_{-m,0},&I_{1}=I_{2}.\end{cases}$
(6.2)
The extra contribution for $I_{1}=I_{2}$ also appears as an extra contraction
of the identical scalar fields in the field theory calculation. So the
comparison of BMN three-point functions with the cubic string vertex is still
valid in the case of multiple modes in the same direction.
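The contraction counting behind (6.2) can be sketched combinatorially: sum over perfect matchings of the four oscillators, keep only flavor-preserving pairs, and weight each pair with a symbolic Neumann coefficient. As an assumption of this sketch we keep only the pairings between string 3 and strings 1, 2 that appear in (6.2); the helper names are hypothetical:

```python
def pairings(items):
    """All perfect matchings of a list (three matchings for four items)."""
    if not items:
        yield ()
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield ((first, rest[i]),) + sub

def vertex_amplitude(ops, neumann):
    """<0| a_{o1} a_{o2} a_{o3} a_{o4} |V> / <0|V> as a sum over perfect
    matchings: each pair must carry the same flavor, and contributes the
    symbolic Neumann weight of its two legs (absent pairs contribute zero)."""
    terms = []
    for match in pairings(ops):
        if all(a[0] == b[0] for a, b in match):           # flavor deltas
            weights = [neumann.get(frozenset((a[1], b[1]))) for a, b in match]
            if all(w is not None for w in weights):
                terms.append(tuple(sorted(weights)))
    return sorted(terms)

# Symbolic weights for pairings between string 3 and strings 1, 2 only.
neumann = {frozenset(('0(1)', '-m(3)')): 'N31[-m,0]',
           frozenset(('0(2)', 'm(3)')):  'N32[m,0]',
           frozenset(('0(1)', 'm(3)')):  'N31[m,0]',
           frozenset(('0(2)', '-m(3)')): 'N32[-m,0]'}

distinct = vertex_amplitude(
    [('I1', '0(1)'), ('I2', '0(2)'), ('I1', '-m(3)'), ('I2', 'm(3)')], neumann)
same = vertex_amplitude(
    [('I', '0(1)'), ('I', '0(2)'), ('I', '-m(3)'), ('I', 'm(3)')], neumann)
assert distinct == [('N31[-m,0]', 'N32[m,0]')]   # one pairing survives
assert len(same) == 2                            # doubled, as in (6.2)
```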
The factorization formula also works in this case. We note that with the extra
contraction due to identical scalars, for $m,n\neq 0$ we have the formula
$\displaystyle\langle\bar{O}^{J}_{[-m,m]}O^{J_{1}}_{0}O^{J_{2}}_{0}\rangle=2\langle\bar{O}^{J}_{-m,m}O^{J_{1}}_{0}O^{J_{2}}_{0}\rangle,$
(6.3)
$\displaystyle\langle\bar{O}^{J}_{[-m,m]}O^{J_{1}}_{[-n,n]}O^{J_{2}}\rangle=\langle\bar{O}^{J}_{-m,m}O^{J_{1}}_{-n,n}O^{J_{2}}\rangle+\langle\bar{O}^{J}_{-m,m}O^{J_{1}}_{n,-n}O^{J_{2}}\rangle,$
$\displaystyle\langle\bar{O}^{J}_{[-m,m]}O^{J}_{[-n,n]}\rangle_{\rm
torus}=\langle\bar{O}^{J}_{-m,m}O^{J}_{-n,n}\rangle_{\rm
torus}+\langle\bar{O}^{J}_{-m,m}O^{J}_{n,-n}\rangle_{\rm torus},$
where $J=J_{1}+J_{2}$ and the three-point functions without label are planar.
Using the fact
$\langle\bar{O}^{J}_{-m,m}O^{J_{1}}_{0}O^{J_{2}}_{0}\rangle=\langle\bar{O}^{J}_{m,-m}O^{J_{1}}_{0}O^{J_{2}}_{0}\rangle$,
$\langle\bar{O}^{J}_{-m,m}O^{J_{1}}_{-n,n}O^{J_{2}}\rangle=\langle\bar{O}^{J}_{m,-m}O^{J_{1}}_{n,-n}O^{J_{2}}\rangle$
and the factorization formula for the case of two different modes [18, 19], we
can write the analogous formula for the current case
$\displaystyle 2\langle\bar{O}^{J}_{[-m,m]}O^{J}_{[-n,n]}\rangle_{\rm torus}$
$\displaystyle=\sum_{J_{1}=1}^{J-1}\sum_{k=0}^{\infty}\langle\bar{O}^{J}_{[-m,m]}O^{J_{1}}_{[-k,k]}O^{J_{2}}\rangle\langle\bar{O}^{J_{1}}_{[-k,k]}\bar{O}^{J_{2}}O^{J}_{[-n,n]}\rangle$
(6.4)
$\displaystyle+\sum_{J_{1}=1}^{[\frac{J}{2}]}\langle\bar{O}^{J}_{[-m,m]}O^{J_{1}}_{0}O^{J_{2}}_{0}\rangle\langle\bar{O}^{J_{1}}_{0}\bar{O}^{J_{2}}_{0}O^{J}_{[-n,n]}\rangle.$
We note that the difference is that we only need to sum over $k\geq 0$, and
the second sum is over $J_{1}\leq J_{2}$ since the scalars in the two
operators $O^{J_{1}}_{0}$ and $O^{J_{2}}_{0}$ are the same. The formula for
the case $m=n=0$ is much simpler and also works here, taking into account the
normalization in (LABEL:2samemodes).
Calculations with more string modes are similar, based on this experience. So
we conclude that, as long as the comparison with the cubic string vertex and
the factorization formula are valid in the case of many different string
modes, taking some of the modes to coincide shall not cause problems.
Now we consider the probability interpretation for two and three string modes.
Taking some modes to be the same apparently does not change the
non-negativity of the correlators. So we only need to consider the
normalization relation. For the case of two string modes in the same
direction we still have a normalization relation similar to (2.6)
$\sum_{n=0}^{\infty}\langle\bar{O}^{J}_{[-m,m]}O^{J}_{[-n,n]}\rangle_{h}=\frac{(4h-1)!!}{(2h+1)(4h)!}g^{2h},$
(6.5)
where we now only need to sum over non-negative integers $n$. The formula is
valid for both $m=0$ and $m>0$ since the zero mode decouples from non-zero
modes.
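As a quick arithmetic check of the right-hand side of (6.5), the coefficient of $g^{2h}$ can be evaluated for the first few genera (a sketch; `genus_norm` is a hypothetical helper name):

```python
from fractions import Fraction
from math import factorial

def double_factorial(n):
    """n!! for odd n >= -1 (with (-1)!! = 1)."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def genus_norm(h):
    """Coefficient of g^{2h} on the right-hand side of (6.5):
    (4h-1)!! / ((2h+1)(4h)!)."""
    return Fraction(double_factorial(4 * h - 1), (2 * h + 1) * factorial(4 * h))

assert genus_norm(0) == 1                  # planar orthonormality
assert genus_norm(1) == Fraction(1, 24)    # torus: 3!!/(3*4!) = 1/24
assert genus_norm(2) == Fraction(1, 1920)  # genus 2: 7!!/(5*8!)
```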
However, the normalization relation encounters a problem in the case of three
string modes with two of them in the same direction. The BMN operators are the
following
$\displaystyle
O^{J}_{([m_{1},m_{2}],m_{3})}=\frac{c}{\sqrt{N^{J+3}}J}\sum_{l_{1},l_{2}=0}^{J}e^{\frac{2\pi
im_{2}l_{1}}{J}}e^{\frac{2\pi
im_{3}l_{2}}{J}}\textrm{Tr}(\phi^{1}Z^{l_{1}}\phi^{1}Z^{l_{2}-l_{1}}\phi^{2}Z^{J-l_{2}}).$
(6.6)
Compared to the case of three different modes (LABEL:BMNoperators), we add a
normalization constant which is $c=1$ if $m_{1}<m_{2}$ and
$c=\frac{1}{\sqrt{2}}$ if $m_{1}=m_{2}$, so that the operators are orthonormal
at the planar level. Again we compute the sum over one set of mode numbers.
Suppose $m_{1}<m_{2}$,
$\displaystyle~{}~{}\sum_{n_{1}\leq
n_{2}}\langle\bar{O}^{J}_{([m_{1},m_{2}],m_{3})}O^{J}_{([n_{1},n_{2}],n_{3})}\rangle_{h}$
(6.7) $\displaystyle=\sum_{n_{1}\neq
n_{2}}\langle\bar{O}^{J}_{(m_{1},m_{2},m_{3})}O^{J}_{(n_{1},n_{2},n_{3})}\rangle_{h}+\sqrt{2}\sum_{n}\langle\bar{O}^{J}_{(m_{1},m_{2},m_{3})}O^{J}_{(n,n,n_{3})}\rangle_{h}.$
Unlike the case of two string modes, the second term does not vanish in
general. So, because of the $\sqrt{2}$ factor, we cannot combine the sums
into a nice formula like (2.6).
## 7 Conclusion
The $SO(8)$ rotational symmetry of the transverse directions in the pp-wave
background (1.1) is broken by the Ramond-Ramond flux into $SO(4)\times SO(4)$,
where the bosonic string modes are described differently by covariant
derivatives and scalar field insertions in the dual CFT. As such, it is
reasonable to expect our proposed entries of pp-wave holographic dictionary,
e.g. (2.10, 4.5, 5.3), to face some challenges with more than four distinct
string modes as the infinite Ramond-Ramond flux in our setting shall separate
the two types of string modes. However, it is rather surprising that even in
the case of four string modes, the torus two-point function can be negative,
so the probability interpretation may no longer be valid. Of course, since the
two-point function is always real and symmetric, the arguments in [19] remain
valid that it cannot be naively identified with a quantum transition
amplitude on the string theory side, which would then violate the fundamental
principle of unitarity. It would be interesting to provide a reasonable
explanation, or to improve the proposed holographic dictionary (2.10) to
include this case of four string modes.
On the other hand, we confirm that the factorization formulas, e.g. (4.5), are
still valid in the case of four string modes, while the comparison with the
cubic string vertex (5.3) applies straightforwardly to any hypothetical number
of string modes, not restricted even by the eight transverse dimensions of the
pp-wave background.
We also discuss the situation with multiple string modes in the same
direction. In this case the BMN operators are no longer “near-BPS”, and there
are potentially large quantum corrections on the field theory side if one
turns on the gauge coupling. We check that the mathematical structures in the
factorization formula and comparison with cubic string vertex, e.g. (4.5,
5.3), are robust and remain valid in this situation as we stay in free gauge
theory. However, the proposed probability interpretation (2.10) again seems
rather fragile and breaks down further in the case of three string modes
because of a problem with normalization, though it still holds up in the case
of two string modes due to the decoupling of the zero mode from non-zero
modes.
It is interesting to further explore aspects of the pp-wave holographic
dictionary. For example, in the case of three string modes, the non-negativity
of torus two-point functions can be shown by explicit calculations, where
there are numerous degenerate cases to deal with separately. One may ask
whether there is a universal formalism which can deal with all cases
regardless of mode number degeneracy and may also generalize to higher genus
$h\geq 2$. It is also interesting to check whether the factorization formulas
are still valid in the case of more than four string modes or further in a
hypothetical situation of any number of (different) string modes. Without a
significant improvement of mathematical tools, the calculations are much more
complicated. In any case, it seems worthwhile to push forward with the
laborious endeavor for the purpose of a better understanding of pp-wave
holography.
As mentioned in [20], the probability interpretation of the two-point function
implies that the string perturbation series is convergent. In this sense, the
holographic higher genus calculations are not asymptotic perturbative
expansions as familiar in most examples of quantum theories, but may in
principle provide exact complete string amplitudes valid for any string
coupling. If no new non-perturbative effect is discovered in the future, then
perhaps we have luckily found a rare example of perturbatively complete string
theory, at least for the case of two string modes and very likely also for the
case of three distinct string modes pending more tests of non-negativity at
higher genus $h\geq 2$. In the cases of four or more string modes, the torus
two-point functions are no longer always non-negative. One can nevertheless
similarly follow the method in [20] to give an upper bound on the higher genus
two-point functions and show that the genus expansions remain convergent. For
small string coupling and two different sets of mode numbers, the torus
contribution is dominant, so the total two-point function could certainly be
negative and is no longer a probability distribution, although it can still be
similarly normalized to sum to unity. It would be desirable to better
understand the physical meaning of the two-point function on the string theory
side of the correspondence in this situation.
Acknowledgments
We thank Jun-Hao Li, Jian-xin Lu, Gao-fu Ren, Pei-xuan Zeng for helpful
discussions. This work was supported in part by the National Natural Science
Foundation of China (Grants No. 11675167, No. 11947301 and No. 12047502).
## Appendix A Some calculational details of the one-loop string integrals
We will convert the formulas for one-loop string diagrams (LABEL:Sformulas)
into the 10 types of integrals in (LABEL:4moderesult). In the calculations,
some cases are simply related to others by a transformation
$(m_{i},n_{i})\rightarrow(-n_{i},-m_{i})$. It is helpful to first list the
action of the transformation on the integrals
$\displaystyle I_{i}{~{}~{}\rm invariant},~{}~{}~{}i=1,4,5,6,9,10,$ (A.1)
$\displaystyle I_{2}\leftrightarrow I_{3},~{}~{}~{}~{}I_{7}\leftrightarrow
I_{8}.$
We discuss the dissection of the multi-dimensional integration domain into
many cases, and introduce positive variables $z$’s and $z^{\prime}$’s that
sum to one.
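For pairwise distinct arguments, integrals of this simplex type, $\int dz_1\cdots dz_n\,\delta(\sum_j z_j-1)\,e^{\sum_j c_j z_j}$, admit the divided-difference (Hermite-Genocchi) closed form $\sum_j e^{c_j}/\prod_{k\neq j}(c_j-c_k)$. The sketch below verifies this numerically; it is generic and not tied to the precise definition of the $I$ integrals in (LABEL:4moderesult), where repeated arguments require confluent limits:

```python
import cmath
import math

def simplex_exp_integral(c):
    """Closed form of the integral of exp(sum c_j z_j) over the simplex
    z_j >= 0, sum z_j = 1, for pairwise distinct c_j (Hermite-Genocchi):
    sum_j e^{c_j} / prod_{k != j} (c_j - c_k)."""
    total = 0
    for j, cj in enumerate(c):
        denom = 1
        for k, ck in enumerate(c):
            if k != j:
                denom *= cj - ck
        total += cmath.exp(cj) / denom
    return total

def riemann_check(c, steps=500):
    """Brute-force midpoint sum for n = 3, eliminating z3 = 1 - z1 - z2."""
    h = 1.0 / steps
    s = 0
    for i in range(steps):
        for j in range(steps - i):
            z1, z2 = (i + 0.5) * h, (j + 0.5) * h
            s += cmath.exp(c[0] * z1 + c[1] * z2 + c[2] * (1 - z1 - z2)) * h * h
    return s

# Purely imaginary exponents 2*pi*i*a_j, as in the integrals (A.2)-(A.11).
c = [2j * math.pi * a for a in (1.0, -2.0, 0.5)]
assert abs(simplex_exp_integral(c) - riemann_check(c)) < 5e-3
```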
### A.1 $S_{1}$ contribution
We assume integral variables $y_{4}^{\prime}>y_{4}$. The other case is related
by switching $y_{4}^{\prime}\leftrightarrow y_{4}$ and the transformation
$(m_{i},n_{i})\rightarrow(-n_{i},-m_{i})$. We have
$0<y_{i}+y_{4}^{\prime}-y_{4}<2x$. In this case first we write
$1-x=\int_{0}^{1-x}dz_{7}dz_{8}\delta(z_{7}+z_{8}-(1-x))$. Then the variables
$z_{7},z_{8}$ do not appear in the exponent. There is always an argument $0$
with at least multiplicity two in the standard integral. We define
$z_{4}=x-y_{4}^{\prime}$, $z_{5}=y_{4}$ and discuss various cases.
1. 1.
$x<y_{i}+y_{4}^{\prime}-y_{4}<2x,i=1,2,3$. The delta function constraints fix
$y_{i}^{\prime}=y_{i}+y_{4}^{\prime}-y_{4}-x,i=1,2,3$. Without loss of
generality we assume $y_{1}>y_{2}>y_{3}$. We change integration variables
$z_{i}=x-y_{i},i=1,2,3$, $x=z_{3}+z_{4}+z_{5}+z_{6}$,
$z_{3}=z_{3}^{\prime}+z_{2}$, $z_{2}=z_{2}^{\prime}+z_{1}$. The integral is
then
$\displaystyle\int_{0}^{1}dz_{1}dz_{2}^{\prime}dz_{3}^{\prime}[\prod_{i=4}^{8}dz_{i}]\delta(z_{1}+z_{2}^{\prime}+z_{3}^{\prime}+\sum_{i=4}^{8}z_{i}-1)$
(A.2) $\displaystyle\times e^{2\pi
i[m_{4}(z_{4}+z_{6})+n_{4}(z_{1}+z_{5})+(n_{4}+n_{1}-m_{1})z_{2}^{\prime}+(m_{3}+m_{4}-n_{3})z_{3}^{\prime}]}$
$\displaystyle=I_{(2,2,2,1,1)}(m_{4},n_{4},0,m_{3}+m_{4}-n_{3},n_{4}+n_{1}-m_{1}).$
This is a $I_{3}$ type integral.
2. 2.
$x<y_{i}+y_{4}^{\prime}-y_{4}<2x,i=2,3$, and $0<y_{1}+y_{4}^{\prime}-y_{4}<x$.
The delta functions constrain
$y_{i}^{\prime}=y_{i}+y_{4}^{\prime}-y_{4}-x,i=2,3$, and
$y_{1}^{\prime}=y_{1}+y_{4}^{\prime}-y_{4}$. Without loss of generality we
assume $y_{2}>y_{3}$. We change variables
$z_{i}=x-y_{i},i=2,3,x=z_{3}+z_{4}+z_{5}+z_{6},z_{3}=z_{3}^{\prime}+z_{2}$. We
have $y_{1}<z_{4}+z_{5}$ and this further divides into two sub-cases.
1. (a)
$y_{1}<z_{5}$. Then we define $y_{1}=z_{1},z_{5}=z_{5}^{\prime}+z_{1}$. The
delta function constraint is
$\delta(z_{1}+z_{2}+z_{3}^{\prime}+z_{4}+z_{5}^{\prime}+z_{6}-x)$. The
exponent is now
$e^{2\pi
i[(n_{1}+n_{4})(z_{1}+z_{2})+(m_{1}+m_{4})(z_{4}+z_{6})+(-n_{3}-m_{2})z_{3}^{\prime}+(m_{1}+n_{4})z_{5}^{\prime}]}.$
(A.3)
The integral is
$I_{(2,2,2,1,1)}(m_{1}+m_{4},n_{1}+n_{4},0,m_{1}+n_{4},-n_{3}-m_{2})$, which
is a $I_{4}$ type integral.
2. (b)
$z_{5}<y_{1}<z_{4}+z_{5}$. Then we define
$z_{1}=y_{1}-z_{5},z_{4}=z_{4}^{\prime}+z_{1}$. The delta function constraint
is $\delta(z_{1}+z_{2}+z_{3}^{\prime}+z_{4}^{\prime}+z_{5}+z_{6}-x)$. The
exponent is
$e^{2\pi
i[(n_{1}+n_{4})(z_{2}+z_{5})+(m_{1}+m_{4})(z_{4}^{\prime}+z_{6})+(-n_{3}-m_{2})z_{3}^{\prime}+(m_{4}+n_{1})z_{1}]}.$
(A.4)
The integral is
$I_{(2,2,2,1,1)}(m_{1}+m_{4},n_{1}+n_{4},0,m_{4}+n_{1},-n_{3}-m_{2})$, which
is also a $I_{4}$ type integral.
3. 3.
$x<y_{3}+y_{4}^{\prime}-y_{4}<2x$, and $0<y_{i}+y_{4}^{\prime}-y_{4}<x,i=1,2$.
The delta functions constrain $y_{3}^{\prime}=y_{3}+y_{4}^{\prime}-y_{4}-x$,
and $y_{i}^{\prime}=y_{i}+y_{4}^{\prime}-y_{4},i=1,2$. Without loss of
generality we assume $y_{1}<y_{2}$. Define variables
$z_{3}=x-y_{3},x=z_{3}+z_{4}+z_{5}+z_{6}$. We have $y_{i}<z_{4}+z_{5},i=1,2$
and this further divides into three sub-cases.
1. (a)
$y_{1}<y_{2}<z_{5}$. Then we define
$y_{1}=z_{1},z_{2}=y_{2}-y_{1},z_{5}=z_{5}^{\prime}+z_{1}+z_{2}$. The delta
function constraint is
$\delta(z_{1}+z_{2}+z_{3}+z_{4}+z_{5}^{\prime}+z_{6}-x)$. The exponent is
$e^{2\pi
i[-n_{3}(z_{1}+z_{3})-m_{3}(z_{4}+z_{6})+(-n_{3}+m_{1}-n_{1})z_{2}+(-m_{3}+n_{4}-m_{4})z_{5}^{\prime}]}.$
(A.5)
The integral is
$I_{(2,2,2,1,1)}(-m_{3},-n_{3},0,-m_{3}+n_{4}-m_{4},-n_{3}+m_{1}-n_{1})$,
which is a $I_{2}$ type integral.
2. (b)
$y_{1}<z_{5}<y_{2}<z_{4}+z_{5}$. Then we define
$y_{1}=z_{1},z_{2}=y_{2}-z_{5},z_{5}=z_{5}^{\prime}+z_{1},z_{4}=z_{2}+z_{4}^{\prime}$.
The delta function constraint is
$\delta(z_{1}+z_{2}+z_{3}+z_{4}^{\prime}+z_{5}^{\prime}+z_{6}-x)$. The
exponent is
$e^{2\pi
i[-n_{3}(z_{1}+z_{3})-m_{3}(z_{4}^{\prime}+z_{6})+(-m_{3}+n_{2}-m_{2})z_{2}+(-n_{3}+m_{1}-n_{1})z_{5}^{\prime}]}.$
(A.6)
The integral is
$I_{(2,2,2,1,1)}(-m_{3},-n_{3},0,-m_{3}+n_{2}-m_{2},-n_{3}+m_{1}-n_{1})$,
which is also a $I_{2}$ type integral.
3. (c)
$z_{5}<y_{1}<y_{2}<z_{4}+z_{5}$. Then we define
$z_{1}=y_{1}-z_{5},z_{2}=y_{2}-y_{1},z_{4}=z_{1}+z_{2}+z_{4}^{\prime}$. The
delta function constraint is
$\delta(z_{1}+z_{2}+z_{3}+z_{4}^{\prime}+z_{5}+z_{6}-x)$. The exponent is
$e^{2\pi
i[-n_{3}(z_{3}+z_{5})-m_{3}(z_{4}^{\prime}+z_{6})+(-m_{3}+n_{2}-m_{2})z_{2}+(-n_{3}+m_{4}-n_{4})z_{1}]}.$
(A.7)
The integral is
$I_{(2,2,2,1,1)}(-m_{3},-n_{3},0,-m_{3}+n_{2}-m_{2},-n_{3}+m_{4}-n_{4})$,
which is also a $I_{2}$ type integral.
4. 4.
$0<y_{i}+y_{4}^{\prime}-y_{4}<x,i=1,2,3$. The delta functions constrain
$y_{i}^{\prime}=y_{i}+y_{4}^{\prime}-y_{4},i=1,2,3$. Without loss of
generality we assume $y_{1}<y_{2}<y_{3}$. Define variables
$x=z_{4}+z_{5}+z_{6}$. We have $y_{i}<z_{4}+z_{5},i=1,2,3$ and this further
divides into four sub-cases.
1. (a)
$y_{1}<y_{2}<y_{3}<z_{5}$. Then we define
$z_{1}=y_{1},z_{2}=y_{2}-y_{1},z_{3}=y_{3}-y_{2},z_{5}=z_{5}^{\prime}+z_{1}+z_{2}+z_{3}$.
The delta function constraint is
$\delta(z_{1}+z_{2}+z_{3}+z_{4}+z_{5}^{\prime}+z_{6}-x)$. The exponent is
$e^{2\pi
i[(m_{1}-n_{1})z_{2}+(n_{3}+n_{4}-m_{3}-m_{4})z_{3}+(-m_{4}+n_{4})z_{5}^{\prime}]}.$
(A.8)
The integral is
$I_{(5,1,1,1)}(0,m_{1}-n_{1},-m_{4}+n_{4},n_{3}+n_{4}-m_{3}-m_{4})$, which is
a $I_{1}$ type integral.
2. (b)
$y_{1}<y_{2}<z_{5}<y_{3}<z_{4}+z_{5}$. Then we define
$z_{1}=y_{1},z_{2}=y_{2}-y_{1},z_{3}=y_{3}-z_{5},z_{5}=z_{5}^{\prime}+z_{1}+z_{2},z_{4}=z_{4}^{\prime}+z_{3}$.
The delta function constraint is
$\delta(z_{1}+z_{2}+z_{3}+z_{4}^{\prime}+z_{5}^{\prime}+z_{6}-x)$. The
exponent is
$e^{2\pi
i[(m_{1}-n_{1})z_{2}+(-m_{3}+n_{3})z_{3}+(n_{3}+n_{4}-m_{3}-m_{4})z_{5}^{\prime}]}.$
(A.9)
The integral is
$I_{(5,1,1,1)}(0,m_{1}-n_{1},-m_{3}+n_{3},n_{3}+n_{4}-m_{3}-m_{4})$, which is
also a $I_{1}$ type integral.
3. (c)
$y_{1}<z_{5}<y_{2}<y_{3}<z_{4}+z_{5}$. Then we define
$z_{1}=y_{1},z_{2}=y_{2}-z_{5},z_{3}=y_{3}-y_{2},z_{5}=z_{5}^{\prime}+z_{1},z_{4}=z_{4}^{\prime}+z_{2}+z_{3}$.
The delta function constraint is
$\delta(z_{1}+z_{2}+z_{3}+z_{4}^{\prime}+z_{5}^{\prime}+z_{6}-x)$. The
exponent is
$e^{2\pi
i[(m_{1}-n_{1})z_{5}^{\prime}+(-m_{3}+n_{3})z_{3}+(n_{2}+n_{3}-m_{2}-m_{3})z_{2}]}.$
(A.10)
The integral is
$I_{(5,1,1,1)}(0,m_{1}-n_{1},-m_{3}+n_{3},n_{2}+n_{3}-m_{2}-m_{3})$, which is
also a $I_{1}$ type integral.
4. (d)
$z_{5}<y_{1}<y_{2}<y_{3}<z_{4}+z_{5}$. Then we define
$z_{1}=y_{1}-z_{5},z_{2}=y_{2}-y_{1},z_{3}=y_{3}-y_{2},z_{4}=z_{4}^{\prime}+z_{1}+z_{2}+z_{3}$.
The delta function constrain is
$\delta(z_{1}+z_{2}+z_{3}+z_{4}^{\prime}+z_{5}+z_{6}-x)$. The exponents is
$e^{2\pi
i[(m_{4}-n_{4})z_{1}+(-m_{3}+n_{3})z_{3}+(n_{2}+n_{3}-m_{2}-m_{3})z_{2}]}.$
(A.11)
The integral is
$I_{(5,1,1,1)}(0,m_{4}-n_{4},-m_{3}+n_{3},n_{2}+n_{3}-m_{2}-m_{3})$, which is
also a $I_{1}$ type integral.
Summarizing the total contributions, taking into account various permutations
of indices, we find
$S_{1}=g^{2}(2I_{1}+I_{2}+I_{3}+I_{4}).$ (A.12)
### A.2 $S_{2}$ contribution
We only need to consider the first expression for $S_{2}$ in
(LABEL:Sformulas), and the others can be simply obtained by permutations of
indices. First we consider the integrals of $y_{4},y_{4}^{\prime}$. There are
two cases
1. 1.
$y_{4}^{\prime}>y_{4}$. We define variables
$z_{4}=y_{4}-x,z_{4}^{\prime}=y_{4}^{\prime}-y_{4},z_{5}=1-y_{4}^{\prime}$.
There is a delta function constraint $\delta(z_{4}+z_{4}^{\prime}+z_{5}+x-1)$.
The exponents of $y_{4},y_{4}^{\prime}$ variables become
$e^{2\pi i[(0)z_{5}+n_{4}z_{4}^{\prime}+(n_{4}-m_{4})z_{4}+(n_{4}-m_{4})x]}.$
(A.13)
2. 2.
$y_{4}^{\prime}<y_{4}$. This is simply obtained from the above by switching
$n_{4}\rightarrow-m_{4},m_{4}\rightarrow-n_{4}$. The delta function constraint
$\delta(z_{4}+z_{4}^{\prime}+z_{5}+x-1)$ is the same. The exponent is now
$e^{2\pi i[(0)z_{5}-m_{4}z_{4}^{\prime}+(n_{4}-m_{4})z_{4}+(n_{4}-m_{4})x]}.$
(A.14)
Next we consider the integrals of $y_{i},y_{i}^{\prime},i=1,2,3$. We assume
$y_{3}^{\prime}>y_{3}$, with the other cases obtained by the transformation
(LABEL:transform1). We have $0<y_{i}+y_{3}^{\prime}-y_{3}<2x,i=1,2$. We
define $z_{3}^{\prime}=x-y_{3},z_{3}=y_{3}$ and discuss various cases.
1. 1.
$x<y_{i}+y_{3}^{\prime}-y_{3}<2x,i=1,2$. The delta functions constrain
$y_{i}^{\prime}=y_{i}+y_{3}^{\prime}-y_{3}-x,i=1,2$. Without loss of
generality we assume $y_{1}>y_{2}$. Define variables
$z_{2}=z_{2}^{\prime}+z_{1},x=z_{1}+z_{2}^{\prime}+z_{3}+z_{3}^{\prime}+z_{6}$.
Including the factor $e^{2\pi i(n_{4}-m_{4})x}$, the exponents of
$y_{i},y_{i}^{\prime},i=1,2,3$ variables become
$e^{2\pi
i[(n_{3}+n_{4}-m_{4})z_{1}+(-n_{2}-m_{1}-m_{4})z_{2}^{\prime}+(n_{3}+n_{4})z_{3}+(m_{3}+m_{4})z_{3}^{\prime}+m_{3}z_{6}]}.$
(A.15)
There are two contributions. Combining with equation (A.13) we have an
integral
$I(m_{3},n_{3},-m_{4},-n_{4},-m_{4}-n_{4},m_{3}-n_{4},n_{3}-m_{4},m_{3}+n_{3}+m_{2}+n_{1}),$
(A.16)
which is a $I_{7}$ type integral, while combining with equation (A.14) we have
an integral
$I(m_{3},n_{3},-m_{4},-n_{4},0,m_{3}-n_{4},n_{3}-m_{4},n_{3}-m_{3}+n_{1}-m_{1}),$
(A.17)
which is a $I_{6}$ type integral.
2. 2.
$x<y_{2}+y_{3}^{\prime}-y_{3}<2x,0<y_{1}+y_{3}^{\prime}-y_{3}<x$. The delta
functions constrain
$y_{2}^{\prime}=y_{2}+y_{3}^{\prime}-y_{3}-x,y_{1}^{\prime}=y_{1}+y_{3}^{\prime}-y_{3}$.
Define variables $z_{2}=x-y_{2},x=z_{2}+z_{3}+z_{3}^{\prime}+z_{6}$. We have
$y_{1}<z_{3}+z_{3}^{\prime}$, and discuss two sub-cases
1. (a)
$y_{1}<z_{3}$. Define variables $z_{1}=y_{1},z_{3}=z_{1}+z_{1}^{\prime}$.
Including the factor $e^{2\pi i(n_{4}-m_{4})x}$, the exponents of
$y_{i},y_{i}^{\prime},i=1,2,3$ variables become
$e^{2\pi
i[-n_{2}z_{1}+(m_{1}+n_{3}+n_{4})z_{1}^{\prime}+(-m_{4}-n_{2})z_{2}+(-m_{2}-m_{4}+n_{4})z_{3}^{\prime}+(-m_{2}-m_{4})z_{6}]}.$
(A.18)
There are two contributions. Combining with equation (A.13) we have an
integral
$I(m_{4},n_{4},-m_{2},-n_{2},m_{4}+n_{4},m_{4}-n_{2},n_{4}-m_{2},m_{4}+n_{4}+m_{1}+n_{3}),$
(A.19)
which is a $I_{8}$ type integral, while combining with equation (A.14) we have
an integral
$I(m_{4},n_{4},-m_{2},-n_{2},0,m_{4}-n_{2},n_{4}-m_{2},m_{4}+n_{4}+m_{1}+n_{3}),$
(A.20)
which is a $I_{6}$ type integral.
2. (b)
$z_{3}<y_{1}<z_{3}+z_{3}^{\prime}$. Define variables
$z_{1}=y_{1}-z_{3},z_{3}^{\prime}=z_{1}+z_{1}^{\prime}$. Including the factor
$e^{2\pi i(n_{4}-m_{4})x}$, the exponents of $y_{i},y_{i}^{\prime},i=1,2,3$
variables become
$e^{2\pi
i[(m_{3}+n_{1}+n_{4})z_{1}+(-m_{2}-m_{4}+n_{4})z_{1}^{\prime}+(-m_{4}-n_{2})z_{2}-n_{2}z_{3}+(-m_{2}-m_{4})z_{6}]}.$
(A.21)
There are two contributions. Combining with equation (A.13) we have an
integral
$I(m_{4},n_{4},-m_{2},-n_{2},m_{4}+n_{4},m_{4}-n_{2},n_{4}-m_{2},m_{4}+n_{4}+m_{3}+n_{1}),$
(A.22)
which is a $I_{8}$ type integral, while combining with equation (A.14) we have
an integral
$I(m_{4},n_{4},-m_{2},-n_{2},0,m_{4}-n_{2},n_{4}-m_{2},m_{4}+n_{4}+m_{3}+n_{1}),$
(A.23)
which is a $I_{6}$ type integral.
3. 3.
$0<y_{i}+y_{3}^{\prime}-y_{3}<x,i=1,2$. The delta functions constrain
$y_{i}^{\prime}=y_{i}+y_{3}^{\prime}-y_{3},i=1,2$. Without loss of generality
we assume $y_{1}<y_{2}$. Define variables $x=z_{3}+z_{3}^{\prime}+z_{6}$. We
have $y_{i}<z_{3}+z_{3}^{\prime},i=1,2$, and discuss three sub-cases
1. (a)
$y_{1}<y_{2}<z_{3}$. Define variables
$z_{1}=y_{1},z_{2}=y_{2}-y_{1},z_{3}=y_{2}+z_{2}^{\prime}$. Including the
factor $e^{2\pi i(n_{4}-m_{4})x}$, the exponents of
$y_{i},y_{i}^{\prime},i=1,2,3$ variables become
$e^{2\pi
i[(0)z_{1}+(m_{1}-n_{1})z_{2}+(n_{3}+n_{4}-m_{3}-m_{4})z_{2}^{\prime}+(n_{4}-m_{4})z_{3}^{\prime}-m_{4}z_{6}]}.$
(A.24)
There are two contributions. Combining with equation (A.13) we have an
integral
$I_{(2,2,1,1,1,1)}(n_{4}-m_{4},0,n_{4},-m_{4},m_{1}-n_{1},n_{4}-m_{4}+n_{3}-m_{3}),$
(A.25)
which is a $I_{5}$ type integral, while combining with equation (A.14) we have
an integral
$I_{(2,2,2,1,1)}(m_{4},n_{4},0,n_{4}+n_{3}-m_{3},m_{4}+m_{1}-n_{1}),$ (A.26)
which is a $I_{3}$ type integral.
2. (b)
$y_{1}<z_{3}<y_{2}<z_{3}+z_{3}^{\prime}$. Define variables
$z_{1}=y_{1},z_{2}=y_{2}-z_{3},z_{3}=z_{1}+z_{1}^{\prime},z_{3}^{\prime}=z_{2}+z_{2}^{\prime}$.
Including the factor $e^{2\pi i(n_{4}-m_{4})x}$, the exponents of
$y_{i},y_{i}^{\prime},i=1,2,3$ variables become
$e^{2\pi
i[(0)z_{1}+(m_{1}-n_{1})z_{1}^{\prime}+(n_{2}+n_{4}-m_{2}-m_{4})z_{2}+(n_{4}-m_{4})z_{2}^{\prime}-m_{4}z_{6}]}.$
(A.27)
There are two contributions. Combining with equation (A.13) we have an
integral
$I_{(2,2,1,1,1,1)}(n_{4}-m_{4},0,n_{4},-m_{4},m_{1}-n_{1},n_{4}-m_{4}+n_{2}-m_{2}),$
(A.28)
which is a $I_{5}$ type integral, while combining with equation (A.14) we have
an integral
$I_{(2,2,2,1,1)}(m_{4},n_{4},0,n_{4}+n_{2}-m_{2},m_{4}+m_{1}-n_{1}),$ (A.29)
which is a $I_{3}$ type integral.
3. (c)
$z_{3}<y_{1}<y_{2}<z_{3}+z_{3}^{\prime}$. Define variables
$z_{1}=y_{1}-z_{3},z_{2}=y_{2}-y_{1},z_{3}^{\prime}=z_{1}+z_{2}+z_{2}^{\prime}$.
Including the factor $e^{2\pi i(n_{4}-m_{4})x}$, the exponents of
$y_{i},y_{i}^{\prime},i=1,2,3$ variables become
$e^{2\pi
i[(m_{3}-n_{3})z_{1}+(n_{2}+n_{4}-m_{2}-m_{4})z_{2}+(n_{4}-m_{4})z_{2}^{\prime}+(0)z_{3}-m_{4}z_{6}]}.$
(A.30)
There are two contributions. Combining with equation (A.13) we have an
integral
$I_{(2,2,1,1,1,1)}(n_{4}-m_{4},0,n_{4},-m_{4},m_{3}-n_{3},n_{4}-m_{4}+n_{2}-m_{2}),$
(A.31)
which is a $I_{5}$ type integral, while combining with equation (A.14) we have
an integral
$I_{(2,2,2,1,1)}(m_{4},n_{4},0,n_{4}+n_{2}-m_{2},m_{4}+m_{3}-n_{3}),$ (A.32)
which is a $I_{3}$ type integral.
Summarizing the total contributions, taking into account various permutations
of indices, we find
$S_{2}=g^{2}(I_{2}+I_{3}+2I_{5}+2I_{6}+I_{7}+I_{8}).$ (A.33)
### A.3 $S_{3}$ contribution
We only need to consider the first expression for $S_{3}$ in
(LABEL:Sformulas), and the others can be simply obtained by permutations of
indices. First we consider the integrals of $y_{i},y_{i}^{\prime},i=3,4$. We
assume $y_{4}^{\prime}>y_{4}$, with the other cases obtained by the
transformation (LABEL:transform1). Define variables
$z_{4}=y_{4}-x,z_{4}^{\prime}=1-y_{4}^{\prime}$. We have
$x<y_{3}+y_{4}^{\prime}-y_{4}<2-x$. There are two cases, one of which
subdivides further, for a total of three cases.
1. 1.
$1<y_{3}+y_{4}^{\prime}-y_{4}<2-x$. The delta function constrains
$y_{3}^{\prime}=y_{3}+y_{4}^{\prime}-y_{4}-(1-x)$. We define variables
$y_{3}=1-z_{3},1-x=z_{3}+z_{4}+z_{4}^{\prime}+z_{5}$. The exponents of
$y_{i},y_{i}^{\prime},i=3,4$ variables become
$e^{2\pi
i[n_{4}z_{3}+(n_{4}-m_{3}-m_{4})z_{4}-m_{3}z_{4}^{\prime}+(n_{3}+n_{4}-m_{3})z_{5}+(n_{3}+n_{4}-m_{3}-m_{4})x]}.$
(A.34)
2. 2.
$x<y_{3}+y_{4}^{\prime}-y_{4}<1$. The delta function constrains
$y_{3}^{\prime}=y_{3}+y_{4}^{\prime}-y_{4}$. We have
$y_{3}-x<z_{4}+z_{4}^{\prime}$, which divides into two sub-cases
1. (a)
$y_{3}-x<z_{4}$. Define variables $z_{3}=y_{3}-x,z_{4}=z_{3}+z_{3}^{\prime}$.
The exponents of $y_{i},y_{i}^{\prime},i=3,4$ variables become
$e^{2\pi
i[(n_{3}+n_{4}-m_{3}-m_{4})z_{3}+(n_{4}-m_{4})z_{3}^{\prime}+(0)z_{4}^{\prime}+(n_{3}+n_{4})z_{5}+(n_{3}+n_{4}-m_{3}-m_{4})x]}.$
(A.35)
2. (b)
$z_{4}<y_{3}-x<z_{4}+z_{4}^{\prime}$. Define variables
$z_{3}=y_{3}-x-z_{4},z_{4}^{\prime}=z_{3}+z_{3}^{\prime}$. The exponents of
$y_{i},y_{i}^{\prime},i=3,4$ variables become
$e^{2\pi
i[(n_{3}-m_{3})z_{3}+(0)z_{3}^{\prime}+(n_{3}+n_{4}-m_{3}-m_{4})z_{4}^{\prime}+(n_{3}+n_{4})z_{5}+(n_{3}+n_{4}-m_{3}-m_{4})x]}.$
(A.36)
Next we consider the integrals of $y_{i},y_{i}^{\prime},i=1,2$. We mainly
consider $y_{2}^{\prime}>y_{2}$ and the results for $y_{2}^{\prime}<y_{2}$ can
be simply obtained by transforming
$(m_{i},n_{i})\rightarrow(-n_{i},-m_{i}),i=1,2$. We define
$z_{2}^{\prime}=x-y_{2}^{\prime},z_{2}=y_{2}$. We have
$0<y_{1}+y_{2}^{\prime}-y_{2}<2x$ and discuss some cases
1. 1.
$x<y_{1}+y_{2}^{\prime}-y_{2}<2x$. The delta function constrains
$y_{1}^{\prime}=y_{1}+y_{2}^{\prime}-y_{2}-x$. Define
$y_{1}=x-z_{1},x=z_{1}+z_{2}+z_{2}^{\prime}+z_{6}$. The exponents of
$y_{i},y_{i}^{\prime},i=1,2$ variables become
$e^{2\pi i[n_{2}z_{1}+(n_{2}-m_{1}-m_{2})z_{2}-m_{1}z_{2}^{\prime}+(n_{1}+n_{2}-m_{1})z_{6}]}.$ (A.37)
There are three contributions. Combining with equation (A.34) we have an
integral
$I(m_{2},n_{2},-m_{3},-n_{3},m_{1}+m_{2}+n_{2},n_{1}+n_{2}+m_{2},m_{1}+m_{2}-n_{3},n_{1}+n_{2}-m_{3}),$
(A.38)
which is a $I_{10}$ type integral, combining with equation (A.35) we have an
integral
$I(m_{1},n_{1},-m_{2},-n_{2},-m_{2}-n_{2},m_{1}-n_{2},n_{1}-m_{2},m_{1}+n_{1}+m_{3}+n_{4}),$
(A.39)
which is a $I_{7}$ type integral, and combining with equation (A.36) we have
an integral
$I(m_{1},n_{1},-m_{2},-n_{2},-m_{2}-n_{2},m_{1}-n_{2},n_{1}-m_{2},m_{1}+n_{1}+m_{4}+n_{3}),$
(A.40)
which is also a $I_{7}$ type integral.
2.
We transform equation (A.37) by
$(m_{i},n_{i})\rightarrow(-n_{i},-m_{i}),i=1,2$, and get a factor
$e^{2\pi i[-m_{2}z_{1}+(-m_{2}+n_{1}+n_{2})z_{2}+n_{1}z_{2}^{\prime}+(-m_{1}-m_{2}+n_{1})z_{6}]}.$ (A.41)
Again there are three contributions. Combining with equation (A.34) we have an
integral
$I(m_{4},n_{4},-m_{2},-n_{2},m_{3}+m_{4}+n_{4},n_{3}+n_{4}+m_{4},m_{3}+m_{4}-n_{2},n_{3}+n_{4}-m_{2}),$
(A.42)
which is a $I_{10}$ type integral, combining with equation (A.35) we have an
integral
$I(m_{2},n_{2},-m_{1},-n_{1},-m_{1}-n_{1},m_{2}-n_{1},n_{2}-m_{1},m_{2}+n_{2}+m_{3}+n_{4}),$
(A.43)
which is a $I_{7}$ type integral, and combining with equation (A.36) we have
an integral
$I(m_{2},n_{2},-m_{1},-n_{1},-m_{1}-n_{1},m_{2}-n_{1},n_{2}-m_{1},m_{2}+n_{2}+m_{4}+n_{3}),$
(A.44)
which is also a $I_{7}$ type integral.
3.
$0<y_{1}+y_{2}^{\prime}-y_{2}<x$. The delta function constraint gives
$y_{1}^{\prime}=y_{1}+y_{2}^{\prime}-y_{2}$. We have
$y_{1}<z_{2}^{\prime}+z_{2}$ and discuss the following sub-cases:
(a)
$0<y_{1}<z_{2}$. Define
$z_{1}=y_{1},z_{2}=z_{1}+z_{1}^{\prime},x=z_{1}+z_{1}^{\prime}+z_{2}^{\prime}+z_{6}$.
The exponents of $y_{i},y_{i}^{\prime},i=1,2$ variables become
$e^{2\pi i[(n_{1}+n_{2}-m_{1}-m_{2})z_{1}+(n_{2}-m_{2})z_{1}^{\prime}+(0)z_{2}^{\prime}+(n_{1}+n_{2})z_{6}]}.$ (A.45)
There are three contributions. Combining with equation (A.34) we have an
integral
$I(m_{3},n_{3},-m_{4},-n_{4},-m_{4}-n_{4},m_{3}-n_{4},n_{3}-m_{4},m_{3}+n_{3}+m_{1}+n_{2}),$
(A.46)
which is a $I_{7}$ type integral, combining with equation (A.35) we have an
integral
$I_{(2,2,1,1,1,1)}(n_{3}+n_{4}-m_{3}-m_{4},0,n_{3}+n_{4},-m_{3}-m_{4},n_{4}-m_{4},-n_{1}+m_{1}),$
(A.47)
which is a $I_{9}$ type integral, and combining with equation (A.36) we have
an integral
$I_{(2,2,1,1,1,1)}(n_{3}+n_{4}-m_{3}-m_{4},0,n_{3}+n_{4},-m_{3}-m_{4},n_{3}-m_{3},-n_{1}+m_{1}),$
(A.48)
which is also a $I_{9}$ type integral.
(b)
We transform equation (A.45) by
$(m_{i},n_{i})\rightarrow(-n_{i},-m_{i}),i=1,2$, and get a factor
$e^{2\pi i[(n_{1}+n_{2}-m_{1}-m_{2})z_{1}+(n_{2}-m_{2})z_{1}^{\prime}+(0)z_{2}^{\prime}+(-m_{1}-m_{2})z_{6}]}.$ (A.49)
Again there are three contributions. Combining with equation (A.34) we have an
integral
$I(m_{3},n_{3},-m_{4},-n_{4},m_{3}+n_{3},m_{3}-n_{4},n_{3}-m_{4},m_{3}+n_{3}+m_{1}+n_{2}),$
(A.50)
which is a $I_{8}$ type integral, combining with equation (A.35) we have an
integral
$I_{(2,2,2,1,1)}(m_{1}+m_{2},n_{1}+n_{2},0,m_{1}+n_{2},-m_{4}-n_{3}),$ (A.51)
which is a $I_{4}$ type integral, and combining with equation (A.36) we have
an integral
$I_{(2,2,2,1,1)}(m_{1}+m_{2},n_{1}+n_{2},0,m_{1}+n_{2},-m_{3}-n_{4}),$ (A.52)
which is also a $I_{4}$ type integral.
(c)
$z_{2}<y_{1}<z_{2}+z_{2}^{\prime}$. Define variables
$z_{1}=y_{1}-z_{2},z_{2}^{\prime}=z_{1}+z_{1}^{\prime},x=z_{1}+z_{1}^{\prime}+z_{2}+z_{6}$.
The exponents of $y_{i},y_{i}^{\prime},i=1,2$ variables become
$e^{2\pi i[(n_{1}+n_{2}-m_{1}-m_{2})z_{2}+(n_{1}-m_{1})z_{1}+(0)z_{1}^{\prime}+(n_{1}+n_{2})z_{6}]}.$ (A.53)
We notice this is just (A.45) with the index switch $1\leftrightarrow 2$, so
we can simply obtain the remaining results by applying the index switch to the
last two sub-cases. Namely, we obtain one more type of $I_{7}$ and $I_{8}$
integrals, and two more types of $I_{4}$ and $I_{9}$ integrals.
Summarizing the total contributions, taking into account various permutations
of indices, we find
$S_{3}=g^{2}(I_{4}+I_{7}+I_{8}+2I_{9}+2I_{10}).$ (A.54)
# Unsupervised Deep Learning for Handwritten Page Segmentation
Ahmad Droby, Berat Kurar Barakat, Borak Madi, Reem Alaasam and Jihad El-Sana
Ben-Gurion University of the Negev
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
Segmenting handwritten document images into regions with homogeneous patterns
is an important pre-processing step for many document image analysis tasks.
Hand-labeling data to train a deep learning model for layout analysis requires
significant human effort. In this paper, we present an unsupervised deep
learning method for page segmentation, which removes the need for annotated
images. A siamese neural network is trained to differentiate between patches
using measurable properties such as the number of foreground pixels and the
average component height and width. The network is also trained to treat
spatially nearby patches as similar. The network’s learned features are used for page
segmentation, where patches are classified as main and side text based on the
extracted features. We tested the method on a dataset of handwritten document
images with quite complex layouts. Our experiments show that the proposed
unsupervised method is as effective as typical supervised methods.
###### Index Terms:
layout analysis, segmentation, historical, documents, unsupervised, Siamese
network, deep-learning, page segmentation, hand-written
## I Introduction
Manual copying of manuscripts was the primary way scholars shared knowledge
before the popularisation of the printing press. Notes were frequently added
by scholars to the margins of pages, and often contribute valuable information
concerning the main text and the manuscript as a whole. In addition, the content
of a manuscript’s marginal notes helps historians analyze the authenticity
and the temporal and geographical origin of the manuscript.
The increasing number of available digital scans of historical manuscripts
calls for reliable automatic processing systems that would allow historians
and scholars to access and explore this knowledge more efficiently.
Page segmentation is an essential preprocessing step for many document image
processing tasks. Due to the irregular structure, varying writing styles, and
non-rectangular layout of historical handwritten documents [1, 2], segmenting
them into main and side text poses a challenging research problem.
Learning-free page layout analysis methods rely on hand-crafted
features, such as connected component statistics [3, 4], SIFT descriptors of
interest points [5], color and texture features [6, 7, 8], etc. Due to the highly
irregular structure and varying text styles of historical handwritten
documents, these methods do not generalize well. Therefore, researchers have
opted to learn these features instead. Page segmentation methods with a
learning component generally outperform traditional learning-free
methods. However, they require a large amount of manually annotated
training data in order to perform well. Obtaining such data is tedious and
time-consuming and, in some cases, requires domain experts.
We present an unsupervised deep learning method for page segmentation that
utilizes measurable features such as spatial proximity, number of foreground
pixels and average character height and width. The method first trains a
siamese neural network model, $M$, then uses $M$ for feature extraction. A
siamese network model contains two Convolutional Neural Networks (CNNs) with
shared weights. The CNNs work in parallel on two different inputs to extract
comparable feature vectors. We train a siamese network to discriminate between
patches with statistical differences in their connected components, e.g.,
different numbers of foreground pixels and different background areas.
Typically, in documents with side notes, nearby patches belong to the same
class (main or side text) with high probability. Based on this assumption, the
network is trained to treat two spatially nearby patches as similar. Following
training, we use the CNN branch of the Siamese network to extract a feature
vector for
every patch in a given page. The extracted feature vectors are then used to
segment the page into main and side text regions. Our experimental results
show that the accuracy of this method is comparable to, and in most cases
surpasses, the accuracy of supervised methods.
The rest of the paper is structured as follows. Section II reviews related
work. In Section III we present our method in detail. Experimental results are
reported in Section IV. Finally, conclusions are drawn, and future work is
discussed in Section V.
## II Related Work
Typically, page segmentation algorithms use features in order to segment pages
into regions with homogeneous patterns. Existing page segmentation algorithms
can be classified into two categories based on the type of features used.
### II-A Hand-Crafted Features
Traditional page segmentation algorithms rely upon hard-coded features,
specification of document structure, assumptions, and statistical rules. Garz
et al. [5] presented an approach to analyzing the layout of handwritten
documents using the Scale Invariant Feature Transform (SIFT). The method uses
Difference of Gaussians (DoG) to compute interest points, which guide the
detection of layout entities. Finally, a Support Vector Machine (SVM) is
applied to classify the points into entity classes. Bukhari et al. [3] first
extract simple, discriminative features at the level of connected components,
such as relative distance, foreground area, orientation, normalized height,
and neighborhood information. Then a Multi-Layer Perceptron (MLP) classifies
the connected components into side notes and main body text. Finally, a voting
scheme refines the final classification results. Asi et al. [9] proposed a
learning-free approach for page segmentation of Arabic manuscripts. This is a
two-step method: coarse segmentation and fine segmentation. Coarse
segmentation utilizes Gabor texture filter and fine segmentation optimizes the
results using energy minimization. Wong et al. [10] use the Run Length
Smearing Algorithm (RLSA) for page segmentation. RLSA links together
neighboring black areas that lie within a predefined distance of $c$ pixels.
Two distinct bit maps are generated by applying RLSA row-by-row and
column-by-column to a document. These maps are combined using the logical
’AND’ operator to produce segmented regions. These regions are then classified
into text and non-text according to several criteria, such as black-white
transitions and the total number of black pixels. Akiyama and Hagita [11]
divide an input document into smaller
regions using basic features such as projection profiles, crossing counts, and
circumscribed rectangles. These regions are then classified into headlines,
text lines, and graphics regions. Apostolos [12] identifies background space
surrounding the page regions and describes them using white tiles, which are
horizontal rectangular white spaces. The algorithm can segment and identify
regions with severe skew, but it does not classify them. Journet et al. [13]
extracts texture features and applies a multi-resolution analysis to avoid any
assumption about the document’s structure. Mehri et al. [14] segment a
document into homogeneous regions by clustering texture features. Mehri et al.
[15] compared different approaches such as Gabor filters, auto-correlation
function, and the Grey Level Co-occurrence Matrix (GLCM). They conclude that
Gabor features perform best for clustering and segmentation of document
images. Wei et al. [6] address segmentation as a pixel-level classification.
Each pixel is a vector of its coordinates and its color values. They use SVM,
MLP, and GMM to classify these vectors. Similarly, Chen et al. [7] formulate
layout analysis as a problem of pixel classification, where each pixel could
belong to either periphery, background, text, or decoration. This method
represents each pixel as a vector of its coordinates, color, and texture. In
addition, irrelevant features are removed by a feature selection algorithm.
Chen et al. [7] outperform [6] by including more features, such as texture
information, and by applying a feature-selection algorithm for better
classification results.
### II-B Learned Features
In the past decade, learning features using CNN has become the dominant
approach in the page-layout analysis domain.
Chen et al. [16] apply convolutional autoencoders for learning the features
from pixels. These features are used to train an SVM for page segmentation.
The classifier assigns to each pixel one of four classes: periphery,
background, text block, and decoration. Later they applied SVM to classify
superpixels instead of pixels [17] to reduce the classification time
complexity. In addition, segmentation results are further refined in [18]
using Conditional Random Field (CRF) that utilizes local and contextual
information. These works [16, 18] consider feature extraction and classifying
as two separate steps. On the other hand, [19] introduced an end-to-end method
by combining feature learning and classifier training into one step.
Recently, Kurar et al. [20] and Alaasam et al. [21] trained a Fully
Convolutional Network (FCN) and a siamese neural network, respectively, for
page segmentation. Both Kurar et al. [20] and Alaasam et al. [21] reported
their results on the same dataset that we use for evaluation.
## III Method
Figure 1: Method flow: (a) Input page, (b) the resulting feature map by
applying the trained CNN on the input page image using a sliding window, (c) a
visualization of the first three principal components of the feature map, (d)
applying a threshold on the first and second principal components of the
feature map to extract the main-text mask, and (e) the final segmentation of
the page, where foreground pixels inside the main-text mask are determined to
be part of the main-text; otherwise, they are part of the side text
Our method is composed of two main steps: feature extraction and segmentation.
Feature extraction is a crucial step in any layout analysis algorithm. We
delegate this step to a CNN trained as a branch of a siamese network, which is
then used to extract features from patches in a given page. The siamese
network is trained using patches prepared according to multiple strategies. We
apply principal component analysis (PCA) to the feature map and use the first
and second principal components to guide classifying the map into two
categories: main text and side notes.
### III-A Data preparation
Data preparation consists of generating patches of size $200\times 200$
pixels, cropped randomly from document images, and labeling them. Every pair
of patches is labeled either similar or different based on a set of principles
we discuss below. Patch size is estimated as four times the average character
height in the input document images. Without loss of generality and by analogy
with distance, we label similar pairs of patches by zero and different pairs
by one. We use four strategies to generate pairs of image patches with labels.
One of them is for similar pairs of patches and the remaining three are for
different pairs. The principles used to generate and label the patches are
dataset independent and generalize to other datasets with heterogeneous text
line-heights.
Next we discuss the four strategies to generate pairs of image patches with
their labels.
#### III-A1 Patches similar by spatial proximity
Patches are labeled by a simple principle: neighbouring patches are similar
[22]. Given a document image, we randomly sample a first patch, $p_{1}$, and
an arbitrary second patch, $p_{2}$, from the eight possible neighbouring
locations around $p_{1}$ (see illustration in Figure 2). In order to avoid
trivial solutions, we perturb the location of $p_{2}$ by a quarter of the
patch’s height. Naturally, some neighbouring patches are not similar (e.g.
patches located between main and side text). However, such patches are rare
enough relative to similar neighbouring patches to be considered as noise.
Figure 2: Generating patches from document image
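The sampling procedure above can be sketched as follows; the neighbour offsets, bounds handling, and function names are illustrative assumptions, not the authors' code:

```python
import random

PATCH = 200          # patch side length in pixels (paper's setting)
JITTER = PATCH // 4  # perturbation: a quarter of the patch height

# Offsets to the eight possible neighbouring locations around p1.
NEIGHBOURS = [(dy, dx) for dy in (-PATCH, 0, PATCH)
              for dx in (-PATCH, 0, PATCH) if (dy, dx) != (0, 0)]

def sample_similar_pair(page_h, page_w, rng=random):
    """Sample top-left corners of a patch and a jittered neighbour.

    Returns ((y1, x1), (y2, x2)), or None if the perturbed neighbour
    falls outside the page.
    """
    y1 = rng.randint(0, page_h - PATCH)
    x1 = rng.randint(0, page_w - PATCH)
    dy, dx = rng.choice(NEIGHBOURS)
    # Perturb the neighbour's location to avoid trivial solutions.
    y2 = y1 + dy + rng.randint(-JITTER, JITTER)
    x2 = x1 + dx + rng.randint(-JITTER, JITTER)
    if 0 <= y2 <= page_h - PATCH and 0 <= x2 <= page_w - PATCH:
        return (y1, x1), (y2, x2)
    return None
```

Pairs produced this way are labeled zero (similar); samples whose neighbour falls off the page are simply redrawn.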
#### III-A2 Patches different by average component sizes
Given two randomly cropped image patches, let $h_{i}$ and $w_{i}$ be the
average component height and width of patch $i$, respectively, where
$i\in\\{1,2\\}$. Our algorithm iteratively samples random pairs of patches
until the similarity score $s_{1}$ satisfies the following condition:
$s_{1}=\frac{\min(h_{1}\times w_{1},h_{2}\times w_{2})}{\max(h_{1}\times
w_{1},h_{2}\times w_{2})}<0.5$ (1)
Loosely speaking, this strategy generates pairs of patches where one is from
the main text area and the other is from the side text area (Figure 3). This
is based on the assumption that the side text is written in relatively small
and restricted margins on the page, resulting in text with a smaller font
size. Therefore, the average component height and width in the side text area
are smaller than those in the main text area.
Figure 3: Every column shows a pair of patches different by average component
height and width. Such pairs train the machine to discriminate between the
side text (above patches) and the main text areas (below patches).
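The rejection condition in Equation (1) can be sketched as follows, assuming the average component height and width of each patch have already been measured (e.g. from connected-component bounding boxes); the helper names are illustrative:

```python
def size_similarity(h1, w1, h2, w2):
    """Similarity score s1 from Equation (1): ratio of the smaller to the
    larger average-component area (height x width) of the two patches."""
    a1, a2 = h1 * w1, h2 * w2
    return min(a1, a2) / max(a1, a2)

def is_different_by_size(h1, w1, h2, w2, threshold=0.5):
    # A pair qualifies as a "different" training pair when s1 < 0.5.
    return size_similarity(h1, w1, h2, w2) < threshold
```

A main-text patch paired with a side-text patch typically yields a small ratio and passes the condition, while two patches from the same region do not.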
#### III-A3 Patches different by number of foreground pixels
Due to the font size difference between main and side text, the number of
foreground pixels in a side text area is smaller than in a main text area.
This strategy uses this assumption to differentiate between patches from main
text areas and patches from side text areas.
Given two randomly cropped image patches, let $a_{i}$ be the number of
foreground pixels in patch $i$, where $i\in\\{1,2\\}$. The algorithm continues
selecting two random patches until the similarity score $s_{2}$ satisfies the
following condition:
$s_{2}=\frac{\min(a_{1},a_{2})}{\max(a_{1},a_{2})}<0.5$ (2)
Loosely speaking, this strategy generates pairs of patches where one is from
the main text area and the other is from the side text area, as illustrated in
Figure 4.
Figure 4: Every column shows a pair of patches different by the number of
foreground pixels. Such pairs train the machine to discriminate between the
side text (above patches) and the main text areas (below patches).
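A minimal sketch of this strategy, assuming binarized patches with foreground pixels equal to 1; the rejection-sampling loop and names are illustrative, not the authors' implementation:

```python
import numpy as np

def foreground_similarity(patch1, patch2):
    """Similarity score s2 from Equation (2) on binarized patches."""
    a1 = int(patch1.sum())
    a2 = int(patch2.sum())
    if max(a1, a2) == 0:
        return 1.0  # two empty patches are trivially similar
    return min(a1, a2) / max(a1, a2)

def sample_different_pair(patches, rng, max_tries=1000):
    """Rejection-sample a pair of patch indices with s2 < 0.5."""
    for _ in range(max_tries):
        p1, p2 = rng.choice(len(patches), size=2, replace=False)
        if foreground_similarity(patches[p1], patches[p2]) < 0.5:
            return p1, p2
    return None
```

Accepted pairs are labeled one (different), mirroring the labeling convention of Section III-A.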
#### III-A4 Patches different by background area
Document images often exhibit a significant difference between background
areas and text areas. This strategy iteratively samples pairs of patches at
random until one of the patches is from a background area and the other from a
text area, as shown in Figure 5. We assume a patch belongs to the background
if more than half of it is background.
Figure 5: Every column shows a pair of different patches. Loosely speaking,
one patch in each pair contains mostly background area and the other mostly
foreground area. Such pairs train the machine to discriminate the text areas
from the background areas.
### III-B Siamese network
We train a Siamese network with two identical CNN branches. The input is a
pair of image patches of size $200\times 200$. The CNN branches extract
representations of the input patches. Subsequently, these representations are
concatenated and fed into fully connected layers in order to classify whether
the two image patches are similar or different (further details are given in
Section IV).
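The pair-classification head can be illustrated schematically as follows; the single fully connected layer and the random weights are placeholder assumptions standing in for the trained network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pair_similarity(f1, f2, W, b):
    """Classification head of a siamese network (illustrative weights):
    the two branch representations are concatenated and mapped to a single
    score in (0, 1) by a fully connected layer followed by a sigmoid,
    indicating whether the patches are similar or different."""
    z = np.concatenate([f1, f2]) @ W + b
    return sigmoid(z)
```

In the actual model the concatenated vector passes through several fully connected layers before the sigmoid binary classifier.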
### III-C Feature extraction
We use one of the branches of the trained siamese network for feature
extraction. This branch takes a patch of size $200\times 200$ and applies a
number of convolutional layers followed by two fully connected layers and
outputs a feature vector of size $512$, as shown in Figure 6.
In the feature extraction step, a sliding window is used to extract a feature
vector for each pixel in the input image using the CNN branch of the trained
siamese network. As can be seen in Figure 1, the feature extraction step
outputs a feature map of size $w\times h\times 512$, where $w$ and $h$ are the
width and height of the input image respectively.
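A sketch of the sliding-window feature extraction, with a toy one-dimensional feature (mean intensity) standing in for the trained CNN branch; the bilinear upsampling mirrors the resizing used later to match the image dimensions. Names and defaults are illustrative:

```python
import numpy as np

def sliding_window_feature_map(image, patch=200, step=50, extract=None):
    """Coarse feature map from a stride-`step` sliding window.

    `extract` stands in for the trained CNN branch; it defaults to a toy
    1-D feature (mean intensity) so the sketch stays self-contained.
    """
    if extract is None:
        extract = lambda p: np.array([p.mean()])
    h, w = image.shape[:2]
    rows = [[extract(image[y:y + patch, x:x + patch])
             for x in range(0, w - patch + 1, step)]
            for y in range(0, h - patch + 1, step)]
    return np.array(rows)  # shape: (n_rows, n_cols, feat_dim)

def bilinear_resize(fmap, out_h, out_w):
    """Upsample an (h, w, c) feature map with bilinear interpolation."""
    in_h, in_w = fmap.shape[:2]
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = fmap[y0][:, x0] * (1 - wx) + fmap[y0][:, x1] * wx
    bot = fmap[y1][:, x0] * (1 - wx) + fmap[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

With step 50 and patch 200, a $400\times 400$ page yields a $5\times 5$ coarse map, which is then upsampled back to page resolution.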
### III-D Segmentation
The obtained feature map is used to guide the segmentation of the page into
main and side text regions. The construction of the feature map aims at
representing the two segments differently to simplify the segmentation
procedure.
We investigated applying PCA to the feature map and studied (visualized and
analyzed) the resulting subspace. The first and second principal components
give a good indication of the main text location, since their values
separate main text areas from side text areas. Therefore, we threshold the
feature map based on the first and
second principal components to segment the main-text; i.e., a pixel $p$ in the
image is denoted main-text if the following condition holds:
$\displaystyle PC_{1}(p)<T_{1}\text{ and }PC_{2}(p)<T_{2}$
where $PC_{i}(p)$ is the $i$’th principal component of the feature vector at
pixel $p$ and $T_{1},T_{2}$ are predefined thresholds based on experimental
results.
The network learns to extract meaningful information about the patches, such
as text orientation, the number of background and foreground pixels, and
connected component statistics. As a result, it extracts similar features from
main text patches and similar features from side text patches, with the two
groups of features clearly different from each other. We searched for a scheme
to reduce the dimension of the feature space to two while maintaining the
distances between the data points. Since PCA does this well, we adopted it for
dimension reduction. We found that the first two components provide good
results, and the segmentation (into main and side text) is carried out using a
simple threshold.
## IV Experiments
In this section we present the dataset adopted for training and testing,
discuss the training procedure, and analyse the obtained results.
### IV-A Dataset
We chose to evaluate our approach using the dataset presented by Bukhari
et al. [3]. The dataset consists of 38 handwritten document images from 7
different historical Arabic books. It is split as follows: 28 documents are
used for training and the remaining 10 for testing. The main-text and
the side-text are labeled in each document in the dataset. To train the
Siamese network we use 24 documents from the training set and the remaining 6
documents are used for validation.
### IV-B Training
We built the Siamese network’s branches similarly to the AlexNet [23] model
and tuned the hyperparameters through experiments to fit our problem. The
final architecture consists of two CNN branches, each with five convolutional
layers, as shown in Figure 6. Dotted lines indicate identical weights, and the
numbers in parentheses represent the number of filters, filter size, and
stride. All convolutional and fully connected layers are followed by ReLU
activation functions, except fc5, which feeds into a sigmoid binary
classifier. The learning rate is $0.00001$ and the optimizing algorithm is
ADAM.
Figure 6: Siamese architecture for classifying pairs as similar or different.
Dotted lines stand for identical weights, conv stands for convolutional layer,
fc stands for fully connected layer and pool is a max pooling layer.
We trained this model from scratch using $60,000$ pairs with balanced classes
and reached a validation loss value of $0.30$ after 11 epochs (Figure 7). When
training is complete, we extract one branch of the Siamese network to be used
for feature extraction.
Figure 7: Loss over the epochs of model training.
### IV-C Results
We applied our method to segment the pages in the test set of the dataset into
main and side text regions. The non-binarized images were used in the feature
extraction step of the method. Extracting a feature vector for every possible
patch in the image is computationally expensive. Therefore, we used a sliding
window with a step size of 50 pixels, resulting in a feature map with
dimensions smaller than the original image. In order to match the original
image dimension, the feature map was resized with bi-linear interpolation.
In Table I we compare the performance of the proposed method using F-measure
against the layout analysis methods [3, 20, 21]. Note that those three
methods use labeled data to train an ML model, while the proposed method is
trained in an unsupervised manner. Our method outperformed both Bukhari et al.
[3] and Kurar et al. [20] on both the main text and the side text. While it
outperformed Alaasam et al. [21] on the side text, we obtained slightly lower
results on the main text. However, it is worth noting that Alaasam et al. [21]
performed post-processing on their results, whereas we do not.
Figure 8 shows example runs of our method. The second row shows a
visualization of the extracted feature map using the Siamese network’s CNN.
The feature map is visualized by mapping the first three principal components
to the RGB channels of the image. The visualization shows that the CNN is
able to extract meaningful features regarding the main and the side text.
TABLE I: Comparison of F-measure values. [3] and [21]’s results are with supervised learning and post-processing, [20]’s results are with supervised learning and without post-processing, whereas our results are with unsupervised learning and without post-processing.

Method | Main text | Side text
---|---|---
Bukhari et al. [3] | 95.02 | 94.68
Kurar et al. [20] | 95.00 | 80.00
Alaasam et al. [21] | 98.59 | 96.89
Proposed | 98.56 | 96.97
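The F-measure reported in the table can be computed per class from binary segmentation masks; this is the standard pixel-wise definition and an assumption about the exact evaluation protocol used:

```python
import numpy as np

def f_measure(pred_mask, gt_mask):
    """Pixel-wise F-measure (harmonic mean of precision and recall)
    for one class, given a predicted and a ground-truth binary mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    if pred.sum() == 0 or gt.sum() == 0 or tp == 0:
        return 0.0
    precision = tp / pred.sum()
    recall = tp / gt.sum()
    return 2 * precision * recall / (precision + recall)
```

The measure is evaluated separately for the main-text and side-text masks, giving the two columns of Table I.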
Figure 8: Example runs from the test set. The first row shows the input image
from the test set. The second row shows a visualisation of the feature map.
The third shows the mask of the main-text extracted from the feature map. The
fourth shows the segmentation result of the method. The last row shows the
groundtruth from the dataset.
## V Conclusion
This paper presents an unsupervised page segmentation method for hand-written
document images. We train a Siamese network to discriminate between patches
with different writing attributes. In addition, the network is trained to
treat two neighboring patches as similar. Our method uses one of the CNN
branches
of the trained Siamese network to extract a feature map from hand-written
document images. The main-text region is extracted based on the first and
second principal components of the feature map, which is then used to segment
the image into main and side text. We have shown that the proposed method is
on par with state-of-the-art supervised page layout analysis of historical
manuscripts in terms of performance. In future work, we plan to adapt this
method for text line segmentation. In addition, we aim to expand on the idea
of using established hand-crafted features to train deep learning networks to
tackle other document analysis tasks in an unsupervised setting.
## Acknowledgment
This research was partially supported by The Frankel Center for Computer
Science at Ben-Gurion University.
## References
* [1] A. Antonacopoulos and A. C. Downton, “Special issue on the analysis of historical documents,” 2007.
* [2] L. Likforman-Sulem, A. Zahour, and B. Taconet, “Text line segmentation of historical documents: a survey,” International Journal of Document Analysis and Recognition (IJDAR), vol. 9, no. 2-4, pp. 123–138, 2007.
* [3] S. S. Bukhari, T. M. Breuel, A. Asi, and J. El-Sana, “Layout analysis for arabic historical document images using machine learning,” in 2012 International Conference on Frontiers in Handwriting Recognition, pp. 639–644, IEEE, 2012.
* [4] S. S. Bukhari, M. I. A. Al Azawi, F. Shafait, and T. M. Breuel, “Document image segmentation using discriminative learning over connected components,” in Proceedings of the 9th IAPR International Workshop on Document Analysis Systems, pp. 183–190, 2010.
* [5] A. Garz, R. Sablatnig, and M. Diem, “Layout analysis for historical manuscripts using sift features,” in 2011 International Conference on Document Analysis and Recognition, pp. 508–512, IEEE, 2011.
* [6] H. Wei, M. Baechler, F. Slimane, and R. Ingold, “Evaluation of svm, mlp and gmm classifiers for layout analysis of historical documents,” in 2013 12th International Conference on Document Analysis and Recognition, pp. 1220–1224, IEEE, 2013.
* [7] K. Chen, H. Wei, J. Hennebert, R. Ingold, and M. Liwicki, “Page segmentation for historical handwritten document images using color and texture features,” in 2014 14th International Conference on Frontiers in Handwriting Recognition, pp. 488–493, IEEE, 2014.
* [8] H. Wei, K. Chen, R. Ingold, and M. Liwicki, “Hybrid feature selection for historical document layout analysis,” in 2014 14th International Conference on Frontiers in Handwriting Recognition, pp. 87–92, IEEE, 2014.
* [9] A. Asi, R. Cohen, K. Kedem, J. El-Sana, and I. Dinstein, “A coarse-to-fine approach for layout analysis of ancient manuscripts,” in 2014 14th International Conference on Frontiers in Handwriting Recognition, pp. 140–145, Sep. 2014.
* [10] K. Y. Wong, R. G. Casey, and F. M. Wahl, “Document analysis system,” IBM Journal of Research and Development, vol. 26, pp. 647–656, 1982.
* [11] T. Akiyama and N. Hagita, “Automated entry system for printed documents,” Pattern Recogn., vol. 23, p. 1141–1154, Oct. 1990.
  * [12] A. Antonacopoulos, “Page segmentation using the description of the background,” Comput. Vis. Image Underst., vol. 70, p. 350–369, June 1998.
* [13] N. Journet, J.-Y. Ramel, R. Mullot, and V. Eglin, “Document image characterization using a multiresolution analysis of the texture: Application to old documents,” Int. J. Doc. Anal. Recognit., vol. 11, p. 9–18, Sept. 2008.
* [14] M. Mehri, P. Héroux, P. Gomez-Krämer, A. Boucher, and R. Mullot, “A pixel labeling approach for historical digitized books,” in 2013 12th International Conference on Document Analysis and Recognition, pp. 817–821, Aug 2013.
* [15] M. Mehri, P. Gomez-Krämer, P. Héroux, A. Boucher, and R. Mullot, “Texture feature evaluation for segmentation of historical document images,” in Proceedings of the 2nd International Workshop on Historical Document Imaging and Processing, HIP ’13, (New York, NY, USA), p. 102–109, Association for Computing Machinery, 2013.
* [16] K. Chen, M. Seuret, M. Liwicki, J. Hennebert, and R. Ingold, “Page segmentation of historical document images with convolutional autoencoders,” in 2015 13th International Conference on Document Analysis and Recognition (ICDAR), pp. 1011–1015, Aug 2015.
  * [17] K. Chen, C. Liu, M. Seuret, M. Liwicki, J. Hennebert, and R. Ingold, “Page segmentation for historical document images based on superpixel classification with unsupervised feature learning,” in 2016 12th IAPR Workshop on Document Analysis Systems (DAS), pp. 299–304, April 2016.
* [18] K. Chen, M. Seuret, M. Liwicki, J. Hennebert, C. Liu, and R. Ingold, “Page segmentation for historical handwritten document images using conditional random fields,” in 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), pp. 90–95, Oct 2016.
* [19] K. Chen and M. Seuret, “Convolutional neural networks for page segmentation of historical document images,” CoRR, vol. abs/1704.01474, 2017.
* [20] B. K. Barakat and J. El-Sana, “Binarization free layout analysis for arabic historical documents using fully convolutional networks,” in 2018 IEEE 2nd International Workshop on Arabic and Derived Script Analysis and Recognition (ASAR), pp. 151–155, IEEE, 2018.
* [21] R. Alaasam, B. Kurar, and J. El-Sana, “Layout analysis on challenging historical arabic manuscripts using siamese network,” in 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 738–742, IEEE, 2019.
* [22] D. Danon, H. Averbuch-Elor, O. Fried, and D. Cohen-Or, “Unsupervised natural image patch learning,” Computational Visual Media, vol. 5, no. 3, pp. 229–237, 2019.
* [23] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, pp. 1097–1105, 2012.
On the Quantum K-Theory of the Quintic
Stavros GAROUFALIDIS a and Emanuel SCHEIDEGGER b
a) International Center for Mathematics, Department of Mathematics,
Southern University of Science and Technology, Shenzhen, China
<EMAIL_ADDRESS>http://people.mpim-bonn.mpg.de/stavros
b) Beijing International Center for Mathematical Research, Peking University,
Beijing, China. <EMAIL_ADDRESS>
Received October 21, 2021, in final form March 03, 2022; Published online
March 21, 2022
Quantum K-theory of a smooth projective variety at genus zero is a collection
of integers that can be assembled into a generating series $J(Q,q,t)$ that
satisfies a system of linear differential equations with respect to $t$ and
$q$-difference equations with respect to $Q$. With some mild assumptions on
the variety, it is known that the full theory can be reconstructed from its
small $J$-function $J(Q,q,0)$ which, in the case of Fano manifolds, is a
vector-valued $q$-hypergeometric function. On the other hand, for the quintic
3-fold we formulate an explicit conjecture for the small $J$-function and its
small linear $q$-difference equation expressed linearly in terms of the
Gopakumar–Vafa invariants. Unlike the case of quantum knot invariants, and the
case of Fano manifolds, the coefficients of the small linear $q$-difference
equations are not Laurent polynomials, but rather analytic functions in two
variables determined linearly by the Gopakumar–Vafa invariants of the quintic.
Our conjecture for the small $J$-function agrees with a proposal of
Jockers–Mayr.
quantum K-theory; quantum cohomology; quintic; Calabi–Yau manifolds;
Gromov–Witten invariants; Gopakumar–Vafa invariants; $q$-difference equations;
$q$-Frobenius method; $J$-function; reconstruction; gauged linear $\sigma$
models; 3d-3d correspondence; Chern–Simons theory; $q$-holonomic functions
14N35; 53D45; 39A13; 19E20
## 1 Introduction
### 1.1 Quantum K-theory, the small $\boldsymbol{J}$-function and its
$\boldsymbol{q}$-difference equation
The K-theoretic Gromov–Witten invariants of a compact Kähler manifold $X$
(often omitted from the notation) are a collection of integers (see [27, p. 6])
$\displaystyle\big{\langle}E_{1}L^{k_{1}},\dots,E_{n}L^{k_{n}}\big{\rangle}_{g,n,d}$
(1.1)
defined for vector bundles $E_{1},\dots,E_{n}$ on $X$ and nonnegative integers
$k_{1},\dots,k_{n}$ as the holomorphic Euler characteristic of
$\mathcal{O}^{{\rm vir}}\otimes\big{(}{\otimes}_{i=1}^{n}{\rm
ev}_{i}^{*}(E_{i})\otimes L_{i}^{k_{i}}\big{)}$ over the moduli space
$\overline{\mathcal{M}}^{X,d}_{g,n}$ of genus $g$ degree $d$ stable maps to
$X$ with $n$ marked points. Here, $L_{1},\dots,L_{n}$ denote the line
(orbi)bundles over $\overline{\mathcal{M}}^{X,d}_{g,n}$ formed by the
cotangent lines to the curves at the respective marked points. A definition of
these integers was given by Givental and Lee [22, 33]. These numerical
invariants can be assembled into a generating series which at genus zero can
be used to define an associative deformation of the product of the K-theory
ring $K(X)$ of $X$.
There are several ways to assemble the integers (1.1) into generating series,
and reconstruction theorems relate these generating series and often determine
one from the other. This is reviewed in Section 2.2. Our choice of generating
series will be the so-called small $J$-function
$\displaystyle
J_{X}(Q,q,0)=(1-q)\Phi_{0}+\sum_{d}\sum_{\alpha}\left\langle\frac{\Phi_{\alpha}}{1-qL}\right\rangle_{0,1,d}\Phi^{\alpha}Q^{d}\in
K(X)\otimes\mathcal{K}_{-}(q)[[Q]]$ (1.2)
(with the notation of Section 2.1), which determines the genus 0 quantum
K-theory of $X$, i.e., the integers (1.1) [28, Theorem 1.1, Lemma 3.3] with
$g=0$, as well as the genus 0 permutation-equivariant quantum K-theory of $X$
[24] (when $K(X)$ is generated by line bundles).
The small $J$-function is a vector-valued function (taking values in the
rational vector space $K(X)$) that obeys a system of linear $q$-difference
equations [26, 27], giving rise to matrices $A_{i}(Q,q,0)\in
K(X)\otimes\mathcal{K}_{+}(q)[[Q]]$, for $i=1,\dots,r$ which can also be used
to reconstruct the genus $0$ quantum K-theory of $X$ [28, Lemma 3.3].
Concretely, for $X=\mathbb{C}\mathbb{P}^{N}$, the small $J$-function is given
by a $q$-hypergeometric formula [26, 27, 33]
$\displaystyle
J_{\mathbb{C}\mathbb{P}^{N}}(Q,q,0)=(1-q)\sum_{d=0}^{\infty}\frac{Q^{d}}{((1-x)q;q)_{d}^{N+1}}\in
K\big{(}\mathbb{C}\mathbb{P}^{N}\big{)}\otimes\mathcal{K}_{-}(q)[[Q]],$ (1.3)
where $(z;q)_{d}=\prod_{j=0}^{d-1}\big{(}1-q^{j}z\big{)}$ for $d\geq 0$, and
$\displaystyle
K\big{(}\mathbb{C}\mathbb{P}^{N}\big{)}=\mathbb{Q}[x]/\big{(}x^{N+1}\big{)}$
is the K-theory ring with basis $\big{\\{}1,x,\dots,x^{N}\big{\\}}$ where
$1-x$ is the class of $\mathcal{O}(1)$.111The K-theory ring is also written as
[28, Section 4.1]
$K\big{(}\mathbb{C}\mathbb{P}^{N}\big{)}=\mathbb{Q}\big{[}P,P^{-1}\big{]}/\big{(}(1-P)^{N+1}\big{)}$
as the Grothendieck group of locally free sheaves on projective space, where
$P=\mathcal{O}_{\mathbb{P}^{N}}(-1)$ in which case the small $J$-function
takes the form
$J_{\mathbb{C}\mathbb{P}^{N}}(Q,q,0)=(1-q)\sum_{d=0}^{\infty}\frac{Q^{d}}{(Pq;q)_{d}^{N+1}}$.
The corresponding matrix $A(Q,q,0)$ of the vector-valued $q$-holonomic
function $J(Q,q,0)$ is given by [28, Section 4.1]
$\displaystyle A(Q,q,0)=I-\begin{pmatrix}0&0&\ldots&0&Q\\\ 1&0&\ldots&0&0\\\
0&1&\ldots&0&0\\\ \vdots&\vdots&\ddots&\vdots&\vdots\\\
0&0&\ldots&1&0\end{pmatrix}$ (1.4)
in the above basis of $K(\mathbb{C}\mathbb{P}^{N})$. It is remarkable that
either (1.3) or (1.4) gives the complete determination of all the integers
(1.1) for $\mathbb{C}\mathbb{P}^{N}$. Observe that the small $J$-function of
$\mathbb{C}\mathbb{P}^{N}$ is given by a vector-valued $q$-hypergeometric
formula, which is always $q$-holonomic (as follows from Zeilberger et al. [35,
41, 43]), and as a result the entries of $A(Q,q,0)$ (as well as the
coefficients of the small quantum product) are polynomials in $Q$ and $q$. It
turns out that the small $J$-function of Grassmannians, flag varieties,
homogeneous spaces and more generally Fano manifolds is $q$-hypergeometric as
shown by many researchers; see, e.g., [5, 37, 38] and references therein. On
the other hand, new phenomena are expected for the case of general Calabi–Yau
manifolds, and particularly for the quintic. Our motivation to study the case
of the quintic was two-fold, coming from numerical observations concerning
coincidences of quantum K-theory counts and quantum cohomology counts (given
below), as well as a comparison of the linear $q$-difference equations in
quantum K-theory with those in Chern–Simons theory (such as the $q$-difference
equation of the colored Jones polynomial of a knot [19]).
Our results give a relation between quantum K-theory and quantum cohomology of
the quintic in two different limits, namely $q=1$ (see Corollary 1.3) and
$q=0$ (see Corollary 1.5), and propose a linear expression of the small
$J$-function of the quintic in terms of its Gopakumar–Vafa invariants (see
Conjecture 1.1).
### 1.2 The small $\boldsymbol{J}$-function for the quintic
Quantum K-theory was developed by analogy with quantum cohomology (or
Gromov–Witten theory), a theory that deforms the cohomology ring $H(X)$ of $X$
and whose corresponding numerical invariants are rational numbers (known as
Gromov–Witten invariants) or integers in the case of a Calabi–Yau threefold
(known as the Gopakumar–Vafa invariants). A standard reference is [7] and the
book [9]. For the quintic 3-fold $X$, the first six values of the GW and the
GV invariants are given by
$d$ | 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---
$\operatorname{GW}_{d}$ | $\frac{2875}{1}$ | $\frac{4876875}{8}$ | $\frac{8564575000}{27}$ | $\frac{15517926796875}{64}$ | $\frac{229305888887648}{1}$ | $\frac{248249742157695375}{1}$
$\operatorname{GV}_{d}$ | 2875 | 609250 | 317206375 | 242467530000 | 229305888887625 | 248249742118022000
with $2875$ being the famous number of rational curves in the quintic. The two
sets of invariants are related by the following multi-covering formula
$\displaystyle\operatorname{GV}_{n}=\sum_{d|n}\frac{\mu(d)}{d^{3}}\operatorname{GW}_{n/d},\qquad\operatorname{GW}_{n}=\sum_{d|n}\frac{1}{d^{3}}\operatorname{GV}_{n/d}.$
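The multi-covering formula can be checked directly against the table above. The sketch below uses exact rational arithmetic and a trial-division Möbius function; the helper names are ours, not the paper's.

```python
from fractions import Fraction

GV = {1: 2875, 2: 609250, 3: 317206375, 4: 242467530000,
      5: 229305888887625, 6: 248249742118022000}

def mobius(n):
    # Moebius function mu(n) by trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def GW(n):
    # GW_n = sum_{d|n} GV_{n/d} / d^3
    return sum(Fraction(GV[n // d], d ** 3)
               for d in range(1, n + 1) if n % d == 0)

assert GW(2) == Fraction(4876875, 8)
assert GW(3) == Fraction(8564575000, 27)
assert GW(6) == 248249742157695375     # integral, as in the table
# the Moebius-inverted sum recovers the integral GV invariants
for n in GV:
    assert sum(Fraction(mobius(d), d ** 3) * GW(n // d)
               for d in range(1, n + 1) if n % d == 0) == GV[n]
```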
In [38, Section 6.5], Tonita gave an algorithm to compute the quantum K-theory
of the quintic and using it, he found that
$\displaystyle\langle 1\rangle_{0,1,1}=2875,$
where 2875 coincides with the famous number of lines in the quintic. Going
further, (see Jockers–Mayr [29, 30] and equation (1.11) below) one finds that
$\displaystyle\langle 1\rangle_{0,1,2}=620750=609250+4\cdot 2875,$ (1.5a)
$\displaystyle\langle 1\rangle_{0,1,3}=317232250=317206375+9\cdot 2875,$
(1.5b) $\displaystyle\langle 1\rangle_{0,1,4}=242470013000=242467530000+4\cdot
609250+16\cdot 2875,$ (1.5c) $\displaystyle\langle
1\rangle_{0,1,5}=229305888959500=229305888887625+25\cdot 2875,$ (1.5d)
$\displaystyle\langle 1\rangle_{0,1,6}=248249743392434250$
$\displaystyle\hphantom{\langle 1\rangle_{0,1,6}}{}=248249742118022000+4\cdot
317206375+9\cdot 609250+36\cdot 2875$ (1.5e)
are nearly equal to $\operatorname{GV}$ invariants of the quintic, and more
precisely matched with linear combinations of $\operatorname{GV}$ invariants.
Surely this is not a coincidence and suggests that the $\operatorname{GV}$
invariants can fully reconstruct the quantum K-theory invariants. In [27] this
“coincidence” is proven abstractly: Givental and Tonita give a complete
solution in genus-0 to the problem of expressing K-theoretic GW-invariants of
a compact complex algebraic manifold in terms of its cohomological GW-
invariants. One motivation for our work is to give an explicit formula (see
Conjecture 1.1 below) of this abstract statement. To phrase our conjecture,
recall that the rational K-theory of the quintic 3-fold $X$ is given by
$\displaystyle K(X)=\mathbb{Q}[x]/\big{(}x^{4}\big{)}$ (1.6)
is the K-theory ring with basis $\\{\Phi_{\alpha}\\}$ for $\alpha=0,1,2,3$
where $\Phi_{\alpha}=x^{\alpha}$. Here $1-x$ is the class of
$\mathcal{O}(1)|_{X}$. We define
$\displaystyle 5a(d,r,q)=\frac{dr}{1-q}+\frac{dq}{(1-q)^{2}},$ (1.7a)
$\displaystyle
5b(d,r,q)=\frac{rd+r^{2}-d}{1-q}+\frac{d}{(1-q)^{2}}-\frac{q+q^{2}}{(1-q)^{3}}.$
(1.7b)
###### Conjecture 1.1.
The small $J$-function of the quintic is expressed linearly in terms of the
GV-invariants by
$\displaystyle\frac{1}{1-q}J(Q,q,0)=1+x^{2}\sum_{d,r\geq
1}a(d,r,q^{r})\operatorname{GV}_{d}Q^{dr}+x^{3}\sum_{d,r\geq
1}b(d,r,q^{r})\operatorname{GV}_{d}Q^{dr}.$ (1.8)
It is interesting to observe that the right hand side of (1.8) is a
meromorphic function of $q$ with poles of order at most 3 at the roots of unity.
In Section 3 we verify the above conjecture modulo $O\big{(}Q^{7}\big{)}$ by
an explicit calculation. Without doubt, Conjecture 1.1 concerns not only the
quintic 3-fold, but also Calabi–Yau 3-folds with $h^{1,1}=1$ (there are plenty of
those, see, e.g., [2]) and beyond. In contrast to the case of
$\mathbb{C}\mathbb{P}^{N}$ (see (1.3)) or the case of Fano manifolds, the
small $J$-function of the quintic is not hypergeometric. The above conjecture
was formulated independently by Jockers–Mayr [29, p. 10] and a comparison
between their formulation and ours is given in Section 3.3. Our conjecture
also agrees with the results of Jockers–Mayr presented in [30, Table 6.1]. Let
us introduce the following multi-covering notation
$\displaystyle\operatorname{GV}^{(\gamma)}_{n}=\sum_{d|n}d^{\gamma}\operatorname{GV}_{d}.$
Then, we have the following.
###### Corollary 1.2.
We have
$\displaystyle
5J(Q,0,0)=5+x^{2}\sum_{n=1}^{\infty}n\operatorname{GV}^{(0)}_{n}Q^{n}+x^{3}\sum_{n=1}^{\infty}\big{(}n\operatorname{GV}^{(0)}_{n}+n^{2}\operatorname{GV}^{(-2)}_{n}\big{)}Q^{n}$
$\displaystyle\hphantom{5J(Q,0,0)}{}=5+\big{(}2875Q+1224250Q^{2}+951627750Q^{3}+969872568500Q^{4}+\cdots\big{)}x^{2}$
$\displaystyle\hphantom{5J(Q,0,0)=}{}+\big{(}5750Q+1845000Q^{2}+1268860000Q^{3}+1212342581500Q^{4}+\cdots\big{)}x^{3}.\\!\\!\\!$
(1.9)
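Assuming Conjecture 1.1, the coefficients in (1.9) follow by evaluating (1.7) at $q=0$, where $5a(d,r,0)=dr$ and $5b(d,r,0)=(rd+r^{2}-d)+d=rd+r^{2}$. A minimal numerical check (the function names below are ours):

```python
GV = {1: 2875, 2: 609250, 3: 317206375, 4: 242467530000}

def five_a0(d, r):
    return d * r              # 5 a(d, r, 0) from (1.7a)

def five_b0(d, r):
    return r * d + r * r      # 5 b(d, r, 0) = (rd + r^2 - d) + d from (1.7b)

def coeff(fun, n):
    # coefficient of Q^n in 5 * sum_{d,r >= 1} fun(d, r) GV_d Q^{dr}
    return sum(fun(d, n // d) * GV[d] for d in GV if n % d == 0)

assert [coeff(five_a0, n) for n in (1, 2, 3, 4)] == \
    [2875, 1224250, 951627750, 969872568500]
assert [coeff(five_b0, n) for n in (1, 2, 3, 4)] == \
    [5750, 1845000, 1268860000, 1212342581500]
```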
The above corollary reproduces the invariants of equations (1.5). To extract
them, let $[J(Q,q,0)]_{x^{\alpha}}$ denote the coefficient of $x^{\alpha}$ in
$J(Q,q,0)$. The next corollary is proven in Section 3.2.
###### Corollary 1.3.
We have
$\displaystyle\sum_{d\geq
1}\left\langle\frac{\Phi_{\alpha}}{1-qL}\right\rangle_{0,1,d}Q^{d}=\begin{cases}-5[J(Q,q,0)]_{x^{2}}+5[J(Q,q,0)]_{x^{3}}&\text{if}\quad\alpha=0,\\\
\hphantom{-}5[J(Q,q,0)]_{x^{2}}&\text{if}\quad\alpha=1,\\\
\hphantom{-}0&\text{if}\quad\alpha=2,3.\end{cases}$ (1.10)
Setting $q=0$, it follows that
$\displaystyle\sum_{d\geq 1}\langle
1\rangle_{0,1,d}Q^{d}=\sum_{n=1}^{\infty}\\!n^{2}\operatorname{GV}^{(-2)}_{n}Q^{n}=2875Q+620750Q^{2}\\!+317232250Q^{3}\\!+242470013000Q^{4}$
$\displaystyle\hphantom{\sum_{d\geq 1}\langle
1\rangle_{0,1,d}Q^{d}=}{}+229305888959500Q^{5}+248249743392434250Q^{6}+\cdots$
(1.11)
matching with equations (1.5) $($being the generating series of the
K-theoretic versions of the GV-invariants, given on the second page and in
[30, Table 6.1]$)$, as well as
$\displaystyle\sum_{d\geq
1}\langle\Phi_{1}\rangle_{0,1,d}Q^{d}=\sum_{n=1}^{\infty}\\!n\operatorname{GV}^{(0)}_{n}Q^{n}=2875Q+1224250Q^{2}\\!+951627750Q^{3}\\!+969872568500Q^{4}$
$\displaystyle\hphantom{\sum_{d\geq
1}\langle\Phi_{1}\rangle_{0,1,d}Q^{d}=}{}+1146529444452500Q^{5}+1489498454615043000Q^{6}+\cdots.$
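The divisor-sum form of these series makes the listed coefficients easy to recompute from the GV table of Section 1.2; in the sketch below (helper names are ours), the coefficient of $Q^n$ in (1.11) is $\sum_{d|n}(n/d)^{2}\operatorname{GV}_{d}$ and that of the $\langle\Phi_{1}\rangle$ series is $n\sum_{d|n}\operatorname{GV}_{d}$.

```python
GV = {1: 2875, 2: 609250, 3: 317206375, 4: 242467530000,
      5: 229305888887625, 6: 248249742118022000}

def k_theoretic(n):
    # coefficient of Q^n in (1.11): n^2 GV^{(-2)}_n = sum_{d|n} (n/d)^2 GV_d
    return sum((n // d) ** 2 * GV[d] for d in GV if n % d == 0)

def phi1(n):
    # coefficient of Q^n in the <Phi_1> series: n GV^{(0)}_n
    return n * sum(GV[d] for d in GV if n % d == 0)

assert [k_theoretic(n) for n in range(1, 7)] == [
    2875, 620750, 317232250, 242470013000,
    229305888959500, 248249743392434250]
assert phi1(2) == 1224250 and phi1(4) == 969872568500
```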
### 1.3 The linear $\boldsymbol{q}$-difference equation for the quintic
In this section we give an explicit formula for the small linear
$q$-difference equation for the quintic, assuming Conjecture 1.1. A key
feature of this formula is that the coefficients of this equation are analytic
(as opposed to polynomial) functions of $Q$ and $q$. The small $J$-function
$J(Q,q,0)$, viewed as a vector in the vector space $K(X)$, forms the first
column of the matrix $T(Q,q,0)$ of fundamental solutions of the small linear
$q$-difference equation in the basis $\big{\\{}1,x,x^{2},x^{3}\big{\\}}$ of
$K(X)$. The formula (1.8) for the small $J$-function and the fact that it is
a cyclic vector of the linear $q$-difference equation allow us to reconstruct
the matrix $A(Q,q,0)$. See also [28, Theorem 1.1, Lemma 3.3]. To do so, let us
introduce some useful notation. If $f=f(d,r,q)\in\mathbb{Q}(q)$ we denote
$\displaystyle[f]=\sum_{d,r\geq 1}f(d,r,q^{r})\operatorname{GV}_{d}Q^{dr}.$
With this notation, equation (1.8) becomes
$\frac{1}{1-q}J(Q,q,0)=1+[a]x^{2}+[b]x^{3}=\begin{pmatrix}1\\\ 0\\\ [a]\\\
[b]\end{pmatrix}$
in the basis $\big{\\{}1,x,x^{2},x^{3}\big{\\}}$ of $K(X)$, where $a$, $b$ are
given by (1.7). Further, we denote $(Ef)(d,r,q)\allowbreak=q^{d}f(d,r,q)$, and
define
$\displaystyle 5c=\pi_{+}((1-E)a),\qquad 5e=\pi_{+}(Ea+(1-E)b),$ (1.12)
with projections $\pi_{\pm}\colon\mathcal{K}(q)\to\mathcal{K}_{\pm}(q)$ given
in Section 2.1. Explicitly, we have
$\displaystyle 5c(d,r,q)=\frac{d^{2}}{1-q},$ $\displaystyle
5e(d,r,q)=\frac{dr}{1-q}-\frac{d(dq+q-d)}{(1-q)^{2}}.$
Recall the $T$ matrix from [28, Proposition 2.3] which is a fundamental
solution of the linear $q$-difference equation, and whose first column is $J$.
The proof of the next theorem and its corollary is given in Section 4.1.
###### Theorem 1.4.
Conjecture 1.1 implies that the small $T$-matrix of the quintic is given by
$\displaystyle T(Q,q,0)=\begin{pmatrix}1&0&0&0\\\ 0&1&0&0\\\ [a]&[c]&1&0\\\
[b]&[e]&0&1\end{pmatrix}$ (1.13)
and the small $A$-matrix of the linear $q$-difference equation is given by
$\displaystyle A=I-D^{\rm T},\qquad D(Q,q,0)=\begin{pmatrix}0&1&[a-c-
Ea]&[b-e+Ea-Eb]\\\ 0&0&1+[c-Ec]&[e+Ec-Ee]\\\ 0&0&0&1\\\ 0&0&0&0\end{pmatrix}.$
(1.14)
Note that the entries of $5D(Q,q,0)$ are in $\mathbb{Z}[[Q]][q]$ and given
explicitly in equations (4.2) below. Let us denote by
$c_{ttt}(Q,q,t)=5D_{2,3}(Q,q,t)$, where $D_{i,j}$ denotes the $(i,j)$-entry of
the matrix $D$. In other words, we have
$\displaystyle\begin{split}&c_{ttt}(Q,q)=5+\sum_{d,r\geq
1}d^{2}\frac{1-q^{dr}}{1-q^{r}}\operatorname{GV}_{d}Q^{dr}\\\
&\hphantom{c_{ttt}(Q,q)}{}=\sum_{d=1}^{\infty}d^{2}\operatorname{GV}_{d}\big{(}\operatorname{Li}_{0}\big{(}Q^{d}\big{)}+\operatorname{Li}_{0}\big{(}qQ^{d}\big{)}+\dots+\operatorname{Li}_{0}\big{(}q^{d-1}Q^{d}\big{)}\big{)},\end{split}$
where $\operatorname{Li}_{s}$ denotes the $s$-polylogarithm function
$\operatorname{Li}_{s}(z)=\sum_{d\geq 1}z^{d}/d^{s}$. Recall the genus 0
generating series (minus its quadratic part) of the quintic [7, 9]
$\displaystyle\mathcal{F}(Q)=\sum_{n=1}^{\infty}\operatorname{GW}_{n}Q^{n}=\frac{5}{6}(\log
Q)^{3}+\sum_{d=1}^{\infty}\operatorname{GV}_{d}\operatorname{Li}_{3}\big{(}Q^{d}\big{)}$
and its third derivative
$\displaystyle
c_{ttt}(Q)=(Q\partial_{Q})^{3}\mathcal{F}(Q)=5+\sum_{d=1}^{\infty}d^{3}\operatorname{GV}_{d}\operatorname{Li}_{0}\big{(}Q^{d}\big{)},$
(1.15)
where $\partial_{Q}=\partial/\partial Q$.
The next corollary gives a second relation between the $q=1$ limit of quantum
K-theory and quantum cohomology.
###### Corollary 1.5.
The function $c_{ttt}(Q,q)\in\mathbb{Z}[[Q]][q]$ is a $q$-deformation of the
Yukawa coupling $($i.e., $3$-point function$)$ $c_{ttt}(Q)$ in (1.15). Indeed,
we have
$\displaystyle c_{ttt}(Q,1)=c_{ttt}(Q),\qquad 5D_{2,3}(Q,q,0)=c_{ttt}(Q,q).$
Thus, the $q$-difference equation of the quantum K-theory of the quintic is a
$q$-deformation of the well-known Picard–Fuchs equation of the quintic.
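The limit $c_{ttt}(Q,1)=c_{ttt}(Q)$ can be verified coefficient by coefficient: for each factorization $dr=n$, the factor $(1-q^{dr})/(1-q^{r})=1+q^{r}+\dots+q^{r(d-1)}$ has $d$ terms, which sum to $d$ at $q=1$, recovering the weight $d^{3}$ of (1.15). A small check (the dict representation of polynomials in $q$ is ours):

```python
GV = {1: 2875, 2: 609250, 3: 317206375, 4: 242467530000,
      5: 229305888887625, 6: 248249742118022000}

def cttt_q_coeff(n):
    # coefficient of Q^n in c_ttt(Q, q), as {power of q: integer};
    # each dr = n contributes d^2 GV_d (1 + q^r + ... + q^{r(d-1)})
    poly = {}
    for d in GV:
        if n % d == 0:
            r = n // d
            for j in range(d):
                poly[r * j] = poly.get(r * j, 0) + d ** 2 * GV[d]
    return poly

for n in range(1, 7):
    at_q_one = sum(cttt_q_coeff(n).values())
    classical = sum(d ** 3 * GV[d] for d in GV if n % d == 0)
    assert at_q_one == classical    # coefficient of Q^n in c_ttt(Q)
```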
Let us abbreviate the four nontrivial entries of $D(Q,q,0)$ by
$\displaystyle\alpha=D_{1,3},\qquad\beta=D_{1,4},\qquad\gamma=D_{2,3},\qquad\delta=D_{2,4}.$
###### Lemma 1.6 ([30, equations (8.22) and (8.23)]).
The linear $q$-difference equation
$\displaystyle\Delta\begin{pmatrix}y_{0}\\\ y_{1}\\\ y_{2}\\\
y_{3}\end{pmatrix}=\begin{pmatrix}0&1&\alpha&\beta\\\ 0&0&\gamma&\delta\\\
0&0&0&1\\\ 0&0&0&0\end{pmatrix}\begin{pmatrix}y_{0}\\\ y_{1}\\\ y_{2}\\\
y_{3}\end{pmatrix}$
$($where $\Delta=1-E)$ is equivalent to the equation
$\displaystyle\mathcal{L}y_{0}=0,\qquad\mathcal{L}=\Delta\left(1+\Delta\frac{\delta+E\alpha+\Delta\beta}{\gamma+\Delta\alpha}\right)^{-1}\Delta(\gamma+\Delta\alpha)^{-1}\Delta^{2}.$
(1.16)
We now discuss the $q\to 1$ limit, using the realization of the $q$-commuting
operators $E={\rm e}^{hQ\partial_{Q}}$ and $Q$ which act on a function
$f(z,h)$ by
$\displaystyle(Ef)(z,h)=f(z+h,h),\qquad(Qf)(z,h)={\rm e}^{z}f(z,h),\qquad
EQ={\rm e}^{h}QE,$
where $Q={\rm e}^{z}$ and $q={\rm e}^{h}$. Then, in the limit $h\to 0$, the
operator $\mathcal{L}$ is given by
$\displaystyle\mathcal{L}(\Delta,Q,q)=\frac{1}{\gamma(Q,1)}\Delta^{4}+\partial^{2}_{z}\frac{1}{\gamma(Q,1)}\partial^{2}_{z}h^{4}+O\big{(}h^{5}\big{)},$
(1.17)
where $5\gamma(Q,1)=c_{ttt}(Q,1)$. Thus, the coefficient of $h^{4}$ is the
Picard–Fuchs equation of the quintic, whereas the coefficient of $h^{0}$ (the
analogue of the AJ conjecture) is a line $(1-E)^{4}=0$ with multiplicity 4,
punctured at the zeros of $\gamma(Q,1)=0$. It is not clear if one can apply
topological recursion on such a degenerate curve.
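The $q$-commutation relation $EQ={\rm e}^{h}QE$ above can be checked numerically on a test function. The realization below follows the displayed formulas; the sample function and evaluation point are arbitrary choices of ours.

```python
import math

h = 0.3                                         # q = e^h
E = lambda f: (lambda z: f(z + h))              # shift: (Ef)(z) = f(z + h)
Q = lambda f: (lambda z: math.exp(z) * f(z))    # multiplication by Q = e^z

f = lambda z: z ** 2 + 1.0
z0 = 0.7
lhs = E(Q(f))(z0)                               # (EQ f)(z0)
rhs = math.exp(h) * Q(E(f))(z0)                 # (e^h QE f)(z0)
assert abs(lhs - rhs) < 1e-12
```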
## 2 A review of quantum K-theory
### 2.1 Notation
In this section we collect some useful notation that we use throughout the
paper. For a smooth projective variety $X$, let $K(X)=K^{0}(X;\mathbb{Q})$
denote the Grothendieck group of topological complex vector bundles with
rational coefficients.
Although we will not use it, the Chern class map induces a rational
isomorphism of rings
$\displaystyle\operatorname{ch}\colon\ K(X)\otimes\mathbb{Q}\to H^{\rm
ev}(X,\mathbb{Q})$
between K-theory and even cohomology. The ring $K(X)$ has a basis
$\\{\Phi_{\alpha}\\}$ for $\alpha=0,\dots,N$ such that
$\Phi_{0}=1=[\mathcal{O}_{X}]$ is the identity element. There is a
nondegenerate pairing on $K(X)$ given by $(E,F)\in K(X)\otimes
K(X)\mapsto\chi(E\otimes F)$, where
$\displaystyle\chi(E)=\int_{X}\operatorname{ch}(E)\mathrm{td}(X)$
is the holomorphic Euler characteristic of $E$. Let $\\{\Phi^{\alpha}\\}$
denote the dual basis of $K(X)$ with respect to the above pairing. Let
$\\{P_{1},\dots,P_{r}\\}$ denote a collection of vector bundles whose first
Chern class forms a nef integral basis of
$H^{2}(X,\mathbb{Z})/\text{torsion}$, and let $Q=(Q_{1},\dots,Q_{r})$ be the
collection of Novikov variables dual to $(P_{1},\dots,P_{r})$.
The vector space $\mathcal{K}(q)=\mathbb{Q}(q)$ admits a symplectic form
$\displaystyle\omega(f,g)=(\mathrm{Res}_{q=0}+\mathrm{Res}_{q=\infty})\left(f(q)g\big{(}q^{-1}\big{)}\frac{\mathrm{d}q}{q}\right)$
and a splitting
$\displaystyle\mathcal{K}(q)=\mathcal{K}_{+}(q)\oplus\mathcal{K}_{-}(q)$
(with projections $\pi_{\pm}\colon\mathcal{K}(q)\to\mathcal{K}_{\pm}(q)$) into
a direct sum of two Lagrangian subspaces
$\mathcal{K}_{+}(q)=\mathbb{Q}\big{[}q^{\pm 1}\big{]}$ and
$\mathcal{K}_{-}(q)$, the space of reduced functions of $q$, i.e., rational
functions of negative degree which are regular at $q=0$.
### 2.2 Reconstruction theorems for quantum K-theory
In our paper we will focus exclusively on the genus 0 quantum K-theory of $X$
(i.e., $g=0$ in (1.1)). The collection of integers (1.1) can be encoded in
several generating series. Among them is the primary potential
$\displaystyle\mathcal{F}_{X}(Q,t)=\sum_{d,n}\langle
t,\dots,t\rangle_{0,n,d}\frac{Q^{d}}{n!}\in\mathbb{Q}[[Q,t]]$
(where the summation is over $d\in\text{Eff}(X)$ and $n\geq 0$), the
$J$-function
$\displaystyle
J_{X}(Q,q,t)=(1-q)\Phi_{0}+t+\sum_{d,n}\sum_{\alpha}\\!\left\langle
t,\dots,t,\frac{\Phi_{\alpha}}{1-qL}\right\rangle_{0,n+1,d}\\!\Phi^{\alpha}Q^{d}\in
K(X)\otimes\mathcal{K}(q)[[Q,t]]$
(where $\\{\Phi_{\alpha}\\}$ is a basis for $K(X)$ for $\alpha=0,\dots,N$ with
$\Phi_{0}=1$), and the $T$ matrix
$T_{\alpha,\beta}(Q,q,t)\in\operatorname{End}(K(X))\otimes\mathcal{K}(q)[[Q,t]]$
and its inverse, whose definition we omit but may be found in [28, Section 2].
We may think of $\mathcal{F}_{X}$, $J_{X}(Q,q,t)$ and $T(Q,q,t)$ as scalar-
valued, vector-valued and matrix-valued invariants, respectively.
$J_{X}(Q,q,t)$ specializes to $J_{X}(Q,q,0)$ when $t=0$ and specializes to
$\mathcal{F}_{X}(Q,t)$ when $\alpha=0$ (as follows from the string equation).
Also, the $\alpha=0$ column of $T$ is $J_{X}$.
There are several reconstruction theorems that determine all the invariants
(1.1) from others. In [28, Theorem 1.1], it was shown that the small
$J$-function $J_{X}(Q,q,0)$ uniquely determines the $J$-function
$J_{X}(Q,q,t)$, the primary potential $\mathcal{F}_{X}(Q,t)$ and the integers
(1.1) (with $g=0$), under the assumption that $K(X)$ is generated by line
bundles. In [24] it was shown (under the same assumption on $X$) that the
small $J$-function $J_{X}(Q,q,0)$ reconstructs a permutation-equivariant
version of the quantum K-theory of $X$. This theory was introduced by Givental
in [24]; it takes into account the action of the symmetric
groups $S_{n}$ on the moduli spaces $\overline{\mathcal{M}}^{X,d}_{g,n}$ that
permutes the marked points. The $J$ function of the permutation-equivariant
quantum K-theory of $X$ takes values in the ring
$K(X)\otimes\mathcal{K}(q)\otimes\Lambda[[Q]]$ where $\Lambda$ is the ring of
symmetric functions in infinitely many variables [34]. $K(X)$,
$\mathbb{Q}[[Q]]$ and $\Lambda$ are $\lambda$-rings with Adams operations
$\psi^{r}$, so is their tensor product. Moreover, the small $J$ function of
the permutation-equivariant quantum K-theory of $X$ agrees with the small
$J$-function $J_{X}(Q,q,0)$ of the (ordinary) genus 0 quantum K-theory of $X$.
According to a reconstruction theorem of Givental [24] one can recover all
genus zero permutation-equivariant K-theoretic GW invariants of a projective
manifold $X$ (under the mild assumption that the ring $K(X)$ is generated by
line bundles) from any point $t^{*}$ on their K-theoretic Lagrangian cone via
an explicit flow. In fortunate situations (that apply to the quintic as we
shall see below), one is given a value $J_{X}(Q,q,t^{*})\in
K(X)\otimes\mathcal{K}(q)[[Q]]\subset
K(X)\otimes\mathcal{K}(q)\otimes\Lambda[[Q]]$ and $t^{*}\in
K(X)\otimes\mathcal{K}_{+}(q)[[Q]]$ (e.g., $t^{*}=0$), in which case there
exists a unique $\varepsilon(x,Q,q)\in K(X)\otimes Q\mathcal{K}_{+}(q)[[Q]]$
such that for all $t$
$\displaystyle J_{X}(Q,q,t)=\exp\left(\sum_{r\geq
1}\frac{\psi^{r}(\varepsilon((1-x)E,Q,q))}{r(1-q^{r})}\right)J_{X}(Q,q,t^{*})\in
K(X)\otimes\mathcal{K}(q)[[Q]],$ (2.1)
where $E$ is the operator that shifts $Q$ to $qQ$. The key point here is that
the coefficients of $\varepsilon(x,Q,q)$ (for each power of $Q$ and $x$) are
in the subspace $\mathcal{K}_{+}(q)$ of $\mathcal{K}(q)$ whereas the
corresponding coefficients of $J_{X}(Q,q,t)$ are in the complementary subspace
$\mathcal{K}_{-}(q)$ of $\mathcal{K}(q)$. Another key point is that although
the above formula a priori is an equality in the permutation-equivariant
quantum K-theory, in fact it is an equality of the ordinary quantum K-theory
when $\varepsilon$ is independent of $\Lambda$.
It follows that a single value $J_{X}(Q,q,t^{*})\in
K(X)\otimes\mathcal{K}(q)[[Q]]$ uniquely determines $t^{*}$ as well as the
small J-function $J_{X}(Q,q,0)$, which in turn determines the permutation-
equivariant $J$-function $J_{X}(Q,q,t)$ for all $t$ via (2.1).
### 2.3 A special value for the $\boldsymbol{J}$-function of the quintic
For concreteness, we will concentrate on the case $X$ of the quintic. To use
the above formula (2.1) we need the value $J_{X}(Q,q,t^{*})$ at some point
$t^{*}$. Such a value was given by Givental in [23, p. 11] and by Tonita in
[38, Theorem 1.3 and Corollary 6.8] who proved that if $J_{d}$ denotes the
coefficient of $Q^{d}$ in $J_{\mathbb{C}\mathbb{P}^{4}}(Q,q,0)$ given in
(1.3), then
$\displaystyle
I_{\mathcal{O}(5)}(Q,q)=\sum_{d=0}^{\infty}J_{d}\big{(}(1-x)^{5}q;q\big{)}_{5d}Q^{d}=(1-q)\sum_{d=0}^{\infty}\frac{\big{(}(1-x)^{5}q;q\big{)}_{5d}}{((1-x)q;q)_{d}^{5}}Q^{d}$
(2.2)
lies on the K-theoretic Lagrangian cone of the quintic $X$. This means that if
$\iota\colon X\to\mathbb{C}\mathbb{P}^{4}$ is the inclusion, and
$\iota^{*}\colon K(\mathbb{C}\mathbb{P}^{4})=\mathbb{Q}[x]/(x^{5})\to
K(X)=\mathbb{Q}[x]/(x^{4})$ is the induced map (sending $x\bmod x^{5}$ to
$x\bmod x^{4}$), there exists a $t^{*}$ such that
$\iota^{*}I_{\mathcal{O}(5)}(Q,q)=J_{X}(Q,q,t^{*})$. In other words, we have
$\displaystyle
J(Q,q,t^{*})=(1-q)\sum_{d=0}^{\infty}\frac{\big{(}(1-x)^{5}q;q\big{)}_{5d}}{((1-x)q;q)_{d}^{5}}Q^{d}\in
K(X)\otimes\mathcal{K}(q)[[Q]].$ (2.3)
Interestingly, the above formula has been interpreted by Jockers and Mayr as
an example of the 3d-3d correspondence of gauged linear $\sigma$-models [30].
More precisely, the disk partition function of a 3d gauged linear
$\sigma$-model is a one-dimensional (so-called vortex) integral whose
integrand is a ratio of infinite Pochhammer symbols. A residue calculation
then produces the $q$-hypergeometric series (2.2).
## 3 The flow of the $\boldsymbol{J}$-function
### 3.1 Implementing the flow
In this section we explain how to obtain a formula for the small $J$-function
of the quintic (one power of $Q$ at a time) using formula (2.2) and the flow
(2.1). Observe that the coefficients of $q$ in the function $J(Q,q,t^{*})$
given in (2.3) are not in $\mathcal{K}_{-}(q)$. For instance,
$\displaystyle\text{coeff}\left(\frac{1}{1-q}J(Q,q,t^{*}),x^{0}\right)=\sum_{d=0}^{\infty}\frac{(q;q)_{5d}}{(q;q)_{d}^{5}}Q^{d}$
is a power series in $Q$ whose coefficients are in $\mathcal{K}_{+}(q)$ (and
even in $\mathbb{N}[q]$) and not in $\mathcal{K}_{-}(q)$. Note also that the
function $J(Q,q,t^{*})$ satisfies a 24th order (but _not_ a 4th order) linear
$q$-difference equation with polynomial coefficients. This is discussed in
detail in Section 4.2 below.
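The claim that these coefficients lie in $\mathbb{N}[q]$ can be checked directly in small degree: the ratio $(q;q)_{5d}/(q;q)_{d}^{5}$ is the $q$-multinomial coefficient $\binom{5d}{d,d,d,d,d}_{q}$. A minimal sympy sketch (the function names are ours, for illustration only):

```python
import sympy as sp

q = sp.symbols('q')

def qpoch(n):
    """Finite q-Pochhammer symbol (q;q)_n = prod_{j=1}^n (1 - q^j)."""
    return sp.prod([1 - q**j for j in range(1, n + 1)])

def coeff_x0(d):
    """Coefficient of Q^d in the x^0-part of J(Q,q,t*)/(1-q)."""
    return sp.cancel(qpoch(5 * d) / qpoch(d)**5)

# each ratio is a polynomial in q with nonnegative integer coefficients
ratios = [sp.Poly(coeff_x0(d), q) for d in range(1, 4)]
```

At $q=1$ each ratio specializes to the ordinary multinomial coefficient $(5d)!/(d!)^{5}$, consistent with its degree $10d^{2}$.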
To find $J(Q,q,0)$ from $J(Q,q,t^{*})$, we need to apply a flow operator
(2.1). To state the theorem, recall that $K(X)\otimes\mathcal{K}(q)[[Q]]$ is a
$\lambda$-ring with Adams operations $\psi^{(r)}$ given by combining the usual
Adams operations in K-theory with the replacement of $Q$ and $q$ by $Q^{r}$
and $q^{r}$. More precisely, for a positive natural number $r$, we have
$\displaystyle\psi^{(r)}\colon\ K(X)\otimes\mathcal{K}(q)[[Q]]\to K(X)\otimes\mathcal{K}(q)[[Q]],\qquad\psi^{(r)}\big{(}(1-x)^{i}f(q)Q^{j}\big{)}=(1-x)^{ri}f(q^{r})Q^{rj}$
for $f(q)\in\mathcal{K}(q)$, natural numbers $i$, $j$, and $x$ as in (1.6).
Recall that the plethystic exponential of $f(x,Q,q)\in
K(X)\otimes\mathcal{K}(q)[[Q]]$ (with $f(x,0,q)=0$) is given by
$\displaystyle\operatorname{Exp}(f)=\exp\left(\sum_{r=1}^{\infty}\frac{\psi^{(r)}(f)}{r}\right).$
It is easy to see that when $f$ is small (i.e., $f(x,0,q)=0$), then
$\operatorname{Exp}(f)\in K(X)\otimes\mathcal{K}(q)[[Q]]$ is well-defined. Let
$E$ denote the $q$-difference operator that shifts $Q$ to $qQ$, as in (1.12).
By slight abuse of notation, we denote
$\displaystyle E\colon\ K(X)\otimes\mathcal{K}(q)[[Q]]\to K(X)\otimes\mathcal{K}(q)[[Q]],\qquad E\big{(}(1-x)^{i}f(Q)Q^{j}\big{)}=(1-x)^{i}f(qQ)Q^{j}.$ (3.1)
Throughout the paper, the operators $E$ and $Q$ will act on a function
$f(Q,q)$ by
$\displaystyle(Ef)(Q,q)=f(qQ,q),\qquad(Qf)(Q,q)=Qf(Q,q),\qquad EQ=qQE.$ (3.2)
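Both the plethystic exponential and the relation $EQ=qQE$ can be confirmed on truncated power series. The sketch below (our own, with a truncation order `N` chosen for illustration) checks the standard identity $\operatorname{Exp}(Q)=1/(1-Q)$ and the commutation rule (3.2):

```python
import sympy as sp

Q, q = sp.symbols('Q q')
N = 6  # truncation order in Q

def psi(r, f):
    """Adams operation: replace Q by Q^r and q by q^r."""
    return f.subs([(Q, Q**r), (q, q**r)], simultaneous=True)

def Exp(f):
    """Plethystic exponential Exp(f) = exp(sum_r psi^(r)(f)/r), mod Q^N."""
    s = sum(psi(r, f) / r for r in range(1, N + 1))
    return sp.series(sp.exp(s), Q, 0, N).removeO()

# Exp(Q) = exp(sum_r Q^r/r) = exp(-log(1-Q)) = 1/(1-Q)
exp_Q = sp.expand(Exp(Q))
geom = sum(Q**n for n in range(N))

# the shift operator (Ef)(Q,q) = f(qQ,q) q-commutes with multiplication by Q
def E(f):
    return f.subs(Q, q * Q)

g = Q**3 + q * Q   # any sample function of Q and q
comm = sp.expand(E(Q * g) - q * Q * E(g))
```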
The theorem of Givental–Tonita asserts that there exists a unique
$\varepsilon(x,Q,q)\in K(X)\otimes Q\mathcal{K}_{+}(q)[[Q]]$
such that
$\displaystyle\operatorname{Exp}\left(\frac{\varepsilon((1-x)E,Q,q)}{1-q}\right)J(Q,q,t^{*})\in
K(X)\otimes\mathcal{K}_{-}(q)[[Q]]$ (3.3)
and then, the left hand side of the above equation is $J(Q,q,0)$. Equation
(3.3) is a non-linear fixed-point equation for $\varepsilon$ whose unique
solution may be found working one $Q$-degree at a time. Indeed, we can
write
$\varepsilon(x,Q,q)=\sum_{k=1}^{\infty}\varepsilon_{k}(x,q)Q^{k},\qquad\varepsilon_{k}(x,q)=\sum_{\ell=0}^{3}\varepsilon_{k,\ell}(q)x^{\ell}.$
Then for each positive integer $N$ we have
$\displaystyle\pi_{+}\left(\exp\left(\sum_{r=1}^{N}\sum_{\ell=0}^{3}\sum_{k=1}^{N}\frac{\psi^{(r)}\varepsilon_{k,\ell}(q)}{r(1-q^{r})}Q^{rk}((1-x)E)^{\ell
r}\right)J(Q,q,t^{*})\right)=0.$
Equating the coefficient of each power of $x^{i}$ for $i=0,\dots,3$ to zero in
the above equation, we get a system of four inhomogeneous linear equations
with unknowns $(\varepsilon_{N,0},\dots,\varepsilon_{N,3})$ (with coefficients
polynomials in $\varepsilon_{N^{\prime},\ell^{\prime}}$ for $N^{\prime}<N$),
with a unique solution in the field $\mathcal{K}(q)$. A further check (in accordance with Givental–Tonita’s theorem) is that the unique solution lies in $\mathcal{K}_{+}(q)$; moreover, in our case we check that it lies in $\mathbb{Q}[q]$. Once $\varepsilon_{N^{\prime}}(x,q)$ is known for
$N^{\prime}\leq N$, equation (3.3) allows us to compute $J_{d}(q)$, where
$\displaystyle J(Q,q,0)=\sum_{d=0}^{\infty}J_{d}(q)Q^{d}.$
For instance, when $N=1$ we have
$\displaystyle\varepsilon_{1,0}(q)=1724+572q-625q^{2}-1941q^{3}-3430q^{4}-4952q^{5}-6223q^{6}-6755q^{7}-6184q^{8}$
$\displaystyle\hphantom{\varepsilon_{1,0}(q)=}{}-4690q^{9}-2747q^{10}-969q^{11},$
$\displaystyle\varepsilon_{1,1}(q)=-4600-1140q+2485q^{2}+6520q^{3}+11140q^{4}+15890q^{5}+19860q^{6}+21490q^{7}$
$\displaystyle\hphantom{\varepsilon_{1,1}(q)=}{}+19630q^{8}+14860q^{9}+8690q^{10}+3060q^{11},$
$\displaystyle\varepsilon_{1,2}(q)=4025+555q-3115q^{2}-7255q^{3}-12055q^{4}-17020q^{5}-21175q^{6}-22850q^{7}$
$\displaystyle\hphantom{\varepsilon_{1,2}(q)=}{}-20830q^{8}-15740q^{9}-9190q^{10}-3230q^{11},$
$\displaystyle\varepsilon_{1,3}(q)=-1150+10q+1250q^{2}+2670q^{3}+4340q^{4}+6080q^{5}+7540q^{6}+8120q^{7}$
$\displaystyle\hphantom{\varepsilon_{1,3}(q)=}{}+7390q^{8}+5575q^{9}+3250q^{10}+1140q^{11},$
and, consequently, we find that
$\displaystyle J_{0}(q)=1-q,$ $\displaystyle
J_{1}(q)=-\frac{575x^{2}}{-1+q}-\frac{1150(-1+2q)x^{3}}{(-1+q)^{2}}$
in agreement with [30, equation (6.38)]. Continuing our computation, we find that
$\displaystyle
J_{2}(q)=-\frac{25\big{(}9794+19496q+9725q^{2}\big{)}x^{2}}{(-1+q)(1+q)^{2}}$
$\displaystyle\hphantom{J_{2}(q)=}{}-\frac{50\big{(}{-}7380-9748q+14760q^{2}+29244q^{3}+12139q^{4}\big{)}x^{3}}{(-1+q)^{2}(1+q)^{3}}$
and
$\displaystyle
J_{3}(q)=-\frac{25\big{(}7613022+15225906q+22838859q^{2}+15225860q^{3}+7612953q^{4}\big{)}x^{2}}{(-1+q)\big{(}1+q+q^{2}\big{)}^{2}}$
$\displaystyle\hphantom{J_{3}(q)=}{}-\frac{50}{(-1+q)^{2}\big{(}1+q+q^{2}\big{)}^{3}}\big{(}{-}5075440-7612953q-7612953q^{2}+10150880q^{3}$
$\displaystyle\hphantom{J_{3}(q)=}{}+22838859q^{4}+30451812q^{5}+17763442q^{6}+7612953q^{7}\big{)}x^{3}.$
Two further values of $J_{d}(q)$ for $d=4,5$ were computed but are too long to
be presented here. Based on this data, we guessed the formula for $J(Q,q,0)$
given in (1.8). Finally, we computed $J_{6}(q)$ and found that it is in agreement with our predicted formula (1.8).
### 3.2 Extracting quantum K-theory counts from the small
$\boldsymbol{J}$-function
In this section we give a proof of Corollary 1.3 for the quintic $X$. Recall
that $K(X)$ from equation (1.6) has basis $\Phi_{\alpha}=x^{\alpha}$ for
$\alpha=0,1,2,3$ with $x^{4}=0$ and inner product
$\displaystyle(\Phi_{a},\Phi_{b})=\int_{X}\Phi_{a}\Phi_{b}\,\mathrm{td}(X)=\begin{pmatrix}0&5&-5&5\\ 5&-5&5&0\\ -5&5&0&0\\ 5&0&0&0\end{pmatrix}.$ (3.4)
The dual basis $\{\Phi^{a}\}$ of $K(X)$ is given by
$\displaystyle\Phi^{0}=\tfrac{1}{5}\Phi_{3},\qquad\Phi^{1}=\tfrac{1}{5}(\Phi_{2}+\Phi_{3}),\qquad\Phi^{2}=\tfrac{1}{5}(\Phi_{1}+\Phi_{2}),\qquad\Phi^{3}=\tfrac{1}{5}(\Phi_{0}+\Phi_{1}-\Phi_{3}),$
(3.5)
and is related to the basis $\{\Phi_{a}\}$ by
$\displaystyle\Phi_{0}=5\big{(}\Phi^{1}-\Phi^{2}+\Phi^{3}\big{)},\qquad\Phi_{1}=5\big{(}\Phi^{0}-\Phi^{1}+\Phi^{2}\big{)},$
$\displaystyle\Phi_{2}=5\big{(}{-}\Phi^{0}+\Phi^{1}\big{)},\qquad\Phi_{3}=5\Phi^{0}.$
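Duality against the pairing (3.4) is a finite check: writing the rows of a matrix $C$ for the expansions (3.5) of $\Phi^{a}$ in the $\Phi_{b}$, the condition $(\Phi^{a},\Phi_{b})=\delta_{ab}$ reads $CG=I$. A small numpy verification (our own encoding of the two displayed matrices):

```python
import numpy as np

# Gram matrix (Phi_a, Phi_b) of the basis Phi_a = x^a, from (3.4)
G = np.array([[0, 5, -5, 5],
              [5, -5, 5, 0],
              [-5, 5, 0, 0],
              [5, 0, 0, 0]])

# rows: Phi^0, ..., Phi^3 expanded in Phi_0, ..., Phi_3, from (3.5)
C = np.array([[0, 0, 0, 1],
              [0, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 1, 0, -1]]) / 5

# duality (Phi^a, Phi_b) = delta_ab is exactly C @ G = I
check = C @ G
```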
Substituting $\Phi^{\alpha}$ as above in equation (1.2) and collecting the
powers of $x^{\alpha}$, it follows that
$\displaystyle[J(Q,q,0)]_{1}=1-q+\frac{1}{5}\sum_{d\geq
1}\left\langle\frac{\Phi_{3}}{1-qL}\right\rangle_{0,1,d}Q^{d},$
$\displaystyle[J(Q,q,0)]_{x}=\frac{1}{5}\sum_{d\geq
1}\left(\left\langle\frac{\Phi_{2}}{1-qL}\right\rangle_{0,1,d}+\left\langle\frac{\Phi_{3}}{1-qL}\right\rangle_{0,1,d}\right)Q^{d},$
$\displaystyle[J(Q,q,0)]_{x^{2}}=\frac{1}{5}\sum_{d\geq
1}\left(\left\langle\frac{\Phi_{1}}{1-qL}\right\rangle_{0,1,d}+\left\langle\frac{\Phi_{2}}{1-qL}\right\rangle_{0,1,d}\right)Q^{d},$
$\displaystyle[J(Q,q,0)]_{x^{3}}=\frac{1}{5}\sum_{d\geq
1}\left(\left\langle\frac{\Phi_{0}}{1-qL}\right\rangle_{0,1,d}+\left\langle\frac{\Phi_{1}}{1-qL}\right\rangle_{0,1,d}-\left\langle\frac{\Phi_{3}}{1-qL}\right\rangle_{0,1,d}\right)Q^{d}.$
The above is a linear system of equations with unknowns $\sum_{d\geq
1}\big{\langle}\frac{\Phi_{\alpha}}{1-qL}\big{\rangle}_{0,1,d}Q^{d}$ for
$\alpha=0,1,2,3$. Solving the linear system combined with equation (1.9),
gives (1.10). Setting $q=0$ in (1.10) and using Corollary 1.2, we obtain
(1.11) and (1.3) and conclude the proof of Corollary 1.3.
### 3.3 A comparison with Jockers–Mayr
In this section we give the details of the comparison of our Conjecture 1.1
with a conjecture of Jockers–Mayr [29, p. 10].
To begin with, their $I_{QK}(t)$ is our $J(Q,q,t)$ and their $I(0)$ in [29,
equation (7)] is our $J(Q,q,0)$. They drop the index QK later on. From [29,
equation (4)] it follows that they are working in the same basis
$\Phi_{\alpha}=x^{\alpha}$, $\alpha=0,1,2,3$, as we are. Furthermore, the inner
product on $K(X)$ [29, equation (6)] agrees with the one given in equation
(3.4), with dual basis $\{\Phi^{a}\}$ of $K(X)$ given in (3.5). By [29,
equation (8)], specialized to the quintic, the function $I(t)$ becomes
$I(t)=1-q+t\Phi_{1}+F^{2}(t)\Phi_{2}+F^{3}(t)\Phi_{3}$. Then, they define
functions $F_{A}$ and $\hat{F}^{A}$ by writing
$\sum_{A}F^{A}\Phi_{A}=\sum_{A}\big{(}F_{A,cl}+\hat{F}_{A}\big{)}\Phi^{A}$,
where
$\hat{F}_{A}(t)=\sum_{d>0}Q^{d}\big\langle\!\big\langle\frac{\Phi_{A}}{1-qL}\big\rangle\!\big\rangle_{d}$,
cf. [29, equation (9)], and $F_{A,cl}$ are “constant”, i.e., independent of
$Q$ and $t$. Note that only $F^{2}$, $F^{3}$ are nonzero, which implies that
only $F_{0}$, $F_{1}$ are nonzero. Their conjecture [29, p. 10] can now be
stated (in the case of the quintic) as follows [29, equation (10)]:
$\displaystyle\hat{F}_{0}=p_{2}+\frac{1}{(1-q)^{2}}[(1-3q)\mathcal{F}+qt\mathcal{F}_{1}]_{t^{n>2}},$
$\displaystyle\hat{F}_{1}=p_{1,1}+\frac{1}{(1-q)}[\mathcal{F}_{1}]_{t^{n>1}},$
where $p_{2}$, $p_{1,1}$, $\mathcal{F}$, $\mathcal{F}_{1}$ are certain
explicitly given functions of $t$ and the Gopakumar–Vafa invariants
$\operatorname{GV}_{d}$, [29, equations (11) and (12)]. Combining everything
so far, their conjecture reads
$\displaystyle
I(t)=1-q+t\Phi_{1}+(F_{1,cl}+p_{1,1}+\frac{1}{(1-q)}[\mathcal{F}_{1}]_{t^{n>1}})\Phi^{1}$
$\displaystyle\phantom{I(t)=}+(F_{0,cl}+p_{2}+\frac{1}{(1-q)^{2}}[(1-3q)\mathcal{F}+qt\mathcal{F}_{1}]_{t^{n>2}})\Phi^{0}.$
We will not spell out these functions completely, but only their value at
$t=0$ in order to compare it to our formulas. First, the brackets
$[\dots]_{t^{n>1}},[\dots]_{t^{n>2}}$ vanish for $t=0$. So we are left with
$p_{2}$ and $p_{1,1}$ [29, equation (12)]. Noting that $\sum_{j}d_{j}t_{j}=0$
for $t=0$, these read
$\displaystyle\frac{1}{1-q}p_{1,1}|_{t=0}=\sum_{d>0}Q^{d}\sum_{r|d}\operatorname{GV}_{d/r}\frac{d(1-q^{r})+\frac{d}{r}q^{r}}{(1-q^{r})^{2}},$
$\displaystyle\frac{1}{1-q}p_{2}|_{t=0}=\sum_{d>0}Q^{d}\sum_{r|d}\operatorname{GV}_{d/r}\frac{r^{2}(1-q^{r})^{2}-q^{r}(1+q^{r})}{(1-q^{r})^{3}}.$
Next, we rewrite these sums so that they run over all values of $r$
$\displaystyle\frac{1}{1-q}p_{1,1}|_{t=0}=\sum_{d,r>0}Q^{dr}\operatorname{GV}_{r}\frac{dr(1-q^{r})+dq^{r}}{(1-q^{r})^{2}},$
$\displaystyle\frac{1}{1-q}p_{2}|_{t=0}=\sum_{d,r>0}Q^{dr}\operatorname{GV}_{r}\frac{r^{2}(1-q^{r})^{2}-q^{r}(1+q^{r})}{(1-q^{r})^{3}}.$
Hence,
$\displaystyle\frac{1}{1-q}p_{1,1}|_{t=0}=5\sum_{d,r>0}Q^{dr}\operatorname{GV}_{r}a(d,r,q^{r}),$
$\displaystyle\frac{1}{1-q}p_{2}|_{t=0}=5\sum_{d,r>0}Q^{dr}\operatorname{GV}_{r}\left(b(d,r,q^{r})-a(d,r,q^{r})\right).$
The appearance of the term involving $a(d,r,q^{r})$ in the second equation is
due to the change of basis $\Phi_{2}=5\big{(}{-}\Phi^{0}+\Phi^{1}\big{)}$.
This completes the compatibility of our conjecture and theirs.
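The reindexing used above is the standard unfolding of a divisor sum, $\sum_{d>0}Q^{d}\sum_{r|d}c_{d/r}h_{r}=\sum_{m,r>0}Q^{mr}c_{m}h_{r}$. A generic sympy check of this unfolding (with formal symbols $c$ and $h$ standing in for the Gopakumar–Vafa invariants and the multi-cover factors):

```python
import sympy as sp

Q = sp.symbols('Q')
N = 13  # compare power series through Q^(N-1)

c = sp.IndexedBase('c')   # formal coefficients (e.g. GV invariants)
h = sp.IndexedBase('h')   # formal multi-cover factors

# divisor-sum form: sum_{d>0} Q^d sum_{r|d} c_{d/r} h_r
lhs = sum(Q**d * sum(c[d // r] * h[r] for r in sp.divisors(d))
          for d in range(1, N))

# unfolded form: sum over all pairs (m, r): sum_{m,r>0} Q^{mr} c_m h_r
rhs = sum(Q**(m * r) * c[m] * h[r]
          for m in range(1, N) for r in range(1, N) if m * r < N)

diff = sp.expand(lhs - rhs)
```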
## 4 $\boldsymbol{q}$-difference equations
### 4.1 The small $\boldsymbol{q}$-difference equation of the quintic
In this section we explain how Theorem 1.4 follows from Conjecture 1.1. We
begin with a general discussion. Given a collection of vector functions
$f_{j}(Q,q)\in\mathbb{Q}(q)[[Q]]^{r}$ for $j=1,\dots,r$ such that
$\det(f_{1}|f_{2}|\dots|f_{r})$ is not identically zero, there is always a
canonical linear $q$-difference equation
$\displaystyle(Ey)(Q,q)=A(Q,q)y(Q,q)$
with fundamental solution set $f_{1},\dots,f_{r}$, where $E$ is the shift
operator of equation (3.1) that replaces $Q$ by $qQ$. Indeed, the equations
$Ey_{j}=Ay_{j}$ for $j=1,\dots,r$ are equivalent to the matrix equation
$ET=AT$ where $T=(f_{1}|f_{2}|\dots|f_{r})$ is the fundamental matrix
solution, and inverting $T$, we find that $A=(ET)T^{-1}$. This can be applied
in particular to the collection $E^{j}g$ for $j=0,\dots,r-1$ of a vector function $g(Q,q)\in\mathbb{Q}(q)[[Q]]^{r}$ for which $\det\big{(}g|Eg|\dots|E^{r-1}g\big{)}$ is nonzero. Said differently, every vector function $g(Q,q)\in\mathbb{Q}(q)[[Q]]^{r}$ together with its $r-1$ shifts (generically) satisfies a linear $q$-difference equation.
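This construction can be carried out verbatim in a computer algebra system. The toy example below (our own choice of $g$, with $r=2$) builds $T=(g\,|\,Eg)$, forms $A=(ET)T^{-1}$, and confirms that every column $f$ of $T$ satisfies $Ef=Af$:

```python
import sympy as sp

Q, q = sp.symbols('Q q')

def E(f):
    """Shift operator (Ef)(Q,q) = f(qQ,q), applied entrywise below."""
    return f.subs(Q, q * Q)

# a toy vector function g(Q,q) with det(g | Eg) not identically zero
g = sp.Matrix([1 / (1 - Q), 1 / (1 - q * Q)])

T = sp.Matrix.hstack(g, g.applyfunc(E))      # fundamental matrix (g | Eg)
A = sp.simplify(T.applyfunc(E) * T.inv())    # A = (ET) T^{-1}

detT = sp.simplify(T.det())
residual = sp.simplify(T.applyfunc(E) - A * T)
```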
We will apply the above principle to the 4-tuple
$((1-x)E)^{j}J(Q,q,0)/(1-q)\in K(X)\otimes\mathcal{K}(q)[[Q]]$ for
$j=0,\dots,3$ where $J(Q,q,0)\in K(X)\otimes\mathcal{K}_{-}(q)[[Q]]$ is as in
Conjecture 1.1. However, notice that although the $q$-coefficients of
$J(Q,q,0)/(1-q)$ are in $\mathcal{K}_{-}(q)$, this is no longer true for the
shifted functions $((1-x)E)^{j}J(Q,q,0)/(1-q)$ for $j=1,2,3$. In that case, we
need to apply the Birkhoff factorization [25, App.A] to the matrix
$\displaystyle\frac{1}{1-q}\big{(}J|(1-x)EJ|((1-x)E)^{2}J|((1-x)E)^{3}J\big{)}=TU,$
(4.1)
where the $q$-coefficients of the entries of $T$ are in $\mathcal{K}_{-}(q)$
and of $U$ are in $\mathcal{K}_{+}(q)$ (compare also with Lemma 3.3 of [28,
equation (4)]). The existence and uniqueness of the matrices $T$ and $U$ in (4.1) follow from the fact that its left hand side is unipotent; the proof is discussed in detail in the above reference.
In our case, the choice
$\displaystyle
T=\frac{1}{1-q}\pi_{+}\big{(}J|(1-x)EJ|((1-x)E)^{2}J|((1-x)E)^{3}J\big{)}$
together with equation (4.1) implies that the $q$-coefficients of the entries
of $U$ are in $\mathcal{K}_{+}(q)$. Equation (1.13) for the fundamental matrix
$T$ follows from the fact that
$\displaystyle\pi_{+}\left(q^{d}\left(\frac{r^{2}}{1-q}-\frac{q+q^{2}}{(1-q)^{3}}\right)\right)=\frac{-1+3q-4q^{2}}{(1-q)^{3}}+\frac{(-1+d)(-1-d+3q+dq)}{(1-q)^{2}}+\frac{r^{2}}{1-q},$
$\displaystyle\pi_{+}\left(q^{d}\left(\frac{r}{1-q}+\frac{q}{(1-q)^{2}}\right)\right)=\frac{-d+q+dq}{(1-q)^{2}}+\frac{r}{1-q}$
valid for all positive natural numbers $d$ and $r$.
Having computed the fundamental matrix $T$ (1.13), we use [28, equation (2)],
with $P^{-1}q^{Q\partial_{Q}}$ replaced by $1-(1-x)E$ to deduce the small
A-matrix (1.14).
Explicitly, the four nontrivial entries of the matrix $D$ are given by
$\displaystyle 5(a-c-Ea)(d,r,q)=\frac{d\big{(}{-}d+q+dq-q^{1+d}+r-qr-q^{d}r+q^{1+d}r\big{)}}{(1-q)^{2}},$ (4.2a)
$\displaystyle 5(b-e+Ea-Eb)(d,r,q)=-\frac{q^{2}\big{(}1+2d+d^{2}-r^{2}\big{)}+q\big{(}1-2d-2d^{2}+2r^{2}\big{)}}{(1-q)^{3}}+\frac{d^{2}-r^{2}+q^{d}\big{(}{-}q-q^{2}+r^{2}-2qr^{2}+q^{2}r^{2}\big{)}}{(1-q)^{3}},$ (4.2b)
$\displaystyle 5(c-Ec)(d,r,q)=\frac{d^{2}\big{(}1-q^{d}\big{)}}{1-q},$ (4.2c)
$\displaystyle 5(e+Ec-Ee)(d,r,q)=-\frac{d\big{(}{-}d+q+dq-q^{1+d}-r+qr+q^{d}r-q^{1+d}r\big{)}}{(1-q)^{2}}.$ (4.2d)
Note that the entries of $5D$ are in $\mathbb{Z}[[Q]][q]$. Moreover, the
values when $q=1$ are given by
$\displaystyle 5(a-c-Ea)(d,r,1)=-\frac{1}{2}d^{2}(1+d-2r),$ $\displaystyle
5(b-e+Ea-Eb)(d,r,1)=-\frac{1}{6}d\big{(}1+3d+2d^{2}-6r^{2}\big{)},$
$\displaystyle 5(c-Ec)(d,r,1)=d^{3},$ $\displaystyle 5(e+Ec-Ee)(d,r,1)=\frac{1}{2}d^{2}(1+d+2r).$
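These $q=1$ values are routine limit computations; for instance the first and third can be confirmed for a few integer pairs $(d,r)$ with a sympy sketch of ours:

```python
import sympy as sp

q = sp.symbols('q')

checks = []
for d, r in [(1, 1), (2, 3), (3, 2)]:
    # entry 5(a - c - Ea)(d,r,q) of 5D, from (4.2a)
    e_a = d * (-d + q + d*q - q**(1 + d) + r - q*r - q**d * r
               + q**(1 + d) * r) / (1 - q)**2
    # entry 5(c - Ec)(d,r,q), from (4.2c)
    e_c = d**2 * (1 - q**d) / (1 - q)
    checks.append(sp.limit(e_a, q, 1) == -sp.Rational(1, 2) * d**2 * (1 + d - 2*r))
    checks.append(sp.limit(e_c, q, 1) == d**3)
```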
As a further consistency check, note that our matrix $D$ given in (1.14)
equals the matrix $D$ of [30, equation (8.21)].
Given the formula of (1.14), an explicit calculation shows that the entries of
$D$ are given by (4.2). This concludes the proof of Theorem 1.4.
###### Proof of Corollary 1.5.
It follows from equations (4.2c) and (1.15). ∎
###### Proof of Lemma 1.6.
We have
$\displaystyle\Delta y_{0}=y_{1}+\alpha y_{2}+\beta y_{3},\qquad\Delta
y_{1}=\gamma y_{2}+\delta y_{3},\qquad\Delta y_{2}=y_{3},\qquad\Delta
y_{3}=0.$
The lemma follows by eliminating $y_{1}$, $y_{2}$, $y_{3}$ (one at a time)
using the fact that
$\displaystyle E(fg)=(Ef)(Eg),\qquad\Delta(fg)=(\Delta f)g+f(\Delta g)-(\Delta
f)(\Delta g).$
(which follows from $(Ef)(Q,q)=f(qQ,q)$ and $\Delta=1-E$). Indeed, we have
$\Delta^{2}y_{0}=\Delta(\Delta y_{0})=\Delta(y_{1}+\alpha y_{2}+\beta
y_{3})=(\gamma+\Delta\alpha)y_{2}+(\delta+E\alpha+\Delta\beta)y_{3},$
and hence,
$(\gamma+\Delta\alpha)^{-1}\Delta^{2}y_{0}=y_{2}+\frac{\delta+E\alpha+\Delta\beta}{\gamma+\Delta\alpha}y_{3},$
and hence,
$\Delta(\gamma+\Delta\alpha)^{-1}\Delta^{2}y_{0}=\left(1+\frac{\delta+E\alpha+\Delta\beta}{\gamma+\Delta\alpha}\right)y_{3}.$
Applying $\Delta$ once again and using $\Delta y_{3}=0$ concludes the proof of equation (1.16). Note that the notation is such that an operator $\Delta$ applies to everything on its right.
The $q=1$ limit of $\mathcal{L}(\Delta,Q,q)$ follows from equation (1.16), the
fact that
$\displaystyle(\Delta f)(Q,q)|_{q=1}=(f(qQ,q)-f(Q,q))|_{q=1}=0$
and Corollary 1.5. ∎
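The twisted Leibniz rule for $\Delta=1-E$ used in the proof is easy to confirm symbolically (with sample functions of our choosing):

```python
import sympy as sp

Q, q = sp.symbols('Q q')

def E(h):
    """Shift operator (Eh)(Q,q) = h(qQ,q)."""
    return h.subs(Q, q * Q)

def Delta(h):
    """Difference operator Delta = 1 - E."""
    return h - E(h)

# two sample functions of Q and q
f = 1 / (1 - Q)
g = Q**2 + q * Q

lhs = Delta(f * g)
rhs = Delta(f) * g + f * Delta(g) - Delta(f) * Delta(g)
residual = sp.simplify(lhs - rhs)
```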
### 4.2 The Frobenius method for linear $\boldsymbol{q}$-difference equations
In this section we discuss in detail the linear $q$-difference equation
satisfied by the function $J(Q,q,t^{*})$ of (2.2). Recall the operators $E$
and $Q$ that act on functions of $Q$ and $q$ by (3.2).
Let
$\displaystyle
J(Q,q,x)=\sum_{n=0}^{\infty}a_{n}(q,x)Q^{n}=J_{0}(Q,q)+J_{1}(Q,q)x+\dots\in\mathbb{Q}(q)[[Q,x]],$
(4.3)
where $J_{n}(Q,q)\in\mathbb{Q}(q)[[Q]]$ for all $n$ and
$\displaystyle a_{n}(q,x)=\frac{\big{(}{\rm
e}^{5x}q;q\big{)}_{5n}}{\big{(}{\rm e}^{x}q;q\big{)}_{n}^{5}},$
where ${\rm e}^{ax}$ is to be understood as the polynomial in $x$ obtained by truncating the exponential series modulo $x^{4}$.
The functions $J_{n}(Q,q)$ are given by series whose summand is a
$q$-hypergeometric function times a polynomial of $q$-harmonic functions. For
example, we have
$\displaystyle
J_{0}(Q,q)=\sum_{n=0}^{\infty}\frac{(q;q)_{5n}}{(q;q)_{n}^{5}}Q^{n},$
$\displaystyle
J_{1}(Q,q)=\sum_{n=0}^{\infty}\frac{(q;q)_{5n}}{(q;q)_{n}^{5}}(1+5H_{5n}(q)-5H_{n}(q))Q^{n},$
where $H_{n}(q)=\sum_{j=1}^{n}q^{j}/\big{(}1-q^{j}\big{)}$ is the $n$th
$q$-harmonic number. Consider the 25-th order linear $q$-difference operator
$\displaystyle
L_{5}(E,Q,q)=(1-E)^{5}-Q\prod_{j=1}^{5}\big{(}1-q^{j}E^{5}\big{)}$ (4.4)
with coefficients polynomials in $Q$ and $q$. Note that
$L_{5}=(1-E)^{5}-\prod_{j=1}^{5}\big{(}1-q^{j-5}E^{5}\big{)}Q$, hence $L_{5}$
factors as $1-E$ times a 24-th order operator.
###### Lemma 4.1.
With $J$ as in (4.3) and $L_{5}$ as in (4.4), we have
$\displaystyle L_{5}\big{(}{\rm e}^{x},Q,q\big{)}J=\big{(}1-{\rm
e}^{x}\big{)}^{5}.$
###### Proof.
It is easy to see that
$\frac{a_{n}(q,x)}{a_{n-1}(q,x)}=\frac{\prod_{j=1}^{5}\big{(}1-{\rm e}^{5x}q^{5(n-1)+j}\big{)}}{\big{(}1-{\rm e}^{x}q^{n}\big{)}^{5}}.$
Hence,
$\big{(}1-{\rm e}^{x}q^{n}\big{)}^{5}a_{n}(q,x)Q^{n}=Q\prod_{j=1}^{5}\big{(}1-{\rm e}^{5x}q^{5(n-1)+j}\big{)}a_{n-1}(q,x)Q^{n-1}$
and in operator form,
$\big{(}1-{\rm
e}^{x}E\big{)}^{5}a_{n}(q,x)Q^{n}=Q\prod_{j=1}^{5}\big{(}1-q^{j}{\rm
e}^{5x}E^{5}\big{)}a_{n-1}(q,x)Q^{n-1}.$
Summing from $n=1$ to infinity, we obtain that
$\big{(}1-{\rm e}^{x}E\big{)}^{5}(J-1)=Q\prod_{j=1}^{5}\big{(}1-q^{j}{\rm
e}^{5x}E^{5}\big{)}J.$
Since $\big{(}1-{\rm e}^{x}E\big{)}^{5}1=\big{(}1-{\rm e}^{x}\big{)}^{5}$, the result follows. ∎
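At $x=0$ the recursion in the proof specializes to $(1-q^{n})^{5}a_{n}=\prod_{j=1}^{5}\big(1-q^{5(n-1)+j}\big)a_{n-1}$ for $a_{n}=(q;q)_{5n}/(q;q)_{n}^{5}$, which can be confirmed directly (a sympy sketch of ours):

```python
import sympy as sp

q = sp.symbols('q')

def qpoch(n):
    """Finite q-Pochhammer symbol (q;q)_n."""
    return sp.prod([1 - q**j for j in range(1, n + 1)])

def a(n):
    """x = 0 specialization of a_n(q,x): (q;q)_{5n} / (q;q)_n^5."""
    return qpoch(5 * n) / qpoch(n)**5

ok = []
for n in range(1, 4):
    lhs = (1 - q**n)**5 * a(n)
    rhs = sp.prod([1 - q**(5 * (n - 1) + j) for j in range(1, 6)]) * a(n - 1)
    ok.append(sp.simplify(lhs - rhs) == 0)
```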
Note that the proof of Lemma 4.1 implies that $J(Q,q,x)$ satisfies a 24-th order linear $q$-difference equation, but this will not play a role in our paper. Of importance is the fact that the 25-th order equation $L_{5}f=0$ has
a distinguished 5-dimensional space of solutions, given explicitly by a
$q$-version of the Frobenius method. Since this method is well-known for
linear differential equations, but less so for linear $q$-difference
equations, we give more details than usual. For additional discussion on this
method, see Wen [40], and for references for the $q$-gamma and $q$-beta
functions, see De Sole–Kac [10].
First, we define an $n$-th derivative of an operator $P(E,Q,q)$ by
$\displaystyle P^{(n)}(E,Q,q)=\sum_{k=0}^{d}k^{n}c_{k}(Q,q)E^{k},\qquad
P(E,Q,q)=\sum_{k=0}^{d}c_{k}(Q,q)E^{k}.$
In other words, we may write $P^{(n)}=(E\partial_{E})^{n}(P)$.
###### Lemma 4.2.
For a linear $q$-difference operator $P(E,Q,q)$ we have
$\displaystyle P({\rm
e}^{x}E,Q,q)=\sum_{n=0}^{\infty}\frac{x^{n}}{n!}P^{(n)}(E,Q,q).$ (4.5)
Moreover, for all natural numbers $n$ and a function $f(Q,q)$ we have
$\displaystyle P((\log Q)^{n}f)=\sum_{k=0}^{n}\binom{n}{k}(\log Q)^{n-k}(\log
q)^{k}P^{(k)}f.$ (4.6)
###### Proof.
Equations (4.5) and (4.6) are additive in $P$, hence it suffices to prove them
when $P=E^{a}$ for a natural number $a$, in which case $(E^{a})^{(n)}=a^{n}$
and both identities are clear. ∎
###### Lemma 4.3.
Suppose $P(E,Q,q)$ is a linear $q$-difference operator, polynomial in $E$ and $Q$, and $J(Q,q,x)\in\mathbb{Q}(q)[[Q,x]]$ is such that
$\displaystyle P({\rm e}^{x}E,Q,q)J(Q,q,x)=O\big{(}x^{N+1}\big{)}$ (4.7)
for some natural number $N$. Then,
$\displaystyle\sum_{k=0}^{n}\binom{n}{k}P^{(k)}J_{n-k}=0$ (4.8)
for $n=0,\dots,N$, where
$J_{k}=\operatorname{coeff}\big{(}J(Q,q,x),x^{k}\big{)}$, and
$\displaystyle Pf_{n}=0,\qquad f_{n}=\sum_{k=0}^{n}\binom{n}{k}(\log
Q)^{n-k}(\log q)^{k}J_{k}$ (4.9)
for $n=0,1,\dots,N$. In other words, the equation $Pf=0$ has $N+1$
distinguished solutions given by
$\displaystyle f_{0}=J_{0},$ $\displaystyle f_{1}=\log QJ_{0}+\log qJ_{1},$
$\displaystyle f_{2}=(\log Q)^{2}J_{0}+2\log Q\log qJ_{1}+(\log q)^{2}J_{2},$
$\displaystyle\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots$
###### Proof.
Equation (4.8) follows easily using (4.5) and by expanding the left hand side
of equation (4.7) into power series in $x$ and equating the coefficient of
$x^{n}$ with zero for $n=0,1,\dots,N$. Equation (4.9) follows from equations
(4.8) and (4.6), and induction on $n$. ∎
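For the building-block case $P=E^{a}$, where $P^{(k)}=a^{k}E^{a}$, the derivative formula amounts to the binomial expansion of $(\log Q+a\log q)^{n}$; a quick sympy confirmation (with a sample $f$ and the values $a=2$, $n=2$ of our choosing):

```python
import sympy as sp

Q, q = sp.symbols('Q q', positive=True)

a, n = 2, 2
f = 1 / (1 - Q)

def Ea(h):
    """(E^a h)(Q,q) = h(q^a Q, q)."""
    return h.subs(Q, q**a * Q)

# E^a((log Q)^n f) = sum_k C(n,k) (log Q)^(n-k) (a log q)^k E^a f
lhs = sp.expand_log(Ea(sp.log(Q)**n * f))
rhs = sum(sp.binomial(n, k) * sp.log(Q)**(n - k) * (a * sp.log(q))**k * Ea(f)
          for k in range(n + 1))
residual = sp.simplify(sp.expand(lhs) - sp.expand(rhs))
```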
## 5 Quantum K-theory versus Chern–Simons theory
There are several hints in the physics literature pointing to a deeper
relation between Quantum K-theory and Chern–Simons gauge theory (e.g., for
3-manifolds with boundary, such as knot complements); see for instance [6, 13, 15, 29, 30] and references therein. In this section we discuss and
comment on the $q$-difference equations in Chern–Simons theory, gauged linear
$\sigma$-models and Quantum K-theory. We will discuss three aspects of this
comparison:
* (a)
$q$-holonomic systems and their $q=1$ semiclassical limits,
* (b)
$\varepsilon$-deformations,
* (c)
matrix-valued invariants.
We begin with the case of the Chern–Simons theory. The partition function of
Chern–Simons theory with compact (e.g., ${\rm SU}(2)$) gauge group on a
3-manifold (with perhaps nonempty boundary) is given by a finite-dimensional
state-sum whose summand has as a building block the quantum $n$-factorial.
This follows from the existence of an underlying TQFT [36, 39, 42] which reduces
the computation of the partition function into elementary pieces. For the
complement of a knot $K$ in $S^{3}$, the partition function recovers the colored Jones polynomial of the knot, which, in the case of ${\rm SU}(2)$, is a
sequence $J_{K,n}(q)\in\mathbb{Z}\big{[}q^{\pm}\big{]}$ of Laurent polynomials
which can be presented as a finite-dimensional sum whose summand has as a
building block the finite $q$-Pochhammer symbol $(q;q)_{n}$. This ultimately
boils down to the entries of the $R$-matrix which are given for example in
[36].
On the other hand, Chern–Simons theory with complex (e.g., ${\rm
SL}_{2}(\mathbb{C})$) gauge group is not well-understood as a TQFT. However,
the partition function for a 3-manifold with boundary can be computed by a
finite-dimensional state-integral whose integrand has as a building block
Faddeev’s quantum dilogarithm function [16]. The latter is a ratio of two
infinite Pochhammer symbols which form a quasi-periodic function with two
quasi periods. (Recall that the Pochhammer symbol is
$(x;q)_{\infty}=\prod_{j=0}^{\infty}\big{(}1-q^{j}x\big{)}$.) These are the
state-integrals studied in quantum Teichmüller theory by Kashaev et al. [3, 4,
31] and in complex Chern–Simons theory by Dimofte et al. [11, 12].
The appearance of $q$-holonomic systems in Chern–Simons theory with
compact/complex gauge group is a consequence of Zeilberger theory [35, 41, 43]
applied to finite-dimensional state-sums/integrals whose summand/integrand has
as a building block the finite/infinite $q$-Pochhammer symbol. This is exactly
how it was deduced that the sequence of colored Jones polynomials $J_{K,n}(q)$
of a knot satisfies a linear $q$-difference equation
$A_{K}\big{(}\hat{L},\hat{M},q\big{)}J_{K}=0$ (see [19]), where $\hat{L}$ and
$\hat{M}$ are $q$-commuting operators that act on a sequence
$f\colon\mathbb{N}\to\mathbb{Q}(q)$ by
$\displaystyle\big{(}\hat{L}f\big{)}(n)=f(n+1),\qquad\big{(}\hat{M}f\big{)}(n)=q^{n}f(n),\qquad
\hat{L}\hat{M}=q\hat{M}\hat{L}.$
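The $q$-commutation of these operators can be seen on any sequence: $(\hat L\hat Mf)(n)=q^{n+1}f(n+1)$ while $(\hat M\hat Lf)(n)=q^{n}f(n+1)$. A tiny sympy check with a formal sequence $f$ (ours):

```python
import sympy as sp

q = sp.symbols('q')
f = sp.Function('f')

def L(seq):
    """(L seq)(n) = seq(n+1)."""
    return lambda n: seq(n + 1)

def M(seq):
    """(M seq)(n) = q^n seq(n)."""
    return lambda n: q**n * seq(n)

# (L M f)(n) - q (M L f)(n) vanishes for every n, i.e. L M = q M L
ok = all(sp.simplify(L(M(f))(n) - q * M(L(f))(n)) == 0 for n in range(5))
```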
In the case of state-integrals, the existence of two quasi-periods leads to a linear $q$- (and also $\tilde{q}$-) difference equation, where $q={\rm e}^{2\pi{\rm i}h}$ and $\tilde{q}={\rm e}^{-2\pi{\rm i}/h}$.
It is conjectured that the linear $q$-difference equation of the colored Jones
polynomial essentially coincides with the one of the state-integral, and that
the classical $q=1$ limit (the so-called AJ conjecture [17]) coincides with
the $A$-polynomial $A_{K}(L,M,1)$ of the knot. The latter is the ${\rm
SL}_{2}(\mathbb{C})$-character variety of the fundamental group of the knot
complement, viewed from the boundary torus [8]. Finally, the semiclassical limit (the analogue of (1.17)) is given by
$\displaystyle
A_{K}\big{(}\hat{L},\hat{M},q\big{)}=A_{K}(L,M,1)+D_{K}(z,\partial_{z})h^{s}+O\big{(}h^{s+1}\big{)},$
where $D_{K}(z,\partial_{z})$ is a linear differential operator of degree $s$
where $s$ is the order of vanishing of $A_{K}(L,1,1)$ at $L=1$. This order is
typically $1$ (e.g., for the $4_{1}$, $5_{2}$, $6_{1}$ and more generally all
twist knots) but it is equal to $2$ for the $8_{18}$ knot.
We now come to the second feature, namely an expected “factorization” of state-integrals into a finite sum of products of $q$-series and $\tilde{q}$-series.
This factorization is computed by an $\varepsilon$-deformation of $q$\- and
$\tilde{q}$-hypergeometric series that arise by applying the residue theorem
to the state-integrals. For a detailed illustration of this, we refer the
reader to [6, 18] and [21].
Our last discussed feature, namely a matrix-valued extension of the
Chern–Simons invariants with compact/complex gauge group, was recently
discovered in two papers [20, 21]. More precisely, it was conjectured and in
some cases verified that the scalar valued quantum knot invariants such as the
Kashaev invariant [32] (an evaluation of the $n$-th colored Jones polynomial
at $n$-th roots of unity) and the Andersen–Kashaev state-integral [3] admit an
extension to matrix-valued invariants. The rows and columns are labeled by
the set $\mathcal{P}_{M}$ of ${\rm SL}_{2}(\mathbb{C})$ boundary-parabolic
representations of $\pi_{1}(M)$. In the case of a knot complement, the set
$\mathcal{P}_{M}$ can be thought of as the set of branches of the
$A$-polynomial curve above a point (where the meridian has eigenvalues 1).
Although the corresponding vector space $R(M):=\mathbb{Q}\mathcal{P}_{M}$ with
basis $\mathcal{P}_{M}$ has no ring structure known to us, it has a
distinguished element corresponding to the trivial ${\rm
SL}_{2}(\mathbb{C})$-representation that plays an important role. A ring structure on $\mathbb{Q}\mathcal{P}_{M}$ might be defined as the Grothendieck group of an appropriate category associated to flat connections on 3-manifolds with boundary, or perhaps by constructing an appropriate logarithmic conformal field theory whose fusion rules would define the sought ring, as suggested by Gukov. Alternatively, the sought ring might be described in terms of ${\rm SL}(2,\mathbb{C})$-Floer homology, as suggested by Witten. Alternatively, it
might be described by the quantum K-theory of the mirror of the local
Calabi–Yau manifold $uv=A_{M}(x,y)$ (where $A_{M}$ is the $A$-polynomial discussed above), as suggested by Aganagic–Vafa [1].
We now discuss the above features (a)–(c) that appear in the 3d gauged
$\sigma$-models and their 3d-3d correspondence studied in detail in [6, 13,
14, 15, 29, 30] and references therein. The $q$-holonomic aspect is still
present since the (so-called vortex) partition function is a finite-dimensional integral whose integrand has as a building block the infinite
Pochhammer symbol (note however that $\tilde{q}$ does not appear). The second
aspect involving $\varepsilon$-deformations is also present for the same
reason as in Chern–Simons theory. The third aspect is absent in general.
We finally discuss the above features in genus 0 quantum K-theory of the
quintic. The first aspect is different: the linear $q$-difference equation
has coefficients which are analytic (and not polynomial) functions of $Q$ and
$q$. The classical limit $q=1$ of the linear $q$-difference equation of the
quintic is given by $\gamma(Q,1)^{-1}\Delta^{4}$ (1.17) and this defines a
degenerate analytic curve in $\mathbb{C}\times\mathbb{C}$ that consists of a
finite collection of lines with coordinates $(\Delta,Q)$. On the other hand,
the semi-classical limit (i.e., the coefficient of $h^{4}$ in (1.17)) is the
famous Picard–Fuchs equation of the quintic. The second feature, the
$\varepsilon$-deformation for a nilpotent variable $\varepsilon$ is encoded in
the fact that $K(X)$ has nilpotent elements $x$. The last feature is most
interesting since the matrix-valued invariants are encoded in
$\operatorname{End}(K(X))$, where $K(X)$ is not just a rational vector space,
but a ring with unit $1$. It follows that the linear $q$-difference equations have
not only a distinguished solution $J_{X}(Q,q,0)$ but a basis of solutions
parametrized by a basis $\{\Phi_{\alpha}\}$ of $K(X)$.
Let us end our discussion with some questions on the colored Jones polynomial
$J_{K,n}(q)$ colored by the $n$-dimensional irreducible
$\mathfrak{sl}_{2}(\mathbb{C})$ representation. For simplicity, we abbreviate
$R\big{(}S^{3}\setminus K\big{)}$ defined above by $R(K)$.
###### Question 5.1.
* $(a)$
Does the vector space $R(K)$ have a ring structure?
* $(b)$
If so, is the series $\sum_{n=1}^{\infty}J_{K,n}(q)Q^{n}$ the coefficient of $1$ in the $R(K)$-valued small $J$-function $J_{K}(Q,q,0)$ of a knot $K$?
* $(c)$
If so, is there a $t$-deformation $J_{K}(Q,q,t)$?
### Acknowledgements
The authors wish to thank the Max-Planck-Institute for Mathematics and the
Bethe Center for Theoretical Physics in Bonn for inviting them to their
workshop on _Number Theoretic Methods in Quantum Physics_ in July 2019, where
the first ideas were conceived. We also wish to thank Gaetan Borot, Alexander
Givental, Todor Milanov and Di Yang for useful conversations. E.S. wishes to
thank the University of Melbourne for having him as a guest during 2020 and
Southern University of Science and Technology for hospitality in 2021.
# Optimizing Hyperparameters in CNNs using Bilevel Programming
for Time Series Data
Taniya Seth Pranab K. Muhuri
Department of Computer Science, South Asian University, New Delhi, India
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Hyperparameter optimization has remained a central topic within the machine
learning community due to its ability to produce state-of-the-art results.
With the recent growth of interest in using CNNs for time series prediction, we propose the notion of optimizing hyperparameters in CNNs for the purpose of time series prediction. In this position paper, we put forward the idea of modeling the concerned hyperparameter optimization problem using bilevel programming.
## 1 Introduction
Training a machine to perform human tasks such as image recognition and
data prediction involves preparing a model that learns the given data well
enough. This model predominantly involves a training algorithm that is
responsible for the learning task. The job of this training algorithm is to
develop a function which, in essence, minimizes a loss on some data samples (a
subset of the ground-truth data) introduced to it. The trained model is then
applied to the test data (an out-of-sample subset of the ground-truth data),
on which the model is evaluated using another loss. Who evaluates the
performance of this model? Who decides exactly how much is good enough?
The machine learning literature provides answers to such questions while
producing good models that make machines perform various tasks. The evaluation
of the training algorithm is done using a training loss, which quantifies the
difference between the actual state of the model’s learning and the training
data it is provided with. The function mentioned earlier minimizes this
difference between the learnt data and the training data with respect to some
parameters of the model, say $\theta$, such as the model weights, which the
training algorithm gradually learns during the training process.
However, a model also includes hyperparameters, $\lambda$, which Bergstra and
Bengio (2012) refer to as the “bells and whistles” of a training algorithm. In
practice, hyperparameters are chosen first, followed by the development of the
training algorithm. Due to their importance and influence on training, these
hyperparameters require expert intervention to be chosen.
When optimized values of hyperparameters are supplied to the training
algorithm, it learns well from the training data, while additionally
performing well on the out-of-sample test data. The performance of the model
on the test data is evaluated based on a validation loss, which must be
minimized for the model to generalize well.
This discussion establishes the necessity of hyperparameter optimization (HO)
within machine learning models. This problem has been studied for a long time.
Bergstra et al. (2011) utilized various approaches, such as the sequential
model-based approach, the Gaussian process approach, and the tree-structured
Parzen estimator approach, for optimizing the expected improvement criterion.
Random search was subsequently studied for HO in Bergstra and Bengio (2012).
Later, Thornton et al. (2013) introduced Auto-WEKA for the combined selection
and HO in classification algorithms. Eggensperger et al. (2013) put forward an
empirical study to deal with Bayesian optimization for hyperparameters. Most
importantly, gradient-based HO was discussed in Maclaurin et al. (2015),
wherein exact gradients of hyperparameters were computed by chaining their
derivatives backwards in the training procedure through reversible learning.
Other works on HO include Feurer et al. (2015) and Li et al. (2017).
Having discussed the problem of HO above, one can notice the dual structure
that the problem encompasses. In other words, the performance of a machine
learning model is optimized based on the training and validation losses, and
this optimization is subject to the values chosen for the hyperparameters
$\lambda$ of the model. Stated plainly: the validation loss of a model is
minimized subject to the training loss being minimized, where the model is
parameterized by the hyperparameters.
Such a dual structure appears in multiple real-life situations, which can be
modeled using the bilevel programming strategy Bard (2013). Solving these
problems follows a leader-follower approach, inspired by game theory Von
Stackelberg and Von (1952). Within these problems, the solution space of the
objective function (OF) of the leader is constrained by that of the follower
problem. Hence, a solution is sought that satisfies both the leader’s and the
follower’s solution spaces while optimizing their individual objectives.
Recently, the idea of HO using bilevel programming was proposed in Franceschi
et al. (2018), where the authors developed a bilevel optimization framework
for HO. Upon formulating the bilevel model for HO, they observed that it is
difficult to obtain a solution, especially when $\lambda$ is a real-valued
vector of hyperparameters. To overcome this, the exact bilevel problem was
approximated, and the approximation was proven to converge to solutions of the
exact problem.
In the literature so far, the problem of HO has been dealt with mostly for
image data. In today’s world, time series data are available in abundance:
from stock market prices to daily average temperatures, human activity data
and, now most importantly, COVID-19 data, much is available as a time series.
Leveraging such series for either classification or prediction is crucial.
Convolutional neural networks (CNNs) have been utilized for both
classification and prediction on time series data.
Zheng et al. (2014) classified time series data utilizing multi-channel deep
CNNs, with special attention to the exploration of feature learning
techniques. In Yang et al. (2015), classification for human activity
recognition (HAR) is done using deep CNNs, whereas in Cui et al. (2016), time
series classification is done using multi-scale CNNs. Other recent works on
time series forecasting and classification using CNNs include Borovykh et al.
(2017) and Yazdanbakhsh and Dick (2019).
Keeping an eye on the relevance of time series prediction in today’s world,
one can observe that the literature lacks works where a machine learning model
has been optimized for performance on time-series data.
Hence, in this position paper, we propose the idea of utilizing bilevel
programming for HO within CNNs for time series prediction. We first introduce
the bilevel framework to model the overall performance of the machine learning
model in terms of the training and validation loss. This is done in Section 2.
Subsequently, in the same section, we revisit the approximation strategy for
the bilevel framework of HO, along with the gradient-based approach to solve
the problem. In Section 3, we introduce our proposed framework of utilizing
bilevel programming for HO in CNNs for time series prediction. We conclude the
position paper in Section 4.
## 2 Preliminary Knowledge
### 2.1 Bilevel programming framework
In this section, we revisit the structure of the bilevel programming framework
for a machine learning model. As specified in Franceschi et al. (2018),
bilevel programming problems of the following forms are considered:
$\min\{f(\lambda):\lambda\in\Lambda\}$ (1)
where,
$f(\lambda)=\inf\{E(w_{\lambda},\lambda):w_{\lambda}\in\arg\min_{u\in\mathbb{R}^{d}}L_{\lambda}(u)\}$
(2)
In the above equations, $f:\Lambda\rightarrow{\mathbb{R}}$ is defined at
$\lambda\in\Lambda$, and $E:\mathbb{R}^{d}\times\Lambda\rightarrow{\mathbb{R}}$ is
the leader objective. Also, $\forall\lambda\in\Lambda$,
$L_{\lambda}:\mathbb{R}^{d}\rightarrow{\mathbb{R}}$ is the follower objective,
where $\{L_{\lambda}:\lambda\in\Lambda\}$ is the class of OFs parameterized
by $\lambda$.
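As a concrete illustration of (1)–(2), the sketch below solves a toy bilevel problem in which the follower is ridge regression, so that $w_{\lambda}$ has a closed form, and the leader $E$ is a validation mean squared error. The data, variable names, and the finite candidate set standing in for $\Lambda$ are illustrative assumptions, not part of the original framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: the follower fits (A, b); the leader evaluates on (Av, bv).
w_true = rng.normal(size=5)
A = rng.normal(size=(40, 5));  b = A @ w_true + 0.5 * rng.normal(size=40)
Av = rng.normal(size=(20, 5)); bv = Av @ w_true + 0.5 * rng.normal(size=20)

def follower_argmin(lam):
    # w_lam = argmin_u ||A u - b||^2 + lam ||u||^2  (closed-form ridge solution)
    return np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)

def f(lam):
    # Leader objective E evaluated at the follower's minimiser w_lam.
    w = follower_argmin(lam)
    return np.mean((Av @ w - bv) ** 2)

# min { f(lambda) : lambda in Lambda } over a finite candidate set Lambda.
Lambda = np.logspace(-3, 2, 30)
best = min(Lambda, key=f)
print(best, f(best))
```

Because the follower's argmin is available in closed form here, the leader objective $f$ can simply be evaluated pointwise; the gradient-based machinery revisited in Section 2.3 becomes necessary when no such closed form exists.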
### 2.2 Bilevel programming framework for HO
As mentioned earlier, the validation error of a machine learning model is to
be minimized. Let the model be denoted as $g_{w}:X\rightarrow{Y}$,
parameterized by the vector $w$, with respect to one vector of
hyperparameters $\lambda$. For a predefined loss function $l$, the leader and
follower objectives can be given as follows:
$E(w,\lambda)=\Sigma_{(x,y)\in{D_{validation}}}l(g_{w}(x),y)$ (3)
$L_{\lambda}(w)=\Sigma_{(x,y)\in{D_{train}}}l(g_{w}(x),y)+penalty$ (4)
Here, $D_{validation}$ is the validation data presented to $g_{w}$, for
evaluation after it has been trained on $D_{train}$. The penalty term can be
implemented as a regularizer for the network model to improve the performance.
### 2.3 Gradient based approach to solve bilevel optimization for HO
Franceschi et al. (2018) specified an approximation of the bilevel problem
given in (1) and (2):
$\min_{\lambda}f_{T}(\lambda)=E(w_{T,\lambda},\lambda)$ (5)
$w_{0,\lambda}=\phi_{0}(\lambda),w_{t,\lambda}=\phi_{t}(w_{t-1,\lambda},\lambda),t\in{[T]}$
(6)
In the above equations, $T$ is a predefined positive integer with
$[T]=\{1,\dots,T\}$, $\phi_{0}:\mathbb{R}^{m}\rightarrow{\mathbb{R}^{d}}$ is
a smooth initialization dynamic, and, $\forall{t}\in[T]$,
$\phi_{t}:\mathbb{R}^{d}\times\mathbb{R}^{m}\rightarrow{\mathbb{R}^{d}}$ is a
smooth mapping representing the operation of an optimization algorithm at the
$t^{th}$ step. The optimization dynamic $\phi$ is implemented using the
gradient descent optimization algorithm.
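The approximation in (5)–(6) can be sketched numerically: run $T$ gradient-descent steps as the dynamics $\phi_t$, evaluate $E$ at $w_{T,\lambda}$, and differentiate $f_T$ with respect to $\lambda$. Here a central finite difference stands in for the reverse-mode hypergradient (an assumption made to keep the sketch short); the quadratic objectives and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = rng.normal(size=4)
A = rng.normal(size=(30, 4));  b = A @ w_true          # training data (illustrative)
Av = rng.normal(size=(15, 4)); bv = Av @ w_true        # validation data

def grad_L(w, lam):
    # Gradient of the follower objective L_lam(w) = ||Aw - b||^2 / n + lam ||w||^2.
    return 2 * A.T @ (A @ w - b) / len(b) + 2 * lam * w

def f_T(lam, T=100, eta=0.05):
    w = np.zeros(4)                       # w_0 = phi_0(lambda)
    for _ in range(T):                    # w_t = phi_t(w_{t-1}, lambda): one GD step
        w = w - eta * grad_L(w, lam)
    return np.mean((Av @ w - bv) ** 2)    # f_T(lambda) = E(w_T, lambda)

# Hypergradient of the approximate objective via central finite differences.
lam, h = 0.1, 1e-4
hypergrad = (f_T(lam + h) - f_T(lam - h)) / (2 * h)
print(f_T(lam), hypergrad)
```

In Franceschi et al. (2018) the hypergradient is obtained by differentiating through the unrolled dynamics rather than by finite differences, but the quantity being differentiated, $f_T(\lambda)=E(w_{T,\lambda},\lambda)$, is the same.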
In Franceschi et al. (2018), certain assumptions are made to reduce the
bilevel framework given in (1)-(2), to prove the existence of solutions of the
reduced problem, and to prove the convergence of the approximate problems to
the reduced problem. They are omitted from this position paper for simplicity.
## 3 HO using bilevel programming within CNNs
We discuss our proposed idea in this section.
We first define our CNN model for the prediction task. For our time series
data, we utilize 1D convolutional layers, which are well suited to time
series inputs.
Figure 1: Network structure for time series prediction using a deep CNN
architecture
For explanation, we utilize a time series dataset with 128 time steps and 9
features of data. Our deep CNN model for this time series data begins with an
input layer, followed by two 1D convolutional layers each encompassing 64
filters with a filter size of 3. Both layers have the ReLU activation applied.
These are followed by a dropout layer with a 50% dropout rate, followed by a
max pooling layer. The output from the max pooling layer is then flattened and
forwarded to a dense layer with 100 connections and the ReLU activation,
followed by a final dense layer with 9 output units and the softplus
activation. The model structure is depicted in Fig. 1.
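As a sanity check on the layer dimensions described above, the following pure-NumPy sketch pushes one random input of shape (128, 9) through the stated architecture with random weights (dropout is the identity at inference time). It is a shape walkthrough only, not a trained model; a real implementation would use a deep learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)
softplus = lambda z: np.log1p(np.exp(z))

def conv1d(x, kernel):
    # x: (steps, ch_in), kernel: (width, ch_in, ch_out); 'valid' convolution
    k = kernel.shape[0]
    return np.stack([np.tensordot(x[i:i + k], kernel, axes=([0, 1], [0, 1]))
                     for i in range(x.shape[0] - k + 1)])

x = rng.normal(size=(128, 9))                            # 128 time steps, 9 features
x = relu(conv1d(x, 0.1 * rng.normal(size=(3, 9, 64))))   # Conv1D(64, 3) -> (126, 64)
x = relu(conv1d(x, 0.1 * rng.normal(size=(3, 64, 64))))  # Conv1D(64, 3) -> (124, 64)
# Dropout (50%) acts only during training; at inference it is the identity.
x = x.reshape(62, 2, 64).max(axis=1)                     # max pooling (2) -> (62, 64)
x = x.reshape(-1)                                        # flatten -> (3968,)
x = relu(0.01 * rng.normal(size=(100, 3968)) @ x)        # dense, 100 units, ReLU
y = softplus(0.1 * rng.normal(size=(9, 100)) @ x)        # dense, 9 outputs, softplus
print(y.shape)
```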
For this model and data, we consider the example weights ($w$) and the
learning rate ($lr$) as the hyperparameters to be optimized. The metric to be
minimized by the model is the Mean Squared Error (MSE), while the optimizer
utilized is the Adam optimizer.
With this scenario defined, the bilevel programming framework for our CNN
model on the time series data is described below. In the following,
$\lambda=\{w,lr\}$ and $T=200$.
$\min_{\lambda}f_{T}(\lambda)=Adam(w_{T,\lambda},\lambda)$ (7)
and,
$\begin{split}\phi_{0}(\lambda)=\{w_{0}=[0],\,lr=0.01\},\\
\phi_{t}(w_{t-1,\lambda},\lambda)=w_{t-1,\lambda}-\eta_{t}\nabla{L_{\lambda}(w_{t-1,\lambda})},\quad t\in{[T]}\end{split}$
(8)
The follower-level optimizer is the gradient descent optimizer as given in
Franceschi et al. (2018). This follower-level optimizer is defined for the
hyperparameter $lr$, while $w$ is the minimizer for the problems in (7)-(8).
We believe that solving this bilevel problem to obtain the optimized value of
$lr$ with respect to the minimizer $w$ shall produce state-of-the-art results
in terms of MSE.
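A minimal sketch of the scenario in (7)–(8), with a linear least-squares model standing in for the CNN (an assumption made purely to keep the example self-contained): $f_T(lr)$ runs $T=200$ gradient-descent steps from $w_0=0$ with the candidate learning rate and scores the validation MSE, and the learning rate is then chosen from a small candidate grid.

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = rng.normal(size=8)
A = rng.normal(size=(60, 8));  b = A @ w_true          # training split (illustrative)
Av = rng.normal(size=(30, 8)); bv = Av @ w_true        # validation split

def f_T(lr, T=200):
    # Approximate leader objective: T = 200 gradient-descent steps from w_0 = 0
    # with the candidate learning rate, then score the validation MSE.
    w = np.zeros(8)
    for _ in range(T):
        w = w - lr * 2 * A.T @ (A @ w - b) / len(b)
    return np.mean((Av @ w - bv) ** 2)

grid = [0.001, 0.003, 0.01, 0.03]
best_lr = min(grid, key=f_T)
print(best_lr, f_T(best_lr))
```

A full implementation would replace the grid with the gradient-based hypergradient of Franceschi et al. (2018) and the linear surrogate with the CNN of Fig. 1.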
We plan to implement this scenario on a machine with the following
specifications: Intel Core i3-6100 CPU with 12 GB of RAM and Windows 10 OS.
The GPU employed is the NVIDIA GeForce GTX 1660 Super.
## 4 Conclusions and Future Work
In this position paper, we have introduced the idea of using bilevel
programming for HO within CNNs for time series data. Since the literature on
HO for time series prediction or classification tasks is scarce, we believe
that the idea presented here will mark a good start for research in this
direction.
We utilized a deep CNN architecture to define the model for the purpose of
time series prediction. Based on this, we defined a framework for the bilevel
programming problem that must be solved to obtain better results than most
existing models.
Our subsequent plans are to implement the scenario introduced in this
position paper. Within this implementation, we shall perform a sensitivity
analysis on different values of $T$ to obtain varied results. We also plan to
compare the impact of HO using bilevel programming on prediction and
classification tasks on time series data. We plan to perform our experiments
on the human activity recognition (HAR) data to observe the results.
## References
* Bard [2013] Jonathan F Bard. Practical bilevel optimization: algorithms and applications, volume 30. Springer Science & Business Media, 2013.
* Bergstra and Bengio [2012] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281–305, 2012.
* Bergstra et al. [2011] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. Advances in neural information processing systems, 24:2546–2554, 2011.
* Borovykh et al. [2017] Anastasia Borovykh, Sander Bohte, and Cornelis W Oosterlee. Conditional time series forecasting with convolutional neural networks. arXiv preprint arXiv:1703.04691, 2017.
* Cui et al. [2016] Zhicheng Cui, Wenlin Chen, and Yixin Chen. Multi-scale convolutional neural networks for time series classification. arXiv preprint arXiv:1603.06995, 2016.
* Eggensperger et al. [2013] Katharina Eggensperger, Matthias Feurer, Frank Hutter, James Bergstra, Jasper Snoek, Holger Hoos, and Kevin Leyton-Brown. Towards an empirical foundation for assessing bayesian optimization of hyperparameters. In NIPS workshop on Bayesian Optimization in Theory and Practice, volume 10, page 3, 2013.
* Feurer et al. [2015] Matthias Feurer, Jost Springenberg, and Frank Hutter. Initializing bayesian hyperparameter optimization via meta-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015.
* Franceschi et al. [2018] Luca Franceschi, Paolo Frasconi, Saverio Salzo, Riccardo Grazzi, and Massimilano Pontil. Bilevel programming for hyperparameter optimization and meta-learning. arXiv preprint arXiv:1806.04910, 2018.
* Li et al. [2017] Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. The Journal of Machine Learning Research, 18(1):6765–6816, 2017.
* Maclaurin et al. [2015] Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning, pages 2113–2122, 2015.
* Thornton et al. [2013] Chris Thornton, Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Auto-weka: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 847–855, 2013.
* Von Stackelberg and Von [1952] Heinrich Von Stackelberg and Stackelberg Heinrich Von. The theory of the market economy. Oxford University Press, 1952.
* Yang et al. [2015] Jianbo Yang, Minh Nhut Nguyen, Phyo Phyo San, Xiaoli Li, and Shonali Krishnaswamy. Deep convolutional neural networks on multichannel time series for human activity recognition. In IJCAI, volume 15, pages 3995–4001. Buenos Aires, Argentina, 2015.
* Yazdanbakhsh and Dick [2019] Omolbanin Yazdanbakhsh and Scott Dick. Multivariate time series classification using dilated convolutional neural network. arXiv preprint arXiv:1905.01697, 2019.
* Zheng et al. [2014] Yi Zheng, Qi Liu, Enhong Chen, Yong Ge, and J Leon Zhao. Time series classification using multi-channels deep convolutional neural networks. In International Conference on Web-Age Information Management, pages 298–310. Springer, 2014.
# SIR Simulation of COVID-19 Pandemic in Malaysia: Will the Vaccination
Program be Effective?
W. K. Wong, Filbert H. Juwono, Tock H. Chua
Department of Electrical and Computer Engineering, Faculty of Engineering and Science, Curtin University Malaysia, 98009 Miri, Sarawak, Malaysia
Department of Pathobiology and Medical Diagnostics, Faculty of Medicine and Health Sciences, University Malaysia Sabah, 88400 Kota Kinabalu, Malaysia
###### Abstract
Since the end of 2019, COVID-19 has significantly affected the lives of people
around the world. Towards the end of 2020, several COVID-19 vaccine candidates
with relatively high efficacy have been reported in the final phase of
clinical trials. Vaccines have been considered as critical tools for opening
up social and economic activities, thereby lessening the impact of this
disease on the society. This paper presents a simulation of COVID-19 spread
using a modified Susceptible-Infected-Removed (SIR) model under vaccine
intervention in several localities of Malaysia, i.e. those cities or states
with relatively high COVID-19 cases, such as Kuala Lumpur, Penang, Sabah, and
Sarawak. The results show that at different vaccine efficacy levels (0.75,
0.85, and 0.95), the curves of active infection vary only slightly,
indicating that vaccines with efficacy above 0.75 would produce the herd
immunity required to flatten the curves. In addition, the disparity between
implementing and not implementing a vaccination program is significant.
lowering the reproduction number, $R_{0}$ is necessary to keep the infection
curve flat despite vaccination. This is due to the assumption that vaccination
is mostly carried out gradually at the assumed fixed rate. The statement is
based on our simulation results with two values of $R_{0}$: 1.1 and 1.2,
indicative of reduction of $R_{0}$ by social distancing. The lower $R_{0}$
shows a smaller peak amplitude about half the value simulated with
$R_{0}=1.2$. In conclusion, the simulation model suggests a two-pronged
strategy to combat the COVID-19 pandemic in Malaysia: vaccination and
compliance with standard operating procedure issued by the World Health
Organization (e.g. social distancing).
###### keywords:
COVID-19 , SIR model , Malaysia , vaccination
## 1 Introduction
Coronavirus disease 2019 (COVID-19) is caused by the 2019 novel coronavirus
(2019-nCOV) first reported in the city of Wuhan, China. The epidemic began in
December 2019, when several adults in Wuhan presented with serious pneumonia
[1]. Since then, the infection has spread globally and the number of cases
increased exponentially, with more than 90 million people infected by January
2021 [2]. The COVID-19 pandemic impacted the socio-economic wellbeing for many
countries in 2020 [3]. It has been believed that the vaccination would be the
most promising alternative to get life back to normal. To date, there have
been at least 166 vaccine candidates undertaking pre-clinical and clinical
trials [4].
In the fourth quarter of 2020, several pharmacological companies reported high
efficacy rates of their vaccine candidates [5]. By mid-December 2020, 57
vaccine candidates were in clinical research, including 40 candidates in Phase
I–II trials and 17 candidates in Phase II–III trials. In Phase III trials,
several COVID-19 vaccines demonstrated efficacy as high as 95% in preventing
symptomatic COVID-19 infections [6]. Various efficacy levels were reported in
the advanced phases of clinical trials: Sinopharm (79%) [7], Pfizer-BioNTech
(95%) [6], and Moderna (94.5%) [8]. A survey was conducted in [9] to gauge
public acceptance of COVID-19 vaccines in 19 countries. From the results,
71.5% of respondents reported that they would be somewhat or very likely to
take a vaccine, and 61.4% reported that they would accept their employer’s
recommendation to do so. Acceptance rates ranged from almost 90% (China) to
less than 55% (Russia).
With the proposed vaccination campaigns, countries are considering reopening
economic operations to reactivate socio-economic activities. In Malaysia,
vaccines were scheduled to arrive in the middle of the first quarter of 2021
[10]. However, according to a public survey, approximately one-third of
Malaysians were still unsure of the safety of COVID-19 vaccines [11]. Hence,
it is crucial to create public awareness of the importance of vaccination.
In this paper, we present the results of simulating the effectiveness of
vaccination in reducing infection rates in Malaysia. In particular, we focus
on Kuala Lumpur, the capital of Malaysia, and three states: Penang, Sabah,
and Sarawak. Further, we apply a modified Susceptible-Infected-Removed (SIR)
model that incorporates a constant vaccination rate based on the Malaysian
Government’s planning. The SIR model has been widely used to model the spread
of the COVID-19 pandemic [12, 13]. Note that mathematical modelling has long
been a progressive field due to its inherent importance to health policy
makers. It helps answer fundamental questions, such as how effective a
vaccination program will be given various efficacy rates and current
developments. However, it must be highlighted that these projections are
based only on the proposed model; furthermore, the modified SIR model does
not take into account the stochasticity of imported cases. This research
serves as a projection to observe the effects of vaccination at different
efficacy rates and to show the importance of vaccination.
## 2 Previous Work
In [14], the authors discussed various models for forecasting the spread of
COVID-19, including the common compartmental models (SIR and
Susceptible–Exposed–Infected–Removed (SEIR)), the exponential growth model,
and the self-exciting branching model. Although the methods vary, the
principle remains the same, i.e. to find parameters which enable the model to
represent the actual recorded data. Compartmental models have clearly been
popular for modeling COVID-19, as seen from the extensive research in this
area. An SIR model was fitted to COVID-19 data and then used to forecast the
number of cases in Senegal [15]. In Italy, particle swarm optimization (PSO),
a form of stochastic optimization, was used to fit an SEIR COVID-19 model to
actual data [16]. Acuña-Zegarra et al. [17] developed an optimal control
problem to design vaccination strategies with various efficacy rates to gauge
the COVID-19 pandemic situation. It is difficult to gauge the efficiency of a
vaccine in providing immunity to recipients, as clinical trials follow fixed
procedures and evaluate efficacy rather than real-world effectiveness. SIR
models have been developed for several countries (China, South Korea, India,
Australia, and the USA) [12]. In [13], an SIR model was created to analyze
effectiveness using data from 15 European countries.
In general, most pandemic-related models are extensions of the basic SIR
model proposed in [18]. This model is built on the premise that the entire
population can be divided into three states: susceptible (S), infected (I),
and removed (R). The infected group continuously infects the susceptible
population until isolation is performed, and the recovered population is
assumed to acquire 100% immunity. The SIR model is a deterministic model with
a linear infection rate. In [19], the authors modified the existing SIR into
a stochastic model and included vertical transmission, non-linear incidence,
and vaccination. Current vaccination SIR models mostly consider newborn
vaccination as a result of previous and ongoing efforts; however, the same
principles can be applied with slight modifications.
## 3 Mathematical Models
### 3.1 SIR Model
The basic SIR model without isolation and vaccination effect is given by the
following expressions
$\frac{dS(t)}{dt}=\frac{-\beta I(t)S(t)}{N},$ (1)
$\frac{dI(t)}{dt}=\frac{\beta I(t)S(t)}{N}-\gamma I(t),$ (2)
$\frac{dR(t)}{dt}=\gamma I(t),$ (3)
$N=S(t)+I(t)+R(t),\forall t,$ (4)
where $S(t)$ represents the number of people in an area at time $t$ who are
susceptible to the disease and can be infected by infectious people, $I(t)$
represents the number of people in the area at time $t$ who are infected and
infectious due to the spread of the virus, $R(t)$ represents the number of
people in the area at time $t$ who have been removed from the infected state,
$N$ is the total number of people in the area, $\beta$ is a constant showing
the infectivity rate, i.e. the expected number of people infected by an
infectious person, and $\gamma$ is a constant showing the removal rate, i.e.
the rate at which people are removed from the infected state. The ratio of
$\beta$ to $\gamma$ is called the reproduction number, $R_{0}=\beta/\gamma$.
The reproduction number gives the average number of secondary infections
caused by an infected person, assuming everyone is in the susceptible state.
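Equations (1)–(3) can be integrated with a simple forward-Euler scheme, as sketched below. The population size, initial number of infections, and $R_{0}=\beta/\gamma=1.2$ are illustrative values only, not the Malaysian figures used later in the paper.

```python
import numpy as np

def sir(N, I0, beta, gamma, days, dt=0.1):
    # Forward-Euler integration of equations (1)-(3); returns (S, I, R) per step.
    S, I, R = N - I0, I0, 0.0
    out = []
    for _ in range(round(days / dt)):
        new_inf = beta * I * S / N          # infection flow S -> I
        removed = gamma * I                 # removal flow I -> R
        S, I, R = S - new_inf * dt, I + (new_inf - removed) * dt, R + removed * dt
        out.append((S, I, R))
    return np.array(out)

# Illustrative numbers only: R0 = beta/gamma = 1.2 with a 10-day removal period.
traj = sir(N=1_000_000, I0=100, beta=0.12, gamma=0.1, days=365)
print(traj[:, 1].max())   # peak number of active infections
```

Note that the scheme conserves $N=S(t)+I(t)+R(t)$ at every step, consistent with (4).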
In practice, the constant $\beta$ depends on how well society practices
hygiene and social distancing in adherence to the standard operating
procedure, and the constant $\gamma$ is related to the people who have either
recovered or died. The initial number of susceptible people, $S(t=0)$, can be
found using the data released by the Government. From (4), we have
$S(t=0)=N-R(t=0)-I(t=0)$. We can approximate $I(t=0)$ from the identification
rate, $p$, and the average ratio of asymptomatic to symptomatic cases,
$\varphi$. For Malaysia, we assume a fixed ratio of asymptomatic to
symptomatic cases, i.e. $\varphi=0.7$ [20]. Let $X(t)$ be the number of new
daily cases identified. The approximate number of initial infected cases is
$I(t=0)=X(t=0)/p\times 0.7$.
### 3.2 Modified SIR Model
Fig. 1 depicts the proposed state diagram. We introduce a new term, the
vaccination rate $d$, into the existing SIR model, as shown in the figure.
The vaccination rate is defined as the ratio of the target population to the
completion time. This assumption reflects the demographic and logistical
challenges of administering the vaccines.
Figure 1: State diagram of modified SIR model with vaccination strategy.
According to the state diagram and using the model from [21], we modify the
SIR model for the $S(t)$ and $R(t)$ expressions to be
$\frac{dS(t)}{dt}=-d-\frac{\beta I(t)S(t)}{N},\text{ s.t. }S(t)>0$ (5)
$\frac{dR(t)}{dt}=d+\gamma I(t).$ (6)
Note that we consider only constant-rate vaccination and ignore the death rate (the removed state is equivalent to the recovered state), as the number of deaths is small compared to the total population. In [21], the vaccination dynamics depend on $S(t)$; however, we expect the vaccination program to be progressive, proceeding at a fixed rate applied to the remaining susceptible class. We may treat $d$ as incorporating an ersatz vaccination efficiency, since no global information on this is available at the moment.
Further, we take the vaccine efficacy into account in the modified $S(t)$ and $R(t)$ formulae. We introduce a multiplier $\epsilon$, defined as the ratio of the number of positive cases in the vaccinated samples to the number of positive cases in the placebo samples during clinical trials, so that the vaccine efficacy is $(1-\epsilon)$. With this factor, expressions (5) and (6) become
$\frac{dS(t)}{dt}=-d\epsilon-\frac{\beta I(t)S(t)}{N},\text{ s.t. }S(t)>0$ (7)
$\frac{dR(t)}{dt}=d(1-\epsilon)+\gamma I(t).$ (8)
If self-isolation following random checks or contact tracing is conducted every day, the $I(t)$ and $R(t)$ expressions in (2) and (8) must be modified once more. Finally, we have the following SIR model, which incorporates both vaccination and isolation strategies:
$\frac{dS(t)}{dt}=-d\epsilon-\frac{\beta I(t)S(t)}{N},\text{ s.t. }S(t)>0$ (9)
$\frac{dI(t)}{dt}=\frac{\beta I(t)S(t)}{N}-\hat{\gamma}I(t),$ (10)
$\frac{dR(t)}{dt}=d(1-\epsilon)+\hat{\gamma}I(t),$ (11)
where $\hat{\gamma}=\gamma+\hat{p}$ and $\hat{p}$ is the isolation rate of positive infected individuals in the population (isolation adds to the removal rate). Note that the reproduction number is now $R_{0}=\beta/\hat{\gamma}$.
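As an illustrative sketch (not the authors' code), the final model (9)-(11) can be integrated with a classic fourth-order Runge-Kutta scheme; the parameter values in the demonstration run are illustrative only, and the population is normalized to $N=1$ as in the simulations:

```python
def modified_sir(beta, gamma_hat, d, eps, y0, days, dt=1.0):
    """Integrate the modified SIR model with vaccination and isolation,
    eqs. (9)-(11), using classic RK4.  y0 = (S0, I0, R0)."""
    N = sum(y0)

    def deriv(y):
        S, I, R = y
        dS = -d * eps - beta * I * S / N        # eq. (9), subject to S > 0
        dI = beta * I * S / N - gamma_hat * I   # eq. (10)
        dR = d * (1 - eps) + gamma_hat * I      # eq. (11)
        return (dS, dI, dR)

    trajectory = [y0]
    y = y0
    for _ in range(int(days / dt)):
        k1 = deriv(y)
        k2 = deriv(tuple(a + dt / 2 * b for a, b in zip(y, k1)))
        k3 = deriv(tuple(a + dt / 2 * b for a, b in zip(y, k2)))
        k4 = deriv(tuple(a + dt * b for a, b in zip(y, k3)))
        y = tuple(a + dt / 6 * (p1 + 2 * p2 + 2 * p3 + p4)
                  for a, p1, p2, p3, p4 in zip(y, k1, k2, k3, k4))
        if y[0] <= 0:              # enforce the constraint S(t) > 0
            y = (0.0, y[1], y[2])
        trajectory.append(y)
    return trajectory

# Illustrative 270-day run without vaccination (d = 0), normalized N = 1
traj = modified_sir(beta=0.108, gamma_hat=0.08, d=0.0, eps=0.25,
                    y0=(0.99, 0.01, 0.0), days=270)
```

With $d=0$ the right-hand sides of (9)-(11) sum to zero, so the scheme conserves $S+I+R$ up to floating-point error, which is a useful sanity check.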
## 4 Simulation Setup
### 4.1 General Simulation Setup
Table 1 shows the simulation details, including the location, total population, initial number of people in the removed (recovered) state, $R(t=0)$, and initial number of new infected cases, $X(t=0)$. We set 1 January 2021 as $t=0$. The data are obtained from the Government website and other sources. We use two reproduction numbers for the simulations, $R_{0}=1.1$ and $R_{0}=1.2$; the overall Malaysian $R_{0}$ fluctuates between 1 and 1.5, and the latest projection estimates an average of 1.1 until 31 May 2021 [22]. We use the Runge-Kutta method to solve the differential equations and obtain the simulation curves.
Table 1: Details of simulation parameters.

Location | Population ($N$) | $R(t=0)$ | $X(t=0)$
---|---|---|---
Malaysia (overall) | 32.7 million | 120,000 | 2,500
Kuala Lumpur | 1.808 million | 12,494 | 202
Penang | 1.767 million | 4,160 | 60
Sabah | 3.54 million | 36,074 | 186
Sarawak | 2.16 million | 1,115 | 8
### 4.2 Malaysia Vaccination Program Details
Table 2 shows the vaccine deliveries secured by the Malaysian Government. The table lists only the confirmed purchases inked by the government; many other negotiations are ongoing. The government is also in final negotiations with China's Sinovac for 14 million doses, CanSino Biologics for 3.5 million doses, and Russia for 6.4 million shots of the Sputnik V vaccine [23]. However, based on the projections, it is clear that most vaccines will arrive only from the second and third quarters of 2021 onwards. The Malaysian Government has expressed the desire to cover 80% of the population [24]. In this paper, we assume that the vaccination rate is constant and that 75% of the target population will be vaccinated within 18 months. This figure is reasonable, as the inked deals already cover almost 50% of the Malaysian population (see the cumulative percentage in Table 2).
Table 2: COVID-19 vaccination plan in Malaysia.

Quarter (2021) | Doses in million | Vaccination | Cumulative
---|---|---|---
1 | 1 (Pfizer) | 1.52% | 1.52%
2 | 8.1 (Pfizer) | 12.2% | 23.48%
2 | 6.4 (Covax) | 9.78% |
3 | 5.8 (Pfizer) | 8.86% | 42.12%
3 | 6.4 (Covax) | 9.78% |
4 | 4.3 (Pfizer) | 6.57% | 48.69%
Since most vaccines have a reported efficacy of more than 70%, we apply 75%-95% efficacy in our simulations. We assume 75% inoculation by $t=540$ at the various efficacy rates, which yields a vaccination rate of about 0.139% per day (18 months to complete 75% inoculation, excluding those who have acquired natural immunity through recovery). Vaccines will be reserved for front-liners and high-risk populations in the first quarter, with almost none for the general public; hence, our simulation starts from the second quarter of 2021, which is a realistic scenario. If vaccination begins in mid-February 2021, effective immunity will only be achieved by March 2021 (assuming 15 days to reach immunity).
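The daily vaccination rate quoted above follows from simple arithmetic; a quick check (illustrative, not the authors' code):

```python
# Assumed inputs: 75% target coverage completed over 540 days (18 months)
target_coverage = 0.75
completion_days = 540

d = target_coverage / completion_days  # daily vaccination rate (fraction of N)
print(f"{d:.3%} per day")              # prints: 0.139% per day
```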
## 5 Results and Discussion
### 5.1 Curve Fitting
The optimized parameters $\beta$ and $\hat{\gamma}$ in (9), (10), and (11) are acquired by curve-fitting the daily cases from 1 January 2021 to 11 January 2021 such that the reproduction numbers are $R_{0}=1.1$ and $R_{0}=1.2$, following the Government projection. The projected daily cases for 270 days are shown in Fig. 2. The daily-case projection is obtained from $\Delta R$ in the R compartment of the population. We acquire the best fit by systematically combining the parameters (grid search). Our best-fit parameters give $\hat{\gamma}=0.08$ with $\beta=0.100$ and $\beta=0.108$ for $R_{0}=1.1$ and $R_{0}=1.2$, respectively.
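The grid search above can be sketched as follows. The `simulate_daily` argument is a hypothetical stand-in for a function that runs the model and returns the simulated daily-case series; in the demonstration a toy lambda replaces it:

```python
def grid_search(observed, simulate_daily, betas, gammas):
    """Brute-force search for (beta, gamma_hat) minimising the sum of
    squared errors against an observed daily-case series.
    simulate_daily(beta, gamma_hat) must return a sequence of the same
    length as observed (model-dependent; assumed supplied)."""
    best, best_err = None, float("inf")
    for beta in betas:
        for gamma_hat in gammas:
            sim = simulate_daily(beta, gamma_hat)
            err = sum((s - o) ** 2 for s, o in zip(sim, observed))
            if err < best_err:
                best, best_err = (beta, gamma_hat), err
    return best, best_err

# Toy demonstration: the "simulator" just echoes its parameters
observed = [0.1, 0.08]
best, best_err = grid_search(observed,
                             lambda b, g: [b, g],
                             betas=[0.08, 0.10, 0.12],
                             gammas=[0.06, 0.08, 0.10])
```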
Fig. 3 shows the projection of removed cases without vaccination. It suggests that at $R_{0}=1.1$ and $R_{0}=1.2$, natural immunity would reach only 17% and 31% of the population, respectively, assuming that no new imported cases are added to the community. Note that herd immunity can only be reached if about 70% of the population is immune to the virus. Relying on natural immunity is not recommended, of course, considering that Malaysia has a comparatively large elderly population (people over 60 years of age) of about 1.4 million.
Figure 2: Projected daily cases with $R_{0}=1.1$ and $R_{0}=1.2$. Figure 3:
Removed (recovered) projections for 800 days with $R_{0}=1.1$ and $R_{0}=1.2$.
### 5.2 Simulation Results
The projection plots for 270 days, using the given population and vaccination data and formulae (9), (10), and (11) with $R_{0}=1.1$ and $R_{0}=1.2$, are shown in Figs. 4 and 5, respectively. We consider three efficacy rates: 0.75, 0.85, and 0.95. It is worth mentioning that we use a normalized population, i.e. $N=1.00$, with the individual states scaled accordingly.
(a) Malaysia.
(b) Kuala Lumpur.
(c) Penang.
(d) Sabah.
(e) Sarawak.
Figure 4: Modified SIR simulation with and without vaccination for the various
localities, $R_{0}=1.1$.
(a) Malaysia.
(b) Kuala Lumpur.
(c) Penang.
(d) Sabah.
(e) Sarawak.
Figure 5: Modified SIR simulation with and without vaccination for the various
localities, $R_{0}=1.2$.
Fig. 4 shows that the vaccination program is effective, as indicated by the large decrease in the percentage of the population getting infected. In other words, the curve flattens more rapidly, at about 10-40 days after day 90 ($t=90$); recall that we assume vaccination takes place in the second quarter of 2021. Fig. 5 shows a similar trend, except with a more pronounced curve due to the higher reproduction number. Except for Sarawak, most cases show the peak shifting by 10-20 days. The amplitude (peak of the curve) is also lower, even in the early stages of vaccination.
In both cases, the gradient of state I can be explained from (10): it becomes negative when $\hat{\gamma}I(t)>\beta I(t)S(t)/N$, i.e. once $S(t)/N<1/R_{0}$. The difference between the vaccination and non-vaccination scenarios mainly affects the $S(t)$ term, which causes the effect described above. The constant vaccination rate decreases $S(t)$ gradually but steadily, at about 0.139% per day, so that by the end of the third quarter ($t=270$) approximately 25% of the population will have been vaccinated and received immunity, excluding those who have acquired immunity through recovery.
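The turnover condition for the infectious curve can be checked numerically; a minimal sketch (parameter values illustrative):

```python
def infection_declining(S, N, beta, gamma_hat):
    """True once dI/dt < 0 in eq. (10): the infectious curve turns over
    when the transmission term beta*S/N drops below the removal rate
    gamma_hat, i.e. when S/N < gamma_hat/beta = 1/R0."""
    return beta * S / N < gamma_hat

# With beta = 0.108 and gamma_hat = 0.08, the turnover happens once the
# susceptible fraction S/N falls below 0.08/0.108, roughly 0.74.
declining_at_70pct = infection_declining(0.70, 1.0, 0.108, 0.08)  # True
declining_at_80pct = infection_declining(0.80, 1.0, 0.108, 0.08)  # False
```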
For the overall Malaysia case with $R_{0}=1.2$ (see Fig. 5(a)), the peak without vaccination is approximately 1.7% of the total population, while with vaccination it is around 0.7%. The difference of about 1.0% is significant: it translates to roughly $3.3\times10^{5}$ individuals. However, these conditions require the infected case numbers to be highly accurate, so the effects of vaccination demonstrated by the simulations should be taken as approximations. The two reproduction numbers produce peaks that differ by approximately a factor of two for the overall Malaysia case (compare Figs. 4(a) and 5(a)). This shows that the vaccination program is effective despite the seemingly low vaccination rate of 0.139% per day; cumulatively, the effect increases as the susceptible population receives immunity and moves to the removed class.
The daily-case simulations are shown in Figs. 6 and 7. For overall Malaysia in particular, Figs. 6(a) and 6(b) show that with vaccination the peaks are at about 6,000 and 20,000 cases, respectively. It is therefore important to comply with the standard operating procedure issued by the World Health Organization (such as social distancing) to further reduce $R_{0}$. Similarly, the plots for the individual cities and states show that the peaks of daily cases mostly differ by more than a factor of two between the two reproduction numbers. This shows that social distancing remains highly relevant and important in the fight to end the pandemic while the population gets vaccinated.
(a) Malaysia, $R_{0}=1.1$.
(b) Malaysia, $R_{0}=1.2$.
(c) Kuala Lumpur, $R_{0}=1.1$.
(d) Kuala Lumpur, $R_{0}=1.2$.
(e) Penang, $R_{0}=1.1$.
(f) Penang, $R_{0}=1.2$.
Figure 6: Daily cases simulation with and without vaccination for the various
localities - continued.
(a) Sabah, $R_{0}=1.1$.
(b) Sabah, $R_{0}=1.2$.
(c) Sarawak, $R_{0}=1.1$.
(d) Sarawak, $R_{0}=1.2$.
Figure 7: Daily cases simulation with and without vaccination for the various
localities.
## 6 Conclusion
In this paper, we have used a modified SIR model to simulate conditions with and without a vaccination program, at various vaccine efficacy levels. The model applies minimal modification to the basic SIR model, so tuning is easily achieved. In particular, we modify the existing SIR model to include gradual vaccination, which effectively moves population from the susceptible state to the removed state. In the simulation, we assume that the vaccination program is gradual and progressive. The results show that vaccination makes a significant difference in combating the pandemic, and with global vaccination we foresee that imported cases can also be reduced. In particular, we have presented several scenarios to predict the infectious population and the daily cases in Malaysia.
In the worst-case scenario, e.g. $R_{0}=1.2$, the simulations show about 20,000 daily cases even with vaccination. Hence, lowering $R_{0}$ remains very relevant and should be pursued alongside the ongoing vaccination program. Based on these simulations, it is clear that, under the given assumptions, there is a huge benefit in rolling out the vaccination program in Malaysia. Nevertheless, challenges may arise if parts of the community refuse vaccination, since a third of Malaysians surveyed are not convinced of the safety of COVID-19 vaccines. Community leaders should therefore play an important role in educating the public.
Secondly, part of Malaysia's population of 32.7 million lives in rural areas where access can be very challenging due to infrastructure problems. A lower vaccination rate under these circumstances may lead to higher infection rates, as indicated in the simulation projections. Finally, a similar methodology may be extended to other countries to estimate the efficacy of vaccination on the basis of their individual data.
## References
* [1] T. Singhal, A review of Coronavirus Disease-2019 (COVID-19), The Indian Journal of Pediatrics 87 (4) (2020) 281–286.
* [2] Worldometer.info, COVID-19 Coronavirus Pandemic, cited 14 January 2021 (2020).
URL https://www.worldometers.info/coronavirus/
* [3] A. Pak, O. A. Adegboye, A. I. Adekunle, K. M. Rahman, E. S. McBryde, D. P. Eisen, Economic consequences of the covid-19 outbreak: The need for epidemic preparedness, Frontiers in Public Health 8 (2020) 241.
* [4] M. Jeyanathan, S. Afkhami, F. Smaill, M. S. Miller, B. D. Lichty, Z. Xing, Immunological considerations for COVID-19 vaccine strategies, Nature Reviews Immunology 20 (2020) 615–632.
* [5] CNBC, Pfizer, BioNTech say Covid vaccine is more than 90% effective — ‘great day for science and humanity’, cited 15 January 2021 (2020).
URL https://www.cnbc.com/2020/11/09/covid-vaccine-pfizer-drug-is-more-
than-90percent-effective-in-preventing-infection.html
* [6] F. P. Polack, S. J. Thomas, N. Kitchin, J. Absalon, A. Gurtman, S. Lockhart, J. L. Perez, G. Perez Marc, E. D. Moreira, C. Zerbini, R. Bailey, K. A. Swanson, S. Roychoudhury, K. Koury, P. Li, W. V. Kalina, D. Cooper, R. W. Frenck, L. L. Hammitt, O. Türeci, H. Nell, A. Schaefer, S. Unal, D. B. Tresnan, S. Mather, P. R. Dormitzer, U. Sahin, K. U. Jansen, W. C. Gruber, Safety and Efficacy of the BNT162b2 mRNA Covid-19 Vaccine, New England Journal of Medicine 383 (27) (2020) 2603–2615.
* [7] The Strait Times, Sinopharm’s Covid-19 vaccine 79% effective, seeks approval in China, cited 15 January 2021 (2020).
URL https://www.straitstimes.com/asia/east-asia/china-sinopharms-vaccine-
has-79-protection-rate-against-covid-19
* [8] Time, Moderna’s COVID-19 vaccine is 94.5% effective. Here’s what that really means, cited 15 January 2021 (2020).
URL https://time.com/5912491/moderna-covid-19-vaccine-effectiveness/
* [9] J. V. Lazarus, S. C. Ratzan, A. Palayew, L. O. Gostin, H. J. Larson, K. Rabin, S. Kimball, A. El-Mohandes, A global survey of potential acceptance of a covid-19 vaccine, Nature Medicine (2020).
* [10] The Star, Covid-19: Malaysia expected to receive vaccine by end of February, says PM, cited 15 January 2021 (2021).
URL https://www.thestar.com.my/news/nation/2021/01/11/covid-19-malaysia-
expected-to-receive-vaccine-by-end-of-february-says-pm
* [11] Malay Mail, Health Ministry survey shows a third of Malaysians still fear, doubt Covid-19 vaccine, cited 15 January 2021 (2020).
URL https://www.malaymail.com/news/malaysia/2020/12/31/health-ministry-survey-
shows-a-third-of-malaysians-still-fear-doubt-covid-1/1936319
* [12] I. Cooper, A. Mondal, C. G. Antonopoulos, A SIR model assumption for the spread of COVID-19 in different communities, Chaos, Solitons & Fractals 139 (2020) 110057.
* [13] K. S. Sharov, Creating and applying SIR modified compartmental model for calculation of COVID-19 lockdown efficiency, Chaos, Solitons & Fractals 141 (2020) 110295.
* [14] A. L. Bertozzi, E. Franco, G. Mohler, M. B. Short, D. Sledge, The challenges of modeling and forecasting the spread of COVID-19, Proceedings of the National Academy of Sciences 117 (29) (2020) 16732–16738.
* [15] M. A. M. T. Baldé, Fitting SIR model to COVID-19 pandemic data and comparative forecasting with machine learning, medRxiv (2020). arXiv:https://www.medrxiv.org/content/early/2020/05/01/2020.04.26.20081042.full.pdf.
* [16] A. Godio, F. Pace, A. Vergnano, SEIR modeling of the Italian epidemic of SARS-CoV-2 using computational swarm intelligence, International Journal of Environmental Research and Public Health 17 (10) (2020) 3535.
* [17] M. A. Acuña-Zegarra, S. Díaz-Infante, D. Baca-Carrasco, D. O. Liceaga, Covid-19 optimal vaccination policies: a modeling study on efficacy, natural and vaccine-induced immunity responses, medRxiv (2020). arXiv:https://www.medrxiv.org/content/early/2020/11/20/2020.11.19.20235176.full.pdf.
* [18] W. O. Kermack, A. G. McKendrick, G. T. Walker, A contribution to the mathematical theory of epidemics, Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character 115 (772) (1927) 700–721.
* [19] A. E. Koufi, J. Adnani, A. Bennar, N. Yousfi, Analysis of a stochastic SIR model with vaccination and nonlinear incidence rate, International Journal of Differential Equations 2019 (2019) 9 pages.
* [20] The Star, Covid-19: 70% of cases in Malaysia were asymptomatic, says Health DG , cited 16 January 2021 (2020).
URL https://www.thestar.com.my/news/nation/2020/07/08/covid-19-70-of-cases-in-
malaysia-were-asymptomatic-says-health-
dg#:~:text=PUTRAJAYA%3A%20Out%20of%20the%20total,%2z%20(2%2C582)%20were%20symptomatic.
* [21] P. Yongzhen, L. Shuping, L. Changguo, S. Chen, The effect of constant and pulse vaccination on an sir epidemic model with infectious period, Applied Mathematical Modelling 35 (8) (2011) 3866 – 3878.
* [22] Ministry of Health of Malaysia, UNJURAN R-NAUGHT MALAYSIA DARI 04/01/2021 SEHINGGA 31/05/2021, cited 17 January 2021 (2021).
URL http://covid-19.moh.gov.my/semasa-kkm/2021/01/unjuran-r-
naught-4janhingga5mei2021
* [23] Bloomberg, Southeast Asia Covid-19 Vaccine Tracker: Who Gets What, cited 17 January 2021 (2020).
URL https://www.bloomberg.com/news/articles/2020-12-31/southeast-asia-
covid-19-vaccine-tracker-who-will-get-what-when
* [24] The Strait Times, Malaysia to start Covid-19 vaccinations in February, cited 17 January 2021 (2020).
URL https://www.straitstimes.com/asia/se-asia/malaysia-procures-64m-doses-of-
astrazenecas-coronavirus-vaccine-pm-muhyiddin
|
# Tameness and the power of programs over monoids in $\mathbf{DA}$
Nathan Grosshans (0000-0003-3400-1098), Fachbereich Elektrotechnik/Informatik, Universität Kassel, Kassel, Germany,<EMAIL_ADDRESS>https://nathan.grosshans.me; Pierre McKenzie, DIRO, Université de Montréal, Montréal, Canada,<EMAIL_ADDRESS>and Luc Segoufin (0000-0002-9564-7581), Inria, DI ENS, ENS, CNRS, PSL University, Paris, France,<EMAIL_ADDRESS>
###### Key words and phrases:
Programs over monoids, tameness, DA, lower bounds
Jan. 20, 2021 - Aug. 02, 2022. *Revised and extended version of [Grosshans-McKenzie-Segoufin-2017] that includes a more inclusive definition of tameness, thus strengthening the statement that $\mathbf{J}$ is not a tame variety, as explained in Section LABEL:sec:general.
|
Affiliations:
1. ETH Zurich, Institute for Particle Physics & Astrophysics, Wolfgang-Pauli-Str. 27, 8093 Zurich, Switzerland
2. National Center of Competence in Research PlanetS (www.nccr-planets.ch)
3. European Southern Observatory, Karl-Schwarzschild-Str. 2, 85748 Garching, Germany
4. Research School of Astronomy & Astrophysics, Australian National University, ACT 2611, Australia
5. STAR Institute, University of Liège, 19C allée du Six Août, 4000 Liège, Belgium
6. NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD, 20771, USA
7. Department of Physics, and Institute for Research on Exoplanets, Université de Montréal, Montréal, H3T 1J4, Canada
8. University of Oxford, Department of Atmospheric, Oceanic and Planetary Physics, Clarendon Laboratory, Sherrington Road, Oxford OX1 3PU, United Kingdom
9. DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 328, DK-2800 Kgs. Lyngby, Denmark
10. Department of Extrasolar Planets and Atmospheres, Institute for Planetary Research, German Aerospace Centre, Rutherfordstr. 2, 12489 Berlin
11. Zentrum für Astronomie und Astrophysik, Technische Universität Berlin, Hardenbergstrasse 36, D-10623 Berlin, Germany
12. Univ. Grenoble Alpes, CNRS, IPAG, F-38000 Grenoble, France
13. Centre Spatial de Liège, Université de Liège, Avenue Pré-Aily, 4031 Angleur, Belgium
14. Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001, Leuven, Belgium
15. University of Zurich, Institute of Computational Sciences, Winterthurerstrasse 190, 8057 Zurich, Switzerland
16. Observatoire astronomique de l'Université de Genève, chemin Pegasi 51b, 1290 Versoix, Switzerland
17. Large Binocular Telescope Observatory, 933 North Cherry Avenue, Tucson, AZ 85721, USA
18. Steward Observatory, Department of Astronomy, University of Arizona, 993 N. Cherry Ave, Tucson, AZ, 85721, USA
19. Leiden Observatory, Leiden University, 2333CA Leiden, The Netherlands
20. Department of Space, Earth & Environment, Chalmers University of Technology, Onsala Space Observatory, 439 92 Onsala, Sweden
21. Institut de Ciències de l'Espai (ICE, CSIC), Campus UAB, C/Can Magrans s/n, 08193 Bellaterra, Spain
22. Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
23. Department of Astronomy, Stockholm University, Alba Nova University Center, 10691 Stockholm, Sweden
24. Department of Space, Earth and Environment, Astronomy and Plasma Physics, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden
25. University of Exeter, School of Physics and Astronomy, Stocker Road, Exeter, EX4 4QL, UK
26. IAS, CNRS (UMR 8617), bat 121, Univ. Paris-Sud, F-91405 Orsay, France
27. University of Tartu, Tartu Observatory, 1 Observatooriumi Str., 61602 Tõravere, Tartumaa, Estonia
28. Centro de Astrobiología (CAB, CSIC-INTA), Depto. de Astrofísica, ESAC campus 28692 Villanueva de la Cañada (Madrid), Spain
29. Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
30. SRM Institute of Science and Technology, Chennai, India
31. Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., Pasadena, CA 91109, USA
32. Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA
33. Physikalisches Institut, Universität Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland
34. Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, 69117 Heidelberg, Germany
35. Freie Universität Berlin, Department of Earth Sciences, Malteserstr. 74-100, 12249 Berlin, Germany
36. Instituto de Astrofísica de Canarias (IAC), E-38200 La Laguna, Tenerife, Spain
37. Dept. Astrofísica, Universidad de La Laguna (ULL), E-38206 La Laguna, Tenerife, Spain
38. Institut d'Estudis Espacials de Catalunya (IEEC), C/Gran Capità 2-4, 08034 Barcelona, Spain
39. Department of Astronomy, Yale University, New Haven, CT 06511, USA
40. Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warsaw, Poland
41. Department of Earth and Planetary Sciences, University of California, Riverside, 900 University Ave. Riverside, CA, USA 92521
42. Deshbandhu College, University of Delhi 110019 Delhi, India
43. Vanderbilt University, Department of Physics & Astronomy, 6301 Stevenson Center Ln., Nashville, TN 37235, USA
44. Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
45. www.life-space-mission.com
# Large Interferometer For Exoplanets (_LIFE_):
I. Improved exoplanet detection yield estimates for a large mid-infrared
space-interferometer mission
S.P. Quanz (correspondence:<EMAIL_ADDRESS>M. Ottiger, E. Fontanet, J. Kammerer, F. Menti, F. Dannert, A. Gheorghe, O. Absil, V.S. Airapetian, E. Alei, R. Allart, D. Angerhausen, S. Blumenthal, L.A. Buchhave, J. Cabrera, Ó. Carrión-González, G. Chauvin, W.C. Danchi, C. Dandumont, D. Defrère, C. Dorn, D. Ehrenreich, S. Ertel, M. Fridlund, A. García Muñoz, C. Gascón, J.H. Girard, A. Glauser, J.L. Grenfell, G. Guidi, J. Hagelberg, R. Helled, M.J. Ireland, M. Janson, R.K. Kopparapu, J. Korth, T. Kozakis, S. Kraus, A. Léger, L. Leedjärv, T. Lichtenberg, J. Lillo-Box, H. Linz, R. Liseau, J. Loicq, V. Mahendra, F. Malbet, J. Mathew, B. Mennesson, M.R. Meyer, L. Mishra, K. Molaverdikhani, L. Noack, A.V. Oza, E. Pallé, H. Parviainen, A. Quirrenbach, H. Rauer, I. Ribas, M. Rice, A. Romagnolo, S. Rugheimer, E.W. Schwieterman, E. Serabyn, S. Sharma, K.G. Stassun, J. Szulágyi, H.S. Wang, F. Wunderlich, M.C. Wyatt, and the _LIFE_ Collaboration
(Received: <date> / Accepted: <date>)
###### Abstract
Context. One of the long-term goals of exoplanet science is the atmospheric
characterization of dozens of small exoplanets in order to understand their
diversity and search for habitable worlds and potential biosignatures.
Achieving this goal requires a space mission of sufficient scale that can
spatially separate the signals from exoplanets and their host stars and thus
directly scrutinize the exoplanets and their atmospheres.
Aims. We seek to quantify the exoplanet detection performance of a space-based
mid-infrared (MIR) nulling interferometer that measures the thermal emission
of exoplanets. We study the impact of various parameters and compare the
performance with that of large single-aperture mission concepts that detect
exoplanets in reflected light.
Methods. We have developed an instrument simulator that considers all major
astrophysical noise sources and coupled it with Monte Carlo simulations of a
synthetic exoplanet population around main-sequence stars within 20 pc of the
Sun. This allows us to quantify the number (and types) of exoplanets that our
mission concept could detect. Considering single visits only, we discuss two
different scenarios for distributing 2.5 years of an initial search phase
among the stellar targets. Different aperture sizes and wavelength ranges are
investigated.
Results. An interferometer consisting of four 2 m apertures working in the
4–18.5 $\mu$m wavelength range with a total instrument throughput of 5% could
detect up to $\approx$550 exoplanets with radii between 0.5 and 6 R⊕ with an
integrated S/N$\geq$7. At least $\approx$160 of the detected exoplanets have
radii $\leq$1.5 R⊕. Depending on the observing scenario, $\approx$25–45 rocky
exoplanets (objects with radii between 0.5 and 1.5 R⊕) orbiting within the
empirical habitable zone (eHZ) of their host stars are among the detections.
With four 3.5 m apertures, the total number of detections can increase to up
to $\approx$770, including $\approx$60–80 rocky eHZ planets. With four times 1
m apertures, the maximum detection yield is $\approx$315 exoplanets, including
$\leq$20 rocky eHZ planets. The vast majority of small, temperate exoplanets
are detected around M dwarfs. The impact of changing the wavelength range to
3–20 $\mu$m or 6–17 $\mu$m on the detection yield is negligible.
Conclusions. A large space-based MIR nulling interferometer will be able to
directly detect hundreds of small, nearby exoplanets, tens of which would be
habitable world candidates. This shows that such a mission can compete with
large single-aperture reflected light missions. Further increasing the number
of habitable world candidates, in particular around solar-type stars, appears
possible via the implementation of a multi-visit strategy during the search
phase. The high median S/N of most of the detected planets will allow for
first estimates of their radii and effective temperatures and will help
prioritize the targets for a second mission phase to obtain high-S/N thermal
emission spectra, leveraging the superior diagnostic power of the MIR regime
compared to shorter wavelengths.
###### Key Words.:
Telescopes – Techniques: interferometric – Infrared: planetary systems –
Techniques: high angular resolution – Methods: numerical – Planets and
satellites: detection – Planets and satellites: terrestrial planets
## 1 Introduction
One of the major objectives of exoplanet science is the atmospheric
characterization of a statistically relevant sample of small exoplanets.
Specific emphasis will be on temperate terrestrial planets to investigate
whether there are other worlds similar to Earth that may harbor life. While
occurrence rates of Earth-like planets around solar-type stars are still
somewhat uncertain (e.g., Bryson et al., 2021), thanks to transiting exoplanet
discovery missions such as _Kepler_ (Borucki et al., 2010) and the _Transiting
Exoplanet Survey Satellite_ (TESS; Ricker et al., 2015) and ongoing long-term
radial velocity (RV) surveys, we know that, statistically, planets with radii
and masses comparable to or slightly larger than those of Earth and with
shorter orbital periods are very abundant (e.g., Mayor et al., 2011; Tuomi et
al., 2019; Kunimoto & Matthews, 2020; Bryson et al., 2021). Some major
detections were even made within 20 pc of the Sun with both transit searches
(e.g., Berta-Thompson et al., 2015; Vanderspek et al., 2019; Gillon et al.,
2017) and RV surveys (e.g., Anglada-Escudé et al., 2016; Ribas et al.,
2016; Jeffers et al., 2020; Astudillo-Defru et al., 2017; Díaz et al., 2019;
Zechmeister et al., 2019), with RV planets typically being closer to the Sun
because of the geometric bias of transiting planets.
Going forward, and focusing on the atmospheric characterization of small
planets, the _James Webb Space Telescope_ (_JWST_) might reveal whether some
of these objects transiting nearby M dwarfs have managed to retain their
atmospheres (e.g., Koll et al., 2019) despite the high level of activity of
their host stars, in particular at younger ages (e.g., Ribas et al., 2016;
MacGregor et al., 2018; Johnstone et al., 2019); an in-depth investigation of
atmospheric constituents with _JWST_ seems, however, rather challenging (e.g.,
Kreidberg & Loeb, 2016; Morley et al., 2017; Krissansen-Totton et al., 2018).
The _Atmospheric Remote-sensing Infrared Exoplanet Large-survey_ mission
(Ariel; Tinetti et al., 2018) of the European Space Agency (ESA) will provide
transmission and emission spectra of hundreds of exoplanets, but the focus
will be on objects with hot or warm hydrogen-dominated atmospheres; only a few
small, relatively hot exoplanets will be studied. Upgraded or new fully
optimized instruments at existing 8-meter-class ground-based telescopes may
have a chance of directly detecting the nearest small exoplanet, Proxima Cen b
(e.g., Lovis et al., 2017). Due to their unprecedented spatial resolution and
sensitivity, the upcoming 30–40 m ground-based extremely large telescopes
(ELTs) will be powerful enough to directly detect small planets around the
nearest stars. Instruments working at mid-infrared (MIR) wavelengths, such as
the _Mid-infrared ELT Imager and Spectrograph_ (METIS; Brandl et al., 2018,
2021), will detect the thermal emission of the planets (Quanz et al., 2015;
Bowens et al., 2021). Instruments working at optical or near-infrared (NIR)
wavelengths and featuring high-resolution spectrographs coupled with adaptive
optics systems, such as the _Planetary Camera and Spectrograph_ (PCS; Kasper
et al., 2021) and the _High Resolution Spectrograph_ (HIRES; Marconi, 2020),
aim at detection in reflected light.
Unfortunately, none of the currently planned ground-based instrument projects
and approved space missions is capable of investigating in detail the
atmospheres of several dozen small exoplanets, including a sizable subsample
residing in or close to the so-called habitable zone (HZ) of their host stars
(Kasting et al., 1993; Kopparapu et al., 2013). This is one of the reasons
why, in the context of the Astrophysics Decadal Survey in the United States,
new flagship missions, the _Habitable Exoplanet Observatory_ (HabEx; Gaudi et
al., 2020) and the _Large UV/Optical/IR Surveyor_ (LUVOIR; The LUVOIR Team,
2019), are currently under assessment; one of their main science drivers is
the direct detection and characterization of temperate terrestrial exoplanets
in reflected light. (We note that during the refereeing process of this
paper, the Consensus Study Report “Pathways to Discovery in Astronomy and
Astrophysics for the 2020s” was published by the National Academies of
Sciences, Engineering, and Medicine, recommending a large ($\sim$6 m aperture)
infrared/optical/ultraviolet (IR/O/UV) space telescope as a future flagship
mission; National Academies of Sciences & Medicine, 2021.)
Here, we focus on a different observational approach and a new initiative that
aims at developing a space-based MIR nulling interferometer capable of
detecting and characterizing the thermal emission of (temperate) rocky
exoplanets. The characterization of temperate exoplanets in the MIR was
recently announced to be a potential science theme for a future science
mission within ESA’s Voyage 2050 program
(https://www.cosmos.esa.int/web/voyage-2050). The idea to employ
interferometric nulling for exoplanet science was originally proposed by
Bracewell (1978) and later followed up in Leger et al. (1995) and Angel &
Woolf (1997); in the late 1990s to mid 2000s, concept studies were carried out
by both ESA and NASA: the _Darwin_ mission and the _Terrestrial Planet Finder
- Interferometer_ (_TPF-I_) mission, respectively. In the end, these concepts
did not go forward for implementation because of technical challenges, but
also because our understanding of the exoplanet population was significantly
more limited. This has, however, changed. Given the enormous scientific
progress in exoplanet research since the mid 2000s and significant advances
related to key technologies, it is time to reassess such a mission concept and
quantify its potential scientific return. In 2018, a first such study was
published (Kammerer & Quanz, 2018), which investigated the exoplanet detection
yield of a space-based MIR nulling interferometer based on exoplanet
occurrence rates from NASA’s _Kepler_ mission; it claimed that a few hundred
small exoplanets (radii between 0.5 and $6\,\mathrm{R}_{\oplus}$) could be
within reach for such an instrument. These
promising results, in combination with ongoing lab activities related to
nulling interferometry, resulted in the creation of the _Large Interferometer
For Exoplanets (LIFE)_ initiative (www.life-space-mission.com).
The present paper is the first in a series of papers currently in preparation.
It is assumed that a mission such as _LIFE_ would consist of two main phases:
(1) a search phase to directly detect a large sample of exoplanets orbiting
nearby stars and (2) a characterization phase to reobserve a subsample of
these exoplanets and investigate their properties and atmospheres in detail.
In the following, we focus on the search phase and quantify how many
exoplanets _LIFE_ would be able to detect depending on different mission
parameters. Future work will focus more on the characterization phase of the
mission, including questions related to atmospheric diversity and evolution,
habitability, and the search for indications of biological activity. In this
context, the MIR wavelength regime offers complementary information and even
several advantages compared to studies at optical or NIR wavelengths. These
include more direct constraints on the temperature and size of the objects and
a large set of molecular absorption lines of main atmospheric species,
including biosignatures (e.g., Schwieterman et al., 2018; Catling et al.,
2018), some of which, for example CH4, might be easier to detect in the MIR
than at shorter wavelengths.
In comparison to previous studies that quantify the detection yield of an MIR
nulling interferometer (e.g., Kammerer & Quanz, 2018; Quanz et al., 2018,
2021), we have significantly updated and improved our simulation approach as
further described below. Similar detection yield analyses were carried out for
the reflected light missions mentioned above, enabling a direct comparison of
the different mission concepts.
We note that in the ideal case, exoplanet surveys with existing and future
high-precision RV instruments (e.g., CARMENES, NIRPS, ESPRESSO, MAROON-X,
HARPS3, and EXPRES) will continue to uncover a significant fraction of the
exoplanet population within 20 pc, including rocky, potentially habitable,
planets (as an alternative to ground-based RV searches, space-based
high-precision astrometry missions could be envisioned; e.g., Malbet &
Sozzetti, 2018; Janson et al., 2018). In this case, the search phase of future direct
detection exoplanet missions would be shortened and more of the limited
observing time could be allotted to the characterization of the objects.
Whether in the end the low-amplitude RV signals from Earth-like planets can be
separated from astrophysical noise sources, such as stellar jitter (e.g.,
Oshagh et al., 2017), and to what extent, consequently, those RV surveys will
be complete, remains to be seen.
In Sect. 2 we describe in detail the setup of our yield simulations. The
results are presented in Sect. 3, and we discuss them in Sect. 4. We conclude
and summarize our main findings in Sect. 5.
## 2 Setup of yield simulations
The general approach of our Monte Carlo-based yield simulations is described
in Kammerer & Quanz (2018), but we have implemented several updates as
detailed in the following.
### 2.1 Stellar target catalog
We used a new _LIFE_ target star catalog that includes single main-sequence
stars and wide separation binaries of spectral types FGKM out to 20 pc of the
Sun. While the catalog includes a total of 1732 objects, only a subset was
considered for the simulations of the search phase depending on the
optimization strategy (see Sect. 2.6 for details). The catalog and its
creation are further explained in Appendix A.
### 2.2 Exoplanet population
In our simulations we generated an artificial population of exoplanets around
the stars in our target catalog and did not consider any known exoplanets. The
underlying exoplanet occurrence rates as a function of the radius and orbital
period follow the results from NASA’s ExoPaG SAG13 (Kopparapu et al., 2018)
for single FGK stars and Dressing & Charbonneau (2015) for single M stars.
This allows for a comparison with the results obtained in the context of the
_HabEx_ and _LUVOIR_ studies mentioned above, which used a very similar
underlying exoplanet population. Binary stars with measured (apparent)
separations greater than 50 AU were treated as single stars. For binary
systems with smaller separations, the occurrence rates were scaled down by a
factor of 0.3 over the entire period and radius range (cf. Kraus et al.,
2016). We focused our analysis on planets with radii, $R_{\textrm{p}}$, in the
range $0.5\,R_{\oplus}\leq R_{\textrm{p}}\leq 6\,R_{\oplus}$ for FGK stars and
$0.5\,R_{\oplus}\leq R_{\textrm{p}}\leq 4\,R_{\oplus}$ for M stars. Orbital
periods, $P_{\textrm{P}}$, in the range $0.5\,d\leq P_{\textrm{P}}\leq 500\,d$
and $0.5\,d\leq P_{\textrm{P}}\leq 200\,d$ were considered for FGK and M
stars, respectively. Cold, Neptune-, or Jupiter-like objects with separations
$\geq$3 au, for which the SAG13 statistics cannot be applied (cf. Dulz et al.,
2020) and which were included in the _HabEx_ and _LUVOIR_ studies, were not
considered as they are typically too cold for a detection within a reasonable
amount of integration time. All planets were assumed to have circular orbits
and were assigned a random Bond albedo, $A_{\textrm{B}}\in[0,0.8)$ (for
reference, the Bond albedos of Venus and Mercury are $\approx$0.8 and
$\approx$0.1, respectively, bracketing the values for the Solar System
planets), and a geometric albedo, $A_{\textrm{g}}\in[0,0.1)$ (these low
values are motivated by the MIR wavelength range we are considering), that is
constant over the wavelength range we consider (cf. Kammerer & Quanz, 2018).
Both albedos were uniformly distributed in the considered intervals. The
planets were treated as black bodies with their whole surface area radiating
with an equilibrium temperature ($T_{\textrm{eq}}$) determined by the
luminosity of their host star, their Bond albedo, and their orbital
separation.
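For a black-body planet whose whole surface reradiates the absorbed stellar flux, energy balance gives $T_{\textrm{eq}}=\big[L_{*}(1-A_{\textrm{B}})/(16\pi\sigma d^{2})\big]^{1/4}$, with $d$ the orbital separation. A minimal sketch of this step (the function name and the Earth-like check are ours, not taken from LIFEsim or P-Pop):

```python
import math

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]
L_SUN = 3.828e26           # solar luminosity [W]
AU = 1.495978707e11        # astronomical unit [m]

def equilibrium_temperature(l_star_lsun, a_bond, sep_au):
    """Equilibrium temperature [K] of a black-body planet whose whole
    surface reradiates the absorbed stellar flux."""
    d = sep_au * AU
    t4 = l_star_lsun * L_SUN * (1.0 - a_bond) / (16.0 * math.pi * SIGMA_SB * d ** 2)
    return t4 ** 0.25

# Earth-like case (1 L_Sun, A_B = 0.3, 1 au): ~255 K
print(f"{equilibrium_temperature(1.0, 0.3, 1.0):.0f} K")
```

Increasing the Bond albedo or the separation lowers $T_{\textrm{eq}}$, which is why the random draw of $A_{\textrm{B}}$ directly affects a planet's MIR flux in the simulations.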
In Table 1 we define two types of exoplanets that are important throughout the
paper: rocky planets orbiting within the empirical habitable zone (eHZ) and
exo-Earth candidates (EECs). We also provide the respective occurrence rates
as provided by our exoplanet population.
Table 1: Types of exoplanets that are of particular importance throughout the
paper. The first row shows our definition of a rocky planet orbiting within
the empirical habitable zone (eHZ). The second row defines exo-Earth
candidates (EECs) as used in the yield estimates for _HabEx_ and _LUVOIR_. The
last columns summarize the occurrence rates for these objects as provided by
the assumed exoplanet population.
Planet type | $R_{\textrm{P}}$ [R⊕] | Stellar flux range [S⊕] | Occurrence rates$^{a}$ (M stars) | Occurrence rates$^{a}$ (FGK stars)
---|---|---|---|---
Rocky eHZ | 0.5 – 1.5 | 1.776 – 0.32$^{b}$ | 0.558 | 0.370
Exo-Earth Candidates (EECs) | 0.82 / 0.62$^{c}$ – 1.4 | 1.107 – 0.356$^{d}$ | 0.312 | 0.164

$^{a}$Occurrence rates are values for single stars averaged over our input catalog for the given range of spectral types. Because the stellar flux range is spectral type dependent, the number of objects falling within this range, and hence the occurrence rates, varies with spectral type as well.
$^{b}$The flux range is given by the “recent Venus” and “early Mars” limits and is spectral type dependent (Kopparapu et al., 2014). The values given here correspond to a 1 M⊕ planet orbiting a solar twin. We note that both limits take into account the luminosity evolution of the Sun, which was fainter during the epochs when Venus and Mars provided habitable conditions. For present-day solar luminosity, these insolation limits correspond to separations of 0.75 and 1.77 au, respectively, excluding Venus from, but including Mars in, the eHZ.
$^{c}$For EECs, the lower limit of the radius range depends on the separation from the star; closer to the star, planets are required to have a larger radius. This can be described by $R_{\textrm{P}}^{\textrm{min}}=0.8\cdot S^{0.25}$, where $S$ is the insolation. The _HabEx_ and _LUVOIR_ studies focused primarily on solar-type stars and used a corresponding expression based on the semimajor axis, $a$, i.e., $R_{\textrm{P}}^{\textrm{min}}=0.8\cdot a^{-0.5}$. As we are also interested in M stars, we had to convert this into an expression for $S$.
$^{d}$The flux range is given by the “runaway greenhouse” and “maximum greenhouse” limits and is spectral type dependent (Kopparapu et al., 2014). The values given here correspond to a 1 M⊕ planet orbiting a solar twin.
### 2.3 Simulating spacecraft, instrument, and noise sources with LIFEsim
In order to estimate the signal-to-noise ratio (S/N) of our simulated
exoplanets we have developed the new simulation tool LIFEsim (Dannert et al.,
2022). This tool enables us to simulate the temporally modulated signal that a
planetary system would leave in an observing sequence with a space-based
nulling interferometer (cf. Lay, 2005) and further includes the most relevant
– and wavelength-dependent – astrophysical noise sources. This is an important
difference from our earlier exoplanet detection yield estimates, where,
instead of explicitly simulating the interferometer transmission and signal
modulation for every simulated planet, constraints on the inner working angle
of the instrument and sensitivity were used to assess the discovery potential
(Kammerer & Quanz, 2018; Quanz et al., 2018).
Figure 1: Artist’s impression of the _LIFE_ nulling-interferometry mission,
consisting of four collector spacecraft in a rectangular array configuration
sending light to a beam combiner spacecraft in the center. The present
analysis assumes an X-array configuration with a baseline ratio of 6:1.
In the following, we considered an interferometer consisting of four collector
spacecraft in a so-called X-array configuration that feed their beams into a
fifth beam combiner spacecraft located in their center (see Fig. 1).
The ratio between the long and the short baseline of the X-array was assumed
to be 6:1 for the time being and a $\pi/2$ phase shift was applied between two
conjugated transmission maps (cf. Defrère et al., 2010). The short baselines
of the array are referred to as “nulling baselines” and are responsible for
creating the central dark fringe that nulls the host star. The long baselines
are referred to as “imaging baselines” and are responsible for a higher-
frequency modulation of the transmission map perpendicular to the fringe
pattern created by the nulling baselines. The resulting modulation map, the
difference between the two conjugate transmission maps, effectively suppresses
the signal coming from any centrally symmetric source (such as emission from
optically thin and smooth exozodiacal dust disks with random inclination) so
that only the shot noise of the source contributes to the S/N of the
observations. The 6:1 baseline ratio has been shown to be more robust against
instability noise compared to a 2:1 baseline ratio (Lay, 2006), but additional
trade-off studies are needed to further validate this choice. A detailed
description of LIFEsim is provided in Dannert et al. (2022), but we refer the
reader to Defrère et al. (2010) for a description of the general nulling and
beam combination scheme and the resulting final modulation map for an X-array
interferometer.
In our simulations we assumed that for each new target star the array is
reconfigured so that the center of the projected eHZ (cf. Table 1; Sect. 2.2)
falls within the first transmission maximum of the nulling baselines at a
reference wavelength of 15 $\mu$m. However, we imposed a minimum separation
of 10 m between two adjacent collector spacecraft and allowed for a
maximum separation of 600 m. Initial tests had shown that having the reference
wavelength between 15 and 20 $\mu$m resulted in comparable detection yields,
but that shorter (e.g., 10 $\mu$m) or longer (e.g., 25 $\mu$m) reference
wavelengths provided lower yield numbers. Keeping the baseline lengths of the
configuration in mind (from a technical perspective), we decided to use 15
$\mu$m as reference for all analyses presented in the following. The aperture
diameter $D$ of the collector spacecraft is a free parameter in our instrument
model, and we discuss the impact of aperture size on the results in Sect. 3.2.
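For a nulled aperture pair with the usual $\sin^{2}(\pi B\theta/\lambda)$ transmission, the first maximum lies at $\theta=\lambda/(2B)$, so the nulling baseline that centers a given angular separation on that maximum follows from $B=\lambda/(2\theta)$, clamped to the 10–600 m range quoted above. A sketch of this sizing step under those assumptions (function and variable names are ours, not from LIFEsim):

```python
import math

ARCSEC = math.radians(1.0 / 3600.0)  # 1 arcsec in radians

def nulling_baseline(wavelength_m, ang_sep_rad, b_min=10.0, b_max=600.0):
    """Baseline [m] that places an angular separation on the first
    transmission maximum of a sin^2(pi*B*theta/lambda) nulling pair,
    clamped to the allowed separation range."""
    b = wavelength_m / (2.0 * ang_sep_rad)
    return min(max(b, b_min), b_max)

# Example: eHZ center at 0.1 arcsec, 15 um reference wavelength -> ~15.5 m
print(f"{nulling_baseline(15e-6, 0.1 * ARCSEC):.1f} m")
```

For very distant (small-$\theta$) targets the formula would call for baselines beyond 600 m, which is where the clamping limits the achievable inner working angle.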
For the computation of the total photon flux received by the collector
spacecraft we ignored any possible obscuration from a secondary mirror. To be
conservative, we assumed 5% for the optical throughput of the instrument
(this 5% throughput is applied to the modulation maps, which already contain
only 50% of the incoming light), but will update this number as the concept
for the optical layout matures. In earlier studies in the context of the
_Darwin/TPF-I_ missions a throughput of 10% was assumed (e.g., Lay et al.,
2007; Defrère et al., 2010) and also at the _Large Binocular Telescope
Interferometer (LBTI)_ the most recent estimate for the optical throughput
around $\sim$11 $\mu$m is $\approx$0.11 (S. Ertel, private
communication). For the detector quantum efficiency, we assumed 70% over the
full wavelength range, which is identical to the _Darwin_ studies mentioned
above. Recent experiments with 15-micron-cutoff HgCdTe detector arrays have
yielded quantum efficiencies of $\gtrsim$0.8 between 6 and 12 $\mu$m
wavelengths (Cabrera et al., 2020) and the Si:As IBC detectors of the
JWST/MIRI instrument achieve $\gtrsim$0.7 between 12 and 20 $\mu$m (Rieke et
al., 2015).
At the moment, our S/N calculations are photon-based and include all major
astrophysical noise terms. We implicitly assumed that our measurements would
not be limited by instrumental effects (see Sect. 2.7 below for the definition
of our detection criterion). The impact of phase and/or amplitude variations
as major systematic noise sources is currently being assessed. Also, thermal
background from the aperture mirrors and the instrument optics, and detector-
related noise sources will be included in subsequent work. For the mirrors of
the collector spacecraft and the instrument optics not to contribute
significantly to the measurement implies a required temperature of
$\lesssim$40 K (Defrère et al., 2010). Noise terms that were explicitly
included are:
Photon noise from the simulated planets: Given the distance, radius, and
equilibrium temperature of our simulated planets, their photon flux (assuming
black-body emission) and related noise are fully described.
Photon noise from stellar leakage: Depending on the distance to the star, its
radius, and the length of the nulling baseline, a small fraction of stellar
photons may “leak” through the central dark fringe and hence contribute to the
photon noise.
Photon noise from exozodi disks: For each simulated planetary system we
randomly assigned a level of emission from a dusty exozodi disk following the
observed (nominal) distribution from the Hunt for Observable Signatures of
Terrestrial Systems (HOSTS) survey (Ertel et al., 2018, 2020). To compute the
spectral energy distribution (SED) of the exozodi disk we used the publicly
available code from Kennedy et al. (2015). All exozodi disks were assumed to
be optically thin, smooth (i.e., without any substructure) and seen face-on.
We refer the reader to Sect. 4.3.2 for a discussion about these assumptions.
Photon noise from local zodiacal light: The optically thin zodiacal dust in
our Solar System is a source of significant MIR emission. In LIFEsim the
surface brightness is described by a pointing-dependent 2D model originally
developed for the _Darwin_ simulator and based on data from _Cosmic Background
Explorer_ (COBE; Kelsall et al., 1998). Compared to the original _COBE_ data,
the model slightly over-predicts the flux in the 6-20 $\mu$m range and for a
pointing direction with a relative latitude of more than 90° from the Sun by
10-20%. For wavelengths shorter than 6 $\mu$m the difference increases to a
factor of 2-3 at 3 $\mu$m. However, at these shorter wavelengths the total
photon noise is strongly dominated (up to several orders of magnitude) by the
contribution from stellar leakage. In our simulations we assumed that we
always point in the anti-sunward direction (_LIFE_ will be launched to the
Earth-Sun L2 point) but considered the true latitude of the target star.
In our S/N calculations we implicitly assumed that the combined beams are fed
through single-mode fibers before the signal is spectrally dispersed. The
effective field-of-view (FoV) of the fibers, and hence of each collector
spacecraft, is wavelength dependent and given by $\lambda/D$.
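As a quick illustration of this $\lambda/D$ scaling (the 2 m aperture is just an example value, not a fixed design choice):

```python
import math

def fov_arcsec(wavelength_m, aperture_m):
    """Wavelength-dependent single-mode field of view ~ lambda/D,
    converted from radians to arcsec."""
    return (wavelength_m / aperture_m) / math.radians(1.0 / 3600.0)

# At 15 um with a 2 m collector: ~1.5 arcsec
print(f"{fov_arcsec(15e-6, 2.0):.2f} arcsec")
```

The field of view therefore shrinks toward the short-wavelength end of the band, which matters for wide-separation planets.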
In Fig. 2 we show an example of how the various noise terms compare to the
incoming photon flux from a 1 R⊕ exoplanet with an effective temperature of
276 K orbiting at 1 au from a Sun-like star at 10 pc distance. The system is
assumed to be located within the ecliptic and contains an exozodi disk with
the same brightness as the zodiacal light in the Solar System.
Figure 2: Example illustrating the photon flux and noise contributions from
the various astrophysical sources in our nulling-interferometry simulations:
exoplanet flux (1 R⊕ and 276 K effective temperature located at 10 pc; dashed
red line), flux from a Sun-like star (black line with stars), local zodiacal
light (green line with ticks), and exozodi (1 zodi; solid blue line). The
corresponding photon noise contributions (1-$\sigma$) are shown with the same
color code, but as dotted lines.
### 2.4 Mission parameters
Assuming a total mission lifetime of 5-6 years, we assigned an available on-
source observing time of 2 years to the initial search phase. This translates
into 2.5 years of mission time considering 25% of general mission overhead;
the remaining time of the mission is dedicated to detailed follow-up
observations of a subset of the detected exoplanets and possibly an ancillary
science program. The slew time from one target to the next was fixed to 10
hours, which is part of the 2 year observing time. For the moment we only
considered single visits of target stars during the survey.
### 2.5 Setup of Monte Carlo simulations
To create the exoplanet population we used the freely available P-Pop Monte
Carlo tool (https://github.com/kammerje/P-pop), which for each star of the
target catalog randomly draws exoplanets from the distributions described
above (cf. Kammerer & Quanz, 2018) and puts them at random positions along
their orbits. The orbital inclination was also randomly chosen for each
system, but planets in multi-planet systems were assumed to be co-planar. To
ensure that multi-planet systems were dynamically stable we applied a
stability criterion following the approach by He et al. (2019) that is based
on the mutual Hill radius of neighboring planets. Specifically, for circular
orbits as assumed here, a system was considered stable if for all planet pairs
within the system
$\Delta=\frac{a_{out}-a_{in}}{R_{H}}>8\quad,$
where $a_{out}$ and $a_{in}$ are the semimajor axes of the outer and inner
planet, respectively, and $R_{H}$ is the mutual Hill radius given by
$R_{H}=\frac{a_{in}+a_{out}}{2}\Bigg{[}\frac{m_{in}+m_{out}}{3M_{*}}\Bigg{]}^{1/3}\quad,$
with $m_{in}$ and $m_{out}$ being the mass of the inner and outer planet,
respectively, and $M_{*}$ being the mass of the host star. If a system or pair
of planets was unstable, we re-drew the system, which happened in less than 2%
of the cases. In total, we generated 500 planetary systems per target star.
All planets were then run through LIFEsim in order to compute their photon
fluxes as well as the photon noise from the various sources listed above.
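The stability filter above can be sketched as follows for circular, coplanar orbits; per He et al. (2019) it is sufficient to check neighboring planet pairs. All names and the example values here are ours, not taken from P-Pop:

```python
def mutual_hill_radius(a_in, a_out, m_in, m_out, m_star):
    """Mutual Hill radius of two planets on circular orbits.
    Semimajor axes in au; planet and stellar masses in the same units."""
    return 0.5 * (a_in + a_out) * ((m_in + m_out) / (3.0 * m_star)) ** (1.0 / 3.0)

def system_is_stable(planets, m_star, delta_crit=8.0):
    """planets: list of (a_au, mass) tuples. True if every pair of
    neighboring planets satisfies Delta = (a_out - a_in) / R_H > delta_crit."""
    ordered = sorted(planets)
    for (a_in, m_in), (a_out, m_out) in zip(ordered, ordered[1:]):
        r_h = mutual_hill_radius(a_in, a_out, m_in, m_out, m_star)
        if (a_out - a_in) / r_h <= delta_crit:
            return False
    return True

M_SUN_ME = 333000.0  # solar mass in Earth masses
# Venus + Earth analog: widely spaced, passes the Delta > 8 criterion
print(system_is_stable([(0.723, 0.815), (1.0, 1.0)], M_SUN_ME))  # True
```

Systems failing the criterion would be redrawn, which, as stated above, happens in less than 2% of the cases.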
Figure 3: Total exoplanet detection yield from our reference case scenario
simulations ($D=2$ m; $\lambda=4-18.5\,\mu m$) in the radius vs. stellar
insolation plane. The plots show the number of expected planet detections per
grid cell, including the statistical 1-$\sigma$ uncertainty from the Monte
Carlo approach but excluding uncertainties in the exoplanet occurrence rates.
Left panel: Scenario 1 (search phase optimized for maximizing the total number
of exoplanets). Right panel: Scenario 2 (search phase optimized for maximizing
the number of rocky eHZ exoplanets).
### 2.6 Distribution of observing time: Two scenarios
We considered two scenarios that determine the distribution of the available
on-source observing time of 2 years: maximizing the total number of detected
exoplanets (scenario 1) or the number of rocky exoplanets orbiting within the
eHZ of their host star (scenario 2; see Table 1 for definition). We note that
the eHZ includes a much larger range of insolations than the “classical” HZ
(Kasting et al., 1993; Kopparapu et al., 2013), but a much smaller range than
the “extended hydrogen” HZ (Pierrehumbert & Gaidos, 2011), which is estimated
to reach $\approx$10 au ($\approx 0.01\;S_{\oplus}$) for a G-type star.
Depending on the main science goals of _LIFE_, one may prefer either of the
two scenarios, but, as we will see below, maximizing the number of temperate,
rocky exoplanets (scenario 2) leads to a decrease in the total number of
detectable planets relative to scenario 1.
The algorithm to distribute the observing time was similar to the one
discussed in Lay et al. (2007) and considered that for each star in a given
Monte Carlo run one can compute the detection efficiency (defined as number of
detected planets per time interval $\delta t$). The number of detected planets
depends on the threshold one puts on the S/N of the planets, which in turn
depends on the assumed aperture size of the collector spacecraft and the
assumed length of $\delta t$ (in our analysis we assumed $\delta t$=1h). Also,
one can decide which subset of planets to focus on (i.e., scenario 1 or
scenario 2). By computing the number of detectable planets for all stars and
over a sufficiently large range of time intervals, one can identify the star
that offers the maximum possible detection yield for the smallest time
interval. This star and the corresponding planet(s), as well as the length of
the required time interval, were saved, and the star offering the next-best
detection efficiency was identified. We repeated this process until the
available observing time (including the 10 h slew time from one star to the
next) was used up. This yielded the total number of detectable planets per
star as well as a rank-ordered list of target stars based on their expected
contribution to the detection yield. We then calculated the gain (i.e., planet
yield per time) as an average over all 500 Monte Carlo realizations for each
star. This allowed us to construct an observing sequence, which yielded the
final average numbers we are quoting below. For completeness we note that in
this analysis we implicitly assumed that the X-array of the collector
spacecraft did an integer number of full rotations around its center
irrespective of the assumed integration time. This allowed for an easier
computation of the exoplanets’ signals passing through the interferometer’s
modulation map (Dannert et al., 2022).
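The greedy loop described above can be sketched in a few lines (a simplified illustration with toy numbers; the function name `greedy_schedule` and the per-star option lists are placeholders for the actual per-Monte-Carlo-run detection-efficiency computation):

```python
import heapq

def greedy_schedule(stars, total_time_h, slew_h=10.0):
    """Greedy distribution of observing time, after the approach of Lay et al.
    (2007): repeatedly pick the star offering the best detection efficiency
    (planets per hour) until the available time is used up.

    `stars` maps a star id to options of the form (dt_hours, n_detections),
    i.e. the expected number of detectable planets for a given integration time.
    """
    heap = []
    for sid, options in stars.items():
        dt, n = max(options, key=lambda o: o[1] / o[0])  # best efficiency
        heapq.heappush(heap, (-n / dt, dt, n, sid))      # max-heap via negation

    schedule, time_left = [], total_time_h
    while heap:
        _, dt, n, sid = heapq.heappop(heap)
        cost = dt + slew_h  # include the slew time to the next star
        if cost <= time_left:
            time_left -= cost
            schedule.append((sid, dt, n))
    return schedule

# Toy example with three stars and hand-picked efficiency profiles.
stars = {
    "A": [(10.0, 2.0), (30.0, 3.0)],
    "B": [(20.0, 1.0)],
    "C": [(5.0, 1.5)],
}
plan = greedy_schedule(stars, total_time_h=60.0)
# plan -> [("C", 5.0, 1.5), ("A", 10.0, 2.0)]; "B" no longer fits the budget.
```

The rank ordering produced by such a loop corresponds to the target list described in the text; in the actual analysis the efficiencies are averaged over all 500 Monte Carlo realizations.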
### 2.7 Detection criterion
In the following, we required a S/N$\geq$7 spectrally integrated over the full
wavelength range for a planet to be considered a detection. This choice
compensates for the lack of an instrumental noise model in the current
simulations. Under the assumptions that the instrumental noise contribution is
equal to or lower than the astrophysical noise and that the total noise can be
written as the quadrature sum of the instrumental and astrophysical terms
(i.e., $\sigma_{tot}=\sqrt{(\sigma_{inst})^{2}+(\sigma_{astro})^{2}}$), an
astrophysical S/N${}_{astro}\geq$7 ensures a total S/N $\geq 7/\sqrt{2}\approx 5$.
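The quadrature relation can be checked in a few lines (a sketch; the function name is illustrative):

```python
import math

def total_snr(snr_astro: float, inst_to_astro: float) -> float:
    """Total S/N for a fixed signal when the instrumental noise is a given
    fraction of the astrophysical noise; the noise terms add in quadrature:
    sigma_tot = sqrt(sigma_inst**2 + sigma_astro**2)."""
    return snr_astro / math.sqrt(1.0 + inst_to_astro**2)

# Worst case allowed by the assumption (instrumental noise equal to the
# astrophysical noise): an astrophysical-only S/N of 7 still yields a total
# S/N of 7/sqrt(2) ~ 4.95, i.e. approximately 5.
print(round(total_snr(7.0, 1.0), 2))  # 4.95
```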
As we were only considering photon noise, we verified that (slightly modified
versions of) published signal extraction algorithms for nulling-interferometry
data (e.g., Thiébaut & Mugnier, 2006; Mugnier et al., 2006) actually achieve a
performance close to the ideal photon-noise limited case and can also be
applied to multi-planet systems (Dannert et al., 2022).
## 3 Results
For the two scenarios outlined in Sect. 2.6, we chose an aperture size of
$D=2$ m as our reference case, but we also investigated cases with $D=1$ m and
$D=3.5$ m (the latter corresponding to the aperture of ESA’s _Herschel_
spacecraft, the largest monolithic infrared space telescope ever launched).
Besides using 4–18.5 $\mu$m as wavelength range in the reference case, we also
computed detection yields for 3–20 $\mu$m and 6–17 $\mu$m. We note that the
final wavelength range should be determined not only by the expected detection
yield during a 2.5-year search phase, but also by the scientific
importance of molecular bands for atmospheric characterization at the short
and long wavelength ends (Konrad et al., 2021) and by technical aspects. We remind
the reader that in all cases the assumed instrument throughput is 5% (see
Sect. 2.3).
Figure 4: Total exoplanet detection yield from our reference case scenario
simulations ($D=2$ m; $\lambda=4-18.5\,\mu m$) using the planet classification
scheme introduced by Kopparapu et al. (2018) (see Table 2). For comparison, we
also plot the number of terrestrial exoplanets within the eHZ as defined in
Sect. 2.6 as the leftmost bar labeled “Rocky eHZ.” The bars show the number of
expected planet detections, including the statistical 1-$\sigma$ uncertainty
from the Monte Carlo approach but excluding uncertainties in the exoplanet
occurrence rates. Left panel: Scenario 1, i.e., the search phase is optimized
for maximizing the total number of exoplanets. Right panel: Scenario 2, i.e.,
the search phase is optimized for maximizing the number of rocky eHZ
exoplanets.
Figure 5: Total exoplanet detection yield from our reference case scenario
simulations ($D=2$ m; $\lambda=4-18.5\,\mu m$) shown as a function of spectral
type of the host star (left panel: Scenario 1; right panel: Scenario 2). The
bars show the number of expected planet detections, including the statistical
1-$\sigma$ uncertainty from the Monte Carlo approach but excluding
uncertainties in the exoplanet occurrence rates. In both panels, the green
bars (labeled “Rocky HZ”) show the number of rocky eHZ exoplanets, which are a
subset of the planets in the 0.5–1.5 R⊕ bin in Fig. 3.
### 3.1 Exoplanet yield of reference case scenarios
In Fig. 3 we show the expected number of detectable exoplanets and the
standard deviation resulting from our 500 Monte Carlo runs in the radius
versus stellar insolation plane for the reference case setup and the two
scenarios described in Sect. 2.6. Figure 4 is based on the same information,
but this time we follow the exoplanet classification scheme introduced by
Kopparapu et al. (2018) (see Table 2). This scheme was also used in the final
study reports by the _LUVOIR_ and _HabEx_ teams (The LUVOIR Team, 2019; Gaudi
et al., 2020), which allows for an easier comparison between the different
mission concepts. A current shortcoming is that the scheme assumes a Sun-like
host star. Variations in the host star SED could potentially alter the stellar
flux condensation boundaries of the considered chemical species by a few
percent. A more robust analysis with different host stellar spectral types,
including M dwarfs, is needed to correct this. We note that in Fig. 4, we also
show the number of terrestrial exoplanets located within the eHZ as defined in
Sect. 2.6 for comparison with the other classes of exoplanets. It is
important to keep in mind that in all cases the number of detectable
exoplanets as a function of their radius and received insolation is influenced
by both the assumed underlying exoplanet population and our technical
assumptions.
Table 2: Exoplanet classification scheme introduced by Kopparapu et al. (2018). Together with the planet types defined in Table 1, we apply this scheme in Figs. 4, 8, 9, and 14, as well as in Figs. 18 and 21 in Appendix C. For reference, Venus would be classified as a “hot, rocky” planet and Earth and Mars as “warm, rocky” planets.

Planet type | $R_{\textrm{P}}$ [R⊕] | Hot [S⊕] | Warm [S⊕] | Cold [S⊕]
---|---|---|---|---
Rocky | 0.5 – 1 | 182 – 1.0 | 1.0 – 0.28 | 0.28 – 0.0035
Super-Earths | 1 – 1.75 | 187 – 1.12 | 1.12 – 0.30 | 0.30 – 0.0030
Sub-Neptunes | 1.75 – 3.5 | 188 – 1.15 | 1.15 – 0.32 | 0.32 – 0.0030
Sub-Jovians | 3.5 – 6 | 220 – 1.65 | 1.65 – 0.45 | 0.45 – 0.0030
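The classification in Table 2 can be expressed as a small lookup (a sketch; boundary values are copied from the table, but the bin-edge conventions are an assumption chosen so that Earth comes out “warm, rocky” as stated in the caption):

```python
# Rows: (type, R_lo, R_hi, hot_hi, warm_hi, cold_hi, cold_lo), in R_earth / S_earth.
RADIUS_BINS = [
    ("Rocky",        0.5,  1.0,  182.0, 1.00, 0.28, 0.0035),
    ("Super-Earths", 1.0,  1.75, 187.0, 1.12, 0.30, 0.0030),
    ("Sub-Neptunes", 1.75, 3.5,  188.0, 1.15, 0.32, 0.0030),
    ("Sub-Jovians",  3.5,  6.0,  220.0, 1.65, 0.45, 0.0030),
]

def classify(radius, flux):
    """Return (temperature class, planet type), or None outside the grid."""
    for name, r_lo, r_hi, hot_hi, warm_hi, cold_hi, cold_lo in RADIUS_BINS:
        if r_lo <= radius <= r_hi:  # first matching row wins at shared edges
            if warm_hi < flux <= hot_hi:
                return ("Hot", name)
            if cold_hi < flux <= warm_hi:
                return ("Warm", name)
            if cold_lo <= flux <= cold_hi:
                return ("Cold", name)
            return None  # outside the tabulated flux range
    return None  # outside the tabulated radius range

# Venus (~0.95 R_earth, ~1.9 S_earth) -> ("Hot", "Rocky");
# Earth (1.0, 1.0) -> ("Warm", "Rocky"), matching the caption.
```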
These plots show that within the present simulation framework, _LIFE_ would
discover hundreds of nearby exoplanets, the vast majority of which have radii
between 0.5 and 3 R⊕. They also show that the choice of observing scenario has
a significant impact on the planet yield: on the one hand, the number of rocky
exoplanets orbiting within the eHZ can be increased by a
factor of $\approx$1.6 if one optimizes the observing strategy accordingly
(scenario 2); for EECs (see Table 1) the gain is even a factor of $\approx$2.
On the other hand, this comes at a price, as the resulting total number of
detectable exoplanets is significantly smaller than in scenario 1 ($\approx$350 vs.
$\approx$550). Because of their moderate temperatures, rocky exoplanets in the
eHZ are fainter than objects orbiting closer to the star and require larger
amounts of integration time to be detected. While for scenario 1 the typical
observing time per target is between 15 and 35 hours, it is between 50 and 130
hours for scenario 2.
In Fig. 5 we show the distribution of detectable exoplanets as a function of
the spectral type of their host star. For the rocky eHZ exoplanets there is a
strong preference for M dwarfs. This is because M dwarfs are much
more numerous than earlier-type stars, and hence more of them lie close to the Sun.
In addition, they have a higher occurrence rate of terrestrial exoplanets (cf.
Dressing & Charbonneau, 2015). In Fig. 6 we show the distance distribution of
the detected exoplanets for both scenarios. By maximizing the number of rocky
eHZ exoplanets one exclusively observes stars within $\sim$10 pc of the Sun.
Figure 6: Distance distribution of the detected planet populations shown in
Fig. 3. The bars show the number of expected planet detections, including the
statistical 1-$\sigma$ uncertainty from the Monte Carlo approach but excluding
uncertainties in the exoplanet occurrence rates.
Another important parameter to look at is the detection efficiency, that is,
the number of detectable rocky eHZ exoplanets relative to the total number of
such exoplanets that were generated in our simulations. This is illustrated in
Fig. 7, which is based on the results for scenario 2. One can see that,
depending on the received stellar insolation (or, to a first approximation,
the resulting equilibrium temperature), only a certain fraction of the
exoplanets is detected. As indicated above, the main reason is the required
sensitivity rather than the spatial resolution. Still, some of the simulated
exoplanets are indeed at an orbital phase where they escape a detection with
the interferometer. However, it is reassuring that $\geq$50% detection
efficiency is achieved for exoplanets with $T_{\textrm{eq}}\geq 225$ K, or
insolations within $0.8\;S_{\oplus}\leq S_{\textrm{p}}\leq 1.5\;S_{\oplus}$.
This number could be further increased by implementing a multi-visit search
phase. Work on quantifying the gain in detection efficiency (and survey
completeness) as a function of the number of visits is currently ongoing.
Figure 7: Detection efficiency for rocky eHZ exoplanets for our scenario 2.
Top panel: Equilibrium temperature distribution of all exoplanets present in
the surveyed sample (gray) and all detected exoplanets (blue). Bottom panel:
Same as above, but as a function of stellar insolation. In both panels the
detection efficiency (y axis on the right-hand side) is shown with the dashed
red line.
Figure 8: Same as Fig. 4, but now for $D=1.0$ m.
Figure 9: Same as Fig. 4, but now for $D=3.5$ m.
### 3.2 Impact of aperture size and wavelength range
In Figs. 8 and 9 we show the expected detection yield for apertures with $D=1$
m and $D=3.5$ m, respectively. The format is the same as for the reference
case shown in Fig. 4; the plots corresponding to Fig. 3 are available in
Appendix C, where we also show in Fig. 18 the relative changes in yield
compared to the reference case. Figure 10 provides a summary of the impact of
the aperture size on the total _LIFE_ exoplanet detection yield during the
2.5-year search phase. It shows that, as expected, aperture size strongly
affects the number of detectable exoplanets and it is important to point out
that the gain (loss) when going to larger (smaller) apertures is most
significant for small exoplanets of all temperatures and cool exoplanets of
all sizes. Specifically, Figs. 8 and 9 show that in the case of $D=3.5$ m, the
number of rocky eHZ exoplanets and EECs would increase to $\approx$63 (+132%)
and $\approx$28 (+161%), respectively, in scenario 1. The corresponding
numbers in scenario 2 are $\approx$78 (+78%) and $\approx$39 (+88%). The
relatively smaller gain in scenario 2 compared to scenario 1 is explained by the
higher number of exoplanets already detected with the reference aperture size
of $D=2$ m. In the case of $D=1$ m, the number of rocky eHZ exoplanets and EECs
would decrease to $\approx$6 (-76%) and $\approx$2 (-81%), respectively, in
scenario 1. In scenario 2, the numbers would go down to $\approx$17 (-61%) and
$\approx$7 (-64%) for rocky eHZ exoplanets and EECs, respectively.
Figure 10: Total expected exoplanet detection yield as a function of aperture
size and for each of the two scenarios. The numbers shown here are the sum of
the mean numbers shown in the grid cells in Figs. 3, 16, and 17.

Figure 11: Detection yield comparison between _LUVOIR A/B_, _HabEx_, and
_LIFE_. For _LIFE_ we show the numbers for the $D=2$ m reference case and for
_HabEx_ the numbers from the baseline 4-meter concept. Jovian planets are not
shown because they were not included in the _LIFE_ simulations (cf. Sect. 2.2;
see text for important details).
The effect of changing the wavelength range is much weaker by comparison, and
generally the number of detectable planets only increases or decreases by a
few percent. Figures 19 and 20 in Appendix C show the results in the same
format as Fig. 3, and the changes relative to the reference case with
$\lambda=4-18.5\,\mu m$ are shown in Fig. 21.
## 4 Discussion
### 4.1 Total exoplanet yields
Figure 12: Median S/N of the detected exoplanets in our reference case
scenario simulations in the radius vs. stellar insolation plane (left panel:
Scenario 1; right panel: Scenario 2). We note that the 1D distributions on top
and to the right of the grids (as well as colored area) represent the numbers
of detected exoplanets, including the 1-$\sigma$ uncertainties shown in Fig.
3, and not the marginalized distributions of the S/N.
Looking at the total number of detectable exoplanets and their properties
reveals how diverse the expected _LIFE_ exoplanet yield will be. This sample
spans about four orders of magnitude in planet insolation and about a factor
of 10 in planet radius. In addition to investigations concerning the
habitability of a subset of the sample, _LIFE_ has the potential to address a
number of scientific questions related to the formation and evolution of
exoplanets and their atmospheres.
In Fig. 11 we provide a comparison with the detection yields published in the
_HabEx_ and _LUVOIR_ study reports (Gaudi et al., 2020; The LUVOIR Team,
2019; see Stark et al., 2019, for details on the yield calculations for
the reflected-light missions). This plot suggests that for exoplanets with
radii up to 6 R⊕, _LIFE_, with four $D=2$ m apertures, can achieve
overall detection yields comparable to those of the _LUVOIR-A_ (15-meter
aperture) and _LUVOIR-B_ (8-meter aperture) concepts; _HabEx_ , with a 4-meter
primary mirror, is predicted to yield fewer detections. It needs to be noted,
though, that while in our simulations planets with radii $>$6 R⊕ were not
included, _LUVOIR A_ and _B_ and _HabEx_ are predicted to detect $\approx$117,
$\approx$102 and $\approx$31 of these Jovian planets, respectively. It is also
important to mention that the numbers for _LIFE_ are the sum of numbers for
the various planet types shown in Fig. 4. The resulting overall numbers of
detectable planets differ slightly from those shown in Fig. 10 because some
detectable planets fall outside the insolation ranges defined in Table 2.
Overall, these results show that in principle both approaches, large, single-
aperture reflected light missions and interferometric MIR missions (under the
assumptions laid out in Sect. 2), offer unprecedented opportunities for the
direct detection and detailed investigations of hundreds of nearby exoplanets.
Going forward, it will be important to investigate possible scientific
synergies between missions such as _HabEx_ or _LUVOIR_ and _LIFE_ because at
least a subset of the exoplanets detected by one approach is likely also
detectable by the other.
Figure 13: Comparison of exoplanet detections with _LIFE_ to known Solar
System planets and exoplanets. The _LIFE_ yield for the reference case
(scenario 2) is shown in red contours using a kernel density estimate of the
detected sample. Every shaded contour level corresponds to 50 exoplanets
detected in the respective parameter space. Blue points represent 60 out of
the 79 known exoplanets within 10 pc of the Sun for which we could estimate
the radius and insolation level. Gray points represent the four rocky planets
in the Solar System (E=Earth, V=Venus, Ma=Mars, and Me=Mercury).
The single most important parameter related to the number of detectable
exoplanets is, unsurprisingly, the aperture size of the collector spacecraft.
While here we focus on a 2-D array architecture for the interferometer with
four collector spacecraft with aperture sizes between $D=1$ m and $D=3.5$ m,
Dandumont et al. (2020) recently presented a similar yield analysis based on 4
different implementations of a two-aperture Bracewell interferometer: 2
CubeSat options (with a 0.5 or 1 m baseline and 0.08 m apertures), a Proba
mission option (with a 5 m baseline and 0.25 m apertures), and the _Fourier
Kelvin Stellar Interferometer_ concept presented in Danchi et al. (2008) and
Danchi & Barry (2010) (with a 12 m baseline and 0.5 m apertures). The trend
shown here continues down to CubeSat apertures and the detection of at least
$\approx$10–15 rocky eHZ exoplanets requires an aperture size of at least
$D=1$ m. The strong dependence of the detection yield on the aperture size
results from the fact that, in the vast majority of cases, the local zodiacal
dust emission is an important noise term and the effective FoV of the
collector spacecraft scales with $\lambda/D$ (cf. Sect. 2.3). In Appendix D we
provide an overview of the relative contributions of the various noise terms
to the total noise for planets detected in the reference case scenarios (Fig.
23).
Another key result from our analyses is that, depending on how the observing
time is distributed amongst the stellar targets, both the number of detected
exoplanets and the type of detected exoplanets can vary significantly. This
illustrates a strong need for the community to clearly define and prioritize
the scientific objectives of such a mission in order to derive the appropriate
observing strategy. An additional parameter that needs to be considered in
this context is the completeness of the survey, that is, how important it is
to have detected, with a certain level of confidence, all (or at least most)
exoplanets from a specific subset of the exoplanet population in the solar
neighborhood. Higher completeness requires more observing time per target star
(including multiple visits) and hence a lower number of detectable exoplanets
overall.
The results from the search phase, and also its duration, have an immediate
impact on the follow-up strategy during the characterization phase of the
mission. It is hence important to understand how well the fundamental
properties of the detected exoplanets (such as radius and temperature, but
also their orbital position) can be constrained from single-epoch data. In
Fig. 12 we present some first indications by showing the median S/N of the
detected exoplanets as a function of their radius and insolation. Because of
the much longer average integration time per star in scenario 2 (cf. Sect.
3.1), the median S/N is, in many cases, significantly higher than in scenario
1. Interestingly, the exoplanets in most grid cells (and certainly warm and
hot exoplanets with radii $>1.5\,R_{\oplus}$) are detected
with sufficiently high S/N that some first-order estimate of their radius and
effective temperature and maybe even a rough analysis of their SED based on
(very) low-resolution spectroscopy appears feasible. This aspect needs to be
investigated further, as the possibility of obtaining spectral information
already from single-epoch data, allowing for a first characterization and
classification of the exoplanets, has an impact on the follow-up strategy
during the characterization phase. For completeness we note that the largest,
hottest planets receiving the highest levels of insolation do not show the
highest median S/N. This is because the S/N is related to the location of the
exoplanets in the transmission map of the interferometer, which maximizes the
throughput for exoplanets located in the eHZ and not for close-in exoplanets
(cf. Sect. 2.3).
Figure 14: Same as the right panel in Fig. 4, but ignoring all M-type dwarfs
in the target catalog and spending the full search phase on FGK stars only.
The panels show the results for $D=1.0$ m, $D=2.0$ m, and $D=3.5$ m from left
to right, respectively.
Finally, whether or not future exoplanet imaging space missions will have to
carry out a somewhat extended search phase, will also depend on the progress
and results of ongoing and future ground-based RV surveys. In Fig. 13 we show
a comparison between the expected _LIFE_ detection yield (reference case;
scenario 2) and currently known exoplanets within 10 pc of the Sun drawn from
the NASA Exoplanet Archive (https://exoplanetarchive.ipac.caltech.edu). If
the insolation is not provided for planets in the archive, it is calculated
via $L_{\star}[\mathrm{L_{\odot}}]/a[\mathrm{AU}]^{2}$, with $L_{\star}$ the
host star luminosity, $L_{\odot}$ the solar luminosity, and $a$ the semimajor
axis of the exoplanet orbit. A missing radius measurement is estimated using
forecaster (https://github.com/chenjj2/forecaster; Chen & Kipping, 2016).
This leads to 60 out of the 79 confirmed exoplanets within 10 pc for which we
can assign both radius and insolation. Fig. 13 shows on the one hand that
there is an interesting sample of planets already known within 10 pc from
which a preliminary target list could be compiled. On the other hand, it
demonstrates that one can expect a factor of 5 more planets to be found within 10
pc with _LIFE_. New (or continued) systematic RV exoplanet searches
in the solar neighborhood will be fundamentally important to either provide
future imaging missions with a predefined exoplanet target list or at least
provide them with stringent constraints on the existence of nearby planetary
systems. The same is true for systematic searches of exozodi disks around
nearby stars. As mentioned above, already during the search phase typical
integration times are easily on the order of days. This means that S/Ns $>$5
per spectral channel are costly, and knowing interesting or promising targets
beforehand saves valuable observing time. (First science requirements for the
spectral resolution and wavelength coverage of _LIFE_ are presented in Konrad
et al., 2021; previous works in this direction and in the context of
atmospheric characterization of terrestrial exoplanets at MIR wavelengths
suggested spectral resolutions of up to $R\approx 40$, e.g., Des Marais et
al., 2002; Léger et al., 2019.)
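The insolation estimate used for the archive comparison follows directly from the formula $L_{\star}[\mathrm{L_{\odot}}]/a[\mathrm{AU}]^{2}$ given above; a one-line helper illustrates it (the function name is ours; in the actual comparison, missing radii are additionally filled in with forecaster):

```python
def insolation_s_earth(l_star_lsun: float, a_au: float) -> float:
    """Stellar insolation in Earth units: S = L_star / a^2, with L_star in
    L_sun and the orbital semimajor axis a in au."""
    return l_star_lsun / a_au**2

# Earth receives 1 S_earth by construction; a planet at 2 au around a
# 4 L_sun star receives the same insolation.
print(insolation_s_earth(1.0, 1.0), insolation_s_earth(4.0, 2.0))  # 1.0 1.0
```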
### 4.2 Rocky HZ exoplanets: M-star preference and detection efficiency
As shown in Fig. 4, _LIFE_ could detect $\approx$25–45 rocky exoplanets
located within the eHZ of their host stars and $\approx$10–20 EECs following
the definition of Kopparapu et al. (2018). However, these numbers are strongly
affected by the aperture size of the collector spacecraft. As shown in Figs. 8
and 9, decreasing (increasing) the aperture size yields a significant decrease
(increase) in the number of rocky temperate exoplanets. These findings will be
of great importance during upcoming trade-offs between mission cost, where
aperture size will be an important parameter, and science return. We stress
that this is not only important for the search phase, but it is even more
relevant for the characterization phase that aims at investigating a subsample
of the detected exoplanets in greater detail with high-S/N spectra (Konrad et
al., 2021).
Following similar arguments presented in Stark et al. (2014, 2015), Quanz et
al. (2021) argued that in order to obtain statistically robust results on the
fraction of rocky HZ exoplanets that are indeed habitable, at least 30 (better
50) exoplanets in that part of parameter space need to be studied. According
to the numbers presented above, this appears to be achievable with _LIFE_.
However, the vast majority of these planets is found around M stars and at
this point in time it is unknown whether exoplanets orbiting within the HZ
around M stars are able to retain (secondary) atmospheres because of the high
activity of M-type stars, in particular at young ages (e.g., Tian & Ida, 2015;
Luger & Barnes, 2015; Lingam & Loeb, 2018; Godolt et al., 2019; Atri & Mogan,
2021). It has been shown that under certain circumstances such exoplanets,
which are very likely tidally locked, may still provide habitable conditions
(e.g., Leconte et al., 2015; Ribas et al., 2016; Turbet et al., 2016; Boutle
et al., 2017), but empirical data are still lacking. There is, however, hope
that _JWST_ will be able to address this fundamentally important question and,
for a few cases, investigate the existence of atmospheres of rocky exoplanets
transiting M stars (e.g., Koll et al., 2019). Also, a deep characterization
effort of the potential M-star targets should be carried out, including the
high-energy radiation budget and its past history, in order to understand
which stars may have provided a more quiescent environment for their expected
planets. If rocky exoplanets orbiting M stars can retain atmospheres, then
_LIFE_ is in an excellent position to robustly characterize a significantly
larger sample. If not, then one may want to reconsider the observing strategy
and possibly de-prioritize M stars in the stellar input catalog. Figure 14
shows the results for the most extreme case, where all M stars are ignored and
the full search phase is spent on FGK stars. In this case, $\approx$25 rocky
eHZ exoplanets can be expected assuming an aperture size of $D=3.5$ m; with
$D=2.0$ m this number would drop to $\approx$11, limiting the statistical
power of the analysis. In this context, two points are important to mention:
(a) in order to further increase the number of detected rocky eHZ planets
around FGK stars, it will be important to investigate how a search phase with
a multi-visit strategy would affect the results. We remind the reader that a
detection efficiency $\geq$50% is currently achieved for exoplanets with
$T_{\textrm{eq}}\geq 225$ K or insolations within $0.8\;S_{\oplus}\leq
S_{\textrm{p}}\leq 1.5\;S_{\oplus}$. Ignoring the M stars and repeating the
analysis shown in Fig. 7 for FGK stars only reveals that the overall detection
efficiency is indeed lower (see Fig. 22 in Appendix C). Hence, we can expect
to gain additional detections when multiple visits per star are applied; (b)
As we discuss in Sect. 4.3.3, our underlying occurrence rates for rocky,
temperate planets around FGK stars may be on the rather conservative side.
Similar to the expected total detection yield, also the number of predicted
EECs can be compared to those published in the _HabEx_ and _LUVOIR_ study
reports (Gaudi et al., 2020; The LUVOIR Team, 2019). _HabEx_ , with its
4-meter baseline concept, is expected to detect $\approx$8 EECs, while
_LUVOIR-A_ and _LUVOIR-B_ are predicted to directly image $\approx$54 and
$\approx$28 EECs, respectively. Considering the $\approx$20 EECs that _LIFE_
is expected to detect (assuming $D=2$ m and scenario 2), one has to keep in mind
its preference for planets around M stars, while _HabEx_ and _LUVOIR_ have a
strong detection bias for planets around solar-type stars. This suggests that
there is only limited overlap in primary discovery space for EECs between the
missions if they were to carry out independent search phases. Hence, it will
be important to check the potential overlap assuming that one mission is
following up after the other (e.g., _LIFE_ following after _LUVOIR/HabEx_). In
addition, a closer look at the underlying EEC occurrence rates
shows that the numbers cannot be directly compared (see Sect. 4.3.3 below).
### 4.3 Remaining limitations and uncertainties in the simulations
#### 4.3.1 Treatment of noise sources
Compared to previous works the yield simulations presented here are based on a
more realistic treatment of the observing technique and include all major
astrophysical noise terms. One of the next crucial steps is to continue the
development of an instrument concept including a noise breakdown structure so
that quantitative instrumental noise estimates can be included in the
simulations. The calculations from Lay (2004) indicate that the noise
contributions from photon noise and instrumental noise may indeed not be very
different. However, these calculations were done for a specific example and
how the relative contributions scale with stellar and planet properties and
exozodi brightness needs to be investigated. Also, recent work by Dandumont et
al. (2020) and other previous analyses in the context of _TPF-I_ (e.g., Lay,
2006) or _Darwin_ (e.g., Defrère et al., 2010) can serve as starting points.
In addition to the noise budget, important instrument parameters such as
overall throughput and detector quantum efficiency need to be further
validated. For the interested reader we provide a summary of the status of
some key technologies relevant for _LIFE_ in Appendix B.
#### 4.3.2 Treatment of exozodiacal and zodiacal dust
While we do take into account emission from potential exozodiacal dust disks
using the nominal distribution from the HOSTS survey (Ertel et al., 2020), it
needs to be acknowledged that there is still considerable uncertainty in the
median exozodi level: while in the nominal distribution the median zodi level
is $\bar{z}$$\approx$3.2, Ertel et al. (2020) show that one can only be
confident at the 1$\sigma$ level that the median is below 9 zodis and at the
2$\sigma$ level that it is below 27 zodis. In order to quantify the impact of
these uncertainties on the detection yield, we did the following experiment
for the reference case scenarios: the exozodi level distribution was shifted
by adding multiples of the median absolute deviation (MAD) of the distribution
(MAD($z$)$\approx$2.7) to each individual exozodi level in the sample. In the
most extreme case we analyzed, the distribution was shifted by 9$\cdot$MAD,
resulting in a median $\bar{z}$$\approx$27, corresponding to the 2$\sigma$
level mentioned above. In this case, the total number of detectable planets
decreased by $\lesssim$6% and the number of rocky, HZ planets changed even
less. One reason for this somewhat limited impact is that, compared to noise
from stellar leakage and local zodiacal dust emission, noise from exozodiacal
dust disks contributes only little to the total noise budget of detected
planets (see Fig. 23). Hence, in a statistical sense, the current
uncertainties may not have a strong impact on the overall results. Still,
additional observational efforts determining exozodi levels would further
improve upon the current statistical uncertainties and, maybe even more
importantly, would also help prioritize the most promising individual targets
for future space missions. In addition, at the moment, the HOSTS survey does
not show a correlation between spectral type and the level of exozodi emission
(Ertel et al., 2020), and hence we apply the same underlying distribution of
exozodi levels to all target stars irrespective of spectral type. A larger
data set would be required to further confirm this current finding.
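The shift experiment described above can be sketched in a few lines (a toy sample stands in for the HOSTS nominal distribution; the function name is illustrative):

```python
import statistics

def shift_by_mad(zodi_levels, n_mad):
    """Shift every exozodi level by n_mad times the sample's median absolute
    deviation (MAD), as in the robustness experiment described in the text."""
    med = statistics.median(zodi_levels)
    mad = statistics.median(abs(z - med) for z in zodi_levels)
    return [z + n_mad * mad for z in zodi_levels]

# Toy sample: median 3, MAD 1; a 9*MAD shift moves the median to 12.
shifted = shift_by_mad([1, 2, 3, 4, 10], 9)
```

Since every level is shifted by the same constant, the sample median moves by exactly $n\cdot$MAD; with MAD($z$)$\approx$2.7 a 9$\cdot$MAD shift moves the median from $\approx$3.2 to $\approx$27, as quoted above.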
Furthermore, the inclination of exozodiacal dust disks and possible spatial
offsets and substructures (e.g., “clumps,” such as those seen in the zodiacal
light; Reach, 2010; Krick et al., 2012) are not considered in our simulations.
As long as exozodiacal dust disks are centrally symmetric and optically thin,
their contribution to the photon noise in a _LIFE_ measurement is to first
order independent from their inclination; hence, changing the inclination of
the disks has no measurable impact on the results. Spatial offsets and disk
substructures would, however, have an impact on the S/N calculations. Defrère
et al. (2010) looked at the specific case of an Earth-Sun twin located at 15
pc. They modeled planet induced resonant structures in exozodi disks with
varying dust density and inclination and investigated up to what exozodi level
the planet would still be detectable. They concluded that around 10 $\mu$m
wavelength, up to $\sim$15 and $\sim$7 zodis are acceptable for disks with
inclinations of 0–30° and up to 60°, respectively. For edge-on systems
this limit drops to $\sim$1.4 zodis. In order to further refine the results
presented here, analyses such as the ones presented in Defrère et al. (2010) could
be carried out, possibly enlarging the covered parameter space. However, as
already noted by the authors, advanced signal processing approaches may
further relax the constraints in terms of acceptable zodi levels mentioned
above. Also, if one considers the nominal distribution of exozodi levels
published by the HOSTS team, the above mentioned zodi level limits appear to
be on the high-end side: about two-thirds of our simulated systems with
randomly assigned zodi levels based on the nominal distribution have disks
with $\leq$7 zodis. Hence, we do not expect exozodiacal dust disks to be a show-stopper for _LIFE_. However, systems that are seen (close to) edge-on may pose
a real challenge and have to be investigated in more detail, and a coordinated
effort to reduce the existing uncertainties in the median exozodi level of
nearby stars remains certainly important.
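The statistic quoted above (about two-thirds of simulated systems with $\leq$7 zodis) can be illustrated with a short sketch. The lognormal form and its width below are purely illustrative assumptions, chosen only so that the median lands near a few zodis (the rough scale of the nominal HOSTS-based median); they are not the distribution actually used in our simulations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the nominal exozodi-level distribution:
# a lognormal with an assumed median of 3 zodis and an assumed width.
median_zodi = 3.0
sigma = 1.5  # width of ln(zodi); illustrative assumption

zodi_levels = rng.lognormal(mean=np.log(median_zodi), sigma=sigma,
                            size=100_000)

# Fraction of simulated systems with disks at or below 7 zodis
frac_leq_7 = np.mean(zodi_levels <= 7.0)
print(f"fraction of systems with <= 7 zodis: {frac_leq_7:.2f}")
```

With these assumed parameters the fraction comes out near two-thirds to three-quarters, consistent with the statement above; the exact value depends entirely on the adopted distribution.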
Defrère et al. (2010) also addressed the question of disk offsets, where the
geometric center of the exozodi disk is shifted away from the center of the
star, which would lead to additional flux through the modulation map of the
interferometer. They focused on a Sun-like star at 15 pc and considered only
face-on systems. When looking at the modulated signal covering the full
wavelength range, systems with up to $\sim$50 zodis and offsets as large as
$\sim$0.5 au were considered acceptable. As the offset increases, the level of
acceptable zodis decreases, but for offsets as large as 1 au, $\sim$5 zodis
could still be tolerated. It hence seems that, apart from potentially extreme
cases, disk offsets are not a major concern for _LIFE_. For reference: in the
Solar System, the center of the zodiacal cloud is shifted by only about 0.013 au from the Sun (Landgraf & Jehn, 2001).
Like the exozodiacal disks, our zodiacal dust model does not contain any substructures. As mentioned above, and shown in Fig. 23, the MIR emission
from the zodiacal dust is an important astrophysical noise source in a typical
_LIFE_ observation and the model should hence be further refined to correct
for the current overestimation of the emission shortward of 6 $\mu$m.
#### 4.3.3 Occurrence rates of small, temperate exoplanets
An additional uncertainty in our results is related to the underlying
exoplanet population. While in some parts of the parameter space the
occurrence rate of exoplanets was robustly measured by the Kepler mission,
there remains significant uncertainty related to the completeness and reliability of the occurrence rates of rocky, temperate exoplanets around
Sun-like stars (e.g., Bryson et al., 2020). We note that recent estimates for
$\eta_{\oplus}$, which is the fraction of stars with terrestrial exoplanets
within their HZ, are higher than the ones resulting from our underlying
distributions: Bryson et al. (2021) provided two values, $\eta_{\oplus}^{\rm
o}=0.58^{+0.73}_{-0.33}$ and $\eta_{\oplus}^{\rm o}=0.88^{+1.28}_{-0.51}$, for
the occurrence rate of planets with radii between 0.5 and 1.5 $R_{\oplus}$
orbiting in the eHZ of stars with effective temperatures between 4800 and 6300 K (our definition of the eHZ is identical to their “optimistic” HZ case; the superscript “o” in $\eta_{\oplus}^{\rm o}$ refers to the word “optimistic”). These bounds represent two extreme assumptions about the
extrapolation of completeness beyond orbital periods where the Kepler DR25
completeness data are available. For EECs around solar-type stars, Bryson et
al. (2021) found a lower bound of $\eta_{\oplus}^{\rm
EEC}=0.18^{+0.16}_{-0.28}$ and an upper bound of $\eta_{\oplus}^{\rm
EEC}=0.28^{+0.30}_{-0.09}$. In our simulations, $\eta_{\oplus}^{\rm o}\approx
0.37$ and $\eta_{\oplus}^{\rm EEC}\approx 0.16$ for FGK dwarfs (and
$\approx$0.56 and $\approx$0.31 for M dwarfs, respectively; see Table 1).
Hence, in particular for solar-type stars, we might be underestimating the
number of EECs and rocky, eHZ planets. Also, our value for $\eta_{\oplus}^{\rm
EEC}$ is lower than the one used in the _HabEx_ and _LUVOIR_ concept studies,
which was $\eta_{\oplus}^{\rm EEC}=0.24^{+0.46}_{-0.16}$. One reason for this
difference is that we did not keep this parameter constant throughout our stellar sample but let it vary with spectral type (see notes in Table 1). Hence, at least for this specific subset of planets, a direct
quantitative comparison between our results and the other mission studies is
not immediately straightforward. In a future study, we will further
investigate the impact of the various values of $\eta_{\oplus}$ and their
statistical and systematic uncertainties on the resulting detection yield (cf.
Léger et al., 2015).
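To leading order, the detection yield of a given planet class scales linearly with its assumed occurrence rate, which gives a quick way to gauge how sensitive the EEC numbers are to the adopted $\eta_{\oplus}$ values. In the sketch below, the occurrence rates are taken from the text, but the baseline yield value and the linear scaling itself are illustrative assumptions (the scaling ignores, for example, re-optimization of the observing strategy).

```python
# First-order rescaling of an EEC yield with the assumed occurrence rate.
eta_eec_sim = 0.16          # realized in our simulations for FGK dwarfs
eta_eec_bryson_low = 0.18   # Bryson et al. (2021) lower bound
eta_eec_bryson_high = 0.28  # Bryson et al. (2021) upper bound

yield_eec_sim = 10.0  # hypothetical number of detected EECs around FGK stars

# Linear rescaling: yield is proportional to occurrence rate
low = yield_eec_sim * eta_eec_bryson_low / eta_eec_sim
high = yield_eec_sim * eta_eec_bryson_high / eta_eec_sim
print(f"rescaled EEC yield: {low:.1f} - {high:.1f}")
```

Even this crude scaling shows that adopting the Bryson et al. (2021) central values would raise the FGK EEC yield by tens of percent, motivating the dedicated follow-up study mentioned above.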
Overall, it is clear that reanalyses of the _Kepler_ data, in combination with additional results from _K2_, _TESS_, and the upcoming _PLAnetary Transits and Oscillations of stars (PLATO)_ mission (Rauer et al., 2014), are extremely important to provide a more robust empirical basis for future updates of the analyses presented here. In particular, _PLATO_ is designed to detect Earth-like planets in the HZ of solar-like stars and will improve our knowledge of the occurrence rates and formation mechanisms of these targets.
## 5 Summary and conclusions
We have presented new and more realistic results for the exoplanet detection
yield of a space-based MIR nulling interferometer based on the exoplanet
statistics observed by the _Kepler_ mission and targeting main-sequence FGKM
stars within 20 pc of the Sun. Taking into account all major astrophysical
noise terms and adding some margin for the not yet included instrumental noise
effects (we require a S/N$\geq$7 for a detection), we find that an
interferometer array consisting of four 2 m apertures and covering the 4–18.5
$\mu$m wavelength range with a total throughput of 5% will yield, depending on
the observing strategy, between $\approx$350 and $\approx$550 directly
detected exoplanets with radii between 0.5 and 6 R⊕ within a 2.5-year search
phase. Between $\approx$160 and $\approx$190 of these exoplanets have radii
between 0.5 and 1.5 R⊕. As there is some freedom in how to assign observing
time to the stellar targets, one can attempt to maximize the number of
detected planets in certain areas of parameter space. We demonstrated this
with two scenarios where either the total number of exoplanets or the number
of rocky planets within the empirical HZ is maximized. The observing strategy
must be adapted to the overall scientific objectives of the _LIFE_ mission
since it influences the number (and types) of detected exoplanets.
Keeping the instrument throughput fixed at 5%, we find the number of
detectable exoplanets to be a strong function of aperture size. In our current
analysis, the wavelength range has a negligible impact on the exoplanet yield.
We have shown that $\approx$25–45 rocky exoplanets within the empirical HZ of
their host stars are expected to be detectable with four 2 m apertures, but
this number could go up to $\approx$60–80 if the aperture size were increased
to 3.5 m. In this case, the total number of detectable planets could go up to
$\approx$770. With four 1 m apertures, the number of rocky exoplanets within the empirical HZ would be $\leq$20 and the total detection yield $<$320.
Irrespective of aperture size, the vast majority of rocky exoplanets orbiting
within the empirical HZ are detected around M dwarfs. It will be important to
further investigate if these planets could in principle possess atmospheres
despite the high-energy UV flux and flaring activity these stars display. To
further increase the number of detected rocky HZ planets around FGK stars,
multiple visits per star during the search phase need to be considered in
future work.
All numbers presented here (i.e., the total number of detected planets and the
number of rocky planets within the HZ) are competitive with those predicted
for current mission concepts searching for exoplanets in reflected light.
Further studies investigating potential scientific and operational synergies
between a reflected light and a thermal emission mission should be considered.
Such efforts are particularly important for small temperate planets, such as
EECs, because reflected light missions have a strong bias for detecting these
objects primarily around solar-type stars, while _LIFE_ has a strong bias for
planets around M stars. We note, however, that when comparing the numbers of
detectable EECs with those predicted for the _LUVOIR_ and _HabEx_ missions,
the underlying occurrence rates are not identical. The simulations presented
here use lower values for EECs around FGK stars. This shows that care must be
taken when comparing predicted detection yields of future missions, and
additional efforts, such as obtaining new data and investigating new data
analysis approaches, are needed to further refine the statistical occurrence
rates that form the basis for all yield calculations.
Comparing the predicted primary discovery space of _LIFE_ with known
exoplanets within 10 pc of the Sun shows that there are $>$40 objects,
$\approx$15 of which have predicted radii $<$1.5 R⊕, that could be added to a
target list today. To minimize the time that future exoplanet imaging space
missions have to devote to an initial search phase, continuing ground- and
space-based detection surveys is crucial.
We note that both the overall exoplanet detection yield and the observing time
required to robustly characterize the atmospheric properties of rocky,
temperate exoplanets with an MIR interferometer are strong functions of aperture size, which must be considered in future trade-off studies. The MIR
regime is particularly rich in molecular absorption bands of the main
constituents of terrestrial exoplanet atmospheres, including major
biosignatures (e.g., Schwieterman et al., 2018; Catling et al., 2018). Also,
thermal emission spectra provide more information about the atmospheric
structure and allow for a more direct measurement of the planetary radius than
reflected light data (e.g., Line et al., 2019). The relatively high S/N
(integrated over the full wavelength range) that most detectable planets in
our simulations have suggests that decent estimates for their radii and
effective temperatures – and in some cases even rough SEDs – seem possible. In
this case, the data from a single-epoch observation obtained during the search
phase will already provide crucial information for categorizing and
prioritizing the planets for the follow-up characterization phase (for further
details, see Dannert et al., 2022).
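A common way to turn per-channel S/N values into the broadband S/N quoted above is quadrature summation over the wavelength bins, appropriate when the bins are independent and photon-noise dominated. The sketch below illustrates that rule; it is not the LIFESim implementation itself (see Dannert et al., 2022, for the actual signal-extraction formalism), and the function name and toy spectrum are assumptions.

```python
import numpy as np

def broadband_snr(signal, noise):
    """Combine per-wavelength-bin S/N values in quadrature.

    signal, noise: arrays of planet photon signal and 1-sigma noise per
    spectral bin across the instrument wavelength range.
    """
    snr_per_bin = np.asarray(signal) / np.asarray(noise)
    return np.sqrt(np.sum(snr_per_bin**2))

# Toy example: ten bins with per-bin S/N = 2.5 combine to sqrt(62.5) ~ 7.9,
# clearing the S/N >= 7 detection threshold used in this work.
snr = broadband_snr(signal=np.full(10, 5.0), noise=np.full(10, 2.0))
print(f"broadband S/N: {snr:.2f}")
```

This also makes the point in the text concrete: a planet can clear the detection threshold even when no individual spectral channel does, which is why single-epoch search-phase data can already constrain radii and effective temperatures.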
Our results show that when investigating and selecting future large exoplanet
imaging space missions, for instance in the context of ESA’s Voyage 2050
program, a concept such as _LIFE_ should be considered a serious contender and
may be required to ultimately assess the habitability of exoplanets (see, e.g., the 2018 “Exoplanet Science Strategy” report from the National Academies of Sciences, Engineering, and Medicine, available at https://www.nap.edu/catalog/25187/exoplanet-science-strategy). Taking into
account the heritage from the _Darwin_ and _TPF-I_ studies and more recent
progress based on various local activities, new coordinated efforts to further
understand and increase the technological readiness level of key components
have started as part of the _LIFE_ initiative (e.g., Gheorghe et al., 2020).
###### Acknowledgements.
We thank the anonymous referee for a critical and constructive review of the
original manuscript which helped improve the quality of the paper. This work
has been carried out within the framework of the National Centre of Competence
in Research PlanetS supported by the Swiss National Science Foundation. SPQ,
EA, and HSW acknowledge the financial support of the SNSF. SK acknowledges
funding from an ERC Starting Grant (grant agreement No. 639889). DE
acknowledges funding from the European Research Council (ERC) under the
European Union’s Horizon 2020 research and innovation programme (project Four
Aces; grant agreement No 724427). JL-B acknowledges financial support received
from “la Caixa” Foundation (ID 100010434) and the European Union’s Horizon
2020 research and innovation programme under the Marie Sklodowska-Curie grant
agreement No 847648, with fellowship code LCF/BQ/PI20/11760023. RA is a
Trottier Postdoctoral Fellow and acknowledges support from the Trottier Family
Foundation. This work was supported in part through a grant from FRQNT. TL
acknowledges funding from the Simons Foundation (SCOL award No. 611576). Part
of this work was conducted at the Jet Propulsion Laboratory, California
Institute of Technology, under contract with NASA. This research has made use
of the SIMBAD database, operated at CDS, Strasbourg, France, and of the
Washington Double Star Catalog maintained at the U.S. Naval Observatory. This
research has made use of the NASA Exoplanet Archive, which is operated by the
California Institute of Technology, under contract with the National
Aeronautics and Space Administration under the Exoplanet Exploration Program.
This research has made use of the following Python packages: astropy (Astropy
Collaboration et al., 2013, 2018), matplotlib (Hunter, 2007), numpy (Van Der
Walt et al., 2011), and scipy (Virtanen et al., 2020).
_Author contributions:_ SPQ initiated the project, devised the analyses and
wrote the manuscript. MO and FD wrote the LIFESim tool and created the
figures. AGh and EF contributed to the LIFESim tool. JK simulated the
exoplanet populations. FM created the _LIFE_ target star catalog. All authors
discussed the results and commented on the manuscript.
## References
* Angel & Woolf (1997) Angel, J. R. P. & Woolf, N. J. 1997, ApJ, 475, 373
* Anglada-Escudé et al. (2016) Anglada-Escudé, G., Amado, P. J., Barnes, J., et al. 2016, Nature, 536, 1
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
* Astudillo-Defru et al. (2017) Astudillo-Defru, N., Díaz, R. F., Bonfils, X., et al. 2017, Astronomy & Astrophysics, 605, L11
* Atri & Mogan (2021) Atri, D. & Mogan, S. R. C. 2021, MNRAS, 500, L1
* Berta-Thompson et al. (2015) Berta-Thompson, Z. K., Irwin, J., Charbonneau, D., et al. 2015, Nature, 527, 204
* Borucki et al. (2010) Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977
* Boutle et al. (2017) Boutle, I. A., Mayne, N. J., Drummond, B., et al. 2017, A&A, 601, A120
* Bowens et al. (2021) Bowens, R., Meyer, M. R., Delacroix, C., et al. 2021, A&A, 653, A8
* Bracewell (1978) Bracewell, R. N. 1978, Nature, 274, 780
* Brandl et al. (2021) Brandl, B., Bettonvil, F., van Boekel, R., et al. 2021, The Messenger, 182, 22
* Brandl et al. (2018) Brandl, B. R., Absil, O., Agócs, T., et al. 2018, in Proceedings of the SPIE, ed. H. Takami, C. J. Evans, & L. Simard, Leiden Univ. (Netherlands) (SPIE), 107021U
* Bryson et al. (2020) Bryson, S., Coughlin, J., Batalha, N. M., et al. 2020, AJ, 159, 279
* Bryson et al. (2021) Bryson, S., Kunimoto, M., Kopparapu, R. K., et al. 2021, AJ, 161, 36
* Cabrera et al. (2020) Cabrera, M. S., McMurtry, C. W., Forrest, W. J., et al. 2020, Journal of Astronomical Telescopes, Instruments, and Systems, 6, 011004
* Catling et al. (2018) Catling, D. C., Krissansen-Totton, J., Kiang, N. Y., et al. 2018, Astrobiology, 18, 709
* Chen & Kipping (2016) Chen, J. & Kipping, D. 2016, The Astrophysical Journal, 834, 17
* Colavita et al. (2009) Colavita, M. M., Serabyn, E., Millan-Gabet, R., et al. 2009, PASP, 121, 1120
* Danchi & Barry (2010) Danchi, W. C. & Barry, R. K. 2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7734, Optical and Infrared Interferometry II, ed. W. C. Danchi, F. Delplancke, & J. K. Rajagopal, 77340M
* Danchi et al. (2008) Danchi, W. C., Barry, R. K., Lawson, P. R., Traub, W. A., & Unwin, S. 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7013, Optical and Infrared Interferometry, 70132Q
* Dandumont et al. (2020) Dandumont, C., Defrère, D., Kammerer, J., et al. 2020, Journal of Astronomical Telescopes, Instruments, and Systems, 6, 035004
* Dannert et al. (2022) Dannert, F., Ottiger, M., Quanz, S. P., et al. 2022, arXiv e-prints, arXiv:2203.00471
* Defrère et al. (2018a) Defrère, D., Absil, O., Berger, J. P., et al. 2018a, Experimental Astronomy, 46, 475
* Defrère et al. (2010) Defrère, D., Absil, O., den Hartog, R., Hanot, C., & Stark, C. 2010, A&A, 509, A9
* Defrère et al. (2018b) Defrère, D., Léger, A., Absil, O., et al. 2018b, Experimental Astronomy, 46, 543
* Des Marais et al. (2002) Des Marais, D. J., Harwit, M. O., Jucks, K. W., et al. 2002, Astrobiology, 2, 153
* Díaz et al. (2019) Díaz, R. F., Delfosse, X., Hobson, M. J., et al. 2019, A&A, 625, A17
* Dressing & Charbonneau (2015) Dressing, C. D. & Charbonneau, D. 2015, ApJ, 807, 45
* Dulz et al. (2020) Dulz, S. D., Plavchan, P., Crepp, J. R., et al. 2020, ApJ, 893, 122
* Ertel et al. (2020) Ertel, S., Defrère, D., Hinz, P., et al. 2020, AJ, 159, 177
* Ertel et al. (2018) Ertel, S., Defrère, D., Hinz, P., et al. 2018, AJ, 155, 194
* Gaudi et al. (2020) Gaudi, B. S., Seager, S., Mennesson, B., et al. 2020, arXiv e-prints, arXiv:2001.06683
* Gheorghe et al. (2020) Gheorghe, A. A., Glauser, A. M., Ergenzinger, K., et al. 2020, in Optical and Infrared Interferometry and Imaging VII, ed. P. G. Tuthill, A. Merand, & S. Sallum, Vol. 11446, International Society for Optics and Photonics (SPIE), 497 – 505
* Gillon et al. (2017) Gillon, M., Demory, B.-O., Van Grootel, V., et al. 2017, Nature Astronomy, 1, 0056
* Gillon et al. (2017) Gillon, M., Triaud, A. H. M. J., Demory, B.-O., et al. 2017, Nature, 542, 456
* Godolt et al. (2019) Godolt, M., Tosi, N., Stracke, B., et al. 2019, A&A, 625, A12
* He et al. (2019) He, M. Y., Ford, E. B., & Ragozzine, D. 2019, MNRAS, 490, 4575
* Hinz et al. (2014) Hinz, P., Bailey, V. P., Defrère, D., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9146, Optical and Infrared Interferometry IV, ed. J. K. Rajagopal, M. J. Creech-Eakman, & F. Malbet, 91460T
* Holman & Wiegert (1999) Holman, M. J. & Wiegert, P. A. 1999, AJ, 117, 621
* Hunter (2007) Hunter, J. D. 2007, Computing In Science & Engineering, 9, 90
* Janson et al. (2018) Janson, M., Brandeker, A., Boehm, C., & Martins, A. K. 2018, Future Astrometric Space Missions for Exoplanet Science, ed. H. J. Deeg & J. A. Belmonte, 87
* Jeffers et al. (2020) Jeffers, S. V., Dreizler, S., Barnes, J. R., et al. 2020, Science, 368, 1477
* Johnstone et al. (2019) Johnstone, C. P., Khodachenko, M. L., Lüftinger, T., et al. 2019, Astronomy & Astrophysics, 624, L10
* Kammerer & Quanz (2018) Kammerer, J. & Quanz, S. P. 2018, A&A, 609, A4
* Kasper et al. (2021) Kasper, M., Cerpa Urra, N., Pathak, P., et al. 2021, The Messenger, 182, 38
* Kasting et al. (1993) Kasting, J. F., Whitmire, D. P., & Reynolds, R. T. 1993, Icarus, 101, 108
* Kelsall et al. (1998) Kelsall, T., Weiland, J. L., Franz, B. A., et al. 1998, ApJ, 508, 44
* Kennedy et al. (2015) Kennedy, G. M., Wyatt, M. C., Bailey, V., et al. 2015, ApJS, 216, 23
* Koll et al. (2019) Koll, D. D. B., Malik, M., Mansfield, M., et al. 2019, The Astrophysical Journal, 886, 140
* Konrad et al. (2021) Konrad, B. S., Alei, E., Angerhausen, D., et al. 2021, arXiv e-prints, arXiv:2112.02054
* Kopparapu et al. (2018) Kopparapu, R. K., Hébrard, E., Belikov, R., et al. 2018, ApJ, 856, 122
* Kopparapu et al. (2013) Kopparapu, R. k., Ramirez, R., Kasting, J. F., et al. 2013, The Astrophysical Journal, 765, 131
* Kopparapu et al. (2014) Kopparapu, R. K., Ramirez, R. M., SchottelKotte, J., et al. 2014, ApJ, 787, L29
* Kraus et al. (2016) Kraus, A. L., Ireland, M. J., Huber, D., Mann, A. W., & Dupuy, T. J. 2016, AJ, 152, 8
* Kreidberg & Loeb (2016) Kreidberg, L. & Loeb, A. 2016, The Astrophysical Journal Letters, 832, L12
* Krick et al. (2012) Krick, J. E., Glaccum, W. J., Carey, S. J., et al. 2012, ApJ, 754, 53
* Krissansen-Totton et al. (2018) Krissansen-Totton, J., Garland, R., Irwin, P., & Catling, D. C. 2018, The Astronomical Journal, 156, 114
* Kunimoto & Matthews (2020) Kunimoto, M. & Matthews, J. M. 2020, AJ, 159, 248
* Landgraf & Jehn (2001) Landgraf, M. & Jehn, R. 2001, Ap&SS, 278, 357
* Lay (2004) Lay, O. P. 2004, Appl. Opt., 43, 6100
* Lay (2005) Lay, O. P. 2005, Applied Optics IP, 44, 5859
* Lay (2006) Lay, O. P. 2006, in SPIE Astronomical Telescopes + Instrumentation, ed. J. D. Monnier, M. Schöller, & W. C. Danchi (SPIE), 62681A–14
* Lay et al. (2007) Lay, O. P., Martin, S. R., & Hunyadi, S. L. 2007, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6693, Techniques and Instrumentation for Detection of Exoplanets III, ed. D. R. Coulter, 66930A
* Leconte et al. (2015) Leconte, J., Wu, H., Menou, K., & Murray, N. 2015, Science, 347, 632
* Léger et al. (2019) Léger, A., Defrère, D., García Muñoz, A., et al. 2019, Astrobiology, 19, 797
* Léger et al. (2015) Léger, A., Defrère, D., Malbet, F., Labadie, L., & Absil, O. 2015, ApJ, 808, 194
* Leger et al. (1995) Leger, A., Puget, J. L., Mariotti, J. M., Rouan, D., & Schneider, J. 1995, Ap&SS, 223, 172
* Line et al. (2019) Line, M., Quanz, S. P., Schwieterman, E. W., et al. 2019, BAAS, 51, 271
* Lingam & Loeb (2018) Lingam, M. & Loeb, A. 2018, J. Cosmology Astropart. Phys., 2018, 020
* Lovis et al. (2017) Lovis, C., Snellen, I., Mouillet, D., et al. 2017, Astronomy & Astrophysics, 599, A16
* Luger & Barnes (2015) Luger, R. & Barnes, R. 2015, Astrobiology, 15, 119
* MacGregor et al. (2018) MacGregor, M. A., Weinberger, A. J., Wilner, D. J., Kowalski, A. F., & Cranmer, S. R. 2018, The Astrophysical Journal Letters, 855, L2
* Malbet & Sozzetti (2018) Malbet, F. & Sozzetti, A. 2018, Astrometry as an Exoplanet Discovery Method, ed. H. J. Deeg & J. A. Belmonte, 196
* Marconi (2020) Marconi, A. 2020, in Ground-based and Airborne Instrumentation for Astronomy VIII, ed. C. J. Evans, J. J. Bryant, & K. Motohara, Vol. 11447, International Society for Optics and Photonics (SPIE), 414 – 425
* Martin et al. (2012) Martin, S., Booth, A., Liewer, K., et al. 2012, Appl. Opt., 51, 3907
* Mason et al. (2001) Mason, B. D., Wycoff, G. L., Hartkopf, W. I., Douglass, G. G., & Worley, C. E. 2001, AJ, 122, 3466
* Mayor et al. (2011) Mayor, M., Marmier, M., Lovis, C., et al. 2011, arXiv e-prints, arXiv:1109.2497
* Meixner et al. (2019) Meixner, M., Cooray, A., Leisawitz, D., et al. 2019, arXiv e-prints, arXiv:1912.06213
* Morley et al. (2017) Morley, C. V., Kreidberg, L., Rustamkulov, Z., Robinson, T., & Fortney, J. J. 2017, The Astrophysical Journal, 850, 121
* Mugnier et al. (2006) Mugnier, L., Thiébaut, E., & Belu, A. 2006, in EAS Publications Series, Vol. 22, EAS Publications Series, ed. M. Carbillet, A. Ferrari, & C. Aime, 69–83
* National Academies of Sciences, Engineering, and Medicine (2021) National Academies of Sciences, Engineering, and Medicine. 2021, Pathways to Discovery in Astronomy and Astrophysics for the 2020s (Washington, DC: The National Academies Press)
* Oshagh et al. (2017) Oshagh, M., Santos, N. C., Figueira, P., et al. 2017, A&A, 606, A107
* Pecaut & Mamajek (2013) Pecaut, M. J. & Mamajek, E. E. 2013, ApJS, 208, 9
* Peñín et al. (2020) Peñín, L. F., Scoarnec, Y., Fernández-Ibarz, J. M., et al. 2020, in Proceedings of the Small Satellite Conference, Technical Session I: Space Mission Architectures, Paper 02
* Pierrehumbert & Gaidos (2011) Pierrehumbert, R. & Gaidos, E. 2011, ApJ, 734, L13
* Quanz et al. (2021) Quanz, S. P., Absil, O., Benz, W., et al. 2021, Experimental Astronomy
* Quanz et al. (2015) Quanz, S. P., Crossfield, I., Meyer, M. R., Schmalzl, E., & Held, J. 2015, International Journal of Astrobiology, 14, 279
* Quanz et al. (2018) Quanz, S. P., Kammerer, J., Defrère, D., et al. 2018, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10701, Optical and Infrared Interferometry and Imaging VI, 107011I
* Rauer et al. (2014) Rauer, H., Catala, C., Aerts, C., et al. 2014, Experimental Astronomy, 38, 249
* Reach (2010) Reach, W. T. 2010, Icarus, 209, 848
* Ribas et al. (2016) Ribas, I., Bolmont, E., Selsis, F., et al. 2016, Astronomy & Astrophysics, 596, A111
* Ricker et al. (2015) Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
* Rieke et al. (2015) Rieke, G. H., Ressler, M. E., Morrison, J. E., et al. 2015, PASP, 127, 665
* Sakon et al. (2018) Sakon, I., Roellig, T. L., Ennico-Smith, K., et al. 2018, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10698, Space Telescopes and Instrumentation 2018: Optical, Infrared, and Millimeter Wave, ed. M. Lystrup, H. A. MacEwen, G. G. Fazio, N. Batalha, N. Siegler, & E. C. Tong, 1069817
* Schwieterman et al. (2018) Schwieterman, E. W., Kiang, N. Y., Parenteau, M. N., et al. 2018, Astrobiology, 18, 663
* Stark et al. (2019) Stark, C., Belikov, R., Bolcar, M., et al. 2019, in American Astronomical Society Meeting Abstracts, Vol. 233, American Astronomical Society Meeting Abstracts #233, 402.05
* Stark et al. (2015) Stark, C. C., Roberge, A., Mandell, A., et al. 2015, ApJ, 808, 149
* Stark et al. (2014) Stark, C. C., Roberge, A., Mandell, A., & Robinson, T. D. 2014, ApJ, 795, 122
* The LUVOIR Team (2019) The LUVOIR Team. 2019, arXiv e-prints, arXiv:1912.06219
* Thiébaut & Mugnier (2006) Thiébaut, E. & Mugnier, L. 2006, in IAU Colloq. 200: Direct Imaging of Exoplanets: Science & Techniques, ed. C. Aime & F. Vakili, 547–552
* Tian & Ida (2015) Tian, F. & Ida, S. 2015, Nature Geoscience, 8, 177
* Tinetti et al. (2018) Tinetti, G., Drossart, P., Eccleston, P., et al. 2018, Experimental Astronomy, 46, 135
* Tuomi et al. (2019) Tuomi, M., Jones, H. R. A., Butler, R. P., et al. 2019, arXiv e-prints, arXiv:1906.04644
* Turbet et al. (2016) Turbet, M., Leconte, J., Selsis, F., et al. 2016, Astronomy & Astrophysics, 596, A112
* Van Der Walt et al. (2011) Van Der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science & Engineering, 13, 22
* Vanderspek et al. (2019) Vanderspek, R., Huang, C. X., Vanderburg, A., et al. 2019, The Astrophysical Journal Letters, 871, L24
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261
* Wenger et al. (2000) Wenger, M., Ochsenbein, F., Egret, D., et al. 2000, A&AS, 143, 9
* Zechmeister et al. (2019) Zechmeister, M., Dreizler, S., Ribas, I., et al. 2019, Astronomy & Astrophysics, 627, A49
## Appendix A Stellar sample
The stellar sample was compiled by querying the SIMBAD database (http://simbad.cds.unistra.fr/simbad/; Wenger et al. 2000) for
objects within 20 pc. We removed the substellar objects (planets and brown
dwarfs) using the object type parameter. For the remaining stellar objects we
focused on main-sequence stars as indicated by their luminosity class (where no luminosity class was given, we assumed the objects to be on the main sequence). Based on the spectral type of the object we then
assigned effective temperature, radius, and mass using the relations published in Table 5 of Pecaut & Mamajek (2013), which are based on empirical data. In
order to assess which objects are members of binary or multiple systems we
used SIMBAD’s hierarchical link. This feature connects an object with its
parent and child objects. To keep the complexity of our multiple star sample
low we decided to only use wide binaries where planetary orbits are possible
around both components and thus are most similar to orbits around single
stars. We therefore excluded all systems with more than two components and
also those that had binary subtypes as SIMBAD object type. This was necessary
because not all binary components had their own SIMBAD entry (for example, when the components cannot be observed individually because their separation is too small). In that case the system has no child objects and we cannot distinguish it from a
single star. We also removed objects with incomplete information for the
stellar parameters (this step also included binaries, where one component was
not listed as a main-sequence star). To obtain an estimate for the separation
between the remaining binary systems, we cross-matched the system position
with the Washington Visual Double Star Catalog (WDS; Mason et al. 2001) by
drawing a circle with a radius of 1 arcsec around the system position. If the
position of a WDS object was close enough to fall within the circle we assumed
that the two objects were the same physical system. In the case the WDS
catalog had more components listed per system than SIMBAD (e.g., because of
background stars) we took the separation between the two main components. The
separation is normally given for two different observations. We kept the
smaller one as the assumed physical separation of the binary system. In order
to ensure that all stellar components of the remaining binary systems could
harbor stable planetary systems between 0.5 and 10 AU we used the stability
criterion from Holman & Wiegert (1999) that takes into account the stellar
masses, their separation and the eccentricity of the binary orbit (which we
assumed to be zero). Systems that did not fulfill this stability criterion
were removed.
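The stability cut above can be sketched as follows. For S-type (circumstellar) orbits in a circular binary ($e=0$), the empirical criterion of Holman & Wiegert (1999) reduces to a critical semimajor axis $a_c = (0.464 - 0.380\,\mu)\,a_b$, with $\mu = m_2/(m_1+m_2)$ the mass ratio and $a_b$ the binary separation. The helper names below and the application of the 10 au requirement to both components are our reading of the procedure described in the text.

```python
# Sketch of the binary-stability cut, assuming circular binary orbits (e = 0).
# With e = 0, only the first two terms of the Holman & Wiegert (1999)
# S-type criterion remain.

def critical_semimajor_axis(m_star, m_companion, a_binary_au):
    """Largest stable circumstellar orbit (au) around a star of mass m_star
    with a companion of mass m_companion at separation a_binary_au (e = 0)."""
    mu = m_companion / (m_star + m_companion)
    return (0.464 - 0.380 * mu) * a_binary_au

def keep_binary(m1, m2, a_binary_au, a_outer_au=10.0):
    """Keep the system only if planetary orbits out to a_outer_au
    are stable around BOTH components."""
    return (critical_semimajor_axis(m1, m2, a_binary_au) >= a_outer_au
            and critical_semimajor_axis(m2, m1, a_binary_au) >= a_outer_au)

# Two solar-mass stars: mu = 0.5, so a_c = 0.274 * a_b and the separation
# must exceed ~36.5 au for 10 au orbits to survive around both components.
print(keep_binary(1.0, 1.0, 100.0))  # wide pair: passes the cut
print(keep_binary(1.0, 1.0, 20.0))   # close pair: removed
```

The masses enter only through the mass ratio, so for near-equal-mass wide binaries the cut effectively becomes a minimum-separation threshold.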
The catalog that was the basis for our simulations consists of 1732 stars in
total (123 wide binary components and 1609 single stars). The distance and
spectral type distributions are shown in Fig. 15. The catalog is available
upon request. We are continuously improving the catalog and plan for an
updated version to include the results from _Gaia_ data release 3.
Figure 15: Properties of target stars considered in our study. Top panel:
Distance distribution of stars from the stellar input catalog in bins of 2.5
pc. Single stars are shown in orange, wide binaries (see text) in blue, and
the sum in green. Bottom panel: Spectral type distribution of stars from the
stellar input catalog. Colors are the same as above.
## Appendix B Technological readiness
The readiness of key technologies relevant for a space mission such as _LIFE_
are summarized in Defrère et al. (2018b) and Quanz et al. (2021), but given
their importance for the mission success, we discuss three aspects in the
following:
Nulling interferometry: The _LIFE_ measurement principle has already been demonstrated successfully in the laboratory at ambient temperatures (Martin et al. 2012). One of the next steps is to build a
corresponding experiment, but fully under cryogenic conditions, for a broad
wavelength coverage and with sensitivity as one of the key drivers (as needed
for a space mission). This is underway in the form of the _Nulling
Interferometry Cryogenic Experiment_ at ETH Zurich (Gheorghe et al. 2020). In
addition, new nulling instruments have been proposed for the _Very Large
Telescope Interferometer_, including one working in the 3–5 $\mu$m range
(Defrère et al. 2018a). These efforts join previous successful projects
related to N-band ground-based nulling interferometry, with the Keck Nuller
(Colavita et al. 2009) and the LBTI (Hinz et al. 2014). For _LIFE_ it will be
important to leverage the experience from these projects and realize possible
synergies.
Autonomous formation flying: To meet the assumptions made for our simulations,
a high level of autonomous formation flying of all spacecraft will be
required. Specifically, the baselines of the array should be rearranged for
every new target star in order to maximize the transmission of photons from a
specific distance from the star, and, during an observation, the array should
rotate in order to modulate the signal from potential exoplanets. ESA’s
Proba-3 mission (current launch date: 2023) will demonstrate many critical
aspects of formation flying for _LIFE_. Proba-3 will feature two cube-shaped spacecraft with side lengths of $\sim$1.5 m and masses of 200–300 kg and will
maintain a virtual “rigid structure” with millimeter and arcsecond relative
precision. In addition, Proba-3 aims specifically at demonstrating manoeuvres
relevant for _LIFE_ , such as station-keeping at distances from 25 m up to 250
m, approaching and separating in formation without losing millimeter
precision, repointing the formation, and the combination of station-keeping,
resizing, and re-targeting manoeuvres. Details can be found in Peñín et al.
(2020). We note that some relevant technology related to formation flying was
developed in both France and Germany in the context of the _Darwin_ mission
and has been flying on the Swedish _Prototype Research Instruments and Space
Mission technology Advancement_ space
experiment (https://earth.esa.int/web/eoportal/satellite-missions/p/prisma-prototype).
High quantum-efficiency, low-noise MIR detectors: The low photon rate of
exoplanets (cf. Fig. 2) and the need to integrate for many hours, if not days,
will put strong requirements on the detector technology in terms of quantum
efficiency (in our simulations we assumed 70%; see Sect. 2.3), low read-out
noise and dark current, and high stability. In addition, a wavelength coverage
of at least $\sim$4–18.5 $\mu$m is required based on atmospheric retrieval
analyses quantifying the characterization potential of _LIFE_ for Earth-like
atmospheres (cf. Konrad et al. 2021). In this overall context, the technology
development plan for the 5.9-meter _Origins Space Telescope_
(https://asd.gsfc.nasa.gov/firs/docs/OriginsVolume2TechDevelopmentPlanREDACTED.pdf),
another mission concept proposed in the context of NASA’s 2020 astrophysics
decadal survey (Meixner et al. 2019), is of great importance as its proposed
MISC instrument (Sakon et al. 2018) would cover the same wavelength range as
_LIFE_. While a detailed noise budget and requirements breakdown for _LIFE_
are still being worked on, it is clear that two types of detector technologies
can be considered (and eventually traded): HgCdTe detectors (as, for instance,
used in NEOCam) and Si:As detectors (as used in _JWST/MIRI_). Currently, the
3-11 $\mu$m range is better covered by HgCdTe than Si:As since the latter is
basically transparent below 10 $\mu$m wavelengths. First efforts to go up to
15 $\mu$m with HgCdTe detectors were already reported (Cabrera et al. 2020)
and new arrays with cutoff wavelength $>$16 $\mu$m have been grown,
hybridized, and packaged and are undergoing testing (cf. _OST_ Technology
Development Plan). A key parameter for HgCdTe detectors will be the achievable
dark current. For Si:As detectors, in addition to potential challenges related
to dark current and 1/f noise, a general problem is the question of
availability. Industrial fabricators that have built these detectors in the
past (e.g., for _Spitzer_ , _WISE_ , and _JWST_) have stopped their production
of low background detectors in the relevant wavelength range and it is unclear
under what conditions and on what timescales restarting the production would
be an option.
## Appendix C Additional figures for non-reference cases
Figures 16, 17, 18, 19, 20, and 21 show the expected detection yield for the
non-reference cases. Figure 22 shows the detection efficiency for FGK-dwarf
host stars (i.e., ignoring M dwarfs).
Figure 16: Same as Fig. 3, but now for $D=1.0$ m.
Figure 17: Same as Fig. 3, but now for $D=3.5$ m.
Figure 18: Impact of the aperture size on the exoplanet detection yield. The
numbers are relative to those shown for the reference scenarios with $D=2$ m
in Fig. 4. Left: Scenario 1, with $D=3.5$ m in the top panel and $D=1.0$ m in
the bottom panel. Right: Scenario 2, with $D=3.5$ m in the top panel and
$D=1.0$ m in the bottom panel.
Figure 19: Same as Fig. 3, but now for a wavelength range of 3–20 $\mu$m.
Figure 20: Same as Fig. 3, but now for a wavelength range of 6–17 $\mu$m.
Figure 21: Impact of the wavelength coverage on the exoplanet detection yield.
The numbers are relative to those shown for the reference scenarios with
$\lambda=4-18.5\,\mu m$ in Fig. 4. Left: Scenario 1, with $\lambda=3-20\,\mu
m$ in the top panel and $\lambda=6-17\,\mu m$ in the bottom panel. Right:
Scenario 2, with $\lambda=3-20\,\mu m$ in the top panel and $\lambda=6-17\,\mu
m$ in the bottom panel.
Figure 22: Same as Fig. 7, but ignoring M stars and only considering FGK
stars.
## Appendix D Distribution of noise contributions for detected planets in the
reference case scenarios
Figure 23 summarizes the contribution of the main noise terms to the overall
noise budget for detected planets orbiting FGK stars or M stars.
Figure 23: Donut charts illustrating how the main noise terms considered in
our simulations (stellar leakage and noise from the thermal emission of the
local and exozodiacal dust disks) contribute to the overall noise budget for
the detected planets. The left column shows the results for planets detected
around FGK stars and the right column for planets detected around M stars. The
top row is for reference case scenario 1 and the bottom row for reference case
scenario 2. The numbers correspond to the mean relative contribution of the
various noise terms to the total noise per detected planet averaged over all
planets. The quoted uncertainties are the corresponding standard deviations
(graphically indicated by the colored arcs inside and outside of the donuts).
For M stars, noise from exozodiacal dust disks is basically negligible, and
stellar leakage and noise from the zodiacal dust disk contribute equally to
the total noise in both scenarios. This trend is generally the same for FGK
stars in scenario 1, even though the relative share of exozodi noise is
larger. For scenario 2, however, stellar leakage clearly dominates the noise
budget of planets around FGK stars.
# Combined Newton-Raphson and Streamlines-Upwind Petrov-Galerkin iterations
for nano-particles transport in buoyancy driven flow
M. K. Riahi<EMAIL_ADDRESS>, M. Ali, Y. Addad, E. Abu-Nada
Department of Applied Mathematics, Khalifa University, PO Box 127788, Abu Dhabi, UAE
Department of Mechanical Engineering, Khalifa University, PO Box 127788, Abu Dhabi, UAE
Department of Nuclear Engineering, Khalifa University, PO Box 127788, Abu Dhabi, UAE
Emirates Nuclear Technology Center, Khalifa University, PO Box 127788, Abu Dhabi, UAE
(January 01, 2021)
###### Abstract
The present study deals with the finite element discretization of nanofluid
convective transport in an enclosure with variable properties. We study the
Buongiorno model, which couples the Navier-Stokes equations for the base
fluid, an advective-diffusion equation for the heat transfer, and an advection
dominated nanoparticle fraction concentration subject to thermophoresis and
Brownian motion forces. We develop an iterative numerical scheme that combines
Newton’s method (dedicated to the resolution of the momentum and energy
equations) with the transport equation that governs the nanoparticles
concentration in the enclosure. We show that a Streamline-Upwind Petrov-Galerkin
(SUPG) regularization approach is required to properly solve the ill-posed
Buongiorno transport model, which is tackled as a variational problem under a
mean value constraint. Non-trivial numerical computations are reported to show the
effectiveness of our proposed numerical approach in its ability to provide
reasonably good agreement with the experimental results available in the
literature. The numerical experiments demonstrate that by accounting for only
the thermophoresis and Brownian motion forces in the concentration transport
equation, the model is not able to reproduce the heat transfer impairment due
to the presence of suspended nanoparticles in the base fluid. It reveals,
however, the significant role that these two terms play in the vicinity of the
hot and cold walls.
###### keywords:
Nanofluid, Navier-Stokes equation, Newton-Raphson method, advection-dominated
equation, finite element method, Streamline-Upwind Petrov-Galerkin.
## 1 Introduction
Natural convection, or natural circulation, is a phenomenon in which fluid
recirculates due to an applied temperature difference: hot (light) fluid
tends to rise, while colder (heavy) fluid tends to fall. This
phenomenon occurs in several engineering applications, such as chip cooling,
large-compartment ventilation, and passive cooling in heat exchangers, as well
as in ocean dynamics and weather applications.
The suspension of nano-sized particles in a base fluid, a mixture referred to
as a nanofluid, has been an attractive approach in heat transfer engineering
over the last two decades [1, 2, 3]. Several engineering
applications in heat transfer were investigated including natural convection,
combined convection, heat transfer in electronic cooling, and renewable energy
[4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. Due to the large number of publications
on the use of nanofluids to enhance the heat transfer rate in thermal engineering
systems, several reviews have been conducted, such as Jabbari et al. [13] and
Khodadadi et al. [14], Fan and Wang [15], Kakaç and Pramuanjaroenkij [16],
Buongiorno et al. [17], Sheikholeslami and Ganji [18], and Manca et al. [2].
The role of nanofluids in augmenting the heat transfer rate in forced
convection is accepted in the research community, where nanofluids have been
found to be very useful in enhancing the performance of forced convective
flows. Conversely, the role of nanofluids in natural
convection is still controversial. For example, most theoretical studies reported enhancement in heat
transfer due to the dispersion of the nanoparticles in base fluids. However,
several experiments reported a deterioration in heat transfer due to the
addition of nanoparticles to base fluids, Wen and Ding [19], Li and Peterson
[20], Ho et al. [21], and Putra et al. [22]. In some experimental
measurements, a significant increase in the effective thermal conductivity of
mixtures of certain types of small solid particles with dilute water has
been noticed (see [21] and [23]). Due to this claimed significant improvement
of transport properties of the nanofluids, nanofluid technology has attracted
many applications including the cooling of electronic chips, cooling of
smaller internal combustion engines, and biomedical technology.
Furthermore, throughout its development and use, nanofluid technology has been
controversial in some of its aspects. In fact, increasing the nanoparticle
concentration does not necessarily mean an improvement of the heat transfer
rate; to this day, researchers have not converged to a common conclusion on
heat transfer enhancement using nanoparticles. Nonetheless, the experimental
results of Ho et al. [21] provide
a clear evidence that heat transfer enhancement using nanofluid is actually
possible with a low nanoparticles concentration (i.e. for volume fractions
below 2%). For high Rayleigh number in particular, the dispersed nanoparticles
in the base fluid are able to contribute to heat transfer rate enhancement of
around 18% when compared to pure water. For concentrations equal to or greater
than 2%, on the other hand, it was observed that the addition of nanoparticles
would have a negative impact on the heat transfer rate.
One of the most widely used mathematical models for studying the heat transfer
of nanofluids is the Buongiorno model [5]. In this model, the flow around the
nanoparticles is regarded as a continuum. It assumes that the only mechanisms
causing the nanoparticles to develop a slip velocity with respect to the base
fluid are: i) the Brownian diffusion resulting from continuous collisions
between the nanoparticles and the molecules of the base fluid and ii) the
thermophoresis representing the nanoparticles diffusion due to temperature
gradient in the domain. Other slip mechanisms considered by Buongiorno in his
analysis, namely diffusiophoresis, the lift force, fluid drainage,
and gravity settling, are considered negligible. As a result, the model
reflects the fact that Brownian diffusion and thermophoresis are the
only forces responsible for the diffusivity of the nanoparticle concentration;
hence, together with the main stream, they update the density of the nanofluid
via a convection-dominated partial differential equation (PDE).
In this paper, we use the Buongiorno model to investigate the heat
transfer enhancement using nanofluid. We develop and analyze a new numerical
iterative scheme based on a finite element method (FEM) of the steady state
PDEs. The nanofluid model at hand then consists of four equations: i) the
continuity equation; ii) the momentum equation, which is the classical Navier-
Stokes equation subject to the buoyancy force; iii) the energy equation, which
is the convected heat equation; and iv) the nanoparticle transport equation.
The latter is an advection-dominated PDE, since the Péclet number (the ratio of
the convection rate to the diffusion rate) is very high, which leads to
spurious numerical oscillations when using the finite difference method or FEM
[24]. The convection-dominated nanoparticle transport equation is indeed
ill-posed and has to be regularized in order to stabilize the calculation; see
for instance [25], and also [26, 27] for an adaptive procedure based on a
posteriori error estimates.
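Although the SUPG formulation itself appears later in the paper, the role of the element Péclet number can be illustrated with the classical textbook stabilization parameter $\tau = \frac{h}{2\|u\|}\left(\coth \mathrm{Pe} - \frac{1}{\mathrm{Pe}}\right)$. This particular choice of $\tau$ is a standard form shown for illustration, not necessarily the exact choice used by the authors:

```python
import math

def supg_tau(h, u_norm, diffusivity):
    """Classical SUPG stabilization parameter for an element of size h
    (a standard textbook form, assumed here for illustration):
    tau = h/(2|u|) * (coth(Pe) - 1/Pe), with element Peclet number
    Pe = |u| h / (2 D). tau -> h/(2|u|) in the advection-dominated limit."""
    if u_norm == 0.0:
        return 0.0  # pure diffusion: no streamline stabilization needed
    pe = u_norm * h / (2.0 * diffusivity)
    return (h / (2.0 * u_norm)) * (1.0 / math.tanh(pe) - 1.0 / pe)
```

For a very high Péclet number (the regime of the nanoparticle transport equation), `supg_tau` approaches `h / (2 * u_norm)`, i.e. the stabilization scales with the element size and inversely with the local velocity.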
The momentum and energy equations are coupled through the convection and
buoyancy terms, while the nanoparticle transport equation plays the role of
fluid density regulator that has a crucial role in the variation of the fluid
viscosity, hence on the shear stresses of the fluid. Particle migration is
also impacted by thermophoresis forces and other sub-scale forces modeled
through Brownian motion. A theoretical mathematical functional analysis of
the Buongiorno model is studied in [28] where a mollified regularizing problem
is set up and is shown to be weakly convergent toward the nanofluid Buongiorno
model. A similar nanofluid model has been derived and studied in [29].
The resulting coupled system represents the dynamics of the nanofluid inside a
differentially heated cavity. We supplement the problem at hand with the
following appropriate boundary conditions: no-slip velocity, adiabatic
horizontal walls, specifically heated and cooled walls (non-homogeneous
Dirichlet conditions), and a homogeneous Neumann boundary condition for the
nanoparticle transport equation, which simply expresses that no particles are
allowed to exit the enclosure (the particle flux is null).
For the numerical discretization of such Oberbeck-Boussinesq equations we
refer to [30, 31, 32]. We also refer to the book [33] for a complete finite
element analysis for the Navier-Stokes equations. In the present work we
consider the pair P2–P1 continuous Taylor–Hood elements [34] for the
discretization of the momentum equation, and consider the P1-continuous finite
elements for the energy equation. The lowest order Taylor-Hood P2-P1 is one of
the most popular FE pairs for incompressible fluid problems. Indeed, it
satisfies the inf-sup stability condition for almost regular meshes and
isotropic meshes with moderate aspect ratio [35].
The momentum and energy equations are solved iteratively through the Newton-
Raphson method; see for instance [33]. We use P2-continuous FE for the
nanoparticle transport equation for regularization aspects that we
discuss subsequently.
The rest of the paper is organized as follows: We present in Section 2 the
mathematical equations that govern the heat transfer in a cavity with variable
properties. In section 3, we develop the variational formulation of the
governing equations. We discuss in section 4 the finite element discretization
and the overall algorithm coupling Newton and SUPG methods. Then, in section 5
we report the stability and convergence of our numerical scheme in addition to
its validation using available experimental and numerical data. Finally we
close our paper with some concluding remarks.
Throughout the paper we shall use the following notations
## Nomenclature table
$(x,y)$ | coordinate system (m)
---|---
${\bf n}$ | outward normal vector
$L$ | width of the cavity
${\mathbf{g}}$ | gravitational acceleration ($ms^{-2}$)
$\textbf{u}^{\star}$ | dimensional velocity ($ms^{-1}$)
$\mathbf{u}$ | horizontal velocity component
$p^{\star}$ | dimensional pressure term
$p$ | dimensionless pressure term
$\theta^{\star}$ | temperature (K)
$\theta$ | temperature
$\phi^{\star}$ | nanoparticles’ volume concentration ($\%$)
$\phi$ | nanoparticles’ volume concentration
$\phi_{b}$ | nanoparticles’ bulk volume concentration ($\%$)
$\mathcal{C}$ | denoting Correlation
$c_{p}$ | specific heat ($J\,kg^{-1}K^{-1}$)
$Nu$ | Nusselt number, $Nu=-(L/\Delta\theta)(\partial\theta/\partial x)_{w}$
Ra | Rayleigh number
Pr | Prandtl number
Pe | Péclet number
Le | Lewis number
Sc | Schmidt number
$\rho$ | density ($kg\,m^{-3}$)
---|---
$k$ | thermal conductivity ($W\,m^{-1}K^{-1}$)
$\mu$ | dynamic viscosity ($kg\,m^{-1}s^{-1}$)
$\alpha$ | thermal diffusivity ($m^{2}s^{-1}$)
$\beta$ | volumetric expansion coefficient of the fluid
$D^{\star}_{\omega}$ | dimensional Brownian diffusion coefficient
$D^{\star}_{\theta}$ | dimensional thermophoretic diffusion coefficient
$D_{\omega}$ | Brownian diffusion coefficient
$D_{\theta}$ | thermophoretic diffusion coefficient
$\pi^{m}$ | variables for the momentum equation
$\pi^{e}$ | variables for the energy equation
$\pi^{p}$ | variables for the np transport equation
⋆ | dimension superscript
p | particle subscript
nf | nanofluid subscript
bf | base fluid subscript
np | nanoparticle subscript
C | cold wall
H | hot wall
## 2 Equations settings
Following the Buongiorno model for incompressible nanofluid flow and using the
Boussinesq approximation for the density, we describe the natural convection in
a differentially heated square cavity. The domain of computation $\Omega$
(Figure 1) is a simply connected domain with Lipschitz boundary
$\partial\Omega$. It is assumed that the fluid has reached a statistically
time-invariant state, which is why we study the steady state of the problem.
The dimensional and non-dimensional problems are described in the following
subsections.
Figure 1: Geometric sketch of the considered differentially heated cavity and
the boundary conditions for the velocity, temperature, and nanoparticle
concentration: $\textbf{u}^{\star}=0$ on all walls;
$\theta^{\star}=\theta^{\star}_{h}$ on the hot wall and
$\theta^{\star}=\theta^{\star}_{c}$ on the cold wall;
$\partial\theta^{\star}/\partial{\bf n}=0$ on the adiabatic walls; and
$\partial\phi^{\star}/\partial{\bf n}=0$ on all walls.
### 2.1 Governing dimensional equations
Our focus will be on the long-time statistically invariant state known as the
steady state. The flow is therefore considered time-invariant. The heat
transfer equations governing the physics consist of a coupling between i) the
momentum equation associated with the continuity equation, ii) the heat
equation, and iii) the nanoparticle transport equation, and read as follows:
$\left\\{\begin{array}[]{cllr}\nabla^{\star}\,\cdot\textbf{u}^{\star}&=&0&\text{
on }\Omega\\\
\left(\textbf{u}^{\star}\cdot\nabla^{\star}\,\right)\textbf{u}^{\star}&=&\dfrac{-1}{\rho_{\text{nf}}}\nabla^{\star}\,{p^{\star}}+\dfrac{1}{\rho_{\text{nf}}}\nabla^{\star}\,\cdot\bigg{(}\mu^{\star}_{\text{nf}}\left(\nabla^{\star}\,\textbf{u}^{\star}+(\nabla^{\star}\,\textbf{u}^{\star})^{t}\right)\bigg{)}+{\mathbf{g}}^{\star}\dfrac{(\rho_{\infty}-\rho_{\text{nf}})}{\rho_{\text{nf}}}&\text{
on }\Omega\\\
\left(\textbf{u}^{\star}\cdot\nabla^{\star}\,\theta^{\star}\right)&=&\dfrac{1}{(\rho
c_{\text{p}})_{\text{nf}}}\nabla^{\star}\,\cdot\left(k^{\star}_{\text{nf}}(\theta^{\star},\phi^{\star})\nabla^{\star}\,\theta^{\star}\right)+\left(D^{\star}_{\omega}\nabla^{\star}\,\phi^{\star}\cdot\nabla^{\star}\,\theta^{\star}+\dfrac{D^{\star}_{\theta^{\star}}}{\theta^{\star}_{C}}\nabla^{\star}\,\theta^{\star}\cdot\nabla^{\star}\,\theta^{\star}\right)&\text{
on }\Omega\\\
\left(\textbf{u}^{\star}\cdot\nabla^{\star}\,\phi^{\star}\right)&=&\nabla^{\star}\,\cdot\left(D^{\star}_{\omega}\nabla^{\star}\,\phi^{\star}+\dfrac{D^{\star}_{\theta^{\star}}}{\theta^{\star}_{C}}\nabla^{\star}\,\theta^{\star}\right)&\text{
on }\Omega\\\
\rho_{\infty}-\rho_{\text{nf}}&=&(\rho\beta)_{\text{nf}}(\theta^{\star}-\theta^{\star}_{\text{c}})&\end{array}\right.$
(1)
The system is supplemented by the no-slip boundary condition for the fluid
velocity, non-homogeneous Dirichlet conditions for the temperature differential
on two opposite sides of the domain, homogeneous Neumann conditions for the
adiabatic boundaries, and homogeneous Neumann conditions everywhere for the
nanoparticle concentration. Figure 1 shows the considered geometric domain for
the numerical simulation. In
Eq.(1) the effective density, specific heat, and thermal expansion
Eq.(1) the effective density, specific heat, and thermal expansion
coefficients of nanofluid are given by
$\displaystyle\rho_{\text{nf}}$ $\displaystyle=$
$\displaystyle\rho_{\text{bf}}(1-\phi^{\star})+\rho_{\text{np}}\phi^{\star},$
(2) $\displaystyle(\rho c_{\text{p}})_{\text{nf}}$ $\displaystyle=$
$\displaystyle(\rho c_{\text{p}})_{\text{bf}}(1-\phi^{\star})+(\rho
c_{\text{p}})_{\text{np}}\phi^{\star},$ (3)
$\displaystyle(\rho\beta)_{\text{nf}}$ $\displaystyle=$
$\displaystyle(\rho\beta)_{\text{bf}}(1-\phi^{\star})+(\rho\beta)_{\text{np}}\phi^{\star}.$
(4)
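Equations (2)-(4) all apply the same volume-weighted mixture rule to the base-fluid and nanoparticle properties, which can be sketched as:

```python
def mixture(prop_bf, prop_np, phi):
    """Volume-weighted mixture rule used for rho, rho*c_p, and rho*beta
    (Eqs. 2-4): X_nf = X_bf * (1 - phi) + X_np * phi,
    where phi is the nanoparticle volume fraction."""
    return prop_bf * (1.0 - phi) + prop_np * phi
```

For instance, a base-fluid density of 1000 and a particle density of 4000 at a 2% volume fraction give an effective density of 1060 (illustrative numbers only).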
The Brownian diffusion coefficient $D^{\star}_{\omega}$ is defined using the
Einstein-Stokes equation in the following form
$D^{\star}_{\omega}(\theta^{\star})=\dfrac{k_{\text{b}}}{3\pi\mu_{\text{bf}}(\theta^{\star})d_{\text{p}}}\theta^{\star},$
(5)
where $k_{\text{b}}=1.3807\cdot 10^{-23}\,J/K$ stands for Boltzmann's
constant. The thermophoretic diffusion coefficient
$D^{\star}_{\theta^{\star}}$ is defined as
$D^{\star}_{\theta}(\theta^{\star},\phi^{\star})=\dfrac{0.26k_{\text{bf}}}{2k_{\text{bf}}+k_{\text{p}}}\dfrac{\mu_{\text{bf}}(\theta^{\star})}{\rho_{\text{bf}}}\,\phi^{\star}.$
(6)
The dynamic viscosity for water is defined as follows
$\mu_{\text{bf}}(\theta^{\star})=2.414\cdot 10^{-5}\cdot
10^{247.8/(\theta^{\star}-140)}.$ (7)
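Equations (5) and (7) can be evaluated directly. The sketch below codes them as written in the text (temperature in kelvin, particle diameter $d_{\text{p}}$ in metres; the sample particle diameter in the test is an illustrative assumption):

```python
import math

K_B = 1.3807e-23  # Boltzmann constant (J/K), as given in the text

def mu_water(theta):
    """Dynamic viscosity of water (Eq. 7); theta in kelvin."""
    return 2.414e-5 * 10.0 ** (247.8 / (theta - 140.0))

def d_brownian(theta, d_p):
    """Einstein-Stokes Brownian diffusion coefficient (Eq. 5).

    theta: temperature (K); d_p: nanoparticle diameter (m).
    """
    return K_B * theta / (3.0 * math.pi * mu_water(theta) * d_p)
```

As a sanity check, Eq. (7) returns roughly $1.0\cdot 10^{-3}$ Pa s at 293 K, the familiar viscosity of water at room temperature.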
For the thermal conductivity and viscosity of nanofluids, many correlations
have been derived [36, 37, 23, 38]. These correlations are generally nonlinear
in their variables. For this reason, we keep the correlations as general
functions:
$\frac{\mu_{\text{nf}}}{\mu_{\text{bf}}}=\mathcal{C}_{\mu_{\text{nf}}}(\phi,\theta)$
for the effective viscosity and
$\frac{k_{\text{nf}}}{k_{\text{bf}}}=\mathcal{C}_{k_{\text{nf}}}(\phi,\theta)$
for the effective thermal conductivity, both as correlations in $\phi$ and
$\theta$.
### 2.2 Dimensionless equations
We start the non-dimensionalization of the equations by defining the following
non-dimensional variables:
$\begin{array}[]{llllll}\mathbf{u}&=&(L/\alpha)\,\textbf{u}^{\star}&,\qquad
x&=&x^{\star}/L\\\ y&=&y^{\star}/L&,\qquad z&=&z^{\star}/L\\\
\phi&=&\phi^{\star}/\phi_{0}&,\qquad\theta&=&(\theta^{\star}-\theta^{\star}_{C})/(\theta^{\star}_{H}-\theta^{\star}_{C})\\\
\mu_{\text{nf}}(\theta,\phi)&=&\mu^{\star}_{\text{nf}}(\theta^{\star},\phi^{\star})/\mu_{\text{bf}}(\theta^{\star}_{C})&,\qquad
k_{\text{nf}}(\theta,\phi)&=&k^{\star}_{\text{nf}}(\theta^{\star},\phi^{\star})/k_{\text{bf}}\\\
{D}_{\omega}&=&D^{\star}_{\omega}(\theta^{\star})/D^{\star}_{\omega}(\theta^{\star}_{C})&,\qquad
D_{\theta}&=&D^{\star}_{\theta}(\theta^{\star},\phi^{\star})/D^{\star}_{\theta}(\theta^{\star}_{C},\phi_{0}),\end{array}$
(8)
Hence the corresponding dimensionless system reads
$\left\\{\begin{array}[]{clll}\nabla\cdot\mathbf{u}&=&0\\\
\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}&=&-\pi^{\text{m}}_{1}(\phi)\nabla{p}+\pi^{\text{m}}_{2}(\phi)\nabla\cdot\bigg{(}\mu_{\text{nf}}(\theta,\phi)\left(\nabla\mathbf{u}+(\nabla\mathbf{u})^{t}\right)\bigg{)}+\pi^{\text{m}}_{3}(\phi)\theta\vspace{.08in}\\\
\left(\mathbf{u}\cdot\nabla\theta\right)&=&\pi^{\text{e}}_{1}(\phi)\nabla\cdot\left(k_{\text{nf}}(\theta,\phi)\nabla\theta\right)+\left(\pi^{\text{e}}_{2}(\phi){D}_{\omega}\nabla\phi\cdot\nabla\theta+\pi^{\text{e}}_{3}(\phi)D_{\theta}\nabla\theta\cdot\nabla\theta\right)\vspace{.13in}\\\
\left(\mathbf{u}\cdot\nabla\phi\right)&=&\pi^{\text{p}}_{1}\nabla\cdot\left({D}_{\omega}\nabla\phi\right)+\pi^{\text{p}}_{2}\nabla\cdot\left({D}_{\theta}\nabla\theta\right),\end{array}\right.$
(9)
where the variable properties are defined as follows:
$\begin{array}[]{llll}&\pi^{\text{m}}_{1}(\phi)=&(1-\phi+\phi\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}})^{-1}\\\
&\pi^{\text{m}}_{2}(\phi)=&\text{Pr}\pi^{\text{m}}_{1}\\\
&\pi^{\text{m}}_{3}(\phi)=&\text{Pr}\text{Ra}_{\text{nf}}\left(\dfrac{(1-\phi^{\star})}{(1-\phi^{\star})+\phi^{\star}\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}}}+\dfrac{\phi^{\star}}{(1-\phi^{\star})+\phi^{\star}\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}}}\dfrac{\beta_{\text{np}}}{\beta_{\text{bf}}}\right)\\\
&\pi^{\text{e}}_{1}(\phi)=&(1-\phi+\phi\dfrac{\rho_{\text{np}}\phi^{\star}_{\text{np}}}{\rho_{\text{bf}}\phi^{\star}_{\text{bf}}})^{-1}\\\
&\pi^{\text{e}}_{2}(\phi)=&\dfrac{\text{Pr}}{\text{Sc}}\phi_{\text{b}}\left((1-\phi)\dfrac{\rho_{\text{bf}}\phi^{\star}_{\text{bf}}}{\rho_{\text{np}}\phi^{\star}_{\text{np}}}+\phi\right)\vspace{0.1in}\\\
&\pi^{\text{e}}_{3}(\phi)=&\text{St}\dfrac{\text{Pr}}{\text{Sc}}\left(\dfrac{\theta^{\star}_{H}-\theta^{\star}_{C}}{\theta^{\star}_{H}}\right)\left((1-\phi)\dfrac{\rho_{\text{bf}}\phi^{\star}_{\text{bf}}}{\rho_{\text{np}}\phi^{\star}_{\text{np}}}+\phi\right)^{-1}\\\
&\pi^{\text{p}}_{1}=&\dfrac{D_{\omega^{0}}}{\alpha}=\dfrac{1}{\text{Le}}\vspace{0.1in}\\\
&\pi^{\text{p}}_{2}=&\text{St}\dfrac{\text{Pr}}{\text{Sc}}\left(\dfrac{\theta^{\star}_{H}-\theta^{\star}_{C}}{\theta^{\star}_{C}}\right)\dfrac{1}{\phi_{\text{b}}}.\end{array}$
and the non-dimensional numbers are given by:
$\begin{array}[]{llll}&\text{Rayleigh
number}&\text{Ra}=&g(\rho\beta)_{\text{bf}}(\theta^{\star}_{H}-\theta^{\star}_{C})L^{3}/(\mu_{\text{bf}}(\theta^{\star}_{C})\alpha),\\\
&\text{Prandtl
number}&\text{Pr}=&\mu_{\text{bf}}(\theta^{\star}_{C})/(\rho_{\text{bf}}\alpha),\\\
&\text{Lewis
number}&\text{Le}=&\alpha/D^{\star}_{\omega}(\theta^{\star}_{C}),\\\
&\text{P\'eclet
number}&\text{Pe}_{\text{np}}=&{\|\mathbf{u}\|_{2}}/{\left(\pi_{1}^{p}D_{\omega}\right)}.\\\
\end{array}$
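As a sanity check, the definitions of Ra, Pr, and Le above can be evaluated numerically. The property values in the test are water-like illustrative assumptions, not values taken from the paper:

```python
def dimensionless_numbers(g, rho_bf, beta_bf, theta_h, theta_c, L,
                          mu_c, alpha, D_omega_c):
    """Rayleigh, Prandtl, and Lewis numbers as defined in the text.

    mu_c      : mu_bf evaluated at theta_C
    alpha     : thermal diffusivity
    D_omega_c : Brownian diffusion coefficient at theta_C
    """
    Ra = g * rho_bf * beta_bf * (theta_h - theta_c) * L**3 / (mu_c * alpha)
    Pr = mu_c / (rho_bf * alpha)
    Le = alpha / D_omega_c
    return Ra, Pr, Le
```

With water-like properties, this yields Pr of around 7, consistent with the usual value for water near room temperature.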
We proceed with the numerical discretization of the governing equations. We
propose to decouple the whole system into two parts: the first part consists
in solving the momentum equations along with the energy equation; the second
part consists in solving the stabilized nanoparticle equation with an
additional constraint ensuring that the bulk amount of nanoparticles in the
fluid remains constant during the calculation. The main motivation behind this
decoupling is to reduce the non-linearity of the (momentum and energy)
equations in terms of the nanoparticle volume fraction and give room to the
numerical scheme to stabilize itself iteratively for the transport equation.
Finally, we re-iterate through these two parts in order to update the
coefficients and the density of the fluid. More details are described in
section 3 below.
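The two-part decoupling amounts to an outer fixed-point loop. A minimal sketch, in which `solve_momentum_energy` and `solve_transport` are hypothetical placeholders for the Newton solver of the momentum-energy block and the stabilized transport solver, respectively:

```python
def solve_coupled(solve_momentum_energy, solve_transport, phi0,
                  max_outer=50, tol=1e-8):
    """Outer fixed-point loop decoupling (u, p, theta) from phi.

    solve_momentum_energy(phi) -> (u, p, theta): Newton iterations on the
        momentum/energy block with frozen nanoparticle fraction phi.
    solve_transport(u, theta)  -> phi: stabilized transport solve with
        frozen velocity and temperature.
    phi is represented here as a plain list of nodal values.
    """
    phi = list(phi0)
    for _ in range(max_outer):
        u, p, theta = solve_momentum_energy(phi)
        phi_new = solve_transport(u, theta)
        if max(abs(a - b) for a, b in zip(phi_new, phi)) < tol:
            return u, p, theta, phi_new
        phi = list(phi_new)
    return u, p, theta, phi  # return last iterate if not converged
```

The loop re-iterates through the two blocks, updating the variable coefficients after each transport solve, exactly as described above.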
## 3 Variational formulations and approximation tools settings
The numerical scheme we propose in this work is based on the finite element
discretization of the dimensionless equations. As the steady problem at hand
is nonlinear in the velocity and temperature, a direct linear algebra solver is
therefore impractical for the coupled system. One has to relax the
non-linearity and set up an iterative numerical scheme that converges to the
desired solution. Newton's method plays a key role here and has been shown to
be very efficient in terms of convergence [33].
We consider homogeneous Dirichlet boundary conditions for the velocity, i.e.
$u=0$ on $\partial\Omega$. Let us consider the following Hilbert spaces for
the temperature, velocity, and pressure:
$\mathbf{T}=H^{1}(\Omega)\quad,{\bf U}=H^{1}_{0}(\Omega)\times
H^{1}_{0}(\Omega),\quad{\bf P}=\\{p\in
L^{2}(\Omega)\,|\int_{\Omega}p\,\delta\omega=0\,\\}.$
where
$L^{2}(\Omega)=\\{u\,|\,\int_{\Omega}|u|^{2}\,\delta\omega<\infty\\},$
and
$H^{1}(\Omega)=\left\\{u\in L^{2}(\Omega)\,|\,\nabla u\in
L^{2}(\Omega)\right\\},$
and
$H^{1}_{0}(\Omega)=\\{u\in H^{1}(\Omega)\,|\,u_{|\partial\Omega}=0\\}.$
The Sobolev space $L^{2}(\Omega)$ is endowed with the usual inner product
denoted by the duality pairing $(\cdot,\cdot)$, that generates the
$L^{2}$-norm $\|\cdot\|_{2}$.
### 3.1 Newton’s (optimize then discretize) Methods for the solution of the
momentum and energy equations
In order to present a general algorithm, we shall separate the nanoparticle
equation from the momentum and energy equations as we proceed. The generality
here mainly targets the use of any correlation (as a variety are available in
the literature). This gives our approach the flexibility to treat different
types of nanofluids and to consider different (possibly highly nonlinear)
thermal and viscosity correlations. The idea behind the following notation and
calculation of the tangent equations is that we formulate the problem as a
multivariable function $\mathbf{F}$ and use the classical Newton-Raphson
iterations, which read
$\begin{bmatrix}\mathbf{u}^{k+1}\\\ p^{k+1}\\\
\theta^{k+1}\end{bmatrix}=\begin{bmatrix}\mathbf{u}^{k}\\\ p^{k}\\\
\theta^{k}\end{bmatrix}-\left(\mathbf{DF}\left((\mathbf{u}^{k},p^{k},\theta^{k})\right)\right)^{-1}\mathbf{F}\left((\mathbf{u}^{k},p^{k},\theta^{k})\right),$
where $\mathbf{DF}(X^{k})$ stands for the Jacobian (an isomorphism on ${\bf
U}\times{\bf P}\times{\bf T}$).
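On a plain vector unknown, the Newton-Raphson update above can be sketched as follows. This is a generic illustration of the iteration $X^{k+1} = X^k - \mathbf{DF}(X^k)^{-1}\mathbf{F}(X^k)$; the paper applies it to the discretized $(\mathbf{u}, p, \theta)$ system rather than to a small dense system:

```python
import numpy as np

def newton_raphson(F, DF, x0, tol=1e-10, max_iter=25):
    """Generic Newton-Raphson iteration x_{k+1} = x_k - DF(x_k)^{-1} F(x_k).

    F  : callable returning the residual vector at x
    DF : callable returning the Jacobian matrix at x
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        # Solve the linearized system instead of forming DF^{-1} explicitly.
        x = x - np.linalg.solve(DF(x), r)
    return x
```

In the finite element setting, `F` and `DF` are assembled from the variational forms $\mathcal{F}$ and $\mathcal{DF}$ defined below, and the linear solve is performed on the assembled tangent system.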
In practice, we define the coupled momentum-energy variational form such that
for every trial $(\mathbf{v},q,\zeta)\in{\bf U}\times{\bf P}\times{\bf T}$ we
have
$\begin{array}[]{lll}\mathcal{F}_{(\mathbf{v},q,\zeta)}:{\bf U}\times{\bf
P}\times{\bf T}\longrightarrow\mathbb{R}\\\
(\mathbf{u},p,\theta)\longmapsto\mathcal{F}_{(\mathbf{v},q,\zeta)}(\mathbf{u},p,\theta):=&((\mathbf{u}\cdot\nabla\mathbf{u}),\mathbf{v})+\pi^{m}_{2}\left(\mu_{\text{nf}}(\theta,\phi)\left(\nabla\mathbf{u}+(\nabla\mathbf{u})^{t}\right),\nabla\mathbf{v}\right)\\\
&-(\pi^{m}_{1}\nabla p,\mathbf{v})-\left(\pi^{m}_{1}\mathbf{u},\nabla
q\right)-\left(\pi^{m}_{3}\theta(\mathbf{u}\cdot\textbf{e}_{z}),\mathbf{v}\right)+\varepsilon(p,q)\\\
&+(\left(\mathbf{u}\cdot\nabla\theta\right),\zeta)+\displaystyle(\pi^{\text{e}}_{1}(\phi)\left(k_{\text{nf}}(\theta,\phi)\nabla\theta\right)\cdot\nabla\zeta)\\\
&+(\pi^{\text{e}}_{2}(\phi){D}_{\omega}(\nabla\phi,\nabla\theta),\zeta)+\displaystyle\left(\pi^{\text{e}}_{3}(\phi)D_{\theta}\left(\nabla\theta\cdot\nabla\theta\right),\zeta\right).\end{array}$
(10)
Then we define the vector field $\mathbf{F}(\mathbf{u},p,\theta)\in{\bf
U}\times{\bf P}\times{\bf T}$ as the vector that satisfies the following inner
product formula for every $(\mathbf{v},q,\zeta)\in{\bf U}\times{\bf
P}\times{\bf T}$
$(\mathbf{F}(\mathbf{u},p,\theta),\begin{bmatrix}\mathbf{v}\\\ q\\\
\zeta\end{bmatrix})=\mathcal{F}_{(\mathbf{v},q,\zeta)}(\mathbf{u},p,\theta).$
In addition, we define the tangent coupled momentum-energy trilinear
variational form such that for every trial $(\mathbf{v},q,\zeta)\in{\bf
U}\times{\bf P}\times{\bf T}$ we have
$\begin{array}[]{ll}\mathcal{DF}_{(\mathbf{v},q,\zeta)}(\mathbf{u},p,\theta):{\bf U}\times{\bf P}\times{\bf T}\longrightarrow\mathbb{R}\\
\mathcal{DF}_{(\mathbf{v},q,\zeta)}(\mathbf{u},p,\theta)(\delta\mathbf{u},\delta p,\delta\theta):=\left(\left(\delta\mathbf{u}\cdot\nabla\mathbf{u}\right),\mathbf{v}\right)+\left(\left(\mathbf{u}\cdot\nabla\delta\mathbf{u}\right),\mathbf{v}\right)\\
\qquad+\left(\pi^{m}_{2}\mu_{\text{nf}}(\theta,\phi)\left(\nabla\delta\mathbf{u}+(\nabla\delta\mathbf{u})^{t}\right),\nabla\mathbf{v}\right)\\
\qquad-\left(\pi^{m}_{1}\nabla\delta p,\mathbf{v}\right)-\left(\pi^{m}_{1}\delta\mathbf{u},\nabla q\right)\\
\qquad-\left(\pi^{m}_{3}\delta\theta(\mathbf{u}\cdot\mathbf{e}_{z}),\mathbf{v}\right)-\left(\pi^{m}_{3}\theta(\delta\mathbf{u}\cdot\mathbf{e}_{z}),\mathbf{v}\right)+\varepsilon\left(\delta p,q\right)\\
\qquad+\left(\left(\delta\mathbf{u}\cdot\nabla\theta\right),\zeta\right)+\left(\left(\mathbf{u}\cdot\nabla\delta\theta\right),\zeta\right)\\
\qquad+\left(\pi^{\text{e}}_{1}(\phi)k_{\text{nf}}(\theta,\phi)\nabla\delta\theta,\nabla\zeta\right)\\
\qquad+\left(\pi^{\text{e}}_{2}(\phi){D}_{\omega}\left(\nabla\phi\cdot\nabla\delta\theta\right),\zeta\right)+\left(\pi^{\text{e}}_{3}(\phi)2D_{\theta}\left(\nabla\theta\cdot\nabla\delta\theta\right),\zeta\right),\end{array}$
(11)
and we define the linear operator $\mathbf{DF}(\mathbf{u},p,\theta)$ that satisfies, for all $(\mathbf{v},q,\zeta)\in{\bf U}\times{\bf P}\times{\bf T}$,
$\left(\mathbf{DF}(\mathbf{u},p,\theta)(\delta\mathbf{u},\delta p,\delta\theta),\begin{bmatrix}\mathbf{v}\\ q\\ \zeta\end{bmatrix}\right)=\mathcal{DF}_{(\mathbf{v},q,\zeta)}(\mathbf{u},p,\theta)(\delta\mathbf{u},\delta p,\delta\theta).$
### 3.2 The nanoparticle transport equation: variational formulation
The variational formulation for the nanoparticle concentration transport equation reads: for every $\varphi\in X:=H^{1}(\Omega)$ find $\phi\in H^{1}(\Omega)$ such that
$\begin{array}[]{ll}a(\phi,\varphi):=\left(\left(\mathbf{u}\cdot\nabla\phi\right),\varphi\right)+\left(\pi^{\text{p}}_{1}{D}_{\omega}\nabla\phi,\nabla\varphi\right)\\
\qquad-\left(\pi^{\text{p}}_{1}{D}_{\omega}\dfrac{\partial\phi}{\partial\mathbf{n}},\varphi\right)_{\Gamma}-\left(\pi^{\text{p}}_{2}{D}_{\theta}\dfrac{\partial\theta}{\partial\mathbf{n}},\varphi\right)_{\Gamma}=0,\\
(b(\theta),\varphi):=\pi^{\text{p}}_{2}\left({D}_{\theta}\nabla\theta,\nabla\varphi\right).\end{array}$
(12)
After applying the boundary conditions, this problem is not well posed: it does not satisfy the inf-sup condition. We address this concern in the sequel by introducing a regularization parameter $\varepsilon$ acting as a reaction term. In the numerical code this parameter is taken very small ($\varepsilon\approx 10^{-10}$) so that the inf-sup condition is satisfied without harming the model, since the nanoparticle concentration does not exhibit a reaction within the mixture.
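The effect of this small reaction-type regularization can be illustrated on a toy problem. The sketch below (an illustration under our own assumptions, not the paper's code) uses a 1D pure-Neumann Laplacian, which is singular because constant vectors lie in its kernel; adding $\varepsilon$ times the identity restores unique solvability while perturbing the operator only at the $10^{-10}$ level:

```python
import numpy as np

# Toy illustration (not the paper's code): a 1D pure-Neumann Laplacian is
# singular (constants lie in its kernel); adding the small reaction term
# eps*I restores invertibility without visibly changing the model.
n = 50
h = 1.0 / n
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = 2.0 / h**2
    if i > 0:
        K[i, i - 1] = -1.0 / h**2
    if i < n - 1:
        K[i, i + 1] = -1.0 / h**2
K[0, 0] = K[-1, -1] = 1.0 / h**2   # pure Neumann ends

eps = 1e-10
A = K + eps * np.eye(n)            # epsilon-regularised operator

assert np.linalg.matrix_rank(K) == n - 1     # K itself is singular
b = np.random.default_rng(0).standard_normal(n)
b -= b.mean()                      # compatible (zero-mean) right-hand side
x = np.linalg.solve(A, b)          # now uniquely solvable
assert np.allclose(A @ x, b)
```

The same mechanism underlies the $\varepsilon(p,q)$ term above: the regularised system remains solvable while staying within round-off distance of the unregularised one.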
## 4 Finite element algorithm
For the steady states under consideration we use the standard Taylor-Hood finite element pair [34] for the spatial discretization of the Navier-Stokes equations, approximating the velocity field with P2 finite elements and the pressure with P1 finite elements. We assume a triangulation $\mathcal{T}_{h}$ of the computational domain $\Omega$ such that
$\overline{\Omega}=\bigcup_{e=1}^{nel}\overline{\Omega^{e}}.$
We seek an approximate solution in the finite-dimensional spaces
$\mathbf{U}_{h}\times\mathbf{P}_{h}\times\mathbf{T}_{h}\subset\mathbf{U}\times\mathbf{P}\times\mathbf{T}$,
where $h$ denotes the discretization parameter. We denote by
$(\mathbf{u}_{h},p_{h},\theta_{h})$ (respectively
$(\mathbf{v}_{h},q_{h},\zeta_{h})$) the discrete FE solution approximating the
continuous solution $(\mathbf{u},p,\theta)$ (respectively the test functions
$(\mathbf{v},q,\zeta)$). We also denote by $\phi_{h}$ the FE approximation of
$\phi$.
### 4.1 Matrix assembly for Newton’s method
Hereafter, we associate to the linear operator
$\mathbf{DF}(\mathbf{u},p,\theta)(\cdot,\cdot,\cdot)$ a matrix representation
as follows:
$\begin{bmatrix}DF_{h}^{\mathbf{u},\mathbf{u}}&DF_{h}^{\mathbf{u},p}&DF_{h}^{\mathbf{u},\theta}\\ DF_{h}^{p,\mathbf{u}}&DF_{h}^{p,p}&0\\ DF_{h}^{\theta,\mathbf{u}}&0&DF_{h}^{\theta,\theta}\end{bmatrix}\begin{bmatrix}\delta\mathbf{u}_{h}\\ \delta p_{h}\\ \delta\theta_{h}\end{bmatrix},$ (13)
where the blocks of this matrix are the linear operators associated, consistently with (11), with the following bilinear forms:
$\displaystyle(DF_{h}^{\mathbf{u},\mathbf{u}}\delta\mathbf{u}_{h},\mathbf{v}_{h})=\left(\left(\delta\mathbf{u}_{h}\cdot\nabla\mathbf{u}_{h}\right),\mathbf{v}_{h}\right)+\left(\left(\mathbf{u}_{h}\cdot\nabla\delta\mathbf{u}_{h}\right),\mathbf{v}_{h}\right)+\left(\pi^{m}_{2}\mu_{\text{nf}}(\theta_{h},\phi_{h})\left(\nabla\delta\mathbf{u}_{h}+(\nabla\delta\mathbf{u}_{h})^{t}\right),\nabla\mathbf{v}_{h}\right)-\left(\pi^{m}_{3}\theta_{h}(\delta\mathbf{u}_{h}\cdot\mathbf{e}_{z}),\mathbf{v}_{h}\right)$
$\displaystyle(DF_{h}^{\mathbf{u},p}\delta p_{h},\mathbf{v}_{h})=-\left(\pi^{m}_{1}\nabla\delta p_{h},\mathbf{v}_{h}\right)$
$\displaystyle(DF_{h}^{\mathbf{u},\theta}\delta\theta_{h},\mathbf{v}_{h})=-\left(\pi^{m}_{3}\delta\theta_{h}(\mathbf{u}_{h}\cdot\mathbf{e}_{z}),\mathbf{v}_{h}\right)$
$\displaystyle(DF_{h}^{p,\mathbf{u}}\delta\mathbf{u}_{h},q_{h})=-\left(\pi^{m}_{1}\delta\mathbf{u}_{h},\nabla q_{h}\right)$
$\displaystyle(DF_{h}^{p,p}\delta p_{h},q_{h})=\varepsilon\left(\delta p_{h},q_{h}\right)$
$\displaystyle(DF_{h}^{\theta,\mathbf{u}}\delta\mathbf{u}_{h},\zeta_{h})=\left(\left(\delta\mathbf{u}_{h}\cdot\nabla\theta_{h}\right),\zeta_{h}\right)$
$\displaystyle(DF_{h}^{\theta,\theta}\delta\theta_{h},\zeta_{h})=\left(\left(\mathbf{u}_{h}\cdot\nabla\delta\theta_{h}\right),\zeta_{h}\right)+\left(\pi^{\text{e}}_{1}(\phi_{h})k_{\text{nf}}(\theta_{h},\phi_{h})\nabla\delta\theta_{h},\nabla\zeta_{h}\right)+\left(\pi^{\text{e}}_{2}(\phi_{h}){D}_{\omega}\left(\nabla\phi_{h}\cdot\nabla\delta\theta_{h}\right),\zeta_{h}\right)+\left(\pi^{\text{e}}_{3}(\phi_{h})2D_{\theta}\left(\nabla\theta_{h}\cdot\nabla\delta\theta_{h}\right),\zeta_{h}\right).$
Newton's iteration updates the FE solution $(\mathbf{u}_{h},p_{h},\theta_{h})$ as follows:
$\begin{bmatrix}\mathbf{u}_{h}^{k+1}\\ p_{h}^{k+1}\\ \theta_{h}^{k+1}\end{bmatrix}=\begin{bmatrix}\mathbf{u}_{h}^{k}\\ p_{h}^{k}\\ \theta_{h}^{k}\end{bmatrix}-\begin{bmatrix}\delta\mathbf{u}_{h}^{k}\\ \delta p_{h}^{k}\\ \delta\theta_{h}^{k}\end{bmatrix}$ (14)
where $(\delta\mathbf{u}_{h}^{k},\delta p_{h}^{k},\delta\theta_{h}^{k})$ is the solution of
$\begin{bmatrix}DF_{h}^{\mathbf{u},\mathbf{u}}&DF_{h}^{\mathbf{u},p}&DF_{h}^{\mathbf{u},\theta}\\ DF_{h}^{p,\mathbf{u}}&DF_{h}^{p,p}&0\\ DF_{h}^{\theta,\mathbf{u}}&0&DF_{h}^{\theta,\theta}\end{bmatrix}\begin{bmatrix}\delta\mathbf{u}_{h}^{k}\\ \delta p_{h}^{k}\\ \delta\theta_{h}^{k}\end{bmatrix}=\begin{bmatrix}F_{h}^{\mathbf{u}\mathbf{u}}&F_{h}^{\mathbf{u}p}&F_{h}^{\mathbf{u}\theta}\\ F_{h}^{p\mathbf{u}}&F_{h}^{pp}&0\\ F_{h}^{\theta\mathbf{u}}&0&F_{h}^{\theta\theta}\end{bmatrix}\begin{bmatrix}\mathbf{u}_{h}^{k}\\ p_{h}^{k}\\ \theta_{h}^{k}\end{bmatrix}$ (15)
where the right-hand side (presented as a matrix-vector product) contains blocks of linear operators associated, consistently with (10), with the following bilinear forms:
$\displaystyle(F_{h}^{\mathbf{u}\mathbf{u}}\mathbf{u}_{h},\mathbf{v}_{h})=\left(\left(\mathbf{u}_{h}\cdot\nabla\mathbf{u}_{h}\right),\mathbf{v}_{h}\right)+\left(\pi^{m}_{2}\mu_{\text{nf}}(\theta_{h},\phi_{h})\left(\nabla\mathbf{u}_{h}+(\nabla\mathbf{u}_{h})^{t}\right),\nabla\mathbf{v}_{h}\right)$
$\displaystyle(F_{h}^{\mathbf{u}p}p_{h},\mathbf{v}_{h})=-\left(\pi^{m}_{1}\nabla p_{h},\mathbf{v}_{h}\right)$
$\displaystyle(F_{h}^{\mathbf{u}\theta}\theta_{h},\mathbf{v}_{h})=-\left(\pi^{m}_{3}\theta_{h}(\mathbf{u}_{h}\cdot\mathbf{e}_{z}),\mathbf{v}_{h}\right)$
$\displaystyle(F_{h}^{p\mathbf{u}}\mathbf{u}_{h},q_{h})=-\left(\pi^{m}_{1}\mathbf{u}_{h},\nabla q_{h}\right)$
$\displaystyle(F_{h}^{pp}p_{h},q_{h})=\varepsilon\left(p_{h},q_{h}\right)$
$\displaystyle(F_{h}^{\theta\mathbf{u}}\mathbf{u}_{h},\zeta_{h})=\left(\left(\mathbf{u}_{h}\cdot\nabla\theta_{h}\right),\zeta_{h}\right)$
$\displaystyle(F_{h}^{\theta\theta}\theta_{h},\zeta_{h})=\left(\pi^{\text{e}}_{1}(\phi_{h})k_{\text{nf}}(\theta_{h},\phi_{h})\nabla\theta_{h},\nabla\zeta_{h}\right)+\left(\pi^{\text{e}}_{2}(\phi_{h}){D}_{\omega}\left(\nabla\phi_{h}\cdot\nabla\theta_{h}\right),\zeta_{h}\right)+\left(\pi^{\text{e}}_{3}(\phi_{h})D_{\theta}\left(\nabla\theta_{h}\cdot\nabla\theta_{h}\right),\zeta_{h}\right).$
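The update (14)-(15) is a standard Newton iteration: solve the tangent system for the increment, then subtract it from the current iterate. The generic structure can be sketched as follows, with a hypothetical two-equation toy residual standing in for the coupled $(\mathbf{u},p,\theta)$ blocks (an illustration only, not the paper's assembled system):

```python
import numpy as np

def newton(F, DF, x0, tol=1e-12, maxit=50):
    """Generic Newton iteration mirroring Eqs. (14)-(15):
    solve DF(x_k) dx = F(x_k), then x_{k+1} = x_k - dx."""
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        dx = np.linalg.solve(DF(x), F(x))
        x -= dx
        if np.linalg.norm(dx) <= tol * (1.0 + np.linalg.norm(x)):
            break
    return x

# Hypothetical toy residual standing in for the coupled blocks:
#   x0**2 + x1 - 3 = 0,   x0 + x1**2 - 5 = 0   (root at (1, 2))
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
DF = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])

x = newton(F, DF, [1.0, 1.0])
assert np.allclose(x, [1.0, 2.0])
```

In the actual scheme the roles of `F` and `DF` are played by the assembled block matrices of (15) and (13) respectively, with the stopping test based on the relative increment as in Algorithm 1.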
### 4.2 Ill-posedness of the Buongiorno nanoparticle transport model with FEM
We develop hereafter a numerical scheme that approximates the solution of the advection-dominated nanoparticle concentration transport equation. Although the finite element method adapts well to this kind of governing equation, it and other Galerkin-based approaches may fail in the sense that the discrete solution is marred by spurious oscillations in space. This happens when the element Péclet number exceeds a certain critical value. Other discretization methods suffer from the same issue, for instance the finite difference technique, where the spatial oscillations can be reduced by an upwinding scheme. For the finite element method there are methods analogous to this upwinding, such as Petrov-Galerkin and Streamline-Upwind Petrov-Galerkin (SUPG) [39, 40, 26]: the test function is modified in order to mimic the upwinding effect and thereby reduce, or eliminate, the spatial oscillations. Another interesting stabilization method is Galerkin/Least-Squares (GLS); see for instance [27, 41] and references therein. All the above techniques, and others [42, 43, 44] better suited to time-dependent equations, introduce artificial viscosity in the direction of the streamlines.
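The oscillation phenomenon and its upwind cure are already visible in one dimension. The sketch below (a finite-difference illustration under our own assumptions, not the paper's FE code) solves $-\nu u''+au'=0$ on $(0,1)$ with $u(0)=0$, $u(1)=1$: the central scheme oscillates once the cell Péclet number $ah/(2\nu)$ exceeds $1$, while the upwinded scheme stays monotone:

```python
import numpy as np

def solve_adv_diff(n, a, nu, upwind=False):
    """Finite differences for -nu*u'' + a*u' = 0 on (0,1),
    u(0)=0, u(1)=1; a > 0 is assumed for the upwind branch."""
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] += 2 * nu / h**2               # diffusion stencil
        if i > 0:
            A[i, i - 1] += -nu / h**2
        if i < n - 1:
            A[i, i + 1] += -nu / h**2
        if upwind:                              # a*(u_i - u_{i-1})/h
            A[i, i] += a / h
            if i > 0:
                A[i, i - 1] += -a / h
        else:                                   # a*(u_{i+1} - u_{i-1})/(2h)
            if i > 0:
                A[i, i - 1] += -a / (2 * h)
            if i < n - 1:
                A[i, i + 1] += a / (2 * h)
    b = np.zeros(n)                             # boundary value u(1) = 1
    b[-1] = nu / h**2 - (0.0 if upwind else a / (2 * h))
    return np.linalg.solve(A, b)

n, a, nu = 20, 1.0, 0.005        # cell Peclet a*h/(2*nu) ~ 4.8 > 1
u_c = solve_adv_diff(n, a, nu)                 # central (Galerkin analogue)
u_u = solve_adv_diff(n, a, nu, upwind=True)    # upwinded analogue
assert np.any(np.diff(u_c) < -1e-6)            # spurious oscillations
assert np.all(np.diff(u_u) >= -1e-9)           # monotone boundary layer
```

SUPG plays the role of this upwinding in the finite element setting, with the streamline diffusion weighted element by element rather than globally.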
Let us define the space
$X_{h}=\left\{\varphi_{h}\in\mathcal{C}^{0}(\overline{\Omega})\,;\;\varphi_{h}|_{\Omega^{e}}\in\mathbf{P}_{2}\;\forall\Omega^{e}\in\mathcal{T}_{h}\right\}\subset X,$
over which the variational formulation of the advection-diffusion nanoparticle concentration equation reads: for every test function $\varphi_{h}\in X_{h}$ find $\phi_{h}\in X_{h}$ such that
$a^{\varepsilon}(\phi_{h},\varphi_{h})=(b(\theta_{h}),\varphi_{h})$ (16)
where
$\displaystyle a^{\varepsilon}(\phi_{h},\varphi_{h})=\left(\left(\mathbf{u}\cdot\nabla\phi_{h}\right),\varphi_{h}\right)+\pi^{\text{p}}_{1}\left({D}_{\omega}\nabla\phi_{h},\nabla\varphi_{h}\right)+\varepsilon\left(\phi_{h},\varphi_{h}\right),$
$\displaystyle(b(\theta_{h}),\varphi_{h})=\pi^{\text{p}}_{2}\left({D}_{\theta_{h}}\nabla\theta_{h},\nabla\varphi_{h}\right).$
We endow $X$ with the following norm
$\|\varphi\|_{\varepsilon}:=\sqrt{\pi_{1}^{p}\|\nabla\varphi\|_{2}^{2}+\varepsilon\|\varphi\|_{2}^{2}},$
which we will use in the sequel.
###### Proposition 1.
(Well-posedness) The bilinear form $a^{\varepsilon}(\cdot,\cdot)$ is coercive; in particular,
$a^{\varepsilon}(\varphi_{h},\varphi_{h})\geq\|\varphi_{h}\|_{\varepsilon}^{2}\quad\text{and}\quad a^{\varepsilon}(\varphi_{h},\varphi_{h})\geq\alpha_{p}\|\varphi_{h}\|_{2}^{2}.$
###### Proof.
We have
$a^{\varepsilon}(\varphi_{h},\varphi_{h})=\left(\left(\mathbf{u}_{h}\cdot\nabla\varphi_{h}\right),\varphi_{h}\right)+\pi^{\text{p}}_{1}\left({D}_{\omega}\nabla\varphi_{h},\nabla\varphi_{h}\right)+\varepsilon\left(\varphi_{h},\varphi_{h}\right).$
Note that
$\left(\left(\mathbf{u}_{h}\cdot\nabla\varphi_{h}\right),\varphi_{h}\right)=\frac{1}{2}\left(\nabla\cdot\left(\mathbf{u}_{h}\varphi_{h}^{2}\right),1\right)=\frac{1}{2}\left((\mathbf{u}_{h}\cdot{\bf n})\varphi_{h}^{2}\right)_{\Gamma}=0,$
by integration by parts and the divergence theorem, together with the incompressibility constraint and the homogeneous velocity boundary condition. We therefore have, using $D_{\omega}\geq 1$,
$a^{\varepsilon}(\varphi_{h},\varphi_{h})\geq\pi_{1}^{p}\|\nabla\varphi_{h}\|^{2}_{2}+\varepsilon\|\varphi_{h}\|_{2}^{2}\geq\dfrac{\pi_{1}^{p}+\varepsilon c_{\Omega}}{c_{\Omega}}\|\varphi_{h}\|^{2}_{2},$
where $c_{\Omega}$ stands for the Poincaré constant. Finally, setting $\alpha_{p}=\frac{\pi_{1}^{p}+\varepsilon c_{\Omega}}{c_{\Omega}}$ yields the coercivity result in $X$ and in $L^{2}(\Omega)$ as stated above. ∎
In the subsequent part we discuss the stabilization of the transport equation for the nanoparticle concentration. We use the so-called Streamline-Upwind Petrov-Galerkin (SUPG) method, which enriches the standard test space with streamline derivatives. In practice, the SUPG formulation reads as follows. Given $\mathbf{u}_{h}\in\mathbf{U}_{h}$, for every test function $\varphi_{h}$ in
$\text{span}\left\{\varphi_{h}+\sum_{\Omega^{e}}\delta_{\Omega^{e}}\,\mathbf{u}_{h}\cdot\nabla\varphi_{h}\right\}$
find $\phi_{h}$ such that
$a^{\varepsilon}_{\sc supg}(\phi_{h},\varphi_{h})=b_{\sc supg}(\theta_{h},\varphi_{h}),$ (17)
where
$a^{\varepsilon}_{\sc supg}(\phi_{h},\varphi_{h}):=a^{\varepsilon}(\phi_{h},\varphi_{h})+\sum_{e}^{nel}\int_{\Omega^{e}}\delta_{\Omega^{e}}\left(\mathbf{u}_{h}\cdot\nabla\phi_{h}+\varepsilon\phi_{h}-\pi^{\text{p}}_{1}\nabla\cdot\left({D}_{\omega}\nabla\phi_{h}\right)\right)\left(\mathbf{u}_{h}\cdot\nabla\varphi_{h}\right)\,\delta\omega$
and
$b_{\sc supg}(\theta_{h},\varphi_{h})=(b(\theta_{h}),\varphi_{h})+\sum_{e}^{nel}\int_{\Omega^{e}}\delta_{\Omega^{e}}\,\pi^{\text{p}}_{2}\nabla\cdot\left({D}_{\theta_{h}}\nabla\theta_{h}\right)\left(\mathbf{u}_{h}\cdot\nabla\varphi_{h}\right)\,\delta\omega.$
Here, the integral contributions constitute the SUPG formulation [45, 46, 47, 48, 49], where the $\delta_{\Omega^{e}}$ are user-chosen, element-dependent weight parameters. The convection-dominated nanoparticle equation, even though it is linear with respect to the unknown $\phi_{h}$, still exhibits spurious numerical oscillations because the dominant contribution comes from the convective term, which is far larger than the thermophoresis effect and the diffusive Brownian motion. This is reflected in the corresponding Péclet number Pe (a non-dimensional quantity), i.e. the ratio of the convection rate to the diffusion rate, being relatively high. To overcome this difficulty in the numerical simulation, we use the SUPG method, which provides numerical stability by adding an artificial diffusion along the streamlines of the particle flow. In practice the SUPG weight parameter is adjusted in terms of the local Péclet number. In theory, the choice of the weight parameter must satisfy
$\delta_{\Omega^{e}}\leq\min\left\{\dfrac{1}{2\varepsilon},\dfrac{h_{\Omega^{e}}^{2}\|D_{\omega}\|_{\infty,\Omega^{e}}^{2}}{2C_{inv}^{2}}\right\}$
(18)
a condition which ensures the coercivity, and hence the well-posedness, of the SUPG problem via the coercivity result below in the space
$V_{h}=\left\{\varphi_{h}\in X_{h}(\Omega)\,|\,\mathbf{u}\cdot\nabla{\varphi_{h}}\in L^{2}(\Omega)\right\}$
endowed with the SUPG norm defined by
$\|\varphi_{h}\|_{\sc supg}:=\sqrt{\pi^{\text{p}}_{1}\|\nabla\varphi_{h}\|_{2}^{2}+\varepsilon\|\varphi_{h}\|_{2}^{2}+\|\delta_{\Omega}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\|_{2}^{2}}.$
(19)
As our numerical approach is based on Newton's method, with a gradually increasing Rayleigh number Ra, our SUPG weight is chosen as a function of the local Péclet number $\text{Pe}_{\text{np},\Omega^{e}}$, the nanofluid Rayleigh number $\text{Ra}_{\text{nf}}$ and the local element size $h_{\Omega^{e}}$. The numerically chosen weight is
$\delta_{\Omega^{e}}=h_{\Omega^{e}}^{2}\dfrac{\text{Ra}_{\text{nf}}}{\text{Pe}_{\text{np},\Omega^{e}}}.$
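In an implementation one might compute this weight per element and cap it by the theoretical bound (18) so that coercivity is never lost. The helper below is a hypothetical sketch of such a rule; all argument values are illustrative stand-ins for the paper's quantities, not values taken from it:

```python
def supg_weight(h_e, Ra_nf, Pe_e, eps, D_inf, C_inv):
    """Hypothetical helper: the heuristic element weight
    h_e**2 * Ra_nf / Pe_e, capped by the coercivity bound (18).
    D_inf plays ||D_omega||_inf on the element, C_inv the inverse
    inequality constant."""
    heuristic = h_e**2 * Ra_nf / Pe_e
    bound = min(1.0 / (2.0 * eps), h_e**2 * D_inf**2 / (2.0 * C_inv**2))
    return min(heuristic, bound)

# hypothetical values on one element
d = supg_weight(h_e=0.05, Ra_nf=1e6, Pe_e=1e4,
                eps=1e-10, D_inf=1.2, C_inv=24.0)
assert 0 < d <= 0.05**2 * 1.2**2 / (2 * 24.0**2)   # respects (18)
```

Capping is a design choice: the heuristic scales the streamline diffusion with the local convection dominance, while the bound guarantees the coercivity constant of Theorem 1 below.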
###### Theorem 1.
The bilinear form $a^{\varepsilon}_{\sc supg}(\cdot,\cdot)$ satisfies the following Lax-Milgram conditions:
Coercivity: $\displaystyle a^{\varepsilon}_{\sc supg}(\varphi_{h},\varphi_{h})\geq\dfrac{1}{2}\|\varphi_{h}\|^{2}_{\sc supg}$ (20)
Continuity: $\displaystyle a^{\varepsilon}_{\sc supg}(\varphi_{h},\psi_{h})\leq C\|\varphi_{h}\|_{\sc supg}\|\psi_{h}\|_{\sc supg}$ (21)
so that problem (17) is well posed and has a unique solution.
###### Proof.
The proof follows the classical bound estimates of SUPG analysis. We have
$\displaystyle a^{\varepsilon}_{\sc supg}(\varphi_{h},\varphi_{h})\geq\left(\left(\mathbf{u}\cdot\nabla\varphi_{h}\right),\varphi_{h}\right)+\pi^{\text{p}}_{1}\left({D}_{\omega}\nabla\varphi_{h},\nabla\varphi_{h}\right)+\varepsilon\left(\varphi_{h},\varphi_{h}\right)+\sum_{e}^{nel}\int_{\Omega^{e}}\delta_{\Omega^{e}}\left(\mathbf{u}\cdot\nabla\varphi_{h}\right)^{2}\,\delta\omega$
$\displaystyle\qquad-\sum_{e}^{nel}\left|\int_{\Omega^{e}}\delta_{\Omega^{e}}^{\frac{1}{2}}\left(\varepsilon\varphi_{h}-\pi^{\text{p}}_{1}\nabla\cdot\left({D}_{\omega}\nabla\varphi_{h}\right)\right)\left(\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\right)\,\delta\omega\right|$ (23)
$\displaystyle\geq\pi^{\text{p}}_{1}\|\nabla\varphi_{h}\|_{2}^{2}+\varepsilon\|\varphi_{h}\|_{2}^{2}+\|\delta_{\Omega}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\|_{2}^{2}-\sum_{e}^{nel}\left|\int_{\Omega^{e}}\delta_{\Omega^{e}}^{\frac{1}{2}}\left(\varepsilon\varphi_{h}-\pi^{\text{p}}_{1}\nabla\cdot\left({D}_{\omega}\nabla\varphi_{h}\right)\right)\left(\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\right)\,\delta\omega\right|$
$\displaystyle=\|\varphi_{h}\|_{\sc supg}^{2}-\sum_{e}^{nel}\left|\int_{\Omega^{e}}\delta_{\Omega^{e}}^{\frac{1}{2}}\left(\varepsilon\varphi_{h}-\pi^{\text{p}}_{1}\nabla\cdot\left({D}_{\omega}\nabla\varphi_{h}\right)\right)\left(\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\right)\,\delta\omega\right|$ (24)
where we have used the fact that $D_{\omega}\geq 1$ almost everywhere, as per its definitions (8) and (5), together with the maximum principle for the (heat) energy equation.
On the other hand, we have the inequalities
$\displaystyle\left|\int_{\Omega^{e}}\delta_{\Omega^{e}}^{\frac{1}{2}}\left(\varepsilon\varphi_{h}-\pi^{\text{p}}_{1}\nabla\cdot\left({D}_{\omega}\nabla\varphi_{h}\right)\right)\left(\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\right)\,\delta\omega\right|$ (25)
$\displaystyle\leq\int_{\Omega^{e}}\left|\delta_{\Omega^{e}}^{\frac{1}{2}}\left(\varepsilon\varphi_{h}-\pi^{\text{p}}_{1}\nabla\cdot\left({D}_{\omega}\nabla\varphi_{h}\right)\right)\left(\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\right)\right|\,\delta\omega$
$\displaystyle\leq\left(\varepsilon\delta_{\Omega^{e}}^{\frac{1}{2}}\|\varphi_{h}\|_{2,\Omega^{e}}+\pi^{\text{p}}_{1}\delta_{\Omega^{e}}^{\frac{1}{2}}\|D_{\omega}\|_{\infty,\Omega^{e}}\|\Delta\varphi_{h}\|_{2,\Omega^{e}}\right)\|\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\|_{2,\Omega^{e}}$
$\displaystyle\leq\left(\varepsilon\delta_{\Omega^{e}}^{\frac{1}{2}}\|\varphi_{h}\|_{2,\Omega^{e}}+\delta_{\Omega^{e}}^{\frac{1}{2}}\dfrac{\pi^{\text{p}}_{1}\|D_{\omega}\|_{\infty,\Omega^{e}}C_{inv}}{h_{\Omega^{e}}}\|\nabla\varphi_{h}\|_{2,\Omega^{e}}\right)\|\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\|_{2,\Omega^{e}}$ (26)
$\displaystyle\leq\left(\sqrt{\dfrac{\varepsilon}{2}}\|\varphi_{h}\|_{2,\Omega^{e}}+\sqrt{\dfrac{\pi^{\text{p}}_{1}\|D_{\omega}\|_{\infty,\Omega^{e}}}{2}}\|\nabla\varphi_{h}\|_{2,\Omega^{e}}\right)\|\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\|_{2,\Omega^{e}}$ (27)
$\displaystyle\leq\dfrac{\varepsilon}{4\xi}\|\varphi_{h}\|_{2,\Omega^{e}}^{2}+\dfrac{\pi^{\text{p}}_{1}\|D_{\omega}\|_{\infty,\Omega^{e}}}{4\xi}\|\nabla\varphi_{h}\|_{2,\Omega^{e}}^{2}+\xi\|\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\|_{2,\Omega^{e}}^{2}$ (28)
having used the Cauchy-Schwarz inequality for (25), then the inverse inequality (see [50])
$\|\Delta\varphi_{h}\|_{2,\Omega^{e}}\leq\dfrac{C_{inv}}{h_{\Omega^{e}}}\|\nabla\varphi_{h}\|_{2,\Omega^{e}}$
for (26) (we also refer to [51] and [52] for further details on inverse inequalities), then (18) for (27), and finally Young's product inequality for (28). Thus, choosing $\xi=\frac{1}{2}$, we obtain
$\left|\int_{\Omega^{e}}\delta_{\Omega^{e}}^{\frac{1}{2}}\left(\varepsilon\varphi_{h}-\pi^{\text{p}}_{1}\nabla\cdot\left({D}_{\omega}\nabla\varphi_{h}\right)\right)\left(\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\right)\,\delta\omega\right|\leq\dfrac{\varepsilon}{2}\|\varphi_{h}\|_{2,\Omega^{e}}^{2}+\dfrac{\pi^{\text{p}}_{1}\|D_{\omega}\|_{\infty,\Omega^{e}}}{2}\|\nabla\varphi_{h}\|_{2,\Omega^{e}}^{2}+\dfrac{1}{2}\|\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\|_{2,\Omega^{e}}^{2},$
(29)
which, combined with (24), gives
$a^{\varepsilon}_{\sc supg}(\varphi_{h},\varphi_{h})\geq\|\varphi_{h}\|_{\sc supg}^{2}-\dfrac{1}{2}\sum_{e}^{nel}\left(\varepsilon\|\varphi_{h}\|_{2,\Omega^{e}}^{2}+\pi^{\text{p}}_{1}\|D_{\omega}\|_{\infty,\Omega^{e}}\|\nabla\varphi_{h}\|_{2,\Omega^{e}}^{2}+\|\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\|_{2,\Omega^{e}}^{2}\right)=\dfrac{1}{2}\|\varphi_{h}\|_{\sc supg}^{2}.$
(30)
For the boundedness of the bilinear form, we have
$\displaystyle a^{\varepsilon}_{\sc supg}(\varphi_{h},\psi_{h})=\left(\left(\mathbf{u}\cdot\nabla\varphi_{h}\right),\psi_{h}\right)+\pi^{\text{p}}_{1}\left({D}_{\omega}\nabla\varphi_{h},\nabla\psi_{h}\right)+\varepsilon\left(\varphi_{h},\psi_{h}\right)$ (31)
$\displaystyle\qquad+\sum_{e}^{nel}\int_{\Omega^{e}}\delta_{\Omega^{e}}\left(\mathbf{u}\cdot\nabla\varphi_{h}\right)\left(\mathbf{u}\cdot\nabla\psi_{h}\right)\,\delta\omega+\sum_{e}^{nel}\int_{\Omega^{e}}\delta_{\Omega^{e}}^{\frac{1}{2}}\left(\varepsilon\varphi_{h}-\pi^{\text{p}}_{1}\nabla\cdot\left({D}_{\omega}\nabla\varphi_{h}\right)\right)\left(\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\psi_{h}\right)\,\delta\omega$
$\displaystyle\leq\left(c_{\Omega}\|\mathbf{u}\|_{2}+\pi^{\text{p}}_{1}\|{D}_{\omega}\|_{\infty}+\varepsilon c_{\Omega}^{2}\right)\|\nabla\varphi_{h}\|_{2}\|\nabla\psi_{h}\|_{2}+\sum_{e}^{nel}\left\|\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\varphi_{h}\right\|_{2,\Omega^{e}}\left\|\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\psi_{h}\right\|_{2,\Omega^{e}}$
$\displaystyle\qquad+\sum_{e}^{nel}\left(\sqrt{\dfrac{\varepsilon}{2}}\|\varphi_{h}\|_{2,\Omega^{e}}+\sqrt{\dfrac{\pi^{\text{p}}_{1}\|D_{\omega}\|_{\infty,\Omega^{e}}}{2}}\|\nabla\varphi_{h}\|_{2,\Omega^{e}}\right)\left\|\delta_{\Omega^{e}}^{\frac{1}{2}}\mathbf{u}\cdot\nabla\psi_{h}\right\|_{2,\Omega^{e}}$
$\displaystyle\leq C\,\|\varphi_{h}\|_{\sc supg}\|\psi_{h}\|_{\sc supg},$
where $C$ depends on $c_{\Omega}$, $\varepsilon$, $\|{D}_{\omega}\|_{\infty}$ and $\|\mathbf{u}\|_{2}$, and the stabilization term has been bounded as in (27). This completes the proof. ∎
Note that the SUPG method provides stronger stability in the streamline direction than the standard Galerkin discretization. Furthermore, the nanoparticle concentration must have mean value equal to one (as per the dimensionless formulation of the equations), which reflects the conservation of the bulk amount of concentration in the enclosure. This constraint reads
$\int_{\Omega}\phi(\omega)\,\delta\omega=1.$ (32)
In this light, the problem is treated as a variational minimization problem subject to a constraint on the state variable $\phi_{h}$:
$\min_{\phi_{h}\in H^{1}_{0}(\Omega),\ \int_{\Omega}\phi_{h}(\omega)\,\delta\omega=1}\mathcal{J}(\phi_{h},\lambda):=\dfrac{1}{2}a^{\varepsilon}(\phi_{h},\phi_{h})+\lambda\int_{\Omega}\phi_{h}(\omega)\,\delta\omega,$
where the constraint (32) is taken into account through the Lagrange multiplier $\lambda$ in the augmented cost functional. The equations to be solved for the nanoparticle concentration are easily retrieved as a critical point of $\mathcal{J}(\cdot,\cdot)$, i.e. by differentiating with respect to $\phi_{h}$ and $\lambda$.
In practice, we assemble the finite element matrix system for the nanoparticle transport equation as follows:
$\begin{bmatrix}\mathbf{N}&z\\ z^{T}&0\end{bmatrix}\begin{bmatrix}\phi_{h}\\ \lambda\end{bmatrix}=\begin{bmatrix}b\\ 1\end{bmatrix}$ (33)
where
$\displaystyle\left(\mathbf{N}\right)_{i,j}=\int_{\Omega}\left(\mathbf{u}\cdot\nabla\varphi_{h}^{i}\right)\varphi_{h}^{j}\,\delta\omega+\int_{\Omega}\pi^{\text{p}}_{1}D_{\omega}\nabla\varphi_{h}^{i}\cdot\nabla\varphi_{h}^{j}\,\delta\omega+\int_{\Omega}\varepsilon\varphi_{h}^{i}\varphi_{h}^{j}\,\delta\omega+\sum_{e}^{nel}\int_{\Omega^{e}}\delta_{\Omega^{e}}\left(\mathbf{u}\cdot\nabla\varphi_{h}^{i}\right)\left(\mathbf{u}\cdot\nabla\varphi_{h}^{j}\right)\,\delta\omega-\sum_{e}^{nel}\int_{\Omega^{e}}\delta_{\Omega^{e}}\pi^{\text{p}}_{1}\nabla\cdot\left({D}_{\omega}\nabla\varphi_{h}^{i}\right)\left(\mathbf{u}\cdot\nabla\varphi_{h}^{j}\right)\,\delta\omega$
$\displaystyle\left(z\right)_{j}=\int_{\Omega}\varphi_{h}^{j}\,\delta\omega$
$\displaystyle\left(b\right)_{j}=-\int_{\Omega}\pi^{\text{p}}_{2}{D}_{\theta_{h}}\nabla\theta_{h}\cdot\nabla\varphi_{h}^{j}\,\delta\omega+\sum_{e}^{nel}\int_{\Omega^{e}}\delta_{\Omega^{e}}\pi^{\text{p}}_{2}\nabla\cdot\left({D}_{\theta_{h}}\nabla\theta_{h}\right)\left(\mathbf{u}\cdot\nabla\varphi_{h}^{j}\right)\,\delta\omega.$
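The bordered (saddle-point) structure of (33) is easy to exercise numerically. The sketch below is an illustration with a generic nonsingular matrix standing in for $\mathbf{N}$ and a random positive vector standing in for $z$; it is not the assembled FE system, but it shows how the multiplier enforces the mean-value constraint exactly:

```python
import numpy as np

# Illustrative stand-in for (33): a generic nonsingular (here SPD) matrix N
# bordered by a positive "mass" vector z, with the constraint z.phi = 1.
rng = np.random.default_rng(1)
n = 30
M = rng.standard_normal((n, n))
N = M @ M.T + n * np.eye(n)          # SPD, hence nonsingular
z = rng.random(n) + 0.1              # positive entries, plays int(varphi_j)
bvec = rng.standard_normal(n)

A = np.block([[N, z[:, None]],
              [z[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(A, np.append(bvec, 1.0))
phi, lam = sol[:-1], sol[-1]

assert abs(z @ phi - 1.0) < 1e-10            # mean-value constraint holds
assert np.allclose(N @ phi + lam * z, bvec)  # first block row satisfied
```

With $\mathbf{N}$ nonsingular and $z\neq 0$ the Schur complement $-z^{T}\mathbf{N}^{-1}z$ of the bordered matrix is nonzero, which is the content of Proposition 2 below.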
One may ask whether the augmented matrix in (33) is invertible. Indeed it is, as per the following proposition.
###### Proposition 2.
The discrete linear system (33), describing the nanoparticle transport in the enclosure under the mean-value constraint, is uniquely solvable.
###### Proof.
The finite element square matrix $\mathbf{N}:=[c_{1}|\cdots|c_{n}]\in\mathbb{R}^{n\times n}$ is non-singular as per Proposition 1, which means that $0$ is not an eigenvalue of $\mathbf{N}$. Therefore, its column vectors $c_{i}$, $i=1,\ldots,n$, form a basis of the column space $\text{Col}(\mathbf{N})$. For any non-zero vector $z\in\mathbb{R}^{n}$ with positive entries, there exists a set of coefficients $\alpha_{j}$ such that $z=\sum_{j=1}^{n}\alpha_{j}c_{j}$, where $\alpha=\mathbf{N}^{-1}z$ is clearly unique.
Assume now that the column vector $\tilde{z}:=\begin{bmatrix}z\\ 0\end{bmatrix}$ is also a linear combination of the columns of the augmented matrix $\begin{bmatrix}\mathbf{N}\\ z^{T}\end{bmatrix}$, i.e. $\tilde{z}=\sum_{j}\alpha_{j}\tilde{c}_{j}$, where $\tilde{c}_{j}=[c_{j}^{T},z_{j}]^{T}$. Taking the last row of this linear combination gives $\sum_{j}\alpha_{j}z_{j}=0$, or equivalently
$z^{T}\alpha=0,$
which leads to $z^{T}\mathbf{N}^{-1}z=0$ for a non-zero $z$ with positive entries. This contradicts the coercivity of $\mathbf{N}$ established in Proposition 1, so the columns of the augmented matrix in (33) are linearly independent. ∎
### 4.3 The Algorithm
We now present our method, which combines Newton's method for the resolution of the nonlinear PDEs coupling the momentum and energy equations with the steady-state, advection-dominated nanoparticle concentration equation. The dependence of the physical parameters on the nanoparticle concentration differs from one correlation to another; see for instance [21, 38, 53] without being exhaustive. In most cases a highly nonlinear term involving the nanoparticle concentration appears in the correlation (generally coming from a nonlinear fitting procedure). In order to avoid any differentiation with respect to the nanoparticle concentration in Newton's method, we split the resolution into two stages and update all variables accordingly in an iterative fashion, as shown in Algorithm 1.
Input: $tol,\mathbf{u}^{0},p^{0},\theta^{0}$
1 for _$Ra_{nf}$ in $(10^{4},\ldots,10^{8})$_ do
2  $\epsilon=1$; $k=1$; $\phi_{h}^{0}=(\int_{\Omega}\,\delta\omega)^{-1}$; /* initialize the nanoparticle concentration */
3  while _$\epsilon\geq tol$_ do
4   Solve for $[\delta\mathbf{u}^{k},\delta p^{k},\delta\theta^{k}]^{T}$ following Eq. (15), using $\phi_{h}^{k-1}$;
5   $\epsilon\leftarrow\dfrac{[\delta\mathbf{u}_{h}^{k},\delta p_{h}^{k},\delta\theta_{h}^{k}]^{T}[\delta\mathbf{u}_{h}^{k},\delta p_{h}^{k},\delta\theta_{h}^{k}]}{[\mathbf{u}_{h}^{k},p_{h}^{k},\theta_{h}^{k}]^{T}[\mathbf{u}_{h}^{k},p_{h}^{k},\theta_{h}^{k}]}$;
6   Update $[\mathbf{u}_{h}^{k+1},p_{h}^{k+1},\theta_{h}^{k+1}]^{T}$ following Eq. (14);
7   Solve for $\phi_{h}^{k+1}$ under the mean-value constraint Eq. (33); /* nanoparticle concentration with SUPG stabilization */
8   $k\leftarrow k+1$;
9  end while
10  $k\leftarrow 0$;
11 end for
Algorithm 1 Combined Newton's method and SUPG for the nanofluid equations
As shown in Algorithm 1, a split procedure is adopted: the tangent equation concerns only the momentum and energy equations, leading to an update, through Newton's method, of the velocity and the temperature respectively. These two equations nevertheless depend on the nanoparticle concentration through the mixture density, viscosity and thermal conductivity. The proposed method holds the nanoparticle concentration fixed through one iteration of Newton's method; it is then updated by an exact resolution (through LU decomposition) of the nanoparticle concentration equation (with SUPG), based on the new velocity and temperature coming out of the previous Newton iteration. This alternating combination is applied throughout the iterations and leads to convergence of all variables involved. The split approach yields a simple yet effective implementation of Newton's method in the presence of highly nonlinear parameters: it avoids the tedious derivation of the tangent equations of the whole coupled system of four PDEs, and it requires less memory, hence faster computation. The procedure is repeated over a predefined set of increasing Rayleigh numbers.
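The alternation can be caricatured with two coupled scalar equations: a nonlinear equation in $x$ whose coefficient depends on $y$ (playing the momentum/energy stage), and a linear equation in $y$ depending on $x$ (playing the transport stage), solved alternately until the Newton increment is small. This is a hypothetical toy, not the paper's solver:

```python
def split_solve(tol=1e-12, maxit=100):
    """Caricature of Algorithm 1's splitting: alternate (i) one Newton
    step on f(x; y) = x**3 + y*x - 10 = 0 with y frozen, then (ii) an
    exact update of g(y; x) = 2*y - x = 0 with the new x."""
    x, y = 1.0, 1.0
    for _ in range(maxit):
        f, df = x**3 + y * x - 10.0, 3 * x**2 + y
        dx = f / df          # Newton increment with frozen y
        x -= dx
        y = x / 2.0          # exact "transport" solve with the new x
        if abs(dx) < tol:
            return x, y
    raise RuntimeError("no convergence")

x, y = split_solve()
assert abs(x**3 + y * x - 10.0) < 1e-9 and abs(2 * y - x) < 1e-12
```

The combined fixed point here is $(x,y)=(2,1)$; as in Algorithm 1, each variable is updated with the most recent value of the other, and the alternation converges without ever differentiating through the frozen coupling.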
## 5 Numerical experiments and validations
In this section, we investigate the numerical treatment of heat transfer enhancement in a differentially heated enclosure using the variable thermal conductivity and variable viscosity of the alumina-water nanofluid (Al2O3-water). The validation of the numerical scheme proceeds in two steps. The first focuses on validating the numerical scheme for the resolution of the momentum and energy equations regardless of the volume fraction; in this case we consider the pure-water heat transfer calculation in a square cavity. The second compares the present numerical results with available numerical and experimental data.
### 5.1 Numerical scheme validation
Figure 2: Mesh sensitivity. Nusselt number (base fluid) plotted along the heated wall of the cavity.
We present in Figure 2 the mesh-sensitivity results for the base fluid (using the Newton solver only).
Figure 3: Order of Convergence, with respect to the mesh size, of the
presented scheme for the nanoparticle concentration with SUPG stabilization.
Figure3 shows the quadratic convergence of the proposed numerical scheme with
the use of the SUPG stabilization technique. In addition, as we shall explain
in the sequel, we enforced the bulk of the nanoparticle concentration to be of
mean value equal to $1$, through minimization under constraint problem. A
continuous $\textbf{P}2$ finite element was used for the advection dominated
nanoparticle concentration equation. Results show that the numerical scheme
with the SUPG is stable and satisfies the convergence property of the FE
discretization [54] even at high Rayleigh number for a turbulent flow [55].
Figure 4 illustrates the benefit of the SUPG method in eliminating the numerical artifacts that appear in the nanoparticle concentration, which is governed by an advection-dominated problem.
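The streamline-upwind weighting behind such stabilization is usually driven by the local element Peclet number. The sketch below uses the classical Brooks–Hughes upwind function as an illustration; the exact regularisation function of the present scheme (which couples the local Peclet and global Rayleigh numbers, as discussed in the conclusion) may differ.

```python
import math

def supg_tau(u_norm, h, kappa):
    """Element-wise SUPG stabilisation parameter for an
    advection-diffusion problem (classical coth form; illustrative)."""
    if u_norm == 0.0:
        return 0.0                         # no advection, no upwinding
    peclet = u_norm * h / (2.0 * kappa)    # local element Peclet number
    if peclet < 1e-8:                      # diffusion-dominated limit
        return 0.0
    xi = 1.0 / math.tanh(peclet) - 1.0 / peclet   # upwind function
    return h / (2.0 * u_norm) * xi
```

In the advection-dominated limit (large Peclet) the parameter tends to $h/(2\|u\|)$, while it vanishes in the diffusive limit, which is why the added artificial viscosity does not pollute well-resolved regions.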
(Image grid for Figure 4: rows without and with SUPG; columns $\text{Ra}_{\text{nf}}=10^{6}$, $10^{7}$ and $10^{8}$.)
Figure 4: Stabilization effect of SUPG on the nanoparticle solutions through the elimination of numerical artifacts. Isovalues of the concentration for $\phi_{0}=3\%$.
Figure 4 clearly shows that, without SUPG stabilisation, the spurious oscillations are more pronounced in the high-Rayleigh simulations, where the Peclet number is indeed high.
### 5.2 Validation -vs- Experimental results
Our numerical scheme is validated through a detailed comparison with the experimental data of Ho et al. [21] for Al2O3, whose thermophysical properties are reported in Table 1. In their experimental investigations, the authors studied the heat transfer characteristics of an alumina-water nanofluid enclosed in square cells. They used three different cell geometries and different heating conditions ($\theta^{\star}_{H}-\theta^{\star}_{C}$) to increase the Rayleigh number. Cases with $0$, $1$ and $3\%$ nanoparticle concentration were considered. It has been shown recently [56] that numerical experiments using continuous models over-predict the heat transfer enhancement.
Physical properties | Base fluid | Al2O3
---|---|---
$c_{\text{p}}$ (J kg$^{-1}$ K$^{-1}$) | 4179 | 765
$\rho$ (kg m$^{-3}$) | 997.1 | 3970
$k$ (W m$^{-1}$ K$^{-1}$) | 0.613 | 25
$d_{\text{p}}$ (nm) | 0.384 | 47
$\alpha\cdot 10^{-7}$ (m$^{2}$ s$^{-1}$) | 1.47 | 82.23
$\beta\cdot 10^{-5}$ (K$^{-1}$) | 21 | 0.85
Table 1: Physical properties of base fluid and Al2O3 nanoparticles
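The mixture rules that consume these values can be evaluated directly. The sketch below uses the volume-fraction weighted density of Eq. (2) with Table 1 defaults; the effective heat capacity uses the standard $(\rho c_{\text{p}})$ mixture rule, which is an assumption stated here for illustration.

```python
def mixture_density(phi, rho_bf=997.1, rho_np=3970.0):
    """Volume-fraction weighted nanofluid density (Eq. 2 of the text),
    with defaults taken from Table 1 for water and Al2O3."""
    return (1.0 - phi) * rho_bf + phi * rho_np

def mixture_heat_capacity(phi, rho_bf=997.1, rho_np=3970.0,
                          cp_bf=4179.0, cp_np=765.0):
    """Effective cp from the weighted volumetric heat capacity
    (rho*cp)_nf divided back by the mixture density (assumed rule)."""
    rho_cp = (1.0 - phi) * rho_bf * cp_bf + phi * rho_np * cp_np
    return rho_cp / mixture_density(phi, rho_bf, rho_np)
```

At $\phi=3\%$ the density already rises by roughly $9\%$, which is why treating the properties as variable matters in the momentum balance.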
Figure 5: Comparison of the numerical experiments with the experimental results of Ho et al. [21] for the first cell case
Our validation of the numerical results against the experimental findings on nanofluid heat transfer follows the cell-by-cell presentation of [21]. The corresponding results are reported in Figures 5, 6 and 7. In each of these figures we separately plot the base fluid case (left), the nanofluid at $1\%$ concentration (middle) and the nanofluid at $3\%$ concentration (right). To produce these results, we used the viscosity and thermal conductivity correlations reported by Ho et al. [21], namely
$\mathcal{C}_{\mu}(\phi)=1+4.93\,\phi+222.4\,\phi^{2},\qquad\mathcal{C}_{k}(\phi)=1+2.944\,\phi+19.672\,\phi^{2},$
standing for the viscosity and thermal conductivity, respectively.
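These polynomial correlations are cheap to evaluate at every quadrature point; a direct transcription (the coefficients are exactly those quoted above from Ho et al. [21]):

```python
def viscosity_ratio(phi):
    """Relative viscosity C_mu(phi) = mu_nf / mu_bf from the
    Ho et al. correlation quoted in the text."""
    return 1.0 + 4.93 * phi + 222.4 * phi**2

def conductivity_ratio(phi):
    """Relative thermal conductivity C_k(phi) = k_nf / k_bf
    from the same correlation."""
    return 1.0 + 2.944 * phi + 19.672 * phi**2
```

Note the strongly quadratic growth of the viscosity ratio: at $\phi=3\%$ the viscosity rises by about $35\%$ while the conductivity rises by only about $11\%$, which is one driver of the heat transfer deterioration discussed below.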
Figure 6: Comparison of the numerical experiment with the experimental results
of Ho et al. [21] for the second cell case
Figure 7: Comparison of the numerical experiment with the experimental results
of Ho et al. [21] for the third cell case
Based on the averaged Nusselt number values of the nanofluid, the present predictions show reasonably good agreement with the experimental data for low and moderate Ra (Figures 5 and 6), whereas for high Ra and high concentration the numerical calculations overestimate the Nusselt number. In fact, for the high $\text{Ra}_{\text{nf}}$ cases (Figure 7), the experimental results show a more pronounced Nusselt number deterioration with the nanofluid than with the base fluid. In their paper, Ho et al. [21] suspected that this behavior could be attributed to transport mechanisms associated with nanoparticle-fluid interactions, such as Brownian diffusion and thermophoresis, in addition to the impact of the changes in thermophysical properties. The present numerical predictions, however, indicate that the Buongiorno model, which is intended specifically to account for these two mechanisms, is not able to mimic the equivalent heat transfer impairment. This suggests that additional forces may need to be incorporated in the Buongiorno transport equations to provide a better physical model for the nanoparticle concentration, capable of reflecting the effect of the nanoparticles on the heat transfer impairment observed in the experimental data.
### 5.3 FE stabilized Buongiorno model -vs- multi-phase model
This subsection is devoted to the comparison of the numerical results obtained by Algorithm 1, based on an FEM discretization of the Buongiorno nanofluid transport model, with the results of [57], which are based on a finite volume discretization using the commercial software Fluent [58]. It is worth noting that both methods deal with the same physics of nanofluid transport, although they use different model equations. Indeed, the aforementioned results are based on a solid-liquid mixture model which solves the momentum equations with an additional term accounting for the phases' drift velocity, the continuity equation, and the energy equation for the mixture. The model adopts algebraic expressions for the relative velocities, which are then used to define the drift velocities (see Fluent’s documentation for more details [58]). Both numerical results are compared with the experimental results of Ho et al. [21].
Figure 8: Comparison of the present numerical experiments with the numerical results of Chen et al. [57] over the range of $Ra$, in the base fluid formulation.
Figure 8 depicts both numerical results and shows good agreement between the two methods. The range of data available for this validation and comparison is $1\cdot 10^{6}$ to $6\cdot 10^{6}$. For the case of $1\%$ nanoparticle bulk concentration, the Buongiorno model gives a better prediction of the heat transfer in terms of the nanofluid Nusselt number. However, this advantage shifts marginally to the multi-phase model as the bulk nanoparticle concentration is increased to $3\%$.
(Image grid for Figure 9: one row per Rayleigh number, $\text{Ra}_{\text{nf}}=10^{4}$, $10^{5}$, $10^{6}$, $10^{7}$ and $10^{8}$.)
Figure 9: (From left to right) velocity streamlines, heat distribution and nanoparticle concentration distribution. The plots show (from top to bottom) the effect of increasing the Rayleigh number on these variables. The numerical simulation was performed with a finite element discretization, using Lagrange polynomials of degree two for the velocity and of degree one for the temperature and the nanoparticle concentration.
Plots of streamline contours, isothermal lines and the nanoparticle concentration distribution are shown in Figure 9, in which we vary the Rayleigh number $\text{Ra}_{\text{nf}}$. These results show that the streamlines exhibit recirculations that flatten as the Rayleigh number increases. These recirculations become localized in the middle of the cavity and near the hot and cold walls, whereas the isothermal lines become more and more horizontal, leading to high temperature gradients near the hot and cold walls. One can directly see the increase of the heat transfer with the Rayleigh number. Moreover, both the energy and the nanoparticle concentration are advected mainly by the buoyancy-driven flow, although the energy equation has a considerable contribution from thermal diffusion compared to the nanoparticle transport equation, whose diffusion terms are much smaller. Indeed, the Brownian diffusivity is very weak in comparison to the thermal diffusivity and even negligible when compared to the advection term. This drastically affects the numerical approximation of this advection-dominated equation; the SUPG artificial viscosity along the streamlines is therefore needed to stabilize the numerical scheme at high Rayleigh/Peclet numbers, as shown in Figure 4. As the nanoparticle transport equation is mainly driven by advection, one would expect the distribution of the concentration to be very similar to the streamlines of the flow. Indeed, this behaviour is observed in the third column of Figure 9.
Figure 10: Nanoparticle concentration profile along the line $(x,\frac{1}{2})$ for concentrations varying from $0\%$ to $3\%$.
Figure 10 displays the nanoparticle concentration profile along the line $(x,0.5)$ for different Rayleigh numbers and averaged concentration $\phi$ values ranging from $1\%$ to $3\%$. Although the profiles exhibit only minor variation, of the order of $10^{-4}$ in magnitude along the horizontal line, very interesting phenomena occurring in the vicinity of the hot and cold walls can be observed. Near the cold wall, for instance, the concentration profiles decrease sharply as the nanoparticles approach the wall. The slope of this decrease is a function of both the averaged nanoparticle concentration and the Rayleigh number. Recalling that the thermophoresis effect tends to move particles from hotter to colder zones, even in the near-wall zone, where the convective velocity magnitude is small, this effect is not dominant and the nanoparticles are still carried away by the recirculating motion of the carrier fluid. On the hot wall, however, a different picture emerges. Although it would not be straightforward to pinpoint the dominant term in this zone, the resulting force seems to favor a higher nanoparticle concentration in the wall vicinity, followed by a sharper increase, which can be interpreted as the convective term regaining its dominant effect further away from the wall. All the nanoparticles removed from the wall regions by one of the two mechanisms described above accumulate in the center, resulting in a relatively higher nanoparticle concentration there. As stated above, however, the concentration variation along this line remains marginal; hence one would not expect dramatically different outcomes from running the simulations with a constant concentration value, i.e. assuming a single-phase model approach.
## 6 Conclusion
We presented in this article a numerical technique based on Newton-Raphson
iterations to solve the nanofluid heat transfer problem in a square cavity
with variable properties. In addition to its generality (regardless of the
correlation used for the variable properties), our technique has two main advantages compared to the conventional use of Newton’s iterations:
* 1.
Firstly, it avoids the difficulty arising from the highly non-linear dependency on the concentration in several correlations published in the literature. Indeed, the Jacobian (tangent problem) disregards the nanoparticle concentration and only considers the velocity, pressure, and temperature, while the nanoparticle concentration is updated iteratively. The momentum and energy equations are solved through Newton’s iterations, as the dominant (Navier-Stokes) equations are quadratic in the velocity variable, while the nanoparticle transport equation is solved right after each iteration of the momentum and energy equations.
* 2.
Secondly, the proposed split leads to lower memory consumption and allows the viscosity, density, and thermal conductivity of the recirculating flow to be updated at each Newton iteration.
The numerical experiments, based on a finite element discretization of the nanofluid heat transfer problem, were regularized using the SUPG method, which proved very effective in stabilizing the numerical solution by removing the spurious oscillations without degrading the solution. In particular, we found that the ratio between the local Peclet number and the global Rayleigh number is a good choice for the regularization function used in the SUPG. Our numerical scheme has also been thoroughly validated against experimental results and showed good agreement over a large spectrum of Rayleigh numbers ranging from $10^{4}$ to $10^{8}$. The present study also reveals that although Buongiorno’s four-equation nanofluid transport model, tested herein, is able to capture additional physical phenomena affecting the nanoparticle distribution, additional forces might have to be accounted for within this model in order to mimic a heat transfer deterioration of amplitude similar to that observed in the experimental data.
## Appendix A Density dimensionless derivation
Following Eq.(2) we have
$\rho_{\text{nf}}=(1-\phi^{\star})\rho_{\text{bf}}+\phi^{\star}\rho_{\text{np}}$
note also from Eq. (4),
$(\rho\beta)_{\text{nf}}=(\rho\beta)_{\text{bf}}(1-\phi^{\star})+(\rho\beta)_{\text{np}}\phi^{\star}.$
where $(\rho\beta)_{\text{bf}}=\rho_{\text{bf}}\beta_{\text{bf}}$ and
$(\rho\beta)_{\text{np}}=\rho_{\text{np}}\beta_{\text{np}}$. Hence
$\displaystyle\dfrac{(\rho\beta)_{\text{nf}}}{\rho_{\text{nf}}}=\dfrac{\rho_{\text{bf}}\beta_{\text{bf}}(1-\phi^{\star})}{(1-\phi^{\star})\rho_{\text{bf}}+\phi^{\star}\rho_{\text{np}}}+\dfrac{\rho_{\text{np}}\beta_{\text{np}}\phi^{\star}}{(1-\phi^{\star})\rho_{\text{bf}}+\phi^{\star}\rho_{\text{np}}}=\beta_{\text{bf}}\left(\dfrac{1-\phi^{\star}}{(1-\phi^{\star})+\phi^{\star}\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}}}+\dfrac{\phi^{\star}\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}}}{(1-\phi^{\star})+\phi^{\star}\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}}}\dfrac{\beta_{\text{np}}}{\beta_{\text{bf}}}\right),$
where each fraction has been divided through by $\rho_{\text{bf}}$ and $\beta_{\text{bf}}$ has been factored out. Let
$\mathcal{M}:=\dfrac{1-\phi^{\star}}{(1-\phi^{\star})+\phi^{\star}\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}}}+\dfrac{\phi^{\star}\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}}}{(1-\phi^{\star})+\phi^{\star}\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}}}\dfrac{\beta_{\text{np}}}{\beta_{\text{bf}}},$
so that $(\rho\beta)_{\text{nf}}/\rho_{\text{nf}}=\beta_{\text{bf}}\,\mathcal{M}$.
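This derivation can be checked numerically by evaluating the ratio directly from Eqs. (2) and (4), which is useful for verifying any algebraic rearrangement of it. A small sketch using the Table 1 values as defaults:

```python
def buoyancy_density_ratio(phi, rho_bf=997.1, rho_np=3970.0,
                           beta_bf=21e-5, beta_np=0.85e-5):
    """Direct evaluation of (rho*beta)_nf / rho_nf from the mixture
    rules of Eqs. (2) and (4); defaults are Table 1 values for
    water and Al2O3."""
    rho_nf = (1.0 - phi) * rho_bf + phi * rho_np        # Eq. (2)
    rho_beta_nf = ((1.0 - phi) * rho_bf * beta_bf
                   + phi * rho_np * beta_np)            # Eq. (4)
    return rho_beta_nf / rho_nf
```

Because $\beta_{\text{np}}\ll\beta_{\text{bf}}$ for Al2O3, the effective expansion coefficient drops noticeably with concentration, weakening the buoyancy forcing.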
## Appendix B Momentum equation
$\left(\textbf{u}^{\star}\cdot\nabla^{\star}\,\right)\textbf{u}^{\star}=\dfrac{-1}{\rho_{\text{nf}}}\nabla^{\star}\,p^{\star}+\dfrac{1}{\rho_{\text{nf}}}\nabla^{\star}\,\cdot\left(\mu^{\star}_{\text{nf}}\left(\nabla^{\star}\,\textbf{u}^{\star}+(\nabla^{\star}\,\textbf{u}^{\star})^{t}\right)\right)+\dfrac{{\mathbf{g}}}{\rho_{\text{nf}}}(\rho_{\infty}-\rho_{c})$
(34)
Moving toward dimensionless variables the above equation writes
$\dfrac{\alpha^{2}}{L^{3}}\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}=\dfrac{-\rho_{\text{bf}}\alpha^{2}}{L^{2}\rho_{\text{nf}}}\nabla^{\star}\,p^{\star}+\dfrac{\alpha\mu_{\text{bf}}}{L^{3}\rho_{\text{nf}}}\nabla^{\star}\,\cdot\left(\mu_{\text{nf}}\left(\nabla^{\star}\,\textbf{u}^{\star}+(\nabla^{\star}\,\textbf{u}^{\star})^{t}\right)\right)+\dfrac{{\mathbf{g}}\beta_{\text{nf}}}{\rho_{\text{nf}}}\rho_{\infty}(\theta_{h}-\theta_{c})\theta$
(35)
Multiplying the above equation by $\dfrac{L^{3}}{\alpha^{2}}$ we obtain
$\displaystyle\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}=-\dfrac{\rho_{\text{bf}}}{\rho_{\text{nf}}}\nabla p+\dfrac{\mu_{\text{bf}}}{\alpha\rho_{\text{bf}}\left(1-\phi+\phi\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}}\right)}\nabla\cdot\left(\mu_{\text{nf}}\left(\nabla\mathbf{u}+(\nabla\mathbf{u})^{t}\right)\right)+\dfrac{\mathcal{M}L^{3}{\mathbf{g}}\beta_{\text{nf}}}{\alpha^{2}\rho_{\text{nf}}}\rho_{\infty}(\theta_{h}-\theta_{c})\theta,$
which rewrites using the non-dimensional constants as follows
$\displaystyle\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}=-\pi^{m}_{1}(\phi)\nabla p+\pi^{m}_{2}(\phi)\nabla\cdot\left(\mu_{\text{nf}}\left(\nabla\mathbf{u}+(\nabla\mathbf{u})^{t}\right)\right)+\pi^{m}_{3}(\phi)\theta,$
(36)
where
$\displaystyle\pi^{m}_{1}(\phi)$ $\displaystyle=$
$\displaystyle\left(1-\phi+\phi\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}}\right)^{-1},$
(37) $\displaystyle\pi^{m}_{2}(\phi)$ $\displaystyle=$
$\displaystyle\text{Pr}\left(1-\phi+\phi\dfrac{\rho_{\text{np}}}{\rho_{\text{bf}}}\right)^{-1},$
(38) $\displaystyle\pi^{m}_{3}(\phi)$ $\displaystyle=$
$\displaystyle\text{Pr}\text{Ra}_{\text{nf}}\mathcal{M}.$ (39)
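For reference, the coefficients (37)-(39) can be evaluated directly. In this sketch the default density ratio uses the Table 1 values, and `m_factor` is a hypothetical stand-in for the expansion-coefficient ratio $\mathcal{M}$ of Appendix A.

```python
def pi_m(phi, pr, ra_nf, rho_np_over_bf=3970.0 / 997.1, m_factor=1.0):
    """Non-dimensional momentum coefficients (37)-(39).
    rho_np_over_bf defaults to the Table 1 Al2O3/water ratio;
    m_factor stands in for M (illustrative default of 1)."""
    denom = 1.0 - phi + phi * rho_np_over_bf
    pi1 = 1.0 / denom                 # pressure coefficient, Eq. (37)
    pi2 = pr / denom                  # viscous coefficient,  Eq. (38)
    pi3 = pr * ra_nf * m_factor       # buoyancy coefficient, Eq. (39)
    return pi1, pi2, pi3
```

Note that $\pi^{m}_{1}$ and $\pi^{m}_{2}$ decrease with $\phi$ (the denser mixture carries more inertia), while $\pi^{m}_{3}$ scales linearly with $\text{Ra}_{\text{nf}}$.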
## Appendix C Energy equation
The dimensional energy equation writes
$\left(\textbf{u}^{\star}\cdot\nabla^{\star}\,\theta^{\star}\right)=\dfrac{1}{c_{\text{nf}}\rho_{\text{nf}}}\nabla^{\star}\,\cdot\left(k^{\star}_{\text{nf}}\nabla^{\star}\,\theta^{\star}\right)+\left(\dfrac{\rho_{\text{np}}c_{\text{np}}}{\rho_{\text{nf}}c_{\text{nf}}}\right)\left(\dfrac{D^{\star}_{\theta}}{\theta^{\star}_{C}}\nabla^{\star}\,\theta^{\star}\cdot\nabla^{\star}\,\theta^{\star}+D_{\omega}^{\star}\nabla^{\star}\,\phi\cdot\nabla^{\star}\,\theta^{\star}\right)$
moving toward dimensionless variables the above equation writes
$\displaystyle\dfrac{\alpha(\theta^{\star}_{H}-\theta^{\star}_{C})}{L^{2}}\left(\mathbf{u}\cdot\nabla\theta\right)$
$\displaystyle=$
$\displaystyle\dfrac{k_{\text{bf}}(\theta^{\star}_{H}-\theta^{\star}_{C})}{L^{2}c_{\text{nf}}\rho_{\text{nf}}}\nabla\cdot\left(k_{\text{nf}}\nabla\theta\right)$
$\displaystyle+\left(\dfrac{\rho_{\text{np}}c_{\text{np}}}{\rho_{\text{nf}}c_{\text{nf}}}\right)\dfrac{D_{\theta_{c}}(\theta^{\star}_{H}-\theta^{\star}_{C})^{2}}{\theta^{\star}_{C}L^{2}}\left(D_{\theta}\nabla\theta\cdot\nabla\theta\right)$
$\displaystyle+\left(\dfrac{\rho_{\text{np}}c_{\text{np}}}{\rho_{\text{nf}}c_{\text{nf}}}\right)\dfrac{\phi_{\text{b}}D_{\omega_{c}}(\theta^{\star}_{H}-\theta^{\star}_{C})}{L^{2}}\left(D_{\omega}\nabla\phi\cdot\nabla\theta\right).$
Multiplying the above equation by
$\dfrac{L^{2}}{\alpha(\theta^{\star}_{H}-\theta^{\star}_{C})}$ we obtain
$\displaystyle\left(\mathbf{u}\cdot\nabla\theta\right)$ $\displaystyle=$
$\displaystyle\dfrac{k_{\text{bf}}}{\alpha
c_{\text{nf}}\rho_{\text{nf}}}\nabla\cdot\left(k_{\text{nf}}\nabla\theta\right)$
$\displaystyle+\left(\dfrac{\rho_{\text{np}}c_{\text{np}}}{\rho_{\text{nf}}c_{\text{nf}}}\right)\dfrac{D_{\theta_{c}}(\theta^{\star}_{H}-\theta^{\star}_{C})}{\alpha\theta^{\star}_{C}}\left(D_{\theta}\nabla\theta\cdot\nabla\theta\right)$
$\displaystyle+\left(\dfrac{\rho_{\text{np}}c_{\text{np}}}{\rho_{\text{nf}}c_{\text{nf}}}\right)\dfrac{\phi_{\text{b}}D_{\omega_{c}}}{\alpha}\left(D_{\omega}\nabla\phi\cdot\nabla\theta\right),$
which rewrites using non-dimensional variables as follows
$\displaystyle\left(\mathbf{u}\cdot\nabla\theta\right)$ $\displaystyle=$
$\displaystyle\pi^{e}_{1}\nabla\cdot\left(k_{\text{nf}}\nabla\theta\right)+\pi^{e}_{2}\left(D_{\theta}\nabla\theta\cdot\nabla\theta\right)+\pi^{e}_{3}\left(D_{\omega}\nabla\phi\cdot\nabla\theta\right)$
where
$\displaystyle\pi^{e}_{1}=\dfrac{k_{\text{bf}}}{\alpha c_{\text{nf}}\rho_{\text{nf}}}=\left(\phi+(1-\phi)\dfrac{\rho_{\text{bf}}c_{\text{bf}}}{\rho_{\text{np}}c_{\text{np}}}\right)^{-1},$
$\displaystyle\pi^{e}_{2}=\left(\dfrac{\rho_{\text{np}}c_{\text{np}}}{\rho_{\text{nf}}c_{\text{nf}}}\right)\dfrac{D_{\theta_{c}}(\theta^{\star}_{H}-\theta^{\star}_{C})}{\alpha\theta^{\star}_{C}}=\dfrac{\text{St}\,\text{Pr}}{\text{Sc}}\dfrac{\theta^{\star}_{H}-\theta^{\star}_{C}}{\theta^{\star}_{H}}\left(\phi+(1-\phi)\dfrac{\rho_{\text{bf}}c_{\text{bf}}}{\rho_{\text{np}}c_{\text{np}}}\right)^{-1},$
$\displaystyle\pi^{e}_{3}=\left(\dfrac{\rho_{\text{np}}c_{\text{np}}}{\rho_{\text{nf}}c_{\text{nf}}}\right)\dfrac{\phi_{\text{b}}D_{\omega_{c}}}{\alpha}=\dfrac{\text{Pr}}{\text{Sc}}\,\phi_{\text{b}}\left(\phi+(1-\phi)\dfrac{\rho_{\text{bf}}c_{\text{bf}}}{\rho_{\text{np}}c_{\text{np}}}\right)^{-1}.$
## Appendix D Nanoparticle transport equation
The dimensional nanoparticle transport equation writes
$\textbf{u}^{\star}\cdot\nabla^{\star}\,\phi^{\star}=\nabla^{\star}\,\cdot\left(D_{\omega}^{\star}\nabla^{\star}\,\phi^{\star}+\dfrac{D_{\theta}^{\star}}{\theta^{\star}_{C}}\nabla^{\star}\,\theta^{\star}\right).$
Using the non-dimensional variables, the above equation writes
$\dfrac{\alpha}{L^{2}}\phi_{\text{b}}\,\mathbf{u}\cdot\nabla\phi=\dfrac{\phi_{\text{b}}D_{\omega_{c}}}{L^{2}}\nabla\cdot\left(D_{\omega}\nabla\phi\right)+\dfrac{D_{\theta_{c}}(\theta^{\star}_{H}-\theta^{\star}_{C})}{L^{2}\theta^{\star}_{C}}\nabla\cdot\left(D_{\theta}\nabla\theta\right).$
Multiplying the above by $\dfrac{L^{2}}{\alpha\phi_{\text{b}}}$ we obtain
$\mathbf{u}\cdot\nabla\phi=\dfrac{D_{\omega_{c}}}{\alpha}\nabla\cdot\left(D_{\omega}\nabla\phi\right)+\dfrac{D_{\theta_{c}}(\theta^{\star}_{H}-\theta^{\star}_{C})}{\phi_{\text{b}}\,\alpha\,\theta^{\star}_{C}}\nabla\cdot\left(D_{\theta}\nabla\theta\right),$
that is,
$\mathbf{u}\cdot\nabla\phi=\pi^{p}_{1}\nabla\cdot\left(D_{\omega}\nabla\phi\right)+\pi^{p}_{2}\nabla\cdot\left(D_{\theta}\nabla\theta\right),$
where
$\pi^{p}_{1}=\dfrac{D_{\omega_{c}}}{\alpha}=\dfrac{1}{\text{Le}},\qquad\pi^{p}_{2}=\dfrac{D_{\theta_{c}}(\theta^{\star}_{H}-\theta^{\star}_{C})}{\phi_{\text{b}}\,\alpha\,\theta^{\star}_{C}}=\dfrac{\text{St}\,\text{Pr}}{\text{Sc}}\dfrac{\theta^{\star}_{H}-\theta^{\star}_{C}}{\theta^{\star}_{C}}\dfrac{1}{\phi_{\text{b}}}.$
## References
* Minkowycz et al. [2012] W. Minkowycz, E. M. Sparrow, J. P. Abraham, Nanoparticle heat transfer and fluid flow, volume 4, CRC press, 2012.
* Manca et al. [2010] O. Manca, Y. Jaluria, D. Poulikakos, Heat transfer in nanofluids, 2010.
* Kleinstreuer and Xu [2016] C. Kleinstreuer, Z. Xu, Mathematical modeling and computer simulations of nanofluid flow with applications to cooling and lubrication, Fluids 1 (2016) 16.
* Kleinstreuer [2013] C. Kleinstreuer, Microfluidics and nanofluidics: theory and selected applications, John Wiley & Sons, 2013.
* Buongiorno [2005] J. Buongiorno, Convective Transport in Nanofluids, Journal of Heat Transfer 128 (2005) 240–250. URL: https://doi.org/10.1115/1.2150834. doi:10.1115/1.2150834.
* Sheremet et al. [2018] M. A. Sheremet, I. Pop, O. Mahian, Natural convection in an inclined cavity with time-periodic temperature boundary conditions using nanofluids: application in solar collectors, International Journal of Heat and Mass Transfer 116 (2018) 751–761.
* Mahian et al. [2013] O. Mahian, A. Kianifar, S. A. Kalogirou, I. Pop, S. Wongwises, A review of the applications of nanofluids in solar energy, International Journal of Heat and Mass Transfer 57 (2013) 582–594.
* Li and Kleinstreuer [2008] J. Li, C. Kleinstreuer, Thermal performance of nanofluid flow in microchannels, International Journal of Heat and Fluid Flow 29 (2008) 1221–1232.
* Baïri [2018] A. Baïri, Effects of zno-h2o nanofluid saturated porous medium on the thermal behavior of cubical electronics contained in a tilted hemispherical cavity. an experimental and numerical study, Applied Thermal Engineering 138 (2018) 924–933.
* Li et al. [2018] Q. Li, J. Wang, J. Wang, J. Baleta, C. Min, B. Sundén, Effects of gravity and variable thermal properties on nanofluid convective heat transfer using connected and unconnected walls, Energy conversion and management 171 (2018) 1440–1448.
* Xu and Kleinstreuer [2014] Z. Xu, C. Kleinstreuer, Computational analysis of nanofluid cooling of high concentration photovoltaic cells, Journal of Thermal Science and Engineering Applications 6 (2014).
* Baïri et al. [2018] A. Baïri, N. Laraqi, K. Adeyeye, Thermal behavior of an active electronic dome contained in a tilted hemispherical enclosure and subjected to nanofluidic cu-water free convection, The European Physical Journal Plus 133 (2018) 1–11.
* Jabbari et al. [2017] F. Jabbari, A. Rajabpour, S. Saedodin, Thermal conductivity and viscosity of nanofluids: a review of recent molecular dynamics studies, Chemical Engineering Science 174 (2017) 67–81.
* Khodadadi et al. [2018] H. Khodadadi, S. Aghakhani, H. Majd, R. Kalbasi, S. Wongwises, M. Afrand, A comprehensive review on rheological behavior of mono and hybrid nanofluids: effective parameters and predictive correlations, International Journal of Heat and Mass Transfer 127 (2018) 997–1012.
* Fan and Wang [2011] J. Fan, L. Wang, Review of heat conduction in nanofluids, Journal of heat transfer 133 (2011).
* Kakaç and Pramuanjaroenkij [2009] S. Kakaç, A. Pramuanjaroenkij, Review of convective heat transfer enhancement with nanofluids, International journal of heat and mass transfer 52 (2009) 3187–3196.
* Buongiorno et al. [2009] J. Buongiorno, D. C. Venerus, N. Prabhat, T. McKrell, J. Townsend, R. Christianson, Y. V. Tolmachev, P. Keblinski, L.-w. Hu, J. L. Alvarado, et al., A benchmark study on the thermal conductivity of nanofluids, Journal of Applied Physics 106 (2009) 094312.
* Sheikholeslami and Ganji [2016] M. Sheikholeslami, D. Ganji, Nanofluid convective heat transfer using semi analytical and numerical approaches: a review, Journal of the Taiwan Institute of Chemical Engineers 65 (2016) 43–77.
* Wen and Ding [2004] D. Wen, Y. Ding, Experimental investigation into convective heat transfer of nanofluids at the entrance region under laminar flow conditions, International journal of heat and mass transfer 47 (2004) 5181–5188.
* Li and Peterson [2010] C. H. Li, G. Peterson, Experimental studies of natural convection heat transfer of al2o3/di water nanoparticle suspensions (nanofluids), Advances in Mechanical engineering 2 (2010) 742739.
* Ho et al. [2010] C. Ho, W. Liu, Y. Chang, C. Lin, Natural convection heat transfer of alumina-water nanofluid in vertical square enclosures: An experimental study, International Journal of Thermal Sciences 49 (2010) 1345–1353. doi:10.1016/j.ijthermalsci.2010.02.013.
* Putra et al. [2003] N. Putra, W. Roetzel, S. K. Das, Natural convection of nano-fluids, Heat and mass transfer 39 (2003) 775–784.
* Chon et al. [2005] C. H. Chon, K. D. Kihm, S. P. Lee, S. U. Choi, Empirical correlation finding the role of temperature and particle size for nanofluid (al 2 o 3) thermal conductivity enhancement, Applied Physics Letters 87 (2005) 153107.
* Galeão and Do Carmo [1988] A. C. Galeão, E. G. D. Do Carmo, A consistent approximate upwind petrov-galerkin method for convection-dominated problems, Computer Methods in Applied Mechanics and Engineering 68 (1988) 83–95.
* Yurun [1997] F. Yurun, A comparative study of the discontinuous galerkin and continuous supg finite element methods for computation of viscoelastic flows, Computer Methods in Applied Mechanics and Engineering 141 (1997) 47 – 65. URL: http://www.sciencedirect.com/science/article/pii/S0045782596011024. doi:https://doi.org/10.1016/S0045-7825(96)01102-4.
* Erath and Praetorius [2019] C. Erath, D. Praetorius, Optimal adaptivity for the supg finite element method, Computer Methods in Applied Mechanics and Engineering 353 (2019) 308 – 327. URL: http://www.sciencedirect.com/science/article/pii/S0045782519302981. doi:https://doi.org/10.1016/j.cma.2019.05.028.
* ten Eikelder and Akkerman [2018] M. ten Eikelder, I. Akkerman, Correct energy evolution of stabilized formulations: The relation between vms, supg and gls via dynamic orthogonal small-scales and isogeometric analysis. ii: The incompressible navier–stokes equations, Computer Methods in Applied Mechanics and Engineering 340 (2018) 1135 – 1154. URL: http://www.sciencedirect.com/science/article/pii/S0045782518301105. doi:https://doi.org/10.1016/j.cma.2018.02.030.
* Bänsch et al. [2020] E. Bänsch, S. Faghih-Naini, P. Morin, Convective transport in nanofluids: The stationary problem, Journal of Mathematical Analysis and Applications 489 (2020) 124151. URL: http://www.sciencedirect.com/science/article/pii/S0022247X20303139. doi:https://doi.org/10.1016/j.jmaa.2020.124151.
* Bänsch [2019] E. Bänsch, A thermodynamically consistent model for convective transport in nanofluids: existence of weak solutions and fem computations, Journal of Mathematical Analysis and Applications 477 (2019) 41–59.
* Shekar and Kishan [2015] B. C. Shekar, N. Kishan, Finite element analysis of natural convective heat transfer in a porous square cavity filled with nanofluids in the presence of thermal radiation, in: Journal of Physics: Conference Series, volume 662, IOP Publishing, 2015, p. 012017.
* Balla and Naikoti [2016] C. S. Balla, K. Naikoti, Finite element analysis of magnetohydrodynamic transient free convection flow of nanofluid over a vertical cone with thermal radiation, Proceedings of the Institution of Mechanical Engineers, Part N: Journal of Nanomaterials, Nanoengineering and Nanosystems 230 (2016) 161–173.
* Ullah et al. [2020] N. Ullah, S. Nadeem, A. U. Khan, Finite element simulations for natural convective flow of nanofluid in a rectangular cavity having corrugated heated rods, Journal of Thermal Analysis and Calorimetry (2020) 1–13.
* Girault and Raviart [2012] V. Girault, P.-A. Raviart, Finite element methods for Navier-Stokes equations: theory and algorithms, volume 5, Springer Science & Business Media, 2012.
* Taylor and Hood [1973] C. Taylor, P. Hood, A numerical solution of the navier-stokes equations using the finite element technique, Computers & Fluids 1 (1973) 73–100.
* Apel and Randrianarivony [2003] T. Apel, H. M. Randrianarivony, Stability of discretizations of the stokes problem on anisotropic meshes, Mathematics and Computers in Simulation 61 (2003) 437–447.
* Ho et al. [2010] C. Ho, W. Liu, Y. Chang, C. Lin, Natural convection heat transfer of alumina-water nanofluid in vertical square enclosures: an experimental study, International Journal of Thermal Sciences 49 (2010) 1345–1353.
* Abu-Nada and Chamkha [2010] E. Abu-Nada, A. J. Chamkha, Effect of nanofluid variable properties on natural convection in enclosures filled with a cuo–eg–water nanofluid, International Journal of Thermal Sciences 49 (2010) 2339–2352.
* Khanafer and Vafai [2017] K. Khanafer, K. Vafai, A critical synthesis of thermophysical characteristics of nanofluids, Nanotechnology and Energy (2017) 279–332. doi:10.1201/9781315163574-12.
* Franca et al. [2004] L. P. Franca, G. Hauke, A. Masud, Stabilized finite element methods, International Center for Numerical Methods in Engineering (CIMNE), Barcelona …, 2004.
# Band Structure Dependent Electronic Localization in Macroscopic Films of
Single-Chirality Single-Wall Carbon Nanotubes
Weilu Gao<EMAIL_ADDRESS>Tel:801-581-7054 Department of Electrical and
Computer Engineering, University of Utah, Salt Lake City, Utah 84112, USA
Davoud Adinehloo Department of Electrical Engineering, University at Buffalo,
Buffalo, NY 14228, USA Xinwei Li (present address: Division of Physics,
Mathematics and Astronomy, California Institute of Technology, Pasadena, CA
91125, USA) Department of Electrical and Computer Engineering, Rice University,
Houston, TX 77005, USA Ali Mojibpour Yohei Yomogida Department of Physics,
Tokyo Metropolitan University, Hachioji, Tokyo 192-0397, Japan Atsushi Hirano
Nanomaterials Research Institute, National Institute of Advanced Industrial
Science and Technology (AIST), Tsukuba, Ibaraki 305-8565, Japan Takeshi
Tanaka Hiromichi Kataura Ming Zheng National Institute of Standards and
Technology, Gaithersburg, MD 20899, USA Vasili Perebeinos Junichiro Kono
Department of Physics and Astronomy, Rice University, Houston, TX 77005, USA
Department of Materials Science and NanoEngineering, Rice University, Houston,
TX 77005, USA
###### Abstract
Significant understanding has been achieved over the last few decades
regarding chirality-dependent properties of single-wall carbon nanotubes
(SWCNTs), primarily through single-tube studies. However, macroscopic
manifestations of chirality dependence have been limited, especially in
electronic transport, despite the fact that such distinct behaviors are needed
for many applications of SWCNT-based devices. In addition, developing reliable
transport theory is challenging since a description of localization phenomena
in an assembly of nanoobjects requires precise knowledge of disorder on
multiple spatial scales, particularly if the ensemble is heterogeneous. Here,
we report an observation of pronounced chirality-dependent electronic
localization in temperature and magnetic field dependent conductivity
measurements on macroscopic films of single-chirality SWCNTs. The samples
included large-gap semiconducting (6,5) and (10,3) films, narrow-gap
semiconducting (7,4) and (8,5) films, and armchair metallic (6,6) films.
Experimental data and theoretical calculations revealed Mott variable-range-
hopping dominated transport in all samples, with localization lengths falling
into three distinct categories depending on the band gaps. Armchair films
have the largest localization length. Our detailed analyses of the electronic
transport properties of single-chirality SWCNT films provide significant new
insight into electronic transport in ensembles of nanoobjects, offering
foundations for designing and deploying macroscopic SWCNT solid-state devices.
###### keywords:
carbon nanotubes, single-chirality films, electronic transport
††journal: Carbon
## 1 Introduction
Since their discovery in the early 1990s, single-wall carbon nanotubes
(SWCNTs) have served as an ideal nanoscale laboratory for investigating
fundamental electronic, optical, magnetic, and thermal processes in one-
dimensional (1D) systems [1, 2]. Furthermore, macroscopic assemblies of
SWCNTs, especially if they are ordered, are expected to lead to a wide variety
of applications, such as lightweight electric wires for power transmission
with ultrahigh current carrying capacity [3, 4] and mechanical strength [5,
6], ultrabroadband optoelectronic devices with strongly anisotropic optical
constants [7, 8, 9, 10], thermal engineering via giant and anisotropic thermal
conductivity [11, 12, 13], and energy harvesting and conversion with
substantial thermoelectric power [14, 15, 16].
What determines most of the basic properties of a SWCNT is the chiral vector
(or roll-up vector), defined as
$\mathbf{C}_{h}=n\mathbf{a}_{1}+m\mathbf{a}_{2}$, where $\mathbf{a}_{1}$ and
$\mathbf{a}_{2}$ are the primitive lattice vectors of graphene [1]. Depending
on the pair of integers ($n,m$), called the chirality indices, the SWCNT is
metallic or semiconducting. Specifically, for $\nu\equiv(n-m)$ mod 3,
nanotubes with $\nu=\pm 1$ are semiconductors with large band gaps ($>$1 eV),
nanotubes with $\nu=0$ and $n\neq m$ are semiconductors with curvature-induced
small band gaps, and armchair nanotubes ($\nu=0$ and $n=m$) are “metallic”
with electron-electron interaction-induced small band gaps.
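As a quick numerical illustration of this classification (not from the paper; the helper name and the lattice constant $a=0.246$ nm, i.e. the 2.46 Å quoted later in Section 3, are our assumptions), the chirality indices determine both the diameter and the electronic category:

```python
import math

def classify_swcnt(n, m, a=0.246):
    """Diameter (nm) and electronic category of an (n, m) SWCNT.

    a = 0.246 nm is the graphene lattice constant (2.46 Angstrom)."""
    d = a * math.sqrt(n**2 + n * m + m**2) / math.pi  # d = a*sqrt(n^2+nm+m^2)/pi
    nu = (n - m) % 3
    if nu != 0:                       # nu = +/-1: large-gap semiconductor
        kind = "large-gap semiconductor"
    elif n == m:                      # armchair: interaction-induced small gap
        kind = "armchair metallic"
    else:                             # nu = 0, n != m: curvature-induced small gap
        kind = "narrow-gap"
    return d, kind

print(classify_swcnt(6, 5))  # large-gap semiconductor, d ~ 0.75 nm
print(classify_swcnt(6, 6))  # armchair metallic
print(classify_swcnt(7, 4))  # narrow-gap
```

The three branches correspond to the three band-structure categories studied in this work.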
Over the last two decades, considerable understanding has been achieved
regarding ($n,m$)-dependent properties of SWCNTs through pioneering single-
tube experiments [17, 18, 19]. However, macroscopic manifestations of their
promised extraordinary properties have been elusive because of defects,
unintentional doping, intertube interactions, random orientations, and most
significantly, mixed chiralities. When SWCNTs are synthesized in high-
temperature furnaces, many chiralities of SWCNTs are produced together,
including both semiconducting and metallic species, and as a consequence,
exotic 1D charge carrier transport effects, such as quantum interference [20],
have never been observed in macroscopic SWCNT ensembles. Furthermore,
chirality-dependent electronic transport studies have been challenging even in
single-tube experiments, because of contact resistance issues that prevent
studies of small-diameter SWCNTs ($<1$ nm) [17, 21, 22].
Recently, solution-based chirality separation techniques have been developed,
offering opportunities for addressing challenges of structural polydispersity
in SWCNT ensembles. Among these techniques are aqueous two-phase extraction
(ATPE) [23, 24, 25] and gel chromatography [26] – two methods widely used for
large-scale separation. Especially combined with DNA-assisted selectivity
[27], ultrahigh-purity and single-chirality SWCNT suspensions can be prepared
in substantial amounts. Furthermore, solution-based large-scale assembling
methods, such as vacuum filtration [28], can preserve the chirality purity and
produce wafer-scale uniform samples. Transport studies have been performed on
metal-enriched SWCNT films [29, 30], but multiple chiralities coexisting in
the films made clear and comprehensive understanding difficult. Therefore, it
is crucial to study transport in single-chirality SWCNT films, which has not
been reported before.
In the present work, we systematically performed temperature and magnetic
field dependent conductivity measurements of single-chirality SWCNT films,
including semiconducting (6,5) and (10,3) films, chiral-metallic (7,4) and
(8,5) films, and an armchair (6,6) film, prepared through solution-based
chirality separation combined with a large-scale vacuum filtration technique.
We found that Mott’s variable-range-hopping (VRH) conduction mechanism
dominates the transport behavior of both semiconducting and metallic SWCNT
films at low temperatures. However, the metallic SWCNT films displayed
significant deviations from the behavior expected from VRH at high
temperatures, especially the armchair SWCNT film. Moreover, we extracted the localization
lengths of carriers in these films, which strongly depended on the SWCNT band
structure type, leading to the following general conclusions: large-gap
semiconducting films with $\nu=\pm 1$ [e.g., (6,5) and (10,3)] have the
smallest localization lengths, armchair films with $n=m$ [e.g., (6,6)] have
the largest localization lengths, and narrow-gap semiconducting films with
$\nu=0$ and $n\neq m$ [e.g., (7,4) and (8,5)] have intermediate values. Note,
there is an order of magnitude difference in localization lengths for chiral-
metallic and armchair SWCNTs. Magnetoconductivity measurements on these films
further corroborated these conclusions by providing the orders of magnitude
for the localization lengths. Our theoretical calculations reproduced these
trends and quantitatively explained the main features of the experimental
data. Despite similar defect densities in these films, the armchair film
exhibited stronger resilience against localization, and its calculated
localization length deviated significantly from the experimentally extracted
value, indicating an additional transport mechanism distinct from that in the
other films. These measurements and analyses thus
provide significant new insight and guidance for future electronic and
optoelectronic devices based on macroscopic assemblies of SWCNTs.
## 2 Experimental section
Figure 1 summarizes the preparation methods and characteristics of the
chirality-sorted suspensions, films, and fabricated electronic devices that we
studied in this work. In preparing single-chirality SWCNT suspensions, we used
two solution-based separation techniques: ATPE and gel chromatography. Figure
1a is a photograph of the prepared suspensions of (6,5), (6,6), (7,4), (8,5),
and (10,3) SWCNTs. (6,5), (6,6), and (7,4) suspensions were prepared using the
standard ATPE method.[23, 24, 25] The (8,5) suspension was sorted and
separated from an as-grown SWCNT powder following similar procedures, but
instead of dispersing SWCNTs with multiple surfactants, the dispersant used
was DNA [31, 32, 27, 33]. Finally, the (10,3) suspension was prepared using
column gel chromatography [26]. For the (6,5), (7,4), and (8,5) samples, the
raw material was CoMoCAT SG65i as-grown SWCNT powder (MilliporeSigma); for the
(6,6) sample, it was CoMoCAT CG200 as-grown SWCNT powder (MilliporeSigma); and
for the (10,3) sample, it was HiPco SWCNT material (R1831, NanoIntegris). All
sorted suspensions were inspected using absorption spectroscopy to evaluate
their purity. Figure 1b shows the obtained spectra, which exhibit the signature
excitonic peaks. Except for a small amount of residual (6,5) nanotubes in the
(7,4) suspension, all suspensions demonstrated high purities ($>90$%). See
Supporting Information Section 1 for suspension preparation details.
Figure 1: Single-chirality SWCNT suspensions, films, and electronic devices.
(a) Sorted single-chirality SWCNT suspensions of (6,5), (6,6), (7,4), (8,5),
and (10,3). (b) Optical absorption spectra for the sorted SWCNT suspensions.
(c) Large-scale uniform films of single-chirality SWCNTs obtained by using
vacuum filtration. (d) Fabricated devices for electronic transport
measurements using standard micro/nanofabrication processes.
We produced large-area films of single-chirality SWCNTs using vacuum
filtration. Figure 1c shows the filtration system, together with photographs
of the produced (6,5), (6,6), and (7,4) films. Due to the self-limiting nature
of the vacuum filtration process [34, 28], the film thickness was uniform and
carefully controlled. Specifically, we tried to have the same film thickness
used in this study, which was $\sim 10\,$nm. Depending on the conditions used
during the filtration process and colloidal properties of SWCNT suspensions,
the controlled vacuum filtration method can produce aligned or randomly
oriented SCWNT films. The films used for the current transport study were
randomly oriented; see Supporting Information Section 2, Fig. S2, and Fig. S3
for more information on film fabrication. The obtained films can be transferred
onto nearly arbitrary substrates and are compatible with micro/nanofabrication
processes. As shown in Fig. 1d, the films were first transferred onto silicon
oxide/silicon substrates and patterned into device geometries using
photolithography. We then annealed the fabricated devices in an Ar atmosphere
at 350 °C for 30 min in order to minimize any effects from residues [28]; see
Supporting Information Section 2 and Fig. S2 for additional scanning electron
microscopy and X-ray spectroscopy characterizations on obtained films. The
sample was mounted in a cryostat, and the temperature was varied from $2$ K to
$260$ K. Four-point measurements were conducted to exclude the influence from
contacts, as shown in Fig. 1d. The current was supplied through the two
outermost electrodes, and the voltage was measured across a region of
$5\,\mu$m in width and $8\,\mu$m in length. The magnetic field was applied
perpendicular to the
device plane. At each temperature and magnetic field, a current-voltage sweep
was performed, and the conductivity value was extracted in the linear region.
## 3 Results and discussion
Figure 2 shows the conductivity ($\sigma$) versus temperature ($T$) for all
five films. They all show a general trend of decreasing conductivity with
decreasing temperature. Our analysis suggests that electron-phonon and
electron-electron interactions for either intratube transport or intertube
transport cannot alone account for such strongly temperature-dependent
conductivity, but can have a significant contribution for specific types of
SWCNTs under certain conditions. As a result, all temperature-dependent
conductivity data were fit with the 1D Mott VRH model,
$\sigma(T)=\sigma_{0}\textrm{exp}\left[-(\frac{T_{0}}{T})^{1/2}\right]$, where
both $\sigma_{0}$ and $T_{0}$ are constants to be determined via fitting. Note
that the Efros-Shklovskii (ES) VRH model has the same $T^{-1/2}$ exponential
form, and the SWCNT density in the films determines which model is more
applicable. Specifically, the factor
$n_{\textrm{CNT}}L_{\textrm{CNT}}^{3}$, where $n_{\textrm{CNT}}$ is the volume
density of nanotubes in the film and $L_{\textrm{CNT}}$ is the length of
SWCNTs, is the decisive parameter as to which model is applicable, i.e., the
Mott VRH model ($n_{\textrm{CNT}}L_{\textrm{CNT}}^{3}>1$) or the ES VRH model
[35] ($n_{\textrm{CNT}}L_{\textrm{CNT}}^{3}<1$). Because of the high packing
density and random SWCNT orientations in the produced films (see Supporting
Information Fig. S2), $n_{\textrm{CNT}}$ can be estimated to be $\approx
2/(dL^{2}_{\textrm{CNT}})$, where $d$ is the SWCNT diameter, and thus,
$n_{\textrm{CNT}}L_{\textrm{CNT}}^{3}\approx 2L_{\textrm{CNT}}/d$. The aspect
ratio of SWCNTs in our produced films is around 200–300, which suggests that
$n_{\textrm{CNT}}L_{\textrm{CNT}}^{3}\gg 1$, i.e., the Mott VRH model is more
appropriate for our system [35]; see Supporting Information Fig. S4 for atomic
force microscopy characterization of SWCNT length.
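The 1D Mott VRH fit used above can be sketched as follows: taking logarithms linearizes the model, so $T_{0}$ is recovered from the slope of $\ln\sigma$ versus $T^{-1/2}$. The data below are synthetic, chosen only to illustrate the procedure (none of the measured values are used):

```python
import numpy as np

# Synthetic conductivity following 1D Mott VRH: sigma = sigma0 * exp(-(T0/T)**0.5).
T0_true, sigma0_true = 2.0e3, 1.0          # illustrative values (K, arb. units)
T = np.linspace(10.0, 260.0, 50)           # temperature grid (K)
sigma = sigma0_true * np.exp(-np.sqrt(T0_true / T))

# Linearized form: ln(sigma) = ln(sigma0) - sqrt(T0) * T**(-1/2).
slope, intercept = np.polyfit(T**-0.5, np.log(sigma), 1)
T0_fit, sigma0_fit = slope**2, np.exp(intercept)
print(f"T0 = {T0_fit:.0f} K, sigma0 = {sigma0_fit:.2f}")  # recovers 2000 K, 1.00
```

On real, noisy data the same linearized fit yields the Exp. $T_{0}$ values reported in Table 1.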
Figure 2: Temperature-dependent electrical conductivity of the five single-chirality SWCNT films. Solid markers of different styles are experimental data, and dashed lines are corresponding fitting curves based on the 1D Mott variable range hopping model.

SWCNTs | (6,5) | (10,3) | (8,5) | (7,4) | (6,6)
---|---|---|---|---|---
Exp. $T_{0}$ (K) | $2.0\times 10^{3}$ | $1.8\times 10^{3}$ | $3.7\times 10^{2}$ | $8.8\times 10^{2}$ | $5.1\times 10^{1}$
$E_{\textrm{g}}$ (eV) | 1.27 | 0.99 | 0.446 | 0.379 | 0.193
Cal. $T_{0}$ (K) | $2.3\times 10^{3}$ | $1.4\times 10^{3}$ | $5.8\times 10^{2}$ | $8.0\times 10^{2}$ | $3.6\times 10^{2}$
Cal. $\xi$ (nm) | 0.21 | 0.44 | 2.7 | 1.7 | 8.0

Table 1: Characteristic temperature and localization length of the five
single-chirality SWCNT films determined by experiments and calculations. Note
that electron-correlation and curvature-induced gaps are added to the nominally
metallic SWCNTs in the last three columns.
The obtained values of $T_{0}$ for the five films fall into three distinct
categories (see the Exp. $T_{0}$ row of Table 1). The large-gap semiconducting
SWCNT films [(6,5) and (10,3)] have $T_{0}$ on the order of $10^{3}$, the
narrow-gap metallic SWCNT films [(8,5) and (7,4)] have $T_{0}$ on the order of
$10^{2}$, and the armchair SWCNT (6,6) film has $T_{0}$ on the order of
$10^{1}$. The localization length $\xi$, which corresponds to the localization
radius of SWCNT states within single CNTs, is inversely proportional to
$T_{0}$ through $T_{0}=\frac{\beta}{\xi\cdot\textrm{DOS}(\varepsilon)}$, where
$\textrm{DOS}(\varepsilon)$ is the density of states at energy $\varepsilon$
and $\beta$ is a constant, which we take to be $\beta=1$ here. Because the five
films were prepared using the same procedures and are thus expected to have
similar defect densities, the existence of three distinct categories of
$T_{0}$ and $\xi$ values suggests that the localization mechanism is strongly
band-structure dependent. Note that variations of quality in the raw SWCNT
materials did not contribute significantly to the defect densities of the
studied films, as suggested by the fact that the Raman intensity ratio of the
G peak to the D peak ($I_{\textrm{G}}/I_{\textrm{D}}$) was much larger in the
raw materials ($I_{\textrm{G}}/I_{\textrm{D}}\sim 20$) than in the prepared
films ($I_{\textrm{G}}/I_{\textrm{D}}\sim 3.5$)
[36]; see Supporting Information Fig. S5. Also, a comparison of these three
categories shows that the (6,6) film displays the longest localization length.
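The inverse relation between $T_{0}$ and $\xi$ can be sketched numerically. The density-of-states value below is an arbitrary placeholder (the paper does not quote one), so only the relative trend, not the absolute lengths, is meaningful; $k_{\mathrm{B}}$ converts $T_{0}$ to an energy:

```python
# Sketch of k_B * T0 = beta / (xi * DOS), with beta = 1 as in the text.
# DOS here is an assumed placeholder value; only the trend xi ~ 1/T0 matters.
k_B = 8.617e-5          # Boltzmann constant (eV/K)
beta = 1.0
DOS = 1.0               # states per (eV * nm) -- illustrative, not from the paper

def xi_from_T0(T0):
    """Localization length (nm) implied by the characteristic VRH temperature."""
    return beta / (k_B * T0 * DOS)

# Larger T0 (large-gap films) -> shorter xi; smaller T0 (armchair) -> longer xi.
print(f"{xi_from_T0(2.0e3):.2f} nm  (T0 like the (6,5) film)")
print(f"{xi_from_T0(5.1e1):.1f} nm  (T0 like the (6,6) film)")
```

With the experimental $T_{0}$ values of Table 1, this relation immediately orders the three categories by localization length.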
Figure 3: Calculated localization lengths and inverse participation ratio in
single-chirality SWCNT films. (a) Localization length computed using Eq. (6)
for the five films. The solid vertical line represents the assumed doping
level for all samples. (b) IPR computed using Eq. (7) for a (6,5) SWCNT with a
supercell length of 200 nm. The large IPR values near the band edge, marked by
the dashed vertical line, indicate carrier localization.
To better understand the relation between the band structure and localization
mechanism, Fig. 3a summarizes our calculated localization lengths as a
function of doping density for five single-chirality SWCNT samples. The
localization length $\xi$ in the presence of disorder can be estimated as
follows:[37]
$\xi(\varepsilon)=|v(\varepsilon)|\,\tau(\varepsilon),$ (1)
where $\varepsilon$ is the energy and $\tau$ is the scattering time due to the
disorder, which can be evaluated using Fermi’s golden rule
$\frac{1}{\tau(\varepsilon)}=\frac{2\pi}{\hbar}\langle|\langle\Psi_{\varepsilon}|{\rm H}|\Psi_{\varepsilon}\rangle|^{2}\rangle_{\rm dis}\,{\rm DOS}(\varepsilon),$ (2)
where ${\rm DOS}(\varepsilon)=2L/(\pi\hbar v(\varepsilon))$ is the density of
states of a SWCNT of length $L$ for a single spin channel. We assume that
disorder scattering does not mix spins but does mix valleys, as is evident
from the D-band Raman signal (not shown).
In the Anderson model with a random on-site potential disorder in the range
$[-W/2,W/2]$, the averaged square of the matrix element is given by [38]
$\langle|\langle\Psi_{\varepsilon}|{\rm H}|\Psi_{\varepsilon}\rangle|^{2}\rangle_{\rm dis}=\frac{W^{2}}{12N},$ (3)
where $N=L\pi d/A_{\mathrm{c}}$ is the number of carbon atoms in a SWCNT of
diameter $d=a\sqrt{n^{2}+nm+m^{2}}/\pi$ and $A_{\mathrm{c}}=a^{2}\sqrt{3}/4$
is the area per carbon atom in graphene with a unit cell length $a=2.46$ Å.
Putting the results of Eqs. (1)–(3) together, we obtain
$\xi(\varepsilon)=3\pi d\,\frac{\hbar^{2}v(\varepsilon)^{2}}{W^{2}A_{\mathrm{c}}}.$ (4)
A hyperbolic dispersion velocity is given by
$v(\varepsilon)=v_{\mathrm{F}}\sqrt{\varepsilon^{2}-E_{\mathrm{g}}^{2}/4}/\varepsilon$,
where $E_{\mathrm{g}}$ is the SWCNT band gap and $v_{\mathrm{F}}=10^{8}$ cm/s
is the Fermi velocity in graphene. We assume that all SWCNTs have the same
level of disorder $W$ since the fabrication and processing conditions were
very similar. Thus, effects due to defects and disorder in the different
samples are believed to be similar. In addition, since the samples were
annealed before the transport measurements, unintentional doping from adsorbed
molecules in the environment, such as water and oxygen, can reasonably be
assumed to be the same. As a result, we also assume that the doping level $n$
is the same for all SWCNTs, which relates to the Fermi energy
$\varepsilon_{\mathrm{F}}$ at zero temperature according to
$\varepsilon_{\mathrm{F}}=\sqrt{(E_{\mathrm{g}}/2)^{2}+(\pi\hbar v_{\mathrm{F}}n/4)^{2}}.$ (5)
Therefore, to compare the values of $\xi$ in different SWCNTs, we use the
following relation
$\xi(n)=3\pi d\,\frac{\hbar^{2}v_{\mathrm{F}}^{2}}{W^{2}A_{\mathrm{c}}}\,\frac{\left(\pi n\hbar v_{\mathrm{F}}/(2E_{\mathrm{g}})\right)^{2}}{1+\left(\pi n\hbar v_{\mathrm{F}}/(2E_{\mathrm{g}})\right)^{2}}.$ (6)
Simulations of $\xi$ according to Eq. (6) for $W=1.9$ eV are shown in Fig. 3a,
for the chosen values of the band gap given in Table 1. For the semiconducting
SWCNTs, we use conventional values [39]. For the metallic armchair (6,6)
SWCNTs, we used a correlation band gap value suggested in Ref. [21]. For the
chiral metallic SWCNTs, we used a combination of the curvature induced gap
[22] and the electron correlation induced gap. For a typical carrier density
of $n=0.1$ e/nm, shown by the vertical line in Fig. 3a, we can find a good
match between the simulated values of $T_{0}$ and the experimentally obtained
values, except for the (6,6) film. In the latter case, the calculated $T_{0}$
is a factor of 7 larger than the experimental value. This indicates that the
origin of the temperature dependence is different in the (6,6) sample, and
that armchair SWCNTs are less affected by defects and thus only weakly
localized.
Specifically, we found that intertube contributions to the total conductivity
become more significant in armchair and metallic SWCNTs. A detailed discussion
is beyond the scope of this paper and will be reported elsewhere. Indeed,
under our assumption for the same values of disorder strength and carrier
density in different SWCNTs, we find that the localization length in (6,6)
SWCNTs is 8 nm. Therefore, the condition for the applicability of the Mott VRH
mechanism $L_{\textrm{CNT}}\gg\xi$ is weakened in the (6,6) film as compared
to the other SWCNT samples, where $L_{\textrm{CNT}}\sim 200$ nm. In a prior
individual multiwall CNT transport experiment, a localization length of 4.5 nm
was reported [40], which is close to the values we obtained for the (6,6) and
(8,5) CNT films.
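Eq. (6) can be evaluated directly. The sketch below uses $W=1.9$ eV, $n=0.1$ e/nm, and band gaps from Table 1, as stated in the text; it is shown for the large-gap and armchair cases, where the tabulated $E_{\mathrm{g}}$ alone fixes the result ($\hbar v_{\mathrm{F}}\approx 0.658$ eV nm for $v_{\mathrm{F}}=10^{8}$ cm/s):

```python
import math

hbar_vF = 0.6582                      # hbar * v_F in eV*nm (v_F = 1e8 cm/s)
a = 0.246                             # graphene lattice constant (nm)
A_c = a**2 * math.sqrt(3) / 4         # area per carbon atom (nm^2)
W = 1.9                               # disorder strength (eV)
n_dope = 0.1                          # doping level (e/nm)

def xi(n, m, Eg):
    """Localization length (nm) from Eq. (6) for an (n, m) tube with gap Eg (eV)."""
    d = a * math.sqrt(n**2 + n * m + m**2) / math.pi  # tube diameter (nm)
    x = math.pi * n_dope * hbar_vF / (2.0 * Eg)
    return 3.0 * math.pi * d * hbar_vF**2 / (W**2 * A_c) * x**2 / (1.0 + x**2)

for (n, m), Eg in [((6, 5), 1.27), ((10, 3), 0.99), ((6, 6), 0.193)]:
    print(f"({n},{m}): xi = {xi(n, m, Eg):.2f} nm")
# Close to the Cal. xi row of Table 1: 0.21, 0.44, and 8.0 nm respectively.
```

This makes the order-of-magnitude gap between the large-gap and armchair categories explicit.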
Furthermore, the degree of localization can be expressed by the inverse
participation ratio (IPR), defined as
$\textrm{IPR}(E_{\mathbf{k}})=N\,\frac{\sum_{i=1}^{N}|A_{i,\mathbf{k}}|^{4}}{\left(\sum_{i=1}^{N}|A_{i,\mathbf{k}}|^{2}\right)^{2}},$ (7)
where $N$ is the number of orbitals in the unit cell and $A_{i,\mathbf{k}}$ is
the $i$-th component of the eigenstate with wavevector $\mathbf{k}$. A fully
delocalized state corresponds to $\textrm{IPR}=1$, while a state fully
localized on a single atom corresponds to $\textrm{IPR}=N$. Fig. 3b depicts
the averaged IPR in a (6,5) SWCNT for $W=1.9$ eV. The lower-energy states near
the band edge are localized, especially those falling into the band gap. At
higher energies, the states tend to be less localized.
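Eq. (7) is straightforward to evaluate for any eigenvector, and the two limiting cases ($\textrm{IPR}=1$ and $\textrm{IPR}=N$) serve as a sanity check. The arrays below are illustrative, not from a tight-binding calculation:

```python
import numpy as np

def ipr(A):
    """Inverse participation ratio (Eq. 7) for eigenvector amplitudes A."""
    A = np.asarray(A, dtype=complex)
    N = A.size
    return N * np.sum(np.abs(A)**4) / np.sum(np.abs(A)**2)**2

N = 100
delocalized = np.ones(N) / np.sqrt(N)         # equal weight on every orbital
localized = np.zeros(N); localized[0] = 1.0   # all weight on one atom

print(ipr(delocalized))  # fully delocalized: IPR ~ 1
print(ipr(localized))    # fully localized on one atom: IPR = N = 100
```

In practice one averages this quantity over disorder realizations, as done for Fig. 3b.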
Figure 4: Magnetoconductivity of the single-chirality SWCNT films. Solid dots
are experimental data for four films. The (6,5) film had a large resistance
value that was out of our measurement range. The green dashed line is the
fitting curve for the (10,3) film, while the lines for the other films simply
connect the experimental data points.
Finally, we performed magnetoconductivity (MC) measurements on the films at
$T=20$ K, and three distinct kinds of behavior were again clearly observed;
see Fig. 4. Specifically, the (10,3) film displayed strong negative MC while
the (6,6) film showed positive MC, up to a magnetic field ($B$) of 10 T. The
(7,4) and (8,5) films instead demonstrated a combination of the two trends. In
the (10,3) film, the strong-localization picture includes two negative MC
contributions, created by wave-function shrinkage and spin-dependent hopping,
both of which scale as $B^{2}$. We fit the experimental data (green
dots in Fig. 4) with the expression
ln$[\rho(B)/\rho(0)]=\frac{2}{3}\frac{B^{2}}{B_{0}^{2}}$, where
$B_{0}=\frac{\sqrt{2}\phi_{0}}{\pi\xi^{2}\sqrt{\frac{T_{0}}{T}}}$,
$\phi_{0}=\frac{h}{e}$ is the flux quantum, and $\rho(B)$ is the resistivity in
magnetic field $B$. The extracted $B_{0}$ is $\sim 23$ T and $\xi$ for the
(10,3) film is $\sim 1.4$ nm. This value qualitatively agrees with the value
obtained from theoretical calculations based on temperature-dependent
conductivity measurements. Moreover, the (6,6) film displayed a signature of
weak localization and positive MC, suggesting the largest localization length.
Finally, the behaviors of the (7,4) and (8,5) films stayed in the middle with
medium localization strengths. The orders of magnitude and trends of the
localization strengths obtained from MC measurements are consistent with
results from temperature-dependent conductivity.
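Inverting the wave-function-shrinkage expression for $\xi$ gives an order-of-magnitude check (a sketch only: $\phi_{0}=h/e\approx 4.14\times 10^{3}$ T nm², and with $B_{0}\sim 23$ T, the experimental $T_{0}$ for (10,3), and $T=20$ K this lands at the nanometre scale quoted in the text, though the exact prefactor depends on conventions):

```python
import math

phi0 = 4.1357e3   # flux quantum h/e in T*nm^2

def xi_from_B0(B0, T0, T):
    """Invert B0 = sqrt(2)*phi0 / (pi * xi^2 * sqrt(T0/T)) for xi (nm)."""
    return math.sqrt(math.sqrt(2.0) * phi0 / (math.pi * B0 * math.sqrt(T0 / T)))

# (10,3)-like inputs: B0 ~ 23 T from the fit, Exp. T0 = 1.8e3 K, T = 20 K.
print(f"xi ~ {xi_from_B0(23.0, 1.8e3, 20.0):.1f} nm")  # nanometre scale
```

Since $\xi\propto B_{0}^{-1/2}$, the much weaker (positive) MC of the (6,6) film is consistent with its much longer localization length.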
## 4 Conclusions
In summary, we measured temperature- and magnetic field-dependent conductivity
of single-chirality SWCNT films with known chiralities ($n$,$m$) of three
categories: (A) $\nu=(n-m)$ mod 3 $=\pm 1$, (B) $\nu=0$ and $n\neq m$, and (C)
$\nu=0$ and $n=m$ (armchair). Despite similar defect densities in the films,
the obtained localization length exhibited distinctly ($n$,$m$)-dependent
behaviors: Films in category (A) have the smallest localization length, films
in category (B) have medium localization lengths, and films in category (C)
have the largest localization length. Our theoretical calculations based on
Mott VRH explained all observations, except for the armchair SWCNT film
(category C). Specifically, the VRH formula breaks down for armchair SWCNTs
and an additional mechanism might be responsible for the observed temperature
dependence. The largest localization lengths in macroscopic samples of
armchair SWCNTs make them promising for future electronic and optoelectronic
applications.
Acknowledgement – J. K. acknowledges the support from the Department of Energy
Basic Energy Sciences through grant no. DEFG02-06ER46308 (optical spectroscopy
experiments), the Robert A. Welch Foundation through grant no. C-1509 (sample
preparation), and the support from the JST CREST program, Japan, through Grant
Number JPMJCR17I5. W. G. thanks the support from the University of Utah start-
up fund. V. P. acknowledges support from the Vice President for Research and
Economic Development (VPRED) and the Center for Computational Research at the
University at Buffalo (http://hdl.handle.net/10477/79221).
## References
* Dresselhaus et al. [2001] Dresselhaus MS, Dresselhaus G, Avouris P, editors. Carbon Nanotubes: Synthesis, Structure, Properties, and Applications. No. 18 in Topics in Applied Physics; Berlin: Springer; 2001.
* Jorio et al. [2008] Jorio A, Dresselhaus G, Dresselhaus MS, editors. Carbon Nanotubes: Advanced Topics in the Synthesis, Structure, Properties and Applications. Berlin: Springer; 2008.
* Yao et al. [2000] Yao Z, Kane CL, Dekker C. High-field electrical transport in single-wall carbon nanotubes. Phys Rev Lett 2000;84:2941–4.
* Wang et al. [2014] Wang X, Behabtu N, Young CC, Tsentalovich DE, Pasquali M, Kono J. High-ampacity power cables of tightly-packed and aligned carbon nanotubes. Adv Func Mater 2014;24:3241–9.
# Central elastic scattering
S.M. Troshin<EMAIL_ADDRESS>, N.E. Tyurin
NRC “Kurchatov Institute”–IHEP
Protvino, 142281, Russian Federation,
Preprint: arXiv:2101.07504 [hep-ph].
We comment on the selection of the phase of the scattering amplitude,
emphasizing that the elastic overlap function should have a central impact
parameter profile at high energies and highlighting the role of the reflective
scattering mode at LHC energies. Problems arising from the use of a peripheral
impact parameter dependence of the elastic overlap function are indicated
explicitly. Their solution is the elimination of the phases associated with a
peripheral form of the elastic overlap function. In contrast, we adhere to a
relatively peripheral form of the inelastic overlap function, with the
additional new feature of a maximum at a nonzero value of the impact parameter
at the highest energies. Phenomenologically, the dynamics of hadron scattering
is motivated by a hadron structure with a hard central core.
Keywords: scattering amplitude, phase, unitarity, analyticity. Preprint:
arXiv:2101.07504.
## Introduction. Role of the scattering amplitude phase
The phase of the elastic scattering amplitude plays an important role in the
physical interpretation of hadron scattering; it is just as important as the
modulus of the amplitude. Unfortunately, the phase is an essentially unknown
quantity, and the experimental data on the differential cross–section of
elastic scattering allow one to reconstruct the amplitude $F(s,t)$ together
with its phase at $-t=0$ only with the use of the Coulomb–Nuclear Interference
(CNI) contribution at very small values of $-t$. Moreover, certain model
assumptions are needed even in this reconstruction. In the simplest case of an
assumed constant (i.e. $t$–independent) phase, the relation between the ratio
$\rho$ of the real to the imaginary part of the forward scattering amplitude
and the phase $\arg F(s,t)$ is the following:
$\arg F(s,t)=\frac{\pi}{2}-\arctan\rho.$
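As a quick numerical illustration of this relation (the value $\rho = 0.10$ below is purely illustrative and not a measurement quoted in this note):

```python
import math

# Illustration of the constant-phase relation arg F = pi/2 - arctan(rho).
# rho = 0.10 is an assumed, illustrative value of the real-to-imaginary
# parts ratio, not a measurement quoted in the text.
rho = 0.10
phase = math.pi / 2 - math.atan(rho)
print(f"arg F = {phase:.4f} rad")  # ≈ 1.4711 rad, slightly below pi/2 ≈ 1.5708
```

A small positive $\rho$ thus shifts the phase only slightly below $\pi/2$, which is why the phase is so hard to pin down from forward data alone.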
However, the $t$–dependence of the phase can be nontrivial, and therefore
several different parameterizations of the $t$–dependence of the nuclear
amplitude phase have been used in [1]. This significant uncertainty admits a
wide range of assumptions on the phase behavior, and it would be useful to
find arguments limiting the phase selection.
The role of the phase in hadron scattering is under active theoretical
discussion nowadays, cf. e.g. [2, 3, 4, 5, 6, 7], in particular due to its
relation to the recent measurements of the real-to-imaginary part ratio
$\rho$.
The elastic and inelastic overlap functions $h_{l,el}$ and $h_{l,inel}$
introduced by Van Hove [8] are related by unitarity and are strongly
interdependent with the phase. The important role of the phase has been
clearly demonstrated in the analysis of the LHC experimental data obtained by
the TOTEM Collaboration [1].
To this end, it is instructive to address the elastic and inelastic overlap
functions in order to limit the choice of phases suitable for further
considerations and phenomenological analysis. This is the aim of the present
note.
Thus, we discuss the exclusion of certain phase $t$-dependencies on the
grounds of the unitarity and analyticity of the scattering amplitude. A
peripheral distribution of elastic scattering has been suggested by an
analogous tunneling picture of strong interactions based on the hypothesis of
“maximal importance” of particle production [9]. The cases subject to
elimination correspond to a peripheral form of the elastic overlap function
[10]. Although the explicit form of the phase dependence remaining after the
elimination of a peripheral distribution of elastic scattering is unknown,
leaving us with the results of phenomenological models only, this elimination
allows one to reduce the arbitrariness in the phase selection in data analysis
[1].
The reasons for eliminating the peripheral option for $h_{l,el}$ at the phase
preselection stage are discussed in Section 1. Section 2 is devoted to the
central profile of the elastic overlap function in the impact parameter
representation and its correlation with the reflective scattering mode. The
results follow from the combined use of unitarity and analyticity in the
region of large impact parameters.
## 1 Problems with a peripheral form of the elastic overlap function
The partial elastic scattering matrix element can always be represented as a
complex function:
$S_{l}(s)=\kappa_{l}(s)\exp[2i\delta_{l}(s)]$ (1)
with two real functions $\kappa_{l}$, $\delta_{l}$. Here $\kappa_{l}$ can vary
in the interval $0\leq\kappa_{l}\leq 1$ and is known as the absorption factor.
The value $\kappa_{l}=0$ corresponds to complete absorption of the
corresponding initial state.
The function $h_{l,el}(s)\equiv|f_{l}(s)|^{2}$ is the contribution of the
elastic intermediate states, while $h_{l,inel}(s)$ is the contribution of the
inelastic intermediate states to the unitarity relation:
$\mbox{Im}f_{l}(s)=h_{l,el}(s)+h_{l,inel}(s).$ (2)
The latter can be expressed through the function $\kappa_{l}(s)$ by the
relation
$\kappa_{l}^{2}(s)=1-4h_{l,inel}(s).$ (3)
The normalization is such that $S_{l}=1+2if_{l}$. In Eq. (2) $f_{l}(s)$ is the
partial scattering amplitude, $h_{l,el}(s)$ and $h_{l,inel}(s)$ correspond to
the elastic and inelastic overlap functions $h_{el}(s,b)$ and $h_{inel}(s,b)$
which can be related to probability distributions of the elastic and inelastic
interactions over the impact parameter $b=2l/\sqrt{s}$ (cf. [11]).
Evidently, the unitarity relation implies that the limiting behavior
$\mbox{Im}f_{l}\to 1$ leads to a vanishing real part of the scattering
amplitude, i.e. $\mbox{Re}f_{l}\to 0$, cf. [12].
Eq. (2) can be rewritten in the form
$\mbox{Im}f_{l}(s)[1-\mbox{Im}f_{l}(s)]=[\mbox{Re}f_{l}(s)]^{2}+h_{l,inel}(s).$
(4)
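For clarity, the step from Eq. (2) to Eq. (4) is simply the substitution $h_{l,el}=|f_{l}|^{2}=[\mbox{Re}f_{l}]^{2}+[\mbox{Im}f_{l}]^{2}$ followed by moving $[\mbox{Im}f_{l}]^{2}$ to the left-hand side:

```latex
\mbox{Im}f_{l}(s) = [\mbox{Re}f_{l}(s)]^{2} + [\mbox{Im}f_{l}(s)]^{2} + h_{l,inel}(s)
\quad\Longrightarrow\quad
\mbox{Im}f_{l}(s)\,[1-\mbox{Im}f_{l}(s)] = [\mbox{Re}f_{l}(s)]^{2} + h_{l,inel}(s).
```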
We consider the region of $l\gg 1$, $s\gg 4m^{2}_{\pi}$ and large
$2l/\sqrt{s}$, i.e. the region of peripheral interactions at high energies.
Eq. (4) can be simplified in this region since $\mbox{Im}f_{l}(s)\ll 1$. The
smallness of $\mbox{Im}f_{l}(s)$ follows from axiomatic field theory [13]
(the short–range nature of strong interactions), and the unitarity relation
can be approximated by the following one:
$\mbox{Im}f_{l}(s)\simeq[\mbox{Re}f_{l}(s)]^{2}+h_{l,inel}(s).$ (5)
An evident upper bound for the real part squared
$[\mbox{Re}f_{l}(s)]^{2}\leq\mbox{Im}f_{l}(s)$ (6)
implies that any phenomenological model with a nonvanishing real part of the
amplitude should test its amplitude against this inequality along with the
other constraints.
It will be shown below that unitarity combined with Mandelstam analyticity
allows one to obtain a more stringent constraint on the profile of the elastic
overlap function and, as a consequence, on the scattering amplitude phase.
Namely, it is known (cf. [14]) that the amplitude $f_{l}(s)$ (its real and
imaginary parts) decreases exponentially with $l$ at large values of $l$ and
$s$ according to the Froissart–Gribov formula [15, 16]:
$f_{l}(s)\simeq\omega(s)\exp(-\mu\frac{2l}{\sqrt{s}}),$ (7)
where $\omega(s)$ is a complex function of energy and $\mu$ is determined by
the position of the lowest singularity in the $t$–channel. Thus, the exponent
is the same for the real as well as for the imaginary parts of the amplitude
$f_{l}$ and is determined by the mass of two pions. Eq. (7) originates from
the Mandelstam representation and is also consistent with the analyticity of
the amplitude $F(s,t)$ in the Lehmann–Martin ellipse [13].
We address the problems related to a peripheral option for the elastic
scattering in what follows. A peripherally dominant role of $h_{l,el}(s)$ has
been discussed in [17, 18, 19, 10]; it could occur due to a nontrivial
contribution of the real part at large values of $l$, i.e. through a suitable
choice of the scattering amplitude phase.
If the elastic scattering is assumed to be dominant in the peripheral region
and the contribution of the inelastic states can be neglected in Eq. (5), the
following approximate relation should hold at large values of $l$, $s$ and
$2l/\sqrt{s}$:
$\mbox{Im}f_{l}(s)\simeq[\mbox{Re}f_{l}(s)]^{2}.$ (8)
Both Eq. (8) and the elastic unitarity relation
$\mbox{Im}f_{l}(s)=h_{l,el}(s)$ (9)
are in conflict with Eq. (7) since the latter points to the same exponent of
the decrease with $l$ for the real and imaginary parts of the scattering
amplitude. Therefore, peripheral domination of the elastic scattering should
be discarded with the corresponding elimination of the respective phases.
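A small numerical sketch of this inconsistency (the scale $\mu$ below is an assumed illustrative value, of the order of twice the pion mass, not a fitted parameter):

```python
import math

# If both Re f and Im f fall off as exp(-mu * b) with the same exponent
# (Froissart–Gribov behaviour, Eq. (7)), then [Re f]^2 falls as exp(-2*mu*b),
# so the relation Im f ≈ [Re f]^2 (Eq. (8)) cannot hold at large b:
# the ratio Im f / [Re f]^2 grows without bound.
# mu = 1.4 fm^-1 is an assumed illustrative scale (roughly 2 m_pi).
mu = 1.4  # fm^-1 (assumption)
ratios = []
for b in (2.0, 4.0, 6.0):  # impact parameter in fm
    im_f = math.exp(-mu * b)
    re_f_sq = math.exp(-2 * mu * b)
    ratios.append(im_f / re_f_sq)  # equals exp(mu * b)
print(ratios)  # grows without bound as b increases
```

The ratio diverges exponentially with $b$, so the two sides of Eq. (8) cannot share the same large-$l$ tail.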
The alternative option anticipates central elastic scattering. Then Eq. (5) can
be approximated in the following form
$\mbox{Im}f_{l}(s)\simeq h_{l,inel}(s),$ (10)
confirming a shadow nature of the elastic scattering at large values of $l$.
The use of phases leading to peripheral dominance of the elastic scattering
[1] looks artificial, since the amplitude has a shadow nature at large impact
parameters (cf. e.g. [20, 21]).
Moreover, we can infer the following behaviour of $h_{l,inel}(s)$:
$h_{l,inel}(s)\simeq\mbox{Im}\,\omega(s)\exp(-\mu\frac{2l}{\sqrt{s}})$ (11)
in the region of large $l$ and $s$ variation. In addition, the relations
$[\mbox{Re}f_{l}(s)]^{2}\ll h_{l,inel}(s)$ (12)
and, consequently,
$[\mbox{Re}f_{l}(s)]^{2}\ll\mbox{Im}f_{l}(s)$ (13)
hold for large values of $l$, $s$ and $2l/\sqrt{s}$.
These relations reflect the shadow nature of the elastic scattering, cf. Eq.
(10), i.e. they correspond to a central profile of the elastic scattering in
the impact parameter representation. The large–$l$ tail of $\mbox{Im}f_{l}(s)$
at high energies is due to the inelastic collisions.
One should conclude that the assumption of approximate elastic unitarity in
this region of $l$ and $s$ contradicts the analytic properties of the
scattering amplitude. Indeed, this is not surprising, both from general
principles and from the viewpoint of the semiclassical picture of hadron
scattering.
## 2 Central elastic scattering
We proceed with a discussion of the physical picture corresponding to central
elastic scattering. It is common practice to analyze hadron
scattering in the impact parameter representation (cf. [22]) which is a
convenient way to use a semiclassical picture of hadron collisions at high
energies [11, 23, 24]. Indeed, the impact parameter $b$ is a conserved
quantity at high energies and the scattering amplitude is determined by the
Fourier–Bessel transformation of the amplitude $F(s,t)$. The elastic and
inelastic overlap functions can be interpreted as the differential
contributions to the elastic and inelastic cross–sections respectively over
the impact parameter $b$ [11].
Large–$b$ behavior of the scattering amplitude can be obtained from the
relation similar to the Froissart–Gribov projection formula [23, 25, 26].
Note that, due to the high collision energy and the exponential decrease of
the amplitude $F(s,t)$ (both its real and imaginary parts) with $-t$, the
effect of the finite integration range over $-t$ is not significant and has
therefore been neglected through the use of an infinite integration limit.
In what follows, a central profile of the elastic overlap function will be
considered. At LHC energies, the respective estimates lead to a value of the
elastic overlap function equal to $0.31$ at $b=0$ [1] (from another point of
view, this value can be used to estimate $[\mbox{Re}f(s,b=0)]^{2}$, which is
approximately $0.03$ since $\mbox{Im}f(s,b=0)\simeq 0.53$ [27] at the energy
$\sqrt{s}=7$ TeV), indicating a transition to the reflective scattering mode
[28] in agreement with the results of [29, 30]. It has been shown that the
inelastic overlap function $h_{inel}(s,b)$ is very close to its limiting value
$h^{max}_{inel}=1/4$ in the region of small and moderate impact parameters,
$0\leq b\lesssim 0.4$ fm, at the LHC energy $\sqrt{s}=13$ TeV. The deviation
of $h_{inel}$ from its maximal value is small and negative in this region of
impact parameters, and
$h_{el}(s,b)>1/4>h_{inel}(s,b).$
In fact, $h_{inel}(s,b)$ has a shallow local minimum at $b=0$. Asymptotically,
in the reflective scattering mode, $h_{el}(s,b)\to 1$ and $h_{inel}(s,b)\to 0$
as $s\to\infty$ at fixed $b$ in the central region (both limiting values are
equal to $1/4$ in the case of the absorptive scattering mode, i.e. black disc
saturation).
At the LHC energy $\sqrt{s}=13$ TeV, the unitarity relation in the impact
parameter range $0\leq b\lesssim 0.4$ fm gives
$(\mbox{Im}f(s,b)-{1}/{2})^{2}+(\mbox{Re}f(s,b))^{2}\simeq 0.$ (14)
The estimations $\mbox{Re}f(s,b)\simeq 0$ and $\mbox{Im}f(s,b)\simeq 1/2$
result from Eq. (14). Thus, the impact parameter picture of the elastic
scattering combined with the unitarity can provide at least a qualitative
explanation of the recent result on the unexpectedly small real to imaginary
parts ratio of the forward scattering amplitude [31].
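A minimal numerical check of these unitarity-based estimates, using only the values quoted above ($h_{el}(s,b=0)\simeq 0.31$ and $\mbox{Im}f(s,b=0)\simeq 0.53$ at $\sqrt{s}=7$ TeV):

```python
# Check of the unitarity-based estimates quoted in the text:
# h_el(s, b=0) ≈ 0.31 [1] and Im f(s, b=0) ≈ 0.53 [27] at sqrt(s) = 7 TeV.
im_f = 0.53
h_el = 0.31
# Since h_el = |f|^2 = [Re f]^2 + [Im f]^2:
re_f_sq = h_el - im_f ** 2
print(f"[Re f]^2 ~ {re_f_sq:.3f}")  # ≈ 0.029, i.e. the ~0.03 stated above
# Left-hand side of Eq. (14):
lhs = (im_f - 0.5) ** 2 + re_f_sq
print(f"(Im f - 1/2)^2 + [Re f]^2 ~ {lhs:.3f}")  # small compared to 1/4
```

Both quantities are small, consistent with $\mbox{Re}f(s,b)\simeq 0$ and $\mbox{Im}f(s,b)\simeq 1/2$ near $b=0$.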
In fact, there is no room for a significant real part of the elastic
scattering amplitude at $\sqrt{s}=13$ TeV, even at higher values of the impact
parameter. The real part of the amplitude $f(s,b)$ can be neglected due to its
smallness (cf. [32] for numerical estimates; arguments in favor of such an
approximation were given in Section 1).
So, the replacement $f\to if$ is used and the function $S(s,b)$ is taken to be
real. $S(s,b)$ can acquire negative values at $b<r(s)$ at high enough energies
[28]. The radius $r(s)$ increases as $\ln s$ at $s\to\infty$, and its value at
$\sqrt{s}=13$ TeV is approximately equal to $0.4$ fm. At $b<r(s)$ the
reflection appears, i.e. the function $S(s,b)$ in Eq. (1) crosses zero at
$b=r(s)$ and the value of the phase $\delta$ changes abruptly from $0$ to
$\delta=\pi/2$.
In the reflective scattering regime, $f>1/2$, an increase of the elastic
scattering amplitude $f$ correlates with a decrease of $h_{inel}$ at small
values of $b$ due to the unitarity relation
$[f(s,b)-{1}/{2}]^{2}={1}/{4}-h_{inel}(s,b)=\kappa^{2}(s,b)/4.$ (15)
Negative $S(s,b)$ corresponds to $\delta=\pi/2$ (although the experimental
data are in favor of the reflective scattering, a.k.a. hollowness [7], the
results of the experimental data analysis of the inelastic overlap function
are affected by the assumptions on the real part or phase of the scattering
amplitude [7, 33]). The term “reflective” comes from optics, where the phases
of the incoming and outgoing waves under reflection differ by $\pi$.
Hollowness, i.e. $h_{inel}(s,b)$ reaching its maximum at $b>0$, results from
reflective scattering and vice versa, due to unitarity. The possibility of
such an impact parameter dependence of the inelastic overlap function was
briefly mentioned in [34] and discussed in [35, 36, 37, 38, 39]. These
phenomena are already relevant at LHC energies.
The physical picture of the hadron interaction region in the transverse plane
at high energies corresponds to a reflective disc surrounded by a black ring.
It should be emphasized that this is a picture of the interaction region and
not of the individual protons. The appearance of reflective ability can be
associated with the soft deconfinement proposed in [40]. The reflective
scattering can also be associated with the formation of a color-conducting
medium in the intermediate state of the hadron interaction [41].
This picture finds confirmation in experiments at JLab and at the LHC [1, 42].
The results are in favor of the dominance of elastic scattering in the soft
deconfined state. The mechanism is energy–dependent and leads to elastic
scattering dominance at $s\to\infty$ as a consequence of its increasing
decoupling from inelastic production [43].
A central profile of the elastic overlap function is encoded in the relation
$\frac{\partial h_{el}}{\partial b}=\left(\frac{1-S}{S}\right)\frac{\partial
h_{inel}}{\partial b}.$ (16)
resulting from [44]. Note that $({1-S})/{S}<0$ in the reflective scattering
region. The respective qualitative dependencies are presented in Fig. 1.
Figure 1:
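Eq. (16) can be checked directly on a model profile. The Gaussian amplitude below (with height $A>1/2$, i.e. reflective at small $b$) is an assumption for illustration only:

```python
import math

# Numerical check of Eq. (16) for a model amplitude.  After the replacement
# f -> i f, the amplitude f(s, b) is real and S = 1 - 2 f.  The Gaussian
# profile f(b) = A exp(-b^2 / B) with A = 0.7 > 1/2 (reflective at small b)
# is an illustrative assumption, not a fit to data.
A, B = 0.7, 1.0

def f(b):
    return A * math.exp(-b * b / B)

def h_el(b):
    return f(b) ** 2              # elastic overlap function

def h_inel(b):
    return f(b) * (1.0 - f(b))    # inelastic overlap, from unitarity

def deriv(g, b, eps=1e-6):
    return (g(b + eps) - g(b - eps)) / (2 * eps)

for b in (0.2, 0.5, 1.0):  # avoid the point where S = 0, i.e. f = 1/2
    S = 1.0 - 2.0 * f(b)
    lhs = deriv(h_el, b)
    rhs = ((1.0 - S) / S) * deriv(h_inel, b)
    print(b, round(lhs, 6), round(rhs, 6))  # the two sides agree
```

Algebraically this is immediate: $h_{el}'=2ff'$ and $h_{inel}'=(1-2f)f'=Sf'$, so their ratio is $2f/S=(1-S)/S$.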
The picture of hadron scattering is schematically represented at Fig. 2 [45].
Figure 2:
Because of the presence of the hard core, the elastic scattering of hadrons
has a central profile and resembles collisions of billiard balls. In central
collisions the cores survive in soft processes. Inelasticity is therefore
depleted in central collisions due to unitarity.
## 3 Concluding remarks
The peripheral profile of the elastic overlap function, i.e. its domination
over the inelastic one at large values of $b$, corresponds to the assumption
of the validity of elastic unitarity at high energies and is to be attributed
to a dominant contribution of the amplitude’s real part. Evidently, at high
energies and large values of $b$ such an assumption runs into trouble, since
it violates consequences of the analyticity and unitarity of the scattering
matrix, and should be considered incorrect. The region of high energies
includes the highest energies achieved at the LHC as well as the lower
energies of the CERN ISR. However, at the moment we cannot turn this
qualitative delineation of the high energy region into a more quantitative
one.
The inconsistency of a peripheral dominance of the elastic scattering results,
in particular, in disregarding the shadow nature of elastic scattering at
large impact parameters. The dominance of a peripheral profile of $h_{el}$
masks the absence of absorption in this region. A proper exclusion of this
option corresponds to the elimination of the relevant class of
$t$–dependencies of the amplitude phase and can thus be treated as an emergent
constraint on the phase. This exclusion, which amounts to eliminating
peripheral geometric elastic scattering, provides consistency with unitarity
and analyticity and leads to a clear physical picture of elastic scattering,
restoring its shadow nature at large values of the collision impact parameter.
Thus, any phenomenological model and/or analysis of the experimental data that
exploits a nonvanishing contribution of the real part of the amplitude should
control the profile of the elastic overlap function in the impact parameter
representation to guarantee that the inequality
$[\mbox{Re}f(s,b)]^{2}\ll h_{inel}(s,b)$ (17)
is valid at large values of $b$, i.e. the limiting behavior of the ratio
$h_{inel}(s,b)/\mbox{Im}f(s,b)\to 1$ (18)
takes place at large impact parameters and fixed high collision energy,
indicating the peripherality of inelastic interactions. This constraint should
be applied already at the model preselection stage, and only models obeying
Eq. (18) should be used for the experimental data analysis.
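As a sketch of such a preselection check (the model profiles and the scale $\mu$ below are illustrative assumptions, not models from the text): a candidate amplitude whose real part falls off too slowly violates Eq. (18), while one with a faster-falling real part satisfies it.

```python
import math

# Illustrative preselection check of Eq. (18): for a model amplitude
# f(s, b) = Re f + i Im f, the ratio h_inel / Im f should tend to 1 at
# large b.  Here h_inel = Im f - |f|^2 by the unitarity relation.
# The profiles and the scale MU = 1.4 fm^-1 are assumptions for illustration.
MU = 1.4

def ratio(re_f, im_f, b):
    h_inel = im_f(b) - (re_f(b) ** 2 + im_f(b) ** 2)
    return h_inel / im_f(b)

im = lambda b: math.exp(-MU * b)
re_bad = lambda b: 0.5 * math.exp(-MU * b / 2)   # [Re f]^2 ~ Im f at large b
re_good = lambda b: 0.5 * math.exp(-MU * b)      # [Re f]^2 << Im f at large b

for b in (2.0, 5.0, 10.0):
    print(b, round(ratio(re_bad, im, b), 4), round(ratio(re_good, im, b), 4))
# The "bad" model's ratio saturates near 0.75, so Eq. (18) fails; the "good"
# model's ratio tends to 1, so Eq. (18) holds.
```

The check is cheap and can be run on any candidate parameterization before it is fitted to data.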
The appearance of a constraint on the real part of the scattering amplitude is
not unexpected, since the real and imaginary parts are related by a dispersion
relation. A full implementation of this connection has not been realized yet;
a possible variant of such an implementation has been discussed in [46].
It is quite plausible that a hadron can be seen as a structure with a hard
central core coated by a thick but breakable shell. A similar structure of
hadrons has been proposed in [47]. As follows from recent studies [48, 49,
50], the mass distribution in the hadron also supports this picture.
Peripheral hadron collisions lead to the destruction of the shells, which is
naturally associated with inelastic diffraction processes. This breaking of
the shells can therefore be the reason for the peripheral nature of single and
double inelastic diffraction processes.
Diffraction on the outer shell has a shadow nature and is observed as a
diffraction peak with a first dip in $d\sigma/dt$. Diffraction on the core has
a geometric nature and hides the secondary dips and bumps in the differential
cross–section [45]. The latter is due to the reflective scattering/hollowness
which, as we believe, is being observed at the LHC. It should be noted that
these phenomena are associated with a peripheral form of the inelastic overlap
function at small impact parameters at LHC energies. The central subject of
this note is the region of large impact parameters and the dominance of the
inelastic overlap function in this region. Of course, the reflective
scattering/hollowness obeys this property. But the property is more general
and should also be observed at lower energies, where the reflection/hollowness
phenomenon has not been detected.
The reflective scattering mode can be interpreted as a result of the presence
of a central core in the hadron structure, responsible for the appearance of
the second diffraction cone in the differential cross–section of the elastic
scattering [45]. This mode implies elastic scattering dominance at
$s\to\infty$. Moreover, a feature of this mode is the absence of a noticeable
contribution from the real part, i.e. the elastic overlap function always has
a central profile and the inelastic one has a peripheral profile. The presence
of a central core in the proton could be associated with a mechanism of
spontaneous chiral symmetry breaking in which the two scales $\Lambda_{QCD}$
and $\Lambda_{\chi}$, relevant for color confinement and spontaneous chiral
symmetry breaking respectively, are different (a recent discussion is in
[51]).
Finally, besides the use of CNI for phase studies, there is yet another
possibility of phase–sensitive experimentation, related to polarization
studies. These are technically difficult, but polarization measurements are
sensitive to the phases of the helicity amplitudes [52].
## Acknowledgements
We are grateful to Laszlo Jenkovszky and Evgen Martynov for a careful reading
of the note, many useful comments and remarks.
## References
* [1] G. Antchev et al. (The TOTEM Collaboration), Eur. Phys. J. C 76 (2016) 661.
https://doi.org/10.1140/epjc/s10052-016-4399-8.
* [2] I.M. Dremin, Particles 2 (2019) 57-69.
https://doi.org/10.3390/particles2010005.
* [3] V.A. Petrov, Theor. Math. Phys. 204 (2020) 896-900.
https://doi.org/10.1134/S0040577920070041.
* [4] V.A. Khoze, A.D. Martin, M.G. Ryskin, Phys. Rev. D 101 (2020) 016018.
https://doi.org/10.1103/PhysRevD.101.016018.
* [5] L. Durand, P. Ha, Phys. Rev. D 102 (2020) 036025.
https://doi.org/10.1103/PhysRevD.102.036025.
* [6] E. Ferreira, A.K. Kohara, T. Kodama, arXiv: 2011.13335v1.
* [7] W. Broniowski et. al. Phys. Rev. D. 98 (2018) 074012.
https://doi.org/10.1103/PhysRevD.98.074012
* [8] L. Van Hove, Rev. Mod. Phys. 36 (1964) 655-665.
https://doi.org/10.1103/RevModPhys.36.655
* [9] B. Schrempp, F. Schrempp, Nucl. Phys. B 163, (1980) 397.
https://doi.org/10.1016/0550-3213(80)90410-1.
* [10] V. Kundrát, M. Locajíček, D. Krupa, Phys. Lett. B 544 (2002) 132-138
https://doi.org/10.1016/S0370-2693(02)02481-4.
* [11] B.R. Webber, Nucl. Phys. B 87 (1975) 269.
https://doi.org/10.1016/0550-3213(75)90067-X.
* [12] S.M. Troshin, Phys. Lett. B 682 (2009) 40-42.
https://doi.org/10.1016/j.physletb.2009.10.088.
* [13] A. Martin, Phys. Rev. 129 (1963)1432-1436.
https://doi.org/10.1103/PhysRev.129.1432.
* [14] P.D.B. Collins, An Introduction to Regge Theory and High-Energy Physics, 460pp, Cambridge University Press, Cambridge 1977.
* [15] M. Froissart, Phys. Rev. 123 (1961) 1053-1057.
https://doi.org/10.1103/PhysRev.123.1053.
* [16] V.N. Gribov, JETP 41 (1961) 667-669.
* [17] R. Cahn, Z. Phys. C 15 (1982) 253-260.
https://doi.org/10.1007/BF01475009
* [18] V. Kundrát, M. Locajíček, Z. Phys. C 63 (1994) 619-629.
https://doi.org/10.1007/BF01557628.
* [19] V. Kundrát, M. Locajíček, Mod. Phys. Lett. A 11, 2241-2250 (1996).
https://doi.org/10.1142/S021773239600223X.
* [20] J. Pumplin, G.L. Kane, Phys. Rev. D 11 (1975) 1183 .
https://doi.org/10.1103/PhysRevD.11.1183.
* [21] G. Cohen–Tannoudji, V.V. Ilyn, L.L. Jenkovszky, Lett. Nuov. Cim. 5, 957-962 (1972).
https://doi.org/10.1007/BF02777999.
* [22] F. Halzen, Model Independent Features of Diffraction. In: Speiser D., Halzen F., Weyers J. (eds) Particle Interactions at Very High Energies. NATO Advanced Study Institutes Series (Series B: Physics), vol 4. Springer, Boston, MA.
https://doi.org/10.1007/978-1-4684-8655-1-1.
* [23] R. Blankenbecler, M.L. Goldberger, Phys. Rev. 126 (1962) 766-786.
https://doi.org/10.1103/PhysRev.126.766.
* [24] M.L. Goldberger, K.M. Watson, Collision Theory, John Wiley & Sons, New-York–London–Sydney, 1964.
* [25] R. Henzi, Nuovo Cim. A 46, (1966) 370.
* [26] B. Schrempp, F. Schrempp, Nucl. Phys. B 163 (1980) 397-452.
https://doi.org/10.1016/0550-3213(80)90410-1.
* [27] A. Alkin et al, Phys. Rev. D 89 (2014) 091501(R).
https://doi.org/10.1103/PhysRevD.89.091501.
* [28] S.M. Troshin, N.E. Tyurin, Int. J. Mod. Phys. A 22 (2007) 4437-4449.
https://doi.org/10.1142/S0217751X0703697X.
* [29] A. Alkin et al., arXiv: 1807.06471v2.
* [30] T. Csörgő, R. Pasechnik, A. Ster, Acta Phys. Pol. B Proc. Suppl. 12 (2019) 779-785.
* [31] G. Antchev et al. (The TOTEM Collaboration), Eur. Phys. J. C 79 (2019) 785.
https://doi.org/10.1140/epjc/s10052-019-7223-4.
* [32] I.M. Dremin, V.A. Nechitailo, S.N. White, Eur. Phys. J. C 77 (2017) 910 .
https://doi.org/10.1140/epjc/s10052-017-5483-4.
* [33] V.A. Petrov, A.P. Samokhin, Int. J. Mod. Phys.: Conf. Ser. 47 (2018) 1860097.
https://doi.org/10.1142/S2010194518600972.
* [34] V.F. Edneral et al., Preprint CERN-TH-2126, 1976.
* [35] S.M. Troshin, N.E. Tyurin, Phys. Lett. B 316 (1993) 175.
https://doi.org/10.1016/0370-2693(93)90675-8.
* [36] P. Desgrolard, L.L. Jenkovszky, B.V. Struminsky, Phys. Atom. Nucl. 63 (2000) 891.
https://doi.org/10.1134/1.855720.
* [37] I.M. Dremin, Phys.-Usp. 58 (2015) 61.
https://doi.org/10.3367/UFNe.0185.201501d.0065
* [38] V.V. Anisovich, V.A. Nikonov, J. Nyiri, Phys. Rev. D 90 (2014) 074005.
https://doi.org/10.1103/PhysRevD.90.074005.
* [39] E.R. Arriola, W. Broniowski, Few Body Syst. 57 (2016) 485.
https://doi.org/10.1007/s00601-016-1087-z.
* [40] K. Fukushima, T. Kojo, W. Weise, Phys. Rev. D 102 (2020) 096017.
https://doi.org/10.1103/PhysRevD.102.096017.
* [41] S.M. Troshin, N.E. Tyurin, J. Phys. G, 46 (2019) 105009.
https://doi.org/10.1088/1361-6471/ab0ed1.
* [42] V.D. Burkert, L. Elouadrhiri, F.X. Girod, Nature 557 (2018) 396-399.
https://doi.org/10.1038/s41586-018-0060-z.
* [43] S.M. Troshin, N.E. Tyurin, Phys. Lett. B 707 (2012) 558-561.
https://doi.org/10.1016/j.physletb.2012.01.033.
* [44] S.M. Troshin, N.E. Tyurin, Phys. Rev. D 88, 077502 (2013).
https://doi.org/10.1103/PhysRevD.88.077502.
* [45] S.M. Troshin, N.E. Tyurin, EPL 129, 31002 (2020).
https://doi.org/10.1209/0295-5075/129/31002.
* [46] S.M. Troshin, N.E. Tyurin, Mod. Phys. Lett. A 32 (2017) 1750028 .
https://doi.org/10.1142/S0217732317500286.
* [47] M.M. Islam, Nucl. Phys. B (Proc. Suppl.) 25 (1992) 104.
* [48] D.E. Kharzeev, arXiv:2102.00110v2.
* [49] R. Wang, W. Kou, X. Chen, arXiv:2102.01610v2.
* [50] X.-D. Ji, arXiv:2102.07830v1.
* [51] N. Evans, K.S. Rigatos, arXiv: 2012.00032v1.
* [52] M.G. Echevarria et al., PoS SPIN2018 (2019) 063.
https://doi.org/10.22323/1.346.0063.
## Figure captions
Figure 1: Qualitative $b$–dependencies of the elastic and inelastic overlap
functions $h_{el}$ and $h_{inel}$.
Figure 2: Proton scattering at the impact parameter $b$.
# Collaborative Federated Learning For Healthcare: Multi-Modal COVID-19
Diagnosis at the Edge
Adnan Qayyum1 (Email<EMAIL_ADDRESS>), Kashif Ahmad2, Muhammad Ahtazaz
Ahsan1, Ala Al-Fuqaha2, and Junaid Qadir1
1 Information Technology University (ITU), Punjab, Lahore, Pakistan
2 Information and Computing Technologies (ICT) Division, College of Science
and Engineering (CSE), Hamad Bin Khalifa University, Doha, Qatar
###### Abstract
Despite significant improvements over the last few years, cloud-based
healthcare applications continue to suffer from poor adoption due to their
limitations in meeting stringent security, privacy, and quality of service
requirements (such as low latency). The edge computing trend, along with
techniques for distributed machine learning such as federated learning, have
gained popularity as a viable solution in such settings. In this paper, we
leverage the capabilities of edge computing in medicine by analyzing and
evaluating the potential of intelligent processing of clinical visual data at
the edge allowing the remote healthcare centers, lacking advanced diagnostic
facilities, to benefit from the multi-modal data securely. To this aim, we
utilize the emerging concept of clustered federated learning (CFL) for an
automatic diagnosis of COVID-19. Such an automated system can help reduce the
burden on healthcare systems across the world, which have been under a lot of
stress since the COVID-19 pandemic emerged in late 2019. We evaluate the
performance of the proposed framework under different experimental setups on
two benchmark datasets. Promising results are obtained on both datasets,
comparable to the central baseline where specialized models (i.e., each on a
specific type of COVID-19 imagery) are trained with central data, and
improvements of 16% and 11% in overall F1-Scores have been achieved over the
multi-modal model trained in the
conventional Federated Learning setup on X-ray and Ultrasound datasets,
respectively. We also discuss in detail the associated challenges,
technologies, tools, and techniques available for deploying ML at the edge in
such privacy and delay-sensitive applications.
## I Introduction
The main motivation for using data storage and computing at the edge stems
from the desire to make high quality computing resources available closer to
the users and to reduce the need for end devices to exchange private data with
centralized servers [satyanarayanan2017emergence]. There is an ongoing trend
to deploy machine learning (ML) algorithms at the edge enabling consumers and
corporations to enjoy and explore new opportunities in different application
domains, such as automotive, security, surveillance, and other smart city
services like healthcare [2]. This desire is particularly strong in the
healthcare industry where the stakes are high and various stakeholders
(including consumers, governments, and service providers) have stressed the
need for foolproof safeguards for ensuring data security, user privacy, and
ethical data use [1]. This motivates the use of edge computing in healthcare
settings to meet the high expected standards for patient privacy and security
as well as stringent requirements for quality of service (high reliability and
low latency).
Figure 1: An illustration of AI/ML at the edge in an IoT-empowered healthcare
environment.
Healthcare is a major application domain that can benefit from edge computing.
Generally, healthcare centers in remote areas lack advanced medical equipment
and other healthcare facilities resulting in poorer access to health services
by the people living there. Thanks to the recent advancement in telemedicine,
the provision of health services remotely, using audiovisual technology, is a
reality now. Large volumes of medical data including ultrasound and X-ray
images could be transmitted to major hospitals with advanced diagnosis
facilities for diagnostic and training purposes. However, there are several
challenges associated with a typical cloud-based infrastructure, such as low
bandwidth, high latency, transmission cost, and increasing concerns about data
security and privacy. Edge computing, which aims to store and process data
locally or closer to edge devices, on the other hand, results in low latency
and increased privacy and data security.
A typical IoT environment for smart healthcare is illustrated in Figure 1, in
which data collected by different sensors is processed at the edge for
different applications using ML techniques. Once the ML model predicts an
event, the edge device triggers an action or requests a service in the cloud.
ML algorithms can also be executed concurrently in the cloud and at the edge,
as shown in the figure. With local data storage and processing resources at
the edge, such applications enjoy a significant improvement in processing time
by avoiding network congestion. More importantly, real-time processing at the
edge improves the performance of delay-sensitive applications, such as
healthcare, surveillance, and automotive, by avoiding latency incurred during
data transmission between end devices and the cloud. In addition, ML at the
edge results in increased security and privacy, as sending data back and forth
to the cloud may expose it to security threats.
In this paper, we leverage the capabilities of edge computing in medicine by
analyzing and evaluating the potential of intelligent processing of clinical
visual data related to COVID-19 at the edge allowing the remote healthcare
centers to benefit from the multi-modal collaborative learning paradigm
without sharing any information about the modality of the local data and the
data itself. A number of recent research efforts have focused on diagnosing
COVID-19 using AI and data science methods [3]; relatively little work has
however focused on using edge AI for COVID-19 diagnosis.
To this aim, we utilize an emerging concept of clustered federated learning
(CFL) and propose a CFL-based collaborative learning framework for an
automatic multi-modal diagnosis of COVID-19. Our approach is well suited to
the task of COVID-19 diagnosis as visual data (i.e., CT scans, X-rays, and
ultrasound) is collected at different centers and could be used to build a
joint/shared ML model in a cloud-edge infrastructure able to diagnose COVID-19
in both X-ray and Ultrasound images. The proposed framework is evaluated on
two benchmark datasets under different experimental setups and we have
achieved encouraging results using CFL that are comparable with the baseline
results (when the model is trained with central data). We also discuss in
detail the potential applications, associated challenges, technologies, tools,
and techniques available for deploying ML at the edge in such privacy and
delay-sensitive applications. We note that we use the term multi-modal model
to represent a single model capable of diagnosing COVID-19 in both X-ray and
Ultrasound imagery when provided separately.
The main contributions of the paper are as follows:
1. 1.
To highlight the potential of intelligent processing of clinical data at the
edge, we propose a collaborative learning framework for COVID-19 diagnosis by
leveraging a CFL approach enabling remote healthcare centers to benefit from
each other’s data without sharing the data itself and associated information.
2. 2.
We also demonstrate how the performance of conventional FL is affected by the
divergence of the distribution of data from different sources (i.e., X-ray and
Ultrasound imagery), and how CFL can help to mitigate the adverse impact.
3. 3.
We also highlight the potential challenges and the enabling technologies that
facilitate the deployment of ML/DL models at the edge.
4. 4.
Finally, we elaborate on the open research issues related to deploying ML at
the edge for healthcare applications that require further investigation.
Organization of the paper: The rest of the paper is organized as follows.
Section II-A provides a broad discussion of the related work on automated
COVID-19 diagnosis as well as the different challenges encountered in
deploying ML on the edge along with a discussion on enabling technologies. The
case study for collaborative learning for multi-modal diagnosis of COVID-19 is
presented in Section III and results are presented in Section IV. Various open
research issues that require further investigation are presented in Section V.
Finally, Section VI provides some concluding remarks.
## II Background
In this section, we provide background on related work on automated COVID-19
diagnosis, as well as a discussion of the potential challenges that hinder the
deployment of ML at the edge, along with the different enabling technologies.
### II-A Existing Automated COVID-19 Diagnosis Work
COVID-19 has been a strong focus of the research community in 2020, especially
after it was declared in March by the World Health Organization (WHO) to be a
pandemic, with diverse efforts focusing on diagnosis [4], treatment [5], and
the development of potential vaccines [6]. Data science
methods—particularly, ML and data visualization techniques—are playing a major
role in the international response against the COVID-19 pandemic with some key
applications being risk assessment, contact tracking, fake news detection,
sentiment analysis, and screening and diagnosis [3]. The focus of this paper
is on automated screening and diagnosis; we shall discuss next some of the
prominent related techniques relying on different types of information (e.g.,
audio and visual data) that have been proposed.
A number of efforts have focused on automated image analysis in a bid to speed
up the COVID-19 diagnosis process [7]. To this aim, three different medical
imaging modalities, namely computerized tomography (CT), Ultrasound scans, and
X-radiation (X-ray), have been mostly exploited. To facilitate research on
image-based solutions for COVID-19 diagnosis, several datasets have been
collected and made publicly available [7, 9]. For instance, Maghdid et al. [9]
collected a comprehensive dataset containing a total of 170 X-rays and 361 CT
scan images from different sources. Cohen et al. [10] also provide a
collection of X-rays and CT scans of confirmed COVID-19 patients. A collection
of COVID-19 patients’ CT scans has also been made publicly available for
research purposes in [11, 12]. Born et al. [13], on the other hand, provide a
lung ultrasound (POCUS) dataset containing a total of 1103 images including
654 COVID-19, 277 bacterial pneumonia, and 277 healthy controls samples.
A vast majority of the image-based solutions for COVID-19 diagnosis rely on
CT scan images. For instance, Wan et al. [14] proposed a deep learning model
for extracting COVID-19’s specific features/textures in CT scans of confirmed
cases to extract useful clinical insight before the pathogenic tests. An
evaluation of a reasonable amount of confirmed cases showed encouraging
results with an average test accuracy of 73.1%. Butt et al. [15] proposed a
two-phase solution for COVID-19 diagnosis in CT scans. Initially, a pre-
trained 3D Convolutional Neural Network (CNN) is employed to extract potential
infectious regions in CT scans followed by a CNN-based classification
framework to classify the candidate regions into COVID-19, influenza, and non-
infectious regions. Li et al. [16] also proposed a 3D CNN-based framework to
extract both local and global deep features for diagnosing COVID-19 in CT
scans. One of the key challenges for CNN-based solutions is the unavailability
of large-scale CT scan datasets. To deal with this challenge,
Afshar et al. [17] proposed a Capsule Networks based deep learning framework,
namely COVID-CAPS, for COVID-19 diagnosis in X-ray images. Moreover, to
further enhance the capabilities of the proposed model, the authors used an
external dataset composed of 94,323 frontal-view chest X-ray images for
pre-training and transfer learning purposes.
There are also methods relying on X-ray images for COVID-19 diagnosis. For
instance, in [18] a pre-trained deep model is fine-tuned on X-ray images for
COVID-19 diagnosis. Similarly, Sethy et al. [19] trained a Support Vector
Machine (SVM) classifier on features extracted via ResNet-50 [20] from X-ray
images for classification of COVID-19 and non-COVID-19 cases. Ali et al. [21]
evaluated the performance of several existing deep models in diagnosing
COVID-19 in X-ray images. Islam et al. [22] on the other hand proposed a deep
framework combining CNNs and Recurrent Neural Networks (RNNs) for diagnosis of
COVID-19 in X-ray images. Initially, features are extracted with a CNN and
then fed into a Long Short-Term Memory (LSTM) network for diagnosis/detection
purposes. Kassani et al. [23] provide a detailed evaluation of several
existing deep models and classification algorithms to find the best combination
for COVID-19 diagnosis in both X-ray and CT scans. However, both modalities
are treated individually. The deep models are used for feature extraction,
which is then fed into different classification algorithms.
Some image-based COVID-19 diagnosis methods also rely on a recently introduced
concept of Federated Learning (FL) to ensure data privacy in a collaborative
learning environment, where several hospitals can participate in training a
global ML model. For instance, in [24] a deep model is collaboratively trained
in a federated learning environment on CT scans collected from different
sources. Kumar et al. [25] on the other hand proposed a blockchain-FL-based
framework for collecting data (CT scans) from different hospitals, and
collaboratively training a global deep model. Moreover, several existing deep
models have also been evaluated in the proposed federated learning framework.
In [26], a federated learning technique is employed for training a global
model on electronic health records from various hospitals to predict mortality
within seven days in hospitalized COVID-19 patients.
### II-B Challenges in Deploying ML at the Edge
#### II-B1 Resource Scarcity and Heterogeneity
Heterogeneous edge devices with varying computational, storage and
communication resources are a major bottleneck for the deployment of ML on the
edge. ML algorithms in general and deep learning (DL) in particular require a
large amount of computational and processing resources, making the deployment
of ML impractical in several edge computing applications. DL models are
usually large and computationally expensive: both the training of a deep model
and its inference are typically performed on power-hungry GPUs and servers,
while edge devices are designed to operate at low power and usually have
frugal memory, making the deployment of DL models on edge devices very
challenging. Another important challenge is the availability of a power source
at the edge device; a battery with long backup is always desirable in a
typical edge computing network. In addition, the size of the network and
system constraints are also major challenges that can result in only a few
devices being active at a time [27].
#### II-B2 Network Communication
The heterogeneity of computational and communication resources also leads to
slow and unstable communication. In addition to resource heterogeneity, there
are other considerations as well, e.g., the Internet upload speed is typically
much slower than the download speed [2]. Therefore, in an edge computing
environment in which ML/DL models are trained on the client side, a stable and
fast Internet connection is always desirable; otherwise, unstable clients will
be disconnected from the network, resulting in a drop in performance. On the
other hand, deploying ML at the edge saves expensive communication, i.e., the
local (raw) data does not need to be transmitted to the cloud/server.
#### II-B3 Statistical Heterogeneity
Statistical heterogeneity, due to data generated by different types of devices
in an edge computing environment, can lead to many efficiency challenges. For
instance, the optimization of ML/DL hyperparameters becomes difficult, which
directly affects model performance. To address statistical heterogeneity,
techniques such as meta-learning can be used to enable device-specific
modeling [28].
#### II-B4 Privacy and Security Challenges
Despite being able to train joint models without sharing data in a
collaborative learning environment using FL, privacy and security challenges
arise in the presence of malicious devices. For instance, an adversary can
learn sensitive
information using the model parameters and the shared model. As shown in [29],
privacy-related information can be inferred from the shared weights even
without getting access to the data itself. To restrain leakage of privacy-
related information from the shared model, different privacy-preserving
techniques can be leveraged, such as cryptographic approaches and differential
privacy [30].
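As a concrete (and deliberately simplified) illustration of the differential-privacy idea mentioned above, the following NumPy sketch clips a client's weight update and adds Gaussian noise before it would be shared, in the spirit of DP-SGD. The clipping bound and noise multiplier are illustrative values, not parameters from this paper.

```python
import numpy as np

def dp_sanitize_update(update, clip_norm=1.0, noise_mult=0.5, seed=0):
    """Clip the update's L2 norm to clip_norm, then add Gaussian noise
    whose scale is tied to the clipping bound, so the server never sees
    the raw client update."""
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return clipped + noise

raw_update = np.full(10, 3.0)          # L2 norm ~ 9.49, well above clip_norm
private_update = dp_sanitize_update(raw_update)

assert private_update.shape == raw_update.shape
# same seed -> same noise, so sanitisation is reproducible here
assert np.allclose(private_update, dp_sanitize_update(raw_update))
```

In a real deployment the noise would of course be freshly sampled per round; the fixed seed is only for reproducibility in this sketch.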
#### II-B5 Adversarial ML
Despite the state of the art performance of ML/DL techniques in solving
complex tasks, these techniques have been found vulnerable to carefully
crafted adversarial examples [31]. In a federated learning setup, a client or
multiple clients can be compromised to realize the attacks on the whole
network. For instance, local poisoning attacks using compromised attacker
devices are presented in [32]. The authors demonstrated that their proposed
attacks can increase the error rates of the distributively trained model on
four real-world datasets. Moreover, a systematic review focused on different
adversarial ML attacks and defenses for cloud-hosted ML models can be found in
[33].
### II-C Enabling Technologies: Building Blocks for ML at Edge
#### II-C1 Schemes for deploying ML at the Edge
In recent years, enormous growth has been observed in the computational power
of edge devices, allowing them to play a more important role than just
collecting data in IoTs. ML can contribute significantly in fully utilizing
the potential of edge devices in numerous exciting applications (e.g., smart
healthcare using wearables technologies and AI-empowered sensors, etc.) and
turn them into more useful components of an IoT environment [34]. ML could be
employed at the edge in several ways, such as inference, sensor fusion,
transfer learning, generative models, and self-improving devices. In this
section, we briefly describe some of the most commonly used schemes.
* •
Inference: The inference capability of ML, which aims at predicting unseen
objects/classes based on previously learned knowledge, helps IoT systems
perform different activities, such as cancer prognosis, brain tumor
classification, and other clinical data analysis at the edge, resulting in
reduced latency and bandwidth usage in telemedicine [35].
* •
Sensor Fusion: ML in conjunction with signal processing algorithms can be used
for the fusion of information from different sensors enabling efficient
utilization of the available information. With fusion capabilities, individual
sensors in an IoT environment can be converted into sophisticated synthetic
sensors to solve complex problems more accurately. For instance, in healthcare
data from several sensors/sources can be combined efficiently to predict a
clinical event, such as heart failure [36].
* •
Transfer Learning: Transfer learning, which aims to re-utilize the knowledge
of one domain in another by fine-tuning a model pre-trained on a larger
dataset, can help edge devices learn from smaller datasets with fewer
computational resources. In an IoT environment, and particularly in healthcare
applications where data is scarce, transfer learning can be used to balance
workload and latency: pre-trained models are kept in the cloud and shared
among edge devices to be fine-tuned for specific tasks [37].
* •
Generative Models: Generative learning can also be useful in edge computing:
generative models in the cloud can approximate the original data used to train
models at edge devices, for applications with few training samples or for
solving complex tasks with minimal computation in the cloud. Generative deep
models have already been explored for the generation of synthetic medical
images [38].
* •
Self-improving devices: In a typical IoT environment, ML techniques can also
be used to enable end devices to optimize their performance and improve
continuously based on the collected data and behaviors of other devices. Such
strategies help configure the devices faster, which ultimately leads to faster
and more efficient implementation and deployment.
#### II-C2 Hardware Optimization Techniques
For successful deployment of ML at the edge, the two critical requirements of
edge computing—namely (i) low power consumption, and (ii) high
performance—need to be fulfilled. Thus, off-the-shelf solutions are not
practical to intelligent processing of data at the edge devices, and custom
hardware architectures need to be developed. In this section, we discuss some
hardware optimization techniques to optimize hardware resources for deploying
ML at the edge.
##### Decentralized Distributed Computing
In edge computing, computations are completely or largely performed on end
devices in a distributed computing fashion. Also, edge computing brings data,
applications, and services closer to end devices while eliminating the need
for centralized clouds; this requires infrastructure decentralization, which
can be efficiently achieved using blockchain technologies [39]. Therefore,
computational resources can be shared among end/edge devices by employing
blockchain and smart-contract technologies, thus allowing resource-demanding
ML applications to be deployed at
designing a decentralized edge computing and IoT ecosystem are presented in
[40]. This study specifically focuses on the need for decentralized trust
schemes to eliminate trust in centralized entities, and highlights the
potential of using distributed ledger technology, i.e., blockchain, for
achieving decentralization. The backbone of
blockchain technologies is the distributed consensus mechanism enabling secure
communication among trust-less participants without the intervention of a
central controlling unit. There are many facets of blockchain with different
distributed consensus methods that can be used for edge-centric IoT systems
[41].
##### AI Co-Processors
Portable intelligent and dedicated co-processors are considered to be the
driving force for deploying AI/ML models at the edge. Different types of
specialized processors can be integrated into a single system or chip thus
forming a heterogeneous computing paradigm optimized for a specific type of
task. In general, AI co-processors have two common features: (1) they enable
parallel computing using multiple mini-cores; and (2) they enable accelerated
data fetching using distributed memory placed next to the mini-cores.
### II-D Algorithmic Optimization Techniques
The development and advancement of ML algorithms are promising aspects that
facilitate the successful application of ML at the edge. In this regard,
various algorithms and techniques can be leveraged to enhance and reduce the
computation of the parameters in ML models by exploiting different properties
such as sparsity. The widely used methods are described below.
Figure 2: The proposed clustered federated learning based collaborative
learning paradigm (Fig. 2 (c)) versus the method of model training using
central data (Fig. 2 (a)) and the conventional federated learning model
training method for multiple modalities (Fig. 2 (b)). The term “clients”
refers to hospitals, clinics, and medical imaging facilities.
* •
Parameter Efficient Networks: To efficiently deploy ML models at the edge,
computation and memory-efficient architectures of ML/DL models are highly
desirable. To facilitate embedded ML computing, various architectures of ML
models have been proposed in the literature that can be leveraged to deploy ML
models on the edge, e.g., Mobile Net [42] and SqueezeNet [43]. These
architectures are designed with a key focus on reducing computation costs
associated with the training and inferences of ML models while maintaining
accuracy.
* •
Network Pruning: The literature suggests that many neurons in a trained model
do not contribute to the final accuracy; therefore, such neurons can be pruned
to save memory. Google’s Learn2Compress
(https://ai.googleblog.com/2018/05/custom-on-device-ml-models.html) has found
that the number of neurons can be reduced by a factor of 2 while retaining an
overall accuracy of 97%. To this aim, several algorithms have been proposed in
the literature, such as learning important connections and weights among
neurons [44] and learning structural sparsity in deep models [45]. Moreover,
many ML models perform parameter computation using 32-bit float values,
whereas edge devices typically operate on 8-bit values or less. Therefore, the
model size can be significantly reduced by reducing precision.
* •
Network Distillation: Network distillation is a method for transferring
knowledge learned by a larger model to a smaller model. Together with transfer
learning, which deals with the transfer of knowledge learned from one domain
to another domain, network distillation holds substantial potential to
significantly reduce model size without compromising much on accuracy. In
addition, network distillation can benefit from other hyperparameter-tuning
algorithms as well. For instance, the
distillation method has been successfully used for application-specific and
resource-constrained IoT platforms [46].
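The precision-reduction idea mentioned under network pruning can be sketched with a simple affine 8-bit quantizer. This is a minimal NumPy illustration of the 32-bit-to-8-bit idea, not the scheme of any particular framework.

```python
import numpy as np

def quantize_8bit(w):
    """Affine-quantize float32 weights to 8-bit codes, keeping the
    (scale, zero_point) needed to approximately recover them."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, zero_point):
    """Map 8-bit codes back to approximate float weights."""
    return q.astype(np.float32) * scale + zero_point

weights = np.random.default_rng(0).standard_normal((64, 64)).astype(np.float32)
q, scale, zp = quantize_8bit(weights)
restored = dequantize(q, scale, zp)

# Storage drops from 4 bytes to 1 byte per parameter, and the
# round-trip error is bounded by half a quantization step.
assert q.dtype == np.uint8 and q.nbytes * 4 == weights.nbytes
assert float(np.max(np.abs(restored - weights))) <= scale / 2 + 1e-5
```

Production toolchains add per-channel scales and quantization-aware training on top of this basic idea.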
## III COVID-19 Diagnosis using Collaborative Federated Learning
In this section, we consider the problem of developing a single ML model for
classification of chest images from multiple sources (such as X-rays and
Ultrasound). Consider a clustered federated learning (CFL) setup, as shown in
Figure 2, that resembles actual federated learning settings [47]. Clients in
each cluster represent the healthcare entities (remote medical imaging
facilities) and major hospitals or other government entities (e.g., ministry
of health) play the role of the cloud server facilitating the weights
aggregation and updates. The problem formulation for collaborative learning is
described below.
### III-A Problem Formulation
In this task, we are interested in learning a shared model $M_{s}$ in a
collaborative fashion using clustered federated learning (CFL). As shown in
Figure 2, there are two clusters, each with a different imaging modality:
clients in cluster 1 ($C_{1}$) have X-ray imaging facilities and clients in
cluster 2 ($C_{2}$) possess ultrasound imaging facilities. Therefore, each
cluster $C_{k}$ is disjoint and has a different data distribution
$\mathcal{D}_{k}$. Each client $m$ in cluster $C_{k}$ has drawn
its samples $z^{k,1},\dots,z^{k,m}$ from the distribution $\mathcal{D}_{k}$
such that there are no overlapping samples among the clients. We have
formulated the problem of collaborative learning as supervised learning
problem such that each sample $z^{k,m}$ contains a pair of data sample
$x^{k,m}$ and its corresponding class label $y^{k,m}$, denoted by
$z^{k,m}=(x^{k,m},y^{k,m})$. Furthermore, we assume that each client has no
knowledge of the identity or data of any other client, whether within the same
cluster or in the other cluster. The major hospital
(aka server) shares a shared model $M_{s}$ and initial weights $W_{0}$ with
each client of every cluster. After receiving the $M_{s}$ and $W_{0}$, each
client trains the shared model (i.e., $M_{s}$) using its own local data
$D_{k,m}$, where $k=\\{1,2\\}$ and $m$ denotes the number of clients in each
cluster $C_{k}$. After that, every client in each cluster shares the learned
weights $W_{r,k,m}$ to the server, where $r$ denotes the communication
round/iteration number. After receiving the weight updates from each client,
the server performs federated averaging using Eq. 1.
$W_{r}=\frac{1}{n}\sum_{i=1}^{n}w_{i}\times W_{i}$ (1)
where $n$ denotes the total number of clients participating in the CFL setup
(i.e., $n=|C_{1}|+|C_{2}|$) and $w_{i}$ is a weighting factor that specifies
the weight given to the weights of each client. Then the server updates the
new weights (i.e., update its copy of $M_{s}$ with $W_{r}$) and performs the
inference using its multi-modal test data (the two modalities, i.e., X-ray and
Ultrasound are merged to make the testing data multi-modal). After testing the
performance of $M_{s}$ at the communication round $r$, the server shares the
updated weights $W_{r}$ with all clients in each cluster and repeats the
process until the specified criteria or desired performance is achieved. The
algorithm for collaborative multi-modal learning using CFL is presented in
Algorithm 1.
Input: Shared Model $M_{s}$, Clusters $k$, Initial Model Weights $W_{0}$, Set
of Clients $m$, Communication Rounds $R$, Epochs $E$, Batch Size $B$, and
Learning Rate $\eta$
Output: Updated Weights $W_{r}$
Initialize: $W_{0}$, $R$, $E$, $B$, and $\eta$
for _$r=1,...,R$_ do
for _$i=1,...,k$ in parallel_ do
for _$j=1,...,m$ in parallel_ do
if _$r==1$_ then
$W_{i,m}\leftarrow W_{0}$;
$M_{s}\leftarrow W_{0}$;
for _$e\in E$_ do
Using $B$ & $\eta$;
Train $M_{s}$ using $z^{i,j}=(x^{i,j},y^{i,j})$;
Get $W_{j}$ from $M_{s}$;
end for
else
$M_{s}\leftarrow W_{r}$;
for _$e\in E$_ do
Using $B$ & $\eta$;
Train $M_{s}$ using $z^{i,j}=(x^{i,j},y^{i,j})$;
Get $W_{j}$ from $M_{s}$;
end for
end if
end for
$W_{i,r}=\frac{1}{m}\sum_{j=1}^{m}w_{j}\times W_{j}$;
end for
Return: $W_{r}=\frac{1}{k}\sum_{i=1}^{k}W_{i,r}$;
end for
Algorithm 1 Collaborative learning from data of different sources and modality
using CFL
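Algorithm 1 can be condensed into a short NumPy sketch. The local training step below is a stand-in perturbation (real clients would run SGD on their private data), and the cluster/client counts are illustrative values, not the paper's setup.

```python
import numpy as np

def local_train(shared_weights, client_seed):
    """Stand-in for client-side training on private data: perturb weights."""
    rng = np.random.default_rng(client_seed)
    return shared_weights + 0.01 * rng.standard_normal(shared_weights.shape)

def fed_avg(weight_list):
    """Equal-weight federated averaging, i.e., the w_i = 1/n case of Eq. (1)."""
    return sum(weight_list) / len(weight_list)

n_clusters, clients_per_cluster, rounds = 2, 3, 5
W = np.zeros((4, 4))                       # initial shared weights W_0
for r in range(rounds):
    cluster_means = []
    for k in range(n_clusters):
        updates = [local_train(W, client_seed=1000 * r + 100 * k + m)
                   for m in range(clients_per_cluster)]
        cluster_means.append(fed_avg(updates))   # intra-cluster aggregation
    W = fed_avg(cluster_means)                   # server-side aggregation W_r

# The aggregated weights keep the model's shape and stay bounded.
assert W.shape == (4, 4) and np.all(np.isfinite(W))
```

Only weight arrays cross the client/server boundary in this loop, which is the privacy argument of the FL setup; no raw samples are exchanged.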
### III-B Experimental Setup
#### III-B1 Data Description
For this study, two datasets from different sources, one containing chest
X-ray images [10] and the other chest ultrasound images [13], are used. We
formulated the problem as
binary classification, i.e., differentiating between COVID-19 chest images and
normal chest images. Each dataset is divided into two parts, i.e., a training
set and a testing set using a split of 80% and 20%, respectively. The training
portion (i.e., 80%) of each dataset is further divided into different parts,
depending upon the number of clients in that cluster. The distribution of
training and testing data of X-ray and Ultrasound datasets over different
classes is shown in Table I. Moreover, the testing sets from both datasets are
merged to develop a joint testing set that will be used by the server for the
evaluation of the performance of a shared model that is being trained in a
collaborative fashion using CFL.
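The 80/20 split and per-client partitioning described above can be sketched as follows. The sample count matches the X-ray totals in Table I (1564 images), while the number of clients is an assumed value for illustration.

```python
import numpy as np

def split_for_clients(n_samples, n_clients, test_frac=0.2, seed=0):
    """Hold out test_frac of the shuffled indices for the server-side test
    set and partition the remaining training indices (near-)evenly across
    clients, so no sample is shared between any two parties."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_test = int(round(test_frac * n_samples))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return train_idx, test_idx, np.array_split(train_idx, n_clients)

train_idx, test_idx, shards = split_for_clients(n_samples=1564, n_clients=4)

assert len(test_idx) == 313                              # ~20% of 1564
assert sum(len(s) for s in shards) == len(train_idx)     # nothing lost
assert not set(test_idx) & set(np.concatenate(shards))   # shards/test disjoint
```

In the paper's setting the held-out portions of both datasets are then merged into the server's multi-modal test set.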
We further note that the datasets used in this study have inter- and
intra-class variability in terms of image size and quality, contrast and
brightness levels, and positioning of subjects; an example is shown in
Figure 3. This is
not surprising as these publicly available databases are not standard datasets
for COVID-19 detection, and have been curated from different sources [48].
Moreover, it is evident from Table I that these datasets are highly
imbalanced. These limitations make the training of a generalized model more
difficult.
Figure 3: The depiction of inter- and intra-class variations observed in
COVID-19 datasets (X-ray [10] and Ultrasound [13]).
TABLE I: The distribution of training and testing data of X-ray and Ultrasound
datasets over different classes.
Data | Class | Training Data (80%) | Test Data (20%)
---|---|---|---
X-ray | COVID-19 | 179 | 44
X-ray | Healthy | 1072 | 269
Ultrasound | COVID-19 | 319 | 80
Ultrasound | Healthy | 116 | 30
#### III-B2 Model Architecture and Implementation Details
In our experiments, we used the VGG16 model with one extra convolutional layer and three fully connected layers stacked before its original output layer. Each image is first converted into a gray-scale image, which is then resized to a dimension of $256\times 256$. The resized images are normalized before being fed into the model. The model is trained at each client using the Adam optimizer with a learning rate of $0.0001$. We use several standard data augmentation techniques for training the models.
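The preprocessing step above can be sketched in plain NumPy (the luminance weights are a common convention assumed here, and resizing to $256\times 256$, done with an image library in practice, is omitted):

```python
import numpy as np

def preprocess(image_rgb):
    """Convert an RGB image (H, W, 3) to gray-scale and normalize
    pixel values to [0, 1], as described in the text."""
    # ITU-R BT.601 luminance weights (an assumed convention)
    gray = image_rgb @ np.array([0.299, 0.587, 0.114])
    return gray / 255.0
```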
Furthermore, to address the problem of imbalanced classes, we propose to use
focal loss [49], which is suited for such issues in binary classification
tasks. The focal loss adds a modulating factor $(1-p_{t})^{\gamma}$ to the
standard cross-entropy loss, where $\gamma\geq 0$ is a tunable focusing
parameter. The $\alpha$-balanced variant of focal loss is defined in Eq. 2,
where $\alpha$ balances the importance of positive/negative examples [49].
$FL(p_{t})=-\alpha_{t}(1-p_{t})^{\gamma}\log(p_{t})$ (2)
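As a concrete reading of Eq. 2, the following pure-Python sketch evaluates the $\alpha$-balanced focal loss for a single binary prediction (the paper applies it inside TensorFlow; this standalone version is only illustrative):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Alpha-balanced focal loss of Eq. 2 for one example.
    p: predicted probability of the positive class; y: true label (0 or 1)."""
    # p_t is the model's probability for the true class
    p_t = p if y == 1 else 1.0 - p
    # alpha_t balances positive vs. negative examples
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # the modulating factor (1 - p_t)^gamma down-weights easy examples
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With $\gamma = 0$ and $\alpha = 1$ the expression reduces to the standard cross-entropy loss, as in [49].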
We note that for the implementation of the proposed work we used TensorFlow ML
library, and all experiments are performed in a simulated environment. The
results of the different experiments are described in the next section.
## IV Results and Discussions
In order to show the effectiveness of the proposed multi-modal collaborative
learning framework for COVID-19 diagnosis, we performed several experiments.
We aim to evaluate and compare the performance of CFL against two baselines, namely (i) a specialized FL baseline, and (ii) multi-modal conventional FL (by multi-modal we mean images acquired using different imaging techniques, i.e., modalities such as X-ray and Ultrasound). Since CFL aims to tackle the convergence issues that conventional FL schemes face under diverse data distributions, we believe these two baselines are a more appropriate comparison benchmark than state-of-the-art methods for COVID-19 diagnosis. We note that
due to the limitations of the dataset, we only consider the divergence in
distribution of the data in terms of the nature of the data (i.e., the
distribution of ultrasound and X-ray images is different). The first baseline represents the best-case scenario, in which separate models for each type of imagery, which we term specialized models, are trained in an FL environment. The individual models are trained on X-ray and Ultrasound images with a learning rate of $0.0001$ and a batch size of $32$, resulting in two separate models, one for each modality (i.e., X-ray and Ultrasound). The second baseline
represents the experimental setup of a conventional FL environment, where the
data is distributed among different clients, and a shared ML model is built in
a federated environment. The parameters used in different experiments can be
found in Table II.
TABLE II: Parameters of clustered federated learning (CFL) experiments. Parameter (s) | Value (s)
---|---
Communication Rounds | 30, 50, & 100
Epochs | 5 & 10
Batch Size | 16 & 32
Learning Rate | $1e^{-3}$
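The server-side aggregation behind these federated experiments can be sketched as a dataset-size-weighted average of the clients' layer weights (a FedAvg-style illustration, not the paper's exact code):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average per-layer weight arrays across clients, weighted by
    each client's local dataset size."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]
```

In CFL, this averaging is performed within each cluster of congruent clients rather than over all clients at once.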
TABLE III: Comparison against the two baselines in terms of precision, recall, and F1-score. Promising results are obtained by CFL, outperforming conventional FL, while slightly lower performance is obtained compared to the central baseline, with the added advantage of improved privacy and data security. ∗A separate model is trained in federated learning settings for each modality.

Dataset | Class | Specialized FL∗ (Precision / Recall / F1) | Multi-modal FL (Precision / Recall / F1) | Clustered FL (Precision / Recall / F1)
---|---|---|---|---
X-ray | COVID-19 | 0.73 / 0.82 / 0.77 | 0.30 / 0.68 / 0.41 | 0.71 / 0.82 / 0.76
X-ray | Healthy | 0.97 / 0.95 / 0.96 | 0.93 / 0.74 / 0.82 | 0.97 / 0.94 / 0.96
Ultrasound | COVID-19 | 0.97 / 0.95 / 0.97 | 0.94 / 0.76 / 0.84 | 0.93 / 0.95 / 0.94
Ultrasound | Healthy | 0.88 / 0.93 / 0.90 | 0.58 / 0.87 / 0.69 | 0.86 / 0.80 / 0.83
(a) X-ray
(b) Ultrasound
Figure 4: Comparison of clustered federated learning (CFL) with two baselines
(i.e., the specialized models (trained with conventional FL independently for
each modality) and conventional federated learning (when the model is trained
using multi-modal data)) in terms of average values of precision, recall, and
F1-score on X-ray and Ultrasound imagery.
Table III and Figure 4 provide the per-class and overall (per-dataset) experimental results, respectively, in terms of precision, recall, and F1-score. Since the dataset is imbalanced, we believe accuracy alone is not sufficient to evaluate the proposed method. For the performance evaluation of the three experimental setups (i.e., the two baselines and CFL), we kept the experimental setup consistent: we first train the baseline models with a batch size of 16 (for each modality) and then train the same model in the CFL fashion (i.e., in multi-modal settings) with 5 epochs of local training and a batch size of 16. We then evaluated the collaboratively trained model with the test data from each cluster (modality), i.e., X-ray and Ultrasound.
As can be seen in Figure 4, the multi-modal model trained using CFL achieves results comparable overall to the two specialized models trained in a conventional FL environment on X-ray and Ultrasound imagery separately. On the other hand, CFL performs considerably better than the multi-modal model trained in a conventional federated learning environment. Moreover, it is evident from the figure that the collaboratively trained model is capable of recognizing test images from different modalities without having explicit knowledge about these modalities. Overall, better results are obtained on Ultrasound images (Figure 4(b)) than on X-ray imagery (Figure 4(a)) for all models.
Figure 5: Comparison of clustered federated learning (CFL) with the
specialized models (trained with conventional FL independently for each
modality, i.e., X-ray and Ultrasound) and conventional federated learning
(when the model is trained using multi-modal data) over increasing number of
communication rounds.
In Figure 5, we provide the comparison of the three experimental setups (i.e.,
specialized models trained in conventional FL settings, multi-modal models
trained in a conventional FL and CFL environments) in terms of accuracy and
loss at different communication rounds. The figure depicts that the proposed
CFL model (which is trained using multi-modal data) provides comparable
performance with that of specialized FL models (that are separately trained
for each modality). Moreover, it is also evident from the figure that the model trained using multi-modal data in conventional FL settings overfits after 50 communication rounds. In contrast, the model keeps learning in the CFL setting, though it also tends to show overfitting behavior at later communication rounds, as evident in Figure 5. The vertical red line in Figure 5 marks the inflection point beyond which the parameters of the specialized machine learning models of the two clusters (i.e., X-ray and Ultrasound) start to diverge from each other. This divergence limits the extent to which the multimodal model can generalize to fit the underlying multimodal data (i.e., X-ray and ultrasound). Therefore, Figure 5 provides the insight that the federated learning rounds should be stopped as soon as the inflection point in the value of the loss function is reached. This inflection point identifies the number of rounds beyond which the multimodal machine learning model cannot be improved further.
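The inflection point described above can be monitored numerically, for instance via the pairwise cosine similarity of the flattened client weight updates, in the spirit of CFL's splitting criterion [47] (the threshold and details here are illustrative assumptions):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two flattened weight-update vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def clients_diverging(client_updates, threshold=0.0):
    """Return True once the least-similar pair of client updates drops
    below the threshold, signalling that the clusters have started to
    diverge and further joint rounds may not help."""
    n = len(client_updates)
    sims = [cosine_similarity(client_updates[i], client_updates[j])
            for i in range(n) for j in range(i + 1, n)]
    return min(sims) < threshold
```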
### IV-A Lessons Learned
Some key lessons learned from the literature and experiments conducted in this
work are:
* •
CFL ensures the privacy of the user’s local data, as it does not need to be
shared with the server for central training.
* •
The communication payload of model weights is far smaller than the payload of sharing the actual data; therefore, it saves bandwidth as well as time.
* •
It enables the collaborative learning of multi-modal features by shared model
$M_{s}$ without sharing any explicit information about the modality of the
local data and the data itself.
* •
More importantly, compared to conventional FL, CFL ensures better performance in the presence of divergence in the data distribution. The divergence in the distribution could be in terms of the distribution of negative and positive samples per class as well as the nature of the data samples, as detailed earlier.
* •
This particular use-case demonstrates the potential of the method in medical
applications where remote smaller healthcare units can benefit from this
collaborative learning method.
However, despite these benefits, there are some challenges and limitations as well, e.g., efficiency and security issues, and the difficulty of optimizing CFL parameters. Moreover, there is a trade-off in model performance between a model trained on centralized data and a model trained on distributed data using federated learning. In addition, for multi-modal, distributively dispersed data, personalized models tailored to these modalities are required for local training, which would enhance the efficiency of both the shared model and the client-side models. For instance, our experiments showed that the performance of a model trained in CFL settings starts degrading after a particular point (i.e., communication round), highlighting the need for early stopping and the development of optimal stopping criteria other than the maximum allowed number of communication rounds.
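The early-stopping criterion suggested above can be sketched as a simple patience rule on the per-round validation loss (a hypothetical helper, not the paper's implementation):

```python
def stop_round(losses, patience=3):
    """Return the communication round at which federated training
    should stop: the round where the validation loss has failed to
    improve for `patience` consecutive rounds, or the last round."""
    best, wait = float("inf"), 0
    for r, loss in enumerate(losses):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                return r  # inflection point reached
    return len(losses) - 1
```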
## V Open Research Issues
### V-A Developing Personalized Approaches
The edge computing network is potentially more heterogeneous than a centralized network, and clients in an edge computing network vary in their data acquisition, network, computational, and storage resources. Moreover, as discussed above, clients can differ significantly due to statistical heterogeneity, which is usually a great challenge in realistic settings. For example, as discussed in the previous section, developing a multi-modal collaborative learning framework for COVID-19 diagnosis faces efficiency challenges due to the aforementioned heterogeneity issues. Therefore, to handle such heterogeneities, the development of personalized and client-specific ML/DL approaches is required.
### V-B Adversarially Robust ML
The edge computing network is more prone to security threats, as it is an ideal environment for adversaries that aim to obtain desired outcomes or incentives by breaching the network security and the privacy of participating agents. This phenomenon becomes even worse with the integration of ML/DL models that are vulnerable to adversarial attacks, which have already been shown to be effective against healthcare applications [50]. For
instance, an adversarial attack on CT scanners in an actual hospital
environment by manipulating the hospital’s network has already been realized
in the literature [51] and threats of adversarial ML for ML and IoT empowered
COVID-19 detection systems are highlighted in [52]. To restrain the
adversarial attacks, different defensive techniques have been proposed in the
literature. However, the adversarially robust methods developed so far are
attack specific, i.e., they only work for particular attacks for which they
were developed and fail to withstand unforeseen attacks. Therefore, the
development of adversarially robust ML/DL models is still an open research
problem that demands a proportionate amount of interest from the community
with the advancement of ML/DL techniques. Moreover, for the successful
deployment of ML/DL models on the edge, in particular, for developing robust
healthcare applications, the development of adversarially robust models are of
utmost importance.
### V-C Asynchronous Distributed ML
In distributed computing, two approaches are widely used for communication,
i.e., synchronous and asynchronous. These approaches are ideal for scenarios
where data is instantly available for instance in the central picture
archiving and communication system (PACS) of a hospital. However, in realistic
settings, the data collection or acquisition might get delayed due to any
reason, such as due to some network issue or unavailability of I/O device,
etc. Moreover, it is possible that a client (i.e., a small healthcare entity) in an ML-based collaborative computing network is not active at the current iteration/communication round due to some inherent issue; this will result in a delay in the federated parameter update process and will eventually affect the system's overall performance. Therefore, it is worth
studying and developing asynchronous approaches for facilitating shared model
training for healthcare applications using distributed data.
## VI Conclusions
This article provides insights on how edge computing and machine learning
advances can be used to provide a solution for COVID-19 diagnosis in an
efficient privacy-aware manner, thereby allowing remote healthcare units to benefit from the collaborative learning paradigm without sharing local data. In particular, we propose using a clustered federated learning (CFL)-based collaborative learning framework to intelligently process visual data at the edge by training a multi-modal ML model capable of diagnosing COVID-19 in both
X-ray and Ultrasound imagery. Compared to the conventional FL, CFL is found to
better cope with the divergence in distribution of data from different sources
(i.e., X-ray and Ultrasound imagery). In the current implementation, we
consider the divergence in distribution due to the sources and nature of the
data due to the limitations of the datasets. In the future, we will explore
how CFL performs in the presence of variances in the distribution of the data
in terms of the number of samples per client.
## References
* [1] Kashif Ahmad, Majdi Maabreh, Mohamed Ghaly, Khalil Khan, Junaid Qadir, and Ala Al-Fuqaha. Developing future human-centered smart cities: Critical analysis of smart city security, interpretability, and ethical challenges. arXiv preprint arXiv:2012.09110, 2020.
* [2] Wei Yang Bryan Lim, Nguyen Cong Luong, Dinh Thai Hoang, Yutao Jiao, Ying-Chang Liang, Qiang Yang, Dusit Niyato, and Chunyan Miao. Federated learning in mobile edge networks: A comprehensive survey. IEEE Communications Surveys & Tutorials, 2020.
* [3] S. Latif, M. Usman, S. Manzoor, W. Iqbal, J. Qadir, G. Tyson, I. Castro, A. Razi, M. N. K. Boulos, A. Weller, and J. Crowcroft. Leveraging data science to combat covid-19: A comprehensive review. IEEE Transactions on Artificial Intelligence, 1(1):85–103, 2020.
* [4] Yi-Wei Tang, Jonathan E Schmitz, David H Persing, and Charles W Stratton. Laboratory diagnosis of covid-19: current issues and challenges. Journal of clinical microbiology, 58(6), 2020.
* [5] Mohamed A Hendaus. Remdesivir in the treatment of coronavirus disease 2019 (covid-19): A simplified summary. Journal of Biomolecular Structure and Dynamics, 2020.
* [6] Peter J Hotez, David B Corry, and Maria Elena Bottazzi. COVID-19 vaccine design: the Janus face of immune enhancement. Nature Reviews Immunology, 20(6):347–348, 2020.
* [7] Tao Ai, Zhenlu Yang, Hongyan Hou, Chenao Zhan, Chong Chen, Wenzhi Lv, Qian Tao, Ziyong Sun, and Liming Xia. Correlation of chest ct and rt-pcr testing in coronavirus disease 2019 (covid-19) in china: a report of 1014 cases. Radiology, page 200642, 2020.
* [8] Roman Kalkreuth and Paul Kaufmann. COVID-19: a survey on public medical imaging data resources. arXiv preprint arXiv:2004.04569, 2020.
* [9] Halgurd S Maghdid, Aras T Asaad, Kayhan Zrar Ghafoor, Ali Safaa Sadiq, and Muhammad Khurram Khan. Diagnosing COVID-19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms. arXiv preprint arXiv:2004.00038, 2020.
* [10] Joseph Paul Cohen, Paul Morrison, Lan Dao, Karsten Roth, Tim Q Duong, and Marzyeh Ghassemi. COVID-19 Image Data Collection: Prospective Predictions Are the Future. arXiv 2006.11988, 2020.
* [11] COVID-19 CT segmentation dataset. http://medicalsegmentation.com/covid19/. Accessed: 2020-08-127.
* [12] Jinyu Zhao, Yichen Zhang, Xuehai He, and Pengtao Xie. Covid-ct-dataset: a ct scan dataset about covid-19. arXiv preprint arXiv:2003.13865, 2020.
* [13] Jannis Born, Gabriel Brändle, Manuel Cossio, Marion Disdier, Julie Goulet, Jérémie Roulin, and Nina Wiedemann. POCOVID-Net: automatic detection of COVID-19 from a new lung ultrasound imaging dataset (POCUS). arXiv preprint arXiv:2004.12084, 2020.
* [14] Shuai Wang, Bo Kang, Jinlu Ma, Xianjun Zeng, Mingming Xiao, Jia Guo, Mengjiao Cai, Jingyi Yang, Yaodong Li, Xiangfei Meng, et al. A deep learning algorithm using ct images to screen for corona virus disease (covid-19). MedRxiv, 2020.
* [15] Charmaine Butt, Jagpal Gill, David Chun, and Benson A Babu. Deep learning system to screen coronavirus disease 2019 pneumonia. Applied Intelligence, page 1, 2020.
* [16] Lin Li, Lixin Qin, Zeguo Xu, Youbing Yin, Xin Wang, Bin Kong, Junjie Bai, Yi Lu, Zhenghan Fang, Qi Song, et al. Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology, 2020.
* [17] Parnian Afshar, Shahin Heidarian, Farnoosh Naderkhani, Anastasia Oikonomou, Konstantinos N Plataniotis, and Arash Mohammadi. COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images. arXiv preprint arXiv:2004.02696, 2020.
* [18] Linda Wang and Alexander Wong. Covid-net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-Ray images. arXiv preprint arXiv:2003.09871, 2020.
* [19] Prabira Kumar Sethy and Santi Kumari Behera. Detection of coronavirus disease (covid-19) based on deep features. Preprints, 2020030300:2020, 2020.
* [20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [21] Ali Narin, Ceren Kaya, and Ziynet Pamuk. Automatic detection of coronavirus disease (covid-19) using x-ray images and deep convolutional neural networks. arXiv preprint arXiv:2003.10849, 2020.
* [22] Md Zabirul Islam, Md Milon Islam, and Amanullah Asraf. A combined deep cnn-lstm network for the detection of novel coronavirus (covid-19) using x-ray images. Informatics in Medicine Unlocked, page 100412, 2020.
* [23] Sara Hosseinzadeh Kassani, Peyman Hosseinzadeh Kassasni, Michal J Wesolowski, Kevin A Schneider, and Ralph Deters. Automatic detection of coronavirus disease (covid-19) in x-ray and ct images: A machine learning-based approach. arXiv preprint arXiv:2004.10641, 2020.
* [24] Yongchao Xu, Liya Ma, Fan Yang, Yanyan Chen, Ke Ma, Jiehua Yang, Xian Yang, Yaobing Chen, Chang Shu, Ziwei Fan, et al. A collaborative online AI engine for CT-based COVID-19 diagnosis. medRxiv, 2020.
* [25] Rajesh Kumar, Abdullah Aman Khan, Sinmin Zhang, WenYong Wang, Yousif Abuidris, Waqas Amin, and Jay Kumar. Blockchain-federated-learning and deep learning models for COVID-19 detection using CT imaging. arXiv preprint arXiv:2007.06537, 2020.
* [26] Akhil Vaid, Suraj K Jaladanki, Jie Xu, Shelly Teng, Arvind Kumar, Samuel Lee, Sulaiman Somani, Ishan Paranjpe, Jessica K De Freitas, Tingyi Wanyan, et al. Federated learning of electronic health records improves mortality prediction in patients hospitalized with covid-19. medRxiv, 2020.
* [27] Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, 37(3):50–60, 2020.
* [28] Jeffrey Li, Mikhail Khodak, Sebastian Caldas, and Ameet Talwalkar. Differentially private meta-learning. arXiv preprint arXiv:1909.05830, 2019.
* [29] Luca Melis, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov. Exploiting unintended feature leakage in collaborative learning. In 2019 IEEE Symposium on Security and Privacy (SP), pages 691–706. IEEE, 2019.
* [30] Adnan Qayyum, Junaid Qadir, Muhammad Bilal, and Ala Al-Fuqaha. Secure and robust machine learning for healthcare: A survey. IEEE Reviews in Biomedical Engineering, 2020.
* [31] Adnan Qayyum, Muhammad Usama, Junaid Qadir, and Ala Al-Fuqaha. Securing connected & autonomous vehicles: Challenges posed by adversarial machine learning and the way forward. IEEE Communications Surveys & Tutorials, 22(2):998–1026, 2020.
* [32] Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. Local model poisoning attacks to byzantine-robust federated learning. arXiv preprint arXiv:1911.11815, 2019.
* [33] Adnan Qayyum, Aneeqa Ijaz, Muhammad Usama, Waleed Iqbal, Junaid Qadir, Yehia Elkhatib, and Ala Al-Fuqaha. Securing machine learning in the cloud: A systematic review of cloud machine learning security. Frontiers in Big Data, 3:43, 2020.
* [34] Guangxu Zhu, Dongzhu Liu, Yuqing Du, Changsheng You, Jun Zhang, and Kaibin Huang. Toward an intelligent edge: wireless communication meets machine learning. IEEE Communications Magazine, 58(1):19–25, 2020.
* [35] Graham Gobieski, Brandon Lucia, and Nathan Beckmann. Intelligence beyond the edge: Inference on intermittent embedded systems. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, pages 199–213. ACM, 2019.
* [36] Chirag Nagpal. Deep multimodal fusion of health records and notes for multitask clinical event prediction. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA., 2017.
* [37] Zhenyu Zhou, Haijun Liao, Bo Gu, Kazi Mohammed Saidul Huq, Shahid Mumtaz, and Jonathan Rodriguez. Robust mobile crowd sensing: When deep learning meets edge computing. IEEE Network, 32(4):54–60, 2018.
* [38] Adnan Qayyum, Waqas Sultani, Fahad Shamshad, Junaid Qadir, and Rashid Tufail. Single-shot retinal image enhancement using deep image priors. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 636–646. Springer, 2020.
* [39] David A Booz, Jonathan D Dye, Michael J Dye, and Egan F Ford. Decentralized autonomous edge compute coordinated by smart contract on a blockchain, September 28 2017. US Patent App. 15/082,559.
* [40] Ioannis Psaras. Decentralised edge-computing and IoT through distributed trust. In Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services, pages 505–507. ACM, 2018.
* [41] Yang Zhao, Jun Zhao, Linshan Jiang, Rui Tan, Dusit Niyato, Zengxiang Li, Lingjuan Lyu, and Yingbo Liu. Privacy-preserving blockchain-based federated learning for iot devices. IEEE Internet of Things Journal, 2020.
* [42] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
* [43] Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and¡ 0.5 MB model size. arXiv preprint arXiv:1602.07360, 2016.
* [44] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pages 1135–1143, 2015.
* [45] Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in neural information processing systems, pages 2074–2082, 2016.
* [46] Junho Yim, Donggyu Joo, Jihoon Bae, and Junmo Kim. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4133–4141, 2017.
* [47] Felix Sattler, Klaus-Robert Müller, and Wojciech Samek. Clustered federated learning: Model-agnostic distributed multi-task optimization under privacy constraints. arXiv preprint arXiv:1910.01991, 2019.
* [48] Michael J Horry, Subrata Chakraborty, Manoranjan Paul, Anwaar Ulhaq, Biswajeet Pradhan, Manas Saha, and Nagesh Shukla. COVID-19 detection through transfer learning using multimodal imaging data. IEEE Access, 8:149808–149824, 2020.
* [49] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988, 2017.
* [50] Samuel G Finlayson, John D Bowers, Joichi Ito, Jonathan L Zittrain, Andrew L Beam, and Isaac S Kohane. Adversarial attacks on medical machine learning. Science, 363(6433):1287–1289, 2019.
* [51] Yisroel Mirsky, Tom Mahler, Ilan Shelef, and Yuval Elovici. Ct-gan: Malicious tampering of 3d medical imagery using deep learning. In 28th USENIX Security Symposium USENIX Security 19), pages 461–478, 2019.
* [52] Abdur Rahman, M Shamim Hossain, Nabil A Alrajeh, and Fawaz Alsolami. Adversarial examples–security threats to COVID-19 deep learning systems in medical IoT devices. IEEE Internet of Things Journal, 2020.
Jiaming Qi
Harbin Institute of Technology, Harbin, 150001, China.
Tel.: +86-18646086707
Email<EMAIL_ADDRESS>
# Towards Latent Space Based Manipulation of Elastic Rods using Autoencoder Models and Robust Centerline Extractions

J. Qi, G. Ma and Y. Lyu are with the Harbin Institute of Technology, Harbin, 150001, China. P. Zhou and D. Navarro-Alarcon are with the Hong Kong Polytechnic University, Hung Hom, KLN, Hong Kong. Corresponding author<EMAIL_ADDRESS>Haibo Zhang is with the Beijing Institute of Control Engineering, Beijing, 100190, China, and the National Key Laboratory of Science and Technology on Space Intelligent Control, Beijing, 100190, China.
Jiaming Qi Guangfu Ma
Peng Zhou Haibo Zhang Yueyong Lyu
David Navarro-Alarcon∗
(Received: date / Accepted: date)
###### Abstract
The automatic shape control of deformable objects is a challenging and actively studied manipulation problem due to their high-dimensional geometric features and complex physical properties. In this study, a new methodology to manipulate elastic rods automatically into 2D desired shapes is presented. An
efficient vision-based controller that uses a deep autoencoder network is
designed to compute a compact representation of the object’s infinite-
dimensional shape. An online algorithm that approximates the sensorimotor
mapping between the robot’s configuration and the object’s shape features is
used to deal with the latter’s (typically unknown) mechanical properties. The
proposed approach computes the rod’s centerline from raw visual data in real-
time by introducing an adaptive algorithm on the basis of a self-organizing
network. Its effectiveness is thoroughly validated with simulations and
experiments.
###### Keywords:
Robotics Visual Servoing Deformable Objects Autoencoder Self-Organizing
Network Model Predictive Control
## 1 Introduction
Controlling the shape of soft objects automatically with robot manipulators is
highly valuable in many applications, such as food processing
tokumoto2002deformation , robotic surgery abolmaesumi2002image , cable
assembly tang2018framework , and household works sun2019general . Although
great progress has been achieved in recent years, shape control remains an
open problem in robotics navarro2014visual . One of the most crucial issues hampering the implementation of these types of controllers is the difficulty of obtaining a meaningful and efficient feedback representation of the object's configuration in real-time. Given the intrinsically high-dimensional nature of deformable objects, standard vision-based control algorithms (e.g., based on simple point features) cannot be used, as they cannot properly capture the objects' state. In this work, a solution to this problem is provided.
The configuration of rigid objects can be fully described by six degrees of
freedom. However, representing the configuration of soft objects is difficult
as they have infinite-dimensional geometric information. Therefore, a simple
and effective feature extractor that can characterize these objects in an
efficient (i.e., compact) manner should be designed cretu2011soft . At
present, traditional methods are roughly divided into two categories: local
and global descriptors. Local descriptors may use centroids, distances,
angles, curvatures navarro2016automatic to describe geometric
characteristics. However, these features must be “hard-coded.” In turn, they
can only provide a fixed type of representation. Global descriptors produce a
generic representation of the object’s shape. An example method under this
category is the Point Feature Histogram (PFH) reported in rusu2008persistent .
PFH forms a multi-dimensional histogram to represent the overall shape of a
soft object. Subsequent efforts developed PFH into the Fast Point Feature
Histograms (FPFH), which reduces computation time 5152473 ; hu20193 . A method
based on linearly parameterized (truncated) Fourier series was also proposed
to represent the object’s contour navarro2017fourier . This parameterization
idea was generalized in qi2020adaptive , where more shape representations were
analyzed and implemented.
Learning-based solutions have received considerable attention due to their
potential to learn (in latent space) shape representations of virtually any
type of object from data observations only yeo2005colour . Force and position
measurements of a three-finger gripper manipulating a soft object were used in
cretu2011soft as input to a network, which produced and predicted the
object’s contour (even for unknown objects). A coarse-to-fine shape
representation was also proposed on the basis of spatial transformer networks,
which allowed it to obtain good generalization properties without expensive
ground truth observations yan2020self . Growing neural gas was used in
valencia2019toward to represent deformable shapes. In 2019Convolutional , a
feature extractor based on the convolutional autoencoder was developed. This
method was used to obtain a low-dimensional latent space from tactile sensing
data.
Traditional methods for manipulating soft objects henrich2000robot typically
need to identify the complex physical properties of the object. This
requirement hinders their application in practice. Algorithms based on latent
spaces present a feasible solution, as they can effectively extract low-
dimensional features from a high-dimensional shape space. For example,
convolutional neural networks were used to build the inverse kinematics of a
rope nair2017combining , learn the physical model of a soft object in the
latent space without any prior knowledge of the object ebert2018visual , and
estimate the rope’s state and combine it with model predictive control
yan2020self . However, none of these works establishes an explicit shape servo-loop with a latent space representation. This idea has not been sufficiently explored in the soft manipulation literature.
In the current work, a new solution to the manipulation problem of the elastic
rod is proposed. The novel contributions of this study are listed as follows.
1. 1.
A centerline extraction algorithm based on self-organizing maps (SOM) is
presented for slender elastic rods.
2. 2.
A shape feature extraction algorithm is designed using the deep autoencoder
network (DAE). The proposed method is used to represent the elastic rod with
finite-dimensional feature vectors.
3. 3.
Detailed simulations and experiments are conducted to validate the
effectiveness of the proposed method.
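As a rough illustration of contribution 1, a 1-D self-organizing map can fit an ordered chain of nodes to the rod's pixel samples, yielding a centerline estimate (this minimal sketch uses illustrative parameters, not the paper's algorithm or tuning):

```python
import numpy as np

def som_centerline(pixels, n_nodes=20, iters=200, lr0=0.5, sigma0=3.0, seed=0):
    """Fit a 1-D chain of n_nodes to 2-D pixel samples with a
    self-organizing map; the chain approximates the centerline."""
    rng = np.random.default_rng(seed)
    pixels = np.asarray(pixels, dtype=float)
    # initialise the chain across the bounding box of the samples
    nodes = np.linspace(pixels.min(axis=0), pixels.max(axis=0), n_nodes)
    for t in range(iters):
        x = pixels[rng.integers(len(pixels))]
        lr = lr0 * (1.0 - t / iters)                   # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / iters), 0.5)   # decaying neighbourhood
        bmu = int(np.argmin(np.linalg.norm(nodes - x, axis=1)))
        # neighbourhood update pulls chain nodes near the winner toward x
        h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2.0 * sigma ** 2))
        nodes += lr * h[:, None] * (x - nodes)
    return nodes
```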
To the best of the authors’ knowledge, this work is the first attempt wherein
a shape servo-controller uses DAE to establish an _explicit_ shape servo-loop.
The remainder of this study is organized as follows. The preliminaries are
presented in Section 2. The overall deformation control implementation process
is discussed in Section 3. Various visually servoed deformation tasks of
elastic rods are shown in Sections 4 and 5. Conclusions and future work are
provided in Section 6.
## 2 PRELIMINARIES
_Notation._ Column vectors are denoted with bold small letters $\mathbf{v}$
and matrices with bold capital letters $\mathbf{M}$. Time evolving variables
are represented as $\mathbf{m}_{k}$, where the subscript $k$ denotes the
discrete time instant. $\mathbf{E}_{n}$ is an $n\times n$ identity matrix.
The deformation control scheme of a robot manipulating an elastic rod based on visual servoing is investigated. The following assumptions are made:
* A fixed camera in an eye-to-hand configuration (depicted in Fig. 1) measures the centerline of the elastic rod. The coordinates obtained are denoted by:
$\begin{array}[]{*{20}{c}}{\bar{\mathbf{c}}={{\left[{\mathbf{c}_{1}^{T},\dots,\mathbf{c}_{N}^{T}}\right]}^{T}}\in{\mathbb{R}^{2N}}}&{{\mathbf{c}_{i}}={{\left[{{u_{i}},{v_{i}}}\right]}^{T}}\in{\mathbb{R}^{2}}}\end{array}$
(1)
where $N$ is the number of points that make up the centerline, and $u_{i}$ and $v_{i}$ are the coordinates of the $i$th $(i=1,\cdots,N)$ point in the image frame.
* The robot has already firmly grasped the elastic rod before the experiment; object grasping is outside the scope of this article. Measurement loss during manipulation is likewise not considered.
* The robot supports a velocity control mode that accurately executes the given kinematic commands $\Delta\mathbf{r}_{k}\in\mathbb{R}^{q}$ siciliano1990kinematic and realises the incremental position updates $\mathbf{r}_{k}=\mathbf{r}_{k-1}+\Delta\mathbf{r}_{k}$.
* The robot manipulates the elastic rod at low speed, so its shape is uniquely determined by the elastic potential energy.
_Problem Statement._ Without any prior knowledge of the physical characteristics of the elastic rod, design a model-free, vision-based controller that commands the robot to deform the rod into a desired shape in the 2D image space.
Figure 1: Schematic diagram of the elastic rod shape deformation. The camera measures the shape feature $\mathbf{s}$ in real time, and the designed controller commands the robot to deform the real-time shape $\bar{\mathbf{c}}$ of the elastic rod into the target shape $\bar{\mathbf{c}}^{*}$.
## 3 Methods
Figure 2: Schematic diagram of SOM-based centerline extraction. The white area on the left represents the area of the elastic rod (clustering area), and the red points on the right represent the obtained centerline points (clustering points); in this figure, $N=30$.
### 3.1 Robust SOM-based Centerline Extraction Algorithm
Slender elastic rods whose lengths are much larger than their diameters are
used as the research object. Therefore, the centerline describes the shape of
the elastic rods. Given that the centerline generally comprises center-points
for elastic rods, it should be fixed-length, ordered, and equidistant for
subsequent feature extraction and controller design. Although some centerline extraction algorithms exist in the literature, e.g., _OpenCV/thinning_ , they cannot meet the above requirements and require data pre-processing, which deteriorates the system's real-time performance.
In this article, SOM is utilized to achieve real-time 2D centerline extraction
of elastic rods without artificial marker points. SOM is a neural network
trained in an unsupervised learning manner kohonen1982self , originally designed for dimensionality reduction of high-dimensional data. Here,
it is used as a clustering algorithm. It generates a fixed number of
clustering points from the image data of the elastic rods. Finally, the
centerline is composed of the clustering points. The input of SOM is the white
area where the elastic rod is located in the binary image, as shown in Fig. 2.
The points in the white area are defined by
$\bar{\mathbf{m}}={{\left[{\mathbf{m}_{1}^{T},\dots,\mathbf{m}_{M}^{T}}\right]}^{T}}\in\mathbb{R}^{2M}$,
where $\mathbf{m}_{i}\in\mathbb{R}^{2}$ is the coordinate vector of each point in the image frame and $M\gg N$. With the clustering nature of SOM, a fixed-length equidistant centerline can be obtained, namely, $SOM:\mathbb{R}^{2M}\to\mathbb{R}^{2N}$.
###### Remark 1
The proposed SOM-based centerline extraction is only used in the experiments, not in the simulations. The centerline extracted by SOM is not guaranteed to be ordered, so the sorting algorithm qi2020adaptive is utilized to reorder it. This process takes little time because $N$ is small.
Figure 3: Structure of DAE with the centerline $\bar{\mathbf{c}}$ as the input; $\mathbf{s}$ is the shape feature used for deformation Jacobian matrix approximation and controller design.
### 3.2 Feature Extraction
A controller that can deform the real-time shape $\bar{\mathbf{c}}$ of elastic
rods into the target shape $\bar{\mathbf{c}}^{*}$ can be designed using the
centerline extracted by SOM. However, the centerline cannot be fed directly into the system: its high dimensionality would slow the system down and may even cause adverse effects, e.g., loss of control. Thus, a shape feature extraction algorithm for elastic rods is needed to reduce the feature dimension while still representing the centerline effectively.
In this article, DAE is used to extract shape features
$\mathbf{s}\in\mathbb{R}^{p}$ from the high-dimensional centerline
$\bar{\mathbf{c}}\in\mathbb{R}^{2N}$. DAE is an artificial neural network
trained in an unsupervised-learning manner, which can automatically learn
latent features from unlabeled data zhou2020lasesom . DAE comprises three
parts, an Encoder that projects the input into the hidden layer, a hidden
layer describing the latent feature $\mathbf{s}$, and a Decoder that
reconstructs the latent feature into the original input. Formally, the
centerline $\bar{\mathbf{c}}\in\mathbb{R}^{2N}$ is fed into DAE and mapped to
the hidden layer through the nonlinear transformation
$\mathbf{s}=\mathbf{f}_{\theta_{1}}\left(\bar{\mathbf{c}}\right)=sig(\mathbf{W}_{1}\bar{\mathbf{c}}+\mathbf{b}_{1})$,
where the parameter set
$\theta_{1}=\left\\{\mathbf{W}_{1},\mathbf{b}_{1}\right\\}$, $\mathbf{W}_{1}$
is a $p\times 2N$ weight matrix, $\mathbf{b}_{1}$ is a bias vector, and $sig$ is the sigmoid activation function
$sig\left(z\right)=\frac{1}{{1+{e^{-z}}}}$. The latent feature
$\mathbf{s}$ is input into the Decoder to generate a reconstruction
$\hat{\bar{\mathbf{c}}}$ with $2N$ dimensions through the deterministic
equation
$\hat{\bar{\mathbf{c}}}=\mathbf{g}_{\theta_{2}}\left(\mathbf{s}\right)=sig(\mathbf{W}_{2}\mathbf{s}+\mathbf{b}_{2})$,
with $\theta_{2}=\left\\{\mathbf{W}_{2},\mathbf{b}_{2}\right\\}$. The
parameters $\theta_{1}$ and $\theta_{2}$ of the DAE are chosen to
minimize the average reconstruction error over the training centerlines $\bar{\mathbf{c}}_{i}$:
$\left\\{{\theta_{1}^{*},\theta_{2}^{*}}\right\\}=\mathop{\arg\min}\limits_{{\theta_{1}},{\theta_{2}}}\sum\limits_{i}{L\left({{{\bar{\mathbf{c}}}_{i}},{\mathbf{g}_{{\theta_{2}}}}\left({{\mathbf{f}_{{\theta_{1}}}}\left({{{\bar{\mathbf{c}}}_{i}}}\right)}\right)}\right)}$
(2)
where $\theta_{1}^{*}$ and $\theta_{2}^{*}$ are the optimal parameters, and $L$
is usually a mean square error. Once the autoencoder is trained, the centerline $\bar{\mathbf{c}}$ is input into the network, and the low-dimensional shape feature $\mathbf{s}\in\mathbb{R}^{p}$ is obtained through the nonlinear transformation $\mathbf{f}_{\theta_{1}}$. The workflow of DAE is shown in Fig. 3.
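Written out, the encoder and decoder above are just two affine maps with sigmoid activations. The NumPy forward pass below is a hedged sketch with randomly initialised (untrained) weights and hypothetical toy dimensions; the paper's actual DAE is an MLP trained in PyTorch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(c_bar, W1, b1):
    """s = sig(W1 c + b1): centerline (2N,) -> shape feature (p,)."""
    return sigmoid(W1 @ c_bar + b1)

def decode(s, W2, b2):
    """c_hat = sig(W2 s + b2): shape feature (p,) -> reconstruction (2N,)."""
    return sigmoid(W2 @ s + b2)

# Toy dimensions (hypothetical): N = 50 centerline points, p = 4 features.
rng = np.random.default_rng(0)
N, p = 50, 4
W1, b1 = rng.normal(size=(p, 2 * N)) * 0.1, np.zeros(p)
W2, b2 = rng.normal(size=(2 * N, p)) * 0.1, np.zeros(2 * N)
c_bar = rng.uniform(0.0, 1.0, size=2 * N)   # image coordinates scaled to [0, 1]
s = encode(c_bar, W1, b1)
c_hat = decode(s, W2, b2)
loss = np.mean((c_bar - c_hat) ** 2)        # reconstruction objective of Eq. (2)
```

In practice the weights come from minimising the reconstruction loss of Eq. (2) over the training centerlines, after which only the encoder is kept.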
For DAE, the reconstruction output $\hat{\bar{\mathbf{c}}}$ is not the focus,
and only the Encoder is utilized to provide the shape feature
$\mathbf{s}\in\mathbb{R}^{p}$ once the DAE is trained. At present, DAE has
various forms. In this paper, multilayer perceptron (MLP) is used, given its
ability to handle 2D data efficiently. The size of shape feature dimension $p$
can be selected as a trade-off: a small $p$ (e.g., $p<q$) improves the system's controllability, whereas a large $p$ enhances the representation accuracy of the centerline. The effect of various $p$ on the shape representation capability is presented in the simulations and experiments.
### 3.3 Approximation of the Local Deformation Model
Given that regular (i.e., mechanically well-behaved) elastic objects are considered, the centerline $\bar{\mathbf{c}}$ depends entirely on the robot command $\mathbf{r}\in\mathbb{R}^{q}$, which in this study can be defined as the joint angles or the end-effector's pose. The relationship between $\bar{\mathbf{c}}$ and $\mathbf{r}$ can be represented by the unknown function (3):
$\bar{\mathbf{c}}=\mathbf{h}\left(\mathbf{r}\right)$ (3)
Following (3), the overall kinematic model from the robot command $\mathbf{r}$ to the shape feature $\mathbf{s}$ can be constructed as follows:
$\mathbf{s}=\mathbf{f}_{\theta_{1}}\left(\mathbf{h}\left(\mathbf{r}\right)\right)$
(4)
Differentiating (4) with respect to the time variable $t$ yields:
$\dot{\mathbf{s}}=\mathbf{J}\left(t\right)\dot{\mathbf{r}}$ (5)
where
$\mathbf{J}\left(t\right)=\partial\mathbf{s}/\partial\mathbf{r}\in\mathbb{R}^{p\times
q}$ represents a Jacobian-like matrix that describes the mapping between the
feature change speed $\dot{\mathbf{s}}$ and the velocity command
$\dot{\mathbf{r}}$. The properties of elastic rods are unknown, so the
analytical form of $\mathbf{J}(t)$ cannot be obtained. Discretizing (5) yields
the first-order format as follows:
$\mathbf{s}_{k}=\mathbf{s}_{k-1}+\mathbf{J}_{k}\Delta\mathbf{r}_{k}$ (6)
where $\Delta\mathbf{r}_{k}=\mathbf{r}_{k}-\mathbf{r}_{k-1}\in\mathbf{R}^{q}$.
Since the application of DAE for feature extraction is the focus of this study, simple Broyden algorithms are used to compute local approximations of $\mathbf{J}_{k}$ in real time. Define the following difference signals:
$\begin{array}[]{*{20}{c}}{{\mathbf{y}_{k}}={\mathbf{s}_{k}}-{\mathbf{s}_{k-1}}}&{{\mathbf{u}_{k}}=\Delta\mathbf{r}_{k}}={\mathbf{r}_{k}}-{\mathbf{r}_{k-1}}\end{array}$
(7)
Broyden algorithms are as follows:
1. R1 update formula broyden1965class :
$\hat{\mathbf{J}}_{k}=\hat{\mathbf{J}}_{k-1}+\frac{{\left({{\mathbf{y}_{k}}-\hat{\mathbf{J}}_{k-1}{\mathbf{u}_{k}}}\right)\mathbf{u}_{k}^{T}}}{{\mathbf{u}_{k}^{T}{\mathbf{u}_{k}}}}$
(8)
This form has a simple structure and fast calculation speed.
2. SR1 update formula broyden1965class :
$\hat{\mathbf{J}}_{k}=\hat{\mathbf{J}}_{k-1}+\frac{{\left({{\mathbf{y}_{k}}-\hat{\mathbf{J}}_{k-1}{\mathbf{u}_{k}}}\right){{\left({{\mathbf{y}_{k}}-\hat{\mathbf{J}}_{k-1}{\mathbf{u}_{k}}}\right)}^{T}}}}{{{\mathbf{u}_{k}}{{\left({{\mathbf{y}_{k}}-\hat{\mathbf{J}}_{k-1}{\mathbf{u}_{k}}}\right)}^{T}}}}$
(9)
The structure of SR1 is similar to R1, but the calculation accuracy is higher.
3. DFP update formula nocedal1980updating :
$\displaystyle\hat{\mathbf{J}}_{k}$
$\displaystyle=\hat{\mathbf{J}}_{k-1}+\frac{{\left({{\mathbf{y}_{k}}-\hat{\mathbf{J}}_{k-1}{\mathbf{u}_{k}}}\right)\mathbf{y}_{k}^{T}+{\mathbf{y}_{k}}{{\left({{\mathbf{y}_{k}}-\hat{\mathbf{J}}_{k-1}{\mathbf{u}_{k}}}\right)}^{T}}}}{{{\mathbf{u}_{k}}\mathbf{y}_{k}^{T}}}$
(10)
$\displaystyle-\frac{{\mathbf{y}_{k}^{T}{\mathbf{y}_{k}}}}{{\left\|{{\mathbf{u}_{k}}\mathbf{y}_{k}^{T}}\right\|}}\left({{\mathbf{y}_{k}}-\hat{\mathbf{J}}_{k-1}{\mathbf{u}_{k}}}\right)\mathbf{u}_{k}^{T}$
DFP is a rank two quasi-Newton method, which is efficient for solving
nonlinear optimization.
4. BFGS update formula dennis1974characterization :
$\hat{\mathbf{J}}_{k}=\hat{\mathbf{J}}_{k-1}-\frac{{\hat{\mathbf{J}}_{k-1}{\mathbf{u}_{k}}\mathbf{u}_{k}^{T}\hat{\mathbf{J}}_{k-1}^{T}}}{{{\mathbf{u}_{k}}\mathbf{u}_{k}^{T}\hat{\mathbf{J}}_{k-1}^{T}}}+\frac{{{\mathbf{y}_{k}}\mathbf{y}_{k}^{T}}}{{{\mathbf{u}_{k}}\mathbf{y}_{k}^{T}}}$
(11)
It is recognized as having the best numerical stability.
When a new data pair $\left(\mathbf{y}_{k},\mathbf{u}_{k}\right)$ enters the
system, the deformation Jacobian matrix $\hat{\mathbf{J}}_{k}$ can be updated
with the above estimators.
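A minimal NumPy sketch of two of the estimators, assuming the printed formulas (the R1 form works for any $p\times q$ Jacobian, while the SR1 form as printed needs compatible dimensions, e.g., $p=q$); the function names are illustrative:

```python
import numpy as np

def broyden_r1(J_hat, y, u):
    """R1 update of Eq. (8); afterwards the secant condition J_hat @ u == y holds."""
    r = y - J_hat @ u
    return J_hat + np.outer(r, u) / (u @ u)

def broyden_sr1(J_hat, y, u):
    """SR1 update of Eq. (9); requires a square J_hat and a nonzero denominator u . r."""
    r = y - J_hat @ u
    return J_hat + np.outer(r, r) / (u @ r)
```

Each new data pair $(\mathbf{y}_{k},\mathbf{u}_{k})$ triggers one such update; the DFP and BFGS formulas (10) and (11) follow the same pattern with rank-two corrections.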
###### Remark 2
The robot is assumed to manipulate the elastic rod at low speed, so the deformation of the rod is relatively slow. On this basis, the deformation Jacobian matrix $\mathbf{J}_{k}$ can be estimated online, since the mapping (4) is assumed to be smooth.
Figure 4: The block diagram of the proposed real-time deformation control
strategy.
### 3.4 Shape Servoing Controller
At the discrete-time instant $k$, the deformation Jacobian matrix $\hat{\mathbf{J}}_{k}$ is assumed to be accurately approximated by one of (8)–(11), so that the shape-motion difference model is satisfied:
${\mathbf{s}_{k}}={\mathbf{s}_{k-1}}+{\hat{\mathbf{J}}_{k}}\cdot{\mathbf{u}_{k}}$
(12)
A model predictive controller ouyang2018robust is utilized to minimize the
shape deformation error $\mathbf{e}_{k}={\mathbf{s}^{*}-\mathbf{s}_{k}}$
between the measured feature $\mathbf{s}_{k}$ and a constant target feature
$\mathbf{s}^{*}$. With the estimated deformation Jacobian matrix
$\hat{\mathbf{J}}_{k}$ and (12), the predicted deformation output at time
$k+w$ is shown below:
$\mathbf{s}_{k+w}^{p}={\mathbf{s}_{k}}+\hat{\mathbf{J}}_{k}\cdot{\mathbf{u}_{k+w}}$ (13)
where $w\in\left[{0,H}\right]$ indexes the prediction horizon, and $\mathbf{u}_{k+w}=\mathbf{r}_{k+w}-\mathbf{r}_{k}$. The reference deformation trajectory at time $k+w$ is chosen to ensure smooth deformation of the rod while preserving the estimation accuracy of the deformation Jacobian matrix:
$\mathbf{s}_{k+w}^{r}={\mathbf{s}^{*}}-{e^{-\rho w}}\cdot{\mathbf{e}_{k}}$
(14)
where $\rho$ is a positive constant. The error $\varepsilon$ between the reference and the predicted deformation at instant $k+w$ is defined as:
${\varepsilon_{k+w}}=\mathbf{s}_{k+w}^{r}-\mathbf{s}_{k+w}^{p}=\left({1-{e^{-\rho
w}}}\right){\mathbf{e}_{k}}-{\hat{\mathbf{J}}_{k}}{\mathbf{u}_{k+w}}$ (15)
The velocity command is assumed to remain constant over the horizon, i.e., $\mathbf{u}_{k+w}=w\,\mathbf{u}_{k}$, and $\mathbf{u}_{k}$ is calculated by minimizing $\varepsilon$ over $w$ from $0$ to $H$:
$\min\frac{1}{2}\left({\sum\limits_{w=0}^{H}{{\alpha^{w}}{{\left\|{\left({1-{e^{-\rho
w}}}\right){\mathbf{e}_{k}}-w{\hat{\mathbf{J}}_{k}}{\mathbf{u}_{k}}}\right\|}^{2}}}+\mathbf{u}_{k}^{T}\mathbf{Q}{\mathbf{u}_{k}}}\right)$
(16)
where $0<\alpha\leq 1$, and $\mathbf{Q}$ is a symmetric positive definite matrix that penalises the command $\mathbf{u}_{k}$: a command that is too large makes the robot move too fast and causes the manipulated object to oscillate, which in turn degrades the estimation accuracy of the deformation Jacobian matrix. Taking the derivative of (16) with respect to $\mathbf{u}_{k}$, the gradient $\nabla$ is:
$\nabla=\sum\limits_{w=0}^{H}{-w{\alpha^{w}}\hat{\mathbf{J}}_{k}^{T}\left({\left({1-{\beta^{w}}}\right){\mathbf{e}_{k}}-w{\hat{\mathbf{J}}_{k}}{\mathbf{u}_{k}}}\right)}+\mathbf{Q}{\mathbf{u}_{k}}$
(17)
where $\beta=e^{-\rho}$. By setting $\nabla=0$, the velocity command
$\mathbf{u}_{k}$ is derived:
$\displaystyle\mathbf{u}_{k}={\left({a\hat{\mathbf{J}}_{k}+\hat{\mathbf{J}}_{k}^{T+}\mathbf{Q}}\right)^{+}}\left({b-c}\right)\mathbf{e}_{k}$ (18)
where
$a=\left({{H^{2}}{\alpha^{H}}-2b}\right)/\ln\alpha$
$b=\left({H{\alpha^{H}}\ln\alpha-{\alpha^{H}}+1}\right)/{\ln^{2}}\alpha$
$c=\left({H{{\left({\alpha\beta}\right)}^{H}}\ln\left({\alpha\beta}\right)-{{\left({\alpha\beta}\right)}^{H}}+1}\right)/{\ln^{2}}\left({\alpha\beta}\right)$
Thus, at each time instant, the incremental position command is calculated as
follows:
$\mathbf{r}_{k}=\mathbf{r}_{k-1}+\mathbf{u}_{k}$ (19)
From (12), it follows that:
${\mathbf{e}_{k}}-{\mathbf{e}_{k-1}}=-{\hat{\mathbf{J}}_{k}}\Delta{\mathbf{r}_{k}}$
(20)
$\hat{\mathbf{J}}_{k}$ is assumed to have full column rank; substituting (18) into (20) yields:
$\left({a\mathbf{E}_{n}+{\hat{\mathbf{J}}_{k}^{T+}}\mathbf{Q}\hat{\mathbf{J}}_{k}^{+}}\right)\left({{\mathbf{e}_{k}}-{\mathbf{e}_{k-1}}}\right)+\left({b-c}\right){\mathbf{e}_{k}}=0$
(21)
As $a>0$, $b-c>0$, and ${\hat{\mathbf{J}}_{k}^{T+}}\mathbf{Q}\hat{\mathbf{J}}_{k}^{+}$ is a positive-definite matrix, the error $\mathbf{e}_{k}$ asymptotically converges to zero, namely,
$\mathop{\lim}\limits_{k\to\infty}{\mathbf{s}_{k}}=\mathbf{s}^{*}$.
However, when the desired goal $\mathbf{s}^{*}$ is not reachable, $\hat{\mathbf{J}}_{k}$ may not have full column rank, and the feedback error $\|\mathbf{e}_{k}\|$ may only converge to a neighborhood of the origin. For such under-actuated visual servo control tasks, guaranteeing global asymptotic convergence is challenging hutchinson2006visual . The
block diagram of the proposed real-time deformation control strategy is shown
in Fig. 4.
###### Remark 3
The velocity controller (18) and the deformation Jacobian estimators (8)–(11) require only visual feedback data, without any additional sensors, prior knowledge of the system model, or camera calibration.
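As a sanity check of the derivation, the minimiser of (16) can also be obtained numerically by assembling the normal equations from the gradient (17), instead of using the closed-form coefficients of (18). The sketch below assumes NumPy and illustrative default values for $H$, $\alpha$, $\rho$ and $\mathbf{Q}$:

```python
import numpy as np

def mpc_velocity_command(J_hat, e, H=10, alpha=0.9, rho=0.5, Q=None):
    """Velocity command minimising the horizon cost of Eq. (16).

    Solves the normal equations of
        min_u 0.5 * ( sum_{w=0}^{H} alpha^w || (1 - e^{-rho w}) e - w J u ||^2
                      + u^T Q u )
    which follow from setting the gradient (17) to zero.
    """
    p, q = J_hat.shape
    if Q is None:
        Q = 0.01 * np.eye(q)        # illustrative regularisation weight
    beta = np.exp(-rho)
    A = Q.copy()
    b = np.zeros(q)
    for w in range(H + 1):
        aw = alpha ** w
        A += aw * w * w * (J_hat.T @ J_hat)       # quadratic term
        b += aw * w * (1.0 - beta ** w) * (J_hat.T @ e)   # linear term
    return np.linalg.solve(A, b)
```

Because (16) is a strictly convex quadratic in $\mathbf{u}_{k}$ for positive definite $\mathbf{Q}$, solving the normal equations gives its unique minimiser, which the closed form (18) expresses explicitly.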
Figure 5: Structure of DAE comprised of MLPs as the basic blocks of Encoder
and Decoder. The centerline $\bar{\mathbf{c}}$ is fed into the trained DAE to
generate shape feature denoted by $\mathbf{s}$.
## 4 SIMULATION RESULTS
The following case is considered: one end of an elastic rod is rigidly grasped
by a planar robot (2DOF) and the other end is static. For brevity, the robot
is not shown in the figures. The cable is simulated as in wakamatsu1995modeling , using the minimum energy principle hamill2014student , and the simulator is publicly available at
https://github.com/q546163199/shape_deformation/tree/master/python/package/shape_simulator.
All numerical simulations are implemented in Python.
Figure 6: Average shape reconstruction error comparison between DAE and PCA
among 200 shape sets in the simulation.
### 4.1 Feature Extraction Comparison
In this section, 40,000 samples ($N=100$) used to train the DAE are generated by randomly moving the robot. As mentioned in Section 3.2, the DAE comprises MLPs, as shown in Fig. 5. The DAE is implemented in PyTorch and trained with the ADAM optimizer, an initial learning rate of 0.001, and a batch size of 500. ReLU activation functions are adopted in the Encoder and Decoder.
In Fig. 6, the reconstruction error between the simulated shape
$\bar{\mathbf{c}}$ and the shape $\hat{\bar{\mathbf{c}}}$ obtained from DAE or
PCA is denoted by $\left\|\bar{\mathbf{c}}-\hat{\bar{\mathbf{c}}}\right\|$.
$k$ and $p$ determine the dimension of shape feature $\mathbf{s}$ obtained
from PCA zhu2020vision and DAE, respectively. For a fair comparison, $k$ is set equal to $p$ so that the feature extraction and reconstruction performance of PCA and DAE are validated under the same shape feature dimension. The results show that in each case the reconstruction performance of DAE is better than that of PCA. For DAE, $p=4$ gives the best reconstruction performance, followed by $p=6$ and then $p=2$, indicating that $p=2$ is too low to fully represent the elastic rod. Considering the trade-off between system controllability and shape representation performance, DAE with ${p}=4$ is used in the following sections.
### 4.2 Validation of the Jacobian Estimation
In this section, four deformation Jacobian estimators, namely, R1, SR1, DFP,
and BFGS, are considered, and their effectiveness is evaluated. The planar
robot grasps one end of the simulated rod and conducts a counterclockwise
circular motion with center (0.4, 0.4), as shown in Fig. 7a. Instead of a simple initialization of $\hat{\mathbf{J}}_{0}$, e.g., the identity matrix, the robot performs small motions in an initial sampling area (the motions are ensured to be non-collinear and close to the starting point) to initialize $\hat{\mathbf{J}}_{0}$. This improves the initialization accuracy of the deformation Jacobian matrix and reduces the possibility of the matrix becoming singular during manipulation, which in turn enhances the safety of the operation. The two error criteria in (22) are utilized to compare the deformation Jacobian estimators quantitatively.
$\begin{array}[]{*{20}{c}}{{T_{1}}=\left\|{{\mathbf{s}_{k}}-{{\hat{\mathbf{s}}}_{k}}}\right\|}&{{T_{2}}=\left\|{\Delta{\mathbf{s}_{k}}-{{\hat{\mathbf{J}}}_{k}}\Delta{\mathbf{r}_{k}}}\right\|}\end{array}$
(22)
where $\mathbf{s}_{k}$ is the feedback shape feature generated by DAE, and $\hat{\mathbf{s}}_{k}$ is calculated by (23):
$\hat{\mathbf{s}}_{k}=\hat{\mathbf{s}}_{k-1}+\hat{\mathbf{J}}_{k}\Delta\mathbf{r}_{k}$
(23)
The deformation Jacobian estimators and the shape reconstruction accuracy of DAE are verified, as depicted in Fig. 7b. The results show that the shape reconstruction accuracy of DAE ($p=4$) is good, demonstrating the effectiveness of DAE for shape representation. The plots of $T_{1}$ and $T_{2}$ during the
circular motion are demonstrated in Fig. 8. For the $T_{1}$ curve, all the
four deformation Jacobian estimators can accurately update the deformation
Jacobian matrix $\hat{\mathbf{J}}_{k}$, and the average error of BFGS is the
smallest. For $T_{2}$, BFGS has no apparent fluctuations, and the estimation
accuracy is the best. DFP is second-best, and R1 and SR1 share a similar
pattern, consistent with the theoretical analysis. The above analyses prove
that, when starting deformation, BFGS can calibrate and update the deformation
Jacobian matrix in time to identify the pseudo-physical parameters of the
elastic rods. Specifically, BFGS can estimate the change direction of shape
feature $\mathbf{s}$ in the latent space.
(a)
(b)
Figure 7: Deformation Jacobian matrix $\hat{\mathbf{J}}_{k}$ validation
framework. (a) Motion trajectory of robot’s end-effector. (b) Comparison
between the simulated cable profile (black solid line) and its reconstruction
shape obtained by DAE (red dashed line) with $p=4$. Figure 8: Profiles of the
criteria $T_{1}$ and $T_{2}$ that are computed along the circular trajectory
around the center (0.4, 0.4).
### 4.3 Manipulation of Elastic Rods
In this section, the robot is commanded by the velocity controller (18) to deform the elastic rod into the desired constant shape $\bar{\mathbf{c}}^{*}$, corresponding to $\mathbf{s}^{*}$. The error criterion (24) is utilized to assess the deformation performance.
${T_{3}}=\left\|{{{\bar{\mathbf{c}}}_{k}}-{{\bar{\mathbf{c}}}^{*}}}\right\|$ (24)
The progress of the cable deformation under the velocity command (18) based on R1, SR1, DFP and BFGS is depicted in Fig. 9. The curves of $T_{3}$ and the velocity command $\Delta\mathbf{r}_{k}$ are shown in Fig. 10, and the
detailed time comparison is provided in Table 1. Both figures show that BFGS
is the best method with the shortest convergence time and smallest deformation
error, followed by DFP, and the effects of R1 and SR1 are similar.
Table 1: Comparison of R1, SR1, DFP and BFGS

| | R1 | SR1 | DFP | BFGS |
|---|---|---|---|---|
| Steps | 58 | 48 | 32 | 22 |
| Time (s) | 33.64 | 27.84 | 18.56 | 12.76 |
(a) R1 result (b) SR1 result (c) DFP result (d) BFGS result
Figure 9: Profiles of the shape deformation simulation among R1, SR1, DFP and BFGS (red dashed curves represent the initial and transitional trajectories, and the black solid curve represents the target shape $\bar{\mathbf{c}}^{*}$ with shape feature $\mathbf{s}^{*}$). Figure 10:
Profiles of the criterion $T_{3}$ and velocity command $\Delta\mathbf{r}_{k}$
among R1, SR1, DFP and BFGS during manipulation. Figure 11: Experimental setup comprising two UR5 robots that support velocity control mode.
## 5 EXPERIMENTAL RESULTS
Various experiments are conducted with two UR5 robots that support velocity control mode, as shown in Fig. 11. The command is
$\Delta\mathbf{r}=[\Delta\mathbf{r}_{1}^{T},\Delta\mathbf{r}_{2}^{T}]^{T}\in\mathbb{R}^{6}$,
where $\Delta r_{i1}$ and $\Delta r_{i2}$, $i=1,2$, are the linear velocities of the end-effector along the x- and y-axes of each UR5 in the world frame, and $\Delta r_{i3}$, $i=1,2$, is the angular velocity of the sixth joint of each UR5 about the z-axis of the world frame. A Logitech C270 camera captures the rod's image, which is processed with OpenCV on a Linux PC at 30 fps. The deformation trajectories are displayed once every two frames to compare the convergence of each algorithm. An experimental video can be downloaded at
https://github.com/q546163199/experiment_video/raw/master/paper2/video.mp4.
Table 2: Comparison of three centerline extraction algorithms with $N=50$

| | Reference qi2020adaptive | CL park2000centroid | SOM |
|---|---|---|---|
| Time (s) | 1.68 | 0.98 | 0.38 |
### 5.1 Image Processing
This section verifies the proposed SOM-based centerline extraction algorithm
and describes the image processing steps.
First, the SOM-based method proposed in this article is compared with two
other centerline extraction algorithms. The first one is based on the
_OpenCV/thinning_ developed in Reference qi2020adaptive , and the second one
is based on CL, which is the traditional clustering method park2000centroid .
For a fair comparison, all algorithms are required to provide an ordered, fixed-length ($N=50$), equidistant centerline. Since the CL-based and SOM-based algorithms only generate an unordered fixed-length centerline, the sorting algorithm qi2020adaptive is used to order them. For SOM, the open-source toolbox provided by vettigliminisom is utilized. As shown in Table 2, the SOM-based algorithm is the fastest. One reason is that the SOM toolbox is highly optimized; another is that the centerlines produced by the CL-based and SOM-based clustering algorithms are already fixed-length and equidistantly sampled, which saves time compared with qi2020adaptive , which sorts all the points and then performs down-sampling. As shown in Fig. 12, the proposed SOM-based algorithm and qi2020adaptive have similar centerline extraction accuracy, while the CL-based algorithm performs worst. Given that SOM is both accurate and the fastest, the SOM-based centerline extraction algorithm is adopted.
(a) Reference qi2020adaptive
(b) CL park2000centroid
(c) Proposed SOM
Figure 12: Comparison of three centerline extraction algorithms: reference qi2020adaptive , CL-based park2000centroid , and the proposed SOM-based.
Second, the relevant image processing for centerline extraction is provided.
The overall process (shown in Fig. 13) is as follows:
1. First, segment the red areas near Gripper1 and Gripper2 in HSV color space and mark them as two green marker points. Then, segment the region of interest (ROI) containing the rod based on the two green marker points (see Fig. 13a).
2. Next, identify the rod in the ROI, remove noise, and obtain a binary image of the rod using the OpenCV morphological opening operation (see Fig. 13b).
3. Then, use the proposed SOM-based algorithm to obtain an unordered centerline with $N=50$ (see Fig. 13c).
4. Finally, apply the sorting algorithm qi2020adaptive to obtain an ordered centerline (see Fig. 13d). The starting point is the point on the centerline closest to the right marker point.
(a) ROI selection
(b) Thresholding
(c) Centerline extraction (SOM)
(d) Coordinate sorting
Figure 13: Image processing steps. Figure 14: Average shape reconstruction
error comparison between DAE and PCA among 200 shape sets in the experiment.
### 5.2 Feature Extraction Comparison
Similar to Section 4.1, 40,000 samples are generated in the same way. The structure of the DAE is similar to that in Section 4.1, as shown in Fig. 5, with Batch Normalization (1D) added after each layer. The comparison results in Fig. 14 show that the reconstruction accuracy of DAE is again better than that of PCA. For DAE, $p=4$ is the most accurate and $p=5$ the second best, consistent with the simulation results. This confirms the effectiveness of the proposed DAE for shape feature extraction; thus DAE with $p=4$ is used in the following sections.
(a)
(b)
Figure 15: Deformation Jacobian matrix $\hat{\mathbf{J}}_{k}$ validation framework. (a) Motion trajectory of the robot's end-effector. (b) Comparison between the visually measured cable profile (green dotted line) and its reconstruction obtained by DAE (red dotted line) with $p=4$. Figure 16: Profiles of the criteria $T_{1}$ and $T_{2}$ computed along the circular trajectory.
### 5.3 Validation of the Jacobian Estimation
Similar to Section 4.2, the two UR5 robots are commanded to move along a fixed circular trajectory, as depicted in Fig. 15a. The shape reconstruction performance of DAE is accurate in the experiment, as shown in Fig. 15b; thus, in both simulation and experiment, DAE can be applied for shape representation. The comparison results in Fig. 16 show that BFGS has the smallest deformation Jacobian approximation error, which validates its effectiveness and adaptability in different regions.
### 5.4 Manipulation of Elastic Rods
Similar to Section 4.3, a desired shape $\bar{\mathbf{c}}^{*}$ should be given
in advance. The following steps are conducted to obtain the feasible target
shape.
* First, the robot is moved to a position while avoiding singular shapes, e.g., a straight line.
* Second, the current shape of the elastic rod is recorded by the camera and denoted by $\bar{\mathbf{c}}^{*}$ as the target shape.
* Third, the target shape $\bar{\mathbf{c}}^{*}$ is fed into the trained DAE to obtain the target shape feature $\mathbf{s}^{*}$.
* Fourth, the robot moves back to the initial position and starts the deformation with the given $\mathbf{s}^{*}$.
For safety, the saturation limits of $\Delta\mathbf{r}$ are set to $\left|{\Delta{r_{i1}}}\right|\leq 0.01\,\mathrm{m/s}$, $\left|{\Delta{r_{i2}}}\right|\leq 0.01\,\mathrm{m/s}$, and $\left|{\Delta{r_{i3}}}\right|\leq 0.1\,\mathrm{rad/s}$, $i=1,2$. Four experiments with different initial and target shapes are conducted to verify the effectiveness of the proposed algorithm.
With the proposed algorithm, the elastic rod is deformed by the two UR5 robots into the desired shape accurately, without damaging the object during the deformation process, as depicted in Fig. 17. The profiles of $T_{3}$ and the velocity command $\Delta\mathbf{r}_{k}$ are shown in Fig. 18. Consistent with the simulation results, BFGS again has the best control performance and the fastest convergence speed, without apparent fluctuations (large instantaneous deformations). Thus, BFGS shows excellent adaptability and robustness to various conditions in the shape deformation task. Since the experiments use two UR5 robots, the results also demonstrate the effectiveness of the proposed algorithm for the multi-manipulator shape deformation problem.
(a) Experiment1-R1
(b) Experiment2-R1
(c) Experiment3-R1
(d) Experiment4-R1
(e) Experiment1-SR1
(f) Experiment2-SR1
(g) Experiment3-SR1
(h) Experiment4-SR1
(i) Experiment1-DFP
(j) Experiment2-DFP
(k) Experiment3-DFP
(l) Experiment4-DFP
(m) Experiment1-BFGS
(n) Experiment2-BFGS
(o) Experiment3-BFGS
(p) Experiment4-BFGS
Figure 17: Initial (black solid line), transition (green solid line) and target (red solid line) configurations in the four shape deformation experiments, which cover a variety of initial and target shapes, with the dual-UR5 robot and the R1, SR1, DFP and BFGS estimators.
(a) Experiment1 result (b) Experiment2 result (c) Experiment3 result (d)
Experiment4 result
Figure 18: Profiles of the criterion $T_{3}$ and velocity command
$\Delta\mathbf{r}_{k}$ among R1, SR1, DFP and BFGS within four shape
deformation experiments.
## 6 Conclusions
A framework for the deformation control of elastic rods that requires no prior physical knowledge is proposed. It includes shape feature extraction,
deformation Jacobian matrix estimation, and a robust SOM-based centerline
extraction algorithm. First, new shape features based on DAE are utilized to
represent the elastic rod’s centerline in the low-dimensional latent space.
Second, the performance of four deformation Jacobian estimators (R1, SR1, DFP,
and BFGS) is evaluated. Third, the velocity controller is derived and the
system stability is proven. Finally, the effectiveness and feasibility of the
proposed algorithm are validated by the numerical and experimental results.
The DAE is used in this study to map the high-dimensional geometric
information of elastic rods flexibly into a low-dimensional latent space. The
proposed feature extraction algorithm has better shape representation
capabilities than traditional PCA. It also does not require any artificial
markers, making it widely applicable to practical situations. Broyden
algorithms are used to approximate the deformation Jacobian matrix in real
time, so neither the physical parameters nor the camera model needs to be
identified. The results show that BFGS combines a simple structure, fast
computation, and accurate approximation. In addition, a robust SOM-based
centerline extraction algorithm with fast computation and high extraction
accuracy is designed. The overall system is computed entirely from visual
feedback data, without any prior physical characterisation of the elastic rod
and without camera calibration.
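As a concrete illustration of the online Jacobian estimation, the classic Broyden rank-one (R1) update can be sketched as follows; the variable names (`J`, `dr`, `ds`) and the small-motion guard are our own choices, and the SR1/DFP/BFGS variants evaluated in the paper replace the correction term with their respective formulas:

```python
import numpy as np

def broyden_update(J, dr, ds, eps=1e-12):
    """Broyden rank-one update of an estimated deformation Jacobian J.

    dr : change in the robot command (input), shape (m,)
    ds : observed change in the shape feature vector (output), shape (n,)
    The update enforces the secant condition J_new @ dr = ds while
    changing J as little as possible (in the Frobenius norm).
    """
    dr = np.asarray(dr, dtype=float).reshape(-1, 1)
    ds = np.asarray(ds, dtype=float).reshape(-1, 1)
    denom = float(dr.T @ dr)
    if denom < eps:           # skip the update for negligible motions
        return J
    return J + (ds - J @ dr) @ dr.T / denom
```

In the control loop, `J` is updated after every commanded motion and then used to compute the next velocity command, which is why no physical or camera model ever needs to be identified.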
The proposed method also has some limitations. First, it handles only soft
elastic objects, e.g., carbon fiber rods, and is therefore not suitable for
inelastic items such as plasticine or rope. Second, although the DAE has good
shape representation ability, it requires a large and sufficiently rich
training dataset, which can be difficult to obtain in practical applications.
Third, the Broyden-based approximation of the deformation Jacobian matrix can
fall into local optima, which may produce destructive operations such as
over-tension and over-compression during manipulation.
In future work, 3D deformation tasks will be included to manipulate more
complex shapes, e.g., M-shaped and spiral configurations. Moreover, the
existing DAE needs to be improved to generalise across scenarios and
materials. Path planning should also be considered to avoid possible
destructive operations during manipulation.
###### Acknowledgements.
This work was supported in part by the Germany/Hong Kong Joint Research Scheme
sponsored by the Research Grants Council of Hong Kong and the German Academic
Exchange Service under grant G-PolyU507/18, in part by the Research Grants
Council of Hong Kong under grant number 14203917, in part by the Key-Area
Research and Development Program of Guangdong Province 2020 under project 76.
## References
* (1) Abolmaesumi, P., Salcudean, S.E., Zhu, W.H., Sirouspour, M.R., DiMaio, S.P.: Image-guided control of a robot for medical ultrasound. IEEE Transactions on Robotics and Automation 18(1), 11–23 (2002)
* (2) Broyden, C.G.: A class of methods for solving nonlinear simultaneous equations. Mathematics of computation 19(92), 577–593 (1965)
* (3) Cretu, A.M., Payeur, P., Petriu, E.M.: Soft object deformation monitoring and learning for model-based robotic hand manipulation. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42(3), 740–753 (2011)
* (4) Dennis, J.E., Moré, J.J.: A characterization of superlinear convergence and its application to quasi-Newton methods. Mathematics of computation 28(126), 549–560 (1974)
* (5) Ebert, F., Finn, C., Dasari, S., Xie, A., Lee, A., Levine, S.: Visual foresight: Model-based deep reinforcement learning for vision-based robotic control. arXiv preprint arXiv:1812.00568 (2018)
* (6) Hamill, P.: A student’s guide to Lagrangians and Hamiltonians. Cambridge University Press (2014)
* (7) Henrich, D., Wörn, H.: Robot Manipulation of Deformable Objects (2000)
* (8) Hu, Z., Han, T., Sun, P., Pan, J., Manocha, D.: 3-d deformable object manipulation using deep neural networks. IEEE Robotics and Automation Letters 4(4), 4255–4261 (2019)
* (9) Hutchinson, S., Chaumette, F.: Visual servo control, part i: Basic approaches. IEEE Robotics and Automation Magazine 13(4), 82–90 (2006)
* (10) Kohonen, T.: Self-organized formation of topologically correct feature maps. Biological cybernetics 43(1), 59–69 (1982)
* (11) Nair, A., Chen, D., Agrawal, P., Isola, P., Abbeel, P., Malik, J., Levine, S.: Combining self-supervised learning and imitation for vision-based rope manipulation. In: 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 2146–2153. IEEE (2017)
* (12) Navarro-Alarcon, D., Liu, Y.H.: Fourier-based shape servoing: a new feedback method to actively deform soft objects into desired 2-d image contours. IEEE Transactions on Robotics 34(1), 272–279 (2017)
* (13) Navarro-Alarcon, D., Liu, Y.h., Romero, J.G., Li, P.: On the visual deformation servoing of compliant objects: Uncalibrated control methods and experiments. The International Journal of Robotics Research 33(11), 1462–1480 (2014)
* (14) Navarro-Alarcon, D., Yip, H.M., Wang, Z., Liu, Y.H., Zhong, F., Zhang, T., Li, P.: Automatic 3-d manipulation of soft objects by robotic arms with an adaptive deformation model. IEEE Transactions on Robotics 32(2), 429–441 (2016)
* (15) Nocedal, J.: Updating quasi-Newton matrices with limited storage. Mathematics of computation 35(151), 773–782 (1980)
* (16) Ouyang, B., Mo, H., Chen, H., Liu, Y., Sun, D.: Robust model-predictive deformation control of a soft object by using a flexible continuum robot. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 613–618. IEEE (2018)
* (17) Park, D.C.: Centroid neural network for unsupervised competitive learning. IEEE Transactions on Neural Networks 11(2), 520–528 (2000)
* (18) Polic, M., Krajacic, I., Lepora, N., Orsag, M.: Convolutional autoencoder for feature extraction in tactile sensing. IEEE Robotics and Automation Letters 4(4), 3671–3678 (2019)
* (19) Qi, J., Ma, W., Navarro-Alarcon, D., Gao, H., Ma, G.: Adaptive shape servoing of elastic rods using parameterized regression features and auto-tuning motion controls. arXiv preprint arXiv:2008.06896 (2020)
* (20) Rusu, R.B., Blodow, N., Beetz, M.: Fast point feature histograms (fpfh) for 3d registration. In: 2009 IEEE International Conference on Robotics and Automation, pp. 3212–3217 (2009). DOI 10.1109/ROBOT.2009.5152473
* (21) Rusu, R.B., Marton, Z.C., Blodow, N., Beetz, M.: Persistent point feature histograms for 3d point clouds. In: Proc 10th Int Conf Intel Autonomous Syst (IAS-10), Baden-Baden, Germany, pp. 119–128 (2008)
* (22) Siciliano, B.: Kinematic control of redundant robot manipulators: A tutorial. Journal of intelligent and robotic systems 3(3), 201–212 (1990)
* (23) Sun, P., Hu, Z., Pan, J.: A general robotic framework for automated cloth assembly. In: 2019 IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM), pp. 47–52. IEEE (2019)
* (24) Tang, T., Wang, C., Tomizuka, M.: A framework for manipulating deformable linear objects by coherent point drift. IEEE Robotics and Automation Letters 3(4), 3426–3433 (2018)
* (25) Tokumoto, S., Hirai, S.: Deformation control of rheological food dough using a forming process model. In: Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292), vol. 2, pp. 1457–1464. IEEE (2002)
* (26) Valencia, A.J., Nadon, F., Payeur, P.: Toward real-time 3d shape tracking of deformable objects for robotic manipulation and shape control. In: 2019 IEEE SENSORS, pp. 1–4. IEEE (2019)
* (27) Vettigli, G.: Minisom: minimalistic and numpy-based implementation of the self organizing map. GitHub.[Online]. Available: https://github.com/JustGlowing/minisom/
* (28) Wakamatsu, H., Hirai, S., Iwata, K.: Modeling of linear objects considering bend, twist, and extensional deformations. In: Proceedings of 1995 IEEE International Conference on Robotics and Automation, vol. 1, pp. 433–438. IEEE (1995)
* (29) Yan, M., Zhu, Y., Jin, N., Bohg, J.: Self-supervised learning of state estimation for manipulating deformable linear objects. IEEE Robotics and Automation Letters 5(2), 2372–2379 (2020)
* (30) Yeo, N., Lee, K., Venkatesh, Y., Ong, S.H.: Colour image segmentation using the self-organizing map and adaptive resonance theory. Image and Vision Computing 23(12), 1060–1079 (2005)
* (31) Zhou, P., Zhu, J., Huo, S., Navarro-Alarcon, D.: Lasesom: A latent representation framework for semantic soft object manipulation. arXiv preprint arXiv:2012.05412 (2020)
* (32) Zhu, J., Navarro-Alarcon, D., Passama, R., Cherubini, A.: Vision-based manipulation of deformable and rigid objects using subspace projections of 2d contours. Rob. Auton. Syst. (2020)
1Department of Astronomy and Astrophysics, Tata Institute of Fundamental
Research, Homi Bhabha Road, Mumbai 400005, India
2Department of Astronomy and Astrophysics (retd), Tata Institute of
Fundamental Research, Homi Bhabha Road, Mumbai 400005, India
3Centre for Astrophysics, University of Southern Queensland, QLD 4300,
Australia
4Inter-University Centre for Astronomy & Astrophysics, Ganeshkhind,
Pune-411007, India
5School of Physics and Astronomy, University of Southampton, Highfield Campus,
Southampton SO17 1BJ, UK
6Department of Physics, Indian Institute of Technology, Hyderabad 502285,
India
7Department of Physics, Indian Institute of Technology, Kanpur 208016, India
<EMAIL_ADDRESS>
# Large Area X-ray Proportional Counter (LAXPC) in Orbit Performance :
Calibration, background, analysis software
H. M. Antia1,* P. C. Agrawal2 Dhiraj Dedhia1 Tilak Katoch1 R. K. Manchanda3
Ranjeev Misra4 Kallol Mukerjee1 Mayukh Pahari5,6 Jayashree Roy4 P. Shah1 J. S.
Yadav7
###### Abstract
The Large Area X-ray Proportional Counter (LAXPC) instrument on-board AstroSat
has three nominally identical detectors for timing and spectral studies in the
energy range of 3–80 keV. The performance of these detectors during the five
years after the launch of AstroSat is described. Currently, only one of the
detectors is working nominally. The variations in pressure, energy resolution,
gain and background with time are discussed. The capabilities and limitations
of the instrument are described. A brief account of available analysis
software is also provided.
###### keywords:
space vehicles: instruments — instrumentation: detectors
<EMAIL_ADDRESS>
## 1 Introduction
The Large Area X-ray Proportional Counter (LAXPC) instrument on-board AstroSat
(Agrawal 2006; Singh et al. 2014) consists of three co-aligned detectors for
X-ray timing and spectral studies over an energy range of 3–80 keV (Yadav et
al. 2016a; Agrawal et al. 2017). Apart from LAXPC, AstroSat has three more co-
aligned instruments, the Soft X-ray Telescope (SXT, Singh et al. 2016), the
Cadmium Zinc Telluride Imager (CZTI, Bhalerao et al. 2017) and the Ultra-
Violet Imaging Telescope (UVIT, Tandon et al. 2017). AstroSat was conceived to
carry out multiwavelength observations of various sources in the Visible, UV
and X-ray bands using these co-aligned instruments. AstroSat was launched on
September 28, 2015 and the initial calibration of the LAXPC instrument was
discussed by Antia et al. (2017). By now AstroSat has made more than 2000
distinct observations covering a wide variety of sources and a large amount of
data are publicly available from the AstroSat Data
Archive111https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp.
Quick-look light curves of all LAXPC observations are available at the LAXPC
website222https://www.tifr.res.in/~astrosat_laxpc/laxpclog.lc-hdr.html. A
number of science results from the LAXPC instrument have been published and a
summary of these results is described in the companion paper (Yadav et al.
2020).
Each LAXPC detector has five layers divided into seven anodes (A1–A7), with
the two top layers having two anodes each. In addition, there are three veto
anodes (A8–A10) on three sides of the detector. The faces covered by the veto
anodes are the bottom and the two faces along the long sides of the detector,
as shown in Figure 2 of Antia et al. (2017). By default the LAXPC
detectors operate in the Event Analysis (EA) mode, where the time and energy
of each photon are recorded with a time resolution of 10 $\mu$s. The EA mode
operation also generates the Broad Band Counting (BB) mode data, which give
the counts in a predefined time bin for various energy bins and anodes,
including the counts of events beyond the Upper Level Discriminator (ULD)
threshold (nominally at 80 keV). The dead-time of the detectors is 42 $\mu$s
(Yadav et al. 2016b). In addition, there is a Fast Counter (FC) mode with a
dead-time of about 10 $\mu$s to allow observation of bright sources. In this
mode, only counts with a fixed time bin of 160 $\mu$s, recorded in the top
layer of the detector in four predefined energy bins, are
available. However, this mode has not been used for any science observation
and no software to analyse the data is available. As a result, this mode is
not allowed to be configured by LAXPC users. Thus effectively only one mode
covering both EA and BB is allowed.
In xenon gas counters, a large fraction of incoming photons above 34.5 keV
give rise to a Xe K fluorescence photon, and depending on the cell geometry
and filling pressure, the fluorescent photon may generate a second localized
electron cloud in a different anode. The on-board electronics is therefore
designed to recognise such correlated double events, and the energies
deposited in the two anodes are added. These are referred to as double events,
as opposed to single events, in which all the energy is deposited in one
anode.
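A minimal sketch of this double-event logic, with an event represented as a (time, anode, energy) tuple; the coincidence window and the tuple layout are illustrative choices of ours, not on-board parameters:

```python
def combine_double_events(events, window=1e-5):
    """Merge correlated double events: two events in *different* anodes
    arriving within `window` seconds are treated as one photon whose
    energies are summed; everything else passes through as a single event.
    `events` is a list of (time_s, anode_id, energy_keV) tuples."""
    events = sorted(events)
    out, i = [], 0
    while i < len(events):
        t, anode, e = events[i]
        if (i + 1 < len(events)
                and events[i + 1][0] - t <= window
                and events[i + 1][1] != anode):
            out.append((t, e + events[i + 1][2]))   # double event: sum energies
            i += 2
        else:
            out.append((t, e))                      # single event
            i += 1
    return out
```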
In this article, we mainly focus on the in-orbit performance of the LAXPC
instrument, its calibration, and the software available for analysing the
data. The three LAXPC detectors are labelled LAXPC10, LAXPC20 and LAXPC30.
Currently, only LAXPC20 is working nominally. The rest of the paper is
organised as follows: Section 2 describes the performance of detectors during
the last five years. Section 3 describes the variation in background with time
and some procedures to correct for these. Section 4 describes some
capabilities and limitations of the detectors and their sensitivity. Section 5
describes some software for analysing the data. Section 6 describes the
various science goals of LAXPC detectors and how they are met by the data
obtained so far. Finally, Section 7 describes a summary of calibration and
performance of detectors.
Figure 1: The pressure in LAXPC detectors as a function of time. The left
panel shows the pressure as obtained from the pressure gauge for all three
detectors. The right panel shows the density for LAXPC30 using various
techniques. The dashed line is a fit to the linear part of the curve. Both
pressure and density are shown relative to their initial values.
## 2 Long Term Performance of LAXPC in Orbit
The health parameters of the detectors, like the temperature, pressure, high
voltage (HV) and various energy thresholds are monitored regularly. While the
temperature of detectors is steady, the pressure has changed over the last
five years from the initial value of about two atmospheres. The HV and
thresholds have been maintained, except for some adjustments that were made
from time to time. Further, to monitor the stability of detectors, the peak
position and energy resolution of 30 and 60 keV peaks in the veto anode A8
from the on-board radioactive Am241 source are measured regularly and a log is
maintained333https://www.tifr.res.in/~astrosat_laxpc/LaxpcSoft_v1.0/gaina8.dat.
Apart from these, we have also regularly monitored the spectrum from Crab
observations to check the stability of the detector response.
### 2.1 Pressure in Detectors
Figure 1 shows the pressure as estimated from the pressure gauge in all three
detectors as a function of time. LAXPC30 developed a leak soon after launch
and the pressure decreased steadily. The HV of the detector was turned off on
March 8, 2018, when the pressure had dropped to about 5% of the
turned off on March 8, 2018 when the pressure had reduced to about 5% of the
original pressure. LAXPC10 also has a fine leak and the pressure has been
reducing gradually. Curiously, the pressure gauge of LAXPC20 shows a slow
increase in pressure. This is most likely an artefact and shows the
limitations of the on-board pressure gauge. Because of this, we used other
techniques to
estimate the density in LAXPC30, as described by Antia et al. (2017) and the
results using these are shown in the right panel of Figure 1. Since these
techniques are based on absorption in Xenon gas, they yield the density which
is assumed to be a proxy for pressure, as the temperature is almost constant
during the entire period. The Cyg X-1 observations during the soft state were
used to measure the density by calibrating the ratio of counts around 20 keV,
observed in different layers of the detector. The soft state was used to
ensure very low flux beyond the Xe K-edge to avoid possible contamination from
events involving Xe fluorescence photons. The Crab spectra were fitted using
responses with different densities to get the best-fit density. Other
techniques were based on the strength of the 60 keV peak in the veto anode A8
and on the observation of the L-edge in the spectrum when the density was
sufficiently low. It can be seen that the results from all these techniques
agree with each other.
The density in LAXPC30 decreased linearly for some time at a rate of about
4.5% of original density per month. After that it followed an exponential
rate, as may be expected for a leak, with an e-folding time of about 200 days.
By now the pressure is below the sensitivity of the pressure gauge, and using
the exponential profile it can be estimated to be about 0.05% of the original
value. To keep the gain of the detector within a reasonable limit, the HV of
the detector was reduced from time to time. To allow analysis of LAXPC30 data,
the response has been calculated at different densities and the software
recommends which response to use depending on the time of observation.
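The quoted numbers suggest a simple piecewise model for the relative density of LAXPC30: a linear loss of about 4.5% of the original density per month, followed by exponential decay with an e-folding time of about 200 days. The switch-over time in the sketch below is a free parameter of our illustration, not a value given in the text:

```python
import math

# Illustrative piecewise density model for LAXPC30 (relative to launch):
# linear loss of ~4.5% of the original density per month early on, then
# exponential decay with an e-folding time of ~200 days.
RATE = 0.045 / 30.0      # fractional loss per day in the linear phase
TAU = 200.0              # e-folding time in days

def relative_density(t, t_switch=300.0):
    """Relative density at time t (days); t_switch is hypothetical."""
    if t <= t_switch:
        return max(0.0, 1.0 - RATE * t)
    d0 = max(0.0, 1.0 - RATE * t_switch)
    return d0 * math.exp(-(t - t_switch) / TAU)
```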
### 2.2 Energy Resolution and Gain of Detectors
Because of the leak, the gain of LAXPC10 and LAXPC30 was also increasing with
time, and this is estimated using the 30 keV line in the veto anode A8 from
the Am241 source. The calibration source gives two lines, one at 30 keV due to
the Xe K-escape and another at 60 keV directly from the Am241 source, and
these peaks can be used to check the drift in gain as well as energy
resolution. To correct for the drift in the gain, the HV of LAXPC10 and
LAXPC30 was adjusted from time to time. This gives some steps in the gain.
After some stage the gain of LAXPC30 had to be adjusted frequently, giving a
band in the peak position as seen in the left panel of Figure 2. On January
22, 2018 the HV of LAXPC30 was reduced to the minimum possible value of about
930 V (as compared to the initial value of about 2300 V), after which the peak
channel kept shifting upwards. Even before this stage, the 60 keV peak was not
well defined due to low efficiency and hence its position could not be
determined. By the time the HV of detector was turned off, even the 30 keV
peak had shifted beyond the ULD and it was not possible to estimate the gain
of the detector.
Figure 2: The energy resolution and peak channel for the 30 keV ($p_{1}$,
marked by crosses) and 60 keV ($p_{2}$, marked by open squares) calibration
peaks in the veto anode A8 are shown in the left panel. The black, red and
blue points show the results for LAXPC10, LAXPC20 and LAXPC30, respectively.
The right panel shows the quantity $2-p_{2}/p_{1}$ for the three detectors.
The lines mark the best-fit straight lines for the three detectors.
The channel-to-energy mapping is defined by a quadratic (Antia et al. 2017),
and it is not possible to estimate the three coefficients of the quadratic
from the positions of only two peaks. Hence, it is assumed that only the
linear term changes with time, and its value is estimated from the position of
the 30 keV peak. Responses for different values of the 30 keV peak position
are provided, and the software makes an appropriate recommendation based on
the time of observation. If the gain were linear, the peak channel for the 60
keV peak, $p_{2}$, would be twice that for the 30 keV peak, $p_{1}$. Hence,
the difference $2-p_{2}/p_{1}$ gives a measure of the nonlinearity in the
gain. This quantity is shown in the right panel of Figure 2. It can be seen
that the nonlinearity has been decreasing for all detectors. However, it
should be noted that this quantity can also change if the constant term in the
gain is changing. Thus it is not possible to correct for this variation. It is
advisable to use the gain fit command in Xspec to adjust the constant term,
and even the linear term, to get the best spectral fit.
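The nonlinearity measure is simple to compute. As a sanity check, the sketch below (with a hypothetical affine gain $E = a + bc$, our own illustration) confirms that a nonzero constant term $a$ alone makes $2 - p_{2}/p_{1}$ nonzero, which is why the quantity cannot separate the two effects:

```python
def nonlinearity(p1, p2):
    """Nonlinearity measure from the two Am241 calibration peaks:
    p1 is the peak channel of the 30 keV line, p2 that of the 60 keV
    line.  For a linear, zero-offset gain p2 = 2*p1 and this is zero."""
    return 2.0 - p2 / p1

def peak_channel(energy_kev, a, b):
    """Channel of a peak under an affine channel-to-energy map
    E = a + b*c, i.e. c = (E - a) / b (illustrative; the real LAXPC
    mapping is quadratic)."""
    return (energy_kev - a) / b
```

With $E = a + bc$ one finds $2 - p_{2}/p_{1} = -a/(30 - a)$, so a drifting constant term mimics a change in nonlinearity.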
On March 26, 2018, LAXPC10 showed erratic counts with strong bursts in which
the dead-time corrected count rate reached 40000 s$^{-1}$. The cause of this
anomaly is not known. To stabilise the counts, the HV of the detector was
reduced. Attempts were also made to control the noise by adjusting the Low
Level Discriminator (LLD) thresholds of some anodes that were showing
low-channel noise, but that did not remove the bursts, and hence the HV was
kept at the lower value. After that the counts were stable to some extent,
though smaller bursts continued. By examining the counts beyond the ULD, it is
possible to identify the time intervals when the counts are not stable; this
check has been implemented in the software, which automatically removes these
intervals from the Good Time Intervals (GTI). This problem occurred a few
times after that
and every time the HV was reduced to stabilise the counts. The last adjustment
was made on April 9, 2019 and since then no bursts have been observed, except
for a few days between April 23 and May 3, 2020. Because of these adjustments,
the HV of LAXPC10 has been reduced from about 2330 V initially to about 2190 V
in March 2018 (before the anomalous behaviour started) and to 1860 V now. Some
reduction in HV was also required to compensate for the leak in the detector.
About half of the 470 V reduction would have been needed to compensate for the
reduction in pressure. The status of LAXPC10 at any time can be checked on the
LAXPC website444https://www.tifr.res.in/~astrosat_laxpc/laxpc10.pdf. Even if
the detector remains stable, it would take a few years for it to return to a
reasonable gain because of the leak.
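A sketch of the kind of automatic filtering described above; the statistic (binned counts beyond the ULD), the threshold, and the padding of neighbouring bins are all illustrative choices of ours, since the pipeline's actual criterion is not spelled out here:

```python
import numpy as np

def flag_unstable_bins(uld_counts, threshold):
    """Boolean mask of time bins whose counts beyond the ULD exceed
    `threshold` (both the statistic and the threshold are illustrative)."""
    return np.asarray(uld_counts) > threshold

def good_time_mask(uld_counts, threshold, pad=1):
    """Good-time mask: drop flagged bins plus `pad` bins on either side,
    so that the edges of a burst are excluded as well."""
    bad = flag_unstable_bins(uld_counts, threshold)
    widened = bad.copy()
    for k in range(1, pad + 1):
        widened[k:] |= bad[:-k]
        widened[:-k] |= bad[k:]
    return ~widened
```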
Because of the reduction in HV, the energy thresholds of LAXPC10 have
increased. At the time of the last adjustment, the LLD was around 30 keV and
the ULD was about 400 keV. Since then, because of the fine leak, the
thresholds have decreased somewhat, with the LLD around 22 keV and the ULD
around 320 keV. It is difficult to estimate the gain of this detector reliably
and to obtain the corresponding response. As a result, it is not possible to
use this detector for spectroscopic studies. During the period just after
March 26, 2018, the gain of the detector was in a reasonable range, so the
data obtained during that interval can be analysed if only single events in
the top layer of the detector are used. It is necessary to reject all double
events, in which energy is deposited in two different anodes because a Xe K
X-ray photon is absorbed in a different anode, as the energy thresholds for
selecting these events have not been adjusted owing to the difficulty of
estimating them reliably. The restriction to the top layer of the detector is
needed because the LLD thresholds of some other anodes have been increased,
giving an edge in the spectrum.
Figure 3: The pulse profile of GRO J2058+42 as calculated from LAXPC10 data.
Even with the low gain, LAXPC10 does detect bursts, e.g., from GRBs (Antia et
al. 2020a,b). Similarly, it is possible to detect pulsations with LAXPC10;
e.g., for GRO J2058+42 during its outburst in April 2019, the pulsation period
was determined to be $194.256\pm 0.034$ s and the spin-up rate was estimated
to be $\dot{\nu}=(1.7\pm 1.0)\times 10^{-11}$ Hz s$^{-1}$. This can be
compared with $P=194.2201\pm 0.0016$ s and $\dot{\nu}=(1.65\pm 0.06)\times
10^{-11}$ Hz s$^{-1}$ obtained with LAXPC20 (Mukerjee et al. 2020a). The error
bars represent the 90% confidence limits. The larger errors for LAXPC10 are
mainly due to lower counts resulting from the higher LLD and lower efficiency.
This observation was taken at a time when the counts were not very stable in
LAXPC10 and only about 7% of the exposure time was usable. The pulse profile
obtained from LAXPC10 is shown in Figure 3. The gain of the detector during
this observation is uncertain, and the LLD was probably around 30 keV. This
pulse profile can be compared with the pulse profile obtained for the energy
range 30–40 keV from LAXPC20 (Mukerjee et al. 2020a).
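The pulse profile of Figure 3 is obtained by folding photon arrival times on the measured period. A minimal epoch-folding sketch follows; the barycentric and spin-up corrections that the full timing analysis requires are omitted:

```python
import numpy as np

def fold_pulse_profile(event_times, period, n_bins=16):
    """Fold photon arrival times (seconds) on a trial period into an
    n_bins-bin pulse profile of counts per phase bin."""
    phases = np.mod(np.asarray(event_times, dtype=float), period) / period
    profile, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return profile
```

Scanning `period` over a grid and maximising a $\chi^{2}$ statistic of the profile against a flat distribution is the usual way the period itself is refined.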
Figure 4: The results of fits to the Crab spectrum observed during the last 5
years are shown as a function of days from the launch of AstroSat for the
three LAXPC detectors, as marked in the figure. The different panels show all
fitted parameters as well as the reduced $\chi^{2}$ of the fit.
The energy resolution of LAXPC10 was stable until March 2018, while that of
LAXPC30 was improving with time, probably due to the reduction in pressure.
The energy resolution of LAXPC20 has been deteriorating with time, and
currently the resolution at 30 keV is about 20%. The position of the peak
channel in LAXPC20 has been steady, and the last adjustment in HV was made on
March 15, 2017. Since then the peak position has decreased by about 12
channels. Since this detector already operates at a higher voltage (about 2600
V) compared to the other two, a further increase in HV has not been attempted.
Currently, this is the only detector that is working nominally.
### 2.3 Fit to Spectra of Crab Observations
AstroSat has observed the Crab X-ray source several times during the last five
years and the spectra obtained during these observations have been fitted to
monitor the stability of detector response after accounting for known drift in
gain and pressure using appropriate responses. The Crab spectra were fitted to
a powerlaw form to obtain the spectral index and normalisation for each
observation (averaged over all orbits) and the results are shown in Figure 4.
All fits were performed with 3% systematics in the spectra and background, and
a line-of-sight absorption column density of $0.34\times 10^{22}$ cm$^{-2}$
was used (Shaposhnikov et al. 2012). Due to the anomalous behaviour and
abnormal gain change, LAXPC10 and LAXPC30 data were not fitted from 2018
onwards and hence are omitted from the respective plots. The gain fit was also
used in Xspec version
12.11.1 to allow for small deviations in the gain of responses. The effect of
using the gain fit during the last five years for LAXPC20 is shown in Figure 5
where fitted spectral parameters with and without using gain fit are shown for
comparison. It turns out that the slope of the best fit for LAXPC20 was always
within 2% of unity, which implies that this is largely taken care of by the
gain shift estimated from the calibration source. However, the offset in the
gain fit was found to change systematically with time, reaching a value of
$-0.5$ keV by now. This may be expected, as the offset was not calculated from
the calibration source. The inclusion of the gain fit improved the fit
significantly and is recommended for all spectral fits. It can be seen that
the fitted parameters have held steady during the last 5 years, but the
$\chi^{2}$ values for the fits have increased with time. Initially, 1%
systematics was enough to get an acceptable $\chi^{2}$, but now up to 3%
systematics is required for LAXPC20. This could be because of the degradation
in energy resolution. An example of the fit for an observation during
September 2020 is shown in Figure 6. An estimate of the systematic error in
the fitted parameters for the Crab spectra can be obtained by taking the
standard deviation over all measurements. For the power-law index we get the
values $2.099\pm 0.041$, $2.088\pm 0.013$ and $2.136\pm 0.043$ for the three
LAXPC detectors. Similarly, the normalisations are $8.15\pm 0.74$, $8.10\pm
0.31$ and $8.90\pm 0.89$. Some of the variation in normalisation could be due
to differences in pointing offset between observations.
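The role of the gain-fit offset can be illustrated outside Xspec with a toy fit: grid-search an energy offset and, for each trial, fit the power law linearly in log-log space. The function and grid below are our own construction, not the pipeline's:

```python
import numpy as np

def fit_powerlaw_offset(energy, flux, offsets):
    """For each trial energy offset, fit log(flux) linearly against
    log(energy - offset) (slope = -index, intercept = log norm) and keep
    the offset with the smallest squared residual.
    Returns (norm, index, offset)."""
    log_flux = np.log(flux)
    best = None
    for off in offsets:
        e = energy - off
        if np.any(e <= 0):
            continue
        slope, intercept = np.polyfit(np.log(e), log_flux, 1)
        resid = log_flux - (slope * np.log(e) + intercept)
        sse = float(resid @ resid)
        if best is None or sse < best[0]:
            best = (sse, float(np.exp(intercept)), float(-slope), float(off))
    return best[1], best[2], best[3]
```

On synthetic Crab-like data the three parameters are recovered jointly, showing how an unmodelled energy offset would otherwise bias the fitted index.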
Figure 5: The results of fits to the Crab spectrum observed by LAXPC20 during
the last 5 years are shown as a function of days from the launch of AstroSat.
Results with and without applying the gain fit are shown. Figure 6: A fit of a
powerlaw model with gain fit to the Crab spectrum observed by LAXPC20 during
September 2020. The lower panel shows the residuals. The reduced $\chi^{2}$ of
the fit with 3% systematics is 2.03. Figure 7: An example of the diurnal
variation observed in the fitted Crab spectral parameters from LAXPC10 (shown
by triangles) and LAXPC20 (shown by circles) during an observation in January
2018 as a function of time. Figure 8: The results of fitting the Crab spectra
observed by LAXPC20 during a long observation in September 2020. Marginal
improvement in the spectral parameters as well as in the fitting statistics
can be noted when a powerlaw with a Gaussian at the Xe K X-ray energy $\sim
30$ keV (shown by solid circles) is used compared to a simple powerlaw (shown
by hollow circles).
It is clear that the fitted parameters for the Crab spectrum are stable over a
long time scale. Figure 7 shows the variation over a short time scale of a few
days during January 2018 using data for individual orbits. It is clear that
there is a diurnal variation in the fitted parameters, which is similar to
that seen in the background, as shown in the next section. This variation is
likely due to variation in the background and to the discrepancy between the
background estimated from the background model and the actual background. Some
of the variation could be due to a shift of the GTI with orbit: the period of
Earth occultation drifts with time across the AstroSat orbital phase, so some
orbits have a larger contribution from the region near the SAA passage, and
these may show diurnal variation. Since the LAXPC spectra often show an escape
peak around 30
keV due to Xe K X-rays, we also attempted a fit with an additional Gaussian
peak around this energy to account for this feature and the results are shown
in the Figure 8. This resulted in some improvement in the fit and also reduced
the amplitude of diurnal variation, but significant variations were still
seen. The addition of a Gaussian component has been used in some analysis of
LAXPC data to remove the feature in the spectrum around 30 keV (Sreehari et
al. 2019; Sridhar et al. 2019).
Figure 9: The results of fitting the Crab spectra observed by LAXPC20 during a
long observation in January 2018 after removing the time intervals near the
SAA passage to reduce uncertainties in the background estimates. The fit
includes the gain fit and a Gaussian around 30 keV.
To identify the cause of the observed diurnal variation in the fitted
parameters we repeated the exercise for the January 2018 Crab observations, by
removing the time intervals that were within 600 s of entry into or exit from
the SAA. This should reduce the background uncertainties. With this
modification the diurnal trend is no longer clear, as shown in Figure 9. These
fits included the gain fit as well as a Gaussian around 30 keV, and hence
these results should be compared with those in Figure 8. However, with the
reduced exposure due to truncating the GTI, the errors in the fitted
parameters are larger and the net range of the fitted parameters is not
substantially reduced. It appears that some of the diurnal variation could be
due to the uncertainties in the background model discussed in the next
section. Although the Crab flux is much larger than the background at low
energies, at high energies it becomes comparable to the background and hence
can be affected by background uncertainties.
## 3 Detector Background
To determine the detector background, the instrument is pointed to a direction
where there are no known X-ray sources (Antia et al. 2017). Since the
background is found to show a variation over a time of about 1 day, all
background observations cover an interval of at least 1 day. The background
counts also change during each orbit, increasing near the SAA passage. These
variations are fitted to the latitude and longitude of
satellite as explained by Antia et al. (2017). To monitor the long term
variation in the background, the background observations are repeated about
once every month. However, the variations in the background counts and
spectrum are too complex to be captured by the models, and some of these
complexities are described in this section.
Figure 10: The total count rate in background corrected for gain shift as a
function of time for LAXPC10 and LAXPC20.
The gain drift in the detector also changes the background, and for
spectral studies the background spectrum is corrected for this shift in the
software. Even after correcting for the gain shift, the background counts have
been changing with time and the results are shown in Figure 10. The LAXPC10
results are shown until March 2018 only, as after that the gain has changed
significantly. LAXPC30 results are not shown as the counts were decreasing due
to reduction in pressure and it is difficult to correct for large variations
in the gain. It can be seen that the variation is similar in both detectors
during the overlapping time and the counts have been generally increasing with
time. The reason for this increase is not clear. Some increase may be expected
from induced radioactivity, but it is not clear why it becomes nearly constant
over some time intervals. There is also a significant scatter about the best
fit curve, which could be due to various factors discussed below.
Figure 10 shows the long term variation in the total count rate during the
background observations, but there are short term variations also during each
orbit as well as some diurnal variations during the course of a day. To show
these variations Figure 11 shows the variation in the count rate for a long
background observation during July 2018. The diurnal variation can be seen
clearly in this figure. Figure 12 shows the light curves during a few
background observations during the last five years. It is clear that the
diurnal variation is present in all observations but there is some evidence
that the amplitude of variation has increased with time. The background model
used to generate the background light-curve has an option to remove the
diurnal variation with a period of 1 day, which can be applied if the
observation covers at least 1 day. For short observations this option tends to
remove a real trend in the light-curve and hence is not applied; even for
longer observations it can remove some real variation if that variation
happens to have a similar periodicity.
Figure 11: The light curve for a background observation during July 2018 in
LAXPC20 with a time-bin of 32 s. Figure 12: The light curves for different
background observations as marked with the month and year in the respective
panels, as observed by LAXPC20 with a time-bin of 32 s.
Figure 13: The ratio of spectrum during different times of background
observation with respect to that during a ‘low’ orbit for LAXPC20. The left
panel shows the result for February 2017 background observation, while the
right panel shows that for the September 2020 observation. The red line in the
top panels shows the ratio averaged over the ‘high’ orbit, while that in the
bottom panels shows the ratio for the entire observation, including all orbits. The black
lines show the ratio during the first 600 s of the orbit, the cyan line shows
that during the last 600 s, while the blue line shows that during the middle
part of the orbit.
During an orbit the counts are generally at a minimum between the two SAA
passages and tend to increase as the satellite approaches the SAA, as well as
when it exits the SAA. However, the magnitude of variation near SAA passage is
more complex. In general, for orbits where the counts are near maximum in the
diurnal trend, the counts have a sharp spike as the satellite exits the SAA,
while the spike when it approaches the SAA is of much smaller magnitude. On
the other hand, for orbits where the counts are near minimum of the diurnal
trend, the two branches, towards the entry to SAA and while exiting the SAA
are comparable in magnitude. It is possible to avoid some of these artefacts
by cutting off the interval surrounding the SAA passage from GTI. However, the
software does not implement this as it can reduce the exposure time
significantly, which may not be desirable in some studies, e.g., study of
bursts. For more sensitive studies it may be advisable to remove 600 s on
either side of SAA from the GTI.
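The suggested trimming can be sketched as a small routine. This is an illustrative Python sketch, not part of the LAXPC software; it assumes GTIs are held as (start, stop) pairs in seconds and that every interval begins at an SAA exit and ends at an SAA entry.

```python
SAA_MARGIN = 600.0  # seconds to drop on either side of an SAA passage

def trim_gti(gtis, margin=SAA_MARGIN):
    """Shrink each good-time interval by `margin` seconds at both ends.
    Intervals shorter than 2*margin are dropped entirely."""
    trimmed = []
    for start, stop in gtis:
        if stop - margin > start + margin:
            trimmed.append((start + margin, stop - margin))
    return trimmed
```

In reality GTI boundaries can also arise from Earth occultation or pointing constraints, so a real implementation would trim only those boundaries that coincide with SAA entry or exit.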
All these figures only show the total background count rate, but the spectrum
does not simply scale by this rate and hence to look for the variation in the
background spectrum, we selected the background observations during February
2017 and September 2020 and calculated the spectrum during a few parts of the
observation. For reference, we use the spectrum obtained during an orbit when
the count rate was close to the minimum (referred to as ‘low’) of the diurnal
variation, and another orbit when the count rate was close to the maximum
(referred to as ‘high’). Here an orbit is defined as the period between two
consecutive passages through SAA. Figure 13 shows the ratio of counts in the
spectrum with respect to the average spectrum during the ‘low’ orbit. The red
curve in the top panel shows the ratio when spectrum is averaged over the
entire ‘high’ orbit, while the red curve in the bottom panel shows the ratio
when the spectrum is averaged over the entire observation covering all orbits.
The black lines show the spectrum during the first 600 s of the orbit just
after the satellite comes out of SAA. The cyan line shows the ratio when the
spectrum is taken over the last 600 s before the satellite enters the SAA,
while the blue line shows the ratio for spectrum during the middle part of the
orbit, leaving out 900 s on both sides. The left panel shows the results for
February 2017 observations, while the right panel shows that during September
2020. It can be seen that during the ‘low’ orbit the difference is generally
within 10%. However, during the ‘high’ orbit all curves show a higher ratio
and also show a peak around 30 keV which is the Xe K X-ray energy. Further,
during the initial part of the orbit the flux is much higher at all energies
with the maximum difference exceeding 50% for September 2020 observation. We
have checked that even if we restrict to the first 300 s of the orbit the ratio
is only slightly higher. Further, for the ‘high’ orbit the blue and cyan curves are
close, indicating that there is not much difference in the spectrum as the
satellite enters the SAA. On the other hand, for ‘low’ orbit the counts
increase as the satellite is about to enter the SAA region. It turns out that
the ‘high’ orbits are the ones where the passage through SAA occurs when the
satellite is near the south end of its range. Although the result is not
shown, the ratio of the spectrum during the two ‘low’ orbits is roughly
consistent with the expected ratio from Figure 10 except for a peak around 30
keV. We have also looked at similar ratio using individual layers of the
detector and the behaviour is similar to that in Figure 13. However, the count
rate in the top layer is about twice that in other layers. Hence, the
additional counts are larger in the top layer as compared to other layers.
Since the additional counts are larger after exit from SAA, some of these
could be due to induced radioactivity, while there could be additional
contribution from charged particles coming through the collimator. Thus it is
clear that there is a significant change in the background spectrum during
later times and most of the increase in background appears to be during ‘high’
orbits and for energy around 30 keV.
Figure 14: The residuals in the fit to background of LAXPC20 for the two
background observations obtained using the background model described by Antia
et al. (2017). The left panel shows the residuals in the light curve with a
time-bin of 32 s, while the right panel shows the residuals in the energy
spectrum.
Figure 15: The residuals in the fit to background for the two background
observations obtained using the background model for faint sources described
by Misra et al. (2020) which uses only the top layer of the detector. The top
panels show the residuals in the light curve with a time-bin of 100 s, while
the bottom panels show the residuals in the energy spectrum. The left panels
show the results for February 2017 background observation while the right
panels show the same for September 2020 observations.
The background model used in the software does account for the increase in the
count rate during the ‘high’ orbit to a large extent, but the spectrum is
scaled to the average counts and hence is likely to introduce a bump around 30
keV. Typical rms deviations in the fit to count rate are 10 s-1 when total
counts in all anodes are considered. For top layer in 3–20 keV energy range
this drops to about 1 s-1, which is comparable to statistical error for a
time-bin of 32 s, used in these fits. This may be expected as the deviations
are more prominent in energies above 20 keV. Figure 14 shows the residuals in
the background fit for the two observations described above using the
background model described by Antia et al. (2017). It can be seen that for the
restricted energy range using only the top layer, the residuals are roughly
consistent with the statistical errors, except for the high orbits during the
September 2020 observations. However, when all events are considered the
background model has significant residuals. Thus it is clear that for faint
sources it would be advisable to consider only top layer of the detector with
restricted energy range. Figure 15 shows the residuals in the background fit
for the same two observations using the background model for faint sources
described by Misra et al. (2020) which uses only the top layer of the
detector.
Since the background model performs better when only top layer of the detector
at low energies is used, it is advisable to use this option for faint sources.
This is justified as at low energies a large fraction of counts are registered
in the top layers. Figure 16 shows the energy dependence of relative fraction
of events in top layer (Anodes 1 and 2) and the top two layers (Anodes 1 to 4)
as compared to all layers (Anodes 1 to 7). It can be seen that up to about 10
keV, almost all events are registered in the top layer. Even at 20 keV about
50% of events are registered in the top layer and about 75% in the first two
layers. Thus for studies at low energies it is advisable to use only the top
layer of the detector, as it reduces the background. An alternative is to drop
the data during the ‘high’ orbits to avoid this contamination. This would
reduce the duty cycle of the observations and if the observation is for a
short duration, the entire part may be during the ‘high’ orbits. The
fluctuations in the background determine the sensitivity of LAXPC for faint
sources, as discussed in the next section.
Figure 16: The relative efficiency of only top layer and the top two layers of
LAXPC detector as compared to when all layers are used.
Figure 17: The light curve during geomagnetic storms on September 8, 2017
(left panel) and May 28, 2017 (right panel). The red points show the part of
the light curve which would be outside the GTI.
Since the detector background increases close to the SAA passage, the
contribution is most likely from the flux of charged particles. The detector
has a veto anode and shield on three sides to protect from charged particles
entering from those sides, and on the two smaller faces there is only a
shield, which would offer some protection. However, on the top side there is
no veto layer or shield, unlike in RXTE/PCA which had a propane layer (Jahoda
et al. 2006). As a result, there is little protection, other than a thin Mylar
window, against charged particles coming from the top along the collimator. The
opening angle of the collimator is about $1^{\circ}$, which gives a solid angle
of about 0.0003 sr. Thus even for a flux of 1 particle cm-2 s-1 sr-1, the
detector with an effective area of 2000 cm2 would record 0.6 events per s. This
level of flux is entirely possible even outside SAA, while close to SAA the
flux could be larger.
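The estimate above follows from the small-angle cone approximation $\Omega \approx \pi\theta^{2}$ for half-angle $\theta$. A quick Python check (the 0.0003 sr in the text is a rounded value, so the unrounded product comes out slightly below 0.6 events per second):

```python
import math

half_angle = math.radians(0.5)           # half of the ~1 degree opening
solid_angle = math.pi * half_angle**2    # ~2.4e-4 sr, quoted as ~0.0003 sr
area = 2000.0                            # effective area, cm^2
flux = 1.0                               # particles cm^-2 s^-1 sr^-1

rate = flux * area * solid_angle         # ~0.5 events/s (0.6 with the rounded solid angle)
```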
During a geomagnetic storm the charged particle flux goes up and many more
counts are recorded. Figure 17 shows the count rate in detectors during two
geomagnetic storms on September 8, 2017 ($Kp=8^{+}$) and May 28, 2017
($Kp=7$). The $Kp$ index in the parentheses gives a measure of strength of
geomagnetic storm. The September 8, 2017 event was the strongest so far during
the AstroSat operation. The red points mark the times which would be outside
the GTI due to the normal SAA definition and would not be considered for analysis.
Thus it can be seen that geomagnetic storms can yield significant counts near
SAA passage. The most affected regions in this case are those where SAA
passage occurs during the north end of the range. For weaker storms, the
effect will not be easily seen in light-curve as the net increase would be
smaller but such storms can be frequent during high activity part of the solar
cycle. Even for smaller geomagnetic storms, counts of the order of 10 s-1 can
be added before the SAA passages. If such an event occurs during observation of a
variable source, it would be almost impossible to separate it out from
variation in source counts. Any burst seen close to SAA passage can be suspect
and would require further investigation. During the last two years the solar
activity has been low and such storms are rare, but in the coming year the
solar activity is likely to pick up and more such events would be seen.
## 4 Detector Sensitivity and Limitations
There is no difficulty in studying bright sources. The flux limit for faint
sources is determined by the fluctuations in the detector background and
ability of background model to match these variations. Even for long
observations where the spectrum is averaged over a long time, the intrinsic
variations in background cannot be modelled satisfactorily and it limits the
sensitivity of the detector for faint sources. From the discussion in the
previous section we can see that at least a few counts per second from the
source would be required to get any meaningful results. The actual limit would
obviously depend on the level of details that need to be studied and the
fluctuation in background during actual observation. The Crab observation
yields a count rate of about 3000 s-1 in each detector, which gives a
sensitivity limit of about 1 mCrab for faint sources that can be studied. For
reference, low energy flux of about $10^{-11}$ erg cm-2 s-1 gives a count rate
of 1 s-1. The same difficulty arises even for relatively bright sources at
high energies where the count rate can be much less than that in the
background. The upper limit on energy to which the spectrum can be studied
depends on the source and the extent of details that are required. In the
following subsections we illustrate some limitations and capabilities of LAXPC
for studying different properties of X-ray sources. Instead of giving limit on
flux etc., we illustrate the capabilities and limitations by giving some
examples. All the results presented in this section are obtained using the
software described in Section 5.2 with default parameters, except for choice
of energy range and anodes as specified.
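The flux-rate conversions quoted above are simple scalings; a sketch in Python using the approximate figures from the text (these are order-of-magnitude calibration values, not precise constants):

```python
crab_rate = 3000.0             # counts s^-1 per detector from the Crab
mcrab_rate = crab_rate / 1000  # so 1 mCrab corresponds to ~3 counts s^-1

# A low-energy flux of 1e-11 erg cm^-2 s^-1 gives ~1 count s^-1, so a
# source at the ~1 mCrab sensitivity limit corresponds to roughly:
flux_per_count = 1e-11                     # erg cm^-2 s^-1 per (count s^-1)
limit_flux = mcrab_rate * flux_per_count   # ~3e-11 erg cm^-2 s^-1
```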
Figure 18: The light curve of NGC 4593 during the AstroSat observation
starting on July 14, 2016 in SXT and the three LAXPC detectors with a time-bin
of 100 s. Figure 19: The Cross-correlation between SXT and LAXPC20 light-curve
of NGC 4593 is shown as a function of time delay. The red line shows the fit
with 2 Gaussians. Figure 20: The spectrum of NGC 4593 during the AstroSat
observation starting on July 14, 2016 in the three LAXPC detectors. Figure 21:
The fit to the combined spectrum from SXT and LAXPC20 for NGC 4593 is shown
with the bottom panel showing the residuals.
### 4.1 Light Curve and Spectrum
For faint sources typically most flux is observed at low energies and it is
better to use only the top-layer of the detector and further restrict the
energy range to 3–20 keV. This reduces the background counts by a factor of
10, thus improving the signal to noise ratio. To illustrate the limitation we
have selected an AstroSat observation of an Active Galactic Nucleus (AGN), NGC
4593, on July 14, 2016 (ID: 20160714_G05_219T01_9000000540). Figure 18 shows
the LAXPC light-curve using only top layer and energy range of 3–20 keV. This
source shows a count rate of about 5 s-1 in all three detectors. For
comparison the light curve at low energies from the SXT instrument is also
shown. All light curves are with a time-bin of 100 s to reduce the statistical
error. It can be seen that all LAXPC detectors and the SXT instrument show
similar variation. The SXT being an imaging instrument has low background and
further it operates at lower energy of 0.3–10 keV, where the flux is larger.
Hence, it is expected to give a reliable light-curve for this source. The
cross-correlation tool crosscor of HEASoft v6.28 was used to estimate the time
lag between the LAXPC20 (3–20 keV) and SXT (0.5–3.0 keV) light curves, which
is shown in Figure 19. A standard plot of cross-correlation versus time delay was
generated using these energy bands with time resolution of 99.86 s with 512
intervals. Two Gaussian models were fitted to the cross-correlation to
evaluate the time lag. The resulting fit shows a clear time delay of about 400
s indicating that hard photons lag behind the soft photons (Brenneman et al.
2007).
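The lag estimate relies on locating the peak of the cross-correlation function. crosscor does this within HEASoft; the underlying principle can be sketched in Python (illustrative only, assuming two evenly binned, gap-free light curves):

```python
import numpy as np

def cross_correlation_lag(soft, hard, dt):
    """Return the delay of `hard` relative to `soft` (positive if the
    hard band lags) from the peak of the normalised cross-correlation
    of two evenly binned light curves with bin width `dt` seconds."""
    s = (soft - soft.mean()) / soft.std()
    h = (hard - hard.mean()) / hard.std()
    cc = np.correlate(h, s, mode="full") / len(s)
    lags = (np.arange(len(cc)) - (len(s) - 1)) * dt
    return lags[np.argmax(cc)]
```

In practice one fits a smooth model (the text uses two Gaussians) to the cross-correlation near its peak rather than taking the raw argmax, to reduce sensitivity to noise.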
Figure 20 shows the spectrum observed in each of the LAXPC detectors. There is
a reasonable agreement between the three detectors at low energies. The
LAXPC20 spectrum continues at high energy also, probably because the
background is more reliably estimated. To test the spectrum, the combined
spectrum from SXT and LAXPC20 covering 0.5–80 keV was fitted using Xspec to
the model phabs*(diskbb + gaussian + powerlaw) and the resulting fit is shown
in Figure 21. The model fit yielded a reduced chi-squared of 1.45 (639/442)
with 1.5% systematics. Also, significant residuals were seen in the 10–30 keV
range, which might be due to a broad reflection component. The disk
temperature ($T_{in}$) and photon index ($\Gamma$) obtained were $0.13\pm
0.01$ keV and $1.56\pm 0.01$, respectively, which are consistent with the
results of Ursini et al. (2016).
### 4.2 Pulsation
Several X-ray pulsars have been studied by LAXPC and in general there is no
difficulty in estimating the frequency and spin-up rate if the change in
frequency is significant. To illustrate the performance, we consider the
AstroSat observation of SMC X-2 on May 7, 2020 (ID
20200507_T03_205T01_9000003652), when the average count rate from the source
was 2.1 s-1. Nevertheless, it was possible to estimate the spin period of
$2.377441\pm 0.000016$ s at the beginning of observation (MJD 58976.575405)
and the spin-up rate of $(3.9\pm 1.1)\times 10^{-11}$ Hz s-1 using LAXPC20 in
single event mode and energy range of 3–20 keV. The resulting pulse profile
shown in Figure 22 can be compared with other observations (Li et al. 2016;
Jaiswal & Naik 2017). Thus, it is clear that even with a low count rate it is
possible to study pulsations.
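The pulse profile in Figure 22 is obtained by folding event arrival times on the measured period. The folding step itself is simple; a sketch in Python (illustrative, not the LAXPC software):

```python
import numpy as np

def fold_events(times, period, nbins=32, epoch=0.0):
    """Build a pulse profile: histogram of event arrival times versus
    pulse phase for a trial `period` (seconds), relative to `epoch`."""
    phase = ((times - epoch) / period) % 1.0
    profile, _ = np.histogram(phase, bins=nbins, range=(0.0, 1.0))
    return profile
```

A spin-up rate such as the one quoted above is usually handled by adding a quadratic term to the phase model before folding.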
Figure 22: The pulse profile of SMC X-2 as obtained by LAXPC20 in the energy
range 3–20 keV.
Apart from coherent pulsation, LAXPC has been extensively used to study Quasi
Periodic Oscillations (QPO) in a wide variety of sources, with frequency
ranging from 1 mHz in 4U 0115+63 (Roy et al. 2019) to 815 Hz in 4U 1728–34
(Verdhan Chauhan et al. 2017). The only problem with detecting QPO arises if
their frequency is close to the orbital frequency of AstroSat ($\sim 0.15$
mHz) or its harmonics. Up to 10 harmonics of orbital frequency can be easily
seen in the power density spectrum and need to be accounted for while
identifying QPO frequencies. Similarly, for X-ray pulsars there can be
interference from pulse frequencies or its harmonics. But this can be easily
removed by modelling the pulse including harmonics and removing their
contribution, e.g., for GRO J2058+42 (Mukerjee et al. 2020a).
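When searching for QPOs, frequency bins near the orbital frequency and its harmonics can be flagged mechanically before inspecting the power density spectrum. A sketch in Python (illustrative; the ~0.15 mHz orbital frequency and the tolerance width are assumptions, not LAXPC software parameters):

```python
import numpy as np

ORBITAL_FREQ = 1.5e-4  # Hz, approximate AstroSat orbital frequency

def harmonic_mask(freqs, f0=ORBITAL_FREQ, nharm=10, width=1e-5):
    """Boolean mask marking frequency bins within `width` Hz of the
    first `nharm` harmonics of `f0`; candidate QPO peaks falling in
    these bins should be treated with caution."""
    mask = np.zeros(len(freqs), dtype=bool)
    for k in range(1, nharm + 1):
        mask |= np.abs(np.asarray(freqs) - k * f0) < width
    return mask
```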
### 4.3 Cyclotron Resonant Scattering Features
Detection and studies of Cyclotron Resonant Scattering Features (CRSF) has
always been of great interest for direct measurements of magnetic field of the
neutron stars and to understand structure of the line forming regions in the
accretion column in X-ray binary pulsars. Studying CRSF in X-ray binary
pulsars was one of the important defined objectives of the LAXPC instrument,
given its superior detection efficiency and high time resolution capabilities
over the wide 3–80 keV energy band. Detection of CRSF
requires accurate spectral response and a well defined continuum model which
enables detection of absorption features in the source spectrum due to CRSF.
For a genuine detection of absorption features in the spectrum, and to
minimise the possibility of false detection due to any unknown discrepancy of
the spectral response over the energy band, one can cross-verify the presence
of these features indirectly by comparing the ratio of the source spectrum to
the well calibrated power-law spectrum of the Crab (e.g., Mukerjee et al.
2020a). One can also quantify energy non-
linearity, if present, in the spectral response by analysing the Crab ratio
plot from known CRSF sources. The LAXPC spectral responses were generated
carefully by appropriately modelling and accounting for the 30 keV
fluorescence photons produced by Xe K shell interactions of incident X-rays.
However, some feature around 30 keV is still seen in most spectra, which can
interfere with detection of CRSF. Nevertheless, LAXPC has detected CRSF in many X-ray
pulsars such as in GRO J2058+42 (Mukerjee et al. 2020a), Her X-1 (Bala et al.
2020) and Cep X-4 (Mukerjee et al. 2020b) with energies in 30–40 keV region.
As an example of detection capability of LAXPC at the highest energy, we
attempt to study CRSF in GRO J1008–57, where such feature has been detected at
90 keV (Ge et al. 2020). We used the AstroSat observation of this source on
November 7, 2017 (ID: 20171107_A04_024T04_9000001670) with an exposure time of
43 ks to investigate CRSF at such a high energy close to the ULD of LAXPC20.
Figure 23: The fit to LAXPC20 spectrum of GRO J1008–57 including CRSF at 61
and 84.4 keV with the ratio of the data and fitted model shown in the lower
panel. The position of 61 keV feature is marked by an arrow.
Figure 23 shows a fit to the GRO J1008–57 spectrum derived from the AstroSat
observation. The spectral data were reasonably described by the combined model
phabs(gaussian+powerlaw)highecut*2gabs. The ratio of the data and the fitted
model is also shown below for clarity. The power-law with high energy cut-off
model describes the continuum reasonably well, with derived parameter values
typical of an X-ray binary pulsar. The hydrogen
column density value was frozen at $n_{H}=1.22\times 10^{22}$ cm-2 as
calculated using HEASARC tool for the source. The well known Fe-line emission
was detected and its energy and width were fixed during the fit at 6.5 keV and
0.3 keV, respectively. The photon-index was found to be
$1.13^{+0.40}_{-0.07}$, E-cutoff at $7.96^{+0.75}_{-0.83}$ keV and E-fold at
$26.65^{+0.78}_{-0.79}$ keV. The centroid energy of CRSF was detected at
$84.4^{+7.3}_{-4.0}$ keV with the line-depth of $10.1^{+5.2}_{-4.4}$, while
the width was frozen at 5.5 keV to constrain its upper limit within ULD. The
reduced $\chi^{2}$ of the model fit was 1.6 for 128 degrees of freedom. A
systematic error of 2.5% was added to account for uncertainties in the
response. The CRSF is thus detected with a significance of about $3\sigma$.
Even though this energy is near the upper limit of the nominal range of LAXPC,
it turns out that the ULD in LAXPC20 is around 100 keV and hence an absorption
feature could be seen in the spectrum around 85 keV. The presence of this
feature was also cross verified in the spectral ratio with Crab spectrum. The
detected CRSF energy is within the error limit of that reported by Ge et al.
(2020). Interestingly, in the AstroSat spectrum, an additional absorption
feature was also detected around $61.2^{+1.7}_{-1.5}$ keV which has not been
reported earlier.
### 4.4 Source Contamination
A major problem with LAXPC is its large field of view, with FWHM of about
$1^{\circ}$ (Antia et al. 2017), which allows multiple sources in the field of
view. If the contaminating source has an angular offset of $30^{\prime}$, then
its flux would be reduced by about half, which can give significant
contamination depending on relative flux from the two sources. Even at an
offset of $1^{\circ}$, 5% of the flux may be registered in LAXPC instrument, a
part of which may be attributed to the leakage of higher energy ($>50$ keV)
photons through the collimator. A test of this is provided by slew
observations, where many known sources are seen as a bump in the light curve.
There have been cases when the source was clearly visible even when the offset
was $65^{\prime}$. For example, during the slew observation on May 7, 2019
(ID: 202190507_SLEW_01234_9000002893), the light-curve in LAXPC20
(Figure 24) shows two clear peaks. The first peak with additional counts of
300 s-1 is due to GX 5–1 which had an offset of $51^{\prime}$. This source has
been observed several times by AstroSat and typical count rate is about 3000
s-1, thus it appears that about 10% of flux is registered at this offset. The
second peak with height of about 100 s-1 is due to GX 9+1 which had an offset
of $62^{\prime}$. This source has also been observed twice by AstroSat with a
count rate of about 1800 s-1. Thus even at this offset about 5% of flux is
registered. Hence it is recommended that ideally there should be no
significant source within $75^{\prime}$ of the target source.
Figure 24: The light curve during a slew operation on May 7, 2019. The two
peaks in the light curve are due to GX 5–1 (offset $51^{\prime}$) and GX 9+1
(offset $62^{\prime}$).
Some well known examples of sources affected by contamination are GRS
1758–258, 4U 1630–472 and IGR J17091-3624. For GRS 1758–258 there is another
source, GX 5–1, $40^{\prime}$ away, which is more than 10 times brighter. As a
result, the contaminating source overwhelms the target. Even if the
observation is made with an offset of $30^{\prime}$ on the opposite side, GX
5–1 can still contribute 5% of its on-axis flux which would be comparable to
50% of the GRS 1758–258 flux. Thus it is difficult to study this source using
LAXPC. Further, with such offset the target would be outside the field of view
of the SXT and hence it would not be possible to do broadband spectral
studies. Similarly, 4U 1630–472 has contamination from AX J1631.9–4752 which
is $35^{\prime}$ away. In this case, both sources have comparable flux and it
may be possible to subtract the contribution from contaminating source (Baby
et al. 2020). The 1310 s pulsation from the contaminating source was used to
estimate its flux. There are also a few other sources in the field of view,
but they are probably transients and may not have been in outburst at the time
of observation; this needs to be checked. For IGR J17091–3624, there is a
contamination from GX 349+2, $40^{\prime}$ away, which could be a few times
brighter than the target. It may be possible to remove the contamination, if
simultaneous observations of the contaminating source are available from
other instruments, or an estimate of its flux is available from monitoring
instruments like MAXI or Swift/BAT (Katoch et al. 2020).
Figure 25: The relative efficiency of LAXPC20 detector with offset of
$30^{\prime}$ (black line) and $40^{\prime}$ (red line) as compared to on-axis
response.
To remove the contamination, the contaminating source needs to be modelled and
a response with offset needs to be used to calculate its contribution to the
observed spectrum. The responses with offset have been calculated for LAXPC20.
Figure 25 shows the relative efficiency of LAXPC20 for offset of $30^{\prime}$
and $40^{\prime}$ with respect to on axis response. It can be seen that
although at low energy the efficiency can be somewhat low, it increases with
energy. These responses have been averaged over a circle with a given offset.
There would be some dependence on the angle with respect to detector side. The
efficiency would be higher when the source is at the same offset along the
diagonal of the collimator cells, as compared to that along the side. This is
not accounted for as it is not possible to specify this angle during pointing.
Figure 26: The light curve during a slew observation on August 8, 2019 is
shown in the left panel. The right panel shows the fit to the spectrum of the
possible transient source taken over the duration of the peak in the light
curve.
### 4.5 Possible Detection of a New Transient
LAXPC can be used to scan the sky for new sources. Although a pre-planned
scan has not been attempted so far, between two observations the satellite
slews from one source to another, and this is similar to a scan. All slew
observations have been analysed, and during one of these observations on
August 8, 2019 (ID: 20190808_SLEW_01234_9000003079) a peak was seen in the light
curve around 13:13 UTC when the instrument was pointing at RA 288.717 and Dec
17.367. Figure 26 shows the light curve for the observation and the source
spectrum taken during the peak. The spectrum was fitted to a power-law form
with a systematic error of 1%. The resulting fit (Figure 26) with a reduced
$\chi^{2}$ of 1.2 yielded a photon index of $1.67\pm 0.09$. A ToO to observe this
region was proposed and the observation was carried out on August 15, 2019,
but no signal was seen during that observation.
## 5 Analysis Software
Three different software packages are available to process the level-1 data
to obtain science products: LAXPClevel2DataPipeline (Section 5.1), written in
C++, and LaxpcSoft, written in Fortran. The latter includes two related
packages: the basic software, consisting of the two Fortran programs laxpcl1
and backshift developed by the TIFR team (Section 5.2), and a suite of Fortran
programs based on the basic programs for different tasks, developed by the
AstroSat Science Support Cell at IUCAA (Section 5.3). These are described in
the following subsections.
### 5.1 LAXPClevel2DataPipeline
The LAXPC Level-2 data pipeline version 3.1
(https://www.tifr.res.in/~astrosat_laxpc/LAXPC_lvl2_pipeline.html) was
developed with support from ISRO Space Applications Centre (SAC), Ahmedabad.
The pipeline works on the level-1 data downloaded from the Indian Space
Science Data Centre (ISSDC) archive. The pipeline generates various level-2
output data files for the scientific studies and other ancillary files which
are inherited from the level-1 data in FITS format. Before producing these
files, the pipeline checks for the data frame sequence order, duplicate frames
or possible frame corruption during data transfer. If needed, the data frames
are reordered. The level-2 files generated for the scientific studies are:
event file, Good Time Interval (GTI) file, lightcurve file and spectrum file.
The other ancillary files inherited from the level-1 data are: Time Calibration
Table (TCT) file, Make Filter (MKF) file giving information about the
satellite position etc., Low Bitrate Telemetry (LBT) house-keeping file, orbit
file and attitude file. Additionally, the pipeline also has independent
routines to reprocess the lightcurve and spectrum which can be generated as
per requirements. Currently, this pipeline has only limited support.
### 5.2 laxpcl1 and backshift
These are two standalone Fortran programs
(https://www.tifr.res.in/~astrosat_laxpc/LaxpcSoft.html) for processing the
level-1 data available from the AstroSat archives. The readme files included
with the package give detailed usage instructions. The tar file also includes
the background files required for all background observations, while the
detector response files for all detectors are available from the LAXPC website.
The level-1 inputs required include the Event Analysis (EA) mode files, which
record each detected event, and the Broad Band counting (BB) mode files, which
give the counts of various categories of events in predefined time-bins. In
addition, the .mkf file giving the orbital and pointing parameters and the time
calibration table (tct) file giving the conversion from instrument time to UTC
are required. These programs can handle data from multiple orbits, after
removing the overlapping part of the data between consecutive orbits, but treat
each detector independently. They also correct for problems with frame sequence
ordering as well as event time ordering, and report statistics of the data
processed, including the total exposure, the fraction of gaps in the data and
any frame-loss detected. In most observations the frame-loss is less than 0.1%,
but in some cases it can be a few percent or more; it may then be necessary to
discard the time intervals with large frame-loss. Orbit-wise statistics of
frame-loss are available in the ‘.frame’ file, which can be listed using
‘grep Frame lxp2level2.frame’.
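The orbit-wise frame-loss bookkeeping described above can be screened automatically. The Python sketch below assumes a hypothetical line format for the '.frame' file ("Frame loss: \<orbit\> \<percent\>"); the real layout should be checked against an actual lxp2level2.frame file before use.

```python
# Hypothetical sketch: flag orbits whose frame-loss exceeds a threshold.
# The '.frame' line format assumed here is illustrative only; verify it
# against the actual lxp2level2.frame output of laxpcl1.

def orbits_to_discard(frame_lines, max_loss_percent=1.0):
    """Return orbit identifiers whose frame-loss exceeds the threshold."""
    bad = []
    for line in frame_lines:
        if "Frame" not in line:
            continue
        parts = line.split()
        orbit, loss = parts[-2], float(parts[-1].rstrip("%"))
        if loss > max_loss_percent:
            bad.append(orbit)
    return bad

lines = [
    "Frame loss: 21034 0.05%",
    "Frame loss: 21035 3.20%",
    "Frame loss: 21036 0.08%",
]
print(orbits_to_discard(lines))  # only the orbit with loss above 1%
```

The flagged orbits can then be removed from the GTI list before the second pass through laxpcl1.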
The program laxpcl1 is the main routine which generates the light curve,
spectrum, event file and GTI file giving the list of GTIs. All output files
are in both ASCII and FITS format. It also generates the background spectrum
and light curve using the background model to estimate the background
contribution. A FITS file giving orbital parameters as required by the tool
as1bary (http://astrosat-ssc.iucaa.in/?q=data_and_analysis) to apply the
barycentric corrections to time is also generated. The channel range and
anodes to be used can be specified. To suppress some oscillations in the
spectrum, two channels are binned together for LAXPC10 and LAXPC30, while for
LAXPC20 four channels are binned. Thus the original spectrum in 1024 channels
is reduced to 512 or 256 channels. However, this binning is done at the end,
and the channel range input to laxpcl1 refers to the full range of 0–1023;
this should be accounted for when specifying the input. The events in the
event file are of two types: single events, where all the energy is deposited
in one anode, and double events, where the energy is deposited in two anodes,
of which at least one lies in the Xe K X-ray range. There is an option to
exclude double events. This may be useful in some cases, as the response is
better defined for single events, though the efficiency is somewhat reduced
(Antia et al. 2017). Since the ULD threshold is applied to each anode
separately, the double events can have energies exceeding ULD and in
principle, it is possible to study the spectrum up to about 110 keV, and these
are included in the response files. However, the efficiency of detector is
very low for these events. Since the event files can be large, there is an
option to suppress writing these files. This program generally requires two
passes, one to generate the GTI list and another to do the calculations using
this GTI file, which could be edited to choose any subset of intervals. To
generate the GTI file, it is advised to use a time-bin of at least 1 s. The
final light curve can be generated for any time-bin, though there is a limit
on the array dimension, which may truncate the light curve when a very small
time-bin is used. If the GTI file is already available, then only one pass
through laxpcl1 is needed.
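The pairwise (LAXPC10/30) and four-fold (LAXPC20) channel binning described above amounts to summing adjacent raw channels. A minimal Python illustration:

```python
import numpy as np

# Sketch of the channel binning described in the text: the 1024 raw channels
# are summed pairwise (LAXPC10/30) or in groups of four (LAXPC20), while the
# channel range given to laxpcl1 still refers to the full 0-1023 range.

def rebin_spectrum(counts, factor):
    """Sum adjacent channels in groups of `factor`; counts are preserved."""
    counts = np.asarray(counts)
    assert counts.size % factor == 0
    return counts.reshape(-1, factor).sum(axis=1)

spec = np.ones(1024)                   # flat toy spectrum, one count/channel
print(rebin_spectrum(spec, 2).size)    # 512 channels for LAXPC10/30
print(rebin_spectrum(spec, 4).size)    # 256 channels for LAXPC20
```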
The program backshift is used to account for gain shift between the source and
background observation after running laxpcl1. It uses the log of gain to
estimate the shift in gain and corrects the background spectrum to match the
gain during source observation. The correction is applied both to the
background spectrum and light curve. The background observation to be used has
to be specified in an input file for laxpcl1; in general, the one closest in
time to the source observation should be used, and backshift recommends which
background file to use. It also recommends which response file should be used
to fit the spectrum.
There are four sets of response files; the default is for all events and
anodes, and is typically the one recommended by backshift. Another set of
responses is for single-event mode, with ‘SE’ in the name. The remaining two
sets are for only the top layer, ‘L1’ and ‘L1SE’, for all events and only
single events, respectively. There is also an option to remove diurnal
variation in the light curve, which would not affect the spectrum. For
LAXPC30, response files are available for different densities, and the program
also recommends which response should be used; in some cases a neighbouring
density may also be tried. For LAXPC20, the program also recommends the value
of offset that may be used in the gain fit while fitting the spectrum. The
offset may be frozen to this value (and slope to 1.0) if there is a difficulty
in estimating the gain parameters using gain fit.
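As a toy illustration of the idea behind backshift (not its actual algorithm), a linear gain transformation, channel → slope·channel + offset, can be applied to a background spectrum by interpolation:

```python
import numpy as np

# Toy illustration only (not the actual backshift algorithm): remap a
# background spectrum onto the source gain with a linear channel
# transformation, shifted_channel = slope * channel + offset, via
# interpolation onto the original channel grid.

def apply_gain_shift(bkg_counts, slope=1.0, offset=0.0):
    n = len(bkg_counts)
    ch = np.arange(n, dtype=float)
    shifted = slope * ch + offset        # where each background channel lands
    return np.interp(ch, shifted, bkg_counts, left=0.0, right=0.0)

bkg = np.zeros(16)
bkg[8] = 100.0                           # a single background line
out = apply_gain_shift(bkg, slope=1.0, offset=2.0)
print(int(np.argmax(out)))               # the line moves from channel 8 to 10
```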
For faint sources or sources with soft spectrum, it may be advisable to use
the single event mode with energy restricted to 3–20 keV. For this the option
$\mbox{\tt ian}=1$ (for top layer), $\mbox{\tt nul}=-2$ (to apply background
fit for top layer) and $\mbox{\tt iev}=1$ or $-1$ (for selecting single events
only) should be used. The energy range to be used can depend on the source.
### 5.3 Software with Individual Routines
A wrap-around software package built on the primary one described above has
been developed to ease certain kinds of analysis of LAXPC data. The software
can be obtained from the AstroSat Science Support Cell web-page
(http://astrosat-ssc.iucaa.in/?q=laxpcData). Details of the software and
instructions for use are given in the README files included in the software.
Here we outline some of the basic highlights and the functionality of the
software.
The software can produce a combined, merged, clean event file (typically called
level2.event.fits) for all three LAXPC units, which is time sorted, allowing
for ease of timing analysis. Apart from Time, the other columns of the event
file are Layer number, LAXPC unit, Channel and Energy. The energy of an event
is estimated using the channel-to-energy conversion from the appropriate
response file. This allows the user to make appropriate selections, either
using the ftools command fselect or using the other routines of the software.
The event file also contains, as a separate data unit, the names of the
appropriate response files for the three LAXPC units as a function of time for
the observation.
A number of individual routines are provided which can run from the command
line with flags as inputs. Hence they allow for easy customized scripting by
the user. The spectra and lightcurves generated are compatible with high level
HEASoft tools such as powspec, lcurve, Xspec, crosscor etc. The individual
routines can be used to obtain:
1.
A Good Time Interval (GTI) file that takes into account Earth occultation and
SAA (South Atlantic Anomaly) passage.
2.
A merged orbit file that is required to make Barycentric corrections.
3.
Lightcurves at a given time-bin for user-specified multiple energy bands, with
a GTI file as input. Lightcurves can be generated for different combinations
of the LAXPC units and for all layers or only for the first layer.
4.
Spectra for each of the three units for the input GTI file. Spectra can be
generated for all layers or only for the first layer. Appropriate response
files are copied from the software database to the working directory and the
‘RESPFILE’ keyword in the spectra files is updated.
5.
Estimated background spectra for each LAXPC for the input GTI file. The
background estimation can be based either on the blank-sky observations
closest to the data or on the gain. All layers or only the first layer can be
chosen. The ‘BACKFILE’ keyword in the spectra file is changed to the resultant
background file.
6.
Estimated background lightcurve corresponding to the extracted lightcurve.
7.
A GTI file providing the time intervals when the flux in an input lightcurve
is within user-specified values. This is useful for flux-resolved
spectroscopy.
8.
Time-lag, fractional r.m.s., coherence and intrinsic coherence as functions of
both frequency and energy. These are computed directly from the event file
rather than from lightcurves and hence are computationally efficient for
high-frequency analysis. The routine also provides the power spectrum along
with the dead-time-corrected Poisson level. A subsidiary routine rebins the
power spectrum and converts it into an Xspec-readable format, which allows the
user to fit the power spectrum using Xspec models with full flexibility.
9.
Dynamic power spectra, i.e., power spectra computed for consecutive time
intervals. This is useful for following rapid variations in quasi-periodic
oscillation (QPO) frequency.
10.
An estimate of the background spectra and lightcurve using an alternative
method applicable to faint sources, as described by Misra et al. (2020).
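As an illustration of routine 7 above, a flux-selected GTI list can be built from a binned lightcurve by keeping contiguous runs of bins whose rate falls within the chosen bounds. This Python sketch is a simplified stand-in for the actual routine:

```python
import numpy as np

# Simplified sketch of flux-resolved GTI selection: keep contiguous runs of
# lightcurve bins whose rate lies within [lo, hi], returning (start, stop)
# intervals suitable for flux-resolved spectroscopy.

def flux_selected_gti(times, rates, lo, hi, dt):
    """times are bin centres with width dt; return list of (start, stop)."""
    keep = (rates >= lo) & (rates <= hi)
    gtis, start = [], None
    for t, k in zip(times, keep):
        if k and start is None:
            start = t - dt / 2
        elif not k and start is not None:
            gtis.append((start, t - dt / 2))
            start = None
    if start is not None:
        gtis.append((start, times[-1] + dt / 2))
    return gtis

t = np.arange(0.5, 10.5)                     # 1-s bins centred on 0.5..9.5
r = np.array([5, 6, 20, 21, 6, 5, 22, 5, 6, 5], float)
print(flux_selected_gti(t, r, 0, 10, 1.0))   # intervals with rate <= 10
```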
## 6 Science Goals of LAXPC Instrument: Status of their Realisation so Far
AstroSat was conceived as a multiwavelength observatory for simultaneous
observations of galactic and extragalactic sources in Visible, UV and X-ray
bands using a suite of four co-aligned instruments for studies of different
classes of galactic and extragalactic objects. Three co-aligned X-ray
instruments, sensitive in the X-ray energy region 0.5–150 keV were designed to
investigate temporal and spectral characteristics of the sources to elucidate
their nature and probe the complex radiation process operating in them.
The LAXPC instrument was designed to measure the intensity variations of
bright X-ray sources over a wide time scale, from 0.1 ms to minutes, days and
months. The X-ray binaries, with a neutron star or a black hole as the compact
object, were special targets for the LAXPC studies, as they exhibit a range of
periodic and aperiodic variations on almost all time scales. Accurate
determination of continuum energy spectra of X-ray binaries to decipher the
dominant radiation processes as well as to search for the presence of weak
spectral features known as Cyclotron lines that provide a measurement of the
magnetic field of the neutron star, is another major objective of LAXPC. To
achieve these objectives, LAXPC tags the arrival time of every detected photon
with an accuracy of 10 $\mu$s and also achieves a moderate energy resolution
over the entire range of 3–80 keV.
In this section we demonstrate how the various science goals of LAXPC
instrument have been met by giving some representative examples in each case.
A more detailed discussion of the scientific results is given by Yadav et al.
(2020). The LAXPC instrument was designed with the following objectives
(Agrawal et al. 2017):
1.
Detailed studies of stellar-mass and supermassive black holes: Several black
hole sources in both mass ranges have been studied. For example, using the
combined SXT and LAXPC spectrum, Mudambi et al. (2020) have derived the spin
of the black hole in LMC X-1. Similarly, Pahari et al. (2018) and Sridhar et
al. (2019) have estimated the mass and spin of the black holes in 4U 1630–472
and MAXI J1535–571. An example of a study of a supermassive black hole is the
blazar RGB J0710+591 by Goswami et al. (2020), which used the multiwavelength
capability of AstroSat by combining data from UVIT, SXT and LAXPC.
2.
Studies of periodic (pulsations, binary light curves) variabilities in X-ray
sources: The LAXPC instrument with its high time resolution capability is
ideally suited to measure the spin periods of the X-ray pulsars to a high
precision and deduce their derivatives. The spin-up or spin-down rates of
these pulsars are determined by many factors, including accretion and the
magnetic field, which in turn can be studied through the measured changes in
the spin period. Amin et al. (2020) measured the spin period and spin-up rate of
the neutron star in the LMXB source 3A 1822–371. The shape of the emitted
pulse depends on modes of accretion, geometry of accretion column and
configuration of its magnetic field with respect to an observer’s line of
sight. Therefore, such studies offer insight into the physical processes in
the vicinity of the pulsar. For example, Mukerjee et al. (2020a) studied the
spectral and timing properties of GRO J2058+42 using data from AstroSat during
a rare outburst in April 2019. LAXPC data have been used to study pulsations
over a wide range of periods, from 2.3 ms for the transient accretion-powered
millisecond X-ray pulsar SAX J1748.9–2021 (Sharma et al. 2020) to 604 s for 4U
1909+07 (Jaiswal et al. 2020). LAXPC is well suited to studying pulsar period
evolution, and hopefully more such studies will emerge in the future.
LAXPC also offers the possibility of inferring the evolution of the binary
period by studying the binary light curve over a long base line of years.
Using LAXPC observations of Cyg X-3 covering nearly one year, Pahari et al.
(2018) determined the binary orbital period to be $17253.56\pm 0.19$ s.
3.
Studies of QPOs and aperiodic variabilities in X-ray sources: LAXPC data have
been used to study QPOs in a number of X-ray sources covering a wide range of
frequencies. For example, Belloni et al. (2019) and Sreehari et al. (2020)
detected a clear HFQPO in the well-known black hole binary GRS 1915+105, whose
frequency varied between 67.4 and 72.3 Hz. A QPO of 90 mHz centroid frequency
was detected for the first time by LAXPC in GRO J2058+42 during its rare
outburst of 2019 (Mukerjee et al. 2020a). Apart from QPOs, the LAXPC
instrument with its high time-resolution also allows the study of the time lag
between different energies of X-rays. Because of wide energy coverage these
studies can be extended to energies of 30 keV and above. Misra et al. (2017)
have analysed LAXPC data from Cyg X-1 in the hard state to derive time lag
between Soft (5–10 keV) and Hard (20–30 keV) photons which increase with
energy for both the low and high frequency components. The event mode LAXPC
data allowed them to perform flux resolved spectral analysis on a time-scale
of 1 s, which clearly shows that the photon index increased from 1.72 to 1.80
as the flux increased by nearly a factor of two. Apart from QPOs, the neutron
star X-ray binaries show thermonuclear X-ray bursts, which have also been
studied with LAXPC. Verdhan Chauhan et al. (2017) used the LAXPC observation of
the LMXB 4U 1728–34 on March 8, 2016 to study a typical Type-1 burst of about
20 s duration. The dynamical power spectrum of the data in the 3–20 keV band
shows the presence of a burst oscillation whose frequency increased from 361.5
to 363.5 Hz. These results demonstrate the capability of the LAXPC instrument
for detecting millisecond variability even in short observations.
Similarly, Devasia et al. (2021) have studied thermonuclear bursts in Cyg X-2.
They have carried out energy resolved burst profile analysis as well as time
resolved spectral analysis for each of the 5 bursts that were observed.
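The dynamical power spectra used in such burst-oscillation and QPO studies are periodograms of consecutive lightcurve segments. A simplified Python sketch, with synthetic uniform events standing in for LAXPC data:

```python
import numpy as np

# Simplified sketch of a dynamic power spectrum: bin event times into
# consecutive segments and compute a Leahy-normalised periodogram for each.
# The synthetic events below stand in for real LAXPC data; for pure Poisson
# noise the powers scatter around the Leahy level of 2.

def dynamic_power_spectrum(events, tstart, seg_len, n_seg, dt):
    nbin = int(round(seg_len / dt))
    freqs = np.fft.rfftfreq(nbin, dt)[1:]              # drop the DC term
    powers = []
    for i in range(n_seg):
        t0 = tstart + i * seg_len
        counts, _ = np.histogram(events, bins=nbin, range=(t0, t0 + seg_len))
        ft = np.fft.rfft(counts)[1:]
        powers.append(2.0 * np.abs(ft) ** 2 / counts.sum())   # Leahy norm
    return freqs, np.array(powers)                     # powers: (n_seg, nbin//2)

rng = np.random.default_rng(0)
events = np.sort(rng.uniform(0.0, 64.0, 64000))        # ~1000 counts/s
freqs, powers = dynamic_power_spectrum(events, 0.0, 8.0, 8, 1.0 / 256)
print(powers.shape)                                    # (8, 1024)
```

A QPO would appear as excess power near its centroid frequency, drifting from segment to segment.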
4.
Low to moderate spectral resolution studies of continuum X-ray emission and
CRSF: One of the prime objectives of LAXPC is to measure the continuum energy
spectra of neutron star binaries to high precision, to detect the usually
faint signal of cyclotron lines. Using LAXPC observations, cyclotron lines
have been discovered in at least half a dozen pulsars. The detection of CRSFs
in the spectra of several pulsars observed with the LAXPC was discussed in
Section 4.3, demonstrating the LAXPC capability for such investigations. For
example, Mukerjee et al. (2020a) carried out a phase-resolved study of the
CRSF in GRO J2058+42, finding significant variation of the CRSF energy with
pulse phase. Bala et al. (2020) studied the secular variation in the energy of
the CRSF in Her X-1 by comparing their result with earlier observations.
5.
Search for transient X-ray sources by surveys in a limited region of the
galactic plane: So far a systematic survey has not been carried out, but
between two science observations the satellite slews from one source to
another. The slew data have been scanned for potential sources, but most
features were found to be associated with known sources, except for the one
instance described in Section 4.5.
The main goals of LAXPC have been realised to a certain extent. However, the
detection of new transient sources in limited regions of the galactic plane
through a dedicated survey is not yet planned. A large fraction of the LAXPC
and SXT data remains to be analysed. This could be addressed by encouraging
and involving more students and researchers in data analysis and by improving
our understanding of calibration issues pertaining to instrument gain,
background, spectral response, etc. The phenomenon of kHz
QPOs, a major discovery by RXTE in about 15 LMXBs, has not been revisited by
AstroSat even though it is ideally suited for this study. A multiwavelength
study was a prime objective of AstroSat, to understand the relationship
between the radiation in different bands in AGNs, i.e., whether the UV flux
from AGNs is generated by the reprocessing of X-rays in the accretion disk. In
X-ray binaries one measures the time lag of the hard X-rays with respect to
the soft photons to infer whether the hard X-rays are a reflection component
due to up-scattering of soft photons on the surface of the hot disk. It is
hoped that some of these gaps in the AstroSat/LAXPC science results will be
addressed in the coming years.
## 7 Summary
AstroSat has completed five years of operation, during which more than 2000
different pointings and over 1000 distinct sources have been observed. The
data for most observations are now available from the AstroSat archive. At
present only one of the three detectors, LAXPC20, is working nominally. The
response of the detectors is reasonably well understood and stable. The detector
background has been increasing with time, and some diurnal variations have
been seen both in the background and in the fitted parameters of the source
spectrum. The variation in the background puts a limit of about 1 mCrab on the
source flux, below which it is difficult to study a source. Even at this limiting
flux it is possible to fit the spectrum to a reasonable extent and to study
pulsations. LAXPC has successfully detected CRSF in several sources. The
detection of these features can be confirmed by looking at the ratio of source
to the Crab spectrum.
To account for drift in the gain of detectors, it is recommended to use gain
fit in Xspec while fitting the spectrum, even when the recommended response is
used. The fitted slope and offset should be compared with the values shown in
Section 2.3 to check that the correction is in a reasonable range.
Significantly different values would imply that gain fit has fitted some other
spectral variation. In such cases the slope should be fixed at 1 and the
offset should be fixed at the value recommended by backshiftv3. A few
observations may have a significant frame-loss during data transmission and it
may be necessary to discard the data during these time intervals. For faint
sources it is advisable to use only the top layer of the detector to reduce
the background. Because of a relatively large field of view, there is a
possibility of another source in the field of view and this should be checked
before analysing the LAXPC data.
Many X-ray sources have been studied using LAXPC data resulting in more than
40 publications. The scientific results are summarised in a companion paper
(Yadav et al. 2020).
## Acknowledgment
We acknowledge the strong support from Indian Space Research Organization
(ISRO) in various aspects of instrument building, testing, software
development and mission operation and data dissemination. We acknowledge
support of the scientific and technical staff of the LAXPC instrument team as
well as staff of the TIFR Workshop in the development and testing of the LAXPC
instrument.
## References
* [1] Agrawal, P. C. 2006, AdSpR, 38, 2989.
* [2] Agrawal, P. C., Yadav, J. S., Antia, H. M., et al., 2017, JApA, 38, 30
* [3] Amin, N., Roy, J., Chakroborty, S., et al. 2020, JApA (this volume)
* [4] Antia, H. M., Yadav, J. S., Agrawal, P. C., et al. 2017, ApJS, 231, 10
* [5] Antia, H. M., Katoch, T., Shah, P., Dedhia, D., Gupta, S., Gaikwad, R., Sharma, V., Vibhute, A., Bhattacharya, D. 2020a, GCN 27313
* [6] Antia, H. M., Katoch, T., Shah, P., Dedhia, 2020b, GCN 27313
* [7] Baby, B. E., Agrawal, V. K., Ramadevi, M. C., et al. 2020, MNRAS, 497, 1197
* [8] Bala, S., Bhattacharya, D., Staubert, R. and Maitra, C. 2020, MNRAS, 497, 1029
* [9] Belloni, T. M., Bhattacharya, D., Caccese, P., Bhalerao, V., Vadawale, S., Yadav, J. S. 2019, MNRAS, 489, 1037
* [10] Bhalerao, V., Bhattacharya, D., Vibhute, A. et al. 2017, JApA, 38, 31
* [11] Brenneman, L. W., Raynolds, C. S., Wims, J., and Kaiser, M. E. 2007, ApJ, 666, 817
* [12] Devasia, J., Raman, G., Paul, B. 2021, NewA, 83, 101479
* [13] Ge, M. Y., Ji, L., Zhang, S. N., et al. 2020, ApJ 899, L19
* [14] Goswami, P., Sinha, A., Chandra, S. 2020, MNRAS, 492,796
* [15] Jahoda, K., Markwardt, C. B., Radeva, Y., et al. 2006, ApJS, 63, 401
* [16] Jain, C., Paul, B., Dutta, A. 2010, MNRAS, 409, 755
* [17] Jaiswal, G. K., Naik, S. 2017, MNRAS, 461, L97
* [18] Jaiswal, G. K., Naik, S., Ho, W. C. G., Kumari, N., Epli, P., Vasilopoulos, G. 2020, MNRAS, 498, 4830
* [19] Li, K. L., Hu, C.-P., Lin, L. C. C., Kong, A. K. H. 2016, ApJ, 828, 74
* [20] Katoch, T., Baby, B. E., Nandi, A. et al. 2020, MNRAS (in press) arXiv:2011.13282.
* [21] Misra R., Yadav J. S., Verdhan Chauhan J., et al. 2017, ApJ, 835, 195.
* [22] Misra R., Roy, J., Yadav, J. S. 2020, JApA (this volume)
* [23] Mudambi, S. P., Rao, A., Gudennavar, S. B., Misra, R., Buggly, S. G. 2020, MNRAS, 498, 4404
* [24] Mukerjee, K., Antia, H. M., Katoch, T. 2020a, ApJ, 897, 72
* [25] Mukerjee, K., et al. 2020b, (in preparation)
* [26] Pahari, M., Antia, H. M., Yadav, J. S., et al., 2018, ApJ, 849, 16
* [27] Pahari, M., Bhattacharyya, S., Rao, A. R., et al., 2018, ApJ, 867, 86
* [28] Roy, J., Agrawal, P. C., Iyer, N. K., et al. 2019, ApJ, 872, 33
* [29] Sasano, M., Makashima, K., Sakurai, S., Zhang, Z., Enoto, T. 2014, PASJ, 66, 35
* [30] Shaposhnikov, N., Jahoda, K., Markwardt, C., et al. 2012, ApJ, 757, 159
* [31] Sharma, R., Beri, A. Sanna, A., Datta, A. 2020, MNRAS, 492, 4361
* [32] Singh K. P., Tandon S. N., Agrawal P. C., et al. 2014, SPIE, 9144, 1SS.
* [33] Singh K. P., Stewart, F. C., Chandra, S., et al. 2016, SPIE, 9905, 1ES
* [34] Sreehari, H., Ravishankar, B. T., Iyer, N. et al. 2019, MNRAS, 487, 928
* [35] Sreehari, H., Nandi, A., Das, S., et al. 2020, arXiv:2010.03782
* [36] Sridhar, N., Bhattacharyya, S., Chandra, S., Antia, H. M. 2019, MNRAS, 487, 4221
* [37] Tandon, S. N., Hutchings, J. B., Ghosh, S. K. et al. 2017, JApA 38, 28
* [38] Ursini, F., Petrucci, P.-O., Matt, G., et al. 2016, MNRAS, 463, 382
* [39] Verdhan Chauhan, J., Yadav, J. S., Misra, R. et al. 2017, ApJ, 841, 41
* [40] Yadav, J. S., Agrawal, P. C., Antia, H. M., et al. 2016a, SPIE, 9905, 1D.
* [41] Yadav, J. S., Misra, R., Chauhan, J. V., et al. 2016b, ApJ, 833, 27
* [42] Yadav, J. S., Agrawal, P. C., Misra, R., Roy, J., Pahari, M., Manchanda, R. K. 2020, JApA, (this volume)
# The equation of state and radial oscillations of neutron stars
Ting-Ting Sun1, Zi-Yue Zheng1, Huan Chen1 <EMAIL_ADDRESS>, G. Fiorella Burgio2, Hans-Josef Schulze2
1 School of Mathematics and Physics, China University of Geosciences, Lumo Road 388, 430074 Wuhan, China
2 INFN Sezione di Catania and Dipartimento di Fisica, Università di Catania, Via Santa Sofia 64, 95123 Catania, Italy
###### Abstract
We investigate radial oscillations of pure neutron stars and hybrid stars,
employing equations of state of nuclear matter from Brueckner-Hartree-Fock
theory, and of quark matter from the Dyson-Schwinger quark model, performing a
Gibbs construction for the mixed phase in hybrid stars. We calculate the
eigenfrequencies and corresponding oscillation functions. Our results for the
zero points of the first-order radial oscillation frequencies give the maximum
mass of stable neutron stars, consistent with the common criterion
$dM/d\rho_{c}=0$. Possible observations of the radial oscillation frequencies
could help to learn more about the equation of state, predict the maximum mass
of neutron stars more precisely, and indicate the presence of quark matter.
## I Introduction
Neutron stars (NSs), the densest observable stars in the Universe, are natural
laboratories for the study of cold dense nuclear matter. Theoretically, the
equation of state (EOS) of nuclear matter is the key input which determines
the structure and properties of NSs. Unfortunately, due to the nonperturbative
nature of the strong interaction, our knowledge about the EOS of dense nuclear
matter is still insufficient, especially at densities much higher than the
nuclear saturation density, where deconfined quark matter may be present Klähn
_et al._ (2006).
Many efforts have been made to constrain the EOS from the observation of NSs.
For the static and spherical case, one can obtain the equilibrium structure of
NSs by solving the Tolman-Oppenheimer-Volkov (TOV) equations combined with the
EOS, thus predicting mass-radius-central density relations of NSs. The most
recent observations have been performed by the NICER (Neutron Star Interior
Composition Explorer) mission, which reported a Bayesian parameter estimation
of the mass and equatorial radius of the millisecond pulsar PSR J0030+0451
Riley, T. E. et al. (2019); Miller _et al._ (2019). Additional constraints
are imposed by the largest mass observed up to now,
$2.14^{+0.10}_{-0.09}\,\,M_{\odot}$ for the object PSR J0740+6620 Cromartie
_et al._ (2019), and by recent analyses of the NS merger event GW170817, which
indicate an upper limit on the maximum mass of about $2.2-2.3\,\,M_{\odot}$
Shibata _et al._ (2017); Margalit and Metzger (2017); Rezzolla _et al._
(2018); Shibata _et al._ (2019).
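The TOV integration mentioned above can be sketched numerically. The toy polytropic EOS, its parameters, and the central density below are illustrative only; the actual inputs of this work are the BHF and DSE equations of state:

```python
import numpy as np

# Minimal sketch: integrate the TOV equations with a toy polytropic EOS,
# p = K * eps^Gamma, in geometrized units (G = c = 1, lengths in km).
# K, Gamma and eps_c are illustrative choices, not the paper's EOS.

def tov_mass_radius(eps_c, K=100.0, Gamma=2.0, dr=1e-3, r_max=50.0):
    eps_of_p = lambda p: (p / K) ** (1.0 / Gamma)
    r, m, p = dr, 0.0, K * eps_c ** Gamma          # start just off the centre
    while p > 1e-12 and r < r_max:                 # integrate out to the surface
        eps = eps_of_p(p)
        dm = 4.0 * np.pi * r ** 2 * eps
        dp = -(eps + p) * (m + 4.0 * np.pi * r ** 3 * p) / (r * (r - 2.0 * m))
        m += dm * dr
        p += dp * dr
        r += dr
    return r, m       # radius and mass in km (1 M_sun corresponds to ~1.477 km)

R, M = tov_mass_radius(2.0e-3)
print(R > 0.0 and M > 0.0)
```

Replacing the polytrope by a tabulated microscopic EOS and scanning the central density traces out the mass-radius relation and locates the maximum mass where $dM/d\rho_c=0$.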
Theoretically, the difference of mass-radius relations between hybrid stars
(HSs) and pure NSs is not significant, and depends sensitively on the various
models adopted for describing nuclear and quark matter Alford _et al._
(2005). Therefore, one cannot yet disentangle HSs from pure NSs on the basis
of the current observations.
NSs also undergo different kinds of mechanical deformations, e.g., radial and
non-radial oscillations, glitches, and even those resulting from NS mergers
Abbott _et al._ (2017a, b). These produce many kinds of electromagnetic and
gravitational-wave (GW) signals, and also indicate the internal structure of
NSs. In this work we will mainly concentrate on the NS radial oscillations,
which were first studied in general relativity by Chandrasekhar Chandrasekhar
(1964). Subsequently, many investigations were carried out Chanmugam (1977);
Glass and Harpaz (1983); Väth and Chanmugam (1992); Gondek _et al._ (1997);
Sahu _et al._ (2002); Kokkotas and Ruoff (2001); Gupta _et al._ (2002);
Brillante and Mishustin (2014); Panotopoulos and Lopes (2017); Flores _et
al._ (2017); Sagun _et al._ (2020). Although radial oscillations cannot lead
to direct emission of GW radiation, they can couple with and amplify GWs
Passamonti _et al._ (2006, 2007), and therefore could be observed in GW
signals. They can also modulate the short gamma ray burst (SGRB) from the
hypermassive NSs formed after the merger of two NSs, and the frequency could
thus be observed in SGRB Chirenti _et al._ (2019). In this work, we will
investigate how the frequencies of the radial oscillations depend on the
internal structure and composition of the emitting source, thus identifying
pure NSs and HSs.
Many theoretical tools and models have been proposed to study the EOS of NSs,
see, e.g., Burgio and Fantina (2018) for a review. For nuclear matter in the
hadron phase, popular EOSs are based on relativistic mean field models Ring
(1996), phenomenological models based on energy-density functional theory with
generalized Skyrme effective forces Potekhin _et al._ (2013), Brueckner-
Hartree-Fock (BHF) theory Li and Schulze (2012); Kohno (2013); Fukukawa _et
al._ (2015); Lu _et al._ (2019), the variational method (APR) Akmal _et al._
(1998), the self-consistent Green’s functions approach Carbone _et al._
(2013), and chiral effective field theory Hebeler _et al._ (2011); Coraggio
_et al._ (2014); Wellenhofer _et al._ (2014); Drischler _et al._ (2016). For
quark matter, EOSs are mainly obtained with the MIT bag model Chodos _et al._
(1974), the Nambu-Jona-Lasinio (NJL) model Buballa (2005); Klähn _et al._
(2013); Klähn and Fischer (2015), the perturbative QCD Kurkela _et al._
(2010); Fraga _et al._ (2014); Jiménez and Fraga (2019), and the Dyson-
Schwinger equations (DSEs) Roberts and Williams (1994); Alkofer and von Smekal
(2001); Chen _et al._ (2011, 2012, 2015).
In this work, we model nuclear matter with the BHF theory, which is based on
realistic two- and three-body forces that describe accurately nucleon
scattering data in free space and the properties of the deuteron. Moreover the
BHF approach is able to describe properly the properties of symmetric nuclear
matter at the saturation density Li and Schulze (2012); Kohno (2013); Fukukawa
_et al._ (2015); Wei _et al._ (2020). For quark matter, we adopt the Dyson-
Schwinger quark model Chen _et al._ (2011, 2015). The phase transition
between the confined and deconfined phase is modelled with the Gibbs condition
Glendenning (1992); Chen _et al._ (2011). In this framework, the maximum
masses of the pure NSs and HSs fulfill the two-solar-mass constraint Demorest
_et al._ (2010); Antoniadis _et al._ (2013); Fonseca _et al._ (2016);
Cromartie _et al._ (2019).
The work is organized as follows. In Sec. II we briefly describe the formalism
for the EOSs, i.e., the BHF theory for the hadron phase and the DSEs for the
quark phase. In Sec. III we introduce the TOV and the Sturm-Liouville
eigenvalue equations for the internal structure and radial oscillations of
NSs. Numerical results are given in Sec. IV, and we draw the conclusions in
Sec. V. We use natural units $c=\hbar=1$ throughout.
## II Equation of state
### II.1 Nuclear matter
In the BHF theory, the key element to describe the dense nuclear matter is the
$G$-matrix, which satisfies the Bethe-Goldstone equation Baldo (1999)
$G[E;\rho]=V+\sum_{k_{a},k_{b}>k_{F}}V\frac{|k_{a},k_{b}\rangle Q\langle
k_{a},k_{b}|}{E-e(k_{a})-e(k_{b})}G[E;\rho]\>,$ (1)
where $E$ is the starting energy, $\rho$ is the nucleon number density, $V$ is
the interaction potential, and $Q$ is the Pauli operator. The single-particle
(s.p.) energy of the nucleon is
$e(k)=e(k;\rho)=\frac{k^{2}}{2m}+U(k,\rho)\>,$ (2)
where
$U(k;\rho)=\sum_{k^{\prime}\leq k_{F}}\langle
kk^{\prime}|G[e(k)+e(k^{\prime});\rho]|kk^{\prime}\rangle_{A}$ (3)
is the s.p. potential under the continuous choice.
By solving Eqs. (1)–(3), one can obtain the $G$-matrix and then the
energy per nucleon of nuclear matter
$\frac{B}{A}=\frac{3}{5}\frac{k_{F}^{2}}{2m}+\frac{1}{2\rho}\sum_{k,k^{\prime}<k_{F}}\langle
kk^{\prime}|G[e(k)+e(k^{\prime});\rho]|kk^{\prime}\rangle_{A}\>.$ (4)
In this work we use the Bonn B (BOB) Machleidt _et al._ (1987); Machleidt
(1989) and Argonne $V_{18}$ (V18) Wiringa _et al._ (1995) nucleon-nucleon
potentials, supplemented with compatible microscopic three-body forces Zuo
_et al._ (2002); Li and Schulze (2008). This is a common prescription adopted
in the BHF approach, and allows one to reproduce correctly the saturation
point of symmetric nuclear matter Baldo (1999).
In order to study the structure of the NS core, we have to calculate the
composition and the EOS of cold, neutrino-free, catalyzed matter, imposing
that it contains charge-neutral matter consisting of neutrons, protons, and
leptons ($e^{-}$, $\mu^{-}$) in beta-equilibrium. The output of the many-body
calculation is the energy density of lepton/baryon matter as a function of the
different partial densities $\rho_{i}$ of the species $i=n,p,e,\mu$,
$\displaystyle\varepsilon(\rho_{n},\rho_{p},\rho_{e},\rho_{\mu})=$
$\displaystyle(\rho_{n}m_{n}+\rho_{p}m_{p})+(\rho_{n}+\rho_{p})\frac{B}{A}(\rho_{n},\rho_{p})$
$\displaystyle+\varepsilon_{e}(\rho_{e})+\varepsilon_{\mu}(\rho_{\mu})\>,$ (5)
where $m_{i}$ are the corresponding masses, $B/A(\rho_{n},\rho_{p})$ is the
energy per nucleon of asymmetric nuclear matter, and $\varepsilon_{e}$ and
$\varepsilon_{\mu}$ are the energy densities of electrons and muons, which are
usually considered as noninteracting (we use ultrarelativistic and
relativistic expressions for the energy densities of electrons
$\varepsilon(\rho_{e})$ and muons $\varepsilon(\rho_{\mu})$, respectively
Shapiro and Teukolsky (2008)).
Given the large computational effort of the microscopic calculations, we have
used the parabolic approximation Baldo _et al._ (2000); Bombaci and Lombardo
(1991) for the energy per particle of asymmetric nuclear matter in Eq. (5),
with the symmetry energy calculated simply as the difference between the
energy per particle of pure neutron matter and symmetric nuclear matter,
$E_{\text{sym}}(\rho)\approx
E(\rho_{n}=\rho,\rho_{p}=0)-E(\rho_{n}=\rho/2,\rho_{p}=\rho/2)\>.$ (6)
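The parabolic approximation can be sketched in a few lines; the power-law fits for symmetric nuclear matter (SNM) and pure neutron matter (PNM) below are illustrative placeholders, not the actual BHF results:

```python
# Parabolic approximation for asymmetric nuclear matter: interpolate the
# energy per nucleon between SNM and PNM using the symmetry energy of
# Eq. (6). The fit forms and parameters are toy stand-ins for the BHF table.

RHO0 = 0.17  # saturation density [fm^-3]

def e_snm(rho):
    """Toy energy per nucleon of SNM [MeV], saturating at RHO0."""
    return -16.0 + 250.0 * (rho - RHO0)**2 / RHO0

def e_pnm(rho):
    """Toy energy per nucleon of PNM [MeV]."""
    return 15.0 * (rho / RHO0)**0.7 + 25.0 * (rho / RHO0)**2

def e_sym(rho):
    """Symmetry energy, Eq. (6): difference between PNM and SNM."""
    return e_pnm(rho) - e_snm(rho)

def e_asym(rho, x_p):
    """Parabolic approximation: E(rho, x_p) = E_SNM + (1 - 2*x_p)^2 * E_sym,
    with proton fraction x_p = rho_p / rho."""
    return e_snm(rho) + (1.0 - 2.0 * x_p)**2 * e_sym(rho)

print(e_asym(RHO0, 0.5))                 # -16.0: SNM is recovered at x_p = 0.5
print(e_asym(RHO0, 0.0) - e_pnm(RHO0))   # 0.0: PNM is recovered at x_p = 0
```

By construction the interpolation is exact at the two endpoints $x_{p}=0.5$ (SNM) and $x_{p}=0$ (PNM), which is what makes Eq. (6) sufficient input.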
Once the energy density, Eq. (5), is known, the various chemical potentials
can be computed,
$\mu_{i}={\partial\varepsilon\over\partial\rho_{i}}\>,$ (7)
and solving the equations for beta-equilibrium,
$\displaystyle\mu_{p}+\mu_{e}=\mu_{n}=\mu_{B}\>,$ (8)
$\displaystyle\mu_{e}=\mu_{\mu}=\mu_{C}\>,$ (9)
where $\mu_{B}$ is the baryon number chemical potential and $\mu_{C}$ the
electric charge chemical potential, corresponding to the only two conserved
charges, together with the charge-neutrality condition
$\rho_{p}=\rho_{e}+\rho_{\mu}\>,$ (10)
allows one to find the equilibrium composition $\rho_{i}$ at fixed baryon
density $\rho$, and finally the EOS,
$p(\varepsilon)=\rho^{2}\frac{d}{d\rho}\frac{\varepsilon(\rho_{i}(\rho))}{\rho}=\rho\frac{d\varepsilon}{d\rho}-\varepsilon=\rho\mu_{n}-\varepsilon\>.$
(11)
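Numerically, Eq. (11) amounts to differentiating the tabulated energy density; a minimal sketch with a toy $\varepsilon(\rho)$ (an illustrative placeholder, not the BHF table) is:

```python
import numpy as np

# Pressure from the tabulated energy density via Eq. (11):
# p = rho * d(eps)/d(rho) - eps. The toy eps(rho) below is illustrative.
rho = np.linspace(0.08, 1.0, 500)        # baryon density [fm^-3]
m_n = 939.0                              # nucleon mass [MeV]
eps = m_n * rho + 500.0 * rho**2.5       # toy energy density [MeV fm^-3]

p = rho * np.gradient(eps, rho) - eps    # Eq. (11)

# For this toy model p = 750 * rho^2.5 analytically, so the numerical
# derivative can be checked directly:
p_exact = 750.0 * rho**2.5
print(np.max(np.abs(p[2:-2] - p_exact[2:-2]) / p_exact[2:-2]))  # ~1e-4
```

Note that the nucleon rest-mass term $m_{n}\rho$ cancels in Eq. (11), as it must: a linear contribution to $\varepsilon(\rho)$ carries no pressure.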
We notice that the above mentioned theoretical methods provide EOSs for
homogeneous nuclear matter, $\rho>\rho_{t}\approx 0.08\,\text{fm}^{-3}$. For
the low-density inhomogeneous part we adopt the well-known Negele-Vautherin
EOS Negele and Vautherin (1973) for the inner crust in the medium-density
regime ($0.001\,\text{fm}^{-3}<\rho<\rho_{t}$), and the ones by Baym-Pethick-
Sutherland Baym _et al._ (1971) and Feynman-Metropolis-Teller Feynman _et
al._ (1949) for the outer crust ($\rho<0.001\,\text{fm}^{-3}$).
However, the BHF approach is a non-relativistic theory, and the above EOS
predicts a superluminal speed of sound $v_{s}^{2}=dp/d\varepsilon>c^{2}$ at a
few times the saturation density Luo _et al._ (2019), close to the central
density of the most massive NSs. As a simple remedy, we truncate the EOS at
$v_{s}^{2}=c^{2}$ and keep the speed of sound constant at higher densities.
Later we will investigate the effects of such a modification on the structure
and radial oscillations of NSs.
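The truncation can be implemented as a slope-one continuation of $p(\varepsilon)$ beyond the first superluminal point; the quadratic input EOS in the sketch below is illustrative only:

```python
import numpy as np

# Continue p(eps) with unit slope (v_s^2 = dp/deps = 1, c = 1) beyond the
# first superluminal point, as described in the text.
def truncate_causal(eps, p):
    p = p.copy()
    vs2 = np.gradient(p, eps)
    idx = np.argmax(vs2 > 1.0)           # first superluminal index (0 if none)
    if vs2[idx] > 1.0:
        p[idx:] = p[idx] + (eps[idx:] - eps[idx])  # slope-1 continuation
    return p

eps = np.linspace(100.0, 2000.0, 400)    # [MeV fm^-3]
p_in = 5e-4 * eps**2                     # toy EOS, superluminal above eps ~ 1000
p_out = truncate_causal(eps, p_in)

print(np.gradient(p_in, eps).max())      # ~2: causality violated
print(np.gradient(p_out, eps).max())     # ~1: capped at the speed of light
```

The continuation keeps $p(\varepsilon)$ continuous and monotonic, so only the stiffness above the truncation point is affected.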
### II.2 Quark matter
As discussed above, it is reasonable to assume that a phase transition from
hadronic matter to QM occurs in NSs. Quarks are usually described in
relativistic theories, and most models predict that $v_{s}^{2}<1/3$. Therefore
the speed of sound drops with the phase transition to QM, and thus the
causality requirement is easily fulfilled if the phase transition occurs
before the speed of sound approaches unity in hadronic matter; this in turn
constrains the model parameters.
As in Ref. Luo _et al._ (2019), we use the Dyson-Schwinger model (DSM) Chen
_et al._ (2011) to describe QM. In the DSM, one starts from the quark
propagator at finite chemical potentials $S(p;\mu)$, which satisfies the
Dyson-Schwinger equation
$\displaystyle S(p;\mu)^{-1}=Z_{2}(i\gamma\cdot\tilde{p}+m_{q})$ (12)
$\displaystyle+Z_{1}g^{2}(\mu)\int\frac{d^{4}q}{(2\pi)^{4}}D_{\rho\sigma}(k;\mu)\frac{\lambda^{a}}{2}\gamma_{\rho}S(q;\mu)\frac{\lambda^{a}}{2}\Gamma_{\sigma}(q,p;\mu)\>,$
where $\tilde{p}\equiv(\bm{p},p_{4}+i\mu)$, $D_{\rho\sigma}(k\equiv p-q;\mu)$
is the full gluon propagator, $\Gamma_{\sigma}(q,p;\mu)$ is the effective
quark-gluon vertex, and $Z_{1}$ and $Z_{2}$ are the renormalization constants
for the quark-gluon vertex and the quark wavefunction. With a given ansatz for
the quark-gluon vertex and gluon propagator, one can solve the equation and
obtain the quark propagator. In Refs. Chen _et al._ (2011); Luo _et al._
(2019), the so-called rainbow approximation and a chemical-potential-modified
Gaussian-type effective interaction were used, see Ref. Chen _et al._ (2011)
for details.
The EOS for cold QM is given by the $q=u,d,s$ quark propagator at zero
temperature as in Refs. Chen _et al._ (2008); Klähn _et al._ (2010),
$\displaystyle\rho_{q}(\mu_{q})$
$\displaystyle=6\int\frac{d^{4}p}{(2\pi)^{4}}\mathop{\text{tr}_{D}}[-\gamma_{4}S_{q}(p;\mu_{q})]\>,$
(13) $\displaystyle p_{q}(\mu_{q})$
$\displaystyle=p_{q}(\mu_{q,0})+\int_{\mu_{q,0}}^{\mu_{q}}d\mu\rho_{q}(\mu)\>.$
(14)
The total density and pressure of QM are obtained by summing the contributions
of all flavors, and the negative of the QM pressure at zero density is taken
as a phenomenological bag constant Wei _et al._ (2017),
$B_{\text{DS}}=-\sum_{q=u,d,s}p_{q}(\mu_{q,0})\>,$ (15)
which is set to $90\,\text{MeV}\,\text{fm}^{-3}$ Chen _et al._ (2012, 2015);
Wei _et al._ (2017).
In the pure quark phase, the beta-equilibrium and electrical neutrality are
expressed as
$\displaystyle\mu_{d}=\mu_{u}+\mu_{e}=\mu_{u}+\mu_{\mu}=\mu_{s}\>,$ (16)
$\displaystyle\frac{2n_{u}-n_{d}-n_{s}}{3}-\rho_{e}-\rho_{\mu}=0\>.$ (17)
The phase transition from the nuclear phase to the quark phase is treated with
the Gibbs construction Glendenning (1992); Chen _et al._ (2011), where
chemical and mechanical equilibrium between the two phases are expressed as
$\displaystyle\mu_{n}=\mu_{p}+\mu_{e}=\mu_{u}+2\mu_{d}\>,$ (18) $\displaystyle
p_{H}(\mu_{e},\mu_{n})=p_{Q}(\mu_{e},\mu_{n})=p_{M}(\mu_{n})\>.$ (19)
In the mixed phase, the hadron phase and quark phase are electrically charged
separately, while the mixed phase remains globally neutral,
$\chi\rho_{Q}+(1-\chi)\rho_{H}=0\>,$ (20)
where $\chi$ is the volume fraction of QM in the mixed phase and the densities
in Eq. (20) are the charge densities of the two phases. Consequently, the
baryon number density $\rho_{M}$ and energy density $\varepsilon_{M}$ of the
mixed phase are
$\displaystyle\rho_{M}$ $\displaystyle=\chi\rho_{Q}+(1-\chi)\rho_{H}\>,$ (21)
$\displaystyle\varepsilon_{M}$
$\displaystyle=\chi\varepsilon_{Q}+(1-\chi)\varepsilon_{H}\>.$ (22)
With the above phase transition, the EOS is continuous at the phase transition
onset, different from the Maxwell phase transition considered in Ref. Pereira
_et al._ (2018). However, the corresponding speed of sound drops
discontinuously at the phase-transition point Luo _et al._ (2019).
## III Hydrostatic equilibrium structure and radial oscillations
Due to the strong gravitational field in NSs, their structure and dynamical
evolution are governed by the Einstein equation of general relativity,
$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi GT_{\mu\nu}\>,$ (23)
where $R_{\mu\nu}$ is the Ricci tensor, $R$ is the Ricci scalar, and $G$ is
the gravitational constant. The energy-momentum tensor is
$T_{\mu\nu}=pg_{\mu\nu}+(p+\varepsilon)u_{\mu}u_{\nu}\>,$ (24)
where $g_{\mu\nu}$ is the metric tensor, $p$ is the pressure, $\varepsilon$ is
the energy density, and $u_{\mu}$ is the four-velocity. For simplicity, we
consider static spherically symmetric stars, described by the Schwarzschild
metric Chandrasekhar (1964)
$ds^{2}=e^{\nu(r)}dt^{2}-e^{\lambda(r)}dr^{2}-r^{2}(d\theta^{2}+\sin^{2}\\!\theta
d\varphi^{2})\>,$ (25)
where $e^{\nu(r)}$ and $e^{\lambda(r)}$ are metric functions. By solving the
Einstein field equation with the above metric, one obtains the TOV equations
Oppenheimer and Volkoff (1939); Tolman (1939) for the equilibrium structure of
NSs,
$\displaystyle\frac{dp}{dr}$ $\displaystyle=-G\frac{(\varepsilon+p)(m+4\pi
r^{3}p)}{r^{2}(1-2Gm/{r})}\>,$ (26) $\displaystyle\frac{dm}{dr}$
$\displaystyle=4\pi r^{2}\varepsilon\>,$ (27)
and correspondingly the metric functions
$\displaystyle e^{\lambda(r)}$ $\displaystyle=(1-2Gm/r)^{-1}\>,$ (28)
$\displaystyle{\nu(r)}$
$\displaystyle=-2G\int_{r}^{\infty}dr^{\prime}\frac{e^{\lambda(r^{\prime})}}{r^{\prime
2}}\left(m+4\pi r^{\prime 3}p\right)\>.$ (29)
Combining with the EOS $p(\varepsilon)$ of the matter, one can solve the TOV
equations for the initial conditions $m(r=0)=0$ and $p(r=0)=p_{c}$, where
$p_{c}$ is the central pressure. The surface radius is defined by $p(R)=0$ and
the corresponding NS mass is $M=m(R)$.
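A minimal TOV integration along these lines can be sketched in geometrized units ($G=c=1$, lengths in km); the $p=K\varepsilon^{2}$ polytrope and its parameters below are illustrative stand-ins for the tabulated EOSs, not the BOB/V18 results:

```python
import math

# RK4 integration of the TOV equations (26)-(27) in geometrized units
# (G = c = 1, lengths in km). Toy EOS and central pressure, illustrative only.
K = 100.0  # polytropic constant [km^2] (toy value)

def eps_of_p(p):
    """Invert the toy EOS p = K*eps^2 (guarded against tiny negative p)."""
    return math.sqrt(max(p, 0.0) / K)

def tov_rhs(r, p, m):
    eps = eps_of_p(p)
    dpdr = -(eps + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * math.pi * r**2 * eps
    return dpdr, dmdr

def solve_tov(p_c, dr=1e-3):
    """Integrate outward from the centre until p drops to ~0;
    returns the radius R [km] and gravitational mass M [km]."""
    r, p = dr, p_c
    m = (4.0 / 3.0) * math.pi * dr**3 * eps_of_p(p_c)
    while p > 1e-12 * p_c:
        k1p, k1m = tov_rhs(r, p, m)
        k2p, k2m = tov_rhs(r + dr/2, p + dr/2*k1p, m + dr/2*k1m)
        k3p, k3m = tov_rhs(r + dr/2, p + dr/2*k2p, m + dr/2*k2m)
        k4p, k4m = tov_rhs(r + dr, p + dr*k3p, m + dr*k3m)
        p += dr/6 * (k1p + 2*k2p + 2*k3p + k4p)
        m += dr/6 * (k1m + 2*k2m + 2*k3m + k4m)
        r += dr
    return r, m

R, M = solve_tov(p_c=1e-4)
print(f"R = {R:.2f} km, M = {M/1.4766:.2f} Msun")  # 1 Msun = 1.4766 km
```

Scanning $p_{c}$ then traces out a mass-radius curve of the kind shown in Fig. 2, and the condition $dM/d\rho_{c}=0$ picks out $M_{\text{max}}$.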
Figure 1: The energy density (upper panels), speed of sound (central panels),
and adiabatic index (lower panels) of NS matter as functions of pressure with
different EOSs. See the text for a detailed description of the notation.
Figure 2: The mass-radius relations of NSs obtained with different EOSs. The
markers represent the minimum HS mass (QM onset) for each EOS.
The radial oscillation properties can also be obtained from the Einstein field
equation Harrison _et al._ (1965); Bardeen _et al._ (1966), based on the
static equilibrium structure. Consider a spherically symmetric system with
only radial motion, where the metric Eq. (25) becomes time dependent. Small
perturbations are described by the relative radial displacement
$\xi\equiv\Delta r/r$ and by the corresponding Lagrangian perturbation of the
pressure $\eta\equiv\Delta p/p$ Chanmugam (1977); Väth and Chanmugam (1992);
Sagun _et al._ (2020). The time dependence of the perturbations can be
written as a superposition of eigenmodes $\xi_{i},\eta_{i}\varpropto
e^{i\omega_{i}t}$ that are solutions of the system of differential equations
Chanmugam (1977); Gondek _et al._ (1997),
$\displaystyle\frac{d\xi}{dr}=$
$\displaystyle-\frac{1}{r}\Big(3\xi+\frac{\eta}{\Gamma}\Big)-\frac{dp}{dr}\frac{\xi}{p+\varepsilon}\>,$
(30) $\displaystyle\frac{d\eta}{dr}=$
$\displaystyle\frac{\xi}{p}\bigg[\omega^{2}e^{\lambda-\nu}(p+\varepsilon)r-4\frac{dp}{dr}+\Big(\frac{dp}{dr}\Big)^{2}\frac{r}{p+\varepsilon}-8\pi
Ge^{\lambda}(p+\varepsilon)pr\bigg]$
$\displaystyle+\eta\bigg[\frac{dp}{dr}\frac{1}{p+\varepsilon}-4\pi
Ge^{\lambda}(p+\varepsilon)r\bigg]\>,$ (31)
where $\omega=2\pi f$ is the eigenfrequency of radial oscillation, and
$\Gamma=\left(1+\frac{\varepsilon}{p}\right)v_{s}^{2}$ (32)
is the adiabatic index. Two boundary conditions are required in addition
Gondek _et al._ (1997). The condition at the center is
$\eta(0)=-3\Gamma\xi(0)\>,$ (33)
while the perturbation of the pressure should vanish at the surface,
$\eta(R)=0\>.$ (34)
Eqs. (30) to (34) constitute a Sturm-Liouville eigenvalue problem for
$\omega$. The solutions provide the discrete eigenvalues $\omega_{n}^{2}$,
which for a given NS can be ordered as
$\omega_{1}^{2}<\omega_{2}^{2}<\ldots<\omega_{n}^{2}<\ldots$, where the mode
of order $n$ has $n-1$ nodes. A negative $\omega^{2}$ indicates an unstable
oscillation, and thus $\omega^{2}=0$ is the critical condition for the
stability of NSs under radial perturbations.
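Such a two-point boundary-value problem is typically solved by shooting: integrate from the centre with the regularity condition (33) for a trial $\omega^{2}$ and bisect on the surface residual $\eta(R)$. The sketch below illustrates the bracketing-and-bisection logic on a toy Sturm-Liouville analogue $u''+\omega^{2}u=0$, $u'(0)=0$, $u(R)=0$, whose exact eigenfrequencies $\omega_{n}=(n-\tfrac{1}{2})\pi/R$ make the method easy to verify; the full NS problem replaces the right-hand sides with Eqs. (30)-(31):

```python
import math

# Shooting + bisection for a toy Sturm-Liouville eigenvalue problem:
# u'' + omega^2 * u = 0 with u'(0) = 0 (regular centre) and u(R) = 0
# (free surface); exact eigenfrequencies are omega_n = (n - 1/2)*pi/R.

def shoot(omega2, R=1.0, n=500):
    """RK4-integrate from the centre and return the surface residual u(R)."""
    dr = R / n
    u, v = 1.0, 0.0                      # u(0) = 1 (normalisation), u'(0) = 0
    for _ in range(n):
        k1u, k1v = v, -omega2 * u
        k2u, k2v = v + dr/2*k1v, -omega2 * (u + dr/2*k1u)
        k3u, k3v = v + dr/2*k2v, -omega2 * (u + dr/2*k2u)
        k4u, k4v = v + dr*k3v, -omega2 * (u + dr*k3u)
        u += dr/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += dr/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return u

def eigenvalue(a, b, tol=1e-8):
    """Bisect omega^2 on a sign change of the surface residual."""
    fa = shoot(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        fm = shoot(mid)
        if fa * fm > 0.0:
            a, fa = mid, fm
        else:
            b = mid
    return 0.5 * (a + b)

w2_1 = eigenvalue(1.0, 5.0)              # brackets the fundamental mode
w2_2 = eigenvalue(20.0, 25.0)            # brackets the first overtone
print(math.sqrt(w2_1))                   # ~1.5708 = pi/2
print(math.sqrt(w2_2))                   # ~4.7124 = 3*pi/2
```

For the NS problem, `shoot` would instead integrate Eqs. (30)-(31) on the TOV background with the central condition (33), and the residual would be $\eta(R)$; the ordering $\omega_{1}^{2}<\omega_{2}^{2}<\ldots$ emerges from the node count exactly as here.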
Table 1: Characteristic static properties of NSs and HSs with masses $M_{\text{max}}$, $2.0\,M_{\odot}$, and $1.4\,M_{\odot}$: central nucleon number density $\rho_{c}\;[\text{fm}^{-3}]$ and pressure $p_{c}\;[\text{MeV}\,\text{fm}^{-3}]$, radius $R\;[\text{km}]$, and compactness parameter $\beta={GM}/{R}$. $M_{\text{min}}\;[M_{\odot}]$ is the minimum NS mass of a given (hybrid) EOS. The three groups of columns $(\rho_{c},p_{c},R,\beta)$ refer to the $M_{\text{max}}$, $2.0\,M_{\odot}$, and $1.4\,M_{\odot}$ configurations, respectively. Note that only the DS1.5 and DS2 QM EOSs allow hybrid $2\,M_{\odot}$ configurations, and no hybrid $1.4\,M_{\odot}$ stars exist.

EOS | $M_{\text{min}}$ | $M_{\text{max}}$ | $\rho_{c}$ | $p_{c}$ | $R$ | $\beta$ | $\rho_{c}$ | $p_{c}$ | $R$ | $\beta$ | $\rho_{c}$ | $p_{c}$ | $R$ | $\beta$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
BOB | | 2.51 | 0.884 | 819.4 | 11.34 | 0.326 | 0.510 | 132.7 | 12.74 | 0.232 | 0.384 | 51.0 | 12.83 | 0.161
BOB-1.0 | 2.40 | 2.50 | 0.881 | 726.2 | 11.46 | 0.321 | | | | | | | |
BOB+DS0.5 | 2.40 | 2.43 | 0.860 | 449.2 | 11.93 | 0.301 | | | | | | | |
BOB+DS1 | 2.18 | 2.30 | 0.838 | 333.9 | 12.12 | 0.279 | | | | | | | |
BOB+DS2 | 1.62 | 2.02 | 0.910 | 268.3 | 11.94 | 0.249 | 0.750 | 208.9 | 12.21 | 0.242 | | | |
V18 | | 2.34 | 1.010 | 960.9 | 10.63 | 0.345 | 0.632 | 199.6 | 11.94 | 0.247 | 0.452 | 65.3 | 12.34 | 0.167
V18-1.0 | 2.23 | 2.33 | 1.002 | 830.9 | 10.77 | 0.319 | | | | | | | |
V18+DS0.5 | 2.25 | 2.28 | 0.983 | 547.7 | 11.17 | 0.301 | | | | | | | |
V18+DS1.0 | 2.04 | 2.16 | 0.966 | 417.2 | 11.36 | 0.281 | | | | | | | |
V18+DS1.5 | 1.73 | 2.04 | 1.001 | 374.9 | 11.29 | 0.266 | 0.785 | 249.4 | 11.71 | 0.252 | | | |
## IV Numerical results
Figure 3: Speed of sound (upper panels) and adiabatic index $\Gamma$ (lower
panels) in NSs with $1.4\,M_{\odot}$ (left panels) and $2.0\,M_{\odot}$ (right
panels), for various EOSs.
Figure 4: The dependence of the averaged adiabatic index $\bar{\Gamma}$ on
the compactness parameter $\beta$ with various EOSs, in comparison with the
critical value Eq. (36) (dashed curve).
### IV.1 Equilibrium structure of neutron stars
As stated above, we consider two kinds of EOSs, corresponding to pure NSs and
HSs respectively. Details can be found in Refs. Li and Schulze (2008); Chen
_et al._ (2011).
For the pure NS EOS we use the BOB and V18 BHF EOSs discussed before. The
energy density, speed of sound, and adiabatic index as functions of the
pressure are shown in Fig. 1. At large pressure the speed of sound exceeds the
speed of light and violates causality. As a simple remedy, we truncate the EOS
at $v_{s}/c=1$ and keep it constant at higher densities. The modified results
shown in the figure are labeled as “BOB-1.0” or “V18-1.0”. One can see that
the EOSs are softened with the reduction of the speed of sound.
For the EOS in HSs, labeled as V18/BOB+DS$\alpha$, we combine the EOS of
nuclear matter and the DSM EOS of QM with different parameters
$\alpha=0.5,1,1.5,2$, representing the strength of the in-medium modification
of the Gaussian-type effective interaction, see Refs. Chen _et al._ (2011);
Luo _et al._ (2019) for details. The EOS, the speed of sound, and the
adiabatic index are also shown in Fig. 1. At the onset of the mixed phase,
there is a discontinuous decrease of the speed of sound and the adiabatic
index due to the emergence of new degrees of freedom, similar to the onset of
muons at low density. In the mixed phase, the speed of sound is thus much
lower, without causality violation, and depends strongly and non-monotonically
on the pressure. The EOS is also considerably softened by the phase transition.
The corresponding mass-radius relations of NSs are shown in Fig. 2, obtained
in the standard way by solving the TOV equations for beta-stable and charge-
neutral matter. We remark that the effect of flattening the V18/BOB EOS at
$v_{s}/c=1$ is very small and the value of the maximum mass
$M_{\text{max}}=2.34/2.51\,M_{\odot}$ is larger than the current observational
lower limits Demorest _et al._ (2010); Antoniadis _et al._ (2013); Fonseca
_et al._ (2016); Cromartie _et al._ (2019). Regarding the radius, we found in
Burgio _et al._ (2018); Wei _et al._ (2020) that for the V18/BOB EOS the
value of a 1.4-solar-mass NS is $R_{1.4}=12.3/12.8\;$km, which fulfills the
constraints derived from the tidal deformability in the GW170817 merger event,
$R_{1.36}=11.9\pm 1.4\;$km Abbott _et al._ (2018), see also similar
compatible constraints on masses and radii derived in Refs. Margalit and
Metzger (2017); Ruiz _et al._ (2018); Most _et al._ (2018); Rezzolla _et
al._ (2018); Shibata _et al._ (2019); Most _et al._ (2020); Shao _et al._
(2020). The V18/BOB EOSs are also compatible with estimates of the mass and
radius of the isolated pulsar PSR J0030+0451 observed recently by NICER,
$M=1.44^{+0.15}_{-0.14}\,M_{\odot}$ and $R=13.02^{+1.24}_{-1.06}\,$km Riley,
T. E. et al. (2019), or $M=1.36^{+0.15}_{-0.16}\,M_{\odot}$ and
$R=12.71^{+1.14}_{-1.19}\,$km Miller _et al._ (2019).
The phase transition leads to smaller values of the maximum mass on the mass-
radius curves, obtained with the commonly used stability criterion
$dM/d\rho_{c}=0$ Harrison _et al._ (1965); Shapiro and Teukolsky (2008). In
Table 1 we also list some characteristic static properties of NSs with
different EOSs. One notes that a too early onset of the quark phase leads to
a too low maximum mass. The condition $M_{\text{max}}>2.1\,M_{\odot}$
yields, for example, the constraint $\alpha<1.7/1.3$ for the BOB/V18+DS$\alpha$
EOS, see also Refs. Chen _et al._ (2011); Wei _et al._ (2019).
In Fig. 3 we show the profiles of the speed of sound and the adiabatic index
in NSs with two different masses $M/\,M_{\odot}=1.4,2.0$. In both cases the
hadronic EOS never becomes superluminal. One can see that due to the phase
transition, the speed of sound and adiabatic index in the inner core of heavy
NSs are strongly reduced, and even decrease when approaching the center.
There are also discontinuities due to the emergence of the mixed phase in HSs,
similar to those at the onset of muons in the outer layers, but quantitatively
much larger.
It has often been discussed in the literature Moustakidis (2017) that the
stability of NSs depends crucially on the averaged adiabatic index
$\bar{\Gamma}\equiv\frac{\int_{0}^{R}drr^{2}e^{(\lambda+3\nu)/2}p\,\Gamma}{\int_{0}^{R}drr^{2}e^{(\lambda+3\nu)/2}p}\>.$
(35)
In Ref. Chandrasekhar (1964), Chandrasekhar derived a critical value as
stability criterion,
$\bar{\Gamma}_{c}=\frac{4}{3}+\frac{38}{42}\beta\>.$ (36)
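For orientation, Eq. (36) is cheap to evaluate; the sketch below computes $\bar{\Gamma}_{c}$ for representative compactness values taken from Table 1:

```python
# Evaluate the Chandrasekhar critical index, Eq. (36), for representative
# compactness values beta = GM/R taken from Table 1.
def gamma_crit(beta):
    return 4.0 / 3.0 + 38.0 / 42.0 * beta

for beta in (0.161, 0.247, 0.326, 0.345):
    print(f"beta = {beta:.3f}  ->  Gamma_c = {gamma_crit(beta):.3f}")
# Even at the largest compactness in the table (V18 at M_max),
# Gamma_c < 1.65, well below the averaged indices of realistic EOSs (Fig. 4).
```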
In Fig. 4 we compare the averaged adiabatic index for the various EOSs with
this criterion. One finds that with the emergence of the mixed phase in HSs,
the averaged adiabatic index decreases with the mass, compared to pure NSs.
However, in all cases it remains much larger than the critical value.
Therefore, Eq. (36) can be regarded as a necessary condition for the stability
of NSs, but it lies far below the values obtained with realistic EOSs.
Figure 5: The radial displacement perturbation $\xi=\Delta r/r$ (upper
panels) and the radial pressure perturbation $\eta=\Delta p/p$ (lower panels)
of the first six eigenmodes of radial oscillations in $1.4\,M_{\odot}$ NSs
with BOB EOS (left panels) and the fundamental eigenmode in $2.0\,M_{\odot}$
stars with different EOSs (right panels).
Figure 6: Fundamental frequency $f_{1}$ vs mass $M$, and (inset)
$M_{\text{max}}-M$ vs $f_{1}$ for different EOSs. The notation is as in Fig.
1.
### IV.2 Radial oscillations and stability
We now investigate the radial oscillations of NSs. In the left panels of Fig.
5 we show the first six eigenmodes in a $1.4\,M_{\odot}$ NS, obtained with the
BOB EOS. There is no node (zero point) in the first (fundamental) eigenmode.
As the order increases, the number of nodes increases, as expected for a
Sturm-Liouville boundary-value problem. It can be seen that in the fundamental mode
$n=1$ the entire star oscillates nearly uniformly, whereas in the higher modes
only the inner parts tend to be affected. In the right panels of Fig. 5 we
show the fundamental eigenmodes in a $2.0\,M_{\odot}$ NS, obtained with the
various EOSs. For pure NSs, the fundamental eigenmodes are almost the same as
in a $1.4\,M_{\odot}$ NS, i.e., uniform oscillation. In HSs there is a kink in
$\xi$ at the onset of the mixed phase, and $\xi$ decreases much faster towards
the surface, i.e., the oscillation is stronger in the QM phase.
The kink in $\xi$ is due to the large discontinuity of the adiabatic index,
which appears explicitly in Eq. (30). This causes a related kink in
$d\eta/dr$, Eq. (31). There should also be a kink with the emergence of muons,
but quantitatively too small to be seen.
Table 2: The radial oscillation frequencies $f_{n}\,$[kHz] of $M/M_{\odot}=1.4,\,2.0$ NSs for different EOSs. The first two EOS columns refer to $1.4\,M_{\odot}$, the last four to $2.0\,M_{\odot}$.

$n$ | BOB | V18 | BOB | BOB+DS2 | V18 | V18+DS1.5
---|---|---|---|---|---|---
1 | 3.16 | 3.09 | 2.73 | 1.25 | 2.64 | 1.59
2 | 6.74 | 6.65 | 6.80 | 5.90 | 6.85 | 6.11
3 | 8.81 | 9.25 | 10.08 | 8.57 | 10.05 | 9.16
4 | 9.69 | 10.15 | 12.24 | 11.44 | 12.93 | 11.78
5 | 11.59 | 11.90 | 13.45 | 13.39 | 14.72 | 14.36
6 | 13.83 | 14.20 | 15.08 | 14.51 | 16.04 | 15.64
The corresponding radial oscillation eigenfrequencies are listed in Table 2.
For a $1.4\,M_{\odot}$ NS we obtain the frequencies of the $n=1$ fundamental
mode $f_{1}=3.16/3.09\;$kHz with BOB/V18 EOS, which are quite close in the two
models. Compared with the previous literature Sagun _et al._ (2020), our
results for $f_{1}$ are quite similar, but the frequencies of higher modes
$f_{n}$ are somewhat smaller. According to the features of the eigenmodes in
Fig. 5, the fundamental frequency is determined by the properties of the whole
star, whereas the higher frequencies depend mainly on the core properties. The
fundamental frequency will be the easiest to observe with the next generation
of GW detectors, but we expect the first few overtones to be observable as
well. For a $2.0\,M_{\odot}$ NS, $f_{1}$ depends much more strongly on the
EOS; in particular, the $f_{1}$ of HSs are much smaller than those of pure
NSs, while the $f_{n>1}$ increase more quickly. This is compatible with the
much lower speed of sound in HSs, see Fig. 3.
To further investigate the relation between radial oscillations and stability
of NSs, we show the dependence of the fundamental frequency $f_{1}$ on the NS
mass in Fig. 6. One can see that $f_{1}$ varies slowly around
$1.4\,M_{\odot}$, but decreases quickly close to the maximum mass. It vanishes
exactly at the numerical maximum mass on the $M-R$ curves, at which point the
star no longer recovers from a small radial perturbation. That is to say, the
commonly used stability condition $dM/d\rho_{c}>0$ Harrison _et al._ (1965);
Shapiro and Teukolsky (2008) is consistent with the analysis of the radial
oscillations. This holds for pure NSs as well as for HSs. Since $f_{1}$
decreases quickly close to the maximum mass, one can predict the maximum mass
quite accurately when observing sufficiently small values of $f_{1}$. For
example, as shown in the inset of Fig. 6, when observing a NS with
$f_{1}=1.6\;$kHz and mass $M$, one can conclude that
$M_{\text{max}}<M+0.05\,M_{\odot}$. (This is the situation for both
$2\,M_{\odot}$ HSs in Table 2, for example.) For $f_{1}=1.0\;$kHz the
constraint becomes even stronger and yields
$M_{\text{max}}<M+0.01\,M_{\odot}$. A low oscillation frequency thus adds
great value to a NS mass measurement, which by itself only represents a lower
limit on $M_{\text{max}}$.
It is also interesting to study the dependence of $f_{1}$ on the compactness
parameter $\beta$, shown in the top panel of Fig. 7. We find that this
dependence is quite insensitive to the EOS of hadronic matter. Even in the
HSs, the two curves with the same QM EOS combined with different hadronic EOSs
almost coincide. In our approach, HSs close to the mass limit (indicated by
small values of $f_{1}$) have smaller masses and larger radii than pure NSs
(see Fig. 2), and therefore their maximum compactness is smaller. Thus a low
value of $\beta$ ($\lesssim 0.3$) together with a low value of $f_{1}$
($\lesssim 1\;$kHz) is an indication for the presence of QM inside the star.
This qualitative difference between HSs and pure NSs offers us an important
observational signal to disentangle them.
A similar feature applies to the so-called large separations $\Delta
f_{n}\equiv f_{n+1}-f_{n}$, which are widely used in asteroseismology to learn
about stellar properties Sagun _et al._ (2020). We show the results for
$\Delta f_{1}=f_{2}-f_{1}$ in the lower panel of Fig. 7. Again one can find that the
curves are insensitive to the EOS of hadronic matter, and that in this case
large values of $\Delta f_{1}\gtrsim 5\;$kHz together with small values of
$\beta\lesssim 0.3$ indicate substantial QM content.
Figure 7: The dependence of $f_{1}$ (upper panel) and $\Delta
f_{1}=f_{2}-f_{1}$ (lower panel) on the compactness parameter $\beta$ for
different EOSs. The notation is as in Fig. 1.
Radial oscillations are typically excited during violent events, e.g., in
newborn NSs after supernova explosions or in NS mergers, and after strong
starquakes and the corresponding pulsar glitches. The signals can be observed
in the associated electromagnetic and GW emissions. In Ref. Chirenti _et al._
(2019), the authors point out that radial oscillations can modulate the short
gamma-ray burst (SGRB) from the hypermassive NS formed after the merger of two
NSs. The possible observation of the radial oscillation frequencies in SGRBs
would provide insight into the emission mechanism of the SGRB and aid in
understanding the EOS of NSs. Radial oscillations can also couple with and
amplify GWs Passamonti _et al._ (2006, 2007).
Although the frequencies of the radial oscillations are too high for the
current GW detectors (but may become sufficiently low for metastable
hypermassive remnants), they could possibly be observed with improved
detectors, such as Advanced LIGO Regimbau _et al._ (2017), and with third-
generation ground-based GW detectors such as the Einstein Telescope Punturo
_et al._ (2010) and Cosmic Explorer Abbott _et al._ (2017c).
## V Conclusions
In this work we investigated the radial oscillations and stability of NSs,
including both pure NSs and HSs. The EOS of nuclear matter is based on the BHF
theory, limiting the speed of sound to the speed of light at high density.
Alternatively, we considered the phase transition to QM, combining the nuclear
EOS with a DSM EOS for QM via a Gibbs phase transition. With these EOSs, we
solved the TOV equation for the equilibrium structure and the equations for
the radial oscillations of NSs. For a $1.4\,M_{\odot}$ NS we obtain radii
around $12\,$km and $f_{1}\sim 3\,$kHz for the fundamental radial oscillation.
As the masses increase, $f_{1}$ decreases to zero at the maximum mass,
consistent with the stability criterion $dM/d\rho_{c}=0$. Small values of
$f_{1}$ can provide an accurate estimate of the maximum NS mass. Also, small
values of the compactness parameter $\beta$ together with small values of
$f_{1}$ or large values of $\Delta f_{1}$ characterize HSs in our approach and
allow one to disentangle them from pure NSs.
To investigate radial oscillations in newborn NSs after supernova explosions
or the merger of NSs, the features of more realistic environments, e.g.,
temperature, rotation, and magnetic field, should also be included Panda _et
al._ (2016). We leave these studies to future work, together with those
employing other EOSs.
###### Acknowledgements.
We acknowledge financial support from the National Natural Science Foundation
of China (Grant Nos. 11475149 and U1738130).
## References
* Klähn _et al._ (2006) T. Klähn _et al._ , Phys. Rev. C 74, 035802 (2006).
* Riley, T. E. et al. (2019) Riley, T. E. et al., Astrophys. J. Lett. 887, L21 (2019).
* Miller _et al._ (2019) M. C. Miller _et al._ , Astrophys. J. 887, L24 (2019).
* Cromartie _et al._ (2019) H. T. Cromartie _et al._ , Nature Astronomy 4, 72 (2019).
* Shibata _et al._ (2017) M. Shibata, S. Fujibayashi, K. Hotokezaka, K. Kiuchi, K. Kyutoku, Y. Sekiguchi, and M. Tanaka, Phys. Rev. D 96, 123012 (2017).
* Margalit and Metzger (2017) B. Margalit and B. D. Metzger, Astrophys. J. 850, L19 (2017).
* Rezzolla _et al._ (2018) L. Rezzolla, E. R. Most, and L. R. Weih, Astrophys. J. 852, L25 (2018).
* Shibata _et al._ (2019) M. Shibata, E. Zhou, K. Kiuchi, and S. Fujibayashi, Phys. Rev. D 100, 023015 (2019).
* Alford _et al._ (2005) M. Alford, M. Braby, M. Paris, and S. Reddy, Astrophys. J. 629, 969 (2005).
* Abbott _et al._ (2017a) B. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 119, 161101 (2017a).
* Abbott _et al._ (2017b) B. Abbott _et al._ (LIGO Scientific, Virgo, Fermi-GBM, INTEGRAL), Astrophys. J. Lett. 848, L13 (2017b).
* Chandrasekhar (1964) S. Chandrasekhar, Astrophys. J. 140, 417 (1964).
* Chanmugam (1977) G. Chanmugam, Astrophys. J. 217, 799 (1977).
* Glass and Harpaz (1983) E. N. Glass and A. Harpaz, Mon. Not. R. Astron. Soc. 202, 159 (1983).
* Väth and Chanmugam (1992) H. Väth and G. Chanmugam, Astron. Astrophys. 260, 250 (1992).
* Gondek _et al._ (1997) D. Gondek, P. Haensel, and J. L. Zdunik, Astron. Astrophys. 325, 217 (1997).
* Sahu _et al._ (2002) P. Sahu, G. Burgio, and M. Baldo, Astrophys. J. Lett. 566, L89 (2002).
* Kokkotas and Ruoff (2001) K. Kokkotas and J. Ruoff, Astron. Astrophys. 366, 565 (2001).
* Gupta _et al._ (2002) V. K. Gupta, V. Tuli, and A. Goyal, Astrophys. J. 579, 374 (2002).
* Brillante and Mishustin (2014) A. Brillante and I. N. Mishustin, Europhys. Lett 105, 39001 (2014).
* Panotopoulos and Lopes (2017) G. Panotopoulos and I. Lopes, Phys. Rev. D 96, 083013 (2017).
* Flores _et al._ (2017) C. V. Flores, Z. B. Hall, II, and P. Jaikumar, Phys. Rev. C 96, 065803 (2017).
* Sagun _et al._ (2020) V. Sagun, G. Panotopoulos, and I. Lopes, Phys. Rev. D 101, 063025 (2020).
* Passamonti _et al._ (2006) A. Passamonti, M. Bruni, L. Gualtieri, A. Nagar, and C. F. Sopuerta, Phys. Rev. D 73, 084010 (2006).
* Passamonti _et al._ (2007) A. Passamonti, N. Stergioulas, and A. Nagar, Phys. Rev. D 75, 084038 (2007).
* Chirenti _et al._ (2019) C. Chirenti, M. C. Miller, T. Strohmayer, and J. Camp, Astrophys. J. Lett. 884, L16 (2019).
* Burgio and Fantina (2018) G. F. Burgio and A. F. Fantina, Astrophys. Space Sci. Libr. 457, 255 (2018).
Projected Inventory Level Policies for Lost Sales Inventory Systems: Asymptotic Optimality in Two Regimes
Willem van Jaarsveld
School of Industrial Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands, PO BOX 513, 5600MB<EMAIL_ADDRESS>
Joachim Arts
Luxembourg Centre for Logistics and Supply Chain Management, Department of Economics and Management, University of Luxembourg, Luxembourg City, Luxembourg, 6, rue Richard Coudenhove-Kalergi L-1359<EMAIL_ADDRESS>
We consider the canonical periodic review lost sales inventory system with positive lead-times and stochastic i.i.d. demand under the average cost criterion.
We introduce a new policy that places orders such that the expected inventory level at the time of
arrival of an order is at a fixed level and call it the Projected Inventory Level (PIL) policy.
We prove that this policy has a cost-rate superior to that of the equivalent system where excess demand is back-ordered instead of lost, and that it is
therefore asymptotically optimal as the cost of losing a sale approaches infinity under mild distributional assumptions.
We further show that this policy dominates the constant order policy
for any finite lead-time and is therefore asymptotically
optimal as the lead-time approaches infinity for the case of exponentially distributed demand per period.
Numerical results show that this policy also outperforms other policies.
Lost Sales, Inventory, Optimal policy, Asymptotic Optimality, Markov Decision Process
§ INTRODUCTION
The periodic review inventory system with lost sales, positive lead time and i.i.d. demand
is a canonical problem in inventory theory. The decision maker is interested in the
average cost-rate of this system. The optimal policy for such
a system can be computed in principle by stochastic dynamic programming, but
it is not practical due to the curse of dimensionality.
Research has therefore focused on devising
heuristic policies for the lost sales inventory system. Although many variants of lost sales inventory systems exist, results for the canonical system are
important as they serve as building blocks to design good policies for more
intricate lost sales inventory systems. [Bijvank and Vis, 2011] review the literature on many such more intricate inventory systems with lost sales.
There are two simple heuristic policies for the canonical lost sales system
that appeal to both practitioners and academics. These policies are the base-stock policy and the constant order policy.
The base-stock policy places an order each period such that the inventory
position (inventory on-hand + outstanding orders) is raised
to a fixed base-stock level. This policy is prevalent in practice
due to its intuitive structure and because it is the optimal policy
when excess demand is not lost but back-ordered. The most important merit
of the base-stock policy is that it is asymptotically optimal as the cost
of a lost sale approaches infinity under mild conditions on the demand
distribution [Huh et al., 2009]. This asymptotic optimality is robust in
the sense that it holds for a broad class of heuristics to compute
base-stock levels [Bijvank et al., 2014].
The constant order policy orders the same amount in each period regardless
of the state of the system. Although this may seem naive at first, this
policy is asymptotically optimal as the lead-time approaches infinity [Goldberg et al., 2016],
and can outperform the base-stock policy for long lead-times and moderate
costs for a lost sale. [Xin and Goldberg, 2016] show that the constant order policy converges to
optimality exponentially fast in the lead time.
The asymptotic optimality results of [Huh et al., 2009] and
[Goldberg et al., 2016] are both elegant and useful for
practice. We believe that these results should also inform the design of plausible heuristic
policies: With the knowledge that such asymptotic optimality results are
attained by relatively simple policies, new
heuristics for the lost sales system should be designed
to be asymptotically optimal for long lead times
and large lost sales penalties. Unfortunately, the constant
order policy is not asymptotically optimal for large lost sales
penalties and the base-stock policy is not asymptotically
optimal for long lead times. This paper introduces a single parameter policy that is asymptotically optimal for large lost sales penalties
under mild assumptions on the demand distribution and also for
long lead times when demand has an exponential distribution.
We call this policy the projected inventory level (PIL) policy.
The PIL policy places
orders such that the expected inventory level at the time of
arrival of an order is raised to a fixed level which
we call the projected inventory level. The PIL policy is intuitive for academics and practitioners alike. In fact, the base-stock policy for the canonical inventory
system where excess demand is back-ordered, rather than lost, is
also a projected inventory level policy: Although the usual
interpretation of a base-stock policy in a system with
back-orders is that it raises the inventory position to a fixed
level, it is equivalent to say that it raises the expected
inventory level at the time of order arrival to a fixed level.
(These two policies are not equivalent in the lost sales inventory system.)
We exploit this equivalence and use it to compare the canonical inventory
systems with lost sales and back-orders respectively when all parameters are
identical. We prove that the cost-rate for the canonical lost sales system is lower
than the cost-rate for the canonical back-order system under the same
projected inventory level. As a corollary to this, we
find that the optimal cost-rate of the canonical lost
sales system is lower than the optimal cost-rate for the equivalent
canonical back-order system. This means we recover a main result of [Janakiraman et al., 2007] via a new and different proof. The stochastic comparison technique used by [Janakiraman et al., 2007] holds for general convex per period cost functions but does not construct a policy. Our result is constructive in the sense that we identify a specific policy (the PIL policy) for which the costs in the lost sales system are lower than the optimal costs for the back-order system. Our construction requires that the cost per period is linear in on-hand inventory and lost sales, which is the most commonly used per period cost function.
The PIL policy also mimics the behavior of the constant order policy for long lead-times. We make this notion rigorous when demand per period has an exponential distribution. In that case, we show that the projected inventory level policy can be interpreted as a one-step policy improvement on the (bias function of) the constant order policy; we believe this to be an interesting proof technique. Under the same assumption, we show that the projected inventory level policy dominates the constant order policy
for any finite lead-time $\tau$. The PIL policy therefore inherits the property of the constant order policy that the gap with the optimal policy decreases exponentially with the lead-time, cf. [Xin and Goldberg, 2016].
Note that the projected inventory level policy has a single
parameter and yet uses all the information in the state vector without aggregating it into the inventory position. This feature is shared by the myopic policy, where orders are placed to minimize the expected cost in the period that the order arrives. Both policies require a projection of the inventory level at the time of order arrival.
The myopic and PIL policies can therefore both be considered “projection” policies.
Empirically, projection policies perform exceptionally well (see [Zipkin, 2008] and Section <ref>), but there is no theoretical underpinning that explains this performance. In particular, there are no known asymptotic optimality results for such policies. This paper contributes asymptotic optimality results for the projected inventory level policy in two asymptotic regimes.
Policies that are asymptotically optimal for high lost sales costs and long lead times can be constructed by using multiple
parameters to make the policy behave as either a base-stock policy or a constant order policy when needed. For example, a policy may order a convex combination of the base-stock and constant order policy decision, or cap the order placed by a base-stock policy [Johansen and Thorstenson, 2008, Xin, 2020]. The projected inventory level policy is different in that the parameter can not be set such that it trivially reduces to either a base-stock or constant order policy. Asymptotic optimality proofs therefore rely on new ideas.
In summary, this paper makes the following contributions:
* We introduce the projected inventory level policy and show that it is a natural generalization of the base-stock policy to systems where sales are lost rather than back-ordered. Furthermore, it is a single parameter policy that utilizes all state information without aggregating it into the inventory position, i.e., it is a projection policy.
* We provide the first tractable policy for the canonical lost sales inventory system with better performance than the optimal policy of the equivalent canonical back-order system. The proof uses a comparison based on associated random variables.
* We prove that the projected inventory level policy is asymptotically optimal as the penalty for a lost sale approaches infinity under mild conditions on the demand distribution.
* We prove that the projected inventory level policy is a 1-step policy improvement upon the constant order policy and dominates the performance of the constant order policy when demand has an exponential distribution.
* We prove that the projected inventory level policy is asymptotically optimal as the lead time approaches infinity when demand has an exponential distribution. The proof approach is to show dominance of the PIL policy over the constant order policy, such that it inherits its asymptotic optimality properties.
* We demonstrate numerically that the projected inventory level has superior performance also outside of the regimes where it is asymptotically optimal. We also provide results that aid with efficient computation within this class of policies.
§ BRIEF LITERATURE REVIEW
The canonical lost sales inventory system was first studied by [Karlin and Scarf, 1958] and found
to have a more complicated optimal ordering policy than the canonical inventory system with
back-orders. The optimal ordering policy for the lost sales system depends on each outstanding order. Sensitivities of the optimal ordering decision to each outstanding order were first characterized by
[Morton, 1969] and the analysis was later streamlined by [Zipkin, 2008].
Despite these results, computation and implementation of the optimal policy is not practical. Most of the literature studies heuristic policies without any optimality guarantees <cit.> but with numerically favorable performance.
Notable exceptions are [Levi et al., 2008] who prove that their dual balancing policy has an optimality gap of at most 100% and [Chen et al., 2014] who provide a pseudo polynomial time approximation scheme. Another active area of research is the study of base-stock policies when the demand distribution is unknown and must be learned online [Huh et al., 2009, Zhang et al., 2020, Agrawal and Jia, 2022]. We refer to [Bijvank and Vis, 2011] for a general review of lost sales inventory models.
We use the remainder of this brief literature review to focus on asymptotic optimality results as this is the focus of this paper. The notion of asymptotic optimality in lost sales and other difficult inventory systems has gained recent traction. [Goldberg et al., 2021] provide a survey of such results and outline methodologies to prove such results. The most important results for the lost sales inventory system are asymptotic optimality of base-stock policies as the cost of losing a sale approaches infinity [Huh et al., 2009, Bijvank et al., 2014] and asymptotic optimality of constant order policies as the lead-time approaches infinity [Goldberg et al., 2016, Xin and Goldberg, 2016, Bu et al., 2020, Xin, 2020, Bai et al., 2020].
A natural question is whether any intuitive policies exist that are asymptotically optimal in both regimes. [Xin, 2020] propose the capped base-stock policy which was first introduced by [Johansen and Thorstenson, 2008]. Under this policy orders are placed to reach a base-stock level except when this would cause the order size to exceed the cap. The parameters of this policy can be set such that it effectively reduces to either a constant order or base-stock policy. [Xin, 2020] show that such a policy is asymptotically optimal for long lead-times. It is straightforward to show that a capped base-stock policy is also asymptotically optimal in the other regime.
The PIL policy has a single parameter that cannot be set such that it effectively reduces to either a constant order or base-stock policy. Despite this, we provide asymptotic optimality results both for long lead-times and high per unit lost sales costs.
Our analysis is based on comparison against simpler policies for which asymptotic optimality results have already been established. The comparison uses association of random variables when comparing the PIL with the base-stock policy. A similar technique has been used to bound the order fill-rates in assemble-to-order systems by [Song, 1998]. We use policy improvement to compare the PIL policy to the constant order policy. This idea is often used to create an improved policy for a system that suffers from the curse of dimensionality so that only a simple policy can be analyzed <cit.>. We are not aware of prior work that uses association of random variables and/or policy improvement to establish asymptotic optimality results.
§ MODEL
We consider a periodic review lost sales inventory system. In each period $t\in \N_0$ (where $\N_0:=\N\cup\{0\}$) we place an order that will arrive after a lead time of $\tau\in \N_0$ periods, i.e. at the start of period $t+\tau$. The inventory level at the beginning of period $t$, after receiving the order that was placed in period $t-\tau$, is denoted $I_t$. We denote the inventory level at the end of period $t$ as $J_t$. The order placed in period $t$ is denoted $q_{t+\tau}$ such that the order that arrives in period $t$ is denoted $q_t$. Demand in period $t$ is denoted by $D_t$ and $\{D_t\}_{t=0}^\infty$ is an i.i.d. sequence of random variables with $\mu:=\E[D_t]<\infty$. We let $D$ denote the generic one period demand random variable and its distribution $F(x):=P(D\le x)$.
Demand is satisfied from inventory whenever possible. If demand in a period exceeds the inventory level, there will be lost sales denoted by $L_t$ in period $t$. The system dynamics are given as:
\begin{align}
L_t & = (D_t-I_t)^+ \label{eq:dynamics1}\\
J_t & = (I_t-D_t)^+ \label{eq:dynamics2}\\
I_t & = (I_{t-1} - D_{t-1})^+ + q_{t} = J_{t-1}+q_t,\label{eq:dynamics3}
\end{align}
where $(x)^+:=\max(0,x)$. The state of this system in period $t$ is given by $\xb_t=(I_t,q_{t+1},q_{t+2},\ldots,q_{t+\tau-1})\in \R_+^{\tau}$. Since we focus on long run costs, and for ease of exposition, we assume the initial state $\xb_0$ to be $\mathbf{0}$, i.e. $I_0=0$ and $q_t=0$ for $t<\tau$ unless stated otherwise <cit.>. For convenience in notation we define $D[a,b]:=\sum_{i=a}^b D_i$ and similarly define $I[a,b]$, $J[a,b]$, $q[a,b]$ and $L[a,b]$.
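The dynamics above are easy to mechanise. The following sketch (purely illustrative; the function names and inputs are ours) rolls the system forward from the zero state $\xb_0=\mathbf{0}$, tracking $(I_t, J_t, L_t)$ for a given stream of arriving orders and demands.

```python
def step(I, D):
    """One period of the lost-sales dynamics: demand D is met from
    on-hand inventory I; the excess demand is lost."""
    L = max(D - I, 0.0)  # lost sales, L_t = (D_t - I_t)^+
    J = max(I - D, 0.0)  # end-of-period inventory, J_t = (I_t - D_t)^+
    return J, L

def rollout(arrivals, demands):
    """Roll the system forward from the zero state, given the order
    q_t arriving in each period t and the demands D_t."""
    J, history = 0.0, []
    for q, D in zip(arrivals, demands):
        I = J + q            # I_t = J_{t-1} + q_t
        J, L = step(I, D)
        history.append((I, J, L))
    return history
```

For instance, arrivals `[1.0, 0.5]` and demands `[0.4, 1.5]` yield $J_0=0.6$ and a lost sale of $L_1=0.4$ in the second period.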
In each period $t$ we may decide $q_{t+\tau}\ge 0$ based on $\xb_t$ and $t$. For our purposes, it will be convenient and sufficient to represent policies by a set of functions $\pol=\{\pol_t,t\in \N_0\}$, where $\pol_t$ maps states $\xb_t\in \R_+^{\tau}$ to actions $q_{t+\tau}\ge 0$.
We consider three policies in the analytical sections of this paper. The base-stock policy $\base^S$ <cit.> and the constant order policy $\cop^r$ <cit.> are defined as follows:
\begin{align*}
\base^S_t(\xb_t) & := \big(S-I_t-q[t+1,t+\tau-1]\big)^+ \\
\cop^r_t(\xb_t) &:= r.
\end{align*}
Here, $S$ and $r$ are the base-stock level and the constant order quantity, respectively. We assume for stability that $r<\mu$ <cit.>. The projected inventory level policy $\pil^U$ with projected inventory level $U$ is given by
\begin{align}
\label{eq:PUt}
\pil^U_t(\xb_t) & := (U-\E[J_{t+\tau-1}|\xb_t])^+ = \left(U-I_t -q[t+1,t+\tau-1] -\E\big[L[t,t+\tau-1]\big| \xb_t \big] +\tau\mu \right)^+.
\end{align}
Note that the expectations in (<ref>) take into account the state at time $t$. Therefore, at time $t$ the PIL policy places an order to raise the expected inventory level at time $t+\tau$ to $U$, if possible.
The following lemma specifies conditions that ensure that the projected inventory level $U$ can be attained in every period; the proof is in Appendix <ref>.
For any given PIL policy, it is possible to place a non-negative order in each period $t\geq0$ to attain the projected inventory level $U\geq0$ provided that it is possible to do so in period 0 (i.e. provided $\xb_0$ satisfies $\E[J_{\tau-1}|\xb_0]\leq U$).
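The conditional expectation of the lost sales in (<ref>) is generally not available in closed form, but it can be estimated by simulating the lead-time demand. The sketch below is an illustrative Monte Carlo implementation under the assumption of exponential demand; the function name, its parameters, and the sample size are ours.

```python
import numpy as np

def pil_order(U, I_t, pipeline, mu, rng, n_paths=40_000):
    """Monte Carlo version of the PIL rule: estimate E[J_{t+tau-1} | x_t]
    by simulating lead-time demand, then order up to the level U.
    `pipeline` holds the outstanding orders (q_{t+1}, ..., q_{t+tau-1})."""
    tau = len(pipeline) + 1
    D = rng.exponential(mu, size=(tau, n_paths))   # exponential demand assumed
    J = np.maximum(I_t - D[0], 0.0)                # J_t = (I_t - D_t)^+
    for k, q in enumerate(pipeline):               # periods t+1, ..., t+tau-1
        J = np.maximum(J + q - D[k + 1], 0.0)
    return max(U - J.mean(), 0.0)                  # (U - E[J_{t+tau-1} | x_t])^+
```

With an empty pipeline ($\tau=1$), unit mean demand, $I_t=1$ and $U=2$, the projected inventory is $\E[(1-D)^+]=e^{-1}$, so the order is approximately $2-e^{-1}\approx 1.63$.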
Any excess inventory at the end of a period incurs a holding cost $h>0$ per item per period. Any lost sales accrued during a period incur a lost sales penalty cost $p>0$ per item lost. Denote the costs incurred in period $t$ by $c_t:=h J_t + p L_t$, and let $c[a,b]:=\sum_{t=a}^b c_t$. We write $c[a,b](\pol)$ to make the dependence on the policy $\pol$ explicit. The cost-rate associated with a policy $\pol$ is then
\[
C(\pol):=\limsup_{T\to\infty}\E\left[ \frac{1}{T-\tau+1} c[\tau,T](\pol)\right]
\]
Let $S^*\in \argmin_S{C(\base^S)}$, $r^*\in \argmin_r{C(\cop^r)}$, and $U^*\in\argmin_U{C(\pil^U)}$ denote the optimal base-stock level, constant order quantity, and projected inventory level respectively. Let $C^*$ denote the long run expected costs of an optimal policy.
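The cost-rate $C(\pol)$ of a given policy can be estimated by direct simulation of the dynamics. The sketch below (illustrative; parameter values and simulation length are ours) does this for the constant order policy, for which the pipeline is constant so only the end-of-period inventory needs to be tracked; for exponential demand the estimate can be checked against the closed form $C(\cop^r)=p(\mu-r)+hr^2/(2(\mu-r))$ derived in the next section.

```python
import numpy as np

def cop_cost_rate(h, p, r, mu, T=500_000, warmup=10_000, seed=1):
    """Estimate C(C^r) for the constant order policy by simulation.
    Every arriving order equals r, so it suffices to track the
    end-of-period inventory J_t = (J_{t-1} + r - D_t)^+."""
    rng = np.random.default_rng(seed)
    D = rng.exponential(mu, size=T)
    J, total = 0.0, 0.0
    for t in range(T):
        I = J + r                    # inventory after the order of size r arrives
        L = max(D[t] - I, 0.0)       # lost sales in period t
        J = max(I - D[t], 0.0)       # inventory carried to period t+1
        if t >= warmup:
            total += h * J + p * L
    return total / (T - warmup)
```

With $h=1$, $p=9$, $\mu=1$ and $r=0.5$, the closed form gives $4.75$, and the simulated estimate should agree to within sampling error.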
§ LONG LEAD TIME ASYMPTOTICS
Constant-order policies were proven to be asymptotically optimal for long lead-times by [Goldberg et al., 2016] <cit.>. This result has deepened the understanding of lost sales inventory systems. Empirically, we observe that the PIL policy outperforms the constant order policy, also for long lead-times. In this sense, the PIL policy is unlike the base-stock policy, which cannot match the performance of the constant order policy for long lead-times. In this section, we underpin this finding theoretically. In particular, we prove that the projected inventory level policy is in expectation superior to the constant order policy when demand is exponentially distributed.
Our analysis is based on the following simple idea.
Consider the total costs incurred from time $t+\tau$ up to time $T$, given the state $\xb_t$ and order $q$ and assuming $q_k=r$ for $k>t+\tau$, and denote this total cost by $F^T(q) = hJ[t+\tau,T]+pL[t+\tau,T]$. Define $f(q|\xb_t)=\lim_{T\to\infty}\E[ F^T(q)-F^T(r) \mid \xb_t]$. A sensible policy may decide $q_{t+\tau}\in\argmin_{q\geq 0} f(q|\xb_t)$ for any pipeline $\xb_t$. This policy may be recognized as a single-step policy improvement to the constant order policy, and it may therefore be expected to dominate the constant order policy with order quantity $r$.
In the following, we sketch a heuristic argument that shows
$\pil^U (\xb_t)\in \argmin_{q\geq0} f(q|\xb_t)$ with $U=p(\mu-r)/h$. This heuristic argument as well as the intuition that this policy dominates the constant order policy will be made rigorous in Section <ref>.
Let us determine the $q=q_{t+\tau}$ that minimizes $f$ for some initial state $\xb$. Increasing $q$ by $\epsilon$ has two effects on the infinite horizon costs: 1) $\epsilon$ more demand is eventually satisfied (not lost), so a penalty cost $p\epsilon$ is averted; 2) from time $t+\tau$ until the first stockout, the inventory level increases by $\epsilon$ (for $\epsilon$ small). Suppose $t+\tau=0$ for notational convenience. Let $R$ denote the time of the first lost sale from period $0$: $R=\min\{t\in\N_0|L_t>0\}$. Then
\begin{align}
\frac{df(q)}{dq}=\lim_{\epsilon\to 0} \frac{f(q+\epsilon)-f(q)}{\epsilon}=
\lim_{\epsilon\to 0} \frac{-p\epsilon +h\E[R]\epsilon}{\epsilon}=h\E[R]-p.\label{eq:fderivative}
\end{align}
The expectation of $R$ can be found by noting that $R$ is a stopping time and $\E[L_R]=\mu$ due to the memoryless property of the exponential demand distribution. For $t<R$ there are no stockouts so that $J_t=J_{t-1}+r-D_t$. Thus $J_t=I_0-D[0,t] + t r =J_{-1}+q -D[0,t] + t r$ while for $t=R$ we have $J_R=0$ and $L_R=-J_{-1}-q + D[0,R] - R r$. In particular this implies (using Wald's identity) $\E[L_R]=-\E[J_{-1}]-q+(\E[R]+1)\mu-\E[R]r$. Using that also $\E[L_R]=\mu$ and solving for $\E[R]$ yields
$\E[R]=\frac{\E[J_{-1}]+q}{\mu-r}$. Thus by (<ref>), $f$ must be a parabola in $q$ and $df(q)/dq=0$ if and only if $q=\frac{p(\mu-r)}{h}-\E[J_{-1}]$. Since $I_0=J_{-1}+q$, this holds if and only if $q$ follows a projected inventory level policy with level $U=\frac{p(\mu-r)}{h}$.
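The identity $\E[R]=(\E[J_{-1}]+q)/(\mu-r)$ is easy to probe numerically. The sketch below (illustrative; the parameter values are ours) starts from a deterministic initial inventory $I_0=J_{-1}+q$, orders $r$ every period, and averages the first lost-sale time over many sample paths of exponential demand.

```python
import numpy as np

def mean_first_lost_sale(I0, r, mu, n_runs=50_000, seed=2):
    """Average R = min{t >= 0 : L_t > 0} under the constant order policy,
    starting from inventory I0 = J_{-1} + q, with Exp(mu) demand."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_runs):
        I, t = I0, 0
        while True:
            D = rng.exponential(mu)
            if D > I:              # first lost sale occurs in period t
                total += t
                break
            I = I - D + r          # J_t + r: no stockout, next period's stock
            t += 1
    return total / n_runs
```

With $\mu=1$, $r=0.4$ and $I_0=1.5$, the estimate should be close to $1.5/0.6=2.5$, in line with the formula.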
§.§ Dominance of PIL policies over COP policies
The heuristic argument above can be made rigorous as follows.
Observe that the orders under $\cop^r$ are independent of the state. Hence, the bias (or relative value function) of the $\cop^r$ policy can be expressed as a function of inventory level only as in the following definition.
Let $r\in [0,\E[D])$, and let $g^r=C(\cop^r)$ be the long run average costs of the $\cop^r$ policy. Then the bias $\mathcal{H}^r(\cdot):\R^+\to\R$ associated with $\cop^r$ satisfies:
\begin{align}
\mathcal{H}^r(x) = \E_{D}\big[ h(x-D)^+ + p (D-x)^+ + \mathcal{H}^r((x-D)^++r)\big]- g^r,\label{eq:biasdef}
\end{align}
for any non-negative $x\geq 0$. To make the bias unique we also impose $\mathcal{H}^r(0)=0$.
This bias $\mathcal{H}^r(x)$ can be interpreted as the additional cost
over an infinite horizon of starting with $x$ items in inventory and a pipeline with orders of size $r$ instead of starting with 0 items in inventory and a pipeline with orders of size $r$. Intuitively, $f(q|\xb_t)$ equals $\E[\mathcal{H}^r(J_{t+\tau-1}+q)-\mathcal{H}^r(J_{t+\tau-1}+r)|\xb_t]$, and informed by the heuristic argument made earlier one can guess that, like $f$, this bias should be a parabola.
When demand has an exponential distribution, the bias of the constant order policy with constant order quantity $r<\mu$ is a parabola:
\begin{equation}
\label{eq:biasexplicit}
\mathcal{H}^r(x)=\frac{h}{2(\mu-r)}x^2-px,
\end{equation}
and $g^r=C(\cop^r)=p(\mu-r) + h \frac{r^2}{2(\mu-r)}$.
The proof of Lemma <ref> can be found in Appendix <ref> and is a straightforward verification that the proposed solution satisfies the definition.
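The lemma can also be checked numerically: for fixed parameters, the right-hand side of the bias equation is a one-dimensional integral against the exponential density. The sketch below (illustrative parameter values; the quadrature grid is ours) evaluates it with the trapezoid rule and compares it with $\mathcal{H}^r(x)$.

```python
import numpy as np

h, p, r, mu = 1.0, 9.0, 0.5, 1.0   # illustrative parameters, with r < mu

H = lambda x: h / (2 * (mu - r)) * x**2 - p * x      # candidate bias
g = p * (mu - r) + h * r**2 / (2 * (mu - r))         # candidate cost-rate

def bias_rhs(x, n=200_000, umax=40.0):
    """E_D[h(x-D)^+ + p(D-x)^+ + H((x-D)^+ + r)] - g, with D ~ Exp(1/mu),
    evaluated by the trapezoid rule on [0, umax]."""
    u = np.linspace(0.0, umax, n)
    f = (h * np.maximum(x - u, 0.0)
         + p * np.maximum(u - x, 0.0)
         + H(np.maximum(x - u, 0.0) + r)) * np.exp(-u / mu) / mu
    return (f[:-1] + f[1:]).sum() * (u[1] - u[0]) / 2 - g
```

If the lemma holds, `bias_rhs(x)` agrees with `H(x)` for every $x\ge 0$, which is easily spot-checked on a grid of values.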
Confirming our intuition, we next show that the policy found by a single improvement step on the bias of a constant order policy is a projected inventory level policy.
Let $D$ be exponentially distributed and for any $h,p, r<\mu$, let $U(r)=p(\mu-r)/h$. For any $\xb_t$, if $q_{t+\tau}=\pil^{U(r)}(\xb_t)$ then $q_{t+\tau}\in \argmin_{q\ge 0} \E[\mathcal{H}^r(J_{t+\tau-1}+q)\mid\xb_t]$.
Using (<ref>), one may derive $\mathcal{H}^{r}(x)=a_1(x-U(r))^2+a_2$ with $a_1=\frac{h}{2(\mu-r)}>0$ and $a_2=\frac{-p^2(\mu-r)}{2h}$, thus $U(r)$ is the unique minimizer of $\mathcal{H}^{r}(\cdot)$. Now observe that
\begin{align}
\E[\mathcal{H}^{r}(J_{t+\tau-1}+q)\mid\xb_t] &= \E[a_1(J_{t+\tau-1}+q-U(r))^2+a_2\mid \xb_t] \notag\\
&=a_1\left[\var[J_{t+\tau-1}+q-U(r)\mid\xb_t] + \E[J_{t+\tau-1}+q-U(r)\mid\xb_t]^2 \right]+a_2 \notag\\
\label{eq:EHq}
&=a_1\var[J_{t+\tau-1}\mid\xb_t] + a_1(\E[J_{t+\tau-1}\mid\xb_t]+q-U(r))^2 +a_2,
\end{align}
where the final equality follows because $\var[q \mid \xb_t]=0$ for any deterministic policy $\pol$. Clearly $q=\pil^{U(r)}(\xb_t)=(U(r)-\E[J_{t+\tau-1}\mid \xb_t ])^+$ minimizes (<ref>).
For our continuous state-space, continuous action-space model, there appear to be no standard Markov decision process results that can be leveraged to prove from Lemma <ref> that the PIL policy dominates the constant order policy. Our proof uses the following result, the proof of which appears in Appendix <ref>:
Let $t_1\leq t_2$, $t_1,t_2\in\N_0$, $r\in [0,\E[D])$, $g^r=C(\cop^r)$, and suppose $q_t=r$
for all $t\in \{t_1+1,\ldots,t_2\}$. Then
\[ \E_{D_{t_1},\ldots,D_{t_2}}\left[ c[t_1,t_2](\cop^r) \mid I_{t_1}\right] = \mathcal{H}^r(I_{t_1})-\E_{D_{t_1},\ldots,D_{t_2}}[\mathcal{H}^r(I_{t_2+1})|I_{t_1}] + (t_2+1-t_1)g^r. \]
We are now ready to establish the main result of this section.
If demand has an exponential distribution then the best PIL policy $\pil^{U^*}$ outperforms the best constant order policy $\cop^{r^*}$. In particular
$C(\pil^{U^*})\leq C(\pil^{U(r^*)})\le C(\cop^{r^*})$ for any $\tau\in\N_0$, where $U(r)$ is given by Lemma <ref>.
We focus on bounding $\E[c[\tau,T](\pil^{U(r^*)}) - c[\tau,T](C^{r^*})]$. A device in the proof will be a policy $\G^{\tilde{t}}$ that places the first $\tilde{t}\in \N_0$ orders using the projected inventory policy $\pil^{U(r^*)}$, and subsequent orders using the optimal constant-order policy $\cop^{r^*}$:
\[ \G^{\tilde{t}}_t(\xb):=\begin{cases}
\pil^{U(r^*)}_t(\xb), & t< \tilde{t}\\
\cop^{r^*}_t(\xb)=r^*, & t\ge \tilde{t}.
\end{cases}
\]
Let $I_{t}(\pol)$ denote the random variable $I_{t}$ when policy $\pol$ is adopted, and let $\bar{t}=\tilde{t}+\tau$. We will compare expected interval costs for the policies $\G^{\tilde{t}+1}$ and $\G^{\tilde{t}}$:
\begin{align}
\E\left[c[\tau,T](\G^{\tilde{t}+1})-c[\tau,T](\G^{\tilde{t}})\right]&=\E\left[c[\bar{t},T](\G^{\tilde{t}+1})-c[\bar{t},T](\G^{\tilde{t}})\right]\nonumber\\
&=\E\left[\E_{D_{\bar{t}},\ldots,D_{T}}\left[ c[\bar{t},T] \Big| I_{\bar{t}}(\G^{\tilde{t}+1})\right] -\E_{D_{\bar{t}},\ldots,D_{T}}\left[ c[\bar{t},T] \Big| I_{\bar{t}}(\G^{\tilde{t}})\right] \right]\nonumber\\
&=\E\left[\mathcal{H}\left(I_{\bar{t}}(\G^{\tilde{t}+1})\right)-\mathcal{H}\left(I_{\bar{t}}(\G^{\tilde{t}})\right)\right]-\E\left[\mathcal{H}\left(I_{T+1}(\G^{\tilde{t}+1})\right)-\mathcal{H}\left(I_{T+1}(\G^{\tilde{t}})\right)\right]
\label{eq:costdiffbias}
\end{align}
Here and elsewhere in this proof, $\mathcal{H}$ denotes $\mathcal{H}^{r^*}$. The first equality follows because $\G^{\tilde{t}+1}$ and $\G^{\tilde{t}}$ coincide for $t< \tilde{t}$, and thus, since $\xb_0=\mathbf{0}$, the distributions of $\xb_{t}$, $J_{t+\tau-1}$, $L_{t+\tau-1}$ and $c_{t+\tau-1}$ are the same for the two policies for $t\le \tilde{t}$. The second equality is by conditioning on the inventory at time $\bar{t}$. For the third equality, note that $\G^{\tilde{t}+1}_t=\G^{\tilde{t}}_t=\cop^{r^*}$ for $t\ge \tilde{t}+1$, hence $q_{t}=r^*$ for $t\ge \tilde{t}+1+\tau$ under both policies, and hence we can substitute the identity of Lemma <ref>; the terms $(T-\bar{t}+1)g^{r^*}$ cancel in the difference.
Now note that $I_{\bar{t}}(\G^{\tilde{t}+1}) = J_{\bar{t}-1}+\pil_{\tilde{t}}^{U(r^*)}(\xb_{\tilde{t}})$ and $I_{\bar{t}}(\G^{\tilde{t}}) = J_{\bar{t}-1}+r^*$, while $\xb_{\tilde{t}}$ and $J_{\bar{t}-1}$ are identically distributed for both policies since they coincide for $t<\tilde{t}$. We condition on $\xb_{\tilde{t}}$ and find:
\begin{align}
\E\left[\mathcal{H}\left(I_{\bar{t}}(\G^{\tilde{t}+1})\right)-\mathcal{H}\left(I_{\bar{t}}(\G^{\tilde{t}})\right)\right]&=\E\left[\E\left[\mathcal{H}\left(I_{\bar{t}}\left(\G^{\tilde{t}+1}\right)\right)-\mathcal{H}\left(I_{\bar{t}}\left(\G^{\tilde{t}}\right)\right)\big|\xb_{\tilde{t}}\right]\right]\nonumber\\
&=\E\left[\min_{q\ge 0}\left(\E\left[\mathcal{H}\left(J_{\bar{t}-1}+q\right)-\mathcal{H}\left(J_{\bar{t}-1}+r^*\right)|\xb_{\tilde{t}}\right]\right)\right]\nonumber\\
&=\E\left[\min_{q\in \R}\left(\E\left[\mathcal{H}\left(J_{\bar{t}-1}+q\right)-\mathcal{H}\left(J_{\bar{t}-1}+r^*\right)|\xb_{\tilde{t}}\right]\right)\right]\nonumber\\
&=-\E\left[\E\left[ a_1 \left(r^*-\pil_{\tilde{t}}^{U(r^*)}\left(\xb_{\tilde{t}}\right)\right)^2\middle|\xb_{\tilde{t}}\right]\right]\nonumber\\
& =-a_1\E\left[ \left(r^*-\pil_{\tilde{t}}^{U(r^*)}(\xb_{\tilde{t}})\right)^2\right] \label{eq:boundingthecostdifference}
\end{align}
For the third equality, we use Lemma <ref>. For the fourth equality, observe that $\xb_0=0$, and hence $\E[J_{\bar{t}-1}|\xb_{\tilde{t}}]\le U(r^*)$ (cf. Lemma <ref>), which implies that the minimum over $q\in \R$ is attained by an element $q\ge0$.
For the fifth equality, we substitute equation (<ref>) and cancel terms.
With this, we obtain:
\begin{align}
\E\big[c[\tau,T](\pil^{U(r^*)})-c[\tau,T](\cop^{r^*})\big]&= \E\big[c[\tau,T](\G^{T+1-\tau})-c[\tau,T](\G^0)\big] \nonumber\\
&=\sum_{\tilde{t}=0}^{T-\tau} \E\big[c[\tau,T](\G^{\tilde{t}+1})-c[\tau,T](\G^{\tilde{t}})\big]\nonumber\\
&= \E[I_{T+1}(\G^0)] -\E[I_{T+1}(\G^{T+1-\tau})] +\sum_{\tilde{t}=0}^{T-\tau} \E\left[\mathcal{H}\left(I_{\bar{t}}(\G^{\tilde{t}+1})\right)-\mathcal{H}\left(I_{\bar{t}}(\G^{\tilde{t}})\right)\right] \nonumber\\
&= \E[I_{T+1}(\cop^{r^*})]-\E[I_{T+1}(\pil^{U(r^*)})]-a_1 \sum_{\tilde{t}=0}^{T-\tau} \E\left[ \left(r^*-\pil_{\tilde{t}}^{U(r^*)}(\xb_{\tilde{t}})\right)^2\right] \label{eq:finitehorizoncostdifference}
\end{align}
Here, the first equality holds by definition of $\G^{\tilde{t}}$. The second equality follows by expressing the difference as a telescoping sum. The third equality uses (<ref>). For the fourth equality, we rearrange and cancel terms. The final equality holds by definition of $\G^{\tilde{t}}$, and by (<ref>).
Using the definition of the cost-rate, we find
\begin{align*}
C(\cop^{r^*})&= \lim_{T\to \infty} \E\left[\frac{1}{T-\tau+1}c[\tau,T](\cop^{r^*}) \right]\\
&=\lim_{T\to \infty}\E \left[\frac{1}{T-\tau+1}\left(c[\tau,T](\pil^{U(r^*)}) +a_1 \sum_{\tilde{t}=0}^{T-\tau} \left(r^*-\pil_{\tilde{t}}^{U(r^*)}(\xb_{\tilde{t}})\right)^2\right) \right]\\
&=\limsup_{T\to \infty}\E \left[\frac{1}{T-\tau+1}c[\tau,T](\pil^{U(r^*)}) \right]+
a_1\liminf_{T\to \infty} \E\left[\frac{1}{T-\tau+1}\sum_{\tilde{t}=0}^{T-\tau} \left(r^*-\pil_{\tilde{t}}^{U(r^*)}(\xb_{\tilde{t}})\right)^2\right]
\end{align*}
The first equality uses that for the constant order policy, the sequence converges such that the limit superior equals the limit. For the second equality, we use (<ref>) and $\lim_{T\to \infty}\frac{1}{T-\tau+1} (\E[I_{T+1}(\cop^{r^*})]-\E[I_{T+1}(\pil^{U(r^*)})])=0$. That this limit goes to zero follows because the steady state inventory level under the constant order policy exists (with finite mean), and because the inventory level under the PIL policy is bounded from below by $0$ and from above by $U(r^*)+\tau\mu$. This upper bound holds because $\pil^{U(r^*)}_t(\xb_t) \le \left(U(r^*)+\tau\mu-I_t -q[t+1,t+\tau-1] \right)^+$, i.e. the inventory position after placing an order, $I_t+q[t+1,t+\tau-1]+q_{t+\tau}$, cannot rise above $U(r^*)+\tau\mu$ (and it is below this bound initially since $\xb_0=\mathbf{0}$ by assumption). Thus the inventory level is bounded by $U(r^*)+\tau\mu$.
The third equality now follows because all limit points of the sequence $\left(\E\left[\frac{1}{T-\tau+1}\sum_{\tilde{t}=0}^{T-\tau} \left(r^*-\pil_{\tilde{t}}^{U(r^*)}(\xb_{\tilde{t}})\right)^2\right]\right)_{T= \tau,\tau+1,\ldots}$ must be finite since $\pil_{\tilde{t}}^{U(r^*)}(\xb_{\tilde{t}})$ is bounded below by $0$ and above by $U(r^*)+\tau\mu$. Now note that for each limit point $x\in\R$ of that sequence, there must exist a subsequence that converges to that limit point. By the second equality above, the corresponding subsequence of $\left(\E \left[\frac{1}{T-\tau+1}c[\tau,T](\pil^{U(r^*)})\right]\right)_{T= \tau,\tau+1,\ldots}$ must converge to $y=C(\cop^{r^*})-x$. Thus every limit point $x$ of the former sequence must correspond to a limit point $y=C(\cop^{r^*})-x$ of the latter sequence, and vice versa. Also, the smallest limit point of the latter sequence must correspond to the largest limit point of the former sequence, which implies the third equality.
We thus find:
\begin{equation}
\label{eq:QuadDivergence}
C(\cop^{r^*})-C(\pil^{U(r^*)}) = a_1\liminf_{T\to \infty} \frac{1}{T-\tau+1} \sum_{\tilde{t}=0}^{T-\tau} \E\left[ \left(r^*-\pil_{\tilde{t}}^{U(r^*)}(\xb_{\tilde{t}})\right)^2\right]\ge 0.
\end{equation}
This completes the proof.
It is noteworthy that the difference in cost between a PIL policy and a constant order policy can be expressed in terms of the squared differences between the order decisions of the two policies; see (<ref>). The extent to which the decisions of the PIL policy deviate from the constant order quantity thus quantifies how much better it performs.
§.§ Asymptotic optimality as $\tau\to\infty$
With the results of [Xin and Goldberg, 2016], Theorem <ref> establishes asymptotic optimality of the PIL policy as $\tau$ grows large:
The PIL policy is asymptotically optimal for long lead-times when demand has an exponential distribution:
\[
\lim_{\tau\to\infty}\left(C(\pil^{U(r^*)})-C^{*}\right) = 0.
\]
This result follows directly from our Theorem <ref> and Theorem 1 in [Xin and Goldberg, 2016]. In fact, it follows from these theorems that the optimality gap of the PIL policy decays exponentially in $\tau$.
§ PENALTY COST ASYMPTOTICS
We study the performance of the PIL policy in the asymptotic regime $p\to\infty$ using bounds in terms of the related inventory system in which demand in excess of inventory
is back-ordered rather than lost. In Section <ref>, we therefore first compare to the related canonical back-order system and show that the best PIL policy
for the lost sales system achieves lower cost than the optimal policy for the canonical back-order system. Then, in Section <ref>, we provide our main result: the PIL policy is asymptotically optimal as the cost of losing a sale approaches infinity.
§.§ Comparison to back-order system
The back-order system with lead-time $\tau\in\N_0$, holding cost
parameter $h>0$ per period per item, and back-order cost $p>0$ per period per item is much better
understood than the same system with lost sales. Comparisons between these two systems have been made before
by e.g. [Janakiraman et al., 2007],
[Huh et al., 2009], and [Bijvank et al., 2014], and we
mostly follow their notational conventions.
The dynamics for the back-order system are:
\begin{align}
I_{t}^\B=I_{t-1}^\B-D_{t-1}+q^\B_{t}, \quad J^\mathcal{B}_t=(I_t^\mathcal{B}-D_t)^+, \quad B_t=(D_t-I_t^\mathcal{B})^+,
\end{align}
where $B_t$ and $I_t^\B$ denote respectively the number of items on back-order and the inventory level in period $t$. We use the superscript $\B$ to denote that quantities belong to the back-order system
(rather than the lost sales system) and generally denote this system as $\B$.
It is well known that the optimal policy for $\B$ is a base-stock policy. Under this policy the order in each period $t$
is placed to raise the inventory position to a fixed base-stock level $S$:
\begin{equation}
q^\B_{t+\tau} = S - I^\B_{t} - q^\B[t+1,t+\tau-1],
\end{equation}
and the optimal base-stock level for system $\B$ is given by the newsvendor equation:
\begin{equation}
S^* = \inf\left\{S: \P(D[0,\tau]\leq S)\geq \frac{p}{p+h} \right\}.
\end{equation}
The optimal average cost-rate for this system satisfies:
\begin{equation}
C^{\B *}=h\E\left[(S^*-D[0,\tau])^+\right]+p\E\left[(D[0,\tau]-S^*)^+\right].
\end{equation}
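As an illustration, the two newsvendor quantities above can be evaluated directly for any discrete lead-time demand distribution. The sketch below is our own (function names and the truncation level are assumptions, not from the paper):

```python
import math

def pois_pmf(mean, n_max):
    # pmf of Poisson(mean) on {0, ..., n_max} via the stable recursion p_k = p_{k-1} * mean / k
    p = [math.exp(-mean)]
    for k in range(1, n_max + 1):
        p.append(p[-1] * mean / k)
    return p

def newsvendor(pmf, p, h):
    """S* = inf{S : P(D[0,tau] <= S) >= p/(p+h)} and the resulting cost-rate
    h*E[(S* - D)^+] + p*E[(D - S*)^+], for lead-time demand with the given pmf."""
    target = p / (p + h)
    cdf, S_star = 0.0, len(pmf) - 1
    for s, mass in enumerate(pmf):
        cdf += mass
        if cdf >= target:
            S_star = s
            break
    holding = sum(mass * (S_star - d) for d, mass in enumerate(pmf) if d < S_star)
    penalty = sum(mass * (d - S_star) for d, mass in enumerate(pmf) if d > S_star)
    return S_star, h * holding + p * penalty
```

For instance, for deterministic lead-time demand the optimal base-stock level is the demand itself and the optimal cost is zero.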
For the back-order system we similarly define $c^\mathcal{B}_t:=hJ^\mathcal{B}_t+p B_t$ and
$c^{\mathcal{B}}[a,b]:=\sum_{t=a}^b c^{\mathcal{B}}_t$. We also write $c^\mathcal{B}[a,b](\pol)$ to make the dependence on the control policy $\pol$ explicit. With this notation we can express the cost-rate of a policy $\pol$ in system $\mathcal{B}$ as $C^\mathcal{B}(\pol)=\limsup_{T\to\infty}\E\left[\frac{1}{T-\tau+1} c^{\mathcal{B}}[\tau,T](\pol)\right]$.
The lost sales system described in Section <ref> is denoted by $\L$, and the optimal cost for this
system is still denoted by $C^{*}$.
Note that both systems are defined on the same probability space induced by the initial state and
demand sequence. The main result of [Janakiraman et al., 2007] is that $C^{*}\leq C^{\B*}$, which is established via an ingenious stochastic comparison technique. The main result of this section is
that the best PIL-policy for $\L$ achieves lower cost than the optimal policy for $\B$: $C(\pil^{U^*})\leq C^{\B*}$. Via $C^{*}\leq C(\pil^{U^*})\leq C^{\B*}$, this result constitutes the first constructive proof of the main result of [Janakiraman et al., 2007]: unlike their stochastic comparison proof, we identify a specific (PIL) policy that yields a cost-rate for system $\L$ that is lower than the optimal cost-rate for system $\B$. In addition, this result enables us to leverage results in [Huh et al., 2009] to show (under mild conditions) that the PIL-policy is asymptotically optimal as $p$ grows large.
The main idea behind the proof of this result is that the base-stock policy with base-stock level $S>\tau\mu$ in $\B$ is
also a PIL-policy with projected inventory $S-\tau\mu$ in system $\B$. Indeed observe that
\[
I^\B_{t+\tau} = S - D[t,t+\tau-1],
\]
so that $\E[I^\B_{t+\tau}]=S-\tau\mu$. From this it immediately follows that $C^\mathcal{B}\left(\pil^{S^*-\tau\mu}\right)=C^{\mathcal{B}}\left(\base^{S^*}\right)=C^{\mathcal{B}*}$ when $S^*\ge\tau\mu$.
We will first show that $\E[L_t]\leq \E[B_t]$
for any period $t\geq \tau$ when $\mathcal{L}$ and $\mathcal{B}$ operate under the same PIL-policy with level $U\geq 0$. From that, we conclude that the cost-rate of $\mathcal{L}$ under the optimal
PIL policy is smaller than the optimal cost-rate for system $\mathcal{B}$.
The following technical lemma is needed in subsequent results. Its proof is in Appendix <ref>.
Let $X$ and $Y$ be random variables with joint distribution function $F(x,y)$, $-\infty<x,y<\infty$.
Then $\E[(X+Y)^+]=\int_{-\infty}^\infty \P(X\geq z, Y\geq -z) dz$.
Define the random variables
\[
Y= L[0,\tau-1] - \E[L[0,\tau-1]], \qquad\mbox{and}\qquad X=D_{\tau}-I_{\tau}=D[0,\tau]-S-Y,
\]
for $S\geq\tau\mu$.
Observe that $X^+=L_{\tau}$ and that $X+Y=D[0,\tau]-S$ such that $(X+Y)^+=B_{\tau}$
under a PIL-policy with level $U=S-\tau\mu\geq0$.
Our aim will be to prove that $\E[L_{\tau}]\leq \E[B_{\tau}]$. To this end,
we first prove that $X$ and $Y$ are associated random variables (cf. [Esary et al., 1967]).
The random variables $(A_1,\ldots,A_n)=\mathbf{A}$ are said to be associated if
\[
\mbox{Cov}[f(\mathbf{A}),g(\mathbf{A})]\geq 0
\]
for all non-decreasing functions $f,g:\R^n \to\R$ for which the covariance above exists.
$X$ and $Y$ are associated random variables.
The random variables $\mathbf{D}=(D_0,\ldots,D_{\tau})$ are associated by Theorem 2.1 of [Esary et al., 1967].
By property $P_4$ of [Esary et al., 1967] it suffices to show that $Y=f(\mathbf{D})$ and $X=g(\mathbf{D})$ are
non-decreasing functions (element wise).
We derive an expression for $L[0,t]$, the cumulative demand lost up to period $t$. All demand $D[0,t]$ arriving up to $t$ is either satisfied from inventory or lost. When $L_t>0$, the cumulative amount satisfied must equal the cumulative available inventory $I_0+q[1,t]$; hence $D[0,t]=L[0,t]+(I_0+q[1,t])$, i.e. $L[0,t]=D[0,t]-I_0-q[1,t]$. When $L_t=0$, we have $L[0,t]=L[0,t-1]$, and hence in general we find:
\begin{equation}
\label{eq:Lsum}
L[0,t] = \max_{k\in\{0,\ldots,t\}} \left( D[0,k] - I_0 - q[1,k] \right)^+.
\end{equation}
(To see this, note that the maximum is attained for the $k$ which corresponds to the period with the last stockout until $t$.) Next for $Y=f(\mathbf{D})$ we have
\begin{equation}
Y = L[0,\tau-1]- \E[L[0,\tau-1]] = \max_{k\in\{0,\ldots,\tau-1\}} \left( D[0,k]-I_0-q[1,k] \right)^+ - \E[L[0,\tau-1]],
\end{equation}
which is clearly non-decreasing in each $D_i$, $i\in\{0,\ldots,\tau-1\}$. (Note that $Y$ is independent of, and thus non-decreasing in, $D_\tau$.)
Finally observe that
\begin{equation}
\label{eq:Xredef}
X=g(\mathbf{D})=D[0,\tau]-S-\max_{k\in\{0,\ldots,\tau-1\}} \left( D[0,k]-I_0-q[1,k] \right)^+ + \E[L[0,\tau-1]].
\end{equation}
Note that $dX/dD_{\tau} =1$ and $dX/dD_i \in \{0,1\}$ for $i\in\{0,\ldots,\tau-1\}$, thus $g$ is non-decreasing.
If system $\mathcal{B}$ and $\mathcal{L}$ both operate under a PIL policy with
level $U\geq 0$, then $\E[B_t]\geq \E[L_t]$ for any period $t\geq \tau$.
We will show that $\E[B_{\tau}]\geq \E[L_{\tau}]$ using only that $q_\tau$ can be placed to attain $U\geq0$. By Lemma <ref>
this implies the result.
Let $\tilde{X}$ and $\tilde{Y}$ be two independent random variables with the same marginal distribution as $X$ and $Y$ respectively.
Observe that
\begin{align}
\E[B_{\tau}] &= \E\left[(X+Y)^+\right] \notag\\
\label{eq:firststep}
& = \int_{z=-\infty}^\infty \P(X\geq z, Y\geq -z) dz \\
\label{eq:esarystep}
& \geq \int_{z=-\infty}^\infty \P(X\geq z) \P(Y\geq -z) dz \\
& = \int_{z=-\infty}^\infty \P(\tilde{X}\geq z) \P(\tilde{Y}\geq -z) dz = \E\left[(\tilde{X}+\tilde{Y})^+\right]\notag,
\end{align}
where (<ref>) follows from Lemma <ref> and (<ref>) follows from Theorem 5.1 of [Esary et al., 1967].
Now continuing and using that $\tilde{X}$ and $\tilde{Y}$ are independent, we find
\begin{align}
\E\left[(\tilde{X}+\tilde{Y})^+\right] \ge \E\left[(\tilde{X}+\E[\tilde{Y}])^+\right] = \E[\tilde{X}^+],
\end{align}
where the inequality follows from Jensen's inequality and the independence of $\tilde{X}$ and $\tilde{Y}$, and the equality holds since $\E[\tilde{Y}]=0$. Finally, note that $\E[\tilde{X}^+]=\E[X^+]=\E[L_{\tau}]$.
If system $\mathcal{B}$ and $\mathcal{L}$ both operate under a PIL policy with
level $U\geq 0$, then for any initial state $\xb$ such that projected inventory $U\geq0$ can be attained, we have $p\E[L_{t}]+h\E[J_{t}]=\E[c_t]\leq \E[c^{\mathcal{B}}_t]=p\E[B_t]+h\E[J_t^\mathcal{B}]$ for any $t\geq\tau$.
We will show that $\E[c_{\tau}]\leq \E[c^{\mathcal{B}}_{\tau}]$ using only that $q_\tau$ can be placed to attain $U\geq0$. By Lemma <ref>
this implies the result.
The inventory level in $\mathcal{L}$ at the time of arrival of order $q_\tau$ is given by:
\begin{equation}
\label{eq:ItauL}
I_\tau = I_0 + q[1,\tau] - D[0,\tau-1] + L[0,\tau-1].
\end{equation}
Under a PIL-policy with projected inventory $S-\tau\E[D]$, system $\mathcal{L}$ will choose $q_\tau$ such that $\E[I_\tau]=S-\tau\E[D]$.
Using (<ref>) and solving for $q_\tau$ yields that
\begin{equation}
\label{eq:qtauPIL}
q_\tau=S-I_0- q[1,\tau-1] - \E[L[0,\tau-1]].
\end{equation}
Substituting (<ref>) back into (<ref>) yields
\begin{equation}
\label{eq:ItauSimplified}
I_\tau = S- D[0,\tau-1] + L[0,\tau-1] - \E[L[0,\tau-1]].
\end{equation}
Now we have for the expected costs that will be incurred in period $\tau$ by system $\mathcal{L}$:
\begin{align}
\E[c_{\tau}] &= h \E[(I_\tau-D_\tau)^+] + p \E[(D_\tau - I_\tau)^+] \notag\\
&= h \E[I_\tau-D_\tau] + h \E[(D_\tau-I_\tau)^+] + p \E[(D_\tau - I_\tau)^+] \notag\\
&= h \E\left[S- D[0,\tau] + L[0,\tau-1] - \E(L[0,\tau-1]) \right] + h \E[L_\tau] + p \E[L_\tau] \notag\\
&= h \E\left[S- D[0,\tau] \right] + h \E[L_\tau] + p \E[L_\tau] \notag\\
&= h\E\left[\left(S- D[0,\tau]\right)^+\right] - h \E\left[\left( D[0,\tau] - S\right)^+\right] + h \E[L_\tau] + p \E[L_\tau] \notag\\
&\leq h\E\left[\left(S- D[0,\tau]\right)^+\right] + p \E\left[\left( D[0,\tau] - S\right)^+\right] = \E[c_\tau^\mathcal{B}]=C^{\mathcal{B}*}.
\end{align}
The second and fourth equality follow from using the identity $x=x^+ + (-x)^+$, and the inequality follows from applying Lemma <ref> twice.
If system $\B$ is controlled by the optimal base-stock policy, or equivalently by a PIL policy with parameter $S^*-\tau\mu$, and if $\L$ is controlled by a PIL-policy with PIL-level $(S^*-\tau\mu)^+$, then
the cost-rate of system $\L$ is at most the optimal cost-rate of system $\B$, that is
\[
C(\pil^{U^*})\leq C(\pil^{(S^*-\tau\mu)^+})\leq C^{\mathcal{B}*}.
\]
Since $\xb_0=\textbf{0}$, we can attain the projected inventory level $U$ in $\L$ in every period. (Regardless of this assumption, the number of periods for which
it is not possible to attain a projected inventory level $U$ in $\L$ is finite almost surely.) Now consider two cases: $S^*\geq \tau\mu$
and $S^*<\tau\mu$.
* Case $S^*\geq\tau\mu$: Without loss of generality let period 0 be the first period in which it is possible to place an order to attain $U=S^*-\tau\mu\geq 0$. By Lemma <ref> it holds that $\E[c_t]\leq\E[c^{\mathcal{B}}_t]=C^{\mathcal{B}*}$ for all $t\geq\tau$ which implies that $C(\pil^{S^*-\tau\mu})\leq C^{\mathcal{B}^*}$.
* Case $S^*<\tau\mu$: Observe first that $C(\pil^0)=p\mu$ because under $U=0$ there is no inventory and all demand is lost. We now have
$C^{\mathcal{B}*} = C^\mathcal{B}(\pil^{S^*-\tau\mu}) \ge p\E\left[\left(D[0,\tau]-S^*\right)^+\right] \ge
p\E\left[D[0,\tau]-S^*\right]>p\mu=C(\pil^0)$, where the strict inequality holds because $0\leq S^*<\tau\mu$.
That $C(\pil^{U^*})\leq C(\pil^{(S^*-\tau\mu)^+})$ follows from the definition of $U^*$.
§.§ Asymptotic optimality as $p\to\infty$
To describe penalty cost asymptotics we need the following assumption on the distribution of lead time demand which is identical to
assumption 1 of [Huh et al., 2009] and [Bijvank et al., 2014]:
The random variable $D[0,\tau]$ has finite mean and is (i) bounded, or
(ii) unbounded with $\lim_{x\to\infty} \E\big[D[0,\tau]-x \big| D[0,\tau]> x\big]/x=0$.
Assumption <ref> is discussed in some detail in Section 3 of [Huh et al., 2009].
All distributions commonly used to model demand, including Gaussian, gamma, Poisson, negative-binomial, Mixed Erlang,
and Weibull distributions, satisfy this assumption.
Under assumption 1, the best PIL-policy is asymptotically optimal for the lost sales inventory system as the cost of a lost sale increases:
\[
\lim_{p\to\infty} \frac{C(\pil^{(S^*(p)-\tau\mu)^+})}{C^*}
=\lim_{p\to\infty} \frac{C(\pil^{U^*})}{C^*}=1.
\]
By Theorem 3 of [Huh et al., 2009] we have
$\lim_{p\to\infty} C^{\B*}/C^{*}=1$. Combining this with Theorem <ref> yields the result.
Recall that by Lemma <ref>, a 1-step policy improvement of a constant order policy yields a PIL policy. The former class of policies is not asymptotically optimal for $p\rightarrow \infty$ and tends to perform poorly in this regime, while Theorem <ref> demonstrates that the latter class is asymptotically optimal as $p\rightarrow \infty$. Thus policy improvement qualitatively alters the asymptotic performance.
§ COMPUTATIONAL ASPECTS
In this section we discuss the computation of the projected inventory level $\E[J_{t+\tau-1}|\xb_t]$ and the optimization of the projected inventory level $U$.
§.§ Inventory Projection
Implementation of the PIL policy requires that the projected inventory level, $\E[J_{t+\tau-1}|\xb_t]$, is computed every period $t$. This can be done relatively straightforwardly when demand has a discrete distribution by using the recursive expressions in (<ref>) through (<ref>). In this sub-section, we show how $\E[J_{t+\tau-1}|\xb_t]$ can be computed just as straightforwardly when demand has a Mixed Erlang (ME) distribution. The class of ME distributions is a powerful modeling tool because it can approximate any non-negative distribution arbitrarily closely (cf. Theorem 5.5.1 of [Tijms, 2003]), can be fitted easily to moments, and has many computational advantages in multi-echelon inventory theory [van Houtum, 2006]. We next define ME distributions and discuss how to project inventory levels when demand has an ME distribution.
Let $\{E_{i,t}\}_{i=1}^\infty$ be an i.i.d. sequence of exponential random variables with mean $1/\lambda$ for each $t\in\N_0$. Furthermore, let $\{K_t\}_{t=0}^\infty$ be a sequence of i.i.d. random variables on the non-negative integers with probability mass function $\P(K_t=k)=\theta_k$, $k\in\N_0$. Demand has an ME distribution when $D_t=\sum_{i=1}^{K_t} E_{i,t}$. In order to intuitively explain the efficient inventory projection method that follows, we imbue ME distributed demand with the following interpretation: each period $K_t$ customers arrive, each demanding an exponentially distributed amount of stock.
Recall that $\E[J_{t+\tau-1}|\xb_t]=I_t+q[t+1,t+\tau-1]-\tau\mu +\E\left[L[t,t+\tau-1] \mid \xb_t\right]$ so that in order to project the inventory level, it suffices to evaluate $\E\left[L[t,t+\tau-1] \mid \xb_t\right]$. The main idea now is to count inventory and lost sales in terms of the number of customers (each with one exponential phase of demand) that can be satisfied.
Let $\tilde{I}_t$ denote the number of customers whose demand can be met fully with the inventory available at the beginning of period $t$. Then conditional on $\xb_t=(I_t,q_{t+1},q_{t+2},\ldots,q_{t+\tau-1})$, $\tilde{I}_t$ is Poisson distributed with mean $\lambda I_t$. Since $K_t$ customers will arrive in period $t$, there will be $\tilde{L}_t := (K_t-\tilde{I}_t)^+$ customers whose demand cannot be filled completely; note that for one of those customers the demand can be filled partially. Next, we crucially observe that for the customer whose demand is met partially, the amount of demand (in original units) that remains unfulfilled has an exponential distribution with mean $1/\lambda$ due to the lack of memory of the exponential distribution. Thus $\E[L_t \mid \xb_t] = \lambda^{-1} \E[\tilde{L}_t \mid \xb_t]$. Similarly, there is inventory to satisfy the demand of another $\tilde{J}_t:=(\tilde{I}_t-K_t)^+$ customers at the end of period $t$. Let $Q_{t+1}$ have a Poisson distribution with mean $\lambda q_{t+1}$. Then in period $t+1$ there is inventory to satisfy the demand of $\tilde{I}_{t+1}=\tilde{J}_t+Q_{t+1}$ customers. This reasoning can be continued to obtain the following dynamics:
\begin{align}
\tilde{J}_t &= (\tilde{I}_t-K_t)^+,\\
\tilde{L}_t &= (K_t - \tilde{I}_t)^+,\\
\tilde{I}_{t+1}&=\tilde{J}_{t}+Q_{t+1},
\end{align}
with the initial conditions that $\tilde{I}_t$ has a Poisson distribution with mean $\lambda I_t$ and $Q_t$ has a Poisson distribution with mean $\lambda q_t$, where $1/\lambda$ is the mean of the exponential distributions associated with the ME demand. Note that since $\tilde{I}_t$, $\tilde{J}_t$ and $\tilde{L}_t$ all denote a number of customers (whose demand can be met in full or is partially lost) they are distributed on the non-negative integers. Thus the distributions of $\tilde{I}_{t+j}|\xb_t$, $\tilde{J}_{t+j}|\xb_t$ and $\tilde{L}_{t+j}|\xb_t$ can be computed recursively for $j\in\{0,\ldots,\tau-1\}$; see Appendix <ref>. This recursion requires that the distributions of $\tilde{I}_t$, $\tilde{J}_t$, and $Q_t$ are truncated at a sufficiently high level. $K_t$ will usually have finite support $k_{\max} = \sup\{k\in\N\mid \theta_k>0\}$ (under two-moment fits $K_t$ will have a two-point distribution, see [Tijms, 2003]) and so $\tilde{L}_{t+j}$ will be supported up to $(j+1)k_{\max}$. From this it follows that the distributions of $\tilde{I}_t$ and $\tilde{J}_t$ can be truncated at $(\tau+1) k_{\max}$ for computational purposes.
Thus we can compute the projected inventory level as
\[
\E[J_{t+\tau-1} \mid \xb_t]=I_t+q[t+1,t+\tau-1]-\tau\mu + \frac{1}{\lambda}\E\left[\tilde{L}[t,t+\tau-1] \mid \xb_t\right],
\]
when demand has a ME distribution. The theoretical running time complexity of this procedure is polynomial in $k_{\max}$ and $\tau$. In practice, the procedure is efficient (see Section <ref>).
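A minimal sketch of this projection procedure for ME demand follows (our own illustration; the function names and the truncation level $N$ are assumptions, not from the paper). It carries the pmf of the customer-count variables through the recursion and returns the projected inventory level:

```python
import math

def pois_pmf(mean, N):
    # pmf of Poisson(mean) on {0, ..., N-1} via the stable recursion p_k = p_{k-1} * mean / k
    p = [math.exp(-mean)]
    for k in range(1, N):
        p.append(p[-1] * mean / k)
    return p

def projected_inventory(I_t, pipeline, theta, lam, N=200):
    """Projected inventory E[J_{t+tau-1} | x_t] for Mixed Erlang demand, counting
    demand in exponential phases ("customers").  I_t is on-hand stock, pipeline is
    [q_{t+1}, ..., q_{t+tau-1}], theta the pmf of K_t, lam the phase rate, N a
    truncation level for all customer-count distributions."""
    tau = len(pipeline) + 1
    mu = sum(k * pk for k, pk in enumerate(theta)) / lam       # mean one-period demand
    pI = pois_pmf(lam * I_t, N)                                # pmf of I~_t given x_t
    lost_phases = 0.0                                          # accumulates E[L~[t, t+tau-1] | x_t]
    for j in range(tau):
        pJ = [0.0] * N
        for k, pk in enumerate(theta):
            if pk == 0.0:
                continue
            pJ[0] += pk * sum(pI[:k])                          # I~ < k: all stock consumed
            for m in range(N - k):                             # J~ = I~ - k when I~ >= k
                pJ[m] += pk * pI[m + k]
            lost_phases += pk * sum((k - i) * pI[i] for i in range(k))  # E[(k - I~)^+]
        if j < tau - 1:                                        # I~_{next} = J~ + Q_{t+j+1}
            pQ = pois_pmf(lam * pipeline[j], N)
            pI = [sum(pJ[i] * pQ[m - i] for i in range(m + 1)) for m in range(N)]
    return I_t + sum(pipeline) - tau * mu + lost_phases / lam
```

For a quick sanity check, take $\theta_1=1$ and $\lambda=1$ (exponential demand with mean 1) and $\tau=1$: then $\E[L_t\mid\xb_t]=\P(\tilde{I}_t=0)=e^{-I_t}$, so the projected level is $I_t-1+e^{-I_t}$, which the sketch reproduces.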
§.§ Optimization of the projected inventory level policy
We next discuss the optimization of $U$, i.e., the problem:
\[\min_{U\ge0} C(\pil^U) \]
The cost incurred in period $t$ can be written as $c_t =h (I_t-D_t)^+ + p (D_t-I_t)^+= h (I_t- D_t) + (h+p)(D_t-I_t)^+$. Since $\E [I_t]=U$ by Lemma <ref>,
we obtain
\begin{equation}\label{eq:costfunctionPIL}
C(\pil^U) = hU-h\E[D]+(h+p)\limsup_{T\rightarrow\infty} \E\left[\frac{1}{T-\tau+1}L[\tau,T](\pil^U) \right]
\end{equation}
To analyze the costs, the following lemma will be useful. Its proof is in Appendix <ref>.
For any given $\xb_0$ and demand sequence $D_1,\ldots, D_{T-1},D_{T},\ldots,D_{T+\tau}$:
* The cumulative inventory ordered until period $T+\tau$ (i.e. $q[1,T+\tau](\pil^U) | D_1,\ldots,D_{T-1}$) is non-decreasing in $U$.
* The cumulative lost demand until period $T+\tau$ (i.e. $L[1,T+\tau](\pil^U) | D_1,\ldots,D_{T+\tau}$) is non-increasing in $U$.
* The expected lost sales per period, $\limsup_{T\rightarrow\infty} \E\left[\frac{1}{T-\tau+1}L[\tau,T](\pil^U) \right]$, is non-increasing in $U$.
We next discuss the identification of near-optimal $U$. Observe that from (<ref>) it follows that $U^*\le (1+p/h)\mu =:\bar{U}$. Indeed, the holding costs for $U>\bar{U}$ already exceed the costs $C(\pil^0)=p\mu$ for $U=0$. Let $\alpha_D=\min_x \E\left[(p/h)(D-x)^++(x-D)^+\right]$, and for ease of exposition suppose $\var(D)>0$ so that $\alpha_D>0$. Then Lemma <ref> underlies the following theorem. Its proof is in Appendix <ref>.
For $\epsilon>0$, let $n_\epsilon=\lceil 2\mu/(\alpha_D\epsilon)\rceil -1$ and $n'_\epsilon=\left\lceil\frac{\ln(p/h)}{\ln(1+\epsilon)}\right\rceil$. Let $\mathcal{U}_{\epsilon}=\{0,\epsilon\alpha_D,\ldots,n_\epsilon \epsilon \alpha_D\}\cup \{\mu+\mu (1+\epsilon)^i| i\in \{0,1,\ldots,n'_\epsilon\}\}$. Then $|\mathcal{U}_\epsilon|\le\frac{1}{\epsilon}\left( 2\mu/\alpha_D+\ln(p/h)\right)+2$ and
\[\min_{U\in \mathcal{U}_\epsilon}C(\pil^U) \le (1+\epsilon)\min_{U\ge 0}C(\pil^U).\]
Thus evaluating the costs $C(\pil^U)$ only for $U\in \mathcal{U}_{\epsilon}$ guarantees good performance. The search is quite efficient since $|\mathcal{U}_\epsilon|$ grows logarithmically in $p$ and since $\mu/\alpha_D$ tends to stabilize as $\mu$ grows for fixed coefficients of variation.
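As an illustration, the candidate set $\mathcal{U}_\epsilon$ of the theorem can be constructed as follows (a sketch with our own naming; evaluating $C(\pil^U)$ at each candidate, e.g. by simulation, is left out):

```python
import math

def candidate_grid(mu, alpha_D, p, h, eps):
    """Candidate PIL-levels U_eps from the theorem: a fine arithmetic grid
    {0, eps*alpha_D, ..., n_eps*eps*alpha_D} near zero, plus a geometric grid
    {mu + mu*(1+eps)^i : i = 0, ..., n'_eps} for larger levels."""
    n_eps = math.ceil(2.0 * mu / (alpha_D * eps)) - 1
    n_prime = math.ceil(math.log(p / h) / math.log(1.0 + eps))
    fine = [i * eps * alpha_D for i in range(n_eps + 1)]
    coarse = [mu + mu * (1.0 + eps) ** i for i in range(n_prime + 1)]
    return sorted(set(fine + coarse))
```

For example, with $\mu=100$, $\alpha_D=20$, $p=9$, $h=1$ and $\epsilon=0.5$, the fine grid has 20 points ($0,10,\ldots,190$) and the geometric grid 7 points, so only 27 candidate levels need to be evaluated.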
A simpler and even more efficient algorithm for optimizing $U$ may be obtained based on the hypothesis that $C(\pil^U)$ is convex in $U$; this hypothesis is supported by all our numerical experiments but remains without rigorous proof.
§ NUMERICAL RESULTS
The PIL policy is asymptotically optimal for large $\tau$ if $D$ has an exponential distribution. In Section <ref>, we provide numerical evidence that the PIL policy also outperforms the COP (and the base-stock policy) for non-exponential distributions, even for long lead times. Then in Section <ref>, we benchmark
the performance of the PIL policy against other policies including
the optimal policy for the standard test-bed of [Zipkin, 2008].
Finally in Section <ref>, we provide numerical benchmarks for a test-bed of instances of the size one is likely to encounter in practice. This section also provides results on the computational feasibility of computing inventory projections.
§.§ Long lead times
We created a test-bed in which demand has a Mixed Erlang distribution with mean 100 and the holding cost is fixed at $h=1$. We then varied the coefficient of variation of the one-period demand ($\sqrt{\Var[D]/(\E[D])^2}$) to be either $\frac{1}{2}$ or $\frac{3}{2}$ using the two-moment fitting procedure in [Tijms, 2003]. We also varied the lead-time, $\tau\in\{1,\ldots,20\}$, and the penalty cost, $p\in\{4,9,19\}$. This makes for a total of $2\cdot3\cdot20=120$ instances. For each of these instances we optimized the constant order policy, the base-stock policy and the PIL policy. The results are shown in Figure <ref>.
[Figure <ref>: six panels plotting the optimized cost rate against the lead time $\tau\in\{1,\ldots,20\}$ for the base-stock, projected inventory level, and constant order policies; panel titles: $cv=0.5$, $p=4$; $cv=1.5$, $p=4$; $cv=0.5$, $p=9$; $cv=1.5$, $p=9$; $cv=0.5$, $p=19$; $cv=1.5$, $p=19$. Caption: Optimized cost rate as a function of lead time for different policies, for Mixed Erlang demand with mean 100 and coefficient of variation ($cv$) of 0.5 (sub-figures a, c, and e) and 1.5 (sub-figures b, d, and f), penalty costs of 4, 9 and 19 respectively, and holding cost rate 1.]
[Figure <ref>: optimized cost rate against the lost sale penalty cost $p$ for the base-stock, projected inventory level, and constant order policies. Caption: Cost rate as a function of penalty cost when $h=1$, $\tau=5$, and demand is negative binomial with 5 required successes and success probability 1/21.]
§.§ Heuristic PIL-levels
Based on BS policy:
\[
U_{BS} = S^*-\tau \mu
\]
Based on the COP and the Kingman approximation of the $G/D/1$ queue, we obtain the following approximate cost rate for a COP with order quantity $q$:
\begin{equation}
\label{eq:Ctilde}
\tilde{C}(q) = h\frac{c_D^2}{2}\frac{q^2}{\mu-q}+p(\mu-q).
\end{equation}
Setting $d\tilde{C}(q)/dq=0$, solving for $q$, and rejecting the root that is not physically feasible, we find
\begin{equation}
\label{eq:qtildeopt}
q^*=\mu\left(1-\sqrt{\frac{h c_D^2}{2p+h c_D^2}}\right).
\end{equation}
Thus under such a COP policy, the inventory level at the beginning of a period
is given by reinserting $q^*$ into (<ref>):
\[
\tilde{U}=\frac{c_D^2}{2}\frac{(q^*)^2}{\mu-q^*}+q^*
=\mu\left(1+\frac{c_D^2}{2}\frac{\left(1-\sqrt{\frac{h c_D^2}{2p+h c_D^2}}\right)^2}{\sqrt{\frac{h c_D^2}{2p+h c_D^2}}}-\sqrt{\frac{h c_D^2}{2p+h c_D^2}}\right)
\]
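These closed forms are straightforward to evaluate numerically; a minimal sketch (the function name is ours), together with the cost approximation $\tilde{C}$ against which $q^*$ can be checked:

```python
import math

def cop_heuristic_levels(h, p, mu, cv):
    """Evaluate q* and the implied PIL level U~ from the closed forms above."""
    s = math.sqrt(h * cv**2 / (2 * p + h * cv**2))
    q = mu * (1 - s)                                # approximate optimal constant order
    U = mu * (1 + cv**2 / 2 * (1 - s)**2 / s - s)   # = (c_D^2/2) q^2/(mu-q) + q
    return q, U
```

Since $\tilde{C}$ is convex on $(0,\mu)$, the returned $q^*$ is its interior minimiser.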
The PIL policy has superior performance relative to the COP and base-stock policy in all cases. Figure <ref> also suggests that the PIL policy is asymptotically optimal as $\tau\to\infty$ for demand distributions other than the exponential distribution.
§.§ Standard test-bed
[Zipkin, 2008] provides a test-bed to compare the performance of notable policies for the canonical lost sales inventory model. The instances in this test-bed are relatively small, because the performance of all policies, including the optimal policy, is evaluated numerically.
This test-bed has two demand distributions, Poisson and geometric, both with mean 5.
The holding cost is fixed at $h=1$ and the other parameters are varied as a full factorial: $\tau\in\{1,2,3,4\}$, $p\in\{4,9,19,39\}$ leading to a total of 32 instances.
[Zipkin, 2008] reports the performance of notable policies advocated in the literature (e.g. [Morton, 1971, Levi et al., 2008, Huh et al., 2011]).
Here we report on the base-stock policy, constant order policy, myopic policy, capped base-stock policy, and the PIL policy. The myopic policy is the best performing policy in Zipkin's test-bed that has intuitive appeal. The myopic policy places an order in period $t$ to minimize the projected cost in period $t+\tau$ given the current state. The myopic policy is defined formally as:
\[
\myo_t(\xb):=\argmin_{q\geq0} \E[pL_{\tau}+hJ_{\tau} \mid \xb_0=\xb].
\]
The capped base-stock policy is formally defined as
\[
\cbs^{S,r}_t(\xb):=\min\{\base^S_t(\xb),\cop^r_t(\xb)\}.
\]
The capped base-stock policy is also asymptotically optimal both as $p\to\infty$ and as $\tau\to\infty$. This is intuitive, because its parameters can be set to mimic either a base-stock policy (by setting the cap $r$ arbitrarily high) or a constant order policy (by setting the base-stock level $S$ arbitrarily high). We use Matlab's solver “fmincon” to find the `best' parameters for the capped base-stock policy; this is the same solver used by [Xin, 2020]. The parameter of the PIL policy is optimized by the single-dimensional “fminbnd” solver in Matlab.
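For concreteness, the order rules of the base-stock and capped base-stock policies can be sketched as follows (a hedged illustration; encoding the state as on-hand inventory plus a list of pipeline orders is ours):

```python
def base_stock_order(S, inv, pipeline):
    """Order up to S on the inventory position (on-hand + pipeline)."""
    return max(0, S - inv - sum(pipeline))

def capped_base_stock_order(S, r, inv, pipeline):
    """Base-stock order, capped at the constant order quantity r."""
    return min(base_stock_order(S, inv, pipeline), r)
```

Setting $r$ arbitrarily high recovers the base-stock order, and setting $S$ arbitrarily high recovers the constant order $r$, which is what drives the asymptotic optimality argument above.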
Table <ref> reports the performance of these policies.
The performance of the PIL policy is closest to optimal with an average optimality gap of 0.6% whereas the base-stock policy has an average optimality gap of 3.5%, the myopic policy of 2.8% and the capped base-stock policy of 0.7%.
The average performance of the best constant order policy is quite poor, with an average optimality gap of 47.4%. It appears that the PIL policy has attractive asymptotic properties as well as superior empirical performance compared to state-of-the-art heuristics.
Comparison of policies on Zipkin's test-bed
\begin{tabular}{llrrrrrrrr}
\hline
 & & \multicolumn{4}{c}{Poisson demand} & \multicolumn{4}{c}{Geometric demand} \\
 & & \multicolumn{4}{c}{Lead-time $\tau$} & \multicolumn{4}{c}{Lead-time $\tau$} \\
Penalty per lost sale & Policy & 1 & 2 & 3 & 4 & 1 & 2 & 3 & 4 \\
\hline
$p=4$ & Optimal & 4.04 & 4.40 & 4.60 & 4.73 & 9.82 & 10.24 & 10.47 & 10.61 \\
 & PIL & 4.04 & 4.40 & 4.62 & 4.74 & 9.84 & 10.28 & 10.51 & 10.64 \\
 & Myopic & 4.11 & 4.56 & 4.84 & 5.06 & 9.95 & 10.57 & 10.99 & 11.31 \\
 & Base-stock & 4.16 & 4.64 & 4.98 & 5.20 & 10.04 & 10.70 & 11.13 & 11.44 \\
 & Capped base-stock & 4.06 & 4.41 & 4.63 & 4.80 & 9.87 & 10.32 & 10.51 & 10.70 \\
 & COP & \multicolumn{4}{c}{5.27} & \multicolumn{4}{c}{11.00} \\
\hline
$p=9$ & Optimal & 5.44 & 6.09 & 6.53 & 6.84 & 14.51 & 15.50 & 16.14 & 16.58 \\
 & PIL & 5.45 & 6.12 & 6.58 & 6.90 & 14.55 & 15.60 & 16.27 & 16.73 \\
 & Myopic & 5.45 & 6.22 & 6.80 & 7.20 & 14.64 & 15.93 & 16.86 & 17.61 \\
 & Base-stock & 5.55 & 6.32 & 6.86 & 7.27 & 14.73 & 15.99 & 16.87 & 17.54 \\
 & Capped base-stock & 5.48 & 6.12 & 6.62 & 6.91 & 14.58 & 15.63 & 16.27 & 16.73 \\
 & COP & \multicolumn{4}{c}{10.27} & \multicolumn{4}{c}{18.19} \\
\hline
$p=19$ & Optimal & 6.68 & 7.66 & 8.36 & 8.89 & 19.22 & 20.89 & 22.06 & 22.95 \\
 & PIL & 6.68 & 7.68 & 8.42 & 8.95 & 19.28 & 21.03 & 22.73 & 23.85 \\
 & Myopic & 6.69 & 7.77 & 8.56 & 9.18 & 19.37 & 21.30 & 22.79 & 24.02 \\
 & Base-stock & 6.73 & 7.84 & 8.60 & 9.23 & 19.40 & 21.31 & 22.73 & 23.85 \\
 & Capped base-stock & 6.69 & 7.72 & 8.40 & 8.95 & 19.32 & 21.06 & 22.27 & 23.28 \\
 & COP & \multicolumn{4}{c}{15.78} & \multicolumn{4}{c}{28.60} \\
\hline
$p=39$ & Optimal & 7.84 & 9.11 & 10.04 & 10.79 & 23.87 & 26.21 & 27.96 & 29.36 \\
 & PIL & 7.84 & 9.12 & 10.09 & 10.91 & 23.94 & 26.37 & 28.18 & 29.72 \\
 & Myopic & 7.88 & 9.16 & 10.17 & 11.04 & 23.97 & 26.55 & 28.61 & 30.31 \\
 & Base-stock & 7.86 & 9.19 & 10.22 & 11.06 & 24.00 & 26.55 & 28.51 & 30.12 \\
 & Capped base-stock & 7.84 & 9.14 & 10.08 & 10.88 & 24.00 & 26.30 & 28.28 & 29.76 \\
 & COP & \multicolumn{4}{c}{18.21} & \multicolumn{4}{c}{36.73} \\
\hline
\end{tabular}
§.§ Large instance test-bed
We created a large test-bed of instances for which the optimal
policy cannot be tractably computed. However, we believe these
instances give a fair representation of instances that one may
encounter in practice. In all these instances, demand has a
Mixed Erlang distribution with $\E[D]=100$ and
$h=1$. We varied the lead-time $\tau\in\{1,2,3,4,5,6\}$,
the penalty cost parameter $p\in\{1,4,9,19,49,99\}$,
and the coefficient of variation of the one period demand $\sqrt{\Var[D]/(\E[D])^2}\in\{0.15,0.25,0.5,1,1.5,2.0\}$
for a total of 216 instances. (We use the two-moment fitting procedure in [Tijms, 2003] to fit an ME distribution.)
We use the Matlab solver “fminbnd” to optimize the single parameter of the PIL, base-stock, and COP for each of these instances. Matlab's multi-dimensional solver “fmincon” is used to optimize the parameters of the capped base-stock policy. All instances used common random numbers and the length of the simulation was set such that the half-width of a 95% confidence interval was less than 1% of the point estimate for the cost-rate.
The PIL policy has the best performance of these four policies. Therefore we report the gap of the COP, base-stock and capped base-stock policies relative to the best PIL policy across this test-bed aggregated by setting; see Table <ref>. The performance of the PIL policy is again superior to the base-stock and constant order policy by a considerable margin on average (3.68% and 38.69%) whereas the gap with the capped base-stock policy is small (0.99%). The maximum percentage gaps are strikingly large at 16.52%, 198.25% and 8.36% for this test-bed. This is further evidence that the PIL policy is an attractive candidate for application in practical settings.
The optimization of the PIL policy does require some more computational effort, as one needs to evaluate the projected inventory level, $\E[J_{t+\tau-1}\mid \xb_t]$, for each simulated period. Table <ref> shows how many millions of inventory projections can be performed per minute, averaged across instances in the test-bed. These projections were implemented as a single-threaded application in C and run on a laptop with an 11th-generation Intel Core i7 processor with 3.00 GHz clock speed. The number of projections averages 18 million per minute. The slowest instance (which has $\tau=6$, $p=99$, and $cv=0.4$) still averages 0.45 million projections per minute. This confirms that the projection can be done efficiently for practical purposes, as claimed in Section <ref>.
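To illustrate what such a projection computes, a Monte Carlo version is sketched below (our C implementation instead uses exact recursions; the function name and sampler interface are ours):

```python
import random

def projected_inventory_level(I0, pipeline, demand_sampler, n=20000, seed=0):
    """Monte Carlo estimate of E[J_{tau-1} | x_0] for state
    x_0 = (I0, q_1, ..., q_{tau-1}) under the lost-sales dynamics
    I_t = J_{t-1} + q_t,  J_t = max(I_t - D_t, 0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        j = max(I0 - demand_sampler(rng), 0.0)   # J_0
        for q in pipeline:                       # orders q_1, ..., q_{tau-1}
            j = max(j + q - demand_sampler(rng), 0.0)
        total += j
    return total / n
```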
Comparison of policy performance on large test-bed
\begin{tabular}{llrrrrrrrrr}
\hline
 & & \multicolumn{9}{c}{Percentage gap with PIL policy} \\
 & & \multicolumn{3}{c}{base-stock} & \multicolumn{3}{c}{constant order} & \multicolumn{3}{c}{capped base-stock} \\
 & & min & max & avg & min & max & avg & min & max & avg \\
\hline
cv of demand & 0.4 & 0.09 & 16.52 & 4.74 & 0.14 & 198.25 & 48.93 & -0.67 & 6.30 & 1.38 \\
 & 0.6 & 0.08 & 14.11 & 4.40 & 0.08 & 174.88 & 43.98 & -0.74 & 7.55 & 1.61 \\
 & 0.8 & 0.09 & 11.86 & 4.04 & 0.05 & 164.38 & 40.25 & -0.66 & 3.45 & 0.46 \\
 & 1 & 0.08 & 10.12 & 3.50 & 0.01 & 139.48 & 34.88 & -0.49 & 3.72 & 0.44 \\
 & 1.2 & 0.06 & 8.52 & 2.97 & 0.01 & 133.46 & 31.89 & -0.23 & 8.36 & 1.09 \\
 & 1.4 & -0.10 & 6.80 & 2.40 & 0.00 & 139.75 & 32.22 & -0.03 & 5.02 & 0.95 \\
\hline
Lead time ($\tau$) & 1 & -0.10 & 5.25 & 1.20 & 0.62 & 198.25 & 60.40 & -0.03 & 4.04 & 0.87 \\
 & 2 & 0.17 & 9.09 & 2.41 & 0.17 & 159.92 & 46.59 & -0.16 & 7.57 & 1.14 \\
 & 3 & 0.28 & 11.64 & 3.41 & 0.07 & 132.35 & 38.43 & -0.16 & 5.18 & 0.84 \\
 & 4 & 0.39 & 13.88 & 4.28 & 0.03 & 117.04 & 32.68 & -0.26 & 8.36 & 1.20 \\
 & 5 & 0.53 & 15.35 & 5.05 & 0.01 & 98.97 & 28.56 & -0.65 & 6.39 & 0.99 \\
 & 6 & 0.64 & 16.52 & 5.70 & 0.00 & 94.42 & 25.48 & -0.74 & 6.82 & 0.88 \\
\hline
Penalty cost ($p$) & 1 & 1.74 & 16.52 & 7.83 & 0.00 & 4.12 & 0.56 & 0.01 & 8.36 & 2.60 \\
 & 4 & 1.16 & 12.77 & 6.60 & 0.28 & 20.86 & 5.33 & 0.07 & 4.04 & 0.93 \\
 & 9 & 0.29 & 8.09 & 4.22 & 4.07 & 42.60 & 14.91 & -0.26 & 1.37 & 0.38 \\
 & 19 & -0.10 & 4.19 & 2.22 & 14.11 & 74.08 & 32.07 & -0.67 & 3.71 & 0.53 \\
 & 49 & -0.09 & 1.54 & 0.81 & 39.96 & 135.19 & 69.01 & -0.74 & 1.98 & 0.49 \\
 & 99 & -0.03 & 0.72 & 0.36 & 70.34 & 198.25 & 110.27 & -0.22 & 7.55 & 1.00 \\
\hline
\multicolumn{2}{l}{Total} & -0.10 & 16.52 & 3.68 & 0.00 & 198.25 & 38.69 & -0.74 & 8.36 & 0.99 \\
\hline
\end{tabular}
Computational speed of inventory level projection
\begin{tabular}{rrr}
\hline
\multicolumn{3}{c}{Projections per minute $(\times 10^6)$} \\
min & max & average \\
\hline
0.45 & 144.82 & 18.00 \\
\hline
\end{tabular}
§.§ Discussion
A key finding of Sections <ref>-<ref> is that the PIL policy consistently outperforms the COP also for non-exponential demand distributions, while the corresponding rigorous proof of Section <ref> crucially assumes exponentially distributed demand. We briefly hypothesize why this is the case, in an attempt to partially reconcile this difference in scope.
Recall that we established dominance of the PIL policy over the COP for exponential demand by showing that the PIL policy can be obtained by performing a policy improvement step on the COP. For general demand, the $1$-step policy improvement $\pi^{r,+}$ on the constant order policy must satisfy $\pi^{r,+}(\xb_t)\in \argmin_{q\ge 0} \E[\mathcal{H}^r(J_{t+\tau-1}+q)\mid\xb_t]$, where $\mathcal{H}^r(\cdot)$ represents the bias associated with $\cop^r$. For exponentially distributed demand, the bias exists and takes the form of a parabola: as a consequence, $\pi^{r,+}(\xb_t)$ depends only on the expectation of $J_{t+\tau-1}$ and takes the simple form of the PIL policy. For other demand distributions, however, $\mathcal{H}^r(\cdot)$ will be some general convex function (provided it exists). As a consequence, $\pi^{r,+}$ will depend on the precise distribution of $J_{t+\tau-1}$, i.e. it will not be a PIL policy.
However, in line with common approaches in approximate dynamic programming [Powell, 2007], one might consider approximating the bias, and in view of the convexity of $\mathcal{H}^r(\cdot)$, it would be reasonable to fit a second-degree polynomial. Such an approximation would yield an approximate $\pi^{r,+}$ that is a PIL policy; arguably, our numerical results indicate that such approximations tend to work well.
§ CONCLUSION
We introduced the projected inventory level policy for the lost sales inventory system. We showed that this policy is asymptotically optimal in two regimes and has superior numerical performance. The policy may also be applied to the back-order system, in which case it is equivalent to a base-stock policy.
It needs to be explored whether projected inventory level policies can be fruitfully used for other complicated inventory systems where the whole pipeline is relevant for ordering decisions. Such systems include perishable inventory systems <cit.>, dual sourcing systems <cit.>, and systems with independent (overtaking) lead-times <cit.>.
The authors thank Ton de Kok, Ivo Adan and Geert-Jan van Houtum for stimulating discussions. The second author thanks the Netherlands Foundation for Scientific Research for funding this research.
[Agrawal and Jia, 2022]
Agrawal, Shipra, Randy Jia. 2022.
Learning in structured MDPs with convex cost functions: Improved
regret bounds for inventory management.
Operations Research 70(3) 1646–1664.
[Bai et al., 2020]
Bai, Xingyu, Xin Chen, Menglong Li, Alexander Stolyar. 2020.
Asymptotic optimality of semi-open-loop policies in Markov decision
processes with large lead times.
Available at SSRN.
[Bijvank et al., 2014]
Bijvank, M., W.T. Huh, G. Janakiraman, W. Kang. 2014.
Robustness of order-up-to policies in lost-sales inventory systems.
Operations Research 62(5) 1040–1047.
[Bijvank and Johansen, 2012]
Bijvank, M., S.G. Johansen. 2012.
Periodic review lost-sales inventory models with compound Poisson
demand and constant lead times of any length.
European Journal of Operational Research 220(1)
[Bijvank and Vis, 2011]
Bijvank, M., I.F.A. Vis. 2011.
Lost-sales inventory theory: A review.
European Journal of Operational Research 215(1) 1–13.
[Bu et al., 2022]
Bu, Jinzhi, Xiting Gong, Xiuli Chao. 2022.
Asymptotic optimality of base-stock policies for perishable inventory
Management Science articles in advance.
[Bu et al., 2020]
Bu, Jinzhi, Xiting Gong, Dacheng Yao. 2020.
Constant-order policies for lost-sales inventory models with random
supply functions: Asymptotics and heuristic.
Operations Research 68(4) 1063–1073.
[Chen et al., 2014]
Chen, Wei, Milind Dawande, Ganesh Janakiraman. 2014.
Fixed-dimensional stochastic dynamic programs: An approximation
scheme and an inventory application.
Operations Research 62(1) 81–103.
[Esary et al., 1967]
Esary, J.D., F. Prochan, D.W. Walkup. 1967.
Association of random variables with applications.
The Annals of Mathematical Statistics 38(5) 1466–1474.
[Goldberg et al., 2016]
Goldberg, D.A., D.A. Katz-Rogozhnikov, Y. Lu, M. Sharma, M.S. Squillante. 2016.
Asymptotic optimality of constant-order policies for lost sales
inventory models with large lead times.
Mathematics of Operations Research 41(3) 898–913.
[Goldberg et al., 2021]
Goldberg, D.A., M.I. Reiman, Q. Wang. 2021.
A survey of recent progress in the asymptotic analysis of inventory
Production and Operations Management 30(6) 1718–1750.
[Haijema et al., 2008]
Haijema, René, Jan van der Wal, et al. 2008.
An MDP decomposition approach for traffic control at isolated
signalized intersections.
Probability in the Engineering and Informational Sciences
22(4) 587–602.
[Huh et al., 2011]
Huh, Woonghee Tim, Ganesh Janakiraman, Mahesh Nagarajan. 2011.
Average cost single-stage inventory models: An analysis using a
vanishing discount approach.
Operations Research 59(1) 143–155.
[Huh et al., 2009]
Huh, W.T., G. Janakiraman, J.A. Muckstadt, P. Rusmevichientong. 2009.
An adaptive algorithm for finding the optimal base-stock policy in
lost sales inventory systems with censored demand.
Mathematics of Operations Research 34(2) 397–416.
[Huh et al., 2009]
Huh, W.T., G. Janakiraman, J.A. Muckstadt, P. Rusmevichientong. 2009.
Asymptotic optimality of order-up-to policies in lost sales inventory
Management Science 55(3) 404–420.
[Janakiraman et al., 2007]
Janakiraman, G., S. Seshadri, G.J. Shanthikumar. 2007.
A comparison of the optimal costs of two canonical inventory systems.
Operations Research 55(5) 866–875.
[Johansen and Thorstenson, 2008]
Johansen, Søren Glud, Anders Thorstenson. 2008.
Pure and restricted base-stock policies for the lost-sales inventory
system with periodic review and constant lead times.
15th International Symposium on Inventories.
[Karlin and Scarf, 1958]
Karlin, S., H. Scarf. 1958.
Inventory models of the Arrow-Harris-Marschak type with time lag.
K. Arrow, S. Karlin, H. Scarf, eds., Studies in the Mathematical
Theory of Inventory and Production. Stanford university press, Stanford,
[Levi et al., 2008]
Levi, R., G. Janakiraman, M. Nagarajan. 2008.
A 2-approximation algorithm for stochastic inventory models with lost sales.
Mathematics of Operations Research 33(2) 351–374.
[Morton, 1969]
Morton, K. 1969.
Bounds on the solution of the lagged optimal inventory equation with
no demand backlogging and proportional costs.
SIAM Review 11(4) 572–596.
[Morton, 1971]
Morton, K. 1971.
The near-myopic nature of the lagged-proportional-cost inventory
problem with lost sales.
Operations Research 19(7) 1708–1716.
[Powell, 2007]
Powell, Warren B. 2007.
Approximate Dynamic Programming: Solving the curses of
dimensionality, vol. 703.
John Wiley & Sons.
[Song, 1998]
Song, Jing-Sheng. 1998.
On the order fill rate in a multi-item, base-stock inventory system.
Operations Research 46(6) 831–845.
[Stolyar and Wang, 2021]
Stolyar, A.L., Q. Wang. 2021.
Exploiting random lead times for significant inventory cost savings.
Operations Research 70(4) 2496–2516.
[Sun et al., 2014]
Sun, Peng, Kai Wang, Paul Zipkin. 2014.
Quadratic approximation of cost functions in lost sales and
perishable inventory control problems.
Fuqua School of Business, Duke University, Durham, NC .
[Tijms, 2003]
Tijms, H.C. 2003.
A First Course in Stochastic Models.
John Wiley & Sons.
[van Donselaar et al., 1996]
van Donselaar, K., T. de Kok, W. Rutten. 1996.
Two replenishment strategies for the lost sales inventory model: A comparison.
International Journal of Production Economics 46-47
[van Houtum, 2006]
van Houtum, G.J. 2006.
Multiechelon production/inventory systems: optimal policies,
heuristics and algorithms.
Tutorials in Operations Research INFORMS 163–199.
[van Jaarsveld, 2020]
van Jaarsveld, Willem. 2020.
Deep controlled learning of dynamic policies with an application to
lost-sales inventory control.
arXiv preprint arXiv:2011.15122 .
[Xin, 2020]
Xin, Linwei. 2020.
Understanding the performance of capped base-stock policies in
lost-sales inventory models.
Operations Research 69(1) 61–70.
[Xin and Goldberg, 2016]
Xin, Linwei, David A Goldberg. 2016.
Optimality gap of constant-order policies decays exponentially in the
lead time for lost sales models.
Operations Research 64(6) 1556–1565.
[Xin and Goldberg, 2018]
Xin, Linwei, David A Goldberg. 2018.
Asymptotic optimality of tailored base-surge policies in
dual-sourcing inventory systems.
Management Science 64(1) 437–452.
[Zhang et al., 2020]
Zhang, Huanan, Xiuli Chao, Cong Shi. 2020.
Closing the gap: A learning algorithm for lost-sales inventory
systems with lead times.
Management Science 66(5) 1962–1980.
[Zipkin, 2008]
Zipkin, P. 2008a.
Old and new methods for lost-sales inventory systems.
Operations Research 56(5) 1256–1263.
[Zipkin, 2008]
Zipkin, P. 2008b.
On the structure of lost-sales inventory models.
Operations Research 56(4) 937–944.
Technical proofs for the online companion
§ PROOF OF LEMMA <REF>
Suppose actions follow a PIL policy $\pil^U_t(\xb_t)$ for given $U\ge 0$. Suppose the state $\xb_0$ of period 0 satisfies $\E[J_{\tau-1}|\xb_0]\leq U$. Denote by $q_\tau\ge 0 $ the non-negative order placed in period 0 that attains $\E[I_{\tau}|\xb_0]=\E[J_{\tau-1}+q_\tau|\xb_0] =U$.
Conditional on $D_0$ being $d_0$, the projected inventory level as seen in period 1 before the order $q_{\tau+1}$ is placed is given by $\E[J_{\tau} \mid D_0=d_0]$. It is sufficient to show that
\begin{equation}
\label{eq:attainstep0}
\forall d_0\ge 0:\E[J_{\tau} \mid \xb_0,D_0=d_0]\leq U,
\end{equation}
since this implies that for period 1 it is always possible to place a non-negative order $q_{\tau+1}$ to attain the projected inventory level $U$. (Periods $2,3,\ldots$ then follow by induction.) Since $J_{\tau}$ is decreasing in $D_0$ for all $D_1,\ldots,D_\tau$ we find $\E[J_{\tau} \mid D_0=d_0]\le \E[J_{\tau} \mid D_0=0]$, and hence to show (<ref>) it suffices to demonstrate that $\E[J_{\tau} \mid D_0=0]\leq \E[I_{\tau}]= U$, which will be the subject of the remainder of this proof.
Define the random variable $G(y)$ such that $\P(G(y)\leq g)=\P(I_{\tau}\leq g \mid \xb_t, J_0 = y)$, or equivalently $G(y)= ((\ldots((y+q_1-D_1)^+ + q_2-D_2)^+\ldots)^+ + q_{\tau-1}-D_{\tau-1})^++q_\tau$. Observe that we have
$\E[I_\tau|\xb_0]=\E[G((I_0-D)^+)]$ and $\E[J_\tau\mid D_0 = 0]=\E[(G(I_0)-D)^+]$. Note that $0\leq dG(y)/dy \leq 1$ and $G(y)\geq 0$ so that
\begin{equation}
\label{eq:attainstep1}
(G(I_0)-D)^+ \leq G((I_0-D)^+).
\end{equation}
It follows from (<ref>) that $\E[J_\tau \mid D_0=0]=\E[(G(I_0)-D)^+]\leq \E[G((I_0-D)^+)]=\E[I_\tau]$.
§ PROOF OF LEMMA <REF>
First note that the inventory at the end of a period under $\cop^r$
satisfies the Lindley recursion $J_{t+1}=(J_t+r-D_t)^+$, so that
$\lim_{t\to\infty}\E[J_t]$ can be determined with the
Pollaczek-Khinchine formula for the mean waiting time in an
$M/D/1$ queue. Therefore we can express $g^r$ in closed form as
\begin{equation}
\label{eq:gr}
g^r = p(\mu-r) + h \frac{r^2}{2(\mu-r)}.
\end{equation}
We can now directly verify that inserting (<ref>) and (<ref>)
into the right hand side of (<ref>) again
equals (<ref>) by using integration by parts and tedious but otherwise straightforward algebra:
\begin{eqnarray}
& &\E_{D}\big[ h(x-D)^+ + p (D-x)^+ + \mathcal{H}^r((x-D)^++r)\big]- g^r \notag\\
& = & p\E\left[(D-x)^+\right] + h \E\left[(x-D)^+\right] + \int_0^x \frac{\mathcal{H}^r(x-y+r)}{\mu}e^{-y/\mu} \diff y + \mathcal{H}^r(r) e^{-x/\mu} - g^r \notag \\
& = & p\mu e^{-x/\mu} + h(x-\mu+\mu e^{-x/\mu}) + e^{-x/\mu}x\left(\frac{h(3r^2+3rx+x^2)}{6\mu(\mu-r)} +
\frac{pr(2r+x)}{2\mu(\mu-r)} - \frac{p(2r+x)}{2(\mu-r)}\right) \\
& & + e^{-x/\mu} \left(\frac{hr^2}{2(\mu-r)}-pr^2\right)- p(\mu-r) -h \frac{r^2}{2(\mu-r)} \notag\\
& = & \frac{h}{2(\mu-r)}x^2-px = \mathcal{H}^r(x).
\end{eqnarray}
Clearly $\mathcal{H}^r(x)=\frac{h}{2(\mu-r)}x^2-px$ also satisfies $\mathcal{H}^r(0)=0$.
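The closed form (<ref>) for $g^r$ is also easy to check by simulating the Lindley recursion directly; a sketch (simulation length, seed, and names are ours):

```python
import random

def cop_cost_rate(r, h, p, mu, periods=300000, seed=1):
    """Average per-period cost of the COP under exponential(mu) demand,
    using the Lindley recursion J_{t+1} = (J_t + r - D_t)^+."""
    rng = random.Random(seed)
    j, cost = 0.0, 0.0
    for _ in range(periods):
        d = rng.expovariate(1.0 / mu)
        i = j + r                                   # inventory after the order arrives
        cost += h * max(i - d, 0.0) + p * max(d - i, 0.0)
        j = max(i - d, 0.0)                         # end-of-period inventory
    return cost / periods
```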
§ PROOF OF LEMMA <REF>
We prove the following statement by induction on $t_2$, starting at $t_2=t_1$:
\begin{align}
\mathcal{H}^r(I_{t_1})= \E_{D_{t_1},\ldots,D_{t_2}}\left[ c[t_1,t_2](\cop^r) +\mathcal{H}^r(I_{t_2+1}) \middle| I_{t_1}\right]-(t_2+1-t_1)g^r.
\label{eq:inductionHypothesis}
\end{align}
The base case $t_2=t_1$ holds by (<ref>), the definition of $c_{t}$, and $I_{t_1+1}=(I_{t_1}-D_{t_1})+r$. Now for the inductive step, assume that the statement holds for some $t_2\geq t_1$; we will show it holds also for $t_2+1$. Use (<ref>) to conclude that
\begin{align}
\E_{D_{t_1},\ldots,D_{t_2}}\left[ \mathcal{H}^r(I_{t_2+1}) \mid I_{t_1}\right] &= \E_{D_{t_1},\ldots,D_{t_2}}\left[\E_{D_{t_2+1}}[ c_{t_2 +1} + \mathcal{H}^r((I_{t_2+1}-D_{t_2+1})^+ +r) -g^r | I_{t_2 +1}] \middle| I_{t_1}\right] \notag\\
&= \E_{D_{t_1},\ldots,D_{t_2}}\left[\E_{D_{t_2+1}}\left[ c_{t_2+1} + \mathcal{H}^r(I_{t_2+2}) \mid I_{t_2+1}\right] \middle| I_{t_1}\right]-g^r\notag\\
\label{eq:onestepbias}
&= \E_{D_{t_1},\ldots,D_{t_2},D_{t_2+1}}\left[ c_{t_2+1} + \mathcal{H}^r(I_{t_2+2}) \mid I_{t_1} \right]- g^r.
\end{align}
Now substitution of (<ref>) back into the induction hypothesis (<ref>) yields:
\begin{align}
\mathcal{H}^r(I_{t_1})&=\E_{D_{t_1},\ldots,D_{t_2}}\left[ c[t_1,t_2](\cop^r) +\mathcal{H}^r(I_{t_2+1}) \mid I_{t_1}\right]-(t_2+1-t_1)g^r \notag\\
&= \E_{D_{t_1},\ldots,D_{t_2+1}}\left[ c[t_1,t_2+1](\cop^r) +\mathcal{H}^r(I_{t_2+2}) \mid I_{t_1}\right]-(t_2+2-t_1)g^r. \label{eq:martingale}
\end{align}
By induction, this shows that (<ref>) holds for all $t_2\geq t_1$.
§ PROOF OF LEMMA <REF>
We have
\begin{align}
\label{eq:idlem1}
\E[(X+Y)^+] = \int_{x+y\geq 0} (x+y) dF(x,y)= \int_{x=-\infty}^\infty \int_{y=-x}^\infty (x+y) dF(x,y).
\end{align}
For $x+y\geq 0$ we can write
\begin{equation}
\label{eq:idlem2}
x+y = \int_{z=-y}^x dz.
\end{equation}
Substitution of (<ref>) into (<ref>) yields
\begin{align}
\E[(X+Y)^+] &= \int_{x=-\infty}^\infty \int_{y=-x}^\infty \int_{z=-y}^x dz dF(x,y)
= \int_{-y\leq z \leq x} dF(x,y)dz \notag\\
&= \int_{z=-\infty}^\infty \int_{x=z}^\infty \int_{y=-z}^\infty dF(x,y) dz
= \int_{z=-\infty}^\infty \P(X\geq z, Y\geq -z) dz. \mbox{\Halmos}\notag
\end{align}
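The identity is easy to verify numerically for a concrete choice of $(X,Y)$; here we take independent uniforms on $[-1,1]$ (our choice), for which both sides equal $1/3$:

```python
import random

def lhs_mc(n=200000, seed=0):
    """Monte Carlo estimate of E[(X+Y)^+] for independent X, Y ~ U[-1,1]."""
    rng = random.Random(seed)
    return sum(max(rng.uniform(-1, 1) + rng.uniform(-1, 1), 0.0)
               for _ in range(n)) / n

def rhs_quad(steps=2000):
    """Midpoint rule for the integral of P(X >= z) P(Y >= -z) over [-1, 1]."""
    dz = 2.0 / steps
    total = 0.0
    for k in range(steps):
        z = -1.0 + (k + 0.5) * dz
        total += (1 - z) / 2 * (1 + z) / 2 * dz   # P(X>=z) = (1-z)/2, P(Y>=-z) = (1+z)/2
    return total
```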
§ PROOF OF LEMMA <REF>
By induction in $T$. Let $0<U<U'$. By induction hypothesis assume that for all $t'<T$ and all $D_1,\ldots, D_{T-1}$ we have $\P\big[q[1,t'+\tau](\pil^U)\le q[1,t'+\tau](\pil^{U'}) |D_1,\ldots, D_{T-1}\big] =1$. By (<ref>), this implies
\begin{align}
\P\big[L[1,T+\tau-1](\pil^U) - L[1,T+\tau-1](\pil^{U'})\ge 0\big|D_1,\ldots, D_{T-1},D_{T},\ldots,D_{T+\tau-1}\big]=1\label{eq:lostsalesmonotonous}
\end{align}
which implies Claim 2 for $T-1$. We proceed to prove Claim 1 for $T$. Note that
\begin{align}\label{eq:monotony_p1}
J_{T+\tau-1}(\pil^U) = I_0+q[1,T+\tau-1](\pil^U) -D[0,T+\tau-1]+L[1,T+\tau-1](\pil^U).
\end{align}
Lemma <ref> and (<ref>) imply $q_{T+\tau}(\pil^{U})|D_1,\ldots,D_{T-1}=U-\E(J_{T+\tau-1}(\pil^{U})|D_1,\ldots,D_{T-1})$, and analogously for $U'$.
Hence, we find
\begin{align}
&q_{T+\tau}(\pil^{U'})-q_{T+\tau}(\pil^{U})-U'+U|D_1,\ldots,D_{T-1} \label{eq:q1}\\
&=\E_{D_{T},\ldots,D_{T+\tau-1}}\big[J_{T+\tau-1}(\pil^{U})-J_{T+\tau-1}(\pil^{U'})|D_1,\ldots,D_{T-1}\big] \notag\\
&= q[1,T+\tau-1](\pil^{U})-q[1,T+\tau-1](\pil^{U'})|D_1,\ldots,D_{T-1}\notag\\
&+\E_{D_{T+1},\ldots,D_{T+\tau-1}}\big[L[1,T+\tau-1](\pil^{U})-L[1,T+\tau-1](\pil^{U'})|D_1,\ldots,D_{T-1} \big] \label{eq:q2}
\end{align}
where the last equality follows from (<ref>). Rearranging terms in (<ref>) and (<ref>), we obtain:
\begin{align*}&q[1,T+\tau](\pil^{U'})-q[1,T+\tau](\pil^{U})|D_1,\ldots,D_{T-1}\\&= \E_{D_{T+1},\ldots,D_{T+\tau-1}}\big[L[1,T+\tau-1](\pil^{U})-L[1,T+\tau-1](\pil^{U'})|D_1,\ldots,D_{T-1} \big] + (U'-U).
\end{align*}
The first term on the right-hand side is non-negative by the induction hypothesis (<ref>), and the second term is positive by assumption. This proves Claim 1 for $T$. Claim 3 follows from Claim 2.
§ PROOF OF THEOREM <REF>
Since $c_t=h(I_t-D_t)^+ + p(D_t-I_t)^+$, by Jensen's inequality we find (for any policy) that $\E[c_t]\ge h\alpha_D$, implying $C(P^{U^*})\ge h\alpha_D$.
Now, distinguish two cases: $U^*\in [0, 2\mu]$ and $U^*\in (2\mu,\bar{U}]$ ($\bar{U}:=(1+p/h)\mu$). Note that $n_\epsilon \epsilon \alpha_D\ge 2\mu - \epsilon \alpha_D$, thus for Case 1 there exists a $U'\in \mathcal{U}_{\epsilon}$ such that $0 \le U'-U^* \le \epsilon \alpha_D$. By Lemma <ref> we have that $\limsup_{T\rightarrow\infty} \E\left[\frac{1}{T-\tau+1}L[\tau,T](\pil^U) \right]$ is non-increasing in $U$, and thus from (<ref>), we find $C(P^{U'})-C(P^{U^*})\le \epsilon h \alpha_D\le \epsilon C(P^{U^*})$, which proves the result.
Now, for $U^*\in (2\mu,\bar{U}]$, note that $\mu+\mu(1+\epsilon)^{n'_\epsilon}\ge (1+p/h)\mu =\bar{U}$, hence there exists a $U'\in \mathcal{U}_\epsilon$ such that $ (U'-\mu)/(1+\epsilon) \le U^*-\mu \le U'-\mu$. Thus
\[(1+\epsilon)C(P^{U^*}) - C(P^{U'})\ge h(1+\epsilon)(U^*-\mu)- h(U'-\mu)\ge h\left(U^*-\mu-\frac{U'-\mu}{1+\epsilon}\right)\ge 0\]
Here, the first inequality follows from part 3 of Lemma <ref>, the second inequality follows by dividing by $1+\epsilon$, and the final inequality follows from the choice of $U'$. This proves the case $U^*\in (2\mu,\bar{U}]$.
§ EXPLICIT RECURSIONS FOR INVENTORY PROJECTION
All probabilities below are conditional on the state at time 0, $\xb_0$. We omit this conditioning to keep the derivations readable.
From equations (<ref>)-(<ref>), we obtain the following recursive expressions for any $t\in\{0,\ldots,\tau\}$:
\begin{align}
\P\left(\tilde{J}_{t}=x\right) &= \sum_{k=0}^{\infty} \P\left((\tilde{I}_t-K_t)^+=x\mid K_t = k\right) \P(K_t=k) = \sum_{k=0}^{\infty} \P\left(\tilde{I}_t=x+k\right)\theta_k, \quad x>0, \\
\P\left(\tilde{L}_t=x\right) &= \sum_{k=0}^{\infty} \P\left((K_t-\tilde{I}_t)^+=x\mid K_t = k\right) \P(K_t=k) = \sum_{k=x}^{\infty} \P\left(\tilde{I}_t=k-x\right) \theta_k, \quad x>0,\\
\P\left(\tilde{I}_{t}=x\right) &= \P\left(\tilde{J}_{t-1}+Q_{t}=x\right) = \sum_{y=0}^\infty \P\left(\tilde{J}_{t-1}=x-y\right)\P(Q_{t}=y),
\end{align}
These recursions can be applied from $t=0$ to $t=\tau$, since $\tilde{I}_0$ and $Q_t$, $t\in\{1,\ldots,\tau-1\}$, have Poisson distributions with means $\lambda I_0$ and $\lambda q_t$ conditional on $\xb_0=(I_0,q_1,\ldots,q_{\tau-1})$ when demand has a ME distribution with scale $\lambda$:
\begin{align}
\P\left(\tilde{I}_0 = x \right) = e^{-\lambda I_0}\frac{(\lambda I_0)^x}{x!},\qquad
\P\left( Q_t = x \right) = e^{-\lambda q_t}\frac{(\lambda q_t)^x}{x!}.
\end{align}
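In practice these recursions are evaluated on truncated probability arrays; a sketch (the truncation size and names are ours, and the $(\cdot)^+$ step is written out explicitly, including the atom at $0$):

```python
from math import exp, factorial

def poisson_pmf(mean, size):
    return [exp(-mean) * mean ** x / factorial(x) for x in range(size)]

def convolve(a, b, size):
    c = [0.0] * size
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < size:
                c[i + j] += ai * bj
    return c

def step_down(I, theta, tail, size):
    """Distribution of (I~ - K)^+ given P(I~ = x) and theta_k = P(K = k)."""
    J = [sum(I[x + k] * theta[k] for k in range(size - x)) for x in range(size)]
    J[0] = sum(I[i] * tail[i] for i in range(size))   # atom at 0: P(I~ <= K)
    return J

def project(I0, orders, lam, theta, size=120):
    theta = theta + [0.0] * (size - len(theta))
    tail = [sum(theta[i:]) for i in range(size)]      # P(K >= i)
    I = poisson_pmf(lam * I0, size)
    J = step_down(I, theta, tail, size)               # distribution of J~_0
    for q in orders:                                  # q_1, ..., q_{tau-1}
        I = convolve(J, poisson_pmf(lam * q, size), size)
        J = step_down(I, theta, tail, size)
    return J
```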
# Momentum2 Teacher: Momentum Teacher with Momentum Statistics for Self-Supervised Learning
Zeming Li Songtao Liu Jian Sun
MEGVII Technology
{lizeming, liusongtao<EMAIL_ADDRESS>
###### Abstract
In this paper, we present a novel approach, Momentum2 Teacher, for student-teacher based self-supervised learning. The approach performs a momentum update on both network weights and batch normalization (BN) statistics. The teacher’s weights are a momentum update of the student’s, and the teacher’s BN statistics are a momentum update of those in history. The Momentum2 Teacher is simple and efficient. It can achieve state-of-the-art results (74.5%) under the ImageNet linear evaluation protocol using a small batch size (e.g., 128), without requiring large-batch training on special hardware like TPUs or inefficient cross-GPU operations (e.g., shuffling BN, synced BN). Our implementation and pre-trained
models will be given on
GitHub111https://github.com/zengarden/momentum2-teacher.
## 1 Introduction
The student-teacher framework is one of the key ingredients in current state-of-the-art self-supervised visual representation learning (e.g., MoCo [21, 9], BYOL [19]). The framework learns two networks with the same architecture,
where the student network is trained with gradient back-propagation and the
teacher network is a momentum version of the student network.
In the student-teacher framework, as illustrated in Figure 1 (a)(b), one image
is augmented into two different views for a student and a teacher. The student
is trained to predict the representation of the teacher, and the teacher is
updated with a “momentum update” (exponential moving average) of the student.
The success of MoCo and BYOL has proven the effectiveness of the student-
teacher framework. With the momentum update, the teacher obtains more stable
parameters in a temporal ensembling manner [45].
Besides network parameters, stable statistics (i.e. batch normalization) is
another critical factor for training modern deep networks, especially for
self-supervised learning. MoCo [21] adopts a “shuffling batch normalization
(shuffling BN)” on the teacher, which shuffles the sample order in the current
batch before distributing it among multiple GPUs and rolls back after
encoding. This operation prevents information leaking through intra-batch
communication and still allows training to benefit from BN [21]. However,
shuffling BN still operates on each GPU's samples independently with small
sizes (typically 32 images per GPU), which limits the benefit of larger
batches with more stable statistics.
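The mechanics of shuffling BN can be illustrated with a single-process sketch (our illustration, not MoCo's released code; the real implementation shuffles samples across GPUs rather than array chunks):

```python
import numpy as np

def shuffling_bn(x, num_gpus, eps=1e-5, seed=0):
    """Single-process sketch of shuffling BN: shuffle the batch order,
    normalize each GPU-sized chunk with its own statistics, then restore
    the original order. Assumes len(x) is divisible by num_gpus."""
    perm = np.random.RandomState(seed).permutation(len(x))
    chunks = np.split(x[perm], num_gpus)          # distribute among "GPUs"
    normed = [(c - c.mean(0)) / np.sqrt(c.var(0) + eps) for c in chunks]
    return np.concatenate(normed)[np.argsort(perm)]  # roll back
```

Because each chunk sees only its own (shuffled) samples, information cannot leak within a GPU's batch, but the per-chunk statistics remain small-batch statistics.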
By leveraging specially designed TPUs, BYOL [19] uses a larger batch (e.g., 4096) on
both student and teacher to attain more stable statistics, which is equivalent
to performing a “synchronized batch normalization (synced BN)” on multiple GPUs.
The leading performance of BYOL indicates that stable statistics are
important. When training with a small batch size (e.g., 128 for BN), its performance
degrades significantly [19]. The demand for large batches is unfriendly in
many scenarios: access to TPUs is not common in the research community, and
synced BN is very inefficient across multiple GPUs on multiple machines.
Figure 1: Pipeline of student-teacher based self-supervised approaches. The
positive pairs ($v,v^{\prime}$), the different augment views of the same
image, are fed into student and teacher networks respectively to learn the
representations ($z,z^{\prime}$). In MoCo [21], the student is trained with
(small-batch) BN and the teacher uses shuffling BN.
($v^{\prime}_{0},v^{\prime}_{1},v^{\prime}_{2},...$) are negative pairs from
other images. In BYOL [19], both student and teacher are trained with large
batch and BN statistics with synchronized batch-normalization (Synced BN
[40]). In our Momentum2 Teacher, the student uses small-batch statistics to
keep efficiency, and the teacher is equipped with a simple and efficient
“Momentum BN” to obtain more stable statistics.
In this paper, we propose _Momentum 2 Teacher_, which applies a momentum
update to both network parameters and BN statistics to obtain a more stable teacher.
As shown in Figure 1 (c), we replace all the batch normalization layers of the
teacher with a simple and efficient operation, “Momentum BN”, to obtain more
stable statistics. Momentum BN conducts momentum update on the historical
batch statistics (_i.e._ , mean and standard deviation) to normalize features,
instead of just using current mini-batch calculations. This operation enables
small batch size training (128) to generate more stable statistics while
keeping efficiency. Furthermore, as the teacher does not need back-
propagation, Momentum BN does not suffer from the gradient issue of
non-differentiable historical statistics. Our method achieves state-of-the-
art results without resorting to large-batch training on TPUs or slow synced BN
operations.
The main contributions of this work are:
1. We propose a novel approach, named _Momentum 2 Teacher_, which keeps the
efficiency and hardware-friendliness of small-batch training while achieving
performance competitive with large-batch training.
2. The core of our method, Momentum BN, which performs a momentum update on the
batch statistics, can benefit all student-teacher based self-supervised
methods. It improves both MoCo and BYOL.
3. We obtain the state-of-the-art result, 74.5% top-1 accuracy, on ImageNet under
the linear evaluation protocol.
## 2 Related Works
##### Self-supervised learning:
Self-supervised approaches have largely reduced the performance gap with
supervised models and even achieved superior results on downstream vision
tasks. Contrastive learning measures the (dis)similarities of sample pairs
in a latent space, such as [21, 9, 8, 47, 37, 24, 3, 25, 46, 33]. Pretext
tasks are also a heavily researched topic. Some methods generate pseudo labels
by, e.g., clustering features [5, 6, 7], augmenting different views of a
single image (“exemplar”) [17], relative and ordered patches [35, 14, 15], or
consecutiveness in videos [51, 38]. Others propose to recover the input from
corruption, e.g., inpainting [39], denoising [50], and colorization [60, 61]
with auto-encoders and GANs.
##### Student-teacher framework:
Mean-teacher [45] introduces a student network and a teacher network (a
moving-averaged version of the student) that learn from each other. MoCo [21, 9]
combines the contrastive mechanism and the student-teacher framework [45], with
a memory bank to maintain a large number of negative samples. BYOL [19] further
removes the negative samples, using an additional predictor to avoid collapse. It
relies on a large batch of positive samples, typically 4096, for achieving
the best performance.
##### Normalization:
BN [27] has proven widely effective and efficient in most vision
tasks. It normalizes the internal feature maps using channel-wise statistics
along the batch dimension. In practice, BN relies on a sufficient batch size, which
is not easily satisfied, especially when large-batch training is required in
self-supervised learning. Many techniques have been proposed to maintain or
approximate large-batch statistics. Synced BN [40] increases the batch size by
computing the mean and variance across multiple devices (GPUs), but
introduces significant overhead. Batch Renormalization [26] and EvalNorm [43]
correct batch statistics during training and inference, but generally perform
worse than Synced BN.
Moving averaged batch normalization [56] and online normalization [11] adopt a
momentum update of statistics similar to our Momentum BN during the
forward pass. However, they require additional corrections to backpropagation
for valid SGD optimization, which costs extra computation and memory. As there
is no backward pass within the teacher network, our Momentum BN does not need
this gradient correction, making it more efficient.
Another family of normalization methods is batch-independent. Instead of
normalizing across samples, layer normalization [2] normalizes across features,
which makes it independent of batch size. Group normalization [52] and instance
normalization [49] further extend this by partitioning features into groups.
Normalization applied to network weights has also been proposed, such as
weight normalization [42] and normalization propagation [1]. A very recent
study [41] shows that BYOL even works using batch-independent normalization,
but the result is still slightly worse than the Synced BN counterpart [40],
which is consistent with observations under supervised training.
## 3 Momentum2 Teacher
We first give our motivation by studying the impact of BN statistics in a
series of controlled experiments. Then, we present our method to increase the
stability of BN statistics.
### 3.1 Importance of Stable Statistics
To analyze the role of BN statistics under the student-teacher framework, we
design a set of exploratory experiments on the STL10 [12] dataset, based on the
BYOL baseline [19].
##### Experiment setup:
STL10 contains 96$\times$96 pixel RGB images belonging to ten classes. For
each class, it provides 500 images for training and 800 images for testing. We
use ResNet18 as the basic network for fast ablation. More training details
including image augmentation, can be found in Sec. 3.4.
Table 1: Comparison on STL10 with different BN statistics in the student and teacher. BN and Synced BN indicate that the BN statistics are calculated within a single GPU and across GPUs, respectively. All experiments are conducted with 400 epochs. We also report the training speed (seconds/iteration).
Student | Teacher | Top1 | Sec./Iter
---|---|---|---
Synced BN | Synced BN | 88.06 | 1.1s
BN | Synced BN | 87.80 | 0.39s
Synced BN | BN | 87.12 | 0.49s
BN | BN | 84.16 | 0.24s
BN | Momentum BN | 88.18 | 0.25s
Synced BN | Momentum BN | 88.25 | 0.50s
##### Observations.
The BYOL baseline applies “Synced BN” (performing BN across all GPUs) on both
student and teacher. Since the teacher does not backpropagate gradients, the
role of BN in the teacher relies more on the statistics collected in the
forward pass. To examine whether Synced BN matters, we replace Synced BN with
“BN” (performing BN on a single GPU, independently) in the student, the
teacher, or both. As shown in Table 1, we have four observations.
1. Synced BN is critical. Without Synced BN, the accuracy drops significantly,
from 88.06 to 84.16. A similar result was reported in [19] when using a small
batch size. This verifies the importance of stable statistics in BN.
2. Synced BN slows down training. Because Synced BN performs cross-GPU
operations, the communication overhead is significant. For example, the
Synced BN/Synced BN combination is more than four times slower than BN/BN.
3. It is not essential to apply Synced BN on both student and teacher. We can
get decent results (87.80 or 87.12) with a stable teacher or a stable student.
This enables us to decouple the design of BN in the student and teacher.
4. A stable teacher is essential. In the third row of the table, although we
did not directly apply Synced BN in the teacher, the teacher still benefits
from stable statistics by copying BN parameters in a moving-averaged way.
Directly applying Synced BN on the teacher is better than on the student
(87.80 vs. 87.12).
Based on the above observations, we conclude that the stable BN statistics
brought by large batch training is crucial for student-teacher self-supervised
learning. Although stable statistics can be obtained from large batches, this
hurts efficiency. Next, we introduce a simple new method for higher accuracy
and better efficiency.
### 3.2 Momentum BN
Synced BN simply uses a large batch to obtain stable statistics. We note that in
the student-teacher framework, we do not propagate gradients in the teacher. If
we can leverage this characteristic, we may obtain stable statistics using
small batches.
We follow the notation of batch normalization. Consider a mini-batch
$\mathcal{B}$ of size $m$ with $m$ feature values $x_{1..m}$. The BN
operation first calculates two statistics, the mean $\mu$ and variance $\sigma$:
$\displaystyle\mu$ $\displaystyle=\frac{1}{m}\sum_{i=1}^{m}x_{i},$ (1)
$\displaystyle\sigma$
$\displaystyle=\frac{1}{m}\sum_{i=1}^{m}(x_{i}-\mu)^{2}.$
Then, BN normalizes and applies an affine transformation to get the final output $y_{1..m}$:
$\displaystyle\widehat{x_{i}}=\frac{x_{i}-{\mu}}{\sqrt{\sigma+\epsilon}},$ (2)
$\displaystyle y_{i}=\gamma\widehat{x}_{i}+\beta,$
where $\gamma$ and $\beta$ are two learnable parameters.
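Equations (1)-(2) amount to a few lines of NumPy (a minimal illustration of standard BN, not the paper's implementation; `eps` is the usual numerical-stability constant):

```python
import numpy as np

def bn_forward(x, gamma, beta, eps=1e-5):
    """Batch-normalize a (m, d) feature batch as in Eqs. (1)-(2)."""
    mu = x.mean(axis=0)                      # Eq. (1): batch mean
    sigma = x.var(axis=0)                    # Eq. (1): batch variance
    x_hat = (x - mu) / np.sqrt(sigma + eps)  # Eq. (2): normalize
    return gamma * x_hat + beta, mu, sigma   # affine transform + stats
```

With `gamma=1` and `beta=0`, the output has (approximately) zero mean and unit variance per channel.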
In the student-teacher framework, the two statistics, mean $\mu$ and variance
$\sigma$, are calculated from samples (encoded by the teacher) in the current
batch, while $\gamma$ and $\beta$ are a momentum version of the student's.
Since we do not have to estimate $\gamma$ and $\beta$, and motivated by the
view that the teacher is a kind of temporal ensemble of the student, we perform
a momentum update of the BN statistics:
$\displaystyle\mu_{t}$ $\displaystyle=\alpha\mu_{t}^{\mathcal{B}}+(1-\alpha)\mu_{t-1},$ (3)
$\displaystyle\sigma_{t}$
$\displaystyle=\alpha\sigma_{t}^{\mathcal{B}}+(1-\alpha)\sigma_{t-1},$
where $\mu_{t}^{\mathcal{B}},\sigma_{t}^{\mathcal{B}}$ are the current-batch
statistics and $\alpha$ is a momentum coefficient weighting them ($\alpha=1$
reduces to plain BN; $\alpha\to 0$ relies purely on history, consistent with
the ablation in Sec. 4.4). We simply use an exponential moving
average of historical BN statistics to make the teacher more stable. We call
the BN with the above momentum update “ _Momentum BN_ ”.
Using momentum statistics is not new in deep learning. For example, running
means and running variances are provided in deep learning frameworks like
TensorFlow for validation and inference after training. Here, momentum
statistics are used during training in the student-teacher framework.
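A minimal sketch of Momentum BN as a stateful module (hypothetical code, not the authors' release): it blends the current-batch statistics with the stored history, with $\alpha$ weighting the current batch so that $\alpha=1$ reduces to plain BN, and normalizes with the blended values. No backward pass is needed because the layer lives in the teacher:

```python
import numpy as np

class MomentumBN:
    """Sketch of Momentum BN: normalize with a momentum blend of the
    current-batch and historical statistics (alpha weights the current
    batch; alpha=1 reduces to plain BN)."""
    def __init__(self, dim, alpha=0.5, eps=1e-5):
        self.alpha, self.eps = alpha, eps
        self.mu = np.zeros(dim)      # running mean (history)
        self.sigma = np.ones(dim)    # running variance (history)

    def forward(self, x, gamma, beta):
        mu_b, sigma_b = x.mean(axis=0), x.var(axis=0)
        # momentum update of the statistics
        self.mu = self.alpha * mu_b + (1 - self.alpha) * self.mu
        self.sigma = self.alpha * sigma_b + (1 - self.alpha) * self.sigma
        x_hat = (x - self.mu) / np.sqrt(self.sigma + self.eps)
        return gamma * x_hat + beta
```

In a real framework the update would be applied under `no_grad`, since the history is treated as a constant.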
##### Lazy update.
Information leaking is one of the main issues in self-supervised learning. For
two samples $v,v^{\prime}$ of the same image, BYOL _sequentially_ calculates
symmetrized losses as follows:
$\displaystyle\mathcal{L}_{1}$
$\displaystyle=||\text{student}(v),\text{teacher}(v^{\prime})||,$ (4)
$\displaystyle\mathcal{L}_{2}$
$\displaystyle=||\text{student}(v^{\prime}),\text{teacher}(v)||.$
After calculating the loss $\mathcal{L}_{1}$, the statistics of $v^{\prime}$
are fed into the teacher model. If we then applied Momentum BN
straightforwardly when calculating the loss $\mathcal{L}_{2}$, we would include
the statistics of $v^{\prime}$ (as historical statistics) in the teacher. This
would make the learning task more trivial and hurt performance.
We address this issue by lazily updating the Momentum BN statistics. We first
perform Momentum BN for $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ independently,
using the statistics of the previous batch $\\{\mu_{t-1},\sigma_{t-1}\\}$:
$\displaystyle\\{\mu_{t}^{1},\sigma_{t}^{1}\\}$
$\displaystyle=\alpha\\{\mu_{t}^{v^{\prime}},\sigma_{t}^{v^{\prime}}\\}+(1-\alpha)\\{\mu_{t-1},\sigma_{t-1}\\},$
(5) $\displaystyle\\{\mu_{t}^{2},\sigma_{t}^{2}\\}$
$\displaystyle=\alpha\\{\mu_{t}^{v},\sigma_{t}^{v}\\}+(1-\alpha)\\{\mu_{t-1},\sigma_{t-1}\\},$
Then, we conduct BN transformation via $\\{\mu_{t}^{1},\sigma_{t}^{1}\\}$ and
$\\{\mu_{t}^{2},\sigma_{t}^{2}\\}$. Last, we update BN statistics:
$\displaystyle\\{\mu_{t},\sigma_{t}\\}$
$\displaystyle=\alpha\\{\frac{\mu_{t}^{v^{\prime}}+\mu_{t}^{v}}{2},\frac{\sigma_{t}^{v^{\prime}}+\sigma_{t}^{v}}{2}\\}+(1-\alpha)\\{\mu_{t-1},\sigma_{t-1}\\}.$
(6)
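The lazy update can be sketched as a pure function (hypothetical helper; the names `stats_v`, `stats_vp`, and `hist` are ours): both views are normalized against the same previous history, and the history is then updated once from the averaged view statistics:

```python
def lazy_momentum_bn_stats(stats_v, stats_vp, hist, alpha):
    """Sketch of the lazy update: blend each view's stats against the
    same previous history, then update the history once from the
    averaged view statistics. stats_v / stats_vp are (mu, sigma) of
    views v and v'; hist is (mu_{t-1}, sigma_{t-1}); alpha weights the
    current statistics."""
    def blend(cur):
        return tuple(alpha * c + (1 - alpha) * h for c, h in zip(cur, hist))
    stats_1 = blend(stats_vp)  # used for loss L1 (teacher encodes v')
    stats_2 = blend(stats_v)   # used for loss L2 (teacher encodes v)
    avg = tuple((a + b) / 2 for a, b in zip(stats_v, stats_vp))
    return stats_1, stats_2, blend(avg)  # single history update
```

Because neither per-view blend ever sees the other view's fresh statistics, the leakage between the two symmetric losses is avoided.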
##### Results.
We replace Synced BN with Momentum BN in the teacher. As shown in Table 1,
Momentum BN achieves better performance (88.18 vs. 87.80) without requiring
cross-machine communication, and it is as fast as small-batch BN.
We also apply Momentum BN on the student. The improvement is marginal,
verifying that stable statistics in the teacher are what is essential.
### 3.3 Momentum2 Teacher
From Table 1, we can see that a student with small-batch (32) statistics
already performs close to one using large-batch (2048) Synced BN, which
demonstrates that the student can use a much smaller batch size for BN
statistics than the teacher. Therefore, in this paper, we recommend the
combination of “_student with small batch_ + _teacher with (small-batch)
momentum BN_” as our main method. This method has the best trade-off between
accuracy and efficiency.
Because the teacher in our method uses the momentum mechanism twice, once for
the weight update from the student and once for calculating BN statistics, we
call our method “_Momentum 2 Teacher_”.
### 3.4 Implementation Details
Figure 2: Momentum2 Teacher based on BYOL. The teacher maintains stable BN
statistics to improve performance, and the student uses plain BN to keep
efficiency.
_Baseline:_ Figure 2 shows our method, which uses BYOL [19] as the baseline.
BYOL learns representations by maximizing the similarity between two different
augmented views $v$ and $v^{\prime}$ from the same image $X$. $v$ is passed
into the student which consists of a basic encoder $f_{\theta}$, a MLP
$g_{\theta}$, and a predictor $q_{\theta}$; $v^{\prime}$ is fed into the
teacher, which has only a basic encoder $f_{\xi}$ and an MLP $g_{\xi}$. The
parameters of $f_{\xi}$ and $g_{\xi}$ are momentum updates of $f_{\theta}$ and
$g_{\theta}$:
$\displaystyle f_{\xi}\leftarrow(1-m)f_{\xi}+mf_{\theta},$ (7) $\displaystyle
g_{\xi}\leftarrow(1-m)g_{\xi}+mg_{\theta},$
where $m$ is a momentum coefficient.
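A sketch of the momentum (EMA) parameter update, assuming the convention that $m$ weights the student's contribution (consistent with $m$ decaying to zero so the teacher freezes at the end of training):

```python
def momentum_update(teacher_params, student_params, m):
    """Eq. (7): teacher <- (1 - m) * teacher + m * student, applied
    elementwise to matching parameter tensors (plain floats here)."""
    return [(1 - m) * t + m * s
            for t, s in zip(teacher_params, student_params)]
```

In a real framework this runs under `no_grad` after each optimizer step, iterating over the matching parameter tensors of the two networks.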
_Image augmentations:_ We use the same set of image augmentations as in BYOL
and SimCLR [8]. Specifically, a crop of fixed size is taken from a randomly
resized and horizontally flipped image, followed by a commonly used color
distortion. Then, Gaussian blur and optional gray-scaling are applied to
these patches. Finally, solarization is applied with probability 0 for the
student and 0.2 for the teacher.
_Architecture:_ On STL10, we use ResNet-18 [23] as the basic encoder
($f_{\theta},f_{\xi}$), which produces a 512-dimensional feature
($y,y^{\prime}$) by average pooling. The dimension is set to 512 for the first
linear layer in the MLP and 128 for the second.
_Training:_ We train STL10 with 64 2080TI GPUs in order to simulate the cross-
machine (8 machines) communication in Synced BN. SGD with momentum of 0.9 is
adopted, without LARS [57]. All experiments use 32 image crops per GPU unless
otherwise stated. At the end of training, we take the model from the teacher.
The learning rate is decayed with a cosine strategy [32]. The base learning
rate is set to 0.1; it is warmed up for 10 epochs with a factor of 0.001 and
scaled linearly [18] with batch size. Weight decay is set to 1e-4. The
momentum coefficient $m$ starts from $m_{base}=0.032$ and is decreased to zero
at the end of training. In our Momentum BN, the momentum coefficient $\alpha$
starts from $\alpha_{base}=1$ and is decreased to 0 with a cosine schedule:
$\displaystyle\begin{split}m\leftarrow m_{base}\times(cos(\pi k/K)+1)/2,\\\
\alpha\leftarrow\alpha_{base}\times(cos(\pi k/K)+1)/2,\end{split}$ (8)
where $k$ is the current iteration and $K$ is the total number of iterations.
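Both schedules in Eq. (8) are instances of the same cosine decay, which can be written as a small helper (our sketch):

```python
import math

def cosine_decay(base, k, K):
    """Eq. (8): decay a coefficient from `base` at k=0 to 0 at k=K."""
    return base * (math.cos(math.pi * k / K) + 1) / 2
```

For example, with $m_{base}=0.032$ the coefficient reaches half its base value at the training midpoint and 0 at the final iteration.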
_Evaluation:_ We follow the linear classification protocol. The features are
frozen, and a new linear classifier (with BN) is attached and trained on the
given classes. On STL10, we train for 80 epochs with a learning rate starting
from 0.5 (for batch size 256) and decayed with a cosine schedule. We train on 8
2080 GPUs using SGD with a momentum of 0.9, without weight decay.
## 4 Experiments
In this section, we perform training on ImageNet and evaluate the model with
the linear classification evaluation and some downstream tasks on COCO. Our
setup is as follows:
The ImageNet [13] ILSVRC-2012 dataset has about 1.28 million images belonging
to 1000 different classes. The class labels are ignored in self-supervised
learning. The image augmentation is the same as for STL10 except that the
image crop is 224$\times$224.
We use ResNet-50 [23] as the encoder. After average pooling, the feature
($y,y^{\prime}$) has 2048 dimensions. The MLP uses a 4098-dimensional layer and
a 256-dimensional layer to generate the output ($z,z^{\prime}$).
We use 128 2080 GPUs with the LARS [57] optimizer; biases and BN weights are
excluded from LARS adaptation and weight decay. The base learning rate is set
to 0.3, and weight decay to 1.5e-6. The parameter momentum coefficient $m$
starts from $m_{base}=0.01$ and is gradually decreased to zero. Fine-tuning
uses a learning rate of 0.2.
### 4.1 Effectiveness of Momentum BN
We first validate the effectiveness of Momentum BN on both the BYOL and MoCo
frameworks. Table 2 shows that BYOL without Synced BN decreases from 72.5 to
61.5 when small batches are used. Our Momentum2 Teacher significantly boosts
the performance to 72.9. It is worth noting that Momentum2 Teacher trains as
fast as BYOL w/o Synced BN (0.6s vs. 5.25s), demonstrating its efficiency.
Table 2: Comparison of accuracies between BYOL [9] and Momentum2 Teacher. Following BYOL, we train with 300 epochs. “BN” indicates the number of samples used to calculate BN.
Model | Top1 | Top5 | BN | Sec./Iter
---|---|---|---|---
BYOL w/ Synced BN | 72.5 | 87.6 | 4096 | 5.25s
BYOL w/o Synced BN | 61.5 | 84.6 | 32 | 0.6s
Momentum2 Teacher | 72.0 | 90.6 | 32 | 0.6s
Momentum2 Teacher | 72.9 | 90.6 | 128 | -
Then, we replace shuffling BN in MoCoV2 [9] with our Momentum BN. Following
the practice of MoCoV2, we fix the momentum coefficient $\alpha$ in Momentum
BN to 0.064. For fast experiments, we train on 32 2080TI GPUs (training on 4
machines with 32 GPUs slightly reduces the MoCo baseline, trained on 8 GPUs,
from 67.5 to 66.8). Table 3 shows that more stable statistics also benefit
MoCo, which validates the generality of Momentum BN in the student-teacher
framework. Moreover, Momentum BN is nearly twice as fast as shuffling BN.
Table 3: Accuracies of MoCoV2 [9] with shuffling BN and Momentum BN on ImageNet. Following MoCo, we train with 200 epochs.
Model | Top1 | Top5 | Sec./Iter
---|---|---|---
MoCo w/ Shuffling BN | 66.8 | 87.6 | 0.65s
MoCo w/ Momentum BN | 67.8 | 88.0 | 0.35s
### 4.2 Small Batch-Size
Next, we compare BYOL with Momentum2 Teacher at different batch sizes under
the same training schedule of 300 epochs. We keep using 128 GPUs and change
the number of samples per GPU, so the total batch size changes accordingly.
To achieve stable performance, we extend the linear scaling rule [18] by
introducing a new equivalence rule for the parameter momentum coefficient:
when the mini-batch size is multiplied by $k$, multiply the learning rate by
$k$ and multiply the basic momentum coefficient $m_{base}$ of the parameters
by $k$ simultaneously.
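The extended scaling rule can be sketched as (hypothetical helper; `base_batch` is the reference batch size the base hyperparameters were tuned for):

```python
def scale_hyperparams(base_lr, base_m, base_batch, batch):
    """Extended linear scaling rule: when the mini-batch size is
    multiplied by k, multiply both the learning rate and the parameter
    momentum coefficient m_base by k."""
    k = batch / base_batch
    return base_lr * k, base_m * k
```

For example, halving the batch size halves both the learning rate and $m_{base}$.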
Figure 3: Comparison of using different batch sizes to calculate batch norm in the student network. We plot against the number of samples used to calculate batch normalization. BYOL results are taken from its paper. Momentum2 Teacher significantly outperforms BYOL when the batch size is reduced. All results are pre-trained with 300 epochs.
Table 4: Comparison among different BN batch sizes (i.e., the number of samples per GPU) on ImageNet with 300 epochs. BYOL results are taken from its paper, conducted by accumulating _N_-step gradients in parallel.
Batch size | 16 | 32 | 48 | 64 | 128 | 256 | 512
---|---|---|---|---|---|---|---
BYOL | - | - | - | 59.7 | 69.6 | 71.8 | 72.2
Ours | 68.3 | 72.0 | 72.5 | 72.6 | 72.9 | - | -
From Table 4 and Fig. 3, we can see that the number of samples used to
calculate BN statistics in the student can be very small in our Momentum2
Teacher. Using a batch size of 32 still gives a decent result, on par with
BYOL at a batch size of 512. In contrast, the performance of BYOL rapidly
deteriorates when the batch size is reduced.
### 4.3 Training on a Single Machine
Thanks to small-batch training, we also validate Momentum2 Teacher on a
single machine with 8 GPUs. We use a learning rate of $lr\times$BatchSize/256
(linear scaling rule [18]) with a base $lr=0.05$, and then adopt a cosine
decay strategy. The weight decay is 0.0001 and the SGD momentum is 0.9.
Large-batch optimizers such as LARS are not used.
Results are shown in Tab. 5. Compared with large-batch training, our method
achieves even better performance on a single machine with small batch sizes.
We also compare Momentum2 Teacher with the recent work SimSiam [10].
Momentum2 Teacher consistently outperforms its counterparts at different
settings. It is worth noting that our superior results are attained without
the synced BN operation, which makes training more efficient.
Table 5: Top-1 accuracies of small-batch training on ImageNet linear classification. Our results are obtained on a single machine with 8 V100 GPUs, using a batch size of 1024.
Method | batch size | 100e | 200e | 300e
---|---|---|---|---
Ours | 256 | 70.4 | - | -
Ours | 1024 | 70.7 | 72.7 | 73.8
SimSiam [10] | 256 | 68.1 | 70.0 | -
BYOL [19] | 4096 | - | - | 72.5
### 4.4 Parameter Sensitivity
Momentum2 Teacher introduces an additional coefficient $\alpha$ for updating
Momentum BN in the teacher. In the previous experiments, $\alpha$ was
_dynamically_ decayed from 1 to 0: at the beginning of training, BN mainly
relies on the statistics of the current mini-batch for normalization; as
training stabilizes, it relies more on the statistics accumulated by the
momentum over historical iterations.
Table 6 compares dynamically adjusted $\alpha$ and fixed $\alpha$. All fixed
values of $\alpha$ give inferior results. With dynamic adjustment, $\alpha$
eventually drops to 0, meaning the statistics are effectively accumulated over
the entire dataset, yielding more robust results. Table 7 compares different
choices of $\alpha_{base}$ (in Eqn. 8). A large $\alpha_{base}$ consistently
improves performance, further validating that the early stage of training
should rely more on current-batch samples.
Table 6: Varying $\alpha$ on ImageNet. All results are pre-trained with 300 epochs.
$\alpha$ | 1 $\to$ 0 | 1 | 0.5 | 0.2 | 0.1 | 0.01
---|---|---|---|---|---|---
Top1 | 72.0 | 61.6 | 71.2 | 69.1 | 69.8 | 68.9
Top5 | 90.6 | 84.6 | 90.0 | 89.1 | 89.2 | 88.4
Table 7: Varying $\alpha_{base}$ on ImageNet. All results are pre-trained with 300 epochs.
$\alpha_{base}$ | 1 | 0.75 | 0.5 | 0.2 | 0.1 | 0.05
---|---|---|---|---|---|---
Top1 | 72.0 | 71.6 | 71.2 | 69.7 | 69.5 | 69.6
Top5 | 90.6 | 90.4 | 90.1 | 89.0 | 89.0 | 88.8
Table 8: Comparisons under the linear classification protocol on ImageNet. All methods use a ResNet-50 extractor. We report top-1 and top-5 accuracies in % on the test set.
Method | Epoch | Top-1 | Top-5
---|---|---|---
Jigsaw [36] | 90 | 45.7 | -
InstDis [54] | 200 | 56.5 | -
BigBiGAN [16] | - | 56.6 | -
Local Agg. [62] | - | 60.2 | -
CPC v2 [24] | 200 | 63.8 | 85.3
CMC [46] | - | 66.2 | 87.0
SimCLR [8] | 200 | 66.6 | -
MOCOv2 [9] | 200 | 67.5 | -
PCL-v2 [28] | 200 | 67.6 | -
InfoMin Aug [47] | 200 | 70.1 | 89.4
BYOL [19] | 300 | 72.5 | 90.8
Ours | 300 | 72.9 | 91.2
PIRL [33] | 800 | 63.6 | -
SimCLR [8] | 1000 | 69.3 | 89.0
MOCOv2 [9] | 800 | 71.1 | -
InfoMin Aug [47] | 800 | 73.0 | 91.1
BYOL [41] | 1000 | 74.3 | 91.6
Ours | 1000 | 74.5 | 91.7
Table 9: Fine-tuning of proposal based object detection and instance
segmentation on COCO-train2017. We validate on FPN and Cascade R-CNN extended
with mask branch. We use ResNet-50-FPN extractor, and report the bounding box
AP and mask AP on val2017. 200e and 300e indicate the pre-training epochs.
pre-train | APbb | AP${}^{bb}_{50}$ | AP${}^{bb}_{75}$ | APmk | AP${}^{mk}_{50}$ | AP${}^{mk}_{75}$
---|---|---|---|---|---|---
random init. | 31.0 | 49.5 | 33.2 | 28.5 | 46.8 | 30.4
supervised | 38.9 | 59.6 | 42.7 | 35.4 | 56.5 | 38.1
MoCo 200e | 38.5 | 58.9 | 42.0 | 35.1 | 55.9 | 37.7
BYOL 300e | 39.6 | 60.9 | 43.3 | 36.7 | 58.0 | 39.3
Ours 300e | 39.7 | 61.2 | 43.2 | 36.8 | 58.0 | 39.6
(a) Mask R-CNN, 1$\times$ schedule
pre-train | APbb | AP${}^{bb}_{50}$ | AP${}^{bb}_{75}$ | APmk | AP${}^{mk}_{50}$ | AP${}^{mk}_{75}$
---|---|---|---|---|---|---
random init. | 36.7 | 56.7 | 40.0 | 33.7 | 53.8 | 35.9
supervised | 40.6 | 61.3 | 44.4 | 36.8 | 58.1 | 39.5
MoCo 200e | 40.8 | 61.6 | 44.7 | 36.9 | 58.4 | 39.7
BYOL 300e | 41.6 | 62.9 | 45.8 | 38.2 | 59.9 | 41.1
Ours 300e | 41.8 | 62.8 | 45.9 | 38.4 | 60.1 | 41.2
(b) Mask R-CNN, 2$\times$ schedule
pre-train | APbb | AP${}^{bb}_{50}$ | AP${}^{bb}_{75}$ | APmk | AP${}^{mk}_{50}$ | AP${}^{mk}_{75}$
---|---|---|---|---|---|---
random init. | 35.3 | 51.0 | 38.3 | 31.0 | 48.8 | 33.3
supervised | 42.1 | 59.8 | 45.9 | 36.4 | 57.1 | 39.3
MoCo 200e | 42.4 | 59.8 | 46.1 | 37.0 | 57.2 | 40.1
BYOL 300e | 43.6 | 61.5 | 47.5 | 38.3 | 59.0 | 41.6
Ours 300e | 43.6 | 61.6 | 47.3 | 38.2 | 58.8 | 41.2
(c) Cascade R-CNN, 1$\times$ schedule
pre-train | APbb | AP${}^{bb}_{50}$ | AP${}^{bb}_{75}$ | APmk | AP${}^{mk}_{50}$ | AP${}^{mk}_{75}$
---|---|---|---|---|---|---
random init. | 40.7 | 57.7 | 44.2 | 35.9 | 55.5 | 39.1
supervised | 43.5 | 61.3 | 47.4 | 37.7 | 58.7 | 40.8
MoCo 200e | 44.3 | 61.9 | 48.1 | 38.7 | 59.5 | 42.0
BYOL 300e | 45.0 | 62.9 | 48.8 | 39.3 | 60.5 | 42.6
Ours 300e | 45.1 | 63.0 | 49.0 | 39.4 | 60.7 | 42.8
(d) Cascade R-CNN, 2$\times$ schedule
### 4.5 Comparison with State-of-the-art
In this section, we first compare performances of Momentum2 Teacher’s
representation with recent state-of-the-art self-supervised approaches on
ImageNet. As the main merit of self-supervised learning is to learn
_transferable_ features, we then measure the transfer capabilities on the COCO
[31] and LVIS [20] datasets, which include annotations for both object
detection and segmentation. To achieve better results, the pre-training uses
128 samples per GPU.
#### 4.5.1 Linear Evaluation on ImageNet
As shown in Table 8, our method (ResNet-50 based) obtains 72.9% and 74.5% top-1
accuracy under 300 and 1000 epochs, respectively, outperforming the previous
state of the art. It is worth noting that we only use 128 samples for batch
normalization in the student, while BYOL requires 4096. Our superior result is
also achieved without any additional architectural requirements or stronger
augmentation, keeping the method simple and effective in practice.
The recent SwAV [7] method achieves a higher accuracy of 75.3% by using 6
additional crops within each training iteration, at the cost of more forward
computation.
Table 10: Fine-tuning of dense object detection on COCO-train2017. We validate
on RetinaNet and FCOS with ResNet-50-FPN and report the AP of bounding boxes.
pre-train | APbb | AP${}^{bb}_{50}$ | AP${}^{bb}_{75}$
---|---|---|---
random init. | 24.4 | 38.8 | 25.8
supervised | 37.3 | 56.6 | 39.8
MoCo 200e | 37.2 | 56.4 | 40.0
BYOL 300e | 36.4 | 55.9 | 39.2
Ours 300e | 35.9 | 55.3 | 38.4
(a) RetinaNet, 1$\times$ schedule
pre-train | APbb | AP${}^{bb}_{50}$ | AP${}^{bb}_{75}$
---|---|---|---
random init. | 31.2 | 48.1 | 33.3
supervised | 38.7 | 58.2 | 41.4
MoCo 200e | 39.0 | 58.3 | 41.6
BYOL 300e | 39.5 | 59.3 | 42.6
Ours 300e | 39.4 | 59.3 | 42.2
(b) RetinaNet, 2$\times$ schedule
pre-train | APbb | AP${}^{bb}_{50}$ | AP${}^{bb}_{75}$
---|---|---|---
random init. | 25.0 | 39.2 | 26.4
supervised | 38.7 | 57.6 | 41.9
MoCo 200e | 39.0 | 57.5 | 42.0
BYOL 300e | 37.6 | 56.0 | 41.0
Ours 300e | 37.7 | 56.8 | 40.8
(c) FCOS, 1$\times$ schedule
pre-train | APbb | AP${}^{bb}_{50}$ | AP${}^{bb}_{75}$
---|---|---|---
random init. | 31.8 | 48.2 | 33.6
supervised | 38.5 | 57.0 | 41.3
MoCo 200e | 38.8 | 57.1 | 41.7
BYOL 300e | 39.5 | 58.3 | 42.5
Ours 300e | 39.9 | 59.0 | 43.2
(d) FCOS, 2$\times$ schedule
#### 4.5.2 Transferring Features
For the COCO dataset, we fine-tune on _train2017_ (about 118k images) and test
on _val2017_ (5k). The typical FPN [29] backbone is adopted, together with
Synced BN. The short edge of the training image is sampled in [640, 800] at
32-pixel intervals, and fixed at 800 during testing. We follow the typical
1$\times$ or 2$\times$ training strategy provided by the Detectron2 [53]
repository. For the LVIS 0.5 dataset, we use 56K images over 1230 categories
for training, and 5k images for validation.
##### Proposal based Detector and Segmentor
We adopt FPN and Cascade R-CNN extended with a mask branch [29, 22, 4] to
validate transfer capabilities on COCO object detection and segmentation.
Following common practice, we stack four $3\times 3$ convolutions in the mask
branch and use two linear layers in the detection branch.
Table 9 shows the bounding box AP and mask AP on COCO _val2017_. For both two-
stage and multi-stage instance segmentors, our method significantly outperforms
the supervised counterpart, achieving results comparable to BYOL with a more
efficient implementation. Meanwhile, we surpass MoCo, another student-teacher
based self-supervised method that uses a small batch of 32, by a large margin.
##### Dense Object Detector
Next, we compare the representations by fine-tuning dense object detectors
(a.k.a. single-stage object detectors). We adopt two typical dense detectors,
namely the anchor-based RetinaNet [30] and the anchor-free FCOS [48]. We
follow the baselines provided by Detectron2. In particular, FCOS uses group
normalization [52] for the convolutions in its detection head, while RetinaNet
does not. Moreover, RetinaNet adopts the standard multi-scale training setting
while FCOS trains with a single scale of 800 pixels.
Table 10 shows the bounding box AP of the detectors initialized with different
pre-trained weights. Our method shows great potential when training with a
long schedule (2$\times$). We achieve performance comparable to BYOL on
RetinaNet while yielding much better results on FCOS, outperforming the
supervised counterpart. We observe that both BYOL and our method get inferior
results when training with the 1$\times$ schedule; we leave the exploration of
this for future work.
##### LVIS Instance Segmentation
We further transfer to instance segmentation on LVIS, which contains about
1000 long-tail distributed categories. Our method significantly outperforms
supervised pre-training by 1.7% mAP in Tab. 11, validating the generalization
of our approach. We hypothesize that self-supervised pre-training benefits
tasks with fewer annotated samples more.
Table 11: Fine-tuning of Mask R-CNN on LVIS 0.5. All experiments use ResNet-50-FPN and are trained with the 2$\times$ schedule.
pre-train | APmk | AP${}^{mk}_{50}$ | AP${}^{mk}_{75}$
---|---|---|---
random init. | 22.5 | 34.8 | 23.8
supervised | 24.4 | 37.8 | 25.8
MoCo 200e | 24.1 | 37.4 | 25.5
Ours 300e | 26.1 (+1.7) | 39.8 (+2.0) | 28.1 (+2.3)
Table 12: Accuracy of models trained with few image labels. We report top-1 and top-5 accuracies in % on the test set, on the 1% and 10% label splits.
Method | epoch | Top-1 (1%) | Top-1 (10%) | Top-5 (1%) | Top-5 (10%)
---|---|---|---|---|---
Supervised[58] | - | 25.4 | 56.4 | 48.4 | 80.4
_Semi-supervised:_ | | | | |
Pseudolabels [59] | - | - | - | 51.6 | 82.4
VAT [34] | - | - | - | 47.0 | 83.4
UDA [55] | - | - | 68.8 | - | 88.5
FixMatch [44] | - | - | 71.5 | - | 89.1
_Self-supervised:_ | | | | |
InstDisc [54] | 200 | - | - | 39.2 | 77.4
PIRL [33] | 800 | - | - | 57.2 | 83.8
MoCov2 [9] | 800 | 42.3 | 63.8 | 70.1 | 86.2
PCL [28] | 200 | - | - | 75.3 | 85.6
SimCLR [8] | 1000 | 48.3 | 65.6 | 75.5 | 87.8
SWAV(B=256)[7] | 200 | 51.3 | 67.8 | 76.6 | 88.6
SWAV(B=4096)[7] | 200 | 52.6 | 68.5 | 77.7 | 89.2
SWAV(B=4096)[7] | 800 | 53.9 | 70.2 | 78.5 | 89.9
BYOL [19] | 1000 | 53.2 | 68.8 | 78.4 | 89.0
Ours | 300 | 57.7 | 70.2 | 80.8 | 89.3
Ours | 1000 | 62.3 | 72.2 | 84.1 | 90.1
### 4.6 Semi-Supervised Training on ImageNet
Last, we evaluate the classification performance obtained when fine-tuning
our representation using a small labeled subset of the ImageNet train set. We
use the same fixed 1% and 10% splits provided by SimCLR. The training mostly
follows the common semi-supervised protocol in [8, 19, 24]: we train on 8 V100
GPUs with a total batch size of 1024. The learning rate is set to 0.08 and
decayed with a cosine schedule. We fine-tune for 20/15 epochs for the 1%/10%
splits, respectively. After training, the BN statistics are recomputed for the
subsequent testing. We report the top-1 and top-5 accuracies on the test set
in Tab. 12. Our method consistently outperforms the previous approaches.
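For illustration, the cosine-decayed fine-tuning schedule described above can be sketched as follows. This is a minimal reconstruction under our own assumptions (the helper name and the absence of warmup are ours); it uses the base learning rate 0.08 and the 20 epochs of the 1% split from the text.

```python
import math

def cosine_lr(base_lr: float, epoch: int, total_epochs: int) -> float:
    """Cosine-decayed learning rate over a fine-tuning run (no warmup)."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))

# Illustrative values from the text: base lr 0.08, 20 epochs for the 1% split.
base_lr, total_epochs = 0.08, 20
schedule = [cosine_lr(base_lr, e, total_epochs) for e in range(total_epochs)]
```

The schedule starts at the base learning rate and decays monotonically toward zero by the final epoch.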
## 5 Conclusion
We have presented Momentum2 Teacher for self-supervised learning, introducing
a simple and efficient Momentum BN operation on the teacher. With a more
stable teacher, we are able to use fast small-batch training to obtain leading
results, which is friendlier to the majority of researchers.
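For readers unfamiliar with the idea, the Momentum BN operation can be sketched as normalizing with an exponential moving average (EMA) of batch statistics rather than the current mini-batch statistics, which stabilizes the teacher under small batches. This is a minimal 1-D sketch under our own assumptions, not the paper's implementation.

```python
class MomentumBN:
    """Sketch (assumption, not the paper's exact code): normalize with an
    exponential moving average of batch statistics instead of the current
    mini-batch statistics."""
    def __init__(self, momentum=0.9, eps=1e-5):
        self.m, self.eps = momentum, eps
        self.ema_mean, self.ema_var = 0.0, 1.0

    def __call__(self, batch):
        mean = sum(batch) / len(batch)
        var = sum((x - mean) ** 2 for x in batch) / len(batch)
        # momentum update of the statistics actually used for normalization
        self.ema_mean = self.m * self.ema_mean + (1 - self.m) * mean
        self.ema_var = self.m * self.ema_var + (1 - self.m) * var
        denom = (self.ema_var + self.eps) ** 0.5
        return [(x - self.ema_mean) / denom for x in batch]

bn = MomentumBN(momentum=0.9)
out = bn([1.0, 2.0, 3.0])
for _ in range(300):   # feed identical batches; the EMA converges to batch stats
    bn([1.0, 2.0, 3.0])
```

Because the normalization uses slowly varying EMA statistics, a noisy small batch perturbs the teacher's output far less than per-batch normalization would.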
## References
* [1] Devansh Arpit, Yingbo Zhou, Bhargava U Kota, and Venu Govindaraju. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. arXiv preprint arXiv:1603.01431, 2016.
* [2] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
* [3] Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, pages 15535–15545, 2019.
* [4] Zhaowei Cai and Nuno Vasconcelos. Cascade r-cnn: Delving into high quality object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6154–6162, 2018.
* [5] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), pages 132–149, 2018.
* [6] Mathilde Caron, Piotr Bojanowski, Julien Mairal, and Armand Joulin. Unsupervised pre-training of image features on non-curated data. In Proceedings of the IEEE International Conference on Computer Vision, pages 2959–2968, 2019.
* [7] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33, 2020.
* [8] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020.
* [9] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020.
* [10] Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. arXiv preprint arXiv:2011.10566, 2020.
* [11] Vitaliy Chiley, Ilya Sharapov, Atli Kosson, Urs Koster, Ryan Reece, Sofia Samaniego de la Fuente, Vishal Subbiah, and Michael James. Online normalization for training neural networks. Advances in Neural Information Processing Systems, 32:8433–8443, 2019.
* [12] Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 215–223, 2011.
* [13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
* [14] Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE international conference on computer vision, pages 1422–1430, 2015.
* [15] Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2051–2060, 2017.
* [16] Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. In Advances in Neural Information Processing Systems, pages 10542–10552, 2019.
* [17] Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in neural information processing systems, pages 766–774, 2014.
* [18] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
* [19] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733, 2020.
* [20] Agrim Gupta, Piotr Dollar, and Ross Girshick. Lvis: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5356–5364, 2019.
* [21] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729–9738, 2020.
* [22] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2980–2988. IEEE, 2017.
* [23] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [24] Olivier J Hénaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272, 2019.
* [25] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018.
* [26] Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. In Advances in neural information processing systems, pages 1945–1953, 2017.
* [27] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
* [28] Junnan Li, Pan Zhou, Caiming Xiong, Richard Socher, and Steven CH Hoi. Prototypical contrastive learning of unsupervised representations. arXiv preprint arXiv:2005.04966, 2020.
* [29] Tsung-Yi Lin, Piotr Dollár, Ross B Girshick, Kaiming He, Bharath Hariharan, and Serge J Belongie. Feature pyramid networks for object detection. In CVPR, volume 1, page 4, 2017.
* [30] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988, 2017.
* [31] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
* [32] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
* [33] Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6707–6717, 2020.
* [34] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979–1993, 2018.
* [35] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pages 69–84. Springer, 2016.
* [36] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pages 69–84. Springer, 2016.
* [37] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
* [38] Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, and Bharath Hariharan. Learning features by watching objects move. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2701–2710, 2017.
* [39] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2536–2544, 2016.
* [40] Chao Peng, Tete Xiao, Zeming Li, Yuning Jiang, Xiangyu Zhang, Kai Jia, Gang Yu, and Jian Sun. Megdet: A large mini-batch object detector. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6181–6189, 2018.
* [41] Pierre H. Richemond, Jean-Bastien Grill, Florent Altché, Corentin Tallec, Florian Strub, Andrew Brock, Samuel Smith, Soham De, Razvan Pascanu, Bilal Piot, and Michal Valko. BYOL works even without batch statistics. arXiv preprint arXiv:2010.10241, 2020.
* [42] Tim Salimans and Durk P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in neural information processing systems, pages 901–909, 2016.
* [43] Saurabh Singh and Abhinav Shrivastava. Evalnorm: Estimating batch normalization statistics for evaluation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3633–3641, 2019.
* [44] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. arXiv preprint arXiv:2001.07685, 2020.
* [45] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in neural information processing systems, pages 1195–1204, 2017.
* [46] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.
* [47] Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for good views for contrastive learning. arXiv preprint arXiv:2005.10243, 2020.
* [48] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. Fcos: Fully convolutional one-stage object detection. In Proceedings of the IEEE international conference on computer vision, pages 9627–9636, 2019.
* [49] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
* [50] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103, 2008.
* [51] Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In Proceedings of the IEEE international conference on computer vision, pages 2794–2802, 2015.
* [52] Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European conference on computer vision (ECCV), pages 3–19, 2018.
* [53] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github.com/facebookresearch/detectron2, 2019.
* [54] Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3733–3742, 2018.
* [55] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848, 2019.
* [56] Junjie Yan, Ruosi Wan, Xiangyu Zhang, Wei Zhang, Yichen Wei, and Jian Sun. Towards stabilizing batch statistics in backward propagation of batch normalization. arXiv preprint arXiv:2001.06838, 2020.
* [57] Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017.
* [58] Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. S4l: Self-supervised semi-supervised learning. In Proceedings of the IEEE international conference on computer vision, pages 1476–1485, 2019.
* [59] Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. S4l: Self-supervised semi-supervised learning. In Proceedings of the IEEE international conference on computer vision, pages 1476–1485, 2019.
* [60] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European conference on computer vision, pages 649–666. Springer, 2016.
* [61] Richard Zhang, Phillip Isola, and Alexei A Efros. Split-brain autoencoders: Unsupervised learning by cross-channel prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1058–1067, 2017.
* [62] Chengxu Zhuang, Alex Lin Zhai, and Daniel Yamins. Local aggregation for unsupervised learning of visual embeddings. In Proceedings of the IEEE International Conference on Computer Vision, pages 6002–6012, 2019.
A one-parameter regularization freedom of the Hamiltonian constraint for loop
quantum gravity is analyzed. The corresponding spatially flat, homogeneous and
isotropic model includes the two well-known models of loop quantum cosmology
as special cases. The quantum bounce nature is tenable in the generalized
cases. For positive values of the regularization parameter, the effective
Hamiltonian leads to an asymptotic de Sitter branch of the Universe connecting
to the standard Friedmann branch through the quantum bounce. Remarkably, by
suitably choosing the value of the regularization parameter, the observed
cosmological constant can emerge in the large volume limit from the effect of
quantum gravity, while the effective Newtonian constant satisfies the
experimental restrictions at the same time.
# Loop quantum gravity and cosmological constant
Xiangdong Zhang, Department of Physics, South China University of Technology, Guangzhou 510641, China
Gaoping Long, Department of Physics, South China University of Technology, Guangzhou 510641, China
Yongge Ma, Department of Physics, Beijing Normal University, Beijing 100875, China
cosmological constant, loop quantum cosmology, effective equation
## I Introduction
The origin of the current cosmic acceleration is one of the biggest challenges
to modern physics, usually referred to as the dark energy issue. Many possible
mechanisms have been proposed to account for it, such as phenomenological
models Friemann08 , modified gravity Banerjee ; Sen ; Qiang ; Peebles03 ,
higher dimensions Tye01 , and so on. Among them, the cosmological constant is
generally believed to be the simplest explanation Peebles03 ; Weinberg89 .
However, the nature of the cosmological constant is still mysterious. Whether
it is a purely classical effect or has a quantum origin is a crucial open
issue. It is well known that the awkward cosmological constant problem appears
if one considers quantum matter fields on a classical spacetime background
Peebles03 ; Weinberg89 . A challenging question is whether a realistic
cosmological constant could emerge from a certain quantum gravity theory.
How to unify general relativity (GR) with quantum mechanics remains the
biggest theoretical challenge to fundamental physics. Among various approaches
to quantum gravity, loop quantum gravity (LQG) is notable for its background
independence Ro04 ; Th07 ; As04 ; Ma07 . The non-perturbative quantization
procedure of LQG has been applied not only to GR, but also to the metric
$f(\mathcal{R})$ theories Zh11 ; Zh11b , scalar-tensor theories ZM12a ; ZM11c ,
and higher dimensional gravity Thiemann13 . The ideas and techniques of LQG
have been successfully carried out in the symmetry-reduced models of loop
quantum cosmology (LQC). We refer to LQC5 ; Boj ; APS3 ; AS11 for reviews on LQC.
A remarkable result of LQC is that the classical big bang singularity of the
Universe can be avoided by a quantum bounce LQC5 ; Boj ; APS3 ; AS11 ; YDM ;
DMY . Moreover, LQC opens a promising avenue to relate quantum gravity effects
to cosmological observations of the very early Universe Agullo12 ; AG . As in
any quantization procedure of a classical theory, different regularization
schemes exist in LQC as well as in LQG Th07 ; As04 ; YM15 ; Lewandowski15
; Singh18 . In particular, for the LQC model of the flat Friedmann-Lemaître-
Robertson-Walker (FLRW) universe, alternative Hamiltonian constraint operators
were proposed APS ; YDM . Note that, to inherit more features from LQG, the
so-called Euclidean term and Lorentzian term of the Hamiltonian constraint
were treated independently in the LQC model in YDM . It was recently shown in
Pawlowski18 ; Pawlowski19 that one of the quantum Hamiltonians proposed in
YDM can lead to a new evolution scenario where the prebounce geometry could
be described at the effective level by a de Sitter spacetime. This raises the
possibility of obtaining a positive cosmological constant from a model of LQC.
However, the cosmological constant obtained in Pawlowski18 is very large
($\Lambda\approx 1.03\ell^{-2}_{Pl}\sim 10^{70}m^{-2}$) and thus fails to fit
the current observations, which require a very small cosmological constant
($\Lambda_{ob}\sim 1.09168\times 10^{-52}m^{-2}$). Fortunately, this is not
the end of the story. In this Letter, we will reveal the new possibility that,
even if one starts with classical GR without a cosmological constant, there
still exists a certain regularization of the Hamiltonian in LQC such that a
small enough positive cosmological constant can emerge at the effective level.
This is a quantum dynamical effect significantly different from the usual
scenario where one adds a non-dynamical cosmological constant to both the
classical and quantum Einstein equations and fixes its value by observations.
Moreover, the regularization choice inherits that of full LQG. It is
reasonable to infer that the effective Hamiltonian could also be obtained by
suitable semiclassical analysis of a certain Hamiltonian of LQG.
## II Classical setting
LQG is based on the connection-dynamical formulation of GR defined on a
spacetime manifold $M=R\times\Sigma$, where $\Sigma$ denotes a three-
dimensional spatial manifold. The classical phase space consists of the
Ashtekar-Barbero variables $(A_{a}^{i}(x),E_{i}^{a}(x))$, where $A_{a}^{i}(x)$
is an $SU(2)$ connection and $E_{i}^{a}(x)$ is a densitized triad As04 ; Th07 ;
Ma07 . The non-vanishing Poisson bracket is given by
$\displaystyle\{A_{a}^{i}(x),E_{j}^{b}(y)\}=8\pi
G\gamma\delta_{a}^{b}\delta_{j}^{i}\delta^{3}(x,y)$ (1)
where $G$ is the gravitational constant and $\gamma$ is the Barbero-Immirzi
parameter. The classical dynamics of GR is thus obtained by imposing the
Gaussian, diffeomorphism and Hamiltonian constraints on the phase space, where
the latter represents the reparametrization freedom of time variable. In LQG,
the notable Hamiltonian constraint operator proposed in Thiemann98 and an
alternative one proposed in YM15 are based on the regularization schemes of
the following expression of the Hamiltonian constraint
$\displaystyle H_{g}=\frac{1}{16\pi
G}{\int_{\Sigma}}d^{3}xN\left[F^{j}_{ab}-(\gamma^{2}+1)\varepsilon_{jmn}K^{m}_{a}K^{n}_{b}\right]\frac{\varepsilon_{jkl}E^{a}_{k}E^{b}_{l}}{\sqrt{q}},$
(2)
where $N$ is the lapse function, $q$ denotes the determinant of the spatial
metric, $F^{i}_{ab}$ is the curvature of connection $A^{i}_{a}$, and
$K^{i}_{a}$ represents the extrinsic curvature of $\Sigma$. The so-called
Euclidean term $H^{E}$ and Lorentzian term $H^{L}$ in Eq. (2) are denoted
respectively as
$\displaystyle H^{E}$ $\displaystyle=$ $\displaystyle\frac{1}{16\pi
G}{\int_{\Sigma}}d^{3}xNF^{j}_{ab}\frac{\varepsilon_{jkl}E^{a}_{k}E^{b}_{l}}{\sqrt{q}},$
(3)
and
$\displaystyle H^{L}$ $\displaystyle=$ $\displaystyle\frac{1}{16\pi
G}{\int_{\Sigma}}d^{3}xN\left(\varepsilon_{jmn}K^{m}_{a}K^{n}_{b}\right)\frac{\varepsilon_{jkl}E^{a}_{k}E^{b}_{l}}{\sqrt{q}}.$
(4)
There is another alternative Hamiltonian constraint operator proposed in
Lewandowski15 for LQG, which is based on the regularization scheme of the
following expression of the Hamiltonian constraint,
$\displaystyle H_{g}$ $\displaystyle=$ $\displaystyle-\frac{1}{16\pi
G\gamma^{2}}{\int_{\Sigma}}d^{3}xN\left[F^{j}_{ab}\frac{\varepsilon_{jkl}E^{a}_{k}E^{b}_{l}}{\sqrt{q}}+(\gamma^{2}+1)\sqrt{q}R\right]$ (5)
where $R$ is the 3-dimensional spatial curvature of $\Sigma$. It is easy to
see that the expressions (2) and (5) are equivalent to each other by using the
classical identity (up to the Gaussian constraint) Th07
$\displaystyle H^{E}=\gamma^{2}H^{L}-{\int_{\Sigma}}\sqrt{q}R.$ (6)
Here, we point out that there is a one-parameter freedom in expressing the
classical Hamiltonian constraint in the connection formalism. The general
expression can be written as
$\displaystyle H_{g}$ $\displaystyle=$ $\displaystyle\lambda
H^{E}-(1+\lambda\gamma^{2})H^{L}+(-1+\lambda){\int_{\Sigma}}\sqrt{q}R,$ (7)
where $\lambda$ is an arbitrary real number representing the freedom of
choice. Clearly, the expression (2) corresponds to the choice $\lambda=1$,
while the curvature-based expression above corresponds to the case
$\lambda=-\frac{1}{\gamma^{2}}$. It should be noted that all the classical
theories corresponding to the different choices of $\lambda$ are equivalent to
each other. However, the quantization of classically equivalent expressions
can lead to nonequivalent operators. Therefore, different choices of
$\lambda$ might correspond to different quantum theories. This is the case for
the LQC model which we are going to consider. Our idea is to use experiments
or observations to single out the preferred expression of the Hamiltonian (or
the free parameter $\lambda$).
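As a quick consistency check, substituting the two special values of $\lambda$ into (7) and using the identity (6) shows how the two earlier expressions are recovered, and why all values of $\lambda$ are classically equivalent:

```latex
% lambda = 1: the curvature term in (7) drops and (2) is recovered
H_{g}\big|_{\lambda=1} = H^{E} - (1+\gamma^{2})\,H^{L} .
% lambda = -1/gamma^2: the Lorentzian term drops, giving the curvature-based form
H_{g}\big|_{\lambda=-1/\gamma^{2}}
  = -\frac{1}{\gamma^{2}}\,H^{E} - \Big(1+\frac{1}{\gamma^{2}}\Big)\int_{\Sigma}\sqrt{q}\,R
  = -\frac{1}{\gamma^{2}}\Big[H^{E} + (\gamma^{2}+1)\int_{\Sigma}\sqrt{q}\,R\Big] .
% Classical equivalence: the difference of (7) at two values lambda, lambda' is
% (\lambda-\lambda')\,[ H^{E} - \gamma^{2}H^{L} + \int_{\Sigma}\sqrt{q}\,R ],
% which vanishes identically by (6).
```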
Now we consider the spatially flat FLRW model. One has to introduce an
"elemental cell" $\mathcal{V}$ on the spatial manifold $\mathbb{R}^{3}$ and
restrict all integrals to this cell. Then one chooses a fiducial Euclidean
metric ${}^{o}q_{ab}$ on $\mathbb{R}^{3}$, as well as the orthonormal triad
and co-triad $({}^{o}e^{a}_{i};{}^{o}\omega^{i}_{a})$ adapted to $\mathcal{V}$
such that ${}^{o}q_{ab}={}^{o}\omega^{i}_{a}{}^{o}\omega^{i}_{b}$. Via fixing
the degrees of freedom of local gauge and diffeomorphism transformations, one
can obtain the reduced connection and densitized triad as LQC5
$\displaystyle A_{a}^{i}=\tilde{c}V_{0}^{-\frac{1}{3}}\
{}^{o}\omega^{i}_{a},\quad\quad\quad
E^{b}_{j}=pV_{0}^{-\frac{2}{3}}\sqrt{\det({}^{0}q)}\ {}^{o}e^{b}_{j},$
where $V_{0}$ is the volume of $\mathcal{V}$ measured by ${}^{o}q_{ab}$,
$\tilde{c},p$ are only functions of the cosmological time $t$. To identify a
dynamical matter field as an internal clock, we employ a massless scalar field
$\phi$ with Hamiltonian
$\displaystyle
H_{\phi}=\frac{p^{2}_{\phi}}{2{\left|{p}\right|}^{\frac{3}{2}}},$ (8)
where $p_{\phi}$ is the momentum of $\phi$. Hence the phase space of the
cosmological model consists of conjugate pairs $(\tilde{c},p)$ and
$(\phi,p_{\phi})$, with the following nontrivial Poisson brackets,
$\displaystyle\{\tilde{c},p\}$ $\displaystyle=$ $\displaystyle\frac{8\pi
G}{3}\gamma,\quad\{\phi,p_{\phi}\}=1.$ (9)
Note that the gravitational variables are related to the scale factor $a$ by
${\left|{p}\right|}=a^{2}V_{0}^{\frac{2}{3}}$ and
$\tilde{c}=\gamma\dot{a}V_{0}^{\frac{1}{3}}$. Since all the Hamiltonians
corresponding to (7) with different choices of $\lambda$ are equivalent to
each other, they all lead to the standard Friedmann equation without a
cosmological constant,
$\displaystyle H^{2}=\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G}{3}\rho.$
(10)
## III Quantum dynamics
In the physically reasonable ${\bar{\mu}}$ scheme of LQC APS3 , it is
convenient to introduce new conjugate variables for gravity by the canonical
transformation
$\displaystyle v:=2\sqrt{3}sgn(p){\bar{\mu}}^{-3},\quad
b:={\bar{\mu}}\tilde{c},$
where ${\bar{\mu}}=\sqrt{\frac{\Delta}{|p|}}$ with $\Delta=4\sqrt{3}\pi\gamma
G\hbar$ being the minimum nonzero eigenvalue of the area operator Ash-view .
The new variables also form a pair of conjugate variables with
$\displaystyle\{b,v\}=\frac{2}{\hbar}\ .$
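The value of this bracket can be verified directly from the definitions above (taking $p>0$ for simplicity; $v$ is independent of $\tilde{c}$, so only one term of the bracket survives):

```latex
v = 2\sqrt{3}\,\Big(\frac{p}{\Delta}\Big)^{3/2},
\qquad
b = \sqrt{\frac{\Delta}{p}}\;\tilde{c},
\\[4pt]
\{b,v\}
 = \frac{\partial b}{\partial\tilde{c}}\,\frac{\partial v}{\partial p}\,\{\tilde{c},p\}
 = \sqrt{\frac{\Delta}{p}}\cdot\frac{3\sqrt{3}\,\sqrt{p}}{\Delta^{3/2}}\cdot\frac{8\pi G\gamma}{3}
 = \frac{8\sqrt{3}\,\pi G\gamma}{\Delta}
 = \frac{2}{\hbar},
\\[4pt]
\text{where the last step uses } \Delta = 4\sqrt{3}\,\pi\gamma G\hbar .
```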
In LQC, the kinematical Hilbert space for the geometry part is defined as
$\mathcal{H}_{\mathrm{kin}}^{\mathrm{gr}}:=L^{2}(R_{Bohr},d\mu_{H})$, where
$R_{Bohr}$ and $d\mu_{H}$ are respectively the Bohr compactification of the
real line and Haar measure on it LQC5 . The kinematical Hilbert space for the
scalar field part is defined as in the usual Schrödinger representation by
$\mathcal{H}_{\mathrm{kin}}^{\mathrm{sc}}:=L^{2}(R,d\mu)$. Hence the whole
Hilbert space of our model is a direct product,
$\mathcal{H}_{\mathrm{kin}}=\mathcal{H}^{\mathrm{gr}}_{\mathrm{kin}}\otimes\mathcal{H}^{\mathrm{sc}}_{\mathrm{kin}}$.
In $\mathcal{H}^{\mathrm{gr}}_{\mathrm{kin}}$, there are two elementary
operators, $\widehat{e^{ib/2}}$ and $\hat{v}$. It turns out that the
eigenstates $|{v}\rangle$ of $\hat{v}$ constitute an orthonormal basis of
$\mathcal{H}_{\mathrm{kin}}^{\mathrm{gr}}$. In the $v$-representation, the
actions of these two operators on the basis read
$\displaystyle\widehat{e^{\frac{ib}{2}}}|{v}\rangle$ $\displaystyle=$
$\displaystyle|{v+1}\rangle,\quad\hat{v}|{v}\rangle=v|{v}\rangle.$ (11)
Let $|\phi)$ be the generalized eigenstates of $\hat{\phi}$ in
$\mathcal{H}^{\mathrm{sc}}_{\mathrm{kin}}$. We denote
$|\phi,v):=|{v}\rangle\otimes|\phi)$ as the generalized basis for
$\mathcal{H}_{\mathrm{kin}}$.
Notice that the spatial curvature $R$ vanishes in the spatially flat FLRW
model. Hence the general expression (7) reduces to
$\displaystyle H_{g}=\frac{1}{16\pi G}\int d^{3}x\left[\lambda
F^{j}_{ab}-(\lambda\gamma^{2}+1)\varepsilon_{jmn}K^{m}_{a}K^{n}_{b}\right]\frac{\varepsilon_{jkl}E^{a}_{k}E^{b}_{l}}{\sqrt{q}}$ (12)
By the regularization procedure mimicking that in full LQG, both the Euclidean
term $H^{E}$ APS3 and the Lorentzian term $H^{L}$ YDM have been quantized as
well-defined operators in $\mathcal{H}_{\mathrm{kin}}^{\mathrm{gr}}$.
Therefore the operator $\hat{H}_{G}$ corresponding to the reduced expression
above is readily available. Its action on a wave function $\Psi(v,\phi)$ in
$\mathcal{H}_{\mathrm{kin}}$ is given by the following difference equation,
$\displaystyle\hat{H}_{G}\Psi(v,\phi)$ $\displaystyle=$ $\displaystyle
f_{+}(v)\Psi(v+4,\phi)+(f_{0}(v)+g_{0}(v))\Psi(v,\phi)$ (13) $\displaystyle+$
$\displaystyle f_{-}(v)\Psi(v-4,\phi)+g_{+}(v)\Psi(v+8,\phi)$ $\displaystyle+$
$\displaystyle g_{-}(v)\Psi(v-8,\phi),$
where
$\displaystyle f_{+}(v)$
$\displaystyle=-\frac{27\lambda}{16}\sqrt{\frac{8\pi}{6}}\frac{\sqrt{G\hbar}}{8\pi
G\gamma^{\frac{3}{2}}}{\Big{|}{{\left|{v+2}\right|}-{\left|{v}\right|}}\Big{|}}{\left|{v+1}\right|},$
$\displaystyle f_{-}(v)$ $\displaystyle=f_{+}(v-2),\quad
f_{0}(v)=-f_{+}(v)-f_{-}(v).$ $\displaystyle g_{+}(v)$
$\displaystyle=-\frac{(1+\lambda\gamma^{2})\sqrt{6}}{2^{8}\times
3^{3}}\,\frac{\gamma^{3/2}}{(8\pi
G)^{3/2}\hbar^{1/2}}\,\frac{1}{L}\tilde{g}_{+}(v),$
$\displaystyle\tilde{g}_{+}(v)$
$\displaystyle=\Big{[}M_{v}(1,5)f_{+}(v+1)-M_{v}(-1,3)f_{+}(v-1)\Big{]}$
$\displaystyle\quad\times(v+4)M_{v}(3,5)$
$\displaystyle\quad\times\Big{[}M_{v}(5,9)f_{+}(v+5)-M_{v}(3,7)f_{+}(v+3)\Big{]},$
$\displaystyle g_{-}(v)$
$\displaystyle=-\frac{(1+\lambda\gamma^{2})\sqrt{6}}{2^{8}\times
3^{3}}\,\frac{\gamma^{3/2}}{(8\pi
G)^{3/2}\hbar^{1/2}}\,\frac{1}{L}\tilde{g}_{-}(v),$
$\displaystyle\tilde{g}_{-}(v)$
$\displaystyle=\Big{[}M_{v}(1,-3)f_{-}(v+1)-M_{v}(-1,-5)f_{-}(v-1)\Big{]}$
$\displaystyle\quad\times(v-4)M_{v}(-5,-3)$
$\displaystyle\quad\times\big{[}M_{v}(-3,-7)f_{-}(v-3)-M_{v}(-5,-9)f_{-}(v-5)\big{]},$
$\displaystyle g_{0}(v)$
$\displaystyle=-\frac{(1+\lambda\gamma^{2})\sqrt{6}}{2^{8}\times
3^{3}}\,\frac{\gamma^{3/2}}{(8\pi
G)^{3/2}\hbar^{1/2}}\,\frac{1}{L}\tilde{g}_{0}(v),$
$\displaystyle\tilde{g}_{0}(v)$
$\displaystyle=\Big{[}M_{v}(1,5)f_{+}(v+1)-M_{v}(-1,3)f_{+}(v-1)\Big{]}$
$\displaystyle\quad\times(v+4)M_{v}(3,5)$
$\displaystyle\quad\times\Big{[}M_{v}(5,1)f_{-}(v+5)-M_{v}(3,-1)f_{-}(v+3)\Big{]}$
$\displaystyle+\Big{[}M_{v}(1,-3)f_{-}(v+1)-M_{v}(-1,-5)f_{-}(v-1)\Big{]}$
$\displaystyle\quad\times(v-4)M_{v}(-5,-3)$
$\displaystyle\quad\times\Big{[}M_{v}(-3,1)f_{+}(v-3)-M_{v}(-5,-1)f_{+}(v-5)\big{]},$
with $M_{v}(a,b):=|v+a|-|v+b|$ and $L=\frac{4}{3}\sqrt{\frac{\pi\gamma
G\hbar}{3\Delta}}$. Thus, the Hamiltonian constraint equation of our LQC model
can be written as
$\displaystyle\left(\hat{H}_{G}+\frac{\sqrt{3}\hat{p}_{\phi}^{2}}{(\Delta)^{\frac{3}{2}}}\widehat{{\left|{v}\right|}^{-1}}\right)\Psi(\phi,v)=0,$
(14)
where the action of the Hamiltonian of matter field reads
$\displaystyle\frac{\sqrt{3}\hat{p}_{\phi}^{2}}{(\Delta)^{\frac{3}{2}}}\widehat{{\left|{v}\right|}^{-1}}\Psi(v,\phi)=-\frac{\sqrt{3}}{(\Delta)^{\frac{3}{2}}}\hbar^{2}B(v)\frac{\partial^{2}\Psi(v,\phi)}{\partial\phi^{2}},$
(15)
with
$B(v)=\left(\frac{3}{2}\right)^{3}{\left|{v}\right|}{\left|{{\left|{v+1}\right|}^{1/3}-{\left|{v-1}\right|}^{1/3}}\right|}^{3}$
APS3 . Note that we still have the freedom to choose a particular value of the
parameter $\lambda$. It is obvious that, if one sets
$\lambda=-\frac{1}{\gamma^{2}}$, Eq. (14) coincides with the quantum
dynamics in APS3 , while by choosing $\lambda=1$, one of the Hamiltonians in
YDM is obtained. Our viewpoint is that the value of $\lambda$ should be
fixed by observations. To this aim, let us study the effective theory
indicated by Eq. (14). It has been shown in YDM that the expectation values
of the Euclidean term $H^{E}$ and the Lorentzian term $H^{L}$ up to sub-leading
order read respectively as
$\displaystyle\langle\widehat{H}^{E}\rangle$
$\displaystyle=\frac{3{\left|{v}\right|}\beta}{8\pi
G\Delta}\left[e^{-4\epsilon^{2}}\sin^{2}(b)+\frac{1}{2}(1-e^{-4\epsilon^{2}})\right],$
$\displaystyle\langle\widehat{H}^{L}\rangle$
$\displaystyle=\frac{3{\left|{v}\right|}\beta}{32\pi
G\gamma^{2}\Delta}\left[e^{-16\epsilon^{2}}\sin^{2}(2b)+\frac{1}{2}(1-e^{-16\epsilon^{2}})\right]$
(16)
where $\beta=2\pi G\hbar\gamma\sqrt{\Delta}$ and $\epsilon=\frac{1}{d}$ with
$d$ denoting the characteristic "width" of the semiclassical state. Thus the
total effective Hamiltonian constraint of the model at leading order reads
$\displaystyle H_{F}=-\frac{3\beta}{8\pi
G\gamma^{2}\Delta}|v|\sin^{2}b\left(1-(1+\lambda\gamma^{2})\sin^{2}(b)\right)+\beta|v|\rho,$
(17)
where $\rho=\frac{p^{2}_{\phi}}{2V^{2}}$ with
$V={\left|{p}\right|}^{\frac{3}{2}}$ being the physical volume of
$\mathcal{V}$. It should be noted that although Eq. (17) could be obtained
from the Hamiltonian studied in Refs. YDM ; Pawlowski18 after doing the
rescalings $\gamma^{2}\mapsto\lambda\gamma^{2}$ and
$\Delta\mapsto\Delta/\lambda$, the theory is not invariant under the
rescalings. The Hamiltonian (17) represents a family of effective theories
beyond that in Refs. YDM ; Pawlowski18 .
Now we consider the effective Hamiltonian (17) with $\lambda>0$. At the
kinematical level, the matter energy-density $\rho$ can be solved by the
effective Hamiltonian constraint $H_{F}=0$ as
$\rho=\frac{3}{8\pi
G\Delta\gamma^{2}}\sin^{2}b(1-(1+\lambda\gamma^{2})\sin^{2}b).$ (18)
This in turn implies two solutions $b_{+}$ and $b_{-}$ satisfying
$\displaystyle\sin^{2}(b_{\pm})=\frac{1\pm\sqrt{1-\frac{\rho}{\rho_{c}}}}{2(1+\lambda\gamma^{2})},$
(19)
where $\rho_{c}=\frac{3}{32\pi G(1+\lambda\gamma^{2})\gamma^{2}\Delta}$.
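As a quick consistency check, maximising the right-hand side of Eq.(18) over $x=\sin^{2}b$ reproduces this critical density:

```latex
% f(x) = x\,(1-(1+\lambda\gamma^{2})x) is maximal where f'(x)=0:
f'(x) = 1 - 2(1+\lambda\gamma^{2})\,x = 0
\quad\Rightarrow\quad
x_{*} = \frac{1}{2(1+\lambda\gamma^{2})},\qquad
f(x_{*}) = \frac{1}{4(1+\lambda\gamma^{2})},
% so that
\rho_{\max} = \frac{3}{8\pi G\Delta\gamma^{2}}\,f(x_{*})
            = \frac{3}{32\pi G(1+\lambda\gamma^{2})\gamma^{2}\Delta}
            = \rho_{c}.
```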
Taking into account the fact that
$0<\sin^{2}b\leq\frac{1}{1+\lambda\gamma^{2}}$, it is easy to see that $\rho$
is bounded by its maximum value $\rho_{c}$. The effective equations of motion
of the model with respect to the cosmological time $t$ can be derived by the
Hamiltonian constraint (17). In particular, one can obtain
$\displaystyle\dot{v}$ $\displaystyle=\frac{3\beta}{4\pi
G\hbar\gamma^{2}{\Delta}}|v|\sin(2b)(1-2(1+\lambda\gamma^{2})\sin^{2}b),$ (20)
$\displaystyle\dot{b}$ $\displaystyle=-\frac{p_{\phi}^{2}}{\hbar\beta
v^{2}}-\frac{3\beta}{4\pi
G\hbar\gamma^{2}{\Delta}}\sin^{2}b(1-(1+\lambda\gamma^{2})\sin^{2}b).$ (21)
Hence we have $\dot{v}=0$ for $b_{c}$ satisfying
$\sin^{2}(b_{c})=\frac{1}{2(1+\lambda\gamma^{2})}$. Moreover, the second order
derivative of $v$ can be calculated at this point as
$\displaystyle\ddot{v}|_{b=b_{c}}$ $\displaystyle=$ $\displaystyle 24\pi
G(1+\beta^{2})\beta^{2}|v|\sin^{2}(2b_{c})(1+\lambda\gamma^{2})\rho|_{b=b_{c}}>0$
Hence the matter density $\rho$ takes its maximum at the bounce point.
Therefore the point $v|_{b=b_{c}}$ is the minimum where the quantum bounce of
the Universe happens. In terms of the scale factor $a$, the Friedmann and
Raychaudhuri equations of this model are derived as
$\displaystyle H^{2}$ $\displaystyle=$
$\displaystyle\frac{1}{\gamma^{2}\Delta}\sin^{2}(b)(1-\sin^{2}(b))(1-2(1+\lambda\gamma^{2})\sin^{2}b)^{2}$
$\displaystyle\frac{\ddot{a}}{a}$ $\displaystyle=$ $\displaystyle H^{2}+\frac{1}{\gamma\sqrt{\Delta}}\dot{b}\left(1-2\sin^{2}(b)-2(1+\lambda\gamma^{2})\sin^{2}b(3-4\sin^{2}b)\right),$ (23)
where $H$ denotes the Hubble parameter and $b$ has two solutions as shown in
(19). We are going to show that the two solutions correspond to the two
periods of the evolution of the Universe, divided by the bounce point.
The Hamiltonian (17) implies that $p_{\phi}$ is constant of motion and hence
$\phi$ is monotonic with respect to $t$. By identifying $\phi$ as a dynamical
time, we obtain the following analytic solution of the effective equations of
motion in the case of $\lambda>0$,
$x(\phi)=\frac{1}{1+\lambda\gamma^{2}\cosh^{2}(\sqrt{\frac{3\beta^{2}}{\hbar^{2}\pi
G\Delta\gamma^{2}}}(\phi-\phi_{o}))},$ (24) $\rho(\phi)=\frac{3\lambda}{8\pi
G\Delta}(\frac{\sinh(\sqrt{\frac{3\beta^{2}}{\hbar^{2}\pi
G\Delta\gamma^{2}}}(\phi-\phi_{o}))}{1+\lambda\gamma^{2}\cosh^{2}(\sqrt{\frac{3\beta^{2}}{\hbar^{2}\pi
G\Delta\gamma^{2}}}(\phi-\phi_{o}))})^{2}$ (25)
and
$v(\phi)=\sqrt{\frac{4\pi
G|p_{\phi}|^{2}\Delta}{3\lambda\beta^{2}}}\frac{1+\lambda\gamma^{2}\cosh^{2}(\sqrt{\frac{3\beta^{2}}{\hbar^{2}\pi
G\Delta\gamma^{2}}}(\phi-\phi_{o}))}{|\sinh(\sqrt{\frac{3\beta^{2}}{\hbar^{2}\pi
G\Delta\gamma^{2}}}(\phi-\phi_{o}))|},$ (26)
where we defined $x:=\sin^{2}b$, and $\phi_{o}$ is an integration constant.
Eq.(24) indicates that the range of $x$ in the physical evolution covers the
interval $(0,\frac{1}{1+\lambda\gamma^{2}})$. This confirms that the bounce
point of $x=\frac{1}{2(1+\lambda\gamma^{2})}$ does appear in the physical
evolution. Moreover, by using the Hamiltonian (17) and Eq.(26), the relation
between $\phi$ and $t$ can be solved as
$t(\phi)-t_{0}=\frac{2\pi
G\hbar\Delta\gamma^{3}\sqrt{\lambda}\text{sgn}[p_{\phi}(\phi-\phi_{0})]}{3\beta}\left(\cosh(\sqrt{\frac{3\beta^{2}}{\hbar^{2}\pi
G\Delta\gamma^{2}}}(\phi-\phi_{0}))-\frac{1+\lambda\gamma^{2}}{\lambda\gamma^{2}}\ln|\coth(\sqrt{\frac{3\beta^{2}}{4\pi
G\hbar^{2}\Delta\gamma^{2}}}(\phi-\phi_{0}))|\right),$
where $\text{sgn}[p_{\phi}(\phi-\phi_{0})]$ denotes the sign of
$p_{\phi}(\phi-\phi_{0})$. Therefore, either of the two domains
$\phi>\phi_{0}$ and $\phi<\phi_{0}$ can cover the range of $t$. We thus
consider the domain $\phi>\phi_{0}$ without loss of generality. Then the
infinite past and infinite future of $t$ correspond to $\phi\to\phi_{0}^{+}$
and $\phi\to+\infty$ respectively. Hence Eq.(24) ensures that the two
solutions $b_{-}$ and $b_{+}$ in (19) correspond to the two periods given by
$0<\sin^{2}b_{-}\leq\frac{1}{2(1+\lambda\gamma^{2})}$ and
$\frac{1}{2(1+\lambda\gamma^{2})}\leq\sin^{2}b_{+}<\frac{1}{1+\lambda\gamma^{2}}$
of the evolution of the Universe, divided by the bounce point at $b_{c}$. By
Eqs. (26) and (24), it is obvious that $v\rightarrow\infty$ is achieved at
$(\phi-\phi_{o})\rightarrow+\infty$ or $(\phi-\phi_{o})\rightarrow 0^{+}$, and
correspondingly the behaviour of $b$ is given by
$\displaystyle b$ $\displaystyle\rightarrow$ $\displaystyle 0\quad\text{if}\
(\phi-\phi_{o})\rightarrow+\infty,$ $\displaystyle b$
$\displaystyle\rightarrow$ $\displaystyle
b_{0}\equiv\arcsin{(\frac{1}{\sqrt{(1+\lambda\gamma^{2})}})}\ \text{if}\
(\phi-\phi_{o})\rightarrow 0^{+}.$ (27)
To study the asymptotics of the effective Friedmann and Raychaudhuri
equations, we expand Eq.(III) and Eq.(23) in $b$ and $b-b_{0}$ respectively up
to second order in the above large $v$ limit and thus obtain
$\displaystyle H^{2}$ $\displaystyle=$ $\displaystyle\frac{8\pi
G}{3}\rho,\quad\quad(\mbox{for}\quad b\rightarrow 0)$ (28) $\displaystyle
H^{2}$ $\displaystyle=$
$\displaystyle\left(\frac{1-5\lambda\gamma^{2}}{1+\lambda\gamma^{2}}\right)\frac{8\pi
G\rho}{3}+\frac{\Lambda}{3},\quad(\mbox{for}\quad b\rightarrow b_{0})$ (29)
and
$\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}(\rho+3P),\quad(\mbox{for}\quad
b\rightarrow 0),$ (30)
$\frac{\ddot{a}}{a}=-\left(\frac{1-5\lambda\gamma^{2}}{1+\lambda\gamma^{2}}\right)\frac{4\pi
G}{3}(\rho+3P)+\frac{\Lambda}{3},\quad(\mbox{for}\quad b\rightarrow b_{0}),$
(31)
where we defined the pressure of matter by $P=-\frac{\partial
H_{\phi}}{\partial V}$, and an effective cosmological constant
$\displaystyle\Lambda\equiv\frac{3\lambda}{(1+\lambda\gamma^{2})^{2}\Delta}.$
(32)
The asymptotic behaviors of the scalar curvature
$R=6(H^{2}+\frac{\ddot{a}}{a})$ are given by
$R=-16\pi G\rho,\quad(\mbox{for}\quad b\rightarrow 0),$ (33) $R=-16\pi
G\rho\left(\frac{1-5\lambda\gamma^{2}}{1+\lambda\gamma^{2}}\right)+4\Lambda,\quad(\mbox{for}\quad
b\rightarrow b_{0}).$ (34)
Figure 1: $\Lambda\Delta$ as a function of $\lambda$. The value $\gamma=0.2375$
is used in the calculation.
Thus, if the Universe started to collapse from an initial state of spatially
flat FLRW configuration, it would undergo a quantum bounce and evolve into an
asymptotic de-Sitter branch. A notable feature of this picture is that an
effective cosmological constant $\Lambda$ emerges from the quantum gravity
effect. Eq.(29) implies an effective Newtonian constant
$\displaystyle G_{\lambda}=\frac{1-5\lambda\gamma^{2}}{1+\lambda\gamma^{2}}G$
(35)
in the de-Sitter epoch. If $G$ took the usual value of the Newtonian constant,
for the special choice of $\lambda=1$, $G_{\lambda}$ would receive a large
correction which is inconsistent with the current experiments Wang18 .
However, the regularization freedom does admit us to choose certain
sufficiently small $\lambda$ such that $G_{\lambda}$ satisfies the
experimental restrictions.
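For illustration, a minimal numerical sketch of Eq. (35), assuming the value $\gamma=0.2375$ quoted in Fig. 1:

```python
def G_ratio(lam, gamma=0.2375):
    """Ratio G_lambda / G of Eq. (35)."""
    return (1.0 - 5.0 * lam * gamma**2) / (1.0 + lam * gamma**2)

# For lambda = 1 the effective Newtonian constant deviates from G by ~32%,
# while for a sufficiently small lambda the deviation is negligible.
print(G_ratio(1.0))      # roughly 0.68
print(G_ratio(1e-122))   # roughly 1.0
```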
The value of $\Lambda$ is determined by the value of $\lambda$ through Eq.(32), and
the function $\Lambda\Delta(\lambda)$ is plotted in Fig.1. To reproduce the
current observed cosmological constant $\Lambda_{ob}$, the corresponding
$\lambda$ has two solutions,
$\lambda_{1}\sim\frac{3}{\gamma^{4}\Delta\Lambda_{ob}}$ and
$\lambda_{2}\sim\frac{\Delta\Lambda_{ob}}{3}$. It is obvious that for a
reasonable choice of $\gamma\sim 0.2$ coming from the calculation of black
hole entropy Lewandowski04 ; Ash-view , $\lambda_{1}$ is too big to give an
acceptable $G_{\lambda}$. Moreover, in this case the critical density
$\rho_{c}\sim\frac{3\Lambda_{ob}}{32\pi G}$ becomes very small and thus
conflicts with the experiments. However, $\lambda_{2}$ is sufficiently small
to give an acceptable $G_{\lambda}$ and in the meantime leads to a bouncing
density of Planck order as $\rho_{c}=\frac{3}{32\pi
G(1+\lambda\gamma^{2})\gamma^{2}\Delta}\sim\frac{3}{32\gamma^{2}\pi G\Delta}$.
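The two solutions can be checked with a short numerical sketch of Eq. (32) (illustrative only; Planck-order units with $\Delta=1$ and $\Delta\Lambda_{ob}\sim 10^{-122}$ are assumptions for the example):

```python
def Lambda_eff(lam, gamma=0.2375, Delta=1.0):
    """Effective cosmological constant of Eq. (32)."""
    return 3.0 * lam / ((1.0 + lam * gamma**2) ** 2 * Delta)

gamma = 0.2375
Lambda_ob = 1e-122  # order of magnitude of Delta * Lambda_ob in these units

# Small-lambda branch: Lambda ~ 3*lam/Delta, so lam2 ~ Delta*Lambda_ob/3.
lam2 = Lambda_ob / 3.0
# Large-lambda branch: Lambda ~ 3/(lam*gamma^4*Delta), so lam1 ~ 3/(gamma^4*Lambda_ob).
lam1 = 3.0 / (gamma ** 4 * Lambda_ob)

# Both branches reproduce the observed value, but only lam2 is small.
print(Lambda_eff(lam2) / Lambda_ob)  # close to 1
print(Lambda_eff(lam1) / Lambda_ob)  # close to 1
```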
For such a $\lambda$, Eqs. (29) and (31) reduce to
$\displaystyle H^{2}$ $\displaystyle=$ $\displaystyle\frac{8\pi
G\rho}{3}+\frac{\Lambda_{ob}}{3},$ (36) $\displaystyle\frac{\ddot{a}}{a}$
$\displaystyle=$ $\displaystyle-\frac{4\pi
G}{3}(\rho+3P)+\frac{\Lambda_{ob}}{3}.$ (37)
They are nothing but the standard Friedmann and Raychaudhuri equations with
the observational cosmological constant! Therefore, by choosing
$\lambda=\frac{\Delta\Lambda_{ob}}{3}\thicksim 10^{-122}$, the standard
dynamical equations can be obtained in the large volume limit of the asymptotic
de-Sitter branch.
## IV Discussion
Since the observational cosmological constant is so small, one may worry
that the sub-leading quantum correction terms, such as those in Eq. (16),
could influence the observational choice of the parameter $\lambda$. To check
this issue, let us consider the effective Hamiltonian constraint of the model
with sub-leading order YDM
$\displaystyle H_{F}$ $\displaystyle=$ $\displaystyle-\frac{3\beta}{8\pi
G\gamma^{2}\Delta}|v|\sin^{2}b\left(1-(1+\lambda\gamma^{2})\sin^{2}(b)+2\epsilon^{2}\right)$
(38)
$\displaystyle+\beta|v|\rho\left(1+\frac{1}{2{\left|{v}\right|}^{2}\epsilon^{2}}+\frac{\hbar^{2}}{2\sigma^{2}p^{2}_{\phi}}\right),$
where $\sigma$ denotes the "width" of the Gaussian semiclassical state of
matter field. Assuming that the time derivatives of the quantum corrections
are negligible higher-order terms, straightforward calculations similar to
those of Eqs. (III), (23), (28), (29), (30) and (31) give
$\displaystyle H^{2}$ $\displaystyle=$ $\displaystyle\frac{8\pi
G}{3}\rho_{eff}-\frac{2\epsilon^{2}}{\gamma^{2}\Delta},\quad(\mbox{for}\quad
b\rightarrow 0)$ (39) $\displaystyle H^{2}$ $\displaystyle=$
$\displaystyle\frac{8\pi G_{\lambda}}{3}\rho_{eff}+\frac{\Lambda}{3}$ (40)
$\displaystyle-\left(\frac{1-5\lambda\gamma^{2}}{1+\lambda\gamma^{2}}\right)\frac{2\epsilon^{2}}{\gamma^{2}\Delta},\quad(\mbox{for}\quad
b\rightarrow b_{0})$ $\displaystyle\frac{\ddot{a}}{a}$ $\displaystyle=$
$\displaystyle-\frac{4\pi
G}{3}(\rho_{eff}+3P)-\frac{2\epsilon^{2}}{\gamma^{2}\Delta},(\mbox{for}\quad
b\rightarrow 0)$ (41) $\displaystyle\frac{\ddot{a}}{a}$ $\displaystyle=$
$\displaystyle-\frac{4\pi G_{\lambda}}{3}(\rho_{eff}+3P)+\frac{\Lambda}{3}$
(42)
$\displaystyle-\left(\frac{1-5\lambda\gamma^{2}}{1+\lambda\gamma^{2}}\right)\frac{2\epsilon^{2}}{\gamma^{2}\Delta},\quad(\mbox{for}\quad
b\rightarrow b_{0})$
where
$\rho_{eff}=\rho\left(1+\frac{1}{2{\left|{v}\right|}^{2}\epsilon^{2}}+\frac{\hbar^{2}}{2\sigma^{2}p^{2}_{\phi}}\right)$.
Although the value of the quantum fluctuation or the Gaussian spread
$\epsilon$ could not be determined in the model, one may always require
$\widetilde{\Lambda}=\Lambda-\left(\frac{1-5\lambda\gamma^{2}}{1+\lambda\gamma^{2}}\right)\frac{6\epsilon^{2}}{\gamma^{2}\Delta}$
to coincide with $\Lambda_{ob}$. Then, taking into account the observational
restriction of $G_{\lambda}$ in (35), the regularization parameter should be
fixed as
$\displaystyle\lambda$ $\displaystyle=$
$\displaystyle\frac{B-\sqrt{B^{2}-4(\Lambda_{ob}\Delta\gamma^{2}-30\epsilon^{2})(\Lambda_{ob}\Delta\gamma^{2}+6\epsilon^{2})}}{2(\Lambda_{ob}\Delta\gamma^{2}-30\epsilon^{2})\gamma^{2}}$
(43) $\displaystyle\approx$
$\displaystyle\frac{\Lambda_{ob}\Delta}{3}+\frac{2\epsilon^{2}}{\gamma^{2}}$
with $B=3+24\epsilon^{2}-2\Lambda_{ob}\Delta\gamma^{2}$. Therefore, even if
the sub-leading quantum corrections were comparable with the observational
cosmological constant, their effect could always be absorbed into the effect
of the parameter $\lambda$.
To summarize, a one-parameter regularization freedom of the Hamiltonian
constraint for LQG is introduced by Eq.(7). The corresponding spatially flat
FLRW model is studied. The quantum difference equation representing the
evolution of the universe and its effective Hamiltonian are obtained. The
general expression (7) includes the Euclidean term quantum dynamics APS3 by
choosing $\lambda=-\frac{1}{\gamma^{2}}$ and the Euclidean-Lorentzian term
dynamics YDM by choosing $\lambda=1$. The quantum bounce nature of LQC is
tenable in the general case when the matter density approaches certain
critical density. For a chosen $\lambda>0$, the effective Hamiltonian of our
LQC model can lead to a branch of the Universe with an asymptotic positive
cosmological constant connecting to the FLRW branch through the quantum
bounce. Remarkably, by suitable choice of $\lambda$, the standard Friedmann
equation with the observational cosmological constant $\Lambda_{ob}$ can be
obtained in the large volume limit of the asymptotic de-Sitter branch. In the
meantime, unlike the case of $\lambda=1$, the effective Newtonian constant
$G_{\lambda}$ also satisfies the experimental restrictions. The significance
of the current work is that we found a choice of regularization such that the
accelerated expansion of our universe (dark energy) could be attributed to
the emergent effects of quantum gravity. From the expression (17), there
does exist a one-parameter ambiguity. This is related to the well-known fact
that the classically equivalent expressions would generally be nonequivalent
after quantization. By using the observational data, we successfully single
out the preferred value of $\lambda$ in the expression of the Hamiltonian. In
addition, this effect is generated from dynamics of quantum gravity rather
than a non-dynamical constant added by hand. Since the effective Hamiltonian
(17) can also be derived from full LQG by suitable semiclassical analysis DL18
; HL , our result indicates a fair possibility that dark energy emerges from
LQG.
## Acknowledgments
This work is supported by NSFC with Grants No. 11775082, No. 12047519, No.
11961131013 and No. 11875006.
## References
* (1) J. Frieman, M. Turner, D. Huterer, Dark energy and the accelerating Universe, Ann. Rev. Astron. Astrophys. 46, 385 (2008).
* (2) N. Banerjee and D. Pavon, Cosmic acceleration without quintessence, Phys. Rev. D 63, 043504 (2001).
* (3) S. Sen and A. A. Sen, Late time acceleration in Brans-Dicke cosmology, Phys. Rev. D 63, 124006 (2001).
* (4) L. Qiang, Y. Ma, M. Han and D. Yu, 5-dimensional Brans-Dicke theory and cosmic acceleration, Phys. Rev. D 71, 061501(R) (2005).
* (5) P. J. E. Peebles and Bharat Ratra, The cosmological constant and dark energy, Rev. Mod. Phys. 75, 559 (2003).
* (6) S.-H. Henry Tye and Ira Wasserman, Brane World Solution to the Cosmological Constant Problem, Phys. Rev. Lett. 86, 1682 (2001).
* (7) S. Weinberg, The cosmological constant problem, Rev. Mod. Phys. 61, 1 (1989).
* (8) C. Rovelli, Quantum Gravity, (Cambridge University Press, 2004).
* (9) T. Thiemann, Modern Canonical Quantum General Relativity, (Cambridge University Press, 2007).
* (10) A. Ashtekar and J. Lewandowski, Background independent quantum gravity: A status report, Class.Quant.Grav. 21, R53 (2004).
* (11) M. Han, W. Huang, and Y. Ma, Fundamental structure of loop quantum gravity, Int. J. Mod. Phys. D 16, 1397 ,(2007).
* (12) X. Zhang and Y. Ma, Extension of loop quantum gravity to $f(R)$ theories, Phys. Rev. Lett. 106, 171301 (2011).
* (13) X. Zhang and Y. Ma, Loop quantum f(R) theories, Phys. Rev. D 84, 064040 (2011).
* (14) X. Zhang and Y. Ma, Loop quantum Brans-Dicke theory, J. Phys.: Conf. Ser. 360, 012055 (2012).
* (15) X. Zhang and Y. Ma, Nonperturbative loop quantization of scalar-tensor theories of gravity, Phys. Rev. D 84, 104045 (2011).
* (16) N. Bodendorfer, T. Thiemann, A. Thurn, New Variables for Classical and Quantum Gravity in all Dimensions I. Hamiltonian Analysis, Class. Quantum Grav. 30 (2013) 045001.
* (17) A. Ashtekar, M. Bojowald, and J. Lewandowski, Mathematical structure of loop quantum cosmology, Adv. Theor. Math. Phys. 7, 233 (2003).
* (18) M. Bojowald, Loop quantum cosmology, Living Rev. Relativity 8, 11 (2005).
* (19) A. Ashtekar, P. Singh, Loop quantum cosmology: A status report, Class. Quant. Grav. 28, 213001 (2011).
* (20) A. Ashtekar, T. Pawlowski, P. Singh, Quantum nature of the big bang: Improved dynamics, Phys. Rev. D 74, 084003 (2006).
* (21) J. Yang, Y. Ding and Y. Ma, Alternative quantization of the Hamiltonian in loop quantum cosmology, Phys. Lett. B 682, 1 (2009).
* (22) Y. Ding, Y. Ma, and J. Yang, Effective scenario of loop quantum cosmology, Phys. Rev. Lett. 102, 051301 (2009).
* (23) I. Agullo, A. Ashtekar, and W. Nelson, Quantum Gravity Extension of the Inflationary Scenario, Phys. Rev. Lett. 109, 251301 (2012).
* (24) A. Ashtekar, B. Gupt, D. Jeong, and V. Sreenath, Alleviating the Tension in the Cosmic Microwave Background Using Planck-Scale Physics, Phys. Rev. Lett. 125, 051302 (2020)
* (25) J. Yang, Y. Ma, New Hamiltonian constraint operator for loop quantum gravity, Phys. Lett. B 751, 343 (2015).
* (26) M. Assanioussi, J. Lewandowski, I. Makinen, New scalar constraint operator for loop quantum gravity, Phys. Rev. D 92, 044042 (2015).
* (27) K. Liegener, P. Singh, Gauge-invariant bounce from loop quantum gravity, Class. Quant. Grav. 37 (2020) no.8, 085015.
* (28) A. Ashtekar, T. Pawlowski, and P. Singh, Quantum Nature of the Big Bang, Phys. Rev. Lett. 96, 141301 (2006).
* (29) M. Assanioussi, A. Dapor, K. Liegener, and T. Pawlowski, Emergent de sitter epoch of the quantum cosmos from loop quantum cosmology, Physical Review Letters, 121(8):081303, 2018.
* (30) M. Assanioussi, A. Dapor, K. Liegener, and T. Pawlowski, Emergent de Sitter epoch of the Loop Quantum Cosmos: a detailed analysis, Phys. Rev. D 100, 084003 (2019).
* (31) T. Thiemann, Quantum Spin Dynamics (QSD), Class. Quantum Grav. 15, 839 (1998).
* (32) M. Domagala, J. Lewandowski, Black hole entropy from Quantum Geometry, Class. Quant. Grav. 21 5233 (2004).
* (33) A. Ashtekar, Loop quantum cosmology: An overview, Gen. Rel. Grav. 41, 707 (2009).
* (34) B. Li, P. Singh and A. Wang, Towards cosmological dynamics from loop quantum gravity, Phys. Rev. D 97, 084029 (2018).
* (35) A. Dapor, K. Liegener, Cosmological Effective Hamiltonian from full Loop Quantum Gravity Dynamics, Phys. Lett. B785, 506 (2018),
* (36) M. Han and H. Liu, Effective dynamics from coherent state path integral of full loop quantum gravity, Phys. Rev. D 101, 046003 (2020).
# VML-MOC: Segmenting a multiply oriented and curved handwritten text line
dataset
Berat Kurar Barakat, Rafi Cohen, Jihad El-Sana
Department of Computer Science, Ben-Gurion University
berat, rafico<EMAIL_ADDRESS>

Irina Rabaev
Software Engineering Department, Shamoon College of Engineering
<EMAIL_ADDRESS>
###### Abstract
This paper presents a natural and highly complex dataset of handwritten
documents with multiply oriented and curved text lines, namely the VML-MOC
dataset. These text lines were written as remarks on the page margins by
different writers over the years. They appear at different locations within
the orientations that range between $0^{\circ}$ and $180^{\circ}$ or as
curvilinear forms. We evaluate a multi-oriented Gaussian based method to
segment these handwritten text lines that are skewed or curved in any
orientation. It achieves a mean pixel Intersection over Union score of
$80.96\%$ on the test documents. The results are compared with the results of
a single-oriented Gaussian based text line segmentation method.
## I Introduction
Handwritten document image recognition has several processing phases,
including text line segmentation. The output of the text line segmentation
phase is commonly used for subsequent word and character recognition. Therefore,
a robust text line segmentation is crucial for successful handwriting recognition.
Driven by this importance, text line segmentation has been extensively studied
in recent years. Most existing methods [1, 2, 3, 4, 5] are designed with the
assumption of horizontal or near-horizontal text lines. Consequently, there is
still a large gap when segmentation is applied to text lines with arbitrary
orientations.
The challenge with the segmentation of skewed and curved text lines comes with
the lack of a natural benchmark dataset for testing and comparing algorithms
in realistic scenarios. There are works on slightly skewed handwritten text
lines [6, 7] and on curved printed text lines [8, 9, 10, 11]. However, their
datasets are either synthetic, only slightly skewed, or not publicly available.
We present a natural handwritten benchmark dataset, VML-MOC, for heavily
skewed and curved text lines (Figure 1). These text lines are side notes added
by scholars over the years on the page margins, each time with a different
orientation and sometimes in an extremely curvy form due to space constraints.
The dataset consists of 30 document images which are taken from multiple
manuscripts and contains various kinds of skewed and curved text lines.
Figure 1: Sample images from the VML-MOC dataset with colored visualization of
their bounding polygon labels.
We evaluate a multi-oriented Gaussian based method and a single-oriented
Gaussian based method. The multi-oriented Gaussian based method was previously
proposed as part of a framework for simplifying the reading of historical
manuscripts [12]. This paper investigates the details of this method further,
reports its results on the proposed benchmark dataset, and compares them with
the results of a single-oriented Gaussian based method [1].
## II Related work
Text line segmentation is a prior step for various algorithms, such as
indexing, word spotting and OCR. The vast majority of procedures for text line
extraction are designed to process horizontal or straight lines. Hence, they
are unsuitable for scenarios where the text exhibits multi-skewed, multi-directed
and highly curled lines. Few methods address text line segmentation of warped
and multi-skewed lines, which we can divide into two broad categories: whether
a method requires a learning process or not.
Learning-free approaches are mostly based on projection profiles, Gabor
Transform, active contours, and features specific to application.
Basu et al. [6] assumed hypothetical flows of water, from both left and right
sides of the image boundary, which face obstruction from characters of the
text line. The stripes of areas left dry at the end of the process represent
text lines. Roy et al. [11] utilized water reservoir-based background
information to identify text lines in printed documents. Reservoirs are based
on connected component cavities, and are used to estimate skews of line-parts.
Bukhari et al.[8, 13, 14, 15] presented active contour models for segmenting
warped textlines from camera-captured printed documents. The snakes are
initialized over each connected component and, following the deformation,
joined together to form text lines. In [16, 10] the active contour model was
adapted to handwritten documents. Ouwayed and Belaïd [17] adopted an active
contour approach to estimate a mesh over the document image, where each
mesh cell is designed to contain parts of a few lines. Then, the skew in each mesh is
detected using projection profiles. Morillot et al. [18] used a sliding window
to estimate the lower baseline position for each image column, followed by a
vertical shift correction. Boubaker et al. [19] approximated baselines with
piece-wise linear curves, based on a language specific features. Herzog et al.
[20] and Asi et al. [21] adopted Gabor Transform for identifying multi-
oriented text blocks in handwritten documents, where lines of each text block
share homogeneous orientations.
Recently, methods inspired by machine learning have proven to be efficient for
text line extraction. Zhang et al. [22] presented a framework for multi-
oriented text detection in natural images, where they integrated semantic
labeling by Fully Convolutional Networks. The method assumes that characters
from each text line are in the arrangement of straight or near-straight line.
Bukhari et al. [23] used machine learning to identify text areas of different
orientations in Arabic manuscripts. A number of features are extracted from
connected components, and fed into a multi-layer perceptron classifier.
Although text line segmentation has been extensively studied during the last
decades, it remains a challenging problem for documents with a complex layout
such as those presented in Figure 1.
## III VML-MOC dataset
Figure 2: Sample patches from document images of the VML-MOC dataset. There
are text lines with a skew range of $[0^{\circ},180^{\circ}]$ and with various
arc shapes.
VML-MOC (Visual Media Lab - Multiply Oriented and Curved) dataset is a
collection of 30 pages selected from several manuscripts. Some of these
manuscripts are from a private library in the old city of Jerusalem, and
others are from the Islamic manuscript digitization project of Leipzig
University Library. Since prospective algorithms can be learning based, we
provide an official dataset split to evaluate different models under the same
conditions. Randomly, 20 pages were selected for the train set and 10 pages
for the test set. The images with their corresponding ground truth files are
publicly available at https://www.cs.bgu.ac.il/~berat/data/moc_dataset.zip.
The VML-MOC document images contain only side notes, which are binarized
using the algorithm from [24]. Hence, researchers can focus solely on the
extraction of multiply oriented and curved text lines, without dealing with
the challenges of page segmentation, the heterogeneity of side text and main
text areas, and binarization defects. Variance lies mostly in the orientation and
curvature of the text lines. The dataset contains text lines with a skew range
of $[0^{\circ},180^{\circ}]$ and with all possible arc shapes. Figure 2 shows
sample patches from document images of the VML-MOC dataset.
For annotation we used Aletheia [25], a semi-automated ground truthing system.
The ground truth is provided in three forms: raw pixel labeling, DIVA pixel
labeling and PAGE [26] xml file.
### III-A Raw pixel labeling
Raw pixel labeling classifies each pixel as a part of a text line or
background. It is a matrix of non-negative integers of the same size as the
document image. Background pixels are labeled as $0$. Pixels of the first text
line are labeled as $1$, pixels of the second text line are labeled as $2$ and
so on. Figure 3 shows colored visualization of raw pixel labeling.
Figure 3: Colored visualization of raw pixel labeling. All the pixels of a
text line are assigned the same non-negative integer.
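A minimal sketch (illustrative, not part of the released tooling) of how such a label matrix can be split into per-line binary masks:

```python
import numpy as np

def split_lines(labels):
    """Split a raw pixel-label matrix into one boolean mask per text line.

    labels: H x W array of non-negative ints, where 0 marks background and
    k >= 1 marks pixels of the k-th text line.
    """
    return {k: labels == k for k in range(1, int(labels.max()) + 1)}

# Toy 3x4 "document" with two text lines on a zero background.
labels = np.array([[0, 1, 1, 0],
                   [0, 0, 0, 0],
                   [2, 2, 0, 0]])
masks = split_lines(labels)
print(sorted(masks))        # [1, 2]
print(int(masks[1].sum()))  # 2
```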
### III-B PAGE xml file
PAGE xml file contains a bounding polygon for every text line in a document
image. Bounding polygons were extracted using the raw pixel labeling as
follows: Pixels of a text line were considered as a set of points, and a
concave hull that envelops these points was computed and regarded as the
bounding polygon of this text line. Figure 1 shows colored visualization of
bounding polygons in PAGE xml file.
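The paper computes a concave hull; as a simpler self-contained stand-in (not the authors' implementation), the convex hull of a pixel point set can be computed with Andrew's monotone chain:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull, returned in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# The interior point (1, 1) is dropped; only the four corners remain.
print(convex_hull([(0, 0), (0, 2), (2, 0), (2, 2), (1, 1)]))
```

A concave (alpha-shape) hull hugs the text line more tightly, which is why the dataset uses it for bounding polygons.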
### III-C DIVA pixel labeling
DIVA pixel labeling is provided to be used with the ICDAR2017 competition line
segmentation evaluator [27]. It is a matrix of non-negative integers of the
same size as the document image, and assigns a label for every pixel in the
document image. It distinguishes text line pixels, background pixels and
boundary pixels. Boundary pixels are the pixels inside a bounding polygon of a
text line that do not belong to the foreground (Figure 4).
To prepare DIVA pixel labeling, we first overlaid the binarized document image
with the polygons given by the PAGE xml file. The foreground pixels within the
polygons are encoded as text line using the color code $(0,0,1)$ in RGB. The
background pixels within polygons are encoded as boundary using the color code
$(128,0,0)$ in RGB. All the pixels out of polygons are encoded as background
using the color code $(0,0,0)$ in RGB. Figure 4 illustrates these encodings
and corresponding color codes.
Figure 4: DIVA pixel labeling encodes the pixels outside the polygons into
$(0,0,0)$, the background pixels within the polygons into $(128,0,0)$, and the
foreground pixels within the polygons into $(0,0,1)$.
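A minimal sketch of this encoding (illustrative; the competition tooling should be used for official evaluation):

```python
import numpy as np

TEXT = (0, 0, 1)        # foreground pixels inside a bounding polygon
BOUNDARY = (128, 0, 0)  # background pixels inside a bounding polygon
BACKGROUND = (0, 0, 0)  # everything outside the polygons

def diva_encode(foreground, inside_polygon):
    """Combine a binarized foreground mask and polygon membership into RGB labels."""
    out = np.zeros(foreground.shape + (3,), dtype=np.uint8)  # background default
    out[inside_polygon & ~foreground] = BOUNDARY
    out[inside_polygon & foreground] = TEXT
    return out

fg = np.array([[1, 0], [0, 0]], dtype=bool)
poly = np.array([[1, 1], [0, 0]], dtype=bool)
rgb = diva_encode(fg, poly)
print(rgb[0, 0], rgb[0, 1], rgb[1, 0])  # text, boundary, background codes
```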
## IV Method
The method starts with text line enhancement by convolving the image with
second derivative of multi-oriented and multi-scaled Gaussians. The enhanced
image is then binarized by the Niblack algorithm. The binarization output contains
blob lines hovering over the text lines. It may also contain some false
ligature blobs caused by the multiple orientations of Gaussian. Therefore, the
blob lines are classified as valid or invalid, based on how well a blob line
can be approximated by piecewise linear approximation. The invalid blob lines
are then morphologically skeletonized and decomposed at bifurcation points of
the skeleton. After the decomposition, energy minimization removes false
ligature blobs. Removal of false ligatures leaves some broken blob lines.
These broken blob lines are merged using a Minimum Spanning Tree (MST). In the
final stage, the connected components of text lines are assigned to blob lines
using energy minimization.
In the rest of this section we further study each of the above steps.
### IV-A Text line enhancement and binarization
The pixels in an image can be regarded as two dimensional random variables
generated by an unknown probability distribution function. Usually pixels of
text lines have smaller intensity values than those in the rest of the image.
Therefore, convolution of a text line with second derivative of an anisotropic
Gaussian elongated along the text line direction generates ridges over the
text line areas. Two issues arise with the VML-MOC dataset: 1) text line
height varies due to ascenders and descenders, and 2) text line direction varies
due to multiple orientations or curvatures.
Figure 5: (a) An input patch from VML-MOC. (b) Enhanced text lines resulted
from applying the multi-oriented and multi-scaled Gaussian filter bank. (c)
Final blob lines after Niblack binarization of enhanced text lines.
To deal with these issues, we generated a filter bank using second derivative
of anisotropic Gaussian. This bank contains all possible combinations of
orientations within the range of $[0^{\circ},180^{\circ}]$ and scales within
the range of $[\mu/2,(\mu+\sigma/2)/2]$, where $\mu$ and $\sigma$ are the
average and standard deviation of the heights of connected components in the
image. We applied this filter bank and considered the optimal scale and the
optimal orientation for each pixel. Therefore each pixel returns the maximum
possible response (Figure 5b) for it, using this filter bank. Enhanced text
lines were then binarized by Niblack algorithm, to get the final blob lines
that hovers over the text lines (Figure 5c).
### IV-B False ligature removal
Blob lines resulted from the binarization phase might contain false ligature
blobs (Figure 5c). To remove these ligatures we first classified blob lines as
valid or invalid. For each blob line, principal component analysis was applied
to its pixels. Then the blob line was horizontally aligned via the rotation
transformation matrix in Equation (1).
$\begin{bmatrix}\cos(-\theta)&-\sin(-\theta)\\ \sin(-\theta)&\cos(-\theta)\end{bmatrix}$ (1)
where $\theta$ is the reference angle of the first principal component.
We fitted least-squares linear splines with $20$ knots to the horizontally
aligned set of points (Figure 6a). For each spline segment on a blob line, the
fitting score is the 1-norm between the linear fit and the blob line points in
that segment. Finally, we considered a blob line valid if its maximum fitting
score is less than $80\%$ of the maximum filter scale (Figure 6b).
Invalid blob lines (Figure 6c) were skeletonized and decomposed at their
bifurcation points (Figure 6d).
False ligatures in the set of decomposed blob lines were removed by energy
minimization. To do this, every blob line in the decomposed set was assigned a
label cost (Figure 6e) that describes how much its orientation deviates from
the dominant orientation within a small radius around it. This radius equals
$18$ times the ratio of the total area of blob lines to the total perimeter of
blob lines in the image. The dominant orientation ($\theta_{hist}$) within this
radius is the peak value in the histogram of orientation angles of the filters
that gave the highest response at the pixels within this radius. The blob line
orientation ($\theta_{pca}$) is the slope of the first principal component of
the blob line's pixels. For each blob line, the label cost is computed using
Equation (2):
$h_{\ell}=\exp(\gamma\cdot(1-|\theta_{hist}-\cos(\theta_{pca})|))$ (2)
where $\gamma$ is a constant, set to $50$. Finally, these label costs are fed
into the energy minimization function to remove false ligature blobs
(Figure 6f).
Figure 6: (a) Least-squares linear splines fitted to the blob lines; each
spline in a blob line is shown in a different color. Splines were fitted to the
horizontally aligned points; this figure shows re-rotated data for
visualization purposes. (b) Valid blob lines. (c) Invalid blob lines; note the
false ligatures on the invalid blob lines. (d) Invalid blob lines decomposed at
their bifurcation points. (e) Each decomposed invalid blob line is assigned a
label cost. (f) False ligature blobs with high label cost are removed using
energy minimization.
### IV-C Merging broken blob lines
We merged the broken blob lines using a minimum spanning tree (MST) on an
undirected weighted graph, $G=(V,E)$. The set of vertices, $V$, is composed of
all the end-points of the blob lines, in addition to a root vertex that is
connected to all the end-points. The set of edges is $E=E_{1}\cup E_{2}\cup
E_{3}$:
1. $E_{1}$ is the set of edges connecting the two end-point vertices of each
blob line; their weight is set to $0$.
2. $E_{2}$ is the set of edges between the root and all the end-point
vertices; their weight is the normalized number of foreground pixels
overlapping with the blob line.
3. $E_{3}$ consists of edges between end-points that belong to different blob
lines. Their weight is the local linearity measure defined below. In $E_{3}$,
for every end-point we keep only the edges with the best linearity measure.
#### Local linearity measure
The local linearity measure quantifies how linear the connection is between
two end-points, $u$ and $v$, of two blob lines. For each end-point, it chooses
a nearby point on its blob line; these nearby points are $s$ and $t$,
respectively (Figure 7a). The local linearity measure is then computed as in
Equation (3):
$w(u,v)=\exp\left(\gamma\left(\frac{||s-u||+||u-v||+||v-t||}{||s-t||}-1\right)\right)$ (3)
where $\gamma$ is a constant and $||u-v||$ is the Euclidean distance between
$u$ and $v$.
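Equation (3) can be sketched directly (pure Python; `gamma=1.0` is an arbitrary choice here, as the paper does not fix its value for this measure):

```python
import math

def local_linearity(s, u, v, t, gamma=1.0):
    """Weight of the edge (u, v) between end-points of two blob lines,
    with s and t nearby points on their respective lines (Eq. 3).
    Equals 1 when s, u, v, t are collinear in that order, grows otherwise."""
    d = math.dist  # Euclidean distance between two points
    return math.exp(gamma * ((d(s, u) + d(u, v) + d(v, t)) / d(s, t) - 1.0))
```

Smaller weights thus favour MST edges that continue a blob line smoothly.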
Figure 7: (a) Local linearity measure. (b) Merged blob lines resulting from
applying the MST.
### IV-D Labeling connected components
This phase uses energy minimization [28] to assign connected components to
text lines. Let $\mathcal{L}$ be the set of blob lines and $\mathcal{C}$ the
set of connected components in the image. Energy minimization finds a labeling
$f$ that assigns each component $c\in\mathcal{C}$ a label
$\ell_{c}\in\mathcal{L}$ such that $E(f)$ is minimized:
$E(f)=\sum_{c\in\mathcal{C}}D(c,\ell_{c})+\sum_{\{c,c'\}\in\mathcal{N}}d(c,c')\cdot\delta(\ell_{c}\neq\ell_{c'})+\sum_{\ell\in\mathcal{L}}h_{\ell}$ (4)
The energy function has three terms: a data cost, a smoothness cost, and a
label cost.
1. Data cost: for every $c\in\mathcal{C}$, $D(c,\ell_{c})$ is the Euclidean
distance between the centroid of $c$ and the blob line $\ell_{c}$.
2. Smoothness cost: let $\mathcal{N}$ be the set of nearest component pairs.
For every $\{c,c'\}\in\mathcal{N}$,
$d(c,c')=\exp(-\alpha\cdot d_{e}(c,c'))$, where $d_{e}(c,c')$ is the Euclidean
distance between the centroids of the components $c$ and $c'$, and
$\delta(\ell_{c}\neq\ell_{c'})$ is $1$ if the condition inside the parentheses
holds and $0$ otherwise.
3. Label cost: for every blob line $\ell\in\mathcal{L}$,
$h_{\ell}=\exp(\beta\cdot r_{\ell})$, where $r_{\ell}$ is the normalized number
of foreground pixels overlapping with blob line $\ell$.
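A data-cost-only sketch of the assignment (NumPy assumed; the actual method minimises the full energy (4), e.g. with the graph-cut solver of [28], so this nearest-blob-line rule is only an illustrative approximation):

```python
import numpy as np

def assign_by_data_cost(centroids, blob_lines):
    """Assign each connected component (given by its centroid) to the
    blob line with the nearest pixel, i.e. minimise only the data cost
    D(c, l_c) of Equation (4)."""
    labels = []
    for c in centroids:
        dists = [np.linalg.norm(np.asarray(pts) - c, axis=1).min()
                 for pts in blob_lines]
        labels.append(int(np.argmin(dists)))
    return labels
```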
Figure 8: The resulting pixel labels for text line segmentation of the sample
patch in Figure 5a.
## V Evaluation
We used the ICDAR2017 line segmentation evaluator tool [27] for the
evaluation. This tool is freely available as open source at
https://github.com/DIVA-DIA/DIVA_Line_Segmentation_Evaluator. An open-source
tool has a reviewed source code, which minimizes the risk of erroneous
implementations; it also enables fair comparison of methods, in the same way
that publicly published datasets do.
Evaluation of text line segmentation is based on Intersection over Union (IU).
First, an IU score is computed for each possible pair of Ground Truth (GT)
and Prediction (P) polygons according to Equation (5):
$IU=\frac{IP}{UP}$ (5)
where IP denotes the number of intersecting foreground pixels of the pair of
polygons and UP denotes the number of foreground pixels in the union of the
foreground pixels of the pair. The pairs with maximum IU score are then
selected as the matching pairs of GT and P polygons. Pixel IU and Line IU are
calculated over these matching pairs.
### V-A Pixel IU
Pixel IU is measured at the pixel level. First, for each matching pair, line
TP, line FP, and line FN are computed:
* Line TP is the number of foreground pixels that are correctly detected.
* Line FP is the number of background pixels that are falsely detected as
foreground.
* Line FN is the number of foreground pixels that are not detected by the
method.
Pixel IU is then calculated according to Equation (6):
$\text{Pixel }IU=\frac{TP}{TP+FP+FN}$ (6)
where TP is the global sum of line TPs, FP is the global sum of line FPs, and
FN is the global sum of line FNs.
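A minimal sketch of this computation (NumPy boolean foreground masks assumed; the evaluator tool [27] aggregates line TP/FP/FN globally before applying Equation (6)):

```python
import numpy as np

def line_counts(gt_mask, pred_mask):
    """Line TP/FP/FN for one matching pair of boolean foreground masks."""
    tp = int(np.sum(gt_mask & pred_mask))
    fp = int(np.sum(~gt_mask & pred_mask))
    fn = int(np.sum(gt_mask & ~pred_mask))
    return tp, fp, fn

def pixel_iu(tp, fp, fn):
    """Pixel IU from global sums of line TPs, FPs and FNs (Eq. 6)."""
    return tp / (tp + fp + fn)
```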
### V-B Line IU
Line IU is measured at the line level. First, for each matching pair, line
precision and line recall are computed according to Equations (7) and (8):
$\text{Line precision}=\frac{\text{line }TP}{\text{line }TP+\text{line }FP}$ (7)
$\text{Line recall}=\frac{\text{line }TP}{\text{line }TP+\text{line }FN}$ (8)
Line IU is then calculated according to Equation (9):
$\text{Line }IU=\frac{\text{CL}}{\text{CL}+\text{ML}+\text{EL}}$ (9)
where CL is the number of correct lines, ML is the number of missed lines, and
EL is the number of extra lines. For each matching pair, with a threshold
value of $0.75$:
* A line is correct if both the line precision and the line recall are above
the threshold value.
* A line is missed if the line recall is below the threshold value.
* A line is extra if the line precision is below the threshold value.
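The counting rule above can be sketched as follows (pure Python; note that with these definitions a line can be simultaneously missed and extra):

```python
def line_iu(pairs, threshold=0.75):
    """Line IU (Eq. 9) from (line precision, line recall) of each
    matching pair: correct if both exceed the threshold, missed if the
    recall is below it, extra if the precision is below it."""
    cl = sum(1 for p, r in pairs if p > threshold and r > threshold)
    ml = sum(1 for p, r in pairs if r < threshold)
    el = sum(1 for p, r in pairs if p < threshold)
    return cl / (cl + ml + el)
```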
### V-C Mean pixel IU and mean line IU
For each page, the pixel IU and the line IU are first computed. The mean pixel
IU is then obtained by averaging the pixel IU over all pages of the test set,
and the mean line IU by averaging the line IU over all pages of the test set.
## VI Experiments
We ran experiments on the VML-MOC test set using our method based on the
multi-oriented Gaussian, and a single-oriented Gaussian based method [1]. We
report the mean pixel IU and the mean line IU over the entire test set. The
scores achieved by the two methods are presented in Table I, and some
qualitative results are shown in Figure 9.
TABLE I: Performance percentage of two methods on the VML-MOC test set.
| Method | Mean pixel IU | Mean line IU |
|---|---|---|
| Single-oriented Gaussian [1] | 37.88 | 12.89 |
| Multi-oriented Gaussian | 80.96 | 60.99 |
The results reveal that the multi-oriented Gaussian based method can extract
multi-oriented and curved text lines, in contrast to the single-oriented
Gaussian based method. However, these are baseline results and there is still
room for improvement.
Figure 9: (a) Result from single-oriented Gaussian based method on an example
patch. (b) Result from multi-oriented Gaussian based method on the same patch.
## VII Conclusion and Future Work
This paper presents a multiply oriented and curved handwritten text line
dataset, the VML-MOC dataset. To the best of the authors' knowledge, VML-MOC
is the first publicly available dataset that introduces the problem of
segmenting multiply oriented and curved handwritten text lines. Furthermore,
we evaluated and compared two line extraction methods on this dataset: a
single-oriented Gaussian based method and a multi-oriented Gaussian based
method. The results show that ordinary text line segmentation methods are not
successful on the VML-MOC dataset, and that text line segmentation methods
without a horizontal/straight line assumption have to be developed. An
important direction for future work would be the evaluation of deep learning
based methods.
## Acknowledgment
The authors would like to thank Hamza Barakat for his help in preparing the
dataset. This research was supported in part by the Frankel Center for
Computer Science at Ben-Gurion University of the Negev.
## References
* [1] R. Cohen, I. Dinstein, J. El-Sana, and K. Kedem, “Using scale-space anisotropic smoothing for text line extraction in historical documents,” in _International Conference Image Analysis and Recognition_. Springer, 2014, pp. 349–358.
* [2] B. Moysset, C. Kermorvant, C. Wolf, and J. Louradour, “Paragraph text segmentation into lines with recurrent neural networks,” in _2015 13th International Conference on Document Analysis and Recognition (ICDAR)_. IEEE, 2015, pp. 456–460.
* [3] B. Moysset, C. Kermorvant, and C. Wolf, “Full-page text recognition: Learning where to start and when to stop,” in _2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR)_ , vol. 1. IEEE, 2017, pp. 871–876.
* [4] G. Renton, C. Chatelain, S. Adam, C. Kermorvant, and T. Paquet, “Handwritten text line segmentation using fully convolutional network,” in _2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR)_ , vol. 5. IEEE, 2017, pp. 5–9.
* [5] G. Renton, Y. Soullard, C. Chatelain, S. Adam, C. Kermorvant, and T. Paquet, “Fully convolutional network with dilated convolutions for handwritten text line segmentation,” _International Journal on Document Analysis and Recognition (IJDAR)_ , vol. 21, no. 3, pp. 177–186, 2018.
* [6] S. Basu, C. Chaudhuri, M. Kundu, M. Nasipuri, and D. K. Basu, “Text line extraction from multi-skewed handwritten documents,” _Pattern Recognition_ , vol. 40, no. 6, pp. 1825–1839, 2007.
* [7] N. Ouwayed and A. Belaïd, “A general approach for multi-oriented text line extraction of handwritten document,” _International Journal on Document Analysis and Recognition_ , vol. 14, no. 4, p. 10, 2011. [Online]. Available: https://hal.inria.fr/inria-00635363
* [8] S. Bukhari, F. Shafait, and T. M. Breuel, “Segmentation of curled textlines using active contours,” in _2008 The Eighth IAPR International Workshop on Document Analysis Systems_. IEEE, 2008, pp. 270–277.
* [9] S. Bukhari, F. Shafait, and T. Breuel, “Ridges based curled textline region detection from grayscale camera-captured document images,” in _International Conference on Computer Analysis of Images and Patterns_. Springer, 2009, pp. 173–180.
* [10] S. Bukhari, F. Shafait, and T. M. Breuel, “Text-line extraction using a convolution of isotropic gaussian filter with a set of line filters,” in _2011 International Conference on Document Analysis and Recognition_. IEEE, 2011, pp. 579–583.
* [11] P. P. Roy, U. Pal, and J. Lladós, “Text line extraction in graphical documents using background and foreground information,” _International Journal on Document Analysis and Recognition_ , vol. 15, no. 3, pp. 227–241, 2012.
* [12] A. Asi, R. Cohen, K. Kedem, and J. El-Sana, “Simplifying the reading of historical manuscripts,” in _2015 13th International Conference on Document Analysis and Recognition (ICDAR)_. IEEE, 2015, pp. 826–830.
* [13] S. Bukhari, T. Breuel, and F. Shafait, “Textline information extraction from grayscale camera-captured document images,” in _2009 16th IEEE International Conference on Image Processing (ICIP)_. IEEE, 2009, pp. 2013–2016.
* [14] S. Bukhari, F. Shafait, and T. Breuel, “Performance evaluation of curled textline segmentation algorithms on cbdar 2007 dewarping contest dataset,” in _2010 IEEE International Conference on Image Processing_. IEEE, 2010, pp. 2161–2164.
* [15] S. Bukhari, F. Shafait, and T. Breuel, “Coupled snakelets for curled text-line segmentation from warped document images,” _International Journal on Document Analysis and Recognition (IJDAR)_ , vol. 16, no. 1, pp. 33–53, 2013.
* [16] S. Bukhari, F. Shafait, and T. M. Breuel, “Script-independent handwritten textlines segmentation using active contours,” in _2009 10th International Conference on Document Analysis and Recognition_. IEEE, 2009, pp. 446–450.
* [17] N. Ouwayed and A. Belaïd, “A general approach for multi-oriented text line extraction of handwritten documents,” _International Journal on Document Analysis and Recognition (IJDAR)_ , vol. 15, no. 4, pp. 297–314, 2012.
* [18] A. Kölsch, A. Mishra, S. Varshneya, M. Z. Afzal, and M. Liwicki, “Recognizing challenging handwritten annotations with fully convolutional networks,” in _2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR)_. IEEE, 2018, pp. 25–31.
* [19] H. Boubaker, M. Kherallah, and A. M. Alimi, “New algorithm of straight or curved baseline detection for short arabic handwritten writing,” in _2009 10th International Conference on Document Analysis and Recognition_. IEEE, 2009, pp. 778–782.
* [20] R. Herzog, A. Solth, and B. Neumann, “Text block recognition in multi-oriented handwritten documents,” 2014. [Online]. Available: http://edoc.sub.uni-hamburg.de/informatik/volltexte/2014/207/
* [21] A. Asi, R. Cohen, K. Kedem, J. El-Sana, and I. Dinstein, “A coarse-to-fine approach for layout analysis of ancient manuscripts,” in _2014 14th International Conference on Frontiers in Handwriting Recognition_. IEEE, 2014, pp. 140–145.
* [22] Z. Zhang, C. Zhang, W. Shen, C. Yao, W. Liu, and X. Bai, “Multi-oriented text detection with fully convolutional networks,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 4159–4167.
* [23] S. Bukhari, T. Breuel, A. Asi, and J. El-Sana, “Layout analysis for arabic historical document images using machine learning,” in _2012 International Conference on Frontiers in Handwriting Recognition_. IEEE, 2012, pp. 639–644.
* [24] O. Biller, I. Rabaev, K. Kedem, J. El-Sana _et al._ , “Evolution maps and applications,” _PeerJ Computer Science_ , vol. 2, p. e39, 2016.
* [25] C. Clausner, S. Pletschacher, and A. Antonacopoulos, “Aletheia-an advanced document layout and text ground-truthing system for production environments,” in _2011 International Conference on Document Analysis and Recognition_. IEEE, 2011, pp. 48–52.
* [26] S. Pletschacher and A. Antonacopoulos, “The page (page analysis and ground-truth elements) format framework,” in _2010 20th International Conference on Pattern Recognition_. IEEE, 2010, pp. 257–260.
* [27] F. Simistira, M. Bouillon, M. Seuret, M. Würsch, M. Alberti, R. Ingold, and M. Liwicki, “Icdar2017 competition on layout analysis for challenging medieval manuscripts,” in _2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR)_ , vol. 1. IEEE, 2017, pp. 1361–1370.
* [28] A. Delong, A. Osokin, H. N. Isack, and Y. Boykov, “Fast approximate energy minimization with label costs,” _International journal of computer vision_ , vol. 96, no. 1, pp. 1–27, 2012.
# $\Gamma$-convergence for a class of action functionals induced by gradients
of convex functions
Luigi Ambrosio (Scuola Normale Superiore, Pisa; E-mail: <EMAIL_ADDRESS>), Aymeric Baradat (Institut Camille Jordan, Lyon; E-mail: <EMAIL_ADDRESS>), Yann Brenier (École Normale Supérieure, Paris; E-mail: <EMAIL_ADDRESS>)
###### Abstract
Given a real function $f$, the rate function for the large deviations of the
diffusion process with drift $\nabla f$ given by the Freidlin-Wentzell theorem
coincides with the time integral of the energy dissipation for the gradient
flow associated with $f$. This paper is concerned with the stability, in the
Hilbertian framework, of this common action functional when $f$ varies. More
precisely, we show that if $(f_{h})_{h}$ is uniformly $\lambda$-convex for
some $\lambda\in\mathbb{R}$ and converges towards $f$ in the sense of Mosco
convergence, then the related functionals $\Gamma$-converge in the strong
topology of curves.
## 1 Introduction
Action functionals of the form
$I_{f}(\gamma):=\int_{0}^{1}\Big\{|\dot{\gamma}(t)|^{2}+|\nabla f|^{2}(\gamma(t))\Big\}\,\mathrm{d}t,$
and the closely related ones (since they differ by the null Lagrangian
$2f(\gamma(1))-2f(\gamma(0))$)
$\int_{0}^{1}|\dot{\gamma}(t)-\nabla f(\gamma(t))|^{2}\mathrm{d}t,$ (1)
appear in many areas of Mathematics, for instance in the Freidlin-Wentzell
theory of large deviations for the SDE $\mathrm{d}X_{t}^{\epsilon}=\nabla
f(X^{\epsilon}_{t})\mathrm{d}t+\sqrt{\epsilon}\mathrm{d}B_{t}$ (see for
instance [9]) or in the variational theory of gradient flows pioneered by De
Giorgi, where they correspond to the integral form of the energy dissipation
(see [4]). In this paper, we investigate the stability of the action
functionals $I_{f}$ with respect to $\Gamma$-convergence of the functions $f$
(actually with respect to the stronger notion of Mosco convergence, see
below). More precisely, we are concerned with the case when the functions
under consideration are $\lambda$-convex and defined in a Hilbert space $H$.
In this case, the functional $I_{f}$ is well defined if we understand $\nabla
f(x)$ as the element with minimal norm in the subdifferential $\partial f(x)$:
this choice, very natural in the theory of gradient flows, grants the joint
lower semicontinuity property of $(x,f)\mapsto|\nabla f|(x)$ that turns out to
be very useful when proving stability of gradient flows, see [12], [5] and the
more recent papers [10], [11] where emphasis is put on the convergence of the
dissipation functionals. In more abstract terms, we are dealing with
autonomous Lagrangians $L(x,p)=|p|^{2}+|\nabla f|^{2}(x)$ that are unbounded
and very discontinuous with respect to $x$, and this is a source of difficulty
in the construction of recovery sequences, in the proof of the $\Gamma$-limsup
inequality.
Our interest in this problem comes from [3], where we dealt with the
derivation of the discrete Monge-Ampère equation from the stochastic model of
a Brownian point cloud, using large deviations and Freidlin-Wentzell theory,
along the lines of [6]. In that case $H=\mathbb{R}^{Nd}$ was finite
dimensional,
$f(x):=\max_{\sigma\in\mathfrak{S}_{N}}\langle x,A^{\sigma}\rangle,$
(with $A=(A_{1},\ldots,A_{N})\in\mathbb{R}^{Nd}$ given and
$A^{\sigma}=(A_{\sigma(1)},\ldots,A_{\sigma(N)})$ for all
$\sigma\in\mathfrak{S}_{N}$, the set of all permutations of $\llbracket
1,N\rrbracket$), and the approximating functions $f_{\epsilon}$ were given by
$f_{\epsilon}(t,x)=\epsilon t\log\biggl[\frac{1}{N!}\sum_{\sigma\in\mathfrak{S}_{N}}\exp\biggl(\frac{\langle x,A^{\sigma}\rangle}{\epsilon t}\biggr)\biggr].$
In that case, our proof used some simplifications due to finite
dimensionality, and a uniform Lipschitz condition. In this paper, building
upon some ideas in [3], we provide the convergence result in a more general
and natural context. For the sake of simplicity, unlike [3], we consider only
the autonomous case. However it should be possible to adapt our proof to the
case when time-dependent $\lambda$-convex functions $f(t,\cdot)$ are
considered, under additional regularity assumptions with respect to $t$, as in
[3].
In the infinite-dimensional case, Mosco convergence (see Definition 4.1) is
stronger and more appropriate than $\Gamma$-convergence, since it ensures
convergence of the resolvent operators (under equi-coercivity assumptions,
the two notions are equivalent). Moreover, since in the infinite-dimensional
case the finiteness domains of the functions can be quite different, the
addition of the endpoint condition is an additional source of difficulty,
which we handle with an interpolation lemma closely related to the structure
of monotone operators; see Lemma 3.1.
Defining the functionals $\Theta_{f,x_{0},x_{1}}:C([0,1];H)\to[0,\infty]$ by
$\Theta_{f,x_{0},x_{1}}(\gamma):=\begin{cases}I_{f}(\gamma)&\text{if $\gamma\in AC([0,1];H)$, $\gamma(0)=x_{0}$, $\gamma(1)=x_{1}$;}\\ +\infty&\text{otherwise,}\end{cases}$ (2)
our main result reads as follows:
###### Theorem 1.1.
If $(f_{h})_{h}$ is uniformly $\lambda$-convex for some
$\lambda\in\mathbb{R}$, if $f_{h}\to f$ w.r.t. Mosco convergence, and if
$\lim_{h\to\infty}x_{h,i}=x_{i},\qquad\qquad\sup_{h}|\nabla
f_{h}|(x_{h,i})<\infty,\qquad i=0,\,1,$
then $\Theta_{f_{h},x_{h,0},x_{h,1}}$ $\Gamma$-converge to
$\Theta_{f,x_{0},x_{1}}$ in the $C([0,1];H)$ topology.
As a byproduct, under an additional equi-coercivity assumption our theorem
grants convergence of minimal values to minimal values and of minimizers to
minimizers. Obviously the condition $x_{h,i}\to x_{i}$ is necessary, and we
believe that at least some (possibly more refined) bounds on the gradients at
the endpoints are necessary as well. If we also ask that the $x_{h,i}$ are
recovery sequences, i.e. $f_{h}(x_{h,i})\to f(x_{i})$, then the result can
also be read in terms of the functionals (1).
As a final comment, it would be interesting to investigate this type of
convergence results also in a non-Hilbertian context, as it happened for the
theory of gradient flows. For instance, a natural context would be the space
of probability measures with finite quadratic moment. Functionals of this
form, where $f$ is a constant multiple of the logarithmic entropy, appear in
the so-called entropic regularization of the Wasserstein distance (see [8] and
the references therein).
Acknowledgements. We dedicate this paper to Edoardo Vesentini, a great
mathematician and a former President of the Accademia dei Lincei. As Director
of the Scuola Normale, he has been the pioneer of many projects that shaped
the Scuola Normale for many years to come. The first author acknowledges the
support of the PRIN 2017 project “Gradient flows, Optimal Transport and Metric
Measure Structures”. This work was prepared during the stay of the second
author at the Max Planck Institute for Mathematics in the Sciences in Leipzig,
that he would like to thank for its hospitality.
## 2 Preliminaries
Let $H$ be a Hilbert space. For a function $f:H\to(-\infty,\infty]$ we denote
by $D(f)$ the finiteness domain of $f$. We say that $f$ is $\lambda$-convex if
$x\mapsto f(x)-\tfrac{\lambda}{2}|x|^{2}$ is convex. It is easily seen that
$\lambda$-convex functions satisfy the perturbed convexity inequality
$f\bigl((1-t)x+ty\bigr)\leq(1-t)f(x)+tf(y)-\frac{\lambda}{2}t(1-t)|x-y|^{2},\qquad t\in[0,1].$
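For completeness, this follows from the ordinary convexity of $g=f-\tfrac{\lambda}{2}|\cdot|^{2}$ together with the Hilbertian identity

```latex
(1-t)|x|^{2}+t|y|^{2}-\bigl|(1-t)x+ty\bigr|^{2}=t(1-t)|x-y|^{2},
```

obtained by expanding the square of the convex combination.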
We denote by $\partial f(x)$ the Gateaux subdifferential of $f$ at $x\in
D(f)$, namely the set
$\partial f(x):=\left\{p\in H:\ \liminf_{t\to 0^{+}}\frac{f(x+th)-f(x)}{t}\geq\langle h,p\rangle\,\,\,\forall h\in H\right\}.$
It is a closed convex set, possibly empty. We denote by $D(\partial f)$ the
domain of the subdifferential.
In the case when $f$ is $\lambda$-convex, the monotonicity of difference
quotients gives the equivalent, non-asymptotic definition:
$\partial f(x)=\left\{p\in H:\ f(y)\geq f(x)+\langle y-x,p\rangle+\frac{\lambda}{2}|y-x|^{2}\,\,\,\forall y\in H\right\}.$ (3)
For any $x\in D(\partial f)$ we define $\nabla f(x)$ as the element with
minimal norm of $\partial f(x)$. We agree that $|\nabla f(x)|=\infty$ if
either $x\notin D(f)$ or $x\in D(f)$ and $\partial f(x)=\emptyset$. For
$\lambda$-convex functions, relying on (3), it can be easily proved that
$\partial f(x)$ is not empty if and only if
$\sup_{y\neq x}\frac{\bigl[f(x)-f(y)+\frac{\lambda}{2}|x-y|^{2}\bigr]^{+}}{|x-y|}<\infty$ (4)
and that $|\nabla f|(x)$ is precisely equal to the supremum (see for instance
Theorem 2.4.9 in [4]).
For $\tau>0$ we denote by $f_{\tau}$ the regularized function
$f_{\tau}(x):=\min_{y\in H}f(y)+\frac{|y-x|^{2}}{2\tau}$ (5)
and we denote by $J_{\tau}=(\operatorname{Id}+\tau\partial f)^{-1}:H\to
D(\partial f)$ the so-called resolvent map associating to $x$ the minimizer
$y$ in (5). When $f$ is proper, $\lambda$-convex and lower semicontinuous,
existence and uniqueness of $J_{\tau}(x)$ follow by the strict convexity of
$y\mapsto f(y)+|y-x|^{2}/(2\tau)$, as soon as $\tau<-1/\lambda$ when
$\lambda<0$, and for all $\tau>0$ otherwise (we shall call admissible these
values of $\tau$). We also use the notation $J_{f,\tau}$ to emphasize the
dependence on $f$.
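A brute-force one-dimensional sketch of (5) and of the resolvent (NumPy assumed; the grid search is purely illustrative, not a practical algorithm):

```python
import numpy as np

def moreau(f, x, tau, grid):
    """Moreau-Yosida regularisation f_tau(x) = min_y f(y) + |y-x|^2/(2 tau)
    and the resolvent J_tau(x) = argmin, by grid search (Eq. 5)."""
    vals = f(grid) + (grid - x) ** 2 / (2.0 * tau)
    i = int(np.argmin(vals))
    return vals[i], grid[i]
```

For $f(y)=|y|$ (convex, so every $\tau>0$ is admissible) one recovers the soft-thresholding resolvent $J_{\tau}(x)=x-\tau\operatorname{sign}(x)$ for $|x|>\tau$, consistent with (6).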
Now we recall a few basic and well-known facts (see for instance [7], [4]),
providing for the reader’s convenience sketchy proofs.
###### Theorem 2.1.
Assume that $f:H\to(-\infty,\infty]$ is proper, $\lambda$-convex and lower
semicontinuous. For all admissible $\tau>0$ one has:
1. (i)
$f_{\tau}$ is differentiable everywhere, and for all $x\in H$,
$\nabla f_{\tau}(x)=\frac{x-J_{\tau}(x)}{\tau}\in\partial f(J_{\tau}(x)).$ (6)
2. (ii)
$J_{\tau}$ is $(1+\lambda\tau)^{-1}$-Lipschitz, and $f_{\tau}\in C^{1,1}(H)$
with ${\rm Lip}(\nabla f_{\tau})\leq 3/\tau$ as soon as there holds
$(1+\tau\lambda)^{-1}\leq 2$.
3. (iii)
For all $x\in D(\partial f)$,
$\nabla f_{\tau}(x+\tau\nabla f(x))=\nabla f(x).$ (7)
4. (iv)
The following monotonicity properties hold for all $x\in H$:
$|\nabla f|(J_{\tau}(x))\leq|\nabla
f_{\tau}|(x)=\frac{|x-J_{\tau}(x)|}{\tau}\leq\frac{1}{1+\lambda\tau}|\nabla
f|(x).$ (8)
###### Proof.
The inclusion in (6) follows from performing variations around $J_{\tau}(x)$
in (5).
Before proving the equality in (6), let us prove the Lipschitz property for
$J_{\tau}$ given in (ii). Recall that the convexity of
$g=f-\frac{\lambda}{2}|\cdot|^{2}$ yields that $\partial f$ is
$\lambda$-monotone, namely
$\langle\xi-\eta,a-b\rangle\geq\lambda|a-b|^{2}\qquad\forall\xi\in\partial
f(a),\,\,\eta\in\partial f(b).$
Given $x$ and $y$, we apply this property to $a:=J_{\tau}(x)$,
$b:=J_{\tau}(y)$, $\xi:=(x-J_{\tau}(x))/\tau$ and
$\eta:=(y-J_{\tau}(y))/\tau$. (Thanks to the inclusion in (6), we have
$\xi\in\partial f(a)$ and $\eta\in\partial f(b)$.) By rearranging the terms,
we get
$\langle
x-y,J_{\tau}(x)-J_{\tau}(y)\rangle\geq(1+\lambda\tau)|J_{\tau}(x)-J_{\tau}(y)|^{2}.$
Hence, by the Cauchy-Schwarz inequality, $J_{\tau}$ is
$(1+\lambda\tau)^{-1}$-Lipschitz.
Let us go back to proving the equality in (6). For any $x$ and $z$, one has
(using $y=J_{\tau}(x)$ as an admissible competitor in the definition of
$f_{\tau}(x+z)$)
$f_{\tau}(x+z)-f_{\tau}(x)\leq\frac{|J_{\tau}(x)-(x+z)|^{2}}{2\tau}-\frac{|J_{\tau}(x)-x|^{2}}{2\tau}=\left\langle
z,\frac{x-J_{\tau}(x)}{\tau}\right\rangle+\frac{|z|^{2}}{2\tau}$
and, reversing the roles of $x$ and $x+z$,
$f_{\tau}(x)-f_{\tau}(x+z)\leq\left\langle-z,\frac{x+z-J_{\tau}(x+z)}{\tau}\right\rangle+\frac{|z|^{2}}{2\tau}.$
These two inequalities, together with the continuity of $J_{\tau}$, imply that
$f_{\tau}$ is differentiable at $x$ and provide the equality in (6), and hence
the one in (8). The announced Lipschitz property for $\nabla f_{\tau}$ follows
directly from this identity and the Lipschitz property for $J_{\tau}$.
To get (7), it suffices to remark that for all $x\in D(\partial f)$, $0$
belongs to the subdifferential of the strictly convex function
$y\mapsto f(y)+\frac{|x+\tau\nabla f(x)-y|^{2}}{2\tau}$
at $y=x$. Hence, $x$ is the minimizer of this function, and
$J_{\tau}(x+\tau\nabla f(x))=x$. Then, (7) follows from the equality in (6).
The first inequality in (8) follows from the inclusion in (6). In order to
prove the second inequality, we perform a variation along the affine curve
joining $x$ to $J_{\tau}(x)$, namely, $\gamma(t):=(1-t)x+tJ_{\tau}(x)$. Since
$\displaystyle f(J_{\tau}(x))+\frac{1}{2\tau}|x-J_{\tau}(x)|^{2}\leq f(\gamma(t))+\frac{1}{2\tau}|x-\gamma(t)|^{2}\leq(1-t)f(x)+tf(J_{\tau}(x))+\frac{t}{2\tau}\bigl(t-\lambda\tau(1-t)\bigr)|x-J_{\tau}(x)|^{2}$
for all $t\in[0,1]$, taking the left derivative at $t=1$ gives
$\left(\frac{\lambda}{2}+\frac{1}{\tau}\right)|x-J_{\tau}(x)|^{2}\leq
f(x)-f(J_{\tau}(x)),$
so that the representation formula (4) for $|\nabla f|(x)$ gives
$\left(\frac{\lambda}{2}+\frac{1}{\tau}\right)|x-J_{\tau}(x)|^{2}\leq|\nabla f|(x)\,|x-J_{\tau}(x)|-\frac{\lambda}{2}|x-J_{\tau}(x)|^{2}.$
By rearranging the terms, this leads to the second inequality in (8). ∎
Another remarkable property of $|\nabla f|$, for $f$ $\lambda$-convex and
lower semicontinuous, is the upper gradient property, namely,
$f(\gamma(0)),f(\gamma(\delta))<\infty\quad\text{and}\quad|f(\gamma(\delta))-f(\gamma(0))|\leq\int_{0}^{\delta}|\nabla
f|(\gamma(t))|\dot{\gamma}(t)|\mathrm{d}t$
for any $\delta>0$ and any absolutely continuous $\gamma:[0,\delta]\to H$
(with the convention $0\times\infty=0$), whenever $\gamma$ is not constant and
the integral in the right hand side is finite (see for instance Corollary
2.4.10 in [4] for the proof).
## 3 A class of action functionals
For $\delta>0$ and $f:H\to(-\infty,\infty]$ proper, $\lambda$-convex and lower
semicontinuous, we consider the autonomous functionals
$I_{f}^{\delta}:C([0,\delta];H)\to[0,\infty]$ defined by
$I^{\delta}_{f}(\gamma):=\int_{0}^{\delta}\Big\{|\dot{\gamma}|^{2}+|\nabla f|^{2}(\gamma)\Big\}\,\mathrm{d}t,$
set to $+\infty$ on $C([0,\delta];H)\setminus AC([0,\delta];H)$. Notice also
that $I_{f}^{\delta}(\gamma)<\infty$ implies $\gamma(t)\in D(\partial f)$ for
a.e. $t\in(0,\delta)$.
The characterization (4) ensures the lower semicontinuity of $|\nabla f|$;
hence, under a coercivity assumption of the form $\{f\leq t\}$ compact in $H$
for all $t\in\mathbb{R}$, the infimum
$\Gamma_{\delta}(x_{0},x_{\delta}):=\inf\left\{I^{\delta}_{f}(\gamma):\ \gamma(0)=x_{0},\,\,\gamma(\delta)=x_{\delta}\right\}\qquad x_{0},\,x_{\delta}\in H$ (9)
is always attained whenever finite.
Also, by the Young inequality and the upper gradient property of $|\nabla f|$,
one has that $I_{f}^{\delta}(\gamma)<\infty$ implies
$\gamma(0),\,\gamma(\delta)\in D(f)$ and
$2|f(\gamma(\delta))-f(\gamma(0))|\leq I_{f}^{\delta}(\gamma)$. The same
argument shows that we may add to $I_{f}^{\delta}$ a null Lagrangian. Namely,
as done in [3], we can consider the functionals
$\int_{0}^{\delta}|\dot{\gamma}-\nabla f(\gamma)|^{2}\mathrm{d}t$
which differ from $I^{\delta}_{f}$ precisely by the term
$2f(\gamma(\delta))-2f(\gamma(0))$, whenever $\gamma$ is admissible in (9)
with $I_{f}^{\delta}(\gamma)<\infty$.
Because of the lack of continuity of $x\mapsto\nabla f(x)$, very little is
known in general about the regularity of minimizers in (9), even when $H$ is
finite-dimensional. However, one may use the fact that $I^{\delta}_{f}$ is
autonomous to perform variations of the type
$\gamma\mapsto\gamma\circ(\operatorname{Id}+\epsilon\phi)$, $\phi\in
C^{\infty}_{c}(0,\delta)$, to obtain the du Bois-Reymond equation (see for
instance [2])
$\frac{\mathrm{d}}{\mathrm{d}t}\bigl[|\dot{\gamma}|^{2}-|\nabla f|^{2}(\gamma)\bigr]=0\qquad\text{in the sense of distributions in $(0,\delta)$}.$
It implies Lipschitz regularity of the minimizers when, for instance, $|\nabla
f|$ is bounded on bounded sets (an assumption satisfied in [3], but obviously
too strong for some applications in infinite dimension).
We will need the following lemma, estimating $\Gamma_{\delta}$ from above, to
adjust the values of the curves at the endpoints. The heuristic idea is to
interpolate on the graph of $f_{\tau}$ and then read back this interpolation
in the original variables. This is related to Minty’s trick (see [1] for an
extensive use of this idea): a rotation of $\pi/4$ maps the graph of the
subdifferential onto the graph of an entire $1$-Lipschitz function; here we
use only slightly tilted variables, of order $\tau$.
###### Lemma 3.1 (Interpolation).
Let $f:H\to(-\infty,\infty]$ be a proper, $\lambda$-convex and lower
semicontinuous function and let $\tau>0$ be such that
$(1+\tau\lambda)^{-1}\leq 2$. For all $\delta>0$ and all $x_{0}\in D(\partial
f)$, $x_{\delta}\in D(\partial f)$, with $\Gamma_{\delta}$ as in (9), one has
$\Gamma_{\delta}(x_{0},x_{\delta})\leq 2\delta\min_{i\in\{0,\delta\}}|\nabla f|^{2}(x_{i})+\biggl(\frac{40}{\delta}+\frac{12\delta}{\tau^{2}}\biggr)|x_{\delta}-x_{0}|^{2}+\biggl(12\delta+\frac{40\tau^{2}}{\delta}\biggr)|\nabla f(x_{\delta})-\nabla f(x_{0})|^{2}.$
###### Proof.
We use Theorem 2.1 to interpolate between $x_{\delta}$ and $x_{0}$ as follows:
set
$\tilde{\gamma}(t):=\left(1-\frac{t}{\delta}\right)(x_{0}+\tau\nabla
f(x_{0}))+\frac{t}{\delta}(x_{\delta}+\tau\nabla
f(x_{\delta})),\qquad\xi(t):=\nabla f_{\tau}(\tilde{\gamma}(t)),$
and
$\gamma(t):=J_{\tau}(\tilde{\gamma}(t))=\tilde{\gamma}(t)-\tau\xi(t),$
where the second equality follows from (6).
Since $\xi(0)=\nabla f_{\tau}(x_{0}+\tau\nabla f(x_{0}))=\nabla f(x_{0})$ and
a similar property holds at time $\delta$, the path $\gamma$ is admissible.
Let us now estimate the action of the path $\gamma$.
Kinetic term (we use our Lipschitz bound for $\nabla f_{\tau}$ to deduce that
$|\dot{\xi}(t)|\leq\frac{3}{\tau}|\dot{\tilde{\gamma}}(t)|$):
$\displaystyle\int_{0}^{\delta}|\dot{\gamma}|^{2}\mathrm{d}t$
$\displaystyle\leq
2\int_{0}^{\delta}|\dot{\tilde{\gamma}}|^{2}\mathrm{d}t+2\tau^{2}\int_{0}^{\delta}|\dot{\xi}|^{2}\mathrm{d}t$
$\displaystyle\leq
20\int_{0}^{\delta}|\dot{\tilde{\gamma}}|^{2}\mathrm{d}t=\frac{20}{\delta}|(x_{\delta}+\tau\nabla
f(x_{\delta}))-(x_{0}+\tau\nabla f(x_{0}))|^{2}$
$\displaystyle\leq\frac{40}{\delta}|x_{\delta}-x_{0}|^{2}+\frac{40\tau^{2}}{\delta}|\nabla
f(x_{\delta})-\nabla f(x_{0})|^{2}.$
Gradient term (we use the first inequality in (8), our Lipschitz bound for
$\nabla f_{\tau}$, and finally (7)):
$\displaystyle\int_{0}^{\delta}|\nabla f|^{2}(\gamma)\mathrm{d}t$
$\displaystyle\leq\int_{0}^{\delta}|\nabla
f_{\tau}|^{2}(\tilde{\gamma})\mathrm{d}t$
$\displaystyle\leq\int_{0}^{\delta}\left(|\nabla f_{\tau}|(\tilde{\gamma}(0))+\frac{3}{\tau}|\tilde{\gamma}(t)-\tilde{\gamma}(0)|\right)^{2}\mathrm{d}t$
$\displaystyle\leq\int_{0}^{\delta}\left\{2|\nabla f|^{2}(x_{0})+\frac{18}{\tau^{2}}\frac{t^{2}}{\delta^{2}}|(x_{\delta}+\tau\nabla f(x_{\delta}))-(x_{0}+\tau\nabla f(x_{0}))|^{2}\right\}\mathrm{d}t$
$\displaystyle\leq 2\delta|\nabla
f|^{2}(x_{0})+\frac{6\delta}{\tau^{2}}|(x_{\delta}+\tau\nabla
f(x_{\delta}))-(x_{0}+\tau\nabla f(x_{0}))|^{2}$ $\displaystyle\leq
2\delta|\nabla
f|^{2}(x_{0})+\frac{12\delta}{\tau^{2}}|x_{\delta}-x_{0}|^{2}+12\delta|\nabla
f(x_{\delta})-\nabla f(x_{0})|^{2}.$
We get the result by gathering these two estimates, and by remarking that in the second line we could have controlled $|\nabla f_{\tau}|(\tilde{\gamma}(t))$ by its value at time $\delta$ instead of its value at time $0$.
∎
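The admissibility of $\gamma$ in the proof rests on the identity $\nabla f_{\tau}(x+\tau\nabla f(x))=\nabla f(x)$ for $x\in D(\partial f)$. As a sanity check (ours, not from the paper), in the model case $H=\mathbb{R}$, $f(x)=|x|$, the resolvent $J_{\tau}$ is soft-thresholding and the identity can be verified numerically:

```python
def J_tau(x, tau):
    """Resolvent (proximal map) of f(x) = |x|: soft-thresholding."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

def grad_moreau(x, tau):
    """Gradient of the Moreau envelope f_tau, namely (x - J_tau(x)) / tau."""
    return (x - J_tau(x, tau)) / tau

def grad_f(x):
    """Minimal-norm subgradient of f(x) = |x| (0 at the origin)."""
    return float((x > 0) - (x < 0))

# xi(0) = grad f_tau(x0 + tau * grad f(x0)) agrees with grad f(x0)
tau = 0.3
for x0 in (1.5, -0.7, 2.0):
    assert abs(grad_moreau(x0 + tau * grad_f(x0), tau) - grad_f(x0)) < 1e-12
```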
Choosing $\delta=\tau$, bounding $|\nabla f|(x_{i})$, $i\in\{0,\delta\}$, by the max of these two values, and using $|\nabla f(x_{\delta})-\nabla f(x_{0})|^{2}\leq 4\max_{i\in\{0,\delta\}}|\nabla f|^{2}(x_{i})$, we will apply the interpolation lemma in the form
$\Gamma_{\delta}(x_{0},x_{\delta})\leq\frac{52}{\tau}|x_{\delta}-x_{0}|^{2}+210\tau\max_{i\in\{0,\delta\}}|\nabla f|^{2}(x_{i}).$ (10)
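For the record, the constants in (10) are just those of the lemma evaluated at $\delta=\tau$ (a check of ours):

```latex
\frac{40}{\delta}+\frac{12\delta}{\tau^{2}}\Big|_{\delta=\tau}
  =\frac{40}{\tau}+\frac{12}{\tau}=\frac{52}{\tau},
\qquad
12\delta+\frac{40\tau^{2}}{\delta}\Big|_{\delta=\tau}=12\tau+40\tau=52\tau,
```

so that, bounding both gradient factors by the maximum,

```latex
2\tau\min_{i}|\nabla f|^{2}(x_{i})
  +52\tau\cdot 4\max_{i}|\nabla f|^{2}(x_{i})
  \leq(2+208)\,\tau\max_{i}|\nabla f|^{2}(x_{i})
  =210\,\tau\max_{i}|\nabla f|^{2}(x_{i}).
```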
## 4 Proof of the main result
In this section, $f_{h}$, $f$ denote generic proper, $\lambda$-convex and
lower semicontinuous functions from $H$ to $(-\infty,\infty]$.
Mosco convergence is a particular case of $\Gamma$-convergence, where the
topologies used for the $\limsup$ and the $\liminf$ inequalities differ.
###### Definition 4.1 (Mosco convergence).
We say that $f_{h}$ Mosco converge to $f$ whenever:
* (a)
for all $x\in H$ there exist $x_{h}\to x$ strongly with
$\limsup_{h}f_{h}(x_{h})\leq f(x);$
* (b)
for all sequences $(x_{h})\subset H$ weakly converging to $x$, one has
$\liminf_{h}f_{h}(x_{h})\geq f(x).$
It is easy to check that, for sequences of $\lambda$-convex functions, Mosco convergence implies the pointwise convergence of $J_{f_{h},\tau}$ to $J_{f,\tau}$, in contrast with usual $\Gamma$-convergence. Indeed, for $\tau>0$ admissible, (a) grants
$\limsup_{h\to\infty}\min_{y\in H}\Bigl\{f_{h}(y)+\frac{|y-x|^{2}}{2\tau}\Bigr\}\leq\min_{y\in H}\Bigl\{f(y)+\frac{|y-x|^{2}}{2\tau}\Bigr\},$
while (b) grants
$\liminf_{h\to\infty}\min_{y\in H}\Bigl\{f_{h}(y)+\frac{|y-x|^{2}}{2\tau}\Bigr\}\geq\min_{y\in H}\Bigl\{f(y)+\frac{|y-x|^{2}}{2\tau}\Bigr\},$
and the weak convergence of minimizers $y_{h}$ to the minimizer $y$.
Eventually, the convergence of the energies together with
$\liminf_{h\to\infty}f_{h}(y_{h})\geq
f(y)\qquad\text{and}\qquad\liminf_{h\to\infty}|y_{h}-x|^{2}\geq|y-x|^{2}$
grants that both liminf are limits, and that the convergence of $y_{h}$ is
strong.
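A minimal numerical illustration (ours) of this implication: take $H=\mathbb{R}$, $f_{h}(x)=|x-1/h|$ and $f(x)=|x|$, which converge in the Mosco sense; the resolvents, computed by shifted soft-thresholding, then converge pointwise:

```python
def prox_abs_shift(x, tau, a):
    """prox_{tau g}(x) for g(y) = |y - a|: shifted soft-thresholding."""
    z = x - a
    if z > tau:
        return a + z - tau
    if z < -tau:
        return a + z + tau
    return a

tau, x = 0.5, 0.4
limit = prox_abs_shift(x, tau, 0.0)                 # J_{f,tau}(x) with f = |.|
gaps = [abs(prox_abs_shift(x, tau, 1.0 / h) - limit)
        for h in (1, 10, 100, 1000)]                # f_h(y) = |y - 1/h|
assert all(g2 <= g1 for g1, g2 in zip(gaps, gaps[1:]))  # monotone approach
assert gaps[-1] < 1e-2                              # J_{f_h,tau}(x) -> J_{f,tau}(x)
```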
Recall that given $x_{h,0},\,x_{h,1}\in H$, the functionals
$\Theta_{f_{h},x_{h,0},x_{h,1}}$ defined in (2), are obtained from
$I_{f_{h}}^{1}$ by adding endpoints constraints. $\Theta_{f,x_{0},x_{1}}$ is
defined analogously.
We say that $\Theta_{f_{h},x_{h,0},x_{h,1}}$ $\Gamma$-converge to
$\Theta_{f,x_{0},x_{1}}$ in the $C([0,1];H)$ topology if
* (a)
for all $\gamma\in C([0,1];H)$ there exist $\gamma_{h}\in C([0,1];H)$
converging to $\gamma$ with
$\limsup_{h\to\infty}\Theta_{f_{h},x_{h,0},x_{h,1}}(\gamma_{h})\leq\Theta_{f,x_{0},x_{1}}(\gamma);$
* (b)
for all sequences $(\gamma_{h})\subset C([0,1];H)$ converging to $\gamma$ one
has
$\liminf_{h\to\infty}\Theta_{f_{h},x_{h,0},x_{h,1}}(\gamma_{h})\geq\Theta_{f,x_{0},x_{1}}(\gamma).$
In connection with the proof of property (a) it is useful to introduce the
functional
$\Gamma-\limsup_{h\to\infty}\Theta_{f_{h},x_{h,0},x_{h,1}}(\gamma):=\inf\left\{\limsup_{h\to\infty}\Theta_{f_{h},x_{h,0},x_{h,1}}(\gamma_{h}):\gamma_{h}\to\gamma\right\}$
so that (a) is equivalent to
$\Gamma-\limsup_{h}\Theta_{f_{h},x_{h,0},x_{h,1}}\leq\Theta_{f,x_{0},x_{1}}$.
Recall also that the $\Gamma-\limsup$ is lower semicontinuous, a property that
can be achieved, for instance, by a diagonal argument.
###### Proof of Theorem 1.1.
It is clear that the endpoint condition passes to the limit with respect to
the $C([0,1];H)$ topology, since $x_{h,i}$ converge to $x_{i}$. Also, it is
well known that the action functional is lower semicontinuous in $C([0,1];H)$.
Hence, the $\Gamma$-liminf inequality, namely property (b), follows
immediately from Fatou’s lemma and the variational characterization (4) of
$|\nabla f|$. Indeed, for all $y\neq x$ and all sequences $x_{h}\to x$
$\displaystyle\liminf_{h\to\infty}|\nabla f_{h}|(x_{h})\geq\liminf_{h\to\infty}\frac{\bigl[f_{h}(x_{h})-f_{h}(y_{h})+\frac{\lambda}{2}|x_{h}-y_{h}|^{2}\bigr]^{+}}{|x_{h}-y_{h}|}\geq\frac{\bigl[f(x)-f(y)+\frac{\lambda}{2}|x-y|^{2}\bigr]^{+}}{|x-y|},$
where $y_{h}$ is chosen as in (a) of Definition 4.1. Passing to the supremum,
we get the inequality $\liminf_{h}|\nabla f_{h}|(x_{h})\geq|\nabla f|(x)$, and
this grants the lower semicontinuity of the gradient term in the functionals.
Notice that this part of the proof also works if we assume only that $\Gamma$-$\liminf_{h}f_{h}\geq f$ for the strong topology of $H$; the stronger property (namely (b) in Definition 4.1) is needed because the next step requires convergence of the resolvents.
So, let us focus on the $\Gamma$-limsup inequality, property (a). Fix a path $\gamma$ with $\Theta_{f,x_{0},x_{1}}(\gamma)<\infty$ and $\tau>0$ (with $(1+\tau\lambda)^{-1}\leq 2$ if $\lambda<0$), and consider the perturbed paths
$\gamma^{\tau}_{h}(t)=J_{f_{h},\tau}(\gamma(t))$,
$\gamma^{\tau}(t)=J_{f,\tau}(\gamma(t))$; using the
$(1+\tau\lambda)^{-1}$-Lipschitz property of the maps $J_{f,\tau}$, the first
inequality in (8), the convergence of $\gamma^{\tau}_{h}$ to $\gamma^{\tau}$
and eventually the second inequality in (8) one gets
$\displaystyle\limsup_{h\to\infty}\int_{0}^{1}\left\{|\dot{\gamma}^{\tau}_{h}|^{2}+|\nabla f_{h}|^{2}(\gamma^{\tau}_{h})\right\}\mathrm{d}t$
$\displaystyle\leq\limsup_{h\to\infty}\int_{0}^{1}\left\{(1+\tau\lambda)^{-2}|\dot{\gamma}|^{2}+\frac{|\gamma-\gamma^{\tau}_{h}|^{2}}{\tau^{2}}\right\}\mathrm{d}t$
$\displaystyle\leq\int_{0}^{1}\left\{(1+\tau\lambda)^{-2}|\dot{\gamma}|^{2}+\frac{|\gamma-\gamma^{\tau}|^{2}}{\tau^{2}}\right\}\mathrm{d}t$
$\displaystyle\leq(1+\tau\lambda)^{-2}\int_{0}^{1}\left\{|\dot{\gamma}|^{2}+|\nabla f|^{2}(\gamma)\right\}\mathrm{d}t.$
Also, the convergence of resolvents gives
$\lim_{h\to\infty}J_{f_{h},\tau}(x_{i})=J_{f,\tau}(x_{i}).$
Finally, using again the inequalities (8) and once more the convergence of
resolvents, we get
$\limsup_{h\to\infty}|\nabla f_{h}|(J_{f_{h},\tau}(x_{i}))\leq\frac{|J_{f,\tau}(x_{i})-x_{i}|}{\tau}\leq(1+\tau\lambda)^{-1}|\nabla f|(x_{i})\leq 2|\nabla f|(x_{i}).$
Since the endpoints have been slightly modified by the composition with
$J_{f_{h},\tau}$, we argue as follows. Denoting by $S$ an upper bound for
$|\nabla f_{h}|(x_{h,i})$ and $2|\nabla f|(x_{i})$, we apply twice the
construction of Lemma 3.1, with $\delta=\tau$, to $f_{h}$ with endpoints
$x_{h,i}$, $J_{f_{h},\tau}(x_{i})$, to extend the curves $\gamma^{\tau}_{h}$,
still denoted $\gamma_{h}^{\tau}$, to the interval $[-\tau,1+\tau]$, in such a
way that (we use (10) in the first inequality, and the second inequality in
(8) in the second one)
$\displaystyle\limsup_{h\to\infty}\int_{-\tau}^{1+\tau}\left\{|\dot{\gamma}_{h}^{\tau}|^{2}+|\nabla f_{h}|^{2}(\gamma_{h}^{\tau})\right\}\mathrm{d}t$
$\displaystyle\qquad\leq(1+\tau\lambda)^{-2}\int_{0}^{1}\left\{|\dot{\gamma}|^{2}+|\nabla f|^{2}(\gamma)\right\}\mathrm{d}t+420\tau S^{2}+\frac{52}{\tau}\Big\{|x_{0}-J_{f,\tau}(x_{0})|^{2}+|x_{1}-J_{f,\tau}(x_{1})|^{2}\Big\}$
$\displaystyle\qquad\leq(1+\tau\lambda)^{-2}\left(\int_{0}^{1}\left\{|\dot{\gamma}|^{2}+|\nabla f|^{2}(\gamma)\right\}\mathrm{d}t+472\tau S^{2}\right)$
and the endpoint condition is satisfied at $t=-\tau$ and $t=1+\tau$. The limit
of the curves $\gamma^{\tau}_{h}$ in $[-\tau,1+\tau]$, still denoted
$\gamma^{\tau}$, is the one obtained applying the construction of Lemma 3.1
with $x_{i}$ and $J_{f,\tau}(x_{i})$ in the intervals $[-\tau,0]$ and
$[1,1+\tau]$, and which coincides with $J_{f,\tau}(\gamma(t))$ on $[0,1]$.
By a linear rescaling of the curves $\gamma_{h}^{\tau}$ and $\gamma^{\tau}$ to
$[0,1]$ we obtain curves $\tilde{\gamma}_{h}^{\tau}$ converging to
$\tilde{\gamma}^{\tau}$ in $C([0,1];H)$, with $\tilde{\gamma}^{\tau}$
convergent to $\gamma$ as $\tau\to 0$ and
$\displaystyle\Gamma-\limsup_{h\to\infty}\Theta_{f_{h},x_{h,0},x_{h,1}}(\tilde{\gamma}^{\tau})$
$\displaystyle\leq\limsup_{h\to\infty}\Theta_{f_{h},x_{h,0},x_{h,1}}(\tilde{\gamma}_{h}^{\tau})$
$\displaystyle\leq(1+O(\tau))\int_{0}^{1}\left\{|\dot{\gamma}|^{2}+|\nabla f|^{2}(\gamma)\right\}\mathrm{d}t+O(\tau).$
Eventually, the lower semicontinuity of the $\Gamma$-upper limit and the
convergence of $\tilde{\gamma}^{\tau}$ to $\gamma$ provide:
$\Gamma-\limsup_{h\to\infty}\Theta_{f_{h},x_{h,0},x_{h,1}}(\gamma)\leq\int_{0}^{1}\left\{|\dot{\gamma}|^{2}+|\nabla f|^{2}(\gamma)\right\}\mathrm{d}t.$
∎
## References
* [1] G. Alberti, L. Ambrosio: A geometric approach to monotone functions in ${\bf R}^{n}$. Mathematische Zeitschrift, 230 (1999), 259–316.
* [2] L. Ambrosio, G. Buttazzo, O. Ascenzi: Lipschitz regularity for minimizers of integral functionals with highly discontinuous coefficients. J. Math. Anal. Appl., 142 (1989), 301–316.
* [3] L. Ambrosio, A. Baradat, Y. Brenier: Monge-Ampère gravitation as a $\Gamma$-limit of good rate functions. Preprint, 2020.
* [4] L. Ambrosio, N. Gigli, G. Savaré: Gradient flows in metric spaces and in the space of probability measures. Lectures in Mathematics, ETH Zürich, Birkhäuser (2008).
* [5] L. Ambrosio, N. Gigli, G. Savaré: Calculus and heat flow in metric measure spaces and applications to spaces with Ricci bounds from below. Inventiones Mathematicae, 195 (2014), 289–391.
* [6] Y. Brenier: A double large deviation principle for Monge-Ampère gravitation. Bull. Inst. Math. Acad. Sin. (N.S.), 11 (2016), 23–41.
* [7] H. Brezis: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland Publishing Co., Amsterdam, 1973.
* [8] G. Clerc, I. Gentil, G. Conforti: On the variational interpretation of local logarithmic Sobolev inequalities. Preprint, 2021.
* [9] A. Dembo, O. Zeitouni: Large deviation techniques and applications. Applications of Mathematics 38, Springer, 1998.
* [10] P. Dondl, T. Frenzel, A. Mielke: A gradient system with a wiggly energy and relaxed EDP-convergence. ESAIM Control Optim. Calc. Var., 25 (2019), paper no. 68, 45pp.
* [11] A. Mielke, M.A. Peletier, D.R.M. Renger: On the relation between gradient flows and the large-deviation principle, with applications to Markov chains and diffusion. Potential Anal., 41 (2014), 1293–1327.
* [12] E. Sandier, S. Serfaty: $\Gamma$-Convergence of Gradient Flows with Applications to Ginzburg-Landau. Comm. Pure Appl. Math., 57 (2004), 1627–1672.
# Subspace exploration: Bounds on Projected Frequency Estimation
Graham Cormode (University of Warwick), Charlie Dickens (University of Warwick), and David P. Woodruff (Carnegie Mellon University)
(2021)
###### Abstract.
Given an $n\times d$ dimensional dataset $A$, a projection query specifies a
subset $C\subseteq[d]$ of columns which yields a new $n\times|C|$ array. We
study the space complexity of computing data analysis functions over such
subspaces, including heavy hitters and norms, when the subspaces are revealed
only after observing the data. We show that this important class of problems
is typically hard: for many problems, we show $2^{\Omega(d)}$ lower bounds.
However, we present upper bounds which demonstrate space dependency better
than $2^{d}$. That is, for $c,c^{\prime}\in(0,1)$ and a parameter $N=2^{d}$, an
$N^{c}$-approximation can be obtained in space $\min(N^{c^{\prime}},n)$,
showing that it is possible to improve on the naïve approach of keeping
information for all $2^{d}$ subsets of $d$ columns. Our results are based on
careful constructions of instances using coding theory and novel combinatorial
reductions that exhibit such space-approximation tradeoffs.
Keywords: projection queries, distinct elements, frequency moments
Proceedings of the 40th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems (PODS'21), 2021.
## 1. Introduction
In many data analysis scenarios, datasets of interest are of moderate to high
dimension, but many of these dimensions are spurious or irrelevant. Thus, we
are interested in subspaces, corresponding to the data projected on a
particular subset of dimensions. Within each subspace, we are concerned with
computing statistics, such as norms, measures of variation, or finding common
patterns. Such calculations are the basis of subsequent analysis, such as
regression and clustering. In this paper, we introduce and formalize novel
problems related to functions of the frequency in such projected subspaces.
Already, special cases such as subspace projected distinct elements have begun
to generate interest, e.g., in Vu’s work (Vu, 2018), and as an open problem in
sublinear algorithms (Sublinear.info, [n.d.]).
In more detail, we consider the original data to be represented by a (usually
binary) array with $n$ rows of $d$ dimensions. A subspace is defined by a set
$C\subseteq[d]$ of columns, which defines a new array with $n$ rows and $|C|$
dimensions. Our goal is to understand the complexity of answering queries,
such as which rows occur most frequently in the projected data, computing
frequency moments over the rows, and so on. If $C$ is provided prior to seeing
the data, then the projection can be performed online, and so many of these
tasks reduce to previously studied questions. Hence, we focus on the case when
$C$ is decided _after_ the data is seen. In particular, we may wish to try out
many different choices of $C$ to explore the structure of the subspaces of the
data. Our model is given in detail in Section 2.
For further motivation, we outline some specific areas where such problems
arise.
* •
Bias and Diversity. A growing concern in data analysis and machine learning is
whether outcomes are ‘fair’ to different subgroups within the population, or
whether they reinforce existing disparities. A starting point for this is to
quantify the level of bias within the data when different features are
considered. That is, we want to know whether certain combinations of attribute
values are over-represented in the data (heavy hitters), and how many
different combinations of values are represented in the data (captured by
measures like $F_{0}$). We would like to be able to answer such queries
accurately for many different (typically overlapping) subsets of dimensions.
* •
Privacy and Linkability. When sharing datasets, we seek assurance that they
are not vulnerable to attacks that exploit structure in the data to re-
identify individuals. An attempt to quantify this risk is given in recent work
(Chia et al., 2019), which asks how many distinct values occur in the data for
each partial identifier, specified as a subset of dimensions. This prior work
considered the case where the target dimensions are known in advance, but more
generally we would like to compute such measures for arbitrary subsets, based
on frequency moments and sampling techniques.
* •
Clustering and Frequency Analysis. In the area of clustering, the notion of
subspaces has been studied under a number of interpretations. The common theme
is that the data may look unclustered in the original space due to spurious
dimensions inflating the distance between points that are otherwise close.
Many papers addressed this as a search problem: to search through
exponentially many subspaces to find those in which the data is well-
clustered. See the survey by Parsons, Haque and Liu (Parsons et al., 2004). In
our setting, the problem would be to estimate various measures of density or
clusteredness for a given subspace. A related problem is to find subspaces (or
“subcubes” in database terminology) that have high frequency. Prior work
proceeded under strong statistical independence assumptions about the values
in different dimensions, for example, that the distribution can be modeled
accurately with a (Naïve) Bayesian model (Kveton et al., 2018).
## 2. Preliminaries and Definitions
For a positive integer $Q$, let $[Q]=\{0,1,\ldots,Q-1\}$, and
$A\in[Q]^{n\times d}$ be the input data. The objective is to keep a summary of
$A$ which is used to estimate the solution to a problem $\mathbf{P}$ upon
receiving a column subset query $C\subseteq[d]$. Problems $\mathbf{P}$ of
interest are described in Section 2.1. Define the restriction of $A$ to the
columns indexed by $C$ as $A^{C}$ whose rows $A^{C}_{i}$, $1\leq i\leq n$, are
vectors over $[Q]^{|C|}$. We use the Minkowski norm $\|X\|_{p}=(\sum_{i,j}|X_{ij}|^{p})^{1/p}$ to denote the entrywise $\ell_{p}$ norm, applied to both vectors (where the index $j$ is trivial) and matrices.
Computational Model. First, the data $A$ is received under the assumption that it is too large to hold entirely in memory, so it is modeled as a stream of data. Our lower bounds are not strongly dependent on the order in which the data is presented. After observing $A$, a column query $C$ arrives. The
frequency vector over $A$ induced by $C$ is $f=f(A,C)$ whose entries
$f_{i}(A,C)$ denote the frequency of $Q$-ary word $w_{i}\in[Q]^{|C|}$. We
study functions of the frequency vector $f=f(A,C)$ after the observation of
$A$ and receiving column query $C$. The task is, during the observation phase,
to design a summary of $A$ which approximates statistics of $A^{C}$, the
restriction of $A$ to its projected subspace $C$. Approximations of $A^{C}$
are accessed through the frequency vector $f(A,C)$. Note that functions (e.g.,
norms) are taken over $f(A,C)$ as opposed to the raw vector inputs from the
column projection.
###### Remark 1 (Indexing $Q$-ary words into $f$).
Recall that the frequency vector $f(A,C)$ has length $Q^{|C|}$ with each entry
$f_{i}$ counting the occurrences of word $w_{i}\in[Q]^{|C|}$. To clearly
distinguish between the (scalar) index $i$ of $f$ and the input vectors
$w_{i}$ whose frequency is measured by $f_{i}$ we introduce the index function
$e(w_{i})=i$. We may think of $e(\cdot)$ as simply the canonical mapping from $[Q]^{|C|}$ into $\{0,1,2,\dots,Q^{|C|}-1\}$, but other suitable bijections may be used.
For example, suppose $Q=2$ and $A\in\{0,1\}^{5\times 3}$ with column indices $\{1,2,3\}$ given below. If $C=\{1,2\}$, then using the canonical mapping from $\{0,1\}^{|C|}$ into $\{0,1,2,3\}$ (e.g., $e(00)=0$, $e(01)=1,\dots,e(11)=3$) we obtain $A^{C}$ and hence $f(A,C)=(1,1,0,3)$.
$A=\begin{bmatrix}1&1&0\\ 0&1&0\\ 0&0&1\\ 1&1&1\\ 1&1&0\end{bmatrix}\qquad\longrightarrow\qquad A^{C}=\begin{bmatrix}1&1\\ 0&1\\ 0&0\\ 1&1\\ 1&1\end{bmatrix}$
The vector $f=f(A,C)$ is then the frequency vector over which we seek to
compute statistical queries such as $\|f\|_{0}$. In this example,
$\|f\|_{0}=3$ (there are three distinct rows in $A^{C}$), while $\|f\|_{1}=5$
is independent of the choice of $C$.
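The example above can be transcribed directly (a sketch of ours; note the code indexes columns from $0$, so the paper's $C=\{1,2\}$ becomes $\{0,1\}$):

```python
from collections import Counter

def freq_vector(A, C, Q=2):
    """Frequency vector f(A, C): entry e(w) counts rows of A^C equal to w."""
    cols = sorted(C)
    counts = Counter(tuple(row[j] for j in cols) for row in A)
    f = [0] * Q ** len(cols)
    for word, count in counts.items():
        idx = 0                      # canonical index e(w): base-Q encoding
        for digit in word:
            idx = idx * Q + digit
        f[idx] = count
    return f

A = [[1, 1, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1], [1, 1, 0]]
f = freq_vector(A, {0, 1})
assert f == [1, 1, 0, 3]
assert sum(1 for x in f if x != 0) == 3   # ||f||_0: three distinct rows
assert sum(f) == 5                        # ||f||_1 = n, independent of C
```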
### 2.1. Problem Definitions
The problems that we consider are column-projected forms of common streaming
problems ((Kane et al., 2010), (Braverman et al., 2017), (Braverman et al.,
2018a)). Here, we refer to these problems as “projected frequency estimation
problems” over the input $A$. We define
(1) $\displaystyle f_{i}(A,C)$ $\displaystyle=|\{j:A_{j}^{C}=w_{i},\,j\in[n]\}|$ (2) $\displaystyle F_{p}(A,C)$ $\displaystyle=\sum_{i\in[Q^{|C|}]}f_{i}(A,C)^{p}.$
* •
$F_{p}$ estimation: Given a column query $C$, the $F_{p}$ estimation problem
is to approximate the quantity $F_{p}(A,C)=\|f(A,C)\|_{p}^{p}$ under some
measure of approximation to be specified later (e.g., up to a constant
factor). Of particular interest to us is (projected) $F_{0}(A,C)$ estimation,
which counts the number of distinct row patterns in $A^{C}$.
* •
$\ell_{p}$-heavy hitters: The query is specified by a column query
$C\subseteq[d]$, a choice of metric/norm $\ell_{p},p>0$ and accuracy parameter
$\phi\in(0,1)$. The task is then to identify all patterns $w_{i}$ observed on
$A^{C}$ for which $f_{i}(A,C)\geq\phi\|f(A,C)\|_{p}$. Such values $w_{i}$ (or
equivalently $i$) are called $\phi$-$\ell_{p}$-heavy hitters, or simply
$\ell_{p}$-heavy hitters when $\phi$ is fixed. We will consider a
multiplicative approximation based on a parameter $c>1$, where we require that
all $\phi$-$\ell_{p}$ heavy hitters are reported, and no items with weight
less than $(\phi/c)\cdot\|f(A,C)\|_{p}$ are included.
* •
$\ell_{p}$-frequency estimation: A related problem is to allow the frequency
$f_{i}(A,C)$ to be estimated accurately, with error as a fraction of
$F_{p}(A,C)^{1/p}=\|f(A,C)\|_{p}$, which we refer to as $\ell_{p}$ frequency
estimation. Specifically, for a given $w_{i}$, return an estimate
$\hat{f}_{i}$ which satisfies
$|\hat{f}_{i}(A,C)-f_{i}(A,C)|\leq\phi\|f(A,C)\|_{p}$.
* •
$\ell_{p}$ sampling: The goal of this sampling problem is to sample patterns
$w_{i}$ according to the distribution
$p_{i}\in(1\pm\varepsilon)\frac{f^{p}_{i}(A,C)}{\|f(A,C)\|_{p}^{p}}+\Delta$
where $\Delta=1/\operatorname{poly}(nd)$, and return a
$(1\pm\varepsilon^{\prime})$-approximation to the probability $p_{i}$ of the
item $w_{i}$ returned.
When clear, we may drop the dependence upon $C$ in the notation and write $f_{i}$ and $F_{p}$ instead. We will use $\tilde{O}$ and $\tilde{\Omega}$ notation to suppress factors that are polylogarithmic in the leading term. For example, lower bounds stated as $\tilde{\Omega}(2^{d})$ suppress terms polynomial in $d$.
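As an illustration of how the quantities above are read off the frequency vector $f(A,C)$ once it is known, here is a brute-force sketch (helper names are ours; a streaming algorithm of course cannot afford to store $f$ explicitly):

```python
def F_p(f, p):
    """Frequency moment F_p = sum_i f_i^p (with the convention F_0 = ||f||_0)."""
    if p == 0:
        return sum(1 for x in f if x != 0)
    return sum(x ** p for x in f)

def lp_heavy_hitters(f, p, phi):
    """Indices i with f_i >= phi * ||f||_p, i.e. the phi-l_p heavy hitters (p > 0)."""
    norm = F_p(f, p) ** (1.0 / p)
    return [i for i, x in enumerate(f) if x >= phi * norm]

f = [1, 1, 0, 3]                              # the running example f(A, C)
assert F_p(f, 0) == 3 and F_p(f, 1) == 5 and F_p(f, 2) == 11
assert lp_heavy_hitters(f, 1, 0.5) == [3]     # only pattern 3 has f_i >= 2.5
```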
### 2.2. Related Work
The model we study is reminiscent of, but distinct from, some related formulations. In the problem of cascaded aggregates (Jayram and Woodruff,
2009), we imagine the starting data as a matrix, and apply a first operator
(denoted $Q$) on each row to obtain a vector, on which we apply a second
operator $P$. Our problems can be understood as special cases of cascaded
aggregates where $Q$ is a project-then-concatenate operator, to obtain a
vector whose indices correspond to the concatenation of the projection of a
row. Another example of a cascaded aggregate is a so-called correlated
aggregate (Tirthapura and Woodruff, 2012), but this was only studied in the
context of two dimensions. To the best of our knowledge, our projection-based
definitions have not been previously studied under the banner of cascaded
aggregates.
Other work includes results on provisioning queries for analytics (Assadi et
al., 2016), but the way these statistics are defined is different from our
formulation. In that setting there are different scenarios (“hypotheticals”)
that may or may not be turned on: this corresponds to “what-if” analysis
whereby a query is roughly “how many items are observed if a given set of
columns is present (turned on)?” The number of distinct elements for the query
is the union of the number of distinct elements across scenarios. In our
setting, we concatenate the distinct items into a row vector and count the
number of distinct vectors. Note that in the hypotheticals setting in the
binary case, each column only has $2$ distinct values, $0$ and $1$, and thus
the union also only has $2$ distinct values. However, we can obtain up to
$2^{d}$ distinct vectors. Consequently, Assadi et al. are able to achieve
$\operatorname{poly}(d/\varepsilon)$ space for counting distinct elements,
whereas we show a $2^{\Omega(d)}$ lower bound. Moreover, they achieve a
$2^{\Omega(d)}$ lower bound for counting (i.e., $F_{1}$), whereas we achieve a
constant upper bound. These disparities highlight the differences in our
models.
More recently, the notion of “subset norms” was introduced by Braverman,
Krauthgamer and Yang (Braverman et al., 2018b). This problem considers an
input that defines a vector $v$, where the objective is to take a subset $s$
of entries of $v$ and compute the norm. Results are parameterized by the
“heavy hitter dimension”, which is a measure of complexity over the set system
from which $s$ can be drawn. While sharing some properties with our scenario,
the results for this model are quite different. In particular, in (Braverman
et al., 2018b) a trivial upper bound follows by maintaining the vector $v$
explicitly, of dimension $n$. Meanwhile, many of our results show lower bounds
that are exponential in the dimensionality, as $2^{\Omega(d)}$, though we also
obtain non-trivial upper bounds.
## 3. Contributions
The main challenge here is that the column query $C$ is revealed after
observing the data; consequently, applying a known algorithm to just the
columns $C$ as the data arrives is not possible. For example, consider the
exemplar problem of counting the number of distinct rows under the projection
$C$, i.e., the projected $F_{0}$ problem. Recall that $A^{C}_{i}$ denotes the
$i$-th row of array $A^{C}$. Then the task is to count the number of distinct
rows observed in $A^{C}$, i.e.,
$\displaystyle F_{0}(A,C)$ $\displaystyle=|\{A_{j}^{C}:j\in[n]\}|=\|f(A,C)\|_{0}.$
Observe that $F_{0}(A,C)$ can vary widely over different choices of $C$. For
example, even for a binary input $A\in\\{0,1\\}^{n\times d}$, $F_{0}(A,C)$ can
be as large as $2^{d}$ when $C$ consists of all columns from a highly diverse
dataset, and as small as $1$ or $2$ when $C$ is a single column or when $C$
selects homogeneous columns (e.g., the columns in $C$ are all zeros).
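This spread is easy to reproduce on a toy instance (a sketch of ours):

```python
from itertools import product

def F0(A, C):
    """Number of distinct projected rows: F_0(A, C) = ||f(A, C)||_0."""
    return len({tuple(row[j] for j in sorted(C)) for row in A})

d = 3
# all 2^3 binary patterns as rows, with an extra all-zero column appended
A = [list(row) + [0] for row in product([0, 1], repeat=d)]

assert F0(A, set(range(d))) == 2 ** d   # fully diverse columns: maximal
assert F0(A, {d}) == 1                  # the homogeneous column: minimal
assert F0(A, {0}) == 2                  # any single data column
```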
### 3.1. Summary of Results
Our main focus, in common with prior work on streaming algorithms, is on space
complexity. For the above problems we obtain the following results:
* •
In Section 4 we show that projected $F_{0}$ estimation requires
$2^{\Omega(d)}$ space for a constant factor approximation, demonstrating the
essential hardness of these problems. Nevertheless, we obtain a tradeoff in
terms of upper bounds described below.
* •
Section 5 presents results for $\ell_{p}$ frequency estimation, $\ell_{p}$
heavy hitters, $F_{p}$ estimation, and $\ell_{p}$ sampling. We show a space
upper bound of $O(\varepsilon^{-2}\log(1/\delta))$ for $\ell_{p}$ frequency
estimation when $0<p<1$ and complement this result with lower bounds for heavy
hitters when $p>1$, $F_{p}$ estimation and $\ell_{p}$ sampling for all $p\neq
1$, showing that these problems require $2^{\Omega(d)}$ bits of space.
* •
In Section 6 we show upper bounds for $F_{0}$ and $F_{p}$ estimation which
improve on the exhaustive approach of keeping summaries of all $2^{d}$ subsets
of columns, by showing that we can obtain coarse approximate answers with a
smaller subset of materialized answers. Specifically, for parameters $N=2^{d}$
and $\alpha\in(0,1)$ we can obtain an $N^{\alpha}$ approximation in
$\min\left(N^{H(1/2-\alpha)},n\right)$ space. Since the binary entropy function satisfies $H(x)<1$ for $x\neq 1/2$, this bound is better than the trivial $2^{d}$ bound.
These bounds show that there is no possibility of “super efficient” solutions
that use space less than exponential in $d$. Nevertheless, we demonstrate some
solutions whose dependence is still exponential but weaker than a naïve
$2^{d}$. Thinking of $N=2^{d}$, the above upper and lower bounds imply the
actual complexity is a nontrivial polynomial function of $N$.
The bounds also show novel dichotomies that are not present in comparable
problems without projection. In particular, we show that (projected)
$\ell_{p}$ sampling is difficult for $p\neq 1$ while (projected)
$\ell_{p}$-heavy hitters has a small space algorithm for $0<p<1$. This differs
from the standard streaming model in which the (classical) $\ell_{p}$ heavy
hitters problem has a small space solutions for $p\leq 2$ without projection
(Larsen et al., 2016), and (classical) $\ell_{p}$ sampling can be performed
efficiently for $p\leq 2$ (Jayaram and Woodruff, 2018). Our lower bounds are
built on amplifying the frequency of target codewords for a carefully chosen
test word.
Note that there is a trivial naïve solution which simply retains the entire input and so answers any query $C$ exactly: to do so takes
$\Theta(nd)$ space, noting that $n$ may be exponential in $d$. Alternatively,
if we know $t=|C|$ then we may enumerate all ${d\choose t}$ subsets of $[d]$
with size $t$ and maintain (approximate) summaries for each choice of $C$.
However, this will entail a cost of at least $\Omega(d^{t})$ and as such does
not give a major reduction in cost.
### 3.2. Coding Theory Definitions
Our lower bounds will typically make use of a binary code $\mathcal{C}$,
constituted of a collection of codewords, which are vectors (or strings) of
fixed length. We write $\mathcal{B}(l,k)$ to denote all binary strings of
length $l$ and (Hamming) weight $k$. We first consider the dense, low-distance
family of codes $\mathcal{C}=\mathcal{B}(d,k)$ but will later use more
sophisticated randomly sampled codes. When $k<d/2$, we have ${d\choose
k}\geq\left({d}/{k}\right)^{k}$ and when $k=d/2$, we have ${d\choose
d/2}\geq{2^{d}}/{\sqrt{2d}}$. A trivial but crucial property of
$\mathcal{B}(d,k)$ is that any two codewords from this set can have
intersecting $1$s in at most $k-1$ positions.
We define the support of a string $y$ as
$\operatorname{supp}(y)=\\{i:y_{i}\neq 0\\}$, the set of locations where $y$
is non-zero. We define child words to be the set of new codewords obtained
from $\mathcal{C}$ by generating all $Q$-ary words $z$ with
$\operatorname{supp}(z)\subseteq\operatorname{supp}(y)$ for some
$y\in\mathcal{C}$, and construct them with the star operator defined next.
###### Definition 3.1 ($\textsf{star}^{Q}$ operation, child words).
Let $d$ be the length of a binary word, $k$ be a weight parameter, and suppose
$y\in\mathcal{B}(d,k)$. Let $M=\operatorname{supp}(y)$. We define the function
$\textsf{star}^{Q}(y)$ to be the operation which lifts a binary word $y$ to a
larger alphabet by generating all the words over alphabet $[Q]$ supported on $M$.
Formally,
$\displaystyle\textsf{star}^{Q}(y\in\\{0,1\\}^{d})=\\{z:z\in[Q]^{d},\operatorname{supp}(z)\subseteq\operatorname{supp}(y)\\}$
Since the alphabet size $Q$ is often fixed when using this operation, when
clear we will drop the superscript and abuse notation by writing
$\textsf{star}(y)$. Elements of the set $\textsf{star}^{Q}(y)$ are referred to
as child words of $y$.
For any $y\in\mathcal{B}(d,k)$, there are $Q^{k}$ words generated by
$\textsf{star}^{Q}(y)$. When $\textsf{star}(\cdot)$ is applied to all vectors
of a set $U$ then we write $\textsf{star}(U)=\cup_{u\in U}{\textsf{star}(u)}$.
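As an illustrative sketch (not part of the original development), the $\textsf{star}^{Q}$ operation can be implemented directly, assuming the alphabet is $\{0,\ldots,Q-1\}$ with $0$ denoting positions outside the support:

```python
from itertools import product

def star(y, Q):
    """All child words of binary word y over alphabet {0, ..., Q-1}.

    Positions outside supp(y) are fixed to 0; each of the k support
    positions ranges over all Q symbols, giving Q^k child words.
    """
    support = [i for i, bit in enumerate(y) if bit]
    children = []
    for values in product(range(Q), repeat=len(support)):
        z = [0] * len(y)
        for i, v in zip(support, values):
            z[i] = v
        children.append(tuple(z))
    return children

# y has weight k = 2, so star^Q(y) has Q^k = 9 child words for Q = 3.
y = (1, 0, 1, 0)
children = star(y, Q=3)
```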
For example, if $y\in\\{0,1\\}^{d}$ and $Q=2$, then $\textsf{star}^{Q}(y)$ is
simply all possible binary words of length $d$ whose support is contained in
$\operatorname{supp}(y)$. For the projected $F_{0}$ problem, the code
$\mathcal{C}=\mathcal{B}(d,k)$ is sufficient. However, for our subsequent
results, we need a randomly chosen code whose existence is demonstrated in
Lemma 3.2. The proof follows from a Chernoff bound.
###### Lemma 3.2.
Fix $\epsilon,\gamma\in(0,1)$ and let
$\mathcal{C}\subseteq\mathcal{B}(d,\epsilon d)$ be such that for any two
distinct $x,y\in\mathcal{C}$ we have $|x\cap y|\leq(\epsilon^{2}+\gamma)d$.
With probability at least $1-\exp(-2d\gamma^{2})$ there exists such a code
$\mathcal{C}$ with size $2^{O(\gamma^{2}d)}$ instantiated by sampling
sufficiently many words i.i.d. at random from $\mathcal{B}(d,\epsilon d)$.
###### Proof.
Let $X$ be the random variable for the number of $1$s in common between $x$
and $y$ sampled uniformly at random. Then the expectation of $X$ is
$\mathbb{E}[X]=\frac{(\epsilon d)^{2}}{d}=\epsilon^{2}d$ and although the
coordinates of $x,y$ are not independent, they are negatively correlated so we
may use a Chernoff bound (see Section $1.10.2$ of (Doerr, 2020) for self-
contained details). Our aim is to show that the number of $1$s in common
between $x$ and $y$ can be at most $\gamma d$ more than its expectation. Then,
via an additive Chernoff-Hoeffding bound:
$\mathbb{P}(X-\mathbb{E}(X)\geq\gamma d)\leq\exp(-2d\gamma^{2}).$
This bounds the probability that a given pair of codewords is too similar;
taking a union bound over the $\Theta(|\mathcal{C}|^{2})$ pairs shows the
construction succeeds with high probability for a code of size
$|\mathcal{C}|=\exp(d\gamma^{2})=2^{\gamma^{2}d/\ln 2}$. ∎
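The random construction of Lemma 3.2 can be checked empirically at modest parameters (an illustrative sketch; the function name and interface are our own):

```python
import random

def sample_code(d, eps, gamma, size, seed=1):
    """Sample `size` words i.i.d. from B(d, eps*d) (stored as supports) and
    check that every pair shares at most (eps^2 + gamma)*d ones."""
    rng = random.Random(seed)
    weight = int(eps * d)
    code = [frozenset(rng.sample(range(d), weight)) for _ in range(size)]
    bound = (eps ** 2 + gamma) * d
    ok = all(len(x & y) <= bound
             for i, x in enumerate(code) for y in code[:i])
    return code, ok

# Expected pairwise overlap is eps^2 * d = 18; the bound allows 58.
code, ok = sample_code(d=200, eps=0.3, gamma=0.2, size=20)
```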
### 3.3. Overview of Lower Bound Constructions
Our lower bounds rely upon non-standard reductions from the Index problem
using codes $\mathcal{C}$ defined in Section 3.2. These reductions are more
involved than is typical, as we need to combine the combinatorial properties
of $\mathcal{C}$ along with the $\textsf{star}(\cdot)$ operation on Alice’s
input. In particular, the interplay between $\mathcal{C}$ and
$\textsf{star}(\cdot)$ must be understood over the column query $S$ given by
Bob, which again relies on properties of $\mathcal{C}$ used to define the
input.
Recall that the typical reduction from Index is as follows: Alice holds a
vector $\mathbf{a}\in\\{0,1\\}^{N}$, Bob holds an index $i\in[N]$ and he is
tasked with finding $\mathbf{a}_{i}$ following one-way communication from
Alice. The randomized communication complexity of Index is $\Omega(N)$ (Kremer
et al., 1999). We adapt this setup for our family of problems, following an
approach that has been used to prove many space lower bounds for streaming
algorithms.
The general construction of our lower bounds is as follows: first we choose a
binary code $\mathcal{C}$ (usually independently at random) with certain
properties such as a specific weight and a bounded number of $1$s in common
locations with other words in the code. In the communication setting, Alice
holds a subset $T\subseteq\mathcal{C}$ while Bob holds a codeword
$y\in\mathcal{C}$ and is tasked with determining whether or not $y\in T$. Bob
can also access the index function (Remark 1) $e(y)$ which simply returns the
index or location that $y$ is enumerated in $\mathcal{C}$. The corresponding
bitstring for the Index problem that Alice holds is
$\mathbf{a}\in\\{0,1\\}^{|\mathcal{C}|}$ which has $\mathbf{a}_{j}=1$ for
every element $w_{j}\in T$ (under a suitable enumeration of $\mathcal{C}$).
We use the $\textsf{star}(T)$ operator (defined in Section 3.2) to map these
strings into an input $A$ for each of the problems (i.e., a collection of rows
of datapoints). Upon defining the instance, we show that Bob can query a
proposed algorithm for the problem and use the output to determine whether or
not Alice holds $y$. This enables Bob to return $\mathbf{a}_{e(y)}$, which is
$1$ if Alice holds $y\in T$ and $0$ otherwise. Hence, determining if $y\in T$
or $y\in\mathcal{C}\setminus T$ solves Index and incurs the lower bound
$\Omega(|\mathcal{C}|)$. Our constructions of $\mathcal{C}$ establish that
$|\mathcal{C}|$ is exponentially large in $d$.
## 4\. Lower Bounds for $F_{0}$
In this section, we focus on the $F_{0}$ (distinct counting) projected
frequency problem. The main result in this section is a strong lower bound for
the problem, which is exponential in the domain size $d$.
We use codes $\mathcal{C}=\mathcal{B}(d,k)$ as defined in Section 3.2.
###### Theorem 4.1.
Let $Q\geq 2$ be the target alphabet size and $k<d/2$ be a fixed query size
with $Q>k$. Any algorithm achieving an approximation factor of $Q/k$ for the
projected $F_{0}$ problem requires space $2^{\Omega(d)}$.
###### Proof.
Fix the code $\mathcal{C}=\mathcal{B}(d,k)$, recalling that any
$x\in\mathcal{C}$ has Hamming weight $k$, and for distinct $x,y\in\mathcal{C}$
at most $k-1$ bits are shared in common. We will use these facts to obtain the
approximation factor.
Obtain the collection of all child words $\mathcal{C}_{Q}$ from $\mathcal{C}$
by using $\textsf{star}^{Q}(\cdot)$ as defined in Section 3.2. We will reduce
from the Index problem in communication complexity as follows. Alice has a set
of (binary) codewords $T\subseteq\mathcal{C}$ and initializes the input array
$A$ for the algorithm with all strings from the set $\textsf{star}(T)$. Bob
has a vector $y\in\mathcal{C}$ and wants to know if $y\in T$ or not. Let
$S=\operatorname{supp}(y)$ so that $|S|=k$ and Bob queries the $F_{0}$
algorithm on columns of $A$ restricted to $S$. First suppose that $y\in T$.
Then Alice holds $y$ so $\textsf{star}(y)$ is included in $A$ and there must
be at least $Q^{k}$ patterns observed. Conversely, if $y\notin T$, then Alice
does not include $y$ in $A$. However, by the construction of $\mathcal{C}$,
$y$ shares at most $(k-1)$ 1s with any distinct $y^{\prime}\in\mathcal{C}$.
Thus, the number of patterns observed on the columns corresponding to $S$ is
at most ${k\choose k-1}Q^{k-1}=kQ^{k-1}$.
We observe that if we can distinguish the case of $kQ^{k-1}$ from $Q^{k}$,
then we could correctly answer the Index instance, i.e., if we can achieve an
approximation factor of $\Delta$ such that:
(3) $\Delta=\frac{Q^{k}}{kQ^{k-1}}=\frac{Q}{k}.$
Any protocol for Index requires communication proportional to the length of
Alice’s input vector $\mathbf{a}$, which translates into a space lower bound
for our problem. Alice’s set $T\subset\mathcal{C}$ defines an input vector for
the Index problem built using a characteristic vector over all words in
$\mathcal{C}$, denoted by $\mathbf{a}\in\\{0,1\\}^{|\mathcal{C}|}$, as
follows. Under a suitable enumeration of
$\mathcal{C}=\\{w_{1},w_{2},\ldots,w_{|\mathcal{C}|}\\}$, Alice’s vector is
encoded via $\mathbf{a}_{i}=1$ if and only if Alice holds the binary word
$w_{i}\in T$. From the separation shown earlier, Bob can determine if Alice
holds a word in $T$, thus solving Index and incurring the lower bound. Hence,
space proportional to $|\mathcal{C}|={d\choose k}$ is necessary. We use the
standard relation ${d\choose k}\geq\left({d}/{k}\right)^{k}$ and choose
$k=ad/2$ for a constant $a\in[0,1)$ from which we obtain $|\mathcal{C}|\geq
2^{ad/2}$ to achieve the stated approximation guarantee. ∎
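The counting gap at the heart of Theorem 4.1 can be verified at toy scale (here $d=6$, $k=2$, $Q=3$, with alphabet $\{0,1,2\}$ and $0$ marking off-support positions; an illustrative sketch, far below the asymptotic regime):

```python
from itertools import combinations, product

def star(y, Q, d):
    """Child words of binary word y over {0,...,Q-1}; off-support fixed to 0."""
    supp = [i for i in range(d) if y[i]]
    children = []
    for vals in product(range(Q), repeat=len(supp)):
        z = [0] * d
        for i, v in zip(supp, vals):
            z[i] = v
        children.append(tuple(z))
    return children

def distinct_patterns(T, query, Q, d):
    """Distinct projections onto `query` over all child words of words in T."""
    rows = [z for y in T for z in star(y, Q, d)]
    return len({tuple(z[i] for i in query) for z in rows})

d, k, Q = 6, 2, 3
code = [tuple(1 if i in c else 0 for i in range(d))
        for c in combinations(range(d), k)]      # the code B(6, 2)
y = code[0]                                      # Bob's test word
S = [i for i in range(d) if y[i]]                # column query supp(y)

with_y = distinct_patterns(code, S, Q, d)        # y in T: Q^k = 9 patterns
without_y = distinct_patterns(code[1:], S, Q, d) # y not in T: <= k*Q^(k-1) = 6
```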
Setting $k=ad/2$ allows us to vary the query size and directly understand how
this affects the size of the code necessary for the lower bound. For a query
of size $k$, the size of the input to the projected $F_{0}$ problem is a
$\left(d/k\right)^{k}\times d$ array $A$ of total size ${d^{k+1}}/{k^{k}}$.
Theorem 4.1 is for $k<d/2$. When $k=d/2$ we can use the tighter bound for the
central binomial term on the sum of the binomial coefficients and obtain the
following stronger bounds. The subsequent results use the same encoding as in
Theorem 4.1. However, at certain points of the calculations the parameter
settings are slightly altered to obtain different guarantees.
###### Corollary 4.2.
Let $Q\geq d/2$ be an alphabet size and $d/2$ be the query size. There exists
a choice of input data $A\in[Q]^{n\times d}$ such that any algorithm achieving
approximation factor $2Q/d$ for the projected $F_{0}$ problem on the query
requires space $2^{\Omega(d)}$.
###### Proof.
Repeat the argument of Theorem 4.1 with $k=d/2$. The approximation factor from
Equation (3) becomes: $\Delta=\frac{2Q}{d}$. The code size for Index is
$|\mathcal{C}|\geq 2^{d}/\sqrt{2d}$. Note that $|\mathcal{C}|$ is still
$2^{\Omega(d)}$, since the subtracted $\frac{1}{2}\log_{2}(2d)$ term in the
exponent is dominated by $d$. The instance is an array whose rows are the
child words in $\textsf{star}^{Q}(\mathcal{C})$, $Q^{d/2}$ of them for each
codeword. Hence, the size of the instance to the $F_{0}$ algorithm is:
$\Theta\left(2^{d}Q^{d/2}d^{1/2}\right)$. ∎
Corollary 4.3 follows from Corollary 4.2 by setting $Q=d$.
###### Corollary 4.3.
A $2$-factor approximation to the projected $F_{0}$ problem on a query of size
$d/2$ needs space $2^{\Omega(d)}$ with an instance $A$ whose size is
$\Theta(2^{d}d^{\frac{d+1}{2}})$.
Theorem 4.1 and its corollaries suffice to obtain space bounds over all
choices of $Q$. However, $Q$ could potentially grow to be very large, which
may be unsatisfying. As a result, we will argue how the error varies for fixed
$Q$. To do so, we map $Q$ down to a smaller alphabet of size $q$ and use this
code to define the communication problem from which the lower bound will
follow. The cost of this is that the instance is a logarithmic factor larger
in the dimensionality.
###### Corollary 4.4.
Let $q$ be a target alphabet size such that $2\leq q\leq|Q|$. Let
$\alpha=Q\log_{q}(Q)\geq 1$ and $d^{\prime}=d\log_{q}(Q)$. There exists a
choice of input data $A\in[q]^{n\times d^{\prime}}$ for which any algorithm
for the projected $F_{0}$ problem over queries of size $d/2$ that guarantees
error $\tilde{O}(\alpha/d^{\prime})$ requires space $2^{\Omega(d)}$.
###### Proof.
Fix the binary code $\mathcal{C}=\mathcal{B}(d,d/2)$ and generate all child
words over alphabet $[Q]$ to obtain the approximation factor $\Delta=2Q/d$ as
in Corollary 4.3. For every $w\in\mathcal{C}$ there are $Q^{d/2}$ child words
so the child code $\mathcal{C}_{Q}$ now has size
$n=\Theta(2^{d}Q^{d/2}/\sqrt{d})$ words. Since $Q$ can be arbitrarily large,
we encode it via a mapping to a smaller alphabet but over a slightly larger
dimension; specifically, use a function $[Q]\mapsto[q]^{\log_{q}(Q)}$ which
generates $q$-ary strings for each symbol in $[Q]$. Hence, all of the stored
strings in $\mathcal{C}_{Q}\subset[Q]^{d}$ are equivalent to a collection,
$\mathcal{C}_{q}$ over $[q]^{d\log_{q}(Q)}$. Although
$|\mathcal{C}_{Q}|=|\mathcal{C}_{q}|$, words in $\mathcal{C}_{Q}$ are length
$d$, while the equivalent word in $\mathcal{C}_{q}$ has length $d\log_{q}(Q)$.
This collection of words from $\mathcal{C}_{q}$ now defines the instance
$A\in[q]^{n\times d\log_{q}(Q)}$, each word being a row of $A$. Taking
$\alpha=Q\log_{q}(Q)$ and $d^{\prime}=d\log_{q}(Q)$ results in an
approximation factor of:
(4) $\Delta=\frac{2Q}{d}=\frac{2\alpha}{d^{\prime}}.$
Alice’s input vector $\mathbf{a}$ is defined by the same code $\mathcal{C}$
and held set $T\subset\mathcal{C}$ as in Theorem 4.1 so we incur the same
space bound. Likewise, Bob’s test vector $y$ and column query $S$ also remain
the same as in that theorem.
∎
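The alphabet reduction in this proof replaces each symbol of $[Q]$ with its fixed-width base-$q$ expansion. A hypothetical helper sketching this mapping (names are our own):

```python
def reduce_alphabet(word, Q, q):
    """Re-encode a word over {0,...,Q-1} as a word over {0,...,q-1} by
    replacing each symbol with its fixed-width base-q expansion."""
    width = 1
    while q ** width < Q:          # width = ceil(log_q Q)
        width += 1
    out = []
    for s in word:
        digits = []
        for _ in range(width):
            digits.append(s % q)
            s //= q
        out.extend(reversed(digits))
    return tuple(out)

# A length-3 word over Q = 16 becomes a length-12 word over q = 2.
w = (5, 0, 15)
encoded = reduce_alphabet(w, Q=16, q=2)
```

Distinct words stay distinct under this map, which is why the reduction preserves the frequency structure of the instance.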
Corollary 4.4 says that the same accuracy guarantee as Corollary 4.2 can be
given by reducing the arbitrarily large alphabet $[Q]$ to a smaller one over
$[q]$. However, the price to pay for this is that the size of the instance $A$
increases by a factor of $\log_{q}(Q)$ in the dimensionality. These various
results are summarized in Table 1.
Table 1. Comparison of the lower bounds for $F_{0}$. Theorem 4.1 uses $\mathcal{C}=\mathcal{B}(d,k)$; the corollaries use $\mathcal{C}=\mathcal{B}(d,d/2)$.

| | Instance $A$ for $F_{0}$ | Approx. Factor |
| --- | --- | --- |
| Theorem 4.1 | $\left(\frac{d}{k}\right)^{k}\times d$ over $[Q]$ | $\frac{Q}{k}$ |
| Corollary 4.2 | $2^{d}Q^{d/2}\times d$ over $[Q]$ | $\frac{2Q}{d}$ |
| Corollary 4.3 | $2^{d}d^{d/2}\times d$ over $[d]$ | $2$ |
| Corollary 4.4 | $2^{d}Q^{d/2}\times d\log_{q}Q$ over $[q]$ | $\frac{2Q}{d}$ |
## 5\. $\ell_{p}$-Frequency Based Problems
In this section, we extend the techniques from the previous section to
understand the complexity of projected frequency estimation problems related
to the $\ell_{p}$ norms and $F_{p}$ frequency moments (defined in Section
2.1). A number of our results are lower bounds, but we begin with a simple
sampling-based upper bound to set the stage.
### 5.1. $\ell_{p}$ Frequency Estimation
We first focus on the projected frequency estimation problem, showing that a
simple algorithm that keeps a uniform sample of the rows works for $p\leq 1$. The
algorithm uSample$(A,C,t,b)$ first builds a uniform sample of $t$ rows
(sampled with replacement at rate $\alpha=t/n$) from $A$ and evaluates the
absolute frequency of string $b$ on the sample after projection onto $C$. Let
$g$ be the absolute frequency of $b$ on the subsample. To estimate the true
frequency of $b$ on the entire dataset from the subsample, we return an
appropriately scaled estimator $\hat{f}_{e(b)}=g/\alpha$ which meets the
required bounds given in Theorem 5.1, recalling that $e(b)$ is the index
location associated with the string $b$. The proof follows by a standard
Chernoff bound argument and is given in Appendix A.1.
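A sketch of uSample as described above (the interface and helper names are our own; the algorithm itself is simply sample-and-rescale):

```python
import random

def usample(A, C, t, b, seed=0):
    """Estimate the absolute frequency of pattern b on columns C of A from
    a uniform sample of t rows (with replacement), rescaled by n / t."""
    rng = random.Random(seed)
    n = len(A)
    sample = [A[rng.randrange(n)] for _ in range(t)]
    g = sum(1 for row in sample if tuple(row[i] for i in C) == tuple(b))
    return g * n / t               # hat f = g / alpha with alpha = t / n

# Toy data: the pattern (1, 0) on columns (0, 2) appears in 1000 of 2000 rows.
A = [(1, 1, 0), (1, 0, 0)] * 500 + [(0, 1, 1)] * 1000
est = usample(A, C=(0, 2), t=400, b=(1, 0))
```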
###### Theorem 5.1.
Let $A\in\\{0,1\\}^{n\times d}$ be the input data and let $C\subseteq[d]$ be a
given column query. For a given string $b\in\\{0,1\\}^{C}$, the absolute
frequency of $b$, $f_{e(b)}$, can be estimated up to $\varepsilon\|f\|_{1}$
additive error using a uniform sample of size
$O(\varepsilon^{-2}\log(1/\delta))$ with probability at least $1-\delta$.
The same algorithm can be used to obtain bounds for all $0<p<1$. By noting
that $\|f\|_{1}\leq\|f\|_{p}$ for $0<p<1$ we can obtain the following
corollary.
###### Corollary 5.2.
Let $A,b,C$ be as in Theorem 5.1. Let $0<p<1$. Then uniformly sampling
$O(\varepsilon^{-2}\log(1/\delta))$ rows achieves
$\left|\hat{f}_{e(b)}-f_{e(b)}\right|\leq\varepsilon\|f\|_{p}$ with
probability at least $1-\delta$.
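The norm inequality behind Corollary 5.2, $\|f\|_{1}\leq\|f\|_{p}$ for $0<p<1$, is easy to check numerically (a quick sketch, not part of the proof):

```python
def lp_norm(f, p):
    """The ell_p norm: (sum of |x|^p)^(1/p); a quasi-norm for p < 1."""
    return sum(abs(x) ** p for x in f) ** (1.0 / p)

f = [3.0, 1.0, 2.0, 5.0]
l1 = lp_norm(f, 1.0)               # ||f||_1 = 11
# For 0 < p < 1 the ell_p quasi-norm dominates the ell_1 norm.
dominated = all(lp_norm(f, p) >= l1 for p in (0.25, 0.5, 0.75))
```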
Both Theorem 5.1 and Corollary 5.2 are stated as if $C$ is given. However,
since the sampling did not rely on $C$ in any way, we can sample complete rows
of the input uniformly prior to receiving the query $C$, which is revealed
after observing the data. The uniform sampling approach also allows us to
identify the $\ell_{p}$ heavy hitters in small space: for each item included
in the sample (when projected onto column set $C$), we use the sample to
estimate its frequency, and declare those with high enough estimated frequency
to be the heavy hitters. By contrast, for $p>1$ we are able to obtain a
$2^{\Omega(d)}$ space lower bound, given in the next section.
### 5.2. $\ell_{p}$ Heavy Hitters Lower Bound
Recall that the objective of (projected) $\ell_{p}$ heavy hitters is to find
all those rows in $A^{C}$ whose frequency is at least some fraction of the
$\ell_{p}$ norm of the frequency distribution of this projection. For the
lower bound we need a randomly sampled code as defined in Lemma 3.2. The lower
bound argument follows a similar outline to the bound for $F_{0}$, although
now Bob’s query is on the complement of the support of his test vector $y$
(i.e., $S=[d]\setminus\operatorname{supp}(y)$) rather than
$\operatorname{supp}(y)$. Akin to Theorem 4.1, we will create a reduction from
the Index problem in communication complexity, and use its communication lower
bound to argue a space lower bound for projected $\ell_{p}$ heavy hitters. The
proof will generate an instance of $\ell_{p}$ heavy hitters based on encoding
a collection of codewords, and consider in particular the status of the string
corresponding to all zeroes. We will consider two cases: when Bob’s query
string is represented in Alice’s set of codewords, then the all zeros string
will be a heavy hitter (for a subset of columns determined by the query); and
when Bob’s string is not in the set, then the all zeros string will not be a
heavy hitter. We begin by setting up the encoding of the input to the Index
instance.
###### Theorem 5.3.
Let $\phi\in(0,1)$ be a parameter and fix $p>1$. Any algorithm which can
obtain a constant factor approximation to the projected $\ell_{p}$-heavy
hitters problem requires space $2^{\Omega(d)}$.
###### Proof.
Fix $\epsilon>0$. Let $\mathcal{C}\subset\mathcal{B}(d,\epsilon d)$ be a code
whose words have weight $\epsilon d$ and any two distinct words $x,y$ have at
most $(\epsilon^{2}+\gamma)d$ ones in common. By Lemma 3.2 such a
$\mathcal{C}$ exists and $|\mathcal{C}|=2^{\Omega_{\gamma}(d)}$.
Suppose Alice holds a subset $T\subset\mathcal{C}$. Let
$\mathbf{a}\in\\{0,1\\}^{|\mathcal{C}|}$ be the characteristic vector over all
length-$d$ binary strings for which $\mathbf{a}_{e(u)}=1$ if and only if Alice
holds $u\in T$. Bob holds $y\in\mathcal{C}$ and wants to determine if Alice
holds $y\in T$. Ascertaining whether or not Alice holds $y$ would be
sufficient for Bob to solve Index and incur the $\Omega(|\mathcal{C}|)$ lower
bound.
The input array, $A$, for the $\ell_{p}$-heavy hitters problem is constructed
as follows.
1. Alice populates $A$ with $2^{\epsilon d}$ copies of the length-$d$ all-ones vector, $\mathbf{1}_{d}$.
2. Next, Alice takes $Q=2$ and inserts into $A$ the collection $\textsf{star}^{Q}(T)$, which is the expansion of her input strings to all child words in binary. That is, for every $s\in T$, Alice computes all binary strings $x$ of length $d$ with $\operatorname{supp}(x)\subseteq\operatorname{supp}(s)$ and includes these in $A$.
Let $S=[d]\setminus\operatorname{supp}(y)$, so that $|S|=d-\epsilon
d=(1-\epsilon)d$. Without loss of generality we may assume
$S=\\{1,2,\ldots,(1-\epsilon)d\\}$ and we denote the $(1-\epsilon)d$ length
vector which is identically $0$ on $S$ by $\mathbf{0}_{S}$. Suppose there is
an algorithm $\mathcal{A}$ which approximates the $\ell_{p}$-heavy hitters
problem on a given column query up to a constant approximation factor. Bob
queries $\mathcal{A}$ for the heavy hitters in the table $A$ under the column
query given by the set $S$, and then uses this information to answer whether
or not $y\in T$.
Case 1: $y\in T$. If $y\in T$, then we claim that $\mathbf{0}_{S}$ is a
$\phi$-$\ell_{p}$ heavy hitter for some constant $\phi$, i.e.,
$f_{e(\mathbf{0}_{S})}\geq\phi\|f\|_{p}$. We will manipulate the equivalent
condition $f_{e(\mathbf{0}_{S})}^{p}\geq\phi^{p}F_{p}$. Since $y\in T$, the
set $\textsf{star}(y)$ is included in the table $A$ as Alice inserted
$\textsf{star}(s)$ for every $s$ that she holds. Consider any child word of
$y$, that is, a $w\in\textsf{star}(y)$. Since $y$ is supported only on
$[d]\setminus S$ and $\operatorname{supp}(w)\subseteq\operatorname{supp}(y)$,
every $w_{i}=0$ for $i\in S$. So $\mathbf{0}_{S}$ is observed once for every
$w\in\textsf{star}(y)$ and there are $|\textsf{star}(y)|=2^{\epsilon d}$ such
$w$. Hence, $\mathbf{0}_{S}$ occurs at least $2^{\epsilon d}$ times.
Now that we have a lower bound on the frequency of $\mathbf{0}_{S}$, it
remains to upper bound the $F_{p}$ value when $y\in T$ so that we are assured
$\mathbf{0}_{S}$ will be a heavy hitter in this instance. The quantity we seek
is the $F_{p}$ value of all vectors in $A^{S}$, written $F_{p}(A,S)$; which we
decompose into the contribution from $\mathbf{0}_{S}$ present due to $y$ being
in $T$, and two special cases from the block of $2^{\varepsilon d}$ all-ones
rows and ‘extra’ copies of $\mathbf{0}_{S}$ which are contributed by vectors
$y^{\prime}\neq y$. We claim that this $F_{p}(A,S)$ value is at most
$|\mathcal{C}|^{1+p}2^{\epsilon d+(\epsilon^{2}+\gamma)dp}+3\cdot 2^{\epsilon
pd}$.
First, let $y^{\prime}\in\mathcal{C}$ with $y^{\prime}\neq y$ and consider
prefixes $z$ supported on $S$ which can be generated by possible child words
from $\textsf{star}(y^{\prime})$. Since our code requires that
$|y^{\prime}\cap y|\leq(\epsilon^{2}+\gamma)d$, $y^{\prime}$ can have at most
$(\epsilon^{2}+\gamma)d$ $1$s located in $\bar{S}=[d]\setminus S$, and hence
must have at least $(\epsilon-\epsilon^{2}-\gamma)d$ $1$s located in $S$.
Since $|\textsf{star}(y^{\prime})|=2^{\epsilon d}$, the number of copies of
$z$ inserted is at most $2^{\epsilon d-(\epsilon d-\epsilon^{2}d-\gamma
d)}=2^{\epsilon^{2}d+\gamma d}$. This occurs for every
$y^{\prime}\in\mathcal{C}$ so the total number of occurrences of $z$ is at most
$|\mathcal{C}|2^{(\epsilon^{2}+\gamma)d}$. The contribution to $F_{p}$ for
this scenario is then $|\mathcal{C}|^{p}2^{(\epsilon^{2}+\gamma)dp}$. Observe
that each codeword $y^{\prime}$ generates at most $2^{\epsilon d}$ vectors
under the $\textsf{star}(y^{\prime})$ operator, so we have an upper bound of
$|\mathcal{C}|2^{\epsilon d}$ such vectors generated, with a total
contribution of $|\mathcal{C}|^{1+p}2^{(\epsilon^{2}p+\epsilon+\gamma p)d}$.
Next, we focus on the two special vectors to count which have a high
contribution to the $F_{p}$ value. Recall that Alice specifically included
$\mathbf{1}_{d}$ into $A$ $2^{\epsilon d}$ times so the $p$-th powered
frequency is exactly $2^{\epsilon pd}$ for this term. From the above argument,
$\mathbf{0}_{S}$ also has frequency $2^{\epsilon d}$ from $\textsf{star}(y)$.
But $\mathbf{0}_{S}$ is also created at most $2^{(\epsilon^{2}+\gamma)d}$
times from each $y^{\prime}\neq y$ in $T$, giving an additional count of at
most $|\mathcal{C}|2^{(\epsilon^{2}+\gamma)d}$. Based on our choice of
$\epsilon$ and $\gamma$, we can ensure that this is asymptotically smaller
than $2^{\epsilon d}$, and so the total contribution from these two special
vectors is at most $3\cdot 2^{\epsilon d}$. So in total we achieve that
$F_{p}$ is at most $|\mathcal{C}|^{1+p}2^{\epsilon
d+(\epsilon^{2}+\gamma)dp}+3\cdot 2^{\epsilon pd}$, as claimed.
Then $\mathbf{0}_{S}$ meets the definition to be a $\phi$-$\ell_{p}$ heavy
hitter provided
$2^{\epsilon pd}>\phi^{p}(|\mathcal{C}|^{1+p}2^{\epsilon
d+(\epsilon^{2}+\gamma)pd}+3\cdot 2^{\epsilon pd}).$
Assuming $p>1$, and choosing $\epsilon$ sufficiently smaller than $(p-1)/p$
and $\gamma$ sufficiently small, we have that
$|\mathcal{C}|^{1+p}2^{\epsilon d+(\epsilon^{2}+\gamma)pd}\leq
2^{O(\gamma^{2}d(1+p))+\epsilon d+\epsilon(p-1)d+\gamma pd}\leq 2^{\epsilon
pd}.$
Hence, we require $2^{\epsilon pd}>\phi^{p}O(2^{\epsilon pd})$, i.e.,
$2^{\epsilon d}>\phi O(2^{\epsilon d})$, which is satisfied for a suitably
small but constant $\phi$.
Case 2: $y\notin T$. On the other hand, suppose that $y\notin T$. Then the
claim is that $\mathbf{0}_{S}$ is not a $\phi$-$\ell_{p}$-heavy hitter. Now
the vector $\mathbf{0}_{S}$ does not occur with a high frequency because
$\textsf{star}(y)$ is not included in $A$. However, certain child words in
$\textsf{star}(T)$ could also generate $\mathbf{0}_{S}$ when projected onto
$S$ and this is the contribution we need to upper bound. Again, any codeword
$s\in T$ has at least $(\epsilon-\epsilon^{2}-\gamma)d$ $1$s present on $S$.
So for a particular $s\in T$, $\mathbf{0}_{S}$ can occur
$2^{\epsilon^{2}d+\gamma d}$ times. Taken over all $y^{\prime}\in\mathcal{C}$
which Alice includes in $A$, the frequency of $\mathbf{0}_{S}$ in this
case is at most $|\mathcal{C}|2^{\epsilon^{2}d+\gamma d}$. Taking
$\varepsilon<1/3,\gamma<\varepsilon/3$ and using
$|\mathcal{C}|=2^{\gamma^{2}d/\ln 2}$ (Lemma 3.2) we have
$f_{e(\mathbf{0}_{S})}\leq 2^{0.72\varepsilon d}$. Meanwhile, there are
$2^{\epsilon d}$ copies of the string $\mathbf{1}_{d}$ inserted into $A$
meaning that $F_{p}(A,S)\geq 2^{\epsilon pd}$ and hence $F_{p}^{1/p}$ is
strictly greater than $f_{e(\mathbf{0}_{S})}$. Hence, $\mathbf{0}_{S}$ is
_not_ a $\phi$-$\ell_{p}$ heavy hitter provided that
$f_{e(\mathbf{0}_{S})}/F_{p}^{1/p}=2^{-0.28\varepsilon d}$ is strictly less
than $\phi=1/4$; this is satisfied for suitable $\varepsilon$ and $d$.
Concluding the proof. Bob can use his test vector $y$ and a query $S$ with a
constant factor approximation algorithm $\mathcal{A}$ for the $\ell_{p}$-heavy
hitters problem and distinguish between the two cases of Alice holding $y$ or
not based on whether $\mathbf{0}_{S}$ is reported. As a result, Bob can
determine if $y\in T$ and consequently solve Index, thus incurring the
$\Omega(|\mathcal{C}|)=2^{\Omega(d)}$ lower bound. ∎
The instance $A$ is initialized with $2^{\varepsilon d}$ rows of the vector
$\mathbf{1}_{d}$ and the child words $\textsf{star}^{Q}(T)$. For any $t\in T$
we have $|\textsf{star}^{Q}(t)|=2^{\varepsilon d}$, so the size of the
instance $A$ is $(|T|+1)2^{\varepsilon d}\times d$.
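At toy scale the two cases of this proof can be made concrete (here $d=8$, $\epsilon d=2$, $Q=2$; parameters are far below the asymptotic regime, so only the raw frequency of $\mathbf{0}_{S}$ is compared, not the full heavy-hitter condition):

```python
from itertools import product

def star2(y, d):
    """Binary child words of y: all z with supp(z) contained in supp(y)."""
    supp = [i for i in range(d) if y[i]]
    children = []
    for vals in product((0, 1), repeat=len(supp)):
        z = [0] * d
        for i, v in zip(supp, vals):
            z[i] = v
        children.append(tuple(z))
    return children

def freq_zero_on_S(T, y, d):
    """Frequency of the all-zeros pattern on S (the complement of supp(y))
    in the instance of Theorem 5.3: an all-ones block plus star(T)."""
    S = [i for i in range(d) if not y[i]]
    rows = [(1,) * d] * (2 ** sum(y))            # 2^{eps d} all-ones rows
    for s in T:
        rows.extend(star2(s, d))
    return sum(1 for r in rows if all(r[i] == 0 for i in S))

d = 8
y  = (1, 1, 0, 0, 0, 0, 0, 0)                    # Bob's test word
y2 = (1, 0, 1, 0, 0, 0, 0, 0)                    # shares one position with y
y3 = (1, 0, 0, 1, 0, 0, 0, 0)

f_in  = freq_zero_on_S([y, y2, y3], y, d)        # y in Alice's set T
f_out = freq_zero_on_S([y2, y3], y, d)           # y absent from T
```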
### 5.3. $F_{p}$ Estimation
The space complexity of approximating the frequency moments $F_{p}$ has been
widely studied since the pioneering work of Alon, Matias and Szegedy (Alon et
al., 1999). Here, we investigate their complexity under projection. For $p=1$,
the $F_{1}$ value is always the number $n$ of rows in the original instance
irrespective of the column set $C$, so only one word of space is required. We
therefore devote attention to $p\neq 1$.
The reduction to Index for Theorem 5.4 follows a similar outline as Theorem
5.3 for $p>1$. For $p<1$, we encode the problem slightly differently, closer
to that in Theorem 4.1. Again, the reduction to Index relies on Bob
determining whether or not Alice holds $y$, which for $F_{p}$ estimation
amounts to Bob evaluating $F_{p}(A,S)$ and comparing to a threshold value.
###### Theorem 5.4.
Fix a real number $p>0$ with $p\neq 1$. A constant factor approximation to the
projected $F_{p}$ estimation problem requires space $2^{\Omega(d)}$.
###### Proof.
For $p>1$ we begin by noticing that in the proof for Theorem 5.3 one can also
monitor the $F_{p}$ value of the input to the problem rather than simply
checking the heavy hitters. In particular, depending on whether or not Alice
holds Bob’s test word, $y$, the projected $F_{p}$ changes by more than a
constant. Consequently, we invoke the same proof for $F_{p}$, $p>1$ and obtain
the same $2^{\Omega(d)}$ lower bound.
On the other hand, suppose that $p<1$. We assume a code
$\mathcal{C}\subset\mathcal{B}(d,\epsilon d)$ with the property that any
distinct $x,x^{\prime}\in\mathcal{C}$ have $|x\cap x^{\prime}|\leq cd$ for
some small constant $c>\epsilon^{2}$ (see Lemma 3.2). Again, Alice holds a
subset $T\subseteq\mathcal{C}$ and inserts $\textsf{star}(T)$ into the table
for the problem $A$. Throughout this proof we use a binary alphabet so
suppress the $Q$ notation from $\textsf{star}^{Q}(\cdot)$. Bob holds a test
vector $y\in\mathcal{C}$ and is tasked with determining whether or not Alice
holds $y\in T$. We distinguish between the cases when Alice holds $y\in T$ or
not as follows. Bob uses $y$ to determine the query column set
$S=\operatorname{supp}(y)$ and will compare against the returned frequency
value from the algorithm.
Case 1: $y\not\in T$. Consider some
$y^{\prime}\in\mathcal{C}\setminus\\{y\\}$. Since $y$ and $y^{\prime}$ are
both codewords, they can have a $1$ coincident in at most $cd$ locations. So
if Alice does not hold $y$ then the codewords we need to consider are all
binary words in the code which have at most $cd$ $1$s in common with $y$ on
$S$. We denote this collection of words by $M$, i.e., the set of binary
strings of length $d$ that have at most $cd$ locations set to $1$. There are
$r$ such vectors, where $r$ is defined by:
$r\triangleq\sum_{i=0}^{cd}{d\choose i}\leq cd\cdot{d\choose
cd}=O(d)2^{\Theta(cd)}.$
The total count of all strings generated by Alice’s encoding is at most
$2^{\epsilon d}|\mathcal{C}|$: each string in $\mathcal{C}$ generates
$2^{\epsilon d}$ subwords from the $\textsf{star}(\cdot)$ operation. We now
evaluate the $\ell_{p}$-frequency of elements in the set $M$, denoted
$F_{p}(M)$. For $p<1$, the value $F_{p}(M)$ is maximized when every element of
$M$ has the same number of occurrences, $|\mathcal{C}|2^{\epsilon d}/r$. As
there are at most $r$ members of $M$, we obtain
$F_{p}(M)\leq|\mathcal{C}|^{p}2^{\epsilon dp}r^{1-p}$. Recalling the bounds on
$|\mathcal{C}|$ and $r$, this is:
(5) $2^{cdp+\epsilon dp+\Theta((1-p)cd)}\cdot O(d^{1-p}).$
We can now choose $c$ to be a small enough constant so that (5) is at most
$2^{(1-\alpha)\epsilon d}$ for a constant $\alpha>0$ by Lemma A.2 in Appendix
A.2.
Case 2: $y\in T$. Now consider the scenario when $y\in T$ so that Alice has
inserted $\textsf{star}(y)$ into the table $A$. Here, we can be sure that each
of the $2^{\epsilon d}$ strings in $\textsf{star}(y)$ appears at least once
over the column set $S$, and so the $F_{p}$ value is at least $2^{\epsilon
d}1^{p}=2^{\epsilon d}$.
We observe that these two cases obtain the constant factor separation, as
required. Then, Bob can use his test vector $y$ and a query $S$ with a
constant factor approximation algorithm to the projected $F_{p}$-estimation
problem and distinguish between the two cases of Alice holding $y$ or not.
Thus, Bob can determine if $y\in T$ and consequently solve the Index problem,
incurring the $\Omega(|\mathcal{C}|)=2^{\Omega_{c}(d)}$ lower bound for a $c$
arbitrarily small. ∎
###### Remark 2.
For $p>1$ we adopt the same instance as in Theorem 5.3 so the instance is of
size $(|T|+1)2^{\varepsilon d}\times d$. On the other hand, for $0<p<1$, only
the words in $\textsf{star}^{Q}(T)$ are required so $A$ has size
$|T|2^{\varepsilon d}\times d$.
### 5.4. $\ell_{p}$-Sampling
In the projected $\ell_{p}$-sampling problem, the goal is to sample a row in
$A^{C}$ proportional to the $p$-th power of its number of occurrences. One
approach to the standard (non-projected) $\ell_{p}$-sampling problem on a
vector $x$ is to subsample and find the $\ell_{p}$-heavy hitters (Larsen et
al., 2016). Consequently, if one can find $\ell_{p}$-heavy hitters for a
certain value of $p$, then one can perform $\ell_{p}$-sampling in the same
amount of space, up to polylogarithmic factors. Interestingly, for projected
$\ell_{p}$-sampling, this is not the case, and we show for every $p\neq 1$,
there is a $2^{\Omega(d)}$ lower bound. This is despite the fact that we can
estimate $\ell_{p}$-frequencies efficiently for $0<p<1$, and hence find the
heavy hitters (Section 5.1).
###### Theorem 5.5.
Fix a real number $p>0$ with $p\neq 1$, and let $\varepsilon\in(0,1/2)$. Let
$S\subseteq[d]$ be a column query and $i$ be a pattern observed on the
projected data $A^{S}$. Any algorithm which returns a pattern $i$ sampled from
a distribution $(p_{1},\ldots,p_{n})$, where
$p_{i}\in(1\pm\varepsilon)\frac{f_{e(i)}^{p}}{\|f(A,S)\|_{p}^{p}}+\Delta$
together with a $(1\pm\varepsilon^{\prime})$-approximation to $p_{i}$,
$\Delta=1/\operatorname{poly}(nd)$ and $\varepsilon^{\prime}>0$ is a
sufficiently small constant, requires $2^{\Omega(d)}$ bits of space.
###### Proof.
Case 1: $p>1$. The proof of Theorem 5.3 argues that the vector
$\mathbf{0}_{S}$ is a constant factor $\ell_{p}$-heavy hitter for any $p>1$ if
and only if Bob’s test vector $y$ is in Alice’s input set $T$, via a reduction
from Index. That is, we argue that there are constants $C_{1}>C_{2}$ for which
if $y\in T$, then $f_{e(\mathbf{0}_{S})}^{p}\geq C_{1}F_{p}$, while if
$y\notin T$, then $f_{e(\mathbf{0}_{S})}^{p}<C_{2}F_{p}$. Consequently, given
an $\ell_{p}$-sampler with the guarantees as described in the theorem
statement, then the (empirical) probability of sampling the item
$\mathbf{0}_{S}$ should allow us to distinguish the two cases. This holds even
tolerating the $(1+\varepsilon^{\prime})$-approximation in sampling rate, for
a sufficiently small constant $\varepsilon^{\prime}$. In particular, if $y\in
T$, then we will indeed sample $\mathbf{0}_{S}$ with $\Omega(1)$ probability,
which can be amplified by independent repetition; whereas, if $y\notin T$, we
do not expect to sample $\mathbf{0}_{S}$ more than a handful of times.
Consequently, for $p>1$, an $\ell_{p}$-sampler can be used to solve the
$\ell_{p}$-heavy hitters problem with arbitrarily large constant probability,
and thus requires $2^{\Omega(d)}$ space.
Case 2: $0<p<1$. We now turn to $0<p<1$. In the proof of Theorem 5.4, a
reduction from Index is described where Alice holds the set $T$ and Bob the
string $y$. Bob can generate the set $\textsf{star}(y)$ of size
$2^{\varepsilon d}$ which is all possible binary strings supported on the
column query $S$. From this, Bob constructs the set
$M^{\prime}=\left\\{z\in\textsf{star}(y):|\operatorname{supp}(z)|\geq\frac{\varepsilon
d}{2}\right\\}$. We observe that if $y\in T$ then at least half of the strings
in $\textsf{star}(y)$ are supported on at least $\varepsilon d/2$ coordinates
which implies $|M^{\prime}|\geq 2^{\varepsilon d-1}$. The total $F_{p}$ in
this case can be bounded by a contribution of
$|M^{\prime}|1^{p}+2^{\varepsilon d}$. The first term arises from the
$|M^{\prime}|$ strings in $M^{\prime}$ with a frequency of $1$, while the
second term is shown in Case 1 of Theorem 5.4. Since $|M^{\prime}|\leq
2^{\varepsilon d}$, we have that $F_{p}\leq 2^{\varepsilon d+1}$ in this case.
Consequently, the correct probability of $\ell_{p}$-sampling returning a
string in $M^{\prime}$ is at least $\frac{1}{4}$ for the “ideal” case of
$\varepsilon=0,\Delta=0$. Even allowing $\varepsilon<\frac{1}{2}$ and
$\Delta=1/\operatorname{poly}({nd})$, this probability is at least $1/10$.
Otherwise, if $y\not\in T$, we exploit that $y^{\prime}\neq y$ can coincide in
at most $cd=O(\varepsilon^{2}d)$ coordinates and
$|\operatorname{supp}(z)|\geq\varepsilon d/2>cd$ for any $z\in M^{\prime}$.
Hence, no $z\in M^{\prime}$ can occur in $\textsf{star}(y^{\prime})$ for
another $y^{\prime}\in\mathcal{C}\setminus\\{y\\}$ on the column projection
$S$. In this case, there should be zero probability of sampling a string in
$M^{\prime}$ (neglecting the trivial additive probability $\Delta$).
To summarize, in the case that $y\in T$, by querying the projection $S$ then a
constant fraction of the $F_{p}$-mass is on the set $M^{\prime}$, whereas when
$y\notin T$, then there is zero $F_{p}$-mass on the set $M^{\prime}$. Since
Bob knows $M^{\prime}$, he can run an $\ell_{p}$-sampler and check if the
output is in the set $M^{\prime}$, and succeed with constant probability. It
follows that Bob can solve the Index problem (amplifying success probability
by independent repetitions if needed), and thus again the space required is
$2^{\Omega(d)}$. ∎
###### Remark 3.
For $p>1$ we again adopt the same instance as in Theorem 5.3 which has size
$(|T|+1)2^{\varepsilon d}\times d$. However, for $0<p<1$, we require the
instance from Theorem 5.4 so $A$ has size $|T|2^{\varepsilon d}\times d$.
## 6\. Projected Frequency Estimation via Set Rounding
Although our lower bounds rule out the possibility of computing constant
factor approximations to projected frequency problems in sub-exponential
space, it is still possible to compute non-trivial approximations using
exponential space that is nonetheless far below the cost of naïvely enumerating all column
subsets of $[d]$. We design a class of algorithms that proceed by keeping
appropriate sketch data structures for a “net” of subsets. The net has the
property that for any query $C\subset[d]$ there is a $C^{\prime}\subset[d]$
stored in the net which is not too different from $C$. We can then answer the
query on $C$ using the summary data structure computed for columnset
$C^{\prime}$. To formalize this approach we need some further definitions, the
first of which conceptualizes the notion of a net over subsets.
###### Definition 6.1 ($\alpha$-net of subsets).
Let $\mathcal{P}\left([d]\right)$ denote the power set of $[d]$. Fix a
parameter $\alpha\in(0,1/2)$. An $\alpha$-net of $\mathcal{P}\left([d]\right)$
is the set $\mathcal{N}=\\{U:|U|\leq d/2-\alpha d\text{~{}or~{}}|U|\geq
d/2+\alpha d\\}$ which contains all subsets
$U\in\mathcal{P}\left([d]\right)$ whose size is at most $d/2-\alpha d$ or
at least $d/2+\alpha d$.
Let $H(x)=-x\log_{2}(x)-(1-x)\log_{2}(1-x)$ denote the binary entropy
function.
###### Lemma 6.2.
Let $\mathcal{N}$ be an $\alpha$-net for $\mathcal{P}\left([d]\right)$. Then
$|\mathcal{N}|\leq 2^{H(1/2-\alpha)d+1}$.
###### Proof.
The total number of subsets whose size is at most $d/2-\alpha d$ is
$\sum_{i\leq d/2-\alpha d}{d\choose i}$ and $\sum_{i\leq d/2-\alpha d}{d\choose i}\leq
2^{H(1/2-\alpha)d}$ (Galvin, 2014, Theorem 3.1). By symmetry we obtain the
same bound for the number of subsets of size at least $d/2+\alpha d$,
yielding the claimed total. ∎
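As a quick numerical check of Lemma 6.2, the exact size of the $\alpha$-net can be compared against the entropy bound $2^{H(1/2-\alpha)d+1}$; a minimal Python sketch (the function names are ours, not from the paper):

```python
import math

def binary_entropy(x: float) -> float:
    """H(x) = -x*log2(x) - (1-x)*log2(1-x), with H(0) = H(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def net_size_bound(d: int, alpha: float) -> float:
    """Entropy upper bound 2^{H(1/2-alpha)d + 1} on |N| from Lemma 6.2."""
    return 2 ** (binary_entropy(0.5 - alpha) * d + 1)

def net_size_exact(d: int, alpha: float) -> int:
    """Count subsets of [d] of size <= d/2 - alpha*d or >= d/2 + alpha*d."""
    lo, hi = d / 2 - alpha * d, d / 2 + alpha * d
    return sum(math.comb(d, i) for i in range(d + 1) if i <= lo or i >= hi)
```

For $d=20$ and $\alpha=1/4$ the exact net holds $43{,}400$ subsets, comfortably under both the entropy bound and the trivial $2^{20}$.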
### 6.1. From $\alpha$-nets to Projections
Input: Data $A\in\\{0,1\\}^{n\times d}$, parameter $\alpha\in(0,1/2)$,
frequency estimation problem $P$, query $C$ revealed after $A$
1 Function ProjectedFreq($A,\alpha,C$):
2 Generate an $\alpha$-net $\mathcal{N}$
3 For every $U\in\mathcal{N}$ evaluate a $\beta$-approximate sketch to
estimate $P(A,U)$
4 Given a projection query $C$ after observing $A$:
5 Obtain $C^{\prime}$, an $\alpha$-neighbour to $C$ in $\mathcal{N}$
6 return $P(A,C^{\prime})$ to $\beta$ relative error
Algorithm 1: Projected frequency by query rounding
Figure 1. Space-approximation tradeoff for $d=20$ as $\alpha$ is varied from
$0$ to $1/2$. Relative space is $2^{H(1/2-\alpha)d}/2^{d}$.
Suppose that we are tasked with answering problem $P=P(A,C)$ on a projection
query $C$. We know that if $C$ is known ahead of time then we can encode the
input data $A\in[Q]^{n\times d}$ on projection $C$ as a standard stream over
the alphabet $[Q]^{|C|}$. The use of $\alpha$-nets allows us to sketch some of the input
and use this to approximately answer a query. For a standard streaming
problem, we will say that an algorithm yields a _$\beta$ -approximation_ to
the true solution $z^{*}$ if the returned estimate $z\in[z^{*}/\beta,\beta
z^{*}]$. A sketch obtaining such approximation guarantees will be referred to
as a $\beta$ approximate sketch. We additionally need the following notion of
error due to the distortion incurred when answering queries on elements of the
$\alpha$-net rather than the given query.
###### Definition 6.3 (Rounding distortion).
Let $P=P(A,C)$ be a projection query for the problem $P$ on input
$A\in[Q]^{n\times d}$ with projection $C$. Let
$\mathcal{N}\subset\mathcal{P}\left([d]\right)$ be an $\alpha$-net. The _rounding
distortion $r(\alpha,P)$_ is the worst-case deterministic error incurred by
solving $P(A,C^{\prime})$ rather than $P(A,C)$ for an $\alpha$-neighbour
$C^{\prime}\in\mathcal{N}$ of $C$ so that $P(A,C)/r(\alpha,P)\leq
P(A,C^{\prime})\leq r(\alpha,P)P(A,C)$.
Definition 6.3 is easiest to conceptualize for the $F_{0}$ problem when
$A\in\\{0,1\\}^{n\times d}$. Specifically, $P=F_{0}$ and the task to solve is
$P=F_{0}(A,C)$. For a given query $C$, with an $\alpha$-neighbour $C^{\prime}$ in the
net, the gap between the number of distinct items observed on $C^{\prime}$ at
most doubles for each column in the set difference between $C$ and
$C^{\prime}$. Since $C^{\prime}$ is an $\alpha$-neighbour, we have
$|C^{\prime}\operatorname{\Delta}C|\leq\alpha d$ so the worst-case
approximation factor in the number of distinct items observed over
$C^{\prime}$ rather than $C$ is $2^{\alpha d}$.
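The rounding step itself is simple to sketch. Assuming the net of Definition 6.1 (all subsets of size at most $d/2-\alpha d$ or at least $d/2+\alpha d$), a query inside the forbidden middle band is moved to the nearer admissible size by greedily dropping or adding columns; `round_query` and the greedy rule are illustrative, not prescribed by the paper:

```python
import math

def round_query(C: set, d: int, alpha: float) -> set:
    """Round a query C over columns [d] to an alpha-neighbour C' in the net.

    If |C| is already at most d/2 - alpha*d or at least d/2 + alpha*d,
    C itself is in the net. Otherwise we move to the nearer boundary,
    changing at most alpha*d columns, so |C symdiff C'| <= alpha*d.
    """
    lo = math.floor(d / 2 - alpha * d)
    hi = math.ceil(d / 2 + alpha * d)
    k = len(C)
    if k <= lo or k >= hi:
        return set(C)                          # already in the net
    if k - lo <= hi - k:                       # nearer to the small side
        return set(sorted(C)[:lo])             # drop k - lo columns
    extra = [j for j in range(d) if j not in C][: hi - k]
    return set(C) | set(extra)                 # add hi - k fresh columns
```

For $d=20$, $\alpha=1/4$, a query of size $8$ is rounded down to size $5$ and one of size $12$ up to size $15$, each changing at most $\alpha d=5$ columns.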
More generally, we can categorize the rounding distortion for other typical
queries, as demonstrated in the following lemma. Note that if the query is
contained in the $\alpha$-net $\mathcal{N}$ then we will retain a sketch for that
problem; hence the distortion is only incurred for queries not contained in
the net.
###### Lemma 6.4.
Fix $\alpha\in(0,1/2)$, suppose $A\in\\{0,1\\}^{n\times d}$, and let $\mathcal{N}$
be an $\alpha$-net. If $C$ is a projection query for the following cases, the rounding
distortion can be bounded as:
1. (1)
$P=F_{0}(A,C)$ then $r(\alpha,F_{0})=2^{\alpha d}$
2. (2)
$P=F_{p}(A,C),p>1$ then $r(\alpha,F_{p})=2^{\alpha d(p-1)}$
3. (3)
$P=F_{p}(A,C),p<1$ then $r(\alpha,F_{p})=2^{\alpha d(1-p)}$
###### Proof.
Item (1) is an immediate consequence of the discussion above following
Definition 6.3 so we focus on (2) and (3). Suppose $p\geq 1$. Let
$f_{C}=f(A,C)$ denote the frequency vector associated to the projection query
$C$ over domain $[2^{|C|}]$. First, consider a single index $j\in[2^{|C|}]$
with $(f_{C})_{j}=x$. Let $C^{\prime}$ be an $\alpha$-neighbour for $C$ in
$\mathcal{N}$, and without loss of generality, assume that $|C|<|C^{\prime}|$.
The task is to estimate $\|f_{C}\|_{p}^{p}=x^{p}$ from
$\|f_{C^{\prime}}\|_{p}^{p}$, where $f_{C^{\prime}}=f(A,C^{\prime})$ is a
frequency vector over the domain $[2^{|C^{\prime}|}]$, which is a factor
$2^{|C^{\prime}\setminus C|}$ larger than the domain for $f_{C}$. However,
observe that in $f_{C^{\prime}}$, the value of $x$ is spread across the at
most $2^{\alpha d}$ entries that agree with $j$ on columns $C$. The
contribution to $F_{p}$ from these entries is at most $x^{p}$ (if the mass of
$x$ is mapped to a single entry). On the other hand, by Jensen’s inequality,
the contribution is at least $2^{\alpha d}(x/2^{\alpha d})^{p}=x^{p}/2^{\alpha
d(p-1)}$. Hence, considering all entries $j$, we obtain
$\|f_{C}\|_{p}^{p}/2^{\alpha
d(p-1)}\leq\|f_{C^{\prime}}\|_{p}^{p}\leq\|f_{C}\|_{p}^{p}$. In the case
$|C|>|C^{\prime}|$, essentially the same argument shows that
$\|f_{C}\|_{p}^{p}\leq\|f_{C^{\prime}}\|_{p}^{p}\leq\|f_{C}\|_{p}^{p}2^{\alpha
d(p-1)}$. Thus we obtain the rounding distortion of $2^{\alpha d(p-1)}$. For
$p<1$, we proceed as above, except by concavity, the ordering is reversed. ∎
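The two extremes in the proof can be checked numerically. In the sketch below, `buckets` stands in for the at most $2^{\alpha d}$ entries of $f_{C'}$ that a single entry of $f_C$ can split into; the even split is the Jensen extreme for $p>1$:

```python
x, buckets, p = 1024.0, 8, 2.0           # buckets plays the role of 2^{alpha d}
concentrated = x ** p                    # all mass on one entry (upper extreme)
spread = buckets * (x / buckets) ** p    # even split (lower extreme, by Jensen)
ratio = concentrated / spread            # equals buckets^{p-1} = 2^{alpha d (p-1)}
```

Running the same computation with $p<1$ gives `ratio == buckets ** (p - 1) < 1`, i.e., the ordering of the two extremes reverses as stated.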
Observe that the distortion reduces to $1$ (no distortion) as we approach
$p=1$ from either side. This is intuitive, since the $F_{1}$ problem is simply
to report the number of rows in the input, regardless of $C$, and so the
problem becomes “easier” as we approach $p=1$.
With these properties in hand, we can give a “meta-algorithm” as described in
Algorithm 1. In Theorem 6.5 we can fully characterize the accuracy-space
tradeoff for Algorithm 1 as a function of $\alpha$ and $d$.
###### Theorem 6.5.
Let $A\in\\{0,1\\}^{n\times d}$ be the input data and $C\subseteq[d]$ be a
projection query. Suppose $P=P(A,C)$ is the projected frequency problem,
$\alpha\in(0,1/2)$ and $r(\alpha,P)$ is the rounding distortion. With
probability at least $1-\delta$ a $\beta r(\alpha,P)$ approximation can be
obtained by keeping $\tilde{O}(2^{H(1/2-\alpha)d})$ $\beta$-approximate
sketches.
###### Proof.
Let $\mathcal{N}$ be an $\alpha$-net for $\mathcal{P}\left([d]\right)$ and for every
$U\in\mathcal{N}$ generate a $\beta$-approximate sketch for the
problem $P$ on the projection defined by $U\subseteq[d]$. Either the
projection $C\in\mathcal{N}$, in which case we can report a $\beta$ factor
approximation, or $C\notin\mathcal{N}$ in which case we take an $\alpha$-neighbour,
$C^{\prime}\in\mathcal{N}$ and return the estimate $z$ for $P(A,C^{\prime})$.
The sketch ensures that the answer to $P(A,C^{\prime})$ is obtained with
accuracy $\beta$, which by the rounding distortion is a $\beta r(\alpha,P)$
approximation. To obtain this guarantee we build one sketch for every
$U\in\mathcal{N}$, for a total of $O(2^{H(1/2-\alpha)d})$ sketches (via Lemma
6.2). By setting the failure probability for each sketch to
$\delta/|\mathcal{N}|$ and then taking a union bound over the $\alpha$-net we
achieve success probability at least $1-\delta$. ∎
We remark that similar results are possible for the other functions
considered, $\ell_{p}$ frequency estimation, $\ell_{p}$ heavy hitters and
$\ell_{p}$ sampling. The key insight is that all these functions depend at
their heart on the quantity $f_{j}/\|f\|_{p}$, the frequency of the item at
location $j$ divided by the $\ell_{p}$ norm. If we evaluate this quantity on a
superset of columns, then both the numerator and denominator may shrink or
grow, in the same way as analyzed in Lemma 6.4, and hence their ratio is
bounded by the same factor, up to a constant. Hence, we can also obtain
(multiplicative) approximation algorithms for these problems with similar
behavior.
Illustration of Bounds. First, observe that, irrespective of the problem $P$,
the number of sketches needed is sublinear in $2^{d}$. This is due to the fact
that the entropy $H(1/2-\alpha)<1$ for $\alpha>0$, so the size of the net
$|\mathcal{N}|<2^{d}$. For $0\leq p\leq 2$, we have $\beta$-approximate
sketches with $\beta=(1+\epsilon)$ whose size is
$\tilde{O}(\varepsilon^{-2})$, which is constant for constant $\epsilon$. For
example, we obtain a $2^{\alpha d}$ approximation (ignoring small constant
factors) for $F_{0}$ in space $O(2^{H(1/2-\alpha)d})$, using for instance the
$(1+\epsilon)$-approximate sketch from (Kane et al., 2010) which requires
$O(\varepsilon^{-2}+\log n^{\prime})$ bits for an input over domain
$\\{1,\dots,n^{\prime}\\}$. Since $n^{\prime}\leq 2^{d}$, and setting
$\epsilon=1$, we obtain the approximation in space $O(d2^{H(1/2-\alpha)d})$.
This is to be compared to the bounds in Section 4, where it is shown that
(binary) instances of the projected $F_{0}$ problem require space
$2^{\Omega(d)}$. These results show that the constant hidden by the $\Omega()$
notation is less than $1$.
In Figure 1 we illustrate the general behavior of the bounds for $d=20$. We
plot the _relative space_ $2^{H(1/2-\alpha)d}/2^{d}$ while varying $\alpha$
over $(0,1/2)$ (plotted in the leftmost pane). This shows the space reduction
in using the $\alpha$-net approach compared to naïvely storing all $2^{d}$ queries.
The central pane shows how the approximation factor $2^{\alpha d}$ (on a log
scale) varies with $\alpha$. We plot the space-approximation tradeoff in the
rightmost pane and the approximation factor is again plotted on a
$\log_{2}$-scale. This plot suggests that if we reduce the space by a factor
of $4$ (i.e., permit relative space $2^{-2}$) then the approximation factor is
on the order of 10s. Meanwhile, if we use relative space $2^{-8}$, then the
approximation remains on the order of hundreds: this is a substantial saving
as the number of summaries kept for the approximation is $2^{12}=4096\ll
2^{20}\approx 10^{6}$.
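The numbers quoted above follow directly from the two formulas, relative space $2^{H(1/2-\alpha)d}/2^{d}$ and approximation factor $2^{\alpha d}$; a minimal sketch (variable names are ours):

```python
import math

def binary_entropy(x: float) -> float:
    """H(x) = -x*log2(x) - (1-x)*log2(1-x), with H(0) = H(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

d = 20
for alpha in (0.05, 0.15, 0.25, 0.35, 0.45):
    log_rel_space = binary_entropy(0.5 - alpha) * d - d   # log2(|N| bound / 2^d)
    approx = 2 ** (alpha * d)                             # F_0 rounding distortion
    print(f"alpha={alpha:.2f}  relative space = 2^{log_rel_space:6.1f}  "
          f"approximation ~ {approx:10.0f}")
```

At $\alpha=0.35$ this gives relative space roughly $2^{-7.8}$ with approximation factor $2^{7}=128$, matching the "order of hundreds" reading of the rightmost pane.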
## 7\. Concluding Remarks
We have introduced the topic of projected frequency estimation, with the aim
of abstracting a range of problems involving computing functions over
projected subspaces of data. Our main results show that these problems are
generally hard, in terms of the space requirements: in most cases, we require
space which is exponential in the dimensionality $d$ of the input. However,
interestingly, the exact dependence is not as simple as $2^{d}$: we show that
coarse approximations can be obtained whose cost is substantially sublinear in
$2^{d}$. Letting $N=2^{d}$, our upper and lower bounds establish that the
space complexity for a number of problems here is polynomial in $N$, though
substantially sublinear. And, in a few special cases ($\ell_{p}$ frequency
estimation for $p\leq 1$), a constant-sized sample suffices for
accurate approximation of projected frequencies. It remains an intriguing open
question to close the gaps between the upper and lower bounds, and to find the
exact form of the polynomial dependence on $N$ for these problems.
#### Acknowledgements.
We thank S. Muthukrishnan and Jacques Dark for helpful discussions about this
problem. The work of GC and CD was supported by European Research Council
grant ERC-2014-CoG 647557. The work of DW was supported by NSF grant No.
CCF-1815840, National Institute of Health grant 5R01HG 10798-2, and a Simons
Investigator Award.
## References
* Alon et al. (1999) N. Alon, Y. Matias, and M. Szegedy. 1999. The Space Complexity of Approximating the Frequency Moments. _JCSS: Journal of Computer and System Sciences_ 58 (1999), 137–147.
* Assadi et al. (2016) Sepehr Assadi, Sanjeev Khanna, Yang Li, and Val Tannen. 2016\. Algorithms for Provisioning Queries and Analytics. In _International Conference on Database Theory_. 18:1–18:18.
* Braverman et al. (2017) Vladimir Braverman, Stephen R Chestnut, Nikita Ivkin, Jelani Nelson, Zhengyu Wang, and David P Woodruff. 2017. BPTree: An $\ell_{2}$ heavy hitters algorithm using constant memory. In _Proceedings of Principles of Database Systems_. ACM, 361–376.
* Braverman et al. (2018a) Vladimir Braverman, Elena Grigorescu, Harry Lang, David P. Woodruff, and Samson Zhou. 2018a. Nearly Optimal Distinct Elements and Heavy Hitters on Sliding Windows. In _Approximation, Randomization, and Combinatorial Optimization Algorithms and Techniques (APPROX/RANDOM 2018)_ , Vol. 116. 7:1–7:22. https://doi.org/10.4230/LIPIcs.APPROX-RANDOM.2018.7
* Braverman et al. (2018b) Vladimir Braverman, Robert Krauthgamer, and Lin F. Yang. 2018b. Universal Streaming of Subset Norms. _CoRR_ abs/1812.00241 (2018). arXiv:1812.00241 http://arxiv.org/abs/1812.00241
* Chia et al. (2019) Pern Hui Chia, Damien Desfontaines, Irippuge Milinda Perera, Daniel Simmons-Marengo, Chao Li, Wei-Yen Day, Qiushi Wang, and Miguel Guevara. 2019. KHyperLogLog: Estimating Reidentifiability and Joinability of Large Data at Scale. In _IEEE Symposium on Security and Privacy (SP)_. 867–881.
* Doerr (2020) Benjamin Doerr. 2020\. Probabilistic tools for the analysis of randomized optimization heuristics. In _Theory of Evolutionary Computation_. Springer, 1–87.
* Galvin (2014) David Galvin. 2014\. Three tutorial lectures on entropy and counting. _arXiv preprint arXiv:1406.7872_ (2014).
* Jayaram and Woodruff (2018) Rajesh Jayaram and David P. Woodruff. 2018. Perfect $\ell_{p}$ Sampling in a Data Stream. In _59th IEEE Annual Symposium on Foundations of Computer Science, FOCS_. 544–555.
* Jayram and Woodruff (2009) T. S. Jayram and D. P. Woodruff. 2009. The Data Stream Space Complexity of Cascaded Norms. In _IEEE Symposium on Foundations of Computer Science (FOCS)_. 765–774. https://doi.org/10.1109/FOCS.2009.82
* Kane et al. (2010) Daniel M Kane, Jelani Nelson, and David P Woodruff. 2010\. An Optimal Algorithm for the Distinct Elements Problem. In _Proceedings of Principles of database systems_. ACM, 41–52.
* Kremer et al. (1999) Ilan Kremer, Noam Nisan, and Dana Ron. 1999. On Randomized One-Round Communication Complexity. _Computational Complexity_ 8, 1 (1999), 21–49.
* Kveton et al. (2018) Branislav Kveton, S. Muthukrishnan, Hoa T. Vu, and Yikun Xian. 2018. Finding Subcube Heavy Hitters in Analytics Data Streams. In _Proceedings of the 2018 World Wide Web Conference_. 1705–1714. https://doi.org/10.1145/3178876.3186082
* Larsen et al. (2016) Kasper Green Larsen, Jelani Nelson, Huy L. Nguyen, and Mikkel Thorup. 2016. Heavy Hitters via Cluster-Preserving Clustering. In _IEEE 57th Annual Symposium on Foundations of Computer Science, FOCS_. 61–70.
* Parsons et al. (2004) Lance Parsons, Ehtesham Haque, and Huan Liu. 2004. Subspace Clustering for High Dimensional Data: a review. _SIGKDD Explorations_ 6, 1 (2004), 90–105. https://doi.org/10.1145/1007730.1007731
* Sublinear.info ([n.d.]) Sublinear.info. [n.d.]. Open Problem 94. https://sublinear.info/index.php?title=Open_Problems:94.
* Tirthapura and Woodruff (2012) Srikanta Tirthapura and David P. Woodruff. 2012. A General Method for Estimating Correlated Aggregates over a Data Stream. In _IEEE 28th International Conference on Data Engineering (ICDE 2012), Washington, DC, USA (Arlington, Virginia), 1-5 April, 2012_. 162–173.
* Vu (2018) Hoa Vu. 2018. _Data Stream Algorithms for Large Graphs and High Dimensional Data_. Ph.D. Dissertation. U. Massachusetts at Amherst.
## Appendix A Omitted Proofs
### A.1. Omitted Proof for Section 5.1
###### Theorem A.1 (Restated Theorem 5.1).
Let $A\in\\{0,1\\}^{n\times d}$ be the input data and let $C\subseteq[d]$ be a
given column query. For a given string $b\in\\{0,1\\}^{C}$, the absolute
frequency of $b$, $f_{e(b)}$, can be estimated up to $\varepsilon\|f\|_{1}$
additive error using a uniform sample of size
$O(\varepsilon^{-2}\log(1/\delta))$ with probability at least $1-\delta$.
###### Proof.
Let $T=\\{i\in[n]:A_{i}^{C}=b\\}$ be the set of indices on which the
projection onto query set $C$ is equal to the given pattern $b$. Sample $t$
rows of $A$ uniformly with replacement at a rate $q=t/n$. Let the
(multi)-subset of rows obtained be denoted by $B$ and the matrix formed from
the rows of $B$ be denoted $\hat{A}$. For every $i\in B$, define the indicator
random variable $X_{i}$ which is $1$ if and only if the randomly sampled index
$i$ satisfies $A_{i}^{C}=b$, which occurs with probability $|T|/n$. Next, we
define $\hat{T}=T\cap B$ so that $|\hat{T}|=\sum_{i=1}^{t}X_{i}$ and the
estimator $Z=\frac{n}{t}|\hat{T}|$ has $\mathbb{E}(Z)=|T|$. Finally, apply an
additive form of the Chernoff bound:
$\displaystyle\mathbb{P}\left(|Z-\mathbb{E}(Z)|\geq\varepsilon n\right)$
$\displaystyle=\mathbb{P}\left(\left|\frac{n}{t}|\hat{T}|-|T|\right|\geq\varepsilon
n\right)$
$\displaystyle=\mathbb{P}\left(\left||\hat{T}|-\frac{t}{n}|T|\right|\geq\varepsilon
t\right)$ $\displaystyle\leq 2\exp\left(-\varepsilon^{2}t\right).$
Setting $\delta=2\exp\left(-\varepsilon^{2}t\right)$ allows us to choose
$t=O(\varepsilon^{-2}\log(1/\delta))$, which is independent of $n$ and $d$.
The final bound comes from observing that $\|f\|_{1}=n,f_{e(b)}=|T|$ and
$\hat{f}_{e(b)}=Z$. ∎
### A.2. Omitted Proof for Section 5.3
A key step in the proof of Theorem 5.4 is that in Equation (5), the expression
$2^{cdp+\epsilon dp+\Theta((1-p)cd)}\cdot O(d^{1-p})$
can be bounded by a manageable power of two. We formalize this in Lemma A.2.
###### Lemma A.2.
Under the same assumptions as in Theorem 5.4, there exists a small constant
$c>0$ which bounds Equation (5) by at most $2^{(1-\alpha)\epsilon d}$ for some
$\alpha>0$.
###### Proof.
Here we use base-$2$ logarithms and let $0<c<1$ be a small constant which we
need to bound. Also, let $0<p<1$ be a given constant. Observe that the
$O(d^{1-p})$ term only contributes positively in the exponent term of (5) so
we can ignore it from the calculation. Write
$2^{\Theta(cd(1-p))}=2^{cd\alpha(1-p)}$ for $\alpha>0$. This follows from:
(6) ${d\choose cd}\leq\left(\frac{ed}{cd}\right)^{cd}\leq
2^{(2+\log\frac{1}{c})cd}$
so let $\alpha=2+\log\frac{1}{c}$. For clarity, we proceed by using the
trivial identity $1-(1-\nu)=\nu$ and show that $1-\nu>0$ for $\nu$ a function
of $c,p,d$. We need to ensure:
(7) $cpd+{\epsilon dp}+\alpha cd(1-p)\leq(1-\alpha){\epsilon d}.$
This amounts to showing that:
$\nu\triangleq cp/\epsilon+p+\alpha c(1-p)/\epsilon\leq(1-\alpha)$
Now, $\nu=p(c/\epsilon+1-\alpha c/\epsilon)+\alpha c/\epsilon$ and we require
$\nu<1$. We may enforce the weaker property of
$p(c/\epsilon+1-\alpha/\epsilon)<1$ because $c>0$ and for $c<4$ we also have
$\alpha>0$ (inspection on Equation (6)) so $\alpha c/\epsilon>0$, and so can
be omitted. Solving for $c$ we obtain $c(1-\alpha)<\epsilon(1/p-1)$. Recalling
the definition of $\alpha$ this becomes:
(8) $c(\log c-1)<\epsilon(1/p-1)$
from which positivity on $c$ yields $c\log c<\epsilon(1/p-1)$. Hence, it is
enough to use $c<\epsilon(1/p-1)$. ∎
# Role of nucleon-nucleon correlation in transport coefficients and
gravitational-wave-driven $r$-mode instability of neutron stars
X. L. Shang Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou
730000, China School of Physics, University of Chinese Academy of Sciences,
Beijing 100049, China P. Wang National Astronomical Observatories, Chinese
Academy of Sciences, Beijing 100012, China W. Zuo Institute of Modern
Physics, Chinese Academy of Sciences, Lanzhou 730000, China School of
Physics, University of Chinese Academy of Sciences, Beijing 100049, China J.
M. Dong<EMAIL_ADDRESS>Institute of Modern Physics, Chinese Academy of
Sciences, Lanzhou 730000, China School of Physics, University of Chinese
Academy of Sciences, Beijing 100049, China
###### Abstract
The thermal conductivity and shear viscosity of dense nuclear matter, along
with the corresponding shear viscosity timescale of canonical neutron stars
(NSs), are investigated, where the effect of Fermi surface depletion (i.e.,
the $Z$-factor effect) induced by the nucleon-nucleon correlation is taken
into account. The factors which are responsible for the transport
coefficients, including the equation of state for building the stellar
structure, nucleon effective masses, in-medium cross sections, and the
$Z$-factor at Fermi surfaces, are all calculated in the framework of the
Brueckner theory. The Fermi surface depletion is found to enhance the
transport coefficients by several times at high densities, which is more
favorable to damping the gravitational-wave-driven $r$-mode instability of
NSs. Yet, the onset of the $Z$-factor-quenched neutron triplet superfluidity
provides the opposite effects, which can be much more significant than the
above-mentioned $Z$-factor effect itself. Therefore, in contrast to the
previous understanding, the nucleon shear viscosity is still smaller than the
lepton one in the superfluid NS matter at low temperatures. Accordingly, the
shear viscosity cannot stabilize canonical NSs against $r$-mode oscillations
even at core temperatures as low as $10^{6}$ K.
As a class of compact objects, neutron stars (NSs) with typical mass $M\sim
1.4M_{\odot}$ and radii $R\sim 10$ km, contain extreme neutron-rich matter at
supranuclear density in their interiors. Interestingly, they have many extreme
features that cannot be produced in terrestrial laboratories, such as
extremely strong magnetic field, superstrong gravitational field, extremely
high density, superfluid matter and superprecise spin period HPY , suggesting
their importance for fundamental physics. These intriguing features have drawn
great interest for researchers of various branches of contemporary physics and
astronomy since the discovery of pulsars (rapidly rotating NSs) in 1967.
Due to the dense matter with large isospin asymmetry inside NSs, a great deal
of attention has been paid to the recent astronomical observations that can be
used to uncover the knowledge of the NS interior. For instance, the
observations of stellar cooling enable one to constrain the equation of state
(EOS) of dense matter, superfluidity and transport properties, in combination
with indispensable theoretical analysis CAS1 ; Page1 ; CAS2 ; CAS3 ; CAS4 ;
CAS5 ; CAS6 . Moreover, a rapidly rotating NS is regarded as a gravitational
wave source due to $r$-mode instability. The $r$-mode is a non-radial
oscillation mode with Coriolis force as restoring force, which leads to the
gravitational wave radiation in rapidly rotating NSs due to the Chandrasekhar-
Friedmann-Schutz instability CFS1 ; CFS2 ; CFS3 and thus prevents the NSs
from reaching their Kepler rotational frequency Kep1 ; Kep2 . The
gravitational radiation is in turn able to excite $r$ modes in NS core and
hence enhances their oscillation amplitudes, and it is particularly
interesting from the perspective of the gravitational wave observations with
ground-based facilities. The gravitational wave signal from the $r$-mode
oscillation, if detectable in the future, could help one to probe the dense
matter properties inside NSs.
The reliable knowledge about transport coefficients of dense matter is crucial
for understanding the stellar thermal evolution and $r$-mode-instability
induced gravitational radiation. The thermal conductivity which measures the
ability to conduct the heat, is an important input for modeling NS cooling
Cool1 ; Cool2 . The shear viscosity is the primary damping mechanism that
hinders the gravitational-wave-driven $r$-mode instability of rapidly rotating
NSs at low temperatures ($<10^{9}$ K) FI1979 ; CL1987 ; IV2012 . These two
transport coefficients have been calculated by several authors based on the
formalism derived by Abrikosov and Khalatnikov (AK) from the Landau kinetic
equations for multicomponent systems AK , where the required in-medium
nucleon-nucleon cross sections are obtained by employing the correlated basis
function method and the Brueckner-Hartree-Fock (BHF) approach with realistic
nucleon-nucleon interactions Benhar2007 ; Benhar2010 ; Zhang2012 ; Baldo2013 .
In the present work, within the AK framework, we calculate the transport
coefficients by adopting the Brueckner theory with the inclusion of the effect
of Fermi surface depletion. The bulk viscosity is expected to become the
dominant dissipation mechanism for newborn NSs with rather high temperatures
($T>10^{10}$ K), and we do not consider this situation here.
It is well known that the momentum distribution of a perfect Fermi gas
follows a right-angle (step) profile at zero temperature, namely the
zero-temperature Fermi-Dirac distribution. Yet, owing to the short-range repulsive core and
the tensor interaction (collectively referred to as short-range correlation in
some references), the system deviates from the typical profile of an ideal
degenerate Fermi gas, developing a high-momentum tail SRC11 ; SRC12 ; SRC13 ;
SRC14 , and as a result a Fermi surface depletion may appear. The $Z$-factor
measures such a Fermi surface depletion. The correlation between nucleons or
its induced $Z$-factor has far-reaching impact on many issues such as nuclear
structure Science2008 ; Science2014 , superfluidity of dense nuclear matter
Dong-SRC1 ; Dong-SRC2 ; BAL , NS cooling Dong-SRC2 and the European Muon
Collaboration effect Nature2018 ; EMC2 , highlighting its fundamental
importance in nuclear physics and NS physics. For instance, Dong et al. have
shown that the superfluid gap of $\beta$-stable neutron star matter is
strongly quenched by the $Z$ factor within the generalized BCS theory Dong-
SRC1 ; Dong-SRC2 . The neutrino emissivity for NS cooling due to direct Urca,
modified Urca processes are also reduced by the $Z$-factor, and therefore the
cooling rates of young NSs are considerably slowed Dong-SRC2 .
Figure 1: (a) Energy per particle in symmetric matter, pure neutron matter,
and $\beta$-stable matter as a function of nucleonic density from the BHF
approach. The square shows the position of calculated saturation point. (b)
Density-dependent effective mass at Fermi surfaces for three different nuclear
matter configurations.
In this work, the roles of the $Z$-factor in the thermal conductivity and
shear viscosity are clarified based on the AK formulism. The neutron triplet
superfluidity in NS core quenched by the $Z$-factor effect is introduced to
examine its effects on the viscosity of $\beta$-stable NS matter. Then we
calculate the shear viscosity timescale and gravitational-wave-driven $r$-mode
growth timescale of canonical NSs to explore whether the shear viscosity is
sufficiently strong to damp the $r$-mode instability. The required in-medium
cross sections and nucleon effective masses to calculate transport
coefficients, and the $Z$-factor at the Fermi surface, together with the EOS
to establish the NS structure, are all obtained in an unified framework, i.e.,
the Brueckner theory with AV18 two-body interaction plus a microscopic three-
body force baldo ; zuo . We should stress here that in the calculation the
exact treatment of total momentum is adopted to obtain more reliable results
shangbhf .
The $Z$-factor that measures the effect of Fermi surface depletion is given by
$Z(k)=\left[1-\frac{\partial\Sigma(k,\omega)}{\partial\omega}\right]_{\omega=\epsilon(k)}^{-1}$
(1)
with the single-particle energy $\epsilon(k)$, where $\Sigma(k,\omega)$ is the
self-energy as a function of momentum $k$ and energy $\omega$. The $Z$ factor at the
Fermi surface, labeled $Z_{F}$ ($0<Z_{F}<1$), is equal to the discontinuity of
the occupation number at the Fermi surface, according to the Migdal-Luttinger
theorem Migdal1960 . Once the nucleon-nucleon correlation is included, the
nucleon momentum distribution is given as
$\displaystyle n(k)=\int\frac{d\omega}{2\pi}S(k,\omega)n^{0}(\omega)$ (2)
at finite temperature $T$ KG1962 , where $\omega$ is the energy.
$n^{0}(\omega)=1/[1+\exp(\frac{\omega-\mu}{k_{B}T})]$ is the well-known Fermi-
Dirac distribution function under temperature $T$ and chemical potential
$\mu$. The spectral function $S(k,\omega)$ can be expressed as baldo
$\displaystyle S(k,\omega)\approx Z_{F}\delta(\omega-\epsilon(k_{F})),k\approx
k_{F},$ (3)
when momentum $k$ is extremely close to the Fermi momentum $k_{F}$.
Consequently, the momentum distribution near the Fermi surface is approximated
by Dong-SRC2
$\displaystyle n(x)\approx Z_{F}n^{0}(x),k\approx k_{F},$ (4)
with $x=(\epsilon(k)-\mu)/(k_{B}T)$. Hereafter we take $x$ as the variable in
the Fermi-Dirac distribution for convenience. We stress that this
approximation is only valid when $k$ is extremely close to the Fermi surface.
The nucleon-nucleon correlation quenches the occupation probability by a
factor of $Z_{F}$ at the Fermi surface $k_{F}$, and thus it hinders particle
transitions around the Fermi surface.
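As a minimal numerical illustration of Eq. (4), the sketch below (Python, with a purely illustrative value $Z_{F}=0.7$ that is not taken from the calculations in the text) evaluates the quenched occupation near the Fermi surface:

```python
import math

def fermi_dirac(x):
    """Ideal Fermi-Dirac occupation n0(x), with x = (eps(k) - mu) / (kB * T)."""
    return 1.0 / (1.0 + math.exp(x))

def occupation_near_fermi_surface(x, z_factor):
    """Quenched occupation n(x) ~ Z_F * n0(x), valid only for k close to k_F (Eq. 4)."""
    return z_factor * fermi_dirac(x)

# Exactly at the Fermi surface (x = 0) the ideal gas gives n0 = 0.5;
# an illustrative Z_F = 0.7 quenches this to 0.35.
print(occupation_near_fermi_surface(0.0, 0.7))  # 0.35
```

The quenching acts only in the narrow thermal window around $k_{F}$; far from the Fermi surface the approximation of Eq. (4) no longer applies.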
To embody the effects of nucleonic Fermi surface depletion in the calculation
of the kinetic coefficients, we extend the Landau kinetic equation by
including the $Z$-factor in the collision integral. In the AK framework, at
temperature $T$, the collision integral without the $Z$-factor effect takes
the form of PRB35
$\displaystyle I_{1i}^{0}=-\frac{m_{i}^{*}k_{B}^{2}T^{2}}{8\pi^{4}\hbar^{6}}\int\int dx_{2}dx_{3}\,n^{0}(x_{1})n^{0}(x_{2})[1-n^{0}(x_{3})][1-n^{0}(x_{1}+x_{2}-x_{3})]\sum_{j}m_{j}^{*2}\int\int\frac{d\Omega}{4\pi}\frac{d\phi_{2}}{2\pi}\frac{W_{ij}(\theta,\phi)\beta_{ij}}{1+\delta_{ij}}[\psi(\bm{p}_{1})+\psi(\bm{p}_{2})-\psi(\bm{p}_{3})-\psi(\bm{p}_{4})],$ (5)
where $m_{i}^{*}$ ($m_{j}^{*}$) is the effective mass of nucleon $i$ ($j$),
and the small quantity $\psi(\bm{p})$ measures the departure from the
equilibrium state. Here the nucleon-nucleon scattering is limited to the Fermi
surface. For convenience, one can assume that particles $1$ and $3$ ($2$ and
$4$) belong to the same component, i.e., $|\bm{p}_{1}|=|\bm{p}_{3}|=p_{i}$
($|\bm{p}_{2}|=|\bm{p}_{4}|=p_{j}$). The transition probability $W_{ij}$ from
the two-quasiparticle state $|\bm{p}_{1},\bm{p}_{2}\rangle$ to the state
$|\bm{p}_{3},\bm{p}_{4}\rangle$,
depends only on $\theta$ and $\phi$ ($d\Omega=\sin\theta d\theta d\phi$),
where $\theta$ is the angle between $\bm{p}_{1}$ and $\bm{p}_{2}$, and $\phi$
is the angle between the $\bm{p}_{1}$-$\bm{p}_{2}$ plane and
$\bm{p}_{3}$-$\bm{p}_{4}$ plane.
$\beta_{ij}=p_{j}/(p_{i}^{2}+p_{j}^{2}+2p_{i}p_{j}\cos\theta)^{1/2}$ reduces
to $[2\cos(\theta/2)]^{-1}$ for $i=j$. $\phi_{2}$ is the azimuthal angle of
$\bm{p}_{2}$ with respect to $\bm{p}_{1}$. The factor $(1+\delta_{ij})^{-1}$
takes into account double counting of the final states in the case of like
particles.
Because the temperatures $T$ discussed here are several orders of magnitude
lower than the nucleonic Fermi temperatures (the nucleons are strongly
degenerate), the main contribution to the above integral comes from very
narrow regions of momentum space near the corresponding Fermi surfaces
$k_{F}$, just as in the calculation of neutrino emissivity in Ref. Y2001 . In
the above collision integral, $1-n^{0}(x)$ (and $n^{0}(x)$), representing the
unoccupied (and occupied) states due to the finite temperature, should be
replaced by $n(x)|_{T=0}-n(x)=Z_{F}[1-n^{0}(x)]$ (and $Z_{F}n^{0}(x)$) when
the $Z$-factor effect is included. The collision integral arises solely from
thermal excitations of particles located in a very narrow region of $\sim
k_{B}T$ around their Fermi surfaces, and states with
$|\epsilon(k)-\epsilon(k_{F})|\gg k_{B}T$ play no role in the collision
integral because the thermal energy $k_{B}T$ is too low to excite them.
Therefore, the high-momentum tail makes no contribution to the collision
integral, just as for the influence of the Fermi surface depletion on the
neutrino emissivity processes discussed in detail in Ref. Dong-SRC2 .
Consequently, the collision integral turns into
$\displaystyle I_{1i}=-\sum_{j}\frac{Z_{Fi}^{2}Z_{Fj}^{2}m_{i}^{*}m_{j}^{*2}k_{B}^{2}T^{2}}{8\pi^{4}\hbar^{6}}\int\int dx_{2}dx_{3}\,n^{0}(x_{1})n^{0}(x_{2})[1-n^{0}(x_{3})][1-n^{0}(x_{1}+x_{2}-x_{3})]\int\int\frac{d\Omega}{4\pi}\frac{d\phi_{2}}{2\pi}\frac{W_{ij}\beta_{ij}}{1+\delta_{ij}}[\psi(\bm{p}_{1})+\psi(\bm{p}_{2})-\psi(\bm{p}_{3})-\psi(\bm{p}_{4})].$ (6)
Moreover, the driving term of the Landau kinetic equation, which is
proportional to $\frac{\partial n}{\partial x}$ at the equilibrium state,
contributes a factor of $Z_{F}$ as well. Therefore, one can include the
$Z$-factor effect in the calculation of the transport coefficients by adopting
$Z_{F}$ in both the collision integral and the driving term, following the
derivations in Ref. PRB35 . For example, the collision integral reduces to the
simple form $I_{1i}=Z_{F}^{4}I_{1i}^{0}$ for pure neutron matter. One should
note that the momentum (energy) flux corresponding to the shear viscosity
(thermal conductivity) also includes $\frac{\partial n}{\partial x}$.
Consequently, the shear viscosity (thermal conductivity) is given by
$\eta=\eta^{0}/Z_{F}^{2}$ ($\kappa=\kappa^{0}/Z_{F}^{2}$) for pure neutron
matter, where $\eta^{0}$ ($\kappa^{0}$) is the corresponding transport
coefficient without the inclusion of the $Z$-factor effect.
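The net scalings quoted above can be encoded directly. The sketch below uses a purely illustrative $Z_{F}$; for pure neutron matter the collision integral carries $Z_{F}^{4}$ while the driving term and the flux each carry one power of $Z_{F}$, leaving $\eta=\eta^{0}/Z_{F}^{2}$:

```python
def quenched_collision_integral(i0, z_factor):
    """Collision integral with the Z-factor effect: I = Z_F**4 * I0 (pure neutron matter)."""
    return z_factor**4 * i0

def enhanced_transport(coeff0, z_factor):
    """Net transport coefficient: eta = eta0 / Z_F**2 (likewise kappa = kappa0 / Z_F**2)."""
    return coeff0 / z_factor**2

# An illustrative Z_F = 0.7 suppresses the collision integral to ~0.24 of its
# uncorrelated value and enhances eta (or kappa) by roughly a factor of 2.
print(quenched_collision_integral(1.0, 0.7))
print(enhanced_transport(1.0, 0.7))
```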
Within the BHF approach, the EOSs of symmetric nuclear matter ($\beta=0$),
pure neutron matter ($\beta=1$), and $\beta$-stable matter, where
$\beta=(\rho_{n}-\rho_{p})/(\rho_{n}+\rho_{p})$ denotes the isospin asymmetry
with the neutron (proton) number densities $\rho_{n}$ ($\rho_{p}$), are
displayed in Fig. 1(a). The solid square shows the calculated saturation point
of symmetric matter, which is marginally in agreement with the empirical value
owing to the introduction of the three-body force. The proton fraction in
$\beta$-stable matter is determined by the density-dependent symmetry energy,
i.e., the isospin-dependent part of the EOS. The EOSs for pure neutron matter
and $\beta$-stable matter show a distinct difference that becomes more and
more visible at high densities, indicating the non-negligible proton fraction
in NS matter. The NS interior is assumed to be composed of nucleons, electrons
and possible muons. With the conditions of electric neutrality and
$\beta$-equilibrium, the fractions of leptons (electrons and muons as
degenerate ideal gas) and their contributions to the energy density
$\varepsilon(\rho)$ and pressure $p(\rho)$ can be determined uniquely. With
the obtained $\varepsilon(\rho)$ and $p(\rho)$ of the core matter and the EOS
from Baym, Pethick, and Sutherland (BPS) BPS for crust matter as inputs, the
stellar structure, e.g., the density profile $\rho(r)$ of a static and
spherically symmetric NS, is obtained by solving the Tolman-Oppenheimer-Volkoff
(TOV) equation. The established stellar structure is essential for the final
estimation of the shear viscosity timescale and $r$-mode growth time scale of
NSs.
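The stellar-structure step can be sketched as a simple outward integration of the TOV equations. The toy code below (Python, $G=c=1$ units) is only illustrative: the polytropic `eos_eps` is a hypothetical stand-in for the tabulated BHF core EOS and BPS crust EOS used in the text, and the numbers are not in physical units.

```python
import math

def tov_profile(p_c, eos_eps, dr=1e-4, p_min=1e-10):
    """Integrate dm/dr and the TOV dp/dr outward from the centre until the
    pressure drops to ~0. Returns (radius, mass) in G = c = 1 units.
    eos_eps(p) maps pressure to energy density."""
    r, m, p = dr, 0.0, p_c
    while p > p_min:
        eps = eos_eps(p)
        dm = 4.0 * math.pi * r**2 * eps * dr
        # TOV pressure gradient with the relativistic correction factors
        dp = -((eps + p) * (m + 4.0 * math.pi * r**3 * p)
               / (r * (r - 2.0 * m))) * dr
        m += dm
        p = max(p + dp, 0.0)  # clamp so a final overshoot cannot go negative
        r += dr
    return r, m

# Toy polytrope eps = 5 * p**0.6 (illustrative only, not the BHF EOS)
radius, mass = tov_profile(1e-3, lambda p: 5.0 * p**0.6)
print(radius, mass)
```

A production calculation would use an adaptive integrator with the tabulated EOS and convert the result to kilometres and solar masses.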
The nucleonic effective mass $m^{*}$ is defined from the single-particle
energy $\epsilon(p)$ by the relation
$m^{\ast}=k_{F}\left(\partial\epsilon(k)/\partial k\right)^{-1}|_{k=k_{F}}$.
It reduces the density of states at the Fermi surface with respect to the
non-interacting Fermi gas, since it is usually smaller than the free mass. As
in Refs. Baldo2013 ; shangems , the rearrangement contribution of the
three-body force is not included here. The effective masses calculated within
the BHF approximation are presented in Fig. 1(b). The neutron effective mass
of pure neutron matter
is not much different from that of $\beta$-stable matter, but is distinctly
larger than that of symmetric matter at the same density.
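The defining relation can be evaluated numerically. The sketch below (Python, units with $\hbar=1$) computes $m^{\ast}=k_{F}\left(\partial\epsilon/\partial k\right)^{-1}|_{k=k_{F}}$ with a central difference, using the free-gas dispersion as a sanity check (the free gas must return the bare mass):

```python
def effective_mass(epsilon, k_f, dk=1e-5):
    """m* = k_F * (d eps / dk)^(-1) evaluated at k = k_F (hbar = 1 units)."""
    deriv = (epsilon(k_f + dk) - epsilon(k_f - dk)) / (2.0 * dk)
    return k_f / deriv

# Sanity check with the free dispersion eps(k) = k^2 / (2 m): m* = m.
m = 1.0
eps_free = lambda k: k**2 / (2.0 * m)
print(effective_mass(eps_free, 1.3))
```

In the actual calculation $\epsilon(k)$ would be the BHF single-particle energy evaluated on a momentum grid rather than a closed-form dispersion.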
We calculate the in-medium differential cross sections within the BHF method
for symmetric matter, pure neutron matter and $\beta$-stable matter, taking
neutron-neutron scattering at a density of $\rho=0.34$ fm$^{-3}$ (twice the
saturation density) as an example, as shown in Fig. 2. The free-space cross
section is also shown for comparison. The in-medium effect leads to a
noticeable suppression of the cross sections, as in other calculations within
microscopic nuclear many-body approaches, suggesting the important role of the
medium effect. Our calculated differential cross sections as functions of the
center-of-mass scattering angle (and also the total cross sections versus the
center-of-mass energy $E_{\text{c.m.}}$) have the same shape as those in Ref.
Baldo2013 for the density $\rho=0.35$ fm$^{-3}$, although different three-body
forces are used. We would like to stress that the inclusion of the three-body
force increases the cross section at high $E_{\text{c.m.}}$, which agrees with
the conclusion of Ref. Baldo2013 but disagrees with the results of Refs.
Zhang2012 ; Zhang2007 .
Figure 2: (a) Differential cross sections of neutron-neutron scattering in
symmetric matter, pure neutron matter, and $\beta$-stable matter, taking
$\rho=0.34$ fm-3 and center-of-mass energy $E_{\text{c.m.}}=75$ MeV as an
example. (b) The corresponding total cross sections versus $E_{\text{c.m.}}$.
Figure 3 exhibits the calculated $Z_{F}$ at the Fermi surfaces for three
different nuclear matter configurations, employing the Brueckner theory with
the self-energy expanded to second order, i.e.,
$\Sigma=\Sigma_{1}+\Sigma_{2}$. The momentum distribution, featuring a
high-momentum tail and a depletion below the Fermi surface, is illustrated in
the inset. The behavior of $Z_{F}$ for symmetric matter is consistent with the
results in Refs. Dong-SRC2 ; shangzz . The $Z$-factor is caused by the
short-range repulsive core and the tensor force. The tensor force is dominant
at low densities, while the short-range repulsion is dominant at high
densities. The nonmonotonic behavior of $Z_{F}$ for symmetric matter and
$\beta$-stable matter displayed in Fig. 3 is exactly the result of the
competition between these two effects, and $Z_{F}$ is small both at very low
and very high densities. On the other hand, $Z_{F}$ exhibits a strong isospin
dependence. At a given total nucleon density, the $Z_{F}$ of symmetric matter
is obviously smaller than that of pure neutron matter; that is, the
correlation in the former is stronger than in the latter, because the
${}^{3}SD_{1}$ tensor interaction component between neutrons and protons is
quite strong in symmetric matter but is completely absent in pure neutron
matter. In other words, pure neutron matter is much closer to the ideal
degenerate Fermi gas, as pointed out in Ref. Science2014 . The results
displayed in Fig.
3 will be applied in the following calculations of transport coefficients.
Figure 3: Density-dependent $Z$-factor at Fermi surfaces in symmetric matter,
pure neutron matter, and $\beta$-stable matter. The inset presents a schematic
illustration of the Fermi surface depletion induced by the nucleon-nucleon
correlation.
Combining all the results discussed above, we can now compute the
density-dependent thermal conductivity and shear viscosity stemming from
nucleon-nucleon collisions at various temperatures. The phase space in the
collision integral is quenched because of the depletion of the Fermi surface,
and therefore the thermal conductivity $\kappa$ and shear viscosity $\eta$ are
increased. The calculated temperature-independent combinations $\kappa T$ and
$\eta T^{2}$ versus density are plotted in Fig. 4, without and with the
inclusion of the $Z$-factor effect. The lepton (electron and muon) shear
viscosity $\eta_{e\mu}$ and thermal conductivity $\kappa_{e\mu}$, mediated by
collisions of leptons with charged particles in electrically neutral NS
matter, are taken from Ref. Shternin2008 . Since the nucleon shear viscosity
$\eta_{N}$ is mediated by nucleon-nucleon collisions via the strong nuclear
force, $\eta_{N}$ and $\eta_{e\mu}$ can be treated independently. Yet,
$\eta_{e\mu}$ ($\kappa_{e\mu}$) has a different temperature dependence from
$\eta_{N}$ ($\kappa_{N}$), so here we show three cases: $T=10^{7}$, $10^{8}$,
and $10^{9}$ K. The relation between $\eta_{N}$ and $\eta_{e\mu}$ is
temperature dependent; that is, $\eta_{N}$ becomes more and more important as
the temperature decreases. The proton contribution to the shear viscosity can
be safely neglected, since it is just $15\%$ even at the high density of
$\rho=0.6$ fm$^{-3}$.
Figure 4: Thermal conductivity $\kappa$ (upper panel) and shear viscosity
$\eta$ (lower panel) of nucleons and leptons as a function of density in
symmetric matter, pure neutron matter, and $\beta$-stable matter. The
nucleonic $\kappa_{N}$ and $\eta_{N}$ are calculated within the BHF approach,
without and with the inclusion of the $Z$-factor.
The $Z$-factor effect enhances the nucleonic $\kappa$ and $\eta$ for the three
nuclear matter configurations, in particular at high densities. For example,
at the density of $\rho=0.6$ fm$^{-3}$, $\kappa_{N}$ and $\eta_{N}$ can be
enhanced by a factor of about three to four by the $Z$-factor effect. The
nucleonic thermal conductivity is much larger than the lepton one for all
densities of NS matter and temperatures of interest. Yet, the situation is
different for the shear viscosity. Without the $Z$-factor effect ($Z=1$), the
primary contribution to the shear viscosity $\eta=\eta_{N}+\eta_{e\mu}$ comes
from lepton scattering, which is exceeded by nucleon scattering only at low
densities, in agreement with the conclusion of Ref. Baldo2013 . Once the
$Z$-factor is taken into account, $\eta_{N}$ and $\eta_{e\mu}$ become
comparable at intermediate densities, and $\eta_{N}$ is about four times
larger than $\eta_{e\mu}$ at the crust-core transition density $\rho\approx
0.08$ fm$^{-3}$.
It is widely believed that superfluidity plays a crucial role in NS dynamics,
such as NS cooling and the observed pulsar glitches. It has drawn wide
attention in the nuclear physics and NS physics communities, in particular
after the rapid cooling of the NS in Cassiopeia A was observed. The strong
nuclear force provides several attractive channels between nucleons in which
superfluidity is possible sh1 ; sh2 ; sh3 . The neutrons that drip out of the
neutron-rich nuclei in the NS inner crust are expected to be paired in a
${}^{1}S_{0}$ singlet state with an energy gap of $\sim 1.5$ MeV Lombardo2001
. The proton gas is so dilute that the proton ${}^{1}S_{0}$ superconductivity
(superfluidity of charged particles) may survive until deep inside the star,
but the neutron ${}^{1}S_{0}$ superfluidity vanishes because the nuclear
interaction in the ${}^{1}S_{0}$ channel becomes repulsive at short distances
for high neutron density. Nevertheless, at high density, neutron-neutron
coupling in the ${}^{3}PF_{2}$ anisotropic pairing state could appear owing to
the attractive component of the nuclear interaction in this coupling channel.
The coupling between the ${}^{3}P_{2}$ and ${}^{3}F_{2}$ states is attributed
to the tensor force. This neutron ${}^{3}PF_{2}$ superfluidity is of great
interest because it was employed to explain the rapid cooling of the NS in
Cassiopeia A Page1 . However, the superfluidity may be reduced significantly
by the nucleon-nucleon correlation Dong-SRC1 ; Dong-SRC2 ; shangbcs . By
performing fits with several parameters, the density-dependent gap for the
neutron ${}^{3}PF_{2}$ superfluidity of $\beta$-stable matter is given by
Dong2020
$\displaystyle\Delta_{n}(\rho)=(0.943\rho-0.050)\exp\left[-\left(\frac{\rho}{0.177}\right)^{1.665}\right],$ (7)
with a peak value of about 0.04 MeV at $\rho=0.17$ fm$^{-3}$. The proton
${}^{1}S_{0}$ superfluid gap exists in a rather narrow density region and is
much smaller than the neutron ${}^{3}PF_{2}$ superfluid gap, as stressed in
Ref. Dong2020 . In addition, the proton fraction is much smaller than the
neutron one in $\beta$-stable NS matter. Therefore, we do not consider it in
the present work. Here we only focus on the effects of neutron triplet
superfluidity on
shear viscosity. As mentioned in Ref. Andersson2005 , we introduce a
suppression factor to estimate the nucleon shear viscosity via
$\eta^{(\text{SF})}_{N}\approx R_{n}\eta_{N}$, where $R_{n}$ is written as
Andersson2005
$\displaystyle R_{n}\simeq\left[0.9543+\sqrt{0.04569^{2}+(0.6971y)^{2}}\right]^{3}\exp\left[0.1148-\sqrt{0.1148^{2}+4y^{2}}\right]$ (8)
with $y=\Delta(T)/T$, where $\Delta(T)$ is the temperature-dependent energy
gap, and the critical temperature is $T_{c}=0.57\Delta(T=0)$. The $\eta_{N}$
due to neutron-neutron scattering drops exponentially because of the sharp
decrease in the number of momentum carriers near the Fermi surface.
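Eqs. (7) and (8) are straightforward to evaluate. The sketch below (Python, with $\rho$ in fm$^{-3}$ and $\Delta$ in MeV) computes the gap and the suppression factor; as a consistency check, $R_{n}\approx 1$ when the gap closes ($y=0$), and $R_{n}$ is exponentially small for a strong gap:

```python
import math

def gap_n(rho):
    """Neutron 3PF2 gap of beta-stable matter, Eq. (7): rho in fm^-3, Delta in MeV."""
    return (0.943 * rho - 0.050) * math.exp(-(rho / 0.177)**1.665)

def r_n(y):
    """Superfluid suppression factor for the neutron shear viscosity, Eq. (8),
    with y = Delta(T) / T (in compatible units)."""
    a = 0.9543 + math.sqrt(0.04569**2 + (0.6971 * y)**2)
    return a**3 * math.exp(0.1148 - math.sqrt(0.1148**2 + 4.0 * y**2))

print(gap_n(0.17))  # gap at rho = 0.17 fm^-3 (the text quotes a peak near this density)
print(r_n(0.0))     # ~1: no suppression when the gap closes
print(r_n(10.0))    # exponentially small for a strong gap
```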
Figure 5: Shear viscosity stemming from nucleon-nucleon scattering as a
function of density in $\beta$-stable matter with the inclusion of neutron
triplet superfluidity.
The $\eta T^{2}$ of each component as a function of density at different
temperatures $T$ in the presence of neutron ${}^{3}PF_{2}$ superfluidity is
displayed in Fig. 5. If the core temperatures of NSs are higher than $\sim
2\times 10^{8}$ K, the neutron ${}^{3}PF_{2}$ superfluidity disappears. The
neutrons in the stellar core become superfluid as soon as the NS cools below
the critical temperature, and accordingly neutron-neutron scattering is
strongly suppressed and the main contribution to the shear viscosity comes
from electron scattering processes. As a result, the $Z$-factor-quenched
superfluid effect plays an opposite role compared with the $Z$-factor effect
itself, and intriguingly it can be much more significant. For instance, at the
temperature $T=5\times 10^{7}$ K, the nucleon shear viscosity $\eta_{N}$ is
reduced by about six orders of magnitude at $\rho=0.17$ fm$^{-3}$, and this
suppression is stronger at lower temperatures. It was concluded in other
references, such as Ref. IV2012 , that at low temperatures $T<10^{7}$ K the
contribution to the shear viscosity from neutron scattering is more important
than that from lepton scattering. However, $\eta_{e\mu}$ is still larger than
$\eta_{N}$ in the presence of such neutron triplet superfluidity. For example,
at the temperature $T=10^{7}$ K, the $\eta_{N}$ from nucleon scattering can be
neglected at densities $\rho<0.5$ fm$^{-3}$ in superfluid matter.
Table 1: The calculated shear viscosity time scale $\tau_{\eta}$, compared with the gravitational-radiation-driven $r$-mode time scale $\tau_{\text{GW}}=196$ s for canonical neutron stars rotating at 716 Hz. The results without and with the neutron triplet superfluidity (SF) are listed, and the weights of the nucleon contribution are given in parentheses.

Temperature (K) | $\tau_{\eta}^{\text{nSF}}$ (s) | $\tau_{\eta}^{\text{SF}}$ (s)
---|---|---
$10^{6}$ | 402 (66%) | 1200 (0%)
$10^{7}$ | $2.99\times 10^{4}$ (50%) | $4.38\times 10^{4}$ (9%)
$10^{8}$ | $2.05\times 10^{6}$ (34%) | $2.26\times 10^{6}$ (27%)
$10^{9}$ | $1.38\times 10^{8}$ (23%) | $1.07\times 10^{8}$ (23%)
After the stellar structure is established by solving the TOV equation with
the BHF EOS as an input, the time scales of shear viscosity damping and of the
gravitational-radiation-driven growth of the $r$-mode for $1.4M_{\odot}$
canonical NSs are calculated. The overall time scale is
$1/\tau=-1/\tau_{\text{GW}}+1/\tau_{\eta}$, and if the angular-velocity-dependent
$\tau_{\text{GW}}$ is smaller than the temperature-dependent $\tau_{\eta}$,
the $r$-mode amplitude will grow exponentially, resulting in the $r$-mode
instability. The equation $1/\tau=0$ determines the critical frequency in
frequency-temperature space, above which lies what is usually referred to as
the $r$-mode instability window Andersson2001 ; Haskell2015 .
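The instability criterion can be stated in a few lines (a sketch; the numbers are taken from the text and Table I):

```python
def r_mode_unstable(tau_gw, tau_eta):
    """1/tau = -1/tau_GW + 1/tau_eta: the r-mode grows when 1/tau < 0,
    i.e. when gravitational-wave driving (tau_GW) beats viscous damping (tau_eta)."""
    return tau_gw < tau_eta

# Canonical NS at 716 Hz: tau_GW = 196 s. At T = 1e8 K, Table I gives
# tau_eta ~ 2.05e6 s, so the mode is unstable without additional damping.
print(r_mode_unstable(196.0, 2.05e6))  # True
```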
Table I lists the calculated shear viscosity time scale $\tau_{\eta}$ and the
$r$-mode growth time scale $\tau_{\text{GW}}$ for canonical NSs. In
non-superfluid NSs, nucleon-nucleon scattering is indeed the dominant
dissipation mechanism at low temperatures. If the superfluid effect is
included, the situation is completely opposite: $\eta_{N}$ becomes less and
less important, and even negligible, as the temperature decreases. The
$\tau_{\eta}$ is enlarged because of the superfluid effect, indicating weaker
shear viscosity damping. It is generally believed that the $r$-mode
instability limits the rotation rate of accreting millisecond pulsars. At
present, the fastest spinning pulsar is PSR J1748-2446ad, rotating at 716 Hz
PSR716 , and its corresponding $r$-mode growth time scale $\tau_{\text{GW}}$
is 196 s if $M_{\text{TOV}}=1.4M_{\odot}$ is assumed. At the low temperature
$T=10^{6}$ K, the shear viscosity time scale $\tau_{\eta}$ is 402 s for
non-superfluid NS core matter, which is comparable with $\tau_{\text{GW}}$,
and the weight of the nucleonic contribution is as large as $66\%$. However,
if the superfluidity is taken into account, nucleon-nucleon scattering does
not contribute to $\tau_{\eta}$ at such low temperatures, and $\tau_{\eta}$ is
much larger than $\tau_{\text{GW}}$; hence the shear viscosity is of little
help in damping the $r$-mode instability. Some authors have proposed that
viscous dissipation in the boundary layer between a perfectly rigid crust and
the fluid core is the primary damping mechanism. However, this is questionable
if the core-crust boundary is characterized by a continuous transition from
non-uniform to uniform matter through "nuclear pasta" phases PP1998 , in which
case the viscous boundary layer is smeared out Gearheart .
Figure 6: The calculated $r$-mode instability critical curves without the
superfluidity (SF) and the $Z$-factor, with the $Z$-factor only, and with both
the $Z$-factor and the neutron triplet superfluidity, are shown for comparison.
In order to reveal more clearly the roles of the $Z$-factor and superfluid
effects on the $r$-mode instability, the calculated $r$-mode instability
critical curves are presented in Fig. 6. The $Z$-factor effect is conducive to
damping the gravitational-wave-driven $r$-mode growth of NSs, in particular at
low temperatures. However, the neutron triplet superfluidity plays an opposite
role and is more significant. At temperatures higher than $\sim 10^{8}$ K,
both effects are weak, because neutron-neutron scattering makes only a
secondary contribution to the shear viscosity and the superfluidity almost
vanishes at such temperatures. The core temperatures of NSs in low-mass X-ray
binaries are estimated to be $(1\sim 5)\times 10^{8}$ K Ho2012 , and
$10^{7}\sim 10^{8}$ K if the direct Urca process opens Dong2020 ; therefore
the shear viscosity cannot be expected to stabilize NSs against $r$-mode
oscillations in practical situations. Additional damping mechanisms are
perhaps required.
In summary, the $Z$-factor effects on the thermal conductivity and shear
viscosity have been calculated within the AK framework, where the $Z$-factor
at the Fermi surfaces ($Z_{F}$), the in-medium cross sections, the nucleon
effective masses, and the EOS of NS matter are all calculated using the
Brueckner theory with the two-body AV18 interaction plus a microscopic
three-body force. The nucleon-nucleon correlations, induced by the short-range
repulsion and the tensor component of the nuclear force, give rise to the
Fermi surface depletion, i.e., the $Z$-factor effect. The calculated $Z_{F}$
of neutrons and protons at the Fermi surfaces presents a strong isospin
dependence due to the strong neutron-proton ${}^{3}SD_{1}$ tensor interaction.
The two transport coefficients are enlarged several times for symmetric
matter, pure neutron matter and $\beta$-stable matter. The nucleonic thermal
conductivity $\kappa_{N}$ is much more important than the lepton one for all
the densities and temperatures considered here, whether or not the $Z$-factor
effect is included. As the temperature decreases, the nucleon shear viscosity
$\eta_{N}$ becomes more and more important with respect to the lepton
contribution $\eta_{e\mu}$. If we take the $Z$-factor effect into account,
$\eta_{N}$ may become comparable with $\eta_{e\mu}$ at intermediate densities,
and larger than $\eta_{e\mu}$ at low densities. As concluded in previous works
Dong-SRC1 ; Dong-SRC2 , the $Z$-factor effect strongly suppresses the proton
${}^{1}S_{0}$ and neutron ${}^{3}PF_{2}$ superfluidity, and the proton
${}^{1}S_{0}$ superfluidity almost vanishes. Contrary to the role of the
$Z$-factor itself, the neutron superfluidity is able to reduce the shear
viscosity significantly (by several orders of magnitude) once the temperature
drops below the critical temperature. As a result, the contribution to the
shear viscosity from lepton scattering is still more important than that from
nucleon scattering at low temperatures for the densities of interest in
superfluid matter. Finally, the shear viscosity time scales $\tau_{\eta}$,
along with the time scales $\tau_{\text{GW}}$ of $r$-mode growth due to the
emission of gravitational waves, are calculated for canonical NSs. At low
temperatures, nucleon-nucleon scattering indeed contributes the most to the
shear viscosity time scale $\tau_{\eta}$. However, when the
$Z$-factor-quenched superfluidity is present, this contribution becomes less
important and even negligible. In a word, the appearance of superfluidity is
not favorable to damping the $r$-mode instability of NSs. The calculated
$\tau_{\eta}$ is much larger than $\tau_{\text{GW}}$, and hence the shear
viscosity is not able to damp the $r$-mode instability even for very cold NSs
with a core temperature of $10^{6}$ K. The present work extends our
understanding of the $r$-mode instability in pulsar physics.
This work was supported by the National Natural Science Foundation of China
(Grants No. 11775276 and No. 11975282), the Strategic Priority Research
Program of the Chinese Academy of Sciences (Grant No. XDB34000000), the Youth
Innovation Promotion Association of the Chinese Academy of Sciences (Grant No.
Y201871), the Continuous Basic Scientific Research Project (Grant No.
WDJC-2019-13), and the Leading Innovation Project (Grant No. LC 192209000701).
## References
* (1) P. Haensel, A. Y. Potekhin, D. G. Yakovlev, Neutron Stars 1, (Springer, 2006).
* (2) P. S. Shternin, et al., Mon. Not. Roy. Astron. Soc. 412 (2011) L108.
* (3) D. Page, M. Prakash, J. M. Lattimer, A. W. Steiner, Phys. Rev. Lett. 106 (2011) 081101.
* (4) A. Sedrakian, Astron. Astrophys. 555 (2013) L10.
* (5) D. Blaschke, H. Grigorian, D. N. Voskresensky, F. Weber, Phys. Rev. C 85 (2012) 022802(R).
* (6) W. G. Newton, K. Murphy, J. Hooker, B.-A. Li, Astrophys. J. 779 (2013) L4.
* (7) A. Bonanno, M. Baldo, G. F. Burgio, V. Urpin, Astron. Astrophys. 561 (2014) L5.
* (8) W. C. G. Ho, K. G. Elshamouty, C. O. Heinke, A. Y. Potekhin, Phys. Rev. C 91 (2015) 015806.
* (9) S. Chandrasekhar, Astrophys. J. 161 (1970) 561.
* (10) J. L. Friedmann, B. F. Schutz, Astrophys. J. 221 (1978) 937; 222 (1978) 281.
* (11) L. Lindblom, B. J. Owen, S. M. Morsink, Phys. Rev. Lett. 80 (1998) 4843\.
* (12) L. Bildsten, Astrophys. J. 501 (1998) L89.
* (13) N. Andersson, K. D. Kokkotas, N. Stergioulas, Astrophys. J. 516 (1999) 307.
* (14) D. Page, U. Geppert, F. Weber, Nucl. Phys. A 777 (2006) 497.
* (15) D. G. Yakovlev, C. J. Pethick, Annu. Rev. Astron. Astrophys. 42 (2004) 169.
* (16) E. Flowers, N. Itoh, Astrophys. J. 230 (1979) 847.
* (17) C. Cutler, L. Lindblom, Astrophys. J. 314 (1987) 234.
* (18) I. Vidana, Phys. Rev. C 85 (2012) 045808.
* (19) A. A. Abrikosov, I. M. Khalatnikov, Sov. Phys. JETP 5 (1957) 887; Rep. Prog. Phys. 22 (1959) 329.
* (20) O. Benhar, M. Valli, Phys. Rev. Lett. 99 (2007) 232501.
* (21) O. Benhar, A. Polls, M. Valli, I. Vidana, Phys. Rev. C 81 (2010) 024305\.
* (22) H. F. Zhang, U. Lombardo, W. Zuo, Phys. Rev. C 82 (2010) 015805.
* (23) P. S. Shternin, M. Baldo, P. Haensel, Phys. Rev. C 88 (2013) 065803.
* (24) J. P. Jeukenne, A. Lejeune, C. Mahaux, Phys. Rep. 25 (1976) 83.
* (25) A. Ramos, A. Polls, W. H. Dickhoff, Nucl. Phys. A 503 (1989) 1.
* (26) B. E. Vonderfecht, W. H. Dickhoff, A. Polls, A. Ramos, Nucl. Phys. A 555 (1993) 1.
* (27) P. Yin, J. Dong, W. Zuo, Chin. Phys. C 41 (2017) 114102.
* (28) R. Subedi, et al., Science 320 (2008) 1476.
* (29) O. Hen, et al., Science 346 (2014) 614.
* (30) J. M. Dong, U. Lombardo, W. Zuo, Phys. Rev. C 87 (2013) 062801(R).
* (31) J. M. Dong, U. Lombardo, H. F. Zhang, W. Zuo, Astrophys. J. 817 (2016) 6.
* (32) Bao-An Li, Bao-Jun Cai, Lie-Wen Chen, Jun Xu, Prog. Part. Nucl. Phys. 99 (2018) 29.
* (33) O. Hen, G. A. Miller, E. Piasetzky, L. B. Weinstein, Rev. Mod. Phys. 89 (2017) 045002.
* (34) The CLAS Collaboration, Nature 560 (2018) 617.
* (35) M. Baldo, I. Bombaci, G. Giansiracusa, U. Lombardo, C. Mahaux, and R. Sartor, Phys. Rev. C 41 (1990) 1748 ; Nucl. Phys. A 545 (1992) 741\.
* (36) W. Zuo, I. Bombaci, U. Lombardo, Phys. Rev. C 60 (1999) 024605.
* (37) X. L. Shang, J. M. Dong, W. Zuo, P. Yin, U. Lombardo, (unpublished).
* (38) A. B. Migdal, Sov. Phys. JETP 5 (1957) 333; J. M. Luttinger, Phys. Rev. 119 (1960) 1153.
* (39) L. P. Kadanoff, G. Baym, Quantum Statistical Mechanics, (New York, 1962).
* (40) R. H. Anderson, C. J. Pethick, and K. F. Quader, Phys. Rev. B 35 (1987) 1620.
* (41) D. G. Yakovlev, A. D. Kaminker, O. Y. Gnedin, P. Haensel, Phys. Rep. 354 (2001) 1.
* (42) G. Baym, C. J. Pethick, P. Sutherland, Astrophys. J. 170 (1971) 299; G. Baym, H. A. Bethe, C. J. Pethick, Nucl. Phys. A175 (1971) 225.
* (43) X. L. Shang, A. Li, Z. Q. Miao, G. F. Burgio, H. J. Schulze, Phys. Rev. C 101 (2020) 065801.
* (44) H. F. Zhang, Z. H. Li, U. Lombardo, P. Y. Luo, F. Sammarruca, and W. Zuo, Phys. Rev. C (2007) 054001.
* (45) Z. X. Yang, X. L. Shang, G. C. Yong, W. Zuo, Y. Gao, Phys. Rev. C 100 (2019) 054325.
* (46) P. S. Shternin, D. G. Yakovlev, Phys. Rev. D 78 (2008) 063006.
* (47) D. J. Dean and M. Hjorth-Jensen, Rev. Mod. Phys. 75 (2003) 607.
* (48) S. Frauendorf and A. O. Macchiavelli, Prog. Part. Nucl. Phys. 78 (2014) 24.
* (49) X. L. Shang, W. Zuo, Phys. Rev. C 88 (2013) 025806.
* (50) U. Lombardo, H.-J. Schulze, in Physics of Neutron Star Interiors, edited by D. Blaschke, N. K. Glendenning, and A. Sedrakian, Lecture Notes in Physics Vol. 578 (Springer-Verlag, Berlin and Heidelberg, 2001), pp. 30-54.
* (51) X. H. Fan, X. L. Shang, J. M. Dong, W. Zuo, Phys. Rev. C 99 (2019) 065804.
* (52) J. M. Dong, (unpublished).
* (53) N. Andersson, G. L. Comer, K. Glampedakis, Nucl. Phys. A 763 (2005) 212\.
* (54) N. Andersson, K. D. Kokkotas, Int. J. Mod. Phys. D 10 (2001) 381.
* (55) B. Haskell, Int. J. Mod. Phys. E 24 (2015) 1541007.
* (56) J. W. T. Hessels, et al., Science 311 (2006) 1901.
* (57) C. Pethick, A. Y. Potekhin, Phys. Lett. B 427 (1998) 7.
* (58) M. Gearheart, W. G. Newton, J. Hooker, B. Li, Mon. Not. Roy. Astron. Soc. 418 (2011) 2343.
* (59) W. C. G. Ho, N. Andersson, B. Haskell, Phys. Rev. Lett. 107 (2011) 101101\.
|
# Local Complexity of Polygons
Fabian Klute (supported by the Netherlands Organisation for Scientific Research (NWO) under project no. 612.001.651), ETH Zürich, Department of Computer Science; Meghana M. Reddy (supported by the Swiss National Science Foundation within the collaborative DACH project _Arrangements and Drawings_ as SNSF Project 200021E-171681), Utrecht University, Information and Computing Science Department; Tillmann Miltzow (supported by the NWO Veni project EAGER), ETH Zürich, Department of Computer Science.
(The second author's full last name consists of two words and is _Mallik Reddy_. However, she consistently refers to herself with the first word of her last name being abbreviated.)
###### Abstract
Many problems in Discrete and Computational Geometry deal with simple polygons or polygonal regions. Many algorithms and data structures perform considerably faster if the underlying polygonal region has low local complexity. One obstacle to making this intuition rigorous is the lack of a formal definition of local complexity. Here, we give two possible definitions and show how they
are related in a combinatorial sense. We say that a polygon $P$ has _point
visibility width_ $w=\left\llbracket\texttt{pvw}\right\rrbracket$, if there is
no point $q\in P$ that sees more than $w$ reflex vertices. We say that a
polygon $P$ has _chord visibility width_
$w=\left\llbracket\texttt{cvw}\right\rrbracket$, if there is no chord
$c=\textrm{seg}(a,b)\subset P$ that sees more than $w$ reflex vertices. We show
that
$\left\llbracket\texttt{cvw}\right\rrbracket\leq\left\llbracket\texttt{pvw}\right\rrbracket^{O(\left\llbracket\texttt{pvw}\right\rrbracket)},$
for any simple polygon. Furthermore, we show that there exists a simple
polygon with
$\left\llbracket\texttt{cvw}\right\rrbracket\geq
2^{\Omega(\left\llbracket\texttt{pvw}\right\rrbracket)}.$
## 1 Introduction
In Discrete and Computational Geometry we study many problems with respect to the input size $n$ and other natural parameters. One famous example is the computation of the convex hull of a set of points in the plane. While $\Theta(n\log n)$ time is worst-case optimal, this can be improved to $\Theta(n\log h)$, where $h$ is the number of vertices on the convex hull [4]. Here, the number of vertices on the convex hull is a natural parameter to study this problem; we also say that the algorithm is output-sensitive. Another famous example is the spread $\Delta$ of a set of points in the plane, that is, the ratio between the largest and the smallest distance between any two points. Efrat and Har-Peled were the first to find an approximation algorithm for the art gallery problem under the assumption that the underlying set of vertices has bounded spread [2]. A third example is the number of reflex vertices of a polygon. This parameter gave rise to some FPT algorithms for the art gallery problem [1].
In this work, we introduce two new parameters that are meant to capture rigorously the idea of local complexity. Consider the polygons shown in Figure 1; most researchers would probably agree that the polygon on the left has lower local complexity than the polygon on the right. Yet, it is not straightforward to define this rigorously in a mathematical sense.
Figure 1: The polygon on the left has intuitively lower local complexity than
on the right.
Here, we give two possible definitions and show how they are related in a
combinatorial sense. We say that a polygon $P$ has _point visibility width_
$w=\left\llbracket\texttt{pvw}\right\rrbracket$, if $w$ is the smallest number
such that there is no point $q\in P$ that sees more than $w$ reflex vertices.
We say that a polygon $P$ has _chord visibility width_
$w=\left\llbracket\texttt{cvw}\right\rrbracket$, if $w$ is the smallest number
such that there is no chord $c=\textrm{seg}(a,b)\subset P$ that sees more than
$w$ reflex vertices.
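Both parameters count reflex vertices. As a concrete illustration (not part of the paper), a reflex vertex of a counter-clockwise simple polygon can be detected from the sign of the cross product of its incident edges; `reflex_vertices` below is a hypothetical helper name for this sketch:

```python
from typing import List, Tuple

def reflex_vertices(poly: List[Tuple[float, float]]) -> List[int]:
    """Indices of reflex vertices of a simple polygon given in CCW order.

    A vertex is reflex when the interior angle exceeds pi, i.e. when the
    cross product of the incoming and outgoing edge vectors is negative.
    """
    n = len(poly)
    out = []
    for i in range(n):
        (px, py), (vx, vy), (nx, ny) = poly[i - 1], poly[i], poly[(i + 1) % n]
        ax, ay = vx - px, vy - py          # incoming edge vector
        bx, by = nx - vx, ny - vy          # outgoing edge vector
        if ax * by - ay * bx < 0:          # right turn => reflex (CCW polygon)
            out.append(i)
    return out

# L-shaped polygon: exactly one reflex vertex, at (1, 1)
L = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
assert reflex_vertices(L) == [3]
```

Computing the visibility width itself additionally requires a point-to-vertex visibility test inside the polygon, which this sketch does not attempt.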
We show the following theorem.
###### Theorem 1.
For every polygon with chord visibility width
$\left\llbracket\texttt{cvw}\right\rrbracket$ and point visibility width
$\left\llbracket\texttt{pvw}\right\rrbracket$, it holds that
$\left\llbracket\texttt{pvw}\right\rrbracket\leq\left\llbracket\texttt{cvw}\right\rrbracket\leq\left\llbracket\texttt{pvw}\right\rrbracket^{O(\left\llbracket\texttt{pvw}\right\rrbracket)}.$
Moreover, there are polygons such that
$\left\llbracket\texttt{cvw}\right\rrbracket\geq
2^{\Omega(\left\llbracket\texttt{pvw}\right\rrbracket)}.$
Note that Hengeveld and Miltzow already defined the notion of chord visibility
width in a very similar way [3]. Specifically, they showed that the art
gallery problem admits an FPT algorithm with respect to chord visibility
width. For a parameter to be interesting to study, we usually have three
criteria.
* naturalness:
Although there is no definition of what it means to be mathematically natural,
many researchers seem to have a common understanding of this notion.
* relevance:
The parameter is reasonably low for at least some fraction of instances.
* profitability:
Using the parameter, we should be able to design better algorithms and prove useful run-time upper bounds.
We believe that both parameters are mathematically natural. Theorem 1 indicates that the chord visibility width can be exponentially larger than the point visibility width; thus we expect chord visibility width to be potentially more profitable. We expect both parameters to be equally relevant, as the example that we give is fairly contrived. The remainder of this paper is dedicated to proving Theorem 1.
## 2 Chord visibility width vs Point visibility width
We prove Theorem 1 in two parts. First, we show the second half of the theorem
in Section 2.1 by constructing a polygon for which it holds that
$\left\llbracket\texttt{cvw}\right\rrbracket\geq
2^{\Omega(\left\llbracket\texttt{pvw}\right\rrbracket)}$. Second, we show the
first half of Theorem 1 in Section 2.2 by analysing how the reflex vertices visible from a chord in a simple polygon $P$ restrict each other's vision and by relating this to the point visibility width of the polygon.
### 2.1 Lower bound
Figure 2: Construction of the Iterated Comb.
We construct a polygon $P$, called the _Iterated Comb_ , see Figure 2. In the
following, let $k\in\mathbb{N}$. The Iterated Comb consists of $k$ _layers_ ,
each layer consists of two _spikes_ and each spike further splits into two
more spikes in the subsequent layer. Observe that the entire polygon is visible from the chord $c$ connecting the two left-most points of the polygon. The distance between consecutive spikes in a layer, referred to as the _bridge_, is adjusted such that if at least one vertex in the interior of a spike is visible from a point $p$ on $c$, then $p$ cannot see any interior vertex of any other spike. This property is achieved by stretching the bridges
vertically. More specifically, for $1<i\leq k$, the length of the bridge of
the $i^{th}$ layer is increased such that the property holds for layer $i$ and
then the bridge of the previous layer is adjusted accordingly. By iteratively
stretching the bridges from the last layer to the first layer, it can be
ensured that the property holds for every layer. This property is illustrated
in Figure 3 for $k=2$. In the first layer, the point $p$ sees an interior
vertex of the first spike and no interior vertex of the second spike.
Similarly, in the second layer point $p$ sees an interior vertex of the second
spike and no interior vertex of the first spike.
Figure 3: Point $p$ sees interior points of at most one spike of any layer
#### Chord visibility width of the Iterated Comb
Clearly, the chord which sees the highest number of reflex vertices is the
chord defined by the two left-most vertices. Let this chord be $c$. The number
of reflex vertices of the first layer visible from $c$ is two. Similarly, the
number of reflex vertices of the $i^{th}$ layer visible from $c$ is $2^{i}$.
Summing up over all $k$ layers, the number of reflex vertices visible from $c$
is $\Theta(2^{k+1})$, and hence
$\left\llbracket\texttt{cvw}\right\rrbracket=\Theta(2^{k+1})$.
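Explicitly, the count over the layers is the geometric sum
$\sum_{i=1}^{k}2^{i}=2^{k+1}-2=\Theta(2^{k+1}).$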
#### Point visibility width of the Iterated Comb
###### Claim 1.
Chord $c$ contains at least one of the points in $P$ which see the highest
number of reflex vertices of $P$.
###### Proof.
Let $q$ be a point in polygon $P$ which sees the highest number of reflex
vertices of $P$. Let $p$ be a point on chord $c$ which has the same
$y$-coordinate as $q$. Assume $p\neq q$. Let $r$ be a reflex vertex visible
from $q$. Since $P$ is monotone with respect to y-axis, the triangle $pqr$
must be empty. This implies that $r$ is visible from $p$ as well. Hence, the
point $p$ also sees the highest number of reflex vertices in $P$ since $p$
sees at least as much as $q$. Refer to Figure 4 for an illustration. ∎
Figure 4: Point $p$ sees all the reflex vertices visible from $q$
Without loss of generality, assume the point with highest visibility is the
topmost point on $c$, denoted by $p$. Both reflex vertices in layer one are visible from $p$. In each subsequent layer, $p$ can see the two reflex vertices that are in the interior of a single spike; by construction, $p$ cannot see any of the reflex vertices in the other spikes. Summing up, we conclude that $2k$ reflex vertices are visible from $p$, and thus $\left\llbracket\texttt{pvw}\right\rrbracket=2k$.
Hence the Iterated Comb has $\left\llbracket\texttt{cvw}\right\rrbracket\geq
2^{\Omega(\left\llbracket\texttt{pvw}\right\rrbracket)}$.
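Substituting $\left\llbracket\texttt{pvw}\right\rrbracket=2k$ into $\left\llbracket\texttt{cvw}\right\rrbracket=\Theta(2^{k+1})$ makes the bound explicit:
$\left\llbracket\texttt{cvw}\right\rrbracket=\Theta\left(2^{\left\llbracket\texttt{pvw}\right\rrbracket/2+1}\right)=2^{\Omega\left(\left\llbracket\texttt{pvw}\right\rrbracket\right)}.$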
### 2.2 Upper bound
Next, we show that we can upper bound the chord visibility width in terms of
the point visibility width.
To this end, we prove the following lemma.
###### Lemma 2.
For every simple polygon, it holds that
$\left\llbracket\texttt{cvw}\right\rrbracket\leq\left\llbracket\texttt{pvw}\right\rrbracket^{O(\left\llbracket\texttt{pvw}\right\rrbracket)}.$
The rest of this section is dedicated to the proof of Lemma 2.
For that purpose, assume we are given a simple polygon $P$ together with a
chord $s\subset P$. Furthermore, we assume that no point in $P$ sees more than
$k=\left\llbracket\texttt{pvw}\right\rrbracket$ reflex vertices of $P$. Let us
denote by $R$ the set of all reflex vertices that see at least one point of
$s=\textrm{seg}(a,b)$. Furthermore, we also include the two endpoints of $s$
in the set $R$. As $P$ is a simple polygon, it holds that every reflex vertex
$r\in R\setminus\\{a,b\\}$ sees a subsegment $I(r)\subseteq s$. For
convenience, we also call $I(r)$ an _interval_.
Figure 5: The vertex $v$ sees a subinterval $I(v)\subseteq s$ which is
restricted by $a$ and $u$.
Note that every interval is _restricted_ by exactly two points in $R$, see
Figure 5. In case of ambiguity, due to collinearities, we say the point in $R$
closer to $s$ is the restricting point. Those vertices can be either the
endpoints of $s$ ($a$ and $b$) or a different reflex vertex in $R$. We show
the following claim.
###### Claim 2.
If $u$ is a reflex vertex that restricts the reflex vertex $v$ then it holds
that
$I(v)\subseteq I(u).$
###### Proof.
The triangle $T$ formed by $I(v)$ and $v$ is trivially convex and fully
contained inside $P$. The reflex vertex $u$ is on the boundary of the triangle
and thus sees every point of $T$. In particular also $I(v)$. ∎
Given the previous claim, we construct the visibility restriction graph $G$ as
follows. The vertices are formed by the points in $R$. We say that $uv$ forms
a directed edge, if $u$ is restricted by $v$. We summarize a few useful
properties of $G$ in the following claim.
###### Claim 3.
The visibility restriction graph of a polygon with point visibility width at
most $k$ has the following properties.
1. 1.
The segment endpoints $a,b$ are the only two sinks.
2. 2.
The out-degree is two for every vertex $v\in R\setminus\\{a,b\\}$.
3. 3.
The in-degree is at most $k-1$ for every vertex $v\in R$.
4. 4.
The longest path has at most $k+1$ vertices.
###### Proof.
By definition, every reflex vertex is restricted by exactly two vertices in $R$. This implies Items 1 and 2.
Any reflex vertex $v$ can see itself and all its neighbors. Its in-degree
neighbors are also reflex vertices. As no point can see more than $k$ reflex
vertices, $v$ has at most $k-1$ in-degree neighbors. This concludes the proof
of Item 3.
Finally, to prove Item 4, let $p=u_{1}u_{2}\ldots u_{l}$ be a directed path. Since an edge $uv$ means that $u$ is restricted by $v$, Claim 2 yields the chain of containments
$q\in I(u_{1})\subseteq I(u_{2})\subseteq\ldots\subseteq I(u_{l})\subseteq s$
for any point $q\in I(u_{1})$. The point $q$ sees all reflex vertices of the path $p$. As no point sees more than $k$ reflex vertices, it holds that $p$ has at most $k$ reflex vertices. As all but potentially the last vertex (which may be one of the sinks $a$ and $b$) are reflex vertices, we have $l\leq k+1$. ∎
The properties of the last claim enable us to give an upper bound on the size
of $G$ and thus also on the size of $R$.
###### Claim 4.
The visibility restriction graph $G$ of a polygon with point visibility width
$\left\llbracket\texttt{pvw}\right\rrbracket=k$ has at most $k^{O(k)}$
vertices.
###### Proof.
We organize $G$ into layers depending on the distance from $a$ and $b$. Note
that if layer $i$ has $t$ vertices then layer $(i+1)$ has at most $t\cdot k$
vertices. As there are at most $k+1$ layers and the first layer has size two
we get that $G$ has at most
$2+2k+2k^{2}+2k^{3}+\ldots+2k^{k}=k^{O(k)}$
vertices. ∎
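The geometric sum in the proof has the closed form (for $k\geq 2$)
$2\sum_{i=0}^{k}k^{i}=\frac{2\left(k^{k+1}-1\right)}{k-1}\leq 4k^{k}=k^{O(k)}.$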
## 3 Conclusion
We believe that local complexity has the potential to be a useful parameter.
We gave two ways to define local complexity in a rigorous way and showed how
those two ways relate to one another. We want to end with a few open
questions.
1. 1.
Can we find algorithms and data structures that can make use of low local
complexity?
2. 2.
Can we compute or approximate the point visibility width and chord visibility
width in an efficient manner? Note that this is more a theoretical question.
We do not necessarily need to know the chord visibility width of a polygon to
use the concept in the design and analysis of an algorithm.
3. 3.
Are there other ways to formalize the idea of low local complexity within a
polygonal region?
## References
* [1] Akanksha Agrawal and Meirav Zehavi. Parameterized analysis of art gallery and terrain guarding. In International Computer Science Symposium in Russia, volume 12159 of LNCS, pages 16–29. Springer, 2020. doi:10.1007/978-3-030-50026-9_2.
* [2] Alon Efrat and Sariel Har-Peled. Guarding galleries and terrains. Information Processing Letters, 100(6):238–245, 2006. doi:10.1016/j.ipl.2006.05.014.
* [3] Simon Hengeveld and Tillmann Miltzow. A practical algorithm with performance guarantees for the art gallery problem. CoRR, abs/2007.06920, 2020. arXiv:2007.06920.
* [4] David G. Kirkpatrick and Raimund Seidel. The ultimate planar convex hull algorithm? SIAM Journal on Computing, 15(1):287–299, 1986. doi:10.1137/0215021.
Single-RF Multi-User Communication Through
Reconfigurable Intelligent Surfaces: An Information-Theoretic Analysis
Roy Karasik, Student Member, IEEE,
Osvaldo Simeone, Fellow, IEEE,
Marco Di Renzo, Fellow, IEEE,
and Shlomo Shamai (Shitz), Life Fellow, IEEE
This work has been supported by the European Research Council (ERC) and by the Information and Communication Technologies (ICT) under the European Union’s Horizon 2020 Research and Innovation Programme (Grant Agreement Nos. 694630, 725731, and 871464).
R. Karasik and S. Shamai are with the Department of Electrical Engineering, Technion — Israel Institute of Technology, Haifa 32000, Israel. (royk@campus.technion.ac.il).
O. Simeone is with the Centre for Telecommunications Research,
Department of Informatics, King’s College London, London WC2R 2LS, U.K.
M. Di Renzo is with Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des Signaux et Systèmes, 3 Rue Joliot-Curie, 91192 Gif-sur-Yvette, France.
RISs are typically used in multi-user systems to mitigate interference among active transmitters. In contrast, this paper studies a setting with a conventional active encoder as well as a passive encoder that modulates the reflection pattern of the RIS. The RIS hence serves the dual purpose of improving the rate of the active encoder and of enabling communication from the second encoder. The capacity region is characterized, and information-theoretic insights regarding the trade-offs between the rates of the two encoders are derived by focusing on the high- and low-power regimes.
§ INTRODUCTION
An RIS is a nearly-passive device that can shape the wireless propagation channel by applying phase shifts to the incident signals <cit.>.
In MU systems, RISs can help mitigate inter-user interference and obtain beamforming gain for standard active transmitters <cit.>. To this end, the configuration of the RIS is kept fixed for the duration of a coherence interval and optimized to maximize some function of the achievable rates <cit.>. In this paper, we study a different use of RISs, whereby a single active transmitter coexists with a passive user, having no direct RF chain, that conveys its own message by modulating the reflection pattern of the RIS (see <ref>).
[Figure: system-model diagram. Encoder 1 encodes message $w_1$ ($nR_1$ bits) and transmits over the wireless links: the direct channel $\mathbf{h}_d$ to the $N$-antenna decoder and the channel $\mathbf{h}_i$ to the RIS with $K$ elements, which reflects over $\mathbf{H}_{r}$. Encoder 2 encodes message $w_2$ ($nR_2$ bits) by controlling the RIS through a control link of rate $1/m$. The decoder outputs $(\hat{w}_1,\hat{w}_2)$.]
In the system under study, Encoder 1 is active and it encodes its message $w_1$ into a codeword of $n$ symbols sent on the wireless link; whereas Encoder 2 is passive and it encodes the message $w_2$ into a control action, which is sent on the control link to the RIS at a rate of one action every $m$ channel symbols.
With reference to <ref>, the RIS is accordingly used for the dual purpose of enhancing the rate of the active encoder (Encoder 1) and of enabling communication for the passive encoder (Encoder 2). Unlike prior work <cit.> that focused on a specific transmission strategy, this paper concentrates on the information-theoretic analysis of the rate trade-offs between the two encoders, providing fundamental insights.
Related Work: A comprehensive survey of the state-of-the-art on RIS-aided MU systems is available in <cit.>. As notable representative examples of works involving active transmitters, the maximization of the weighted sum-rate in RIS-aided MU systems was studied in <cit.>, whereas references <cit.> focused on optimizing the energy efficiency, and papers <cit.> on physical-layer security and outage-probability enhancements. A MU system with an active transmitter and a passive encoder, akin to <ref>, was proposed in <cit.> by assuming binary modulation, a single receiver antenna, and a specific successive interference cancellation decoding strategy.
From an information-theoretic perspective, the single-RF MU communication system depicted in <ref> can be viewed as a MAC with both multiplicative and additive elements. The capacity of the Gaussian multiplicative MAC was derived in <cit.> for two active encoders. The capacity region of a backscatter multiplicative MAC, which can be viewed as a special case of the RIS-aided MU communication system in <ref> with one reflecting element, was studied in <cit.>. Under the assumptions of a single receiver antenna and Gaussian codebooks, this work shows that conventional time-sharing schemes are suboptimal in the high-power and weak-backscatter regimes. The capacity of an RIS-aided single-user channel was derived in <cit.>.
Main Contributions: In this paper, we study the RIS-aided MU system illustrated in <ref>, in which Encoder 1 is active, whereas Encoder 2 can only alter the reflection pattern of the RIS in a passive manner. We derive the capacity region under the practical assumptions of a multi-antenna decoder, a finite-input constellation, and a set of discrete phase shifts at the RIS. Then, we specialize the results for the high- and low-power regimes, showing that: (i) for sufficiently high transmission power, both encoders can simultaneously communicate at maximum rate; and (ii) in the low-power regime, Encoder 1 can achieve maximum rate if and only if Encoder 2 does not communicate, while Encoder 2 can achieve its maximum rate while still enabling non-zero-rate communication for Encoder 1. Finally, numerical examples demonstrate the dual role of the RIS as a means to enhance the transmitted signal on the one hand and as the enabler of MU communication on the other.
Random variables, vectors, and matrices are denoted by lowercase, boldface lowercase, and boldface uppercase Roman-font letters, respectively. Realizations of random variables, vectors, and matrices are denoted by lowercase, boldface lowercase, and boldface uppercase italic-font letters, respectively. For example, $x$ is a realization of random variable $\mathrm{x}$, $\bm{x}$ is a realization of random vector $\mathbf{x}$, and $\bm{X}$ is a realization of random matrix $\mathbf{X}$.
For any positive integer $K$, we define the set $[K]\triangleq \{1,2,\ldots,K\}$.
The cardinality of a set $\mathcal A$ is denoted as $|\mathcal{A}|$.
The $\ell^2$-norm and the conjugate transpose of a vector $\bm{v}$ are denoted as $\lVert\bm{v}\rVert$ and $\bm{v}^\ast$, respectively.
$\diag(\bm{x})$ represents a diagonal matrix with diagonal given by the vector $\bm{x}$.
The vectorization of matrix $\bm{H}$, i.e., the operator that stacks the columns of $\bm{H}$ on top of one another, is denoted by $\stack(\bm{H})$.
The Kronecker product $\bm{I}_m\otimes\bm{B}$ of the identity matrix of size $m$ and matrix $\bm{B}$ is denoted as $\bm{B}^{m\otimes}$.
§ SYSTEM MODEL
We consider the system depicted in <ref>
in which two encoders communicate with a decoder equipped with $N$ antennas over a quasi-static fading channel in the presence of an RIS that comprises $K$ nearly-passive reconfigurable elements. Encoder 1 is equipped with a single-RF transmitter and encodes its message $w_1\in[2^{nR_1}]$ of rate $R_1$ [bits/symbol] into a codeword of $n$ symbols sent on the wireless link to the decoder. In contrast, Encoder 2 encodes its message $w_2\in[2^{nR_2}]$ of rate $R_2$ [bits/symbol] in a passive manner by modulating the reflection pattern of the RIS. The reflection pattern is controlled through a rate-limited control link, and is defined by the phase shifts that each of the $K$ RIS elements applies to the impinging wireless signal. Encoder 2 represents, for example, a sensor embedded in the RIS that applies metasurface-based modulation in order to convey its sensed data without emitting a new radio wave <cit.>.
A coding slot consists of $n$ symbols, which are divided into $n/m$ blocks of $m$ symbols each, with $n/m$ assumed to be an integer. Specifically, the codeword transmitted by Encoder 1 as a function of message $w_1$ occupies the entire coding slot, and it includes $n$ symbols from a constellation $\mathcal{S}$ of $S=|\mathcal S|$ points. Furthermore, the RIS is controlled by Encoder 2 by selecting the phase shift applied by each of the $K$ elements of the RIS from a finite set $\mathcal{A}$ of $A=|\mathcal{A}|$ distinct hardware-determined values as a function of the message $w_2$.
Due to practical limitations on the RIS control rate, the phase shifts can only be modified once for each block of $m$ consecutive transmitted symbols.
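As an illustrative sketch (not from the paper), the slot structure can be emulated with toy sizes, an assumed QPSK constellation for $\mathcal{S}$, and an assumed binary phase set for $\mathcal{A}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, K = 12, 4, 3                            # toy sizes: n/m = 3 blocks (illustrative)
S = np.exp(2j * np.pi * np.arange(4) / 4)     # assumed QPSK constellation, S = |S| = 4
A = np.array([0.0, np.pi])                    # assumed phase set, A = |A| = 2

symbols = rng.choice(S, size=n)               # Encoder 1: one symbol per channel use
blocks = symbols.reshape(n // m, m)           # n/m blocks of m symbols each
thetas = rng.choice(A, size=(n // m, K))      # Encoder 2: K phase shifts, held fixed per block
```

Each row of `thetas` is one RIS reflection pattern, changed only once per block of $m$ symbols, matching the control-rate limitation described above.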
During the $t$th block, the fraction of the codeword of Encoder 1 consisting of $m$ transmitted symbols is denoted by $\mathbf{s}(t)=(\mathrm{s}_{1}(t),\ldots,\mathrm{s}_{m}(t))^\intercal\in\mathcal S^{m\times 1}$, and is assumed to satisfy the power constraint
$\frac{1}{n}\sum_{t=1}^{n/m}\lVert\mathbf{s}(t)\rVert^{2}\leq 1.$
The phase shifts applied by the RIS in the $t$th block are denoted by the vector
$\pmb{\uptheta}(t)=(\uptheta_{1}(t),\ldots,\uptheta_{K}(t))^\intercal\in\mathcal{A}^{K\times 1},$
with $\uptheta_{k}(t)\in\mathcal{A}$ being the phase shift applied by the $k$th RIS element, $k\in[K]$.
We assume quasi-static flat-fading wireless channels, which remain fixed throughout a coding slot. Specifically, the channel from Encoder 1 to the decoder is denoted by the vector $\mathbf{h}_d\in\mathbb C^{N\times 1}$; the channel from Encoder 1 to the RIS is denoted by the vector $\mathbf{h}_i\in\mathbb C^{K\times 1}$; and the channel from the RIS to the $N$ receiving antennas is denoted by the matrix $\mathbf{H}_r\in\mathbb C^{N\times K}$. Furthermore, we assume that $\mathbf{h}_d$, $\mathbf{h}_i$, and $\mathbf{H}_r$ are drawn from a continuous distribution. Finally, we denote the signal received by the $N$ antennas for the $q$th transmitted symbol in block $t\in[n/m]$ by $\mathbf{y}_{q}(t)\in\mathbb C^{N\times 1}$, $q\in[m]$. The overall received signal matrix $\mathbf{Y}(t)=(\mathbf{y}_{1}(t),\ldots,\mathbf{y}_{m}(t))\in\mathbb C^{N\times m}$ in the $t$th block can hence be written as
$\mathbf{Y}(t) = \sqrt{P}\left(\mathbf{H}_r\diag\left(e^{j\pmb{\uptheta}(t)}\right)\mathbf{h}_i+\mathbf{h}_d\right)\mathbf{s}^{\intercal}(t)+\mathbf{Z}(t) = \sqrt{P}\left(\mathbf{H}_{ri}e^{j\pmb{\uptheta}(t)}+\mathbf{h}_d\right)\mathbf{s}^{\intercal}(t)+\mathbf{Z}(t),$
where $P>0$ denotes the transmission power of Encoder 1; the matrix $\mathbf{H}_{ri}\triangleq \mathbf{H}_r\diag(\mathbf{h}_i)\in\mathbb{C}^{N\times K}$, combines the channels $\mathbf{h}_i$ and $\mathbf{H}_r$;
and the matrix $\mathbf{Z}(t)\in\mathbb C^{N\times m}$, whose elements are iid as $\mathcal{CN}(0,1)$, denotes the additive white Gaussian noise at the receiving antennas.
In order to characterize the distribution of the output signal $\mathbf{Y}(t)$ in (<ref>), we vectorize it as
$\mathbf{y}(t)\triangleq\stack(\mathbf{Y}(t))=\sqrt{P}\left(\mathbf{H}_{ri}e^{j\pmb{\uptheta}(t)}+\mathbf{h}_d\right)^{m\otimes}\mathbf{s}(t)+\mathbf{z}(t),$
where we have defined the vector $\mathbf{z}(t)\triangleq\stack(\mathbf{Z}(t))\in\mathbb{C}^{Nm\times 1}$.
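The block model and the vectorization step can be checked numerically. The following NumPy sketch (illustrative toy dimensions, noiseless, and assuming $\sqrt{P}$ scales the full effective channel) verifies that stacking the columns of $\mathbf{Y}(t)$ matches the Kronecker form $(\cdot)^{m\otimes}\mathbf{s}(t)=(\bm{I}_m\otimes\cdot)\,\mathbf{s}(t)$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, m, P = 4, 3, 2, 2.0                       # toy dimensions (illustrative only)

H_r = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
h_i = rng.standard_normal(K) + 1j * rng.standard_normal(K)
h_d = rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1))
H_ri = H_r @ np.diag(h_i)                       # H_ri = H_r diag(h_i)

theta = rng.choice([0.0, np.pi], size=(K, 1))   # one RIS phase pattern per block
s = rng.choice([1.0 + 0j, -1.0 + 0j], size=(m, 1))  # assumed BPSK symbol block

g = np.sqrt(P) * (H_ri @ np.exp(1j * theta) + h_d)  # effective N x 1 channel
Y = g @ s.T                                     # noiseless received block, N x m

# Stacking the columns of Y matches the Kronecker form (I_m ⊗ g) s.
y = Y.reshape(-1, 1, order="F")
assert np.allclose(y, np.kron(np.eye(m), g) @ s)
```

The check relies on the standard identity $\stack(\bm{g}\bm{s}^{\intercal})=\bm{s}\otimes\bm{g}=(\bm{I}_m\otimes\bm{g})\bm{s}$, which holds regardless of the specific channel realization.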
We assume that both the encoders and the decoder have perfect CSI, in the sense that the channel matrix $\mathbf{H}_{ri}$ and channel vector $\mathbf{h}_d$ are known.
Having received signal $\mathbf{y}(t)$ in (<ref>) for $t\in[n/m]$, the decoder produces the estimates $\hat{w}_\ell=\hat{w}_\ell(\mathbf{y}(1),\ldots,\mathbf{y}(n/m),\mathbf{H}_{ri},\mathbf{h}_d)$, for $\ell=1,2$, using knowledge of the CSI.
Given channel realizations $\bm{H}_{ri}$ and $\bm{h}_d$, a rate pair $(R_1(\bm{H}_{ri},\bm{h}_d),R_2(\bm{H}_{ri},\bm{h}_d))$ is said to be achievable if the probability of error satisfies the limit $\Pr(\hat{w}_1\neq w_1,\hat{w}_2\neq w_2)\rightarrow 0$ when the codeword length grows large, i.e., $n\rightarrow\infty$. The corresponding capacity region $\mathcal{C}(\bm{H}_{ri},\bm{h}_d)$ is defined as the closure of the set of achievable rate pairs.
§ CAPACITY REGION
In this section, we first derive a general characterization of the capacity region $\mathcal{C}(\bm{H}_{ri},\bm{h}_d)$ for the channel in (<ref>). Then, we leverage this result to provide theoretical insights into the trade-offs between the rate of the two encoders in <ref> by focusing on the low- and high-power regimes.
Most existing works on the multiplicative Gaussian MAC <cit.> and on RIS-aided systems (see, e.g., <cit.>) consider Gaussian codebooks for the transmitted signal $\mathbf{s}(t)$. This implies that the resulting achievable rates are formulated in the standard form “$\log_2(1+\text{SNR})$”. In contrast, as described in <ref>, we focus our attention on the more practical model in which the transmitted symbols and the RIS elements' phase response take values from finite sets <cit.>. Therefore, in a manner similar to <cit.>, the expressions for the achievable rates that we present in this section are more complex, and require the following definition.
The CGF of a random variable $\mathrm{u}$ conditioned on a random vector $\mathbf{x}$ is defined as
$\kappa_r(\mathrm{u}|\mathbf{x})\triangleq\mathbb{E}\left[\log_2\mathbb{E}\left[e^{r\mathrm{u}}\middle|\mathbf{x}\right]\right],\quad\text{for }r\in\mathbb{R},$
and the value of the conditional CGF for $r=1$ is denoted as $\kappa(\mathrm{u}|\mathbf{x})\triangleq\kappa_1(\mathrm{u}|\mathbf{x})$.
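As a numerical illustration (not from the paper), $\kappa(\mathrm{u})$ can be estimated by Monte Carlo. For $\mathrm{u}=-\lVert\mathbf{z}\rVert^2$ with $\mathbf{z}\sim\mathcal{CN}(\mathbf{0},\bm{I}_n)$ and no conditioning variable, each $|\mathrm{z}_i|^2$ is exponential with unit mean, so $\mathbb{E}[e^{\mathrm{u}}]=2^{-n}$ and the exact value is $\kappa(\mathrm{u})=-n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 2, 200_000

# z ~ CN(0, I_n): real and imaginary parts are N(0, 1/2) each
z = (rng.standard_normal((trials, n)) + 1j * rng.standard_normal((trials, n))) / np.sqrt(2)
u = -np.sum(np.abs(z) ** 2, axis=1)           # u = -||z||^2
kappa = np.log2(np.mean(np.exp(u)))           # Monte Carlo estimate of log2 E[e^u]
print(kappa)                                  # close to the exact value -n = -2
```

With 200,000 samples the estimate lands within a few hundredths of $-2$; the same estimator applies to the conditional CGFs of $\mathrm{u}_1$, $\mathrm{u}_2$, $\mathrm{u}_3$ by averaging over the conditioning variables.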
We now characterize the capacity region in the form of a union of rate regions, with each region corresponding to rates achievable for a specific choice of encoding distributions $p_{\mathbf{s}}(\bm{s})$ and $p_{\pmb{\uptheta}}(\pmb{\theta})$ of the transmitted symbols $\mathbf{s}(t)$ and RIS phase shifts $\pmb{\uptheta}(t)$ in (<ref>), respectively.
For input distributions $p_{\mathbf{s}}(\bm{s})$ and $p_{\pmb{\uptheta}}(\pmb{\theta})$, let
$\mathcal{R}(p_{\mathbf{s}},p_{\pmb{\uptheta}},\bm{H}_{ri},\bm{h}_d)$ be the set of rate pairs $\rb{R_1(\bm{H}_{ri},\bm{h}_d),R_2(\bm{H}_{ri},\bm{h}_d)}$ such that the inequalities
\begin{align}
R_\ell(\bm{H}_{ri},\bm{h}_d)&\leq-N\log_2(e)-\frac{1}{m}\kappa(\mathrm{u}_\ell|\mathbf{s}_1,\pmb{\uptheta}_1,\mathbf{z}),\quad\ell\in\{1,2\},\\
\text{and}\quad R_1(\bm{H}_{ri},\bm{h}_d)+R_2(\bm{H}_{ri},\bm{h}_d)&\leq-N\log_2(e)-\frac{1}{m}\kappa(\mathrm{u}_3|\mathbf{s}_1,\pmb{\uptheta}_1,\mathbf{z})
\end{align}
hold, where the random variables $\mathrm{u}_1$, $\mathrm{u}_2$, and $\mathrm{u}_3$ are defined as
\begin{align}
\mathrm{u}_1&\triangleq-\left\lVert\mathbf{z}+\sqrt{P}\rb{\bm{H}_{ri}e^{j\pmb{\uptheta}_1}+\bm{h}_d}\otimes\rb{\mathbf{s}_1-\mathbf{s}_2}\right\rVert^2,\\
\mathrm{u}_2&\triangleq-\left\lVert\mathbf{z}+\sqrt{P}\,\bm{H}_{ri}\rb{e^{j\pmb{\uptheta}_1}-e^{j\pmb{\uptheta}_2}}\otimes\mathbf{s}_1\right\rVert^2,\\
\mathrm{u}_3&\triangleq-\left\lVert\mathbf{z}+\sqrt{P}\big[\rb{\bm{H}_{ri}e^{j\pmb{\uptheta}_1}+\bm{h}_d}\otimes\mathbf{s}_1-\rb{\bm{H}_{ri}e^{j\pmb{\uptheta}_2}+\bm{h}_d}\otimes\mathbf{s}_2\big]\right\rVert^2,
\end{align}
respectively, with independent random vectors $\mathbf{s}_1,\mathbf{s}_2\sim p_{\mathbf{s}}(\bm{s})$, $\pmb{\uptheta}_1,\pmb{\uptheta}_2\sim p_{\pmb{\uptheta}}(\pmb{\theta})$, and $\mathbf{z}\sim\mathcal{CN}(\mathbf{0},\bm{I}_{Nm})$.
The capacity region $\mathcal{C}(\bm{H}_{ri},\bm{h}_d)$ is the convex hull of the union of the regions $\mathcal{R}(p_{\mathbf{s}},p_{\pmb{\uptheta}},\bm{H}_{ri},\bm{h}_d)$ over all input distributions $p_{\mathbf{s}}(\bm{s})$ and $p_{\pmb{\uptheta}}(\pmb{\theta})$ with $\bm{s}\in\mathcal{S}^{m\times 1}$, $\pmb{\theta}\in\mathcal{A}^{K\times 1}$, such that $\mathbb E[\mathbf{s}^\ast\mathbf{s}]\leq m$.
See Appendix <ref>.
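To illustrate how such rate bounds can be evaluated in practice, the following sketch (with assumed channel values; $N=m=1$, BPSK input, $K=2$ binary-phase RIS elements) estimates the bound on $R_1$ by Monte Carlo averaging over $(\mathrm{s}_1,\pmb{\uptheta}_1,\mathrm{z})$, while the inner expectation over the finite alphabet of $\mathrm{s}_2$ is computed exactly. At high power the estimate should be close to $\log_2(S)=1$ bit per symbol:

```python
import numpy as np

rng = np.random.default_rng(1)
# Assumed illustrative channel (not from the paper):
h_ri = np.array([0.5, 0.5j])   # reflected path, K = 2
h_d = 1.0                      # direct path
S = np.array([-1.0, 1.0])      # BPSK constellation
phases = np.array([0.0, np.pi])  # A = 2 phase shifts
P = 100.0                      # 20 dB transmit power
n = 20000                      # Monte Carlo samples

s1 = rng.choice(S, n)
th1 = rng.choice(phases, (n, 2))
X = np.exp(1j * th1) @ h_ri + h_d   # effective scalar channel per sample
z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# u_1 for every candidate s_2; inner expectation is an exact finite average
u1 = -np.abs(z[:, None] + np.sqrt(P) * X[:, None] * (s1[:, None] - S[None, :])) ** 2
kappa = np.mean(np.log2(np.mean(np.exp(u1), axis=1)))
R1 = -np.log2(np.e) - kappa   # rate bound, bits per symbol
```

The sampling structure follows the conditional CGF definition: the inner average is over $\mathrm{s}_2$ given $(\mathrm{s}_1,\pmb{\uptheta}_1,\mathrm{z})$, the outer average over the conditioning variables.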
Next, we specialize the results in <ref> to characterize the capacity region in the high- and low-power regimes.
§.§ High-Power Regime
The following corollary shows that the capacity region $\mathcal{C}(\bm{H}_{ri},\bm{h}_d)$ converges to a rectangle as the power of Encoder 1 increases.
For any finite constellation $\mathcal S$ of $S=|\mathcal S|$ points and any set $\mathcal{A}$ of $A=|\mathcal A|$ phases, let $\overline{\mathcal C}$ be the set of rate pairs $\rb{R_1,R_2}$ such that
\begin{equation}
\overline{\mathcal C}\triangleq\left\{\rb{R_1,R_2}:R_1\leq\log_2(S),\ R_2\leq\frac{K}{m}\log_2(A)\right\}.
\end{equation}
The capacity region $\mathcal{C}(\bm{H}_{ri},\bm{h}_d)$ converges to $\overline{\mathcal C}$ as the power $P$ increases in the sense that $\mathcal{C}(\bm{H}_{ri},\bm{h}_d)\subseteq \overline{\mathcal C}$, and there exists a sequence of achievable rate pairs $\rb{R_1(\bm{H}_{ri},\bm{h}_d),R_2(\bm{H}_{ri},\bm{h}_d)}\in \mathcal{C}(\bm{H}_{ri},\bm{h}_d)$ such that, almost surely,
\begin{align}
\lim_{P\rightarrow\infty}R_1(\bm{H}_{ri},\bm{h}_d)&=\log_2(S),\\
\lim_{P\rightarrow\infty}R_2(\bm{H}_{ri},\bm{h}_d)&=\frac{K}{m}\log_2(A).
\end{align}
See Appendix <ref>.
<ref> implies that, for sufficiently high power $P$, both encoders can simultaneously achieve their maximum rates. As a result, while not useful in increasing the high-power rate of Encoder 1, the presence of the RIS enables communication at the maximum rate for Encoder 2 without creating deleterious interference on Encoder 1's transmission.
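The corner of the high-power rectangle is simple arithmetic; the following sketch evaluates it for the parameter values used later in the numerical examples (BPSK so $S=2$, $K=4$ RIS elements, $A=2$ phases, control ratio $m=2$):

```python
import numpy as np

# High-power rectangle corner point from the corollary:
S, K, A, m = 2, 4, 2, 2
R1_max = np.log2(S)            # Encoder 1: log2(S) = 1 bit per symbol
R2_max = (K / m) * np.log2(A)  # Encoder 2: (K/m) log2(A) = 2 bits per symbol
```

Both rates are simultaneously achievable as $P$ grows, which is why the region is a rectangle rather than a pentagon.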
§.§ Low-Power Regime
In this section, we characterize the capacity region $\mathcal{C}(\bm{H}_{ri},\bm{h}_d)$ in the low-power regime. To simplify the analysis, we focus on a system with one receiver antenna, $N=1$, and an RIS control ratio of $m=1$. For this special case, the channel (<ref>) can be written as
\begin{equation}
\mathrm{y}(t)=\sqrt{P}\rb{\mathbf{h}_{ri}^\intercal e^{j\pmb{\uptheta}(t)}+\mathrm{h}_d}\mathrm{s}(t)+\mathrm{z}(t),
\end{equation}
where $\mathbf{h}_{ri}\in\mathbb{C}^{K\times 1}$ and $\mathrm{h}_d\in\mathbb{C}$ denote the reflected and direct channel paths, respectively, and $\mathrm{z}(t)\sim\mathcal{CN}(0,1)$ denotes the additive white Gaussian noise. Furthermore, we assume that the phase shift applied by each element of the RIS is chosen from a finite set of $A$ uniformly spaced phases, i.e., $\mathcal A=\{0,2\pi/A,\ldots,2\pi(A-1)/A\}$; and that $\mathcal{S}$ is a zero-mean input constellation, i.e.,
\begin{equation}
\frac{1}{S}\sum_{s\in\mathcal{S}}s=0,
\end{equation}
which is known to achieve the minimum energy per bit in many single-user channels <cit.>.
In order to formulate the capacity region in the low-power regime, we define the normalized rate $r_\ell(\bm{h}_{ri},h_d)$, $\ell\in\{1,2\}$, per unit power as
\begin{equation}
r_\ell(\bm{h}_{ri},h_d)\triangleq\lim_{P\rightarrow0}\frac{R_\ell(\bm{h}_{ri},h_d)}{P}.
\end{equation}
The capacity region in the low-power regime $\underline{\mathcal{C}}(\bm{h}_{ri},h_d)$ is accordingly defined as the closure of the set of achievable normalized rate pairs (see, e.g., <cit.>).
For input distributions $p_{\mathrm{s}}(s)$ and $p_{\pmb{\uptheta}}(\pmb{\theta})$, let
$\underline{\mathcal{R}}(p_{\mathrm{s}},p_{\pmb{\uptheta}},\bm{h}_{ri},h_d)$ be the set of normalized rate pairs $\rb{r_1(\bm{h}_{ri},h_d),r_2(\bm{h}_{ri},h_d)}$ such that the inequalities
\begin{align}
r_\ell(\bm{h}_{ri},h_d)&\leq\frac{\mathbb{E}[\underline{\mathrm{u}}_\ell]}{\ln(2)},\quad\ell\in\{1,2\},\\
\text{and}\quad r_1(\bm{h}_{ri},h_d)+r_2(\bm{h}_{ri},h_d)&\leq\frac{\mathbb{E}[\underline{\mathrm{u}}_3]}{\ln(2)}
\end{align}
hold, where the random variables $\underline{\mathrm{u}}_1$, $\underline{\mathrm{u}}_2$, and $\underline{\mathrm{u}}_3$ are defined as
\begin{align}
\underline{\mathrm{u}}_1&\triangleq\big|\rb{\bm{h}_{ri}^\intercal e^{j\pmb{\uptheta}_1}+h_d}\rb{\mathrm{s}_1-\mathrm{s}_2}\big|^2,\\
\underline{\mathrm{u}}_2&\triangleq\big|\bm{h}_{ri}^\intercal\rb{e^{j\pmb{\uptheta}_1}-e^{j\pmb{\uptheta}_2}}\mathrm{s}_1\big|^2,\\
\underline{\mathrm{u}}_3&\triangleq\big|\rb{\bm{h}_{ri}^\intercal e^{j\pmb{\uptheta}_1}+h_d}\mathrm{s}_1-\rb{\bm{h}_{ri}^\intercal e^{j\pmb{\uptheta}_2}+h_d}\mathrm{s}_2\big|^2,
\end{align}
respectively, with independent random variables $\mathrm{s}_1,\mathrm{s}_2\sim p_{\mathrm{s}}(s)$ and random vectors $\pmb{\uptheta}_1,\pmb{\uptheta}_2\sim p_{\pmb{\uptheta}}(\pmb{\theta})$.
The capacity region in the low-power regime $\underline{\mathcal{C}}(\bm{h}_{ri},h_d)$ is the convex hull of the union of the regions $\underline{\mathcal{R}}(p_{\mathrm{s}},p_{\pmb{\uptheta}},\bm{h}_{ri},h_d)$ over all input distributions $p_{\mathrm{s}}(s)$ and $p_{\pmb{\uptheta}}(\pmb{\theta})$ with $s\in\mathcal{S}$, $\pmb{\theta}\in\mathcal{A}^{K\times 1}$, such that $\mathbb E[|\mathrm{s}|^2]\leq 1$.
See Appendix <ref>.
Unlike the high-power regime, the low-power capacity region (<ref>) is not a rectangle, implying that it is not possible for both encoders to communicate at their respective maximum rates. The next corollary elaborates on this point.
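The expectations $\mathbb{E}[\underline{\mathrm{u}}_\ell]$ that bound the normalized rates require no noise averaging and can be estimated by plain Monte Carlo sampling of $(\mathrm{s}_1,\mathrm{s}_2,\pmb{\uptheta}_1,\pmb{\uptheta}_2)$. The sketch below (channel values are assumed for illustration) checks the estimates against closed forms one can derive under i.i.d. uniform phases and a zero-mean unit-power constellation, namely $\mathbb{E}[\underline{\mathrm{u}}_1]=2(\lVert\bm{h}_{ri}\rVert^2+|h_d|^2)$ and $\mathbb{E}[\underline{\mathrm{u}}_2]=2\lVert\bm{h}_{ri}\rVert^2$ (the cross terms vanish because $\mathbb{E}[e^{j\uptheta}]=0$):

```python
import numpy as np

rng = np.random.default_rng(2)
# Assumed illustrative channel (not from the paper):
h_ri = np.array([0.6 + 0.2j, -0.3 + 0.4j, 0.1 - 0.5j])
h_d = 0.8 - 0.3j
K, A, n = 3, 4, 200000
phases = 2 * np.pi * np.arange(A) / A        # uniformly spaced phase set
s = np.array([-1.0, 1.0])                    # BPSK: zero mean, unit power

th1 = rng.choice(phases, (n, K))
th2 = rng.choice(phases, (n, K))
s1 = rng.choice(s, n)
s2 = rng.choice(s, n)

X1 = np.exp(1j * th1) @ h_ri + h_d           # effective channel, sample 1
Eu1 = np.mean(np.abs(X1 * (s1 - s2)) ** 2)
Eu2 = np.mean(np.abs((np.exp(1j * th1) - np.exp(1j * th2)) @ h_ri * s1) ** 2)

# Closed forms under i.i.d. uniform phases (E[e^{j theta}] = 0):
Eu1_cf = 2 * (np.linalg.norm(h_ri) ** 2 + abs(h_d) ** 2)
Eu2_cf = 2 * np.linalg.norm(h_ri) ** 2
```

The factorized form of $\mathbb{E}[\underline{\mathrm{u}}_1]$ follows from the independence of the phase vector and the symbols, as used in the proof of the corollary below.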
Let $\tilde{\pmb{\theta}}$ be the beamforming phase-shift vector maximizing Encoder 1's rate, i.e.,
\begin{equation}
\tilde{\pmb{\theta}}\triangleq\operatorname*{arg\,max}_{\pmb{\theta}\in\mathcal{A}^{K\times1}}\big|\bm{h}_{ri}^\intercal e^{j\pmb{\theta}}+h_d\big|^2.
\end{equation}
In the low-power regime, Encoder 1 can achieve its maximum normalized rate
if and only if Encoder 2 does not communicate, i.e., $r_2(\bm{h}_{ri},h_d)=0$. In contrast, if $\lvert\bm{h}_{ri}^\intercal e^{j\tilde{\pmb{\theta}}}+h_d\rvert^2>\lVert \bm{h}_{ri}\rVert^2$, Encoder 2 can achieve its maximum normalized rate
while Encoder 1 communicates at a normalized rate of
See Appendix <ref>.
The asymmetry between Encoder 1 and Encoder 2 revealed by <ref> stems from the fact that, in order for Encoder 1 to obtain its maximum rate in the low-power regime, Encoder 2 needs to steer its phases according to the beamforming solution (<ref>). This in turn makes it impossible to encode additional information for Encoder 2. In contrast, Encoder 2's maximum rate can be obtained as long as Encoder 1's signal is transmitted at the maximum power and can be decoded while treating the modulation of the RIS's phases by Encoder 2 as a nuisance.
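For small $K$ and $A$, the discrete beamforming vector can be found by exhaustive search over the $A^K$ phase vectors. The hypothetical sketch below (channel values are assumptions, not the paper's) finds it and checks the condition under which, per the corollary, Encoder 2's maximum rate can coexist with a positive rate for Encoder 1:

```python
import numpy as np
from itertools import product

# Assumed illustrative channel:
h_ri = np.array([0.6 + 0.2j, -0.3 + 0.4j, 0.1 - 0.5j])
h_d = 0.8 - 0.3j
A, K = 4, 3
phases = 2 * np.pi * np.arange(A) / A

# Brute-force search for the phase vector maximizing |h_ri^T e^{j theta} + h_d|^2
gains = {t: abs(h_ri @ np.exp(1j * np.array(t)) + h_d) ** 2
         for t in product(phases, repeat=K)}
theta_tilde = max(gains, key=gains.get)   # discrete beamforming vector
best = gains[theta_tilde]

# Corollary's condition: beamforming gain exceeds ||h_ri||^2
condition = best > np.linalg.norm(h_ri) ** 2
```

Exhaustive search scales as $A^K$ and is only meant as a reference; the point here is the comparison between the beamforming gain and $\lVert\bm{h}_{ri}\rVert^2$.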
We finally remark that <ref> and <ref> imply that time-sharing, which would yield a triangular rate region, is suboptimal in both high- and low-power regimes.
This is in contrast to the multiplicative MAC studied in <cit.> that assumes two standard active encoders with separate power constraints.
§ EXAMPLES
In this section, we provide numerical examples for the capacity region derived in <ref>. For the phase response set, we consider $A$ uniformly spaced phases in the set $\mathcal A=\{0,2\pi/A,\ldots,2\pi(A-1)/A\}$, whereas, for the input constellation, we consider ASK and PSK modulations. In addition, we assume a channel vector $\bm{h}_d$ with elements having amplitude $1$, and a channel matrix $\bm{H}_{ri}$ with elements having amplitude $\alpha>0$, where $\alpha$ denotes the path-loss ratio between the reflected and direct paths. The phases of $\bm{H}_{ri}$ and $\bm{h}_d$ used in this section are summarized in <ref>.
\begin{table}
\caption{Phases of $\bm{H}_{ri}$ and $\bm{h}_d$ used for the numerical examples}
\begin{tabular}{ccc}
Figure & $\angle \bm{H}_{ri}$ [rad] & $\angle\bm{h}_{d}$ [rad] \\
<ref> & $\begin{pmatrix}
1.11 & 0.71 & 2.92 & -2.29\\
2.52 & -0.72 & 2.21 & 2.1
\end{pmatrix}$ & $\begin{pmatrix}
\dots \\
3.11
\end{pmatrix}$ \\
<ref> & $\begin{pmatrix}
-2.63 & -1.22 & -2.92 & -1.52\\
1.85 & 0.36 & -0.87 & -2.59
\end{pmatrix}$ & $\begin{pmatrix}
\dots \\
2.82
\end{pmatrix}$
\end{tabular}
\end{table}
Furthermore, the expectation over Gaussian random vectors, e.g., $\mathbf{z}$ in <ref>, is evaluated via Monte Carlo empirical averages.
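A minimal sketch of this averaging step: draw samples of $\mathbf{z}\sim\mathcal{CN}(\mathbf{0},\bm{I}_{Nm})$, i.e., with real and imaginary parts i.i.d. $\mathcal{N}(0,1/2)$, and form an empirical average; here the averaged quantity is $\lVert\mathbf{z}\rVert^2$, whose mean is $Nm$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, n = 2, 2, 100000
# z ~ CN(0, I_{Nm}): real and imaginary parts each N(0, 1/2)
z = (rng.standard_normal((n, N * m))
     + 1j * rng.standard_normal((n, N * m))) / np.sqrt(2)
avg_norm2 = np.mean(np.sum(np.abs(z) ** 2, axis=1))  # close to Nm = 4
```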
In <ref>, we plot the capacity region for an average power constraint of $P=-20$ dB, $N=2$ receiver antennas, $K=4$ RIS elements, $A=2$ available phase shifts, a symbol-to-RIS control rate $m=2$, input constellation given by BPSK, i.e., $\mathcal S=\{-1,1\}$, and a path-loss ratio of $\alpha=0.5$ or $\alpha=1$.
Capacity region for $P=-20$ dB, $N=2$, $K=4$, $A=2$, $m=2$, and BPSK input constellation. The dashed line illustrates the capacity of Encoder 1 for a channel with no RIS.
In addition, we plot for reference the maximum rate achievable by Encoder 1 for a channel with no RIS, i.e., for $\bm{H}_{ri}=\mathbf{0}$.
By comparing with the capacity of the channel with no RIS, <ref> illustrates the two roles of the RIS: The RIS can be used to increase the rate of Encoder 1 by beamforming the transmitted signal, and it can enable communication from a passive secondary user.
In this regard, <ref> demonstrates that the insights obtained in <ref> by studying the low-power regime carry over to more general conditions. In particular, the maximum rate for Encoder 1 is achieved if and only if Encoder 2 does not communicate, while Encoder 2's maximum rate can coexist with a non-zero rate for Encoder 1.
In contrast, by <ref>, for sufficiently high power $P$, both encoders can communicate with the decoder at their respective maximum rates. This is verified by <ref>, where we plot the capacity region for an average power constraint of $P=40$ dB, $N=2$ receiver antennas, $K=4$ RIS elements, $A=2$ available phase shifts, a symbol-to-RIS control rate $m=1$, input constellation given by 4-ASK, i.e., $\mathcal S=\{\sigma,3\sigma,5\sigma,7\sigma\}$ with $\sigma=1/\sqrt{21}$, and a path-loss ratio of $\alpha=1$.
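The 4-ASK normalization quoted above can be verified directly: with $\sigma=1/\sqrt{21}$, the constellation $\{\sigma,3\sigma,5\sigma,7\sigma\}$ has average power $(1+9+25+49)\sigma^2/4=21\sigma^2=1$, meeting the power constraint with equality under a uniform input distribution:

```python
import numpy as np

sigma = 1 / np.sqrt(21)
S4 = sigma * np.array([1.0, 3.0, 5.0, 7.0])  # 4-ASK constellation
avg_power = np.mean(S4 ** 2)                 # (1+9+25+49)/4 * sigma^2 = 1
```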
Capacity region for $P=40$ dB, $N=2$, $K=4$, $A=2$, $m=1$, and 4-ASK input constellation. The dashed line illustrates the capacity of Encoder 1 for a channel with no RIS.
Although Encoder 1 does not gain from the existence of the RIS in the high-power regime, the RIS enables MU communication with a single transmitter in a manner that resembles the single-RF MIMO system <cit.>.
§ CONCLUSION
In this work, we have studied the finite-input capacity region of an RIS-aided MU communication system, in which the RIS is not used solely for increasing the rate of an active encoder, but also for enabling communication for a secondary passive encoder. The fundamental trade-offs between the rates of the two encoders were characterized. It was shown that, for sufficiently high power, both users can communicate at their respective maximum rates. Furthermore, in the low-power regime, the maximum rate for the active encoder is achieved if and only if the passive encoder does not communicate, while the passive encoder's maximum rate can coexist with a non-zero rate for the active encoder. Finally, time-sharing was demonstrated to be suboptimal.
§.§ Proof of Proposition <ref>
The model (<ref>) can be viewed as a MAC with inputs $(\mathbf{s},\pmb{\uptheta})$ and output $\mathbf{y}$. Therefore, it follows from the capacity region of the MAC <cit.> that $\mathcal{C}(\bm{H}_{ri},\bm{h}_d)$ is the convex hull of the union of regions $\tilde{\mathcal{R}}(p_{\mathbf{s}},p_{\pmb{\uptheta}},\bm{H}_{ri},\bm{h}_d)$ over all input distributions $p_{\mathbf{s}}(\bm{s})$ and $p_{\pmb{\uptheta}}(\pmb{\theta})$ such that $\mathbb{E}[\mathbf{s}^\ast\mathbf{s}]\leq m$, where $\tilde{\mathcal{R}}(p_{\mathbf{s}},p_{\pmb{\uptheta}},\bm{H}_{ri},\bm{h}_d)$ is the set of rate pairs $\rb{R_1(\bm{H}_{ri},\bm{h}_d),R_2(\bm{H}_{ri},\bm{h}_d)}$ such that inequalities
\begin{align}
R_1(\bm{H}_{ri},\bm{h}_d)&\leq\frac{1}{m}I(\mathbf{s};\mathbf{y}|\pmb{\uptheta}),\\
R_2(\bm{H}_{ri},\bm{h}_d)&\leq\frac{1}{m}I(\pmb{\uptheta};\mathbf{y}|\mathbf{s}),\\
\text{and}\quad R_1(\bm{H}_{ri},\bm{h}_d)+R_2(\bm{H}_{ri},\bm{h}_d)&\leq\frac{1}{m}I(\mathbf{s},\pmb{\uptheta};\mathbf{y})
\end{align}
hold. Since inputs $\mathbf{s}$ and $\pmb{\uptheta}$ are selected from finite sets, the mutual information $I(\mathbf{s};\mathbf{y}|\pmb{\uptheta})$ in (<ref>) can be written as (see, e.g., <cit.>)
\begin{equation}
I(\mathbf{s};\mathbf{y}|\pmb{\uptheta})=-Nm\log_2(e)-\mathbb{E}\big[\log_2\mathbb{E}\big[e^{\mathrm{u}_1}\,\big|\,\mathbf{s}_1,\pmb{\uptheta}_1,\mathbf{z}\big]\big],
\end{equation}
with $\mathbf{z}\sim\mathcal{CN}(\bm{0},\bm{I}_{Nm})$, where $\mathrm{u}_1$ is the scalar defined in (<ref>).
By applying the conditional CGF definition in (<ref>) to (<ref>), we get
\begin{equation}
I(\mathbf{s};\mathbf{y}|\pmb{\uptheta})=-Nm\log_2(e)-\kappa(\mathrm{u}_1|\mathbf{s}_1,\pmb{\uptheta}_1,\mathbf{z}).
\end{equation}
Similarly, we also have
\begin{align}
I(\pmb{\uptheta};\mathbf{y}|\mathbf{s})&=-Nm\log_2(e)-\kappa(\mathrm{u}_2|\mathbf{s}_1,\pmb{\uptheta}_1,\mathbf{z}),\\
I(\mathbf{s},\pmb{\uptheta};\mathbf{y})&=-Nm\log_2(e)-\kappa(\mathrm{u}_3|\mathbf{s}_1,\pmb{\uptheta}_1,\mathbf{z}).
\end{align}
Therefore, the region $\tilde{\mathcal{R}}(p_{\mathbf{s}},p_{\pmb{\uptheta}},\bm{H}_{ri},\bm{h}_d)$ in (<ref>) is identical to the region $\mathcal{R}(p_{\mathbf{s}},p_{\pmb{\uptheta}},\bm{H}_{ri},\bm{h}_d)$ in (<ref>).
§.§ Proof of Corollary <ref>
The inclusion $\mathcal{C}(\bm{H}_{ri},\bm{h}_d)\subseteq \overline{\mathcal C}$ is trivial since, for all input distributions $p_{\mathbf{s}}(\bm{s})$ and $p_{\pmb{\uptheta}}(\pmb{\theta})$ with $\bm{s}\in\mathcal S^{m\times 1}$ and $\pmb{\theta}\in\mathcal A^{K\times 1}$ we have $H(\mathbf{s})\leq m\log_2(S)$ and $H(\pmb{\uptheta})\leq K\log_2(A)$. In addition, in the high-power regime, we have the limits
\begin{align}
I(\mathbf{s};\mathbf{y}|\pmb{\uptheta})&\xrightarrow{P\rightarrow\infty}H(\mathbf{s})\leq m\log_2(S),\\
I(\pmb{\uptheta};\mathbf{y}|\mathbf{s})&\xrightarrow{P\rightarrow\infty}H(\pmb{\uptheta})\leq K\log_2(A),
\end{align}
where equality is achieved for uniform distributions $p_{\mathbf{s}}(\bm{s})$ and $p_{\pmb{\uptheta}}(\pmb{\theta})$.
Next, note that the noiseless received signal $\mathbf{y}(t)-\mathbf{z}(t)$ in (<ref>) takes values from a discrete set. Furthermore, since channel matrix $\mathbf{H}_{ri}$ and channel vector $\mathbf{h}_d$ are drawn from a continuous distribution, almost surely, for all $t\in[n/m]$, there exist unique inputs $\hat{\bm{s}}(t)\in\mathcal{S}^{m\times 1}$ and $\hat{\pmb{\theta}}(t)\in\mathcal{A}^{K\times 1}$ such that (see, e.g., <cit.>)
Therefore, for all input distributions $p_{\mathbf{s}}(\bm{s})$ and $p_{\pmb{\uptheta}}(\pmb{\theta})$, the transmitted signal $\mathbf{s}(t)$ and reflection pattern $\pmb{\uptheta}(t)$ can be correctly jointly decoded in the high-power regime, i.e., we have the limit
Let $\rb{R_1^u(\bm{H}_{ri},\bm{h}_d),R_2^u(\bm{H}_{ri},\bm{h}_d)}\in \mathcal{C}(\bm{H}_{ri},\bm{h}_d)$ be the rate pair achieved using uniform distributions $p_{\mathbf{s}}(\bm{s})$ and $p_{\pmb{\uptheta}}(\pmb{\theta})$.
It hence follows from the region in (<ref>) and limits (<ref>) and (<ref>) that, almost surely, we have the limits
\begin{align}
\lim_{P\rightarrow\infty}R_1^u(\bm{H}_{ri},\bm{h}_d)&=\log_2(S),\\
\lim_{P\rightarrow\infty}R_2^u(\bm{H}_{ri},\bm{h}_d)&=\frac{K}{m}\log_2(A).
\end{align}
§.§ Proof of Proposition <ref>
For input distributions $p_{\mathrm{s}}(s)$ and $p_{\pmb{\uptheta}}(\pmb{\theta})$, let functions $\tilde{R}_\ell(P,\bm{h}_{ri},h_d)$, $\ell\in\{1,2,3\}$, be defined as
\begin{equation}
\tilde{R}_\ell(P,\bm{h}_{ri},h_d)\triangleq-\log_2(e)-\kappa(\mathrm{u}_\ell|\mathrm{s}_1,\pmb{\uptheta}_1,\mathrm{z}),
\end{equation}
where $\kappa(\mathrm{u}_\ell|\mathrm{s}_1,\pmb{\uptheta}_1,\mathrm{z})$ are the conditional CGFs in <ref> for the special case in which $N=m=1$. By calculating the derivative of $\tilde{R}_\ell(P,\bm{h}_{ri},h_d)$ with respect to the power $P$ and taking the limit $P\rightarrow0$, we get
\begin{equation}
\lim_{P\rightarrow0}\frac{\partial\tilde{R}_\ell(P,\bm{h}_{ri},h_d)}{\partial P}=\frac{\mathbb{E}[\underline{\mathrm{u}}_\ell]}{\ln(2)},
\end{equation}
where random variables $\underline{\mathrm{u}}_\ell$ are defined in (<ref>). Therefore, it follows from <ref> that the normalized rate pairs $\rb{r_1(\bm{h}_{ri},h_d),r_2(\bm{h}_{ri},h_d)}$ satisfy
\begin{align}
r_\ell(\bm{h}_{ri},h_d)&=\lim_{P\rightarrow0}\frac{R_\ell(\bm{h}_{ri},h_d)}{P}\\
&\leq\lim_{P\rightarrow0}\frac{\tilde{R}_\ell(P,\bm{h}_{ri},h_d)}{P}\\
&=\lim_{P\rightarrow0}\frac{\partial\tilde{R}_\ell(P,\bm{h}_{ri},h_d)}{\partial P}\\
&=\frac{\mathbb{E}[\underline{\mathrm{u}}_\ell]}{\ln(2)},\quad\ell\in\{1,2\},
\end{align}
and similarly we have
\begin{equation}
r_1(\bm{h}_{ri},h_d)+r_2(\bm{h}_{ri},h_d)\leq\frac{\mathbb{E}[\underline{\mathrm{u}}_3]}{\ln(2)}.
\end{equation}
§.§ Proof of Corollary <ref>
Since $\mathrm{s}_1$, $\mathrm{s}_2$, and $\pmb{\uptheta}_1$ in <ref> are all independent, we have
\begin{align}
\mathbb{E}[\underline{\mathrm{u}}_1]&=\mathbb{E}\big|\rb{\bm{h}_{ri}^\intercal e^{j\pmb{\uptheta}_1}+h_d}\rb{\mathrm{s}_1-\mathrm{s}_2}\big|^2\\
&=\mathbb{E}\big|\bm{h}_{ri}^\intercal e^{j\pmb{\uptheta}_1}+h_d\big|^2\,\mathbb{E}\big|\mathrm{s}_1-\mathrm{s}_2\big|^2\\
&\leq2\big|\bm{h}_{ri}^\intercal e^{j\tilde{\pmb{\theta}}}+h_d\big|^2.
\end{align}
Similarly, we have the upper bounds
\begin{align}
\mathbb{E}[\underline{\mathrm{u}}_2]&\leq2\lVert\bm{h}_{ri}\rVert^2,\\
\mathbb{E}[\underline{\mathrm{u}}_3]&\leq2\big|\bm{h}_{ri}^\intercal e^{j\tilde{\pmb{\theta}}}+h_d\big|^2.
\end{align}
Equality in (<ref>) and (<ref>) is achieved for fixed RIS reflection pattern $\pmb{\uptheta}=\tilde{\pmb{\theta}}$ with probability one and uniform input distribution $p_{\mathrm{s}}(s)=1/S$. Furthermore, since the upper bounds in (<ref>) and (<ref>) are equal, Encoder 1 can achieve the maximum normalized rate if and only if $\pmb{\uptheta}=\tilde{\pmb{\theta}}$ with probability one.
In contrast, equality in (<ref>) is achieved for uniform phase-shift distribution $p_{\pmb{\uptheta}}(\pmb{\theta})=1/A^K$ and any input distribution $p_{\mathrm{s}}(s)$ for which $\mathbb E[|\mathrm{s}|^2]=1$.
That is, Encoder 2 can achieve the maximum normalized rate, while Encoder 1 transmits at a positive normalized rate.