\section{Introduction}
A combined measurement of the hyperfine structure (HFS) splittings in hydrogenlike and lithiumlike ions of $^{209}$Bi was suggested as early as 2001 \cite{Shabaev:01a} as a sensitive probe of bound-state strong-field QED in the strongest static magnetic fields available in the laboratory. Such fields exist in the vicinity of heavy nuclei with nuclear spin and a large nuclear magnetic moment. The electron in H-like $^{209}$Bi$^{82+}$, for example, experiences on average a magnetic field of about 30\,000\,T, more than 1000 times stronger than the fields provided by the strongest superconducting magnets.
According to \cite{Shabaev:01a}, a special combination of the ground-state HFS splittings in H-like and Li-like ions ($\Delta E^{(1s)}$ and $\Delta E^{(2s)}$, respectively) of the same nuclear species, called the specific difference
\begin{eqnarray}
\Delta 'E = \Delta E^{(2s)} - \xi \Delta E^{(1s)},
\end{eqnarray}
provides the best means to test bound-state strong-field QED in the magnetic regime.
Here, the parameter $\xi = 0.16886$ \cite{Shabaev:01a,Volotka:12} is chosen to cancel the contributions of the nuclear-magnetization distribution (Bohr-Weisskopf effect) to $\Delta E^{(1s)}$ and $\Delta E^{(2s)}$.
This is required since the uncertainties of these contributions to the HFS splittings are typically larger than the complete QED contribution and have frustrated all previous attempts to perform a QED test based solely on the HFS splitting in H-like heavy ions. However, at the time of the proposal \cite{Shabaev:01a} the experimental uncertainty of the HFS splitting in Li-like $^{209}$Bi$^{80+}$ extracted
from x-ray emission spectra \cite{Beiersdorfer:98} was far too high to verify the predictions
for $\Delta 'E$.
The first laser spectroscopic observation of the splitting, reported in 2014, was orders of magnitude more precise but still limited by systematic uncertainties \cite{Lochmann:14}. Finally, a further improvement in accuracy by more than an order of magnitude was recently reported \cite{Ullmann:15,Ullmann:17}, but the result surprisingly deviated by more than $7\sigma$ from the latest theoretical prediction \cite{Volotka:12}.
Since the experimental nuclear magnetic moment $\mu_I$ of $^{209}$Bi enters the calculation of the specific difference, an incorrect value will lead to a proportional change in $\Delta^\prime E$, which could be responsible for the discrepancy \cite{Karr:17}.
We also note that in Ref. \cite{Urrutia:96} the discrepancy between theory and experiment on the HFS splitting in H-like Ho was ascribed to an inaccurate value of the nuclear magnetic moment of ${^{165}}$Ho.
We have reexamined the literature value $\mu_I$($^{209}$Bi)
obtained from nuclear magnetic resonance (NMR) experiments from a theoretical point of view. This has motivated new NMR measurements of bismuth ions in different chemical environments. The results of these experiments are reported and analyzed by applying high-level four-component relativistic coupled cluster theory to advanced chemical shift calculations. We show that our result completely resolves the hyperfine puzzle established in \cite{Ullmann:17}.
The specific difference $\Delta ' E$ has, so far, always been calculated using
the magnetic moment $\mu_I(^{209}{\rm Bi})=4.1106(2)\mu_N$ tabulated in
\cite{Raghavan:89}.
This value was obtained from the experimental value of the magnetic moment $\mu_I(^{209}{\rm Bi})=4.03910(19)\mu_N$, uncorrected for shielding effects, reported in an NMR study \cite{Ting:53} of bismuth nitrate, Bi(NO$_3$)$_3$, which was then combined with the shielding constant for the Bi$^{3+}$ cation calculated in \cite{Johnson:68}.
In \cite{Bastug:96}, a self-consistent relativistic molecular Dirac-Fock-Slater calculation of the shielding constant of the Bi(NO$_3$)$_3$ molecule was performed using the Lamb formula \cite{Lamb:41}.
The final value, $\sigma=17290(60)$\,ppm, with a very small uncertainty, was obtained by combining a relativistic random-phase-approximation calculation for the Bi$^{3+}$ cation (17270\,ppm) with the molecular correction. The authors concluded that the molecular correction is very small and thus supported the value from \cite{Raghavan:89}.
However, the authors of \cite{Bastug:96} did not take into account the chemical processes that occur in an aqueous solution of bismuth nitrate pentahydrate, Bi(NO$_3$)$_3 \cdot$5H$_2$O: the compound dissociates and the Bi$^{3+}$ cation is surrounded by water molecules (hydration). Neither the completeness nor the exact form of hydration as a function of concentration, \textit{p}H or temperature is well understood.
While it was suggested in \cite{Fedorov:98} that in strongly acidic solutions Bi$^{3+}$ exists as the hexaaquabismuth(III) cation [Bi(H$_2$O)$_6$]$^{3+}$, more recent studies \cite{Naslund:00} suggest that the hydrated form is rather [Bi(H$_2$O)$_8$]$^{3+}$. We found that in both cases the electronic structure of the $n$-coordinated complex differs significantly from that of the Bi(NO$_3$)$_3$ molecule considered in \cite{Bastug:96}, as expected. The molecular environment in
Bi(III/V)-containing
complexes strongly contributes to the shielding constant and a considerable chemical shift is introduced.
Consequently, the value of the shielding constant obtained in Ref.\,\cite{Bastug:96} cannot be used for the precise extraction of the $^{209}$Bi magnetic moment from the experimental NMR data.
There is, however, additional NMR data for another Bi containing system: the
hexafluoridobismuthate(V) anion ($^{209}$BiF$_6^-$) \cite{Morgan:83}. It has seven atoms and high spatial symmetry.
According to Morgan \textit{et al.}\ \cite{Morgan:83}, a measurement of BiF$_6^-$ with reference to a saturated solution of bismuth nitrate in concentrated nitric acid gave a chemical shift of $-24$\,ppm. Unfortunately, there is an inconsistency in the reported experimental data of \cite{Morgan:83}, since the measured frequency ratio is given as $\nu(^{209} {\rm BiF}_6^-) / \nu(^1{\rm H})=0.16017649(10)$. Comparing this ratio with the one reported in \cite{Ting:53} indicates a massive chemical shift of about $\delta \approx +3200$\,ppm instead of $-24$\,ppm. We have performed NMR measurements of both samples to clarify these discrepancies.
\section{Experiment}
Since the chemical state of the Bi$^{3+}$ ions is expected to depend on the composition of the aqueous solution, and details on the sample preparation are missing for the original NMR measurements \cite{Ting:53}, we performed a systematic study using various bismuth nitrate solutions.
Samples of ``Bi(NO$_3$)$_3$'' solutions were prepared with concentrations of 2.5\%, 5\% and 10\% Bi$^{3+}$ (wt \%) in concentrated (65 wt \%) and diluted aqueous solutions (50, 30, 20, 10 wt \%) of nitric acid (HNO$_3$).
\begin{figure}[t]
\includegraphics[width=0.98\linewidth]{Fig1New.pdf}
\caption{\label{fig:Spectra} NMR spectra of Bi(NO$_3$)$_3$ solution (10\% Bi (wt \%)) in concentrated nitric acid (gray) and NMe$_4$BiF$_6$ diluted in acetonitrile (blue).}
\end{figure}
BiF$_6^-$ anions were obtained by dissolution of $\mathrm{(CH_3)_4N^+BiF_6^-}$ (NMe$_4$BiF$_6$)
in acetonitrile to a saturated solution
\cite{Note1}.
All NMR measurements were performed in an 8.4-T magnet using the same double resonance probe for $^{209}$Bi NMR and for $^{1}$H NMR calibration with tetramethylsilane. The sample temperature was stabilized to within 1\,K by a constant gas flow tempered with an electric heater. Spectra were obtained from the free induction decay following a 90$^\circ$ pulse of 3.5\,$\upmu$s length for $^{209}$Bi.
Typical spectra of the $^{209}$Bi atoms in BiF$_6^-$ and in the nitrate solution are shown in Fig.\,\ref{fig:Spectra}. The advantage of BiF$_6^-$ is obvious: it exhibits a much narrower linewidth (200\,Hz), and the septet arising from indirect spin coupling to the $^{19}$F atoms directly bonded to the bismuth atom confirms the chemical environment.
The observed ratio of the peak intensities is close to the expected ratio 1\,:\,6\,:\,15\,:\,20\,:\,15\,:\,6\,:\,1 and a spin-spin coupling of 3807(14)\,Hz was determined, in good agreement with \cite{Morgan:83}.
Note that a $^{19}$F spectrum of the sample was taken as well and a decet consistent with the coupling of an $I=9/2$ nucleus to an octahedral environment of six fluorine atoms was observed.
The signal from the nitrate solution is much wider. Even at the highest temperature of 360\,K, the width of the $^{209}$Bi spectra was 4.4\,kHz due to the short spin-lattice and spin-spin relaxation times of $\approx 70$\,$\upmu$s. This width limits the accuracy of the $^{209}$Bi resonance frequency in the nitrate solution to 1\,ppm. The chemical shift of Bi$^{3+}$ in the bismuth nitrate solution with respect to Bi$^{5+}$ in BiF$_6^-$ is $-106$\,ppm, considerably larger in magnitude than the $-24$\,ppm reported in \cite{Morgan:83}. Contrary to \cite{Flynn:59}, we found that varying the bismuth concentration between mass fractions of 2\% and about 40\% (saturation) in nitric acid of 30\% had no appreciable effect on the measured Larmor frequency as long as temperature and nitric acid concentration were kept constant.
Variations of the Bi(NO$_3$)$_3$ sample temperature from 250 to 360\,K were performed with the sample of 10\% Bi in concentrated nitric acid (65\%). We observed a strong linear temperature dependence of the frequency ratio in this range (Fig.\,\ref{fig:TempDependence}) with a slope of $+4.69(13)\times 10^{-7}$\,K$^{-1}$, corresponding to about 3\,ppm/K,
which might be caused by the change of density.
For standard NMR conditions at 298.15\,K a frequency ratio of $\nu_\mathrm{^{209}Bi^{3+}}/\nu_\mathrm{H}=0.160699(1)$ was determined, where the given uncertainty is purely statistical. This value is in excellent agreement with the 0.160696(6) reported in \cite{Ting:53}. The temperature dependence of BiF$_6^-$ is 2 orders of magnitude smaller ($\approx 20$\,ppb/K) and of opposite sign. At 298.15\,K the frequency ratio to the proton is 0.1607167(2), far from the value provided in \cite{Morgan:83}. However, the latter matches our value if two digits are simply transposed [$0.160{\bf 17}65(1) \to 0.160{\bf 71}65(1)$].
\begin{figure}[t]
\includegraphics[width=0.98\linewidth]{Fig2Test3.pdf}
\caption{\label{fig:TempDependence} Temperature and HNO$_3$-concentration dependency of the NMR Larmor-frequency ratios of bismuth and hydrogen. A strong temperature effect is observed for Bi(NO$_3$)$_3$ solutions, here exemplified for a
10\% Bi$^{3+}$ (wt \%) solution in concentrated nitric acid (black), whereas only a minor effect was measured for NMe$_4$BiF$_6$ dissolved in acetonitrile (blue).
Inset: Larmor-frequency ratios
measured by NMR in Bi(NO$_3$)$_3$ solutions with 2.5\% Bi$^{3+}$ (wt \%) in nitric acid (HNO$_3$) of various concentrations at 300\,K.
The $y$ axis is identical to the main graph and the gray band represents the total variation.}
\end{figure}
Finally, we have studied the resonance position of Bi(NO$_3$)$_3$ as a function of the nitric acid concentration
(inset in Fig.\,\ref{fig:TempDependence}).
A clear dependence on the acidity is observed for all Bi$^{3+}$ concentrations, covering a range of typically $\approx 60$\,ppm.
In summary, the results clearly demonstrate that a large uncertainty is connected with the extraction of the magnetic moment of $^{209}$Bi from NMR measurements in aqueous solutions of Bi(NO$_3$)$_3$. The influence of the chemical environment was strongly underestimated in theory, since the calculations performed to extract the chemical shift account neither for the temperature nor for the concentration or acidity of the sample. In this respect, BiF$_6^-$ is a much better candidate for obtaining a reliable value of the magnetic moment, as will now be substantiated from a theoretical point of view as well.
\section{Theory}
In the presence of an external uniform magnetic field \textbf{B} and the nuclear magnetic moments $\bm{\mu}_j$ of the atoms $j$ in a molecule, the corresponding Dirac-Coulomb Hamiltonian includes the following terms:
\begin{equation}
\label{HB}
H_B={\rm \bf{B}}\cdot \frac{c}{2}(\bm{r}_G \times \bm{\alpha}),
\end{equation}
\begin{equation}
\label{HHFS}
H_{\rm hyp}=\frac{1}{c} \sum_j \bm{\mu}_j\cdot \frac{(\bm{r}_j \times \bm{\alpha})}{r_j^3},
\end{equation}
where $\bm{r}_G = \bm{r} - \bm{R_G}$, $\bm{R_G}$ is the gauge origin,
$\bm{r}_j=\bm{r} - \bm{R_j}$, $\bm{R_j}$ is the position of nucleus $j$, and $\bm{\alpha}$ are the Dirac matrices.
The chemical shielding tensor of the nucleus $j$ can be defined as a mixed derivative of the energy with respect to the nuclear magnetic moment and the strength of the magnetic field
\begin{equation}
\label{SHIELDINGDer}
\left.\sigma^j_{a,b}=\frac{\partial^2E}{\partial\mu_{j,a}\partial B_b} \right|_{\bm{\mu}_j=0,{\rm \bf{B}}=0}.
\end{equation}
We are interested in its isotropic part.
In the one-electron case the shielding tensor (\ref{SHIELDINGDer}) can be calculated by the sum-over-states method within the second-order perturbation theory with perturbations (\ref{HB}) and (\ref{HHFS}). In the relativistic four-component approach the summation should include both positive and negative energy spectra \cite{Aucar:99}.
The part associated with the positive-energy states is called the ``paramagnetic'' term, while the part associated with the negative-energy states is called the ``diamagnetic'' term, though only their sum is gauge invariant \cite{Aucar:99}.
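For illustration only, the mixed derivative in Eq.~(\ref{SHIELDINGDer}) can be approximated by central finite differences once a total-energy function is available. The following Python sketch is purely schematic: the \texttt{energy} callable is a hypothetical placeholder standing in for an actual electronic-structure calculation, not part of any quantum-chemistry package.
\begin{verbatim}
import numpy as np

def shielding_element(energy, a, b, h_mu=1e-6, h_B=1e-6):
    """Central-difference estimate of sigma_ab = d2E/(dmu_a dB_b)
    at mu = 0, B = 0, cf. Eq. (3)."""
    def E(m, f):
        mu = np.zeros(3); B = np.zeros(3)
        mu[a], B[b] = m, f
        return energy(mu, B)
    return (E(h_mu, h_B) - E(h_mu, -h_B)
            - E(-h_mu, h_B) + E(-h_mu, -h_B)) / (4.0 * h_mu * h_B)

def isotropic_shielding(energy):
    """Isotropic part: one third of the trace of the shielding tensor."""
    return sum(shielding_element(energy, a, a) for a in range(3)) / 3.0
\end{verbatim}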
To avoid an ambiguity in calculations utilizing finite basis sets due to the choice of the gauge origin $\bm{R_G}$ one can use the so-called London atomic orbitals (LAOs) method (see e.g.\ \cite{Olejniczak:12,Ilias:13} for details).
In Refs.\,\cite{DIRAC15,Olejniczak:12,Ilias:13} the four-component density functional theory (DFT) using response technique and LAOs has been developed to calculate the shielding constant (\ref{SHIELDINGDer}).
To construct the atomic basis sets for the unperturbed Dirac-Coulomb Hamiltonian calculations one often uses the restricted kinetic balance (RKB) method. However, in the presence of external magnetic fields the usual relation between the large and small components changes. In Ref.\,\cite{Olejniczak:12} the scheme of magnetic balance (MB) in conjunction with LAOs was proposed to take the modified coupling into account; it is utilised below.
Most chemical shift calculations for heavy-atom compounds are performed within (relativistic) DFT. The drawback of this theory is that it is hard to control the uncertainty of the results, as there is no systematic way of improving them. Combinations with high-level nonrelativistic \textit{ab initio} wave-function-based calculations are also questionable in the case of heavy-atom compounds. In Refs.\,\cite{Skripnikov:16b,Skripnikov:15a,Petrov:17b,Skripnikov:17c} it was shown that for such properties as the hyperfine structure constant and the molecular $g$ factor, the relativistic coupled cluster method gives the most accurate results if there are no multireference effects. Therefore, this method has been adopted here to control the uncertainty of the DFT results.
\section{Electronic structure calculation details}
In the present study we have used atomic basis sets of different qualities.
The NZ (where N~$=$~Double, Triple, Quadruple) basis set corresponds to the uncontracted core-valence N-zeta \cite{Dyall:07,Dyall:12} Dyall's basis set for Bi and augmented correlation consistent polarized valence N-zeta, aug-cc-pVNZ \cite{Dunning:89,Kendall:92} basis set for light atoms.
In the DZC basis set the contracted version of the aug-cc-pVDZ \cite{Dunning:89,Kendall:92} basis set was used for light atoms.
Based on the nonrelativistic estimates, the hybrid density functional PBE0 \cite{pbe0} has been chosen because it reproduces the nonrelativistic coupled cluster value rather well.
Geometry parameters of the BiF$_6^-$ anion have been obtained in the scalar-relativistic DFT calculation using the generalized relativistic pseudopotential method \cite{Mosyagin:16}.
The contribution of the Gaunt interaction to the shielding constant was estimated as the difference between the values calculated at the Dirac-Hartree-Fock-Gaunt and Dirac-Hartree-Fock level of theory within the uncoupled scheme.
Nonrelativistic and scalar-relativistic calculations were performed within the {\sc us-gamess} \cite{USGAMESS1} and {\sc cfour} \cite{CFOUR} codes. Relativistic four-component calculations were performed within the {\sc dirac15} \cite{DIRAC15} and {\sc mrcc} \cite{MRCC2013} codes. For calculation of the hyperfine-interaction matrix elements and $g$ factors the code developed in Refs.\,\cite{Skripnikov:16b,Skripnikov:15b,Skripnikov:15a} was used.
\section{Results and discussion}
Table \ref{BiF6} contains the results of the calculations for the BiF$_6^-$ anion.
\begin{table}[h]
\centering
\caption{The values of $^{209}$Bi shielding constants in BiF$_6^-$ in ppm.}
\label{BiF6}
\begin{tabular}{lccc}
\hline
\hline
Basis set/method & Diamagnetic & Paramagnetic & Total \\
\hline
\hline
DZ-MB-LAO/DHF & 8\,618 & 5\,768 & 14\,386 \\
DZ-MB-LAO/DFT & 8\,621 & 3\,726 & 12\,347 \\
TZ-MB-LAO/DFT & 8\,639 & 3\,733 & 12\,372 \\
\hline
DZC-RKB/DFT & & 3\,848 & \\
DZC-RKB/CCSD & & 4\,403 & \\
DZC-RKB/CCSD(T) & & 4\,286 & \\
\hline
QZ-MB-LAO/DFT & 8\,628 & 3\,763 & 12\,391 \\
Correlation correction & & 437 & \\
Gaunt correction & & $-37$ & \\
\hline
Final & & & 12\,792 \\
\hline
\hline
\end{tabular}
\end{table}
Comparing Dirac-Hartree-Fock (DHF) and DFT results in Table \ref{BiF6} it can be seen that the diamagnetic contribution to $\sigma(^{209}{\rm Bi})$ depends only weakly on the correlation effects, while the paramagnetic contribution is strongly affected.
To check the accuracy of the latter DFT result we have performed a series of relativistic coupled cluster calculations of $\sigma(^{209}{\rm Bi})$ taking into account only the positive energy spectrum.
Comparing values obtained within the coupled cluster method with single, double and noniterative triple cluster amplitudes (CCSD(T)) with those of CCSD shows that the triple amplitudes contribute only slightly to $\sigma(^{209}{\rm Bi})$, demonstrating good convergence of the results with respect to the electron correlation treatment
\cite{Note2}.
In the final value of $\sigma(^{209}{\rm Bi})$ we include the correlation correction calculated as the difference between the CCSD(T) and PBE0 results.
To investigate the importance of a systematic treatment of the molecular environment, we have also performed an additional DHF study of one of the possible hydrated forms of Bi$^{3+}$ in an acidic solution of Bi(NO$_3$)$_3$ -- the [Bi(H$_2$O)$_8$]$^{3+}$ cation -- in comparison with the bare Bi$^{3+}$ cation. It was found that the shielding constant of $^{209}$Bi$^{3+}$ is significantly larger (by about 20\% at the DHF level) than that in [$^{209}$Bi(H$_2$O)$_8$]$^{3+}$.
Therefore, the interpretation of the \textit{molecular} NMR experiment in terms of the nuclear magnetic moment using a shielding constant obtained for the corresponding ion (as was done in earlier studies) is associated with considerable uncertainties.
We now use the value obtained for $\nu_\mathrm{^{209}BiF_6^-}/\nu_\mathrm{H}= 0.1607167(2)$ from our NMR measurements and the shielding constant of $\sigma(^{209}{\rm BiF}_6^-)=12\,792$\,ppm calculated above to obtain $\mu_I(\mathrm{^{209}Bi}) = 4.092(2)\,\mu_\mathrm{N}$ with an uncertainty dominated by theory.
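As a hedged back-of-envelope illustration of this extraction (not the full error propagation), the moment follows from equating the ratio of Larmor frequencies to the ratio of shielded moments per unit spin. The nuclear spins, the proton magnetic moment, and the absolute proton shielding in tetramethylsilane ($\approx 33$\,ppm) are assumed auxiliary inputs here, not values from this work.
\begin{verbatim}
ratio    = 0.1607167    # nu(209BiF6-)/nu(1H), this work
sigma_Bi = 12792e-6     # calculated 209Bi shielding in BiF6-
I_Bi, I_H = 9 / 2, 1 / 2  # nuclear spins (assumed textbook values)
mu_p     = 2.79284735   # proton moment in mu_N (assumed CODATA value)
sigma_H  = 33e-6        # assumed absolute 1H shielding in TMS

# nu = mu (1 - sigma) B / (I h) for each nucleus in the same field B,
# hence:
mu_Bi = ratio * (I_Bi / I_H) * mu_p * (1 - sigma_H) / (1 - sigma_Bi)
print(f"mu_I(209Bi) = {mu_Bi:.4f} mu_N")  # -> 4.0919, i.e. 4.092 mu_N
\end{verbatim}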
Table \ref{NewOld} compares the experimental values \cite{Ullmann:17} of the HFS splittings with the theoretical values calculated with the old [$\mu_I$(old)$=$4.1106(2)$\mu_N$] and the new [$\mu_I$(new)$=$4.092(2)$\mu_N$] values of the nuclear magnetic moment \cite{Volotka:12}. The theoretical results include the most elaborated calculation of the Bohr-Weisskopf effect \cite{Senkov:02}.
\begin{table}[]
\centering
\caption{Theoretical values of $\Delta E^{(1s)}$ and $\Delta E^{(2s)}$ (in meV) calculated with old and new nuclear magnetic moment of $^{209}$Bi in comparison with the experimental values \cite{Ullmann:17}.
For the Bohr-Weisskopf effect the most elaborated calculation by Sen'kov and Dmitriev \cite{Senkov:02} was employed.
}
\label{NewOld}
\begin{tabular}{llll}
\hline
\hline
& \multicolumn{2}{l}{Theory} & Experiment \\
& $\mu_I$(old) & $\mu_I$(new) & \\
\hline
$\Delta E^{(1s)}$ & 5112(-5/+20) & 5089(-5/+20)(2) & 5085.03(2)(9) \\
$\Delta E^{(2s)}$ & 801.9(-9/+34) & 798.3(-9/+34)(4) & 797.645(4)(14) \\
\hline
\hline
\end{tabular}
\end{table}
\begin{figure}[!h]
\includegraphics[width=0.98\linewidth]{deltaPrimeE_NMR_2.pdf}
\caption{\label{fig:CompExpTheory} Specific difference obtained in theory \cite{Shabaev:01a,Volotka:12} (red) and experiment \cite{Lochmann:14,Ullmann:17} (blue). The new nuclear magnetic moment established in this work yields a new value for the specific difference which matches the most recent experimental value within uncertainty.
}
\end{figure}
The new magnetic moment has been used to recalculate the specific difference and we obtain
$\Delta 'E_{\mathrm{theo}} = -61.043(5)(30)$\,meV, where the first uncertainty is due to uncalculated terms and remaining nuclear effects, while the second one is due to the uncertainty of the nuclear magnetic moment obtained in the present work. The revised value of $\Delta 'E_{\mathrm{theo}}$
is plotted in Fig.\,\ref{fig:CompExpTheory} combined with the previous theoretical and experimental data. Theory and experiment are now in excellent agreement and the $7\sigma$ discrepancy reported in \cite{Ullmann:17} disappears.
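A quick consistency check with the rounded inputs from Eq.~(1) and Table~\ref{NewOld} reproduces this value (the small deviation from $-61.043$\,meV reflects the rounding of the tabulated splittings):
\begin{verbatim}
xi = 0.16886                  # cancellation parameter from Eq. (1)
dE_1s, dE_2s = 5089.0, 798.3  # theory with mu_I(new), in meV (Table II)
print(dE_2s - xi * dE_1s)     # -> about -61.03 meV,
                              #    consistent with -61.043(5)(30)
\end{verbatim}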
Unfortunately, the uncertainty of $\Delta 'E_{\mathrm{theo}}$ is now 14\% of the total QED contribution and about 1.5 times larger than the experimental uncertainty.
Hence, an improved value for the nuclear magnetic moment of $^{209}$Bi is urgently required, either from an atomic beam magnetic resonance experiment or from a measurement on trapped H-like ions. The latter has the advantage that no shielding corrections have to be applied. Such an experiment is planned at the ARTEMIS trap \cite{Quint:2008} at the GSI Helmholtz Centre in Darmstadt. Only such a measurement, combined with an improved determination of the HFS splitting in $^{209}$Bi$^{80+,82+}$ as foreseen at SPECTRAP \cite{Andelkovic:2013}, can provide a QED test in the magnetic regime of strong-field QED. Our result also demonstrates that a measurement of the specific difference can be used to extract the nuclear magnetic moment.
Doing so results in $\mu_I(^{209}\mathrm{Bi})=4.0900(15)\,\mu_{\mathrm{N}}$ in excellent agreement with the NMR value obtained here.
\begin{acknowledgments}
\section*{Acknowledgments}
We thank Petra Th\"orle from the Institute of Nuclear Chemistry at the University of Mainz for the preparation of the NMR samples and Dmitry Korolev from Saint Petersburg State University for valuable discussions.
The development of the code for the computation of the matrix elements of the considered operators as well as the performance of all-electron coupled cluster calculations were funded by RFBR, according to Research Project No.~16-32-60013 mol\_a\_dk; the DFT calculations were supported by the President of Russian Federation Grant No.~MK-2230.2018.2. This work was also supported by SPSU (Grants No.~11.38.237.2015 and No.~11.40.538.2017) and by SPSU-DFG (Grants No.~11.65.41.2017 and No.~STO 346/5-1). The experimental part was supported by the Federal Ministry of Education and Research of Germany under Contract No.~05P15RDFAA and the Helmholtz
International Center for FAIR (HIC for FAIR).
\end{acknowledgments}
\section{Introduction}
Large-scale distributed systems often resort to replication techniques to
achieve fault-tolerance and load distribution. These systems have to make a
choice between availability and low latency or strong consistency
\cite{abadi-cap,brewer-cap,gilbert-cap,golab-pacelc}, often opting for the former \cite{consistency1,consistency2}. A common approach is to allow
replicas of some data type to temporarily diverge, making sure these replicas
will eventually converge to the same state in a deterministic way.
\emph{Conflict-free Replicated Data Types} (CRDTs) \cite{crdts1,crdts2} can be
used to achieve this. They are key components in modern geo-replicated systems,
such as Riak~\cite{riak}, Redis~\cite{redis}, and Microsoft Azure Cosmos
DB~\cite{cosmosdb}.
CRDTs come mainly in two flavors: \emph{operation-based} and
\emph{state-based}. In both, queries and updates can be executed immediately
at each replica, which ensures availability (as it never needs to coordinate
beforehand with remote replicas to execute operations). In operation-based
CRDTs \cite{pure-op,crdts1}, operations are disseminated assuming a reliable
dissemination layer that ensures exactly-once causal delivery of operations.
State-based CRDTs need fewer guarantees from the communication channel:
messages can be dropped, duplicated, and reordered. When an update operation
occurs, the local state is updated through a mutator, and from time to time
(since we can disseminate the state at a lower rate than the rate of the
updates) the full (local) state is propagated to other replicas.
Although state-based CRDTs can be disseminated over unreliable communication
channels, as the state grows, sending the full state
becomes unacceptably costly. Delta-based CRDTs \cite{deltas1,deltas2} address
this issue by defining delta-mutators that return a delta ($\delta$),
typically much smaller than the full state of the replica, to be merged with
the local state. The same $\delta$ is also added to an outbound
$\delta$-buffer, to be periodically propagated to remote replicas.
Delta-based CRDTs have been adopted in industry as part of Akka Distributed
Data
framework~\cite{akka-data} and IPFS~\cite{ipfs, ipfs-deltas}.
However, and somewhat unexpectedly, we have observed (Figure \ref{fig:problem})
that current
delta-propagation algorithms can still disseminate much
redundant state between replicas,
performing worse than envisioned, and no better than the state-based approach.
This anomaly becomes noticeable when concurrent update
operations always occur between synchronization rounds,
and is partially explained by inefficient redundancy detection in delta-propagation.
\begin{figure}[t]
\begin{center}
\begin{minipage}{0.33\textwidth}
\includegraphics[width=\textwidth,keepaspectratio]{first}
\end{minipage}
\begin{minipage}{0.105\textwidth}
\includegraphics[width=\textwidth,keepaspectratio]{second}
\end{minipage}
\end{center}
\caption{Experiment setup: 15 nodes in a partial mesh topology replicating
an always-growing set. The left plot depicts the
number of elements being sent throughout the experiment,
while the right plot shows the CPU processing time ratio
with respect to state-based. Not only does delta-based synchronization fail to improve on state-based synchronization in terms of state transmission,
it even incurs a substantial processing overhead.}
\label{fig:problem}
\end{figure}
In this paper we identify two sources of redundancy in current algorithms,
and introduce the concept of join decomposition of a state-based CRDT,
showing how it can be used to derive optimal deltas (``differences'') between states, as
well as optimal delta-mutators.
By exploiting these concepts, we also introduce an
improved synchronization algorithm, and experimentally
evaluate it, confirming that it outperforms current approaches
by reducing the amount of state transmission,
memory consumption, and processing time
required for delta-based synchronization.
\section{Background on State-based CRDTs}
A state-based CRDT can be defined as a triple
$(\mathcal{L}, \sqleq, \join)$ where $\mathcal{L}$ is a
join-semilattice (lattice for short, from now on),
$\sqleq$ is a partial order, and
$\join$ is a binary join operator that derives
the least upper bound for any two elements of $\mathcal{L}$.
State-based CRDTs are updated through a set of
\emph{mutators} designed to be inflations,
i.e. for mutator $\m$ and state $x \in \mathcal{L}$, we have $x \sqleq \m(x)$.
Synchronization of replicas is achieved by having
each replica periodically propagate
its local state to its neighbour replicas. When a remote state is received, a replica updates its state to reflect
the join of its local state and the received state.
As the local state grows, more state needs to be sent, which might affect the
usage of system resources (such as network) with a negative
impact on the overall system performance.
Ideally, each replica should only propagate the most
recent modifications executed over its local state.
Delta-based CRDTs can be used to achieve this,
by defining \emph{delta-mutators} that
return a smaller state which, when merged with the current state, generates the
same result as applying the standard mutators, i.e.
each mutator $\m$ has in
delta-CRDTs
a corresponding $\delta$-mutator
$\m^\delta$ such that:
\[
\m(x) = x \join \m^\delta(x)
\]
In this model, the deltas resulting from $\delta$-mutators
are added to a $\delta$-buffer,
in order to be propagated to neighbor replicas, as a $\delta$-group, at the next synchronization step.
When a $\delta$-group is received from a neighbor, it is also added
to the buffer for further propagation.
\subsection{CRDT examples}
\label{subsec:examples}
In Figure \ref{fig:crdt-spec} we present the specification
of two simple state-based CRDTs,
defining their lattice states, mutators, corresponding $\delta$-mutators,
and the binary join operator $\join$.
These lattices
are typically bounded and thus a bottom value
$\bot$ is also defined.
(Note that the specifications
do not define the partial order $\sqleq$ since it can
always be defined,
for any lattice $\mathcal{L}$, in terms of $\join$:
$x \sqleq y \iff x \join y = y$.)
\begin{figure}[!ht]
\begin{center}
\begin{subfigure}{0.48\textwidth}
\begin{align*}
\af{GCounter} & = \mathds{I} \mathrel{\hookrightarrow} \mathds{N} \\
\bot & = \varnothing \\
\af{inc}_i(p) & = p\{i \mapsto p(i) + 1\} \\
\daf{inc}_i(p) & = \{i \mapsto p(i) + 1\} \\
\af{value}(p) & = \sum \{ v | k \mapsto v \in p \} \\
p \join p' & = \{k \mapsto \max(p(k), p'(k)) | k \in l \} \\
& \textbf{where } l = \dom(p) \union \dom(p')
\end{align*}
\caption{Grow-only Counter.}
\label{fig:gcounter-spec}
\end{subfigure}
\end{center}
\hfill
\begin{center}
\begin{subfigure}{0.48\textwidth}
\begin{align*}
\af{GSet}\tup{E} & = \pow{E} \\
\bot & = \varnothing \\
\af{add}(e, s) & = s \union \{e\} \\
\daf{add}(e, s) & =
\begin{cases}
\{ e \} & \textbf{if}\enskip e \not \in s \\
\bot & \textbf{otherwise}
\end{cases} \\
\af{value}(s) & = s \\
s \join s' & = s \union s'
\end{align*}
\caption{Grow-only Set.}
\label{fig:gset-spec}
\end{subfigure}
\end{center}
\caption{Specifications of two data types, replica $i \in \mathds{I}$.}
\label{fig:crdt-spec}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\begin{subfigure}{0.48\textwidth}
\centerxy{
<0.5cm,0pt>:
(0,6)*+{\countab{2}{2}}="atwobtwo";
(-2,4.5)*+{\countab{2}{1}}="atwobone";
(2,4.5)*+{\countab{1}{2}}="aonebtwo";
(-4,3)*+{\counta{2}}="atwo";
(0,3)*+{\countab{1}{1}}="aonebone";
(4,3)*+{\countb{2}}="btwo";
(-2,1.5)*+{\counta{1}}="aone";
(2,1.5)*+{\countb{1}}="bone";
(0,0)*+{\bot}="empty";
"empty"; "aone" **\dir{-};
"empty"; "bone" **\dir{-};
"aone"; "atwo" **\dir{-};
"bone"; "btwo" **\dir{-};
"aone"; "aonebone" **\dir{-};
"bone"; "aonebone" **\dir{-};
"atwo"; "atwobone" **\dir{-};
"aonebone"; "atwobone" **\dir{-};
"btwo"; "aonebtwo" **\dir{-};
"aonebone"; "aonebtwo" **\dir{-};
"atwobone"; "atwobtwo" **\dir{-};
"aonebtwo"; "atwobtwo" **\dir{-};
}
\caption{$\af{GCounter}$, with two replicas $\mathds{I} = \{\A, \B\}$.}
\label{fig:gcounter-hasse}
\end{subfigure}
\end{center}
\hfill
\begin{center}
\begin{subfigure}{0.48\textwidth}
\centerxy{
<0.5cm,0pt>:
(0,6)*+{\{a, b, c\}}="abc";
(-3,4)*+{\{a, b\}}="ab";
(0,4)*+{\{a, c\}}="ac";
(3,4)*+{\{b, c\}}="bc";
(-3,2)*+{\{a\}}="a";
(0,2)*+{\{b\}}="b";
(3,2)*+{\{c\}}="c";
(0,0)*+{\bot}="empty";
"empty"; "a" **\dir{-};
"empty"; "b" **\dir{-};
"empty"; "c" **\dir{-};
"a"; "ab" **\dir{-};
"a"; "ac" **\dir{-};
"b"; "ab" **\dir{-};
"b"; "bc" **\dir{-};
"c"; "ac" **\dir{-};
"c"; "bc" **\dir{-};
"ab"; "abc" **\dir{-};
"ac"; "abc" **\dir{-};
"bc"; "abc" **\dir{-};
}
\caption{$\af{GSet}\tup{\{a, b, c\}}$.}
\label{fig:gset-hasse}
\end{subfigure}
\end{center}
\caption{Hasse diagram of two data types.}
\label{fig:hasse}
\end{figure}
\newcommand{\{a, \underline b\}}{\{a, \underline b\}}
\newcommand{\{\underline a, \underline b, c\}}{\{\underline a, \underline b, c\}}
\begin{figure*}[t]
\centerxy{
\xymatrix@C=1.5em @R2em{
\A &
\varnothing \ar@{->}[rr]^(0.45){\add_a} & & \{a\}
\ar@{.}[rr] & & \{a, b\}
\ar@{.}[r] & \bullet^2
\ar@{->}[rd]^(.6){\{a, \underline b\}}
\ar@{.}[rrr] & & & \{a, b, c\}
\\
\B &
\varnothing \ar@{->}[rr]^(0.45){\add_b} & & \{b\}
\ar@{.}[r] & \bullet^1
\ar@{->}[ru]_(.65){\{b\}}
\ar@{.}[r] & \ar@{->}[r]^(0.45){\add_c} & \{b, c\}
\ar@{.}[r] & \{a, b, c\}
\ar@{.}[r] & \bullet^3
\ar@{->}[ru]_(.65){\{\underline a, \underline b, c\}}
}
}
\caption{Delta-based synchronization of a $\af{GSet}$ with 2 replicas
$\A, \B \in \mathds{I}$. Underlined elements represent the $\BP$ optimization.}
\label{fig:delta-ex1}
\end{figure*}
\newcommand{\{a, \overline b\}}{\{a, \overline b\}}
\begin{figure*}[t]
\centerxy{
\xymatrix@C=1.5em @R2em{
\A &
\varnothing \ar@{->}[rr]^(0.45){\add_a} & & \{a\}
\ar@{.}[rr] & & \{a, b\}
\ar@{.}[r] & \bullet^6
\ar@{->}[rrdd]^(.78){\{a, b\}}
\\
\B &
\varnothing \ar@{->}[rr]^(0.45){\add_b} & & \{b\}
\ar@{.}[r] & \bullet^4
\ar@{->}[ru]_(.65){\{b\}} \ar@{->}[rd]^(.6){\{b\}}
\\
\C &
\varnothing
\ar@{.}[rrrr] & & & & \{b\}
\ar@{.}[r] & \bullet^5
\ar@{->}[rd]_(.6){\{b\}}
\ar@{.}[rr] & & \{a, b\}
\ar@{.}[r] & \bullet^7
\ar@{->}[rd]_(.6){\{a, \overline b\}}
\\
\D &
\varnothing
\ar@{.}[rrrrrr] & & & & & & \{b\}
\ar@{.}[rrr] & & & \{a, b\}
}
}
\caption{Delta-based synchronization of a $\af{GSet}$ with 4 replicas
$\A, \B, \C, \D \in \mathds{I}$. The overlined element represents the $\RR$ optimization.}
\label{fig:delta-ex2}
\end{figure*}
A CRDT counter that only allows increments is known as
a \emph{grow-only counter} (Figure \ref{fig:gcounter-spec}).
In this data type, the set of replica identifiers $\mathds{I}$
is mapped to the set of natural numbers $\mathds{N}$.
Increments are tracked per replica $i$, individually,
and stored in a map entry $p(i)$. The value of
the counter is the sum of each entry's value in the map.
Mutator $\af{inc}$ returns the updated map (the notation $p\{k \mapsto v\}$ indicates that only entry $k$ in the map $p$ is updated to a new value $v$, the remaining entries left unchanged), while
the $\delta$-mutator $\daf{inc}$ only returns the
updated entry. The join of two $\af{GCounter}$s computes,
for each key, the maximum of the associated values.
The lattice state evolution (either by mutation or
join of two states)
can also be understood by looking at
the corresponding Hasse diagram (Figure \ref{fig:hasse}).
For example,
state $\countab{1}{1}$ in Figure \ref{fig:gcounter-hasse}
(where $\A_1$ represents entry $\{\A \mapsto 1 \}$ in the map,
i.e. one increment registered by replica $\A$),
can result from an increment on $\counta{1}$ by $\B$,
from an increment on
$\countb{1}$
by $\A$, or from the join of these two states.
A \emph{grow-only set} (Figures \ref{fig:gset-spec} and \ref{fig:gset-hasse})
is a set data type that only allows element additions.
Mutator $\add$ returns the updated set, while $\daf{add}$
returns a singleton set with the added element
(in case it was not in the set already).
The join of two $\af{GSet}$s simply computes the set union.
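To make the specifications in Figure~\ref{fig:crdt-spec} concrete, the following is a minimal executable sketch of both data types in Python (our illustrative rendering, not a library API); \texttt{leq} realizes the partial order derived from the join as described above.
\begin{verbatim}
class GCounter:
    """Grow-only counter (Fig. 2a): replica id -> increment count."""
    def __init__(self, entries=None):
        self.entries = dict(entries or {})   # bottom is the empty map
    def inc(self, i):                        # mutator inc_i
        return GCounter({**self.entries, i: self.entries.get(i, 0) + 1})
    def inc_delta(self, i):                  # delta-mutator: the entry only
        return GCounter({i: self.entries.get(i, 0) + 1})
    def value(self):
        return sum(self.entries.values())
    def join(self, other):                   # pointwise maximum
        keys = self.entries.keys() | other.entries.keys()
        return GCounter({k: max(self.entries.get(k, 0),
                                other.entries.get(k, 0)) for k in keys})
    def leq(self, other):                    # x <= y  iff  x join y = y
        return self.join(other).entries == other.entries

class GSet:
    """Grow-only set (Fig. 2b)."""
    def __init__(self, elems=()):
        self.elems = frozenset(elems)        # bottom is the empty set
    def add(self, e):                        # mutator
        return GSet(self.elems | {e})
    def add_delta(self, e):                  # optimal delta-mutator
        return GSet({e} if e not in self.elems else ())
    def value(self):
        return self.elems
    def join(self, other):                   # set union
        return GSet(self.elems | other.elems)
    def leq(self, other):
        return self.elems <= other.elems
\end{verbatim}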
Although we have chosen as running examples very simple CRDTs,
the results in this paper can be extended to more complex ones,
as we show in
Appendix \ref{app:compositions}.
For further coverage of delta-based CRDTs see \cite{deltas2}.
\subsection{Synchronization Cost Problem}
Figures \ref{fig:delta-ex1} and \ref{fig:delta-ex2} illustrate possible
distributed executions of the classic delta-based synchronization algorithm
\cite{deltas2}, with replicas of a \emph{grow-only-set}, all starting with a
bottom value $\bot = \varnothing$. (This classic algorithm is captured in
Algorithm \ref{alg:both}, covered in Section \ref{sec:revisited}.)
Synchronization with neighbors is represented by $\bullet$ and synchronization
arrows are labeled with the state sent, where we overline or underline
elements that are being redundantly sent and can be removed (thus improving
network bandwidth consumption) by employing two simple and novel optimizations
that we introduce next.
In Figure \ref{fig:delta-ex1}, we have two replicas $\A, \B \in \mathds{I}$
and each adds an element to the replicated set.
At $\bullet^1$, $\B$ propagates the content of the $\delta$-buffer,
i.e. $\{b\}$, to neighbour $\A$.
At $\bullet^2$, $\A$ sends $\{a, b\}$ to $\B$, i.e.\ the join of $\{a\}$ from a local mutation and the received $\{b\}$,
even though $\{b\}$ came from $\B$ itself.
By simply tracking the origin of each $\delta$-group in
the $\delta$-buffer, replicas can
\textbf{avoid back-propagation of $\delta$-groups} ($\BP$).
Before receiving $\{a, b\}$, $\B$ adds a new element $c$ to the set,
also adding $\{c\}$ to the $\delta$-buffer.
Upon receiving $\{a, b\}$, and since what was received produces changes in the local
state, $\B$ adds it to the $\delta$-buffer.
At $\bullet^3$, $\B$ propagates all new changes since the last synchronization
with $\A$: $\{c\}$ from a local mutation, and $\{a, b\}$ received from $\A$,
even though $\{a, b\}$ came from replica $\A$ itself.
When $\A$ receives $\{a, b, c\}$, it will also add it to the buffer to be further propagated.
Note that as long as this pattern keeps repeating (i.e. there is always a state change between
synchronizations), delta-based synchronization will propagate the same amount of state
as state-based synchronization would, representing no improvement.
This is illustrated in Figure \ref{fig:delta-ex1},
and demonstrated empirically in Section \ref{sec:eval}.
In Figure \ref{fig:delta-ex2}, we have four replicas $\A, \B, \C, \D \in \mathds{I}$,
and replicas $\A, \B$ add an element to the set.
At $\bullet^4$, $\B$ propagates the content of the $\delta$-buffer
to neighbours $\A$ and $\C$.
At $\bullet^5$, $\C$ propagates the received $\{b\}$ to $\D$.
At $\bullet^6$, $\A$ sends
the join of $\{a\}$ from a local mutation and the received $\{b\}$
to $\C$.
Upon receiving the $\delta$-group $\{a, b\}$,
$\C$ adds it to the $\delta$-buffer
and sends it to $\D$ at $\bullet^7$.
However, part of this $\delta$-group (namely $\{b\}$) was already in the $\delta$-buffer,
and thus has already been propagated.
This observation hints at another optimization:
\textbf{remove redundant state in received $\delta$-groups} ($\RR$),
before adding them to the $\delta$-buffer.
Both $\BP$ and $\RR$ optimizations are detailed in Section \ref{sec:revisited},
where we incorporate them into the delta-based synchronization algorithm
with few changes.
\section{Join Decompositions and Optimal Deltas}
\label{sec:efficient}
In this section we introduce state decomposition in state-based CRDTs, by
exploiting the mathematical concept of \emph{irredundant join decompositions}
in lattices. We then demonstrate how this concept can be used
to derive deltas and delta-mutators that are optimal,
in the sense that they produce the
smallest delta-state possible.
In Section \ref{sec:revisited} we show how this
same concept plays a key role in the $\af{RR}$ optimization
briefly described in the previous section.
\subsection{Join Decomposition of a State-based CRDT}
\label{subsec:jd}
\begin{definition}[Join-irreducible state]
State $x \in \mathcal{L}$ is join-irreducible if it cannot
result from the join of any finite set of states $F \subseteq \mathcal{L}$ not
containing $x$:
\[
x = \bigjoin F \implies x \in F
\]
\end{definition}
\begin{example}
Let the following $p_1$, $p_2$ and $p_3$ be $\af{GCounter}$ states,
and $s_1$, $s_2$ and $s_3$ be $\af{GSet}$ states.
\begin{center}
\begin{minipage}{.25\textwidth}
\begin{itemize}
\item[\ding{51}] $p_1 = \counta{5}$
\item[\ding{51}] $p_2 = \countb{6}$
\item[\ding{55}] $p_3 = \countab{5}{7}$
\end{itemize}
\end{minipage}
\begin{minipage}{.23\textwidth}
\begin{itemize}
\item[\ding{55}] $s_1 = \bot$
\item[\ding{51}] $s_2 = \{a\}$
\item[\ding{55}] $s_3 = \{a, b\}$
\end{itemize}
\end{minipage}
\end{center}
States $p_3$ and $s_3$ are not join-irreducible states,
since they can be decomposed into (i.e. result from the join of) two states
different from themselves: $\counta{5}$ and $\countb{7}$ for $p_3$,
$\{a\}$ and $\{b\}$ for $s_3$.
Bottom (e.g., $s_1$) is never join-irreducible, as it is the join over an empty
set $\bigjoin \varnothing$.
\end{example}
In a Hasse diagram of a finite lattice (e.g., in Figure \ref{fig:hasse})
the join-irreducibles are those elements with exactly one link below.
Given lattice $\mathcal{L}$, we use $\mathcal{J}(\mathcal{L})$ for the set of all
join-irreducible elements of $\mathcal{L}$.
\begin{definition}[Join Decomposition]
Given a lattice state $x \in \mathcal{L}$, a set of join-irreducibles $D$
is a join decomposition \cite{birkhoff1937} of $x$
if its join produces $x$:
\[
D \subseteq \mathcal{J}(\mathcal{L}) \mathrel{\wedge} \bigjoin D = x
\]
\end{definition}
\begin{definition}[Irredundant Join Decomposition]
A join decomposition D is irredundant if no element in it is redundant:
\[
D' \subset D \implies \bigjoin D' \ple \bigjoin D
\]
\end{definition}
\begin{example}
\label{ex:jd}
Let $p = \countab{5}{7}$ be a $\af{GCounter}$ state,
$s = \{a, b, c\}$ a $\af{GSet}$ state,
and consider the following sets of states as tentative decompositions of $p$
and $s$.
\begin{center}
\begin{minipage}{.25\textwidth}
\begin{itemize}
\item[\ding{55}] $P_1 = \{\counta{5}, \countb{6}\}$
\item[\ding{55}] $P_2 = \{\counta{5}, \countb{6}, \countb{7}\}$
\item[\ding{55}] $P_3 = \{\countab{5}{6}, \countb{7}\}$
\item[\ding{51}] $P_4 = \{\counta{5}, \countb{7}\}$
\end{itemize}
\end{minipage}
\begin{minipage}{.23\textwidth}
\begin{itemize}
\item[\ding{55}] $S_1 = \{\{b\}, \{c\}\}$
\item[\ding{55}] $S_2 = \{\{a, b\}, \{b\}, \{c\}\}$
\item[\ding{55}] $S_3 = \{\{a, b\}, \{c\}\}$
\item[\ding{51}] $S_4 = \{\{a\}, \{b\}, \{c\}\}$
\end{itemize}
\end{minipage}
\end{center}
Only $P_4$ and $S_4$ are irredundant join decompositions of $p$ and $s$.
$P_1$ and $S_1$ are not decompositions since their join does not result in $p$
and $s$, respectively;
$P_2$ and $S_2$ are decompositions but contain redundant elements,
$\countb{6}$ and $\{b\}$, respectively;
$P_3$ and $S_3$ do not have redundancy, but contain reducible elements
($S_2$ fails to be an irredundant join decomposition for the same reason,
since its element $\{a, b\}$ is also reducible).
\end{example}
As we show in Appendix \ref{app:unique-jds} and \ref{app:compositions},
these irredundant decompositions exist, are unique,
and can be obtained for CRDTs used in practice.
Let $\dec{x}$ denote the unique decomposition of element $x$.
By Birkhoff's Representation Theorem~\cite{latticesAndOrder},
decomposition $\dec{x}$ is given by the maximals of the
join-irreducibles below $x$:
\[
\dec x = \max \{ r \in \mathcal J(\mathcal{L}) | r \pleq x \}
\]
As two examples,
given a $\af{GCounter}$ state $p$ and a $\af{GSet}$ state $s$,
their (quite trivial) irredundant decomposition is given by:
\[
\hfill
\dec{p} = \{ \{k \mapsto v\} | k \mapsto v \in p \}
\qquad
\dec{s} = \{ \{e\} | e \in s\}
\hfill
\]
We argue that these techniques can be applied to most (practical) implementations of CRDTs
found in industry. The interested reader can find generic decomposition rules in Appendix~\ref{app:decomposing}.
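In terms of the Python sketch from Section~\ref{subsec:examples}, these two decompositions read as follows (again illustrative, using our \texttt{GCounter}/\texttt{GSet} classes):
\begin{verbatim}
def decompose_gcounter(p):
    """Irredundant decomposition: one singleton map per entry."""
    return [GCounter({k: v}) for k, v in p.entries.items()]

def decompose_gset(s):
    """Irredundant decomposition: one singleton set per element."""
    return [GSet({e}) for e in s.elems]
\end{verbatim}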
\subsection{Optimal deltas and \texorpdfstring{$\delta$}{}-mutators}
Having a unique irredundant join decomposition, we can define a function which
gives the minimum delta, or ``difference'' in analogy to set difference,
between two states $a, b \in \mathcal{L}$:
\[
\Delta(a, b) = \bigjoin \{ y \in \dec{a} | y \not \pleq b \}
\]
which when joined with $b$ gives $a \join b$, i.e. $\Delta(a, b) \join b = a \join b$.
It is minimum (and thus, optimal)
in the sense that it is smaller than any other $c$
which produces the same result:
$c \join b = a \join b \implies \Delta(a, b) \pleq c$.
If not carefully designed, $\delta$-mutators can be a source of redundancy
when the resulting $\delta$-state contains information that has already been incorporated
in the lattice state.
As an example, the original $\delta$-mutator $\daf{add}$ of $\af{GSet}$ presented in
\cite{deltas1} always returns a singleton set with the element to be added,
even if the element is already in the set
(in Figure \ref{fig:gset-spec} we have presented a definition of
$\daf{add}$ that is optimal). By resorting to function $\Delta$, minimum delta-mutators can be trivially derived
from a given mutator:
\[
\m^\delta(x) = \Delta(\m(x), x)
\]
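A direct transcription of $\Delta$ and of the derived optimal $\delta$-mutator, for the Python types sketched earlier (\texttt{decompose} and \texttt{bottom} are the per-type helpers defined above; a join over the empty set yields bottom):
\begin{verbatim}
from functools import reduce

def delta(a, b, decompose, bottom):
    """Optimal difference: join of the irreducibles of a
    not already below b."""
    parts = [y for y in decompose(a) if not y.leq(b)]
    return reduce(lambda x, y: x.join(y), parts, bottom)

def derive_delta_mutator(mutator, decompose, bottom):
    """Turn a standard mutator into an optimal delta-mutator."""
    return lambda x, *args: delta(mutator(x, *args), x,
                                  decompose, bottom)

# e.g. an optimal add-delta for GSet, equivalent to Fig. 2b:
add_delta = derive_delta_mutator(lambda s, e: s.add(e),
                                 decompose_gset, GSet())
\end{verbatim}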
\section{Revisiting Delta-based Synchronization}
\label{sec:revisited}
Algorithm \ref{alg:both} formally describes
delta-based synchronization at replica $i$.
The algorithm contains lines that belong to
\HLBase{classic} delta-based synchronization \cite{deltas1,deltas2},
and lines with \HLOpt{$\BP$} and \HLOpt{$\RR$} optimizations,
while non-highlighted lines belong to both.
In classic delta-based synchronization,
each replica $i$ maintains a lattice state $x_i \in \mathcal{L}$
(\qline{alg:state}),
and a $\delta$-buffer $B_i \in \pow\mathcal{L}$ as a set of lattice states
(\qline{alg:buffer}).
When an update operation occurs (\qline{alg:delta-operation}),
the resulting $\delta$ is merged with the local state $x_i$ (\qline{alg:merge})
and added to the buffer (\qline{alg:add-buffer}),
resorting to function $\store$. Periodically,
the whole content of the $\delta$-buffer (\qline{alg:buffer-collect})
is propagated to neighbors (\qline{alg:buffer-send}).
For simplicity of presentation, we assume that communication channels between
replicas cannot drop messages (reordering and duplication are tolerated), which is why the buffer is cleared after each synchronization step
(\qline{alg:buffer-clear}). This assumption can be removed by simply tagging
each entry in the $\delta$-buffer with a unique sequence number, and by
exchanging acks between replicas: once an entry has been acknowledged by every
neighbour, it is removed from the $\delta$-buffer, as originally
proposed in \cite{deltas1}.
When a $\delta$-group is received
(\qline{alg:delta-receive}),
then it is checked whether it will induce an inflation in the local state
(\qline{alg:delta-check}). If this is the case, the $\delta$-group is merged with the local state and added to the buffer
(for further propagation),
resorting to the same function $\store$.
The precondition in \qline{alg:delta-check} appears to be harmless,
but it is, in fact, the source of most of the redundant state propagated
by this synchronization algorithm. Detecting an inflation
is not enough, since there is almost always something new to
incorporate. Instead, synchronization algorithms must extract
from the received $\delta$-group the lattice state responsible for the
inflation, as done by the $\RR$ optimization.
Few changes are required in order to incorporate
this and the $\BP$ optimization in the classic
algorithm, as we show next.
This happens because our approach encapsulates most of its complexity
in the computation of join decompositions and function $\Delta$.
The fact that few changes are required to the classic synchronization
algorithm is a benefit that will minimize the effort of incorporating
these techniques into existing implementations.
\algsinglecol{t}{
\newcommand\tab{\hspace{0.7em}}
\newcommand\block[3]{\leavevmode\rlap{\colorbox{#2}{\vphantom{$X_0^1$}\hspace{#1}}}\makebox[#1][l]{#3}}
\newcommand\leftright[2]{\block{0.4\hsize}{gray!15}{#1}\block{0.57\hsize}{gray!45}{ #2}}
\SetKw{kwif}{if }
\SetKw{kwfor}{for }
\inputs{
$n_i \in \pow\mathds{I}$, set of neighbors \;
}
\vspace{0.2cm}
\state{
$x_i \in \mathcal{L}$, $x_i^0 = \bot$ \; \label{alg:state}
\leftright
{$B_i \in \pow \mathcal{L}$, $B_i^0 = \varnothing$}
{$B_i \in \pow{\mathcal{L} \times \mathds{I}}$, $B_i^0 = \varnothing$}
\; \label{alg:buffer}
}
\vspace{0.2cm}
\on({$\operation_i(\m^\delta)$}){
\label{alg:delta-operation}
$\delta = \m^\delta(x_i)$ \;
$\store(\delta, i)$ \;
}
\vspace{0.2cm}
\periodically(// synchronize){
\label{alg:delta-ship}
$\kwfor j \in n_i$ \;
\leftright
{\tab $d = \bigjoin B_i$}
{$d = \bigjoin \{s | \tup{s, o} \in B_i \mathrel{\wedge} o \not = j \}$}
\; \label{alg:buffer-collect}
\tab $\send_{i,j}(\af{delta}, d)$ \; \label{alg:buffer-send}
$B_i' = \varnothing$ \; \label{alg:buffer-clear}
}
\vspace{0.2cm}
\on({$\receive_{j,i}(\af{delta}, d)$}){
\label{alg:delta-receive}
\leftright{}{$d = \Delta(d, x_i)$} \; \label{alg:delta-extract}
\leftright{$\kwif d \not \sqleq x_i$}{$\kwif d \not = \bot$} \; \label{alg:delta-check}
\tab $\store(d, j)$ \;
}
\vspace{0.2cm}
\fun({$\store(s, o)$}){
$x_i' = x_i \join s$ \; \label{alg:merge}
\leftright{$B_i' = B_i \union \{s\}$}{$B_i' = B_i \union \{\tup{s, o}\}$}
\label{alg:add-buffer}
}
\vspace{0.2cm}
}
{Delta-based synchronization algorithms at replica $i \in \mathds{I}$:
\HLBase{classic} version
and version with \HLOpt{$\BP$} and \HLOpt{$\RR$} optimizations.}
{alg:both}
\paragraph*{Avoiding back-propagation of $\delta$-groups}
For $\BP$, each entry in the $\delta$-buffer is tagged
with its origin (\qline{alg:buffer} and \qline{alg:add-buffer}),
and at each synchronization step with neighbour $j$,
entries tagged with $j$ are filtered out
(\qline{alg:buffer-collect}).
\paragraph*{Removing redundant state in received $\delta$-groups}
A received $\delta$-group can contain redundant state,
i.e. state that has already been propagated to neighbors,
or state that is in the $\delta$-buffer $B_i$ still to be propagated.
This occurs in topologies where the underlying graph has cycles,
and thus,
nodes can receive the same information through different
paths in the graph.
In order to detect if a $\delta$-group has redundant state,
nodes do not need to keep everything in the $\delta$-buffer
or even inspect the $\delta$-buffer: it is enough to compare the
received $\delta$-group with the local lattice state $x_i$.
In classic delta-based synchronization,
received $\delta$-groups are added to
the $\delta$-buffer only if they would strictly inflate
the local state (\qline{alg:delta-check}).
For $\RR$,
we extract from the $\delta$-group what strictly inflates
the local state $x_i$ (\qline{alg:delta-extract}),
and $\store$ it if it is different from bottom
(\qline{alg:delta-check}).
This extraction is achieved by selecting
which irreducible states
from the decomposition of the received $\delta$-group
strictly inflate the local state,
resorting to function $\Delta$
presented in Section \ref{sec:efficient}.
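For concreteness, a compact Python rendering of Algorithm~\ref{alg:both} with both optimizations follows. This is a sketch under the same no-message-loss assumption; \texttt{send} is an abstract transport callback, and \texttt{delta} is the function sketched in Section~\ref{sec:efficient}.
\begin{verbatim}
from functools import reduce

class Replica:
    """Sketch of Algorithm 1 with the BP and RR optimizations."""
    def __init__(self, i, neighbors, bottom, decompose, send):
        self.i, self.neighbors, self.send = i, neighbors, send
        self.bottom, self.decompose = bottom, decompose
        self.x = bottom     # local lattice state x_i
        self.buffer = []    # delta-buffer: (delta-group, origin) pairs

    def _store(self, s, origin):
        self.x = self.x.join(s)
        self.buffer.append((s, origin))

    def operation(self, delta_mutator, *args):
        self._store(delta_mutator(self.x, *args), self.i)

    def synchronize(self):  # called periodically
        for j in self.neighbors:
            # BP: filter out delta-groups that originated at j itself
            parts = [s for (s, o) in self.buffer if o != j]
            self.send(self.i, j,
                      reduce(lambda a, b: a.join(b), parts, self.bottom))
        self.buffer = []    # channels assumed not to drop messages

    def receive(self, j, d):
        # RR: keep only what strictly inflates the local state
        d = delta(d, self.x, self.decompose, self.bottom)
        if not d.leq(self.bottom):   # i.e. d is not bottom
            self._store(d, j)
\end{verbatim}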
\section{Evaluation}
\label{sec:eval}
In this section we evaluate the proposed solutions and show the following:
\begin{itemize}
\item Classic delta-based synchronization
can be as inefficient as state-based synchronization in terms of
transmission bandwidth, while incurring an overhead in terms of memory usage
required for synchronization (Section \ref{sub:micro}).
\item In acyclic topologies, $\BP$ is enough to attain the best results,
while in topologies with cycles, only $\RR$ can greatly reduce the synchronization
cost (Section \ref{sub:micro}).
\item Alternative synchronization techniques (such as Scuttlebutt \cite{scuttlebutt} and operation-based synchronization \cite{crdts1,crdts2}) are metadata-heavy; this metadata represents a large fraction of all the data required for synchronization (over 75\%)
while for delta-based synchronization the metadata overhead can be as low as $7.7\%$
(Section \ref{sub:micro}).
\item In moderate-to-high contention workloads, $\BP$ + $\RR$
can reduce transmission bandwidth and memory consumption by several GBs;
when comparing with $\BP$ + $\RR$, classic delta-based synchronization has an unnecessary
CPU overhead of up to 7.9$\af{x}$ (Section \ref{sub:retwis}).
\end{itemize}
Instructions on how to reproduce all experiments can be found in
our public repository\footnote{\url{https://github.com/vitorenesduarte/exp}}.
\subsection{Experimental Setup}
The evaluation was conducted in a Kubernetes cluster deployed in Emulab \cite{emulab}.
Each machine has a Quad Core Intel Xeon 2.4 GHz and 12GB of RAM.
The number of machines in the cluster is set such that two replicas
are never scheduled to run in the same machine, i.e. there is at least
one machine available for each replica in the experiment.
\paragraph*{Network Topologies}
Figure \ref{fig:topologies} depicts the two network topologies
employed in the experiments:
a partial-mesh, in which each node has 4 neighbors;
and a tree, with 3 neighbors per node, with the exception of the root
node (2 neighbors) and leaf nodes (1 neighbor).
The first topology exhibits redundancy in the links and tests the effect of cycles in the
synchronization, while the second represents an optimal propagation scenario
over a spanning tree.
\begin{figure}[t]
\begin{minipage}{.24\textwidth}
\centerxy{<0.5cm,0pt>:
(2.5,0.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="0";
(2.2838636441,1.0168416077)*+{\vcenter{\hbox{\tiny$\bullet$}}}="1";
(1.6728265159,1.8578620637)*+{\vcenter{\hbox{\tiny$\bullet$}}}="2";
(0.7725424859,2.3776412907)*+{\vcenter{\hbox{\tiny$\bullet$}}}="3";
(-0.2613211582,2.4863047384)*+{\vcenter{\hbox{\tiny$\bullet$}}}="4";
(-1.25,2.1650635095)*+{\vcenter{\hbox{\tiny$\bullet$}}}="5";
(-2.0225424859,1.4694631307)*+{\vcenter{\hbox{\tiny$\bullet$}}}="6";
(-2.4453690018,0.519779227)*+{\vcenter{\hbox{\tiny$\bullet$}}}="7";
(-2.4453690018,-0.519779227)*+{\vcenter{\hbox{\tiny$\bullet$}}}="8";
(-2.0225424859,-1.4694631307)*+{\vcenter{\hbox{\tiny$\bullet$}}}="9";
(-1.25,-2.1650635095)*+{\vcenter{\hbox{\tiny$\bullet$}}}="10";
(-0.2613211582,-2.4863047384)*+{\vcenter{\hbox{\tiny$\bullet$}}}="11";
(0.7725424859,-2.3776412907)*+{\vcenter{\hbox{\tiny$\bullet$}}}="12";
(1.6728265159,-1.8578620637)*+{\vcenter{\hbox{\tiny$\bullet$}}}="13";
(2.2838636441,-1.0168416077)*+{\vcenter{\hbox{\tiny$\bullet$}}}="14";
"5"; "9" **\dir{-};
"10"; "11" **\dir{-};
"4"; "8" **\dir{-};
"5"; "6" **\dir{-};
"0"; "14" **\dir{-};
"8"; "9" **\dir{-};
"3"; "7" **\dir{-};
"7"; "11" **\dir{-};
"3"; "14" **\dir{-};
"1"; "2" **\dir{-};
"6"; "7" **\dir{-};
"12"; "13" **\dir{-};
"6"; "10" **\dir{-};
"0"; "11" **\dir{-};
"1"; "5" **\dir{-};
"2"; "13" **\dir{-};
"0"; "4" **\dir{-};
"2"; "6" **\dir{-};
"4"; "5" **\dir{-};
"9"; "10" **\dir{-};
"2"; "3" **\dir{-};
"11"; "12" **\dir{-};
"0"; "1" **\dir{-};
"9"; "13" **\dir{-};
"1"; "12" **\dir{-};
"8"; "12" **\dir{-};
"13"; "14" **\dir{-};
"3"; "4" **\dir{-};
"10"; "14" **\dir{-};
"7"; "8" **\dir{-};
}
\end{minipage}
\begin{minipage}{.24\textwidth}
\centerxy{<0.5cm,0pt>:
(-2.0,-1.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(1, 2)";
(0.0,0.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(0, 1)";
(2.5,-3.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(3, 2)";
(1.5,-3.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(3, 3)";
(3.5,-3.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(3, 1)";
(3.0,-2.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(2, 1)";
(2.0,-1.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(1, 1)";
(-3.5,-3.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(3, 8)";
(-1.0,-2.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(2, 3)";
(-1.5,-3.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(3, 6)";
(1.0,-2.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(2, 2)";
(-2.5,-3.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(3, 7)";
(0.5,-3.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(3, 4)";
(-3.0,-2.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(2, 4)";
(-0.5,-3.0)*+{\vcenter{\hbox{\tiny$\bullet$}}}="(3, 5)";
"(2, 1)"; "(3, 1)" **\dir{-};
"(0, 1)"; "(1, 2)" **\dir{-};
"(2, 3)"; "(3, 6)" **\dir{-};
"(1, 2)"; "(2, 3)" **\dir{-};
"(2, 2)"; "(3, 4)" **\dir{-};
"(0, 1)"; "(1, 1)" **\dir{-};
"(2, 3)"; "(3, 5)" **\dir{-};
"(2, 2)"; "(3, 3)" **\dir{-};
"(1, 1)"; "(2, 2)" **\dir{-};
"(1, 1)"; "(2, 1)" **\dir{-};
"(2, 4)"; "(3, 7)" **\dir{-};
"(2, 1)"; "(3, 2)" **\dir{-};
"(1, 2)"; "(2, 4)" **\dir{-};
"(2, 4)"; "(3, 8)" **\dir{-};
}
\end{minipage}
\caption{Network topologies employed:
a 15-node partial-mesh (to the left)
and a 15-node tree (to the right).}
\label{fig:topologies}
\end{figure}
\subsection{Micro-Benchmarks}
\label{sub:micro}
We have designed a set of micro-benchmarks, in which
each node periodically
(every second)
synchronizes with neighbors and
executes an update operation over a CRDT. The update operation depends
on the CRDT type. In $\af{GSet}$, the update event is the addition of a globally
unique element to the set;
in $\af{GCounter}$, an increment on the counter;
and in $\af{GMap\ K\%}$ each node updates $\frac{\af{K}}{\af{N}}\%$ keys
($\af{N}$ being the number of nodes/replicas),
such that globally $\af{K\%}$ of all the keys in the \emph{grow-only map}
are modified within each synchronization interval.
Note how the $\af{GCounter}$ benchmark is a particular case of $\af{GMap\ K\%}$,
in which $\af{K} = 100$.
For $\af{GMap\ K\%}$ we set the total number of keys to 1000,
and for all benchmarks, the number of events per replica is set to 100.
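For illustration, the periodic update performed by each node can be sketched as follows (hypothetical Python; the state representations and helper names are ours and not part of any benchmark harness):
\begin{verbatim}
import uuid

def update_gset(state, node_id):
    # GSet: add a globally unique element to the grow-only set
    state.add((node_id, uuid.uuid4().hex))

def update_gcounter(state, node_id):
    # GCounter: increment this node's entry; the counter
    # value is the sum of all per-node entries
    state[node_id] = state.get(node_id, 0) + 1

def update_gmap(state, keys, node_id, k_percent, n_nodes):
    # GMap K%: each node updates K/N % of the keys, so that
    # globally K% of the keys change per synchronization interval
    share = int(len(keys) * k_percent / 100 / n_nodes)
    for key in keys[node_id * share:(node_id + 1) * share]:
        state[key] = state.get(key, 0) + 1
\end{verbatim}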
\newcommand\mltext[2]{
\begin{minipage}{#1}
\vspace*{.02cm}
\begin{flushleft}
#2
\end{flushleft}
\vspace*{-.42cm}
\end{minipage}
}
\begin{table}[t]
\caption{Description of micro-benchmarks.}
\label{tab:micro}
\begin{center}
\begin{tabular}{c c p{2.6cm}}
\toprule
\mltext{1.5cm}{\textbf{Type}}
 & \mltext{2.4cm}{\textbf{Periodic event}}
 & \mltext{2.6cm}{\textbf{Measurement}} \\ \toprule
\mltext{1.5cm}{$\af{GCounter}$}
 & \mltext{2.4cm}{single increment}
 & \mltext{2.6cm}{number of entries in the map} \\ \midrule
\mltext{1.5cm}{$\af{GSet}$}
 & \mltext{2.4cm}{addition of unique element}
 & \mltext{2.6cm}{number of elements in the set} \\ \midrule
\mltext{1.5cm}{$\af{GMap\ K\%}$}
 & \mltext{2.4cm}{change the value of $\frac{\af{K}}{\af{N}}\%$ keys}
 & \mltext{2.6cm}{number of entries in the map} \\ \bottomrule
\end{tabular}
\end{center}
\end{table}
These micro-benchmarks are summarized in Table \ref{tab:micro},
along with the metric we have defined (to be used in the transmission
and memory measurements):
for $\af{GCounter}$ and $\af{GMap\ K\%}$ we count the number of map entries,
while for $\af{GSet}$, the number of set elements.
We set up this part of the evaluation with 15-node topologies
(as in Figure \ref{fig:topologies}).
As baselines, we use state-based synchronization,
classic delta-based synchronization,
Scuttlebutt, a variation of Scuttlebutt,
and operation-based synchronization.
\paragraph*{Scuttlebutt}
Scuttlebutt \cite{scuttlebutt} is an anti-entropy protocol
used to reconcile changes in values of a key-value store. Each value is uniquely identified
with a version $\tup{i, s} \in \mathds{I} \times \mathds{N}$, where the first component $i \in \mathds{I}$
is the identifier of the replica responsible for the new value,
and $s \in \mathds{N}$ a sequence number, incremented on each local update, thus being unique.
With this, the updates known locally can be summarized by a vector $\mathds{I} \mathrel{\hookrightarrow} \mathds{N}$,
mapping each replica to the highest sequence number it knows.
When a node wants to reconcile with a neighbor replica, it sends the summary vector,
and the neighbor replies with all the key-value pairs it has locally that have versions
not summarized in the received vector.
This strategy is performed in both directions, and in the end, both replicas
have the same key-value pairs in their local key-value store
(assuming no new updates occurred).
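A minimal sketch of one direction of this exchange (hypothetical Python; \texttt{store} maps versions $\tup{i, s}$ to values):
\begin{verbatim}
def summarize(store):
    # highest sequence number known, per replica identifier
    summary = {}
    for (i, s) in store:
        summary[i] = max(summary.get(i, 0), s)
    return summary

def missing(store, remote_summary):
    # key-value pairs whose versions the remote summary does not cover
    return {(i, s): v for (i, s), v in store.items()
            if s > remote_summary.get(i, 0)}

store_a = {("a", 1): "x1", ("b", 1): "y1"}
store_b = {("a", 1): "x1", ("b", 2): "y2"}
reply = missing(store_b, summarize(store_a))  # {("b", 2): "y2"}
store_a.update(reply)  # repeated symmetrically in the other direction
\end{verbatim}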
Scuttlebutt can be used to synchronize state-based CRDTs with few modifications.
Using the CRDT state as the value would be inefficient, since changes to the CRDT
would not be propagated incrementally, i.e., a small change in the CRDT would require
sending the whole new state, as in state-based synchronization.
Therefore, we use as values the optimal deltas resulting from $\delta$-mutators.
As keys, we can simply resort to the version pairs.
When reconciling two replicas, a replica receiving new key-delta pairs,
merges all the deltas with the local CRDT. If CRDT updates stop, eventually
all replicas converge to the same CRDT state. We label this approach \texttt{Scuttlebutt}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.46\textwidth,keepaspectratio]{gset_gcounter}
\end{center}
\caption{Transmission of $\af{GSet}$ and $\af{GCounter}$
with respect to delta-based $\BP + \RR$
-- tree and mesh topologies.}
\label{fig:gset-gcounter-transmission}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=.92\textwidth,keepaspectratio]{gmap}
\end{center}
\caption{Transmission of $\af{GMap\ 10\%}$,
$\af{30\%}$, $\af{60\%}$ and $\af{100\%}$
-- tree and mesh topologies.}
\label{fig:gmap-transmission}
\end{figure*}
This strategy is potentially inefficient in terms of memory: a replica has to keep in the
Scuttlebutt key-value store \emph{all} the deltas it has ever seen, since a neighbor
replica can at any point in time send a summary vector asking for \emph{any} delta.
Since the original Scuttlebutt algorithm does not support deleting keys from the key-value
store, we add support for \emph{safe} deletes of deltas,
in order to reduce its memory footprint.
If each node keeps track of what each node in the system has seen
(in a map $\mathds{I} \mathrel{\hookrightarrow} (\mathds{I} \mathrel{\hookrightarrow} \mathds{N})$ from replica identifiers to
the last seen summary vector),
once a delta has been seen by all nodes,
it can be safely deleted from the local Scuttlebutt store.
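This check can be sketched as follows (hypothetical Python, continuing the store representation above; \texttt{seen} is the map from replica identifiers to the last summary vector received from them):
\begin{verbatim}
def gc_store(store, seen, all_nodes):
    # a delta with version (i, s) can be deleted once every node's
    # last known summary vector already covers sequence number s of i
    def seen_by_all(i, s):
        return all(seen.get(n, {}).get(i, 0) >= s for n in all_nodes)
    for (i, s) in list(store):
        if seen_by_all(i, s):
            del store[(i, s)]
\end{verbatim}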
We compare with this improved Scuttlebutt variant (labeled \texttt{Scuttlebutt-GC})
that allows nodes to only be connected
to a subset of all nodes, not requiring all-to-all connectivity,
while supporting safe deletes.
For completeness, we also compare with the original Scuttlebutt design
that is unable to garbage-collect unnecessary key-delta pairs.
\paragraph*{Operation-based}
Operation-based CRDTs \cite{crdts1,crdts2} resort to a causal broadcast
middleware~\cite{causal-multicast-survey} that is used to disseminate CRDT operations.
This middleware tags each operation with a vector clock that
summarizes the causal past of the operation.
Such vector is then used by the recipient to ensure causal delivery of operations,
i.e. each operation is only delivered when every operation in its causal past
has been delivered as well.
In topologies with all-to-all connectivity, each node is only responsible for disseminating
its own operations. In order to relax this requirement, we have implemented a middleware
that \emph{stores-and-forwards} operations: when an operation is seen for the first time, it is
added to a transmission buffer to be further propagated in the next synchronization step;
if the same operation is received from different incoming neighbors, the middleware simply
updates which nodes have seen this operation so that unnecessary transmissions are avoided.
To the best of our knowledge, this is the best possible implementation of such a middleware.
We label this approach \texttt{Op-based}.
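The store-and-forward behaviour can be sketched as follows (hypothetical Python; \texttt{buffer} maps each operation to the set of nodes already known to have seen it):
\begin{verbatim}
def on_receive(buffer, op, sender, myself):
    if op not in buffer:
        # first time seen: schedule for further propagation
        buffer[op] = {sender, myself}
    else:
        # already buffered: only record that the sender has it,
        # avoiding an unnecessary retransmission back to it
        buffer[op].add(sender)

def on_sync(buffer, neighbor):
    # forward only the operations the neighbor has not seen yet
    to_send = [op for op, seen in buffer.items() if neighbor not in seen]
    for op in to_send:
        buffer[op].add(neighbor)
    return to_send
\end{verbatim}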
\subsubsection{Transmission bandwidth}
Figure \ref{fig:gset-gcounter-transmission}
shows, for $\af{GSet}$ and $\af{GCounter}$,
the transmission ratio (of all synchronization mechanisms previously mentioned)
with respect to delta-based synchronization with $\BP$ and $\RR$ optimizations enabled.
The first observation is that
classic delta-based synchronization presents almost
no improvement
when compared to state-based synchronization.
In the tree topology, $\BP$ is enough to attain the best result,
because the underlying topology has no cycles,
and thus $\BP$ suffices to prevent redundant state from being propagated.
With a partial mesh, $\BP$ has little effect,
and $\RR$ contributes most to the overall improvement.
Given that the underlying topology leads to redundant communication
(desired for fault-tolerance),
and classic delta-based can never remove that redundancy,
its transmission bandwidth is effectively similar to that of state-based synchronization.
Scuttlebutt and Scuttlebutt-GC are more efficient than classic delta-based for $\af{GSet}$
since both can precisely identify state changes between synchronization
rounds. However, the results for $\af{GCounter}$ reveal a limitation of this approach.
Since Scuttlebutt treats propagated values as opaque,
and does not understand that the changes in a $\af{GCounter}$ compress naturally
under lattice joins (only the highest sequence for each replica needs to be kept),
it effectively behaves worse than state-based and classic delta-based in this case.
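The point can be made concrete with a small sketch (hypothetical Python): the join of $k$ successive $\af{GCounter}$ deltas produced by one replica is a single delta of constant size, while a protocol that treats deltas as opaque values must ship all $k$ of them.
\begin{verbatim}
def join(d1, d2):
    # GCounter join: pointwise maximum of the per-replica entries
    return {i: max(d1.get(i, 0), d2.get(i, 0))
            for i in d1.keys() | d2.keys()}

# 100 increments at replica "a" produce deltas {"a": 1}, ..., {"a": 100};
# their join is just {"a": 100}: one entry, regardless of the count
acc = {}
for n in range(1, 101):
    acc = join(acc, {"a": n})
assert acc == {"a": 100}
\end{verbatim}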
Operation-based synchronization follows the \emph{same trend}
for the \emph{same reason}:
it improves state-based and classic delta-based for $\af{GSet}$
but not for $\af{GCounter}$ since the middleware is unable to compress
multiple operations into a single, equivalent, operation.
Supporting generic operation-compression at the middleware level
in operation-based CRDTs is an open research problem.
The difference between these three approaches is related to
the metadata cost associated with each,
as we show in Section~\ref{subsub:metadata}.
Even with the optimizations $\BP + \RR$ proposed,
the best result for $\af{GCounter}$ is not much
better than state-based.
This is expected since most entries of the underlying map are being updated
between each synchronization step:
each node has almost always something new from every other node
in the system to propagate
(thus being similar to state-based in some cases).
This pattern represents a special case of a map
in which $\af{100\%}$ of its keys are updated between
state synchronizations.
In Figure \ref{fig:gmap-transmission} we study
other update patterns, by measuring
the transmission of $\af{GMap\ 10\%}$, $\af{30\%}$, $\af{60\%}$,
and $\af{100\%}$.
These results are further evidence of what we have
observed
in the case of $\af{GSet}$:
$\BP$ suffices if the network graph is acyclic,
but $\RR$ is crucial in the more general case.
As seen previously, Scuttlebutt and Scuttlebutt-GC behave much better
than state-based synchronization, yielding a reduction in the transmission cost between
$46\%$ and $91\%$, and $20\%$ and $65\%$, respectively. This is
due to the underlying precise reconciliation mechanism of Scuttlebutt.
Operation-based synchronization leads to a transmission reduction between
$35\%$ and $80\%$ since it is able to represent
incremental changes to the CRDT as small operations.
Finally, delta-based $\BP + \RR$ is able to reduce the transmission
costs by up to $94\%$.
In the extreme case of $\af{GMap\ 100\%}$
(every key in the map is updated between synchronization rounds, which is a less likely workload in practical systems)
and considering a partial-mesh, delta-based $\BP + \RR$
provides a modest improvement in relation to state-based of about $18\%$
less transmission, and its performance is below Scuttlebutt variants and
operation-based synchronization.
Vector-based protocols (Scuttlebutt and operation-based), however, have an inherent scalability problem.
When increasing the number of nodes in the system, the transmission costs may
become
dominated by the size of metadata required for synchronization, as we show next.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.32\textwidth,keepaspectratio]{metadata}
\end{center}
\caption{Metadata required per node when synchronizing a $\af{GSet}$
in a mesh topology. Each node has 4 neighbours (as in
Figure \ref{fig:topologies}) and each node identifier has size 20B.}
\label{fig:metadata}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.46\textwidth,keepaspectratio]{memory}
\end{center}
\caption{Average memory ratio with respect to $\BP + \RR$
for $\af{GCounter}$, $\af{GSet}$,
$\af{GMap\ 10\%}$ and $\af{100\%}$
-- mesh topology}
\label{fig:memory}
\end{figure}
\subsubsection{Metadata Cost}\label{subsub:metadata}
Figure~\ref{fig:metadata} shows the size of metadata required
for synchronization per node while varying the
total number of replicas (i.e. nodes). The results show a linear and quadratic cost
(in terms of number of nodes) for Scuttlebutt and Scuttlebutt-GC (respectively),
and a
linear cost for operation-based synchronization
(in terms of both number of nodes and pending updates still to be propagated).
Given $N$ nodes, $P$ neighbors, and $U$ pending updates,
the metadata cost per node is as follows (a numeric illustration is sketched after the list):
\begin{itemize}
\item Scuttlebutt: $NP$ (a vector per neighbor)
\item Scuttlebutt-GC: $N^2 P$ (a map of vectors per neighbor)
\item Operation-based: $NPU$ (a vector per neighbor per pending update)
\item Delta-based: $P$ (a sequence number per neighbor)
\end{itemize}
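As a back-of-the-envelope illustration of these formulas (not a measurement; we plug in the parameters of Figure~\ref{fig:metadata}, i.e., 20-byte identifiers and $P = 4$ neighbors, and assume 8-byte sequence numbers and $U = 10$ pending updates):
\begin{verbatim}
ID, SEQ = 20, 8        # bytes per identifier / sequence number (assumed)
N, P, U = 32, 4, 10    # nodes, neighbors, pending updates (assumed)

entry = ID + SEQ       # one vector entry: (identifier, sequence number)
print("Scuttlebutt:   ", N * P * entry)      # a vector per neighbor
print("Scuttlebutt-GC:", N * N * P * entry)  # a map of vectors per neighbor
print("Op-based:      ", N * P * U * entry)  # a vector per neighbor per update
print("Delta-based:   ", P * SEQ)            # a sequence number per neighbor
\end{verbatim}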
This cost may represent a large fraction of all data propagated during
synchronization.
For example, in our measurements with 32 nodes, this metadata
represents $75\%$, $99\%$, and $97\%$ of the transmission costs
for Scuttlebutt, Scuttlebutt-GC and operation-based, respectively,
while the overhead of delta-based synchronization is only $7.7\%$.
\subsubsection{Memory footprint}
In delta-based synchronization,
the size of $\delta$-groups being propagated not only affects
the network bandwidth consumption, but also
the memory required to store them in the $\delta$-buffer
for further propagation.
During the experiments, we periodically measure
the amount of state (both CRDT state and metadata required
for synchronization) stored in memory for each node.
Figure~\ref{fig:memory} reports the average
memory ratio with respect to $\BP + \RR$.
State-based does not require synchronization metadata,
and thus it is optimal in terms of memory usage.
Classic delta-based and delta-based $\BP$ have an overhead
of 1.1$\af{x}$-3.9$\af{x}$ since the size of $\delta$-groups in the $\delta$-buffer
is larger for these techniques.
For $\af{GSet}$ and $\af{GMap\ 10\%}$, Scuttlebutt-GC is close to $\BP + \RR$
since deltas are removed from the key-value store
as soon as they are seen by all replicas.
Key-delta pairs are never pruned in the original Scuttlebutt, leading to ever-increasing memory usage.
As long as new updates keep arriving, the memory consumption
of Scuttlebutt can only grow, ultimately to a point where it will disrupt the system operation.
Operation-based has a higher memory cost than Scuttlebutt-GC, since each
operation in the transmission buffer is tagged with a vector,
while in Scuttlebutt and Scuttlebutt-GC each delta is simply tagged with a version pair.
Considering the results for $\af{GCounter}$, the three vector-based algorithms exhibit
the highest memory consumption. This is justified by
the same reason they perform poorly
in terms of transmission bandwidth
in this case (Figure~\ref{fig:gset-gcounter-transmission}):
these protocols are unable to compress incremental changes.
Overall, and ignoring state-based, which incurs no metadata memory cost,
$\BP + \RR$ attains the best results.
\subsection{Retwis Application}
\label{sub:retwis}
\begin{table}[t]
\caption{Retwis workload characterization: for each operation, the number of
CRDT updates performed and its workload percentage.}
\label{tab:retwis}
\begin{center}
\begin{tabular}{c c p{1.7cm}}
\toprule
\mltext{1.5cm}{\textbf{Operation}}
 & \mltext{2cm}{\textbf{\#Updates}}
 & \mltext{1.7cm}{\textbf{Workload \%}} \\ \toprule
\mltext{1.5cm}{Follow}
 & \mltext{2cm}{1}
 & \mltext{1.7cm}{15\%} \\ \midrule
\mltext{1.5cm}{Post Tweet}
 & \mltext{2cm}{1 + \#Followers}
 & \mltext{1.7cm}{35\%} \\ \midrule
\mltext{1.5cm}{Timeline}
 & \mltext{2cm}{0}
 & \mltext{1.7cm}{50\%} \\ \bottomrule
\end{tabular}
\end{center}
\end{table}
We now compare classic delta-based with
delta-based $\BP + \RR$ using Retwis \cite{retwis},
a popular \cite{tapir,walter,tardis} open-source Twitter clone.
In Table \ref{tab:retwis} we describe the application workload,
similar to the one used in \cite{tapir}:
user $a$ can follow user $b$ by updating the set of followers of user $b$;
users can post a new tweet, by writing it on their wall and in the timeline
of all their followers;
and finally, users can read their timeline, fetching the 10 most recent tweets.
Each user has three objects associated with it:
1) a set of followers stored in a $\af{GSet}$;
2) a wall stored in a $\af{GMap}$ mapping tweet identifiers to their content; and
3) a timeline stored in a $\af{GMap}$ mapping tweet timestamps to tweet identifiers.
We run this benchmark with 10K users, and thus, 30K CRDT objects overall.
The size of tweet identifiers and content is 31B and 270B, respectively.
These sizes are representative of real workloads,
as shown in an analysis of Facebook's general-purpose key-value store
\cite{facebook-workload}. The topology is a partial-mesh, with 50 nodes,
each with 4 neighbors, as in Figure \ref{fig:topologies},
and updates on objects follow a Zipf distribution, with
coefficients
ranging from 0.5 (low contention) to 1.5 (high contention)
\cite{tapir}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.48\textwidth,keepaspectratio]{retwis}
\end{center}
\caption{Transmission bandwidth per node (top) and average memory per node (bottom) of
classic delta-based and $\BP + \RR$ for different Zipf coefficient values
(log scale).
The left and right side show these values for
the first and second half of the experiment (respectively).}
\label{fig:retwis}
\end{figure}
Figure~\ref{fig:retwis}
shows the transmission bandwidth and memory footprint of both algorithms,
for different Zipf coefficient values.
We can observe that in low contention workloads, classic delta-based
behaves almost optimally when compared to $\BP + \RR$.
Since updates are distributed almost evenly across all objects,
there are few concurrent updates to the same object between synchronization rounds,
and thus, the simple and naive inflation check in~\qline{alg:delta-check}
suffices.
This phenomenon was not observed in the previous set of benchmarks,
since we had a single object, and thus maximum contention.
As we increase contention, a more sophisticated approach like $\BP + \RR$ is required,
in order to avoid redundant state propagation.
For example, with a 1.25 coefficient, bandwidth is reduced from $1.46$GB/s to $0.06$GB/s per node,
and memory footprint per node drops from $1.58$GB to $0.62$GB (right side of the plots).
Also, as we increase the Zipf coefficient, we note that the bandwidth consumption continues to
rise, leading to an unsustainable situation in the case of classic delta-based,
as it can never reduce the size of $\delta$-groups being transmitted.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.34\textwidth,keepaspectratio]{retwis_processing}
\end{center}
\caption{CPU overhead of classic delta-based when compared to delta-based $\af{BP} + \af{RR}$.}
\label{fig:retwis-processing}
\end{figure}
During the experiment we also measured the CPU time spent in processing CRDT
updates, both producing and processing synchronization messages.
Figure \ref{fig:retwis-processing} reports the CPU overhead of classic delta-based,
when considering $\BP + \RR$ as baseline.
Since classic delta-based produces/processes larger messages
than $\BP + \RR$, this results in a higher CPU cost:
for the 1, 1.25 and 1.5 Zipf coefficients, classic delta-based incurs an overhead
of 0.4$\af{x}$, 5.5$\af{x}$, and 7.9$\af{x}$, respectively.
\section{Related Work}
In the context of remote file synchronization, \emph{rsync} \cite{rsync}
synchronizes two files placed on different machines,
by generating file block signatures,
and using these signatures to identify the missing blocks on the backup file.
In this strategy, there is a trade-off between the size of the blocks to be signed,
the number of signatures to be sent, and the size of the blocks to be received:
bigger blocks to be signed imply fewer signatures to be sent, but
the blocks received (deltas) can be bigger than necessary.
Inspired by \emph{rsync}, \emph{Xdelta}~\cite{xdelta} computes a difference between two files,
taking advantage of the fact that both files are present.
Consequently, the cost of sending signatures can be ignored
and the produced deltas are optimized.
In \cite{join-decompositions}, we propose two techniques that can be used
to synchronize two state-based CRDTs after a network partition,
avoiding bidirectional full state transmission.
Let $\A$ and $\B$ be two replicas.
In \emph{state-driven} synchronization, $\A$
starts by sending its local lattice state to $\B$,
and given this state, $\B$ is able to compute a delta that reflects the updates missed by
$\A$.
In \emph{digest-driven} synchronization, $\A$ starts by sending a digest (signature)
of its local state (smaller than the local state), that still allows
$\B$ to compute the delta. $\B$ then sends the computed delta along with a digest
of its local state, allowing $\A$ to compute a delta for $\B$.
Convergence is achieved after 2 and 3 messages
in \emph{state-driven} and \emph{digest-driven}, respectively.
These two techniques also exploit the concept of join decomposition presented in this paper.
Similarly to \emph{digest-driven} synchronization,
$\Delta$-CRDTs \cite{making-deltas}
exchange metadata used to compute a delta that reflects missing updates.
In this approach, CRDTs need to be extended to maintain
additional metadata for delta derivation,
and if this metadata needs to be garbage collected,
the mechanism falls back to standard bidirectional full state transmission.
In the context of anti-entropy gossip protocols,
\emph{Scuttlebutt} \cite{scuttlebutt}
proposes a \emph{push-pull} algorithm
to be used to synchronize a set of values between participants,
but considers each value as opaque, and does not try to represent
recent changes to these values as deltas. Other solutions try to minimize the communication overhead
of anti-entropy gossip-based protocols by exploiting either hash functions~\cite{clearhouse}
or a combination of Bloom filters, Merkle trees, and Patricia tries~\cite{Byers}. Still,
these solutions require a significant number of message exchanges to identify the source
of divergence between the state of two processes. Additionally, these solutions might incur
significant processing overhead due to the need of computing hash functions and manipulating
complex data structures, such as Merkle trees.
With the exception of \emph{Xdelta}, none of these techniques assumes knowledge prior
to synchronization, and thus they delay reconciliation
by always exchanging state digests in order to detect state divergence.
\section{Conclusion} \label{sec:conclusion}
Under geo-replication there is a significant availability and latency impact
\cite{abadi-cap} when aiming for strong consistency criteria such as linearizability
\cite{herlihy-linearizability}.
Strong consistency guarantees greatly simplify the programmer's view of the system and are
still required for operations that do demand global synchronization.
However, several other system components do not need that same level of coordination
and can reap the benefits of fast local operation and strong eventual consistency.
This requires capturing more information on each data type semantics, since a read/write
abstraction becomes limiting for the purpose of data reconciliation.
CRDTs can provide a sound approach to these highly available solutions and support the
existing industry solutions for geo-replication, which are still mostly grounded on state-based CRDTs.
State-based CRDT solutions quickly become prohibitive in practice, if there is no support for
treatment of small incremental state deltas. In this paper we advance the foundations of
state-based CRDTs by introducing minimal deltas that precisely track state changes.
We also present and micro-benchmark two optimizations,
\emph{avoid back-propagation of $\delta$-groups} and
\emph{remove redundant state in received $\delta$-groups},
that solve inefficiencies in classic delta-based synchronization algorithms.
Further evaluation shows the improvement our solution can bring to
a small-scale Twitter clone deployed in a 50-node cluster,
a relevant application scenario.
\section*{Acknowledgments}
We would like to thank Ricardo Macedo, Georges Younes, Marc Shapiro and the anonymous reviewers for their valuable feedback on earlier drafts of this work.
Vitor Enes was supported by EU H2020 LightKone project (732505) and by a FCT - Funda{\c{c}}{\~{a}}o para a Ci{\^{e}}ncia e a Tecnologia - PhD Fellowship (PD/BD/142927/2018). Carlos Baquero was partially supported by SMILES within TEC4Growth project (NORTE-01-0145-FEDER-000020). Jo\~{a}o Leit\~{a}o was partially supported by project NG-STORAGE through FCT grant PTDC/CCI-INF/32038/2017,
and by NOVA LINCS through the FCT grant UID/CEC/04516/2013.
\bibliographystyle{IEEEtran}
\section{Introduction}
Recovery of a signal from several measured intensity patterns, also known as the \emph{phase retrieval problem}, is of great interest in optics and imaging.
Recently it was shown in \cite{Antonello15} that the problem of estimating the wavefront aberration from measurements of the point spread functions can be formulated as a phase retrieval problem.
In this paper, we consider the general phase retrieval problem \cite{SheEld15}:
\begin{equation*}
\mbox{find}\quad \mathbf{a} \in \mathbb{C}^{n_a} \mbox{ such that }
{\mathbf{y}}_i = | {\mathbf{u}}_i^H {\mathbf{a}} |^2 \quad {\rm for}\;\; i=1,\ldots,n_y,
\end{equation*}
where ${\mathbf{y}}_i \in \mathbb{R}_+$ and ${\mathbf{u}}_i \in \mathbb{C}^{n_a}$ are known and $(\cdot)^H$ denotes the Hermitian transpose of a vector (matrix).
For brevity the following compact notation will be used in this paper to denote this general noise-free phase retrieval problem:
\begin{equation}\label{ProblemG}
\mbox{find}\quad \mathbf{a} \in \mathbb{C}^{n_a} \mbox{ such that }
{\mathbf{y}} = | U {\mathbf{a}} |^2,
\end{equation}
where ${\mathbf{y}} \in \Real_+^{n_y}$ are the measurements and $U \in \Complex^{n_y\times n_a}$ is the propagation matrix.
With noise on the measurements $y_i$, we consider the following related optimization problem:
\begin{equation}
\begin{aligned}
& \underset{{\mathbf{a}} \in \Complex^{n_a}}{\min}
& & \norm{{\mathbf{y}} - \abs{U{\mathbf{a}}}^2},
\end{aligned} \label{eq:problemG2}
\end{equation}
where $\norm{\cdot}$ denotes a vector norm of interest.
The sparse variant of the phase retrieval problem corresponds to the case that the unknown parameter $\mathbf{a}$ is a sparse vector.
A special case of this problem is when the measurements are the magnitude of the Fourier transform of multiples of $\mathbf{a}$ with certain phase diversity patterns.
A number of algorithms utilizing the Fourier transform have been proposed for solving this class of phase retrieval problems \cite{fienup1982phase,LukBurLyo02,Gespar}.
The fundamental nature of \eqref{ProblemG} has given rise to a wide variety of solution methods, developed for specific variants of this problem ever since Sayre observed in 1952 that the phase information of a scattered wave may be recovered from the recorded intensity patterns at and between the Bragg peaks of a diffracted wave \cite{Sayre52}.
Direct methods \cite{Hauptman86} usually use insights about the crystallographic structure and randomization to search for the missing phase information.
The requirement of such a priori structural information and the high computational cost often limit the application of these methods in practice.
A second class of methods, first devised by Gerchberg and Saxton \cite{GSF72} and Fienup \cite{fienup1982phase}, can be described as variants of the method of alternating projections on certain sets defined by the constraints. For an overview of these methods and later refinements we refer the reader to \cite{Bauschke02,LukBurLyo02}.
In \cite{candes2015phase} \eqref{ProblemG} is relaxed to a convex optimization problem.
The inclusion of the sparsity constraint in the same framework of convex relaxations has been considered in \cite{Ohlsson}.
However, as reported in \cite{Gespar}, the combination of matrix lifting and semidefinite programming (SDP) makes this method unsuitable for large-scale problems.
To deal with large-scale problems, the authors of \cite{Gespar} have proposed an iterative solution method, called GESPAR, which appears to yield promising recovery of very sparse signals.
However, this method consists of a heuristic search for the support of ${\mathbf{a}}$ in combination with a variant of the Gauss-Newton method, whose computational cost is often high.
These algorithmic features are potential drawbacks of GESPAR.
In this paper, we propose a sequence of convex relaxations for the phase retrieval problem in \eqref{ProblemG}.
Contrary to existing convex relaxation schemes such as those proposed in \cite{candes2015phase,Ohlsson}, matrix lifting is not required in our strategy. The obtained convex problems are affine in the unknown parameter vector ${\mathbf{a}}$.
Contrary to \cite{candes2013phaselift}, our strategy does not require the tuning of regularization parameters when the measurements are corrupted by noise.
We then present an algorithm based on the alternating direction method of multipliers (ADMM) that can solve the resulting optimization problems effectively.
This potentially addresses the restriction of current SDP-based methods to only relatively small-scale problems.
In Section~\ref{sec:wavefrontestimation} we formulate the estimation problem of our interest for both zonal and modal forms.
In Section~\ref{sec:algorithm} we propose an algorithm for solving this problem.
Since this algorithm is based on minimizing a nuclear norm, a computationally heavy minimization problem, we suggest an ADMM-based algorithm in Section~\ref{sec:admm} that exploits the problem structure.
This ADMM algorithm features two minimization problems whose solutions can be computed exactly and with complexity $\bigO{n_y n_a}$, where $n_y$ is the number of measurements and $n_a$ is the number of unknown variables.
Analytic solutions for the ADMM algorithm update steps will be presented in Subsections~\ref{sec:aupdate} and \ref{sec:Xupdate}.
The convergence behaviour of the algorithm proposed in Section~\ref{sec:algorithm} is analysed in Section~\ref{sec:convergence}.
In Section~\ref{sec:numericalexperiments} we describe and discuss the results of a number of numerical experiments that demonstrate the promising performance of our algorithms.
We end with concluding remarks in Section~\ref{sec:remarks}.
\section{Wavefront estimation from intensity measurements}\label{sec:wavefrontestimation}
The problem of phase retrieval from point spread function images can be approached from two directions. We first describe the problem in zonal form, and then in modal form.
\subsection{Problem formulation in zonal form}
\label{subsec:zonal form}
In \cite{Antonello15} it was shown that reconstructing the wavefront from CCD recorded images of a point source may also be formulated as a phase retrieval problem.
These recorded images are called {\em point spread functions (PSFs)}.
As such approaches avoid the need for extra wavefront-sensing hardware, such as a Shack-Hartmann wavefront sensor, the problem is practically relevant and is summarized here.
The PSF is derived from the magnitude of the Fourier transform of the generalized pupil function (GPF).
For an aberrated optical system the GPF is defined as the complex valued function \cite{goodman2008introduction}:
\begin{equation}
\label{eq:GPF}
P(\rho,\theta) = {\mathbf{A}}(\rho,\theta)\expp{j \phi(\rho,\theta)},
\end{equation}
where $\rho$ (radius) and $\theta$ (angle) specify the normalized polar coordinates in the exit pupil plane of the optical system.
In \eqref{eq:GPF}, $\mathbf{A}(\rho,\theta)$ is the amplitude apodisation function and $\phi(\rho,\theta)$ is the phase aberration function.
The aim of the wavefront reconstruction problem is to estimate $\phi(\rho,\theta)$.
Once this phase aberration of an optical system has been estimated, it can be corrected by using phase modulating devices such as deformable mirrors.
In order to estimate $\phi(\rho,\theta)$, a known phase diversity pattern $\phi_d(\rho,\theta)$ can be introduced (e.g., by using a deformable mirror) to transform the GPF in a controlled manner into the aberrated GPF:
\begin{equation}\label{eq:GPFd}
P_d(\rho,\theta)
= \mathbf{A}(\rho,\theta) \expp{j \phi(\rho,\theta)} \expp{j \phi_d(\rho,\theta)}.
\end{equation}
The noise-free intensity pattern of $P_d(\rho,\theta)$ measured at the image plane is denoted
\begin{equation}\label{eq:Intensity_d}
{\mathbf{y}}_d = \abs{\fourier{ {\mathbf{A}}(\rho,\theta) \expp{j \phi(\rho,\theta)} \expp{j \phi_d(\rho,\theta)} } }^2.
\end{equation}
If we sample the function $P_d(\rho,\theta)$ at points corresponding to a square grid of size $m \times m$ on the pupil plane, then ${\mathbf{A}}(\rho,\theta)$, $\phi_d(\rho,\theta)$ and $\phi(\rho,\theta)$ are square matrices of that size.
Let ${\ve}(\cdot)$ denote the vectorization operator, such that ${\ve}(Z)$ yields the vector obtained by stacking the columns of the matrix $Z$ into a column vector.
The inverse operator ${\ve}^{-1}(\cdot)$, which maps a column vector of size $m^2$ to a square matrix of size $m \times m$, is also well defined. Let in particular the matrix $Z$ and the vector ${\mathbf{a}}$ be defined as:
\begin{equation*}
Z = {\mathbf{A}}(\rho,\theta) e^{j\phi(\rho,\theta)} \in \mathbb{C}^{m \times m},\quad {\mathbf{a}} = {\ve}(Z) \in \mathbb{C}^{m^2}. \label{eq:unknownpupilF}
\end{equation*}
With the definition of the vector $\mathbf{p}_d$:
\begin{equation*}
\mathbf{p}_d = \vect{e^{j\phi_d(\rho,\theta)}}\in \mathbb{C}^{m^2},
\end{equation*}
and with $D_d = \dia{\mathbf{p}_d} \in \mathbb{C}^{m^2\times m^2} $ the diagonal matrix with diagonal entries taken from the vector $\mathbf{p}_d$, we can write the noise-free intensity measurements in \eqref{eq:Intensity_d} as
\begin{equation*}
{\mathbf{y}}_d = \abs{ \fourier{ \expp{j \phi_d(\rho,\theta)} Z} }^2
= \abs{ \fourier{ {\ve}^{-1}(D_d\mathbf{a})} }^2.
\end{equation*}
As the Fourier transform is a linear operator, we can write our noise-free intensity measurements in the form:
\begin{equation}\label{eq:Intensity_f}
{\mathbf{y}}_d = \left| U_d {\mathbf{a}} \right|^2,
\end{equation}
where in this case $U_d$ is a unitary matrix.
By stacking the vectors ${\mathbf{y}}_d$ and the matrices $U_d$, obtained from the $n_d$ images with $n_d$ different phase diversities, correspondingly into the vector ${\mathbf{y}}$ and the matrix $U$ (of size $n_d m^2 \times m^2$), the problem of finding ${\mathbf{a}}$ from noise-free intensity measurements can be formulated as in \eqref{ProblemG} and that from noisy measurements can be formulated as in \eqref{eq:problemG2} for $n_a=m^2$ and $n_y=n_dm^2$.
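For illustration, this zonal measurement model can be sketched in a few lines of NumPy (an illustrative sketch, not the implementation used in the experiments; the orthonormal 2-D FFT plays the role of the unitary matrix $U_d$, so it is never formed explicitly):
\begin{verbatim}
import numpy as np

def psf_intensity(Z, phi_d):
    # noise-free intensity |F{exp(j*phi_d) .* Z}|^2, with the
    # orthonormal ("unitary") 2-D FFT as the propagator
    return np.abs(np.fft.fft2(np.exp(1j * phi_d) * Z, norm="ortho"))**2

m = 64
rng = np.random.default_rng(0)
Z = rng.standard_normal((m, m)) * np.exp(1j * rng.uniform(0, 2*np.pi, (m, m)))
phi_d = rng.uniform(-np.pi/8, np.pi/8, (m, m))  # some phase diversity
y_d = psf_intensity(Z, phi_d).ravel()           # vectorized measurements
\end{verbatim}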
It is worth noting that, with $m$ in the range of a few hundred, the dimension of the unknown ${\mathbf{a}}$ turns this problem into a non-convex large-scale optimization problem. For such a problem the implementation of PhaseLift \cite{candes2013phaselift} using standard semidefinite programming, with libraries like MOSEK \cite{mosek}, will not be tractable because of the large matrix dimensions of the unknown quantity.
If we assume that the computational complexity of semidefinite programming with matrix constraints of size $n \times n$ increases with $\bigO{n^6}$ \cite{vandenberghe2005interior}, then a naive implementation of the PhaseLift method applied to \eqref{eq:problemG2} involving a single image has a worst-case computational complexity of $\bigO{m^{12}}$.
\subsection{Problem formulation in modal form}\label{sec:modal}
In general, only approximate solutions can be expected for a phase retrieval problem.
In the modal form of the phase retrieval problem, also considered in \cite{Antonello15} for extended Nijboer-Zernike (ENZ) basis functions, the GPF is assumed to be well approximated by a weighted sum of basis functions.
We make use of real-valued radial basis functions \cite{martinez2016computation} with complex coefficients to approximate the GPF. These are studied in the scope of wavefront estimation in \cite{Piet17}, and an illustration of these basis functions on a $4 \times 4$ grid in the pupil plane is given in Figure~\ref{fig:bf}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\columnwidth]{basis_functions.eps}
\caption{16 radial basis functions with centers in a $4 \times 4$ grid, with circular aperture support.}
\label{fig:bf}
\end{figure}
Switching from the polar coordinates $(\rho,\theta)$ to the Cartesian coordinates $(x,y)$ in the pupil plane, let us consider the radial basis functions and the approximate GPF given by
\begin{equation}
\begin{aligned}
G_i(x,y) &=
\chi(x,y)\expp{-\lambda_i \left((x-x_i)^2 + (y-y_i)^2\right) }, \\
P(x,y) &\approx \widetilde{P}(x,y,{\mathbf{a}}) = \sum_{i=1}^{n_a} a_i G_i(x,y),
\end{aligned} \label{eq:basis}
\end{equation}
where $(x_i,y_i)$ are the centers of basis functions $G_i(x,y)$, $a_i \in \Complex$, $\lambda_i \in \Real_+$ determines the spread of that function, $\chi(x,y)$ denotes the support of the aperture, and ${\mathbf{a}}$ is the coefficient parameter vector to be estimated. The parameters $\lambda_i$ are usually taken equal for all basis functions and for their tuning we refer to \cite{Piet17}.
The aberrated GPF corresponding to the introduction of phase diversity $\phi_d$ is
\begin{equation}\label{eq:aGPFd}
\widetilde{P}_d(x,y,{\mathbf{a}},\phi_d) = \sum_{i=1}^{n_a} a_i G_i(x,y) \expp{j\phi_d(x,y)}.
\end{equation}
The normalized complex PSF is the 2-dimensional Fourier transform of the GPF
\cite{janssen2002extended, braat2002assessment}.
The aberrated PSF corresponding to the aberrated GPF in \eqref{eq:aGPFd} is given as
\begin{equation}\label{eq:aPSF}
p_d({u},{v}) = \sum_{i=1}^{n_a} a_i \fourier{G_i(x,y) \expp{j\phi_d(x,y)} } = \sum_{i=1}^{n_a} a_i U_{d,i}({u},{v}),
\end{equation}
where $({u},{v})$ are the Cartesian coordinates in the image plane of the optical system.
We now drop the dependency on the coordinates and vectorize expression \eqref{eq:aPSF} for all $n_d$ diversities that have been applied to obtain the following compact form of a single matrix-vector multiplication,
\begin{equation}\label{p=Ua}
\mathbf{p} = U {\mathbf{a}}.
\end{equation}
The vector $\mathbf{p}$ is the concatenation of the vectorized aberrated PSFs, and the matrix $U$ is the vectorized and concatenated version of the functions
$U_{d,i}$ sampled on a grid of size $m \times m$.
Let the intensity of the PSFs be recorded on the corresponding grid of pixels of size $m \times m$, and let the vectorization of this intensity pattern for different phase diversities be concatenated into the vector ${\mathbf{y}}$.
We can again formulate the problem of finding ${\mathbf{a}}$ from noise-free intensity measurements as in \eqref{ProblemG} and from noisy measurements as in \eqref{eq:problemG2}
for $n_y = m^2n_d$.
It is worth noting that the dimension of ${\mathbf{a}}$ is not dependent on the size of the sample grid (the size of the problem).
This is the fundamental advantage of the modal form formulation over the zonal form one, for which the size of ${\mathbf{a}}$ directly depends on the size of the problem, i.e. $n_a=m^2$.
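An illustrative sketch of assembling $U$ column by column in the modal form, following \eqref{eq:basis} and \eqref{eq:aPSF} (the grid, aperture, and spread values below are placeholder choices):
\begin{verbatim}
import numpy as np

m, grid = 64, 4                        # pixels per side, basis grid
x = np.linspace(-1, 1, m)
X, Y = np.meshgrid(x, x)
chi = (X**2 + Y**2 <= 1).astype(float) # circular aperture support
lam = 8.0                              # common spread parameter
phi_d = np.zeros((m, m))               # one (zero) phase diversity

centers = [(cx, cy) for cx in np.linspace(-0.6, 0.6, grid)
                    for cy in np.linspace(-0.6, 0.6, grid)]

# column i of U is the vectorized FFT of G_i * exp(j*phi_d)
cols = []
for (cx, cy) in centers:
    G = chi * np.exp(-lam * ((X - cx)**2 + (Y - cy)**2))
    cols.append(np.fft.fft2(G * np.exp(1j * phi_d), norm="ortho").ravel())
U = np.stack(cols, axis=1)             # shape (m*m, n_a)
\end{verbatim}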
In this paper two steps are combined to deal with the large-scale nature of optimization \eqref{eq:problemG2}:
\begin{enumerate}
\item The unknown pupil function $P(\rho,\theta)$ can be represented as a linear combination of a number of basis functions.
In \cite{Antonello15} the ENZ basis functions were used, while \cite{Piet17} uses radial basis functions instead.
The radial basis functions are used here, as \cite{Piet17} demonstrated their advantages over the ENZ type.
\item A new strategy is proposed for solving optimization \eqref{ProblemG} via a sequence of convex optimization problems.
Each of the subproblems can be solved effectively by an iterative ADMM algorithm that exploits the problem structure.
\end{enumerate}
In the following we assume that the problem is normalized such that all entries of ${\mathbf{y}}$ have values between 0 and 1.
\section{The COPR algorithm}\label{sec:algorithm}
The constraint in \eqref{ProblemG} is equivalent to a rank constraint.
Define the matrix-valued function
\begin{equation}
M(A,B,C,X,Y) = \pmat{C+AY+XB+XY & A+X \\ B+Y & I}, \label{eq:M}
\end{equation}
where $I$ is the identity matrix of appropriate size.
Let ${\mathbf{b}} \in \Complex^{n_a}$ be a coefficient vector.
For notational convenience, we will denote
\begin{equation*}
\begin{aligned}
&M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}}) = \\
&M\left(\dia{{\mathbf{a}}^HU^H},\dia{U{\mathbf{a}}},\dia{{\mathbf{y}}},\dia{{\mathbf{b}}^HU^H},\dia{U{\mathbf{b}}}\right).
\end{aligned}
\end{equation*}
Our proposed algorithm in this paper relies on the following fundamental result.
\begin{lemma}\label{lem:rank}\cite{doelman2016sequential}
For any ${\mathbf{b}} \in \Complex^{n_a}$, the constraint ${\mathbf{y}} = \abs{U{\mathbf{a}}}^2$ is equivalent to the constraint
\begin{equation*}
\rank{M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}})} = n_y.
\end{equation*}
\end{lemma}
For addressing problem \eqref{eq:problemG2}, Lemma~\ref{lem:rank} suggests considering the following approximate problem, for a user-selected parameter vector ${\mathbf{b}}$,
\begin{equation} \label{min rank}
\min_{{\mathbf{a}} \in \Complex^{n_a}} \rank{M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}})}.
\end{equation}
Since \eqref{min rank} is a non-convex problem and to anticipate the presence of measurement noise, we propose to solve the following convex optimization problem:
\begin{equation}
\min_{{\mathbf{a}} \in \Complex^{n_a}} f({\mathbf{a}}) := \norm{M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}})}_*, \label{EQ:NN}
\end{equation}
where $\norm{\cdot}_*$ denotes the nuclear norm of a matrix, the sum of its singular values \cite{recht2010guaranteed}.
In the case that prior knowledge on the problem indicates that
${\mathbf{a}}$ is a sparse vector, the objective function in \eqref{EQ:NN} can easily be extended with an $\ell_1$-regularization to promote sparse solutions, since the vector ${\mathbf{a}}$ appears affinely in $M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}})$:
\begin{equation}
\min_{{\mathbf{a}} \in \Complex^{n_a}} f({\mathbf{a}}) + \lambda \norm{{\mathbf{a}}}_1, \label{eq:sparse}
\end{equation}
for some regularization parameter $\lambda$.
Note that for ${\mathbf{b}} = -{\mathbf{a}}$,
\begin{equation}
\norm{M(U,{\mathbf{a}},-{\mathbf{a}},{\mathbf{y}})}_* = \norm{{\mathbf{y}} - \abs{U{\mathbf{a}}}^2}_1 + n_y.
\end{equation}
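This identity is easily verified numerically; the following sketch (an illustrative check with small random data, not part of the algorithm) builds $M$ from its definition and compares both sides:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
ny = na = 6
U, _ = np.linalg.qr(rng.standard_normal((ny, na))
                    + 1j * rng.standard_normal((ny, na)))
a = rng.standard_normal(na) + 1j * rng.standard_normal(na)
y = rng.random(ny)

def M(U, a, b, y):
    A, B = np.diag((U @ a).conj()), np.diag(U @ a)  # diag(a^H U^H), diag(Ua)
    X, Y = np.diag((U @ b).conj()), np.diag(U @ b)
    C, I = np.diag(y), np.eye(ny)
    return np.block([[C + A @ Y + X @ B + X @ Y, A + X], [B + Y, I]])

lhs = np.linalg.norm(M(U, a, -a, y), "nuc")
rhs = np.linalg.norm(y - np.abs(U @ a)**2, 1) + ny
assert np.isclose(lhs, rhs)
\end{verbatim}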
Since the result of optimization \eqref{EQ:NN} might not fit the measurements sufficiently well, we propose the iterative Convex Optimization-based Phase Retrieval (COPR) algorithm, outlined in Algorithm~\ref{ALG:COPR}.
\begin{algorithm}
\caption{Convex Optimization-based Phase Retrieval (COPR)}\label{ALG:COPR}
\begin{algorithmic}[1]
\Procedure{COPR}{${\mathbf{b}},\tau$}\Comment{Some guess for ${\mathbf{b}}$}
\While{$\norm{{\mathbf{y}} - \abs{U{\mathbf{a}}}^2}_1 > \tau$}\Comment{Termination criterion}
\State ${\mathbf{a}}_+ \in \argmin_{\mathbf{a}} \norm{M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}})}_*$
\State ${\mathbf{b}}_+\gets -{\mathbf{a}}_+$
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
The nuclear norm minimization in Algorithm~\ref{ALG:COPR} is the main computational burden of an implementation.
Usual implementations of the nuclear norm involve semidefinite constraints and require a semidefinite optimization solver.
If we assume that their computational complexity increases with $\bigO{n^6}$ \cite{vandenberghe2005interior} for constraints on matrices of size $n \times n$, then minimizing the nuclear norm of the matrix $M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}})$ of size $2n_y \times 2n_y$ is computationally infeasible even for relatively small-scale problems.
Therefore, we propose a tailored ADMM algorithm whose per-iteration computational complexity scales with $\bigO{n_y n_a}$ and which requires the inverse of a matrix of size $2n_a \times 2n_a$ once per iteration of Algorithm \ref{ALG:COPR}.
\section{Efficient computation of the solution to \eqref{EQ:NN}}\label{sec:admm}
The minimization problem \eqref{EQ:NN} can be reformulated as:
\begin{align}\label{eq:admmopt}
\underset{X,{\mathbf{a}}}{\min}\;
\norm{X}_*\quad \mbox{ subject to }\quad
X = M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}}).
\end{align}
Applying the ADMM optimization technique \cite{boyd2011distributed} to the constrained optimization problem \eqref{eq:admmopt}, we obtain the steps in Algorithm~\ref{alg:admm}.
\begin{algorithm}
\caption{An ADMM algorithm for solving \eqref{eq:admmopt}}\label{alg:admm}
\begin{algorithmic}[1]
\Procedure{NN-ADMM}{${\mathbf{b}},{\mathbf{y}},\rho,\tau$}
\State ${\mathbf{a}} \gets -{\mathbf{b}}$
\State $\pmb{X} \gets M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}})$
\State $\pmb{Y} \gets 0 $
\While{$\abs{\norm{M(U,{\mathbf{a}}_{+},{\mathbf{b}},{\mathbf{y}})}_* - \norm{M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}})}_*} > \tau$}
\State ${\mathbf{a}}_{+} \in $
\begin{equation}
\underset{{\mathbf{a}}}{\argmin} \norm{\pmb{X} - M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}}) + \frac{1}{\rho}\pmb{Y}}_F^2 \label{eq:aupdate}
\end{equation}
\State $\pmb{X}_{+} \in $
\begin{equation}
\underset{\pmb{X}}{\argmin} \norm{\pmb{X}}_* + \dfrac{\rho}{2}\norm{\pmb{X} - M(U,{\mathbf{a}}_{+},{\mathbf{b}},{\mathbf{y}}) + \dfrac{1}{\rho}\pmb{Y}}_F^2 \label{eq:Xupdate}
\end{equation}
\State $\pmb{Y}_{+} \gets \pmb{Y} + \rho\left(\pmb{X}_{+} - M(U,{\mathbf{a}}_{+},{\mathbf{b}},{\mathbf{y}})\right)$
\State update $\rho$ according to the rules in \cite{boyd2011distributed}
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
The advantage of using this ADMM formulation is that both of the update steps \eqref{eq:aupdate} and \eqref{eq:Xupdate} have solutions that can be computed analytically.
The efficient computation of these solutions is described in the following two subsections.
\subsection{Efficient computation of the solution to \eqref{eq:aupdate}}\label{sec:aupdate}
Upon inspection of \eqref{eq:aupdate}, we see that this is a complex-valued standard least squares problem since $M(U,{\mathbf{a}},{\mathbf{b}},{\mathbf{y}})$ is parameterized affinely in ${\mathbf{a}}$.
Let $\re{\cdot}$ and $\im{\cdot}$ respectively denote the real and the imaginary parts of a complex object.
Let the subscripts $(\cdot)_1$, $(\cdot)_2$ and $(\cdot)_3$ respectively denote the top-left, top-right and bottom-left submatrices according to \eqref{eq:M}.
Define
\begin{equation*}
\pmb{Z} = \pmb{X} + \dfrac{1}{\rho}\pmb{Y}, \quad X = \dia{{\mathbf{b}}^HU^H}.
\end{equation*}
In the sequel, let $\adi{P}$ denote the vector with the diagonal entries of a square matrix $P$.
Reordering the elements in \eqref{eq:aupdate}, separating the real and the imaginary parts, removing all matrix elements in the argument of the Frobenius norm that do not depend on ${\mathbf{a}}$, and vectorizing the result, give the following least squares problem:
\begin{equation}
\min_{\mathbf{x}} \norm{{\mathbf{u}}_{ADMM} - {\mathbf{u}}_{COPR} - AB{\mathbf{x}}}_2^2. \label{eq:ls}
\end{equation}
The variables ${\mathbf{u}}_{ADMM},~{\mathbf{u}}_{COPR},~A,~B$ and ${\mathbf{x}}$ are given by
\begin{equation}
\begin{aligned}
&{\mathbf{u}}_{ADMM} = \pmat{\adi{\re{\pmb{Z}_{1}}} \\ \adi{\re{\pmb{Z}_{2}}} \\ \adi{\re{\pmb{Z}_{3}}} \\ \adi{\im{\pmb{Z}_{2}}} \\ \adi{\im{\pmb{Z}_{3}}} },
&&& &{\mathbf{u}}_{COPR} = \pmat{{\mathbf{y}}
+\adi{\abs{X}^2} \\ \adi{\re{X}} \\ \adi{\re{X}} \\ \adi{\im{X}} \\ -\adi{\im{X}}}, \\
&A = \pmat{2\re{X} & 2\im{X} \\ I & 0 \\ I & 0 \\ 0 & I \\ 0 & -I},
&&& &B = \pmat{\re{U} & - \im{U} \\ -\im{U} & -\re{U}},
\end{aligned}
\end{equation}
and ${\mathbf{x}} = \pmat{\re{{\mathbf{a}}}^T & \im{{\mathbf{a}}}^T}^T$.
This means that the optimal solution to \eqref{eq:ls} is given by
\begin{equation*}
{\mathbf{x}}^* = (B^TA^TAB)^{-1}B^TA^T({\mathbf{u}}_{ADMM} - {\mathbf{u}}_{COPR}).
\end{equation*}
During the ADMM iterations only ${\mathbf{u}}_{ADMM}$ changes. The inverse $(B^TA^TAB)^{-1}$ has to be computed only once per iteration of Algorithm~\ref{ALG:COPR}, since it remains constant throughout the ADMM iterations.
Since the complexity of computing an inverse is $\bigO{n^3}$ for matrices of size $n \times n$, the computational complexity of computing this inverse scales cubically with the number of basis functions.
Once this inverse matrix is obtained, the optimal solution to the least squares problem in \eqref{eq:ls} can be computed by a simple matrix-vector multiplication, whose complexity scales with $\bigO{n_yn_a}$.
Note that in the case that the objective term includes regularization as in \eqref{eq:sparse}, the optimization \eqref{eq:ls} should be modified appropriately to include the additive regularization term $\lambda\norm{{\mathbf{a}}}_1$.
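In code, the caching amounts to forming the factor $(B^TA^TAB)^{-1}B^TA^T$ once per COPR iteration; a sketch, assuming the real stacked matrix \texttt{AB} and the vectors ${\mathbf{u}}_{ADMM}$, ${\mathbf{u}}_{COPR}$ of \eqref{eq:ls} have already been formed:
\begin{verbatim}
import numpy as np

def make_a_solver(AB):
    # the Gram matrix of AB is fixed during the ADMM iterations,
    # so factor it once and reuse it in every a-update
    G = AB.T @ AB
    F = np.linalg.solve(G, AB.T)   # cached (B^T A^T A B)^{-1} B^T A^T
    def solve(u_admm, u_copr):
        x = F @ (u_admm - u_copr)  # stacked [Re(a); Im(a)]
        n = x.size // 2
        return x[:n] + 1j * x[n:]
    return solve
\end{verbatim}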
\subsection{Efficient computation of the solution to \eqref{eq:Xupdate}}\label{sec:Xupdate}
The optimization in \eqref{eq:Xupdate} is of the form
\begin{equation}
\underset{X}{\argmin} \norm{X}_* + \lambda\norm{X-C}_F^2. \label{eq:Xupdate_simple}
\end{equation}
Let $C= U_C\Sigma_CV_C^T$ be the singular value decomposition of $C \in \Complex^{2n_y \times 2n_y}$.
\begin{lemma}\label{lem:singular_vectors}
The solution $\pmb{X}$ to \eqref{eq:Xupdate_simple} has singular vectors $U_C$ and $V_C$.
\end{lemma}
\begin{proof}
Let
$X = U_X\Sigma_XV_X^T$ be a singular value decomposition of $X$.
Then
\begin{equation*}
\begin{aligned}
\norm{X}_* + \lambda\norm{X-C}_F^2 &= \trace{\Sigma_X} + \\
&\qquad \lambda\left(\inp{X}{X} + \inp{C}{C} -2\inp{X}{C}\right).
\end{aligned}
\end{equation*}
Using Von Neumann's trace inequality we get
\begin{equation*}
\begin{aligned}
& \min_{X} \paren{ \trace{\Sigma_X} + \lambda\left(\inp{X}{X} + \inp{C}{C} -2\inp{X}{C}\right)} \\
\geq\; &\min_{X} \paren{\trace{\Sigma_X} + \lambda\left(\inp{X}{X} + \inp{C}{C} -2\trace{\Sigma_X\Sigma_C}\right)}\\
\end{aligned}
\end{equation*}
where equality holds when $C$ and $X$ share the same singular vectors.
The optimal solution $\pmb{X}$ to \eqref{eq:Xupdate_simple} therefore has the same singular vectors as $C$, i.e., $U_{\pmb{X}}=U_C,~ V_{\pmb{X}}=V_C$.
\end{proof}
Denote the singular values of $C$ in descending order as $\sigma_{C,1},\ldots,\sigma_{C,2n_y}$, and those of $X$ similarly.
Thanks to Lemma~\ref{lem:singular_vectors}, \eqref{eq:Xupdate_simple} can be simplified to
\begin{equation}\label{prob:sigma}
\underset{\sigma_{X,i}}{\argmin} \sum_{i=1}^{2n_y}
\paren{\sigma_{X,i} + \lambda\left(\sigma_{X,i} - \sigma_{C,i}\right)^2}.
\end{equation}
This problem is completely decoupled in the $\sigma_{X,i}$, and the optimal solution to \eqref{prob:sigma} is given by
\begin{equation*}
\sigma_{\pmb{X},i} = \max\left(0, \sigma_{C,i} - \frac{1}{2\lambda}\right),\quad i = 1,\ldots,2n_y.
\end{equation*}
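This is singular-value soft-thresholding; a direct sketch of the solution to \eqref{eq:Xupdate_simple} (illustrative only; it ignores the block-diagonal structure exploited below):
\begin{verbatim}
import numpy as np

def nuclear_prox(C, lam):
    # argmin_X ||X||_* + lam * ||X - C||_F^2 : shrink each
    # singular value of C by 1/(2*lam) and clip at zero
    Uc, s, Vh = np.linalg.svd(C, full_matrices=False)
    return (Uc * np.maximum(s - 1.0 / (2.0 * lam), 0.0)) @ Vh
\end{verbatim}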
By row and column permutations, the matrix $C$ can be brought into block-diagonal form with blocks of size $2 \times 2$.
The SVD of this permuted matrix therefore involves block-diagonal matrices $U_C$, $\Sigma_C$ and $V_C$, and these blocks can be obtained separately and in parallel. Since the blocks are of size $2 \times 2$, the SVD of each can be obtained analytically.
This shows that a valid SVD can be computed very efficiently, in $\bigO{1}$ time per block.
In theory, with full parallelism, the computation time is thus independent of the number of pixels in the image, the number of images taken, and the number of basis functions.
\section{Convergence analysis of Algorithm \ref{ALG:COPR}}\label{sec:convergence}
Algorithm \ref{ALG:COPR} can be reformulated as a Picard iteration $\mathbf{a}_{k+1} \in T(\mathbf{a}_k)$, where the fixed point operator $T:\mathbb{C}^{n_a}\to \mathbb{C}^{n_a}$ is given by
\begin{equation}\label{T:operator}
T(\mathbf{a}) = \arg\min_{\substack{\mathbf{x}\in \mathbb{C}^{n_a}}}\;\norm{M(U,\mathbf{x},-\mathbf{a},\mathbf{y})}_*.
\end{equation}
Our subsequent analysis will show that the set of fixed points, $\Fix T$, of $T$ is in general nonconvex and, as a result, iterations generated by $T$ cannot be \emph{Fej\'er monotone} \cite[Definition 5.1]{BauCom11} with respect to $\Fix T$.
Therefore, the widely known convergence theory based on the properties of \emph{Fej\'er monotone operators} and \emph{averaging operators} is not applicable to the operator $T$ given in \eqref{T:operator}.
In this section, we make an attempt to prove the convergence of Algorithm \ref{ALG:COPR}, which we have observed in our numerical experiments, via a relatively recently developed convergence theory based on \emph{pointwise almost averaging operators} \cite{LukNguTam16}.
It is worth mentioning that we are not aware of any other analysis schemes addressing convergence of Picard iterations generated by general \emph{nonaveraging} fixed point operators.
Our discussion consists of two stages. Based on the convergence theory developed in \cite{LukNguTam16}, we first formulate a convergence criterion for Algorithm \ref{ALG:COPR} (Proposition \ref{p:convergence}) under rather abstract assumptions on the operator $T$.
Due to the highly complicated structure of the nuclear norm of a general complex matrix, we are unable to verify these mathematical conditions for general matrices $U$.
However, we will verify that they are satisfied in the case that $U$ is a unitary matrix (Theorem \ref{T:MATRIX_GENERAL}).
In view of the latter result, we heuristically expect that Algorithm \ref{ALG:COPR} still enjoys this convergence result when the matrix $U$ is close to being unitary in a certain sense.
It is a common prerequisite for analyzing local convergence of a fixed point algorithm that the set of solutions to the original problem is nonempty.
That is, there exists $\mathbf{a}\in \mathbb{C}^{n_a}$ such that $\mathbf{y}=|U\mathbf{a}|^2$.
Before stating the convergence result, we need to verify that the fixed point set of $T$ is nonempty.
\begin{lemma}\label{LEM:FIX_NONEMPTY}
The fixed point operator $T$ defined in \eqref{T:operator} satisfies
\[
\parennn{\mathbf{a} \mid \mathbf{y} = |U\mathbf{a}|^2} \;\subseteq\; \Fix T := \left\{\mathbf{a}\in \mathbb{C}^{n_a}\mid \mathbf{a}\in T(\mathbf{a})\right\}.
\]
\end{lemma}
\begin{proof}
See Appendix~\ref{APP:FIX_NONEMPTY}.
\end{proof}
The next proposition provides an abstract convergence result for Algorithm \ref{ALG:COPR};
here $\Fix T$ is assumed to be closed.
\begin{proposition}\label{p:convergence}\cite[simplified version of Theorem 2.2]{LukNguTam16}
Let $S\subset \Fix T$ be closed with $T(\mathbf{a}^*) \subset\Fix T$ for all $\mathbf{a}^* \in S$ and let $W$ be a neighborhood of $S$. Suppose that $T$ satisfies the following conditions.
\begin{enumerate}
\item[(i)]\label{t:subfirm convergence a} $T$ is \emph{pointwise averaging} at every point of $S$ with constant $\alpha\in (0,1)$ on $W$.
That is, for all $\mathbf{a}\in W$, $\mathbf{a}_+\in T(\mathbf{a})$, $\mathbf{a}^*\in P_S(\mathbf{a})$ and $\mathbf{a}^*_+\in T(\mathbf{a}^*)$,
\begin{align}\label{averaged of T}
\norm{\mathbf{a}_+ - \mathbf{a}^*_+}^2 \le \norm{\mathbf{a}-\mathbf{a}^*}^2 - \frac{1-\alpha}{\alpha}\norm{(\mathbf{a}_+ - \mathbf{a})-(\mathbf{a}^*_+ - \mathbf{a}^*)}^2.
\end{align}
\item[(ii)]\label{t:subfirm convergence b} The set-valued mapping $\psi:= T-\Id$ is \emph{metrically subregular} on $W$ for $0$ with constant $\gamma >0$, where $\Id$ is the Identity mapping.
That is,
\begin{equation}\label{met_subreg}
\gamma\dist(\mathbf{a},\psi^{-1}(0)) \le \dist(0,\psi(\mathbf{a})),\quad \forall \mathbf{a}\in W.
\end{equation}
\item[(iii)]\label{t:tech_assump} It holds $\dist(\mathbf{a},S) \le \dist(\mathbf{a},\Fix T)$ for all $\mathbf{a}\in W$.
\end{enumerate}
Then all Picard iterations $\mathbf{a}_{k+1}\in T(\mathbf{a}_k)$ starting in $W$ satisfy $\dist(\mathbf{a}_k,S)\to 0$ as $k\to \infty$ at least linearly.
\end{proposition}
Condition $(iii)$ in Proposition \ref{p:convergence} is, on the one hand, a technical assumption and becomes redundant when $S=\Fix T$.
On the other hand, the set $S$ allows one to exclude from the analysis possible \emph{inhomogeneous} fixed points of $T$, at which the algorithm often exhibits erratic convergence behavior \cite[Example 2.1]{LukNguTam16}.
We now apply the abstract result of Proposition \ref{p:convergence} to the following special, but important case.
\begin{thm}\label{T:MATRIX_GENERAL}
Let $U\in \mathbb{C}^{n_a\times n_a}$ be unitary and $\mathbf{a}^*\in \mathbb{C}^{n_a}$ be such that $|U\mathbf{a}^*|^2=\mathbf{y}$.
Then every Picard iteration $\mathbf{a}_{k+1}\in T(\mathbf{a}_k)$ generated by Algorithm~\ref{ALG:COPR} starting sufficiently close to $\mathbf{a}^*$ converges linearly to a point $\tilde{\mathbf{a}} \in \Fix T$ satisfying $|U\tilde{\mathbf{a}}|^2=\mathbf{y}$.
\end{thm}
\begin{proof}
See Appendix~\ref{APP:MATRIX GENERAL}.
\end{proof}
\section{Numerical experiments}\label{sec:numericalexperiments}
Three important numerical aspects of the COPR algorithm, namely flexibility, computational complexity, and robustness, are tested on relevant problems.
First, we demonstrate the flexibility of the convex relaxation by comparing the COPR algorithm with an added $\ell_1$-regularization to the PhaseLift method \cite{candes2013phaselift} and to the CPRL method in \cite{Ohlsson} on an under-determined sparse estimation problem.
Second, we compare the practically observed computational complexity of COPR and a
naive implementation of PhaseLift \cite{candes2013phaselift}.
Finally, we investigate the robustness of COPR relative to noise
in a Monte-Carlo simulation for 25 and 100 basis functions. We compare four algorithms: COPR, PhaseLift \cite{candes2013phaselift}, a basic alternating projections method (Section 4.3 in \cite{candes2013phaselift}) and an averaged projections method based on \cite{Luke_Toolbox}.
We note that the latter method fundamentally employs the Fourier transform at every iteration and hence is, in general, not applicable for phase retrieval in the modal form.
\subsection{Application of COPR to compressive sensing problems}
The first problem is to estimate 16 coefficients from 8 measurements, where the optimal vector is known to be sparse.
We generate a sparse coefficient vector ${\mathbf{a}}$ with two randomly generated non-zero complex elements. We generate two images ($n_d = 2,~m = 128$) by applying two different amounts of defocus with Zernike coefficients $-\frac{\pi}{8}$ and $\frac{\pi}{8}$, respectively. From each image we use the center $2 \times 2$ pixels, resulting in a total of $n_y = 8$ measurements.
The applied algorithms are the COPR algorithm, the COPR algorithm with an additional $\ell_1$-regularization, the PhaseLift algorithm \cite{candes2013phaselift} and the Compressive sensing Phase Retrieval (CPRL) algorithm of \cite{Ohlsson}. The results are displayed in Figure~\ref{fig:sparse}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{sparse_solutions2.eps}
\caption{The absolute values of the 16 estimated coefficients as computed by four different algorithms.}
\label{fig:sparse}
\end{figure}
As can be seen from the figure, COPR and PhaseLift fail to retrieve the correct solution. The CPRL method and the regularized COPR algorithm compute the correct solution.
\subsection{Computational complexity}
The second problem demonstrates the trends of the required computation time when the number of estimated coefficients increases.
The underlying estimation problem consists of 7 images with different amounts of defocus applied as phase diversity, where each image is of size 128 by 128 pixels. A subset of 20 by 20 pixels of each image is used in the estimation.
We compare the COPR algorithm to the PhaseLift algorithm, which is implemented according to optimization problem (2.5) in \cite{candes2013phaselift}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{computation_time2.eps}
\caption{A computation time comparison between PhaseLift and COPR for different numbers of coefficients.}
\label{fig:trend}
\end{figure}
For PhaseLift, the reported time is the time it takes the MOSEK solver \cite{mosek} to solve the optimization problem. This does not include the time taken by YALMIP \cite{Lofberg2004} to convert the problem into the solver-specific form.
For COPR, the number of iterations is fixed beforehand such that the iterates converge to the correct solution, and the total computation time is recorded.
By convergence we mean that the estimated vector $\hat{{\mathbf{a}}}$ satisfies the tolerance criterion:
\begin{equation}
\min_{c \in \Complex,~\abs{c} = 1} \norm{c\hat{{\mathbf{a}}} - {{\mathbf{a}}}^*}_2^2 \leq 10^{-5},
\end{equation}
where ${\mathbf{a}}^*$ is the exact solution.
The minimization over the parameter $c$ ensures that the (unobservable) piston mode in the phase is canceled.\footnote{Let $\pmat{\hat{{\mathbf{a}}} & {{\mathbf{a}}}^*} = QR$ be the QR decomposition. Then $\angle c^* = \angle \frac{R_{12}}{R_{11}}$. }
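In our notation, the minimizing phase factor also has the closed form given by the phase of the inner product of the two vectors, which is equivalent to the QR-based computation in the footnote. A minimal Python sketch (the helper name is ours):
\begin{verbatim}
import numpy as np

def aligned_error(a_hat, a_star):
    """min over |c| = 1 of ||c a_hat - a_star||^2 (piston removed)."""
    inner = np.vdot(a_hat, a_star)   # conj(a_hat) . a_star
    c = inner / abs(inner) if abs(inner) > 0 else 1.0
    return np.linalg.norm(c * a_hat - a_star) ** 2
\end{verbatim}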
The computational complexity of PhaseLift is, as implemented, approximately $\bigO{n^4}$. The MOSEK solver ran into numerical issues for more than 25 estimated parameters.
The COPR algorithm's computational complexity is approximately $\bigO{n}$. The better complexity is offset by a longer computation time for very small problems.
\subsection{Robustness to noise}
When estimating an unknown phase aberration, it is more natural to evaluate an algorithm on its ability to estimate the phase itself rather than the coefficients of the basis functions.
We assume the phase is randomly generated with a deformable mirror. Let $H \in \Real^{m^2 \times n_u}$ be the mirror's influence matrix and ${\mathbf{u}} \in \Real^{n_u}$ be the input to the mirror's actuators, such that
\begin{equation}
\phi_{DM} = H {\mathbf{u}}. \label{eq:DM}
\end{equation}
The input values $u_i$ are drawn from the uniform distribution between 0 and 1.
The mirror has $n_u = 44$ actuators and the images have sides $m = 128$. The aperture radius is $0.4$.
Five different defocus diversities are applied with Zernike coefficients uniformly spaced between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$.
Gaussian noise is added to the obtained images such that
\begin{equation}
{\mathbf{y}} = \max(0, \abs{\fourier{P_d(\rho,\theta)}}^2+ \varepsilon),\quad \varepsilon \sim N(0,\sigma I),
\end{equation}
where $\sigma$ is the noise variance.
No denoising methods were applied.
The signal-to-noise ratio (SNR) is computed according to
\begin{equation}
10 \log_{10} \frac{\norm{\abs{\fourier{P_d(\rho,\theta)}}^2}_2^2}{\norm{{\mathbf{y}} - \abs{\fourier{P_d(\rho,\theta)}}^2 }_2^2}.
\end{equation}
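For reproducibility, the simulation pipeline for one noise level can be sketched as follows (Python; the influence matrix $H$ and the noiseless intensities are assumed given, and all helper names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Deformable-mirror phase from the DM model above: u_i ~ U(0, 1).
# u = rng.uniform(0.0, 1.0, size=n_u);  phi_dm = H @ u

def add_noise(y_clean, sigma):
    """y = max(0, |F{P_d}|^2 + eps), eps ~ N(0, sigma I).

    sigma is the noise variance, so the standard deviation
    is sqrt(sigma).
    """
    eps = rng.normal(0.0, np.sqrt(sigma), size=y_clean.shape)
    return np.maximum(0.0, y_clean + eps)

def snr_db(y_noisy, y_clean):
    """Realized signal-to-noise ratio in dB."""
    noise = y_noisy - y_clean
    return 10.0 * np.log10(np.sum(y_clean**2) / np.sum(noise**2))
\end{verbatim}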
The phase is estimated from ${\mathbf{y}}$ using four different algorithms.
The first is the COPR algorithm.
The second is the averaged projections (AvP) algorithm \cite{Luke_Toolbox}.
The third is the alternating projections (AlP) method (Section 4.3 in \cite{candes2013phaselift}), and the fourth algorithm is the PhaseLift method \cite{candes2013phaselift}.
The COPR and the AlP methods are applied for two cases corresponding to using 25 and 100 basis functions.
The PhaseLift method is applied for only the case with 25 basis functions due to numerical problems in the solver for larger problems.
The AvP method is not based on the use of basis functions but on the Fourier transform.
Due to this method's sensitivity to noise, 100 basis functions were fit to the estimated object-plane field.
The phase generated by these weighted basis functions was used to report performance.
The use of basis functions improved the phase estimate.
We make use of the Strehl ratio as a measure of optical quality.
The Strehl ratio $S$ is the ratio of the maximum intensity of the aberrated PSF to that of the unaberrated one and can be approximated with Mahajan's expression:
\begin{equation*}
S \approx \expp{-\delta^2},
\end{equation*}
where $\delta = \norm{\phi_{DM} - \hat{\phi}}_2$ and the mean residual phase has been removed before taking the norm \cite{roddier1999adaptive}.
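A direct implementation of this metric reads as follows (Python sketch; $\delta$ follows the definition in the text, while some references instead use the RMS value, i.e., the norm divided by the square root of the number of aperture pixels):
\begin{verbatim}
import numpy as np

def strehl(phi_true, phi_est, mask):
    """Mahajan approximation S = exp(-delta^2) of the Strehl ratio."""
    res = (phi_true - phi_est)[mask]   # residual phase in the aperture
    res = res - res.mean()             # remove the mean residual phase
    delta = np.linalg.norm(res)        # delta as defined in the text
    return np.exp(-delta**2)
\end{verbatim}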
For every noise level, 100 different phases were generated with the deformable mirror model \eqref{eq:DM}.
The results are presented in Figure~\ref{fig:noise}.
\begin{figure}[ht]
\centering
\includegraphics[width=1\columnwidth]{strehl5.eps}
\caption{The Strehl ratio of the estimated phase aberration as a function of SNR. The shaded areas indicate the 10\% and 90\% quantiles.}
\label{fig:noise}
\end{figure}
The resulting Strehl ratios are plotted with a trend line and shaded quantile lines at 10\% and 90\%.
In the case of PhaseLift, the tuning parameter that trades off measurement fit and the rank of the `lifted' matrix is tuned once and applied to all problems.
This has the effect that the reported performance is not as high as it could be with optimal tuning for individual problems.
This points to another advantage of COPR: the absence of tuning parameters aside from the choice of basis functions.
The figure shows that COPR appears to be robust to noise.
Also, the right panel of the figure shows that when the number of basis functions is high, the estimated phase is very close to the exact phase in low-noise settings, which is not achievable with 25 basis functions.
However, when the noise level is high, the choice for a smaller number of basis functions shows better performance.
We attribute this to overfitting in high noise level circumstances.
\section{Concluding Remarks}\label{sec:remarks}
The convex relaxation of the phase retrieval problem proposed in \eqref{EQ:NN} has the advantage over existing convex relaxation methods, such as PhaseLift, that it is affine in the coefficients to be estimated.
This allows for easy extension of the proposed method to phase retrieval problems that incorporate prior knowledge on the coefficients by regularization of the objective function.
One such successful extension is the regularization with the $\ell_1$-norm to find sparse solutions, as demonstrated in Figure~\ref{fig:sparse}.
In Section~\ref{sec:admm} an ADMM algorithm was proposed for efficient computation of the solution to \eqref{EQ:NN}.
The result is that for the COPR algorithm a better computational complexity is observed compared to PhaseLift, see Figure~\ref{fig:trend}.
COPR is also able to solve phase estimation problems with larger numbers of parameters.
The required computations are favourable both in computation time and accuracy (they have simple analytic solutions) and in worst-case scaling behaviour $\bigO{ n_y n_a}$ for every ADMM iteration, where $n_y$ is the number of pixels and $n_a$ is the number of basis functions.
We discussed convergence properties of the COPR algorithm in Section~\ref{sec:convergence} and showed that for selected problems this convergence is linear or faster.
Finally, COPR has been shown to be robust against measurement noise and to outperform the two projection-based methods, whose naive forms are, as expected, sensitive to noise.
We are aware that in practice the performance of projection methods can be substantially better than what we have observed in this study, provided that appropriate denoising techniques are also applied.
Setting aside the matter of denoising, we have chosen to compare the algorithms in their basic forms.
\section{Funding Information}
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement No. 339681.
\bibliographystyle{ieeetr}
\section{Introduction}
For years, medical informatics researchers have pursued data-driven methods to automate disease diagnosis procedures for early detection of many deadly diseases. Alzheimer's disease, which has become the sixth leading cause of death in the United States \cite{xu2016mortality}, is one of the conditions that could benefit from computer-aided diagnostic techniques. A particular challenge of Alzheimer's disease is that it is difficult to detect in early stages before mental decline begins. But medical imaging holds promise for earlier diagnosis of Alzheimer's disease \cite{mckhann2011diagnosis}. Magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans contain information about the effects of Alzheimer's disease on the brain's structure and functioning. However, analyzing such scans is very time-consuming for doctors and researchers because each scan contains millions of voxels.
Deep learning systems are one potential solution for processing medical images automatically to make diagnosing Alzheimer's disease more efficient. 3D convolutional neural networks (3D-CNN), taking only MRI brain scans and disease labels as input and trained end-to-end, are reported to be on par with the performance of traditional diagnostic methods in Alzheimer's disease classification \cite{khvostikov20183d,korolev2017residual}. However, the process that 3D-CNNs use to arrive at their conclusions lacks transparency and cannot straightforwardly provide reasoning and explanations as human experts do in diagnosis. It is therefore difficult for human practitioners to trust such systems in evidence-centered areas like medical research.
The goal of this study is to break into the black box of 3D-CNNs for Alzheimer's disease classification. Particularly, we develop techniques to produce visual explanations that can indicate a 3D-CNN's spatial attention on MRI brain scans when making predictions. Our approaches give diagnosticians a better understanding of the behaviors of 3D-CNNs and provide greater confidence about integrating them into automated Alzheimer's disease diagnostic systems. In summary, the contributions of this study are as follows:
\begin{itemize}
\item
We propose a hierarchical MRI image segmentation based approach for sensitivity analysis of 3D-CNNs, which can discriminate the importances of homogeneous brain regions at different levels for Alzheimer's disease classification.
\item
We extend two state-of-the-art approaches for explaining CNNs in 2D natural image classification to 3D MRI images, which can track the spatial attention of 3D-CNNs when predicting Alzheimer's disease.
\item
We compare the developed approaches qualitatively by examining the visual explanations generated. We also conduct quantitative comparisons for their ability to localize important parts of the brain in diagnosing Alzheimer's disease.
\end{itemize}
The rest of the paper is organized as follows. Section \ref{sec2} surveys related work for this study. Section \ref{sec3} describes the methods development, data, and experimental setup. Section \ref{sec4} presents the qualitative and quantitative comparisons for proposed methods. Section \ref{sec5} presents study conclusions.
\section{Related Work}\label{sec2}
Works that are closely connected to this study are divided into three parts: 3D-CNNs for Alzheimer's disease classification, brain MRI segmentation, and visualizing and understanding CNNs for natural image classification.
\paragraph{3D-CNNs for Alzheimer's Disease Classification}
There are two major methods for using 3D convolutional neural networks for Alzheimer's disease classification from brain MRI scans. One uses 3D-CNNs to automatically extract generic features from MRIs and builds other classifiers on top of them \cite{suk2014hierarchical,hosseini2016alzheimer}. The other trains the 3D-CNNs in an end-to-end manner that takes only MRI scans and labels as input \cite{korolev2017residual,khvostikov20183d}. Both approaches achieve comparable performance \cite{khvostikov20183d}. The user has more control over the first method and thus can understand it better; the latter requires little human input and is therefore easier to use.
\paragraph{Brain MRI Segmentation}
As one of the fundamental problems in neuroimaging, brain segmentation is the building block for many Alzheimer's disease diagnosis methods. Semantic segmentation methods such as FreeSurfer \cite{fischl2012freesurfer} enable brain volume calculations from MRI scans of Alzheimer's disease subjects \cite{mulder2014hippocampal}. Unsupervised hierarchical segmentation methods detect homogeneous regions and separate them from coarse to finer levels, providing more flexibility for multilevel analysis than the one-level semantic segmentation \cite{corso2008efficient,yang2016supervoxel}.
\paragraph{Visualizing and Understanding CNNs for Natural Image Classification}
To explain the superior image classification performance for 2D-CNNs, researchers incorporate the spatial structure of the convolutional layer to visualize the discriminative object from activation maps \cite{zhou2016learning,selvaraju2016grad}. Sensitivity analysis by measuring the change of output class probability due to perturbed input is another popular method because it is not subject to the architectural constraints of CNNs. LIME, or local interpretable model-agnostic explanations \cite{ribeiro2016should}, is a regression-based sensitivity analysis approach that examines perturbed superpixels to make CNN results more interpretable. The perturbed superpixels could be further learned to be more semantically meaningful \cite{fong2017interpretable,yang2018global}. All these methods create a 2D spatial heatmap as a visual explanation that indicates where the CNN has focused to make its predictions. These can be extended to 3D for Alzheimer's disease classification.
\section{Method}\label{sec3}
In this section, we describe the methods that can produce visual explanations of predictions of Alzheimer's disease from brain MRI scans by deep 3D convolutional neural networks (3D-CNNs). First, we summarize the deep learning models we deploy for the Alzheimer's disease classification task. Then, we present the brain MRI data for the study and describe how we use the data in experiments. Finally, we introduce the three approaches that we develop for explaining the 3D-CNNs, which are sensitivity analysis by 3D ultrametric contour map (SA-3DUCM), 3D class activation mapping (3D-CAM), and 3D gradient-weighted class activation mapping (3D-Grad-CAM).
\subsection{Architecture of Deep 3D Convolutional Neural Networks}
The architectures of the deep 3D convolutional neural networks (3D-CNNs) for Alzheimer's disease classification in this study are based on the network architectures proposed by Korolev et al.~\cite{korolev2017residual}. Particularly, two types of 3D-CNNs are built for classifying brain MRI scans from an Alzheimer's disease cohort (AD) and a normal cohort (NC). The design ideas for both types of 3D-CNNs are rooted in successful 2D natural image classification models, specifically VGGNet, the Very Deep Convolutional Networks \cite{simonyan2014very}, and ResNet, the Deep Residual Networks \cite{he2016deep}.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=130mm]{resnet_gap}
\caption{\textbf{Left:} The architecture of 3D-VGGNet; \textbf{Middle:} The architecture of 3D-ResNet; \textbf{Right:} The modified architecture of 3D-ResNet with global average pooling layer, 3D-ResNet-GAP, to produce 3D class activation mapping (3D-CAM). The only difference is that a global average pooling layer directly outputs to the softmax output layer (yellow boxes), replacing the original max pooling and fully connected layers.}\label{arch}
\end{center}
\end{figure*}
\paragraph{3D Very Deep Convolutional Networks (3D-VGGNet)}
VGGNet stacks many layer blocks containing narrow convolutional layers followed by max pooling layers. The 3D very deep convolutional network (3D-VGGNet) \cite{korolev2017residual} for Alzheimer's disease classification is a direct application of this idea to 3D brain MRI scans. It contains four blocks of 3D convolutional layers and 3D max pooling layers, followed by a fully connected layer, a batch normalization layer \cite{ioffe2015batch}, a dropout layer \cite{srivastava2014dropout}, another fully connected layer, and the softmax output layer to produce the probabilities of disease in the Alzheimer's disease cohort (AD) and the normal cohort (NC). The full network architecture of 3D-VGGNet is visualized in Figure \ref{arch} (left). To optimize model parameters, the ADAM optimizer \cite{kingma2014adam} is used with a learning rate of 0.000027, a batch size of 5, and 150 training epochs. The two-class cross-entropy, calculated from the probabilities output by the softmax layer and the ground-truth labels, is used as the loss function.
\paragraph{3D Deep Residual Networks (3D-ResNet)}
Residual networks are the most important building block of state-of-the-art 2D natural image classification models \cite{he2016deep,xie2017aggregated}. 3D deep residual networks (3D-ResNet) \cite{korolev2017residual} for Alzheimer's disease classification prove their effectiveness in the 3D domain. We deploy this important type of 3D-CNN in this study and try to explain its predictions. Specifically, a six-residual-block architecture is built. Each residual block consists of two 3D convolutional layers with 3 $\times$ 3 $\times$ 3 filters that have a batch normalization layer and a rectified-linear-unit nonlinearity layer (ReLU) \cite{nair2010rectified} between them. Skip connections (identity mappings) add the input of each residual block element-wise to its output, explicitly enabling the block to learn a residual mapping rather than a full mapping. This eases the learning process for deeper architectures and results in better performance. The full architecture of 3D-ResNet is depicted in Figure \ref{arch} (middle). For optimization, Nesterov accelerated stochastic gradient descent \cite{nesterov1983method} is used. Optimization parameters are set as 0.001 for learning rate, 3 for batch size, and 150 for training epochs. The same loss function as for 3D-VGGNet, the two-class cross-entropy, is used.
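For concreteness, one residual block of this architecture can be sketched in PyTorch as follows; the channel count is our assumption, as the text does not fix it here:
\begin{verbatim}
import torch.nn as nn

class ResBlock3D(nn.Module):
    """conv -> BN -> ReLU -> conv, with an identity skip connection."""
    def __init__(self, channels=64):   # channel count: assumption
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # learn a residual, not a full mapping
        return x + self.body(x)
\end{verbatim}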
\subsection{Data and Experiment Setup}\label{cv}
Brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative\footnote{Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: \url{http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf}} (ADNI) \cite{mueller2005alzheimer} are used for this study. Specifically, we used data from the ``spatially normalized, masked, and N3-corrected T1 images'' category to train the 3D-VGGNet and 3D-ResNet models to classify MRI scans from the Alzheimer's disease cohort (AD) and the normal cohort (NC). Each brain MRI scan is a 3D tensor of intensity values with size 110 $\times$ 110 $\times$ 110. As one subject could have more than one MRI scan in the database, to avoid potential information leakage between the training and testing datasets, we only include the earliest MRI associated with each subject. As a result, 47 MRI scans from the Alzheimer's disease cohort (AD) and 56 MRI scans from the normal cohort (NC) are selected for this study. We randomly set aside eight MRI scans (5 AD, 3 NC) for later visual explanation analysis. The rest of the dataset is used for training and testing the deep 3D convolutional neural networks (3D-CNNs).
For training and testing the 3D-VGGNet and 3D-ResNet models, we conduct five-fold cross-validation for five different splits of the dataset, totaling 25 training and testing rounds. As the batch sizes are chosen small for both models (five for 3D-VGGNet and three for 3D-ResNet), we enforce that each training batch contains samples from both the Alzheimer's disease cohort (AD) and the normal cohort (NC) to stabilize the training process by avoiding biased loss.
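This constraint can be enforced with a simple batch sampler along the following lines (a Python sketch under our own naming; the exact sampling scheme used in the experiments is not specified in the text):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def mixed_batches(ad_idx, nc_idx, batch_size):
    """Yield index batches with at least one AD and one NC sample."""
    ad = list(rng.permutation(ad_idx))
    nc = list(rng.permutation(nc_idx))
    while ad and nc:
        batch = [ad.pop(), nc.pop()]
        while len(batch) < batch_size and (ad or nc):
            pool = ad if len(ad) >= len(nc) else nc
            batch.append(pool.pop())
        yield batch
\end{verbatim}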
\subsection{Explaining the 3D-CNNs}
In this section, we describe in detail the methods that we develop for explaining the predictions of the 3D-CNNs. We first revisit a baseline method using sensitivity analysis that can shed light on 3D-CNNs' attention \cite{korolev2017residual}. Then we show how we use an unsupervised 3D hierarchical volumetric image segmentation approach, the 3D ultrametric contour map (3DUCM) \cite{yang2016supervoxel}, to improve the baseline, which we call sensitivity analysis by 3D ultrametric contour map (SA-3DUCM). Next, we describe how the successful 2D visual explanation method, class activation mapping (CAM) \cite{zhou2016learning}, and its generalization, gradient-weighted class activation mapping (Grad-CAM) \cite{selvaraju2016grad}, are extended to 3D to explain predictions from 3D MRI scans. We call the two extended approaches 3D-CAM and 3D-Grad-CAM, respectively. As mentioned, there are two major ways to explain the predictions of deep convolutional neural networks. One applies perturbations to the data and conducts sensitivity analysis; the baseline method and the proposed SA-3DUCM approach belong to this category. The other utilizes the architectural properties of CNNs to heuristically track the attention of the neural network; 3D-CAM and 3D-Grad-CAM fall into this category.
\paragraph{Baseline Approach}
A baseline approach is proposed alongside the work of 3D-VGGNet and 3D-ResNet \cite{korolev2017residual} to shed light on the 3D-CNN's attention when classifying MRI scans. To be specific, for every voxel in the MRI scan, its 7 $\times$ 7 $\times$ 7 neighborhood is occluded from the image, and then the 3D-CNN re-evaluates the probability of Alzheimer's disease from the partially occluded image. The change of probability is used as the importance of that voxel. More formally, for the brain MRI volume $V$ and each voxel of $V$ at $(x,y,z)$, we occlude the neighborhood $V_{x-3:x+3,y-3:y+3,z-3:z+3}$, resulting in a perturbed MRI volume occluded around $(x,y,z)$, denoted by $OV_{(x,y,z)}$. We want to measure the change of probability of Alzheimer's disease of $OV_{(x,y,z)}$, predicted by the 3D-CNN, compared to the original volume $V$. This change is assigned to the voxel at $(x,y,z)$. A 3D heatmap $C$ of the same size as $V$ stores these probability changes as importance scores for all voxels; the magnitude of $C$ at $(x, y, z)$ is calculated by
\begin{equation}
C_{x,y,z} = |P(OV_{(x,y,z)}) - P(V)|
\end{equation}
where $P(\cdot)$ is one forward pass of the 3D-CNN to evaluate the probability of Alzheimer's disease from the MRI volumes, and $|\cdot|$ is the absolute value function.
This approach is a direct application of one-at-a-time sensitivity analysis at the single-voxel level to test how the uncertainty of the output probability of the 3D-CNN can be assigned to different voxels of the MRI scan. This is straightforward to implement; however, this approach suffers from three important problems. First, the 7 $\times$ 7 $\times$ 7 cubical neighborhoods are not necessarily semantically meaningful and can span different brain segments, e.g., half in cerebral cortex and half in white matter. Thus, occlusion of such an area results in an unaccountable change of output probability. Second, this approach can only capture the impact of the 7 $\times$ 7 $\times$ 7 local areas. The importances of larger or smaller areas are not tested. Third, as we evaluate a new output probability for each voxel, this approach is extremely computationally intensive. An MRI scan of size 110 $\times$ 110 $\times$ 110 has over 1 million voxels, requiring the same number of forward passes through the 3D-CNN, which could take hours even in GPU-assisted systems.
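A reference implementation of this baseline is a straightforward triple loop (Python sketch; {\tt predict} stands for one forward pass of the trained 3D-CNN returning $P(\cdot)$, and the occlusion fill value is our assumption, as the text does not specify it):
\begin{verbatim}
import numpy as np

def occlusion_heatmap(V, predict, r=3, fill=0.0):
    """Baseline sensitivity map C: one forward pass per voxel (slow)."""
    p0 = predict(V)
    C = np.zeros(V.shape)
    X, Y, Z = V.shape
    for x in range(X):
        for y in range(Y):
            for z in range(Z):
                OV = V.copy()
                OV[max(0, x - r):x + r + 1,
                   max(0, y - r):y + r + 1,
                   max(0, z - r):z + r + 1] = fill
                C[x, y, z] = abs(predict(OV) - p0)
    return C
\end{verbatim}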
\paragraph{Sensitivity Analysis by 3D Ultrametric Contour Map (SA-3DUCM)}
We notice that the shortcomings of the baseline approach could be overcome by using a good segmentation of the brain volume instead of the 7 $\times$ 7 $\times$ 7 local neighborhood around each voxel. Particularly, we occlude each segment in the segmentation, instead of the cubical neighborhoods, before re-evaluating the probabilities. To resolve each of the three problems of the baseline approach, the segmentation method should be semantically meaningful, hierarchical, and compact. More specifically, to be semantically meaningful, the segmentation should separate different homogeneous parts of the brain volume well, e.g., separating cerebral cortex and white matter, so that changes of probability can be ascribed to specific segments. To be hierarchical, the segmentation method should provide a hierarchy of segmentations that capture both coarse-level parts, such as the whole white matter, as well as finer-level parts. In this way, we can test the importances of both small and large areas. To be compact, the segmentation method should avoid over-segmentation and generate a manageable number of segments for analysis. Thus, we can reduce the number of forward passes needed through the 3D-CNN from the number of voxels to the number of segments, which is usually three to four orders of magnitude smaller.
3D Ultrametric Contour Map (3DUCM) \cite{yang2016supervoxel,huang2018supervoxel} is an effective approach for unsupervised hierarchical 3D volumetric image segmentation, which is the 3D extension of the 2D state-of-the-art, Ultrametric Contour Map for natural image segmentation \cite{arbelaez2011contour}. It provides compact hierarchical segmentation of high quality. For the brain MRI volume, $V$, it can generate a hierarchy of segmentations, $H=\{H_{1},H_{2},...,H_{N}\}$, where each level $H_{n}=S_{1}^{n}\cup S_{2}^{n}\cup ...\cup S_{K_{n}}^{n}$ is a full segmentation of the volume $V$. We occlude each segment $S_{k}^{n}$, $k=1,2,...,K_{n}$, $n=1,2,...,N$, in $V$, denoting each resulting volume by $OV_{k}^{n}$, and re-evaluate the probability of Alzheimer's disease through one forward pass of the 3D-CNN. The change of probabilities compared to what is obtained from the original volume, $|P(OV_{k}^{n})-P(V)|$, is assigned to every voxel in $S_{k}^{n}$. Since each voxel belongs to one segment at each level of the hierarchy, each voxel gets $N$ quantities from the calculation, where $N$ is the number of levels in the segmentation hierarchy. We compute the average of the $N$ quantities as the importance score for each voxel and store it in a heatmap $C$. So for a voxel of $V$ at $(x, y, z)$, assuming that it belongs to $S_{k_{n}}^{n}$ for each level of hierarchy $H_{n}$, we calculate the importance score for it as
\begin{equation}
C_{x,y,z} = \frac{1}{N}\sum_{n=1}^{N}|P(OV_{k_{n}}^{n})-P(V)|
\end{equation}
Since the 3DUCM hierarchical segmentation usually provides homogeneous segments of the brain MRI, we expect the importance heatmap $C$ to distinguish important brain parts for Alzheimer's disease classification. In terms of computational burden, each level of the hierarchy contains at most hundreds of segments, and the hierarchy itself is no more than 20 levels. Thus, the number of forward passes needed to re-evaluate the probabilities is greatly reduced.
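In code, the averaging over the hierarchy takes the following form (Python sketch; {\tt hierarchy} is assumed to be a list of integer label volumes, one per level $H_n$):
\begin{verbatim}
import numpy as np

def sa_3ducm_heatmap(V, predict, hierarchy, fill=0.0):
    """Average per-voxel probability changes over all hierarchy levels."""
    p0 = predict(V)
    C = np.zeros(V.shape)
    for labels in hierarchy:           # one segmentation per level
        for k in np.unique(labels):
            mask = labels == k
            OV = V.copy()
            OV[mask] = fill            # occlude segment S_k^n
            C[mask] += abs(predict(OV) - p0)
    return C / len(hierarchy)
\end{verbatim}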
\paragraph{3D Class Activation Mapping (3D-CAM)}
One major problem with one-at-a-time sensitivity analysis based methods (baseline and SA-3DUCM) is that the correlations and interactions between segments of MRI volume are ignored. Although using the hierarchical segmentation method can cover most semantic segments from finer to coarser level, we cannot guarantee all combinations are tested. Therefore, we turn to methods based on the architectural properties of the 3D-CNN that directly visualize the activations of convolutional layers when predictions are made. Class activation mapping \cite{zhou2016learning} designs a global average pooling layer on top of convolutional layers in natural images classification, which enables remarkable localization performance on important objects in the images in spite of the fact that the CNN is trained on image-level labels. This fits our problem well. Our Alzheimer's disease labels (Alzheimer's disease cohort (AD) and normal cohort (NC)) are used at MRI scan level during the training of the 3D-CNNs. Our goal is to obtain visual explanations that can highlight brain parts important for Alzheimer's disease classification. Thus, extending class activation mapping to 3D provides a way to do this.
The idea of class activation mapping is that the last convolutional layer of the CNN contains the spatial information indicating the discriminative regions used to make classifications. To visualize these discriminative parts, class activation mapping creates a spatial heatmap out of the activations from the last convolutional layer. Specifically, class activation mapping adopts a global average pooling layer between the final convolutional layer and the output layer, which enables projection of the class weights of the output layer onto the activation maps in the convolutional layer. The 3D extension of class activation mapping based on 3D-ResNet is shown in Figure \ref{arch} (right). Instead of using a max pooling layer and a fully connected layer before output, the modified 3D-ResNet only uses a global average pooling layer (3D-ResNet-GAP). To be specific, for a given MRI volume $V$ and a 3D-CNN, let $f_{u}(x,y,z)$ be the activation of unit $u$ in the last convolutional layer at location $(x, y, z)$. The global average pooling for unit $u$ is $F_{u}=\frac{1}{Z}\sum_{x,y,z}f_{u}(x,y,z)$, where $Z$ is the number of voxels in the corresponding convolutional layer. As the global average pooling layer is directly connected to the softmax output layer, by the definition of the softmax function, the probability of Alzheimer's disease, $P(V)$, is given by
\begin{equation}
P(V) = \frac{\exp (\sum_{u}w_{u}^{AD}F_{u})}{\exp(\sum_{u}w_{u}^{AD}F_{u})+\exp(\sum_{u}w_{u}^{NC}F_{u})}
\end{equation}
where $w_{u}^{AD}$ and $w_{u}^{NC}$ are the class weights in the output layer for the Alzheimer's disease cohort (AD) and the normal cohort (NC), respectively. We ignore the bias term here because its impact on classification performance is minimal. Essentially, $\sum_{u}w_{u}^{AD}F_{u}$ and $\sum_{u}w_{u}^{NC}F_{u}$ are the class scores for the AD and NC cohorts, respectively. By expanding $F_{u}$ in the class score, we have
\begin{equation}
\textrm{Score}(AD) = \sum_{u}w_{u}^{AD}F_{u} = \sum_{u}w_{u}^{AD}\frac{1}{Z}\sum_{x,y,z}f_{u}(x,y,z) = \frac{1}{Z}\sum_{x,y,z}\sum_{u}w_{u}^{AD}f_{u}(x,y,z)
\end{equation}
The $\sum_{u}w_{u}^{AD}f_{u}(x,y,z)$ part of the quantity is defined for every spatial location $(x, y, z)$ and their sum is proportional to the class score for Alzheimer's disease. As areas significantly negatively contributing to the class score are also important, we adopt the absolute value and define the class activation mapping for the AD cohort as
\begin{equation}
\textrm{3D-CAM}_{x,y,z}(AD) = |\sum_{u}w_{u}^{AD}f_{u}(x,y,z)|
\end{equation}
which is essentially a heatmap of weighted sums of activations in every location $(x, y, z)$ and can be easily calculated by one forward pass when the volume $V$ is provided.
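Given the activations of the last convolutional layer and the output-layer weights for the AD class, the map is a single weighted sum over units (Python sketch; upsampling back to the MRI resolution is a separate step, discussed below):
\begin{verbatim}
import numpy as np

def cam_3d(f, w_ad):
    """3D-CAM: |sum_u w_u^{AD} f_u(x, y, z)|.

    f:    activations of shape (units, X, Y, Z) for one volume
    w_ad: output-layer weights for the AD class, shape (units,)
    """
    return np.abs(np.tensordot(w_ad, f, axes=([0], [0])))
\end{verbatim}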
Though 3D-CAM is easy to obtain, and we expect it to highlight the important spatial areas for classification, there are two potential problems with this approach. First, as we modify the 3D-CNN architecture with the global average pooling layer, we need to re-train the model, possibly affecting the classification performance. Second, the resolution of the class activation mapping is of the same size as the last convolutional layer. We need to upsample it to the original MRI scan size to identify the discriminative regions, which means we would lose some details in the resulting heatmap. One solution could be to remove more layers and build the global average pooling layers on convolutional layers with higher resolution. But this could further decrease the classification performance.
\paragraph{3D Gradient-Weighted Class Activation Mapping (3D-Grad-CAM)}
To overcome class activation mapping's shortcoming of decreased classification performance, its generalization, gradient-weighted class activation mapping, was proposed for natural image classification \cite{selvaraju2016grad}. This approach does not need to modify the 3D-CNN's architecture and thus does no harm to classification performance. Since no re-training is required, it is more efficient to deploy in deep learning systems. The core idea is still to identify the important activations from feature maps in convolutional layers. Using the same notation as the previous part, we first calculate the gradient of the $\textrm{Score}(AD)$ with respect to the activation of unit $u$ at location $(x, y, z)$, $f_{u}(x,y,z)$, in the last convolutional layer. Then, we use the global average pooling of the gradients, denoted by $a_{u}^{AD}$, as the importance weight for unit $u$ for the Alzheimer's disease cohort (AD). That is,
\begin{equation}
a_{u}^{AD}=\frac{1}{Z}\sum_{x,y,z}\frac{\partial \textrm{Score}(AD)}{\partial f_{u}(x,y,z)}
\end{equation}
where $Z$ is the number of voxels in the corresponding convolutional layer. Then, we combine the unit weights with the activations, $f_{u}(x,y,z)$, to obtain the heatmap of 3D gradient-weighted class activation mapping:
\begin{equation}
\textrm{3D-Grad-CAM}_{x,y,z}(AD) = |\sum_{u}a_{u}^{AD}f_{u}(x,y,z)|
\end{equation}
3D-Grad-CAM can be applied to a wider range of 3D-CNNs than 3D-CAM, as long as the network contains convolutional layers. Also, it has been proven in 2D applications that CAM is a special case of Grad-CAM for architectures with a global average pooling layer \cite{selvaraju2016grad}. It does not require re-training, so it quickly generates the 3D-Grad-CAM heatmap with just one forward pass. However, 3D-Grad-CAM still suffers from the low-resolution problem because the heatmap is as coarse as the last convolutional layer. We could calculate it from the gradients and activations of lower convolutional layers, but there is no guarantee that the spatial attention captured there is preserved by the upper layers.
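The computation requires one forward and one backward pass. A PyTorch sketch, assuming the activations {\tt f} of the chosen convolutional layer were retained with gradients enabled, reads:
\begin{verbatim}
import torch

def grad_cam_3d(f, score_ad):
    """3D-Grad-CAM: |sum_u a_u^{AD} f_u(x, y, z)|.

    f:        conv activations, shape (units, X, Y, Z), in the graph
    score_ad: scalar class score Score(AD) for the same volume
    """
    grads, = torch.autograd.grad(score_ad, f)
    a = grads.mean(dim=(1, 2, 3))      # pooled gradients a_u^{AD}
    return torch.tensordot(a, f, dims=([0], [0])).abs().detach()
\end{verbatim}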
In summary, in this section, we introduce four approaches to obtain visual explanation heatmaps for predictions from 3D-CNNs. The baseline approach and sensitivity analysis by 3D ultrametric contour map (SA-3DUCM) are completely model-agnostic and can handle any type of 3D-CNN, but they might have problems with correlations and interactions between different segments of the brain volume. 3D class activation mapping (3D-CAM) and 3D gradient-weighted class activation mapping (3D-Grad-CAM) are weighted visualizations of the activation maps in the convolutional layer, which avoids dealing with the correlation and interaction problem. However, they are limited by the low resolution of the convolutional layers. Upsampled heatmaps might not be able to provide enough detail to accurately identify important regions. Regarding computational efficiency, the baseline approach is the slowest because it does a forward pass for every voxel. 3D-CAM only needs one forward pass to generate the heatmap, but it requires very time-consuming re-training. SA-3DUCM needs a few hundred forward passes. 3D-Grad-CAM is the best because it does not require re-training and only needs one forward pass when generating the heatmap. In the next section, we will compare the models' performance in identifying discriminative brain parts for Alzheimer's disease classification from MRI scans.
\section{Results}\label{sec4}
In this section, we will present the classification performance of 3D-CNNs, visual comparisons of the heatmaps generated by the proposed visual explanation approaches, and a quantitative benchmark for the localization ability of the heatmaps in identifying important brain parts for Alzheimer's disease classification.
\subsection{Alzheimer's Disease Classification Performance}
We compare the classification performance of four different 3D-CNNs. These include 3D-VGGNet and 3D-ResNet as described. By implementing 3D-CAM, we have a modified 3D-ResNet with a global average pooling layer (GAP) as shown in Figure \ref{arch} (right), denoted as 3D-ResNet-GAP. The counterpart for 3D-VGGNet is not included because its classification performance drops too much compared to 3D-VGGNet. Additionally, to obtain a higher-resolution 3D-CAM, we remove the layers from {\tt conv4} to {\tt voxres9\_out}, resulting in a shallow version of 3D-ResNet-GAP, which we call 3D-ResNet-Shallow-GAP. All four 3D-CNNs are trained for classifying the Alzheimer's cohort (AD) in comparison to the normal cohort (NC). Classification performance is measured by the area under the ROC curve (AUC) and classification accuracy (ACC). Cross-validation as described in Section \ref{cv} is conducted. Average AUC and ACC and their standard deviations are reported. The results are presented in Table \ref{cls}. 3D-VGGNet and 3D-ResNet achieve good classification performance. However, there is a substantial drop in performance for 3D-ResNet-GAP and 3D-ResNet-Shallow-GAP, which means the global average pooling layer has a negative effect on classification performance.
\begin{table*}[t]
\begin{center}
\begin{tabular}{p{4cm}|p{3.5cm}p{3.5cm}}
\hline
\textbf{Method} & \textbf{AUC} & \textbf{ACC} \\
\hline
3D-VGGNet & 0.863$\pm$0.056 & 0.766$\pm$0.095 \\
3D-ResNet & 0.854$\pm$0.079 & 0.794$\pm$0.070 \\
3D-ResNet-GAP & 0.643$\pm$0.110 & 0.614$\pm$0.100 \\
3D-ResNet-Shallow-GAP & 0.751$\pm$0.083 & 0.585$\pm$0.122 \\
\hline
\end{tabular}
\end{center}
\caption{Classification performance of 3D-CNNs}
\label{cls}
\end{table*}
\subsection{Qualitative Comparison for Visual Explanations}
To visually check the quality of heatmaps generated by the introduced visual explanation methods, we take one MRI scan from the set-aside data for visual explanation analysis and present the heatmap from the horizontal, sagittal, and coronal sections. For comparison, we present the input brain MRI volume (Figure \ref{gt}) with highlighted areas of cerebral cortex, lateral ventricle, and hippocampus. These parts are believed to be important for Alzheimer's disease diagnosis by physicians \cite{juottonen1999comparative,mu1999quantitative}. The ground-truth cerebral cortex, lateral ventricle, and hippocampus regions are segmented by the FreeSurfer software \cite{fischl2012freesurfer}.
\paragraph{Baseline}The resulting heatmaps are labeled as VGG-Baseline and Res-Baseline and are presented in Figure \ref{vgg_baseline} and Figure \ref{resnet_baseline}, respectively. We can see from the figures that in both situations, the baseline method does not find the important areas. The heatmaps are irregularly shaped because heterogeneous regions are used for sensitivity analysis. Overall, the baseline method fails to identify discriminative regions.
\begin{figure*}[!ht]
\begin{minipage}[htp]{0.49\columnwidth}%
\begin{center}
\subfloat[Brain MRI with highlighted cerebral cortex, lateral ventricle, and hippocampus.]{\includegraphics[width=75mm]{gt_cortex.jpg}\label{gt}
}
\par\end{center}%
\end{minipage}
\\
\begin{minipage}[htp]{0.49\columnwidth}%
\begin{center}
\subfloat[VGG-Baseline]{\includegraphics[width=75mm]{vgg_baseline.jpg}\label{vgg_baseline}
}
\par\end{center}%
\end{minipage}
\begin{minipage}[htp]{0.49\columnwidth}%
\begin{center}
\subfloat[Res-Baseline]{\includegraphics[width=75mm]{resnet_baseline.jpg}\label{resnet_baseline}
}
\par\end{center}%
\end{minipage}
\\
\begin{minipage}[htp]{0.49\columnwidth}%
\begin{center}
\subfloat[VGG-SA-3DUCM]{\includegraphics[width=75mm]{vgg_3ducm.jpg}\label{vgg_3ducm}
}
\par\end{center}%
\end{minipage}
\begin{minipage}[htp]{0.49\columnwidth}%
\begin{center}
\subfloat[Res-SA-3DUCM]{\includegraphics[width=75mm]{resnet_3ducm.jpg}\label{resnet_3ducm}
}
\par\end{center}%
\end{minipage}
\\
\begin{minipage}[htp]{0.49\columnwidth}%
\begin{center}
\subfloat[Res-3D-CAM]{\includegraphics[width=75mm]{resnet_cam.jpg}\label{resnet_cam}
}
\par\end{center}%
\end{minipage}
\begin{minipage}[htp]{0.49\columnwidth}%
\begin{center}
\subfloat[Res-3D-CAM-Shallow]{\includegraphics[width=75mm]{resnet_shallow_cam.jpg}\label{resnet_shallow_cam}
}
\par\end{center}%
\end{minipage}
\\
\begin{minipage}[htp]{0.49\columnwidth}%
\begin{center}
\subfloat[VGG-3D-Grad-CAM]{\includegraphics[width=75mm]{vgg_grad_cam.jpg}\label{vgg_grad_cam}
}
\par\end{center}%
\end{minipage}
\begin{minipage}[htp]{0.49\columnwidth}%
\begin{center}
\subfloat[Res-3D-Grad-CAM]{\includegraphics[width=75mm]{resnet_grad_cam.jpg}\label{resnet_grad_cam}
}
\par\end{center}%
\end{minipage}
\\
\begin{minipage}[htp]{0.49\columnwidth}%
\begin{center}
\subfloat[VGG-3D-Grad-CAM-Shallow]{\includegraphics[width=75mm]{vgg_grad_cam_shallow.jpg}\label{vgg_grad_cam_shallow}
}
\par\end{center}%
\end{minipage}
\begin{minipage}[htp]{0.49\columnwidth}%
\begin{center}
\subfloat[Res-3D-Grad-CAM-Shallow]{\includegraphics[width=75mm]{resnet_grad_cam_shallow.jpg}\label{resnet_grad_cam_shallow}
}
\par\end{center}%
\end{minipage}
\caption{Horizontal, sagittal, and coronal view of the brain MRI and the visual explanation heatmaps.}\label{comp}
\end{figure*}
\paragraph{SA-3DUCM} After incorporating hierarchical segmentations into the sensitivity analysis, we find that the results improve greatly compared to the baseline. Figure \ref{vgg_3ducm} presents the heatmap made by applying SA-3DUCM to 3D-VGGNet (VGG-SA-3DUCM), and the heatmap in Figure \ref{resnet_3ducm} is made by applying SA-3DUCM to 3D-ResNet (Res-SA-3DUCM). In both situations, the approach differentiates the importances of different homogeneous regions. There are clear boundaries separating the regions. The lateral ventricle area stands out as the most discriminative part. However, the cerebral cortex areas are not well identified. This is because the cerebral cortex is widely and loosely distributed in the brain, so it is usually not segmented as one area in hierarchical segmentations. SA-3DUCM tests the importance of different segments one by one and is thus unable to capture the correlations between all segments that belong to the cerebral cortex.
\paragraph{3D-CAM} We only apply 3D class activation mapping (3D-CAM) to 3D-ResNet because 3D-VGGNet loses too much classification performance after using the global average pooling layer. The class activation mapping heatmap of 3D-ResNet-GAP is labeled as Res-3D-CAM and is presented in Figure \ref{resnet_cam}. The heatmap is blurry because it is upsampled from a 14 $\times$ 14 $\times$ 14 coarse heatmap. To get a higher resolution 3D class activation mapping heatmap, Figure \ref{resnet_shallow_cam} (Res-3D-CAM-Shallow) is obtained from 3D-ResNet-Shallow-GAP with more convolutional layers removed. It is upsampled from a 55 $\times$ 55 $\times$ 55 heatmap and thus provides more detail. It identifies the lateral ventricle and most parts of the cortex as important areas, which matches the human experts' approach.
\paragraph{3D-Grad-CAM}The 3D gradient-weighted class activation mapping (3D-Grad-CAM) also has low-resolution problems, especially when it is applied to 3D-VGGNet. Because the last convolutional layer of 3D-VGGNet is only of size 3 $\times$ 3 $\times$ 3, the resulting heatmap VGG-3D-Grad-CAM barely provides any information (Figure \ref{vgg_grad_cam}). When we apply the same approach to a lower convolutional layer, {\tt conv2b}, in 3D-VGGNet, the resulting heatmap, VGG-3D-Grad-CAM-Shallow (Figure \ref{vgg_grad_cam_shallow}), is able to highlight part of the lateral ventricle. The same holds for 3D-ResNet: Res-3D-Grad-CAM (Figure \ref{resnet_grad_cam}) and Res-3D-Grad-CAM-Shallow (Figure \ref{resnet_grad_cam_shallow}) are generated by the 3D-Grad-CAM approach applied to {\tt voxres9\_out} (the last convolutional layer) and {\tt bn4} (an intermediate convolutional layer) of 3D-ResNet. They are of size 14 $\times$ 14 $\times$ 14 and 55 $\times$ 55 $\times$ 55, respectively. Though both of them identify most of the lateral ventricle and the cerebral cortex as discriminative, Res-3D-Grad-CAM-Shallow has higher resolution and is more accurate. However, as we stated, upper convolutional layers can change the activation maps from the lower convolutional layers; thus, the heatmap from lower layers may not always faithfully represent the spatial attention of the 3D-CNN.
To summarize the qualitative comparisons, SA-3DUCM has the same resolution as the original MRI volume and differentiates homogeneous regions well. However, it fails to identify the correlations among the fragmented cerebral cortex segments because of the one-at-a-time process in sensitivity analysis. 3D-Grad-CAM and 3D-CAM both produce blurrier heatmaps than SA-3DUCM because of upsampling, but they are able to highlight the cerebral cortex, which is loosely distributed in the brain.
\subsection{Quantitative Comparison for Localization}
Visual comparisons of the heatmaps give us a general idea of how well different visual explanation methods work, but it remains to quantify how well these heatmaps localize important regions such as the cerebral cortex, lateral ventricle, and hippocampus. To quantitatively compare localization ability, we plot the precision-recall curves of the heatmaps visualized in the previous section for identifying cerebral cortex, lateral ventricle, and hippocampus regions in the eight MRI scans set aside for visual explanation analysis. VGG-Baseline, Res-Baseline, and VGG-3D-Grad-CAM are not included because they do not generate usable heatmaps in the visual comparisons. The results are presented in Figure \ref{qf}.
\begin{figure*}[tbp]
\begin{center}
\includegraphics[width=140mm]{pr_cortex_less.jpg}
\caption{Precision-recall curve to localize cerebral cortex, lateral ventricle, and hippocampus regions using heatmaps.}\label{qf}
\end{center}
\end{figure*}
From the results, we can see that VGG-SA-3DUCM, Res-SA-3DUCM, and Res-3D-Grad-CAM-Shallow have high precision at the low-recall end. This matches our visual comparisons, as the SA-3DUCM method ranks the homogeneous lateral ventricle regions highest, and Res-3D-Grad-CAM-Shallow identifies cerebral cortex and lateral ventricle parts with high accuracy. However, the precision drops for all methods at the high-recall end, implying that no method comes close to perfectly identifying all important regions. The reasons differ between methods: SA-3DUCM cannot discriminate the cerebral cortex because of fragmented segments, while 3D-CAM and 3D-Grad-CAM are limited by the low resolution of the heatmaps.
Overall, both qualitative and quantitative comparisons indicate that all visual explanation methods have some limitations. The appropriate method should be chosen based on the specific goal. When the goal is to obtain the importance of a homogeneous region, SA-3DUCM is more suitable. If tracking the attention of the 3D-CNN is the goal, 3D-Grad-CAM is the preferred choice. Generally, 3D-Grad-CAM is preferable to 3D-CAM because it does not modify the 3D-CNN architecture, requires less computation, and localizes important regions better.
\section{Conclusion and Discussion}\label{sec5}
In this study, we develop three approaches for producing visual explanations from 3D-CNNs for Alzheimer's disease classification. All approaches can highlight important brain parts for diagnosis. However, they have limitations in different aspects. The one-at-a-time sensitivity analysis procedure of SA-3DUCM is not able to handle correlated or interacting image segments, causing underestimation of attention in loosely distributed areas such as, in our case, the cerebral cortex. 3D-CAM and 3D-Grad-CAM build heatmaps from convolutional layer activations that have lower resolution than the original MRI scan, resulting in loss of detail and decreased localization accuracy. Therefore, we suggest users choose the right approach based on their use cases for MRI analysis.
Though all approaches are developed for Alzheimer's disease classification, they are generic enough for other types of 3D image analysis. SA-3DUCM is completely model-agnostic and can adapt to any classifier taking 3D volumetric images as input. 3D-CAM and 3D-Grad-CAM can work on any deep learning model that has a 3D convolutional layer. They could be applied to other types of 3D medical images or even video analysis.
One common limitation of these approaches is that the visual explanation is still one step away from fully understanding the 3D-CNN. Human experts measure cerebral cortex thickness as a biomarker for diagnosis \cite{fischl2000measuring}. In the generated visual explanations, there is no such explicit summarized representation on top of the visual attention from the cerebral cortex. This leads to our future work of explicit biomarker representation learning from medical imaging to fully interpret the 3D-CNNs.
\section*{\uppercase{Acknowledgments}}
This work is partially supported by NSF 1743050 to A.R. and S.R.
\makeatletter
\renewcommand{\@biblabel}[1]{\hfill #1.}
\makeatother
\bibliographystyle{unsrt}